\section{Introduction}
Our main result is the following.
\begin{theorem}\label{thm:general_three_col}
For any planar convex body $C$ there is a positive integer $m=m(C)$ such that any finite point set $P$ in the plane can be three-colored in a way that there is no translate of $C$ containing at least $m$ points of $P$, all of the same color.
\end{theorem}
This result closes a long line of research about coloring points with respect to planar range spaces that consist of translates of a fixed set, a problem that was proposed by Pach over forty years ago \cite{Pach80}.
In general, a pair $(P, \Sc)$, where $P$ is a set of points in the plane and $\Sc$ is a family of subsets of the plane, called the \emph{range space}, defines a \emph{primal} hypergraph $\Hc(P,\Sc)$ whose vertex set is $P$, and for each $S\in\Sc$ we add the edge $S\cap P$ to the hypergraph.
Given any hypergraph $\Hc$, a planar realization of $\Hc$ is defined as a pair $(P, \Sc)$ for which $\Hc(P,\Sc)$ is isomorphic to $\Hc$.
If $\Hc$ can be realized with some pair $(P, \Sc)$ where $\Sc$ is from some family $\Fc$, then we say that $\Hc$ is realizable with $\Fc$.
The dual of the hypergraph $\Hc(P,\Sc)$, where the elements of the range space $\Sc$ are the vertices and the points $P$ define the edges, is known as the \emph{dual} hypergraph and is denoted by $\Hc(\Sc,P)$.
If $\Hc=\Hc(\Sc,P)$ where $\Sc$ is from some family $\Fc$, then we say that $\Hc$ has a dual realization with $\Fc$.
Pach observed \cite{Pach80,surveycd} that if $\Fc$ is the family of translates of some set, then $\Hc$ has a dual realization with $\Fc$ if and only if $\Hc$ has a (primal) realization with $\Fc$.
Pach proposed to study the chromatic number of hypergraphs realizable with different geometric families $\Fc$.
It is important to distinguish between two types of hypergraph colorings that we will use, the \emph{proper} coloring and the \emph{polychromatic} coloring.
\begin{definition}
A hypergraph is \emph{properly $k$-colorable} if its vertices can be colored with $k$ colors such that each edge contains vertices from at least two color classes. Such a coloring is called a \emph{proper $k$-coloring}.
If a hypergraph has a proper $k$-coloring but not a proper $(k-1)$-coloring, then it is called \emph{$k$-chromatic}.
A hypergraph is \emph{polychromatic $k$-colorable} if its vertices can be colored with $k$ colors such that each edge contains vertices from each color class. Such a coloring is called a \emph{polychromatic $k$-coloring}.
\end{definition}
Note that for a polychromatic $k$-coloring to exist, it is necessary that each edge of the underlying hypergraph has at least $k$ vertices.
More generally, we say that a hypergraph is \emph{$m$-heavy} if each of its edges has at least $m$ vertices.
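The three definitions above are straightforward to verify on an explicit hypergraph. The following Python sketch (purely illustrative, not part of the paper's formalism) encodes a hypergraph as a list of edges and a coloring as a dictionary from vertices to colors $1,\ldots,k$.

```python
# Illustrative checkers for the coloring definitions. A hypergraph is
# given by its list of edges (lists of vertices); a coloring is a dict
# mapping each vertex to a color in 1..k.

def is_proper(edges, coloring):
    """Every edge meets at least two color classes."""
    return all(len({coloring[v] for v in e}) >= 2 for e in edges)

def is_polychromatic(edges, coloring, k):
    """Every edge meets all k color classes."""
    return all({coloring[v] for v in e} == set(range(1, k + 1)) for e in edges)

def is_m_heavy(edges, m):
    """Every edge has at least m vertices."""
    return all(len(e) >= m for e in edges)
```

Note that `is_polychromatic(edges, coloring, k)` can only hold when `is_m_heavy(edges, k)` holds, matching the observation above.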
The main question that Pach raised can be rephrased as follows.
\begin{question}
For which planar families $\Fc$ is there an $m_k=m(\Fc,k)$ such that any $m_k$-heavy hypergraph realizable with $\Fc$ has a proper/polychromatic $k$-coloring?
\end{question}
Initially, this question was mainly studied for polychromatic $k$-colorings (known in the case of a dual range space as the \emph{cover-decomposition} problem), and it was shown that such an $m_k$ exists if $\Fc$ is the family of translates of some convex polygon \cite{Pach86,TT07,PT10}, or the family of all halfplanes \cite{wcf2,MR2844088}, or the homothetic\footnote{A \emph{homothetic copy}, or \emph{homothet}, is a scaled and translated (but non-rotated) copy of a set. We always require the scaling factor to be positive. Note that this is sometimes called a positive homothet.} copies of a triangle \cite{octants} or of a square \cite{homotsquare}, while it was also shown that not even $m_2$ exists if $\Fc$ is the family of translates of some appropriate concave polygon \cite{MR2364757,MR2679054} or any body\footnote{By \emph{body}, we always mean a compact subset of the plane with a non-empty interior, though our results (and most of the results mentioned) also hold for sets that are unbounded, or that contain an arbitrary part of their boundary, and are thus neither open, nor closed. This is because a realization of a hypergraph can be perturbed slightly to move the points off from the boundaries of the sets realizing the respective edges of the hypergraph.} with a smooth boundary \cite{unsplittable}.
It was also shown that there is no $m_k$ for proper $k$-colorings if $\Fc$ is the family of all lines \cite{MR2364757} or all axis-parallel rectangles \cite{Chen}; for these families, the same holds in case of dual realizations \cite{MR2364757,PT08}.
For homothets of convex polygons other than triangles, it is known that there is no $m_2$ for dual realizations \cite{kovacs}, unlike for primal realizations.
Higher dimensional variants \cite{octants,CKMU13} and improved bounds for $m_k$ have also been studied \cite{Alou,MR2812512,MR3151767,MR3216669,MR3126347,CKMPUV20}.
For other results, see also the decade old survey \cite{surveycd}, or the up-to-date website \url{https://coge.elte.hu/cogezoo.html}.
If $\Fc$ is the family of translates or homothets of some planar convex body, it is an easy consequence of the properties of generalized Delaunay triangulations and the Four Color Theorem that any hypergraph realizable with $\Fc$ is proper 4-colorable if every edge contains at least two vertices.
We have recently shown that this cannot be improved for homothets.
\begin{theorem}[Dam\'asdi, Pálvölgyi \cite{fourchromatic}]
Let $C$ be any convex body in the plane that has two parallel supporting lines such that $C$ is strictly convex in some neighborhood of the two points of tangencies. For any positive integer $m$, there exists a 4-chromatic $m$-uniform hypergraph that is realizable with homothets of $C$.
\end{theorem}
For translates, we recall the following result.
\begin{theorem}[Pach, Pálvölgyi \cite{unsplittable}]\label{thm:unsplittable}
Let $C$ be any convex body in the plane that has two parallel supporting lines such that $C$ is strictly convex in some neighborhood of the two points of tangencies.\footnote{This condition can be relaxed to require only one smooth neighborhood on the boundary. Since this is not the main topic of our paper, we just give a sketch of the construction in Appendix \ref{sec:halfdisk}.} For any positive integer $m$, there exists a 3-chromatic $m$-uniform hypergraph that is realizable with translates of $C$.
\end{theorem}
This left only the following question open: Is there for any planar convex body $C$ a positive integer $m$ such that no 4-chromatic $m$-uniform hypergraph is realizable with translates of $C$?
Our Theorem \ref{thm:general_three_col} answers this question affirmatively for all $C$ by showing that all realizable $m$-heavy hypergraphs are three-colorable for some $m$.
This has hitherto been known to hold only when $C$ is a polygon (in which case 2 colors suffice \cite{PT10}, and 3 colors are known to be enough even for homothets \cite{3propercol}) and for pseudodisk families that intersect in a common point \cite{MR4012917} (which generalizes the case when $C$ is unbounded, in which case 2 colors suffice \cite{unsplittable}).
Note that the extended abstract of our first proof attempt appeared recently in the proceedings of EuroComb 2021 \cite{threechromaticdisk}.
That proof, however, only worked when $C$ was a disk, and while the generalization to other convex bodies with a smooth boundary seemed feasible, we saw no way to extend it to arbitrary convex bodies.
The proof of Theorem \ref{thm:general_three_col} relies on a surprising connection to two other famous results, the solution of the two dimensional case of the Illumination conjecture \cite{MR76368}, and a recent solution of the Erdős-Sands-Sauer-Woodrow conjecture by Bousquet, Lochet and Thomassé~\cite{esswproof}.
In fact, we need a generalization of the latter result, which we prove with the addition of one more trick to their method; this can be of independent interest.\\
The rest of the paper is organized as follows.\\
In Section \ref{sec:tools} we present the three main ingredients of our proof:
\begin{itemize}
\item the Union Lemma (Section \ref{sec:unionlemma}),
\item the Erdős-Sands-Sauer-Woodrow conjecture (Section \ref{sec:essw})---the proof of our generalization of the Bousquet-Lochet-Thomassé theorem can be found in Appendix \ref{app:essw},
\item the Illumination conjecture (Section \ref{sec:illum}), which is a theorem of Levi in the plane.
\end{itemize}
In Section \ref{sec:proof} we give the detailed proof of Theorem \ref{thm:general_three_col}.\\
In Section \ref{sec:overview} we give a general overview of the steps of the algorithm requiring computation to show that we can find a three-coloring in randomized polynomial time.\\
Finally, in Section \ref{sec:open}, we pose some problems left open.
\section{Tools}\label{sec:tools}
\subsection{Union Lemma}\label{sec:unionlemma}
Polychromatic colorability is a much stronger property than proper colorability. Any polychromatic $k$-colorable hypergraph is proper $2$-colorable. We generalize this trivial observation to the following statement about unions of polychromatic $k$-colorable hypergraphs.
\begin{lemma}[Union Lemma]\label{lem:combine} Let $\Hc_1=(V,E_1),\dots, \Hc_{k-1}=(V,E_{k-1})$ be hypergraphs on a common vertex set $V$. If $\Hc_1,\dots, \Hc_{k-1}$ are polychromatic $k$-colorable, then the hypergraph $\bigcup\limits_{i=1}^{k-1} \Hc_i=(V,\bigcup\limits_{i=1}^{k-1} E_i)$ is proper $k$-colorable.
\end{lemma}
\begin{proof}
Let $c_i:V\rightarrow \{1,\ldots,k\}$ be a polychromatic $k$-coloring of $\Hc_i$.
For each $v\in V$, choose $c(v)\in \{1,\ldots,k\}$ such that it differs from each of the $k-1$ values $c_1(v),\ldots,c_{k-1}(v)$; such a color exists because there are $k$ colors available.
We claim that $c$ is a proper $k$-coloring of $\bigcup\limits_{i=1}^{k-1} \Hc_i$.
To prove this, it is enough to show that for every $i$, every edge $H\in E_i$, and every color $j\in\{1,\ldots,k\}$, there is a $v\in H$ such that $c(v)\ne j$.
Since $c_i$ is a polychromatic $k$-coloring, we can pick $v\in H$ for which $c_i(v)=j$; then $c(v)\ne j$ by the choice of $c$.
This finishes the proof.
\end{proof}
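The proof above is constructive. As a small illustration (assuming the input colorings are indeed polychromatic $k$-colorings of their respective hypergraphs), the combined coloring $c$ can be built as follows in Python:

```python
# Illustrative sketch of the Union Lemma's construction: given k-1
# polychromatic k-colorings c_1, ..., c_{k-1} (dicts from a common
# vertex set to colors 1..k), pick for each vertex a color that
# avoids all k-1 prescribed values.

def union_coloring(colorings, k):
    vertices = colorings[0].keys()
    c = {}
    for v in vertices:
        forbidden = {ci[v] for ci in colorings}
        # At most k-1 values are forbidden, so a free color exists.
        c[v] = min(set(range(1, k + 1)) - forbidden)
    return c
```

For any edge $H$ of $\Hc_i$ and any color $j$, the vertex $v\in H$ with $c_i(v)=j$ satisfies $c(v)\ne j$, so no edge of the union is monochromatic.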
Lemma \ref{lem:combine} is sharp in the sense that for every $k$ there are $k-1$ hypergraphs such that each is polychromatic $k$-colorable but their union is not properly $(k-1)$-colorable.\\
We will apply the Union Lemma combined with the theorem below.
A \emph{pseudoline arrangement} is a collection of simple curves, each of which splits $\mathbb R^2$ into two unbounded parts, such that any two curves intersect at most once.
A \emph{pseudohalfplane} is the region on one side of a pseudoline in such an arrangement.
For hypergraphs realizable by pseudohalfplanes, the following was proved, generalizing a result of Smorodinsky and Yuditsky \cite{MR2844088} about halfplanes.
\begin{theorem}[Keszegh-P\'alv\"olgyi \cite{abafree}]\label{thm:pseudohalfplane}
Any $(2k-1)$-heavy hypergraph realizable by pseudohalfplanes is polychromatic $k$-colorable, i.e., given a finite set of points and a pseudohalfplane arrangement in the plane, the points can be $k$-colored such that every pseudohalfplane that contains at least $2k-1$ points contains all $k$ colors.
\end{theorem}
Combining Theorem \ref{thm:pseudohalfplane} with Lemma \ref{lem:combine} for $k=3$, we obtain the following.
\begin{corollary}\label{cor:pseudohalfplane}
Any $5$-heavy hypergraph realizable by two pseudohalfplane families is proper $3$-colorable, i.e., given a finite set of points and two different pseudohalfplane arrangements in the plane, the points can be $3$-colored such that every pseudohalfplane that contains at least $5$ points contains two differently colored points.
\end{corollary}
\subsection{Erdős-Sands-Sauer-Woodrow conjecture}\label{sec:essw}
Given a quasi order\footnote{A quasi order $\prec$ is a reflexive and transitive relation, but it is not required to be antisymmetric, so $p\prec q\prec p$ is allowed, unlike for partial orders.} $\prec$ on a set $V$, we interpret it as a digraph $D=(V,A)$, where the vertex set is $V$ and a pair $(x,y)$ defines an arc in $A$ if $x \prec y$.
The \emph{closed in-neighborhood} of a vertex $x\in V$ is $N^-(x)=\{x\}\cup \{y|(y,x)\in A \}$. Similarly the \emph{closed out-neighborhood} of a vertex $x$ is $N^+(x)=\{x\}\cup \{y|(x,y)\in A \}$. We extend this to subsets $S\subset V$ as $N^-(S) = \bigcup\limits_{ x\in S } N^-(x)$ and $N^+(S) = \bigcup\limits_{ x\in S } N^+(x)$.
A set of vertices $S$ such that $N^+(S) = V$ is said to be \emph{dominating}.
For $A,B\subset V$ we will also say that \emph{$A$ dominates $B$ } if $B\subset N^+(A)$.
A \emph{complete multidigraph} is a digraph in which parallel arcs are allowed and there is at least one arc between each pair of distinct vertices. Let
$D$ be a complete multidigraph whose arc set is the disjoint union of $k$ quasi orders $\prec_1, \dots , \prec_k$. Define $N^-_i(x)$ (resp.\ $N^+_i(x)$) as the closed in-neighborhood (resp.\ out-neighborhood) of the digraph induced by $\prec_i$.
Proving a conjecture of Erdős, and of Sands, Sauer and Woodrow \cite{sandssauer}, Bousquet, Lochet and Thomassé recently showed the following.
\begin{theorem}[Bousquet, Lochet, Thomassé~\cite{esswproof}]\label{thm:multi_essw_old}
For every $k$, there exists an integer $f(k)$ such that if $D$ is a complete multidigraph whose arcs are the union of $k$ quasi orders, then $D$ has a dominating set of size at most $f(k)$.
\end{theorem}
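To make the notion of domination concrete, the following Python sketch (purely illustrative; it is a brute-force search, not the Bousquet-Lochet-Thomassé argument) finds a smallest dominating set in the closed-out-neighborhood sense defined above.

```python
from itertools import combinations

# Illustrative brute force: find a smallest S with N^+(S) = V, where
# N^+ is the closed out-neighborhood. Arcs are pairs (x, y) for x -> y.

def closed_out_neighborhood(S, arcs):
    return set(S) | {y for (x, y) in arcs if x in S}

def min_dominating_set(vertices, arcs):
    for size in range(1, len(vertices) + 1):
        for S in combinations(vertices, size):
            if closed_out_neighborhood(S, arcs) == set(vertices):
                return set(S)
```

For instance, in the cyclic triangle $0\to 1\to 2\to 0$ no single vertex dominates, but any two vertices do, while in a transitive tournament the source alone dominates.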
We show the following generalization of Theorem \ref{thm:multi_essw_old}.
\begin{theorem}\label{thm:multi_essw_new}
For every pair of positive integers $k$ and $l$, there exists an integer $f(k,l)$ such that if $D=(V,A)$ is a complete multidigraph whose arcs are the union of $k$ quasi orders $\prec_1,\dots, \prec_k$, then $V$ contains a family of pairwise disjoint subsets $S_{i}^j$ for $i\in [k]$, $j\in [l]$ with the following properties:
\begin{itemize}
\item $|\bigcup\limits_{i,j}S_{i}^j|\le f(k,l)$,
\item for each vertex $v\in V\setminus \bigcup\limits_{i,j}S_{i}^j$ there is an $i\in [k]$ such that for each $j\in [l]$ there is an arc of $\prec_i$ from a vertex of $S_{i}^j$ to $v$.
\end{itemize}
\end{theorem}
Note that disjointness is the real difficulty here; without it, the theorem would follow trivially from repeated applications of Theorem \ref{thm:multi_essw_old}.
We saw no way to derive Theorem \ref{thm:multi_essw_new} directly from Theorem \ref{thm:multi_essw_old}, but with an extra modification their proof goes through.
The full proof of Theorem \ref{thm:multi_essw_new} can be found in Appendix \ref{app:essw}.
\subsection{Hadwiger's Illumination conjecture and pseudolines}\label{sec:illum}
Hadwiger's Illumination conjecture has a number of equivalent formulations and names.\footnote{These include names such as Levi–Hadwiger Conjecture, Gohberg–Markus Covering Conjecture, Hadwiger Covering Conjecture, Boltyanski–Hadwiger Illumination Conjecture.} For a recent survey, see \cite{MR3816868}. We will use the following version of the conjecture.
Let $\mathbb{S}^{d-1}$ denote the unit sphere in $\mathbb R^d$.
For a convex body $C$, let $\partial C$ denote the boundary of $C$ and let $\mathrm{int}(C)$ denote its interior.
A direction $u\in \mathbb{S}^{d-1}$ \emph{illuminates} $b\in \partial C$ if $\{b+\lambda u:\lambda>0 \}\cap \mathrm{int}(C)\ne \emptyset$.
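By convexity, if the ray $\{b+\lambda u:\lambda>0\}$ meets the interior at all, then the open segment between $b$ and any interior point it hits lies in the interior, so it already meets the interior for all sufficiently small $\lambda$. The following Python sketch (purely illustrative; the unit square stands in for $C$) tests the definition this way by sampling small step sizes.

```python
# Illustrative check of the illumination definition for C = [0,1]^2.
# By convexity, if b + t*u lies in int(C) for some t > 0, it does so
# for all sufficiently small t, so sampling small steps suffices here.

def in_interior(p):
    x, y = p
    return 0 < x < 1 and 0 < y < 1

def illuminates(u, b, steps=(1e-3, 1e-2, 0.1, 0.5)):
    return any(in_interior((b[0] + t * u[0], b[1] + t * u[1])) for t in steps)
```

For example, the corner $(0,0)$ is illuminated by the diagonal direction $(1,1)$ but not by $(1,0)$, which only grazes the boundary.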
\begin{conjecture}
The boundary of any convex body in $\mathbb{R}^d$ can be illuminated by $2^d$ or fewer directions. Furthermore, the $2^d$ lights are necessary if and only if the body is a parallelepiped.
\end{conjecture}
The conjecture is open in general. The $d=2$ case was settled by Levi \cite{MR76368} in 1955. For $d=3$ the best result is due to Papadoperakis \cite{MR1689273}, who showed that 16 lights are enough.
In this part we establish an interesting connection between the Illumination conjecture and pseudolines. Roughly speaking, we show that the Illumination conjecture implies that the boundary of any convex body in the plane can be broken into three parts such that the translates of each part behave similarly to pseudolines, i.e., we get three pseudoline arrangements from the translates of the three parts.
To put this into precise terms, we need some technical definitions and statements.
Fix a body $C$ and an injective parametrization of $\partial C$, $\gamma:[0,1]\rightarrow \partial C$, that follows $\partial C$ counterclockwise.
For each point $p$ of $\partial C$ there is a set of possible tangents touching at $p$. Let $g(p)\subset \mathbb{S}^1$ denote the Gauss image of $p$, i.e., $g(p)$ is the set of unit outer normals of the tangent lines touching at $p$. Note that $g(p)$ is an arc of $\mathbb{S}^1$ and a proper subset of $\mathbb{S}^1$.
Let $g_+:\partial C\rightarrow\mathbb{S}^1$ be the function that assigns to $p$ the counterclockwise last element of $g(p)$. (See Figure \ref{fig:gauss_tan} left.) Similarly let $g_-$ be the function that assigns to $p$ the clockwise last element of $g(p)$. Thus, $g(p)$ is the arc from $g_-(p)$ to $g_+(p)$. Let $|g(p)|$ denote the length of $g(p)$.
\begin{figure}[!ht]
\centering
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm]
\clip(5.824158146704215,1.6939909822621093) rectangle (10.8,4.4022061705050906);
\draw [shift={(7.,2.)},line width=1.0] plot[domain=0.5235987755982986:1.5707963267948966,variable=\t]({1.*2.*cos(\t r)+0.*2.*sin(\t r)},{0.*2.*cos(\t r)+1.*2.*sin(\t r)});
\draw [shift={(7.,4.)},line width=1.0] plot[domain=4.71238898038469:5.759586531581288,variable=\t]({1.*2.*cos(\t r)+0.*2.*sin(\t r)},{0.*2.*cos(\t r)+1.*2.*sin(\t r)});
\draw [shift={(7.,3.)},line width=1.0] plot[domain=1.5707963267948966:4.71238898038469,variable=\t]({1.*1.*cos(\t r)+0.*1.*sin(\t r)},{0.*1.*cos(\t r)+1.*1.*sin(\t r)});
\draw [line width=1.0,domain=5.824158146704215:10.274171485061972] plot(\x,{(--18.124355652982153-1.7320508075688785*\x)/1.});
\draw [line width=1.0,domain=5.824158146704215:10.274171485061972] plot(\x,{(-12.12435565298215--1.7320508075688785*\x)/1.});
\draw [->,line width=1.pt] (8.732050807568879,3.) -- (9.832456454322593,3.6353194963710407);
\draw [->,line width=1.pt] (8.732050807568879,3.) -- (9.844045808452828,2.3579893869021333);
\draw (8.15,3.25) node[anchor=north west] {$p$};
\draw (9.7,2.95) node[anchor=north west] {$g_-(p)$};
\draw (9.7,3.8) node[anchor=north west] {$g_+(p)$};
\end{tikzpicture}
~~~~~~~~
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=0.6cm,y=0.6cm]
\clip(1.010405779360095,0.7779725023481115) rectangle (9.945938145792228,4.969084292744436);
\draw (4.75,3.7) node[anchor=north west] {$p$};
\draw (4.9,1.4) node[anchor=north west] {$q$};
\draw [line width=1.0,dash pattern=on 3pt off 3pt] (5.52700500582115,3.969615338937309)-- (5.701093102682993,1.3323544637799518);
\draw [line width=1.0,dash pattern=on 3pt off 3pt] (4.691398477047223,3.8957218914504184)-- (4.86141088560568,1.3202036253261604);
\draw [line width=1.0] (5.516440449569205,1.8865088084802843)-- (4.995922845299596,1.0326578708773406);
\draw [line width=1.0] (8.392512263686044,1.2568699665550929)-- (8.913029867955656,2.1107209041580446);
\draw [line width=1.0] (3.0820370497082816,4.613210333135324)-- (4.995922845299596,4.032657870877341);
\draw [line width=1.0] (8.39251226368604,4.256869966555094)-- (6.478626468094716,4.837422428813083);
\draw [shift={(3.495922845299593,2.532657870877341)},line width=1.0] plot[domain=-0.30951591373703113:0.7853981633974475,variable=\t]({1.*2.1213203435596446*cos(\t r)+0.*2.1213203435596446*sin(\t r)},{0.*2.1213203435596446*cos(\t r)+1.*2.1213203435596446*sin(\t r)});
\draw [shift={(3.495922845299593,2.532657870877341)},line width=1.0] plot[domain=1.7671635199760698:3.9269908169872414,variable=\t]({1.*2.121320343559645*cos(\t r)+0.*2.121320343559645*sin(\t r)},{0.*2.121320343559645*cos(\t r)+1.*2.121320343559645*sin(\t r)});
\draw [shift={(6.892512263686043,2.756869966555093)},line width=1.0] plot[domain=-0.3095159137370276:0.7853981633974495,variable=\t]({1.*2.121320343559643*cos(\t r)+0.*2.121320343559643*sin(\t r)},{0.*2.121320343559643*cos(\t r)+1.*2.121320343559643*sin(\t r)});
\draw [shift={(6.892512263686043,2.756869966555093)},line width=1.0] plot[domain=1.7671635199760762:3.9269908169872414,variable=\t]({1.*2.1213203435596544*cos(\t r)+0.*2.1213203435596544*sin(\t r)},{0.*2.1213203435596544*cos(\t r)+1.*2.1213203435596544*sin(\t r)});
\draw (8.676673546001679,4.6) node[anchor=north west] {$J_2$};
\draw (0.8,4.6) node[anchor=north west] {$J_1$};
\end{tikzpicture}
\caption{Extremal tangents at a boundary point (on the left) and parallel tangents on two intersecting translates (on the right).}
\label{fig:gauss_tan}
\end{figure}
\begin{obs}\label{obs:continuity}
$g_+\circ \gamma$ is continuous from the right and $g_-\circ \gamma$ is continuous from the left.
\end{obs}
For $t_1<t_2$ let $\gamma_{[t_1,t_2]}$ denote the restriction of $\gamma$ to the interval $[t_1,t_2]$. For $t_1>t_2$ let $\gamma_{[t_1,t_2]}$ denote the concatenation of $\gamma_{[t_1,1]}$ and $\gamma_{[0,t_2]}$.
When it leads to no confusion, we identify $\gamma_{[t_1,t_2]}$ with its image, which is a connected part of the boundary $\partial C$.
For such a $J=\gamma_{[t_1,t_2]}$, let $g(J)=\bigcup\limits_{p\in J}g(p)$. Clearly, $g(J)$ is an arc of $\mathbb{S}^1$ from $g_-(\gamma(t_1))$ to $g_+(\gamma(t_2))$; let $|g(J)|$ denote the length of this arc.
\begin{lemma}
Let $C$ be a convex body and assume that $J$ is a connected part of $\partial C$ such that $|g(J)|<\pi$. Then there are no two translates of $J$ that intersect in more than one point.
\end{lemma}
\begin{proof}
Suppose $J$ has two translates $J_1$ and $J_2$ that intersect in two points, $p$ and $q$. Then both $J_1$ and $J_2$ have a tangent that is parallel to the segment $pq$. (See Figure \ref{fig:gauss_tan} right.) This shows that $J$ has two different tangents parallel to $pq$, and therefore $|g(J)|\ge \pi$.
\end{proof}
\begin{lemma}\label{lemma:our_illumination}
For a convex body $C$, which is not a parallelogram, and an injective parametrization $\gamma$ of $\partial C$, we can pick $0\le t_1<t_2<t_3\le 1$ such that $|g(\gamma_{[t_1,t_2]})|,|g(\gamma_{[t_2,t_3]})|$ and $|g(\gamma_{[t_3,t_1]})|$ are each strictly smaller than $\pi$.
\end{lemma}
\begin{proof}
We use the 2-dimensional case of the Illumination conjecture (proved by Levi \cite{MR76368}). If $C$ is not a parallelogram, we can pick three directions, $u_1,u_2$ and $u_3$, that illuminate $C$. Pick $t_1$ such that $\gamma(t_1)$ is illuminated by both $u_1$ and $u_2$. To see why this is possible, suppose that the parts illuminated by $u_1$ and $u_2$ are disjoint. Each light illuminates a connected, open-ended part of the boundary, so in this case there are two disjoint parts of the boundary that are not illuminated. If $u_3$ illuminated both, then it would also illuminate everything that is illuminated by $u_1$, or everything that is illuminated by $u_2$. But two lights are never enough to illuminate $C$, a contradiction.
Using the same argument, pick $t_2$ and $t_3$ such that $\gamma(t_2)$ is illuminated by both $u_2$ and $u_3$ and $\gamma(t_3)$ is illuminated by both $u_3$ and $u_1$.
Note that, identifying a direction with its angle, $u_1$ illuminates exactly those points $p$ for which $g_+(p)<u_1+\pi/2$ and $g_-(p)>u_1-\pi/2$. Therefore, $|g(\gamma_{[t_3,t_1]})|<u_1+\pi/2-(u_1-\pi/2)=\pi$. Similarly $|g(\gamma_{[t_1,t_2]})|<\pi$ and $|g(\gamma_{[t_2,t_3]})|<\pi$.
\end{proof}
Observation \ref{obs:continuity} and Lemma \ref{lemma:our_illumination} immediately imply the following statement.
\begin{lemma}\label{lemma:our_illumination_epsilon}
For a convex body $C$, which is not a parallelogram, and an injective parametrization $\gamma$ of $\partial C$, we can pick $0\le t_1<t_2<t_3\le 1$ and $\varepsilon>0$ such that $|g(\gamma_{[t_1-\varepsilon,t_2+\varepsilon]})|$, $|g(\gamma_{[t_2-\varepsilon,t_3+\varepsilon]})|$ and $|g(\gamma_{[t_3-\varepsilon,t_1+\varepsilon]})|$ are each strictly smaller than $\pi$.
\end{lemma}
\section{Proof of Theorem \ref{thm:general_three_col}}\label{sec:proof}
\subsection{Quasi orderings on planar point sets}
Cones provide a natural way to define quasi orderings on point sets (see \cite{TT07} for an example where this idea was used). A \emph{cone} is a closed region in the plane that is bounded by two rays that emanate from the origin. For a cone $K$ let $-K$ denote the cone that is the reflection of $K$ across the origin and let $p+K$ denote the translate of $K$ by the vector $p$.
\begin{obs}\label{obs:cones}
For any $p,q\in \mathbb{R}^2$ and cone $K$, the following are equivalent (see Figure \ref{fig:basic_cones}):
\begin{itemize}
\item $p\in q+K$
\item $q \in p+(-K)$
\item $p+K\subset q+K$
\end{itemize}
\end{obs}
\begin{figure}[!ht]
\centering
\definecolor{zzttqq}{rgb}{0.6,0.2,0.}
\scalebox{0.7}{
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=0.5cm,y=0.5cm]
\clip(0.7905167827637798,-0.6536209763473118) rectangle (28.955457063301157,10.602349828485595);
\fill[line width=1.0,color=zzttqq,fill=zzttqq,fill opacity=0.10000000149011612] (21.914471376040254,4.093030278329455) -- (23.914471376040254,7.093030278329451) -- (24.914471376040254,4.093030278329455) -- cycle;
\fill[line width=1.0,color=zzttqq,fill=zzttqq,fill opacity=0.10000000149011612] (18.,2.) -- (20.,5.) -- (21.,2.) -- cycle;
\fill[line width=1.0,color=zzttqq,fill=zzttqq,fill opacity=0.10000000149011612] (2.,2.) -- (4.,5.) -- (5.,2.) -- cycle;
\fill[line width=1.0,color=zzttqq,fill=zzttqq,fill opacity=0.10000000149011612] (8.339322178085318,5.977538969999748) -- (6.339322178085318,2.977538969999748) -- (5.339322178085318,5.977538969999748) -- cycle;
\draw [line width=1.0] (21.914471376040254,4.093030278329455)-- (23.914471376040254,7.093030278329451);
\draw [line width=1.0] (23.914471376040254,7.093030278329451)-- (25.914471376040254,10.093030278329444);
\draw [line width=1.0] (21.914471376040254,4.093030278329455)-- (24.914471376040254,4.093030278329455);
\draw [line width=1.0] (24.914471376040254,4.093030278329455)-- (27.914471376040247,4.093030278329455);
\draw [line width=1.0] (18.,2.)-- (20.,5.);
\draw [line width=1.0] (20.,5.)-- (22.,8.);
\draw [line width=1.0] (18.,2.)-- (21.,2.);
\draw [line width=1.0] (21.,2.)-- (24.,2.);
\draw (16.899061667902384,2.598103922826639) node[anchor=north west] {$q$};
\draw (20.801131546911115,4.799271546882852) node[anchor=north west] {$p$};
\draw [line width=1.0] (2.,2.)-- (4.,5.);
\draw [line width=1.0] (4.,5.)-- (6.,8.);
\draw [line width=1.0] (2.,2.)-- (5.,2.);
\draw [line width=1.0] (5.,2.)-- (8.,2.);
\draw [line width=1.0] (8.339322178085318,5.977538969999748)-- (6.339322178085318,2.977538969999748);
\draw [line width=1.0] (6.339322178085318,2.977538969999748)-- (4.339322178085318,-0.022461030000251903);
\draw [line width=1.0] (8.339322178085318,5.977538969999748)-- (5.339322178085318,5.977538969999748);
\draw [line width=1.0] (5.339322178085318,5.977538969999748)-- (2.339322178085318,5.977538969999748);
\draw (8.844789225333082,6.550200338745749) node[anchor=north west] {$p$};
\draw (0.9405963934948848,2.6481304597370077) node[anchor=north west] {$q$};
\begin{scriptsize}
\draw [fill=black] (21.914471376040254,4.093030278329455) circle (2.5pt);
\draw [fill=black] (18.,2.) circle (2.5pt);
\draw [fill=black] (2.,2.) circle (2.5pt);
\draw [fill=black] (8.339322178085318,5.977538969999748) circle (2.5pt);
\end{scriptsize}
\end{tikzpicture}}\caption{Basic properties of cones.}
\label{fig:basic_cones}
\end{figure}
For a cone $K$, let $\prec_K$ denote the quasi ordering on the points of the plane where a point $p$ is bigger than a point $q$ if and only if $p+K$ contains $q$, i.e., when interpreted as a digraph, $qp$ is an arc of $\prec_K$.
By Observation \ref{obs:cones}, this ordering is indeed transitive.
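As a minimal illustration (a sketch, not part of the proof; a cone is encoded by its two boundary directions $d_1,d_2$ in counterclockwise order, spanning an angle below $\pi$), the relation $\prec_K$ and the transitivity guaranteed by Observation \ref{obs:cones} can be checked with cross products:

```python
# Illustrative encoding of the cone quasi order. K is spanned by
# boundary directions d1, d2 with cross(d1, d2) > 0 (counterclockwise
# order, angle < pi); a vector v lies in the closed cone K iff it is
# (weakly) counterclockwise from d1 and clockwise from d2.

def cross(a, b):
    return a[0] * b[1] - a[1] * b[0]

def in_cone(v, d1, d2):
    return cross(d1, v) >= 0 and cross(d2, v) <= 0

def bigger(p, q, d1, d2):
    # p is bigger than q in the quasi order iff q lies in p + K.
    return in_cone((q[0] - p[0], q[1] - p[1]), d1, d2)
```

With $K$ the first quadrant ($d_1=(1,0)$, $d_2=(0,1)$), if $q\in p+K$ and $r\in q+K$ then $q+K\subset p+K$, so $r\in p+K$, matching the observation.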
\begin{figure}[!ht]
\centering
\definecolor{qqttcc}{rgb}{0.,0.2,0.8}
\definecolor{yqqqqq}{rgb}{0.5019607843137255,0.,0.}
\definecolor{qqwuqq}{rgb}{0.,0.39215686274509803,0.}
\definecolor{qqttzz}{rgb}{0.,0.2,0.6}
\scalebox{0.7}{
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=0.9cm,y=0.9cm]
\clip(2.8081291197505673,2.5852872443375357) rectangle (19.480030573405248,6.726256287901889);
\draw [shift={(14.,6.)},line width=1.0,color=qqttzz,fill=qqttzz,fill opacity=1.0] (0,0) -- (-135.:0.5401263969866527) arc (-135.:-71.56505117707799:0.5401263969866527) -- cycle;
\draw [shift={(15.,3.)},line width=1.0,color=qqwuqq,fill=qqwuqq,fill opacity=1.0] (0,0) -- (108.43494882292202:0.5401263969866527) arc (108.43494882292202:161.56505117707798:0.5401263969866527) -- cycle;
\draw [shift={(12.,4.)},line width=1.0,color=yqqqqq,fill=yqqqqq,fill opacity=1.0] (0,0) -- (-18.43494882292201:0.5401263969866527) arc (-18.43494882292201:45.:0.5401263969866527) -- cycle;
\draw [shift={(3.,4.)},line width=1.0,color=yqqqqq,fill=yqqqqq,fill opacity=1.0] (0,0) -- (-18.43494882292201:0.5401263969866527) arc (-18.43494882292201:45.:0.5401263969866527) -- cycle;
\draw [line width=1.0] (12.,4.)-- (14.,6.);
\draw [line width=1.0] (14.,6.)-- (15.,3.);
\draw [line width=1.0] (15.,3.)-- (12.,4.);
\draw [line width=1.0,color=qqttcc] (16.4144675126188,5.222364386041088)-- (16.,4.);
\draw [line width=1.0,color=qqttcc] (17.39966534681431,6.059782545107268)-- (17.8,4.2);
\draw [line width=1.0,color=qqttcc] (17.39966534681431,6.059782545107268)-- (17.,3.);
\draw [line width=1.0,color=qqttcc] (18.2124535600256,5.444033898735077)-- (17.8,4.2);
\draw [line width=1.0,color=qqttzz] (17.8,4.2)-- (17.,3.);
\draw [line width=1.0,color=qqwuqq] (19.185336421293663,3.732252661820378)-- (17.39966534681431,6.059782545107268);
\draw [line width=1.0,color=qqwuqq] (19.185336421293663,3.732252661820378)-- (18.2124535600256,5.444033898735077);
\draw [line width=1.0,color=qqwuqq] (17.8,4.2)-- (16.4144675126188,5.222364386041088);
\draw [line width=1.0,color=qqwuqq] (17.,3.)-- (16.,4.);
\draw [line width=1.0,color=yqqqqq] (16.4144675126188,5.222364386041088)-- (17.39966534681431,6.059782545107268);
\draw [line width=1.0,color=yqqqqq] (16.,4.)-- (18.2124535600256,5.444033898735077);
\draw [line width=1.0,color=yqqqqq] (17.8,4.2)-- (19.185336421293663,3.732252661820378);
\draw [line width=1.0,color=yqqqqq] (17.,3.)-- (19.185336421293663,3.732252661820378);
\draw [line width=1.0,color=yqqqqq] (16.,4.)-- (17.8,4.2);
\draw [line width=1.0,color=qqwuqq] (18.2124535600256,5.444033898735077)-- (17.39966534681431,6.059782545107268);
\draw [line width=1.0,color=qqwuqq] (16.4144675126188,5.222364386041088)-- (19.185336421293663,3.732252661820378);
\draw [line width=1.0,color=qqttcc] (17.,3.)-- (18.2124535600256,5.444033898735077);
\draw [line width=1.0,color=yqqqqq] (16.4144675126188,5.222364386041088)-- (18.2124535600256,5.444033898735077);
\draw [line width=1.0,color=qqttcc] (16.,4.)-- (17.39966534681431,6.059782545107268);
\draw [line width=1.0,color=yqqqqq] (16.,4.)-- (19.185336421293663,3.732252661820378);
\draw [line width=1.0,color=qqttcc] (17.,3.)-- (16.4144675126188,5.222364386041088);
\draw [line width=1.0] (3.,4.)-- (5.,6.);
\draw [line width=1.0] (6.,3.)-- (3.,4.);
\draw [line width=1.0,color=yqqqqq] (8.399665346814308,6.059782545107264)-- (7.414467512618802,5.222364386041083);
\draw [line width=1.0,color=yqqqqq] (7.804180545110583,5.553620463659098) -- (7.802122827418463,5.764536527101339);
\draw [line width=1.0,color=yqqqqq] (7.804180545110583,5.553620463659098) -- (8.012010032014645,5.517610404047007);
\draw [line width=1.0,color=yqqqqq] (9.212453560025601,5.444033898735072)-- (7.414467512618802,5.222364386041083);
\draw [line width=1.0,color=yqqqqq] (8.17944361442798,5.316676508181941) -- (8.293633375274837,5.494019448661143);
\draw [line width=1.0,color=yqqqqq] (8.17944361442798,5.316676508181941) -- (8.333287697369565,5.172378836115012);
\draw [line width=1.0,color=yqqqqq] (8.8,4.2)-- (7.,4.);
\draw [line width=1.0,color=yqqqqq] (7.765794289841775,4.085088254426863) -- (7.882105905312236,4.26104685218987);
\draw [line width=1.0,color=yqqqqq] (7.765794289841775,4.085088254426863) -- (7.917894094687763,3.938953147810129);
\draw [line width=1.0,color=yqqqqq] (10.185336421293664,3.732252661820378)-- (8.8,4.2);
\draw [line width=1.0,color=yqqqqq] (9.36473230652311,4.009322826499032) -- (9.544504005353444,4.119649415858658);
\draw [line width=1.0,color=yqqqqq] (9.36473230652311,4.009322826499032) -- (9.440832415940221,3.81260324596172);
\draw [line width=1.0,color=yqqqqq] (10.185336421293664,3.732252661820378)-- (7.,4.);
\draw [line width=1.0,color=yqqqqq] (8.458111127749135,3.8774366906380453) -- (8.606240642320259,4.027594830387427);
\draw [line width=1.0,color=yqqqqq] (8.458111127749135,3.8774366906380453) -- (8.579095778973405,3.704657831432951);
\draw [line width=1.0,color=yqqqqq] (10.185336421293664,3.732252661820378)-- (8.,3.);
\draw [line width=1.0,color=yqqqqq] (8.96463306203684,3.323224891359415) -- (9.041186483185905,3.5197685092421795);
\draw [line width=1.0,color=yqqqqq] (8.96463306203684,3.323224891359415) -- (9.144149938107761,3.2124841525781975);
\begin{scriptsize}
\draw [fill=black] (16.4144675126188,5.222364386041088) circle (2.5pt);
\draw [fill=black] (16.,4.) circle (2.5pt);
\draw [fill=black] (17.,3.) circle (2.5pt);
\draw [fill=black] (19.185336421293663,3.732252661820378) circle (2.5pt);
\draw [fill=black] (17.8,4.2) circle (2.5pt);
\draw [fill=black] (18.2124535600256,5.444033898735077) circle (2.5pt);
\draw [fill=black] (17.39966534681431,6.059782545107268) circle (2.5pt);
\draw [fill=black] (7.414467512618802,5.222364386041083) circle (2.5pt);
\draw [fill=black] (7.,4.) circle (2.5pt);
\draw [fill=black] (8.,3.) circle (2.5pt);
\draw [fill=black] (10.185336421293664,3.732252661820378) circle (2.5pt);
\draw [fill=black] (8.8,4.2) circle (2.5pt);
\draw [fill=black] (9.212453560025601,5.444033898735072) circle (2.5pt);
\draw [fill=black] (8.399665346814308,6.059782545107264) circle (2.5pt);
\end{scriptsize}
\end{tikzpicture}}\caption{Quasi orderings on a point set.}
\label{fig:ordering}
\end{figure}
Suppose the cones $K_1, K_2, K_3$ correspond to the three corners of a triangle; in other words, the cones $K_1,-K_3,K_2,-K_1,K_3,-K_2$ partition the plane around the origin in this order. Then we say that $K_1, K_2, K_3$ is a set of \emph{tri-partition cones}. In this case the intersection of any translates of $K_1, K_2, K_3$ forms a (sometimes degenerate) triangle.
\begin{obs}
Let $K_1,K_2,K_3$ be a set of tri-partition cones and let $P$ be a planar point set. Then any two distinct points of $P$ are comparable in either $\prec_{K_1}$, $\prec_{K_2}$ or $\prec_{K_3}$. (See Figure \ref{fig:ordering}.)
\end{obs}
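The observation is easy to check numerically. Below is a minimal sketch with one concrete set of tri-partition cones, represented as half-open $60°$ angular sectors ($K_1=[0°,60°)$, $K_2=[120°,180°)$, $K_3=[240°,300°)$, so that $K_1,-K_3,K_2,-K_1,K_3,-K_2$ indeed partition the directions around the origin); the sector representation and the random point set are illustrative assumptions, not constructions from the proof.

```python
import math, random

# Three tri-partition cones as half-open angular sectors (degrees):
# K1=[0,60), K2=[120,180), K3=[240,300); their negatives fill the rest.
CONES = {1: 0.0, 2: 120.0, 3: 240.0}   # starting angle of each 60-degree sector

def in_cone(d, a0):
    """Is the direction of vector d inside the sector [a0, a0+60)?"""
    ang = math.degrees(math.atan2(d[1], d[0])) % 360.0
    return (ang - a0) % 360.0 < 60.0

def comparable(p, q):
    """Return (i, x, y) with x preceding y in <_{K_i}, i.e. y lies in x + K_i."""
    d = (q[0] - p[0], q[1] - p[1])
    for i, a0 in CONES.items():
        if in_cone(d, a0):                       # q - p lies in K_i
            return (i, p, q)
        if in_cone((-d[0], -d[1]), a0):          # q - p lies in -K_i
            return (i, q, p)
    return None                                  # never reached: the six sectors cover the plane

random.seed(1)
pts = [(random.random(), random.random()) for _ in range(40)]
assert all(comparable(pts[a], pts[b]) is not None
           for a in range(len(pts)) for b in range(a + 1, len(pts)))
```

Since the six sectors $K_1,-K_3,K_2,-K_1,K_3,-K_2$ cover all directions, `comparable` never returns `None`, which is exactly the statement of the observation.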
In other words, when interpreted as digraphs, the union of $\prec_{K_1}$, $\prec_{K_2}$ and $\prec_{K_3}$ forms a complete multidigraph on $P$. As a warm-up for the proof of Theorem \ref{thm:general_three_col}, we show the following theorem.
\begin{theorem}\label{thm:three_cones}
There exists a positive integer $m$ such that for any point set $P$, and any set of tri-partition cones $K_1,K_2,K_3$, we can three-color $P$ such that no translate of $K_1$, $K_2$ or $K_3$ that contains at least $m$ points of $P$ is monochromatic.
\end{theorem}
\begin{proof}
We set $m$ to be $f(3,2)+13$ according to Theorem \ref{thm:multi_essw_new}.
Consider the three quasi-orders $\prec_{K_1}$, $\prec_{K_2}$ and $\prec_{K_3}$. Their union gives a complete multidigraph on $P$, hence we can apply Theorem \ref{thm:multi_essw_new} with $k=3$ and $l=2$, resulting in subsets $S_{i,j}$ for $i\in[3],j\in [2]$. Let $S=\bigcup\limits_{i\in [3],j\in[2]}S_{i,j}$. For each point $p\in P\setminus S$ there is an $i$ such that $\prec_{K_i}$ has an edge from a vertex of $S_{i,1}$ and from a vertex of $S_{i,2}$ to $p$. Let $P_1,P_2,P_3$ be the partition of $P\setminus S$ according to this value of $i$.
We start by coloring the points of $S$. Color the points of $S_{1,1}\cup S_{2,1} \cup S_{3,1}$ with the first color and color the points of $S_{1,2}\cup S_{2,2}\cup S_{3,2}$ with the second color.
Any translate of $K_1$, $K_2$ or $K_3$ that contains $f(3,2)+13$ points of $P$ must contain $5$ points from one of $P_1$, $P_2$ or $P_3$ by the pigeonhole principle. (Note that the cone might contain all points of $S$.) Therefore, it is enough to show that for each $i\in [3]$ the points of $P_i$ can be three-colored such that no translate of $K_1$, $K_2$, or $K_3$ that contains at least $5$ points of $P_i$ is monochromatic.
Consider $P_1$; the proof is the same for $P_2$ and $P_3$. Take a translate of $K_1$ and suppose that it contains a point $p$ of $P_1$. By Theorem \ref{thm:multi_essw_new}, there is an edge of $\prec_{K_1}$ from a vertex of $S_{1,1}$ to $p$ and another edge from a vertex of $S_{1,2}$ to $p$. Thus any such translate contains a point from $S_{1,1}$ and another point from $S_{1,2}$, and hence it cannot be monochromatic.
Therefore, we only have to consider the translates of $K_2$ and $K_3$. Two translates of a cone intersect at most once on their boundary. Hence, the translates of $K_2$ form a pseudohalfplane arrangement, and so do the translates of $K_3$. Therefore, by Corollary \ref{cor:pseudohalfplane}, there is a proper three-coloring for the translates of $K_2$ and $K_3$ together.
\end{proof}
\begin{remark}
From Theorem \ref{thm:three_cones}, it follows using standard methods (see Section \ref{sec:proofend}) that Theorem \ref{thm:general_three_col} holds for triangles.
This was of course known before, even for two-colorings of homothetic copies of triangles.
Our proof cannot be modified for homothets, but a two-coloring would follow if instead of Corollary \ref{cor:pseudohalfplane} we applied a more careful analysis for the two cones.
\end{remark}
\subsection{Proof of Theorem \ref{thm:general_three_col}}\label{sec:proofend}
If $C$ is a parallelogram, then our proof method fails.
Luckily, translates of parallelograms (and other symmetric polygons) were the first for which it was shown that even two colors are enough \cite{Pach86}; in fact, by now we know that two colors are enough even for homothets of parallelograms \cite{homotsquare}.
So from now on we assume that $C$ is not a parallelogram.
The proof of Theorem \ref{thm:general_three_col} relies on the same ideas as we used for Theorem \ref{thm:three_cones}. We partition $P$ into several parts, and for each part $P_i$, we divide the translates of $C$ into three families such that two of the families each form a pseudohalfplane arrangement over $P_i$, while the third family will only contain translates that are automatically non-monochromatic. Then Corollary \ref{cor:pseudohalfplane} would provide us a proper three-coloring. As in the proof of Theorem \ref{thm:three_cones}, this is not done directly. First, we divide the plane using a grid, and then in each small square we will use Theorem \ref{thm:multi_essw_new} to discard some of the translates of $C$ at the cost of a bounded number of points.\\
The first step of the proof is a classic divide and conquer idea \cite{Pach86}. We choose a constant $r=r(C)$ depending only on $C$ and divide the plane into a grid of squares of side length $r$. Since each translate of $C$ intersects some bounded number of squares, by the pigeonhole principle we can find for any positive integer $m$ another integer $m'$ such that the following holds: each translate $\hat C$
of $C$ that contains at least $m'$ points intersects a square $Q$ such that $\hat C\cap Q$ contains at least $m$ points.
For example, choosing $m'=m(\mathrm{diam}(C)/r+2)^2$ is sufficient, where $\mathrm{diam}(C)$ denotes the diameter of $C$.
Therefore, it is enough to show the following localized version of Theorem \ref{thm:general_three_col}, since applying it separately for the points in each square of the grid provides a proper three-coloring of the whole point set.
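The counting behind this choice of $m'$ is easy to verify numerically for a concrete body. The sketch below takes $C$ to be a disk and brute-forces the number of grid cells a translate can meet; the radius, grid size and number of trials are arbitrary illustrative values.

```python
import math, random

def cells_touched(center, R, r):
    """All grid cells of side r that the disk of radius R about `center` meets."""
    cx, cy = center
    out = set()
    for i in range(math.floor((cx - R) / r), math.floor((cx + R) / r) + 1):
        for j in range(math.floor((cy - R) / r), math.floor((cy + R) / r) + 1):
            # closest point of the cell [i*r,(i+1)*r] x [j*r,(j+1)*r] to the center
            qx = min(max(cx, i * r), (i + 1) * r)
            qy = min(max(cy, j * r), (j + 1) * r)
            if (qx - cx) ** 2 + (qy - cy) ** 2 <= R * R:
                out.add((i, j))
    return out

random.seed(0)
R, r = 1.0, 0.3                      # diam(C) = 2R, grid side r
bound = (2 * R / r + 2) ** 2         # the ratio m'/m from the text
for _ in range(200):
    c = (random.uniform(-5, 5), random.uniform(-5, 5))
    assert len(cells_touched(c, R, r)) <= bound
```

Consequently, if a translate contains $m'=m(\mathrm{diam}(C)/r+2)^2$ points, the pigeonhole principle forces at least $m$ of them into a single cell.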
\begin{theorem}\label{thm:local_three_col}
There is a positive integer $m$ such that for any convex body $C$ there is a positive real $r$ such that any finite point set $P$ in the plane that lies in a square of side length $r$ can be three-colored in a way that there is no translate of $C$ containing at least $m$ points of $P$, all of the same color.
\end{theorem}
We will show that $m$ can be chosen to be $f(3,2)+13$ according to Theorem \ref{thm:multi_essw_new}, independently of $C$.
\begin{proof}
We pick $r$ the following way. First we fix an injective parametrization $\gamma$ of $\partial C$ and then fix $t_1,t_2,t_3$ and $\varepsilon$ according to Lemma \ref{lemma:our_illumination_epsilon}. Let $\ell_1,\ell_2,\ell_3$ be the tangents of $C$ touching at $\gamma(t_1),\gamma(t_2)$ and $\gamma(t_3)$. Let $K_{1,2}$, $K_{2,3}$, $K_{3,1}$ be the set of tri-partition cones bordered by $\ell_1,\ell_2,\ell_3$, such that $K_{i,i+1}$ is bordered by $\ell_i$ on its counterclockwise side, and by $\ell_{i+1}$ on its clockwise side (see Figure \ref{fig:cone_in_C} left, and note that we always treat $3+1$ as 1 in the subscript).
For a translate $\hat{C}$ of $C$ we will denote by $\hat{\gamma}$ the translated parametrization of $\partial \hat{C}$, i.e., $\hat{\gamma}(t)=\gamma(t)+v$ if $\hat{C}$ was translated by vector $v$. Our aim is to choose $r$ small enough to satisfy the following two properties for each $i\in [3]$.
\begin{enumerate}[label=(\Alph*)]
\item Let $\hat C$ be a translate of $C$, and $Q$ be a square of side length $r$ such that $\partial \hat C\cap Q\subset \hat{\gamma}_{[t_i+\varepsilon/2,t_{i+1}-\varepsilon/2]}$ (see Figure \ref{fig:cone_in_C} right). Then for any translate $K$ of $K_{i,i+1}$ whose apex is in $Q\cap \hat C$, we have $K\cap Q\subset \hat C$. (I.e., $r$ is small with respect to $C$.)
\item Let $\hat C$ be a translate of $C$, and $Q$ be a square of side length $r$ such that $\hat{\gamma}_{[t_i-\varepsilon/2,t_{i+1}+\varepsilon/2]}$ intersects $Q$. Then $\partial \hat C\cap Q\subset \hat{\gamma}_{[t_i-\varepsilon,t_{i+1}+\varepsilon]}$. (I.e., $r$ is small compared to $\varepsilon$.)
\end{enumerate}
\begin{figure}[!ht]
\centering
\definecolor{zzttqq}{rgb}{0.6,0.2,0.}
\definecolor{uuuuuu}{rgb}{0.26666666666666666,0.26666666666666666,0.26666666666666666}
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=0.5cm,y=0.5cm]
\clip(2.1640924696902344,-3.291941380065454) rectangle (16.64624177595318,6.606761736183365);
\fill[line width=1.0,fill=black,fill opacity=0.25] (3.768827753322032,4.392669166650977) -- (4.123243589875512,3.4575812484314694) -- (4.758793204447675,4.5339787755868235) -- cycle;
\fill[line width=1.0,fill=black,fill opacity=0.25] (14.058734708892569,5.861470654848949) -- (13.068769257766968,5.720161045913108) -- (13.374479808664983,5.132227738178157) -- cycle;
\fill[line width=1.0,fill=black,fill opacity=0.25] (6.332889037089297,-2.3723296870708763) -- (7.017143937316888,-1.6430867704000818) -- (5.978473200535815,-1.4372417688513646) -- cycle;
\draw [shift={(7.958515351695592,2.108914472950761)},line width=1.0] plot[domain=2.6956780077804776:4.321854967035546,variable=\t]({1.*3.1083274241025274*cos(\t r)+0.*3.1083274241025274*sin(\t r)},{0.*3.1083274241025274*cos(\t r)+1.*3.1083274241025274*sin(\t r)});
\draw [shift={(7.261346221122771,2.5938329918446867)},line width=1.0] plot[domain=0.13035761915140343:2.755875028289039,variable=\t]({1.*2.2743120841793814*cos(\t r)+0.*2.2743120841793814*sin(\t r)},{0.*2.2743120841793814*cos(\t r)+1.*2.2743120841793814*sin(\t r)});
\draw [shift={(6.496593035223344,2.298949087855251)},line width=1.0] plot[domain=-1.4801162709845777:0.19311405339801058,variable=\t]({1.*3.0769654110024027*cos(\t r)+0.*3.0769654110024027*sin(\t r)},{0.*3.0769654110024027*cos(\t r)+1.*3.0769654110024027*sin(\t r)});
\draw [line width=1.0,domain=2.1640924696902344:16.64624177595318] plot(\x,{(--12.223776958212898--0.4526542136088514*\x)/3.17113631658728});
\draw [line width=1.0,domain=2.1640924696902344:16.64624177595318] plot(\x,{(--18.39532881276564-3.3853951579956414*\x)/1.2831281782249193});
\draw [line width=1.0,domain=2.1640924696902344:16.64624177595318] plot(\x,{(-21.960768293888048--2.565850114616926*\x)/2.407559228948587});
\draw (9.4,6.6) node[anchor=north west] {$\ell_1$};
\draw (4.6,-0.1) node[anchor=north west] {$\ell_2$};
\draw (6.6,2.6) node[anchor=north west] {$C$};
\draw (10.94582130433904,2.4) node[anchor=north west] {$\ell_3$};
\draw [shift={(3.768827753322032,4.392669166650977)},line width=1.0,fill=black,fill opacity=0.25] plot[domain=-1.2085070485393068:0.14178417369315438,variable=\t]({1.*1.*cos(\t r)+0.*1.*sin(\t r)},{0.*1.*cos(\t r)+1.*1.*sin(\t r)});
\draw [shift={(6.332889037089297,-2.3723296870708763)},line width=1.0,fill=black,fill opacity=0.25] plot[domain=0.817214862644781:1.9330856050504859,variable=\t]({1.*1.*cos(\t r)+0.*1.*sin(\t r)},{0.*1.*cos(\t r)+1.*1.*sin(\t r)});
\draw [shift={(14.058734708892569,5.861470654848949)},line width=1.0,fill=black,fill opacity=0.4000000059604645] plot[domain=3.283376827282948:3.958807516234576,variable=\t]({1.*1.*cos(\t r)+0.*1.*sin(\t r)},{0.*1.*cos(\t r)+1.*1.*sin(\t r)});
\draw (13.6,5.5) node[anchor=north west] {$K_{3,1}$};
\draw (2.203321433221302,4.026165982141844) node[anchor=north west] {$K_{1,2}$};
\draw (6.7,-1.9) node[anchor=north west] {$K_{2,3}$};
\begin{scriptsize}
\draw [fill=uuuuuu] (6.939964069909312,4.845323380259829) circle (2.0pt);
\draw [fill=uuuuuu] (8.740448266037884,0.19352042754604953) circle (2.0pt);
\draw [fill=uuuuuu] (5.051955931546951,1.0072740086553358) circle (2.0pt);
\end{scriptsize}
\end{tikzpicture}
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=0.8cm,y=0.8cm]
\clip(-0.5212593802625312,0.9024160297185335) rectangle (7.098126520651556,7.480250043437565);
\fill[line width=1.0,fill=black,fill opacity=0.30000001192092896] (2.9139611807128176,4.440100887949994) -- (3.068078600743505,3.0862602098415906) -- (4.272853164612676,4.54034726918462) -- cycle;
\fill[line width=1.0,color=zzttqq,fill=zzttqq,fill opacity=0.10000000149011612] (2.12382,3.74) -- (3.54248,3.74) -- (3.54248,5.15866) -- (2.12382,5.15866) -- cycle;
\draw [shift={(4.663491963072474,3.1523141871657336)},line width=1.0] plot[domain=2.63100772848181:3.9408911121618377,variable=\t]({1.*2.759430143068236*cos(\t r)+0.*2.759430143068236*sin(\t r)},{0.*2.759430143068236*cos(\t r)+1.*2.759430143068236*sin(\t r)});
\draw [shift={(4.858950201988104,2.01321086543119)},line width=1.0] plot[domain=1.0014831356942346:2.3788491897615827,variable=\t]({1.*3.6008052563532615*cos(\t r)+0.*3.6008052563532615*sin(\t r)},{0.*3.6008052563532615*cos(\t r)+1.*3.6008052563532615*sin(\t r)});
\draw [line width=1.0,domain=-0.8212593802625312:7.098126520651556] plot(\x,{(--5.714971243081739-2.5455417413714536*\x)/0.2897773217368045});
\draw [line width=1.0,domain=-0.8212593802625312:7.098126520651556] plot(\x,{(--15.596206877223619--0.21851199420715073*\x)/2.962044052434135});
\draw [shift={(2.9139611807128176,4.440100887949994)},line width=1.0,fill=black,fill opacity=0.30000001192092896] plot[domain=-1.4574470824511945:0.07363728921063928,variable=\t]({1.*1.3625845885147592*cos(\t r)+0.*1.3625845885147592*sin(\t r)},{0.*1.3625845885147592*cos(\t r)+1.*1.3625845885147592*sin(\t r)});
\draw [line width=1.0] (2.9139611807128176,4.440100887949994)-- (3.068078600743505,3.0862602098415906);
\draw [line width=1.0] (4.272853164612676,4.54034726918462)-- (2.9139611807128176,4.440100887949994);
\draw [line width=1.0] (3.1733300410036582,2.161681526748409)-- (3.068078600743505,3.0862602098415906);
\draw [line width=1.0] (4.272853164612676,4.54034726918462)-- (5.154440683911443,4.6053825770666235);
\draw (4.3,5.9) node[anchor=north west,rotate=50] {$\hat{\gamma}(t_1)$};
\draw (0.6,3.183055506852521) node[anchor=north west] {$\hat{\gamma}(t_2)$};
\draw (6.487561621261355,6.433415706547415) node[anchor=north west] {$\ell_1$};
\draw (1.3,1.6925588406940881) node[anchor=north west] {$\ell_2$};
\draw [line width=1.0,color=zzttqq] (2.12382,3.74)-- (3.54248,3.74);
\draw [line width=1.0,color=zzttqq] (3.54248,3.74)-- (3.54248,5.15866);
\draw [line width=1.0,color=zzttqq] (3.54248,5.15866)-- (2.12382,5.15866);
\draw [line width=1.0,color=zzttqq] (2.12382,5.15866)-- (2.12382,3.74);
\draw (-0.7,3.7) node[anchor=north west] {$\hat{\gamma}(t_2-\varepsilon/2)$};
\draw (3.6,5.9) node[anchor=north west,rotate=50] {$\hat{\gamma}(t_1+\varepsilon/2)$};
\draw (4.35,3.919324944352469) node[anchor=north west] {$K$};
\begin{scriptsize}
\draw [fill=black] (1.9217694985931228,2.840204202956866) circle (2.0pt);
\draw [fill=black] (4.594036229290453,5.60425793853547) circle (2.0pt);
\draw [fill=black] (1.9127284810063392,3.370843316127941) circle (2.0pt);
\draw [fill=black] (4.030305286656739,5.517372120064994) circle (2.0pt);
\end{scriptsize}
\end{tikzpicture}
\caption{Selecting the cones (on the left) and Property (A) (on the right).}
\label{fig:cone_in_C}
\end{figure}
We show that an $r$ satisfying properties (A) and (B) can be found for $i=1$. The argument is the same for $i=2$ and $i=3$, and we can take the smallest among the three resulting $r$-s.
First, consider property (A). Since the sides of $K$ are parallel to $\ell_1$ and $\ell_2$, the portion of $K$ that lies ``above'' the segment $\overline{\hat{\gamma}(t_1)\hat{\gamma}(t_2)}$ is in $\hat{C}$. Hence, if we choose $r$ small enough so that $Q$ cannot intersect $\overline{\hat{\gamma}(t_1)\hat{\gamma}(t_2)}$, then property (A) is satisfied. For example, choosing $r$ to be smaller than $\frac{1}{\sqrt{2}}$ times the distance between the segments $\overline{\hat{\gamma}(t_1)\hat{\gamma}(t_2)}$ and $\overline{\hat{\gamma}(t_1+\varepsilon/2)\hat{\gamma}(t_2-\varepsilon/2)}$ works.
Using that $\gamma$ is a continuous function on a compact set, we can pick $r$ such that property (B) is satisfied.
Therefore, there is an $r$ satisfying properties (A) and (B).
\bigskip
The next step is a subdivision of the point set $P$ using Theorem \ref{thm:multi_essw_new}, like we did in the proof of Theorem \ref{thm:three_cones}.
The beginning of our argument is exactly the same.
Apply Theorem \ref{thm:multi_essw_new} to the multidigraph given by the union of $\prec_{K_{1,2}}$, $\prec_{K_{2,3}}$ and $\prec_{K_{3,1}}$. By Observation \ref{obs:cones}, this is indeed a complete multidigraph on $P$.
We apply Theorem \ref{thm:multi_essw_new} with $k=3$ and $l=2$, resulting in subsets $S_{i,j}$ for $i\in[3],j\in [2]$. Let $S=\bigcup\limits_{i\in [3],j\in[2]}S_{i,j}$. For each point $p\in P\setminus S$ there is an $i$ such that $\prec_{K_{i,i+1}}$ has an edge from a vertex of $S_{i,1}$ and from a vertex of $S_{i,2}$ to $p$. Let $P_1,P_2,P_3$ be the partition of $P\setminus S$ according to this value of $i$.
We start by coloring the points of $S$. Color the points of $S_{1,1}\cup S_{2,1} \cup S_{3,1}$ with the first color and color the points of $S_{1,2}\cup S_{2,2}\cup S_{3,2}$ with the second color.
Note that $m$ is at least $f(3,2)+13$. Any translate of $C$ that contains $f(3,2)+13$ points of $P$ must contain $5$ points from one of $P_1$, $P_2$ or $P_3$. (Note that the translate might contain all points of $S$.) Therefore, it is enough to show that for each $i\in [3]$ the points of $P_i$ can be colored with three colors such that no translate of $C$ that contains at least $5$ points of $P_i$ is monochromatic.\\
Consider $P_1$; the proof is the same for $P_2$ and $P_3$. Let $Q$ denote the square of side length $r$ containing $P$. We divide the translates of $C$ that intersect $Q$ into four groups. Let $\mathcal{C}_0$ denote the translates where $\hat{C}\cap Q=Q$. Let $\mathcal{C}_1$ denote the translates for which $\partial \hat{C}\cap Q\subset \hat{\gamma}_{[t_1+\varepsilon/2,t_{2}-\varepsilon/2]}$. Let $\mathcal{C}_2$ denote the translates for which $\partial \hat{C}\cap Q\cap \hat{\gamma}_{[t_2-\varepsilon/2,t_{3}]}\ne \emptyset$. Let $\mathcal{C}_3$ denote the remaining translates, for which $\partial \hat{C}\cap Q\cap \hat{\gamma}_{[t_3,t_{1}+\varepsilon/2]}\ne \emptyset$.
We do not need to worry about the translates in $\mathcal{C}_0$, as $Q$ itself will not be monochromatic.
Take a translate $\hat C$ from $\mathcal{C}_1$ and suppose that it contains a point $p\in P_1$. By Theorem \ref{thm:multi_essw_new}, there is an edge of $\prec_{K_{1,2}}$ from a vertex of $S_{1,1}$ to $p$ and another edge from a vertex of $S_{1,2}$ to $p$. I.e., the cone $p+K_{1,2}$ contains a point from $S_{1,1}$ and another point from $S_{1,2}$, and hence it is not monochromatic. From property (A) we know that every point in $(p+K_{1,2})\cap P$ is also in $\hat C$. Therefore, $\hat C$ is not monochromatic.
Now consider the translates in $\mathcal{C}_2$. From property (B) we know that for these translates we have $\partial \hat C\cap Q\subset \hat{\gamma}_{[t_2-\varepsilon,t_3+\varepsilon]}$. By the definition of $t_1,t_2$ and $t_3$, we know that this implies that any two translates from $\mathcal{C}_2$ intersect at most once on their boundary within $Q$, i.e., they behave as pseudohalfplanes. To turn the translates in $\mathcal{C}_2$ into a pseudohalfplane arrangement as defined earlier, we can do as follows. For a translate $\hat{C}$, replace it with the convex set whose boundary is $\hat{\gamma}_{[t_2-\varepsilon,t_3+\varepsilon]}$ extended from its endpoints with two rays orthogonal to the segment $\overline{\hat{\gamma}(t_2-\varepsilon)\hat{\gamma}(t_3+\varepsilon)}$. This new family provides the same intersection pattern in $Q$ and forms a pseudohalfplane arrangement. We can do the same with the translates in $\mathcal{C}_3$. Therefore, by Corollary \ref{cor:pseudohalfplane} there is a proper three-coloring for the translates in $\mathcal{C}_2\cup \mathcal{C}_3$.
\end{proof}
\section{Overview of the computational complexity of the algorithm}\label{sec:overview}
In this section we show that given a point set $P$ and a convex set $C$, we can determine some $m=m(C)$ and calculate a three-coloring of $P$ efficiently if $C$ is given in a natural way, for example, if $C$ is a disk.
Our algorithm is randomized and its running time is a polynomial of the number of points, $n=|P|$.
\begin{itemize}
\item First, we need to fix three points on the boundary, $\tau_1,\tau_2,\tau_3\subset \partial C$ such that Lemma \ref{lemma:our_illumination_epsilon} is satisfied with
$\tau_i=\gamma(t_i)$
for some $t_i$ and $\varepsilon>0$ for each $i$.
Note that we do not need to fix a complete parametrization $\gamma$ of $\partial C$ or $\varepsilon>0$; instead, it is enough to choose some points $\tau_i^{\scalebox{0.6}{$--$}}$ and $\tau_i^{\scalebox{0.6}{$++$}}$ that satisfy the conclusion of Lemma \ref{lemma:our_illumination_epsilon} if we assume $\tau_i^{\scalebox{0.6}{$--$}}=\gamma(t_i-\varepsilon)$ and $\tau_i^{\scalebox{0.6}{$++$}}=\gamma(t_i+\varepsilon)$ for each $i$.
If $C$ has a smooth boundary, like a disk, we can pick $\tau_1,\tau_2,\tau_3$ to be the touching points of an equilateral triangle with $C$ inscribed in it.
If the boundary of $C$ contains vertex-type sharp turns, the complexity of finding these turns depends on how $C$ is given, but for any reasonable input method, this should be straightforward.
After that, one can follow closely the steps of the proof of the Illumination conjecture in the plane to get an algorithm, but apparently, this has not yet been studied in detail.
\item To pick $r$, the side length of the squares of the grid, we can fix some arbitrary points $\tau_i^{\scalebox{0.6}{$-$}}$ between $\tau_i^{\scalebox{0.6}{$--$}}$ and $\tau_i$, and points $\tau_i^{\scalebox{0.6}{$+$}}$ between $\tau_i$ and $\tau_i^{\scalebox{0.6}{$++$}}$, to play the roles of $\gamma(t_i-\varepsilon/2)$ and $\gamma(t_i+\varepsilon/2)$, respectively, for each $i$.
It is sufficient to pick $r$ so that $r\sqrt{2}$, the diameter of the square of side length $r$, is less than
\begin{itemize}
\item the distance of $\tau_i^{\scalebox{0.6}{$+$}}$ and $\tau_{i+1}^{\scalebox{0.6}{$-$}}$ from the segment $\overline{\tau_i\tau_{i+1}}$,
\item the distance of $\tau_i^{\scalebox{0.6}{$-$}}$ from $\tau_i^{\scalebox{0.6}{$--$}}$, and
\item the distance of $\tau_i^{\scalebox{0.6}{$+$}}$ from $\tau_i^{\scalebox{0.6}{$++$}}$,
\end{itemize}
for each $i$, to guarantee that properties (A) and (B) are satisfied.
\item Set $m=f(3,2)+13$, which is an absolute constant given by Theorem \ref{thm:multi_essw_new}.
We need to construct the complete multidigraph given by the tri-partition cones determined by $\tau_1,\tau_2,\tau_3$, which needs a comparison for each pair of points.
To obtain the subsets $S_{i,j}\subset P$ for $i\in[3],j\in [2]$, where $P$ is the set of points that are contained in a square of side length $r$, we randomly sample the required number of points from each of the constantly many $T_{j_1,\dots, j_i}$ according to the probability distributions $w_{j_1,\dots, j_i}$ given by Lemma \ref{lemma:prob_dist}.
These probability distributions can be computed by LP.
With high probability, all the $S_{i,j}$-s will be disjoint---otherwise, we can resample until we obtain disjoint sets.
\item To find the three-coloring for the two pseudohalfplane arrangements, given by Corollary \ref{cor:pseudohalfplane}, it is enough to determine the two-coloring given by Theorem \ref{thm:pseudohalfplane} for one pseudohalfplane arrangement.
While not mentioned explicitly in \cite{abafree}, the polychromatic $k$-coloring can be found in polynomial time if we know the hypergraph determined by the range space, as this hypergraph can only have a polynomial number of edges, and the coloring algorithm only needs to check some simple relations among a constant number of vertices and edges.
\item Finally, to compute a suitable $m'$ for Theorem \ref{thm:general_three_col} from the $m$ of Theorem \ref{thm:local_three_col}, it is enough to know any upper bound $B$ for the diameter of $C$, and let $m'=m(B/r+2)^2$.
\end{itemize}
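As a concrete instance of the choice of $r$ above, the sketch below computes such an $r$ for a unit disk, with the touching points $\tau_i$ placed at angles $90°$, $210°$, $330°$ and the angular offsets standing in for $\varepsilon$ and $\varepsilon/2$ chosen arbitrarily; all numbers are illustrative assumptions, not values prescribed by the algorithm.

```python
import math

def pt(deg):
    """Point of the unit circle at the given angle (C is taken to be a unit disk)."""
    a = math.radians(deg)
    return (math.cos(a), math.sin(a))

def seg_dist(p, a, b):
    """Euclidean distance from point p to the segment ab."""
    (ax, ay), (bx, by), (px, py) = a, b, p
    vx, vy = bx - ax, by - ay
    t = max(0.0, min(1.0, ((px - ax) * vx + (py - ay) * vy) / (vx * vx + vy * vy)))
    return math.hypot(px - (ax + t * vx), py - (ay + t * vy))

base = [90.0, 210.0, 330.0]   # angles of tau_1, tau_2, tau_3
eps, half = 20.0, 10.0        # angular offsets playing eps and eps/2
dists = []
for i in range(3):
    t, tn = base[i], base[(i + 1) % 3]
    seg = (pt(t), pt(tn))                                # the chord tau_i tau_{i+1}
    dists.append(seg_dist(pt(t + half), *seg))           # tau_i^+     to the chord
    dists.append(seg_dist(pt(tn - half), *seg))          # tau_{i+1}^- to the chord
    dists.append(math.dist(pt(t - half), pt(t - eps)))   # tau_i^-     to tau_i^--
    dists.append(math.dist(pt(t + half), pt(t + eps)))   # tau_i^+     to tau_i^++
r = min(dists) / math.sqrt(2)  # ensures r*sqrt(2) is below every listed distance
```

The resulting $r$ guarantees that a square of diameter $r\sqrt{2}$ cannot bridge any of the listed distances, which is what properties (A) and (B) require.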
\section{Open questions}\label{sec:open}
It is a natural question whether there is a universal $m$ that works for all convex bodies in Theorem \ref{thm:general_three_col}, like in Theorem \ref{thm:local_three_col}.
This would follow if we could choose $r$ to be a universal constant.
While the $r$ given by our algorithm can depend on $C$, we can apply an appropriate affine transformation to $C$ before choosing $r$; this does not change the hypergraphs that can be realized with the range space determined by the translates of $C$.
To ensure that properties (A) and (B) are satisfied would require further study of the Illumination conjecture.
Our bound for $m$ is quite large, even for the unit disk, both in Theorems \ref{thm:general_three_col} and \ref{thm:local_three_col}, which is mainly due to the fact that $f(3,2)$ given by Theorem \ref{thm:multi_essw_new} is huge.
It has been conjectured that in Theorem \ref{thm:multi_essw_old} the optimal value is $f(3)=3$, and a similarly small number seems realistic for $f(3,2)$ as well.
While Theorem \ref{thm:general_three_col} closed the last question left open for primal hypergraphs realizable by translates of planar bodies, the respective problem is still open in higher dimensions.
While it is not hard to show that some hypergraphs with high chromatic number often used in constructions can be easily realized by unit balls in $\mathbb{R}^5$, we do not know whether the chromatic number is bounded or not in $\mathbb{R}^3$.
From our Union Lemma (Lemma \ref{lem:combine}) it follows that to establish boundedness, it would be enough to find a polychromatic $k$-coloring for pseudohalfspaces, whatever this word means.
| 23,371 |
\section{Introduction}
One of the most significant incentives for recent research on movement assessment is the availability of affordable 3D skeleton recognition devices, such as Kinect, which redefine the target audience of applications that are based on user pose and movement. Addressing this problem is considered a hard task, as it requires paying attention to timings, performances and low-level details.
Over the last decade, different solutions have been proposed for the automatic assessment of movements, based on machine-learning algorithms. In this work, we review and compare these solutions.
We divide the assessment problem into two typical problems: detecting abnormalities in repetitive movements and predicting scores of structured movements. We then list the existing works and their features, as well as the existing public datasets. We elaborate on the secondary problems that take part in the algorithmic flow of typical movement assessment solutions and list the methods and algorithms used by the different works. Finally, we discuss the findings at a high level.
The outline of this review is as follows. In the next chapter, we first present the main types of movement assessment problems, list the features of existing works and list the public datasets used. In addition, we elaborate on the secondary problems and list the methods that were implemented to solve them. The last two chapters provide a discussion and conclusions, respectively.
\section{Movement Assessment}
\label{lbl:movementAssessment}
There are generally two main types of movement assessment solutions. The first type focuses on detecting abnormalities in relatively long, repetitive movements~\cite{paiement2014online,jun2020feature,chaaraoui2015abnormal,nguyen2016skeleton,devanne2016learning,baptista2018deformation,nguyen2018estimating,nguyen2018skeleton,khokhlova2018kinematic}, such as gait, as visualized in Figure~\ref{fig:gait}. The second type, on the other hand, focuses on assessing structured movements~\cite{parisi2016human,su2013personal,eichler20183d,eichler2018non,hakim2019mal,hakim2019malf,masalha2020predicting,dressler2019data,dressler2020towards,hu2014real,capecci2016physical,capecci2018hidden,osgouei2018objective,al2019quantifying,cao2019novel,williams2019assessment,yu2019dynamic,lei2020learning}, such as movements from the Fugl-Meyer Assessment (FMA)~\cite{fugl1975post} or Berg Balance Scale (BBS)~\cite{bbs} medical assessments, as visualized in Figure~\ref{fig:fma_and_bbs}, which usually have clear definitions of starting positions, ending positions, objectives and constraints.
\begin{figure}[]
\centering
\includegraphics[width=0.75\linewidth,keepaspectratio]{images/gait.png}
\caption[]{A walking-up-stairs movement with 3D skeleton joints detected from a Kinect RGB-D video~\cite{paiement2014online}.}
\label{fig:gait}
\end{figure}
\begin{figure}[]
\centering
\includegraphics[width=0.75\linewidth,keepaspectratio]{images/fma_and_bbs.png}
\caption[]{An FMA assessment~\cite{eichler2018non} and a BBS assessment~\cite{masalha2020predicting}.}
\label{fig:fma_and_bbs}
\end{figure}
While most of the works deal with assessing a limited, known-in-advance range of movement types, only a few works try to provide general solutions, which aim to be adaptive to new types of movements. Such works, which were evaluated on multiple types of movements~\cite{parisi2016human,su2013personal,eichler20183d,eichler2018non,hakim2019mal,hakim2019malf,masalha2020predicting,hu2014real,capecci2016physical,capecci2018hidden,al2019quantifying,cao2019novel,williams2019assessment,lei2020learning}, may therefore assume no prior knowledge of a learned movement type, so that they either need to automatically extract its most important properties from the training set, or use learning algorithms that are adaptive in their nature.
A typical movement assessment algorithm will need to address the following fundamental problems: capturing or detecting human skeleton joint positions, geometric normalization, temporal alignment, feature extraction, score prediction and feedback generation. In this chapter, we review the solutions existing works implemented for each of these problems.
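Among these steps, temporal alignment is commonly handled with dynamic time warping (DTW), which matches two performances of the same movement recorded at different speeds. A minimal one-dimensional sketch follows; the sequences are invented joint-angle traces, not data from any surveyed work.

```python
def dtw(a, b, dist=lambda x, y: abs(x - y)):
    """Classic O(len(a)*len(b)) dynamic-time-warping distance between two sequences."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = dist(a[i - 1], b[j - 1]) + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

ref   = [0, 1, 2, 3, 2, 1, 0]                        # a reference repetition
slow  = [0, 0, 1, 1, 2, 2, 3, 3, 2, 2, 1, 1, 0, 0]   # the same movement, twice as slow
other = [3, 0, 3, 0, 3, 0, 3]                        # an unrelated movement
assert dtw(ref, slow) < dtw(ref, other)              # alignment absorbs the speed difference
```

In skeleton data, the scalar `dist` would be replaced by a per-frame distance between normalized joint configurations.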
\subsection{Movement Domains and Solution Features}
Most of the works that deal with structured movements mainly deal with predicting the quality of a performance and sometimes with producing feedback. In contrast, most of the works that deal with repetitive movements, such as gait, focus more on detecting abnormalities and computing scores that are based on similarity to normal movements.
Table~\ref{tbl:features} summarizes the features of each of the works that deal with structured movements. When a solution produces a quality score on a continuous scale, we consider the numerical score feature as existing. When a solution classifies performances into a discrete scale of qualities, we consider the quality classification feature as existing. When a solution produces unbound textual feedback, or presents describable features that can be directly translated into textual feedback, we consider the unbound feedback feature as existing. When a training set consisting only of proper performances is sufficient for a solution to work, we consider the trains-on-proper-movements feature as existing.
\begin{table}
\centering
\resizebox{0.98\linewidth}{!}{%
\begin{tabular}{ |c|c|c|c|c|c|c| }
\hline
\textbf{} & \textbf{Movement} & \textbf{No. Movement} & \textbf{Numerical} & \textbf{Quality} & \textbf{Unbound} & \textbf{Trains on} \\
\textbf{Work} & \textbf{Domain} & \textbf{Types Evaluated} & \textbf{Score} & \textbf{Classification} & \textbf{Feedback} & \textbf{Proper Movements} \\
\hline
\cite{parisi2016human} & Powerlifting & 3 & \checkmark & & \checkmark & \checkmark \\
\hline
\cite{su2013personal} & Rehabilitation & - & \checkmark & \checkmark & & \checkmark \\
\hline
\cite{eichler20183d,eichler2018non} & FMA & 2 & & \checkmark & & \\
\hline
\cite{hakim2019mal,hakim2019malf} & FMA & 3 & \checkmark & \checkmark & \checkmark & \checkmark \\
\hline
\cite{masalha2020predicting} & BBS & 14 & & \checkmark & & \\
\hline
\cite{dressler2019data,dressler2020towards} & Deep Squats & 1 & \checkmark & & - & \checkmark\\
\hline
\cite{hu2014real} & Qigong+others & 4+6 & \checkmark & & & \checkmark\\
\hline
\cite{capecci2016physical,capecci2018hidden} & Physiotherapy & 5 & \checkmark & & & \checkmark\\
\hline
\cite{osgouei2018objective} & Shoulder Abduction & 1 & \checkmark & & & \\
\hline
\cite{al2019quantifying} & General & 3 & & \checkmark & & \checkmark \\
\hline
\cite{cao2019novel} & Brunnstrom Scale & 9 & & \checkmark & & \\
\hline
\cite{williams2019assessment} & Rehabilitation & 2 & \checkmark & & & \\
\hline
\cite{yu2019dynamic} & Tai Chi & 1 & \checkmark & & & \checkmark \\
\hline
\cite{lei2020learning} & Olympic Sports & 9 & \checkmark & & & \\
\hline
\end{tabular}}
\\
\caption[]{Features of works that deal with assessing structured movements. The minus sign represents missing information.}
\label{tbl:features}
\end{table}
\subsection{Public Datasets}
Many of the works used existing public datasets to evaluate their solutions, while others created their own datasets for different assessment tasks. These datasets have either been kept private or made public~\cite{paiement2014online,nguyen2018estimating,nguyen2018skeleton,chaaraoui2015abnormal}. Some of the works used both public and private datasets. Table~\ref{tbl:datasets} lists the public datasets used by existing works.
\begin{table}
\centering
\resizebox{0.9\linewidth}{!}{%
\begin{tabular}{ |c|c|c| }
\hline
\textbf{Dataset} & \textbf{Movement Types} & \textbf{Used by} \\
\hline
SPHERE-staircase 2014,2015~\cite{paiement2014online} & Gait on stairs & \cite{paiement2014online,chaaraoui2015abnormal,devanne2016learning,baptista2018deformation,khokhlova2018kinematic} \\
\hline
DGD: DAI gait dataset~\cite{chaaraoui2015abnormal} & Gait & \cite{chaaraoui2015abnormal,devanne2016learning,khokhlova2018kinematic} \\
\hline
Walking gait dataset~\cite{nguyen2018walking} & Gait, under 9 different conditions & \cite{jun2020feature,nguyen2018estimating,nguyen2018skeleton,khokhlova2018kinematic} \\
\hline
UPCV Gait K2~\cite{kastaniotis2016pose} & Gait - normal walking & \cite{khokhlova2018kinematic} \\
\hline
Eyes. Mocap data~\cite{eyesmocapdata} & Gait captured by a Mocap system & \cite{nguyen2016skeleton} \\
\hline
HMRA~\cite{hmra} & Qigong and others & \cite{hu2014real} \\
\hline
UI-PRMD~\cite{vakanski2018data} & Physical therapy & \cite{williams2019assessment} \\
\hline
MIT Olympic Scoring Dataset~\cite{mitolympic} & Olympic scoring on RGB videos & \cite{lei2020learning} \\
\hline
UNLV Olympic Scoring Dataset~\cite{unlvoplymic} & Olympic scoring on RGB videos & \cite{lei2020learning} \\
\hline
\end{tabular}}
\\
\caption[]{Public movement assessment datasets.}
\label{tbl:datasets}
\end{table}
\subsection{Methods and Algorithms}
\subsubsection{Skeleton Detection.}
The majority of the works use 3D cameras, such as the Kinect v1 or Kinect v2, with the Microsoft Kinect SDK~\cite{shotton2011real} or OpenNI to detect 3D skeletons. Sometimes, marker-based motion-capture (Mocap) systems are used~\cite{nguyen2016skeleton,al2019quantifying,williams2019assessment}. Lei \textit{et al.}~\cite{lei2020learning} used 2D skeletons extracted from RGB videos using OpenPose~\cite{cao2017realtime}, as visualized in Figure~\ref{fig:openpose}.
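OpenPose writes each detection as a flat list of $(x, y, \mathrm{confidence})$ triples in its JSON output. A minimal sketch of grouping such a list into per-joint tuples; the sample values and joint names below are illustrative, not a real detection:

```python
# Sketch: turn an OpenPose-style flat keypoint list into per-joint tuples.
def parse_keypoints(flat):
    """Group a flat [x0, y0, c0, x1, y1, c1, ...] list into
    (x, y, confidence) triples, one per detected joint."""
    assert len(flat) % 3 == 0
    return [tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)]

sample = [120.0, 80.0, 0.9,   # joint 0 (e.g. nose) -- illustrative values
          118.0, 140.0, 0.8]  # joint 1 (e.g. neck)
joints = parse_keypoints(sample)
```

Downstream stages can then filter joints by the confidence value before normalization.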
\begin{figure}[]
\centering
\includegraphics[width=0.2\linewidth,keepaspectratio]{images/openpose.png}
\caption[]{An OpenPose 2D skeleton~\cite{lei2020learning}.}
\label{fig:openpose}
\end{figure}
\subsubsection{Geometric Normalization.}
People perform movements at different distances and angles with respect to the 3D camera that captures their motion. Additionally, different people have different body dimensions. Both issues have to be addressed, either by pre-normalizing the skeleton dimensions and coordinates, as demonstrated in Figure~\ref{fig:geometric}, or by extracting features that are inherently invariant to the camera location and body dimensions, such as joint angles. This step may therefore be considered either an independent step or an integral part of the feature-extraction process. Table~\ref{tbl:geometric} summarizes the geometric normalization methods used by existing works.
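A minimal sketch of the pre-normalization route: translate the skeleton to a root joint and rescale by a body-dimension proxy, here the root-to-neck distance, so that coordinates become comparable across subjects and camera placements. The joint indices are assumptions for illustration.

```python
import numpy as np

def normalize_skeleton(joints, root=0, neck=1):
    """joints: (n_joints, 3) array of 3D positions for one frame.
    Returns the skeleton translated to the root joint and scaled so
    that the root-to-neck distance is 1 (a body-dimension proxy)."""
    joints = np.asarray(joints, dtype=float)
    centered = joints - joints[root]          # translation
    scale = np.linalg.norm(centered[neck])    # body-dimension proxy
    return centered / scale                   # scaling
```

Rotation to a canonical body frame (e.g. by the shoulder and hip joints, as in~\cite{chaaraoui2015abnormal}) would be a further step of the same kind.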
\begin{figure}[]
\centering
\includegraphics[width=0.35\linewidth,keepaspectratio]{images/geometric.png}
\caption[]{A geometric normalization step~\cite{hakim2019mal,hakim2019malf}.}
\label{fig:geometric}
\end{figure}
\begin{table}
\centering
\resizebox{0.98\linewidth}{!}{%
\begin{tabular}{ |c|l| }
\hline
\textbf{Work} & \textbf{Implementation} \\
\hline
\cite{paiement2014online} & Translation, rotation and scaling due to varying heights of the subjects. \\
\hline
\cite{jun2020feature} & Implementing the method from~\cite{paiement2014online}. \\
\hline
\cite{chaaraoui2015abnormal} & Translation, rotation by shoulder and hip joints, scaling.\\
\hline
\cite{nguyen2016skeleton} & Using features that are invariant to camera location and angle. \\
\hline
\cite{devanne2016learning} & - \\
\hline
\cite{baptista2018deformation} & Projection on the main direction of the motion variation. \\
\hline
\cite{nguyen2018estimating,nguyen2018skeleton} & Scaling the coordinates to the range between 0 and 1. \\
\hline
\cite{khokhlova2018kinematic} & Using features that are invariant to camera location and angle. \\
\hline
\cite{parisi2016human} & Translation. \\
\hline
\cite{su2013personal} & Geometric calibration as a system initialization step, before capturing skeleton videos. \\
\hline
\cite{eichler20183d,eichler2018non} & Using features that are invariant to camera location and angle. \\
\hline
\cite{hakim2019mal,hakim2019malf} & Projection on spine-shoulders plane, translation and equalizing skeleton edge lengths. \\
\hline
\cite{masalha2020predicting} & Using features that are invariant to camera location and angle. \\
\hline
\cite{dressler2019data,dressler2020towards} & Using features that are invariant to camera location and angle. \\
\hline
\cite{hu2014real} & Using features that are invariant to camera location and angle. \\
\hline
\cite{capecci2016physical,capecci2018hidden} & Using features that are invariant to camera location and angle. \\
\hline
\cite{osgouei2018objective} & Using features that are invariant to camera location and angle. \\
\hline
\cite{al2019quantifying} & Using features that are invariant to camera location and angle. \\
\hline
\cite{cao2019novel} & - \\
\hline
\cite{williams2019assessment} & Using features that are invariant to camera location and angle. \\
\hline
\cite{yu2019dynamic} & Projection on arm-leg-based coordinate system. \\
\hline
\cite{lei2020learning} & Scaling of the 2D human body. \\
\hline
\end{tabular}}
\caption[]{Geometric normalization methods.}
\label{tbl:geometric}
\end{table}
\subsubsection{Temporal Alignment.}
In order to produce reliable assessment outputs, a tested movement, which is a temporal sequence of data, usually has to be well aligned in time with the movements it is compared to. For that purpose, most works either use models that inherently deal with sequences, such as HMMs and RNNs, as illustrated in Figure~\ref{fig:hmm}, or use temporal alignment algorithms such as the DTW algorithm or its variants, as illustrated in Figure~\ref{fig:dtw}.
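The classic DTW algorithm can be sketched as a dynamic program over index pairings, returning the minimal cumulative alignment cost between two scalar sequences:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic DTW: minimal cumulative |a[i] - b[j]| cost over all
    monotone alignments of the two sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # best of: diagonal match, step in a, step in b
            D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    return D[n, m]
```

A time-warped copy of a sequence (frames repeated at a different rate) aligns at zero cost, which is exactly what makes DTW useful for comparing performances executed at different speeds.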
\begin{figure}[]
\centering
\includegraphics[width=0.45\linewidth,keepaspectratio]{images/hmm.png}
\caption[]{A Hidden Markov Model (HMM), which defines states, observations and probabilities of state transitions and observations~\cite{nguyen2016skeleton}.}
\label{fig:hmm}
\end{figure}
\begin{figure}[]
\centering
\includegraphics[width=0.65\linewidth,keepaspectratio]{images/dtw.png}
\caption[]{Dynamic Time Warping (DTW) for alignment of two series of scalars, by matching pairs of indices~\cite{simpledtw}.}
\label{fig:dtw}
\end{figure}
Hakim and Shimshoni~\cite{hakim2019mal,hakim2019malf} introduced a novel warping algorithm, based on detecting temporal points-of-interest (PoIs) and linearly warping the sequences between them, as illustrated in Figure~\ref{fig:warp}. Dressler \textit{et al.}~\cite{dressler2019data,dressler2020towards} introduced a novel DTW variant with skips, similar to that of Hu \textit{et al.}~\cite{hu2014real}. Other novel approaches were introduced by Devanne \textit{et al.}~\cite{devanne2016learning}, Baptista \textit{et al.}~\cite{baptista2018deformation} and Yu and Xiong~\cite{yu2019dynamic}.
Another less mentioned algorithm is the Correlation Optimized Warping
(COW) algorithm~\cite{tomasi2004correlation}.
Palma \textit{et al.}~\cite{palma2016hmm} and Hagelb\"{a}ck \textit{et al.}~\cite{hagelback2019variants} elaborated more on the topic of temporal alignment algorithms in the context of movement assessment. Table~\ref{tbl:temporal} summarizes the alignment methods used by existing works.
\begin{figure}[]
\centering
\includegraphics[width=0.8\linewidth,keepaspectratio]{images/warp.png}
\caption[]{A continuous warping by scaling between detected pairs of temporal points-of-interest~\cite{hakim2019mal,hakim2019malf}.}
\label{fig:warp}
\end{figure}
\begin{table}
\centering
\resizebox{0.8\linewidth}{!}{%
\begin{tabular}{ |c|l| }
\hline
\textbf{Work} & \textbf{Implementation} \\
\hline
\cite{paiement2014online} & Inherently solved by the choice to use an HMM statistical model. \\
\hline
\cite{jun2020feature} & Inherently solved by the choice to use an RNN Autoencoder. \\
\hline
\cite{chaaraoui2015abnormal} & Discrete warping using the Dynamic Time Warping (DTW) algorithm. \\
\hline
\cite{nguyen2016skeleton} & Inherently solved by the choice to use an HMM statistical model. \\
\hline
\cite{devanne2016learning} & Riemannian shape analysis of legs shape evolution within a sliding window. \\
\hline
\cite{baptista2018deformation} & Key-point detection with deformation-based curve alignment~\cite{demisse2017deformation}. \\
\hline
\cite{nguyen2018estimating,nguyen2018skeleton} & Inherently solved by the choice to use a recurrent neural network.\\
\hline
\cite{khokhlova2018kinematic} & - \\
\hline
\cite{parisi2016human} & Inherently solved by the choice to use a recurrent neural network. \\
\hline
\cite{su2013personal} & Discrete warping using the Dynamic Time Warping (DTW) algorithm. \\
\hline
\cite{eichler20183d,eichler2018non} & - \\
\hline
\cite{hakim2019mal,hakim2019malf} & Detecting mutual temporal PoIs and continuously warping between them. \\
\hline
\cite{masalha2020predicting} & - \\
\hline
\cite{dressler2019data,dressler2020towards} & A novel DTW variant, with skips. \\
\hline
\cite{hu2014real} & A novel DTW variant with tolerance to editing. \\
\hline
\cite{capecci2016physical,capecci2018hidden} & DTW and Hidden Semi-Markov Models (HSMM). \\
\hline
\cite{osgouei2018objective} & DTW and HMM. \\
\hline
\cite{al2019quantifying} & - \\
\hline
\cite{cao2019novel} & Inherently solved by the choice to use a recurrent neural network. \\
\hline
\cite{williams2019assessment} & - \\
\hline
\cite{yu2019dynamic} & A novel DTW variant that minimizes angles between pairs of vectors. \\
\hline
\cite{lei2020learning} & - \\
\hline
\end{tabular}}
\caption[]{Temporal alignment methods.}
\label{tbl:temporal}
\end{table}
\subsubsection{Feature Extraction.}
The assessment of different types of movements requires paying attention to different details, which may include joint angles, pairwise joint distances, joint positions, joint velocities and event timings. Many of the feature extraction methods are invariant to the subject's skeleton scale and to the camera location and angle, as illustrated in Figure~\ref{fig:feature}, while the others are usually preceded by a geometric normalization step. In recent years, some works have used deep features, which are automatically learned but opaque, rather than explainable handcrafted features.
It is notable that while some works were designed for specific movement domains and exploited that prior knowledge to choose their features, other works were designed to be more versatile and adaptive to many movement domains, and therefore used general features. Table~\ref{tbl:feature} summarizes the feature extraction methods used by existing works.
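One common handcrafted, view-invariant feature listed in Table~\ref{tbl:feature} is the angle at a joint, computed from three 3D positions. The particular joint triple (e.g. hip--knee--ankle for knee flexion) is an assumption for illustration; being an angle, the feature is invariant to camera pose and to uniform body scale.

```python
import numpy as np

def joint_angle(p_prev, p_joint, p_next):
    """Angle (radians) at p_joint formed by the segments to p_prev
    and p_next; view- and scale-invariant by construction."""
    u = np.asarray(p_prev, float) - np.asarray(p_joint, float)
    v = np.asarray(p_next, float) - np.asarray(p_joint, float)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # clip guards against rounding slightly outside [-1, 1]
    return np.arccos(np.clip(cos, -1.0, 1.0))
```

A per-frame vector of such angles over selected joints yields a feature sequence ready for the alignment and scoring stages.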
\begin{figure}[]
\centering
\includegraphics[width=0.6\linewidth,keepaspectratio]{images/features.png}
\caption[]{Angles as extracted skeleton features, which are invariant to the camera location and to body-dimension differences~\cite{nguyen2016skeleton}.}
\label{fig:feature}
\end{figure}
\begin{table}
\centering
\resizebox{0.98\linewidth}{!}{%
\begin{tabular}{ |c|l| }
\hline
\textbf{Work} & \textbf{Implementation} \\
\hline
\cite{paiement2014online} & Applying Diffusion Maps~\cite{coifman2006diffusion} on the normalized 3D skeleton joint positions. \\
\hline
\cite{jun2020feature} & Deep features learned by training RNN Autoencoders. \\
\hline
\cite{chaaraoui2015abnormal} & Joint Motion History (JMH): spatio-temporal joint 3D positions. \\
\hline
\cite{nguyen2016skeleton} & Lower-body joint angles and the angle between the two feet. \\
\hline
\cite{devanne2016learning} & Square-root-velocity function (SRVF)~\cite{joshi2007novel} on temporal sequences of joint positions. \\
\hline
\cite{baptista2018deformation} & Distances between the projections of the two knees on the movement direction. \\
\hline
\cite{nguyen2018estimating,nguyen2018skeleton} & Deep features learned by Autoencoders. \\
\hline
\cite{khokhlova2018kinematic} & Covariance matrices of hip and knee flexion angles. \\
\hline
\cite{parisi2016human} & 13 joint 3D positions and velocities. \\
\hline
\cite{su2013personal} & Joint 3D positions and velocities. \\
\hline
\cite{eichler20183d,eichler2018non} & Joint angles, distances and heights from the ground. \\
\hline
\cite{hakim2019mal,hakim2019malf} & Joint 3D positions and velocities, distances and edge angles. Sequence timings. \\
\hline
\cite{masalha2020predicting} & Relative joint positions, joint distances, angles and height of joints from the ground. \\
\hline
\cite{dressler2019data,dressler2020towards} & Joint positions and NASM features (a list of selected skeleton angles). \\
\hline
\cite{hu2014real} & Torso direction and joint relative position represented by elevation and azimuth. \\
\hline
\cite{capecci2016physical,capecci2018hidden} & Selected features varying between movement types. \\
\hline
\cite{osgouei2018objective} & Shoulder and arm angles. \\
\hline
\cite{al2019quantifying} & Autoencoder embeddings of manually-extracted attributes. \\
\hline
\cite{cao2019novel} & Raw skeleton 3D data. \\
\hline
\cite{williams2019assessment} & GMM encoding of Autoencoder dimensionality-reduced joint angle data. \\
\hline
\cite{yu2019dynamic} & Angles of selected joints. \\
\hline
\cite{lei2020learning} & Self-similarity descriptors of joint trajectories and a joint displacement sequence.\\
\hline
\end{tabular}}
\caption[]{Feature extraction methods.}
\label{tbl:feature}
\end{table}
\subsubsection{Score Prediction.}
The prediction of an assessment score refers to one or more of the following cases:
\begin{enumerate}
\item Classifying a performance into a class from a predefined set of discrete quality classes.
\item Performing a regression that will map a performance into a number on a predefined continuous scale.
\item Producing scores that reflect the similarity between a given model and a performance, unbound to ground truth or predefined scales.
\end{enumerate}
\noindent The first two types of scoring capabilities are mainly essential for formal assessments, such as medical assessments or Olympic performance judgments. The third type of scoring is mainly useful for comparing performances, whether of a single subject whose progress is monitored over time or of different subjects who compete.
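The third scoring style can be sketched as a similarity measure normalized onto a fixed range. The specific normalization against a known worst performance is an illustrative assumption in the spirit of~\cite{osgouei2018objective}, which divides the difference from the proper performance by the difference between the worst and proper performances:

```python
def normalized_score(dist_test, dist_worst):
    """Map an unbound distance to [0, 1]: 1.0 for a perfect match,
    0.0 at (or beyond) the worst known performance's distance."""
    if dist_worst <= 0:
        raise ValueError("worst-case distance must be positive")
    return max(0.0, 1.0 - dist_test / dist_worst)
```

Here `dist_test` could be, for instance, a DTW distance between the test and model feature sequences.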
Table~\ref{tbl:score} lists the algorithms used to produce quality scores. It is notable that many works did not implement score prediction at all, since it was irrelevant for them: they only addressed binary normal/abnormal classification.
\begin{table}
\centering
\resizebox{0.98\linewidth}{!}{%
\begin{tabular}{ |c|l| }
\hline
\textbf{Work} & \textbf{Implementation} \\
\hline
\cite{paiement2014online} & Pose and dynamics log likelihoods. \\
\hline
\cite{jun2020feature} & - \\
\hline
\cite{chaaraoui2015abnormal} & - \\
\hline
\cite{nguyen2016skeleton} & - \\
\hline
\cite{devanne2016learning} & Mean log-probability over the segments of the test sequence. \\
\hline
\cite{baptista2018deformation} & Distance between time-aligned feature sequences with reflection of time-variations. \\
\hline
\cite{nguyen2018estimating,nguyen2018skeleton} & - \\
\hline
\cite{khokhlova2018kinematic} & - \\
\hline
\cite{parisi2016human} & Difference between actual and RNN-predicted next frames. \\
\hline
\cite{su2013personal} & Handcrafted classification using Fuzzy Logic~\cite{zadeh1965fuzzy}. \\
\hline
\cite{eichler20183d,eichler2018non} & SVM, Decision Tree and Random Forest quality classification using handcrafted features. \\
\hline
\cite{hakim2019mal,hakim2019malf} & Thresholded weighted sum of normalized, time-filtered active/inactive joint and timing scores. \\
\hline
\cite{masalha2020predicting} & SVM and Random Forest quality classification using handcrafted features. \\
\hline
\cite{dressler2019data,dressler2020towards} & Weighted sum of selected feature differences. \\
\hline
\cite{hu2014real} & Average of frame cross-correlations. \\
\hline
\cite{capecci2016physical,capecci2018hidden} & Normalized log-likelihoods or DTW distances. \\
\hline
\cite{osgouei2018objective} & Difference from proper performance divided by difference between worst and proper performances. \\
\hline
\cite{al2019quantifying} & Classification using One-Class SVM. \\
\hline
\cite{cao2019novel} & Classification using a hybrid LSTM-CNN model. \\
\hline
\cite{williams2019assessment} & Normalized log-likelihoods. \\
\hline
\cite{yu2019dynamic} & DTW similarity. \\
\hline
\cite{lei2020learning} & Regression based on high-level features combined with joint trajectories and displacements. \\
\hline
\end{tabular}}
\caption[]{Score prediction methods.}
\label{tbl:score}
\end{table}
\subsubsection{Feedback Generation.}
There are two main types of feedback that can be generated: bound and unbound. Feedback is bound when it can only consist of predefined mistakes or abnormalities that can be detected. Feedback is unbound when it is generated natural-language text that can describe any type of mistake, deviation or abnormality. Generating unbound feedback usually requires describable low-level features, so that when a performance is not proper, the most significant features that reduced the score can be indicated and translated into natural language, allowing the user to learn from the feedback how to improve their next performance. Such feedback may include temporal sequences that deviate similarly, as visualized in Figure~\ref{fig:ParameterTimeSegmentation}.
Table~\ref{tbl:feedback} summarizes the types of feedback and the generation methods used by the works. It is notable that: 1) most of the works do not generate feedback; 2) no work produces feedback without also predicting quality scores, for the same reason of focusing only on binary detection of abnormalities.
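A minimal sketch of unbound feedback generation from describable features: report the features whose deviation from the reference exceeds a threshold, translated into short natural-language messages. The feature names, threshold and message template are illustrative assumptions.

```python
def generate_feedback(deviations, threshold=0.25):
    """deviations: dict mapping a describable feature name to its
    signed relative deviation from the reference performance.
    Returns messages for the worst offenders, largest deviation first."""
    messages = []
    for name, dev in sorted(deviations.items(), key=lambda kv: -abs(kv[1])):
        if abs(dev) > threshold:
            direction = "too high" if dev > 0 else "too low"
            messages.append(f"{name} was {direction} ({abs(dev):.0%} off)")
    return messages or ["performance looks proper"]
```

This is exactly why explainable handcrafted features matter for feedback: a deep embedding dimension has no such natural-language translation.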
\begin{figure}[]
\centering
\includegraphics[width=0.75\linewidth,keepaspectratio]{images/segmentation.png}
\caption[]{Temporal segmentation of parameter deviations for feedback generation~\cite{hakim2019mal,hakim2019malf}.}
\label{fig:ParameterTimeSegmentation}
\end{figure}
\begin{table}
\centering
\resizebox{0.9\linewidth}{!}{%
\begin{tabular}{ |c|l| }
\hline
\textbf{Work} & \textbf{Implementation} \\
\hline
\cite{paiement2014online} & - \\
\hline
\cite{jun2020feature} & - \\
\hline
\cite{chaaraoui2015abnormal} & - \\
\hline
\cite{nguyen2016skeleton} & - \\
\hline
\cite{devanne2016learning} & Visualizing the deviations of the body parts. \\
\hline
\cite{baptista2018deformation} & - \\
\hline
\cite{nguyen2018estimating,nguyen2018skeleton} & - \\
\hline
\cite{khokhlova2018kinematic} & - \\
\hline
\cite{parisi2016human} & Sequences of parameter deviations and detection of predefined typical mistakes. \\
\hline
\cite{su2013personal} & Three quality classifications indicating trajectory similarity and right speed. \\
\hline
\cite{eichler20183d,eichler2018non} & - \\
\hline
\cite{hakim2019mal,hakim2019malf} & Translation of worst class-segmented parameter temporal deviations into text. \\
\hline
\cite{masalha2020predicting} & - \\
\hline
\cite{dressler2019data,dressler2020towards} & Indication of weak links according to angle differences. \\
\hline
\cite{hu2014real} & - \\
\hline
\cite{capecci2016physical,capecci2018hidden} & - \\
\hline
\cite{osgouei2018objective} & - \\
\hline
\cite{al2019quantifying} & - \\
\hline
\cite{cao2019novel} & - \\
\hline
\cite{williams2019assessment} & - \\
\hline
\cite{yu2019dynamic} & - \\
\hline
\cite{lei2020learning} & - \\
\hline
\end{tabular}}
\caption[]{Feedback generation methods.}
\label{tbl:feedback}
\end{table}
\section{Discussion}
From the reviewed works, we can learn that a typical movement assessment solution may deal with detecting abnormal events or with predicting quality scores, using classification, regression or the computation of a normalized similarity measure. The task of detecting abnormal events is usually associated with repetitive movements, while the task of predicting scores is usually associated with structured movements.
We can learn that while public skeleton datasets exist and are used by some of the works, most of the works use private datasets that were acquired for the sake of a specific work. It is notable that while many novelties are proposed in the different works, the absence of common datasets and evaluation metrics leads to different works dealing with different problems, evaluating themselves on different datasets of different movement domains, using different metrics.
It is notable that temporal alignment is a key problem in movement assessment. From the reviewed works, we can learn that around half of the works base their temporal alignment solutions on models designated for sequential inputs, such as Hidden Markov Models and recurrent neural networks, while the others use either the Dynamic Time Warping algorithm, sometimes with novel improvements, or other novel warping and alignment approaches.
We can learn that while a few works use features that were automatically learned by neural networks, most of the works make use of handcrafted skeleton features. In many of those works, the used features are invariant to the camera location and angle and to the body dimensions of the performing subjects. Other works that make use of handcrafted features usually have to apply a geometric normalization step before continuing to the next steps. It is worth mentioning that while some of the works were designed to deal with a specific type of movement, other works were designed to be adaptive and deal with many types of movements, a choice that is usually clearly reflected in the feature extraction step.
We can learn that a quality score is produced by most of the works. While works that deal with medical assessments mainly focus on classification into predefined discrete scoring scales, other works predict scores on continuous scales. Such scores are rarely learned as a regression problem and are usually based on a normalized similarity measure. Finally, we can learn that only a few works deal with producing feedback, which can be bound or unbound.
In the future, the formation of a large, general public dataset and a common evaluation metric may help define the state-of-the-art and boost the research on the topic of movement assessment. In addition, the improvement of mobile-device cameras, as well as computer vision algorithms that detect skeletons in RGB-D or even RGB videos, may raise the interest in researching this topic.
\section{Conclusions}
We have provided a review of the existing works in the domain of movement assessment from skeletal data, which gives a high-level picture of the problems addressed and the approaches implemented by existing solutions.
We divided the assessment tasks into two main categories: detecting abnormalities in long, repetitive movements and scoring structured movements, sometimes while generating textual feedback. We discussed the objectives and challenges of the assessment task and listed the ways each work addressed them, including skeleton joint detection, geometric normalization, temporal alignment, feature extraction, score prediction and feedback generation. We listed the existing public datasets and the evaluated movement domains. Finally, we provided a high-level discussion. We hope that this review will provide a good starting point for new researchers.
\bibliographystyle{splncs}
\section{Introduction}
\label{sec: intro}
Gauge models with anomaly are interesting from several points of view.
First, there is the problem of consistent quantization for these models.
Due to the anomaly, some constraints change their nature after quantization:
instead of being first-class constraints, they turn into second-class
ones. A consistent canonical quantization scheme clearly should take
such a change into account \cite{jack85}-\cite{sarad91}.
Next is the problem of relativistic invariance. It is known that in the
physical sector, where local gauge invariance holds, relativistic
invariance is broken for some anomalous models, namely the chiral
Schwinger model (CSM) and chiral $QCD_2$ \cite{niemi86}-\cite{sarad96}.
For both models the term breaking the Poincar\'{e} algebra commutation
relations can be constructed explicitly \cite{sarad96}.
In the present paper we address another aspect of anomalous
models: the Berry phase and its connection to the anomaly. The common topological
nature of the Berry phase, or more generally of quantum holonomy, and of gauge
anomalies was noted in \cite{alva85},\cite{niemi85}. The former was shown
to be crucial in the hamiltonian interpretation of anomalies.
We consider a general version of the CSM with a ${\rm U}(1)$ gauge field
coupled with different charges to the two chiral components of a fermionic
field. The non-anomalous Schwinger model (SM), where these charges are
equal, is a special case of the generalized CSM. This will allow us
to see any distinction between the models with and without anomaly.
We suppose that space is a circle of length ${\rm L}$,
$-\frac{\rm L}{2} \leq x < \frac{\rm L}{2}$, so space-time manifold
is a cylinder ${\rm S}^1 \otimes {\rm R}^1$. We work in the temporal
gauge $A_0=0$ and use the system of units where $c=1$. Only matter
fields are quantized, while $A_1$ is handled as a classical
background field. Our aim is to calculate the Berry phase and the
corresponding ${\rm U}(1)$ connection and curvature for the fermionic
Fock vacuum as well as for many particle states constructed over the
vacuum and to show explicitly a connection between the nonvanishing
vacuum Berry phase and anomaly.
Our paper is organized as follows. In Sect.~\ref{sec: quant}, we
apply first and second quantization to the matter fields and obtain
the second quantized fermionic Hamiltonian. We define the Fock
vacuum and construct many particle Fock states over the vacuum. We
use a particle-hole interpretation for these states.
In Sect.~\ref{sec: berry}, we first derive a general formula for
the Berry phase and then calculate it for the vacuum and many
particle states. We show that for all Fock states the Berry phase
vanishes in the case of models without anomaly. We discuss a connection
between the nonvanishing vacuum Berry phase, anomaly and effective
action of the model.
Our conclusions are in Sect.~\ref{sec: con}.
\newpage
\section{Quantization of matter fields}
\label{sec: quant}
The Lagrangian density of the generalized CSM is
\begin{equation}
{\cal L} = - {\frac{1}{4}} {\rm F}_{\mu \nu} {\rm F}^{\mu \nu} +
\bar{\psi} i {\hbar} {\gamma}^{\mu} {\partial}_{\mu} \psi +
e_{+} {\hbar} \bar{\psi}_{+} {\gamma}^{\mu} {\psi_{+}} A_{\mu} +
e_{-} {\hbar} \bar{\psi}_{-} {\gamma}^{\mu} {\psi_{-}} A_{\mu} ,
\label{eq: odin}
\end{equation}
where ${\rm F}_{\mu \nu}= \partial_{\mu} A_{\nu} - \partial_{\nu} A_{\mu}$ ,
$(\mu, \nu) = \overline{0,1}$ , $\gamma^{0}={\sigma}_1$,
${\gamma}^{1}=-i{\sigma}_2$, ${\gamma}^0 {\gamma}^1={\gamma}^5=
{\sigma}_3$, ${\sigma}_i (i=\overline{1,3})$ are Pauli matrices.
The field $\psi$ is a $2$--component Dirac spinor, $\bar{\psi} =
\psi^{\dagger} \gamma^0$ and $\psi_{\pm}=\frac{1}{2} (1 \pm \gamma^5)
\psi$.
In the temporal gauge $A_0=0$, the Hamiltonian density is
\begin{equation}
{\cal H} = \frac{1}{2}{\rm E}^2 + {\cal H}_{+} + {\cal H}_{-},
\label{eq: dva}
\end{equation}
with ${\rm E}$ momentum canonically conjugate to $A_1$, and
\[
{\cal H}_{\pm} \equiv \hbar \psi_{\pm}^{\dagger} d_{\pm} \psi_{\pm} =
\mp \hbar \psi_{\pm}^{\dagger}(i{\partial}_{1}+e_{\pm}A_1)\psi_{\pm}.
\]
On the circle boundary conditions for the fields must be specified.
We impose the periodic ones
\begin{eqnarray}
{A_1} (- \frac{\rm L}{2}) & = & {A_1} (\frac{\rm L}{2}) \nonumber \\
{\psi_{\pm}} (- \frac{\rm L}{2}) & = & {\psi_{\pm}} (\frac{\rm L}{2}).
\label{eq: tri}
\end{eqnarray}
The Lagrangian and Hamiltonian densities
are invariant under local time-independent gauge
transformations
\begin{eqnarray*}
A_1 & \rightarrow & A_1 + {\partial}_{1} \lambda,\\
\psi_{\pm} & \rightarrow & \exp\{ie_{\pm} \lambda\} \psi_{\pm},
\end{eqnarray*}
$\lambda$ being a gauge function.
For arbitrary $e_{+},e_{-}$, the gauge transformations do not respect
the boundary conditions ~\ref{eq: tri}.
The gauge transformations compatible with the boundary conditions
must be either of the form
\[
\lambda (\frac{\rm L}{2})=\lambda (- \frac{\rm L}{2}) +
{\frac{2\pi}{e_{+}}}n,
\hspace{5 mm}
{\rm n} \in \cal Z,
\]
with $e_{+} \neq 0$ and
\begin{equation}
\frac{e_{-}}{e_{+}} = {\rm N},
\hspace{5 mm}
{\rm N} \in \cal Z,
\label{eq: cet}
\end{equation}
or of the form
\[
\lambda(\frac{\rm L}{2}) = \lambda(-\frac{\rm L}{2}) +
\frac{2\pi}{e_{-}} n ,
\hspace{5 mm}
{\rm n} \in \cal Z,
\]
with $e_{-} \neq 0$ and
\begin{equation}
\frac{e_{+}}{e_{-}} = \bar{\rm N},
\hspace{5 mm}
\bar{\rm N} \in \cal Z.
\label{eq: pet}
\end{equation}
Eqs. ~\ref{eq: cet} and ~\ref{eq: pet} imply a quantization condition
for the charges. Without loss of generality, we choose ~\ref{eq: cet}.
For ${\rm N}=1$, $e_{-}=e_{+}$ and we have the standard Schwinger model.
For ${\rm N}=0$, we get the model in which only the positive chirality
component of the Dirac field is coupled to the gauge field.
We see that the gauge transformations under consideration are divided
into topological classes characterized by the integer $n$. If
$\lambda(\frac{\rm L}{2}) = \lambda(-\frac{\rm L}{2})$, then the
gauge transformation is topologically trivial and belongs to the
$n=0$ class. If $n \neq 0$ it is nontrivial and has winding number $n$.
The eigenfunctions and the eigenvalues of the first quantized
fermionic Hamiltonians are
\[
d_{\pm} \langle x|n;{\pm} \rangle = \pm \varepsilon_{n,{\pm }}
\langle x|n;{\pm } \rangle ,
\]
where
\[
\langle x|n;{\pm } \rangle = \frac{1}{\sqrt {\rm L}}
\exp\{ie_{\pm} \int_{-{\rm L}/2}^{x} dz{A_1}(z) +
i\varepsilon_{n,{\pm}} \cdot x\},
\]
\[
\varepsilon_{n,{\pm }} = \frac{2\pi}{\rm L}
(n - \frac{e_{\pm}b{\rm L}}{2\pi}).
\]
We see that the spectrum of the eigenvalues depends on the zero
mode of the gauge field:
\[
b \equiv \frac{1}{\rm L} \int_{-{\rm L}/2}^{{\rm L}/2} dx
A_1(x,t).
\]
For integer $\frac{e_{+}b{\rm L}}{2\pi}$, the spectrum contains
the zero energy level. As $b$ increases from $0$ to
$\frac{2\pi}{e_{+}{\rm L}}$, the energy levels
$\varepsilon_{n,+}$ decrease by $\frac{2\pi}{\rm L}$, while the levels
$(-\varepsilon_{n,-})$ increase by $\frac{2\pi}{\rm L} {\rm N}$.
Some of the energy levels change sign. However, the spectrum at the
configurations $b=0$ and $b=\frac{2\pi}{e_{+}{\rm L}}$
is the same, namely, the integers, as it must be since these gauge-field
configurations are gauge-equivalent. In what follows, we
will use separately the integer and fractional parts of
$\frac{e_{\pm}b{\rm L}}{2\pi}$, denoting them by
$[\frac{e_{\pm}b{\rm L}}{2\pi}]$ and $\{\frac{e_{\pm}b{\rm L}}{2\pi}\}$,
respectively.
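The spectral flow described above can be checked with a short numerical
sanity test. The sketch below (the units ${\rm L}=2\pi$, $e_{+}=1$ and the
level cutoff are illustrative assumptions) confirms that shifting $b$ by
$\frac{2\pi}{e_{+}{\rm L}}$ lowers every level $\varepsilon_{n,+}$ by
$\frac{2\pi}{\rm L}$ while leaving the spectrum as a whole unchanged:

```python
import numpy as np

# Illustrative units: box length L = 2*pi and charge e_+ = 1 (assumptions).
L, e_plus = 2 * np.pi, 1.0
ns = np.arange(-50, 51)            # finite window of level indices

def eps_plus(n, b):
    # epsilon_{n,+} = (2*pi/L) * (n - e_+ b L / (2*pi))
    return (2 * np.pi / L) * (n - e_plus * b * L / (2 * np.pi))

spec0 = eps_plus(ns, 0.0)                        # b = 0
spec1 = eps_plus(ns, 2 * np.pi / (e_plus * L))   # b shifted by one "flux quantum"

# Each level drops by exactly 2*pi/L ...
assert np.allclose(spec1, spec0 - 2 * np.pi / L)
# ... yet the two spectra coincide as sets (up to the finite cutoff edge),
# as they must for gauge-equivalent configurations.
common = np.intersect1d(np.round(spec0, 10), np.round(spec1, 10))
assert common.size == ns.size - 1
```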
Now we introduce the second quantized right-handed and
left-handed Dirac fields. For the moment, we will assume that $d_{\pm}$
do not have zero eigenvalues. At time $t=0$, in terms of the
eigenfunctions of the first quantized fermionic Hamiltonians the second
quantized ($\zeta$--function regulated) fields have the expansion
\cite{niese86}:
\[
\psi_{+}^s (x) = \sum_{n \in \cal Z} a_n \langle x|n;{+} \rangle
|\lambda \varepsilon_{n,+}|^{-s/2},
\]
\begin{equation}
\psi_{-}^s (x) = \sum_{n \in \cal Z} b_n \langle x|n;{-} \rangle
|\lambda \varepsilon_{n,-}|^{-s/2}.
\label{eq: vosem}
\end{equation}
Here $\lambda$ is an arbitrary constant with dimension of length
which is necessary to make $\lambda \varepsilon_{n,\pm}$ dimensionless,
while $a_n, a_n^{\dagger}$ and $b_n, b_n^{\dagger}$ are respectively
right-handed and left-handed fermionic creation and annihilation
operators which fulfil the anticommutation relations
\[
[a_n , a_m^{\dagger}]_{+} = [b_n , b_m^{\dagger}]_{+} = \delta_{m,n} .
\]
For $\psi_{\pm }^{s} (x)$, the equal time anticommutators are
\begin{equation}
[\psi_{\pm}^{s}(x) , \psi_{\pm}^{\dagger s}(y)]_{+}=\zeta_{\pm} (s,x,y),
\label{eq: devet}
\end{equation}
with all other anticommutators vanishing, where
\[
\zeta_{\pm} (s,x,y) \equiv \sum_{n \in \cal Z} \langle x|n;{\pm} \rangle
\langle n;{\pm}|y \rangle |\lambda \varepsilon_{n,\pm}|^{-s},
\]
$s$ being large and positive. In the limit $s \to 0$, when the regulator
is removed, $\zeta_{\pm}(s=0,x,y) = \delta(x-y)$ and
Eq.~\ref{eq: devet} takes the standard form.
The vacuum state of the second quantized fermionic Hamiltonian
\[
|{\rm vac};A \rangle = |{\rm vac};A;+ \rangle \otimes
|{\rm vac};A;- \rangle
\]
is defined such that all negative energy
levels are filled and the others are empty:
\begin{eqnarray}
a_n|{\rm vac};A;+\rangle =0 & {\rm for} & n>[\frac{e_{+}b{\rm L}}{2\pi}],
\nonumber \\
a_n^{\dagger} |{\rm vac};A;+ \rangle =0 & {\rm for} & n \leq
[\frac{e_{+}b{\rm L}}{2\pi}],
\label{eq: deset}
\end{eqnarray}
and
\begin{eqnarray}
b_n|{\rm vac};A;-\rangle =0 & {\rm for} & n \leq
[\frac{e_{-}b{\rm L}}{2\pi}], \nonumber \\
b_n^{\dagger} |{\rm vac};A;- \rangle =0 & {\rm for} & n >
[\frac{e_{-}b{\rm L}}{2\pi}].
\label{eq: odinodin}
\end{eqnarray}
In other words, in the positive chirality vacuum all the levels
with energy lower than ${\varepsilon}_{[\frac{e_{+}b{\rm L}}
{2\pi}]+1,+}$ and in the negative chirality one all the levels
with energy lower than $(-{\varepsilon}_{[\frac{e_{-}b{\rm L}}
{2\pi}],-})$ are filled:
\begin{eqnarray*}
|{\rm vac}; A;+ \rangle & = & \prod_{n=-\infty}^{[\frac{e_{+}b
{\rm L}}{2\pi}]} a_n^{\dagger} |0;+ \rangle, \\
|{\rm vac}; A;- \rangle & = & \prod_{n=[\frac{e_{-}b{\rm L}}
{2\pi}]+1}^{+\infty} b_n^{\dagger} |0;- \rangle
\end{eqnarray*}
where $|0 \rangle = |0;+ \rangle \otimes |0;- \rangle$ is the state
of ``nothing'' with all the energy levels empty.
The Fermi surfaces, which are defined to lie halfway between the highest
filled and lowest empty levels, are
\[
{\varepsilon}_{\pm}^{\rm F} = \pm \frac{2\pi}{\rm L}
(\frac{1}{2} - \{\frac{e_{\pm}b{\rm L}}{2\pi}\}).
\]
For $e_{+}=e_{-}$, ${\varepsilon}_{+}^{\rm F}=-{\varepsilon}_{-}^{\rm F}$.
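The ``halfway'' property of the Fermi surfaces is easy to verify
numerically. In the sketch below the values of ${\rm L}$, $e_{+}$ and the
zero mode $b$ are arbitrary illustrative choices:

```python
import numpy as np

# Arbitrary illustrative values (assumptions).
L, e = 1.0, 1.0
b = 0.37
x = e * b * L / (2 * np.pi)         # e_+ b L / (2*pi)

def eps(n):
    # epsilon_{n,+} = (2*pi/L) * (n - x)
    return (2 * np.pi / L) * (n - x)

n_top = int(np.floor(x))            # highest filled positive-chirality level
halfway = 0.5 * (eps(n_top) + eps(n_top + 1))
# epsilon_+^F = (2*pi/L) * (1/2 - {x}) from the text
eF = (2 * np.pi / L) * (0.5 - (x - np.floor(x)))
assert np.isclose(halfway, eF)
```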
Next we define the fermionic parts of the second-quantized Hamiltonian as
\[
\hat{\rm H}_{\pm}^s = \int_{-{\rm L}/2}^{{\rm L}/2} dx
\hat{\cal H}_{\pm}^s(x)= \frac{1}{2} \hbar \int_{-{\rm L}/2}^{{\rm L}/2}dx
(\psi_{\pm}^{\dagger s} d_{\pm} \psi_{\pm}^s
- \psi_{\pm}^s d_{\pm}^{\star} \psi_{\pm}^{\dagger s}).
\]
Substituting Eq.~\ref{eq: vosem} into this expression, we get
\begin{equation}
\hat{\rm H}_{\pm} = \hat{\rm H}_{0,\pm} \mp
e_{\pm} b \hbar :\rho_{\pm}(0): + {\hbar} \frac{\rm L}{4\pi}
({\varepsilon}_{\pm}^{\rm F})^2,
\label{eq: hamil}
\end{equation}
where double dots indicate normal ordering with respect to
$|{\rm vac},A \rangle$ ,
\begin{eqnarray*}
\hat{\rm H}_{0,+} & = & \hbar \frac{2 \pi}{\rm L} \lim_{s \to 0}
\{ \sum_{k >[\frac{e_{+}b{\rm L}}{2 \pi}]} k a_k^{\dagger} a_k
|\lambda \varepsilon_{k,+}|^{-s} - \sum_{k \leq [\frac{e_{+}b{\rm L}}
{2 \pi}]} k a_k a_k^{\dagger} |\lambda \varepsilon_{k,+}|^{-s} \},\\
\hat{\rm H}_{0,-} & = & \hbar \frac{2 \pi}{\rm L} \lim_{s \to 0}
\{ \sum_{k>[\frac{e_{-}b{\rm L}}{2 \pi}]} k b_{k} b_{k}^{\dagger}
|\lambda \varepsilon_{k,-}|^{-s} - \sum_{k \leq [\frac{e_{-}b{\rm L}}
{2 \pi}]} k b_{k}^{\dagger} b_{k} |\lambda \varepsilon_{k,-}|^{-s} \}
\end{eqnarray*}
are free fermionic Hamiltonians, and
\begin{eqnarray*}
:\rho_{+} (0): & = & \lim_{s \to 0} \{ \sum_{k >[\frac{e_{+}b{\rm L}}
{2 \pi}]} a_k^{\dagger} a_k |\lambda \varepsilon_{k,+}|^{-s} -
\sum_{k \leq [\frac{e_{+}b{\rm L}}{2 \pi}]} a_k a_k^{\dagger}
|\lambda \varepsilon_{k,+}|^{-s} \}, \\
:\rho_{-} (0): & = & \lim_{s \to 0} \{ \sum_{k \leq [\frac{e_{-}b{\rm L}}
{2 \pi}]} b_{k}^{\dagger} b_{k} |\lambda \varepsilon_{k,-}|^{-s} -
\sum_{k>[\frac{e_{-}b{\rm L}}{2 \pi}]} b_{k} b_{k}^{\dagger}
|\lambda \varepsilon_{k,-}|^{-s} \}
\end{eqnarray*}
are charge operators for the positive and negative chirality fermion
fields respectively. The fermion momentum operators constructed
analogously are
\[
\hat{\rm P}_{\pm} = \hat{\rm H}_{0,\pm}.
\]
The operators $:\hat{\rm H}_{\pm}:$, $:\rho_{\pm}(0):$
and $\hat{\rm P}_{\pm}$ are
well defined when acting on finitely excited states which have only a
finite number of excitations relative to the Fock vacuum.
For the vacuum state,
\[
:\hat{\rm H}_{\pm}:|{\rm vac}; A;\pm \rangle =
:{\rho}_{\pm}(0):|{\rm vac}; A;\pm \rangle =0.
\]
Due to the normal ordering, the energy of the vacuum, which is at the
same time the ground state of the fermionic Hamiltonians, turns out
to be zero (we neglect the infinite energy of the filled
levels below the Fermi surfaces ${\varepsilon}_{\pm}^{\rm F}$).
The vacuum state can also be considered as a state of zero charge.
Any other state of the same charge will have some of the levels above
${\varepsilon}_{+}^{\rm F}$ (${\varepsilon}_{-}^{\rm F}$) occupied
and some levels below ${\varepsilon}_{+}^{\rm F}$ (${\varepsilon}_{-}
^{\rm F}$) unoccupied. It is convenient to use the vacuum state
$|{\rm vac}; A \rangle$ as a reference, describing the removal
of a particle of positive (negative) chirality from one of the levels
below ${\varepsilon}_{+}^{\rm F}$ (${\varepsilon}_{-}^{\rm F}$) as
the creation of a ``hole'' \cite{dirac64},\cite{feyn72}.
Particles in the levels above
${\varepsilon}_{+}^{\rm F}$ (${\varepsilon}_{-}^{\rm F}$) are still
called particles. If a particle of positive (negative) chirality
is excited from the level $m$ below the Fermi surface to the level
$n$ above the Fermi surface, then we say that a hole of positive
chirality with energy $(-{\hbar}{\varepsilon}_{m,+})$ and
momentum $(-{\hbar}\frac{2\pi}{\rm L} m)$ (or of negative chirality
with energy ${\hbar}{\varepsilon}_{m,-}$ and momentum
${\hbar}\frac{2\pi}{\rm L} m$) has been created as well as the
positive chirality particle with energy ${\hbar}{\varepsilon}_{n,+}$
and momentum ${\hbar}\frac{2\pi}{\rm L}n$ (or the negative chirality
one with energy $(-{\hbar}{\varepsilon}_{n,-})$ and momentum
$(-{\hbar}\frac{2\pi}{\rm L}n)$). The operators $a_k$ $(k \leq
[\frac{e_{+}b{\rm L}}{2\pi}])$ and $b_k$ $(k>[\frac{e_{-}b{\rm L}}{2\pi}])$
behave like creation operators for the positive and negative chirality
holes correspondingly.
In the charge operator a hole counts as $-1$, so that, for example,
any state with one particle and one hole as well as the vacuum state
has vanishing charge.
The number of particles and holes of positive and negative chirality
outside the vacuum state is given by the operators
\begin{eqnarray*}
{\rm N}_{+} & = & \lim_{s \to 0} \{ \sum_{k>[\frac{e_{+}b{\rm L}}
{2\pi}]} a_k^{\dagger} a_k + \sum_{k \leq [\frac{e_{+}b{\rm L}}
{2\pi}]} a_k a_k^{\dagger} \} |{\lambda}{\varepsilon}_{k,+}|^{-s}, \\
{\rm N}_{-} & = & \lim_{s \to 0} \{ \sum_{k \leq [\frac{e_{-}b{\rm L}}
{2\pi}]} b_k^{\dagger} b_k + \sum_{k>[\frac{e_{-}b{\rm L}}{2\pi}]}
b_k b_k^{\dagger} \} |{\lambda}{\varepsilon}_{k,-}|^{-s},\\
\end{eqnarray*}
which count both particle and hole as $+1$.
Excited states are constructed by acting with creation operators
on the vacuum. We start with $1$-particle states. Let us define the
states $|m; A;\pm \rangle$ as follows:
$$
|m; A;+ \rangle \equiv \left\{
\begin{array}{cc}
a_m^{\dagger}|{\rm vac}; A;+
\rangle & {\rm for} \hspace{5 mm} m>[\frac{e_{+}b{\rm L}}{2\pi}], \\
a_m |{\rm vac}; A;+
\rangle & {\rm for} \hspace{5 mm} m \leq [\frac{e_{+}b{\rm L}}{2\pi}]
\end{array}
\right.
$$
and
$$
|m; A;- \rangle \equiv
\left\{ \begin{array}{cc}
b_m^{\dagger} |{\rm vac}; A;-
\rangle & {\rm for} \hspace{5 mm} m \leq [\frac{e_{-}b{\rm L}}{2\pi}],\\
b_m |{\rm vac}; A;- \rangle & {\rm for} \hspace{5 mm} m>[\frac{e_{-}b{\rm
L}}{2\pi}]. \end{array}
\right .
$$
The states $|m; A;\pm \rangle$ are orthonormalized,
\[
\langle m; A;\pm |n; A;\pm \rangle = \delta_{mn},
\]
and fulfil the completeness relation
\[
\sum_{m \in \cal Z} |m; A;\pm \rangle \cdot
\langle m; A;\pm| =1.
\]
It is easily checked that
\begin{eqnarray*}
:\hat{\rm H}_{\pm}: |m; A;\pm \rangle & = & {\hbar}{\varepsilon}
_{m,\pm} |m; A;\pm \rangle, \\
\hat{\rm P}_{\pm} |m; A;\pm \rangle & = & {\hbar}\frac{2\pi}{\rm L}
m |m; A;\pm \rangle, \\
:{\rho}_{\pm}(0): |m; A;\pm \rangle & = & \pm |m; A;\pm \rangle
\hspace{5 mm}
{\rm for}
\hspace{5 mm}
m > [\frac{e_{\pm}b{\rm L}}{2\pi}]
\end{eqnarray*}
and
\begin{eqnarray*}
:\hat{\rm H}_{\pm}: |m; A;\pm \rangle & = & - {\hbar}{\varepsilon}
_{m,\pm} |m; A;\pm \rangle, \\
\hat{\rm P}_{\pm} |m; A;\pm \rangle & = & -{\hbar} \frac{2\pi}{\rm L}
m |m; A;\pm \rangle, \\
:{\rho}_{\pm}(0): |m; A;\pm \rangle & = & \mp |m; A;\pm \rangle
\hspace{5 mm}
{\rm for}
\hspace{5 mm}
m \leq [\frac{e_{\pm}b{\rm L}}{2\pi}].
\end{eqnarray*}
We see that $|m; A;+ \rangle$ is a state with one particle of
positive chirality with energy ${\hbar}{\varepsilon}_{m,+}$ and
momentum ${\hbar}\frac{2\pi}{\rm L} m$ for $m>[\frac{e_{+}b{\rm L}}
{2\pi}]$ or a state with one hole of the same chirality with energy
$(-{\hbar}{\varepsilon}_{m,+})$ and momentum $(-\hbar \frac{2\pi}{\rm L}
m)$ for $m \leq [\frac{e_{+}b{\rm L}}{2\pi}]$. The negative chirality
state $|m; A;- \rangle$ is a state with one particle with energy
$(-\hbar {\varepsilon}_{m,-})$ and momentum $(-\hbar \frac{2\pi}{\rm L}
m)$ for $m \leq [\frac{e_{-}b{\rm L}}{2\pi}]$ or a state with one hole
with energy $\hbar {\varepsilon}_{m,-}$ and momentum $\hbar
\frac{2\pi}{\rm L} m$ for $m >[\frac{e_{-}b{\rm L}}{2\pi}]$. In any
case,
\[
{\rm N}_{\pm}|m; A;\pm \rangle = |m; A;\pm \rangle,
\]
that is why $|m; A;\pm \rangle$ are called $1$-particle states.
By applying $n$ creation operators to the vacuum states $|{\rm vac};
A;\pm \rangle$ we can also get $n$-particle states
\[
|m_1;m_2;...;m_n; A;\pm \rangle
\hspace{5 mm}
(m_1 < m_2 < ... <m_n),
\]
which are orthonormalized:
\[
\langle m_1;m_2;...;m_n; A;\pm |\overline{m}_1;\overline{m}_2;
...;\overline{m}_n; A;\pm \rangle =
{\delta}_{m_1 \overline{m}_1} {\delta}_{m_2 \overline{m}_2} ...
{\delta}_{m_n \overline{m}_n}.
\]
The completeness relation is written in the following form
\begin{equation}
\frac{1}{n!} \sum_{m_1 \in \cal Z} ... \sum_{m_n \in \cal Z}
|m_1;m_2;...;m_n; A;\pm \rangle \cdot
\langle m_1;m_2;...;m_n; A;\pm| =1.
\label{eq: polnota}
\end{equation}
Here the range of $m_i$ ($i=\overline{1,n}$) is not restricted by
the condition $(m_1<m_2<...<m_n)$, duplication of states being taken care
of by the $1/n!$ and the normalization. The $1$ on the right-hand side
of Eq.~\ref{eq: polnota} means the unit operator on the space of
$n$-particle states.
The case $n=0$ corresponds to the zero-particle states. They form a
one-dimensional space, all of whose elements are proportional to
the vacuum state.
The multiparticle Hilbert space is a direct sum of an infinite
sequence of the $n$-particle Hilbert spaces. The states of different
numbers of particles are defined to be orthogonal to each other.
The completeness relation in the multiparticle Hilbert space has the
form
\begin{equation}
\sum_{n=0}^{\infty} \frac{1}{n!} \sum_{m_1,m_2,...m_n \in \cal Z}
|m_1;m_2;...;m_n; A;\pm \rangle \cdot
\langle m_1;m_2;...;m_n; A;\pm| = 1,
\label{eq: plete}
\end{equation}
where ``1'' on the right-hand side means the unit operator on the
whole multiparticle space.
For $n$-particle states,
\[
:\hat{\rm H}_{\pm}: |m_1;m_2;...;m_n; A;\pm \rangle =
\hbar \sum_{k=1}^{n} {\varepsilon}_{m_k,\pm} \cdot {\rm sign}
({\varepsilon}_{m_k,\pm}) |m_1;m_2;...;m_n; A;\pm \rangle
\]
and
\[
:{\rho}_{\pm}(0): |m_1;m_2;...;m_n; A;\pm \rangle =
\pm \sum_{k=1}^{n} {\rm sign}({\varepsilon}_{m_k,\pm})
|m_1;m_2;...;m_n; A;\pm \rangle.
\]
\newpage
\section{Calculation of Berry phases}
\label{sec: berry}
In the adiabatic approach \cite{schiff68}-\cite{zwanz}, the
dynamical
variables are divided into two sets, one which we call fast variables
and the other which we call slow variables. In our case, we treat the
fermions as fast variables and the gauge fields as slow variables.
Let ${\cal A}^1$ be a manifold of all static gauge field
configurations ${A_1}(x)$. On ${\cal A}^1$ a time-dependent
gauge field ${A_1}(x,t)$ corresponds to a path and a periodic gauge
field to a closed loop.
We consider the fermionic part of the second-quantized Hamiltonian
$:\hat{\rm H}_{\rm F}:=:\hat{\rm H}_{+}: + :\hat{\rm H}_{-}:$
which depends on $t$ through the background
gauge field $A_1$ and so changes very slowly with time. We consider
next a periodic gauge field ${A_1}(x,t)$ $(0 \leq t <T)$. After a
time $T$ the periodic field ${A_1}(x,t)$ returns to its original
value: ${A_1}(x,0) = {A_1}(x,T)$, so that $:\hat{\rm H}_{\pm}:(0)=
:\hat{\rm H}_{\pm}:(T)$.
At each instant $t$ we define eigenstates for $:\hat{\rm H}_{\pm}:
(t)$ by
\[
:\hat{\rm H}_{\pm}:(t) |{\rm F}, A(t);\pm \rangle =
{\varepsilon}_{{\rm F},\pm}(t) |{\rm F}, A(t);\pm \rangle.
\]
The state $|{\rm F}=0, A(t);\pm \rangle \equiv |{\rm vac}; A(t);\pm \rangle$
is the ground state of $:\hat{\rm H}_{\pm}:(t)$,
\[
:\hat{\rm H}_{\pm}:(t) |{\rm vac}; A(t);\pm \rangle =0.
\]
The Fock states $|{\rm F}, A(t) \rangle = |{\rm F},A(t);+ \rangle
\otimes |{\rm F},A(t);- \rangle $
depend on $t$ only through
their implicit dependence on $A_1$. They are assumed to be
orthonormalized,
\[
\langle {\rm F^{\prime}}, A(t)|{\rm F}, A(t) \rangle =
\delta_{{\rm F},{\rm F^{\prime}}},
\]
and nondegenerate.
The time evolution of the wave function of our system (fermions
in a background gauge field) is governed by the Schr\"{o}dinger
equation:
\[
i \hbar \frac{\partial \psi(t)}{\partial t} =
:\hat{\rm H}_{\rm F}:(t) \psi(t) .
\]
For each $t$, this wave function can be expanded in terms of the
``instantaneous'' eigenstates $|{\rm F}, A(t) \rangle$.
Let us choose ${\psi}_{\rm F}(0)=|{\rm F}, A(0) \rangle$, i.e.
the system is initially described by the eigenstate
$|{\rm F},A(0) \rangle$. According to the adiabatic approximation,
if at $t=0$ our system starts in a stationary state $|{\rm F},A(0)
\rangle$ of $:\hat{\rm H}_{\rm F}:(0)$, then it will remain,
at any other instant of time $t$, in the corresponding eigenstate
$|{\rm F}, A(t) \rangle$ of the instantaneous Hamiltonian
$:\hat{\rm H}_{\rm F}:(t)$. In other words, in the adiabatic
approximation transitions to other eigenstates are neglected.
Thus, at some time $t$ later our system will be described up to
a phase by the same Fock state $|{\rm F}, A(t) \rangle$:
\[
\psi_{\rm F}(t) = {\rm C}_{\rm F}(t) \cdot |{\rm F},A(t) \rangle,
\]
where ${\rm C}_{\rm F}(t)$ is an as yet undetermined phase factor.
To find the phase, we insert $\psi_{\rm F}(t)$ into the
Schr\"{o}dinger equation:
\[
\hbar \dot{\rm C}_{\rm F}(t) = -i {\rm C}_{\rm F}(t)
(\varepsilon_{{\rm F},+}(t) + \varepsilon_{{\rm F},-}(t))
- \hbar {\rm C}_{\rm F}(t)
\langle {\rm F},A(t)|\frac{\partial}{\partial t}|{\rm F},A(t) \rangle.
\]
Solving this equation, we get
\[
{\rm C}_{\rm F}(t) = \exp\{- \frac{i}{\hbar} \int_{0}^{t} d{t^{\prime}}
({\varepsilon}_{{\rm F},+}({t^{\prime}}) +
{\varepsilon}_{{\rm F},-}({t^{\prime}}) ) - \int_{0}^{t} d{t^{\prime}}
\langle {\rm F},A({t^{\prime}})|\frac{\partial}{\partial{t^{\prime}}}|
{\rm F},A({t^{\prime}}) \rangle \}.
\]
For $t=T$, $|{\rm F},A(T) \rangle =|{\rm F},A(0) \rangle$ (the
instantaneous eigenfunctions are chosen to be periodic in time)
and
\[
{\psi}_{\rm F}(T) = \exp\{i {\gamma}_{\rm F}^{\rm dyn} +
i {\gamma}_{\rm F}^{\rm Berry} \}\cdot {\psi}_{\rm F}(0),
\]
where
\[ {\gamma}_{\rm F}^{\rm dyn} \equiv - \frac{1}{\hbar}
\int_{0}^{T} dt \cdot ({\varepsilon}_{{\rm F},+}(t)
+ {\varepsilon}_{{\rm F},-}(t)),
\]
while
\begin{equation}
{\gamma}_{\rm F}^{\rm Berry} = {\gamma}_{\rm F,+}^{\rm Berry} +
{\gamma}_{\rm F,-}^{\rm Berry},
\label{eq: summa}
\end{equation}
\[
{\gamma}_{{\rm F},\pm}^{\rm Berry} \equiv \int_{0}^{T} dt \int_{-{\rm L}/2}^
{{\rm L}/2} dx \dot{A_1}(x,t) \langle {\rm F},A(t);\pm|i \frac{\delta}
{\delta A_1(x,t)}|{\rm F},A(t);\pm \rangle
\]
is Berry's phase \cite{berry84}.
If we define the $U(1)$ connections
\begin{equation}
{\cal A}_{{\rm F},\pm}(x,t) \equiv \langle {\rm F},A(t);\pm|i \frac{\delta}
{\delta A_1(x,t)}|{\rm F},A(t);\pm \rangle,
\label{eq: dvatri}
\end{equation}
then
\[
{\gamma}_{{\rm F},\pm}^{\rm Berry} = \int_{0}^{T} dt \int_{-{\rm L}/2}^
{{\rm L}/2} dx \dot{A}_1(x,t) {\cal A}_{{\rm F},\pm}(x,t).
\]
We see that upon parallel transport around a closed loop on
${\cal A}^1$ the Fock states $|{\rm F},A(t);\pm \rangle$ acquire an
additional phase given by the exponential of the integral of
${\cal A}_{{\rm F},\pm}(x,t)$. Whereas the dynamical phase
${\gamma}_{\rm F}^{\rm dyn}$
provides information about the duration of the evolution,
Berry's phase reflects the nontrivial holonomy of the Fock states
on ${\cal A}^1$.
However, a direct computation of the diagonal matrix elements of
$\frac{\delta}{\delta A_1(x,t)}$ in Eq.~\ref{eq: summa} requires a
globally single-valued basis for the eigenstates $|{\rm F},A(t);\pm \rangle$,
which is not available. The connections of Eq.~\ref{eq: dvatri} can be
defined only locally on ${\cal A}^1$, in regions where
$[\frac{e_{+}b{\rm L}}{2 \pi}]$ is fixed. The values of $A_1$ in regions
of different $[\frac{e_{+}b{\rm L}}{2 \pi}]$ are connected by
topologically nontrivial gauge transformations.
If $[\frac{e_{+}b{\rm L}}{2 \pi}]$ changes, then
there is a nontrivial spectral flow, i.e. some of the energy levels
of the first quantized fermionic Hamiltonians cross zero and change
sign. This means that the definition of the Fock vacuum of the second
quantized fermionic Hamiltonian changes (see Eqs.~\ref{eq: deset}
and \ref{eq: odinodin}). Since the creation and annihilation operators
$a^{\dagger}, a$ (and $b^{\dagger}, b$ ) are
continuous functionals of $A_1(x)$, the definition of all excited
Fock states $|{\rm F},A(t) \rangle$ is also discontinuous. The
connections ${\cal A}_{{\rm F},\pm}$ are not therefore well-defined
globally.
Their global characterization necessitates the usual introduction of
transition functions.
Furthermore, ${\cal A}_{{\rm F},\pm}$ are not invariant under
$A$--dependent
redefinitions of the phases of the Fock states: $|{\rm F},A(t);\pm \rangle
\rightarrow \exp\{-i {\chi}_{\pm}[A]\} |{\rm F},A(t);\pm \rangle$, and
transform like a $U(1)$ vector potential
\[
{\cal A}_{{\rm F},\pm} \rightarrow {\cal A}_{{\rm F},\pm} +
\frac{\delta {\chi}_{\pm}[A]}{\delta A_1}.
\]
For these reasons, to calculate ${\gamma}_{\rm F}^{\rm Berry}$ it
is more convenient to compute first the $U(1)$ curvature tensors
\begin{equation}
{\cal F}_{\rm F}^{\pm}(x,y,t) \equiv \frac{\delta}{\delta A_1(x,t)}
{\cal A}_{{\rm F},\pm}(y,t) - \frac{\delta}{\delta A_1(y,t)}
{\cal A}_{{\rm F},\pm}(x,t)
\label{eq: dvacet}
\end{equation}
and then deduce ${\cal A}_{{\rm F},\pm}$.
i) $n$-particle states $(n \geq 3)$.
For $n$-particle states $|m_1;m_2;...;m_n; A;\pm \rangle$
$(m_1<m_2<...<m_n)$, the ${\rm U}(1)$ curvature tensors are
\[
{\cal F}_{m_1,m_2,...,m_n}^{\pm}(x,y,t)
= i \sum_{k=0}^{\infty}
\frac{1}{k!} \sum_{\overline{m}_1, \overline{m}_2, ...,
\overline{m}_k \in \cal Z} \{ \langle m_1;m_2;...;m_n; A;\pm|
\frac{\delta}{\delta {A}_1(x,t)}
\]
\[
|\overline{m}_1;\overline{m}_2;
...;\overline{m}_k; A;\pm \rangle
\cdot \langle \overline{m}_1; \overline{m}_2; ...; \overline{m}_k;
A;\pm| \frac{\delta}{\delta A_1(y,t)}|
m_1;m_2;...;m_n; A;\pm \rangle - (x \leftrightarrow y) \}
\]
where the completeness condition Eq.~\ref{eq: plete} has been inserted.
Since
\[
\langle m_1;m_2;...;m_n; A;\pm |\frac{\delta
:\hat{\rm H}_{\pm}:}{\delta A_1(x,t)}|
\overline{m}_1;\overline{m}_2;...;\overline{m}_k; A;\pm \rangle
= {\hbar} \{ \sum_{i=1}^k {\varepsilon}_{\overline{m}_i,\pm} \cdot
{\rm sign}({\varepsilon}_{\overline{m}_i,\pm})
\]
\[
-\sum_{i=1}^n {\varepsilon}_{m_i,\pm} \cdot
{\rm sign}({\varepsilon}_{m_i,\pm}) \} \cdot
\langle m_1;m_2;...;m_n;A;\pm|\frac{\delta}{\delta A_1(x,t)}|
\overline{m}_1; \overline{m}_2;...;\overline{m}_k; A;\pm \rangle
\]
and $:\hat{\rm H}_{\pm}:$ are quadratic in the positive and negative
chirality creation and annihilation operators, the matrix elements
$\langle m_1;m_2;...;m_n; A;\pm|\frac{\delta}{\delta A_1(x,t)}|
\overline{m}_1;\overline{m}_2;...;\overline{m}_k; A;\pm \rangle$
and so the corresponding curvature tensors
${\cal F}_{m_1,m_2,...,m_n}^{\pm}$ and Berry phases
${\gamma}_{m_1,m_2,...,m_n;\pm}^{\rm Berry}$ vanish for all values
of $m_i (i=\overline{1,n})$ for $n \geq 3$.
ii) $2$-particle states.
For $2$-particle states $|m_1;m_2; A;\pm \rangle$ $(m_1<m_2)$,
only the vacuum state survives in the inserted completeness condition,
so that the curvature tensors ${\cal F}_{m_1m_2}^{\pm}$ take the form
\[
{\cal F}_{m_1m_2}^{\pm}(x,y,t) = \frac{i}{{\hbar}^2} \frac{1}
{({\varepsilon}_{m_1,\pm} \cdot {\rm sign}({\varepsilon}_{m_1,\pm}) +
{\varepsilon}_{m_2,\pm} \cdot {\rm sign}({\varepsilon}_{m_2,\pm}))^2}
\]
\[
\cdot \{ \langle m_1;m_2;A;\pm| \frac{\delta :\hat{\rm H}_{\pm}:}
{\delta A_1(y,t)}|{\rm vac}; A;\pm \rangle
\langle {\rm vac};A;\pm|\frac{\delta :\hat{\rm H}_{\pm}:}
{\delta A_1(x,t)}|m_1;m_2;A;\pm \rangle -
(x \leftrightarrow y) \}.
\]
With $:\hat{\rm H}_{\pm}:(t)$ given by Eq.~\ref{eq: hamil},
${\cal F}_{m_1m_2}^{\pm}$ are evaluated as
$$
{\cal F}_{m_1m_2}^{\pm}= \left \{
\begin{array}{cc}
0 & \mbox{for $m_1,m_2 >[\frac{e_{\pm}b{\rm L}}
{2\pi}] \hspace{3 mm} {\rm and} \hspace{3 mm} m_1,m_2 \leq
[\frac{e_{\pm}b{\rm
L}}{2\pi}]$},\\ \mp \frac{e_{\pm}^2}{2{\pi}^2} \frac{1}{(m_2-m_1)^2}
\sin\{\frac{2\pi}{\rm L}(m_2-m_1)(x-y)\} & \mbox{for
$m_1 \leq [\frac{e_{\pm}b{\rm L}}{2\pi}], m_2>[\frac{e_{\pm}b{\rm L}}
{2\pi}]$},
\end{array}\right.
$$
i.e. the curvatures are nonvanishing only for states with one
particle and one hole.
The corresponding connections are easily deduced as
\[
{\cal A}_{m_1m_2}^{\pm}(x,t) =
-\frac{1}{2} \int_{-{\rm L}/2}^{{\rm L}/2} dy
{\cal F}_{m_1m_2}^{\pm}(x,y,t) A_1(y,t).
\]
The Berry phases become
\[
{\gamma}_{m_1m_2,\pm}^{\rm Berry} = - \frac{1}{2} \int_{0}^{\rm T} dt
\int_{-{\rm L}/2}^{{\rm L}/2} dx \int_{-{\rm L}/2}^{{\rm L}/2} dy
\dot{A}_1(x,t) {\cal F}_{m_1m_2}^{\pm}(x,y,t) A_1(y,t).
\]
If we introduce the Fourier expansion for the gauge field
\[
A_1(x,t) =b(t) + \sum_{\stackrel{p \in \cal Z}{p \neq 0}}
e^{i\frac{2\pi}{\rm L} px} {\alpha}_p(t),
\]
then in terms of the gauge field Fourier components the Berry phases
take the form
\[
{\gamma}_{m_1m_2,\pm}^{\rm Berry} =
\mp \frac{e_{\pm}^2{\rm L}^2}{8{\pi}^2} \frac{1}{(m_2-m_1)^2}
\int_{0}^{\rm T} dt i ({\alpha}_{m_2-m_1} \dot{\alpha}_{m_1-m_2}
- {\alpha}_{m_1-m_2} \dot{\alpha}_{m_2-m_1})
\]
for $m_1 \leq [\frac{e_{\pm}b{\rm L}}{2\pi}],
m_2>[\frac{e_{\pm}b{\rm L}}{2\pi}]$,
vanishing for $m_1,m_2 >[\frac{e_{\pm}b{\rm L}}{2\pi}]$ and
$m_1,m_2 \leq [\frac{e_{\pm}b{\rm L}}{2\pi}]$.
Therefore, a parallel transportation of the states $|m_1;m_2;A;\pm
\rangle$ with two particles or two holes around a closed loop in
$({\alpha}_p,{\alpha}_{-p})$-space $(p>0)$ yields back the same states,
while the states with one particle and one hole are multiplied by
the phases ${\gamma}_{m_1m_2,\pm}^{\rm Berry}$.
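The loop integral appearing in these phases can be evaluated numerically
for a concrete closed path. The sketch below takes a circular loop
${\alpha}_p(t)=\rho e^{-i\omega t}$ with ${\alpha}_{-p}={\alpha}_p^{*}$
(the parametrization and the values of $\rho$, $\omega$ are illustrative
assumptions); for this loop the integral
$\int_0^T dt\, i({\alpha}_p\dot{\alpha}_{-p}-{\alpha}_{-p}\dot{\alpha}_p)$
equals $-4\pi\rho^2$:

```python
import numpy as np

# Circular loop in (alpha_p, alpha_{-p})-space; rho and w are illustrative.
rho, w = 0.3, 1.0
T = 2 * np.pi / w
t = np.linspace(0.0, T, 20001)
a_p = rho * np.exp(-1j * w * t)    # alpha_p(t)
a_m = np.conj(a_p)                 # alpha_{-p}(t) = alpha_p^* for a real field
da_p = np.gradient(a_p, t)
da_m = np.gradient(a_m, t)

def trapz(f, t):
    # simple trapezoidal rule (avoids NumPy version differences)
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t))

loop = trapz(1j * (a_p * da_m - a_m * da_p), t)
# Analytically the loop integral is -4*pi*rho**2 for this parametrization.
assert np.isclose(loop.real, -4 * np.pi * rho**2, atol=1e-4)
assert abs(loop.imag) < 1e-6
```

Multiplying the loop integral by the prefactor
$\mp\frac{e_{\pm}^2{\rm L}^2}{8\pi^2(m_2-m_1)^2}$ then gives the phase for
a particle-hole pair with $m_2-m_1=p$.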
For the Schwinger model when ${\rm N}=1$ and $e_{+}=e_{-}$
as well as for axial electrodynamics when ${\rm N}=-1$ and
$e_{+}=-e_{-}$, the nonvanishing
Berry phases for the positive and negative chirality $2$-particle states
are opposite in sign,
\[
{\gamma}_{m_1m_2,+}^{\rm Berry} = - {\gamma}_{m_1m_2,-}^{\rm Berry},
\]
so that for the states $|m_1;m_2;A \rangle =
|m_1;m_2;A;+ \rangle \otimes |m_1;m_2;A;- \rangle$
the total Berry phase is zero.
iii) $1$-particle states.
For $1$-particle states $|m;A;\pm \rangle$, the ${\rm U}(1)$ curvature
tensors are
\[
{\cal F}_{m}^{\pm}(x,y,t) = i
\sum_{\stackrel{\overline{m} \in \cal Z}{\overline{m} \neq m}}
\frac{1}{{\hbar}^2}
\frac{1}{({\varepsilon}_{\overline{m},\pm} \cdot {\rm sign}
({\varepsilon}_{\overline{m},\pm}) - {\varepsilon}_{m,\pm} \cdot
{\rm sign}({\varepsilon}_{m,\pm}))^2}
\]
\[
\cdot \{ \langle m;A;\pm|
\frac{\delta : \hat{\rm H}_{\pm}:}{\delta A_1(y,t)}
|\overline{m};A;\pm \rangle
\langle \overline{m};A;\pm|
\frac{\delta :\hat{\rm H}_{\pm}:} {\delta A_1(x,t)}
|m;A;\pm \rangle - (x \longleftrightarrow y) \}.
\]
By a direct calculation we easily get
\begin{eqnarray*}
{\cal F}_{m>[\frac{e_{\pm}b{\rm L}}{2\pi}]}^{\pm} & = &
\sum_{\overline{m}=m-[\frac{e_{\pm}b{\rm L}}{2\pi}]}^{\infty}
{\cal F}_{0\overline{m}}^{\pm}, \\
{\cal F}_{m \leq [\frac{e_{\pm}b{\rm L}}{2\pi}]}^{\pm} & = &
\sum_{\overline{m}= [\frac{e_{\pm}b{\rm L}}{2\pi}] - m+1}^{\infty}
{\cal F}_{0\overline{m}}^{\pm},
\end{eqnarray*}
where ${\cal F}_{0\overline{m}}^{\pm}$ are curvature tensors for the
$2$-particle states $|0;\overline{m};A;\pm \rangle$ $(\overline{m}>0)$.
The Berry phases acquired by the states $|m;A;\pm \rangle$ by their
parallel transportation around a closed loop in $({\alpha}_p,
{\alpha}_{-p})$-space $(p>0)$ are
\begin{eqnarray*}
{\gamma}_{\pm}^{\rm Berry}(m>[\frac{e_{\pm}b{\rm L}}{2\pi}]) & = &
\sum_{\overline{m}=m - [\frac{e_{\pm}b{\rm L}}{2\pi}]}^{\infty}
{\gamma}_{0\overline{m};\pm}^{\rm Berry}, \\
{\gamma}_{\pm}^{\rm Berry}(m \leq [\frac{e_{\pm}b{\rm L}}{2\pi}]) & = &
\sum_{\overline{m}=[\frac{e_{\pm}b{\rm L}}{2\pi}] -m+1}^{\infty}
{\gamma}_{0\overline{m};\pm}^{\rm Berry},
\end{eqnarray*}
where ${\gamma}_{0\overline{m};\pm}^{\rm Berry}$ are phases
acquired by the states $|0;\overline{m};A;\pm \rangle$ by the same
transportation.
For the ${\rm N}=\pm 1$ models, the total $1$-particle curvature
tensor ${\cal F}_m ={\cal F}_m^{+} + {\cal F}_m^{-}$ and total Berry
phase ${\gamma}^{\rm Berry} ={\gamma}_{+}^{\rm Berry} +
{\gamma}_{-}^{\rm Berry}$ vanish.
iv) vacuum states.
For the vacuum case, only $2$-particle states contribute to the sum
of the completeness condition, so the vacuum curvature tensors are
\[
{\cal F}_{\rm vac}^{\pm}(x,y,t) = - \frac{1}{2}
\sum_{\overline{m}_1, \overline{m}_2 \in \cal Z}
{\cal F}_{\overline{m}_1 \overline{m}_2}^{\pm}(x,y,t).
\]
Taking the sums, we get
\begin{equation}
{\cal F}_{\rm vac}^{\pm} =
\pm \frac{e_{\pm}^2}{2{\pi}}
( \frac{1}{2} \epsilon(x-y)
- \frac{1}{\rm L} (x-y) ).
\label{eq: dvasem}
\end{equation}
The total vacuum curvature tensor
\[
{\cal F}_{\rm vac} = {\cal F}_{\rm vac}^{+} + {\cal F}_{\rm vac}^{-}=
(1-{\rm N}^2) \frac{e_{+}^2}{2\pi} (\frac{1}{2} \epsilon(x-y) -
\frac{1}{\rm L} (x-y))
\]
vanishes for ${\rm N}=\pm 1$.
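Heuristically, the sawtooth arises because for a fixed difference
$d=m_2-m_1>0$ there are $d$ particle-hole pairs contributing, so the
$1/d^2$ series of the $2$-particle curvatures collapses to
$\sum_{d>0}\sin(2\pi d(x-y)/{\rm L})/d$, whose closed form is
$\pi(\frac{1}{2}\epsilon(x-y)-\frac{1}{\rm L}(x-y))$. The underlying
Fourier identity $\sum_{m\geq 1}\sin(m\theta)/m=(\pi-\theta)/2$ for
$0<\theta<2\pi$ is easy to check numerically (the values of $u$ and the
truncation are illustrative):

```python
import numpy as np

# Check sum_{m>=1} sin(m*theta)/m = (pi - theta)/2 for 0 < theta < 2*pi,
# i.e. (1/pi)*sum = 1/2*eps(u) - u/L with theta = 2*pi*u/L and 0 < u < L.
Lbox, u = 1.0, 0.3                  # illustrative values
theta = 2 * np.pi * u / Lbox
m = np.arange(1, 200001)            # truncated series
partial = np.sum(np.sin(m * theta) / m)
sawtooth = 0.5 - u / Lbox           # (1/2)*eps(u) - u/L for u > 0
assert abs(partial / np.pi - sawtooth) < 1e-4
```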
The corresponding ${\rm U}(1)$ connection is deduced as
\[
{\cal A}_{\rm vac}(x,t) = - \frac{1}{2} \int_{-{\rm L}/2}^{{\rm L}/2}
dy {\cal F}_{\rm vac}(x,y,t) A_1(y,t),
\]
so the total vacuum Berry phase is
\[
{\gamma}_{\rm vac}^{\rm Berry} = - \frac{1}{2} \int_{0}^{T} dt
\int_{-{\rm L}/2}^{{\rm L}/2} dx \int_{-{\rm L}/2}^{{\rm L}/2} dy
\dot{A_1}(x,t) {\cal F}_{\rm vac}(x,y,t) A_1(y,t).
\]
For ${\rm N}=0$ and in the limit ${\rm L} \to \infty$,
when the second term in Eq.~\ref{eq: dvasem} may be neglected,
the $U(1)$ curvature tensor
coincides with that obtained in \cite{niemi86,semen87},
while the Berry phase becomes
\[
{\gamma}_{\rm vac}^{\rm Berry} = \int_{0}^{T} dt
\int_{- \infty}^{\infty} dx {\cal L}_{\rm nonlocal}(x,t),
\]
where
\[
{\cal L}_{\rm nonlocal}(x,t) \equiv - \frac{e_{+}^2}{8 {\pi}^2}
\int_{- \infty}^{\infty}
dy \dot{A_1}(x,t) \epsilon(x-y) A_1(y,t)
\]
is a non-local part of the effective Lagrange density of the CSM
\cite{sarad93}. The effective Lagrange density is a sum of the
ordinary Lagrange density of the CSM and the nonlocal part
${\cal L}_{\rm nonlocal}$. As shown in \cite{sarad93}, the effective
Lagrange density is equivalent to the ordinary one in the sense that
the corresponding preliminary Hamiltonians coincide on the constrained
submanifold ${\rm G} \approx 0$. This equivalence is valid at the
quantum level, too. If we start from the effective Lagrange density
and apply appropriately the Dirac quantization procedure, then we
come to a quantum theory which is exactly the quantum theory
obtained from the ordinary Lagrange density. We get therefore
that the Berry phase is an action and that the CSM can be defined
equivalently by both the effective action with the Berry phase
included and the ordinary one without the Berry phase.
In terms of the gauge field Fourier components, the connection
${\cal A}_{\rm vac}$ is rewritten as
\[
\langle {\rm vac};A(t)|\frac{d}{db(t)}|{\rm vac};A(t)\rangle =0,
\]
\[
\langle {\rm vac};A(t)|\frac{d}{d{\alpha}_{\pm p}(t)}|{\rm vac};A(t)\rangle
\equiv {\cal A}_{{\rm vac};\pm}(p,t)= \pm (1-{\rm N}^2)
\frac{e_{+}^2{\rm L}^2}{8{\pi}^2} \frac{1}{p} {\alpha}_{\mp p},
\]
so the nonvanishing vacuum curvature is
\[
{\cal F}_{\rm vac}(p) \equiv \frac{d}{d{\alpha}_{-p}}
{\cal A}_{{\rm vac};+} - \frac{d}{d{\alpha}_p}
{\cal A}_{{\rm vac};-} =
(1-{\rm N}^2) \frac{e_{+}^2{\rm L}^2}{4{\pi}^2} \frac{1}{p}.
\]
The total vacuum Berry phase becomes
\[
{\gamma}_{\rm vac}^{\rm Berry} = \int_{0}^{\rm T} dt
\sum_{p>0} {\cal F}_{\rm vac}(p) {\alpha}_p \dot{\alpha}_{-p}.
\]
For the ${\rm N} \neq \pm 1$ models where the local gauge symmetry
is known to be realized projectively \cite{sarad91},
the vacuum Berry phase is
non-zero. For ${\rm N}=\pm 1$ when the representation is unitary,
the curvature ${\cal F}_{\rm vac}(p)$ and the vacuum Berry phase
vanish.
The projective representation of the local gauge symmetry is
responsible for the anomaly. In the full quantized theory of the
CSM when the gauge fields are also quantized the physical states
respond to gauge transformations from the zero topological class
with a phase \cite{sarad91}. This phase contributes to the
commutator of the Gauss law generators by a Schwinger term and
produces therefore an anomaly.
A connection of the nonvanishing vacuum Berry phase to the
projective representation can be shown in a more direct way.
Under the topologically trivial gauge transformations,
the gauge field Fourier components
${\alpha}_p, {\alpha}_{-p}$ transform as follows
\begin{eqnarray*}
{\alpha}_p & \stackrel{\tau}{\rightarrow} & {\alpha}_p - ip{\tau}_{-}(p),\\
{\alpha}_{-p} & \stackrel{\tau}{\rightarrow} & {\alpha}_{-p} -ip{\tau}_{+}(p),
\end{eqnarray*}
where ${\tau}_{\pm}(p)$ are smooth gauge parameters.
The nonlocal Lagrangian
\[
{\rm L}_{\rm nonlocal}(t) \equiv \int_{-{\rm L}/2}^{{\rm L}/2} dx
{\cal L}_{\rm nonlocal}(x,t) =
\sum_{p>0} {\cal F}_{\rm vac}(p)
i{\alpha}_{p} \dot{\alpha}_{-p}
\]
changes as
\[
{\rm L}_{\rm nonlocal}(t) \stackrel{\tau}{\rightarrow}
{\rm L}_{\rm nonlocal}(t) - 2{\pi} \frac{d}{dt} {\alpha}_1(A;{\tau}),
\]
where
\[
{\alpha}_1(A;{\tau}) \equiv - \frac{1}{4\pi}
\sum_{p>0} p{\cal F}_{\rm vac}(p) ({\alpha}_{-p} {\tau}_{-}
- {\alpha}_{p} {\tau}_{+})
\]
is just the $1$-cocycle occurring in the projective
representation of the gauge group. This exemplifies the connection
between the nonvanishing vacuum Berry phase and the fact that the local
gauge symmetry is realized projectively.
\newpage
\section{Conclusions}
\label{sec: con}
Let us summarize.
i) We have calculated explicitly the Berry phase and the corresponding
${\rm U}(1)$ connection and curvature for the fermionic vacuum and many
particle Fock states. For the ${\rm N} \neq \pm 1$ models, we find that
the Berry phase is non-zero for the vacuum, $1$-particle and $2$-particle
states with one particle and one hole. For all other many particle states
the Berry phase vanishes. This is caused by the form of the second
quantized fermionic Hamiltonian which is quadratic in the positive
and negative chirality creation and annihilation operators.
ii) For the ${\rm N}= \pm 1$ models without anomaly, i.e. for the SM and
axial electrodynamics, the Berry phases acquired by the negative and
positive chirality parts of the Fock states are opposite in sign
and cancel each other, so that
the total Berry phase for all Fock states is zero.
iii) A connection between the Berry phase and anomaly becomes more
explicit for the vacuum state. We have shown that for our model
the vacuum Berry phase contributes to the effective action: it is
precisely the additional term that distinguishes the effective action
from the ordinary one. Under topologically trivial gauge transformations
the corresponding term in the effective Lagrangian changes by a total
time derivative of the gauge group $1$-cocycle occurring in the projective
representation. This demonstrates an interrelation between the Berry
phase, the anomaly and the effective action.
\newpage
\section{ Introduction }
There has been a strong revival of interest, recently, in the physics
of magnetic vortices in type II and high-temperature superconductors
\cite{reviews}. Most research efforts have been devoted to phenomena
relating to the nature of the mixed phase of a superconductor in some
externally applied magnetic field and supercurrent. Issues connected
with the pinning of the flux lines by defects have been widely studied.
We \cite{ieju}, as well as Ao and Thouless \cite{aoth} and Stephen
\cite{stephen}, have addressed the problem of the quantum dynamics of
vortices in the absence of an external field but in the presence of
an externally driven supercurrent, quantum dissipation and pinning.
This leads to the decay of a supercurrent, or a residual zero-temperature
resistance in the superconductor. Whilst most of the dissipation seems
to be attributable to vortices tunnelling into the sample from the edge, an
interesting novel possibility also explored by us in depth is that of
a residual resistance arising from spontaneous vortex-antivortex pair
creation in the bulk of a thin film. This is the mesoscopic counterpart
of electron-positron pair production in two-dimensional (2D) quantum
electrodynamics (QED) in the presence of static e.m. fields, which in a
superconductor arise from the static and velocity-dependent components of
the Magnus force acting on the vortices. Exploiting this analogy with QED,
a powerful ``relativistic'' quantum field theory approach has been
set up to study vortex nucleation in the 2D geometry in the presence of
quantum dissipation and of pinning potentials. The central result is that
the nucleation rate $\Gamma$ has a strong exponential dependence on the
number current density $J$, given by
\begin{equation}
\Gamma{\propto}\eta^{1/2}\eta_{eff}J^{-1}
\exp\{-\eta_{eff}{\cal E}_{0R}^2/4\pi J^2\}
\label{rate0}
\end{equation}
\noindent
Here $\eta_{eff}$ is an effective viscosity coefficient as renormalised by
the magnetic-like part of the Magnus force, and ${\cal E}_{0R}$ is the rest-
or nucleation-energy of a single vortex as renormalized by screened
Coulomb interactions and (fake) Landau-level corrections. This
exponential dependence would make the vortex nucleation (folded, e.g.,
into the sample's resistance) observable in a rather narrow range of
$J$-values. Thus, normally the superconductor is essentially resistance-free.
However, the high values of $J$ that can be reached in the high-$T_c$
materials bring the observation of pair creation in static fields
within reach for thin films. One particular feature that would uniquely
relate the residual resistance to the phenomenon of spontaneous vortex-pair
creation is the presence of {\em oscillations} in the $J$-dependence of
$\Gamma(J)$ in case a {\em periodic} pinning potential is artificially
created in the film. These oscillations are in fact strictly connected to
the pinning-lattice spacing $d=2\pi/k$ of the periodic potential (we assume
a square lattice), e.g.
\begin{equation}
U({\bf q}(t))=U_0 \sum_{a=1}^2 \left [ 1 - \cos \left ( kq_a(t)
\right ) \right ]
\label{potent}
\end{equation}
\noindent
acting on the nucleating vortex-pairs described by a coordinate ${\bf q}$.
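To illustrate how sharply the rate of Eq. (\ref{rate0}) turns on with the current, here is a rough numerical sketch; all parameter values ($\eta=\eta_{eff}={\cal E}_{0R}=1$, arbitrary units) are illustrative choices of ours, not fitted numbers:

```python
import numpy as np

def rate(J, eta=1.0, eta_eff=1.0, E0R=1.0):
    """Nucleation rate of Eq. (rate0), up to an overall constant prefactor."""
    return (np.sqrt(eta) * eta_eff / J
            * np.exp(-eta_eff * E0R**2 / (4.0 * np.pi * J**2)))

# The exponential makes Gamma(J) negligible except in a narrow J window.
for J in (0.05, 0.1, 0.2, 0.4):
    print(J, rate(J) / rate(0.4))
```

Halving $J$ from $0.4$ to $0.2$ already costs an order of magnitude in the rate; at $J=0.05$ it is suppressed by more than twelve orders, which is the "narrow range of $J$-values" referred to above.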
The problem of quantum dissipation for a particle moving in a periodic
potential has some interesting features in its own right
\cite{schmid,ghm,fizw}. It is characterised by a localization phase
transition driven by dissipation; accordingly, two phases can occur
depending on whether the dissipation coefficient \cite{cale} $\eta$ is
greater (confined phase) or smaller (mobile phase) than a critical
value $\eta_c=k^2/2\pi=2\pi/d^2$. This localization transition is described
by a Kosterlitz-type renormalization group (RG) approach, yet with some
important differences that will be recalled below. We have implemented
the RG approach for the evaluation of the dependence of the spontaneous
nucleation rate of vortex-antivortex pairs on the external parameters for
our own quantum dynamical system. A remnant of the dissipation-driven
phase transition is observed and the pair production rate $\Gamma$ can
be derived in both phases by means of a frequency-space RG procedure
leading to observable current-oscillations if $\eta > \eta_c$.
\section{ RG approach to dissipative localization transition }
First, we briefly recall the RG description of the localization
transition driven by quantum dissipation \cite{fizw}. The effective
action for a particle diffusing in a periodic potential and subject to
quantum dissipation of the Caldeira-Leggett type \cite{cale} is, in
Fourier frequency space:
\begin{equation}
{\cal S}=\int_0^{\tau}{\cal L}({\bf q})=\tau
\sum_n \{ \frac{1}{2}m\omega_n^2+\frac{1}{2}\eta |\omega_n| \}
\bar{q}_a(\omega_n)\bar{q}_a(-\omega_n)+\int_0^{\tau} dt U({\bf q})
\label{action0}
\end{equation}
\noindent
where $m$ is the mass of the quantum particle and $\eta$ the
phenomenological friction coefficient. In the low-frequency limit the
effects of inertia can be neglected and the problem would acquire the same
phenomenology as for the sine-Gordon model (in (0+1)-dimensions),
except for the peculiar $\omega_n$-dependence of the propagator reflecting
the broken time-reversal symmetry of quantum dissipation. When the RG
procedure is applied to Eq. (\ref{action0}) a renormalization of the
potential amplitude $U_0$ occurs, but not of the friction coefficient
$\eta$ since only local operators in the time variable can be generated
within a RG transformation. In terms of the dimensionless parameters
($\Omega$ is a large frequency-cutoff) ${\cal U}=U_0/\Omega$ and
$\alpha=2\pi\eta/k^2$, the RG recursion relations read
\begin{equation}
\frac{d{\cal U}}{d\ell}=\left ( 1-\frac{1}{\alpha} \right ) {\cal U}
+ \cdots, \qquad
\frac{d\alpha}{d\ell}=0
\label{recrel}
\end{equation}
\noindent
with $e^{-\ell}$ the frequency-scale renormalization parameter. These have
the simple solution
\begin{equation}
{\cal U}(\ell)={\cal U}(0)e^{(1-\eta_c/\eta)\ell}, \qquad
\alpha(\ell)=\alpha(0)
\label{rgflow}
\end{equation}
\noindent
displaying the localization transition for $\eta=\eta_c=k^2/2\pi=2\pi/d^2$.
The potential's amplitude vanishes
under a RG change of time scale for $\eta < \eta_c$, but for
$\eta > \eta_c$ it tends to diverge and the RG procedure must be
interrupted. Unlike in the Kosterlitz RG scheme, this cannot be done
unequivocally in the present situation, for there is no true characteristic
correlation time or frequency owing to the fact that one never moves away
from the neighbourhood of the critical point $\eta_c$. An alternative
strategy for the confined phase is to resort to a variational treatment
\cite{fizw}, which dynamically generates a correlation time.
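The recursion relations (\ref{recrel}) are simple enough to integrate numerically and compare against the solution (\ref{rgflow}); a minimal sketch (arbitrary units and parameter values of our choosing):

```python
import numpy as np

def flow_U(U0, eta, eta_c, ell_max, steps=20000):
    """Euler-integrate dU/dl = (1 - eta_c/eta) U; alpha = eta/eta_c does not flow."""
    dl = ell_max / steps
    U = U0
    for _ in range(steps):
        U += (1.0 - eta_c / eta) * U * dl
    return U

eta_c, ell = 1.0, 3.0
for eta in (0.5, 2.0):                      # mobile / confined phase
    U_num = flow_U(1e-2, eta, eta_c, ell)
    U_exact = 1e-2 * np.exp((1.0 - eta_c / eta) * ell)
    print(eta, U_num, U_exact)
```

For $\eta<\eta_c$ the amplitude flows to zero (mobile phase); for $\eta>\eta_c$ it grows and the flow must be interrupted, as discussed above.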
In this procedure the action of Eq. (\ref{action0}) is replaced by a
trial Gaussian form (neglecting inertia)
\begin{equation}
{\cal S}_{tr}=\frac{\eta}{4\pi} \int_0^{\tau} dt \int_{-\infty}^{+\infty} dt'
\left ( \frac{{\bf q}(t)-{\bf q}(t')}{t-t'} \right )^2 + \frac{1}{2} M^2
\int_0^{\tau} dt {\bf q}(t)^2
\label{actiontr}
\end{equation}
\noindent
where $M^2$ is determined by minimising self-consistently the free energy
$F_{tr}+\langle S-S_{tr} \rangle_{tr}$. This leads to the equation
\begin{equation}
M^2=U_0k^2 \exp \left \{ -\frac{k^2}{2\tau} \sum_n \frac{1}{\eta|\omega_n|
+M^2} \right \}
\end{equation}
\noindent
having a solution $M^2{\neq}0$ only in the confined phase ($\eta > \eta_c$),
since introducing the cutoff $\Omega$ in the (continuous) sum over frequency
modes $\omega_n=2{\pi}n/\tau$, the equation for $M^2$ leads to (for
$M^2\rightarrow 0$)
\begin{equation}
M^2=\eta\Omega \left ( \frac{2\pi U_0}{\Omega} \frac{\eta_c}{\eta}
\right )^{\eta/(\eta-\eta_c)}{\equiv}\eta\Omega\mu
\label{mass}
\end{equation}
\noindent
This spontaneously generated ``mass'' interrupts the divergent
renormalization of the periodic potential amplitude $U_0$, which in the
RG limit $\ell{\rightarrow}{\infty}$ tends to
\begin{equation}
U_0(\ell)=U_0 \left ( \frac{e^{-\ell}+\mu}{1+\mu} \right )^{\eta_c/\eta}
{\rightarrow}U_0 \left ( \frac{\mu + 1/n^{*}}
{\mu + 1} \right )^{\eta_c/\eta}
\end{equation}
\noindent
Here, we have put $\Omega=2\pi n^{*}/\tau=n^{*}\omega_1$ and
$\mu=M^2/\Omega\eta$.
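In the continuum-frequency approximation used above, the gap equation takes the closed form $M^2=U_0k^2\,(1+\eta\Omega/M^2)^{-\eta_c/\eta}$, which is convenient for fixed-point iteration. The sketch below (all parameter values are our own, purely illustrative) reproduces Eq. (\ref{mass}) in the confined phase and finds no nonzero solution in the mobile phase:

```python
import numpy as np

def solve_M2(U0, k, eta, Omega, iters=400):
    """Fixed-point iteration of M^2 = U0 k^2 (1 + eta*Omega/M^2)^(-eta_c/eta)."""
    eta_c = k**2 / (2.0 * np.pi)
    M2 = U0 * k**2                               # starting guess
    for _ in range(iters):
        M2 = U0 * k**2 * (1.0 + eta * Omega / max(M2, 1e-300))**(-eta_c / eta)
    return M2

U0, k, Omega = 0.01, 2.0 * np.pi, 1e3
eta_c = k**2 / (2.0 * np.pi)

M2_conf = solve_M2(U0, k, 2.0 * eta_c, Omega)    # confined phase, eta = 2 eta_c
# Eq. (mass) with eta/(eta - eta_c) = 2:
M2_analytic = (2.0 * eta_c * Omega
               * (2.0 * np.pi * U0 * eta_c / (Omega * 2.0 * eta_c))**2)
print(M2_conf, M2_analytic)

M2_mob = solve_M2(U0, k, 0.5 * eta_c, Omega)     # mobile phase: M^2 -> 0
print(M2_mob)
```

In the mobile phase the iteration collapses to zero, consistent with the statement that a mass is generated only for $\eta>\eta_c$.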
\section{ RG treatment of vortex-antivortex pair-creation in the presence
of a periodic pinning potential }
We begin by recalling the need for a relativistic description of the
process. This leads \cite{ieju} to a Schwinger-type formula for the decay
of the ``vacuum'', represented by a thin superconducting film in which static
e.m.-like fields arise when a supercurrent is switched on. The quantum
fluctuations of these fields are vortex-antivortex pairs, nucleating at a
rate given by
\begin{equation}
\frac{\Gamma}{L^2}=\frac{2}{L^2T} Im \int_{\epsilon}^{\infty}
\frac{d\tau}{\tau} e^{-{\cal E}_0^2\tau} \int
{\cal D}q(t) \exp\{ -\int_0^{\tau} dt {\cal L}_E \}
\label{rate}
\end{equation}
\noindent
where $L^2T$ is the space-time volume of the sample and ${\cal E}_0$ the
vortex-nucleation energy (suitably renormalised by vortex-screening effects).
Also
\begin{eqnarray}
{\cal L}_E&=&\frac{1}{2}m_{\mu}\dot{q}_{\mu}\dot{q}_{\mu}-\frac{1}{2}i
\dot{q}_{\mu}F_{\mu\nu}q_{\nu} + V({\bf q}) \nonumber \\
&+&\sum_k \left \{ \frac{1}{2}m_k\dot{\bf x}_k^2
+\frac{1}{2}m_k\omega_k^2 \left( {\bf x}_k+\frac{c_k}{m_k\omega_k^2}{\bf q}
\right )^2 \right \}
\label{lagran}
\end{eqnarray}
\noindent
is the Euclidean single-particle relativistic Lagrangian, incorporating the
pinning potential $V({\bf q})=2{\cal E}_0U({\bf q})$ and the Caldeira-Leggett
mechanism \cite{cale}. In the absence of the pinning potential, the
relativistic action is quadratic and the path integral in Eq. (\ref{rate})
can be evaluated exactly. The leading term in the expression for $\Gamma$
follows from the lowest pole in the $\tau$-integral and this can be obtained
exactly in the (non-relativistic) limit in which
$m_1=m_2=\frac{\gamma}{2}{\rightarrow}0$, with
$\frac{1}{\gamma}={\cal E}_0/m{\rightarrow}{\infty}$ playing the role of the
square of the speed of light. The result \cite{ieju} is Eq. (\ref{rate0}).
We now come to the evaluation of $\Gamma$ in the presence of the periodic
potential, which calls for the RG approach of Section 2. Integrating out the
Euclidean ``time''-like component $q_3(t)$, we reach a formulation in which
the electric-like and the magnetic-like Magnus field components are
disentangled. In terms of Fourier components, dropping the magnetic-like part
and for $\gamma{\rightarrow}0$:
\begin{equation}
\int_0^{\tau} dt {\cal L}_E({\bf q})=\tau\sum_{n\neq 0} \{ \frac{1}{2}\eta
|\omega_n| - E^2{\delta}_{a1} \} \bar{q}_a(\omega_n) \bar{q}_a(-\omega_n)
+\int_0^{\tau} dt V({\bf q})
\label{lagranr}
\end{equation}
\noindent
with $E=2\pi J$ the electric-like field due to the supercurrent density $J$.
We have shown \cite{ieju} that the only role of the magnetic-like field is
to renormalize the nucleation energy and the friction coefficient, hence our
problem amounts to an effective one-dimensional system in the presence of
${\bf E}$ and dissipation. The evaluation of the Feynman Path Integral (FPI)
proceeds by means of integrating out the zero-mode, $\bar{q}_0$, as well as
the high-frequency modes $\bar{q}_n$ with $n>1$, since again the leading
term for $\Gamma$ in Eq. (\ref{rate}) comes from the divergence of the FPI
associated with the lowest mode coupling to ${\bf E}$. The effect of
$\bar{q}_n$ with $n > 1$ is taken into account through the frequency-shell
RG method of Section 2, leading to a renormalization of the amplitude
$V_0=2{\cal E}_0U_0$ of the (relativistic) pinning potential. The
renormalization has to be carried out from the outer shell of radius $\Omega$
to $\omega_1=2\pi/\tau$. In the mobile phase ($\eta < \eta_c$) this implies
$e^{\ell}=\Omega\tau/2\pi=n^{*}$ in Eq. (\ref{rgflow}), with (from the leading
pole of the FPI) $\tau=\pi\eta/E^2$ (${\rightarrow}\infty$ for relatively
weak currents). In the more interesting confined phase ($\eta > \eta_c$) we
must integrate out the massive $n > 1$ modes with a Lagrangian
${\cal L}(\bar{q}_n)=\tau \left ( \frac{1}{2}\eta |\omega_n|+\frac{1}{2}
M^2-E^2 \right ) \bar{q}_n\bar{q}_n^{*}$. This leads to an additional,
entropy-like renormalization of the activation energy ${\cal E}_{0R}$, besides
the renormalization of $V_0$. We are therefore left with the integration
over the modes $\bar{q}_0$ and $\bar{q}_1$, with a renormalised potential
\begin{eqnarray}
&&\int_0^{\tau} dt V_R(q_0,q_1(t)) = V_0\tau - V_{0R}\int_0^{\tau} dt
\cos ( k(q_0+q_1(t)) ) \nonumber \\
&&{\simeq} V_0\tau - V_{0R}\tau J_0(2k|\bar{q}_1|)\cos ( kq_0 )
\label{potentr}
\end{eqnarray}
\noindent
Here, $J_0$ is the Bessel function and the renormalised amplitude $V_{0R}$ is
\begin{eqnarray}
V_{0R}= \left \{ \begin{array}{ll}
V_0 \left ( \frac{\Omega\tau}{2\pi} \right )^{-\eta_c/\eta}
& \mbox{if $\eta < \eta_c$} \\
V_0 \left ( \frac{\mu +1/n^{*}}{ \mu +1} \right )^{\eta_c/\eta}
& \mbox{if $\eta > \eta_c$}
\end{array} \right.
\label{amplitr}
\end{eqnarray}
\noindent
In Eq. (\ref{potentr}) the phase of the $\bar{q}_1$ mode has been integrated
out, allowing us to integrate out the $\bar{q}_0$-mode exactly; this leads
to the expression
\begin{equation}
\frac{\Gamma}{2L^2}= Im \int_{\epsilon}^{\infty} d{\tau} {\cal N}(\tau)
e^{-({\cal E}_{0R}^2+V_0)\tau}
\int_0^{\infty} d|\bar{q}_1|^2 e^{-(\pi\eta-E^2\tau)|\bar{q}_1|^2}
I_0 \left (
V_{0R}\tau J_0(2k|\bar{q}_1|) \right )
\label{rate1}
\end{equation}
\noindent
where $I_0$ is the modified Bessel function. It is clear that the singularity
from the $\bar{q}_1$-integral occurs at $\tau=\pi\eta/E^2$; evaluating the
normalization factor ${\cal N}(\tau)$, we finally arrive at
\begin{eqnarray}
&&\Gamma=\Gamma_0K(J) \label{final} \\
&&K(J)=e(1+\mu) \left ( 1+\frac{\mu\Omega\eta}{8\pi^2 J^2} \right )
I_0 \left ( \frac{V_{0R}\eta}{4\pi J^2} J_0(2k{\ell}_N) \right ) \nonumber
\end{eqnarray}
\noindent
where $\Gamma_0$ is given by Eq. (\ref{rate0}), there is a further
renormalization ${\cal E}_{0R}^2{\rightarrow}{\cal E}_{0R}^2+V_0$ and
we have set $E=2\pi J$. $\ell_N$ is a nucleation length, which is in first
approximation given by
\begin{equation}
{\ell}_N^2{\simeq}\frac{ {\cal E}_{0R}^2}{4\pi^2 J^2}
-\frac{V_{0R}}{4\pi^2 J^2} \left | J_0 \left ( k
\frac{ {\cal E}_{0R} } {\pi J} \right ) \right |
\label{nuclen}
\end{equation}
\noindent
and corresponds physically to the distance a vortex and antivortex
pair must travel to acquire the nucleation energy ${\cal E}_{0R}$.
The presence of the $J_0(2k{\ell}_N)$ argument in the correction factor
$K(J)$ due to the pinning lattice thus gives rise to oscillations in
$\Gamma (J)$ (hence in the sample's resistance) through the parameter
$2k{\ell}_N=4\pi{\ell}_N/d$. Vortex nucleation is therefore
sensitive to the corrugation of the pinning substrate. However, these
oscillations should be observable only in the confined phase, $\eta > \eta_c$,
where interrupted-renormalization prevents the prefactor in front of the
$J_0(x)$ oscillating function from becoming too small for relatively
small current densities.
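As an illustration of these oscillations, the correction factor $K(J)$ of Eq. (\ref{final}), with $\ell_N$ from Eq. (\ref{nuclen}), can be evaluated numerically. Every parameter value below is an arbitrary illustrative choice ($k=2\pi$, i.e. $d=1$, ${\cal E}_{0R}=\eta=1$); only the qualitative $J_0$-oscillation pattern is meaningful:

```python
import numpy as np

def bessel_j0(x):
    """J_0(x) from its integral representation (midpoint rule, ample here)."""
    th = (np.arange(4000) + 0.5) * np.pi / 4000
    return np.cos(np.outer(np.atleast_1d(x), np.sin(th))).mean(axis=1)

def ell_N(J, E0R=1.0, V0R=0.1, k=2.0 * np.pi):
    """Nucleation length of Eq. (nuclen)."""
    l2 = (E0R**2 - V0R * np.abs(bessel_j0(k * E0R / (np.pi * J)))) \
         / (4.0 * np.pi**2 * J**2)
    return np.sqrt(l2)

def K(J, eta=1.0, mu=0.05, Omega=10.0, V0R=0.1, k=2.0 * np.pi):
    """Correction factor K(J) of Eq. (final); np.i0 is the modified Bessel I_0."""
    return (np.e * (1.0 + mu)
            * (1.0 + mu * Omega * eta / (8.0 * np.pi**2 * J**2))
            * np.i0(V0R * eta / (4.0 * np.pi * J**2)
                    * bessel_j0(2.0 * k * ell_N(J))))

J = np.linspace(0.1, 1.0, 200)
print("K(J) range:", K(J).min(), K(J).max())
```

Since $\ell_N\propto 1/J$, the argument $2k\ell_N$ sweeps through several zeros of $J_0$ as $J$ varies, producing the current-oscillations discussed in the text.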
\section*{References}
\section{Introduction}
The precision data collected to date have confirmed the
Standard Model to be a good description of physics below
the electroweak scale \cite{Schaile}.
Despite its great success, there are many reasons to believe
that some kind of new physics must exist. On the other hand, the
non-abelian structure of the gauge
boson self-couplings is still poorly tested and one of the most sensitive
probes for new physics is provided by the trilinear gauge boson couplings
(TGC) \cite{TGC}.
Many studies have been devoted to the $WW\gamma$ and $WWZ$ couplings.
At hadron colliders and $e^+e^-$ colliders, the present bounds
(Tevatron \cite{Errede}) and prospects (LHC, LEP2 and
NLC \cite{TGC,LEP2}) are mostly based on diboson production ($WW$,
$W\gamma$ and $WZ$).
In $ep$ collisions, HERA could provide
further information
analyzing single $W$ production ($ep\to eWX$ \cite{ABZ})
and radiative charged current scattering
($ep\to\nu\gamma X$ \cite{hubert}). There is also some
literature on $WW\gamma$ couplings in $W$-pair production at future
very high energy photon colliders (bremsstrahlung photons in peripheral
heavy ion collisions \cite{HIC} and Compton backscattered laser
beams \cite{gg}).
Only recently has attention been paid to the $Z\gamma Z$, $Z\gamma\g$ and
$ZZZ$ couplings. There is a detailed analysis of $Z\gamma V$
couplings ($V=\gamma,Z$) for hadron colliders in \cite{BB}.
CDF \cite{CDF} and D\O\ \cite{D0} have obtained bounds on the
$Z\gamma Z$ and $Z\gamma\g$ anomalous couplings, while L3 has studied
only the first ones \cite{L3}. Studies on the sensitivities to
these vertices in future $e^+e^-$ colliders,
LEP2 \cite{LEP2} and NLC \cite{Boudjema}, have been performed during
the last years.
Some proposals have been made to probe these neutral boson gauge
couplings at future photon colliders in $e\gamma\to Ze$ \cite{eg}.
In this work we study the prospects for measuring the
TGC in the process $ep\to
e\gamma X$. In particular, we will concentrate on the $Z\gamma\g$ couplings,
which can be more stringently bounded than the $Z\gamma Z$ ones
for this process.
In Section 2, we present the TGC. The next section deals with the
different contributions to the process $ep\to e\gamma X$ and the cuts
and methods we have employed
in our analysis. Section 4 contains our results
for the Standard Model total cross section and distributions and
the estimates of the sensitivity of these quantities to the
presence of anomalous couplings. Finally, in the last section we
present our conclusions.
\section{Phenomenological parametrization of the neutral TGC}
A convenient way to study deviations from the standard model predictions
consists of considering the most general lagrangian compatible with
Lorentz invariance, the electromagnetic U(1) gauge symmetry, and
other possible gauge symmetries.
For the trilinear $Z\gamma V$ couplings ($V=\gamma,Z)$ the most general vertex
function invariant under Lorentz and electromagnetic gauge transformations
can be described in terms of four independent dimensionless form
factors \cite{hagiwara}, denoted by $h^V_i$, i=1,2,3,4:
\begin{eqnarray}
\Gamma^{\a\b\mu}_{Z\gamma V} (q_1,q_2,p)=\frac{f(V)}{M^2_Z}
\{
h^V_1 (q^\mu_2 g^{\a\b} - q^\a_2 g^{\mu\b})
+\frac{h^V_2}{M^2_Z} p^\a (p\cdot q_2g^{\mu\b}-q^\mu_2 p^\b)
\nonumber \\
+h^V_3 \varepsilon^{\mu\a\b\r}q_{2_\r}
+\frac{h^V_4}{M^2_Z}p^\a\varepsilon^{\mu\b\r\sigma}p_\r q_{2_\sigma}
\}. \hspace{3cm}
\label{vertex}
\end{eqnarray}
Terms proportional to $p^\mu$, $q^\a_1$ and $q^\b_2$ are omitted as long as
the scalar components of all three vector bosons can be neglected
(whenever they couple to almost massless fermions) or they are zero
(on-shell condition for $Z$ or U(1) gauge boson character of the photon).
The overall factor, $f(V)$, is $p^2-q^2_1$ for $Z\gamma Z$ or $p^2$ for $Z\gamma\g$
and is a result of Bose symmetry and electromagnetic gauge invariance.
These latter constraints reduce the familiar seven form factors
of the most general $WWV$ vertex to only these four for the
$Z\gamma V$ vertex. There still remains a global factor that can be fixed,
without loss of generality, to $g_{Z\gamma Z}=g_{Z\gamma\g}=e$. Combinations
of $h^V_3 (h^V_1)$ and $h^V_4 (h^V_2)$ correspond to electric
(magnetic) dipole and magnetic (electric) quadrupole transition
moments in the static limit.
All the terms are $C$-odd. The terms proportional to $h^V_1$ and $h^V_2$
are $CP$-odd while the other two are $CP$-even. All the form factors
are zero at tree level in the Standard Model. At the one-loop level,
only the $CP$-conserving $h^V_3$ and $h^V_4$ are nonzero \cite{barroso}
but too small (${\cal O}(\a/\pi$)) to lead to any observable
effect at any present or planned experiment. However, larger effects
might appear in theories or models beyond the Standard Model,
for instance when the gauge bosons are composite objects
\cite{composite}.
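A useful consistency check on the vertex (\ref{vertex}) is that electromagnetic gauge invariance makes it transverse in the photon momentum: contracting the photon index $\b$ with $q_2$ annihilates each of the four form-factor structures identically. A small numerical sketch (metric $(+,-,-,-)$; momenta and couplings are random test values, and the overall $f(V)/M_Z^2$ factor is dropped):

```python
import numpy as np
from itertools import permutations

# 4D Levi-Civita symbol
eps = np.zeros((4, 4, 4, 4))
for perm in permutations(range(4)):
    sign = 1
    for i in range(4):
        for j in range(i + 1, 4):
            if perm[i] > perm[j]:
                sign = -sign
    eps[perm] = sign

g = np.diag([1.0, -1.0, -1.0, -1.0])
MZ = 91.187

def vertex(q1, q2, p, h):
    """Gamma^{alpha beta mu} of Eq. (vertex), without the overall f(V)/M_Z^2."""
    h1, h2, h3, h4 = h
    q2l, pl = g @ q2, g @ p                      # lowered indices
    G = np.zeros((4, 4, 4))
    for a in range(4):
        for b in range(4):
            for m in range(4):
                G[a, b, m] = (
                    h1 * (q2[m] * g[a, b] - q2[a] * g[m, b])
                    + h2 / MZ**2 * p[a] * ((p @ q2l) * g[m, b] - q2[m] * p[b])
                    + h3 * np.dot(eps[m, a, b, :], q2l)
                    + h4 / MZ**2 * p[a] * (pl @ eps[m, b, :, :] @ q2l))
    return G

rng = np.random.default_rng(1)
q1 = rng.normal(scale=50.0, size=4)
q2 = rng.normal(scale=50.0, size=4)
p = q1 + q2                                      # momentum conservation
G = vertex(q1, q2, p, h=rng.normal(size=4))
res = np.einsum('abm,b->am', G, g @ q2)          # contract photon index with q_2
print(np.abs(res).max(), np.abs(G).max())
```

The contraction vanishes to machine precision, term by term, exactly as required by the U(1) gauge-boson character of the photon.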
This is a purely phenomenological, model independent parametrization.
Tree-level unitarity restricts the $Z\gamma V$ couplings to their Standard Model
values at asymptotically high energies \cite{unitarity}. This
implies that the couplings $h^V_i$ have to be described by form factors
$h^V_i(q^2_1,q^2_2,p^2)$ which vanish when $q^2_1$, $q^2_2$ or $p^2$
become large. At hadron colliders, large values of $p^2=\hat{s}$
come into play and the energy dependence has to be taken into
account, including unknown damping factors \cite{BB}.
A scale dependence appears as an additional parameter (the scale
of new physics, $\L$). Alternatively,
one could introduce a set of operators invariant under SU(2)$\times$U(1)
involving the gauge bosons and/or additional would-be-Goldstone bosons
and the physical Higgs. Depending on the new physics dynamics,
operators with dimension $d$ could be generated at the scale $\L$,
with a strength which is generally suppressed by factors like
$(M_W/\L)^{d-4}$ or $(\sqrt{s}/\L)^{d-4}$ \cite{NPscale}.
It can be shown that $h^V_1$ and $h^V_3$ receive contributions from
operators of dimension $\ge 6$ and $h^V_2$ and $h^V_4$ from
operators of dimension $\ge 8$.
Unlike at hadron colliders, in $ep\to e\gamma X$ at HERA energies we can ignore
the dependence of the form factors on the scale. On the other
hand, the anomalous couplings are tested in a different kinematical region,
which makes their study in this process complementary to the ones
performed at hadron and lepton colliders.
\section{The process $ep\to e\gamma X$}
The process under study is $ep\to e\gamma X$, which is described in the
parton model by the radiative neutral current electron-quark and
electron-antiquark scattering,
\begin{equation}
\label{process}
e^- \ \stackrel{(-)}{q} \to e^- \ \stackrel{(-)}{q} \ \gamma .
\end{equation}
There are eight Feynman diagrams contributing to this process in
the Standard Model and three additional ones if one includes anomalous vertices:
one extra diagram for the $Z\gamma Z$ vertex and two for the $Z\gamma\g$
vertex (Fig. \ref{feyndiag}).
\bfi{htb}
\begin{center}
\bigphotons
\bpi{35000}{21000}
\put(4000,8000){(a)}
\put(200,17000){\vector(1,0){1300}}
\put(1500,17000){\vector(1,0){3900}}
\put(5400,17000){\line(1,0){2600}}
\drawline\photon[\S\REG](2800,17000)[5]
\put(200,\pbacky){\vector(1,0){1300}}
\put(1500,\pbacky){\vector(1,0){2600}}
\put(4100,\pbacky){\vector(1,0){2600}}
\put(6700,\pbacky){\line(1,0){1300}}
\put(0,13000){$q$}
\put(8200,13000){$q$}
\put(3300,\pmidy){$\gamma,Z$}
\drawline\photon[\SE\FLIPPED](4900,\pbacky)[4]
\put(0,18000){$e$}
\put(8200,18000){$e$}
\put(8200,\pbacky){$\gamma$}
\put(13000,8000){(b)}
\put(9500,17000){\vector(1,0){1300}}
\put(10800,17000){\vector(1,0){2600}}
\put(13400,17000){\vector(1,0){2600}}
\put(16000,17000){\line(1,0){1300}}
\drawline\photon[\S\REG](12100,17000)[5]
\put(9500,\pbacky){\vector(1,0){1300}}
\put(10800,\pbacky){\vector(1,0){3900}}
\put(14700,\pbacky){\line(1,0){2600}}
\drawline\photon[\NE\FLIPPED](14200,17000)[4]
\put(22000,8000){(c)}
\put(18500,17000){\vector(1,0){3250}}
\put(21750,17000){\vector(1,0){3250}}
\put(25000,17000){\line(1,0){1300}}
\drawline\photon[\S\REG](23700,17000)[5]
\put(18500,\pbacky){\vector(1,0){1300}}
\put(19800,\pbacky){\vector(1,0){2600}}
\put(22400,\pbacky){\vector(1,0){2600}}
\put(25000,\pbacky){\line(1,0){1300}}
\drawline\photon[\SE\FLIPPED](21100,\pbacky)[4]
\put(31000,8000){(d)}
\put(27500,17000){\vector(1,0){1300}}
\put(28800,17000){\vector(1,0){2600}}
\put(31400,17000){\vector(1,0){2600}}
\put(34000,17000){\line(1,0){1300}}
\drawline\photon[\S\REG](32700,17000)[5]
\put(27500,\pbacky){\vector(1,0){3250}}
\put(30750,\pbacky){\vector(1,0){3250}}
\put(33900,\pbacky){\line(1,0){1300}}
\drawline\photon[\NE\FLIPPED](30100,17000)[4]
\put(17800,0){(e)}
\put(17100,5500){$\gamma,Z$}
\put(17100,3000){$\gamma,Z$}
\put(14000,7000){\vector(1,0){1300}}
\put(15300,7000){\vector(1,0){3900}}
\put(19200,7000){\line(1,0){2600}}
\drawline\photon[\S\REG](16600,7000)[5]
\put(16750,\pmidy){\circle*{500}}
\put(14000,\pbacky){\vector(1,0){1300}}
\put(15300,\pbacky){\vector(1,0){3900}}
\put(19200,\pbacky){\line(1,0){2600}}
\drawline\photon[\E\REG](16750,\pmidy)[5]
\put(22300,\pbacky){$\gamma$}
\end{picture}
\end{center}
\caption{\it Feynman diagrams for the process $e^- q \to e^- q \gamma$.
\label{feyndiag}}
\end{figure}
Diagrams with $\gamma$ exchanged in the t-channel are dominant. Nevertheless,
we consider the whole set of diagrams in the calculation.
On the other hand, u-channel fermion exchange poles appear in the limit
of massless quarks and electrons (diagrams (c) and (d)).
Since the anomalous diagrams (e) do not present such infrared or
collinear singularities, it seems appropriate to avoid nearly on-shell
exchanged photons and fermion poles by cutting on the transverse momenta
of the final-state fermions (electron and jet), thereby enhancing the
signal from the anomalous vertices.
Due to the suppression factor coming from the $Z$ propagator, the
anomalous diagrams are more sensitive to $Z\gamma\g$ than to $Z\gamma Z$ vertices.
In the following we will focus our attention on the former.
The basic variables of the parton level process are five. A
suitable choice is: $E_\gamma$ (energy of the final photon),
$\cos\th_\gamma$, $\cos\th_{q'}$ (cosines of the polar angles of the
photon and the scattered quark defined with respect to the proton direction),
$\phi$ (the angle between the transverse momenta of the photon and the
scattered quark in a plane perpendicular to the beam), and a
trivial azimuthal angle that is integrated out (unpolarized beams).
All the variables are referred to the laboratory frame. One needs
an extra variable, the Bjorken-x, to connect the partonic process
with the $ep$ process. The phase space integration over these six
variables is carried out by {\tt VEGAS} \cite{VEGAS} and has been
cross-checked with the {\tt RAMBO} subroutine \cite{RAMBO}.
We adopt two kinds of event cuts to constrain conveniently
the phase space:
\begin{itemize}
\item
{\em Acceptance and isolation} cuts. The former are to exclude
phase space regions
which are not accessible to the detector, because of angular or
efficiency limitations:\footnote{The threshold for the transverse
momentum of the scattered quark ensures that its kinematics can be
described in terms of a jet.}
\begin{eqnarray}
\label{cut1}
8^\circ < \theta_e,\ \theta_\gamma,\ \theta_{\rm jet} < 172^\circ; \nonumber\\
E_e, \ E_\gamma, \ p^{\rm q'}_{\rm T} > 10 \ {\rm GeV}.
\end{eqnarray}
The latter keep the final photon well separated
from both the final electron and the jet:
\begin{eqnarray}
\label{cut2}
\cos \langle \gamma,e \rangle < 0.9; \nonumber\\
R > 1.5,
\end{eqnarray}
where $R\equiv\sqrt{\Delta\eta^2+\phi^2}$ is the separation between
the photon and the jet in the rapidity-azimuthal plane, and $\langle \gamma,e \rangle$ is the angle between the photon and the scattered electron.
\item
Cuts for {\em intrinsic background suppression}. They consist of
strengthening some of the
previous cuts or adding new ones to enhance the signal of the anomalous
diagrams against the Standard Model background.
\end{itemize}
We have developed a Monte Carlo program for the simulation of the
process $ep\to e\gamma X$ where $X$ is the remnant of the proton plus one jet
formed by the scattered quark of the subprocess (\ref{process}). It
includes the Standard Model helicity amplitudes computed using the {\tt HELAS} subroutines \cite{HELAS}. We added new code to account for the
anomalous diagrams. The squares of these anomalous amplitudes have been
cross-checked with their analytical expressions computed using {\tt FORM}
\cite{FORM}. For the parton distribution functions,
we employ both the set 1 of Duke-Owens' parametrizations \cite{DO}
and the modified MRS(A) parametrizations \cite{MRS}, with the scale chosen to
be the hadronic momentum transfer.
As inputs, we use the beam energies $E_e=30$ GeV and $E_p=820$ GeV,
the $Z$ mass $M_Z=91.187$ GeV, the weak mixing angle $\sin^2\th_W=0.2315$
\cite{PDB} and the fine structure constant $\a=1/128$. A more correct choice
would be the running fine structure constant with $Q^2$ as the argument.
However, as we are interested in large $Q^2$ events, the value $\a(M^2_Z)$
is accurate enough for our purposes. We consider only
the first and second generations of quarks, assumed to be massless.
We start by applying the cuts (\ref{cut1}) and (\ref{cut2})
and examining the contribution to a set of observables of the
Standard Model and the anomalous diagrams, separately. Next, we
select one observable such that, when a cut on it is performed,
mostly Standard Model events are eliminated. The procedure
is repeated with this new cut built in. After several runs, adding
new cuts, the ratio standard/anomalous cross sections is reduced
and hence the sensitivity to anomalous couplings is improved.
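The cut-selection loop described above can be caricatured in a few lines: given toy spectra for the Standard Model background and the anomalous signal in one observable, scan a cut value and keep the one maximizing $S/\sqrt{B}$. The spectra and normalizations below are invented for illustration and bear no relation to the real distributions:

```python
import numpy as np

rng = np.random.default_rng(0)
sm  = rng.exponential(10.0, size=100_000)        # toy SM spectrum (falls steeply)
ano = rng.normal(50.0, 10.0, size=100_000)       # toy anomalous spectrum (hard)
w_sm, w_ano = 1.0, 0.01                          # arbitrary relative normalizations

def significance(cut):
    S = w_ano * np.sum(ano > cut)
    B = w_sm * np.sum(sm > cut)
    return S / np.sqrt(B + 1.0)                  # +1 regularizes empty background

cuts = np.linspace(0.0, 80.0, 161)
best = max(cuts, key=significance)
print("best cut:", best, "significance:", significance(best))
```

With these toy shapes the optimal cut sits near the bulk of the hard anomalous spectrum: tightening it removes background exponentially while the signal is only mildly depleted, until signal loss takes over.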
\section{Results}
\subsection{Observables}
The total cross section of $ep\to e\gamma X$ can be written as
\begin{equation}
\sigma=\sigma_{{\rm SM}} + \sum_{i} \t_i \cdot h^\gamma_i + \sum_{i}\sigma_i\cdot (h^\gamma_i)^2
+ \sigma_{12} \cdot h^\gamma_1 h^\gamma_2 + \sigma_{34} \cdot h^\gamma_3 h^\gamma_4.
\end{equation}
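Given Monte Carlo estimates of the coefficients in this expansion, the statistical sensitivity to a single coupling follows from a one-parameter counting analysis. A schematic sketch with invented coefficient values (the real $\t_i$, $\sigma_i$ must come from the simulation described above; the luminosity is likewise an assumption):

```python
import numpy as np

sigma_SM, tau1, sig1 = 10.0, 0.5, 200.0   # invented coefficients, all in pb
lum = 1000.0                              # assumed integrated luminosity, pb^-1

def nsig(h1):
    """Deviation from the SM in units of the statistical error, h1 alone."""
    dN = lum * (tau1 * h1 + sig1 * h1**2)
    return np.abs(dN) / np.sqrt(lum * sigma_SM)

h = np.linspace(0.0, 0.5, 50_001)
bound = h[np.argmax(nsig(h) >= 1.96)]     # first h reaching 95% CL (1.96 sigma)
print("95% CL reach on h1:", bound)
```

Because the quadratic term dominates for couplings of interest, the reach scales roughly as $(\sigma_1\sqrt{L})^{-1/2}$, and the linear term $\t_i$ is what makes the positive and negative bounds asymmetric.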
\bfi{htb}
\setlength{\unitlength}{1cm}
\bpi{8}{7}
\epsfxsize=11cm
\put(-1,-4){\epsfbox{eng_acciso.ps}}
\end{picture}
\bpi{8}{7}
\epsfxsize=11cm
\put(0.,-4){\epsfbox{ptg_acciso.ps}}
\end{picture}
\bpi{8}{6}
\epsfxsize=11cm
\put(-1,-5){\epsfbox{angge_acciso.ps}}
\end{picture}
\bpi{8}{6}
\epsfxsize=11cm
\put(0.,-5){\epsfbox{anggj_acciso.ps}}
\end{picture}
\bpi{8}{6}
\epsfxsize=11cm
\put(-1,-5){\epsfbox{angej_acciso.ps}}
\end{picture}
\bpi{8}{7}
\epsfxsize=11cm
\put(0.,-5){\epsfbox{q2e_acciso.ps}}
\end{picture}
\caption{\it Differential cross sections (pb) for the process $ep\to e\gamma X$ at
HERA, with only acceptance and isolation cuts.
The solid line is the Standard Model contribution and the dashed (dot-dashed) line
corresponds to 10000 times the $\sigma_1$ ($\sigma_2$) anomalous contribution.\label{A}}
\end{figure}
The forthcoming results are obtained using the MRS'95
pa\-ra\-me\-tri\-za\-tion of the parton densities\footnote{The values
change $\sim 10$\% when using the (old) Duke-Owens' structure functions.}
\cite{MRS}.
The linear terms of the $P$-violating couplings $h^\gamma_3$
and $h^\gamma_4$ are negligible, as they arise mostly from the interference of
Standard Model diagrams with photon exchange ($P$-even) and anomalous
$P$-odd diagrams ($\t_3\simeq \t_4\simeq 0$). Moreover, anomalous diagrams with
different $P$ do not interfere either. On the other hand, the quadratic terms
proportional to $(h^\gamma_1)^2$ and $(h^\gamma_3)^2$ have identical expressions, and
likewise for $h^\gamma_2$ and $h^\gamma_4$ ($\sigma_1=\sigma_3$, $\sigma_2=\sigma_4$). Only the
linear terms make their bounds different. The interference terms $\sigma_{12}$
and $\sigma_{34}$ are also identical.
\bfi{htb}
\setlength{\unitlength}{1cm}
\bpi{8}{7}
\epsfxsize=11cm
\put(-1,-4){\epsfbox{eng_bkgsup.ps}}
\end{picture}
\bpi{8}{7}
\epsfxsize=11cm
\put(0.,-4){\epsfbox{ptg_bkgsup.ps}}
\end{picture}
\bpi{8}{6}
\epsfxsize=11cm
\put(-1,-5){\epsfbox{angge_bkgsup.ps}}
\end{picture}
\bpi{8}{6}
\epsfxsize=11cm
\put(0.,-5){\epsfbox{anggj_bkgsup.ps}}
\end{picture}
\bpi{8}{6}
\epsfxsize=11cm
\put(-1,-5){\epsfbox{angej_bkgsup.ps}}
\end{picture}
\bpi{8}{7}
\epsfxsize=11cm
\put(0.,-5){\epsfbox{q2e_bkgsup.ps}}
\end{picture}
\caption{\it Differential cross sections (pb) for the process $ep\to e\gamma X$ at
HERA, after intrinsic background suppression.
The solid line is the Standard Model contribution and the dashed (dot-dashed) line corresponds to 500 times the $\sigma_1$ ($\sigma_2$) anomalous contribution.\label{B}}
\end{figure}
We have analyzed the distributions of more than twenty
observables in the laboratory frame, including the energies, transverse
momenta and angular distributions of the jet, the photon and the final
electron, as well as their spatial, polar and azimuthal separations.
The Bjorken $x$, the leptonic and hadronic momentum transfers and other fractional energies are also considered.
The process of intrinsic background suppression is illustrated
by comparing Figures \ref{A} and \ref{B}. For simplicity, only
the most interesting variables are shown: the energy $E(\gamma)$ and transverse
momentum $p_T(\gamma)$ of the photon; the angles between the photon and
the scattered electron $\langle \gamma,e \rangle$, the photon and the jet
$\langle \gamma,j \rangle$, and the scattered electron and the jet $\langle e,j
\rangle$; and the leptonic momentum transfer $Q^2(e)$.
In Fig.~\ref{A}, these variables
are plotted with only acceptance and isolation cuts
implemented.
All of them share the property of having a range
where any anomalous effect is negligible, whereas the contribution
to the total SM cross section is large. The set of cuts
listed below was added to eventually reach the distributions of
Fig.~\ref{B}:
\begin{itemize}
\item
The main contribution to the Standard Model cross section comes from
soft photons with very low transverse momentum. The following cuts
suppress 97\% of these events, while hardly affecting the
anomalous diagrams which, conversely, favour high energy photons:
\begin{eqnarray}
E_\gamma > 30 \ {\rm GeV} \nonumber \\
p^\gamma_T > 20 \ {\rm GeV}
\label{cut3}
\end{eqnarray}
\item
Another remarkable feature of the anomalous diagrams is their very different
typical momentum transfer. Let us concentrate on the leptonic momentum
transfer, $Q^2_e=-(p'_e-p_e)^2$. The phase space enhances high
$Q^2_e$, while the photon propagator of the Standard Model diagrams
prefers low values (above the threshold for electron detectability,
$Q^2_e>5.8$~GeV$^2$, with our required minimum energy and angle). On the
contrary, the anomalous diagrams always contain a $Z$ propagator,
which introduces a suppression factor of order $Q^2_e/M^2_Z$ and
renders the $Q^2_e$ dependence irrelevant: it is determined by
the phase space alone. As a consequence, the following cut looks appropriate,
\begin{equation}
Q^2_e > 1000 \ {\rm GeV}^2
\label{cut4}
\end{equation}
\end{itemize}
It is important to notice at this point why the usual form factors for the
anomalous couplings can be neglected at HERA. For our process, these
form factors should be proportional to $1/(1+Q^2/\L^2)^n$. With the scale of
new physics $\L=500$~GeV to 1~TeV, these factors can be taken to be one. This
is not the case in lepton or hadron high energy colliders, where diboson production in the $s$-channel needs damping factors $1/(1+\hat{s}/\L^2)^n$.
The total cross section for the Standard Model with acceptance and isolation
cuts is $\sigma_{\rm SM}=21.38$~pb and is reduced to 0.37~pb when all the cuts are applied, while the quadratic contributions only change from
$\sigma_1=2\times10^{-3}$~pb, $\sigma_2=1.12\times10^{-3}$~pb to
$\sigma_1=1.58\times10^{-3}$~pb, $\sigma_2=1.05\times10^{-3}$~pb. The linear
terms are of importance and change from $\t_1=1.18\times10^{-2}$~pb, $\t_2=1.27\times10^{-3}$~pb to $\t_1=7.13\times10^{-3}$~pb, $\t_2=1.26\times10^{-3}$~pb. Finally, the interference term $\sigma_{12}=1.87\times10^{-3}$~pb changes to $\sigma_{12}=1.71\times10^{-3}$~pb.
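As a sanity check, the post-cut cross section for the $CP$-even couplings can be assembled from the coefficients just quoted. The sketch below uses our own naming and sample couplings; only the numerical values come from the text.

```python
# Post-cut coefficients quoted in the text, all in pb.
SIGMA_SM = 0.37                  # Standard Model cross section, all cuts
T1, T2 = 7.13e-3, 1.26e-3        # linear terms t_1, t_2
S1, S2 = 1.58e-3, 1.05e-3        # quadratic terms sigma_1, sigma_2
S12 = 1.71e-3                    # interference term sigma_12

def sigma_total(h1, h2):
    """sigma = sigma_SM + t_i h_i + sigma_i h_i^2 + sigma_12 h1 h2."""
    return (SIGMA_SM + T1 * h1 + T2 * h2
            + S1 * h1 ** 2 + S2 * h2 ** 2 + S12 * h1 * h2)

# Vanishing couplings recover the pure Standard Model rate.
print(sigma_total(0.0, 0.0))   # -> 0.37
```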
The typical Standard Model events consist of soft, low-$p_T$ photons
emitted mostly backwards, tending to go in the same direction as the scattered
electron (a fraction of them are emitted by the hadronic
current in the forward direction), close to the required angular separation ($\sim 30^o$). The low-$p_T$ jet goes opposite to both the photon and the scattered electron, also in the transverse plane.
On the contrary, the anomalous events have harder, high-$p_T$ photons,
concentrated in the forward region, as is the case for the scattered electron
and the jet.
\subsection{Sensitivity to anomalous couplings}
In order to estimate the sensitivity to anomalous couplings, we
consider the $\chi^2$ function.
One can define the $\chi^2$, which is related to the likelihood
function ${\cal L}$, as
\begin{equation}
\label{chi2}
\chi^2\equiv-2\ln{\cal L}=
2 L \displaystyle\left(\sigma^{th}-\sigma^{o}+\sigma^{o}
\ln\displaystyle\frac{\sigma^{o}}{\sigma^{th}}\right)
\simeq L \displaystyle\frac{(\sigma^{th}-\sigma^{o})^2}{\sigma^{o}},
\end{equation}
where $L=N^{th}/\sigma^{th}=N^o/\sigma^o$ is the integrated luminosity
and $N^{th}$ ($N^o$) is the number of theoretical (observed)
events. The last expression in (\ref{chi2}) is a useful
and familiar approximation, valid only when $|\sigma^{th}-\sigma^o|/
\sigma^o \ll 1$.
This function is a measure of the probability that statistical
fluctuations can make the observed number of events indistinguishable
from the predicted one, that is, the Standard Model prediction. The well
known $\chi^2$--CL curve allows us to determine the corresponding
confidence level (CL).
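A minimal numerical check of the exact and approximate forms in (\ref{chi2}); the function names below are our own.

```python
import math

def chi2_exact(sigma_th, sigma_o, lum):
    """chi^2 = -2 ln L = 2 L (s_th - s_o + s_o ln(s_o / s_th))."""
    return 2.0 * lum * (sigma_th - sigma_o
                        + sigma_o * math.log(sigma_o / sigma_th))

def chi2_gauss(sigma_th, sigma_o, lum):
    """Gaussian approximation L (s_th - s_o)^2 / s_o,
    valid only for |s_th - s_o| / s_o << 1."""
    return lum * (sigma_th - sigma_o) ** 2 / sigma_o

# For a 1% deviation the two forms agree to better than 1%.
print(chi2_exact(1.01, 1.00, 100.0), chi2_gauss(1.01, 1.00, 100.0))
```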
We establish bounds on the anomalous couplings by fixing a
certain $\chi^2=\d^2$ and allowing the $h^\gamma_i$
values to vary, $N^o=N^o(h^\gamma_i)$. The parameter $\d$ is often referred
to as the number of
standard deviations or `sigmas'. A $95\%$ CL corresponds to almost
two sigmas ($\d=1.96$).
When $\sigma \simeq \sigma_{{\rm SM}} + (h^\gamma_i)^2 \sigma_i$ (case of the $CP$-odd
terms) and the anomalous contribution is small enough, the
upper limits present some useful, approximate scaling properties,
with the luminosity,
\begin{equation}
h^\gamma_i (L')\simeq\sqrt[4]{\frac{L}{L'}} \ h^\gamma_i (L).
\end{equation}
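This fourth-root behaviour means that improved luminosity tightens the bounds only slowly; a quick sketch (naming ours):

```python
def scaled_limit(h_at_l, lum_old, lum_new):
    """Approximate scaling h(L') = (L/L')**(1/4) * h(L), valid when the
    purely quadratic term dominates the small anomalous contribution."""
    return (lum_old / lum_new) ** 0.25 * h_at_l

# A tenfold luminosity increase shrinks the bound by only ~44%.
print(scaled_limit(17.0, 100.0, 1000.0))
```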
A brief comment on the interpretation of the results is in order.
As the cross section grows with $h^\gamma_i$, in the relevant range of
values, the $N^o$ upper limits can be regarded as the lowest number
of measured events that would discard the Standard Model, or the
largest values of $h^\gamma_i$ that could be bounded if no effect is
observed, with the given CL. This procedure approaches the
method of upper limits for Poisson processes when the
number of events is large ($\stackrel{>}{\sim} 10$).
\bfi{htb}
\setlength{\unitlength}{1cm}
\bpi{8}{8}
\epsfxsize=12cm
\put(3.35,4.245){+}
\put(-2.5,-1.5){\epsfbox{conh1h2.nogrid.ps}}
\end{picture}
\bpi{8}{8}
\epsfxsize=12cm
\put(4.1,4.245){+}
\put(-1.75,-1.5){\epsfbox{conh3h4.nogrid.ps}}
\end{picture}
\caption{\it Limit contours for $Z\gamma\g$ couplings at HERA with an integrated luminosity of 10, 100, 250, 1000 pb$^{-1}$ and a 95\% CL.\label{contour}}
\end{figure}
In Fig. \ref{contour} the sensitivities for different luminosities are shown.
Unfortunately, HERA cannot compete with the Tevatron, whose best
bounds, reported by the D\O\ collaboration \cite{D0}, are
\begin{eqnarray}
|h^\gamma_1|, \ |h^\gamma_3| &<& 1.9 \ (3.1), \nonumber
\\
|h^\gamma_2|, \ |h^\gamma_4| &<& 0.5 \ (0.8).
\end{eqnarray}
For the first value it was assumed that only one anomalous coupling contributes
(`axial limits'), and for the second that two couplings contribute (`correlated limits'). Our results are summarized in Table \ref{table}.
\begin{table}
\begin{center}
\begin{tabular}{|c|r|r|r|r|r|r|r|r|}
\hline
HERA & \multicolumn{2}{c|}{10 pb$^{-1}$} & \multicolumn{2}{c|}{100 pb$^{-1}$}
& \multicolumn{2}{c|}{250 pb$^{-1}$} & \multicolumn{2}{c|}{1 fb$^{-1}$} \\
\hline \hline
$h^\gamma_1$ & -19.0 & 14.5 & -11.5 & 7.0 & -9.5 & 5.5 & -8.0 & 3.5 \\
& -26.0 & 19.5 & -16.0 & 9.5 & -14.0 & 7.0 & -11.5 & 4.5 \\
\hline
$h^\gamma_2$ & -21.5 & 20.0 & -12.0 & 10.0 & - 9.5 & 8.0 & -7.0 & 6.0 \\
& -26.0 & 30.0 & -13.0 & 18.0 & -10.0 & 15.0 & - 7.5 & 12.0 \\
\hline
$h^\gamma_3$ & -17.0 & 17.0 & -9.0 & 9.0 & -7.5 & 7.5 & -5.5 & 5.5 \\
& -22.5 & 22.5 & -12.0 & 12.0 & -10.0 & 10.0 & -7.0 & 7.0 \\
\hline
$h^\gamma_4$ & -20.5 & 20.5 & -11.0 & 11.0 & -8.5 & 8.5 & -6.0 & 6.0 \\
& -27.5 & 27.5 & -14.5 & 14.5 & -12.0 & 12.0 & -8.5 & 8.5 \\
\hline
\end{tabular}
\end{center}
\caption{\it Axial and correlated limits for the $Z\gamma\g$ anomalous couplings
at HERA with different integrated luminosities and $95\%$ CL. \label{table}}
\end{table}
The origin of these poor results is the fact that, unlike diboson production
at hadron or $e^+e^-$ colliders, the anomalous diagrams of $ep\to e\gamma X$
contain a $Z$ propagator that decreases their effect.
The process $ep\to eZX$ avoids this problem thanks to the absence
of such propagators: the Standard Model cross section is comparable
to the anomalous one but, as a drawback, both are of the order of
femtobarns.
\section{Summary and conclusions}
The radiative neutral current process $ep\to e\gamma X$
at HERA has been studied. Realistic cuts have been applied in order to
observe a clean signal consisting of detectable and well separated
electron, photon and jet.
The possibility of testing the trilinear neutral gauge boson couplings
in this process has also been explored. The $Z\gamma Z$ couplings are
very suppressed by two $Z$ propagators. Only the $Z\gamma \gamma$ couplings
have been considered. A Monte Carlo program has been developed to
account for such an anomalous vertex, and further cuts have been implemented
to improve the sensitivity to this source of new physics.
Our estimates are based on total cross sections since the expected number
of events is so small that a distribution analysis is not possible.
The distributions just helped us to find the optimum cuts. Unfortunately,
competitive bounds on these anomalous couplings cannot be achieved at
HERA, even with the future luminosity upgrades.\footnote{We would like to
apologize for the optimistic but incorrect results that were presented
at the workshop due to a regrettable and unlucky mistake in our programs.}
As a counterpart, a different kinematical region is explored, in which
the form factors can be neglected.
\section*{Acknowledgements}
One of us (J.I.) would like to thank the Workshop organizers for financial
support and very especially the electroweak working group conveners
and the Group from Madrid at ZEUS for hospitality and useful conversations.
This work has been partially supported by the CICYT and the European Commission
under contract CHRX-CT-92-0004.
\section*{References}
\section{Introduction}
Systems producing absorption in the spectra of distant quasars offer
an excellent probe of the early Universe. At high redshifts, they
easily outnumber other observed tracers of cosmic structure, including
both normal and active galaxies. Mounting evidence that
the high column density absorbers are young galaxies links relatively
pristine baryonic matter to highly evolved objects at the present day.
The amount of atomic hydrogen in damped Ly$\alpha$\ (DLA) absorbers
at $z \sim 3$ is comparable to the mass in stars at the current epoch (Wolfe
1988), and two DLA\ systems are known to have radial extents $\mathrel{\copy\simgreatbox}
10 h^{-1}$ kpc (Briggs et al.\ 1989; Wolfe et al.\ 1993). Photometry of
damped absorbers supports the view that they are high-redshift
galaxies (Djorgovski et al.\ 1996; Fontana et al.\ 1996).
At somewhat lower column densities and redshifts,
deep imaging and spectroscopy indicate that Lyman
limit systems are associated with lines of sight passing near bright
galaxies (Yanny 1990; Lanzetta \& Bowen 1990; Bergeron \& Boiss\'e
1991; Steidel, Dickinson, \& Persson 1994) or galaxy clusters
(Lanzetta, Webb, \& Barcons 1996).
The interpretation of quasar absorption systems has undergone
something of a revolution during the past two years, with the
recognition that they may be gas aggregating into nonlinear structures
in hierarchical models like those invoked to account for the observed galaxy
distribution (e.g., Cen {et al.\/} 1994; Petitjean, Mucket, \&
Kates 1995; Zhang, Anninos, \& Norman 1995; Hernquist et al.\
1996; Miralda-Escud\'e et al.\ 1996). In particular, Katz et al.\
(1996; hereafter KWHM) used
simulations that evolve baryons and dark matter in the presence of a
background radiation field to show that high column density absorbers
arise naturally in a cold dark matter (CDM) universe from radiatively
cooled gas in galaxy-sized halos, supporting the notion that damped
Ly$\alpha$\ systems are a byproduct of galaxy formation. Together with the
results of Hernquist et al.\ (1996) for the Ly$\alpha$\ forest, the column
density distribution predicted by KWHM matches existing data
reasonably well, but it falls below the observations by factors $\approx
2$ and $\approx 8$ for DLA\ and Lyman limit absorbers, respectively.
This discrepancy can be attributed at least
partly to resolution effects in the simulations. Owing to
computational expense, the KWHM simulation could not resolve halos
with circular velocities below $v_c \approx 100$ km s$^{-1}$. However,
higher resolution simulations of localized regions by Quinn, Katz, \&
Efstathiou (1996; hereafter QKE)
indicate that halos down to $v_c \approx 35$
km s$^{-1}$\ can host damped absorbers, so clearly the number of high
column density systems found by KWHM is artificially depressed
by the finite resolution of their simulation.
In this paper,
we overcome this numerical limitation using a two-step correction
procedure. First, we employ the Press \& Schechter (1974) algorithm
to correct the KWHM data by extending the halo mass function to
values lower than could be resolved by their simulation. Then, we
account for absorption by gas in these halos from a relation between
the absorption cross section for a halo and its circular velocity.
This relation is established by fitting both the KWHM data and
high-resolution simulations that use the QKE initial conditions
and the KWHM background radiation field.
These additional simulations examine localized
regions around low mass objects with sufficient resolution to resolve
halos down to $v_c \approx 35$ km s$^{-1}$.
Heating by the UV background prevents the collapse and cooling of gas
in smaller halos (QKE; Thoul \& Weinberg 1996).
The high-resolution volumes are small and
were chosen in a non-random way, so they cannot be used directly to
infer the number of DLA\ and Lyman limit systems. By convolving the
absorption area vs.\ circular velocity relation with the halo mass function
given by the Press-Schechter method, we can predict the absorption at
any mass scale, effectively extending the dynamic range of the
simulations down to the lowest mass objects that produce high
column density absorption.
We also present another calculation, similar
to that in KWHM but including star formation, to quantify the effects
of gas depletion on high column density absorption.
\section{Simulations and Methods}
\label{secSimulation}
Our primary simulation, the same as that used by KWHM, follows the
evolution of a periodic cube whose edges measure 22.22 Mpc in comoving
units. This region was drawn randomly from a CDM universe with
$\Omega=1$, $h \equiv H_0/100$ km s$^{-1}$\ Mpc$^{-1}=0.5$, baryon
density $\Omega_b=0.05$, and power spectrum normalization
$\sigma_8=0.7$. A uniform background radiation field was imposed to
mimic the expected ultraviolet (UV) output of quasars, with a spectrum
of the form $J(\nu) = J_0(\nu_0/\nu) F(z)$, where $\nu_0$ is the
Lyman-limit frequency, $J_0=10^{-22}$ erg s$^{-1}$ cm$^{-2}$ sr$^{-1}$
Hz$^{-1}$, and $F(z)=0$ if $z>6$, $F(z)=4/(1+z)$ if $3 \le z \le 6$,
and $F(z)=1$ if $2<z<3$. The simulations employ $64^3$ gas and $64^3$
dark-matter particles, with a gravitational softening length of 20
comoving kpc (13 comoving kpc equivalent Plummer softening). The
particle mass is $1.45 \times 10^8 M_\odot$ and $2.8 \times 10^9
M_\odot$ for gas and dark matter, respectively. Detailed descriptions
of the simulation code and the radiation physics can be found in
Hernquist \& Katz (1989) and Katz, Weinberg, \& Hernquist (1996;
hereafter KWH). The low column density absorption in this simulation
is discussed by Hernquist et al.\ (1996), and the galaxy population is
discussed by Weinberg, Hernquist, \& Katz (1996).
We also employ two simulations that have the same initial conditions,
cosmological parameters, and numerical parameters as QKE, but with the UV
background spectrum given above. These comprise smaller, 10 Mpc
periodic volumes (with $\Omega=1$, $h=0.5$, $\Omega_b=0.05$ as
before), which are evolved using a hierarchical grid of particles in
the initial conditions. The central region forms a collapsed object
that is represented using a large number of low mass particles, while
regions further away are modeled using a small number of more massive
particles. A simulation of the same volume as QKE would require
$256^3$ particles of each species to match the resolution of the
central region throughout; the nesting technique allows us to achieve
high-resolution locally while preserving the cosmological context of
the calculation.
QKE find that a photoionizing background suppresses the collapse
and cooling of gas in halos with circular velocities
$v_c \mathrel{\copy\simlessbox} 35$ km s$^{-1}$. Thoul \& Weinberg (1996) find a similar
cutoff in much higher resolution, spherically symmetric calculations.
Hence, it should be possible to estimate the amount of gas capable of
producing DLA\ and Lyman limit absorption by accounting for
halos down to this cutoff in $v_c$.
Both QKE and Thoul \& Weinberg (1996) find that photoionization
has little effect on the amount of gas that cools in halos
with $v_c \mathrel{\copy\simgreatbox} 60$ km s$^{-1}$, consistent with the results of
Navarro \& Steinmetz (1996) and Weinberg et al.\ (1996).
The current generation of hydrodynamic simulations lacks
the dynamic range necessary to represent halos over the entire range
$35 < v_c \mathrel{\copy\simlessbox} 300$ km s$^{-1}$. To overcome this limitation, we use
the approximation developed by Press \& Schechter (1974), who give the
following analytic estimate for the number density of halos of mass
$M$ at redshift $z$:
\begin{equation}
N(M,z) dM = \sqrt{2\over \pi} {\rho_0\over M}
{\delta_c\over \sigma_0} \left({\gamma R_f\over R_*}\right)^2
\exp{\left({-\delta_c^2\over 2\sigma_0^2}\right)} dM ,
\label{PSnumber}
\end{equation}
where $\rho_0$ is the mean comoving density, $R_f$ is the Gaussian
filter radius corresponding to mass
$M= (2\pi)^{3/2} \rho_0 R_f^3$, and $\delta_c$ is
the critical linear density contrast that corresponds to
gravitational collapse. The
parameters $\sigma_0$, $\gamma$ and $R_*$ are related to moments of
the power spectrum (Bardeen {et al.\/} 1986). Equation (\ref{PSnumber}) can
be integrated from $M$ to infinity to yield the number density
of objects above a given mass.
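The abundance of equation~(\ref{PSnumber}) can be sketched numerically. In the sketch below (our own code, not that of the paper), the transfer function and the moment definitions of $\sigma_0$, $\gamma$ and $R_*$ are the standard Bardeen et al.\ (1986) ones, and the $\sigma_8=0.7$ normalization is imposed crudely with a Gaussian window, so only the qualitative shape should be trusted.

```python
import math

# Omega = 1, h = 0.5 CDM model as in the simulations.
OMEGA, H = 1.0, 0.5
RHO0 = 2.775e11 * H ** 2          # mean comoving density, Msun / Mpc^3
DELTA_C = 1.69

def bbks(k):
    """BBKS CDM transfer function; k in Mpc^-1."""
    q = k / (OMEGA * H * H)
    return (math.log(1.0 + 2.34 * q) / (2.34 * q)
            * (1.0 + 3.89 * q + (16.1 * q) ** 2
               + (5.46 * q) ** 3 + (6.71 * q) ** 4) ** -0.25)

def moment(j, rf, nk=400):
    """sigma_j^2 = (1/2 pi^2) Int k^(2j+2) P(k) exp(-k^2 rf^2) dk with
    P(k) = k T(k)^2 (n = 1, unnormalized), Gaussian filter rf (Mpc)."""
    lo, hi = math.log(1e-4), math.log(50.0 / rf)
    dl = (hi - lo) / nk
    total = 0.0
    for i in range(nk + 1):
        k = math.exp(lo + i * dl)
        w = 0.5 if i in (0, nk) else 1.0   # trapezoid weights in log k
        total += w * k ** (2 * j + 4) * bbks(k) ** 2 * math.exp(-(k * rf) ** 2)
    return total * dl / (2.0 * math.pi ** 2)

# Crude normalization: set sigma_0 (Gaussian, 8/h Mpc) to 0.7 at z = 0.
NORM = 0.7 ** 2 / moment(0, 8.0 / H)

def ps_number_density(m, z):
    """N(M, z) per unit mass interval, as printed in the Press-Schechter
    formula of the text; linear growth sigma propto 1/(1+z) for Omega=1."""
    rf = (m / ((2.0 * math.pi) ** 1.5 * RHO0)) ** (1.0 / 3.0)
    s0 = math.sqrt(NORM * moment(0, rf)) / (1.0 + z)
    s1 = math.sqrt(NORM * moment(1, rf)) / (1.0 + z)
    s2 = math.sqrt(NORM * moment(2, rf)) / (1.0 + z)
    gam, r_star = s1 * s1 / (s0 * s2), math.sqrt(3.0) * s1 / s2
    return (math.sqrt(2.0 / math.pi) * (RHO0 / m) * (DELTA_C / s0)
            * (gam * rf / r_star) ** 2
            * math.exp(-DELTA_C ** 2 / (2.0 * s0 * s0)))
```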
In what follows, for comparison with our simulations, we use the CDM
transfer function given by Bardeen {et al.\/} (1986).
To determine the number of DLA\ and Lyman limit systems per
unit redshift, we first fix the parameters in the Press-Schechter
algorithm so that it reproduces the mass function of our 22.22 Mpc
simulations. Then, we use the 22.22 Mpc and 10 Mpc
simulations together to fit a relation between the circular velocity
of a halo and its cross section for producing DLA\
or Lyman limit absorption.
To identify halos in the simulations, we apply a friends-of-friends
algorithm with a linking length equal to the mean interparticle
separation on the isodensity contour of an isothermal sphere at an
overdensity of $177$, $b=(177\,n/3)^{-1/3}$, where $n$ is the particle number
density.
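The grouping step can be illustrated with a toy friends-of-friends pass; this naive $O(N^2)$ union-find sketch is our own code, not the production algorithm used in the simulations.

```python
# Toy friends-of-friends: particles closer than the linking length b
# belong to one group. Naive O(N^2) union-find sketch; the simulation
# code uses a far more efficient implementation.

def linking_length(n):
    """b = (177 n / 3)^(-1/3) for particle number density n."""
    return (177.0 * n / 3.0) ** (-1.0 / 3.0)

def friends_of_friends(points, b):
    """Return one group label per point (points are coordinate tuples)."""
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    b2 = b * b
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if sum((a - c) ** 2 for a, c in zip(points[i], points[j])) <= b2:
                parent[find(i)] = find(j)
    return [find(i) for i in range(len(points))]

# Two well separated pairs give two distinct groups.
pts = [(0.0, 0.0), (0.5, 0.0), (10.0, 0.0), (10.4, 0.0)]
labels = friends_of_friends(pts, 1.0)
```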
We also apply the algorithm of Stadel {et al.\/} (1996;
see also KWH and
http://www-hpcc.astro.washington.edu/tools/DENMAX)
to the cold gas particles
in the same volume to locate regions of collapsed gas capable of
producing Lyman limit and damped Ly$\alpha$\ absorption. A region of gas is
considered a potential absorber only if it contains at least four
gas particles that are mutually bound, have a smoothed overdensity
$\rho_g/\bar\rho_g > 170$, and a temperature $T < 30,000$ K.
All of the gas concentrations found by this method
are associated with a friends-of-friends halo,
even at $z=4$. We match each absorber with its
parent halo and discard halos that contain no absorbers.
For each of the halos that contains a cold gas concentration, we
determine the radius of the sphere centered on the most tightly bound
particle within which the average density is equal to 177 times the
mean background density. We characterize halo masses and circular
velocities by their values at this radius. This method of quantifying
the properties of halos in the simulations corresponds to that
envisioned in the Press-Schechter approximation, which is based on the
spherical collapse model. We find that the mass distribution of halos
in the simulations is best fit using the Press-Schechter form with a
Gaussian filter and $\delta_c = 1.69$. Many workers have instead used
a top-hat filter, with $M_f=(4 \pi/3) \rho_0 R_f^3$ ({\it cf.\/} Ma 1996; Ma
\& Bertschinger 1994; Mo \& Miralda-Escud\'e 1994; Mo {et al.\/} 1996), or a
Gaussian filter with a modified relation between filter radius and
associated mass, $M_f=6 \pi^2 \rho_0 R_f^3$ (Lacey \& Cole 1994), with
similar values for $\delta_c$. However, these studies used the halo
masses as returned by the friends-of-friends algorithm itself, and if
we do this we also find that top-hat or modified Gaussian filters
provide good fits to the mass function for $\delta_c \approx 1.7$.
The combination $\delta_c=1.69$, Gaussian filter, and
$M_f=(2\pi)^{3/2} \rho_0 R_f^3$ is appropriate for our definition of
halo masses within overdensity 177 spheres. Including or excluding
the ``absorberless'' halos in our mass function does not change the
results above $v_c=100$ km s$^{-1}$\ because all halos above this
circular velocity contain at least one absorber.
We calculate HI column densities for the halos by encompassing each
halo with a sphere which is centered on the most tightly bound
gas particle and is of a sufficient size to contain all gas particles
which may contribute to absorption within the halo. We
project the gas
distribution within this sphere onto a uniform grid of cell size 5.43
comoving kpc, equal to the highest resolution achieved anywhere in the
22.22 Mpc simulation.
Using the method of KWHM, we calculate an initial HI column
density for each gridpoint assuming that the gas is optically thin,
then apply a self-shielding correction to yield a true HI column
density (see KWHM for details). For each halo we compute the
projected area over which it produces damped absorption, with $N_{\rm HI} >
10^{20.3} \;\cdunits$, and Lyman limit absorption, with $N_{\rm HI} >
10^{17.2}\;\cdunits$. For simplicity, we project all halos from a
single direction, though we obtain a similar fit of absorption area to
circular velocity if we project randomly in the $x$, $y$, and $z$
directions or average the projections in $x$, $y$, and $z$.
\begin{figure}
\vglue-0.65in
\plottwo{f1a.eps}{f1b.eps} \\
\vglue-0.2in
\plottwo{f1c.eps}{f1d.eps} \\
\vglue-0.2in
\plottwo{f1e.eps}{f1f.eps}
\vglue-0.26in
\caption{Comoving absorbing area in kpc$^2$ vs. circular velocity
$v_c$ in km s$^{-1}$\ for halos in the 22.22 Mpc simulation
(skeletal points) and the 10 Mpc simulations (open circles).
Left hand panels show the area for DLA absorption,
$N_{\rm HI} \geq 10^{20.3}\;\cdunits$, and right hand panels
for Lyman limit absorption, $N_{\rm HI} \geq 10^{17.2}\;\cdunits.$
The number of vertices in the skeletal points corresponds
to the number of gas concentrations in the halo. The solid line shows the
fitted smooth relation of equation~(\ref{avc}), with
parameter values listed in Table 1.}
\label{figVAplot}
\end{figure}
Figure~\ref{figVAplot} shows the cross section for damped
absorption (left hand panels) and
Lyman limit absorption (right hand panels)
as a function of circular velocity for each of
our halos, at redshifts 2, 3, and 4.
The open circles at low $v_c$ represent
halos from the 10 Mpc, high-resolution runs. Other
points refer to the 22.22 Mpc simulation, and the number of vertices
in each symbol indicates the number of absorbers (i.e., distinct regions
of cold, collapsed gas) within each halo. For these halos
there are two competing effects that determine the trend between
absorption cross section and circular velocity.
Higher mass halos have deeper potential
wells, so concentrations of cold gas contract further, explaining the
downward trend in cross section with circular velocity exhibited by
points with a fixed number of vertices. However,
more massive halos tend to harbor more than one
concentration of gas, increasing their absorption cross section.
The overall trend in Figure 1 is that
halos of higher circular velocities on average have larger absorption
cross sections.
The solid lines in Figure~\ref{figVAplot} show a smooth function
$\alpha_z(v_c)$ fitted to the relation between absorption area
and circular velocity. We will need this function for our
Press-Schechter correction procedure below. As a functional form
we adopt a linear relation between ${\rm log}\,\alpha$ and
${\rm log}\,v_c$ with a damping factor $1-\exp(-(v_c-35)/12)$,
which reflects the suppression of gas cooling in low $v_c$ halos.
We bin the data points in intervals of 0.15 in ${\rm log}\,v_c$,
compute the mean and standard deviation of ${\rm log}\,\alpha$
in each bin, and determine the parameters of the smooth
relation by $\chi^2$ minimization. Fitting binned data rather
than individual halos gives more appropriate weight to the relatively
rare, high $v_c$ halos. Table 1 lists the fitted values
of $A$ and $B$ for the functional relation
\begin{equation}
{\rm log}\,\alpha = (A\,{\rm log}\,v_c + B)(1-\exp(-(v_c-35)/12)),
\label{avc}
\end{equation}
with $\alpha$ in comoving kpc$^2$, $v_c$ in km s$^{-1}$, and
base-10 logarithms.
We determine values separately for DLA\ and Lyman limit
absorption and for each redshift. Figure~\ref{figVAplot}
shows that there is substantial scatter about this
mean relation, and our adopted functional form is rather arbitrary,
but we will see shortly that this characterization of the
$\alpha_z(v_c)$ relation suffices for our purposes.
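Equation~(\ref{avc}) with the Table 1 coefficients can be evaluated directly; a small sketch (naming ours), here for the DLA columns:

```python
import math

# (A, B) pairs for DLA absorption from Table 1, keyed by redshift.
ALPHA_DLA = {2.0: (2.32, -1.87), 3.0: (2.94, -3.03), 4.0: (2.84, -2.63)}

def absorption_area(vc, z, coeffs=ALPHA_DLA):
    """Comoving absorption area alpha (kpc^2) from the fitted relation:
    log10 alpha = (A log10 vc + B) * (1 - exp(-(vc - 35)/12))."""
    a, b = coeffs[z]
    damping = 1.0 - math.exp(-(vc - 35.0) / 12.0)
    return 10.0 ** ((a * math.log10(vc) + b) * damping)

# The damping factor switches absorption off as vc -> 35 km/s:
# log10 alpha -> 0 there, i.e. alpha -> 1 kpc^2.
print(absorption_area(35.0, 3.0))   # -> 1.0
```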
\begin{table}
\begin{tabular}{lllll}
\tableline\tableline
\multicolumn{1}{c}{$z$} & \multicolumn{1}{c}{$A_{\rm DLA}$} &
\multicolumn{1}{c}{$B_{\rm DLA}$}& \multicolumn{1}{c}{$A_{\rm LL}$} &
\multicolumn{1}{c}{$B_{\rm LL}$} \\ \tableline
2.0& 2.32& -1.87 & 2.70 & -2.13 \\
3.0& 2.94& -3.03 & 3.21 & -2.96 \\
4.0& 2.84& -2.63 & 3.02 & -2.28 \\ \tableline\tableline
\end{tabular}
\caption{Fitted parameter values for $\alpha_z(v_c)$, with
the functional form in equation~(\ref{avc}).}
\label{tabalpha}
\end{table}
The observable quantity that we would like to test the CDM model
against is $n(z)$, the number of DLA\ or Lyman limit absorbers
per unit redshift interval along a random line of sight.
We can estimate this from the projected HI map of the 22.22 Mpc
simulation as in KWHM, by dividing the fractional area that has
projected column density above the DLA\ or Lyman limit threshold
by the depth of the box in redshift. However, because the
simulation does not resolve gas cooling in halos with
$v_c \mathrel{\copy\simlessbox} 100$ km s$^{-1}$, this procedure really yields
estimates of $n(z,100\;\vunits)$, where $n(z,v_c)$
denotes the number of absorbers per unit redshift produced
by halos with circular velocity greater than $v_c$.
Since halos with $35\;\vunits < v_c < 100\;\vunits$ can
harbor DLA\ and Lyman limit absorbers,
$n(z,100\;\vunits)$ is only a lower limit to the observable
quantity $n(z)$.
We have now assembled the tools to fix this problem, for the
Press-Schechter formula~(\ref{PSnumber}) tells us the number
density of halos as a function of circular velocity and the
relation $\alpha_z(v_c)$ tells us how much absorption these
halos produce. Equation~(\ref{PSnumber}) is given in terms
of the mass $M$; since we define the halo mass within a sphere
of overdensity 177, the corresponding circular velocity is
\begin{equation}
v_c = (GM/R_{177})^{1/2} =
\left[GM^{2/3} \left({4\pi \over 3} 177 \rho_c\right)^{1/3}\right]^{1/2} =
117~ \left({M \over 10^{11} M_\odot}\right)^{1/3}
\left({1+z \over 4}\right)^{1/2} \; \vunits.
\label{vcM}
\end{equation}
Thus,
\begin{equation}
n(z,v_c)= {dr \over dz} \int_M^{\infty} N(M',z)\,
\alpha_z\!\left(v_c(M')\right) dM',
\label{nofzM}
\end{equation}
where $N(M',z)$ is taken from equation~(\ref{PSnumber}), and
equation~(\ref{vcM}) is used to convert between $v_c$ and $M$
as necessary. Multiplying the comoving number density of halos by
the comoving absorption area yields a number of absorbers per
comoving distance, and multiplying by $dr/dz$, the derivative of
comoving distance with respect to redshift, converts to a number
per unit redshift.
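The conversion of equation~(\ref{vcM}) is easy to sketch (function names ours):

```python
def circular_velocity(m, z):
    """v_c (km/s) at overdensity 177 from the relation in the text:
    v_c = 117 (M / 1e11 Msun)^(1/3) ((1 + z)/4)^(1/2)."""
    return 117.0 * (m / 1e11) ** (1.0 / 3.0) * ((1.0 + z) / 4.0) ** 0.5

def mass_from_vc(vc, z):
    """Inverse of the relation above; returns M in Msun."""
    return 1e11 * (vc / (117.0 * ((1.0 + z) / 4.0) ** 0.5)) ** 3

# By construction, M = 1e11 Msun at z = 3 gives 117 km/s.
print(circular_velocity(1e11, 3.0))   # -> 117.0
```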
Figure~\ref{figNZplot} shows $n(z,v_c)$ computed from
equation~(\ref{nofzM}) using our fitted relations $\alpha_z(v_c)$.
Starting from high $v_c$, the abundance first rises steeply with
decreasing $v_c$ because of the increasing number of halos,
but it flattens at low $v_c$ because of the suppression of gas
cooling in small halos. Points with error bars show $n(z,v_c)$
obtained directly from the halos in the 22.22 Mpc simulation.
The curves fit these points quite well --- they are, of course,
constructed to do so, but the agreement shows that our full
procedure, including the details of the Press-Schechter calibration
and fitting for $\alpha_z(v_c)$, is able to reproduce the original
numerical results in the regime where halos are resolved.
We can therefore be fairly confident in using this method to
extrapolate to $n(z,0) = n(z)$, the incidence of high column
density absorption produced by gas in all halos, thus
incorporating the new information provided by the high-resolution simulations.
These values of $n(z)$, the $y$-intercepts of the curves in
the panels of Figure~\ref{figNZplot}, are the principal numerical
results of this paper. We will compare them to observations in
the next section.
Table 2 lists the values of $n(z)$ determined by this procedure
at $z=2$, 3, and 4. It also lists the correction factors that
must be applied to the quantities $n(z,100\;\vunits)$ obtainable
by the KWHM procedure in order to get the total abundance
$n(z)=n(z,0)$. In all cases, roughly half of the absorption
occurs in halos with $v_c > 100\;\vunits$ and half in the
more common but smaller halos with lower circular velocities.
\begin{figure}
\vglue-0.65in
\plottwo{f2a.eps}{f2b.eps} \\
\vglue-0.2in
\plottwo{f2c.eps}{f2d.eps} \\
\vglue-0.2in
\plottwo{f2e.eps}{f2f.eps}
\vglue-0.23in
\caption{Incidence of DLA (left) and Lyman limit (right)
absorption at $z=2,$ 3, and 4. Curves show $n(z,v_c)$,
the number of absorbers per unit redshift arising in halos
with circular velocity greater than $v_c$, computed from
equation~(\ref{nofzM}). The $y$-intercepts show the
incidence of absorption produced by all halos. Points with $N^{1/2}$ error
bars show numerical results from the 22.22 Mpc simulation.}
\label{figNZplot}
\end{figure}
\section{Comparison to Observations}
\label{secResults}
\begin{table}
\begin{tabular}{lllcllclllcll} \tableline\tableline
\multicolumn{6}{c}{Damped Ly$\alpha$\ } && \multicolumn{6}{c}{Lyman Limit}
\\ \cline{1-6} \cline{8-13}
\multicolumn{3}{c}{Calculated}&&\multicolumn{2}{c}{Observed}&&
\multicolumn{3}{c}{Calculated}&&\multicolumn{2}{c}{Observed}\\
z&\multicolumn{1}{c}{$n(z)$}&\multicolumn{1}{c}{$F_C$}&&
\multicolumn{1}{c}{$z$}&\multicolumn{1}{c}{$n(z)$} &&
z&\multicolumn{1}{c}{$n(z)$}&\multicolumn{1}{c}{$F_C$}&&
\multicolumn{1}{c}{$z$}&\multicolumn{1}{c}{$n(z)$}
\\ \cline{1-3}\cline{5-6}\cline{8-10}\cline{12-13}
2& 0.17857 & 2.05&& $1.75\pm 0.25$& $0.14\pm 0.073$ &&
2& 0.59586 & 1.74&& $0.90\pm 0.5$& $0.65\pm 0.25$ \\
3& 0.17411 & 1.91&& $2.5\pm 0.5$& $0.18\pm 0.039$ &&
3& 0.72439 & 1.81&& $2.95\pm 0.6$& $2.08\pm 0.35$ \\
& & && $3.25\pm 0.25$& $0.21\pm 0.10$ &&
& & && & \\
4& 0.19422 & 2.54&& $4.1\pm 0.6$& $0.47\pm 0.17$ &&
4& 1.00660 & 2.31&& $4.15\pm 0.6$& $3.45\pm 0.95$ \\ \tableline\tableline
\end{tabular}
\caption{The incidence $n(z)$ of DLA\ and Lyman limit absorption for
the $\Omega=1$ CDM model, computed by our calibrated
Press-Schechter procedure. Observational values are taken from
Storrie-Lombardi {et al.\/} (1996) for DLA\ absorption and from
Storrie-Lombardi {et al.\/} (1994) for Lyman limit absorption.
Also listed is $F_C$, the correction factor by which the KWHM results
for $n(z,100\;\vunits)$ must be multiplied to obtain the
absorption $n(z)$ produced by all halos.}
\label{tabResults}
\end{table}
\begin{figure}
\epsfxsize=6.5truein
\centerline{\epsfbox[18 144 590 718]{f3.eps}}
\caption{
\label{figObsComp}
Incidence of DLA\ and Lyman limit absorption as a function of
redshift. Triangles and squares show the resolution-corrected
theoretical predictions for DLA\ and Lyman limit absorption,
respectively. The upper error crosses represent the Lyman limit
data of Storrie-Lombardi {et al.\/} (1994), with $1\sigma$ and $2\sigma$
abundance errors shown. The smooth curve shows their fitted power law.
The lower set of error crosses and solid curve represent the DLA\
data of Storrie-Lombardi {et al.\/} (1996), with $1\sigma$ and $2\sigma$
errors. The dotted error crosses and curve show the data, $1\sigma$
errors, and fit from Wolfe {et al.\/} (1995).}
\end{figure}
Figure~\ref{figObsComp} compares our derived values of $n(z)$ to
observational estimates of the incidence of damped Ly$\alpha$
absorption, taken from Storrie-Lombardi {et al.\/} (1996) and
Wolfe {et al.\/} (1995), and Lyman limit absorption, taken from
Storrie-Lombardi {et al.\/} (1994).
The theoretical predictions and observed values are listed in Table~\ref{tabResults}.
The resolution correction increases the predicted $n(z)$ values
relative to those of KWHM by about a factor of two, leading to
quite good agreement with the observed abundance of DLA\ absorbers
at $z=2$ and 3. At $z=4$ the predicted abundance is $1.6\sigma$
below the Storrie-Lombardi {et al.\/} (1996) data. Since there are
systematic as well as statistical uncertainties in this observational
estimate --- in particular, it includes candidate DLA\ systems
that have not yet been confirmed by Echelle spectroscopy ---
we regard this level of agreement as acceptable.
The situation for Lyman limit absorption is quite different.
Here the theoretical predictions fall systematically below the
observed abundances, by about a factor of three.
The correction for unresolved halos reduces the discrepancy found
by KWHM, but it does not remove it.
The deficit of Lyman limit systems could reflect a failing of
the CDM model considered here, or it could indicate the presence
in the real universe of an additional population of Lyman limit
absorbers that are not resolved by our simulations.
We discuss this issue further in \S~\ref{secSummary}.
\section{Effects of Star Formation}
\label{secStars}
The simulations examined in the previous section do not allow
conversion of gas into stars, and one might worry that depletion
of the atomic gas supply by star formation would substantially
reduce the predicted abundance of DLA\ absorbers.
We investigate this issue by analyzing a simulation identical to the
KWHM run considered above except that it incorporates star formation.
The algorithm, a modified form of that introduced by Katz (1992),
is described in detail by KWH; we summarize it here.
A gas particle becomes ``eligible'' to form stars if (a)
the local hydrogen density exceeds 0.1 cm$^{-3}$ (similar to that of
neutral hydrogen clouds in the interstellar medium), (b)
the local overdensity exceeds the virial overdensity,
and (c) the particle resides in a converging flow that is Jeans-unstable.
Star formation takes place
gradually, with a star formation rate that depends on an
assumed efficiency for conversion of gas into stars and on the local
collapse timescale (the maximum of the local dynamical timescale and
the local cooling timescale).
We set the efficiency parameter defined by KWH to $c_*=0.1$,
though the tests in KWH show that results are insensitive to
an order-of-magnitude change in $c_*$.
Until the gas mass of such a particle
falls below 5\% of its original mass, it is categorized as a
``star-gas'' particle. Thereafter, it is treated as a collisionless
star particle. This gradual transition overcomes computational
difficulties associated with alternative implementations of
star formation, such as
the artificial reduction in resolution caused by rapid removal of
collisionless gas particles from converging flows, or the
spawning of a large number of extra particles that slow the
computations and consume memory.
When stars form, we add supernova feedback energy to the
surrounding gas in the form of heat, assuming that
each supernova yields $10^{51}$ ergs and that all stars greater than
$8M_\odot$ go supernova.
We add this energy gradually, with an exponential decay time of
2 $\times 10^7$ years, the approximate lifetime of an $8M_\odot$ star.
Thermal energy deposited in the dense, rapidly cooling gas is quickly radiated
away, so although feedback has some effect in our simulation, the
impact is usually not dramatic.
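For scale, the feedback energy budget per unit mass of stars formed can be estimated from the two assumptions stated above ($10^{51}$ ergs per supernova, all stars above $8M_\odot$ exploding), once an IMF is adopted. The sketch below assumes an illustrative Salpeter-like power law; the IMF actually used in the simulations is specified in KWH, not here.

```python
def salpeter_sn_per_msun(m_lo=0.1, m_hi=100.0, m_sn=8.0, alpha=2.35):
    """Supernovae per solar mass of stars formed, for a power-law IMF
    dN/dm ~ m^-alpha between m_lo and m_hi (solar masses).
    The Salpeter-like parameters here are an illustrative assumption."""
    # number of stars more massive than m_sn
    n_hi = (m_sn**(1 - alpha) - m_hi**(1 - alpha)) / (alpha - 1)
    # total mass formed over the full IMF
    m_tot = (m_lo**(2 - alpha) - m_hi**(2 - alpha)) / (alpha - 2)
    return n_hi / m_tot

E_SN = 1e51   # erg per supernova, as in the text
rate = salpeter_sn_per_msun()
# roughly 0.007 SN per Msun formed, i.e. a few x 10^48 erg per Msun,
# released with the 2 x 10^7 yr exponential decay described above
print(f"{rate:.4f} SN/Msun, {rate * E_SN:.2e} erg/Msun")
```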
\begin{figure}
\epsfysize=5.0truein
\centerline{\epsfbox{f4.eps}}
\caption{
\label{starfig}
The column density distribution $f(N_{\rm HI})$ --- the number of absorbers
per unit redshift per linear interval of $N_{\rm HI}$ --- for simulations
with and without star formation.
Histograms show the simulation results at $z=2$ (solid), $z=3$ (dotted),
and $z=4$ (dashed). Heavier lines represent the simulation
without star formation and lighter lines the simulation with
star formation.
}
\end{figure}
Figure~\ref{starfig} shows column density distributions
for the simulations with and without star formation at $z = 2$, 3, and 4;
$f(N_{\rm HI})$ is the number of absorbers per unit redshift per linear
interval of column density. Star formation alters $f(N_{\rm HI})$ only at
column densities greater than $10^{22}$ cm$^{-2}$, higher than any observed
column density.
Star formation does affect the amount of cold, collapsed gas, however.
The simulation without star formation
yields an $\Omega$ in cold, collapsed gas, i.e.\ gas with
$\rho/\bar\rho > 1000$ and $T<30,000$K, of (6.5, 3.6, 1.7)$\times 10^{-3}$
at $z = (2, 3, 4)$. In the simulation with star formation,
the $\Omega$ in cold, collapsed gas is (3.4, 2.3, 1.2)$\times 10^{-3}$
at $z = (2, 3, 4)$, while the $\Omega$ in stars is
(3.1, 1.2, 0.4)$\times 10^{-3}$,
making a total $\Omega$ in collapsed galactic
baryons of (6.5, 3.5, 1.6)$\times 10^{-3}$, just slightly below the
simulation without star formation. Hence, star formation simply
converts very high column density gas into stars while
affecting little else.
It does not significantly alter the predicted values of $n(z)$ given
previously because absorbers with $N_{\rm HI} \geq 10^{22}\;\cdunits$
are a small fraction of all DLA\ absorbers.
All of the distributions in Figure~\ref{starfig} show a clear
flattening in the column density range
$10^{18.5}\;\cdunits \leq N_{\rm HI} \leq 10^{20.5}\;\cdunits$.
This flattening reflects the onset of self-shielding.
A small range of total hydrogen column densities maps into a wider
range of neutral hydrogen column densities because the neutral
fraction rises rapidly with column density as self-shielding
becomes important. While the optical depth to Lyman limit photons
is one at $N_{\rm HI} = 10^{17.2}\;\cdunits$, self-shielding does not
become strong until significantly higher column densities because
higher frequency photons have a lower ionization cross section and
can still penetrate the cloud.
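This frequency dependence is easy to quantify with the approximate $\nu^{-3}$ scaling of the H\,I photoionization cross section above threshold; the sketch below (ours, using the textbook threshold cross section rather than anything from the simulations) shows why clouds well above the nominal Lyman limit column remain partially ionized.

```python
SIGMA_0 = 6.30e-18   # cm^2: H I photoionization cross section at the Lyman limit

def tau(N_HI, nu_ratio):
    """Optical depth of a slab of column density N_HI (cm^-2) to photons
    at frequency nu = nu_ratio * nu_LL, using the approximate nu^-3
    scaling of the cross section above threshold."""
    return N_HI * SIGMA_0 * nu_ratio**-3

# tau = 1 at the Lyman limit for N_HI = 10^17.2 cm^-2, as in the text
print(round(tau(10**17.2, 1.0), 2))
# a 10^18.5 cm^-2 cloud is optically thick at threshold ...
print(round(tau(10**18.5, 1.0), 1))
# ... yet still transparent to photons 4x above threshold
print(round(tau(10**18.5, 4.0), 2))
```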
\section{Summary}
\label{secSummary}
The finite resolution of numerical simulations affects their predictions
for the abundance $n(z)$ of DLA\ and Lyman limit absorption systems.
It is not currently feasible to simulate a volume large enough to
contain a representative population of high circular velocity halos
while maintaining enough resolution to accurately model the smallest
halos ($v_c \approx 35\;\vunits$) that can harbor such systems.
We have therefore devised a method that integrates results from high-
and low-resolution simulations to obtain accurate predictions for $n(z)$.
We use the simulations to determine the relation between absorption
cross section and halo circular velocity over the full range
of relevant circular velocities, then combine this relation with
the Press-Schechter formula for halo abundance --- itself calibrated
against the simulated halo population --- to compute $n(z)$ via
equation~(\ref{nofzM}).
As a method to correct for finite resolution, this technique should
be quite reliable, and it can be applied to other cosmological models
once the appropriate simulations are available for calibrating
$\alpha_z(v_c)$. In the absence of these simulations, one can
make the plausible but uncertain assumption that the relation between
absorbing area and halo circular velocity is similar from one
model to another, then combine $\alpha_z(v_c)$ from this study
with the Press-Schechter halo abundance for other models to predict $n(z)$.
We apply this approach to a number of popular cosmological scenarios
in a separate paper (Gardner {et al.\/} 1996).
While it is less secure than the resolution-corrected numerical
approach of this paper, it is an improvement over existing
semi-analytic calculations of DLA\ abundances
(e.g., Mo \& Miralda-Escud\'e 1994; Kauffmann \& Charlot 1994;
Ma \& Bertschinger 1994; Klypin {et al.\/} 1995),
which usually assume
that {\it all} gas within the halo virial radius cools and becomes neutral,
and which either assume a form and scale for the collapsed gas
distribution or compare to observations only through the
atomic gas density parameter $\Omega_g$, which is sensitive mainly
to the very highest column density systems.
Our resolution correction increases the incidence of
DLA\ and Lyman limit absorption in the CDM model by about a factor
of two, relative to the results of KWHM. This increase brings the
predicted abundance of DLA\ absorbers into quite good agreement
with observations at $z=2$ and 3,
indicating that the high redshift galaxies that form in the CDM
model can account naturally for the observed damped Ly$\alpha$\ absorption.
At $z=4$ the predicted $n(z)$ is $1.6\sigma$ (a factor 2.4)
below a recent observational estimate. However, many of
the systems that contribute to this data point have not yet been confirmed
by high-resolution spectroscopy, so the estimate may decrease with
future observations.
The underprediction of Lyman limit absorption in the simulations is
more dramatic, nearly a factor of three at $z=2$, 3, and 4.
This discrepancy could represent a
failure of the CDM model with our adopted parameters
($\Omega=1$, $h=0.5$, $\Omega_b=0.05$, $\sigma_8=0.7$),
though most of the popular alternatives to standard CDM have
less small scale power and therefore fare at least as badly in this regard.
An alternative possibility is that most Lyman limit
absorption occurs in structures far below the resolution scale of
even our high-resolution, individual object simulations.
For example, Mo \& Miralda-Escud\'e (1996) propose that most Lyman limit
systems are low mass ($\sim 10^5 M_\odot$) clouds formed by thermal
instabilities in galactic halo gas.
We could also be underestimating Lyman limit absorption if some of it
arises in partially collapsed structures --- sheets or filaments ---
that are not accounted for by the Press-Schechter halo formula.
While the KWHM simulation includes such structures, it may underestimate
their numbers in regions of low background density, where its spatial
resolution is degraded, and the QKE simulations select high density
regions from the outset. High resolution
simulations focused on underdense regions could investigate this
possibility. At lower redshifts Lyman limit absorption is
always associated with normal galaxies (Steidel {et al.\/} 1994; Lanzetta {et al.\/} 1996),
but this is not necessarily the case at high redshifts.
In addition to resolution-corrected estimates of $n(z)$, our
results provide some insights into the physical nature of DLA\ absorbers.
As shown in Figure~\ref{figNZplot}, roughly half of the absorbers
reside in halos with circular velocities greater than $100\;\vunits$
and half in halos with $35\;\vunits \leq v_c \leq 100\; \vunits$.
High resolution spectroscopy of metal-line absorption in damped
systems (e.g., Wolfe {et al.\/} 1994) may be able to test this prediction
over the next few years, and future simulations can provide predictions
for other cosmological models. We find that halos with
$v_c \geq 150 \;\vunits$ frequently host more than one gas concentration
(Figure~\ref{figVAplot}), so imaging observations might often
reveal multiple objects close to the line of sight.
At $z\geq 2$, star formation and feedback --- at least as implemented
in our simulations --- have virtually no effect on the predicted
numbers of Lyman limit and DLA\ absorbers. Roughly half of the
cold, collapsed gas is converted to stars by $z=2$, but this
affects the absorption statistics only at $N_{\rm HI} \geq 10^{22} \;\cdunits$.
Depletion of the gas supply by star formation may account for the
absence of observed systems with column densities in this range,
though the number expected in existing surveys would be small in any case.
At lower redshifts, the effects of gas depletion may extend to lower
column densities. For $\Omega=1$ and $h=0.5$,
there are just over a billion years between $z=4$ and $z=2$,
but there are over two billion years between $z=2$ and $z=1$
and over eight billion years from $z=1$ to the present.
Assuming a roughly constant star formation rate in disk galaxies,
most of the depletion of DLA\ gas would occur at low redshifts.
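The quoted time intervals follow from the Einstein-de Sitter age-redshift relation, $t(z) = \frac{2}{3H_0}(1+z)^{-3/2}$; this sketch (a numerical aside, not from the paper) checks them for the $\Omega=1$, $h=0.5$ model.

```python
H0 = 50.0             # km/s/Mpc, i.e. h = 0.5
MPC_KM = 3.0857e19    # km per Mpc
GYR_S = 3.156e16      # seconds per Gyr

def age(z):
    """Cosmic age at redshift z for Omega = 1 (Einstein-de Sitter):
    t(z) = (2 / 3 H0) (1+z)^(-3/2)."""
    t_hubble = MPC_KM / H0 / GYR_S   # 1/H0 in Gyr, ~19.6 Gyr
    return (2.0 / 3.0) * t_hubble * (1.0 + z)**-1.5

print(round(age(2) - age(4), 2))   # just over 1 Gyr between z=4 and z=2
print(round(age(1) - age(2), 2))   # over 2 Gyr between z=2 and z=1
print(round(age(0) - age(1), 2))   # over 8 Gyr from z=1 to the present
```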
Ongoing searches for DLA\ absorbers are improving the observational
constraints on their abundance at high redshift, and
follow-up spectroscopic studies of their metal-line absorption
and imaging studies of associated Ly$\alpha$\ and continuum emission
are beginning to yield important insights into their physical
properties. Multi-color searches for ``Lyman-break'' galaxies are
beginning to reveal the population of ``normal'' high redshift galaxies,
which are the likely sources of most DLA\ absorption.
In the hierarchical clustering framework, the abundance,
properties, and clustering of these objects depend on the
amount of power in the primordial fluctuation spectrum on galactic
mass scales, which in turn depends on the nature of dark matter,
on the mechanism that produces the fluctuations, and on
cosmological parameters such as $\Omega$, $h$, and $\Omega_b$.
The initial fluctuations on galactic scales are difficult to
constrain with local observations because much larger structures
(e.g., galaxy clusters) have since collapsed. The comparison
between rapidly improving high redshift data and numerical
simulations like those used here opens a new window for testing
cosmological models, and we expect that it will take us much further
towards understanding the origin of quasar absorbers, high
redshift galaxies, and the galaxies that we observe today.
\acknowledgments
This work was supported in part by the San Diego, Pittsburgh, and Illinois
supercomputer centers, the Alfred P. Sloan Foundation, NASA Theory
Grants NAGW-2422, NAGW-2523, NAG5-2882, and NAG5-3111, NASA HPCC/ESS Grant
NAG5-2213, NASA grant NAG5-1618, and the NSF under Grant ASC 93-18185
and the Presidential Faculty Fellows Program.
| 11,075 |
\section{Introduction}
Many ``connection-dynamic'' theories of gravity with propagating torsion
have been proposed in recent decades. In contrast to the usual
Einstein-Cartan (EC) gravity\cite{hehl}, such theories could
in principle
admit long-range torsion-mediated interactions.
In the same period, we have also witnessed spectacular progress in the
experimental description of the solar system\cite{will}.
Many important tests using
the parameterized post-Newtonian (PPN) formalism have been performed. Tight
limits on the PPN parameters have been established, and
several alternative
theories to General Relativity (GR)
have been ruled out. Indeed, such solar system experiments,
together with observations of the binary pulsar $1913+16$, offer strong evidence
that the metric tensor cannot deviate far from the predictions of
GR\cite{will}.
Unfortunately, the situation with respect to the torsion tensor is
much more obscure. Interest in the experimental consequences of propagating
torsion models has been revived recently\cite{CF,hamm}.
Carroll and Field\cite{CF} have examined the
observational consequences of propagating torsion in a wide class of
models involving scalar fields. They conclude that for reasonable models
the torsion must decay quickly outside matter distributions, leading to
no long-range interaction that could be detected experimentally.
Nevertheless, as they also stress, this does not mean that torsion
has no relevance in gravitational physics.
Typically, in
propagating torsion models the Einstein-Hilbert action is modified in
order to induce a differential equation for the torsion tensor, allowing for
non-vanishing torsion configurations in the vacuum. In almost all cases
a dynamical scalar field is involved, usually related to the torsion
trace or pseudo-trace. Such modifications are introduced in a
rather arbitrary way: terms are added to the Lagrangian in
order to produce previously desired differential
equations for the torsion tensor.
The goal of this paper is to present a propagating torsion model
obtained from first principles of EC theory. By exploring some basic
features of the Einstein-Hilbert action in spacetimes with torsion we
get a model with a new and rather intriguing type of propagating torsion
involving a non-minimally coupled scalar field.
We write and discuss the metric and torsion equations for the vacuum
and in the presence of different matter fields.
Our model does not belong to
the large class of models studied in \cite{CF}.
The work is organized as follows. Section II is a brief review of
Riemann-Cartan (RC)
geometry, with special emphasis on the concept of a parallel volume
element. In Section III, we show how a propagating torsion model
arises from elementary considerations on the compatibility between
the minimal action principle and the minimal coupling procedure. Section
IV is devoted to the study of the proposed model in the vacuum and in the
presence of various types of matter. Section V contains some concluding
remarks.
\section{RC manifolds and parallel volume elements}
A RC spacetime is a four-dimensional differentiable
manifold endowed with a metric tensor $g_{\alpha\beta}(x)$ and with a
metric-compatible connection $\Gamma_{\alpha\beta}^\mu$ that is
non-symmetric in its lower indices. We adopt in this work
${\rm sign}(g_{\mu\nu})=(+,-,-,-)$.
The anti-symmetric part of the
connection defines a new tensor, the torsion tensor,
\begin{equation}
S_{\alpha\beta}^{\ \ \gamma} = \frac{1}{2}
\left(\Gamma_{\alpha\beta}^\gamma-\Gamma_{\beta\alpha}^\gamma \right).
\label{torsion}
\end{equation}
The metric-compatible connection can be written as
\begin{equation}
\Gamma_{\alpha\beta}^\gamma = \left\{_{\alpha\beta}^\gamma \right\}
- K_{\alpha\beta}^{\ \ \gamma},
\label{connection}
\end{equation}
where $\left\{_{\alpha\beta}^\gamma \right\}$ are
the usual Christoffel symbols
and $K_{\alpha\beta}^{\ \ \gamma}$ is the
contorsion tensor, which is given in terms of the torsion tensor by
\begin{equation}
K_{\alpha\beta}^{\ \ \gamma} = - S_{\alpha\beta}^{\ \ \gamma}
+ S_{\beta\ \alpha}^{\ \gamma\ } - S_{\ \alpha\beta}^{\gamma\ \ }.
\label{contorsion}
\end{equation}
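As a purely algebraic cross-check of (\ref{torsion})-(\ref{contorsion}) (not part of the original text), the following sketch builds a random torsion tensor with all indices lowered, contracting with an identity matrix in place of the metric, which suffices for an index-symmetry check. It verifies that the contorsion is antisymmetric in its last two indices (the content of metric compatibility) and that its first-third trace equals $-2S_\beta$, consistent with the trace structure of the decomposition used below.

```python
import random

random.seed(0)
n = 4
# torsion S[a][b][c], antisymmetric in its first two indices
A = [[[random.gauss(0, 1) for _ in range(n)] for _ in range(n)] for _ in range(n)]
S = [[[0.5 * (A[a][b][c] - A[b][a][c]) for c in range(n)]
      for b in range(n)] for a in range(n)]

# contorsion, eq. (contorsion) with all indices down:
# K_{abc} = -S_{abc} + S_{bca} - S_{cab}
K = [[[-S[a][b][c] + S[b][c][a] - S[c][a][b] for c in range(n)]
      for b in range(n)] for a in range(n)]

# metric compatibility: K antisymmetric in its last two indices
assert all(abs(K[a][b][c] + K[a][c][b]) < 1e-12
           for a in range(n) for b in range(n) for c in range(n))

# trace relation: K^a_{b a} = -2 S_b, with S_b = S^a_{b a}... wait,
# here S_b = sum_a S[a][b][a], the trace defined in the text
S_tr = [sum(S[a][b][a] for a in range(n)) for b in range(n)]
K_tr = [sum(K[a][b][a] for a in range(n)) for b in range(n)]
assert all(abs(K_tr[b] + 2 * S_tr[b]) < 1e-12 for b in range(n))
print("contorsion checks passed")
```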
The connection (\ref{connection}) is used to define the covariant derivative of
vectors,
\begin{equation}
D_\nu A^\mu = \partial_\nu A^\mu + \Gamma_{\nu\rho}^\mu A^\rho,
\label{covariant}
\end{equation}
and
it is also important for our purposes to introduce the covariant derivative of
a density $f(x)$,
\begin{equation}
D_\mu f(x) = \partial_\mu f(x) - \Gamma^\rho_{\rho\mu}f(x).
\end{equation}
The contorsion tensor (\ref{contorsion}) can be covariantly
split in a traceless part and in a trace,
\begin{equation}
K_{\alpha\beta\gamma} = \tilde{K}_{\alpha\beta\gamma} -
\frac{2}{3}\left(
g_{\alpha\gamma} S_\beta - g_{\alpha\beta} S_\gamma
\right),
\label{decomposit}
\end{equation}
where $\tilde{K}_{\alpha\beta\gamma}$ is the traceless part and $S_\beta$ is
the trace of the torsion tensor, $S_\beta = S^{\ \ \alpha}_{\alpha\beta}$.
In four dimensions the traceless part $\tilde{K}_{\alpha\beta\gamma}$
can be further decomposed into a pseudo-trace part and a part with vanishing
pseudo-trace, but for our purposes (\ref{decomposit}) is sufficient.
The curvature tensor is given by:
\begin{equation}
\label{curva}
R_{\alpha\nu\mu}^{\ \ \ \ \beta} = \partial_\alpha \Gamma_{\nu\mu}^\beta
- \partial_\nu \Gamma_{\alpha\mu}^\beta
+ \Gamma_{\alpha\rho}^\beta \Gamma_{\nu\mu}^\rho
- \Gamma_{\nu\rho}^\beta \Gamma_{\alpha\mu}^\rho .
\end{equation}
After some algebraic manipulations we get the following expression for the
scalar of curvature $R$, obtained from suitable contractions of (\ref{curva}),
\begin{equation}
R\left(g_{\mu\nu},\Gamma^\gamma_{\alpha\beta}\right) =
g^{\mu\nu} R_{\alpha\mu\nu}^{\ \ \ \ \alpha} =
{\cal R} - 4D_\mu S^\mu + \frac{16}{3}S_\mu S^\mu -
\tilde{K}_{\nu\rho\alpha} \tilde{K}^{\alpha\nu\rho},
\label{scurv}
\end{equation}
where ${\cal R}\left(g_{\mu\nu},\left\{_{\alpha\beta}^\gamma \right\}\right)$
is the Riemannian scalar of curvature, calculated from the
Christoffel symbols.
In order to define a generally covariant volume element on a manifold, it is
necessary to introduce a density $f(x)$ that compensates
the Jacobian arising from the transformation law
of the usual volume element
$d^4x$ under a coordinate transformation,
\begin{equation}
d^4x \rightarrow f(x) d^4x = d{\rm vol}.
\end{equation}
Usually, the density $f(x) = \sqrt{-g}$ is taken for this purpose. However,
there are
natural properties that a volume element should exhibit.
In a Riemannian manifold, the usual covariant volume element
\begin{equation}
d{\rm vol} = \sqrt{-g}\, d^4x,
\label{vele}
\end{equation}
is parallel, in the sense that the scalar density $\sqrt{-g}$ obeys
\begin{equation}
{\cal D}_\mu\sqrt{-g} = 0,
\end{equation}
where ${\cal D}_\mu$ is the covariant derivative defined using the
Christoffel symbols.
One can infer that the volume element (\ref{vele}) is not parallel
when the spacetime is not torsionless, since
\begin{equation}
D_\mu\sqrt{-g}= \partial_\mu\sqrt{-g} - \Gamma^\rho_{\rho\mu}\sqrt{-g} =
-2 S_\mu\sqrt{-g},
\end{equation}
as can be checked using the properties of the Christoffel symbols. This is the
main point we wish to stress; it will be the basic argument for our claim
that the usual volume element (\ref{vele}) is not the most appropriate one
in the presence of torsion, as will be discussed in the next section.
The question
that arises now is whether it is possible to define a
parallel volume element on RC manifolds.
To do so,
one needs to find a density $f(x)$ such that
$D_\mu f(x)=0$. Such a density exists only if the trace
of the torsion tensor, $S_\mu$, can be obtained
from a scalar potential\cite{saa1}
\begin{equation}
S_\beta(x) = \partial_\beta \Theta(x),
\label{pot}
\end{equation}
and in this case we have $f(x)=e^{2\Theta}\sqrt{-g}$, and
\begin{equation}
d{\rm vol} = e^{2\Theta}\sqrt{-g} \,d^4x,
\label{u4volume}
\end{equation}
which is the parallel RC volume element;
in other words, the volume element (\ref{u4volume}) is compatible
with the connection on RC manifolds obeying (\ref{pot}).
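One can check this directly. Since the trace of the contorsion in (\ref{decomposit}) gives $\Gamma^\rho_{\rho\mu} = \partial_\mu\ln\sqrt{-g} + 2S_\mu$, the covariant derivative of the density $e^{2\Theta}\sqrt{-g}$ is

```latex
D_\mu\!\left(e^{2\Theta}\sqrt{-g}\right)
 = \partial_\mu\!\left(e^{2\Theta}\sqrt{-g}\right)
   - \Gamma^\rho_{\rho\mu}\, e^{2\Theta}\sqrt{-g}
 = 2\left(\partial_\mu\Theta - S_\mu\right) e^{2\Theta}\sqrt{-g},
```

which vanishes precisely when the condition (\ref{pot}) holds.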
It is not usual to find in the literature applications where volume
elements other than the canonical one are used.
Non-standard
volume elements have been used
in the characterization of half-flat solutions of
Einstein's equations\cite{volu},
in the description of field theory on Riemann-Cartan
spacetimes\cite{saa1,saa2} and of dilatonic gravity\cite{saa4},
and in the study of some aspects of BRST symmetry\cite{AD}.
In our case the
new volume element appears naturally: in the same way that we require
compatibility between the metric tensor and the linear
connection, we can require it between the connection and the volume element.
With
the volume element (\ref{u4volume}), we have the following generalized
Gauss' formula
\begin{equation}
\int d{\rm vol}\, D_\mu V^\mu =
\int d^4x\, \partial_\mu\!\left(e^{2\Theta}\sqrt{-g}\, V^\mu\right) =\
{\rm surface\ term},
\label{gauss}
\end{equation}
where we used
that
\begin{equation}
\label{gammacontr}
\Gamma^\rho_{\rho\mu}=\partial_\mu\ln e^{2\Theta}\sqrt{-g}
\end{equation}
under the hypothesis (\ref{pot}). It is easy to see that one cannot have a
generalized Gauss' formula of the type (\ref{gauss}) if the torsion does not
obey (\ref{pot}). We will return to discuss the actual role of the condition
(\ref{pot}) in the last section.
\section{Minimal coupling procedure and minimal action principle}
As already said, our model arises from elementary considerations
on the minimal coupling procedure and the minimal action principle.
The minimal coupling procedure (MCP) provides a useful rule for obtaining
the equations for any physical field on non-Minkowskian manifolds starting
from their Special Relativity (SR) versions. When studying classical
fields on a non-Minkowskian manifold $\cal X$ we usually require that the
equations of motion for such fields have an appropriate SR limit. There are,
of course, infinitely many covariant equations on $\cal X$ with the same
SR limit, and MCP resolves this arbitrariness by selecting the ``simplest''
ones. MCP can be formulated heuristically as follows: starting from the
equations of motion for a classical field in SR, one obtains their version
on a non-Minkowskian spacetime $\cal X$ by replacing the partial derivatives
with the $\cal X$-covariant ones and the Minkowski metric tensor with the
$\cal X$ one. MCP is also used in the classical and quantum
analysis of gauge fields, where the gauge field is interpreted
as a connection, and it is in spectacular agreement with
experiment for QED and QCD.
Suppose now that the SR equations of motion for a classical field follow
from an action functional via minimal action principle (MAP). It is natural to
expect that the equations obtained by using MCP to the SR equations
coincide with the Euler-Lagrange equations of the action obtained
via MCP of the SR one. This can be better visualized with the help of the
following diagram\cite{saa5}
\setlength{\unitlength}{1mm}
$$
\addtocounter{equation}{1}
\newlabel{diagr}{{3.1}{3.1}}
\hspace{106pt}
\begin{picture}(52,28)
\put(3,20) {$ {\cal C}_{ {\cal L}_{\rm SR} }$}
\put(7,18){\vector(0,-1){9}}
\put(3,5){$ E({\cal L}_{\rm SR}) $}
\put(45,20){${ \cal C_{L_X} }$ }
\put(40,5){$ E({\cal L}_{\cal X})$}
\put(47,18){\vector(0,-1){9}}
\put(12,22){\vector(1,0){30}}
\put(17,7){\vector(1,0){22}}
\put(24,24){${\scriptstyle \rm MCP}$}
\put(27,9){${\scriptstyle \rm MCP}$}
\put(8,13){${\scriptstyle \rm MAP}$}
\put(48,13){${\scriptstyle \rm MAP}$}
\end{picture}
\hspace{116pt}\raise 7ex \hbox{(\theequation)}
$$
where $E({\cal L})$ stands to the Euler-Lagrange equations for the
Lagrangian $\cal L$, and ${\cal C}_{\cal L}$ is the equivalence class
of Lagrangians, ${\cal L}'$ being equivalent to $\cal L$ if
$E({\cal L}')=E({\cal L})$.
We restrict ourselves to the case of non-singular Lagrangians.
The diagram (\ref{diagr}) is verified in GR. We say that MCP is
compatible with MAP if (\ref{diagr}) holds. We stress that if (\ref{diagr})
does not hold we have another arbitrariness to resolve: one needs to choose
between two equations, as we will show with a simple example.
It is not difficult to check that in general MCP is not compatible with MAP
when spacetime is assumed to be non-Riemannian.
Let us examine for simplicity
the case of a massless scalar field $\varphi$ in the frame of Einstein-Cartan
gravity\cite{saa1}. The equation for $\varphi$ in SR is
\begin{equation}
\partial_\mu\partial^\mu\varphi=0,
\label{e2}
\end{equation}
which follows from the extremals of the action
\begin{equation}
\label{act}
S_{\rm SR} =
\int d{\rm vol}\, \eta^{\mu\nu}\partial_\mu\varphi\partial_\nu\varphi.
\end{equation}
Using MCP to (\ref{act}) one gets
\begin{equation}
\label{act1}
S_{\cal X} = \int d{\rm vol}\, g^{\mu\nu}
\partial_\mu\varphi\partial_\nu\varphi,
\end{equation}
and using the Riemannian volume element for $\cal X$, $
d{\rm vol} = \sqrt{-g}\,d^4x$, we get the following equation from the
extremals of (\ref{act1})
\begin{equation}
\label{aa22}
\frac{1}{\sqrt{-g}}\partial_\mu \left(\sqrt{-g}\,\partial^\mu\varphi\right) = 0.
\end{equation}
It is clear that (\ref{aa22}) does not coincide in general with the
equation obtained via MCP of (\ref{e2})
\begin{equation}
\label{e3}
\partial_\mu\partial^\mu\varphi + \Gamma^\mu_{\mu\alpha}
\partial^\alpha\varphi =
\frac{1}{\sqrt{-g}}\partial_\mu \left(\sqrt{-g}\,\partial^\mu\varphi\right)
+ 2 \Gamma^\mu_{[\mu\alpha]} \partial^\alpha\varphi = 0.
\end{equation}
We have here an ambiguity: equations (\ref{aa22}) and (\ref{e3}) are in
principle equally acceptable, and choosing one of them amounts to choosing,
from the MCP point of view, either the equations of motion or the action
formulation as the more fundamental object. As already said, we do not have
such an ambiguity when spacetime is assumed to be
a Riemannian manifold. This
is not a peculiarity of massless scalar fields; all matter fields behave
in the same way in the frame of Einstein-Cartan gravity.
An accurate analysis of the diagram (\ref{diagr}) reveals that the source
of the compatibility problems between MCP and MAP is the volume element
of $\cal X$. The necessary and sufficient condition for the validity of
(\ref{diagr}) is that the equivalence class of Lagrangians
${\cal C}_{\cal L}$ be preserved under MCP. With our definition of
equivalence we have that
\begin{equation}
\label{class}
{\cal C}_{ {\cal L}_{\rm SR} } \equiv \left\{ {\cal L}'_{\rm SR}|
{\cal L}'_{\rm SR} - {\cal L}_{\rm SR} = \partial_\mu V^\mu \right\},
\end{equation}
where $V^{\mu}$ is a vector field. The application of MCP to the
divergence $\partial_\mu V^\mu$ in (\ref{class}) gives $D_\mu V^\mu$,
and in order for the set
\begin{equation}
\left\{ {\cal L}'_{\cal X}|
{\cal L}'_{\cal X} - {\cal L}_{\cal X} = D_\mu V^\mu \right\}
\end{equation}
to be an equivalence class, one needs a Gauss-like law analogous to
(\ref{gauss}) associated with the divergence $D_\mu V^\mu$.
As already stated in Section II, the necessary and sufficient
condition for such a Gauss law is that the trace of the torsion
tensor obey (\ref{pot}).
With the use of the parallel volume element in the
action formulation for EC gravity we can have qualitatively
different predictions. The scalar of curvature (\ref{scurv})
involves terms quadratic in the torsion.
Due to (\ref{pot}), such quadratic terms provide a differential
equation for $\Theta$, which allows for non-vanishing torsion
solutions in the vacuum.
As to the matter fields, the use of the parallel volume element,
besides guaranteeing that the diagram (\ref{diagr}) holds, also
brings qualitative changes.
For example, it is possible to have a minimal interaction between
Maxwell fields and torsion preserving gauge symmetry. The next section
is devoted to the study of EC equations obtained by using the
parallel volume element (\ref{u4volume}).
\section{The model}
Now, EC gravity will be reconstructed by
using the results of the previous sections. Spacetime will be
assumed to be a Riemann-Cartan
manifold with the parallel volume element (\ref{u4volume}); of course,
the restriction that the trace of the torsion tensor be
derived from a scalar potential, condition (\ref{pot}), is implicit.
Under these hypotheses, EC theory of gravity predicts new effects, which
will be pointed out in the following subsections.
\subsection{Vacuum equations}
According to our hypothesis,
in order to get the EC gravity equations we will assume that they
can be obtained from an Einstein-Hilbert action using the scalar of
curvature (\ref{scurv}), the condition (\ref{pot}), and the
volume element (\ref{u4volume}),
\begin{eqnarray}
\label{vaction}
S_{\rm grav} &=& -\int d^4x e^{2\Theta} \sqrt{-g} \, R \\
&=&-\int d^4x e^{2\Theta} \sqrt{-g} \left(
{\cal R} + \frac{16}{3} \partial_\mu\Theta \partial^\mu \Theta
- \tilde{K}_{\nu\rho\alpha} \tilde{K}^{\alpha\nu\rho}
\right) + {\rm surf. \ terms}, \nonumber
\end{eqnarray}
where the generalized Gauss' formula (\ref{gauss}) was used.
The equations for the $g^{\mu\nu}$, $\Theta$, and
$\tilde{K}_{\nu\rho\alpha}$ fields follow from the extremals of the action
(\ref{vaction}).
The variations of $g^{\mu\nu}$ and $S_{\mu\nu}^{\ \ \rho}$ are assumed to
vanish on the boundary.
The equation $\frac{\delta S_{\rm grav}}{\delta\tilde{K}_{\nu\rho\alpha}} =0$
implies that $\tilde{K}^{\nu\rho\alpha} = 0$,
$\frac{\delta S_{\rm grav}}{\delta\tilde{K}_{\nu\rho\alpha}}$ standing for the
Euler-Lagrange equations for
$\tilde{K}_{\nu\rho\alpha}$.
For the other equations we have
\begin{eqnarray}
\label{1st}
-\frac{e^{-2\Theta}}{\sqrt{-g}}
\left.\frac{\delta }{\delta g^{\mu\nu}}S_{\rm grav}
\right|_{\tilde{K}=0} &=& {\cal R}_{\mu\nu}
-2D_\mu \partial_\nu\Theta \nonumber \\
&&-\frac{1}{2}g_{\mu\nu}
\left(
{\cal R} + \frac{8}{3}\partial_\rho\Theta \partial^\rho \Theta
-4 \Box \Theta
\right) = 0, \\
-\frac{e^{-2\Theta}}{2\sqrt{-g}}
\left.\frac{\delta }{\delta \Theta}S_{\rm grav}
\right|_{\tilde{K}=0} &=&
{\cal R} + \frac{16}{3}\left(
\partial_\mu\Theta \partial^\mu \Theta -
\Box \Theta \right) =0, \nonumber
\end{eqnarray}
where
${\cal R}_{\mu\nu}
\left(g_{\mu\nu},\left\{_{\alpha\beta}^\gamma \right\}\right)$
is the usual Ricci tensor, calculated using the
Christoffel symbols, and $\Box = D_\mu D^\mu$.
Taking the trace of the first equation of (\ref{1st}),
\begin{equation}
{\cal R} + \frac{16}{3}\partial_\mu\Theta \partial^\mu \Theta =
6\Box\Theta,
\end{equation}
and using it, one finally obtains
the equations for the vacuum,
\begin{eqnarray}
\label{vacum0}
{\cal R}_{\mu\nu} &=& 2D_\mu\partial_\nu \Theta
- \frac{4}{3} g_{\mu\nu}\partial_\rho\Theta \partial^\rho \Theta
= 2D_\mu S_\nu - \frac{4}{3}g_{\mu\nu}S_\rho S^\rho, \nonumber \\
\Box \Theta &=& \frac{e^{-2\Theta}}{\sqrt{-g}}
\partial_\mu e^{2\Theta}\sqrt{-g}\partial^\mu\Theta = D_\mu S^\mu = 0, \\
\tilde{K}_{\alpha\beta\gamma} &=& 0. \nonumber
\end{eqnarray}
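For completeness, the contraction used in this reduction can be spelled out.
Contracting the first equation of (\ref{1st}) with $g^{\mu\nu}$, and using
$g^{\mu\nu}g_{\mu\nu}=4$ and $g^{\mu\nu}D_\mu\partial_\nu\Theta=\Box\Theta$,
gives
\begin{displaymath}
{\cal R} - 2\Box\Theta - 2\left(
{\cal R} + \frac{8}{3}\partial_\rho\Theta \partial^\rho \Theta
- 4\Box\Theta \right) = 0,
\end{displaymath}
which rearranges to
${\cal R} + \frac{16}{3}\partial_\mu\Theta \partial^\mu \Theta = 6\Box\Theta$,
the trace equation used above.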
The vacuum equations (\ref{vacum0})
point out new features of our model. It is
clear that torsion, described by the last two equations,
propagates.
The torsion mediated interactions are not of
contact type anymore. The traceless tensor $\tilde{K}_{\alpha\beta\gamma}$
is zero for the vacuum, and only the trace $S_\mu$ can be non-vanishing
outside matter distributions. As expected, the gravity field
configuration for the vacuum is determined only
by boundary conditions, and if
due to such conditions we have that $S_\mu=0$, our equations reduce to the
usual vacuum equations, $S_{\alpha\gamma\beta}=0$, and
${\cal R}_{\alpha\beta}=0$. Note that this is the case if one considers
particle-like solutions (solutions that go to zero asymptotically).
Equations (\ref{vacum0}) are valid only in the exterior region of the
sources. For a discussion of the case with sources see \cite{H1}.
The first term on the right-hand side of the first equation
of (\ref{vacum0}) appears
to be non-symmetrical under the interchange $(\mu\leftrightarrow\nu)$,
but in fact it is symmetrical, as one can see using (\ref{pot}) and
the last equation of (\ref{vacum0}). Of course, if
$\tilde{K}_{\alpha\beta\gamma}\neq 0$ this term will be non-symmetrical,
and this is the case when fermionic fields are present, as we will see.
It is not difficult to generate solutions for (\ref{vacum0})
starting from the well-known solutions of the minimally coupled
scalar-tensor gravity\cite{saa6}.
\subsection{Scalar fields}
The first step to introduce matter fields in our discussion
will be the description of
scalar fields on RC manifolds.
In order to do it, we will use MCP according to Section II.
For a massless scalar field one gets
\begin{eqnarray}
\label{scala}
S &=& S_{\rm grav} + S_{\rm scal} = -\int \,d^4xe^{2\Theta}\sqrt{-g}
\left(R -\frac{g^{\mu\nu}}{2} \partial_\mu\varphi \partial_\nu \varphi
\right)\\
&=&-\int d^4x e^{2\Theta} \sqrt{-g} \left(
{\cal R} + \frac{16}{3} \partial_\mu\Theta \partial^\mu \Theta
- \tilde{K}_{\nu\rho\alpha} \tilde{K}^{\alpha\nu\rho}
-\frac{g^{\mu\nu}}{2} \partial_\mu\varphi \partial_\nu \varphi
\right), \nonumber
\end{eqnarray}
where surface terms were discarded.
The equations for this case are obtained by varying (\ref{scala}) with
respect to $\varphi$, $g^{\mu\nu}$, $\Theta$, and
$\tilde{K}_{\alpha\beta\gamma}$. As in the vacuum case, the equation
$\frac{\delta S}{\delta \tilde{K}}=0$
implies $\tilde{K}=0$. Taking it into
account we have
\begin{eqnarray}
\label{e1}
-\frac{e^{-2\Theta}}{\sqrt{-g}} \left.
\frac{\delta S}{\delta\varphi}
\right|_{\tilde{K}=0} &=& \frac{e^{-2\Theta}}{\sqrt{-g}}\partial_\mu
e^{2\Theta}\sqrt{-g}\partial^\mu\varphi
=\Box \varphi = 0, \nonumber \\
-\frac{e^{-2\Theta}}{\sqrt{-g}} \left.
\frac{\delta S}{\delta g^{\mu\nu}}
\right|_{\tilde{K}=0} &=& {\cal R}_{\mu\nu}
- 2 D_\mu S_\nu - \frac{1}{2} g_{\mu\nu}
\left(
{\cal R} + \frac{8}{3}S_\rho S^\rho - 4 D_\rho S^\rho
\right) \nonumber \\
&&-\frac{1}{2} \partial_\mu \varphi \partial_\nu\varphi
+ \frac{1}{4} g_{\mu\nu}\partial_\rho \varphi \partial^\rho \varphi = 0, \\
-\frac{e^{-2\Theta}}{2\sqrt{-g}} \left.
\frac{\delta S}{\delta \Theta}
\right|_{\tilde{K}=0} &=& {\cal R} +
\frac{16}{3}\left( S_\mu S^\mu - D_\mu S^\mu\right)
-\frac{1}{2} \partial_\mu\varphi \partial^\mu\varphi = 0. \nonumber
\end{eqnarray}
Taking the trace of the second equation of (\ref{e1}),
\begin{equation}
{\cal R} + \frac{16}{3} S_\mu S^\mu = 6 D_\mu S^\mu +
\frac{1}{2} \partial_\mu\varphi \partial^\mu \varphi,
\end{equation}
and using it, we get the following
set of equations for the massless scalar case
\begin{eqnarray}
\label{aa}
\Box \varphi &=& 0, \nonumber \\
{\cal R}_{\mu\nu} &=& 2D_\mu S_\nu - \frac{4}{3}g_{\mu\nu} S_\rho S^\rho
+\frac{1}{2} \partial_\mu\varphi \partial_\nu\varphi, \\
D_\mu S^\mu &=& 0, \nonumber \\
\tilde{K}_{\alpha\beta\gamma} &=& 0. \nonumber
\end{eqnarray}
As one can see, the torsion equations have the same form as the ones
of the vacuum case (\ref{vacum0}). Any
contribution to the torsion will be due to boundary conditions, and not due
to the scalar field itself.
It means that if such boundary conditions imply $S_\mu=0$, the
equations for the fields $\varphi$ and $g_{\mu\nu}$ will be the same
as in GR.
One can interpret this by saying that,
even feeling the torsion (see the second equation of (\ref{aa})),
massless scalar fields do not produce it. Such behavior is
compatible with the idea that torsion must be governed by spin distributions.
However, considering massive scalar fields,
\begin{eqnarray}
S_{\rm scal} = \int \,d^4xe^{2\Theta}\sqrt{-g}
\left(\frac{g^{\mu\nu}}{2} \partial_\mu\varphi \partial_\nu \varphi
-\frac{m^2}{2}\varphi^2 \right),
\end{eqnarray}
we have the
following set of equations instead of (\ref{aa})
\begin{eqnarray}
\label{aa1}
(\Box+m^2) \varphi &=& 0, \nonumber \\
{\cal R}_{\mu\nu} &=& 2D_\mu S_\nu - \frac{4}{3}g_{\mu\nu} S_\rho S^\rho
+\frac{1}{2} \partial_\mu\varphi \partial_\nu\varphi
-\frac{1}{2} g_{\mu\nu} m^2\varphi^2, \\
D_\mu S^\mu &=& \frac{3}{4}m^2\varphi^2, \nonumber \\
\tilde{K}_{\alpha\beta\gamma} &=& 0. \nonumber
\end{eqnarray}
The equation for the trace of the torsion tensor is different from the one of
the vacuum case: the massive scalar field
couples to torsion in a different way than the massless one does.
In contrast to the massless case, the equations (\ref{aa1}) do not admit
$S_\mu=0$ as a solution for non-vanishing $\varphi$ (again, for particle-like
solutions we have $\varphi=0$ and $S_\mu=0$).
This is in disagreement with the traditional belief that torsion must be
governed by spin distributions. We will return to this point in the last
section.
\subsection{Gauge fields}
We need to be careful when applying MCP to gauge fields. We will restrict
ourselves to the abelian case in this work;
non-abelian gauge fields bring some
technical difficulties that would not contribute to the understanding
of the basic problems of gauge fields on Riemann-Cartan spacetimes.
The Maxwell field can be described by the differential
$2$-form
\begin{equation}
F = dA = d(A_\alpha dx^\alpha) = \frac{1}{2}F_{\alpha\beta}dx^\alpha
\label{form}
\wedge dx^\beta,
\end{equation}
where $A$ is the (local) potential $1$-form, and
$F_{\alpha\beta}=\partial_\alpha A_\beta- \partial_\beta A_\alpha$ is the
usual electromagnetic tensor. It is important to stress that the
forms $F$ and
$A$ are covariant objects in any differentiable manifolds. Maxwell equations
can be written in Minkowski spacetime in terms of exterior calculus as
\begin{eqnarray}
\label{maxeq}
dF&=&0, \\
d {}^*\!F &=& 4\pi {}^*\! J, \nonumber
\end{eqnarray}
where ${}^*$ stands for the Hodge star operator and $J$ is the current
$1$-form, $J=J_\alpha dx^\alpha$. The first equation in (\ref{maxeq}) is
a consequence of the definition (\ref{form}) and of Poincar\'e's lemma.
In terms of components, one has the familiar homogeneous and non-homogeneous
Maxwell's equations,
\begin{eqnarray}
\label{maxeq1}
\partial_{[\gamma} F_{\alpha\beta]} &=& 0, \\
\partial_\mu F^{\nu\mu} &=& 4\pi J^\nu, \nonumber
\end{eqnarray}
where ${}_{[\ \ \ ]}$ means antisymmetrization. We also know that the
non-ho\-mo\-ge\-neous equation follows from the extremals
of the following action
\begin{equation}
S = -\int \left(4\pi{}^*\!J\wedge A +\frac{1}{2} F \wedge {}^*\!F\right) =
\int d^4x\left(4\pi J^\alpha A_\alpha - \frac{1}{4}
F_{\alpha\beta}F^{\alpha\beta} \right).
\label{actmink}
\end{equation}
If one tries to cast (\ref{actmink}) in a covariant way by using MCP in the
tensorial quantities, we have that Maxwell tensor will be given by
\begin{equation}
\label{tilda}
F_{\alpha\beta}\rightarrow
\tilde{F}_{\alpha\beta} =
F_{\alpha\beta} - 2 S_{\alpha\beta}^{\ \ \rho}A_\rho,
\end{equation}
which explicitly breaks gauge invariance. From this analysis, one usually
concludes that gauge fields cannot interact minimally with
Einstein-Cartan gravity. We would stress another undesired
consequence, also related to the breaking of gauge symmetry, of the use of MCP
on the tensorial quantities. The homogeneous Maxwell equation, the
first of (\ref{maxeq1}), does not come from a Lagrangian, and of course,
if we choose to apply
MCP to the tensorial quantities, we must also apply it to this equation. We get
\begin{equation}
\partial_{[\alpha} \tilde{F}_{\beta\gamma]} +
2 S_{[\alpha\beta}^{\ \ \rho} \tilde{F}_{\gamma]\rho} = 0 ,
\label{falac}
\end{equation}
where $\tilde{F}_{\alpha\beta}$ is given by (\ref{tilda}). One can see that
(\ref{falac}) has no general solution for arbitrary
$S_{\alpha\beta}^{\ \ \rho}$. Besides breaking gauge symmetry,
the use of MCP on the tensorial quantities thus also leads to an inconsistent
homogeneous equation.
However, MCP can be successfully applied to general gauge fields
(abelian or not) via the differential-form quantities \cite{saa2}. As a
consequence, the homogeneous equation is already in
covariant form on any differentiable manifold, and the covariant
non-homogeneous equations can be obtained from a Lagrangian constructed only by
changing the metric tensor and by
introducing the parallel volume element in the Minkowskian action
(\ref{actmink}). Considering the case where $J^\mu=0$, we have the
following action to describe the interaction of Maxwell fields and
Einstein-Cartan gravity
\begin{equation}
\label{actmax}
S = S_{\rm grav} + S_{\rm Maxw} = -\int \,d^4x e^{2\Theta} \sqrt{-g}
\left(
R + \frac{1}{4}F_{\mu\nu}F^{\mu\nu}
\right).
\end{equation}
As in the previous cases, the equation $\tilde{K}_{\alpha\beta\gamma}=0$
follows from the extremals of (\ref{actmax}).
The other equations will be
\begin{eqnarray}
\label{ee1}
&&\frac{e^{-2\Theta}}{\sqrt{-g}}\partial_\mu e^{2\Theta}\sqrt{-g} F^{\nu\mu}
=0, \nonumber \\
&& {\cal R}_{\mu\nu} = 2D_\mu S_\nu - \frac{4}{3}g_{\mu\nu}S_\rho S^\rho
-\frac{1}{2} \left(F_{\mu\alpha}F^{\ \alpha}_\nu
+\frac{1}{2}g_{\mu\nu} F_{\omega\rho}F^{\omega\rho} \right), \\
&& D_\mu S^\mu = -\frac{3}{8}F_{\mu\nu}F^{\mu\nu}. \nonumber
\end{eqnarray}
One can see that the equations (\ref{ee1}) are invariant under the usual
$U(1)$ gauge transformations. It is also clear
from the equations (\ref{ee1}) that Maxwell fields can interact with the
non-Riemannian structure of spacetime. Moreover, as in the massive
scalar case, the equations do not admit $S_\mu=0$ as a solution for arbitrary
$F_{\alpha\beta}$: Maxwell fields are also sources of spacetime torsion.
Similar results can be obtained also for non-abelian gauge fields\cite{saa2}.
\subsection{Fermion fields}
The Lagrangian for a (Dirac)
fermion field with mass $m$ in the Minkowski spacetime
is given by
\begin{equation}
{\cal L}_{\rm F}=\frac{i}{2}\left(\overline{\psi}\gamma^a\partial_a\psi
- \left(\partial_a\overline{\psi} \right)\gamma^a\psi \right)
- m\overline{\psi}\psi,
\label{fermion}
\end{equation}
where $\gamma^a$ are the Dirac matrices and
$\overline{\psi}=\psi^\dagger\gamma^0$. Greek indices denote spacetime
coordinates (holonomic), and roman ones locally flat coordinates
(non-holonomic). It is well known\cite{hehl}
that in order to cast (\ref{fermion}) in a covariant way, one needs to
introduce the vierbein field, $e^\mu_a(x)$, and
to generalize the Dirac matrices,
$\gamma^\mu(x) = e^\mu_a(x)\gamma^a$. The partial derivatives also must be
generalized with the introduction of the spinorial connection $\omega_\mu$,
\begin{eqnarray}
\partial_\mu\psi \rightarrow
\nabla_\mu\psi &=& \partial_\mu\psi+ \omega_\mu \psi, \nonumber \\
\partial_\mu\overline{\psi} \rightarrow
\nabla_\mu\overline{\psi} &=& \partial_\mu\overline{\psi} -
\overline{\psi}\omega_\mu,
\end{eqnarray}
where the spinorial connection is given by
\begin{eqnarray}
\label{spincon}
\omega_\mu &=& \frac{1}{8}[\gamma^a,\gamma^b]e^\nu_a\left(
\partial_\mu e_{\nu b} -\Gamma^\rho_{\mu\nu}e_{\rho b}\right) \\
&=& \frac{1}{8}\left(
\gamma^\nu\partial_\mu\gamma_\nu - \left(\partial_\mu\gamma_\nu \right)
\gamma^\nu - \left[\gamma^\nu,\gamma_\rho \right] \Gamma^\rho_{\mu\nu}
\right). \nonumber
\end{eqnarray}
The last
step, according to our hypothesis, shall be
the introduction of the parallel
volume element, and after that one
gets the following action for fermion fields on RC manifolds
\begin{equation}
S_{\rm F} = \int d^4x e^{2\Theta}\sqrt{-g}\left\{
\frac{i}{2}\left(\overline{\psi}\gamma^\mu(x)\nabla_\mu\psi -
\left(\nabla_\mu\overline{\psi}\right)\gamma^\mu(x)\psi \right)
-m\overline{\psi}\psi
\right\}.
\label{fermioncov}
\end{equation}
Varying the action (\ref{fermioncov}) with respect to $\overline{\psi}$ one
obtains:
\begin{equation}
\frac{e^{-2\Theta}}{\sqrt{-g}}\frac{\delta S_{\rm F}}{\delta\overline{\psi}} =
\frac{i}{2}\left(\gamma^\mu\nabla_\mu\psi + \omega_\mu\gamma^\mu\psi \right)
-m \psi + \frac{i}{2}\frac{e^{-2\Theta}}{\sqrt{-g}} \partial_\mu
e^{2\Theta}\sqrt{-g}\gamma^\mu\psi = 0.
\end{equation}
Using the result
\begin{equation}
[\omega_\mu,\gamma^\mu]\psi = - \left(
\frac{e^{-2\Theta}}{\sqrt{-g}}\partial_\mu e^{2\Theta}\sqrt{-g}\gamma^\mu
\right)\psi,
\end{equation}
which can be checked using (\ref{spincon}),
(\ref{gammacontr}), and
properties of ordinary Dirac matrices and of the vierbein field,
we get the following equation for $\psi$ on a RC spacetime:
\begin{equation}
\label{psi}
i\gamma^\mu(x)\nabla_\mu\psi - m\psi =0.
\end{equation}
The equation for $\overline{\psi}$ can be obtained in a similar way,
\begin{equation}
\label{psibar}
i \left( \nabla_\mu\overline{\psi}\right) \gamma^\mu(x)
+ m\overline{\psi} = 0.
\end{equation}
We can see that equations (\ref{psi}) and (\ref{psibar}) are the same
ones that arise from applying MCP to the Minkowskian equations of motion.
In the usual EC theory, the equations obtained from the action principle do not
coincide with those obtained by generalizing the Minkowskian
ones. This is another new feature of the proposed model.
The Lagrangian that describes the interaction of fermion fields with the
Einstein-Cartan gravity is
\begin{eqnarray}
\label{actferm}
S &=& S_{\rm grav} +
S_{\rm F} \\ &=& - \int d^4x e^{2\Theta}\sqrt{-g} \left\{
R - \frac{i}{2}\left(\overline{\psi}\gamma^\mu\partial_\mu\psi -
\left(\partial_\mu\overline{\psi}\right)\gamma^\mu\psi
\right.\right. \nonumber \\
&& \ \ \ \ \ \ \ \ \ \ \
+ \left.\left.
\overline{\psi}\left[\gamma^\mu,\omega_\mu\right] \psi\right)
+ m\overline{\psi}\psi \right\} \nonumber \\
&=& - \int d^4x e^{2\Theta}\sqrt{-g} \left\{
R - \frac{i}{2}\left(\overline{\psi}\gamma^\mu\partial_\mu\psi -
\left(\partial_\mu\overline{\psi}\right)\gamma^\mu\psi
\right.\right. \nonumber \\
&& \ \ \ \ \ \ \ \ \ \ \
+ \left.\left.
\overline{\psi}\left[\gamma^\mu,\tilde{\omega}_\mu\right] \psi\right)
-\frac{i}{8}\overline{\psi}\tilde{K}_{\mu\nu\omega}
\gamma^{[\mu}\gamma^\nu\gamma^{\omega]} \psi
+ m\overline{\psi}\psi \right\},\nonumber
\end{eqnarray}
where it was used that
$\gamma^a\left[\gamma^b,\gamma^c\right]+
\left[\gamma^b,\gamma^c\right]\gamma^a=
2\gamma^{[a}\gamma^b\gamma^{c]}$, and that
\begin{equation}
\omega_\mu = \tilde{\omega}_\mu +\frac{1}{8}K_{\mu\nu\rho}
\left[\gamma^\nu,\gamma^\rho\right],
\end{equation}
where $\tilde{\omega}_\mu$ is the
Riemannian spinorial connection, calculated by
using the Christoffel symbols instead of the full connection in
(\ref{spincon}).
The peculiarity of fermion fields is that one has a non-trivial equation
for $\tilde{K}$ from (\ref{actferm}).
The Euler-Lagrange equation for $\tilde{K}$ is given by
\begin{eqnarray}
\frac{e^{-2\Theta}}{\sqrt{-g}} \frac{\delta S}{\delta\tilde{K}} =
\tilde{K}^{\mu\nu\omega} + \frac{i}{8}\overline{\psi}
\gamma^{[\mu}\gamma^\nu\gamma^{\omega]}\psi = 0.
\label{ka}
\end{eqnarray}
In contrast to the previous cases, the traceless part of
the contorsion tensor,
$\tilde{K}_{\alpha\beta\gamma}$, is proportional to the spin
distribution. It still vanishes outside the
matter distribution: since its equation is algebraic, it does not
allow propagation. The other equations follow from the extremals of
(\ref{actferm}). The main difference between these equations and the usual
ones obtained from standard EC gravity is that in the present case one
has a non-trivial solution for the trace of the torsion tensor, which is
derived from $\Theta$. In standard EC gravity, the torsion tensor is
totally anti-symmetrical and thus has vanishing trace.
\section{Final remarks}
In this section, we are going to discuss the role of the
condition (\ref{pot}) and the source for torsion in the proposed model.
The condition (\ref{pot}) is necessary for the definition of a parallel
volume element on a manifold. Therefore, our approach is
restricted to spacetimes which admit such volume elements.
This restriction is automatic if we wish to use
MAP in the sense discussed in Section II.
Although it is not clear how to get the EC gravity equations without
using a minimal action principle, we can speculate about matter fields
on spacetimes not obeying (\ref{pot}). Since it is not equivalent
to use MCP in the equations of motion or in the action formulation, we
can forget the latter and cast the equations of motion for matter
fields in a covariant way directly. This can be done easily, for example,
for scalar fields \cite{saa1}. We get equation (\ref{e3}),
which is apparently a consistent equation. However, we need to define
an inner product on the space of solutions of (\ref{e3})
\cite{dewitt}, and we are able to do so only if (\ref{pot}) holds.
Thus the dynamics of matter fields requires some restrictions
on the non-Riemannian structure of spacetime, namely condition
(\ref{pot}). This is more evident for gauge fields, where
(\ref{pot}) arises directly as an integrability condition for the
equations of motion \cite{saa2}. It seems that condition (\ref{pot}) cannot
be avoided.
From the matter fields studied we see that the trace of the
torsion tensor is not directly related to spin distributions. This is a
new feature of the proposed model, and we naturally arrive at the
following question: what is the source of torsion? For the
traceless part of the torsion tensor the situation is the same as in the
standard EC theory: only fermion fields can be its source. For the
trace part, the situation is quite different.
Take for example $\tilde{K}_{\alpha\beta\gamma}=0$, which corresponds to
scalar and gauge fields.
In this case, using the definition of the energy-momentum tensor
\begin{equation}
\frac{e^{-2\Theta}}{\sqrt{-g}}
\frac{\delta S_{\rm mat}}{\delta g^{\mu\nu}} = -\frac{1}{2}T_{\mu\nu},
\end{equation}
and that for scalar and gauge fields we have
\begin{equation}
\frac{e^{-2\Theta}}{\sqrt{-g}}
\frac{\delta S_{\rm mat}}{\delta \Theta} = 2 {\cal L}_{\rm mat},
\end{equation}
one gets
\begin{equation}
D_\mu S^\mu = \frac{3}{2}
\left( {\cal L}_{\rm mat} - \frac{1}{2}T
\right),
\end{equation}
where $T$ is the trace of the energy-momentum tensor.
The quantity between parentheses, in general, has nothing to do with spin, and
it is the source for a part of the torsion, confirming that in
our model part of the torsion is not determined by spin distributions. See
also \cite{H1} for a discussion of possible source terms for the torsion.
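As a quick consistency check of this formula, note that for the Maxwell field
the energy-momentum tensor is traceless in four dimensions, $T=0$, and
${\cal L}_{\rm Maxw}=-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}$ from (\ref{actmax}),
so that
\begin{displaymath}
D_\mu S^\mu = \frac{3}{2}
\left( {\cal L}_{\rm Maxw} - \frac{1}{2}T \right)
= -\frac{3}{8}F_{\mu\nu}F^{\mu\nu},
\end{displaymath}
in agreement with the last equation of (\ref{ee1}).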
This work was supported by FAPESP. The author wishes to thank an anonymous
referee for pointing out the reference \cite{H1}.
\section{Introduction}
Before we can use our Galaxy as a tool for the interaction of cosmic rays
and thermal material, we need to understand the origin of cosmic rays. The
origin of cosmic rays is still an open question
\cite{CRA,CRB,Fermi49,Fermi54,G69,Hayakawa69,Venyabook};
however, already some time ago Cocconi \cite{Cocconi56} argued
convincingly that the very high energy cosmic rays must originate outside
our Galactic disk, since their Larmor motion could not be contained.
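Cocconi's containment argument is easy to make quantitative. The following
Python sketch is ours and purely illustrative; the assumed field strength
($3\,\mu$G) and the disk-thickness comparison scale are typical textbook
values, not taken from \cite{Cocconi56}.

```python
# Larmor radius of an ultrarelativistic nucleus: r_L = E / (Z e B c).
# Assumed (illustrative) values: B = 3 microgauss, E = 3e20 eV, Z = 1.
E_eV = 3e20            # particle energy in eV
Z = 1                  # charge number (proton)
B = 3e-10              # magnetic field: 3 microgauss in tesla
e = 1.602176634e-19    # elementary charge in coulomb
c = 2.99792458e8       # speed of light in m/s
pc = 3.0857e16         # one parsec in metres

r_L_pc = (E_eV * e) / (Z * e * B * c) / pc
print(f"Larmor radius: {r_L_pc:.2e} pc")
# ~1e5 pc, vastly larger than the ~300 pc thickness of the Galactic disk,
# so such particles cannot be magnetically confined to the disk.
```

The radius comes out near $10^{5}$ pc, orders of magnitude beyond the disk,
which is the essence of the containment argument.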
While the questions about the subtleties of cosmic ray acceleration provide
ample material for discussion, the debate about the origin of cosmic rays of
moderate energy has reached a consensus, that they are produced in the
shockwaves of supernova explosions
\cite{BZ34,Shkl53,G53,Ginzburg53,LaCe83,Drury83,BE87,BK88,JE91,G93,G96}, be it
into the interstellar medium, or into a stellar wind
\cite{VB88,Silberberg90,CRI}. However, the origin of the cosmic rays of the
highest energy has remained under dispute. Many of the relevant issues here
have been dealt with in the excellent review by Hillas (1984 \cite{Hillas84})
and in the books by Berezinsky {\it et al.} (1990, \cite{Venyabook}) and
Gaisser (1990, \cite{GaisserCRPP}).
Here we are concerned with the interactions of cosmic rays in the Galaxy, and
so we will adopt the picture that indeed the cosmic ray particles originate in
the shocks of supernova explosions.
Using this concept (see,
{\it e.g.} the review by Ginzburg \cite{G96}), we will describe recent
advances in our theoretical attempt to formulate a quantitative theory for
the cosmic rays in the Galaxy. The interaction between energetic particles
and the interstellar medium has long been of interest
\cite{Reeves74,Wentzel74}. We observe consequences of such interaction, such
as gamma ray emission in lines or in continuum, as well as abundances of some
elements and isotopes (see the comprehensive review by Reeves
\cite{Reeves94} and the account given by Bloemen \cite{Bloemen95}). A recent
example of a new measurement of the Boron isotope ratio, together with a
summary of relevant references, has been given in \cite{Federman96}. The
detection of gamma ray lines, presumably from excited nuclei after nuclear
collisions between energetic particles and interstellar medium nuclei
(predicted a long time ago by Meneguzzi \& Reeves \cite{Men75}, and
Ramaty {\it et al.} \cite{R79}), from the Orion complex \cite{Bloemen94} has
aroused the interest of many
\cite{Bozhokin94,Bykov95,Nath94b,Casse95,Vangioni95}. Especially the group
around R. Ramaty has contributed to the discussion, based on their experience
with energetic particle interactions in the solar activity regions
\cite{R79,R95a,R95b,R95c,R95d,R96a,R96b,R96c}. The situation has possibly
improved, as we will try to demonstrate, since we have now a quantitative
proposal to account for the origin of cosmic rays, and while many of the
aspects of this proposal remain to be worked out and verified, it may
provide a useful basis for further investigations. Therefore we will
argue here that it is worthwhile to obtain better cross
sections for many of these interactions, so that they
may become a quantitative tool in the future.
The structure of this review is as follows: First we briefly summarize the
recent proposal to account for the origin of cosmic rays; then we describe
some aspects of injection of cosmic rays, and their electromagnetic
interaction with the interstellar medium gas; then we go through the arguments
for the various interaction sites, near the source and far from the source; for
the latter argument we go through the concept of trapping and leakage from
interstellar clouds in some detail, since it is new. Finally we draw some
conclusions and stress the importance of better cross sections.
\section{A quantitative proposal for the origin of galactic cosmic rays}
Cosmic rays arrive at earth with energies from several hundred MeV/particle
to $3 \, 10^{20}$ eV; their spectrum for protons is at GeV energies close
to $E^{-2.75}$, and for He and higher elements close to $E^{-2.65}$ below
a {\it knee} at $\approx 5 \, 10^{15}$ eV, where the spectrum turns down to
about $E^{-3.1}$, to flatten out again near $3 \, 10^{18}$ eV, called the
{\it ankle} ({\it e.g.} \cite{Lawrence91,Nagano92,Zatsepin95}). The chemical
composition is roughly similar to that of the interstellar medium, with
reduced hydrogen and helium relative to silicon, and with the same general
enhancement of elements of low first ionization potential as we find in
solar energetic particles. The low energy end of the observed spectrum
is cut off due to interaction with the solar wind. There is reason to
believe that in interstellar space the cosmic ray spectrum extends far
below what we can observe at Earth.
In the newly proposed theory (starting with \cite{CRI}) the origin of the
cosmic rays below $3 \, 10^{18}$ eV is traced to the shockwaves caused by
supernovae exploding either into the interstellar medium, or into the
predecessor stellar wind, following some rather classical ideas; the new
element is a premise on the particle transport in the shock region,
inspired by the observations of the radio polarization in supernova remnants,
and the actual motion of radio features, as well as the size of the observed
X-ray and radio supernova remnant shells \cite{TucsonCR}: These data suggest
a strongly turbulent interaction region rather than a smooth shock wave,
consistent with several arguments which have demonstrated that cosmic ray
influenced shocks are unstable (see \cite{Zank90,Ratkiewicz94} and the detailed
discussion of this point in \cite{TucsonCR}). This premise is the {\it
principle of the smallest dominant scale}, which follows work by Prandtl (1925
\cite{Prandtl25}) and von Karman \& Howarth (1938 \cite{Karman38}):
This principle is used to find a length scale and a velocity scale,
describing turbulent transport. Applied to supernova shock shells, this
principle leads to some fraction of the radius of the spherical shock as a
length scale and the velocity difference across the shock as the velocity
scale, associated with fast convective shock turbulence, and therefore to a
specific model of the transport of particles in the shock region. These
scales are then used in the construction of a transport coefficient for
energetic particles, and thus determine, {\it e.g.}, the time which a particle
spends on either side of the shock; this time scale is in turn important for
the adiabatic losses which a particle experiences, as well as for the energy
gains by drifts in the electric fields seen in the moving shock frame, and so
determines the final particle spectrum. The net result is
an appreciable energy loss during the expansion of the supernova shock,
and a steepening of the predicted spectrum as compared to the
plane-parallel shock case.
\vspace{.5cm}
\noindent
{Figure 1. A schematic picture of the proposed three different source sites
and their respective contributions (adapted from \cite{CRIV}). There is a
contribution from supernovae exploding into the interstellar medium,
component 1. The next two components arise from supernovae exploding into a
predecessor stellar wind, components 2 and 3; the polar cap contribution, 3,
comes from the polar region of the acceleration in wind-supernovae.
Finally, component 4 comes from the hot spots of radio galaxies}.
\vspace{.5cm}
The proposal leads to quantitative predictions for i) the spectra both below
and above the knee of the cosmic ray spectrum near $5 \, 10^{15}$ eV, where
the spectrum turns downwards, ii) the particle energies of the knee and the
various cutoffs, as well as iii) the chemical composition. We have been able
to subject these predictions \cite{BS87,CRI,UHECRI} to a variety of tests in
various publications ({\it e.g.} \cite{PRL95}) and reviewed them as well; the
latest overviews of these developments are \cite{MichiganCR,TucsonCR,JPhGCR}.
We continue to perform further tests using ever more detailed and newer data.
\subsection{Summary of the predictions for nuclei}
The proposal is that three sites of origin account for the cosmic rays
observed, i) supernova explosions into the interstellar medium, ISM-SN,
ii) supernova explosions into the stellar wind of the predecessor star,
wind-SN, and iii) radio galaxy hot spots. Here the cosmic rays
attributed to supernova-shocks in stellar winds, wind-SN, produce an
important contribution at all energies up to $3 \, 10^9$ GeV.
Particle energies go up to $100$ Z TeV for ISM-SN, and to $100$ Z PeV
with a bend at $600$ Z TeV for wind-SN. Radiogalaxy hot spots contribute up
to about $100$ EeV at the source, with some sources up to $4$ ZeV,
$ = 4 \, 10^{21}$ eV \cite{JPhGCR}. These numerical values are estimates
with uncertainties surely larger than a factor of $2$, since they derive
from an estimated strength of the magnetic field and estimated values of the
effective shock velocity.
The spectra are predicted to be $E^{-2.75 \pm 0.04}$ for ISM-SN,
$E^{-2.67 - 0.02 \pm 0.02}$ for wind-SN below the knee,
$E^{-3.07 - 0.07 \pm 0.07}$ for wind-SN above the knee, and $E^{-2.0}$
at injection for radiogalaxy hot spots. The polar cap of the wind-SN
contributes an $E^{-2.33}$ component (allowing for leakage from the Galaxy),
which, however, contributes significantly only near and below the knee, if
at all. These spectra are for nuclei and are corrected for leakage from the
galaxy.
The chemical abundances are near normal for the injection from ISM-SN, and
are strongly enriched for the contributions from wind-SN. At the knee the
spectrum bends downwards at a given rigidity, and so the heavier
elements bend downwards at higher energy per particle. Thus beyond the
knee the medium nuclear mass elements dominate all the way to the switchover to
the extragalactic component, which is, once again, mostly hydrogen and helium,
corresponding to what is expected to be contributed from the interstellar
medium of a radiogalaxy, as well as from any intergalactic contribution
mixed in \cite{MauryCR}. This continuous mix in the chemical composition
at the knee already renders the overall knee feature in a spectrum in
energy per particle unavoidably quite smooth, a tendency which can only
partially be offset by the possible polar cap contribution, since that
component also is strongest at the same rigidity where the bend in the
overall spectrum occurs. The term {\it rigidity} refers to the factor
occurring in the expression for the Larmor radius of any energetic
particle, and stands for $p/Z$, the momentum divided by the charge; thus
nuclei at the same rigidity have the same Larmor radius in their
gyromotion in a magnetic field.
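As a hedged numerical illustration of the rigidity statement above (a sketch only; the function name and unit choices are ours, not from the text): the Larmor radius $r_L = p/(ZeB)$ depends on momentum and charge only through the rigidity $p/Z$, so nuclei at equal rigidity gyrate on identical circles.

```python
E_CHARGE = 1.602176634e-19   # elementary charge in C
C_LIGHT = 299792458.0        # speed of light in m/s

def larmor_radius_m(p_gev_per_c, Z, B_tesla):
    """Larmor radius r_L = p / (Z e B) for a nucleus of charge Z.

    Momentum p is given in GeV/c; the result is in metres. Nuclei with
    the same rigidity p/Z have the same r_L, whatever their mass."""
    p_si = p_gev_per_c * 1e9 * E_CHARGE / C_LIGHT  # GeV/c -> kg m/s
    return p_si / (Z * E_CHARGE * B_tesla)
```

For a 1 GeV/c proton in a microgauss ($10^{-10}$ T) field this gives $r_L \approx 3 \times 10^{10}$ m; a fully stripped helium nucleus at 2 GeV/c, i.e. the same rigidity with $Z=2$, gyrates on exactly the same radius.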
\subsection{Observational tests}
These predictions can be compared in some detail with data, and we have
given adequate comparisons in previous work; a summary of the
predictions and tests is given in Table 1, adapted from \cite{PRD95}:
\begin{center}
Table 1. Spectral indices for hydrogen, helium and heavier nuclei.
\begin{tabular}{llrrr}\hline
Experiment & Energy range & Element range & Spectral index &
\\[.2cm]\hline
Predicted & & & & \\
& below knee & H & $2.75\pm 0.04$ & \\
Webber \cite{Webber} & 1--50 GeV & H + He & $2.70\pm0.05$ & \\
LEAP \cite{Seo} & 10--100 GeV& H & $2.74\pm0.02$ & \\
JACEE \cite{JACEE1} & $<$40 TeV & H & $2.64\pm0.12$ & \\
JACEE \cite{JACEE1} & $>$40 TeV & H & $3.22\pm0.28$ & \\
Sokol \cite{Ivanenko} & $>$5 TeV & H & $2.85 \pm 0.14$ & \\
Ryan {\it et al.} \cite{Ryan} & 50--2000 GeV & H &$2.75\pm0.03$ & \\
MSU \cite{Zatsepin} & 10--200 TeV & H &$3.14\pm0.08$ & \\
JACEE \cite{JACEE2,JACEE1} & 50--200 TeV & H & $2.77\pm0.06$& \\
Japan \cite{Kawamura} & 8--50 TeV & H & $2.82\pm0.13$ & \\
& & & & \\
Predicted & & & & \\
& below knee & He,..,Fe & $2.67+0.02$ & $ \pm 0.02$ \\
LEAP \cite{Seo} & 10--100 GeV& He & $2.68\pm0.03$ & \\
RICH \cite{RICH} & 100--1000 GV & He & $2.64\pm0.09$ & \\
Ryan {\it et al.} \cite{Ryan} & 50--2000 GeV & He & $2.77\pm0.05$ & \\
Sokol \cite{Ivanenko} & $>$5 TeV & He & $2.64\pm0.12$ &\\
JACEE \cite{JACEE2,JACEE1} & 50--200 TeV & He & $2.67\pm0.08$ & \\
Japan \cite{Kawamura} & 8--50 TeV & He & $2.75\pm0.15$ & \\
Sokol \cite{Ivanenko} & $>$5 TeV & all & $2.68\pm0.07$ &\\
Akeno \cite{Nagano92} & $< 5 \, 10^{15}$ eV & all & $2.62 \pm 0.12$ & \\
Akeno \cite{CRIV} & below knee & all & $2.66$ +syst. & \\
Tibet AS$\gamma$ \cite{Tibet95} & $< \, 10^{14.75}$ eV & all &
$2.60 \pm 0.04$ & \\
& & & & \\
Predicted & & & & \\
& above knee & all & $3.07 + 0.07$ & $ \pm 0.07$ \\
HP \cite{Lawrence91} & $< 0.4 \,10^{18}$ eV & all & $ 3.01 \pm 0.02$ & \\
HP \cite{Lawrence91} & 0.4 - 4 $10^{18}$ eV & all & $3.14 \pm 0.06$ & \\
FE \cite{UHECRSp1} & 2 - 4 $10^{17}$ eV & all & $3.07 \pm 0.01$ & \\
Akeno \cite{CRIV} & above knee & all & $3.07$ +syst.& \\
Akeno \cite{Nagano92} & 5 $10^{15}$ eV - 6 $10^{17}$ eV & all &
$3.02 \pm 0.03$& \\
Tibet AS$\gamma$ \cite{Tibet95} & $> \, 10^{15.85}$ eV & all &
$3.00 \pm 0.05$ & \\
FE \cite{UHECRSp1} & 2 $10^{17}$ - 4 $10^{19}$ eV & all
& $3.18 \pm 0.01$ & \\
Akeno \cite{Nagano92} & $6 \, 10^{17} - 7 \, 10^{18}$ eV & all
& $3.18 \pm 0.08$ & \\ \hline
\end{tabular}
\end{center}
We note that the error distribution of the predictions below and above the
knee, for He and heavier nuclei, is asymmetric with respect to the
central prediction. The systematic errors inherent in the analysis given in
\cite{CRIV}, and indicated as such in the Table, cannot be easily quantified,
since they arise from the errors in the Monte-Carlo modelling of the
air showers; however, the fit to the data is quite acceptable, and so we
believe that this systematic error is small. The cutoffs in the three source
components and their chemical abundances can be checked using vertical and
slanted air showers, and are all consistent with the prediction to within
20\,\% \cite{CRIV}. The gradual switch from one spectral range to another across
the knee is clearly recognizable for the Tibet AS$\gamma$-experiment, for
which this energy range is about a factor of 10, consistent with the
expected gradual change in chemical composition (see
\cite{Peters59,Peters61,CRIV}). The last two lines in the Table refer
to energy ranges which cover some of the {\it ankle}, where the spectrum
varies, due to the switch to a new contributor, the expected extragalactic
cosmic rays. Here we note also that the cosmic ray spectra of the
various chemical elements and electrons can be studied separately, and
all are consistent with the predictions in the GeV to TeV range
\cite{ICRC-CRe,ICRC-CRsp}. This is the range of interest here.
\section{Injection of cosmic ray nuclei}
For the elements He, ..., C, O, ..., Fe the injection law can be written as a
power law in momentum $p$
\begin{equation}
N(p) \; \sim \; p^{-2.67} \, d p ,
\end{equation}
\noindent which extends all the way down to non-relativistic energies. This
means that with $p \, = \, A \, m_p \, c \, \gamma \, \beta$, where $A$ is the
atomic weight of the nucleus considered, and $\gamma$ and $\beta$ are the
Lorentz-factor and velocity in units of the velocity of light $c$, the spectrum
at sub-relativistic energies can be written as
$\sim \; {\beta}^{-2.67} \, d \beta $.
The energy loss in interactions with electrons, unbound (then proportional
to $n_e$, the density of free electrons) or bound in a shell around a
hydrogen nucleus (then proportional to $n_H$, the density of neutral hydrogen
atoms; heavier elements can normally be neglected here) of the thermal matter
can be written as
\begin{equation}
\frac{d \beta}{d t} \; \sim \; \frac{n_e,n_H}{\beta^2} \, Z^2 ,
\end{equation}
\noindent where $Z$ is the charge of the energetic
nucleus losing energy. This simple behaviour is valid only for suprathermal
energies and sub-relativistic speeds (see, {\it e.g.} \cite{Nath93}).
After traversal of thermal matter for some time $\tau$ the interaction results
in a low energy cutoff of the distribution of energetic nuclei, and a law of
\begin{equation}
\sim \; \beta^2 \, d \beta
\end{equation}
\noindent below the cutoff, and the original law above the cutoff. The cutoff
energy is given by
\begin{equation}
\beta_{crit} \; \sim \; \lbrace{Z^2 \, (n_e, n_H) \tau }\rbrace^{1/3} .
\end{equation}
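A minimal numerical sketch of this cutoff (in assumed units with the loss constant $k$ and the density lumped together; the function names are ours): integrating $d\beta/dt \propto -Z^2 n/\beta^2$ gives $\beta^3(t) = \beta_0^3 - 3kZ^2 n t$, so a particle is thermalized within $\tau$ exactly when $\beta_0$ lies below $\beta_{crit} \propto (Z^2 n \tau)^{1/3}$.

```python
def beta_crit(Z, n, tau, k=1.0):
    """Cutoff velocity: particles starting below this are thermalized
    within tau by the beta^-2 losses (k lumps all physical constants)."""
    return (3.0 * k * Z**2 * n * tau) ** (1.0 / 3.0)

def beta_after(beta0, Z, n, tau, k=1.0):
    """Analytic solution of d(beta)/dt = -k Z^2 n / beta^2 after time tau."""
    cube = beta0**3 - 3.0 * k * Z**2 * n * tau
    return cube ** (1.0 / 3.0) if cube > 0.0 else 0.0

def beta_euler(beta0, Z, n, tau, k=1.0, steps=200_000):
    """Direct Euler integration as a cross-check of the analytic formula."""
    beta, dt = beta0, tau / steps
    for _ in range(steps):
        beta -= k * Z**2 * n / beta**2 * dt
        if beta <= 0.0:
            return 0.0
    return beta
```

The $Z^{2/3}$ scaling of the cutoff reproduces the statement that heavier nuclei are cut off at correspondingly higher energies.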
All the particles which are lost to the energetic particle spectrum below the
cutoff are shifted in phase space to the thermal particles, and can modify the
chemical abundances there. This effect is especially important in the case
that the chemical abundances in energetic particles are very different from
those in the interstellar medium, and this is the case for some elements, such
as for many isotopes of Li, Be, B.
The column density along the twisted
and scattering path of a charged particle in a highly chaotic magnetic field
configuration is referred to as {\it grammage}, and this grammage is the
relevant quantity for discussing cosmic ray interactions. This grammage can be
inserted into the above expression, and then leads to estimates of the cutoff
energies of order 100 MeV for hydrogen and correspondingly more for heavier
nuclei.
\section{Spallation of cosmic ray nuclei}
Cosmic ray nuclei can be broken up in collisions with thermal matter; this
process is called spallation. Obviously, there is a corresponding
interaction between energetic protons and thermal material comprising
heavier nuclei such as carbon. In such collisions the remaining nuclei
can also be excited, and then emit $\gamma$-ray lines.
There are several sites, which can be distinguished, where spallation is
relevant (see, {\it e.g.}, the recent work in this area
\cite{GM77,Eng85,GM87,Eng90,Shibata95}):
\subsection{Sites of spallation}
First of all, the massive stars, which explode as supernovae after going
through an evolutionary phase accompanied by heavy mass loss, usually have
a molecular cloud shell around their wind-zone. When the central star
explodes, it gives rise to a powerful shock wave, which races through
the wind, and then smashes into the shell \cite{Nath94b}; since the shock
is loaded with energetic particles, these particles then spallate in the
shell. From the abundance of sub-Fe elements one can estimate that the
grammage in this shell is of order $1 \rm \, g/cm^2$ \cite{CRVII,CRVIII},
consistent with the data from radio and millimeter observations. This
apparently is the dominant process at higher energy to account for the
abundances in cosmic rays for most odd-Z elements, for the sub-Fe elements,
and for some Li, Be, and B isotopes.
In this case the spectrum of the secondary particles $N_s$ is the same as
that of the primary particles $N_p$:
\begin{equation}
N_s \; \sim \; N_p .
\end{equation}
Next is the interaction in clouds, and here we have to distinguish between the
energy range for which the particles move diffusively through a cloud, and the
higher energy range, where they move unencumbered through the cloud material.
It is this latter approximation which is commonly used in the literature.
The secondary particles are then created in the clouds, and diffuse out of the
galaxy, and so their creation equation can be written as
\begin{equation}
\frac{d \, N_{s}}{d \, t} \; = \; \frac{N_{p}}{\tau_{s}} -
\frac{N_{s}}{\tau_{L,gal}} ,
\end{equation}
\noindent where $\tau_{s}$ is the spallation time scale, and $\tau_{L,gal}$
is the time scale for diffusion out from the disk of the Galaxy. There is
a fair amount of evidence that this latter diffusive transport can be derived
from a Kolmogorov spectrum of interstellar turbulence
\cite{Rickett90,Goldstein95}. The evidence for such a law of turbulence in
the ISM has been discussed extensively in \cite{GamowCR,ICRC-CRe}. The
solution to this equation is in the stationary case
\begin{equation}
N_s \; = \; N_p \, \frac{\tau_{L,gal}}{\tau_{s}} ,
\end{equation}
\noindent which translates to an energy dependence of the ratio of secondary
to primary isotopes and elements of
\begin{equation}
\frac{N_s}{N_p} \; \sim \; E^{-1/3} ,
\end{equation}
\noindent in the case of a Kolmogorov spectrum; here we have neglected
for didactic simplicity the energy dependence of the spallation. The ratio
of secondary to primary nuclei has been used in the past to argue that in
fact the spectrum of interstellar turbulence is {\it not} a Kolmogorov law.
Since the boron/carbon ratio B/C gives an energy dependence of close to
$E^{-0.6}$ \cite{Eng85,GM87,Eng90}, a Kolmogorov law did not seem to be
consistent with the data.
However, this line of argument is {\it only} true, if the cloud interaction
is stationary; on the other hand we do know that interstellar clouds have
their own temporal evolution, and so we need to check what happens when
clouds form and dissipate again, {\it e.g.} by heating from newly formed
stars. The decisive difference from the argument above arises when we consider
the formation of clouds, and we will proceed to do this in the next section.
\subsection{The capture of cosmic rays in clouds}
Here we wish to explore the following concept: The interstellar medium is
forming large molecular clouds out of its small fragments and warmer parts.
Gravitational instability is a key process in the collapse of clouds or cloud
fragments. Gravitational instability sets in as soon as the time scale for
free-fall collapse is shorter than the time scale for any pressure signal to
propagate through the cloud. This means that the collapse speed also needs to
exceed the Alfv{\'e}n velocity \cite{SpitzerISM}. As a consequence,
cosmic rays are trapped upon the formation of a gravitationally bound
system, such as a molecular cloud, since cosmic rays cannot stream
significantly faster than the Alfv{\'e}n velocity \cite{Wentzel74}.
Trapped cosmic rays can get out of the cloud by diffusion; diffusion is a good
approximation only as long as the mean free path for scattering by magnetic
irregularities is significantly shorter than the size of the cloud. This
entails an upper particle energy limit for the diffusion approximation.
Consider then a particle population of cosmic rays $N_{p,1}(E,t)$ trapped in
a cloud, where the index 1 stands for {\it inside}:
\begin{equation}
\frac{d \, N_{p,1}}{d \, t} \; = \; - \frac{N_{p,1}}{\tau_{L,cl}}
\end{equation}
\noindent with
\begin{equation}
\tau_{L,cl} \; = \; \tau_{L,cl,0} \, (\frac{E}{E_0})^{-1/3} .
\end{equation}
This energy dependence follows from the concept that small scale turbulence
in media, which are magnetic and partially or fully ionized, can be
approximated by a Kolmogorov law, as discussed above.
The solution is clearly
\begin{equation}
N_{p,1} \; = \; N_{p,1,0}(E) \, \exp(-\frac{t}{\tau_{L,cl}}) .
\end{equation}
The particle population outside the cloud, but coming from inside,
is then given by
\begin{equation}
\frac{d \, N_{p,2}}{d \, t} \; = \; + \frac{N_{p,1}}{\tau_{L,cl}}
\end{equation}
\noindent which translates to
\begin{equation}
N_{p,2} \; = \; N_{p,1,0}(E) \lbrace{1 - \exp(-\frac{t (E/E_0)^{1/3}}
{\tau_{L,cl,0}}) }\rbrace .
\end{equation}
Secondaries are produced in nucleus-nucleus collisions inside the cloud, and so
their production equation reads
\begin{equation}
\frac{d \, N_{s,1} }{ d \, t} \; = \; \frac{N_{p,1} }{ \tau_{s}} \, - \,
\frac{N_{s,1} }{ \tau_{L,cl}}
\end{equation}
The solution is
\begin{equation}
N_{s,1} (E) \; = \; N_{p,1,0}(E) \, \frac{t}{\tau_s} \,
\exp(-\frac{t}{\tau_{L,cl}}) .
\end{equation}
The secondaries outside the cloud are just those produced inside and
leaking out, and so we have the relation
\begin{equation}
\frac{d \, N_{s,2} }{ d \, t} \; = \; + \frac{N_{s,1} }{ \tau_{L,cl}} .
\end{equation}
The solution to this differential equation is then
\begin{equation}
N_{s,2}(E) \; = \; \frac{N_{p,1,0}(E)}{ \tau_s} \, \tau_{L,cl} \,
\int^x_0 x' e^{-x'} d x' ,
\end{equation}
\noindent where $x = t/\tau_{L,cl}$.
For {\it long times} this entails
\begin{equation}
N_{s,2}(E) \; = \; \frac{N_{p,1,0}(E)}{ \tau_s} \, \tau_{L,cl,0} \,
(\frac{E}{ E_0})^{-1/3} .
\end{equation}
Therefore, the secondary particles, injected into the interstellar
medium outside the original cloud, have a spectrum which is steeper than the
primary particles by 1/3. Or, given that the primary particles are well
approximated by a spectrum of $E^{-8/3}$, the secondary particles at injection
have a spectrum of $E^{-3}$.
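The chain of equations above can be cross-checked numerically (a sketch; the time scales and normalizations below are arbitrary illustrative choices): Euler-integrating the trapped-primary/secondary system reproduces the analytic solutions, and the late-time secondary yield outside the cloud tends to $N_{p,1,0}\,\tau_{L,cl}/\tau_s$, i.e. proportional to $\tau_{L,cl} \propto E^{-1/3}$, which is the extra steepening by 1/3.

```python
def evolve(tau_s, tau_L, t_end, steps=200_000):
    """Euler-integrate  dN_p1/dt = -N_p1/tau_L,
    dN_s1/dt = N_p1/tau_s - N_s1/tau_L,  dN_s2/dt = +N_s1/tau_L,
    starting from N_p1 = 1, N_s1 = N_s2 = 0 (index 1: inside the cloud,
    index 2: leaked into the interstellar medium)."""
    dt = t_end / steps
    n_p1, n_s1, n_s2 = 1.0, 0.0, 0.0
    for _ in range(steps):
        d_p1 = -n_p1 / tau_L
        d_s1 = n_p1 / tau_s - n_s1 / tau_L
        d_s2 = n_s1 / tau_L
        n_p1 += d_p1 * dt
        n_s1 += d_s1 * dt
        n_s2 += d_s2 * dt
    return n_p1, n_s1, n_s2
```

The integration reproduces the analytic forms $N_{s,1}=(t/\tau_s)\,e^{-t/\tau_{L,cl}}$ and $N_{s,2}=(\tau_{L,cl}/\tau_s)\,[1-(1+x)e^{-x}]$ with $x=t/\tau_{L,cl}$.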
Considering also the leakage from the Galaxy generalizes this result
and gives the equilibrium spectrum for secondaries:
\begin{equation}
\frac{d \, N_{s,2} }{ d \, t} \; = \; + \frac{N_{s,1} }{ \tau_{L,cl} }
\, - \, \frac{N_{s,2} }{ \tau_{L,gal} }.
\end{equation}
The solution then is
\begin{equation}
N_{s,2} \; = \; \frac{N_{p,1,0}(E)}{ \tau_s} \, \tau_{L,cl} \,
\exp(-\frac{t}{ \tau_{L,gal}}) \,
(1 - \tau_{L,cl}/\tau_{L,gal})^{-2} \, \int^x_0 x' e^{-x'} d x' ,
\end{equation}
\noindent where now
\begin{equation}
x \; = \; t \, (\frac{1}{\tau_{L,cl}} - \frac{1}{\tau_{L,gal}}) .
\end{equation}
Without loss of generality we can assume that $\tau_{L,cl} \, < \,
\tau_{L,gal}$, in which case the integral converges; in the opposite case a
brief calculation also confirms convergence.
The next step is to assume that the present epoch is no particular time;
for each individual source this corresponds to an integration of $N_{s,2}$
over past injection times to give $N^{\star}_{s,2}$; the sum over many sources
then no longer changes the spectrum, but only the normalization. Clearly,
after some long time the remnants of the cloud are dispersed, but by then the
residual population of secondaries is no longer significant; this
ensures that the sum over many sources does not diverge. This further
integral then already gives the proper energy dependence
\begin{equation}
N^{\star}_{s,2} \; = \; \frac{N_{p,1,0}(E)}{ \tau_s} \, \tau_{L,cl}(E) \,
\tau_{L,gal}(E) \, (1 - \tau_{L,cl}/\tau_{L,gal})^{-2} \, I(t) ,
\end{equation}
\noindent with
\begin{equation}
I(t) \; = \; \int_0^{x''} e^{-x'} \, d x' \int_0^{x} s e^{-s} d s ,
\end{equation}
\noindent where
\begin{eqnarray}
x'' \; & = & \; t/\tau_{L,gal} , \nonumber \\
x \, & = & \, x' \, \tau_{L,gal} \, (1/\tau_{L,cl} - 1/\tau_{L,gal}) .
\end{eqnarray}
The ratio $x/x'$ is energy independent, since in our concept both the
diffusion from the cloud and the leakage from the Galaxy have the same energy
dependence. The integral $I(t)$ can be worked out in closed form analytically,
and approaches a constant value for reasonably large times $x'' \gg 1$.
The energy dependence of the secondaries, as compared to the primaries
is then clearly
\begin{equation}
N_{s,2} / N_{p,1} \; \sim \; E^{-2/3} ,
\end{equation}
\noindent with our modelling of the interstellar and intracloud turbulence
with a Kolmogorov spectrum.
This is in accord with the observations, such as those by Engelmann et al.
\cite{Eng90}, and in contrast to the usual finding that a
{\it stationary} leaky box gives a ratio of secondaries to primaries $\sim
E^{-1/3}$ if we use a Kolmogorov spectrum for turbulence.
Therefore, considering the non-stationarity of the normal interstellar medium,
we can readily explain the ratio of secondaries to primaries, and at the same
time use a spectrum of turbulence which is consistent with all other
observational evidence. Key was the use of the gravitational instability
condition for the formation of a cloud.
Translating this result into the language common in the literature, this means
that escape length, as measured in g/cm$^2$, and escape time can no longer be
used synonymously. The escape time is given by $\tau_{L,gal}$, and is proportional
to $E^{-1/3}$ in the relativistic range of particle energies. The escape
length as a means to describe interaction has three different regimes, and the
one relevant in the GeV/nucleon range is, as before, about $E^{-0.6}$, and
here, in our simplistic model, $\sim E^{-2/3}$.
\subsection{Energy dependence of secondary/primary ratio}
In the following we adopt for nuclei such as He and higher in mass the primary
cosmic ray spectrum of $E^{-7/3}$ at injection and the Kolmogorov law of
turbulence, giving an energy dependence of a diffusive time of $E^{-1/3}$.
Therefore, the energy dependence of the secondary to primary ratio has three
simple domains, which can be summarized as follows:
\begin{itemize}
\item{} The spallation in the molecular cloud shell around exploding massive
star winds leads to a ratio of secondary to primary nuclei as a function of
energy in the interstellar medium observable of
\begin{equation}
N_{s} / N_{p} \; \sim \; const .
\end{equation}
\item{} The spallation in the energy range where trapping occurs for cosmic
ray nuclei leads to
\begin{equation}
N_{s} / N_{p} \; \sim \; E^{-2/3} .
\end{equation}
\item{} And the higher energy range when the interaction is no longer
diffusive, we return to the canonical solution, which in our case gives
\begin{equation}
N_{s} / N_{p} \; \sim \; E^{-1/3} .
\end{equation}
\end{itemize}
A comparison with the data suggests that we discern only regimes 1 and 2, and
that regime 3 is never a dominant contributor. The data suggest that the
switch between regime 1 and 2 occurs near an energy per nucleon of about 20
GeV/n. To repeat, the spallation is described by the first two domains
given above, and the escape time corresponds to a $E^{-1/3}$ law throughout the
relativistic particle energy range.
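A toy sketch of the two observed regimes (all normalizations and the crossover value here are illustrative assumptions, chosen only so the transition falls near the quoted 20 GeV/n): the ratio can be modelled as the sum of a flat shell-spallation term and a trapped-cloud term $\propto E^{-2/3}$.

```python
def sec_to_prim(E_gev_per_n, E_cross=20.0, r_shell=0.05):
    """Toy secondary/primary ratio: flat shell-spallation term plus a
    trapped-cloud term ~ E^(-2/3), equal by construction at E_cross."""
    r_cloud = r_shell * (E_gev_per_n / E_cross) ** (-2.0 / 3.0)
    return r_shell + r_cloud
```

Below $E_{cross}$ the $E^{-2/3}$ cloud term dominates; above it the flat shell term does, mimicking the observed switch between the two regimes.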
\section{Chemical abundances}
The origin of the chemical elements and their isotopes can be traced to three
main source sites (see \cite{Reeves74,Reeves94}):
\begin{itemize}
\item{} The big bang nucleosynthesis accounts readily for H, $^4$He, $^2$H,
$^3$He, and $^7$Li. Deuterium, after some excitement about absorption lines in
quasars, now seems to be in agreement, given the first measurements in a
neighboring galaxy \cite{Chin96}. Thus big bang nucleosynthesis does seem
to give a coherent picture of a universe, where only a small fraction of the
critical density is made up of normal baryons.
\item{} Stellar interiors and stellar envelopes provide clearly most heavy
elements, spewed into interstellar space in supernova explosions. Some light
isotopes such as deuterium are destroyed in the interior of stars.
\item{} The interactions of cosmic rays with thermal matter can explain a
number of features both in the abundance distribution of thermal matter, as
well as in the distribution of cosmic rays: First, the
even-odd-Z distribution is dissimilar between the interstellar medium and the
higher energy cosmic rays, with spallation providing
a higher abundance for the odd-Z elements of cosmic rays. Second, the sub-Fe
elements in the cosmic rays are also due to spallation. And finally, most
isotopes of the elements Li, Be, and B are provided by cosmic ray interaction
both in the interstellar medium and in the cosmic rays.
\end{itemize}
One test \cite{Nath93,Nath94a} is the effect of ionization losses on the
low energy protons, which also provide an ionization and heating source in
molecular clouds; it is an important test for the entire concept that the
cutoff in the proton spectrum due to such losses is consistent with the
cutoff in the spallation product spectrum required to explain the
abundances of Li, Be, and B in the interstellar medium. This is the case.
There is a large amount of work yet to be done, to test the detailed concept
proposed, in order to account for the chemical abundances in some detail, for
the abundances of radioactive isotopes, and for accurate isotope ratios. This
will provide stringent tests for this theory as for any other, and may yet
disprove it.
\section{Outlook}
Given that a quantitative theory is beginning to show the promise of an
explanation for the origin of cosmic rays, it may be worthwhile to obtain
much better cross sections for the cosmic ray interactions, especially
near the critical threshold for any reaction. This would then allow us
not only to provide a quantitative explanation of the various abundances,
but also to actually use them to study both cosmic rays and the interstellar
medium.
\section*{Acknowledgments}
The report is based on much work and help by my present and former graduate
students, mostly here Alina and Fanel Donea, Torsten Ensslin, Karl Mannheim,
Heino Falcke, Wolfram Kr{\"u}lls, J{\"o}rg Rachen, Henning Seemann, Yiping
Wang, and Christian Zier, as well as that resulting from my interactions and
collaborations with Venya Berezinsky, Jim Cronin, Tom Gaisser, Gopal-Krishna,
Hyesung Kang, Phil Kronberg, Jeremy Lloyd-Evans, Hartmut Machner, Matthew
Malkan, Hinrich Meyer, Motohiko Nagano, Biman Nath, Ray Protheroe, Reuven
Ramaty, Wolfgang Rhode, Marcia and George Rieke, Dongsu Ryu, Eun-Suk Seo,
Todor Stanev, Alan Watson, and Barbara Wiebel-Sooth. The new element, the
concept of time-dependent trapping in interstellar clouds, was developed
during the Calgary conference 1993, and then further evolved in many
discussions; I wish to thank the organizers of the Calgary conference and
also the organizers of many subsequent conferences for generously inviting
me; the meeting MESON96 at Krakow 1996 - where this lecture was given -
has been especially stimulating. I thank all my discussion partners and
apologize for any errors and omissions which surely remain in the manuscript.
\vskip 20pt
\small
\parindent 0pt
| 10,507 |
\section{Introduction}
The problem of transmission and storage of quantum states has received
a considerable amount of attention recently, owing to the flurry of
activity in the field of quantum computation~\cite{bib_reviews}
sparked by Shor's discovery of a quantum algorithm for
factoring~\cite{bib_shor}. In anticipation of physical realizations of
such computers (which still face major conceptual challenges), it is
necessary to extend to the quantum regime the main results of
Shannon's information theory~\cite{bib_shannon}, which provides limits
on how well information can be compressed, transmitted, and preserved.
In this spirit, the quantum analogue of the noiseless coding theorem
was obtained recently by Schumacher~\cite{bib_schum}. However, noisy
quantum channels are less well understood, mainly because quantum
noise is of a very different nature than classical noise, and the
notion of ``quantum information'' is still under discussion. Yet,
important results have been obtained concerning the correction of
errors induced by the decoherence of quantum bits via suitable quantum
codes. These error-correcting
codes~\cite{bib_shor1,bib_calder,bib_steane,bib_laflamme,bib_ekert,bib_bdsw,bib_knill,bib_calder1}
work on the principle that quantum information can be encoded in
blocks of qubits (codewords) such that the decoherence of any qubit
can be corrected by an appropriate code, much like the classical
error-correcting codes. Therefore, it is expected that a
generalization of Shannon's fundamental theorem to the quantum regime
should exist, and efforts towards such a proof have appeared
recently~\cite{bib_schum1,bib_schum2,bib_lloyd}. The capacity for the
transmission of {\em classical} information through quantum channels
was recently obtained by Hausladen {\it et al.}~\cite{bib_hausladen}
for the transmission of pure states, and by
Kholevo~\cite{bib_kholevo97} for the general case of mixed states.
When discussing quantum channels, it is important to keep in mind that
they can be used in two very different modes. On the one hand, one
may be interested in the capacity of a channel to transmit, or else
store, an {\em unknown} quantum state in the presence of quantum
noise. This mode is unlike any use of a channel we are accustomed to
in classical theory, as strictly speaking classical information is not
transmitted in such a use (no measurement is involved). Rather, such
a capacity appears to be a measure of how much {\em entanglement} can
be transmitted (or maintained) in the presence of noise induced by the
interaction of the quantum state with a ``depolarizing'' environment.
On the other hand, a quantum channel can be used for the transmission
of {\em known} quantum states (classical information), and the
resulting capacity (i.e., the classical information transmission
capacity of the quantum channel) represents the usual bound on the
rate of arbitrarily accurate information transmission. In this paper,
we propose a definition for the {\em von Neumann} capacity of a
quantum channel, which encompasses the capacity for processing quantum
as well as classical information. This definition is based on a
quantum mechanical extension of the usual Shannon mutual entropy to a
von Neumann mutual entropy, which measures quantum as well as
classical correlations. Still, a natural separation of the von
Neumann capacity into classical and purely quantum pieces does not appear to
be straightforward. This reflects the difficulty in separating
classical correlation from quantum entanglement (the ``quantum
separability'' problem, see, e.g., \cite{bib_horo} and references
therein). It may be that there is no unambiguous way to separate
classical from purely quantum capacity for all channels and all noise
models. The von Neumann capacity we propose, as it does not involve
such a separation, conforms to a number of ``axioms'' for such a
measure among which are positivity, subadditivity, concavity
(convexity) in the input (output), as well as the data processing
inequalities. We also show that the von Neumann capacity naturally
reverts to the capacity for classical information transmission through
noisy quantum channels of Kholevo~\cite{bib_kholevo97} (the Kholevo
capacity) if the unknown states are measured just before transmission,
or, equivalently, if the quantum states are {\em prepared}. In such a
use, thus, the ``purely quantum piece'' of the von Neumann capacity
vanishes. We stop short of proving that the von Neumann capacity can
be achieved by quantum coding, i.e., we do not prove the quantum
equivalent of Shannon's noisy coding theorem for the total capacity.
We do, however, provide an example where the von Neumann capacity
appears achievable: the case of noisy superdense coding.
In the next section we recapitulate the treatment of the {\em
classical} communication channel in a somewhat novel manner, by
insisting on the deterministic nature of classical physics with
respect to the treatment of information. This treatment paves the way
for the formal discussion of quantum channels along the lines of
Schumacher~\cite{bib_schum1} in Section III, which results in a
proposal for the definition of a von Neumann capacity for transmission
of entanglement/correlation that parallels the classical construction.
We also prove a number of properties of such a measure, such as
subadditivity, concavity/convexity, forward/backward quantum
data-processing inequalities, and derive a quantum Fano inequality
relating the loss of entanglement in the channel to the fidelity of
the code used to protect the quantum state. This proof uses an
inequality of the Fano-type obtained recently by
Schumacher~\cite{bib_schum1}. In Section IV we demonstrate that the
von Neumann capacity reduces to the recently obtained Kholevo
capacity~\cite{bib_kholevo97} if the quantum states are {\em known},
i.e., measured and ``kept in memory'', before sending them on. In
Section V then we apply these results directly to a specific example,
the quantum depolarizing channel~\cite{bib_channel}. This generic
example allows a direct calculation of all quantities involved.
Specifically, we calculate the entanglement/correlation processed by
the channel as a function of the entropy of the input and the
probability of error of the channel. We also show that this capacity
reverts to the well-known capacity for classical information
transmission in a depolarizing channel if {\em known} quantum states
are transmitted through the channel. In Section VI finally, we
interpret the von Neumann capacity in the context of superdense coding
and derive a quantum Hamming bound consistent with it.
\section{Classical channels}
The information theory of classical channels is well known since
Shannon's seminal work on the matter~\cite{bib_shannon}. In this
section, rather than deriving any new results, we expose
the information theory of classical channels in the light of the {\em
physics} of information, in preparation of the quantum treatment of
channels that follows. Physicists are used to classical laws of
physics that are {\em deterministic}, and therefore do not consider
noise to be an intrinsic property of channels. In other words,
randomness, or a stochastic component, does not exist {\em per se},
but is a result of incomplete measurement. Thus, for a physicist there
are no noisy channels, only incompletely monitored ones. As an
example, consider an information transmission channel where the
sender's information is the face of a coin before it is flipped, and
the receiver's symbol is the face of the coin after it is flipped.
Information theory would classify this as a useless channel, but for a
physicist it is just a question of knowing the initial conditions of
the channel {\em and the environment} well enough. From this, he can
calculate the trajectory of the coin, and by examining the face at the
received side infer the information sent by the sender. Classical
physics, therefore, demands that all {\em conditional} probability
distributions can be made to be {\em peaked}, if the environment,
enlarged enough to cover all interacting systems, is monitored. In
other words, $p_{i|j}=1$ or $0$ for all $i$, $j$: if the
outcome $j$ is known, $i$ can be inferred with certainty.
As a consequence, {\em all} conditional entropies can be made to
vanish for a closed system.
According to this principle, let us then construct the
classical channel. Along with the ensemble of source symbols $X$
(symbols $x_1,\cdots,x_N$ appearing with probabilities
$p_1,\cdots,p_N$), imagine an ensemble of received symbols $Y$. The
usual noisy channel is represented by the diagram on the left in
Fig.~\ref{fig_class}: the conditional entropy $H(X|Y)$ represents the
loss $L$ in the channel, i.e., the uncertainty of inferring $X$ from
$Y$, whereas $H(Y|X)$ stands for noise $N$ in the output,
which is unrelated to the error-rate of the channel.
\begin{figure}
\caption{(a) Entropy Venn diagram for the classical channel
$XY$, and its ``physical'' extension including the environment.}
\vskip 0.25cm
\centerline{\psfig{figure=channel-fig_class.ps,width=3.0in,angle=-90}}
\label{fig_class}
\vskip -0.25cm
\end{figure}
A channel for which $L=0$ is called a ``lossless'' channel (no
transmission errors occur), whereas $N=0$ characterizes a
``deterministic'' channel (the input unambiguously determines the
output). On the right-hand side in Fig.~\ref{fig_class}, we have
extended the channel to include the environment. All conditional
entropies are zero, and the noise and loss are simply due to
correlations of the source or received ensembles with an
environment, i.e., $L=H(X{\rm:}E|Y)$ and $N=H(Y{\rm:}E|X)$.
The capacity of the classical channel is obtained by
maximizing the mutual entropy between source and received symbols [the
information $I=H(X{\rm:}Y)$ processed by the channel] over all input
distributions:
\begin{eqnarray}
C = \max_{p(x)}\; I\;.
\end{eqnarray}
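For readers who want to experiment, the maximization in this definition can be carried out numerically. The sketch below (ours, with illustrative function names, not part of the original text) does so by brute force for a binary symmetric channel with flip probability $e$.

```python
import numpy as np

def h2(p):
    """Dyadic (binary) Shannon entropy in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def mutual_info_bsc(px, e):
    """I = H(Y) - H(Y|X) for a binary symmetric channel with
    flip probability e and input distribution (1-px, px)."""
    py1 = px * (1 - e) + (1 - px) * e
    return h2(py1) - h2(e)

def capacity_bsc(e, grid=10001):
    """Brute-force maximization of I over the input distribution."""
    return max(mutual_info_bsc(px, e) for px in np.linspace(0, 1, grid))
```

The maximum sits at the uniform input, recovering the textbook value $C=1-H_2[e]$ for this channel.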
If the output of the channel $Y$ is
subjected to {\em another} channel (resulting in the output $Z$, say),
it can be shown that the information processed by the combined channel,
$H(X{\rm:}Z)$, cannot possibly be larger than the information processed
in the {\em first} leg, $H(X{\rm:}Y)$. In other words, any subsequent
processing of the output
cannot possibly increase the transmitted information. This is
expressed in the
data-processing inequality (see, e.g.,~\cite{bib_ash}):
\begin{eqnarray}
H(X{\rm:}Z)\leq H(X{\rm:}Y)\leq H(X) \label{dataproc}\;.
\end{eqnarray}
By the same token, a ``reverse'' data-processing inequality can be
proven, which implies that the information processed in the {\em second}
leg of the channel, $H(Y{\rm:}Z)$, must exceed the information
processed by the total channel, $H(X{\rm:}Z)$:
\begin{eqnarray}
H(X{\rm:}Z)\leq H(Y{\rm:}Z)\leq H(Z) \label{dataproc1}\;.
\end{eqnarray}
This inequality reflects microscopic time-reversal invariance: any
channel used in a forward manner can be used in a backward manner.
As far as
coding is concerned, the troublesome quantity is the loss $L$, while
the noise $N$ is unimportant.
Indeed, for a message of length $n$, the typical number of input
sequences for every output sequence is $2^{nL}$, making decoding
impossible. The principle of error-correction is to embed the messages
into {\em codewords}, that are chosen in such a way that the
conditional entropy of the ensemble of codewords {\em vanishes}, i.e.,
on the level of message transmission the channel is lossless. Not
surprisingly, there is then a relationship between the channel loss
$L$ and the probability of error $p_c$ of a {\em code} $c$
that is composed of $s$ codewords:
\begin{eqnarray}
L\leq H_2[p_c]+p_c\log(s-1)\;, \label{fanocl}
\end{eqnarray}
where $H_2[p]$ is the dyadic Shannon entropy
\begin{eqnarray}
H_2[p] = H_2[1-p] = -p\log p\,-\,(1-p)\log(1-p)\;.
\end{eqnarray}
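A quick numerical sanity check of Eq.~(\ref{fanocl}) (our own illustration): for a binary symmetric channel with flip probability $e$, uniform input, and the trivial code with $s=2$ codewords, the loss is $L=H(X)+H(Y|X)-H(Y)=H_2[e]$ and the error probability is $p_c=e$, so the bound is in fact saturated.

```python
import numpy as np

def h2(p):
    """Dyadic Shannon entropy in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

# Uniform-input binary symmetric channel with flip probability e:
# loss L = H(X) + H(Y|X) - H(Y) = 1 + h2(e) - 1 = h2(e); the Fano
# bound with s = 2 codewords and p_c = e reads h2(e) + e*log2(1).
for e in np.linspace(0.0, 0.5, 51):
    L = h2(e)                             # channel loss H(X|Y)
    bound = h2(e) + e * np.log2(2 - 1)    # Fano right-hand side
    assert L <= bound + 1e-12             # saturated for this channel
```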
Eq.~(\ref{fanocl}) is the Fano inequality (see, e.g.,~\cite{bib_ash}),
which implies, for example, that the loss vanishes
if the error of the code vanishes. Note that the noise of the
channel itself in general is not zero in this situation. Let us now turn to
quantum channels.
\section{Quantum channels}
\subsection{Information theory of entanglement}
Quantum channels have properties fundamentally different from the
classical channel just described owing to the superposition principle
of quantum mechanics and the non-cloning theorem that
ensues~\cite{bib_nocloning}. First and foremost, the ``input'' quantum
state, after interaction with an environment, is ``lost'', having
become the output state. Any attempt at copying the quantum state
before decoherence will result in a classical channel, as we
will see later. Thus, a joint probability for input and output symbols
does not exist for quantum channels. However, this is not essential as
the quantity of interest in quantum communication is {\em not} the
state of an isolated quantum system (a ``product state''), but the
degree of entanglement between one quantum system and another,
parameterized by their mutual entropy as shown below. A
single non-entangled quantum system (such as an isolated spin-1/2
state) carries no entropy and is of no interest for quantum
communication as it can be arbitrarily recreated at any time. Entangled
composite systems (such as Bell states) on the other hand are
interesting because the entanglement can be used for communication.
Let us very briefly recapitulate the quantum
information theory of
entanglement~\cite{bib_neginfo,bib_entang,bib_meas,bib_reality}.
For a composite quantum system $AB$, we can write relations between
von Neumann entropies that precisely parallel those written by Shannon for
classical entropies. Specifically, we can define the conditional entropy
of $A$ (conditional on the knowledge of $B$)
\begin{eqnarray}
S(A|B) = S(AB)-S(B)
\end{eqnarray}
via a suitable definition of a ``conditional'' density matrix
$\rho_{A|B}$. The latter matrix can have eigenvalues larger than
unity, revealing its non-classical nature and allowing conditional quantum
entropies to be {\em negative}~\cite{bib_neginfo}. Similarly, we can define
a ``mutual'' density matrix $\rho_{A{\rm:}B}$ giving rise to a mutual von
Neumann entropy
\begin{eqnarray}
S(A{\rm:}B) = S(A) + S(B) - S(AB)
\end{eqnarray}
which exceeds the usual bound obtained for mutual Shannon entropies
by a factor of two:
\begin{eqnarray}
S(A{\rm:}B) \le 2\,{\rm min}[S(A),S(B)]\;.
\end{eqnarray}
The latter equation demonstrates that quantum systems can be more
strongly correlated than classical ones: they can be {\em
supercorrelated}. These relations can be conveniently summarized by
entropy Venn diagrams (Fig.~\ref{fig_venn}a) as is usual in classical
information
theory. The extension to the quantum regime implies that negative
numbers can appear which are classically forbidden\footnote{
In classical entropy Venn diagrams, negative numbers can only
appear in the mutual entropy of three or more systems.}.
As an example, we show in Fig.~\ref{fig_venn}b the quantum entropies of Bell
states (which are fully entangled states of two qubits). These notions
can be extended to multipartite systems, and will be used throughout
the paper.
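The entries of the Bell-state diagram in Fig.~\ref{fig_venn}b can be reproduced numerically. The following sketch (our illustration; the helper names are ours) computes the von Neumann entropies of $|\Phi^+\rangle=(|00\rangle+|11\rangle)/\sqrt2$ and exhibits the supercorrelation $S(A{\rm:}B)=2$ and the negative conditional entropy $S(A|B)=-1$.

```python
import numpy as np

def vn_entropy(rho):
    """Von Neumann entropy in bits; zero eigenvalues are skipped."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

def ptrace(rho, keep):
    """Partial trace of a two-qubit density matrix; keep=0 keeps A."""
    r = rho.reshape(2, 2, 2, 2)           # indices (a, b, a', b')
    return np.einsum('ijik->jk', r) if keep == 1 else np.einsum('ijkj->ik', r)

# Bell state |Phi+> = (|00> + |11>)/sqrt(2)
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho_ab = np.outer(psi, psi)
S_ab = vn_entropy(rho_ab)                 # 0: pure joint state
S_a = vn_entropy(ptrace(rho_ab, 0))       # 1 bit (marginal)
S_b = vn_entropy(ptrace(rho_ab, 1))       # 1 bit
mutual = S_a + S_b - S_ab                 # S(A:B) = 2, supercorrelated
conditional = S_ab - S_b                  # S(A|B) = -1, negative
```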
\begin{figure}
\caption{(a) Entropy Venn diagram for a bipartite entangled quantum
system $AB$, depicting $S(AB)$ (total area), marginal entropies
[$S(A)$ viz. $S(B)$], conditional [$S(A|B)$ viz. $S(B|A)$] and mutual
[$S(A{\rm:}B)$] entropies. (b) Entropy diagram for a fully entangled
Bell-state.}
\vskip 0.3cm
\centerline{\psfig{figure=channel-fig1.ps,width=3.4in,angle=0}}
\label{fig_venn}
\vskip 0.25cm
\end{figure}
The degree of entanglement of a bipartite pure quantum state is
customarily indicated by the marginal entropy of one of its parts,
i.e., the von Neumann entropy of the density matrix obtained by
tracing the joint density matrix over the degrees of freedom of the
other part (the entropy of entanglement, see~\cite{bib_bdsw}).
However, since the parts of an entangled system
do not possess a state on their own, it takes up to twice the
marginal entropy of one of the parts to specify (in bits) the state of
entanglement. For example, it takes up to two bits to specify the
entanglement between two qubits (there are four Bell-basis
states). Thus, we propose to measure the entanglement of pure states
by the mutual entropy between the two parts, which takes values between 0 (for
non-entangled systems) and $2S$ (for entangled systems of
marginal entropy $S$ each). In order to avoid confusion with the previously
defined entropy of entanglement, we propose to call this quantity
the {\em mutual entanglement} (or simply von Neumann mutual entropy),
and denote it by the symbol $I_Q$:
\begin{eqnarray}
I_Q = S(A{\rm:}B)\;.
\end{eqnarray}
For pure entangled states, the mutual
entanglement $I_Q$ is just twice the entropy of entanglement,
demonstrating
that either is a good measure for the {\em degree} of entanglement,
but not necessarily for the absolute amount. Estimating the
entanglement of {\em mixed} states, on the other hand, is more
complicated, and no satisfying definition is available
(see~\cite{bib_bdsw} for the
most established ones). The quantum mutual entropy for mixed states
does {\em not} represent pure quantum entanglement, but rather
classical {\em and} quantum entanglement that is difficult to
separate consistently. For reasons that will become clear in the
following, we believe that the mutual
entanglement $I_Q$ between two systems is the most
straightforward generalization
of the mutual information $I$ of classical information theory, and
will serve as the vehicle to define a quantum/classical {\em von
Neumann} capacity for quantum channels.
\subsection{Explicit model}
In constructing a general quantum channel formally, we follow
Schumacher~\cite{bib_schum1}. A quantum mixed state $Q$ suffers
entanglement with an environment $E$ so as to lead to a new mixed
state $Q'$ with possibly increased or decreased entropy. In order to
monitor the
entanglement transmission, the initial mixed state $Q$ is ``purified''
by considering its entanglement with a ``reference'' system
$R$:
\begin{eqnarray}
|RQ\rangle=\sum_i\sqrt{p_i}\,|r_i,i\rangle \label{eq9}
\end{eqnarray}
where $|r_i\rangle$ are the $R$ eigenstates.
Indeed, this can always be achieved via a Schmidt decomposition.
Then, the mixed state $Q$ is simply obtained as a partial trace of the
pure state $QR$:
\begin{eqnarray}
\rho_Q = \tra_R[\rho_{QR}]=\sum_i p_i\,|i\rangle\langle i|\;.\label{eq10}
\end{eqnarray}
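The purification step can be checked numerically. The following sketch (our illustration, with an arbitrarily chosen spectrum) builds $|RQ\rangle$ of Eq.~(\ref{eq9}) in the joint eigenbasis and verifies that the joint state is pure and that the partial trace of Eq.~(\ref{eq10}) recovers $\rho_Q$.

```python
import numpy as np

# Spectrum p_i of rho_Q (example values); |RQ> = sum_i sqrt(p_i)|r_i>|i>
p = np.array([0.5, 0.3, 0.2])
d = len(p)
rho_Q = np.diag(p)                        # rho_Q in its eigenbasis

psi = np.zeros((d, d))                    # amplitudes psi[r, q]
for i in range(d):
    psi[i, i] = np.sqrt(p[i])             # weight sqrt(p_i) on |r_i>|i>

rho_RQ = np.einsum('rq,sy->rqsy', psi, psi)    # pure state |RQ><RQ|
rho_Q_rec = np.einsum('rqry->qy', rho_RQ)      # partial trace over R
M = rho_RQ.reshape(d * d, d * d)
pure = np.allclose(M @ M, M)              # |RQ><RQ| is a projector
recovered = np.allclose(rho_Q_rec, rho_Q) # Tr_R gives back rho_Q
```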
Also, the interaction with the environment
\begin{eqnarray}
QRE\stackrel {U_{QE}\otimes 1_R}\longrightarrow Q'R'E' \label{eq11}
\end{eqnarray}
now can be viewed as a channel transmitting the entanglement between $Q$ and $R$ to the
system $Q'R'$. Here, $U_{QE}$ is the unitary operation entangling $QR$
with the environment $E$, which is initially in a pure state. This
construction is summarized in Fig.~\ref{fig_channel}.
\begin{figure}
\caption{Quantum network representation of a noisy quantum
channel. $R$ purifies the mixed state $Q$; the corresponding
entanglement is indicated by a dashed line.
\label{fig_channel}}
\vskip 0.25cm
\centerline{\psfig{figure=fig-channel.ps,width=2.5in,angle=-90}}
\vskip -0.25cm
\end{figure}
The
evolution of entropies in such a channel is depicted in
Fig.~\ref{fig_unitary}, where the entropy of the
reference state [which is the same as the entropy of $Q$ {\em before}
entanglement, $S(Q)=S(R)$] is denoted by $S$,
\begin{eqnarray}
S= -\sum_i p_i\log p_i\;, \label{eq12}
\end{eqnarray}
while the entropy of the quantum state $Q'$
after entanglement $S(Q')=S'$, and the entropy of the environment $S(E)=S_e$.
The latter was termed ``exchange entropy'' by Schumacher~\cite{bib_schum1}.
\begin{figure}
\caption{Unitary transformation entangling the pure environment
$|E\rangle$ with the pure system $|QR\rangle$. The reference system $R$ is not
touched by this transformation, which implies that no entropy can be exchanged
across the double solid lines in the diagram on the left.}
\vskip 0.25cm
\centerline{\psfig{figure=channel-fig2.ps,width=3.2in,angle=-90}}
\label{fig_unitary}
\vskip -0.25cm
\end{figure}
Note that, as for any tripartite pure state, the entropy dia\-gram of the
entangled
state $Q'R'E'$ is uniquely fixed by three parameters, the marginal entropies
of $Q'$, $R'$, and $E'$ respectively, i.e., the numbers $S$, $S'$, and $S_e$.
Also, in any pure entangled diagram involving three systems, the
ternary mutual entropy [the center of the ternary diagram,
$S(Q'{\rm:}R'{\rm:}E')$],
is always zero~\cite{bib_entang,bib_meas,bib_reality}.
To make contact with the classical channel of the previous section,
let us define the {\em quantum loss} $L_Q$\footnote{We follow here
the nomenclature that ``quantum'' always means ``quantum including
classical'', rather than ``purely quantum'', in the same sense as
the von Neumann entropy is not just a purely quantum entropy. This
nomenclature is motivated by the difficulty to separate classical
from quantum entanglement.}:
\begin{eqnarray}
L_Q= S(R'{\rm:}E'|Q')=S_e+S-S'\;.
\end{eqnarray}
It represents the difference between the entropy acquired by the environment,
$S_e$, and the entropy change of $Q$, ($S'-S$), and thus stands for the
loss of entanglement in the quantum transmission. It
plays a central role in error correction as shown below and in Section III.D.
The entropy diagram in terms of
$S$, $S_e$, and $L_Q$ is depicted in Fig.~\ref{fig_loss}. From this diagram
we can immediately read off inequalities relating the loss $L_Q$ and the
entropies $S$ and $S_e$ by considering triangle
inequalities for quantum entropies~\cite{bib_araki}, namely
\begin{eqnarray}
0&\le&L_Q\le 2S\;,\label{lossbound}\label{ineq1}\\
0&\le&L_Q\le 2S_e\label{ineq2}\;,
\end{eqnarray}
which can be combined to
\begin{eqnarray}
0\le L_Q\le 2 \min\,(S,S_e)\;.
\end{eqnarray}
We find therefore that the initial mutual entanglement $2S$ is split,
through the action of the environment, into a piece shared with $Q'$
[i.e., $S(Q'{\rm:}R')=2S-L_Q$],
and a piece shared with the environment (the remaining loss $L_Q$)
according to the relation
\begin{eqnarray}
S(R'{\rm:}Q')+S(R'{\rm:}E'|Q')=S(R'{\rm:}E'Q')=S(R{\rm:}Q)\;,
\end{eqnarray}
or equivalently
\begin{eqnarray}
I_Q +L_Q = 2S\;.
\end{eqnarray}
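As a concrete (and entirely illustrative, not from the text) numerical check of these relations, consider a phase-flip channel (identity with probability $1-p$, $\sigma_z$ with probability $p$) acting on one half of a Bell pair; the amplitudes of the dilated pure state $R'Q'E'$ can be written down by hand, and the identity $I_Q+L_Q=2S$ verified directly.

```python
import numpy as np

def h2(p):
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def vn(rho):
    """Von Neumann entropy in bits (zero eigenvalues skipped)."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

# psi[r, q, e]: amplitudes of R'Q'E' for a phase-flip channel (Kraus
# ops sqrt(1-p)*1 and sqrt(p)*sigma_z) on half of a Bell pair.
p = 0.25
psi = np.zeros((2, 2, 2))
psi[0, 0, 0] = psi[1, 1, 0] = np.sqrt((1 - p) / 2)
psi[0, 0, 1] = np.sqrt(p / 2)
psi[1, 1, 1] = -np.sqrt(p / 2)            # sigma_z flips the sign of |1>

rho = np.einsum('abc,xyz->abcxyz', psi, psi)
rho_R  = np.einsum('abcxbc->ax', rho)
rho_Q  = np.einsum('abcayc->by', rho)
rho_E  = np.einsum('abcabz->cz', rho)
rho_RQ = np.einsum('abcxyc->abxy', rho).reshape(4, 4)

S, Sp, Se = vn(rho_R), vn(rho_Q), vn(rho_E)
L_Q = Se + S - Sp                         # quantum loss S(R':E'|Q')
I_Q = S + Sp - vn(rho_RQ)                 # mutual entanglement S(R':Q')
```

For this channel one finds $S_e=H_2[p]$ and $S'=S=1$, hence $L_Q=H_2[p]$ and $I_Q=2-H_2[p]$.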
\begin{figure}
\caption{Entropy diagram summarizing the entropy relations between the
entangled systems $Q'$, $R'$, and $E'$. }
\vskip 0.25cm
\centerline{\psfig{figure=channel-fig_loss.ps,width=1.75in,angle=-90}}
\label{fig_loss}
\vskip -0.25cm
\end{figure}
Finally, we are ready to propose a definition for the
von Neumann capacity. Again, in analogy
with the classical construction, the von Neumann capacity $C_Q$ would
be the mutual entanglement processed by the channel (mutual von
Neumann entropy), maximized over
the density matrix of the input channel, i.e.,
\begin{eqnarray}
C_Q = \max_{\rho_Q} I_Q\;,\label{quantcap}
\end{eqnarray}
where $I_Q=S(R'{\rm:}Q')=S(R{\rm:}Q')$ is the entanglement processed
by the channel:
\begin{eqnarray}
I_Q=2S-L_Q\;.
\end{eqnarray}
From the bound (\ref{lossbound}) we find that
the entanglement processed by the channel is non-negative, and bounded
from above by the initial entanglement $2S$. An interesting situation
arises when the entanglement processed by the channel saturates
this upper bound. This is the case of the {\em lossless} quantum
channel, where $L_Q=0$.
It was shown recently by Schumacher and Nielsen~\cite{bib_schum2} that
an error-correction procedure meant to restore the initial quantum
state (and thus the initial entanglement $2S$) can only be successful
when $L_Q=0$. From Fig.~\ref{fig_loss} we can see that when $L_Q=0$,
$Q'$ is entangled {\em separately} with the reference state and the
environment, leading to the diagram represented in
Fig.~\ref{fig_lossless}. For this reason alone it is possible to
recover the initial entanglement between $Q$ and $R$ via interaction
with an ancilla $A$ (that can be viewed as a {\em second} environment
in a ``chained'' channel). The latter effects a transfer of the
entanglement between $Q'$ and $E'$ to entanglement between $E'$ and
$A$. This operation can be viewed as an ``incomplete'' measurement of
$Q'$ by $A$ which only measures the environment $E'$ while keeping
intact the entanglement of $Q'$ with $R$. It was shown
in~\cite{bib_schum2} that $L_Q=0$ is in fact a necessary {\em and}
sufficient condition for this to be feasible. Such a transfer of
entanglement corresponds to the quantum equivalent of error
correction, and will be discussed with reference to the quantum Fano
inequality in Section III.D.
\begin{figure}
\caption{Entanglement between $Q'$, $R'$, and $E'$ in the lossless quantum
channel}
\vskip 0.25cm
\centerline{\psfig{figure=channel-fig_lossless.ps,width=1.75in,angle=-90}}
\label{fig_lossless}
\vskip -0.25cm
\end{figure}
\subsection{Axioms for quantum information}
In the following, we present a number of reasonable ``axioms'' for a
quantum mutual information, and show that $I_Q$ defined above has the required
properties. These are:
\begin{itemize}
\item[(i)] non-negativity
\item[(ii)] concavity in $\rho_Q$ (for a fixed channel)
\item[(iii)] convexity in $\rho_Q'$ (for fixed $\rho_Q$)
\item[(iv)] subadditivity
\end{itemize}
These requirements for a quantum mutual entropy (``entanglement
processed by the channel'') are very natural and reflect the kind of
requirements that are put on classical channels.
The non-negativity of $I_Q$ is simply a consequence of the
subadditivity of quantum entropies. (Just like the mutual Shannon entropy,
the mutual quantum entropy is a non-negative quantity).
Concavity of quantum information in $\rho_Q$
[axiom (ii)] reflects that the information processed by a
channel with a mixture of quantum states $\rho_Q=\sum_i w_i\rho_Q^i$
(with $\sum_i w_i=1$) as
input should be larger than the average information processed by
channels that each have a mixture $\rho_Q^i$ as input, i.e.,
\begin{eqnarray}
I_Q(\rho_Q)\geq\sum_i w_i I_Q(\rho_Q^i)\;.
\end{eqnarray}
This is the quantum analogue of the concavity of the Shannon mutual
information $H(X{\rm:}Y)$ in the input probability distribution $p(x)$
for a fixed channel, i.e., fixed $p(y|x)$. The proof uses
that, if the quantum operation achieved by the channel
is fixed, we have
\begin{eqnarray}
\rho'_{QE} &=& U_{QE} \left( \sum_i w_i \rho^i \otimes
|0\rangle\langle 0|\right)
U_{QE}^{\dagger} \nonumber\\
&=& \sum_i w_i U_{QE}(\rho^i \otimes |0\rangle\langle 0|)
U_{QE}^{\dagger} \nonumber\\
&=& \sum_i w_i \rho'^i_{QE}\;.
\end{eqnarray}
Therefore, using
\begin{eqnarray}
I_Q(\rho_Q) &=& S(R{\rm:}Q') \nonumber\\
&=& S(R)+S(Q')-S(RQ') \nonumber\\
&=& S(Q'E')+S(Q')-S(E') \nonumber\\
&=& S(Q'|E')+S(Q')
\label{eq-24}
\end{eqnarray}
the concavity of the quantum information in the input
results from the concavity of $S(Q'|E')$ in $\rho'_{QE}$
and from the concavity of $S(Q')$ in $\rho'_Q$~\cite{bib_wehrl}.
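Concavity can also be spot-checked numerically. The sketch below is our own construction (the helper `processed_info` and the phase-flip channel are illustrative choices, not from the text): it purifies the input, dilates the channel with an environment, and compares $I_Q$ of a mixture against the mixture of the individual $I_Q$'s.

```python
import numpy as np

def vn(rho):
    """Von Neumann entropy in bits (zero eigenvalues skipped)."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

def processed_info(rho, kraus):
    """I_Q = S(R:Q') for input rho sent through the channel with Kraus
    operators kraus: purify rho with a reference R, dilate the channel
    with an environment E, and compute the mutual von Neumann entropy."""
    d = rho.shape[0]
    w, v = np.linalg.eigh(rho)
    psi = np.zeros((d, d), dtype=complex)     # |RQ> amplitudes psi[r, q]
    for i in range(d):
        if w[i] > 1e-12:
            psi[i, :] = np.sqrt(w[i]) * v[:, i]
    # isometry on Q: |q> -> sum_k (K_k |q>) |k>_E, giving phi[r, q, k]
    phi = np.einsum('kqj,rj->rqk', np.array(kraus), psi)
    rho_RQ = np.einsum('rqk,syk->rqsy', phi, phi.conj()).reshape(d*d, d*d)
    rho_R = np.einsum('rqk,sqk->rs', phi, phi.conj())
    rho_Q = np.einsum('rqk,ryk->qy', phi, phi.conj())
    return vn(rho_R) + vn(rho_Q) - vn(rho_RQ)

# Spot-check of concavity for a phase-flip channel (parameters ours)
p = 0.2
K = [np.sqrt(1 - p) * np.eye(2), np.sqrt(p) * np.diag([1.0, -1.0])]
rho1, rho2, w1 = np.diag([0.9, 0.1]), np.diag([0.3, 0.7]), 0.4
lhs = processed_info(w1 * rho1 + (1 - w1) * rho2, K)
rhs = w1 * processed_info(rho1, K) + (1 - w1) * processed_info(rho2, K)
```

The check `lhs >= rhs` realizes $I_Q(\rho_Q)\geq\sum_i w_i I_Q(\rho_Q^i)$ for this particular channel and pair of inputs.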
\par
Convexity of the processed information in $\rho_Q'$ [axiom
(iii)] states that, if the superoperator that takes
a fixed $\rho_Q$ into $\rho_Q'$ is such that
\begin{eqnarray}
\rho_Q'=\sum_j w_j \rho'^j_Q\;,
\end{eqnarray}
then
\begin{eqnarray}
I_Q(\rho_Q\to \rho_Q') \leq \sum_j w_j I_Q(\rho_Q\to \rho'^j_Q)\;.
\end{eqnarray}
Thus, the processed information of a channel that is a
``superposition'' of
channels (each used with probability $w_j$)
that result in $\rho_Q'$ cannot exceed the average of the
information for each channel. One has a similar property
for classical channels: the mutual information $H(X{\rm:}Y)$
is a convex function of $p(y|x)$ for a fixed input distribution $p(x)$.
The proof follows from noting that, if the input is fixed,
we have
\begin{eqnarray}
\rho'_{RQ}=\sum_j w_j \rho'^j_{RQ}\;.
\end{eqnarray}
Then, expressing the quantum information as
\begin{eqnarray}
I_Q(\rho_Q\to \rho_Q') = S(R{\rm:}Q') = S(R)-S(R|Q')\;,
\end{eqnarray}
and noting that $S(R)$ is constant, the concavity of
$S(R|Q')$ in $\rho'_{RQ}$ implies the convexity of the quantum
information in the output.
\par
Finally, the subadditivity of quantum information [axiom (iv)] is a
condition which ensures that the information processed by a joint
channel with input $\rho_{Q_1Q_2}$ is smaller than or equal to the
information processed ``in parallel'' by two channels with input
$\rho_{Q_1}=\tra_{Q_2}(\rho_{Q_1Q_2})$ and
$\rho_{Q_2}=\tra_{Q_1}(\rho_{Q_1Q_2})$ respectively.
Thus, if $R$ is the reference system purifying the joint input
$Q_1Q_2$, $Q_1$ is purified by $RQ_2$ while $Q_2$ is purified by
$RQ_1$ (see Fig.~\ref{channel-figsub}).
\begin{figure}
\caption{Parallel channels as quantum network, in the derivation of
the subadditivity of mutual von Neumann entropies. The
entanglement between $Q_1$, $Q_2$, and the reference is indicated by a
dashed line.}
\vskip 0.25cm
\centerline{\psfig{figure=figsub-new.ps,width=2.0in,angle=-90}}
\label{channel-figsub}
\vskip -0.25cm
\end{figure}
The subadditivity of von Neumann mutual entropies for such a
channel can be written as
\begin{eqnarray} \label{eq_subadditiv}
S(R{\rm:} Q_1' Q_2') \leq S(RQ_2{\rm:}Q_1')+S(RQ_1{\rm:}Q_2')\;,
\end{eqnarray}
which can be read as
\begin{eqnarray}
I_{12}\leq I_1 + I_2
\end{eqnarray}
with the corresponding identifications, and mirrors the classical
inequality
\begin{eqnarray}
H(X_1X_2{\rm:}Y_1Y_2)\leq H(X_1{\rm:}Y_1)+H(X_2{\rm:}Y_2)
\end{eqnarray}
for two independent channels taking $X_1\to Y_1$ and $X_2\to Y_2$.
To prove inequality~(\ref{eq_subadditiv}), we
rewrite the quantum information of each channel
using Eq.~(\ref{eq-24}) and the fact that $E_1$ and $E_2$ are initially
in a {\em product} state.
Eq.~(\ref{eq_subadditiv}) then becomes
\begin{eqnarray}
&&S(Q_1'Q_2'|E_1'E_2')+S(Q_1'Q_2')\leq\nonumber \\
&&S(Q_1'|E_1')+S(Q_1')+S(Q_2'|E_2')+S(Q_2')\;.
\end{eqnarray}
Subadditivity of {\em conditional} entropies, i.e.,
\begin{eqnarray}
&&\hspace{-0.3cm}S(Q_1'Q_2'|E_1'E_2')\nonumber\\
&=&S(Q_1'|E_1'E_2')+S(Q_2'|E_1'E_2')-
\underbrace{S(Q_1'{\rm:}Q_2'|E_1'E_2')}_{\geq0}\nonumber\\
&\leq&S(Q_1'|E_1'E_2')+S(Q_2'|E_1'E_2')\nonumber\\
&\leq&S(Q_1'|E_1')-\underbrace{S(Q_1'{\rm:}E_2'|E_1')}_{\geq0}
+S(Q_2'|E_2')-\underbrace{S(Q_2'{\rm:}E_1'|E_2')}_{\geq0}\nonumber\\
&\leq& S(Q_1'|E_1')+ S(Q_2'|E_2')\;,
\end{eqnarray}
together with the subadditivity property of ordinary (marginal)
von Neumann entropies, proves Eq.~(\ref{eq_subadditiv}). The terms that
are ignored in the above inequality are non-negative due to strong
subadditivity. This property of subadditivity of the information
processed by quantum channels can be straightforwardly extended
to $n$ channels.
An alternative definition for the quantum information processed by a
channel, called ``coherent information'', has been proposed by
Schumacher and Nielsen~\cite{bib_schum2}, and by
Lloyd~\cite{bib_lloyd}.
This quantity $I_e=S(R'|E')=S-L_Q$ can be negative, violating axiom
(i); it also violates axioms (ii) and (iv), which leads to a {\em
violation} of the reverse data-processing inequality, while the
``forward'' one is respected~\cite{bib_schum2} (as opposed to the von
Neumann mutual entropy, which obeys both, see below).
The coherent information attempts to capture the ``purely'' quantum
piece of the processed information while separating out any classical
components. This separation appears to be at the origin of the
shortcomings mentioned above.
\subsection{Inequalities for quantum channels}
From the properties of the ``mutual entanglement'' $I_Q$
derived above, we can prove data-processing inequalities for
$I_Q$ which reflect probability conservation, as well as the Fano
inequality which relates the loss of a channel to the fidelity of a code.
\vskip 0.25cm
\noindent{\it (i) Data-processing}
\vskip 0.25cm
Assume that starting with
the entangled state $QR$, entanglement with environment $E_1$ produces
the mixed state $Q_1$. This output is used again as an input to
another channel, this time
entangling $Q_1$ with $E_2$ to obtain $Q_2$ (see Fig.~\ref{fig_dpi}).
\begin{figure}
\caption{Chaining of channels in the derivation of the data-processing
inequality. The output $Q_1$ is subjected to a second channel by entangling
with an environment $E_2$ independent from $E_1$, to give output $Q_2$.}
\vskip 0.25cm
\centerline{\psfig{figure=chain-new.ps,width=3in,angle=-90}}
\label{fig_dpi}
\vskip -0.25cm
\end{figure}
The quantum analogue of the (forward) data-processing inequality
(\ref{dataproc}) that holds
for mutual informations in classical channels involves the mutual
entanglements $S(R{\rm:}Q_1)$ and $S(R{\rm:}Q_2)$, and asserts that
the mutual entanglement between reference and output cannot be increased by
any further ``processing'':
\begin{eqnarray}
S(R{\rm:}Q_2)\leq S(R{\rm:}Q_1)\leq 2S\;.\label{qdatapr}
\end{eqnarray}
That such an inequality should hold is almost obvious from the
definition of the mutual entanglement, but a short proof is given below.
This proof essentially follows Ref.~\cite{bib_schum2},
and is based on the property of strong subadditivity applied to
the system $RE_1E_2$:
\begin{eqnarray}
S(R{\rm:}E_2|E_1)=S(R{\rm:}E_1E_2)-S(R{\rm:}E_1)\geq 0\;.\label{strongsub}
\end{eqnarray}
For the channel $Q\rightarrow Q_1$,
we see easily (see Fig.~\ref{fig_loss}) that
\begin{eqnarray}
S(R{\rm:}E_1) &=& S(R{\rm:}Q_1E_1)-S(R{\rm:}Q_1|E_1) \nonumber\\
& = & 2S-S(R{\rm:}Q_1)\;. \label{app1}
\end{eqnarray}
Similarly, considering $E_1E_2$ as the environment for the ``overall''
channel $Q\rightarrow Q_2$, we find
\begin{eqnarray}
S(R{\rm:}E_1E_2)=2S-S(R{\rm:}Q_2)\;. \label{app2}
\end{eqnarray}
Plugging Eqs.~(\ref{app1}) and (\ref{app2}) into the positivity condition
(\ref{strongsub}), we obtain the quantum data processing inequality,
Eq.~(\ref{qdatapr}), as claimed.
\par
The {\em reverse} quantum data-processing inequality implies
that the entanglement processed by the second leg of the
channel, $S(RE_1{\rm:}Q_2)$, must be larger than the entanglement
processed by the entire channel:
\begin{eqnarray}\label{eq36}
S(R{\rm:}Q_2) \leq S(R E_1{\rm:}Q_2)\leq S(R E_1 E_2{\rm:}Q_2)= 2 S(Q_2)\;.
\end{eqnarray}
The proof relies on strong subadditivity applied to $Q_2 E_1 E_2$:
\begin{eqnarray}
S(Q_2{\rm:}E_1|E_2)=S(Q_2{\rm:}E_1E_2)-S(Q_2{\rm:}E_2)\geq 0\;.
\label{strongsub2}
\end{eqnarray}
For treating the channel $Q_1\rightarrow Q_2$ (i.e., the ``second
leg''), we have to purify
the input state of $Q_1$, that is, consider $RE_1$ as the ``reference''.
Thus, we have
\begin{eqnarray}
S(Q_2{\rm:}RE_1) = 2 S(Q_2) - S(Q_2{\rm:}E_2)\;.
\end{eqnarray}
For the ``overall'' channel $Q\rightarrow Q_2$, we have
\begin{eqnarray}
S(Q_2{\rm:}R)=2 S(Q_2) - S(Q_2{\rm:}E_1 E_2)\;.
\end{eqnarray}
These two last equations together with Eq.~(\ref{strongsub2}),
result in the reverse quantum data-processing inequality, Eq.~(\ref{eq36}).
\par
From Eq.~(\ref{qdatapr}) we obtain immediately an inequality relating
the loss of entanglement after the first stage $L_1$ (we drop the
index $Q$ that indicated the quantum nature of the loss in this
discussion), with the overall loss,
$L_{12}$:
\begin{eqnarray}
0\leq L_1\leq L_{12}\;. \label{lossineq}
\end{eqnarray}
Physically, this implies that the loss $L_{12}$ cannot decrease from
simply chaining channels, just as in the classical case. As
emphasized earlier, the loss $L_1$ corresponds to the share of initial
entanglement that is irretrievably lost to the environment. Indeed, if
the environment cannot be accessed (which is implicit by calling it an
environment) the decoherence induced by the channel cannot be
reversed. Only if $L_1=0$ can this be achieved~\cite{bib_schum2}.
In view of this fact, it is natural to seek a quantum
equivalent to the classical Fano inequality~(\ref{fanocl}).
\vskip 0.25cm
\noindent{\it (ii) Fano inequality}
\vskip 0.25cm
To investigate this issue, let us consider the chained channel
above, where error correction has taken place via transfer of entanglement
with a second environment. Let us also recall the definition of
``entanglement fidelity'' of Schumacher~\cite{bib_schum1}, which is
a measure of how faithfully the dynamics of the channel has preserved
the initial entangled quantum state $QR$:
\begin{eqnarray}
F_e(QR,Q'R) = \langle QR|\,\rho_{Q'R}\,|QR\rangle\equiv F_e^{QQ'}\;.\label{fid}
\end{eqnarray}
Since this entanglement fidelity does not depend on the reference
system~\cite{bib_schum1}, we drop $R$ from $F_e$ from here on, as indicated
in Eq.~(\ref{fid}).
Naturally, the entanglement fidelity can be related to the
probability of error of the channel. The quantum analogue of the
classical Fano inequality should relate the fidelity of the {\em code}
(in our example above the fidelity between $QR$ and $Q_2R$, the
error-corrected system) to the loss of the error-correcting channel $L_{12}$.
The derivation of such an inequality is immediate
using the Fano-type inequality derived by Schumacher~\cite{bib_schum1},
which relates
the entropy of the environment of a channel $S(E')$
to the fidelity of entanglement,
\begin{eqnarray}
S(E')\leq H_2[F_e^{QQ'}]\,+\,(1-F_e^{QQ'})\log{(d_Qd_R-1)}\;,\label{eqfano}
\end{eqnarray}
where $d_Q$ and $d_R$ are the Hilbert-space dimensions of $Q$ and $R$
respectively, and $H_2[F]$ is again the dyadic Shannon entropy.
Let us apply this
inequality to an error-correcting channel (decoherence +
error-correction), i.e.,
the chained channel considered above. In that case, the environment is
$E_1E_2$, and the entanglement fidelity is now between $Q$ and $Q_2$, i.e.,
the fidelity of the {\em code}, and we obtain
\begin{eqnarray}
S(E_1E_2)\leq H_2[F_e^{QQ_2}]\,+\,(1-F_e^{QQ_2})\log{(d-1)}\;.
\end{eqnarray}
Here, $d=d_R\,d_{Q_2}$ can be viewed as the Hilbert space dimension of
the code (this is more apparent in superdense coding discussed in
Section VI).
To derive the required relationship, we simply note that
\begin{eqnarray}
S(E_1E_2) \geq L_{12}/2
\end{eqnarray}
[this is Eq.~(\ref{ineq2}) applied to the composite channel]. This relates
the fidelity of the code $F_e^{QQ_2}$ to the loss $L_{12}$, yielding the Fano
inequality for a quantum code
\begin{eqnarray}
L_{12}\leq 2\left[H_2[F_e^{QQ_2}]+
\left(1-F_e^{QQ_2}\right)\log{(d-1)}\right]\;. \label{fano}
\end{eqnarray}
As we noticed throughout the construction of quantum channels, a factor
of 2 appears also in the quantum Fano inequality, commensurate with
the fact that the loss can be twice the initial entropy.
Inequality (\ref{fano}) puts an upper limit on the fidelity
of a code for any non-vanishing loss $L_{12}$.
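A numerical illustration of Eq.~(\ref{fano}) (ours, using a single uncorrected channel in place of the chained one): for a phase-flip channel acting on half of a Bell pair one finds $L=H_2[p]$, while Schumacher's formula $F_e=\sum_k|{\rm tr}(\rho K_k)|^2$ gives $F_e=1-p$, so the bound can be scanned over the whole range of $p$.

```python
import numpy as np

def h2(p):
    """Dyadic Shannon entropy in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

# Phase-flip channel on one half of a Bell pair: loss L = h2(p),
# entanglement fidelity F_e = 1 - p, code dimension d = d_R*d_Q = 4.
for p in np.linspace(0.0, 1.0, 101):
    L = h2(p)
    Fe = 1.0 - p
    bound = 2.0 * (h2(Fe) + (1.0 - Fe) * np.log2(4 - 1))
    assert L <= bound + 1e-12             # quantum Fano bound holds
```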
\section{Classical use of quantum channel}
In recent papers~\cite{bib_hausladen,bib_kholevo97}, the capacity for
the transmission of {\em classical} information through quantum channels has
been discussed. Essentially, this capacity is equal to the
maximal accessible information $\chi$ in the system, known as the Kholevo
bound~\cite{bib_kholevo}.
What we show in the following is that the mutual entanglement
introduced in the previous section, i.e., the quantum mutual entropy
$S(R:Q')$ between the ``decohered'' quantum state $Q'$ and the
``reference'' state $R$, reduces to $\chi$ if the quantum state is
measured before it is transmitted, or, equivalently, if $Q$ is prepared
by a classical ``preparer'' $X$. Let the system $QR$ be ``purified'' again
via a Schmidt decomposition as in Eq.~(\ref{eq9}). If we measure
$Q$ in its eigenbasis we can write
\begin{eqnarray}
|RXQ\rangle=\sum_i \sqrt{p_i}\,|r_i\,x_i\, i\rangle\;,
\end{eqnarray}
where $x_i$ are the eigenstates of $X$ (if $X$ is in state $x_i$, $Q$
is in state $i$ etc.). (Figure~\ref{fig_trip} summarizes the
relationship between the respective entropies.)
Naturally then, tracing over $R$ we obtain
\begin{eqnarray}
\rho_{XQ}=\sum_ip_i\,|x_i\rangle\langle x_i|\otimes \rho_i\label{eq28}
\end{eqnarray}
with $\rho_i=|i\rangle\langle i|$, and similarly for $\rho_{RQ}$.
\begin{figure}
\caption{Entanglement between $Q$, $R$, and the ancilla (or preparer)
$X$ after measurement of the initial state of $Q$ by $X$, but prior
to entanglement with the environment. The initial state of $Q$
(before decoherence) is kept in memory, as it were, by $X$ via
classical correlation with $Q$. }
\vskip 0.25cm
\centerline{\psfig{figure=channel-fig_trip.ps,width=1.25in,angle=-90}}
\label{fig_trip}
\vskip -0.25cm
\end{figure}
Thus, $X$ and $Q$ are {\em classically}
correlated: each state of the ``preparer'' $X$ represents a state of
$Q$, or alternatively, $X$ reflects (keeps in memory) the initial quantum
state of $Q$. If the entropy of the quantum system $Q$ before
transmission is $S$ (just like in the previous section), the mutual
entropy between $R$ and $Q$ (as well as between $X$ and $Q$) is also
$S$, unlike the value $2S$ found in the quantum use. Decoherence now
affects $Q$ by entangling it with the environment, just like earlier.
Thus,
\begin{eqnarray}
\rho_{XQ}\to\rho_{XQ'}=\sum_ip_i|x_i\rangle\langle x_i|\otimes \rho'_i
\end{eqnarray}
where
\begin{eqnarray}
\rho_i^\prime=\tra_E\left\{U_{QE}\,\left(\rho_i\otimes
|0\rangle\la0|\right)\,U^\dagger_{QE}\right\}\;, \label{eq31}
\end{eqnarray}
and we assumed again that the environment $E$ is in a fixed ``0'' state
before interacting with $Q$. The proof now proceeds as before, except
that the loss in the ``classical'' channel obeys different
inequalities. The requirement that the entangling
operation $U_{QE}$ does not affect $X$ or $R$ now implies
\begin{eqnarray}
S(X'{\rm:}E'Q')=S(X{\rm:}Q)=S(R{\rm:}Q)= S \label{eq32}
\end{eqnarray}
(see Figure~\ref{classic-fig1}).
\begin{figure}
\caption{Unitary transformation entangling the ``preparer'' (or
alternatively, the classical ``memory'') $X$ with the pure
environment $E$ and the quantum system $Q$. Neither the reference
$R$ nor the preparer $X$ are affected by this operation. As the
ternary Venn diagram between $Q'$, $E'$ and $X'$ is not pure in this
case, mutual entropy between $Q'$ and $X'$ {\em can} be shared by
$E'$.
\label{classic-fig1}}
\vskip 0.25cm
\centerline{\psfig{figure=classic-fig1.ps,width=3.3in,angle=-90}}
\vskip -0.25cm
\end{figure}
Applying the chain rule to the left hand side of Eq.~(\ref{eq32}) leads to
\begin{eqnarray}
S(X'{\rm:}E'Q')= S(X{\rm:}Q')+S(X{\rm:}E'|Q')\;.\label{eq33}
\end{eqnarray}
The quantum mutual entropy between the preparer and the quantum state
after decoherence, $S(X{\rm:}Q')$, can be shown to be equal to the Kholevo
bound $\chi$ (see Ref.~\cite{bib_access}). With $L=S(X{\rm:}E'|Q')$
(the classical loss of the channel) we thus conclude from
Eqs.~(\ref{eq33}) and (\ref{eq32}) that
\begin{eqnarray}
S=\chi+L\;.
\end{eqnarray}
Note that $S(X{\rm:}Q')$ is equal to $S(R{\rm:}Q')$, the mutual
entanglement $I_Q$ introduced earlier, as
$S(X)=S(R)$ and $S(XQ')=S(RQ')$.
Thus,
\begin{eqnarray}
I_Q\equiv S(R:Q')=\chi
\end{eqnarray}
if known quantum states are sent through the
channel, as advertised. It was shown recently by Kholevo~\cite{bib_kholevo97}
that the maximum of the latter quantity indeed plays the role of
channel capacity for classical information transmission
\begin{eqnarray}
C=\max_{p_i}\,\left[S(\rho')-\sum_i p_iS(\rho'_i)\right]\equiv\max_{p_i}\,\chi
\end{eqnarray}
where $\{p_i\}$ is a probability distribution of symbols at the
source, and $\rho'_i$ are the (not necessarily orthogonal) quantum
states received at the output, with the probability
distribution $\{p_i\}$ and $\rho'=\sum_i p_i\rho'_i$.
Thus, the quantity $C_Q$ that we propose as a capacity for
entanglement/correlation transmission reverts to the capacity for information
transmission $C$ if the unknown quantum states are {\em measured} before
transmission. This represents solid evidence in favor of our interpretation.
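As a concrete illustration of the Kholevo quantity (ours, with an arbitrarily chosen two-state ensemble), $\chi=S(\rho')-\sum_i p_iS(\rho'_i)$ can be computed directly; for an equal mixture of the non-orthogonal pure states $|0\rangle$ and $|+\rangle$ it falls strictly below one bit:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr rho log2 rho, in bits."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

def holevo_chi(probs, states):
    """chi = S(sum_i p_i rho_i) - sum_i p_i S(rho_i)."""
    rho = sum(p * r for p, r in zip(probs, states))
    return von_neumann_entropy(rho) - sum(
        p * von_neumann_entropy(r) for p, r in zip(probs, states))

# Equal mixture of the non-orthogonal pure qubit states |0> and |+>.
ket0 = np.array([1.0, 0.0])
ketp = np.array([1.0, 1.0]) / np.sqrt(2)
chi = holevo_chi([0.5, 0.5], [np.outer(ket0, ket0), np.outer(ketp, ketp)])
print(round(chi, 4))  # -> 0.6009, strictly below 1 bit
```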
Let us now calculate
the quantities introduced here for a specific simple model of quantum noise.
\section{Quantum depolarizing channel}
The quantum depolarizing channel is an idealization of a quantum
storage and transmission process in which the stored quantum state can
undergo bit-flip and phase errors. This is not the most general
one-qubit channel\footnote{A more general depolarizing channel could
be constructed by allowing a different probability for each of the
possible errors.}, but appears to be sufficient to examine a number of
interesting aspects of quantum communication.
\subsection{Quantum use}
Imagine a quantum state
\begin{eqnarray} |\Psi\rangle =
\alpha\,|0\rangle + \beta\,|1\rangle\;,
\end{eqnarray}
where the basis states of
the qubit can be taken to be spin-1/2 states polarized in the
$z$-direction, for example. (Specifically, we use the convention
$\sigma_z|1\rangle=|1\rangle$.) The depolarizing channel is
constructed in such a way that, due to an interaction with an
environment, the quantum state survives with probability $1-p$, but is
depolarized with probability $p/3$ by either a pure bit-flip, a pure
phase-error, or a combination of both:
\begin{eqnarray}
|\Psi\rangle &\stackrel{1-p}\longrightarrow & |\Psi\rangle\;,\nonumber\\
|\Psi\rangle& \stackrel{p/3}\longrightarrow & \sigma_x|\Psi\rangle=
\alpha\,|1\rangle+\beta\,|0\rangle\;, \nonumber\\
|\Psi\rangle &\stackrel{p/3}\longrightarrow & \sigma_z|\Psi\rangle=
-\alpha\,|0\rangle+\beta\,|1\rangle\;, \nonumber\\
|\Psi\rangle &\stackrel{p/3}\longrightarrow & \sigma_x\sigma_z|\Psi\rangle=-
\alpha\,|1\rangle+\beta\,|0\rangle \;,
\end{eqnarray}
where the $\sigma$ are Pauli matrices. Such an ``arbitrary'' quantum
state $\Psi$ can, without loss of generality, be considered a state
$Q$ that is entangled with a reference state $R$, such that
the marginal density matrix of $Q$ can be written as
\begin{eqnarray}
\rho_Q =q\,|0\rangle\la0|\,+\,(1-q)\,|1\rangle\la1|
\label{rhomix}
\end{eqnarray}
with entropy $S(\rho_Q)=-\tra\rho_Q\log\rho_Q=H_2[q]$ and $q$ a probability
($0\leq q\leq 1$). In other words, the coefficients $\alpha$ and
$\beta$ need not be complex numbers. Conversely,
we can start with such a mixed state at the input, and consider $QR$
as a {\em pure} quantum state from which this mixed
state is obtained. For example,
\begin{eqnarray}
|QR\rangle = \sqrt{1-q}\,|10\rangle\, - \,\sqrt q\,|01\rangle\;. \label{qr}
\end{eqnarray}
Naturally then, the mixed state Eq.~(\ref{rhomix}) is obtained by simply
tracing over this reference state. Pure states with real coefficients
such as (\ref{qr}) are not general, but suffice for the depolarizing channel
as $R$ is always traced over.
Let us now construct a basis for $QR$ that interpolates between
completely independent and completely entangled states, and allows us
to choose the initial entropy of $Q$ with a single parameter $q$. We thus
introduce the orthonormal ``$q$-basis'' states
\begin{eqnarray}
|\Phi^-(q)\rangle &=& \sqrt{1-q}\,|00\rangle\,-\,\sqrt q\,|11\rangle\;, \nonumber \\
|\Phi^+(q)\rangle &=& \sqrt q\,|00\rangle\,+\,\sqrt{1-q}\,|11\rangle\;, \nonumber \\
|\Psi^-(q)\rangle &=& \sqrt{1-q}\,|10\rangle\,-\,\sqrt q\,|01\rangle\;, \nonumber \\
|\Psi^+(q)\rangle &=& \sqrt q\,|10\rangle\,+\,\sqrt{1-q}\,|01\rangle\;.
\end{eqnarray}
Note that for $q=0$ or 1, these states are product states,
while for $q=1/2$ they are completely entangled, and $\Psi^\pm(1/2)$
and $\Phi^\pm(1/2)$ are just the usual Bell basis states. The
possibility of quantum decoherence of these states is introduced by
entangling them with an environment in a pure state, taken to be of
the same Hilbert space dimension as $QR$ for simplicity, i.e., a
four-dimensional space for the case at hand. This is the
minimal realization of a depolarizing channel.
Let us assume that $QR$
(for definiteness) is initially in the state $|\Psi^-(q)\rangle$, and the
environment in a superposition
\begin{eqnarray}
|E\rangle &= &\sqrt{1-p}\,|\Psi^-(q)\rangle \nonumber \\
&+&\sqrt{p/3}\left(|\Phi^-(q)\rangle+|\Phi^+(q)\rangle+|\Psi^+(q)\rangle\right)\;.
\end{eqnarray}
The environment and $QR$ are then entangled by means of the unitary operator
$U_{QRE}=U_{QE}\otimes 1_R$, with
\begin{eqnarray}
U_{QE} &=
&1\otimes P_{\Psi^-}(q)+
\sigma_x\otimes P_{\Phi^-}(q)\nonumber\\
&+&(-i\sigma_y)\otimes P_{\Phi^+}(q)+
\sigma_z\otimes P_{\Psi^+}(q)\;,
\end{eqnarray}
where the $P_{\Phi}(q)$ and $P_{\Psi}(q)$ stand for projectors
projecting onto $q$-basis states. Note that the Pauli matrices act
only on the first bit of the $q$-basis states, i.e., the entanglement
operation only involves $Q$ and $E$. Depending on the entanglement
between $Q$ and $R$, however, this operation also affects the
entanglement between $R$ and $E$. Thus, we obtain the state
\begin{eqnarray}
\lefteqn{|Q^\prime R^\prime E^\prime\rangle = U_{QRE}|QR\rangle|E\rangle = }\nonumber \\
&& \sqrt{1-p}\,|\Psi^-_{QR}(q),\,\Psi^-_E(q)\rangle +
\sqrt{p/3}\left(|\Phi^-_{QR}(q)\;,\Phi^-_E(q)\rangle +\right.\nonumber\\
&&\left.|\Phi^+_{QR}(1-q)\;,\Phi^+_E(q)\rangle +
|\Psi^+_{QR}(1-q)\;,\Psi^+_E(q)\rangle\right)
\end{eqnarray}
on account of the relations
\begin{eqnarray}
\sigma_x |\Psi^-_{QR}(q)\rangle & = &
|\Phi^-_{QR}(q)\rangle \;, \\ (-i\,\sigma_y) |\Psi^-_{QR}(q)\rangle & = &
|\Phi^+_{QR}(1-q)\rangle \;,\\ \sigma_z |\Psi^-_{QR}(q)\rangle & = &
|\Psi^+_{QR}(1-q)\rangle\;,
\end{eqnarray}
and with obvious notation to distinguish the environment ($E$) and
quantum system ($QR$) basis states. The (partially depolarized)
density matrix
for the quantum system is obtained by tracing over the environment:
\begin{eqnarray}
\rho_{Q'R'} & = & \tra_E\left(|Q'R'E'\rangle\langle Q'R'E'|\right) =
(1-p)\,P_{\Psi^-}(q) +\nonumber \\
& & p/3\left[P_{\Phi^-}(q)+P_{\Phi^+}(1-q)+P_{\Psi^+}(1-q)\right]\;.
\end{eqnarray}
Its eigenvalues can be computed, yielding the entropy:
\begin{eqnarray}
S_e(p,q)\equiv S(Q'R') = \nonumber \hspace{4.5cm}\\
H[\frac{2p}3(1-q),\frac{2pq}3,\frac12(1-\frac{2p}3+\Delta),
\frac12(1-\frac{2p}3-\Delta)]\;,
\end{eqnarray}
with $H[p_1,\ldots,p_4]$ the Shannon entropy, and
\begin{eqnarray}
\Delta = \left[(1-2p/3)^2-16/3\,p(1-p)\,q(1-q)\right]^{1/2}\;.
\end{eqnarray}
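This closed-form spectrum can be checked directly by diagonalizing $\rho_{Q'R'}$ numerically. The following Python sketch (an illustration, not part of the original derivation) builds the density matrix from the $q$-basis projectors and compares its eigenvalues with the four values entering $S_e(p,q)$:

```python
import numpy as np

def qbasis(q):
    """q-basis states |Phi^->, |Phi^+>, |Psi^->, |Psi^+>
    as vectors in the product ordering (|00>, |01>, |10>, |11>)."""
    a, b = np.sqrt(q), np.sqrt(1.0 - q)
    phim = np.array([b, 0.0, 0.0, -a])
    phip = np.array([a, 0.0, 0.0,  b])
    psim = np.array([0.0, -a, b, 0.0])
    psip = np.array([0.0,  b, a, 0.0])
    return phim, phip, psim, psip

def proj(v):
    return np.outer(v, v)

def spectrum_numeric(p, q):
    """Eigenvalues of rho_{Q'R'} = (1-p) P_{Psi^-}(q)
    + p/3 [P_{Phi^-}(q) + P_{Phi^+}(1-q) + P_{Psi^+}(1-q)]."""
    phim_q, _, psim_q, _ = qbasis(q)
    _, phip_1q, _, psip_1q = qbasis(1.0 - q)
    rho = ((1.0 - p) * proj(psim_q)
           + p / 3.0 * (proj(phim_q) + proj(phip_1q) + proj(psip_1q)))
    return np.sort(np.linalg.eigvalsh(rho))

def spectrum_closed_form(p, q):
    """The four eigenvalues quoted in the text, with Delta as given."""
    delta = np.sqrt((1 - 2*p/3)**2 - 16.0/3.0 * p*(1-p)*q*(1-q))
    return np.sort(np.array([2*p/3*(1-q), 2*p/3*q,
                             0.5*(1 - 2*p/3 + delta),
                             0.5*(1 - 2*p/3 - delta)]))

print(np.allclose(spectrum_numeric(0.2, 0.3), spectrum_closed_form(0.2, 0.3)))  # -> True
```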
By tracing over the reference state we obtain the density matrix of
the quantum system after the interaction $\rho_{Q'}$, and its respective
entropy
\begin{eqnarray}
S'(p,q)\equiv S(Q')= H_2[q+\frac{2p}3(1-2q)]\;.
\end{eqnarray}
Together with the entropy of the reference state (which is unchanged
since $R$ was not touched by the interaction), $S(R')=S(R)=H_2[q]$,
this is enough to fill in the ternary entropy diagram reflecting the
dynamics of the channel, Fig.~\ref{fig_loss}. We thus find the
mutual entanglement processed by the channel:
\begin{eqnarray}
I_Q=S(Q'{\rm:}R)=2H_2[q]-L_Q(p,q)\;,
\end{eqnarray}
where the loss is
\begin{eqnarray}
L_Q(p,q)=H_2[q]-H_2[q+\frac{2p}3(1-2q)]+ S_e(p,q)\;.
\end{eqnarray}
The mutual entanglement is plotted in Fig.~\ref{fig_3d},
as a function of the error probability $p$ of the channel
and of the parameter $q$ which determines the initial entropy.
\begin{figure}
\caption{Mutual entanglement between the depolarized state $Q'$ and the
reference system $R'=R$, as a function of error $p$ and
parameter $q$. Note that the channel is 100\% depolarizing
at $p=3/4$. The concavity in $q$ [according to axiom (ii)] as well as
the convexity in $p$ [axiom (iii)] are apparent.}
\vskip 0.25cm
\centerline{\psfig{figure=newmutentpq.ps,width=3.5in,angle=0}}
\label{fig_3d}
\vskip -0.25cm
\end{figure}
The mutual entanglement is maximal when the entropy of the
source is maximal (as in the classical theory), i.e., $q=1/2$. Then:
\begin{eqnarray}
C_Q &=& \max_q\, I_Q\nonumber\\
&=& 2-S_e(p,1/2)
= 2-H_2[p]-p\,\log3\;. \label{depolcap}
\end{eqnarray}
In that case, the maximal rate of entanglement transfer is 2 bits
(error-free transfer, $p=0$). The capacity only vanishes at $p=3/4$,
i.e., the 100\% depolarizing channel. This is analogous to the
vanishing of the classical capacity of the binary symmetric channel at
$p=1/2$. As an example of such a channel, we shall discuss the
transmission of the entanglement present in a Bell state (one of the
four maximally entangled states of a qubit pair) through a ``superdense coding''
channel in Section VI.A. The maximal mutual entanglement and minimal
loss implied by Eq.~(\ref{depolcap}) are plotted in
Fig.~\ref{fig_depol} as a function of $p$. This error rate $p$ can be
related to the fidelity of the channel by
\begin{eqnarray}
F_e^{Q'Q}= 1-p+\frac p3\,(1-2q)^2\;,
\end{eqnarray}
where $F_e^{Q'Q}$ is Schumacher's fidelity of entanglement introduced earlier.
Note that this implies that the Fano inequality Eq.~(\ref{eqfano}) is
saturated at $q=1/2$ for any $p$.
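As a numerical aside (our illustration, not the paper's), the capacity formula Eq.~(\ref{depolcap}) is easy to evaluate; one checks that it equals 2 bits at $p=0$ and vanishes exactly at the 100\% depolarizing point $p=3/4$:

```python
import math

def h2(x):
    """Binary entropy in bits."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def cq_depolarizing(p):
    """Von Neumann capacity of the depolarizing channel, Eq. (depolcap):
    C_Q = 2 - H2[p] - p log2(3)."""
    return 2.0 - h2(p) - p * math.log2(3)

print(cq_depolarizing(0.0))                 # -> 2.0 (error-free transfer)
print(abs(cq_depolarizing(0.75)) < 1e-9)    # -> True (vanishes at p = 3/4)
```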
\begin{figure}
\caption{Maximal entanglement transfer $C(p)$ and minimal loss $L(p)$
as a function of the error probability $p$.}
\vskip 0.25cm
\centerline{\psfig{figure=capacity.ps,width=2.5in,angle=90}}
\label{fig_depol}
\vskip -0.25cm
\end{figure}
\subsection{Classical use}
Now, instead of using the channel to transmit entanglement (sending
unknown quantum states), one could equally
well use it to send classical information (known quantum states) as
outlined in Section IV. Here, we calculate the capacity for the
transmission of classical information through the quantum depolarizing
channel and verify that the result is equal to the value obtained by
Calderbank and Shor~\cite{bib_calder} using the Kholevo theorem.
Before entanglement with the environment, let us then measure the mixed state
$Q$ via an ancilla $X$, after which $Q$ and $X$ are classically correlated,
with mutual entropy $H_2[q]$.
Note that this operation leads to an entangled triplet $QRX$
at the outset, as in Fig.~\ref{fig_trip}, with $S=H_2[q]$.
We now proceed with the calculation
as before. The basis states for the system $|QXR\rangle$ are then simply
\begin{eqnarray}
|\Phi_X^-(q)\rangle &=& \sqrt{1-q}\,|000\rangle\,-\,\sqrt q\,|111\rangle\;, \nonumber \\
|\Phi^+_X(q)\rangle &=& \sqrt q\,|000\rangle\,+\,\sqrt{1-q}\,|111\rangle\;, \nonumber \\
|\Psi^-_X(q)\rangle &=& \sqrt{1-q}\,|110\rangle\,-\,\sqrt q\,|001\rangle\;, \nonumber \\
|\Psi^+_X(q)\rangle &=& \sqrt q\,|110\rangle\,+\,\sqrt{1-q}\,|001\rangle\;,
\end{eqnarray}
where we used the index $X$ on the basis states to distinguish them
from the two-qubit basis states introduced earlier. The entanglement operation
is as before, with a unitary operator acting on $Q$ and $E$ only.
Because of the additional trace over the ancilla $X$, however,
we now find for the density matrix $\rho_{Q'R'}$:
\begin{eqnarray}
\rho_{Q'R'}
& = & (1-2p/3)\left[\,(1-q)|10\rangle\la10|\,+\,q|01\rangle\langle 01|\,\right]\nonumber\\
&+&2p/3\left[\,(1-q)|00\rangle\la00|\,+\,q|11\rangle\la11|\,\right]\;.
\end{eqnarray}
Consequently, we find for the mutual information transmitted through the
channel
\begin{eqnarray}
I = S(Q'{\rm:}R)=H_2[q]-L(p,q)\;,
\end{eqnarray}
with the (classical) loss of information
\begin{eqnarray}
L(p,q) &=&H[\frac{2p}3(1-q),\frac{2p}3q,
(1-\frac{2p}3)(1-q),(1-\frac{2p}3)q]\nonumber\\
&-&H_2[q+\frac{2p}3(1-2q)] \;.
\end{eqnarray}
Maximizing over the input distribution as before, we obtain
\begin{eqnarray}
C= \max_q S(Q'{\rm:}R) = 1-H_2[2p/3]\;, \label{classcap}
\end{eqnarray}
the result recently derived for the depolarizing channel directly from
the Kholevo theorem~\cite{bib_calder}. Note that
Eq.~(\ref{classcap}) is just the Shannon capacity of a binary symmetric
channel~\cite{bib_ash}, with a bit-flip probability of $2p/3$ (of the
three quantum error ``syndromes'', only two are classically detectable
as bit-flips).
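A quick numerical cross-check (ours, using an illustrative grid search over the input distribution) confirms that the four-term loss expression reduces to the binary-symmetric-channel form with flip probability $2p/3$, so the maximum reproduces Eq.~(\ref{classcap}):

```python
import math

def h2(x):
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def mutual_info(p, q):
    """I = S(Q':R) = H2[q] - L(p,q), with the four-term loss written out."""
    e = 2 * p / 3  # effective classical bit-flip probability
    terms = [e * (1 - q), e * q, (1 - e) * (1 - q), (1 - e) * q]
    H4 = -sum(t * math.log2(t) for t in terms if t > 0)
    L = H4 - h2(q + e * (1 - 2 * q))
    return h2(q) - L

p = 0.2
C = max(mutual_info(p, i / 1000) for i in range(1, 1000))
print(abs(C - (1 - h2(2 * p / 3))) < 1e-6)  # -> True: maximum at q = 1/2
```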
\section{Interpretation}
\subsection{Quantum capacity and superdense coding}
The interpretation of the capacity suggested here as a quantum
mechanical extension of the classical construction
can be illustrated in an
intuitive manner with the example of the depolarizing channel
introduced above. The idea is that $I_Q$ reflects the capacity for
transmission of quantum mutual entropy (entanglement and/or classical
information) but that the amount transferred in a particular channel
depends on how this channel is used.
A particularly elegant channel
that uses $I_Q$ to its full extent is the noisy ``superdense coding''
channel. There, the entanglement between sender and receiver is used
to transmit two bits of classical information by sending just {\em
one} quantum bit~\cite{bib_superdense,bib_neginfo}. In a general
superdense coding scheme, the initial state $QR$ is one of a set of
entangled states conditionally on classical bits $C$.
This situation
can be related to our previous discussion by noting that all
entropies appearing there are to be understood as
{\em conditional} on the classical bits $C$ that
are to be sent through the channel as shown in Fig.~\ref{fig-super}. The
von Neumann capacity introduced above is then just
\begin{eqnarray}
I_Q=S(R:Q'|C)\;. \label{eq-81}
\end{eqnarray}
It is not immediately obvious that this von Neumann capacity is equal
to the {\em classical} capacity between preparer (usually termed
Alice) and the receiver (Bob). However, it is not difficult to
prove [using the fact that $S(R{\rm:}Q)=S(R{\rm:}C)=S(Q{\rm:}C)=0$] that
Eq.~(\ref{eq-81}) is in fact equal to the maximal
amount of classical information about $C$ extractable from $RQ'$
(after $Q$ decohered),
which is\footnote{That the quantum mutual entropy between a preparer and
a quantum system is an upper bound to the amount of
classical information obtainable by measuring the quantum system
(the Kholevo bound) is shown in Ref.~\cite{bib_access}.}
\begin{eqnarray}
\chi=S(RQ':C)\;.
\end{eqnarray}
Thus, in this example the amount of entanglement processed in a channel
can be viewed as the amount of {\em classical} information about the
``preparer'' of the entangled state $QR$. This amount of information
can reach {\em twice} the entropy of $Q$ (2 bits in standard
superdense coding), which is classically impossible.
(The superdense coding and
teleportation channels will be discussed in detail elsewhere.)
\begin{figure}
\caption{Quantum Venn diagram for the noisy superdense coding
channel before decoherence. Conditionally on the classical bits $C$,
$QR$ is in a pure entangled state described by a Venn diagram of the
form $(-S,2S,-S)$. Note that no information about $C$ is contained
in $R$ or $Q$ {\em alone}, i.e., $S(C{\rm:}R)=S(C{\rm:}Q)=0$.
\label{fig-super}}
\vskip 0.25cm
\centerline{\psfig{figure=fig-super.ps,width=1.25in,angle=-90}}
\vskip -0.25cm
\end{figure}
Having established this relation between superdense coding and the
general quantum channels treated here, let us imagine that the qubit
that is sent through the channel (and which is ``loaded'' with
entanglement) is subject to the depolarizing noise of the previous
section. Indeed, if $p=0$ the two classical bits can be decoded
perfectly, achieving the value of the capacity. It has been argued
recently~\cite{bib_neginfo} that this can be understood by realizing
that besides the qubit that is sent forwards in time in the channel,
the entanglement between sender and receiver can be viewed as an
antiqubit sent {\em backwards} in time (which is equivalent to a qubit
sent forwards in time if the appropriate operations are performed on
it in the future). Thus, the quantum mechanics of superdense coding
allows for the time-delayed (error-free) transmission of information,
which shows up as excessive capacity of the respective channel. On the
other hand, it is known that (for un-encoded qubits) superdense coding
becomes impossible if $p\approx0.189$, which happens to be the precise
point at which $I_Q=1$. This is related to the fact that at this point
the ``purification'' of ``noisy'' pairs becomes impossible.
However, the capacity of this channel is not zero. While no
information can be retrieved ``from the past'' in this case, the
single qubit that is sent through the channel still carries
information, indeed, it shares one bit of mutual entropy with the qubit
stored by the receiver. Clearly, this is still a quantum channel: if
it were classical, the transmission of one bit could not take place
with unit rate and perfect reliability, due to the noise level
$p=0.189$. As the receiver possesses both this particle and the one
that was shared earlier, he can perform joint measurements (in the
space $Q'R$) to retrieve at least one of the two classical bits.
An extreme example is the
``dephasing'' channel, which is a depolarizing channel with only
$\sigma_z$-type errors, affecting the phase of the qubit.
As is well known, classical
bits are unaffected by this type of noise, while quantum
superpositions are ``dephased''. The channel becomes useless (for the
storage of superpositions) at $p=0.5$, yet measuring the qubit yields
one {\em classical} bit in an error-free manner. A calculation of
$\max_q S(R:Q')$ for this channel indeed yields
\begin{eqnarray}
I_Q(p)=2-H_2[p]\;.
\end{eqnarray}
Thus, in this limiting case it appears possible to separate the
classical ($I=1$) from the purely quantum capacity. However, it may
well be that this cannot be achieved in general. Below, we
show that such an ``excessive'' von Neumann capacity (as in superdense
coding) is consistent with a commensurate quantum Hamming bound.
\subsection{Quantum Hamming bounds}
Classically, the Hamming
bound~\cite{bib_ash} is an upper bound on the number $s$ of codewords
(bit-strings of length $n$) for a code to correct $t$ errors:
\begin{eqnarray}
s\,\sum_{i=0}^t {n \choose i}\le 2^n\;. \label{classham}
\end{eqnarray}
This is a necessary (but not sufficient) condition for error-free
coding, which reflects the necessary space to accommodate all the
codewords and associated descendants for all error syndromes.
For $s$ codewords coding for $k$ bits ($s=2^k$), we can
consider the asymptotics of (\ref{classham}) in the limit of
infinitely long messages ($n\rightarrow\infty$), and find that the
rate of error-free transmission is limited by
\begin{eqnarray}
R\le - \frac1n\log \sum_{i=0}^{pn}{n \choose i}
\left(\frac12\right)^i\left(\frac12\right)^{n-i}
\end{eqnarray}
where $R=k/n$ is the transmission rate and $p=t/n$ is the asymptotic
probability of error.
Using
\begin{eqnarray}
\lim_{n\to\infty} &-&\frac1n\log\left\{\sum_{i=0}^{pn} {n \choose i}
\,r^i\,(1-r)^{n-i}\right\} \nonumber \\
&=& p\,\log\frac pr\, + \, (1-p)\,\log\frac{1-p}{1-r}\nonumber\\
&\equiv&H(p,1-p\,\|\,r,1-r)\;,
\end{eqnarray}
where $H(p,1-p\,\|\,r,1-r)$ is the {\em relative} entropy between the
probability distributions $p$ and $r$, we can write
\begin{eqnarray}
R\le H(p,1-p\,\|\,1/2,1/2)=1-H_2(p)\;.
\end{eqnarray}
The relative entropy thus turns out to be just the classical capacity
of the channel, and measures the ``distance''
of the error-probability
of the channel relative to the ``worst case'', i.e., $p=1/2$. Note
that relative entropies are positive semi-definite.
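The asymptotic identification of the rate bound with a relative entropy can be verified numerically; the sketch below (our illustration, with arbitrary sample values $n=20000$, $p=0.1$, $r=1/2$) evaluates the binomial tail in log space:

```python
import math

def log2_binomial_tail(n, t, r):
    """log2 of sum_{i=0}^{t} C(n,i) r^i (1-r)^(n-i), computed stably
    via log-gamma and a log-sum-exp."""
    logs = [math.lgamma(n + 1) - math.lgamma(i + 1) - math.lgamma(n - i + 1)
            + i * math.log(r) + (n - i) * math.log(1 - r) for i in range(t + 1)]
    m = max(logs)
    return (m + math.log(sum(math.exp(x - m) for x in logs))) / math.log(2)

def relative_entropy(p, r):
    """H(p,1-p || r,1-r) in bits."""
    return p * math.log2(p / r) + (1 - p) * math.log2((1 - p) / (1 - r))

n, p, r = 20000, 0.1, 0.5
rate = -log2_binomial_tail(n, int(p * n), r) / n
print(rate, relative_entropy(p, r))  # agree to O(log n / n)
```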
For quantum channels, the standard quantum Hamming bound for
non-degenerate (orthogonal) codes is written
as~\cite{bib_laflamme,bib_ekert,bib_bdsw}
\begin{eqnarray}
2^k\,\sum_{i=0}^t 3^i{n \choose i}\le 2^n\;,
\end{eqnarray}
which expresses that the number of orthogonal states identifying the
error syndromes on the $2^k$ different messages must be smaller than
$2^n$, the dimension of the Hilbert space of the quantum
state $Q$ ($n$ qubits). In the limit of large $n$, this translates
into an upper bound for the rate of non-degenerate quantum codes
\begin{eqnarray}
R\le -\frac1n\log\left\{ \sum_{i=0}^{pn}{n \choose i}\left(\frac34\right)^i
\left(\frac14\right)^{n-i} \right\} -1 \;,
\end{eqnarray}
which can (as in the classical case) be written in terms of a relative
entropy
\begin{eqnarray}
R\le H(p,1-p\,\|\,3/4,1/4)\,-\,1\,=\,1-S_e(p)\;.\label{usualqhb}
\end{eqnarray}
Thus, the usual quantum Hamming bound limits the rate of
non-degenerate quantum codes by
the capacity based on ``coherent information'' proposed
in~\cite{bib_schum2,bib_lloyd}, which is thought of as the ``purely quantum''
piece of the capacity.
Note that the positivity of
relative entropy does {\em not} in this case guarantee such a capacity
to be positive, which may just be a reflection of the
``inseparability'' of the von Neumann capacity.
The quantum Hamming bound shown above relies on coding the error
syndromes only into the quantum state $Q$ that is processed, or, in
the case of superdense coding, sent through the noisy channel. As we
noted earlier, however, a quantum system that is entangled does not,
as a matter of principle, have a state on its own. Thus, the entangled
reference system $R$ {\em necessarily} becomes part of the quantum
system, even if it is not subject to decoherence. As a result, the Hilbert
space available for ``coding'' automatically becomes as large as $2n$,
the combined Hilbert space of $Q$ and $R$. This is most obvious again
in superdense coding, where the ``decoding'' of the information
explicitly involves joint measurements of the decohered $Q'$ {\em and}
the ``reference'' $R$, shared between sender and receiver (in a
noise-free manner).
The corresponding {\em
entanglement} quantum Hamming bound therefore can be written by
remarking that while the coding space is $2n$, only $n$ qubits are
sent through the channel, and thus
\begin{eqnarray}
2^k\,\sum_{i=0}^t 3^i{n \choose i}\le 2^{2n}\;.
\end{eqnarray}
Proceeding as before, the rate of such quantum codes is limited by
\begin{eqnarray}
R\le H(p,1-p\,\|\,3/4,1/4)\,=\,2-S_e(p)\;, \label{entham}
\end{eqnarray}
the von Neumann capacity $C_Q$ for the depolarizing channel
proposed in this paper, Eqs.~(\ref{quantcap}) and (\ref{depolcap}).
The latter is always positive, and represents the
``distance'' between the error probability $p$ of the channel and the
worst-case error $p=3/4$ (corresponding to a 100\% depolarizing
channel), in perfect analogy with the classical
construction. Eq.~(\ref{entham}) thus guarantees the {\em weak
converse} of the quantum fundamental theorem: that no code can be
constructed that maintains a rate larger than the capacity $C_Q$ with a
fidelity arbitrarily close to one.
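The identity behind Eq.~(\ref{entham}), namely that the relative entropy to the worst case $p=3/4$ equals $2-S_e(p)=2-H_2[p]-p\log3$, can be confirmed term by term; the short check below is our illustration:

```python
import math

def h2(x):
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def rel_ent(p):
    """H(p,1-p || 3/4,1/4) in bits."""
    return p * math.log2(p / 0.75) + (1 - p) * math.log2((1 - p) / 0.25)

def two_minus_se(p):
    """2 - S_e(p) with S_e(p) = H2[p] + p log2(3), cf. Eq. (depolcap)."""
    return 2 - h2(p) - p * math.log2(3)

for p in (0.05, 0.189, 0.5):
    assert abs(rel_ent(p) - two_minus_se(p)) < 1e-12
print("relative entropy to p=3/4 equals the von Neumann capacity 2 - S_e(p)")
```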
\section{Conclusions}
We have shown that the classical concept of information transmission
capacity can be extended to the quantum regime by defining a von
Neumann capacity as the maximum mutual von Neumann entropy between the
decohered quantum system and its reference. This mutual von Neumann
entropy, that describes the amount of information---classical and/or
quantum---processed by the channel, obeys ``axioms'' that any measure
of information should conform to. As for any quantum extension, the
von Neumann capacity reverts to its classical counterpart when the
information is ``classicized'' (i.e., it reverts to the Kholevo
capacity when measured or prepared states are sent), and ultimately to
the Shannon capacity if all quantum aspects of the channel are ignored
(i.e., if orthogonal states are sent and measured). Thus, the von
Neumann capacity of a channel can only vanish when the classical
capacity is also zero, but it can be excessive as entanglement allows
for superdense coding. In order to take advantage of this, however,
both the quantum system that decoheres {\em and} the reference system
it is entangled with need to be accessible. In practical quantum
channels this appears to be impossible, and the rate of practical codes must
then be considerably smaller than the von Neumann capacity. Yet,
because of the inseparability of entangled states, a consistent
definition of channel capacity {\em has} to take into account the full
Hilbert space of the state. Whether a capacity can be
defined {\em consistently} that characterizes the ``purely'' quantum
component of a channel is still an open question.
\acknowledgements
We would like to thank John Preskill and the members of the QUIC group
at Caltech for discussions on the depolarizing channel, as well as
Howard Barnum and Michael Nielsen for discussions during the Quantum
Computation and Quantum Coherence Program at the ITP in Santa Barbara,
where most of this work was done. This research was supported in part
by NSF Grant Nos. PHY 94-12818 and PHY 94-20470 at the Kellogg
Radiation Laboratory, and Grant No. PHY 94-07194 at the ITP in Santa
Barbara.
\section{Introduction}
At energies $\sqrt{s} > 100$ AGeV, copious mini-jet production in central
nuclear collisions is expected \cite{mini,hotglue,eskola}
to be the main source of a very dense plasma of quarks and gluons with an
initial (proper) energy density an order
of magnitude above the deconfinement and chiral symmetry restoration
scale \cite{lattice}, $\epsilon_c \sim 1\; {\rm GeV/fm}^3$ $(T_c\simeq 160$ MeV).
A large number of observable consequences of the formation of this new
phase of matter
have been proposed based on a wide range of dynamical assumptions \cite{qm93},
and experiments are currently under construction to search for evidence of
that quark--gluon plasma (QGP) at the Relativistic Heavy--Ion Collider
(RHIC) at Brookhaven.
The observables include dilepton and hard direct photon yields,
high--$p_\perp$ jets and hadrons, strangeness and charmed hadron production,
identical meson pair interferometry, and collective transverse expansion.
Evidently, these and other proposed signatures depend sensitively on the
assumed ensemble of initial conditions as well as on the
transport dynamics through the hadronization point.
In this paper we explore the dependence of several observables on the
fluctuations of the initial conditions induced by mini-jet production.
Our central dynamical
assumption is that after a rapid thermalization time ($\sim 0.5$ fm/c) the
evolution of the plasma through hadronization can be approximated by
non-dissipative hydrodynamics. We make this strong assumption to explore the
{\em maximal\/} effects that the sought after thermodynamic properties of
hot QCD matter may have on observables in ultrarelativistic
heavy-ion collisions. Only
hydrodynamics gives a direct link between observables and the fundamental QCD
equation of state \cite{lattice}. Finite dissipative effects generally tend to
reduce hydrodynamic signatures and require further knowledge of the transport
coefficients in the QGP as well. Even in the ideal hydrodynamic
limit, however, the observables are sensitive to the initial
formation physics of the plasma.
It is this aspect of the problem that we concentrate on in this paper.
We show that in contrast to the conventional picture of QGP formation, the
initial mini-jet ensemble is characterized by a wide fluctuation spectrum
of the local energy density (hot spots) and of the collective flow field
(turbulence). We also show that hydrodynamic evolution of that inhomogeneous,
turbulent ensemble can lead to novel observable consequences
including azimuthally asymmetric transverse energy fluctuations and
enhanced radiance of hard
probes. An especially significant result is the demonstration that the
time-delay signature of the QCD phase transition, as
found in previous studies
\cite{pratt,bertsch,risch_pipi} starting from homogeneous initial
conditions, survives this generalized class of more unfavourable
initial conditions. Meson interferometry therefore remains one of the most
robust and generic probes of QGP formation given present uncertainties of the
initial conditions in heavy-ion collisions.
Before elaborating on the nature of the inhomogeneous, turbulent initial
conditions induced by the mini-jet production mechanism, we review first the
(homogeneous) ``hot-glue scenario'' \cite{hotglue,eskola} assumed in many
previous calculations of the observable consequences of QGP formation in $A+A$
collisions. Mini-jets are simply unresolved partons with
moderate $p_\perp> 1$ GeV/c predicted from perturbative QCD.
They are produced abundantly
at collider energies because the inclusive cross section for gluon jets
with moderate $p_\perp > p_0=1\;(2)$ GeV/c
rises to a value $\sigma_{jet}(p_0) \simeq 40\;(10)$
mb at $\sqrt{s}=200$ GeV, comparable to the total inelastic cross section.
Evidence for mini-jet production in $pp$ and $p\bar{p}$
reactions at collider energies has been inferred from the systematic rise with
increasing $\sqrt{s}$ of the yield of moderate--$p_\perp$ hadrons,
of the central rapidity density,
of the enhanced tails of multiplicity fluctuations, as well as
the flavour and multiplicity dependence of the
mean transverse energy of hadrons (see Refs.\ \cite{wang}).
In the eikonal approximation to nuclear dynamics, the total number of mini-jet
gluons produced in central $A+A$ collisions is expected to be $A^{4/3}\sim
10^3$ times larger than in $pp$ collisions since each incident projectile
nucleon interacts with $\sim A^{1/3}$ target nucleons along the beam axis.
This simple geometric effect leads to a high rapidity density of mini-jet
gluons with ${\rm d}N_g/{\rm d}y \sim 300-600$
in $Au+Au$ as shown in Fig.\ 1a. The curves are calculated via the
HIJING model \cite{wang} with shadowing and jet quenching
options turned off. Comparison of the curves for $p_0=1$ and 2 GeV/c gives an
indication of current theoretical uncertainties associated with the
extrapolation from $pp$ to $AA$ reactions. The observed
$\sqrt{s}$ systematics of the $p\bar{p}$ data are best accounted for with
$p_0=2$ GeV/c in the HIJING model \cite{wang}. That model combines an eikonal
multiple collision formalism with the PYTHIA algorithm \cite{pythia} to
generate
exclusive hard pQCD processes with $p_\perp>p_0$ and a variant of the LUND/DPM
string phenomenology \cite{lund} to model hadronization and account for the
non-perturbative, low--$p_\perp$ beam-jet fragmentation. Other parton cascade
Monte Carlo models, such as developed in Ref.\ \cite{geiger}, using different
structure functions and hard pQCD cutoff schemes, can account for the
$p\bar{p}$ data using a somewhat lower $p_0\simeq 1.5$ GeV/c.
Theoretically \cite{mini}, the
scale separating the perturbative QCD domain from the non-perturbative
(beam-jet) domain may be as low as $p_0=1$ GeV/c, although no hadronization
phenomenology has yet been developed with such a low scale that
could account for the available data. Another source of
moderate--$p_\perp$ gluons in
very heavy-ion reactions has recently been proposed based
on a semi-classical treatment of the non-Abelian Weizs\"acker--Williams gluon
fields \cite{mclerran}. The above uncertainties in the initial conditions
on the parton level are seen in Fig.\ 1b to correspond to approximately
a factor of two uncertainty of the transverse energy produced per unit rapidity
in central $Au+Au$ collisions at RHIC energies.
Figure 1c shows that the difference between the cases $p_0=1$ and 2 GeV/c
in the
HIJING model is due to the production of approximately twice as many gluons in
the moderate $p_\perp<4$ GeV/c region for $p_0=1$ GeV/c.
(The $p_\perp$--spectra extend to $p_\perp=0$ because of initial
and final state radiation associated with mini-jet production.)
This difference is significantly
smaller than the lowest order pQCD estimate would give because of the
unitarized eikonal formalism used in HIJING to calculate multiple collisions
and multiple jet production. For $p_0=1$ GeV/c the mini-jet cross section is
comparable to the inelastic cross section. Due to Glauber multiple collision
shadowing, the number of mini jets in that case must scale less rapidly
than with the number of binary $pp$ collisions.
In Figure 1d the hadronization mechanism of the mini-jet gluons via the
string fragmentation mechanism is found to approximately
double the final hadron transverse energy
distribution relative to Fig.\ 1b.
This is due to the pedestal or ``string'' effect and persists up to LHC
energies in this model. The mini-jet gluons are represented as kinks in the
beam-jet strings, and those kinks effectively boost the produced hadrons in the
transverse direction. The difference between the mini-jet contribution
(Fig.\ 1b) and the final hadronic transverse energy distribution is due
to the string model implementation of beam-jet fragmentation in
HIJING. That component necessarily
involves non-perturbative, low--$p_\perp$ multi-particle production
and is presently under experimental study via heavy-ion
reactions at lower CERN/SPS
energies ($\sqrt{s}=20$ AGeV) \cite{qm93}.
While the string model provides an adequate phenomenology
of beam-jet fragmentation at those energies, it is not obvious that it will
continue to do so at RHIC and LHC. This represents a significant source
of theoretical uncertainty in calculating RHIC initial conditions.
We will assume in this study that the extrapolation of the beam-jet physics
via string phenomenology as encoded in the
HIJING model holds up to RHIC energies. Possible sources of
fluctuations of the beam-jet component due, for example,
to ``colour rope'' formation
have been explored in the past \cite{ehtamec,iwagyu,biro}.
However, at collider energies, the consequences of fluctuations
due to the dominant mini-jet component have not been considered previously
to our knowledge.
In the hot-glue scenario, the thermalization proper time is assumed to be a few
times the mini-jet formation time
$\tau_0=\hbar/ p_0 \sim 0.1$ fm/c (our units are $c=k_B=1$).
In fact, the initial pQCD mini-jet $p_\perp$--distribution is not far from
thermal as can be seen in Fig.\ 1c, but it turns out
that the gluon and quark multiplicities are below chemical equilibrium.
Inelastic multi-gluon
production processes are therefore essential to achieve rapid equilibration.
Recent progress on radiative transport in QCD plasmas \cite{xiong,GyuWa,doksh}
suggests that equilibration is possible on scales less than 1 fm/c.
Taking longitudinal boost-invariant expansion into account
and assuming a cylindrical distribution of matter,
the proper energy density averaged over transverse coordinates
at proper time $\tau >\tau_0$ is given by
the Bjorken formula \cite{Bjorken}:
\begin{equation}
\bar{\epsilon}(\tau) \simeq
\frac{{\rm d}E_\perp}{{\rm d}y} \;\frac{1}{\tau\pi R^2} \; \; .
\eeq{bj}
For ${\rm d}E_\perp/{\rm d} y \simeq 1$ TeV from Fig.\ 1d,
and $R=7$ fm, this yields an order-of-magnitude estimate of
$\bar{\epsilon}(\tau)\simeq 65\, [0.1\,{\rm fm/c}\,/\tau]\; {\rm GeV/fm}^3 $.
If only the gluon degrees of freedom are equilibrated,
then the temperature of the
``hot glue'' at the thermalization time
is $T(\tau_{th})\simeq [\bar{\epsilon}(\tau_{th})/5.26]^{1/4}\simeq 555\,
(\tau_0/\tau_{th})^{1/4}$ MeV. The evolution
of the temperature after $\tau_{th}$ of course
depends on the equation of state as well as the
assumed viscosity \cite{danielewic}.
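As a rough cross-check of these numbers, the Bjorken estimate (\ref{bj}) and the associated gluon-gas temperature can be evaluated directly. The following is a minimal Python sketch; all inputs are the values quoted above, with $\hbar c = 0.1973$ GeV$\,$fm used to convert between GeV and fm units:

```python
# Numerical check of the Bjorken estimate and the gluon-gas temperature
# quoted in the text.  All inputs are taken from the text; hbar*c converts
# between GeV and fm units.
import math

HBARC = 0.1973  # GeV fm

def bjorken_energy_density(dEt_dy_gev, tau_fm, R_fm):
    """Transverse-averaged proper energy density in GeV/fm^3."""
    return dEt_dy_gev / (tau_fm * math.pi * R_fm**2)

def gluon_temperature(eps_gevfm3, K=5.26):
    """Effective temperature in MeV for eps = K T^4 (gluon gas, K ~ 5.26)."""
    T4 = eps_gevfm3 * HBARC**3 / K      # T^4 in GeV^4
    return 1000.0 * T4**0.25            # MeV

# dE_t/dy = 1 TeV, tau = 0.1 fm/c, R = 7 fm as in the text
eps = bjorken_energy_density(1000.0, 0.1, 7.0)
print(round(eps))                        # -> 65  (GeV/fm^3)
print(round(gluon_temperature(eps)))     # -> 555 (MeV)
```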
In this scenario, observables of the plasma phase such as thermal dileptons or
photons can be computed as discussed in \cite{therma}.
Transverse expansion can also be taken into account \cite{blaizot} as well as
more realistic equations of state with a rapid cross-over transition
region \cite{risch_pipi}. For transverse expansion,
the temperature field, $T(\tau,{\bf x}_\perp)$, acquires a dependence on the
transverse coordinates. In the hot-glue scenario
azimuthal symmetry is naturally assumed for collisions at
zero impact parameter.
Another implicit assumption of the hot-glue scenario
is that the {\em initial\/} fluid 4--velocity field,
$u^\mu(x)$, vanishes in the transverse direction and that
the initial flow field
only reflects the Bjorken longitudinal expansion,
\begin{equation}
u^\mu(t,{\bf x}_\perp,z)=(t/\sqrt{t^2-z^2},0,0,z/\sqrt{t^2-z^2}) \; \; .
\eeq{ubj}
Transverse expansion is allowed to develop in the course
of subsequent evolution, but initially
the plasma is assumed to be quiescent in the above sense.
In this paper, we call into question
the commonly assumed azimuthal symmetry and smooth radial
profile of $b=0$ collisions and the above quiescent form of the initial
velocity field. In the next section,
we show that the mini-jet mechanism does not
support those simplifying assumptions unless the
beam-jet component is much larger than estimated with HIJING.
In Section 3, the hydrodynamic evolution of inhomogeneous,
turbulent initial conditions is calculated and the novel
type of azimuthally asymmetric transverse shock collectivity
is discussed. In Section 4 the robustness of the time-delay signature
associated with a phase transition is demonstrated.
In Section 5, the enhanced radiance of direct photons from
hot spots is estimated. A brief summary is finally presented
in Section 6.
\section{The Inhomogeneous, Turbulent Glue Scenario}
Inhomogeneities arise in nuclear collisions
as a result of fluctuations of the number of soft and hard QCD interactions
per unit transverse area. Fluctuations of the soft beam-jet
component have been considered before \cite{ehtamec,iwagyu,biro},
but at collider energies a new source of fluctuations that
are induced by mini-jet production
is expected to become dominant. Both types of fluctuations are
strongly correlated as a function of transverse coordinates.
In this paper, however, we consider fluctuations arising from only
mini-jet production and
treat the soft component as a smooth background component of the plasma.
Each nucleon in a central $Au + Au$
collision suffers approximately $ A^{1/3}\pm A^{1/6} \simeq 6\pm 2$
inelastic collisions.
Therefore, there are $\sim 40$ binary collisions per transverse area
$\sigma_{in} \simeq 4$ fm$^2$. At RHIC energies, however, only a fraction
$\sigma_{jet}/\sigma_{in} \simeq 1/4$ of those produce mini jets.
The fluctuations of the mini-jet number density
are substantial because $A^{1/3}$ remains relatively
small even for the heaviest nuclei.
In principle, two--nucleon correlations in the initial nuclei could
reduce the above type of geometric fluctuations. However, the available
data on high--$E_\perp$ production in nuclear collisions at the SPS
indicates sizable fluctuations, even beyond the independent nucleon gas
approximation \cite{baym}.
In addition to geometric sources of fluctuations,
the broad transverse momentum spectrum of mini-jet gluons in Fig.\ 1c
further enhances the fluctuations of the energy and momentum
deposited by mini jets per unit area. These two effects conspire to induce
large fluctuations of the initial energy and momentum density
at early times as we show below.
The spectrum of hot spots can be computed
from the HIJING event list of parton transverse and longitudinal momenta
$({\bf p}_{\perp\alpha}, p_{z\alpha}= p_{\perp\alpha}\sinh y_\alpha)$, and their
longitudinal and transverse production coordinates, $(z_\alpha=0,{\bf x}_{\perp\alpha})$.
The production points are taken from the initial transverse coordinates
of the nucleon (diquark-quark string) in which the PYTHIA \cite{pythia}
subroutines of HIJING embed the gluons.
To simplify the problem we assume Bjorken boundary conditions
and neglect additional fluctuations along the longitudinal coordinate.
The longitudinal velocity is thus assumed to vanish at $z=0$.
With this simplification, the transverse coordinate dependence
of {\em local\/} energy and transverse momentum density in the $z=0$ plane
can then be estimated from
\begin{eqnarray}
\left(\begin{array}{c}
{\cal E} (\tau,{\bf x}_\perp,z=0) \\
{\bf M}_\perp (\tau,{\bf x}_\perp,z=0)
\end{array}\right)
= \sum_{\alpha}\left(\begin{array}{c}
1\\ {\bf v}_{\perp\alpha}
\end{array}\right)
\frac{p_{\perp\alpha}}{\tau}\; F(\tau p_{\perp\alpha})
\; \delta^{(2)}({\bf x}_\perp-{\bf x}_{\perp\alpha}(\tau)) \;
\delta(y_\alpha)
\; \; .\eeqar{xebj}
The above formula takes into account the free streaming of
gluons
in the transverse direction via ${\bf x}_{\perp\alpha}(\tau)={\bf x}_{\perp\alpha}+ {\bf v}_{\perp\alpha}\tau $ where
${\bf v}_{\perp\alpha}={\bf p}_{\perp\alpha} /p_\alpha $, as well as
ideal Bjorken longitudinal expansion.
The factor $F(\tau p_{\perp\alpha})\equiv(1+[\hbar/(\tau p_{\perp\alpha})]^2)^{-1}$
is an estimate of the formation probability \cite{GyuWa} of
partons with zero rapidity. High--$p_\perp$ gluons are produced first and
lower--$p_\perp$ gluons later according to the uncertainty principle.
Before $\tau\sim \hbar/p_{\perp\alpha}$, those $p_{\perp\alpha}$--components
of the radiation field and the evolving
transient Weizs\"acker--Williams field of the passing nuclei \cite{mclerran}
interfere strongly and cannot be treated as a
kinetic gas of partons or a plasma.
The exact form of $F$ is
not important except for extremely small times $\tau<0.5$ fm/c
prior to thermalization.
The above expression, when averaged over transverse coordinates,
reduces to the original Bjorken estimate, eq.\ (\ref{bj}) with
$\langle {\cal E}(\tau)\rangle \simeq \bar{\epsilon}(\tau)$.
In addition, the averaged transverse momentum density
and hence the flow field vanishes,
$\langle {\bf M}_\perp(\tau)\rangle=0$, up to finite multiplicity fluctuations.
To study the transverse coordinate dependence of the
initial conditions given discrete mini-jet phase space coordinates,
we must specify the transverse, $\Delta r_\perp$,
and longitudinal, $\Delta y$, resolution scales corresponding
to an elementary fluid cell.
The densities, coarse-grained on that resolution scale,
are obtained from (\ref{xebj}) by the simple substitution
\begin{equation}
\delta^{(2)}({\bf x}_\perp-{\bf x}_{\perp\alpha}(\tau))\; \delta(y_\alpha) \rightarrow
\frac{\Theta(\Delta y/2 - |y_{\alpha}|)
}{\Delta r_\perp^2 \, \Delta y}
\prod_{i=x,y}
\Theta(\Delta r_\perp/2- |x_{i\alpha}(\tau)-x_i|)
\; \; .\eeq{smear}
The uncertainty principle limits how small $\Delta r_\perp$ could be,
requiring $\Delta r_\perp > \hbar/\Delta p_\perp$.
However, in our dynamic problem, causality restricts how large
it can be. At a given proper time $\tau$, after the two highly contracted
nuclei pass through each other, the
local horizon for any gluon in the comoving $(y_\alpha=0)$
frame only has a radius
$c\tau$. Thus, at the thermalization time, $\tau_{th}$, each gluon can be
influenced by only a small neighbourhood of radius $c\tau_{th}\simeq 0.5$ fm
of the plasma. In order to {\em minimize\/}
fluctuations, we will take the transverse resolution scale to be the
maximal causally connected diameter, $\Delta r_\perp=2c\tau_{th}\simeq 1$ fm.
Also, we take the rapidity width to be $\Delta y=1$,
since gluons with larger relative
rapidities tend to be produced too late (due to time dilation)
to equilibrate with gluons with $y=0$.
Note that the number, $N(\tau)=[R/(c \tau)]^2$, of causally disconnected
domains in a nuclear area, $\pi R^2$, is initially very large and even at the
onset of hadronization, $\tau_h\sim 3$ fm/c, several disconnected domains
remain.
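The coarse-graining procedure defined by eqs.\ (\ref{xebj}) and (\ref{smear}), free streaming plus the formation factor $F$ binned on the $\Delta r_\perp = 1$ fm, $\Delta y = 1$ resolution scale, can be sketched as follows. The parton list here is a hypothetical stand-in for an actual HIJING event list; only the structure of the calculation is illustrated:

```python
# Sketch of the coarse-graining in the text: partons are free-streamed to
# proper time tau, weighted with the formation factor F(tau*p_t), and
# binned into Delta r_t = 1 fm cells in the z = 0 plane.  The parton list
# below is a hypothetical stand-in for a HIJING event.
import math

HBARC = 0.1973  # GeV fm

def formation_factor(tau, pt):
    """F(tau p_t) = (1 + [hbar/(tau p_t)]^2)^(-1)."""
    return 1.0 / (1.0 + (HBARC / (tau * pt))**2)

def coarse_grained_energy(partons, tau, dr=1.0, dy=1.0, grid=15):
    """Return E(x,y) on a grid of dr x dr cells centred on the origin.
    partons: list of (pt [GeV], y, x0 [fm], y0 [fm], phi)."""
    half = grid // 2
    E = [[0.0] * grid for _ in range(grid)]
    for pt, y, x0, y0, phi in partons:
        if abs(y) > dy / 2:                      # Theta(dy/2 - |y|)
            continue
        vx, vy = math.cos(phi), math.sin(phi)    # |v_t| = 1 for massless, y = 0
        x, ypos = x0 + vx * tau, y0 + vy * tau   # free streaming
        i, j = int(round(x / dr)) + half, int(round(ypos / dr)) + half
        if 0 <= i < grid and 0 <= j < grid:
            E[i][j] += pt / tau * formation_factor(tau, pt) / (dr**2 * dy)
    return E

# three co-located moderate-p_t gluons make a "hot spot"
partons = [(2.0, 0.0, 0.0, 0.0, 0.0), (1.5, 0.0, 0.2, 0.1, 1.0),
           (2.5, 0.0, -0.1, 0.2, 2.0)]
E = coarse_grained_energy(partons, tau=0.5)
print(max(max(row) for row in E))   # energy density (GeV/fm^3) in the hottest cell
```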
In Figure 1d we saw that
at $\sqrt{s} = 200$ AGeV, the HIJING model predicts that approximately
half of the final produced transverse energy
arises from beam-jet string fragmentation.
As emphasized before, it is unknown whether the string model coupling
between the beam-jet fragmentation and mini-jet fragmentation, that accounts so
well for $p\bar{p}$ multi-particle data \cite{wang}, can be applied to
collisions of heavy nuclei. Effective string degrees of freedom may be quenched
due to the colour conductivity of the plasma \cite{selik1},
but soft field fluctuations associated with beam-jet fragmentation are
likely to persist and contribute a background source of low--$p_\perp$ quanta
at some level.
In the present study, we do {\em not\/} hadronize the
partons via this default JETSET string mechanism in HIJING but only use their
phase space density to compute the mini-jet contribution to the initial
energy--momentum tensor $T^{\mu\nu}(x)$ as above. Since our dynamical
assumption is the applicability of hydrodynamics,
we model the soft beam-jet component by a homogeneous
background fluid. We estimate the energy density of that component
from the HIJING final hadronic ${\rm d}E_\perp/{\rm d}y$. One advantage
of the hydrodynamic approach is that
by treating the beam-jet component as a fluid,
we need not specify further details of its uncertain
microscopic nature. In our case, the main function of that background
fluid is to {\em reduce\/} the energy
density fluctuations induced by the mini jets.
The transverse coordinate distribution of that
soft component is assumed to reflect the density of wounded nucleons,
i.e., it is taken proportional to $(1-[r_\perp/R]^2)^{1/2}$.
If we take into account
the relatively long formation time of soft, $\langle p_\perp\rangle \simeq 0.3$
GeV/c, partons, only about half of the
${\rm d} E^{soft}_\perp/{\rm d} y \simeq 0.5 $ TeV from Fig.\ 1d
contributes to the soft background. In that case, this soft component adds
a relatively small, smooth contribution $\sim 5\; {\rm GeV/fm}^3$
to the central energy density at $\tau_{th}=0.5$ fm/c.
In Figures 2a,b the initial energy and momentum density profile
of a typical central $Au+Au$ event is shown as a function of the transverse
coordinates in the $z=0$ plane. This corresponds to a HIJING event
with the default mini-jet cutoff scale $p_0=2$ GeV/c. The profile for
$p_0=1$ GeV/c looks similar except the energy density scale increases
by approximately a factor of two.
The striking feature illustrated by the lego plot in Fig.\ 2a is
the existence of several prominent ``hot spots'' with ${\cal E} > 20\; {\rm GeV/fm}^3$
separated by $\sim 4-5$ fm.
In this event the hottest spot reaches an energy density of about
$40\; {\rm GeV/fm}^3 $. Between the hot spots are cooler regions
with energy density down to the soft background scale of $\sim 5 \; {\rm GeV/fm}^3$.
The turbulent nature of the initial conditions is illustrated
in Fig.\ 2b. An arrow plot representation of the transverse
momentum field is shown. The highest momentum regions tend to coincide
with the regions of highest energy density. The initial transverse
flow velocities are found to be distributed broadly up to
$\sim 0.5\,c$. We note that turbulence here is not inherent to QGP evolution
since the Reynolds number of the QGP, ${\cal R}e \sim R/l \sim 10$, is
not large. Thus, laminar flow is not expected to break up into
complex vortex structures. In our case, the turbulence of the QGP
initial conditions is {\em induced\/} by the external mini-jet production
mechanism. This type of turbulence is analogous to flow induced
by the blades of a mixer in a bowl of liquid.
In Figure 2c the distribution of energy densities is shown,
coarse-grained over 1 fm$^3$ cells and averaged over 200 HIJING events.
Only cells in the central region with $r_\perp<4$ fm
are considered in this histogram to reduce fluctuations from the
less interesting surface region. The event-averaged energy density
$\langle {\cal E}(\tau_{th})\rangle \simeq 12\; {\rm GeV/fm}^3$
includes the soft background contribution discussed above.
However, the distribution is highly asymmetric
with relatively high probability of large and small fluctuations.
The rms relative fluctuation of the initial energy density is found to be
$\Delta {\cal E}/\langle {\cal E} \rangle
\simeq 0.7$ for this assumed soft background level.
In Figure 2d the distribution of the effective local gluon temperature,
$T_{\rm eff}(\tau_{th},{\bf x}_\perp)=({\cal E}(\tau_{th},{\bf x}_\perp)/5.26)^{1/4}$,
corresponding to Fig.\ 2c is shown.
(This estimate of the temperature neglects collective flow velocities
and that the gluon number as computed in HIJING is not in chemical
equilibrium.)
The local temperature is seen to
fluctuate around the mean $\sim 350$ MeV with an rms width
of $\Delta T\simeq 60$ MeV.
It is clear from the above that the mean values of ${\cal E}$ and
$T$ do not provide an adequate characterization of such an ensemble of
initial conditions.
The ensemble of mini-jet initial conditions has a
broad fluctuation spectrum primarily
because the local event horizon is so small at early times.
Fluctuations of ${\cal E}$ in finite volumes arise even in
an ideal Stefan--Boltzmann thermal ensemble
with $\langle \epsilon \rangle=KT^4$.
A simple estimate of the magnitude of thermal fluctuations
is given by $\Delta \epsilon / \langle \epsilon \rangle \simeq
2/(K T^3 V)^{1/2} \simeq 0.5$ for $T=350$ MeV, $K\simeq 5.26$
(if only the gluon degrees
of freedom are assumed to be equilibrated), $V=4 (c\tau_{th})^3$
(for $\Delta y = 1$). This is comparable
to the fluctuations induced dynamically by the mini-jet formation.
The spectrum of fluctuations differs from an ideal
thermal one because they are driven here by Glauber nuclear geometry
and the pQCD mini-jet spectrum.
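The Stefan--Boltzmann estimate above is a one-line evaluation; the sketch below simply reproduces the quoted $\simeq 0.5$ from the stated inputs:

```python
# Check of the Stefan-Boltzmann fluctuation estimate quoted in the text:
# Delta eps / <eps> ~ 2 / sqrt(K T^3 V) for T = 350 MeV, K = 5.26,
# V = 4 (c tau_th)^3 with tau_th = 0.5 fm/c and Delta y = 1.
import math

HBARC = 0.1973                 # GeV fm

T = 0.350                      # GeV
K = 5.26                       # gluon-gas Stefan-Boltzmann constant
V = 4.0 * 0.5**3               # fm^3

# T^3 in fm^-3, so that K T^3 V counts effective quanta in the cell
n_eff = K * (T / HBARC)**3 * V
rel_fluct = 2.0 / math.sqrt(n_eff)
print(round(rel_fluct, 2))     # -> 0.52, i.e. the ~0.5 quoted above
```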
In Figure 3a the smooth, event-averaged energy density profile
is compared to the fluctuating energy density profiles of three
other separate HIJING events. The azimuthally symmetric, event-averaged
surface corresponds to the one usually assumed
in the hot-glue scenario. The three individual events in parts b--d,
on the other hand, show that the dynamical
fluctuations extend up to 40 $ {\rm GeV/fm}^3$ (see Fig.\ 2c) and
cause strong deviations from the smooth, event-averaged profile.
The shaded contour plots above the surface plots provide another
representation of the azimuthally asymmetric inhomogeneities
caused by mini-jet production at the event-by-event level.
It is important to note that
the hot spots are not due to isolated hard pQCD jets,
but rather to the accidental coincidence
of several moderate--$p_\perp$ (see Fig.\ 1c) mini jets
in the same 1 fm$^3$ volume. This is seen by comparing
the gluon number density profiles in Fig.\ 4 to the
corresponding energy density profiles in Fig.\ 3.
The typical gluon density is seen to be about 5--10 fm$^{-3}$
at this early time $\tau_{th}=0.5$ fm/c. This includes, in addition
to mini jets, softer gluons from
initial and final state radiation as well as the soft partons from
the beam-jet fragmentation component.
Again we emphasize that Figs.\ 2--4 correspond to the
minimal fluctuations from the point of view of hydrodynamic
evolution. Only at later times, when the energy density is significantly
lower, can the fluctuations
be significantly reduced by coarse graining on much larger resolution scales.
The dilemma is that {\em if\/} indeed the local thermalization is so
short as hoped for in the hot-glue scenario, then
inhomogeneous and turbulent initial conditions must be considered
when calculating signatures of QGP formation.
On the other hand, if the thermalization time turns out to be much longer
than current estimates, then hydrodynamics is of course inapplicable and
the observables sensitive to the initial ``hot'' phase of the reaction
will not provide information
about the sought-after {\em thermal\/} properties of the dense
QGP. For our purpose of exploring optimal signatures associated with
the thermal properties of dense matter, we must
rely on rapid thermalization and therefore live with
inhomogeneities in calculating plasma observables. Inevitably this means
that many observables will depend strongly on the
precise form of the ensemble of initial conditions. The mini-jet
initial conditions with a string model beam-jet background
represent a particular ensemble obtained
in extrapolating present $pp$ phenomenology to nuclear collisions
via the HIJING model. We will refer to the above ensemble of initial conditions
as the ``turbulent-glue scenario'' to contrast it with the simpler
initial conditions assumed in the hot-glue scenario.
Examples of plasma probes sensitive to the
highest temperature phase
are high-mass dilepton pairs, direct photons, and heavy quark
production. They are exponentially sensitive to the
fluctuation spectrum of local temperatures because of
the Boltzmann suppression factor, $\exp(-M_\perp/T)$.
The hot-glue and turbulent-glue scenarios differ considerably
in the predicted yields
of such observables because the ensemble average of hydrodynamically
evolved turbulent initial conditions and the hydrodynamic evolution of
ensemble-averaged initial conditions are not equivalent.
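The inequivalence of the two averaging orders can be illustrated with a toy calculation: for a Boltzmann factor $\exp(-M_\perp/T)$, which is convex in $T$ over the relevant temperature range, averaging over a fluctuating temperature always exceeds evaluating the factor at the mean temperature. The Gaussian spread below mimics Fig.\ 2d; the choice $M_\perp = 3$ GeV is purely illustrative:

```python
# Toy illustration of <exp(-M_t/T)> > exp(-M_t/<T>) for a fluctuating
# temperature field.  The Gaussian spread (350 +- 60 MeV) mimics the
# hot-spot distribution; M_t = 3 GeV is an illustrative choice.
import math, random

random.seed(1)
M_t, T_mean, T_rms = 3.0, 0.350, 0.060   # GeV

# clip the low tail so 1/T stays finite; exp(-3000) underflows to 0.0
samples = [max(random.gauss(T_mean, T_rms), 1e-3) for _ in range(100000)]
avg_of_exp = sum(math.exp(-M_t / T) for T in samples) / len(samples)
exp_of_avg = math.exp(-M_t / T_mean)

print(avg_of_exp / exp_of_avg)   # ratio > 1: hot spots enhance the yield
```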
\section{Hydrodynamic Evolution of the QGP}
\subsection{The SHASTA Algorithm and Tests}
In order to explore how such turbulent plasmas may evolve, we
solve numerically the hydrodynamic equations given the
above ensemble of initial conditions.
Hydrodynamic evolution of homogeneous plasmas has already been studied
extensively \cite{risch_pipi,blaizot}. In most previous studies, azimuthal
(cylindrical) symmetry was assumed to simplify the problem.
One exception is the study
of homogeneous but azimuthally asymmetric initial conditions
produced in non-central nuclear collisions \cite{ollit}.
Our study focuses exclusively on $b=0$ collisions, where cylindrical symmetry
of {\em ensemble-averaged\/} observables must always hold.
At the event-by-event level, azimuthal symmetry is of course broken by the
probabilistic nature of hadronic interactions. However, we are not
interested in statistically uncorrelated fluctuations, rather only
in the azimuthally asymmetric correlations that can evolve dynamically out of
specific inhomogeneous mini-jet initial conditions.
We assume longitudinal boost-invariance, i.e.,
inhomogeneities in the rapidity dimension are neglected in this first study.
Mini jets from HIJING actually lead to hot-spot fluctuations
that extend only one unit in rapidity.
The true 3--dimensional inhomogeneities are therefore significantly
larger than those studied here, where the fluid density is
assumed to be {\em constant\/} along
fixed proper time surfaces $\tau=\sqrt{t^2-z^2}$.
The full treatment of the problem will require
much more elaborate 3--dimensional simulations in the future. At present,
a 3--dimensional code is in the development phase.
A very important advantage of using a hydrodynamic
approach is that hadronization can be taken into account
using the non-perturbative equation of
state, $p(\epsilon)$, as deduced from lattice QCD simulations.
The price paid for hydrodynamics is of course the necessary assumption
that the equilibration rate is large compared to space--time gradients of
the fluid $T^{\mu\nu}$--field.
In order to compute the evolution of turbulent initial
conditions we solve the equations of relativistic hydrodynamics,
\begin{equation}
\partial_{\mu} T^{\mu \nu} =0\,\, .
\eeq{hyd}
We rely on the most recent optimistic estimates \cite{xiong,GyuWa,doksh}
that suggest that the local equilibration time in a QGP may be
short enough to neglect dissipative effects. In that case,
$T^{\mu \nu} = (\epsilon + p) u^{\mu} u^{\nu} - p g^{\mu \nu}$
is the energy--momentum tensor for an ideal fluid, where
$\epsilon$ is the (proper) energy density and $p$ the pressure in the
local rest frame of a fluid element moving with 4--velocity $u^{\mu} =
\gamma (1, {\bf v})$ in the computational frame (${\bf v}$ is the
3--velocity, $\gamma = (1-{\bf v}^2)^{-1/2}$, and $g^{\mu \nu} =
{\rm diag} (+,-,-,-)$ is the metric tensor).
The equations (\ref{hyd}) are closed by specifying an
equation of state $p(\epsilon)$.
Since we assume boost-invariance in $z-$direction, it suffices
to solve the equations of motion at $z=0$. Furthermore, since
boost-invariance implies $v^z \equiv z/t$,
the four eqs.\ (\ref{hyd}) can be simplified to yield the three equations
(valid at $z=0$)
\bea
\partial_t\, {\cal E} + \partial_x\, [({\cal E}+p)v^x] +
\partial_y\, [({\cal E}+p)v^y] & = & -\, F({\cal E},p,t) \, \, , \nonumber \\
\partial_t\, M^x + \partial_x\, (M^x v^x+p) + \partial_y \, (M^x v^y) & = & -\,
G(M^x,t) \, \, , \label{eomb} \\
\partial_t\, M^y + \partial_x\, (M^y v^x) + \partial_y \, (M^y v^y+p) & = & -\,
G(M^y,t) \, \, , \nonumber
\end{eqnarray}
where $F({\cal E},p,t) \equiv ({\cal E}+p)/t$, and $G(M,t) \equiv M/t$.
(Our notation is $T^{00} \equiv {\cal E}$, $T^{0i} \equiv M^i, \; i=x,y$.)
These equations are solved numerically via a two--step operator splitting
method. The first operator splitting step decomposes the problem
into solving the above system of equations with
$F=G=0$ (this corresponds to purely two--dimensional fluid motion), and
then updating the obtained solution $\tilde{{\cal E}},\, \tilde{M}^x,\,
\tilde{M}^y, \, \tilde{p}, ...$ according to the ordinary
differential equations
\begin{equation}
\frac{{\rm d}{\cal E}}{{\rm d}t} = - F({\cal E},p,t)\,\,, \,\,\,\,
\frac{{\rm d}M^i}{{\rm d}t} = - G(M^i,t)\,\,, \;\; i=x,y\,\,,
\eeq{eomb2}
i.e., more explicitly one corrects
\begin{equation}
{\cal E} = \tilde{{\cal E}} - F(\tilde{{\cal E}},\tilde{p},t)\,
{\rm d}t \,\,, \,\,\,\,
M^i = \tilde{M}^i - G(\tilde{M}^i,t)\, {\rm d}t \,\,, \;\; i=x,y\,\,.
\eeq{eomb3}
This method was originally suggested by Sod \cite{sod} and was proven
to be adequate for treating the analogous,
azimuthally symmetric boost-invariant problem in \cite{risch_pipi}.
The solution to (\ref{eomb}) with $F=G=0$ itself involves another
operator splitting step, i.e., one first solves the system
of equations
\bea
\partial_t\, {\cal E} + \partial_x\, [({\cal E}+p)v^x] & = & 0
\, \, , \nonumber \\
\partial_t\, M^x + \partial_x\, (M^x v^x +p) & = & 0 \, \, , \label{eomb4} \\
\partial_t\, M^y + \partial_x\, (M^y v^x) & = & 0 \, \, , \nonumber
\end{eqnarray}
corresponding to transport of the hydrodynamic fields in $x-$direction
only, and with the solution to this set of equations one solves
\bea
\partial_t\, {\cal E} + \partial_y\, [({\cal E}+p)v^y] & = & 0
\, \, , \nonumber \\
\partial_t\, M^x + \partial_y\, (M^x v^y) & = & 0 \, \, , \label{eomb5} \\
\partial_t\, M^y + \partial_y\, (M^y v^y+p) & = & 0 \, \, , \nonumber
\end{eqnarray}
corresponding to transport in $y-$direction only.
Equations (\ref{eomb4}) and (\ref{eomb5}) are solved with the phoenical SHASTA
algorithm \cite{SHASTA}
as presented in \cite{test1}, with half-step updating of the source terms
and simplified source term treatment.
The transport steps (\ref{eomb4}) and (\ref{eomb5})
are alternated between successive time steps
to minimize systematic errors in the propagation.
In each time step, the
fields are propagated according to (\ref{eomb4}) and (\ref{eomb5}),
and finally corrected with (\ref{eomb3}).
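The splitting sequence can be summarized schematically: per time step, the transport updates (represented below by a placeholder) are followed by the source correction (\ref{eomb3}). The sketch applies only the source correction to a single quiescent cell, for which the exact solution is Bjorken scaling, ${\cal E} \propto t^{-4/3}$ for an ideal gas; the relation $p={\cal E}/3$ is used purely for this illustration, whereas the actual scheme takes $p$ from the equation of state in the local rest frame:

```python
# Minimal sketch of the operator-splitting source step: after the 2-D
# transport updates (omitted here), every cell is corrected by
# E -> E - (E+p)/t dt,  M -> M - M/t dt   (the Bjorken source terms).
# The ideal-gas relation p = E/3 is used only for this illustration.

def source_correction(E, Mx, My, t, dt):
    """Apply the geometric source correction on 2-D field arrays."""
    for i in range(len(E)):
        for j in range(len(E[0])):
            p = E[i][j] / 3.0                 # illustrative EOS
            E[i][j] -= (E[i][j] + p) / t * dt
            Mx[i][j] -= Mx[i][j] / t * dt
            My[i][j] -= My[i][j] / t * dt

# one quiescent ideal-gas cell: E should fall as t^(-4/3) (Bjorken)
E, Mx, My = [[10.0]], [[0.0]], [[0.0]]
t, dt = 0.5, 0.001
for _ in range(1500):                         # evolve from t = 0.5 to 2.0 fm/c
    source_correction(E, Mx, My, t, dt)
    t += dt
# numeric vs analytic: they agree at the sub-percent level
print(E[0][0], 10.0 * (0.5 / 2.0)**(4.0 / 3.0))
```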
The fluid evolution is studied for two idealized forms of the QGP equation
of state. One is the ideal relativistic gas case
\begin{equation}
p (\epsilon) =\epsilon/3 = K\, T^4/3 \; \; .
\eeq{eos_ideal}
Here, $K\simeq 5.26$, corresponding to an equilibrated gluon gas.
In the second case, a Bag model equation of state
with a strong first order transition at a critical temperature
$T_c$ is assumed:
\begin{equation}
p(\epsilon)= \left\{ \begin{array}{cl}
(\epsilon- 4B)/3\; & {\rm if } \; \epsilon>\epsilon_Q\; , \\
p_c \; & {\rm if } \; \epsilon_Q \geq \epsilon \geq \epsilon_H\; , \\
\epsilon/3 \; & {\rm if } \; \epsilon_H > \epsilon \; .
\end{array}
\right.
\eeq{bag}
With the Bag constant $B$ and ratio of effective plasma and hadronic
degrees of freedom $r=K_Q/K_H$ fixed,
the energy density of the mixed phase is bounded by
\begin{eqnarray}
\epsilon_Q &=& \frac{4r - 1}{r - 1} \; B \nonumber \\
\epsilon_H &=& \frac{3}{r-1} \; B
\; \; .
\eeqar{eqh}
We take here $B=0.39691\; {\rm GeV/fm}^3$ and $r=37/3$.
For this choice, $\epsilon_Q \simeq 1.7 \; {\rm GeV/fm}^3$ and
$\epsilon_H \simeq 0.1 \; {\rm GeV/fm}^3$, the critical pressure is
$p_c=\epsilon_H/3$ and the critical temperature is
$T_c\simeq 169$ MeV.
Hydrodynamics with a more realistic equation of state
with a finite cross-over region $\Delta T/T_c \sim 0.1$ was considered in
Ref.\ \cite{risch_pipi}. In the present exploratory study, only the two
idealized forms above will be contrasted.
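The quoted numbers follow directly from eqs.\ (\ref{bag})--(\ref{eqh}); as a consistency check (taking $K_H = 3\pi^2/30$ for a massless pion gas, the choice consistent with $r=37/3$):

```python
# Consistency check of the Bag-model numbers quoted in the text:
# B = 0.39691 GeV/fm^3 and r = K_Q/K_H = 37/3 should reproduce
# eps_Q ~ 1.7 GeV/fm^3, eps_H ~ 0.1 GeV/fm^3 and T_c ~ 169 MeV.
import math

HBARC = 0.1973           # GeV fm
B, r = 0.39691, 37.0 / 3.0

eps_Q = (4.0 * r - 1.0) / (r - 1.0) * B        # GeV/fm^3
eps_H = 3.0 / (r - 1.0) * B                    # GeV/fm^3
K_H = 3.0 * math.pi**2 / 30.0                  # massless pion gas
T_c = 1000.0 * (eps_H * HBARC**3 / K_H)**0.25  # MeV

def pressure(eps):
    """Piecewise p(eps) of the Bag-model EOS, in GeV/fm^3."""
    if eps > eps_Q:
        return (eps - 4.0 * B) / 3.0
    if eps >= eps_H:
        return eps_H / 3.0        # p_c in the mixed phase
    return eps / 3.0

print(round(eps_Q, 2), round(eps_H, 2), round(T_c))   # -> 1.69 0.11 169
```

Note that the pressure is continuous at $\epsilon_Q$: $(\epsilon_Q - 4B)/3 = \epsilon_H/3 = p_c$, as required for a first order transition at constant $p_c$.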
The evolution equations are solved on a two--dimensional Cartesian
$200 \times 200$ mesh with grid spacing $\Delta x=0.2$ fm.
The Courant--Friedrichs--Lewy number is taken as
$\lambda \equiv \Delta t/\Delta x =0.4$.
The Cartesian grid breaks the isotropy of space and might lead to
instabilities.
As a first check to determine
whether our multi-dimensional algorithm tends to produce such
numerical artifacts, we consider the expansion of a cylindrically
symmetric Gaussian hot spot with radius 2 fm and peak energy
density ${\cal E} = 30\; {\rm GeV/fm}^3$ at rest. The time evolution should respect the
initial cylindrical symmetry of the problem.
In Figure 5a we show the initial (calculational frame) energy density
profile ($T^{00} \equiv {\cal E}$).
Figure 5b shows the energy density profile after evolving the
hydrodynamical equations with the standard version of the
SHASTA algorithm as described in \cite{test1}
and the ideal gas equation of state (\ref{eos_ideal})
at time $t=14.9$ fm/c (for
the sake of clarity we show the profile for
positive $x, y$ only; the other quadrants are symmetric).
The observed strong fluctuations which break cylindrical symmetry are
due to the following:
the flux limiter in our version of the SHASTA
(cf.\ eq.\ (18) of \cite{test1}) prevents the occurrence of
unphysical extrema {\em only along\/} the direction of propagation, i.e., the
$x-$ and $y-$direction. Off the grid axis, small perturbations are
not smoothed out and can grow to produce the features seen in Fig.\ 5b.
One way to cure this is to use a multi-dimensional flux limiter as
proposed by Zalesak \cite{zalesak}. Here, however, we choose
the simpler method of reducing the coefficient in front
of the antidiffusion fluxes (default value 1/8, cf.\ eq.\
(17) of \cite{test1}; the necessity to reduce this
coefficient occurs also in other, purely one--dimensional situations,
cf.\ the discussion in \cite{test1,test2}).
Figure 5d shows the same situation after evolving with the antidiffusion
coefficient reduced to 70\% of its default value, i.e., to 0.7/8=0.0875.
This obviously strongly reduces symmetry breaking. Although this
prescription increases the numerical diffusion of the algorithm and thus
produces physical entropy, all our following results are generated with
this reduced antidiffusion in order to avoid spurious azimuthal
asymmetries of the transverse energy, ${\rm d} E_\perp/{\rm d}y {\rm d} \phi$,
of purely numerical origin. In Figure 5c we show the
energy density after evolving with the equation of state with the first order
phase transition (\ref{bag}).
Also in this case, the reduction of the antidiffusion helps to preserve
the cylindrical symmetry of the problem.
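The role of the antidiffusion coefficient can be illustrated with a generic one--dimensional flux-corrected transport step. The following is a Boris--Book-type sketch of our own (not the actual SHASTA code of \cite{test1}): a monotone low-order step followed by limited antidiffusion, with the coefficient reducible from its default 1/8 to 0.7/8=0.0875 as described above.

```python
import numpy as np

def fct_step(u, lam, c_ad):
    """One flux-corrected transport step for du/dt + v du/dx = 0
    (v > 0, periodic grid, lam = v*dt/dx).

    Monotone low-order step (upwind plus diffusion), then limited
    antidiffusion with coefficient c_ad.  The default SHASTA value
    is 1/8; the text reduces it to 0.7/8 = 0.0875.  This is a
    generic sketch, not the paper's algorithm.
    """
    # monotone low-order step: upwind transport plus explicit diffusion
    ut = (u - lam * (u - np.roll(u, 1))
          + 0.125 * (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)))
    d = np.roll(ut, -1) - ut          # gradients at cell faces i+1/2
    A = c_ad * d                      # raw antidiffusive fluxes
    # flux limiter: never create new extrema along the sweep direction
    s = np.sign(A)
    Ac = s * np.maximum(0.0,
                        np.minimum(np.abs(A),
                                   np.minimum(s * np.roll(d, -1),
                                              s * np.roll(d, 1))))
    return ut - (Ac - np.roll(Ac, 1))

x = np.linspace(-10.0, 10.0, 200, endpoint=False)
u0 = np.exp(-x**2 / 2.0)              # smooth initial hot spot
u = u0.copy()
for _ in range(50):
    u = fct_step(u, lam=0.4, c_ad=0.0875)   # reduced antidiffusion
```

With periodic boundaries the step is exactly conservative, and the limiter guarantees that no extrema beyond those of the initial profile are created along the direction of propagation; off-axis perturbations in two dimensions are precisely what this one--dimensional limiter cannot control.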
Another test of the algorithm is to check how well it reproduces
test cases with analytically known solutions.
One such test would be the expansion of a cylinder with a
sharp surface. Here, however, we focus instead on the more
physically relevant problem of the expansion of the cylindrically symmetric
(smooth) Gaussian studied in the previous figure.
Although that problem is effectively
one--dimensional, it does not have a purely
analytical solution. We may, however, compare the numerical solution generated
with our multi-dimensional SHASTA algorithm to that
obtained with a one--dimensional algorithm which is known to reproduce
analytical solutions very well, namely the relativistic
Harten--Lax--van Leer--Einfeldt (RHLLE)
algorithm \cite{test1,test2,schneider,dhrmg},
for our purposes modified with a Sod operator splitting step
as described in \cite{risch_pipi}
to account for longitudinal boost-invariance (that operator splitting
is in fact analogous to the one in eq.\ (\ref{eomb3})).
In Figure 6 we compare energy density profiles along the $x-$axis
for (a) the ideal gas equation of
state and (b) the Bag model equation of state with phase transition.
For the RHLLE run we used a 2000 cell grid with $\Delta x = 0.02/\lambda$
fm, $\lambda = 0.99$.
The larger prediffusion \cite{test1} for the SHASTA visible in (a)
is expected on account of
the smaller Courant--Friedrichs--Lewy number 0.4 as compared to 0.99
used in the RHLLE run \cite{test1}. The slightly slower cooling of the center is
due to the larger numerical diffusion of the SHASTA.
The sharp cusp-like structures that develop
in the RHLLE solution in (b) and which are
associated with deflagration discontinuities in the transition from
quark--gluon to hadron matter \cite{dhrmg,mgkaj},
are therefore also broadened in the SHASTA calculation.
(The numerical diffusion for the RHLLE is so small that it even tends to
produce small-scale oscillations around the true solutions at the
origin and for late times in (b).)
Up to these small numerical effects, however,
agreement is satisfactory and establishes
confidence that the SHASTA algorithm is able to generate approximately
correct hydrodynamical solutions also for more complicated initial conditions.
A third test of the numerical stability of the algorithm
is shown in Fig.\ 7, comparing the experimentally observable
transverse energy flow, ${\rm d}E_\perp/{\rm d}y {\rm d}\phi$,
and the azimuthal
correlations of the transverse energy flow, $C_{ET}(\Delta\phi)$,
in the case of ideal gas and Bag model equations of state.
An initially azimuthally symmetric Gaussian energy density profile
of radius 1 fm is evolved 100 time steps (with time step width
$\Delta t= \lambda\, \Delta x= 0.08$ fm/c).
The thermally smeared transverse energy distribution is
computed as a function of time via eq.\ (\ref{det})
derived in the Appendix. The azimuthal transverse energy correlation
is defined by
\begin{equation}
C_{ET}(\Delta\phi)= \int_0^{2\pi}\frac{{\rm d}\phi}{2\pi}\;
\frac{E_\perp(\phi+\Delta\phi) \; E_\perp(\phi)}{\langle E_\perp\rangle^2}
-1\; ,
\eeq{cet}
where we abbreviated $E_\perp(\phi)\equiv
{\rm d}E_\perp/{\rm d}y{\rm d}\phi$ and
$\langle E_\perp \rangle \equiv \int {\rm d}\phi\, E_\perp(\phi)/2\pi$.
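In discretized form, with $E_\perp(\phi)$ sampled in $N$ equal azimuthal bins, eq.\ (\ref{cet}) becomes a circular average. A minimal numpy sketch of our own (not the analysis code used for the figures):

```python
import numpy as np

def c_et(E, dphi_bins):
    """Azimuthal transverse-energy correlation, eq. (cet), for E_perp
    sampled in N equal phi bins; dphi_bins is the shift in bins."""
    Emean = E.mean()
    return np.mean(np.roll(E, -dphi_bins) * E) / Emean**2 - 1.0

N = 360
phi = 2.0 * np.pi * np.arange(N) / N
# single-harmonic test profile: exact result is C(dphi) = (a**2/2) cos(dphi)
a = 0.2
E = 1.0 + a * np.cos(phi)
```

For a purely azimuthally symmetric profile the correlation vanishes identically, which is the exact expectation quoted below for the symmetric test case.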
For this test case, the exact solution
has $E_\perp(\phi)=\langle E_\perp\rangle$ independent of $\phi$ and
$C_{ET}=0$. As shown in Fig.\ 7, while the initial condition
agrees with that expectation, numerical errors develop at later times
due to the cartesian grid that breaks the symmetry of the initial conditions.
This is especially obvious from the fact that the
azimuthal anisotropy that develops is
directed along the four diagonal directions of the grid.
The numerical anisotropy is largest for the first order transition
case and reaches 10\% in the $E_\perp(\phi)$ distribution
while the numerical $E_\perp$ correlations remain below $0.1\%$.
These results indicate the magnitude of numerical errors
we must keep in mind in order to assess the significance of
results for more complex geometries.
\subsection{Transverse Shocks in Inhomogeneous Geometries}
In this section we demonstrate that a novel class of azimuthally asymmetric
collective flow patterns can develop from
inhomogeneous initial conditions. These patterns are analogous to the
hadronic volcanoes proposed by T.D.\ Lee \cite{tdlee}.
However, instead of emerging from
instabilities at the hadronization surface, these ``volcanoes''
arise from transverse directed shock waves
formed during the expansion of initial state inhomogeneities \cite{george}.
To illustrate this type of collectivity, consider
the evolution of two idealized Gaussian hot spots with radii 1 fm
and with
centroids separated by 4 fm in the transverse $x-$direction.
The initial transverse flow velocity is assumed to vanish
and only the Bjorken longitudinal flow velocity (\ref{ubj}) is taken
into account. The initial energy density of each Gaussian is assumed to peak
at 30 $ {\rm GeV/fm}^3$ to simulate the hot spots seen in Figs.\ 2--4.
In Figure 8 the evolution of the expanding hot spots
is shown for an ideal gas equation of state.
The shaded energy density contours are shown
every 10 computational time steps, corresponding to $0.8$ fm/c between
each frame. The expansion proceeds, as expected \cite{greinrisc},
with the formation of two (cylindrical) shells.
At 2.9 fm/c (left column, middle row), a high density wall of shocked matter
has formed at $x=0$ where the expanding shells intersect.
Subsequent evolution of that dense region leads
to the ``eruption'' of two back-to-back
jets of matter in the transverse $y-$direction.
These transverse shock patterns are
similar to the familiar ``squeeze-out''
flow pattern \cite{stocker} produced in lower energy nuclear collisions.
However, they are produced here by the collision of radially expanding shells
or bubbles of relativistic hot matter,
and in the case of multiple initial inhomogeneities, multiple
volcanoes form at azimuthal angles that depend on the particular geometry.
These transverse shocks are most clearly
visible in the ${\rm d}E_\perp/{\rm d}y{\rm d}\phi$
distribution as shown in Fig.\ 9.
The initial ${\rm d}E_\perp(\tau_{th})
/{\rm d}y {\rm d} \phi \simeq 24$ GeV is of course
rotation invariant. However, at the end of the evolution
($\tau = 7.7$ fm/c), two narrow towers of directed transverse energy
emerge at $\phi=90$ and 270 degrees.
This occurs because the initial azimuthal asymmetry in coordinate space
is transferred into azimuthal asymmetry
in momentum space through evolution. Finite dissipative effects would
of course decrease the intensity and broaden the azimuthal distribution
of these volcanoes.
Note also that the overall magnitude of the transverse energy decreases with
time because work is performed by the fluid associated with
longitudinal Bjorken expansion. The Gaussian hot spots expand not only in the
transverse direction but also along the beam direction.
The initial $E_\perp$--correlation function is of course zero.
By the time of freeze-out $({\cal E} \sim 0.1\; {\rm GeV/fm}^3)$, however,
a strong forward and backward correlation
develops for this geometry. It is important to observe that the
magnitude of the correlation is only 30\% even though the collective
signal-to-thermal noise ratio, $s$, in ${\rm d}E_\perp(\tau_f=7.7
\, {\rm fm/c})/{\rm d}y {\rm d}\phi$ is peaked at $s \simeq 4$.
This is a general feature of correlation functions
since the convolution introduces a dependence on the width
of the signal as well.
If the relative width of the signal to noise is $\delta$,
then a simple estimate of the auto-correlation is
\begin{equation}
C(0)\simeq \frac{(s-1)^2 \delta(1-\delta)}{(1+\delta (s-1))^2}
\; \; .
\eeq{c0}
We can understand qualitatively the magnitude of the correlation
at $\Delta \phi=0$ in Fig.\ 9 taking $s \simeq 3$ and $\delta \simeq 0.2$.
The off-peak anti-correlation can also be estimated as
\begin{equation}
C(\pi/2) \simeq -\frac{(s-1)^2 \delta^2}{(1+\delta (s-1))^2}
\; \; ,
\eeq{cpi2}
which is also in qualitative accord with the computed correlation.
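Plugging the quoted values into these estimates is a trivial check (our own numerical sketch):

```python
def c0(s, delta):
    """Auto-correlation estimate at dphi = 0, eq. (c0)."""
    return (s - 1.0)**2 * delta * (1.0 - delta) / (1.0 + delta * (s - 1.0))**2

def cpi2(s, delta):
    """Off-peak anti-correlation estimate, eq. (cpi2)."""
    return -(s - 1.0)**2 * delta**2 / (1.0 + delta * (s - 1.0))**2

# s ~ 3, delta ~ 0.2 as quoted in the text
print(c0(3.0, 0.2), cpi2(3.0, 0.2))  # ~0.33 and ~-0.08
```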
These correlations are numerically significant
because from Fig.\ 7 we found that
the numerical errors lead only to a $0.01\%$ correlation in
the ideal gas case. The slight forward-backward asymmetry
in the correlation function is only due to
our histogram binning procedure.
In Figure 10 we show the evolution of the same initial condition assuming
the Bag model equation of state, eq.\ (\ref{bag}).
The expansion in this case is qualitatively different from
the ideal gas case. The expanding shells or bubble walls
are much thinner as is also the high density
intersection region. This is even more clearly seen in Fig.\ 11
that shows the energy density as a function of time along the $x-$axis for
both equations of state. The transverse velocity profile along
that slice is also shown. In the case of a first order transition,
a sharp cusp is produced at $x\simeq -5,0,5$ fm, which correspond
to points where matter cools to the
critical mixed phase transition point, $\epsilon_Q$.
In contrast, the bubbles and transverse shock zones in the ideal gas case
remain comparable to the width of the initial hot spots. Those structures
are much thinner in the phase transition case. The reason is that
mixed phase matter does not resist compression due to the vanishing
of the speed of sound and therefore fills less space than an ideal
gas with finite compressibility (see also Fig.\ 6).
Also, the evolution in the case of a first order transition
is slower by about a factor of two relative to Fig.\ 8. This is due to
the stall of the expansion in the mixed phase because
of the vanishing velocity of sound in those regions
\cite{risch_pipi,dhrmg}.
In the ideal gas case, we have $c_s^2=1/3$ throughout the expansion.
With $c_s=0$ in the region with $\epsilon_H <\epsilon<
\epsilon_Q$, sharp cusps are formed that move outwards only
slowly.
As can be seen from the velocity profiles, the flow velocity has a discontinuity
across the cusp, typical of deflagration phenomena
\cite{risch_pipi,dhrmg,mgkaj}.
The hydrodynamic stability analysis of bubble formation is complex
and in the cosmological electroweak
context is still subject to controversy \cite{kajmec,kamion}.
In the QCD case even less is known,
though at least in \cite{kajmec} marginal stability was found to be possible
within the uncertainties of the relevant scale.
In our case, bubble formation does not result from supercooling
but rather from the dynamical expansion of initial state
inhomogeneities. Whether these hydrodynamic structures are stable is left
here as an open question,
especially since the thickness of the bubble walls is of hadronic dimensions.
As we show in the next section the stalled expansion and the formation
of thin shells of expanding mixed phase matter is the typical pattern
we find also for the more complex inhomogeneous, turbulent mini-jet initial
conditions.
Returning to Fig.\ 9, we see that the consequence of stalled expansion
is a considerable reduction of the transverse shock intensity
as measured by ${\rm d}E_\perp/{\rm d}y {\rm d}\phi$,
relative to the ideal gas case.
The signal to noise is reduced to $s\simeq 1.5$ and the relative width is
also reduced to $\delta \simeq 0.1$. With these reductions, the $E_\perp$
correlation is seen to be reduced by an order of magnitude in accord
with eqs.\ (\ref{c0},\ref{cpi2}). However, the few--percent correlation
is still numerically significant compared to the $0.1\%$ numerical
errors deduced from Fig.\ 7.
\subsection{Evolution of Turbulent Mini-Jet Initial Conditions}
In Figures 12, 13 we compare the evolution of a typical mini-jet
initial condition. The initial $T^{\mu\nu}$
required for the hydrodynamic evolution is taken from
eq.\ (\ref{xebj}) with $T^{00} \equiv {\cal E}, \, T^{0i} \equiv M^i$.
We note that the HIJING
initial conditions are not of the ideal fluid form.
By only taking the left hand
side of (\ref{xebj}) to fix all other components
of the fluid energy--momentum tensor, we convert the HIJING initial conditions
into an ideal fluid form through the assumption
of thermalization at $\tau_{th}$. Thermalization has the effect of {\em
reducing\/} the initial transverse energy somewhat from the HIJING input,
because some of the transverse energy is converted into longitudinal thermal
motion.
The time steps between frames in the case of a first order transition
(Fig.\ 13)
are taken to be twice as long (1.6 fm/c) as in the ideal gas case (Fig.\ 12)
to take into account the inherently slower and stalled expansion
in the first order transition case.
For this event, several prominent hot spot regions are seen to expand
in a manner analogous to the previous Gaussian examples. However, in
this case the hot spots also start with initial transverse collective flow
velocities determined from the mini-jet fluctuations.
The background fluid produced via soft beam-jet fragmentation
in this example is assumed to have the smooth, azimuthally symmetric profile
\begin{equation}
{\cal E}^{\rm soft}(\tau_{th},{\bf x}_\perp)= \frac{{\rm d}E^{\rm soft}_\perp}{
{\rm d}y}\,
\frac{1}{\tau_{th} \pi R^2}\, \frac{3}{2} (1-r_\perp^2/R^2)^{1/2} \; \; ,
\eeq{epssoft}
with ${\bf M}_\perp^{soft}=0$ as well.
Unlike in Figs.\ 2--4 where we included only about one
half of the soft transverse energy on account of the formation time
estimates, in this case we have taken the full
${\rm d}E^{\rm soft}_\perp/{\rm d}y=0.5$ TeV estimated from HIJING.
The soft component adds in this case (i.e.\ $p_0=2$ GeV/c)
a larger smooth background of depth
${\cal E}^{\rm soft}(\tau_{th},0) \simeq 10\; {\rm GeV/fm}^3$ to the central region.
As emphasized before, the magnitude of that soft background component
is uncertain to about a factor two,
and we take the above value to be on the conservative side.
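The quoted central depth follows from eq.\ (\ref{epssoft}) at $r_\perp=0$ with $\tau_{th}=0.5$ fm/c and ${\rm d}E^{\rm soft}_\perp/{\rm d}y=0.5$ TeV, provided a transverse radius $R\simeq 6.9$ fm is assumed ($R$ is not quoted explicitly in this section; the value below is our inference):

```python
import math

dEdy = 500.0     # GeV, soft transverse energy per unit rapidity
tau_th = 0.5     # fm/c
R = 6.9          # fm; assumed value, not quoted in this section

# eq. (epssoft) at r_perp = 0
eps0 = dEdy / (tau_th * math.pi * R**2) * 1.5
print(eps0)  # ~10 GeV/fm^3
```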
In Figure 12 one can see two main expanding bubbles emerging from the
two hottest spots. Their evolution is more irregular than in Fig.\ 8
because of the inhomogeneous background in which they propagate.
The solid curve shows the freeze-out hadronization contour
with $\epsilon=\epsilon_H=0.1\; {\rm GeV/fm}^3$. In this ideal gas case,
this hadronization surface shrinks monotonically but does not correspond
to any obvious ${\cal E}$ contour because of the underlying chaotic
collective flow velocity field.
In Figure 13, in addition to slowing down
of the expansion, the evolution with the first order transition
leads to the production of much thinner and irregular
bubble fragments of mixed
phase matter reminiscent of the thin shells in Fig.\ 10.
It is the multiple inhomogeneities and shear velocity fields that
make these structures much more irregular in this case.
The hadronization surface ($\epsilon(\tau,{\bf x}_\perp)=\epsilon_H$) also seems to
acquire a foam-like structure\footnote{Bubble formation
is a natural characteristic of a mixed phase, the above foam, however,
is induced by the initial inhomogeneities and the subsequent collective
motion.}.
The hadronization surface tends in this case of a first
order transition to coincide
closer with the plotted ${\cal E}(\tau,{\bf x}_\perp)=T^{00}(\tau,{\bf x}_\perp)$ contours
because the velocities of the bubble walls are smaller
than in the ideal gas case while the ejected cooler hadronic matter
tends to have higher flow velocity (see \cite{mgkaj}).
In Figure 14, a cut along the $x-$axis
provides a close-up of the complex nature
of the sharp structures of mixed phase matter that emerge
as well as of the chaotic transverse velocity fields in between them.
It also shows the energy density scales corresponding
to the shaded contours in Figs.\ 12 and 13. In this view, the
sharp initial inhomogeneities due to mini-jets superimposed
on the smooth beam-jet background and the initial turbulent velocity field
are particularly clearly revealed.
The evolution of the transverse energy in this event is shown in Fig.\ 15.
Unlike in the static examples discussed before,
the initial ${\rm d}E_\perp/{\rm d}y {\rm d}\phi$
is not azimuthally symmetric in this
case because of the initial turbulent velocity field. Of course,
the azimuthal angles of the bumps and valleys vary from event to event.
That is why correlation functions must be studied experimentally. However,
the evolution of ${\rm d}E_\perp/{\rm d}y{\rm d}\phi$
in this event reveals the general tendency of inhomogeneous
initial conditions to evolve into multiple azimuthally directed flow
structures.
Comparing the ideal gas and first order transition cases,
we see again that the former leads to a lower average final transverse energy
than the latter due to extra work done by the ideal gas
upon longitudinal expansion.
In Figure 16 we show ${\rm d}E_\perp/{\rm d}y{\rm d}\phi$
averaged over 50 events
and $C_{ET}(\Delta\phi)$ for such turbulent initial conditions.
The event-averaged ${\rm d}E_\perp/{\rm d}y{\rm d}\phi$
remains approximately azimuthally symmetric as required. Initially,
there is a small ($<1\%)$ azimuthal
auto-correlation that is induced when the HIJING
parton data are coarse-grained
into 1 fm$^3$ fluid cells. The initial state correlation
also includes the small fluctuating dipole contribution
arising from the fact that $\sum {\bf p}_{\perp\alpha}\ne 0$ for a finite number
of mini jets in the central rapidity slice.
At later times, however, a $3\%$ auto-correlation
develops in the ideal fluid case and approximately $2\%$
in the first order transition case.
The magnitude of the correlations for both equations of state
is evidently small.
In Figure 17, the dependence of the induced $E_\perp$ correlations
on the soft background as well as on the mini-jet scale parameter
$p_0$ is shown. The HIJING default case from
Fig.\ 16 corresponds to the solid curves.
In the ideal gas case (left panels), the auto-correlation reaches
$6\%$ if the background is reduced by a factor of two (dashed curve)
to ${\rm d}E_\perp^{soft}/{\rm d}y
=250$ GeV as in Figs.\ 2--4. On the other hand, with $p_0=2$ GeV/c fixed
but the soft background increased by a factor of two (dotted curve)
to ${\rm d}E_\perp^{soft}/{\rm d}y
=1$ TeV, the final auto-correlation is reduced by around a factor of two
relative to the default case.
Finally, if $p_0$ is decreased to 1 GeV/c but
${\rm d}E_\perp^{soft}/{\rm d}y
=0.5$ TeV is kept fixed (dash--dotted curve),
the correlation hardly changes relative to the corresponding
$p_0=2$ GeV/c case (solid). The wiggle in this last case
is due to the more limited statistics (20 events) available for this average.
We conclude that in the ideal gas case, the collective azimuthal anisotropies
are approximately linearly dependent on the level of
the soft beam-jet component.
The initial state correlation function, however, also depends on the
$p_0$ and soft background scales. In the lower panels,
the ratio of the final correlation function to the initial one is shown
in the restricted $\Delta \phi< \pi/2$ interval that avoids the
artificial pole created at the point where the
initial correlation function crosses zero.
It is interesting to note that this ratio for the three $p_0=2$ GeV/c
curves is practically independent of the background level.
On the other hand, the $p_0=1$ GeV/c ratio is significantly larger.
Thus, while the absolute magnitude of the correlation function
depends roughly linearly on the soft background level, the dynamical
enhancement of the initial state correlations in the ideal gas case
peaks near 4--5 at $\Delta \phi=0$ approximately independent
of that background.
While the absolute collective signature as measured by the
$E_\perp$--correlation
function is small, it is quite significant compared to the initial state
correlations (Fig.\ 16) and the numerical accuracy (Fig.\ 7).
The transverse energy correlation function
is of course only one way
to search for azimuthally asymmetric collective flow phenomena.
The power spectrum, $E_\perp(m)
=\langle \int {\rm d} \phi\; e^{-i m \phi}\, E_\perp(\phi)\rangle$,
wavelet analysis, and factorial moment fluctuation analysis may provide more
sensitive probes of the induced collectivity that is apparent
in the ratio curves in Fig.\ 17.
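The power spectrum can be evaluated from the same binned $E_\perp(\phi)$ data; a minimal discrete version of our own (a single harmonic of relative amplitude $a$ yields $|E_\perp(1)|=\pi a\langle E_\perp\rangle$):

```python
import numpy as np

def e_perp_m(E, m):
    """Discrete version of E_perp(m) = int dphi exp(-i m phi) E(phi)
    for E_perp sampled in N equal phi bins."""
    N = len(E)
    phi = 2.0 * np.pi * np.arange(N) / N
    return np.sum(E * np.exp(-1j * m * phi)) * (2.0 * np.pi / N)

N = 256
phi = 2.0 * np.pi * np.arange(N) / N
a = 0.3
E = 1.0 + a * np.cos(phi)   # single harmonic: |E_perp(1)| = pi * a
```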
In the case of the first order phase transition,
the $E_\perp$ correlations are significantly suppressed relative to the ideal gas
case and appear to be much less sensitive to the background level.
Comparing the solid and dotted curves
indicates, however, a stronger dependence
on the $p_0$ mini-jet scale.
For $p_0=1$ GeV/c and ${\rm d}E_\perp^{soft}/{\rm d}y
=1$ TeV, the azimuthal anisotropies fall below $1\%$.
On the other hand, in the ratio of final to initial
correlation functions, the largest enhancement occurs for the $p_0=1$
GeV/c case.
The ratios also show a qualitative shoulder feature in the first order
transition case for which we have not found a simple explanation. The
level of collectivity is, however, probably too small to see such structures.
The suppression of collective flow phenomena in the case of a first order
transition is due to the vanishing of the speed of sound over
a large interval of energy densities, $\epsilon_H<\epsilon<\epsilon_Q$.
This is also manifest in the smaller reduction of the initial
${\rm d}E_\perp/{\rm d}y$ due to longitudinal expansion
relative to the expansion with an ideal gas equation of state.
The above results can be understood as a consequence
of a rather general feature of evolution with any equation
of state that possesses a dip of $c_s^2={\rm d}p/{\rm d}\epsilon$
in a finite range of energy densities. In the case of a first order
transition, $c_s^2=0$
in the mixed phase, and pressure gradients driving collective flow phenomena
are strongly suppressed.
As emphasized in \cite{risch_pipi}, even a continuous
cross-over transition may feature such a minimum if the
cross-over temperature region is not too broad. For realistic
$\Delta T/T_c \simeq 0.1$, the softening of the equation of state
is sufficiently strong that the time-delay signature discussed in the next
section should still be observable.
The same physics of softening is expected to lead
to a suppression of directed transverse flow phenomena at much lower
AGS energies \cite{risch_flow}
and also to the suppression \cite{vanhove} of
${\rm d}\langle p_\perp\rangle/{\rm d}({\rm d}N_\pi/{\rm d}y)$
in the mixed phase region. In the turbulent-glue scenario,
the suppression of pressure gradients
in the cross-over region of the equation of state
has the observable consequence of reducing the azimuthally
asymmetric collective phenomena.
It is indeed curious that many ``barometric''
signatures of QGP formation involve
the {\em suppression\/} of collective behaviour that would otherwise
naturally arise in ideal gas systems. This makes the search for
signatures of the QGP more difficult because ordinary dissipative effects
due to viscosity, thermal and colour conductivity etc.\ work in the same
direction. It is only through the careful systematic studies
of all observables as a function of beam energy and nuclear size
that one can hope to unravel interesting threshold-type
behaviour caused by the passage through a mixed-phase region
from the suppression of collective phenomena due to
less interesting dissipative dynamics.
\section{Robustness of the Time-Delay Signature}
Since it is the reduction of collective observables in the evolution of a QGP
that signals rapid cross-over regions in the equation of state,
it is important to find as many correlated signatures
as possible in the search for that phase of matter.
As repeatedly stressed above,
one of the generic consequences of hydrodynamics
with an equation of state that has a soft region (reduction of $c_s^2$)
is time delay. Meson interferometry has been proposed
\cite{pratt,bertsch} as the main experimental tool to search
for such time-delay signatures.
As shown recently in \cite{risch_pipi}, that signature of stalled
dynamics is fortunately robust to an increase in the width
of the cross-over region. In this section,
we want to demonstrate in more detail the robust character of that
observable even to the much more unfavourable
turbulent initial conditions discussed above.
In Figure 18, the evolution of the {\em mean\/} energy
density is shown, averaged over the inner $r_\perp<3$ fm core of the plasma.
The average is again over 50 events for each equation of state.
At $\tau=\tau_{th}=0.5$ fm/c, the mean central energy density
is approximately $16\; {\rm GeV/fm}^3$ for the turbulent ensemble with $p_0=2$ GeV/c and
${\rm d}E_\perp^{soft}/{\rm d}y =0.5$ TeV. The solid curve, for the case of a
first order transition and 3+1--dimensional expansion, should be compared to
the thick dashed curve for the case of an ideal gas equation of state.
In addition, the light dashed and dash--dotted curves are shown
for comparison. They correspond to
ideal one--dimensional Bjorken expansion
with transverse expansion neglected.
The $\tau^{-1}$ curve represents pure longitudinal
expansion without work, $p=0$,
while the $\tau^{-4/3}$ curve corresponds to the ideal gas case.
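Both limiting curves follow from ideal one--dimensional Bjorken scaling, $\epsilon(\tau)=\epsilon(\tau_{th})\,(\tau_{th}/\tau)^{1+c_s^2}$, with $c_s^2=1/3$ for the ideal gas ($\tau^{-4/3}$) and $c_s^2=0$, i.e.\ $p=0$, giving $\tau^{-1}$; a trivial sketch:

```python
def eps_bjorken(tau, eps0=16.0, tau0=0.5, cs2=1.0 / 3.0):
    """Ideal 1D Bjorken scaling eps(tau) = eps0 * (tau0/tau)**(1 + cs2).

    eps0 = 16 GeV/fm^3 at tau0 = 0.5 fm/c matches the mean core
    density quoted for the turbulent ensemble."""
    return eps0 * (tau0 / tau) ** (1.0 + cs2)
```

The exponent makes explicit why the expansion with pressure ($c_s^2>0$) cools faster: longitudinal work drains energy from the fluid element.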
The 3+1--dimensional ideal gas evolution starts to deviate from the
one--dimensional case after a short
time $\sim 2$ fm/c due to rapid radial expansion. In the first order transition
case, the mean energy density follows the ideal one--dimensional Bjorken
curve up to $\sim 6$ fm/c because the transverse expansion is stalled.
The freeze-out occurs near $\epsilon\simeq \epsilon_H\simeq 0.1\; {\rm GeV/fm}^3$.
It is clear from Fig.\ 18
that the freeze-out time is approximately twice as long in
the case of a phase transition to the QGP.
Especially important for the
$R_{\rm out}/R_{\rm side}$ signature \cite{pratt,bertsch,risch_pipi} of the QGP
is that the transverse coordinate distribution in the first order transition
case remains more compact even though it takes a longer time for the system
to freeze-out. This can be seen clearly in Figs.\ 12 and 13
which, together with Fig.\ 18, confirm that
the space--time geometry of freeze-out
remains markedly different in the two cases.
Figure 19 emphasizes the strongly inhomogeneous
character of expansion in the turbulent-glue scenario. The thick solid
and dashed curves are the same as in Fig.\ 18. The two thin curves
show the magnitude of the rms fluctuations of the energy density
in the central core for the first order transition case.
The initial fluctuations are already large. However, those fluctuations
grow rapidly as the system passes through
the mixed phase and the foam structure
in Fig.\ 13 develops. This shows that freeze-out is not a fixed-time event,
but is spread over the entire evolution.
The most essential aspect for the time-delay signal is
that the freeze-out time duration is large relative to the transverse
coordinate dispersion \cite{pratt,bertsch,risch_pipi}.
The actual computation of the pion interference pattern from the turbulent
evolution is beyond the scope of the present study. A substantial
generalization of the already computationally demanding methods used in
Ref.\ \cite{risch_pipi} will be required. However, based on that work,
we expect the general interference pattern to reflect well the underlying,
time-delayed freeze-out geometry. Therefore,
the time-delay signature survives even if the initial conditions
are as inhomogeneous as in the turbulent-glue scenario.
The detailed interference pattern can be expected, on the other hand,
to differ from the patterns produced from the evolution of
homogeneous initial conditions due to the overlapping
bubble wall geometries found in Fig.\ 13. The Fourier transform of
such hadronic foam is bound to contain extra structure since
multiple distance scales (shell thickness and radii, and relative separations)
enter the coordinate space distribution.
It would be interesting in the future to explore such novel interference
patterns that would be specific to inhomogeneous geometries.
\section{Enhanced Radiance of Hot Spots}
A specific consequence of hot-spot formation is enhanced
radiance of hard probes. For instance, the (invariant) rate for emitting
photons with momentum ${\bf k}$ from a fluid element consisting of
quarks and gluons
with temperature $T$ and 4--velocity $u^{\mu}$ is given by \cite{kapusta}
\begin{equation}
\omega\, \frac{{\rm d} R^{\gamma}}{{\rm d}^3 {\bf k}} =
\frac{5 \alpha \alpha_S}{18 \pi^2}\;T^2
e^{-k \cdot u/T} \ln \left( \frac{2.912\, k \cdot u}{g^2 T} + 1 \right)\;\;.
\eeq{phot}
It was moreover shown in \cite{kapusta}
that this rate is approximately the same if the fluid element
consists of hadrons instead of QGP.
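For a fluid element at rest ($k\cdot u=\omega$), the rate (\ref{phot}) is easy to evaluate. In the sketch below, $\alpha=1/137$ and $\alpha_S=g^2/4\pi=0.3$ are illustrative coupling values of our own choosing, not taken from the text:

```python
import math

ALPHA = 1.0 / 137.0    # electromagnetic coupling (assumed)
ALPHA_S = 0.3          # strong coupling (assumed, not quoted in the text)
G2 = 4.0 * math.pi * ALPHA_S

def photon_rate(omega, T):
    """Invariant photon emission rate, eq. (phot), for a fluid element
    at rest (k.u = omega); omega and T in GeV, rate in natural units."""
    pref = 5.0 * ALPHA * ALPHA_S / (18.0 * math.pi**2)
    return (pref * T**2 * math.exp(-omega / T)
            * math.log(2.912 * omega / (G2 * T) + 1.0))
```

The exponential factor $e^{-k\cdot u/T}$ makes the high-momentum tail extremely sensitive to the local temperature and flow velocity, which is why hot spots enhance the hard end of the spectrum.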
To estimate photon radiation in the turbulent-glue scenario as
compared to that from a homogeneous Bjorken cylinder,
we integrate numerically the rate
(\ref{phot}) over the photon emission
angle, over (proper) time and transverse coordinates, and evaluate it at
$y=\eta = 0$ ($\eta= \ln[(t+z)/(t-z)]/2$ being the space--time rapidity).
The final differential photon spectrum
${\rm d}N^{\gamma}/{\rm d}\eta {\rm d}y k_\perp {\rm d}k_\perp |_{y=\eta=0}$
is shown in Fig.\ 20
averaged over 10 HIJING events (thick lines) and for a homogeneous
Bjorken cylinder (thin lines) of radius $R=4$ fm with the same initial
(average) energy as the HIJING events. We note that
even a single event
produces a spectrum quite similar to the 10--event average.
As one expects, the higher
temperatures of hot spots in the turbulent-glue scenario lead to an
enhancement of the exponential tail of the photon spectrum.
Comparing the yield in the case of a phase transition (dashed lines) to that
in the case of an ideal gas equation of state (solid lines), one observes
that the longer lifetime of the system in the case of a phase transition
leads to enhanced radiation of photons with small as well as
with large momenta in the case of the turbulent-glue scenario, while
for the homogeneous Bjorken cylinder, only radiation of
photons with small momenta is significantly enhanced.
The higher fluid velocities
in the case of the first order transition scenario as well as
reheating in the shocks created in the expansion of hot spots
seem to be responsible for the stronger population of the
high-momentum tail of the spectrum.
Correlations of direct gammas with azimuthally directed transverse shocks
should also be looked for.
Other probes such as strangeness enhancement may also be correlated
with transverse shocks.
\section{Summary}
In this paper we studied several possible consequences of initial state
inhomogeneities and turbulence in quark--gluon plasmas arising from
multiple mini-jet production in ultrarelativistic nuclear collisions.
The HIJING model was used to generate the initial mini-jet configurations
and to provide an estimate of the soft beam-jet background.
The ensemble of initial conditions was found to exhibit a wide
spectrum of fluctuations not only of the initial energy density
distribution but also of the initial
transverse velocity field. The fluctuations are large
if the equilibration time of the plasma is short, $\sim 0.5$ fm/c,
as suggested by recent estimates of radiation energy loss.
We refer to this type of initial conditions as the
turbulent-glue scenario to contrast it with the more conventional
hot-glue scenario which assumes cylindrical symmetry,
homogeneity, and quiescence of the initial plasma.
We assumed the validity of non-dissipative
hydrodynamics in order to assess the most optimistic observable
consequences of such unfavourable initial conditions.
We studied three observables that could serve as diagnostic
tools of such plasmas.
First, we showed that new types of azimuthally asymmetric
collective flow patterns (volcanoes) could arise in inhomogeneous
geometries. At the event-by-event level they could be observed
by looking for spikes
in the transverse energy distribution, ${\rm d}E_\perp/{\rm d}y{\rm d}\phi$.
However, the collective effects are difficult to identify
because of large uncorrelated fluctuations associated with the
turbulent nature of the initial conditions.
Those uncorrelated fluctuations can be averaged out
by studying instead the azimuthal
correlation function of the transverse energy.
The remaining dynamical correlations are small
but sensitive to the plasma equation of state. In general,
evolution with an equation of state featuring a minimum of the speed of sound
in the energy density range of interest reduces all collective
flow phenomena relative to the ideal gas case. For example, the overall
reduction of the initial transverse energy due to work associated with
longitudinal expansion is maximal for the ideal gas case.
Second, we discussed the time-delay signature of a QGP transition.
We found that, as in the conventional hot-glue scenario,
the evolution is stalled if there is a soft region of the equation of state.
It is remarkable that this phenomenon survives not only if
the QGP equation of state features merely a smooth cross-over instead
of a first order transition \cite{risch_pipi,dhrmg},
but also if the
ensemble of initial conditions is as complex as in the turbulent-glue
scenario. Meson interferometry
\cite{pratt,bertsch,risch_pipi} appears therefore
to be one of the most robust diagnostic tools in the search for
the QGP transition.
Finally, we considered briefly the effect of hot spots on hard probes.
The thermal contribution to probes such as direct photons is obviously
exponentially sensitive to the local temperature. We showed that
the turbulent-glue scenario can enhance by more than an order
of magnitude the high--$k_\perp$ tails for both equations of state.
Other hard probes and even softer ones such as strangeness production
can be expected to be correlated and enhanced due to hot spots
and collective transverse shock phenomena. However, to identify
any enhanced yields with hot-spot formation
will require subtraction of
other pre-equilibrium contributions to those yields.
The above scenario represents in many ways the most
optimistic point of view for signatures of QGP formation.
We neglected all dissipative effects that tend
to dampen collective behaviour of the system and decrease the
freeze-out time. The thin expansion shell structures found may be diffused
considerably by such effects.
Chemical equilibrium may not be maintained through
the transition point. In general, chemical rate equations
would have to supplement the hydrodynamic equations.
Also, we assumed Bjorken boundary conditions and hence
neglected fluctuations along the longitudinal
direction. In the general 3+1--dimensional case,
fluctuations will produce hot-spot droplets
instead of the cylindrical structures studied here.
That extra dimension decreases the overlap phase space
of expanding inhomogeneities and therefore reduces
the pattern of collective transverse shocks.
We have also assumed a most conservative homogeneous beam-jet
background underneath the turbulent mini-jet plasma.
In confronting data eventually,
these and other complications will have to be considered in more
detail in future studies.\\[2ex]
\noindent
{\bf Acknowledgments}
\\ ~~ \\
We thank M.\ Asakawa, F.\ Cooper, C.\ Greiner, J.\ Harris, D.\ Heumann,
B.\ K\"ampfer, J.\ Kapusta, K.\ Kinder--Geiger, T.D.\ Lee,
B.\ M\"uller, P.V.\ Ruuskanen,
E.\ Shuryak, H.\ St\"ocker, and X.N.\ Wang for stimulating
discussions during the course of this work.
\\ ~~ \\
\section{Introduction}
\noindent
Ever since the pioneering work of Schonfeld and Deser
et al.~\cite{schonfeld},
2+1 dimensional Chern-Simons (CS) theories have received
quite some attention.
The motivations to study these theories range from applications in
knot theory to applications in condensed matter systems such as
fractional quantum Hall liquids.
In this talk, I will discuss the implications of adding a CS term
to 2+1 dimensional gauge theories spontaneously broken down to a
finite residual gauge group by means of the Higgs mechanism. That is,
the focus is on models governed by an action of the form
\begin{eqnarray} \label{alg}
S &=& S_{\mbox{\scriptsize YMH} } + S_{\mbox{\scriptsize matter}}
+ S_{\mbox{\scriptsize CS}} \, ,
\end{eqnarray}
where the Yang--Mills--Higgs action $S_{\mbox{\scriptsize YMH}}$
describes the spontaneous breakdown of some continuous compact gauge
group $G$ to a finite subgroup ${ H}$,
$S_{\mbox{\scriptsize matter}}$ describes conserved matter currents
coupled to the gauge fields, and
$S_{\mbox{\scriptsize CS}}$ denotes the CS action~\cite{schonfeld}.
The so--called discrete $H$ gauge theories describing the long distance
physics of the models~(\ref{alg}) without CS term
have been studied in 2+1 and 3+1 dimensional space time and are by
now completely understood.
For a recent review and detailed references,
see Ref.~\cite{banff}.
To sketch the main results, the spectrum features topological
defects which in 2+1 dimensional space time appear as
vortices carrying
magnetic flux labeled by the elements of $H$. If
$H$ is nonabelian, the vortices exhibit a nonabelian Aharonov-Bohm
(AB) effect: upon braiding two vortices their fluxes affect each
other through conjugation. The residual
global gauge group $H$ also acts on the fluxes through
conjugation, so the different magnetic vortices
are labeled by the conjugacy classes of $H$. This is in a nutshell the physics
described by the Yang-Mills Higgs part $S_{\mbox{\scriptsize YMH} }$
of the action~(\ref{alg}).
The matter fields coupled to the gauge fields in
$S_{\mbox{\scriptsize matter}}$ form multiplets which
transform irreducibly under ${ G}$. In the broken
phase these branch to irreducible representations of the residual gauge
group ${ H}$. So the matter fields introduce point charges in the broken
phase labeled by the unitary irreducible representations (UIR's) $\Gamma$ of
$H$. If such a charge encircles a magnetic flux $h \in H$,
it also undergoes an AB effect: it returns
transformed by the matrix $\Gamma(h)$. Since all gauge fields
are massive, the foregoing AB effects form the only long range interactions
among the charges and vortices.
The complete spectrum also features dyons obtained
by composing the vortices and charges. These are labeled by the conjugacy
classes of $H$ paired with a nontrivial centralizer
representation~\cite{spm}. A breakthrough in the understanding
of these models
was the observation~\cite{spm} that this spectrum of charges,
vortices and dyons together with the spin, braiding and fusion properties
of these particles
is, in fact, fully described by the representation
theory of the quasitriangular Hopf
algebra $D(H)$ resulting~\cite{dpr} from Drinfeld's double
construction applied to the algebra
${\cal F}(H)$ of functions on ${ H}$.
As has been argued
in Ref.~\cite{spm1},
the presence of a CS term $S_{\mbox{\scriptsize CS}}$ for the broken gauge
group $G$ in the action~(\ref{alg}) gives rise to additional
AB interactions among the vortices which are completely
encoded in a 3-cocycle $\omega \in H^3(H,U(1))$
for the residual finite gauge group $H$.
The related algebraic structure is the quasi-Hopf algebra $D^{\omega}(H)$
being a deformation of $D(H)$ by this 3-cocycle $\omega$.
In Ref.~\cite{spm1}, these general results were just explicitly illustrated
by the abelian CS Higgs model in which the (compact) gauge group
$G \simeq U(1)$ is broken down to a cyclic subgroup $H \simeq {\mbox{\bf Z}}_N$.
Here, I will summarize the results of my recent paper~\cite{spabcst}
in which this analysis was extended to spontaneously
broken abelian CS theories in full generality.
I will be rather burlesque concerning references.
For civilized referencing, the reader has to consult Ref.~\cite{spabcst}.
As for conventions, natural units
in which $\hbar=c=1$ are employed throughout. I will
exclusively work in 2+1 dimensional Minkowski space with signature
$(+,-,-)$. Spatial coordinates are denoted by $x^1$ and $x^2$ and
the time coordinate by $x^0$. Greek indices run from 0 to 2, whereas spatial
components are labeled by latin indices $\in 1,2$.
\section{The models, their spectrum and the AB interactions \label{model}}
\noindent
Let us concentrate on the subset of models~(\ref{alg})
realizing symmetry breaking schemes
$
{ G} \simeq U(1)^k \rightarrow H
$
with $U(1)^k$ the direct product of $k$ compact $U(1)$ gauge
groups and the finite subgroup $H \simeq
{\mbox{\bf Z}}_{N^{(1)}} \times \cdots \times
{\mbox{\bf Z}}_{N^{(k)}} $ a direct product of $k$ cyclic groups
${\mbox{\bf Z}}_{N^{(i)}}$ of order $N^{(i)}$. So, the Yang--Mills--Higgs
part of the action~(\ref{alg}) contains $k$
complex scalar Higgs fields $\Phi^{(i)}$ (with $i \in 1,2,\ldots,k$)
carrying charge $N^{(i)}e^{(i)}$ with $e^{(i)}$
the coupling constant for the $i^{th}$ compact $U(1)$ gauge field
$A_{\kappa}^{(i)}$, i.e.
\begin{eqnarray} \label{ymh}
{S}_{\mbox{\scriptsize YMH}} &=& \int d\,^3x \; \left(
\sum_{i=1}^k\{-\frac{1}{4}F^{(i)\kappa\nu} F^{(i)}_{\kappa\nu} +
({\cal D}^\kappa \Phi^{(i)})^*{\cal D}_\kappa \Phi^{(i)} -
V(|\Phi^{(i)}|)\} \right) ,
\end{eqnarray}
with ${\cal D}_\kappa \Phi^{(i)}=
(\partial_{\kappa}+\imath N^{(i)}e^{(i)} A_{\kappa}^{(i)})\Phi^{(i)}$ and
$ F^{(i)}_{\kappa\nu} = \partial_{\kappa} A_{\nu}^{(i)}
-\partial_{\nu}A_\kappa^{(i)}$. All $U(1)$ gauge groups are
assumed to be broken down at the same energy scale $M_H = v \sqrt{2\lambda}$.
Hence,
$
V(|\Phi^{(i)}|) = \frac{\lambda}{4}(|\Phi^{(i)}|^2-v^2)^2$ with
$\lambda, v > 0$. In the matter part of~(\ref{alg}),
we then have $k$ conserved matter currents $j^{(i)}$
coupled to the gauge fields
\begin{eqnarray} \label{ma}
{S}_{\mbox{\scriptsize matter}} &=& \int d\,^3x \; \left(
-\sum_{i=1}^k j^{(i)\kappa}A^{(i)}_{\kappa} \right) .
\label{j12mat}
\end{eqnarray}
The matter charges $q^{(i)}$ introduced by the current $j^{(i)}$
are supposed to be multiples of $e^{(i)}$.
Finally, the most general CS action for this theory is of the form
\begin{eqnarray} \label{csact}
S_{\mbox{\scriptsize CS}} & = &
\int d\,^3x \; \left( \sum_{i=1}^{k}
\; \frac{\mu^{(i)}}{2} \epsilon^{\kappa\sigma\rho}
A^{(i)}_{\kappa} \partial_{\sigma}
A^{(i)}_{\rho} \label{CSt1} +
\sum_{1 \leq i<j \leq k}
\frac{\mu^{(ij)}}{2} \epsilon^{\kappa\sigma\rho}
A^{(i)}_{\kappa} \partial_{\sigma}
A^{(j)}_{\rho} \right) ,
\end{eqnarray}
with $\mu^{(i)}$ and
$\mu^{(ij)}$ the topological masses and $\epsilon^{\kappa\sigma\rho}$
the three dimensional anti-symmetric Levi-Civita tensor normalized
such that $\epsilon^{012}=1$. Hence, there are $k$ distinct CS
terms $(i)$ describing self couplings of the $U(1)$ gauge fields.
In addition, there are $\frac{1}{2}k(k-1)$
distinct CS terms $(ij)$
establishing pairwise couplings between different $U(1)$ gauge fields.
Note that by a partial integration a CS term $(ij)$ becomes a term $(ji)$,
so these terms are equivalent.
Let us also assume that this theory features a family of Dirac monopoles
for each compact $U(1)$ gauge group. That is, the spectrum of Dirac monopoles
consists of the magnetic charges
$g^{(i)} = \frac{2\pi m^{(i)}}{e^{(i)}}$ with $m^{(i)} \in {\mbox{\bf Z}}$
for $1 \leq i \leq k$. In this 2+1 dimensional Minkowski setting
these monopoles are instantons tunneling between states with flux
difference $\Delta \phi^{(i)} = \Delta \int \!
d\,^2x \,\epsilon^{kl}\partial_k A^{(i)}_l=\frac{2\pi m^{(i)}}{e^{(i)}}$.
It can be shown that a consistent implementation of these
monopoles requires
that the topological masses in~(\ref{csact}) are
quantized as~\cite{spabcst}
\begin{eqnarray}
\mu^{(i)} \;=\; \frac{p^{(i)} e^{(i)}e^{(i)}}{\pi}
\;\;\; \mbox{and} \;\;\;
\mu^{(ij)} \;=\; \frac{p^{(ij)} e^{(i)}e^{(j)}}{\pi}
\;\;\; \mbox{with $p^{(i)},
p^{(ij)} \in {\mbox{\bf Z}}$} \,.
\label{quantmu}
\end{eqnarray}
It is known that in contrast to ordinary 2+1 dimensional
QED, the presence of Dirac monopoles in these massive
gauge theories does {\em not} lead to confinement of the matter
charges $q^{(i)}$.
The spectrum of the theory defined
by~(\ref{alg}) with~(\ref{ymh})--(\ref{csact})
contains $k$ different quantized matter charges $q^{(i)}=n^{(i)}e^{(i)}$
with $n^{(i)} \in {\mbox{\bf Z}}$, $k$ different
vortex species carrying quantized magnetic flux
$\phi^{(i)} = \frac{2\pi a^{(i)}}{N^{(i)} e^{(i)}}$ with
$a^{(i)} \in {\mbox{\bf Z}}$ and dyonic combinations of these charges and
vortices. Since all gauge fields are massive, there are
no long range Coulomb interactions between these particles.
The remaining long range interactions are AB interactions.
As has been explained in~\cite{sam},
a counterclockwise monodromy of a vortex $\phi^{(i)}$ and a charge
$q^{(i)}$ gives rise to the AB phase $\exp(\imath q^{(i)}\phi^{(i)})$ in
the wave function. The crucial point was that the Higgs mechanism replaces
the fluxes attached to the charges $q^{(i)}$ in the unbroken
CS phase~\cite{schonfeld} by screening charges which screen the Coulomb
fields around the charges but do {\em not} couple to the AB
interactions. Hence, contrary to the unbroken CS phase there are {\em no}
AB interactions among the charges in the CS Higgs phase.
Instead, the CS term~$(i)$ in~(\ref{csact}) now implies the AB phase
$\exp(\imath \mu^{(i)} \phi^{(i)} \phi^{(i)'})$ for a counterclockwise
monodromy of two remote vortices $\phi^{(i)}$ and $\phi^{(i)'}$,
whereas a CS term~$(ij)$ gives rise to the AB
phase $\exp(\imath \mu^{(ij)} \phi^{(i)} \phi^{(j)})$ for a counterclockwise
monodromy of two remote vortices $\phi^{(i)}$ and
$\phi^{(j)}$ \cite{spabcst}.
Let us label the particles in this theory as
$\left( A,n^{(1)} \!\! \ldots n^{(k)}\right)$ with
$A := \left(a^{(1)},\ldots,a^{(k)} \right)$ and $a^{(i)}, n^{(i)} \in {\mbox{\bf Z}}$.
Upon implementing~(\ref{quantmu}),
the foregoing AB interactions can then be recapitulated as
\begin{eqnarray}
{\cal R}^{2 \;\; A \qquad \;\;\;\;\;\;A'}_{\; \; \;n^{(1)} \ldots n^{(k)} \;\;
n^{(1)'} \ldots n^{(k)'} } &=&
\varepsilon_A(A') \; \Gamma^{n^{(1)} \ldots n^{(k)}}(A') \;
\varepsilon_{A'} (A) \; \Gamma^{n^{(1)'} \ldots n^{(k)'}}(A) \, .
\label{brz2}
\end{eqnarray}
The indices attached to the monodromy operator
${\cal R}^2$ express the fact that it acts on the particles
$\left( A,n^{(1)} \!\! \ldots n^{(k)}\right)$ and
$\left( A',n^{(1)'} \!\! \ldots n^{(k)'}\right)$, whereas
\begin{eqnarray}
\varepsilon_A(A') &:=& \exp \left( \sum_{i=1}^{k}
\frac{2 \pi \imath p^{(i)}}{N^{(i)\;2}}
\, a^{(i)}a^{(i)'} + \sum_{1\leq i < j \leq k}
\frac{2 \pi \imath p^{(ij)}}{N^{(i)}N^{(j)}}
\, a^{(i)}a^{(j)'}\right), \label{epsi}
\end{eqnarray}
and $\Gamma^{n^{(1)} \! \ldots n^{(k)}} (A)
:= \exp \left(
\sum_{i=1}^k \frac{2 \pi \imath}{N^{(i)}} \, n^{(i)} a^{(i)} \right)$.
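The AB phases above can be cross-checked in a few lines of code. The following sketch (with an assumed list-based encoding of the fluxes $a^{(i)}$ and charges $n^{(i)}$) implements (\ref{brz2}) and verifies, for a single ${\mbox{\bf Z}}_N$ factor, that the product $\varepsilon_A(A')\,\varepsilon_{A'}(A)$ reproduces the continuum vortex--vortex monodromy $\exp(\imath\mu^{(i)}\phi^{(i)}\phi^{(i)'})$ with the quantized topological mass of (\ref{quantmu}).

```python
import cmath
import math

# Sketch of the AB phases (brz2); the list-based encoding of fluxes
# A = [a^(1), ..., a^(k)] and charges n = [n^(1), ..., n^(k)] is an
# assumption made for illustration.
def eps(A, Ap, N, p, pij):
    """eps_A(A') of eq. (epsi), with pij[i][j] the parameters p^(ij)."""
    k = len(N)
    arg = sum(p[i] * A[i] * Ap[i] / N[i] ** 2 for i in range(k))
    arg += sum(pij[i][j] * A[i] * Ap[j] / (N[i] * N[j])
               for i in range(k) for j in range(i + 1, k))
    return cmath.exp(2j * math.pi * arg)

def gamma(n, A, N):
    """Gamma^{n}(A), the ordinary charge-flux AB phase."""
    return cmath.exp(2j * math.pi * sum(n[i] * A[i] / N[i] for i in range(len(N))))

def monodromy(A, n, Ap, np_, N, p, pij):
    """R^2 acting on the pair (A, n), (A', n'), eq. (brz2)."""
    return (eps(A, Ap, N, p, pij) * gamma(n, Ap, N)
            * eps(Ap, A, N, p, pij) * gamma(np_, A, N))

# Consistency check for one Z_N factor (N = 4, p = 3, e = 1 assumed):
# the continuum vortex-vortex phase exp(i mu phi phi') with
# mu = p e^2/pi and phi = 2 pi a/(N e) equals eps_A(A') eps_A'(A).
N, p, pij = [4], [3], [[0]]
a, ap = 2, 3
phi, phip = 2 * math.pi * a / N[0], 2 * math.pi * ap / N[0]
continuum = cmath.exp(1j * (p[0] / math.pi) * phi * phip)
lattice = eps([a], [ap], N, p, pij) * eps([ap], [a], N, p, pij)
assert abs(continuum - lattice) < 1e-9
```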
It can also be shown that the particles in this theory satisfy the
canonical spin-statistics connection:
\begin{eqnarray} \label{spinstatis}
\exp \left(\imath \Theta_{(A,n^{(1)} \! \ldots n^{(k)})}\right) \;=\;
\exp \left(2 \pi \imath s_{(A,n^{(1)} \! \ldots n^{(k)})}\right) \;=\;
\varepsilon_A(A) \; \Gamma^{n^{(1)}\! \ldots n^{(k)}}(A) \, ,
\end{eqnarray}
with $\exp (\imath \Theta_{(A,n^{(1)} \! \ldots n^{(k)})})$
the quantum statistics phase resulting from a counterclockwise
braid operation ${\cal R}$ on two identical particles
$(A,n^{(1)} \! \ldots n^{(k)})$
and $s_{(A,n^{(1)} \! \ldots n^{(k)})}$ the spin assigned to these
particles.
Under the remaining (long range)
AB interactions~(\ref{brz2}) and~(\ref{spinstatis}),
the charge labels $n^{(i)}$
clearly become ${\mbox{\bf Z}}_{N^{(i)}}$ quantum numbers. Also, in the presence
of the aforementioned Dirac monopoles
the fluxes $a^{(i)}$ are conserved modulo $N^{(i)}$.
Specifically, the tunneling events induced by the minimal monopoles
read~\cite{spabcst}
\begin{eqnarray} \label{instb1}
\mbox{monopole $(i)$: } \left\{ \begin{array}{lcl}
a^{(i)} & \mapsto & a^{(i)} -N^{(i)} \\
n^{(i)} & \mapsto & n^{(i)} + 2p^{(i)} \, , \;
n^{(j)} \; \mapsto \; n^{(j)} + p^{(ij)} \;\; \mbox{for all $j \neq i$} \, .
\end{array} \right.
\end{eqnarray}
Hence, the decay of an unstable flux $a^{(i)}$ through a monopole $(i)$
is accompanied by the creation of matter charges of species and strength
depending on the CS parameters~(\ref{quantmu}).
It is easily verified that these {\em local} tunneling events are invisible
to the long range AB interactions~(\ref{brz2}) and that the particles
connected by the monopoles have the same spin factor~(\ref{spinstatis}).
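That the tunneling events (\ref{instb1}) are invisible to the AB interactions (\ref{brz2}) and preserve the spin factor (\ref{spinstatis}) can be verified directly. The sketch below does so for $k=2$; all parameter values are arbitrary test choices, not taken from the text.

```python
import cmath
import math

# Sketch: invisibility of the monopole-(1) tunneling event (instb1) to
# the AB phases (brz2) and spins (spinstatis), written out for k = 2.
# All parameter values below are arbitrary test choices.
N1, N2, p1, p2, p12 = 4, 6, 3, 1, 2

def phase(a, n, b, m):
    """Full monodromy R^2 of the dyons (a, n) and (b, m)."""
    a1, a2 = a; n1, n2 = n; b1, b2 = b; m1, m2 = m
    arg = 2 * p1 * a1 * b1 / N1 ** 2 + 2 * p2 * a2 * b2 / N2 ** 2
    arg += p12 * (a1 * b2 + b1 * a2) / (N1 * N2)
    arg += (n1 * b1 + m1 * a1) / N1 + (n2 * b2 + m2 * a2) / N2
    return cmath.exp(2j * math.pi * arg)

def spin(a, n):
    """Spin factor of the dyon (a, n), eq. (spinstatis)."""
    a1, a2 = a; n1, n2 = n
    arg = p1 * a1 ** 2 / N1 ** 2 + p2 * a2 ** 2 / N2 ** 2
    arg += p12 * a1 * a2 / (N1 * N2)
    arg += n1 * a1 / N1 + n2 * a2 / N2
    return cmath.exp(2j * math.pi * arg)

def monopole1(a, n):
    """Minimal monopole-(1) tunneling, eq. (instb1)."""
    (a1, a2), (n1, n2) = a, n
    return (a1 - N1, a2), (n1 + 2 * p1, n2 + p12)

particle, spectator = ((2, 5), (1, 3)), ((3, 1), (2, 4))
tunneled = monopole1(*particle)
assert abs(phase(*particle, *spectator) - phase(*tunneled, *spectator)) < 1e-9
assert abs(spin(*particle) - spin(*tunneled)) < 1e-9
```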
So the spectrum of this theory compactifies to
$
\left( A,n^{(1)} \!\! \ldots n^{(k)}\right)$
with $A=\left(a^{(1)}, \ldots, a^{(k)} \right)$ and $a^{(i)}, n^{(i)}
\in 0,1, \ldots, N^{(i)}-1$,
where the modulo calculus for the flux quantum numbers $a^{(i)}$
involves the charge jumps in~(\ref{instb1}).
Moreover, it can be shown that the CS parameters
$p^{(i)}$ and $p^{(ij)}$
become periodic with periods $N^{(i)}$ and
${\gcd}(N^{(i)},N^{(j)})$, the greatest common divisor
of $N^{(i)}$ and $N^{(j)}$, respectively~\cite{spabcst}.
That is, up to a relabeling of the dyons, the broken CS theory defined by
$p^{(i)}$ and $p^{(ij)}$
describes the same spectrum and AB interactions as that defined by
$p^{(i)}+N^{(i)}$ and
$p^{(ij)}+{\gcd}(N^{(i)},N^{(j)})$.
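This periodicity can also be checked numerically. For a single ${\mbox{\bf Z}}_N$ factor the sketch below verifies that the monodromies (\ref{brz2}) for $p$ and $p+N$ coincide after the dyon relabeling $n \mapsto [n+a]$; the values $N=4$, $p=1$ are assumed test choices.

```python
import cmath
import math

# Sketch: for one Z_N factor, the theories with CS parameter p and
# p + N exhibit identical monodromies (brz2) after the dyon relabeling
# (a, n) -> (a, [n + a]). Test values N = 4, p = 1 are assumptions.
N, p = 4, 1

def R2(q, a, n, b, m):
    """Monodromy phase (brz2) for CS parameter q, specialized to k = 1."""
    return cmath.exp(2j * math.pi * (2 * q * a * b / N ** 2
                                     + (n * b + m * a) / N))

for a in range(N):
    for n in range(N):
        for b in range(N):
            for m in range(N):
                shifted = R2(p + N, a, n, b, m)
                relabeled = R2(p, a, (n + a) % N, b, (m + b) % N)
                assert abs(shifted - relabeled) < 1e-9
```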
Finally, note that the additional AB phases~(\ref{epsi}) among the
vortices and the twisted tunneling properties of the
monopoles~(\ref{instb1}) form the only distinction with the abelian discrete
gauge theory describing the long distance physics in the absence
of the CS action~(\ref{csact}). As will be explained
in the following sections, this distinction is completely encoded in a
3-cocycle for the residual gauge group
${\mbox{\bf Z}}_{N^{(1)}} \times \cdots \times {\mbox{\bf Z}}_{N^{(k)}} $.
\section{Group cohomology and symmetry breaking}
\label{bgt}
\noindent
A deep result due to Dijkgraaf and Witten~\cite{diwi}
states that the CS actions $S_{\mbox{\scriptsize CS}}$ for
a compact gauge group $G$ are in one--to--one correspondence
with the elements of the cohomology
group $H^4 (B{ G}, {\mbox{\bf Z}})$ of the classifying space
$B{ G}$ with integer coefficients ${\mbox{\bf Z}}$.
In particular, this classification includes the case of finite groups $H$.
The isomorphism $H^4 ({BH}, {\mbox{\bf Z}}) \simeq H^3 ({ H}, U(1))$
which is only valid for finite ${ H}$ then implies
that the different CS theories
for a finite gauge group $H$ correspond to the different
elements $\omega \in H^3({ H}, U(1))$, i.e.\ algebraic
3-cocycles $\omega$ taking values in $U(1)$.
One of the new observations of~\cite{spabcst}
was that the long distance physics of the spontaneously
broken model~(\ref{alg}) is described by a CS theory with finite gauge
group $H$ and 3-cocycle $\omega \in H^3({ H}, U(1))$ determined
by the original CS action $S_{\mbox{\scriptsize CS}} \in H^4 (B{ G}, {\mbox{\bf Z}})$
for the broken gauge group $G$ by the natural homomorphism
\begin{eqnarray}
H^4 (B{ G}, {\mbox{\bf Z}}) &\longrightarrow& H^3 ({ H}, U(1)) \, ,
\label{homo}
\end{eqnarray}
induced by the inclusion $H \subset G$.
The physical picture behind this homomorphism, also known as
the restriction, is that the CS term
$S_{\mbox{\scriptsize CS}}$ gives rise to additional AB interactions
among the magnetic vortices which are completely encoded
in the 3-cocycle $\omega$ for the finite residual gauge group $H$
being the image of $S_{\mbox{\scriptsize CS}}$ under the homomorphism
(\ref{homo}).
Let me illustrate these general remarks with the abelian
example of the previous section where $G\simeq U(1)^k$ and
$H \simeq {\mbox{\bf Z}}_{N^{(1)}} \times \cdots \times
{\mbox{\bf Z}}_{N^{(k)}} $. A simple calculation~\cite{spabcst} shows
$
H^4(B(U(1)^k), {\mbox{\bf Z}}) \simeq {\mbox{\bf Z}}^{ k + \frac{1}{2}k(k-1)}.
$
Note that this classification of the CS actions for the compact
gauge group $G\simeq U(1)^k$ is indeed in agreement
with~(\ref{csact})--(\ref{quantmu}), i.e.\ the integral
CS parameters $p^{(i)}$ and $p^{(ij)}$ provide a complete labeling
of the elements of $H^4(B(U(1)^k), {\mbox{\bf Z}})$.
To proceed, it can be shown~\cite{spabcst} that for
$H \simeq {\mbox{\bf Z}}_{N^{(1)}} \times \cdots \times{\mbox{\bf Z}}_{N^{(k)}} $
\begin{eqnarray} \label{conj3e}
H^3(H,U(1))
&\simeq& \! \bigoplus_{1 \leq i \leq k} \! {\mbox{\bf Z}}_{N^{(i)}}
\;\oplus \bigoplus_{1 \leq i < j \leq k} \! {\mbox{\bf Z}}_{{\gcd} (N^{(i)},N^{(j)})}
\;\oplus \bigoplus_{1 \leq i < j < l \leq k} \!
{\mbox{\bf Z}}_{{\gcd} (N^{(i)},N^{(j)},N^{(l)})} \, .
\end{eqnarray}
Let $A,B$ and $C$ denote elements of ${\mbox{\bf Z}}_{N^{(1)}} \times \cdots \times
{\mbox{\bf Z}}_{N^{(k)}}$, so
$
A := \left( a^{(1)} , a^{(2)},
\ldots, a^{(k)} \right)$ with
$a^{(i)} \in {\mbox{\bf Z}}_{N^{(i)}}$
for $i=1,\ldots, k$
and similar decompositions for $B$ and $C$. I adopt the additive
presentation, i.e.\
the elements $a^{(i)}$ of ${\mbox{\bf Z}}_{N^{(i)}}$ take values in the range
$ 0,\ldots,N^{(i)}-1$ and group multiplication is defined as:
$
A \cdot B = [A+B] := \left([a^{(1)}+b^{(1)}], \ldots ,
[a^{(k)}+b^{(k)}]\right)$.
The rectangular brackets denote modulo $N^{(i)}$
calculus such that the sum always lies in the range $0, \ldots, N^{(i)}-1$.
The most general 3-cocycle
for ${\mbox{\bf Z}}_{N^{(1)}} \times \cdots \times {\mbox{\bf Z}}_{N^{(k)}}$ can then be presented
as some product of
\begin{eqnarray}
\omega^{(i)}(A,B,C) &=&
\exp \left( \frac{2 \pi \imath p^{(i)}}{N^{(i)\;2}} \;
a^{(i)}\left(b^{(i)} +c^{(i)} -[b^{(i)}+c^{(i)}]\right) \right)
\label{type1} \\
\omega^{(ij)}(A,B,C) &=&
\exp \left(
\frac{2 \pi \imath p^{(ij)}}{N^{(i)}N^{(j)}} \;
a^{(i)}\left(b^{(j)} +c^{(j)} - [b^{(j)}+c^{(j)}]\right) \right)
\label{type2} \\
\omega^{(ijl)}(A,B,C) &=&
\exp \left( \frac{2 \pi \imath
p^{(ijl)}}{{\gcd}(N^{(i)}, N^{(j)},N^{(l)})} \;
a^{(i)}b^{(j)}c^{(l)} \right) , \label{type3}
\end{eqnarray}
with $1 \leq i < j < l \leq k$. The integral parameters
$p^{(i)}$, $p^{(ij)}$ and
$p^{(ijl)}$ label the different
elements of~(\ref{conj3e}). It can be verified that in agreement
with~(\ref{conj3e}) the functions~(\ref{type1}), (\ref{type2})
and (\ref{type3}) are periodic in these parameters with period $N^{(i)}$,
${\gcd}(N^{(i)},N^{(j)})$ and ${\gcd}(N^{(i)},N^{(j)},N^{(l)})$ respectively.
It is also readily checked that these three functions
and their products indeed satisfy the 3-cocycle relation
\begin{eqnarray}
\label{pentagon}
\delta\omega(A,B,C,D) \; = \;
\frac{\omega(A,B,C)\;\omega(A,B \cdot C,D)\;\omega(B,C,D)}{
\omega(A \cdot B,C,D)\;\omega(A,B,C \cdot D)} \; = \; 1 \, ,
\end{eqnarray}
where $\delta$ denotes the coboundary operator.
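The cocycle relation lends itself to a direct numerical spot-check. The sketch below evaluates the pentagon identity (\ref{pentagon}) for a product of the functions (\ref{type1})--(\ref{type3}) on ${\mbox{\bf Z}}_2\times{\mbox{\bf Z}}_4\times{\mbox{\bf Z}}_6$; the group and the parameter values are assumptions chosen purely for illustration.

```python
import cmath
import itertools
import math
import random
from math import gcd

# Spot-check (sketch): the functions omega^(i), omega^(ij), omega^(ijl)
# of eqs. (type1)-(type3) satisfy the 3-cocycle relation (pentagon).
# The group Z_2 x Z_4 x Z_6 and the parameters are assumed test values.
N = (2, 4, 6)
p_i = (1, 3, 2)                            # labels the cocycles omega^(i)
p_ij = {(0, 1): 1, (0, 2): 3, (1, 2): 1}   # labels omega^(ij)
p_ijl = {(0, 1, 2): 1}                     # labels omega^(ijl)

def mul(A, B):
    """Group multiplication [A + B] with component-wise modulo calculus."""
    return tuple((a + b) % n for a, b, n in zip(A, B, N))

def omega(A, B, C):
    arg = sum(p_i[i] * A[i] * (B[i] + C[i] - (B[i] + C[i]) % N[i]) / N[i] ** 2
              for i in range(3))
    arg += sum(p * A[i] * (B[j] + C[j] - (B[j] + C[j]) % N[j]) / (N[i] * N[j])
               for (i, j), p in p_ij.items())
    arg += sum(p * A[i] * B[j] * C[l] / gcd(N[i], gcd(N[j], N[l]))
               for (i, j, l), p in p_ijl.items())
    return cmath.exp(2j * math.pi * arg)

group = list(itertools.product(*(range(n) for n in N)))
random.seed(0)
for _ in range(500):  # random quadruples; the full check is O(|H|^4)
    A, B, C, D = (random.choice(group) for _ in range(4))
    delta = (omega(A, B, C) * omega(A, mul(B, C), D) * omega(B, C, D)
             / (omega(mul(A, B), C, D) * omega(A, B, mul(C, D))))
    assert abs(delta - 1) < 1e-9
```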
We are now ready to make the
homomorphism~(\ref{homo}) accompanying the spontaneous breakdown
of the gauge group $U(1)^k$ to
${\mbox{\bf Z}}_{N^{(1)}} \times \cdots \times {\mbox{\bf Z}}_{N^{(k)}}$ explicit.
In terms of the integral CS parameters~(\ref{quantmu}),
it takes the form~\cite{spabcst}
\begin{eqnarray}
H^4(B(U(1)^k), {\mbox{\bf Z}})
&\longrightarrow &
H^3({\mbox{\bf Z}}_{N^{(1)}} \times \cdots \times {\mbox{\bf Z}}_{N^{(k)}}, U(1))
\label{homo1en2} \\
p^{(i)} &
\longmapsto & p^{(i)} \qquad \; \bmod N^{(i)}
\label{homoI} \\
p^{(ij)} & \longmapsto &
p^{(ij)} \qquad \bmod
\gcd(N^{(i)}, N^{(j)}) \, . \label{homoII}
\end{eqnarray}
The periodic parameters in the image of this mapping label the
3-cocycles~(\ref{type1}) and~(\ref{type2}). So
the long distance physics of the spontaneously broken
$U(1)^k$ CS theory~(\ref{alg})--(\ref{quantmu}) is described by
a ${\mbox{\bf Z}}_{N^{(1)}} \times \cdots \times {\mbox{\bf Z}}_{N^{(k)}}$ CS theory with
3-cocycle being the product
$\omega=\prod_{1\leq i < j \leq k} \omega^{(i)}
\omega^{(ij)}$. That this 3-cocycle indeed
leads to the additional AB phases~(\ref{epsi}) and
the twisted tunneling properties~(\ref{instb1}) will become clear
in the next section. Finally,
note that the image of~(\ref{homo1en2}) does not
contain the 3-cocycles~(\ref{type3}). In other words,
abelian discrete CS theories defined by
these 3-cocycles cannot be obtained from
the spontaneous breakdown of a $U(1)^k$ CS theory.
\section{The quasi-quantum double}
\noindent
The quasi-quantum double $D^{\omega}(H)$ related to a CS theory
with finite abelian gauge group $H \simeq
{\mbox{\bf Z}}_{N^{(1)}} \times \cdots \times {\mbox{\bf Z}}_{N^{(k)}}$ and some 3-cocycle
$\omega$ is spanned by the elements $\{ {\mbox{P}}_A \, B \}_{A,B\in { H}} $
representing a global symmetry transformation $B \in H$ followed
by the operator ${\mbox{P}}_A$ projecting
out the magnetic flux $A\in H$. In this basis, multiplication
and comultiplication are defined as~\cite{dpr,spm1,spabcst}
\begin{eqnarray}
{\mbox{P}}_A \, B \cdot {\mbox{P}}_D \, C &=&
\delta_{A,D} \;\; {\mbox{P}}_A \, B \cdot C
\;\; c_A(B,C) \label{algebra} \\
\Delta(\,{\mbox{P}}_A \, B \,) &=&
\sum_{C\cdot D=A} \; {\mbox{P}}_C \, B \otimes {\mbox{P}}_D \, B \;\;
c_B(C,D) \, , \label{coalgebra}
\end{eqnarray}
with $c_A(B,C) := \frac{\omega (A,B,C) \omega(B,C,A)}{\omega(B,A,C)}$ and
$\delta_{A,B}$ the Kronecker delta.
From~(\ref{pentagon}) it follows
that $c_A$ satisfies the 2-cocycle relation
$\delta c_A(B,C,D)= \frac{c_A(B,C) c_A(B \cdot C, D)}{c_A(B,C \cdot D)
c_A(C,D)}=1$, which implies that the multiplication~(\ref{algebra})
is associative and that the comultiplication~(\ref{coalgebra})
is quasi-coassociative:
$
({\mbox{id}} \otimes \Delta) \, \Delta( \, {\mbox{P}}_A \, B \, ) =
\varphi\cdot (\Delta \otimes {\mbox{id}}) \, \Delta( \, {\mbox{P}}_A \, B \, )
\cdot\varphi^{-1}
$ with $\varphi := \sum_{A,B,C}\,\omega^{-1}(A,B,C) \;
{\mbox{P}}_A \otimes {\mbox{P}}_B \otimes {\mbox{P}}_{C}$.
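The step from the 3-cocycle $\omega$ to an associative twisted multiplication can be made concrete in code. The sketch below builds the product (\ref{algebra}) for $H\simeq{\mbox{\bf Z}}_3$ with the 3-cocycle (\ref{type1}) and brute-forces associativity over all basis elements; $N=3$, $p=1$ are assumed test values.

```python
import cmath
import itertools
import math

# Sketch: the twisted multiplication (algebra) of D^omega(H) for
# H = Z_3 with the 3-cocycle omega^(1) of eq. (type1); the 2-cocycle
# property of c_A guarantees associativity, verified here brute-force.
# N = 3, p = 1 are assumed test values.
N, p = 3, 1

def omega(a, b, c):
    return cmath.exp(2j * math.pi * p * a * (b + c - (b + c) % N) / N ** 2)

def cA(a, b, c):
    """c_A(B, C) = omega(A,B,C) omega(B,C,A) / omega(B,A,C)."""
    return omega(a, b, c) * omega(b, c, a) / omega(b, a, c)

def product(x, y):
    """Product of linear combinations of basis elements P_a b,
    encoded as dicts {(flux a, group element b): coefficient}."""
    out = {}
    for (a, b), xv in x.items():
        for (d, c), yv in y.items():
            if a == d:  # the projectors enforce delta_{A,D}
                key = (a, (b + c) % N)
                out[key] = out.get(key, 0) + xv * yv * cA(a, b, c)
    return out

def equal(x, y):
    return all(abs(x.get(k, 0) - y.get(k, 0)) < 1e-9 for k in set(x) | set(y))

basis = [{(a, b): 1.0} for a, b in itertools.product(range(N), repeat=2)]
for x, y, z in itertools.product(basis, repeat=3):
    assert equal(product(product(x, y), z), product(x, product(y, z)))
```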
The different particles in the associated CS theory are
labeled by their magnetic flux $A \in H$ paired with a projective
UIR $\alpha$ of $H$ defined as ${\alpha}(B) \cdot {\alpha}(C) = c_A(B,C)
\; {\alpha}(B \cdot C)$.
Each particle $( \, A, {\alpha} \,)$ is equipped with an internal Hilbert
space $V_{\alpha}^A$ (spanned by the states
$
\{|\, A,\,^{\alpha} v_j\rangle\}_{j=1,\ldots,d_\alpha}
$
with $^{\alpha}\!v_j$ a basis vector and $d_\alpha$ the dimension of
$\alpha$) carrying an irreducible
representation $\Pi^A_{\alpha}$
of $D^{\omega}({ H})$ given by~\cite{dpr}
\begin{eqnarray} \label{13}
\Pi^A_{\alpha}(\, {\mbox{P}}_B \, C \,) \; |\,A ,\,^{\alpha} v_j \rangle &=&
\delta_{A,B}\;\, |\,A,\,\alpha(C)_{ij}\,^{\alpha} v_i \rangle \, .
\end{eqnarray}
In the process of rotating a particle $(A,\alpha)$
over an angle of $2 \pi$ its charge $\alpha$
is transported around the flux $A$ and as a result picks
up a global transformation $\alpha(A)$.
With~(\ref{13}) it is easily checked that this AB effect is implemented
by the central element $\sum_B \; {\mbox{P}}_B \, B$.
Schur's lemma then implies:
$
\alpha(A) = e^{2 \pi \imath s_{(A,\alpha)}} \; {\mbox{\bf 1}}_\alpha $
with $s_{(A,\alpha)}$ the spin assigned to the particle
$( \, A, {\alpha} \,)$ and ${\mbox{\bf 1}}_\alpha$
the unit matrix.
The action~(\ref{13}) of $D^{\omega}(H)$ is extended to two-particle
states by means of the comultiplication~(\ref{coalgebra}).
Specifically, the
representation $(\Pi^A_{\alpha} \otimes \Pi^B_{\beta}, V_{\alpha}^A \otimes
V_{\beta}^B)$ of $D^{\omega}(H)$ related to a system of two particles
$(\,A, \alpha \,)$ and $(\,B, \beta \,)$ is defined by the action
$\Pi^A_{\alpha} \otimes \Pi^B_{\beta}( \Delta (\, {\mbox{P}}_A \, B \, ))$.
The tensor product representation of $D^{\omega}(H)$ related to
a system of three particles $(\,A, \alpha \,)$, $(\,B, \beta \,)$ and
$(\,C, \gamma \,)$ may now be constructed
through $(\Delta \otimes {\mbox{id}} ) \, \Delta$
or through $({\mbox{id}} \otimes \Delta) \, \Delta$.
The aforementioned quasi-coassociativity relation implies that
these two constructions are equivalent.
The braid operator ${\cal R}$ establishing
a counterclockwise interchange of two particles $(\, A, \alpha \,)$
and $(\, B, \beta \,)$ is defined as
\begin{eqnarray} \label{braidaction}
{\cal R} \;| \, A,\, ^{\alpha} v_j\rangle
|\, B,\,^{\beta} v_l\rangle &=&
|\,B,\,{\beta}(A)_{ml} \, ^{\beta} v_m\rangle
|\,A,\, ^{\alpha} v_j\rangle \, .
\end{eqnarray}
The tensor product representation
$(\Pi^A_{\alpha} \otimes \Pi^B_{\beta},V_\alpha^A \otimes V_\beta^B)$
in general decomposes into a direct sum of irreducible
representations $(\Pi^C_{\gamma},V_\gamma^C)$
\begin{eqnarray} \label{piet}
\Pi^A_{\alpha}\otimes\Pi^B_{\beta}& = & \bigoplus_{C , \gamma}
N^{AB\gamma}_{\alpha\beta C} \; \Pi^C_{\gamma} \, .
\end{eqnarray}
This so-called fusion rule determines which particles
$(\,C,\gamma \, )$ can be formed in the composition
of two particles $(\, A,\alpha \,)$ and $(\, B,\beta\,)$.
The modular matrices $S$ and $T$ associated to
the fusion algebra~(\ref{piet}) are determined by the braid
operator~(\ref{braidaction}) and the spin factors
$e^{2\pi \imath s_{(A,\alpha)}}:=\mbox{tr}\,\alpha(A)/d_\alpha$
\begin{eqnarray}
S^{AB}_{\alpha\beta} \; := \; \frac{1}{|{ H}|} \, \mbox{tr} \; {\cal
R}^{-2 \; AB}_{\; \; \; \; \; \alpha\beta} \qquad \mbox{and} \qquad
T^{AB}_{\alpha\beta} \; := \;
\delta_{\alpha,\beta} \, \delta^{A,B} \;
\exp\left(2\pi \imath s_{(A,\alpha)}\right) \, .\label{modular}
\end{eqnarray}
$|H|$ denotes the order of $H$ and tr abbreviates trace.
As usual, the multiplicities in~(\ref{piet}) can be expressed in
terms of the matrix $S$ by means of Verlinde's formula:
\begin{eqnarray} \label{verlindez}
N^{AB\gamma}_{\alpha\beta C}&=&\sum_{D,\delta}\frac{
S^{AD}_{\alpha\delta}S^{BD}_{\beta\delta}
(S^{*})^{CD}_{\gamma\delta}}{S^{eD}_{0\delta}} \, .
\end{eqnarray}
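The modular data can be assembled explicitly for the simplest case $k=1$, $H\simeq{\mbox{\bf Z}}_N$. The sketch below (with assumed test values $N=4$, $p=1$) constructs $S$ and $T$ from the monodromies (\ref{brz2}) and spins (\ref{spinstatis}), and verifies that Verlinde's formula (\ref{verlindez}) returns nonnegative integer multiplicities, with the flux modulo calculus carrying the charge jump of (\ref{instb1}).

```python
import cmath
import itertools
import math

# Sketch: modular data (modular) and Verlinde's formula (verlindez) for
# the simplest broken CS theory, k = 1 with H = Z_N; dyon monodromies
# and spins follow (brz2) and (spinstatis). N = 4, p = 1 are assumed.
N, p = 4, 1
parts = list(itertools.product(range(N), repeat=2))  # dyons (flux a, charge n)

def R2(x, y):
    (a, n), (b, m) = x, y
    return cmath.exp(2j * math.pi * (2 * p * a * b / N ** 2 + (n * b + m * a) / N))

def S(x, y):
    """S = (1/|H|) tr R^{-2}; all UIR's are one dimensional here."""
    return R2(x, y).conjugate() / N

def T(x):
    """Diagonal of T: the spin factor exp(2 pi i s_(a,n))."""
    a, n = x
    return cmath.exp(2j * math.pi * (p * a ** 2 / N ** 2 + n * a / N))

vac = (0, 0)
assert all(abs(abs(T(x)) - 1) < 1e-9 for x in parts)

def fusion_mult(x, y, z):
    """Verlinde's formula (verlindez)."""
    return sum(S(x, d) * S(y, d) * S(z, d).conjugate() / S(vac, d) for d in parts)

# All multiplicities come out as nonnegative integers ...
for x, y, z in itertools.product(parts, repeat=3):
    mult = fusion_mult(x, y, z)
    assert abs(mult - round(mult.real)) < 1e-9 and round(mult.real) >= 0

# ... and fusing the fluxes a = 1 and a = N - 1 yields flux 0 together
# with the matter charge 2p, i.e. the charge jump of eq. (instb1).
assert abs(fusion_mult((1, 0), (N - 1, 0), (0, 2 * p)) - 1) < 1e-9
```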
\begin{figure}[htb] \epsfxsize=11cm
\centerline{\epsffile{preybe.eps}}
\fcaption{The diagrams in (a) and (b) (with the ribbons representing
particle--trajectories) are homotopic. So the
result of braiding a particle with
two particles separately or with the composite that arises after fusion should
be the same.}
\label{qu1zon}
\end{figure}
\begin{figure}[htb] \epsfxsize=10cm
\centerline{\epsffile{gspst.eps}}
\fcaption{The fact that the ribbon diagrams in (a) are homotopic
indicates that the result of
a counterclockwise monodromy of two particles in a given fusion channel
followed by fusion of the pair should be the same as a clockwise
rotation of the two particles separately followed
by fusion of the pair and a counterclockwise rotation
of the composite. The fact that the diagrams in (b) are homotopic implies
that the effect of a counterclockwise interchange of two particles in two
identical but separate particle/anti-particle pairs should be
the same as a counterclockwise rotation of an (anti-)particle in
one of these pairs.}
\label{kanaal}
\end{figure}
It is impossible to do justice to the complete
structure of $D^{\omega}(H)$ in this limited number of pages.
A detailed treatment can be found in~\cite{spabcst}.
Let me just flash some pictures giving an impression of some
relations unmentioned so far. First of all, the
comultiplication~(\ref{coalgebra}), braid operator~(\ref{braidaction}) and
associator $\varphi$ obey
the so-called quasitriangularity equations expressing
the compatibility of fusion and braiding depicted in Fig.~\ref{qu1zon}.
As an immediate consequence the braid operators
satisfy the quasi--Yang--Baxter equation implying that the multi-particle
internal Hilbert spaces carry a (possibly reducible)
representation of the braid group.
Since the braid operators~(\ref{braidaction}) are of finite order,
the more accurate statement is that we are dealing with representations
of truncated braid groups, which are factor groups of the braid
group~\cite{banff,spabcst}. The quasitriangularity equations also state that
the action of the truncated braid group commutes with the action of $D^{\omega}(H)$.
Thus the multi-particle internal Hilbert spaces, in fact, carry
a (possibly reducible) representation of the direct product
of $D^{\omega}(H)$ and some truncated braid group. Further, to keep track of
the writhing of the particle--trajectories and the resulting
spin factors, these trajectories are represented by
ribbons. Passing from worldlines to `worldribbons' can only be consistent
if the demands in Fig.~\ref{kanaal} are met. The consistency demand in
Fig.~\ref{kanaal}(a) is met by the generalized spin-statistics
connection $K^{ABC}_{\alpha\beta\gamma}
{\cal R}^2 =
e^{2\pi \imath(s_{(C,\gamma)}-s_{(A,\alpha)}-s_{(B,\beta)})}
K^{ABC}_{\alpha\beta\gamma}$ with $K^{ABC}_{\alpha\beta\gamma}$
the projection on the channel $(C,\gamma)$ in~(\ref{piet})
and the demand in Fig.~\ref{kanaal}(b) by the canonical spin-statistics
connection $K^{AAC}_{\alpha\alpha\gamma}
{\cal R} =
e^{2\pi \imath s_{(A,\alpha)}}
K^{AAC}_{\alpha\alpha\gamma}$ which only holds for the fusion channels
$(C,\gamma)$ in which both particles $(A,\alpha)$ are in identical
internal quantum states.
Let me finally establish that the CS term~(\ref{csact})
in the broken model of section~2 indeed boils down to the
3-cocycle $\omega=\prod_{1\leq i < j \leq k} \omega^{(i)}
\omega^{(ij)}$ as indicated by~(\ref{homo1en2}). To start with, it is readily
checked that the 2-cocycle $c_A$ entering~(\ref{algebra})
and~(\ref{coalgebra}) for this $\omega$ is trivial, i.e.\
$
c_A(B,C) = \delta \varepsilon_A (B,C) =
\frac{\varepsilon_A(B) \varepsilon_A(C)}{\varepsilon_A(B \cdot C)}
$
with $\varepsilon_A$ given by~(\ref{epsi}).
Thus the dyon charges $\alpha$ in~(\ref{piet}) for this
$\omega$ are trivial projective representations
of $H$ of the form $\alpha(C) = \varepsilon_A(C)
\Gamma^{n^{(1)} \! \ldots n^{(k)}}(C)$
with $\Gamma^{n^{(1)} \! \ldots n^{(k)}}$ the ordinary UIR of
$H$ appearing in~(\ref{brz2}).
It is now easily verified that the monodromy operator following
from~(\ref{braidaction}) for this case is the same as~(\ref{brz2}),
that the spin factors $e^{2\pi \imath s_{(A,\alpha)}}=\alpha(A)$
coincide with~(\ref{spinstatis}) and
that the fusion rules~(\ref{piet}) following from~(\ref{modular})
and~(\ref{verlindez}) reproduce the tunneling
properties~(\ref{instb1}) of the Dirac monopoles. \qed
\section{Nonabelian electric/magnetic dualities}
\noindent
The 3-cocycles~(\ref{type3}) that cannot be reached from a
spontaneously broken $U(1)^k$ CS theory are actually the most interesting.
They render an {\em abelian} discrete $H$ gauge theory
{\em nonabelian} and generally lead to dualities with 2+1 dimensional
theories with a nonabelian finite gauge group of the same order as $H$.
The point is that the 2-cocycles $c_A$ appearing
in~(\ref{algebra}) and~(\ref{coalgebra})
for such a 3-cocycle $\omega^{(ijl)}$ are {\em nontrivial}, so the dyon
charges $\alpha$ in~(\ref{piet}) become nontrivial (i.e.\ {\em higher}
dimensional) projective UIR's of $H$.
Consequently, the braid operator~(\ref{braidaction}) now generally
acts as a matrix leading to the usual host of nonabelian
phenomena~\cite{banff,spabcst}
such as nonabelian braid statistics, nonabelian AB scattering, exchange
of nontrivial Cheshire charges and quantum statistics
between particle pairs through monodromy processes, etc.
Let me briefly illustrate these general remarks with the simplest
example~\cite{spabcst}, namely a CS theory with gauge group
${ H} \simeq {\mbox{\bf Z}}_2^3 :=
{\mbox{\bf Z}}_2 \times {\mbox{\bf Z}}_2 \times {\mbox{\bf Z}}_2 $ defined by the corresponding
nontrivial 3-cocycle~(\ref{type3}).
Whereas the ordinary ${\mbox{\bf Z}}_2^3$ theory
features 64 different singlet particles, it turns out that the spectrum
just consists of 22 different particles in the presence of this
3-cocycle $\omega^{(123)}$. Specifically, the dyon charges
which formed 1-dimensional UIR's of ${\mbox{\bf Z}}_2^3$
are reorganized into 2-dimensional or doublet projective
UIR's of ${\mbox{\bf Z}}_2^3$, i.e.\ besides the
8 singlets (the vacuum and 7 ordinary nontrivial
${\mbox{\bf Z}}_2^3$ charges) the spectrum now contains
14 doublet dyons carrying a nontrivial
(singlet) flux and a nontrivial projective ${\mbox{\bf Z}}_2^3$ doublet charge.
Further, there are only two nonabelian finite groups of order
$|{\mbox{\bf Z}}_2^3|=8$:
the dihedral group $D_4$ and the double dihedral
group $\bar{D}_2$. Like the $\{{\mbox{\bf Z}}_2^3 , \omega^{(123)}\}$ CS theory, the
theories with gauge groups $D_4$
and $\bar{D}_2$ both have spectra containing 8 singlet particles and 14 doublet particles,
albeit of a different nature. It can be checked that the
$\{{\mbox{\bf Z}}_2^3 , \omega^{(123)}\}$ CS theory
is dual to the $D_4$ theory, i.e.\ the exchange
$\{{\mbox{\bf Z}}_2^3 , \omega^{(123)}\}\leftrightarrow D_4$ corresponds to an invariance
of the modular matrices~(\ref{modular}) indicating that
these two theories indeed describe the same spectrum {\em and} the same
topological interactions. The duality transformation
exchanges the projective dyon charges in the
$\{{\mbox{\bf Z}}_2^3 , \omega^{(123)}\}$ CS theory with the magnetic doublet fluxes
in the $D_4$ theory. So we are actually dealing with
some kind of nonabelian electric/magnetic duality.
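The particle count quoted above can be checked on the $D_4$ side of the duality with a short sketch. The conjugacy-class and centralizer data for $D_4=\langle r,s \,|\, r^4=s^2=(rs)^2=e\rangle$ are standard group-theory facts, hardcoded here rather than computed: each particle of the quantum double corresponds to a pair (conjugacy class $A$, UIR $\alpha$ of the centralizer), with internal dimension $|A|\cdot\dim\alpha$.

```python
# Spectrum of the untwisted quantum double D(D_4): one particle per pair
# (conjugacy class, centralizer irrep), internal dimension = |class| * dim.
classes = [
    # (class size, irrep dimensions of the centralizer)
    (1, [1, 1, 1, 1, 2]),  # {e},        centralizer D_4
    (1, [1, 1, 1, 1, 2]),  # {r^2},      centralizer D_4
    (2, [1, 1, 1, 1]),     # {r, r^3},   centralizer Z_4
    (2, [1, 1, 1, 1]),     # {s, sr^2},  centralizer Z_2 x Z_2
    (2, [1, 1, 1, 1]),     # {sr, sr^3}, centralizer Z_2 x Z_2
]

dims = [size * d for size, irreps in classes for d in irreps]

assert len(dims) == 22                     # 22 particles, as in the text
assert dims.count(1) == 8                  # 8 singlets
assert dims.count(2) == 14                 # 14 doublets
assert sum(d * d for d in dims) == 8 ** 2  # sum of dim^2 = |D_4|^2
```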
Let me also note that adding a 3-cocycle~(\ref{type2}) does not
spoil this duality, i.e.\ we also have the dualities
$\{{\mbox{\bf Z}}_2^3 , \omega^{(123)} \omega^{(ij)} \}
\leftrightarrow D_4$ with $1\leq i <j \leq k$.
Finally, the
$\{{\mbox{\bf Z}}_2^3 , \omega^{(123)} \omega^{(i)}\}$
CS theories (with $\omega^{(i)}$ given in~(\ref{type1}) and $i=1,2$ or $3$)
turn out to be dual to the $\bar{D}_2$ theory.
\section{Concluding remarks}
\noindent
Whether the interesting
3-cocycles~(\ref{type3}) can be reached from the spontaneous breakdown
of a nonabelian CS theory is under investigation. I am
currently also working on the generalization of the dualities described
in the previous section to abelian finite gauge groups of order higher
than $|{\mbox{\bf Z}}_2^3|=8$.
Finally, for a (concise)
discussion of CS theories in which a nonabelian compact gauge group is
spontaneously broken to a {\em nonabelian} finite subgroup, the reader
is referred to Ref.~\cite{thesis}.
\nonumsection{Acknowledgements}
\noindent
I would like to thank the organizers for an inspiring workshop. This
work was partly supported by an EC grant (contract no. ERBCHBGCT940752).
\nonumsection{References}
\section{Introduction}
The observations of OSSE on the Compton Gamma Ray Observatory showing a break
or an exponential cut--off in the high energy spectrum of some Seyfert galaxies
(Maisack et al. 1993, Johnson et al. 1993, Madejski et al. 1995) coupled with
the X-ray observations of GINGA, have had a strong impact on our theoretical
understanding of the X--ray and $\gamma-$ray emission processes in these
objects. Though they do not rule out pure non--thermal $e^{\pm}$ pair models
(Zdziarski, Lightman \& Maciolek-Niedzwiecki 1993), thermal or quasi--thermal
Comptonization models are favoured (e.g. Sunyaev \& Titarchuck 1980; Haardt \&
Maraschi 1991 and 1993, hereinafter HM91 and HM93; Ghisellini, Haardt \& Fabian
1993; Titarchuck \& Mastichiadis 1994; Zdziarski et al. 1994; Zdziarski et al.
1995; see Svensson 1996 for a recent review).
The origin of the Comptonizing electrons is at present unknown. They could be
related to an accretion disk corona heated by magnetic dissipation processes
and/or they may be electron positron pairs produced by photon photon collisions
(see e.g. Fabian 1994 and references therein). The electron temperature can be
constrained by a thermal balance equation, where the heating rate is the total
(direct plus reprocessed) luminosity and the cooling mechanism is
Comptonization of soft photons.
The hot corona is probably coupled to a cooler optically thick gas layer
(presumably the accretion disk itself), which has three major roles: i) it
reprocesses a fixed fraction of the Comptonized radiation into a soft blackbody
component; ii) a fixed fraction of the reprocessed blackbody photons crosses
the hot region acting as seed photons for the Comptonization process; iii) it
reflects medium energy X-rays giving rise to the so called reflection component
and Fe line emission. This picture (HM91, HM93) is suggested by the presence of
the Compton reflection hump (Pounds et al. 1990) and by the relatively small
dispersion in the distribution of spectral indices of Seyfert I galaxies
(Nandra \& Pounds 1994). In fact, if X-ray reprocessing contributes soft
photons proportionately to the medium-hard X-ray luminosity, the 'feed-back'
effect keeps a quasi-constant X-ray spectral shape, weakly dependent on the
source luminosity. The broad band spectra derived from this model are
consistent with the average Seyfert spectrum derived from OSSE observations
(Zdziarski et al. 1995; Gondek et al. 1996).
It is fair to point out that the best studied object, NGC 4151, shows
(at least in low state) an unusually flat X-ray spectral index
($\alpha\simeq 0.5$) and a low temperature ($kT\simeq 60$ keV) implying a
Comptonized--to--thermal luminosity ratio $L_c/L_s \gta 10$, far from the limit
$L_c/L_s \simeq 2$ obtained in a plane parallel geometry. This may be
accounted for by a special geometry, for instance
if the high energy emission originates in localized non planar active blobs
as in the model proposed by Haardt, Maraschi \& Ghisellini 1994 (hereinafter
HMG). Alternative models involving a mirror reflection of the observed
radiation (Poutanen et al. 1996) or reprocessing in clouds along the line of
sight (Zdziarski \& Magdziarz 1996) may also apply for this special object.
In Comptonization models the spectral shape in the X--ray range is mainly
determined by the combination of the temperature $\Theta=kT/mc^2$, and the
optical depth of the scattering electrons $\tau$, while the cut off energy is
related essentially to $\Theta$. Simultaneous measurements of the spectral
index $\alpha$ and of the cut off energy can therefore determine
the physical parameters of the Comptonizing region\footnote{The exact relation
between $\alpha$ and $\tau$ depends on the geometry of the scattering region.
The optical depth can therefore be deduced from observations only "modulo
geometry". On the other hand fitting the data assuming a
simple homogeneous sphere gives a value of $\tau$ that is also
a good estimate of the average (over all directions) scattering optical
depth for any other possible geometry.}.
``Spectral dynamics'',
i.e., the variation of the spectral shape with intensity is an additional basic
diagnostic tool (see, e.g., Netzer, Turner \& George 1994) which can test the
model and provide insight into the way the corona is heated and cooled. Some
of the questions that could be answered are: is the optical depth of the corona
dominated by $e^+e^-$ pairs? How do the optical depth and electron temperature
vary with luminosity? Are the reprocessed components proportionate to the
Comptonized one during variations? For instance a careful analysis of EXOSAT
spectra of Seyfert galaxies, showed that the observed spectral variability is
difficult to reconcile with non--thermal pair models (Grandi, Done \& Urry,
1994).
Another important prediction of the model is the presence of a thermal
component which should be proportionate to the Comptonized one in the limit of
a constant geometry. Its temperature however is not completely constrained by
the model and could range from UV to soft X-rays. The presence of a "soft
excess" in the X--ray spectra of AGN has been detected by different experiments
(EXOSAT, ROSAT and more recently ASCA) (e.g. Arnaud et al. 1985; Pounds et al.
1986; Wilkes \& Elvis 1987; Turner \& Pounds 1989; Elvis, Wilkes \& Mc Dowell
1991; Comastri et al. 1991; Saxton et al. 1993; Walter \& Fink 1993; Turner,
George \& Mushotzky 1993; Brandt et al. 1994; Done et. al 1995). It has been
interpreted in some cases as a genuine thermal component possibly
due to reprocessing
(e.g. Leighly et al. 1996),
or as the high energy tail of the UV bump (e.g. Walter \& Fink 1993), while
in others as an artefact of the residuals due to
incomplete absorption (warm absorber, see e.g. Done et al. 1995).
Instruments with improved energy resolution in the soft band are
needed to clarify the interpretation of
the "soft excess" which may even involve more than one component.
However a distinctive point between different models is the predicted
correlation between the soft and medium X-ray intensities. On this aspect
useful data obtained from extensive observations of some Seyfert galaxies
with ROSAT and/or ASCA already exist and more will surely become available in
the near future.
We are therefore motivated to examine in detail the broad band spectral
variability expected in the accretion disk with hot corona model developed in
previous papers (HM91, HM93 and HMG). We will concentrate on the case of
extragalactic objects such as Seyfert I galaxies. The issue is complicated for
several reasons. A quantitative analysis requires spectral simulations since
"second order effects" are important (see \S 3) . However the main uncertainty
is our poor knowledge of the physics of the corona. For instance do the active
coronal regions vary in size and/or height above the reprocessing disk? And how
is the heating rate related to the optical depth?
In the following we choose the simple case in which the active and reprocessing
region(s) can vary in area but maintain a fixed ratio between Comptonized and
soft cooling photons. In particular we choose the limit of a plane-parallel
geometry characterized by a Comptonized--to--soft ratio $L_c/L_s\simeq 2$. This
is not inconsistent with an inhomogeneous picture where each active region can
be approximated as plane parallel. With these restrictions we compute
explicitly the expected relations between "observable quantities", e.g. the
variation
of the 2--10 keV spectral index (including reflection) with
temperature and intensity,
the variation of the soft intensity "S"(0.1--2 keV) with
medium "M" (2--10 keV) and hard "H" ($E\gta 30$ keV) intensity.
The plan of the paper is as follows. In \S 2 we describe the model and the
methods used for the computation of the spectra and for the derivation of the
"observable" quantities. Since the key parameter driving spectral variability in the
medium band is $\tau$, while spectral variability in the soft band is
essentially related to $T_{BB}$, we found it convenient to discuss relations
between spectral and flux variability in the two bands separately.
For this reason, in \S 3 we deal with
the expected spectral variability
in the medium and hard X--ray bands, using the 2--10 keV spectral index, the
2--10 keV intensity, the temperature of the hot corona and the 30--100 keV
intensity, while in \S 4 we compute the intensity and spectral variations of the
soft reprocessed blackbody component for different variations of the system
parameters. The results are converted into count rates and hardness ratios in
the 0.1--0.4 and 0.9--2 keV bands using the ROSAT response matrix and a standard
neutral hydrogen column. We discuss the expected correlations between
variability in the soft and medium energy bands and compare the results to
available observations in \S 5. An exhaustive summary and final
conclusions are presented in \S 6.
\section{The Disk--Corona Model}
We follow a scenario originally proposed by Liang (1979), and revived
by HM91 and HM93, in which the accretion flow is illuminated by the
hard X--ray source (as the observed reflection spectrum indicates). A large
fraction (about $80-90$ per cent) of the incident power is expected to be
thermalized in the disk and re--emitted as thermal reprocessed radiation, which
is in turn Comptonized in the corona producing the X--rays. If the reprocessed
radiation energy density dominates over the local disk thermal emission, the
Compton $y$ parameter is forced to be $\simeq 1$ independently of the coronal
scattering opacity. This in turn ensures that the Comptonized radiation
exhibits a spectral index close to 1. One possible configuration which
satisfies the above condition is a plane parallel corona above an accretion
disk. In this case the condition $y\simeq 1$ is achieved only if the {\it
entire} available gravitational power is released in the hot corona. In this
case the {\it reprocessed thermalized} radiation coincides with the UV
bump, and the {\it reprocessed reflected} radiation forms the 30 keV
Compton hump.
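The statement that $y\simeq 1$ forces a spectral index close to 1 can be illustrated with the textbook non-relativistic estimate for unsaturated Comptonization, $\alpha = -3/2 + \sqrt{9/4 + 4/y}$. This closed form is an assumption imported here for illustration; it is not the numerical scheme used in the rest of the paper.

```python
from math import sqrt

def alpha_from_y(y):
    """Textbook estimate of the energy spectral index produced by
    unsaturated Comptonization with Compton parameter y (an assumed
    approximation, not the paper's full radiative-transfer result)."""
    return -1.5 + sqrt(2.25 + 4.0 / y)

# y = 1 pins the index at exactly alpha = 1 in this approximation;
# smaller y gives steeper, larger y gives flatter spectra.
assert abs(alpha_from_y(1.0) - 1.0) < 1e-12
assert alpha_from_y(0.5) > alpha_from_y(2.0)
```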
The same condition can be satisfied if there are several such smaller regions
above the disk, rather than a single smooth corona (HMG). In the latter case
only a fraction of the available energy needs to be released in the hot corona.
However, being localized in hot spots,
the reprocessed thermalized radiation can still
dominate the cooling. The underlying accretion disk can contribute to most or
at least part of the total luminosity in the UV band as in the standard
Shakura--Sunyaev disk model (Shakura \& Sunyaev 1973). As in the uniform corona
case, the {\it reprocessed reflected} radiation forms the 30--keV Compton hump,
while the {\it reprocessed thermalized} radiation is probably emitted in hot
spots in the EUV--soft X--ray band (HMG). Therefore the emission in the
EUV--soft X--ray band should be closely connected to the hard X--ray radiation.
In the present paper we assume that the cooling time of the electrons in the
corona is shorter than the time scale of variation of the heating rate, so that
we can consider spectral and flux variability as a succession of stationary
states. The Comptonized spectra are computed in plane parallel geometry for
different inclination angles $\vartheta$. A quadrature scheme was implemented
to solve the radiation field for successive scattering orders, taking into
account anisotropic Compton emission for the first scattering distribution
(Haardt 1993). The single scattering photon distribution is computed by means
of the fully relativistic kernel for isotropic unpolarized radiation (Coppi \&
Blandford 1990). This ensures that the exponential roll-over of the spectrum is
correctly modeled, as we tested against Monte Carlo calculations. The
present numerical spectra are therefore in much closer agreement
with recent detailed calculations by Poutanen \& Svensson (1996) than
the approximated treatment of HM93. An extensive
discussion of the method we used can be found in Haardt (1994).
The downward Compton flux (i.e. the X--ray flux leaving the corona with an
angle $\vartheta>\pi/2$) is averaged over the solid angle, and then reflected
by an infinite neutral layer (which mimics the accretion disk) with solar metal
abundances. The angular distribution of the reflected spectrum is computed in
the approximated way described in Ghisellini, Haardt \& Matt (1994) (see
Burigana 1995 and Magdziarz \& Zdziarski 1995 for detailed calculations). The
angular dependent reflected spectrum is then Comptonized in the corona and
added to the primary continuum. The "measurable" spectral index in the [2--10]
keV band is obtained by fitting a power law to the composite spectrum,
disregarding the goodness of the fit. The photon flux (i.e. number of photons
per unit time per unit surface) in interesting bands is readily obtained by
integration over energy.
Concerning the soft photon input, we consider cases in which the reprocessed
radiation is isothermal and has a temperature $T_{BB}$ between 10 and
300 eV. This scenario can be regarded as a uniform corona located above the
very inner part of the accretion disk, or alternatively as a blobby structure
where the overall X--ray emission is dominated by few blobs with similar
properties.
With our approximations the interesting quantities are computed with good
accuracy in a quick way. A more detailed treatment of the full radiative
transfer problem in the same context, including free--free radiation, double
Compton scattering, detailed pair balance and polarized scattering, can be
found in Poutanen \& Svensson (1996).
\section{Variability in the Medium and Hard X--Ray Domains}
\subsection{The $\alpha-\tau$ Relation}
Since the model constrains the Compton parameter to be almost constant, one
expects that the spectral index of the Comptonized spectrum is fairly
constant too, and does not depend on the three parameters of the
system (i.e. $\tau$, $\Theta$ and $T_{BB}$).
The above argument is however valid only as a first order approximation,
in particular in the case of a disk--corona system, where the scattering anisotropy
is important and when we
consider the actually observed spectrum which includes the reflection hump.
We therefore computed the spectral index of the {\it
total} spectrum in the [2--10] keV range $\alpha_{[2-10]}$
as described in the previous
section for different values of the coronal parameters and of the
inclination angle. Comptonization of the reflected spectrum is also
included.
The results are shown in Fig. 1 and Fig. 2 where $\alpha_{[2-10]}$ is plotted
vs. the optical depth and the temperature of the corona respectively, for two
values of the inclination angle and two values of the temperature of the soft
photons. In order to show the effect of reflection on the total spectrum,
in Fig. 3 we plotted, as a function of $\tau$, the difference between the spectral index of the intrinsic continuum alone and that of the total spectrum (continuum+reflection).
The optical depth was varied between 0.1 and 1, corresponding
to an equilibrium temperature of the hot corona in the range 30--300
keV, as suggested by observations (Maisack et al. 1993, Johnson et al. 1993,
Madejski et al. 1995).
The important points to note are the following:
\begin{enumerate}
\item{}Changes in $\tau$ give rise to significant spectral variability
despite the fact that the ratio $L_c/L_s$ is constant in the model. In the
plane parallel limit considered here $\alpha_{[2-10]}$ is found to be in the
range [0.4--1.4] for $\tau$ varying between 0.1 and 1.
\item{} Steeper spectra are produced as $\tau$ increases approaching unity.
\item{} An increase of the spectral index is accompanied by a decrease of
the coronal temperature, i.e. {\it spectral index and coronal temperature are
anticorrelated}. The actual values of $\alpha$ and $kT$ in Fig 2 refer to
$L_c/L_s \simeq 2$, however the anticorrelation between spectral index and the
electron temperature is a general feature common to models based on Compton
cooling with fixed $L_c/L_s$. For higher values of $L_c/L_s$ a similar curve
would be obtained, but with values of $\alpha$ shifted downwards. As shown in
Fig. 2, the dispersion of the $\alpha_{[2-10]}$ vs. $kT$ relation is larger at
higher temperature.
\end{enumerate}
A physical understanding of these results involves several effects:
\begin{enumerate}
\item{} for any given $\tau$ the temperature has to adjust to maintain an
almost fixed ratio $L_c/L_s$, and for increasing $\tau$ the temperature
decreases. The resulting average spectral index (i.e. the spectral index
averaged over the total solid angle) is, for the plane parallel geometry
adopted here, slightly larger than 1 for any value of $\tau$ (HM93).
Comptonization theory implies that, in order to keep a constant ratio
$L_c/L_s$, a spectral index {\it steeper than 1} must increase with decreasing
$\Theta$ (i.e. increasing $\tau$). This can be understood noticing that the
extrapolation to low energy of the Compton power law must intercept the peak of
the {\it injected} black body soft photon distribution. Since for $\alpha>1$
the integral over energy of the power law
spectrum mainly depends on the lower integration
limit $E_1 \propto (16\Theta^2+4\Theta+1)kT_{BB}$, a shift to lower energy of
$E_1$ (i.e. a decrease of $\Theta$) keeping a constant $\alpha$ would result in
a larger ratio $L_c/L_s$. Therefore, in order to keep the ratio constant,
the spectral index is forced to steepen
(for the same reason, if $\alpha<1$, a reduction of $\Theta$ implies a further
{\it flattening}, rather than a steepening, of $\alpha$). The steepening of
$\alpha$ halts when a further decrease of $\Theta$ does not cause a shift of
$E_1$ to lower energy, i.e. when $16\Theta^2+4\Theta \ll 1$, say
$16\Theta^2+4\Theta\lta 0.5$. For even smaller
temperatures the spectral index must {\it flatten} (and hence there is a
maximum in the $\alpha$ vs. $\Theta$ curve) since, though $E_1$ is constant,
the upper integration limit ($\propto \Theta$) decreases anyway. In terms of
$\tau$, since in our adopted geometry the Compton parameter $y\simeq
(16\Theta^2+4\Theta)
\tau$ is $\simeq 0.6$ (HM93, Poutanen \& Svensson 1996),
the "turning point" occurs for $\tau \simeq 1$.
Note that for $\alpha<1$ ($L_c/L_s \gta 2$) there is a
monotonic flattening of $\alpha$ as $\Theta$ decreases. The effect discussed
here is independent of the particular geometry adopted. In brief, a fixed
$L_c/L_s$ ratio does not imply an exactly constant spectral index [e.g. see
Fig.2 in Ghisellini \& Haardt (1994; hereinafter GH94) for the simplest
possible geometry, i.e. a homogeneous spherical source, where this effect is
present];\par
\item{} the outgoing Compton spectrum in plane parallel geometry is a
function of the emission angle, because of the Compton rocket effect (Haardt
1993). Flatter spectra are seen for lower inclinations, and the importance of
the anisotropic Compton emission decreases with decreasing coronal
temperatures. Anisotropic Compton emission
is the main cause of the reduction of $\alpha_{[2-10]}$ for small $\tau$.
This effect goes in the same direction as that discussed in the
above point 1, but, contrary to it, depends on the adopted geometry (e.g. it is
absent in GH94). The larger dispersion of
the $\alpha_{[2-10]}$ vs. $kT$ relation at high temperature (see Fig. 2) is
due to the increasing importance of anisotropic Compton emission for increasing
$kT$ (see Haardt 1993 for details);\par
\item{} the contribution of the reflected Compton hump to the emitted
spectrum is $\propto {\rm e}^{-\tau/\mu}$, where $\mu$ is the cosine of the
inclination angle, so it is smaller for larger $\tau$
(for fixed inclination angle), and can be
boosted by anisotropic Compton scattering (HM93) for high temperatures.
As long as $\tau \gta 0.4$, the contribution of reflection to the [2--10]
spectrum is marginal ($\Delta \alpha_{[2-10]}\simeq 0.05-0.08$). For lower
$\tau$ the electron temperature is high enough for anisotropic
Comptonization to boost reflection. The effect, for fixed $\tau$, is
larger for larger $\mu$ and larger $kT_{BB}$ (see Fig. 3). Again,
this is an effect whose net result is to flatten the total spectrum as $\tau$
decreases;\par
\item{} finally we discuss the effect on $\alpha$ of a change in $T_{BB}$. In
fact the Compton $y$ parameter depends not only on $L_c/L_s$ but also (only
weakly when $y\lta 1$) on the mean energy of the soft photons. This can be
easily understood considering that the higher the energy of the soft photons
injected in the corona, the lower the number of scatterings needed to reach an
energy $\sim kT$. When $y\ll 1$ the (low) cooling is due only to the first
scattering, so that $L_c/L_s\simeq y$. The relation between $L_c/L_s$ and
$y$ does not depend on any other parameter.
But as soon as $y$ increases higher order scatterings
become important in the cooling process.
The asymptotic limit for $y\gg1$, i.e. when all the photons reached thermal
equilibrium, is $L_c/L_s=4\Theta/x_0$
where $x_0$ is the dimensionless soft photon mean
energy $\simeq 2.7 kT_{BB}/mc^2$. For this reason, for a given $\tau$ the
equilibrium temperature necessary to sustain a given $L_c/L_s$
is (only slightly for $y\simeq 1$) higher (and hence the Compton spectrum
flatter) for higher $T_{BB}$ (for a visualization of how $y$ varies with
$T_{BB}$ see Fig. 1 in Shapiro, Lightman \& Eardley 1976). Large changes in
$T_{BB}$, unlikely to occur in a given source, are required to produce
significant changes in $\alpha$. However, this effect could be relevant when
comparing different classes of sources (e.g. Seyferts and QSOs) where the
difference between the average values of $T_{BB}$ can be large enough to
influence the value of the spectral index of the hard X--rays. As we will
discuss in \S 4, variations of $T_{BB}$ in a particular object, even if small,
can play an important role in the observed variability in the soft X--ray band.
\end{enumerate}
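The interplay of $\tau$ and the equilibrium temperature discussed in point 1 can be made concrete with a minimal sketch, assuming the quoted constraint $y \simeq (16\Theta^2+4\Theta)\tau \simeq 0.6$ for $L_c/L_s\simeq 2$ (HM93); solving the quadratic for $\Theta$ recovers the 30--300 keV coronal temperature range used above.

```python
from math import sqrt

def theta_eq(tau, y=0.6):
    """Equilibrium temperature Theta = kT/(m_e c^2) from the constraint
    (16 Theta^2 + 4 Theta) * tau = y; positive root of the quadratic
    16 Theta^2 + 4 Theta - y/tau = 0.  y ~ 0.6 assumes L_c/L_s ~ 2 (HM93)."""
    return (-4.0 + sqrt(16.0 + 64.0 * y / tau)) / 32.0

MC2 = 511.0  # electron rest energy, keV

# tau in [0.1, 1] maps onto kT of roughly 50-250 keV, consistent with the
# observationally suggested 30-300 keV range:
kT_low  = theta_eq(1.0) * MC2   # tau = 1   -> kT ~ 54 keV
kT_high = theta_eq(0.1) * MC2   # tau = 0.1 -> kT ~ 255 keV
assert 50.0 < kT_low < 60.0 and 250.0 < kT_high < 260.0
```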
As discussed in \S 2, $\alpha_{[2-10]}$ is a simple least square fit to the
total spectrum, which is not a perfect power law (because of, e.g., anisotropy
and inclusion of reflection). In order to quantify the differences between the
computed spectra and a power law, we associated a fixed percentage error to
each of our 10 computed spectral points.
By varying the error until a
reduced $\chi^2$ of unity was obtained in the least square fits, we
estimated a minimum observational precision required to detect the
spectral distortions. For the most distorted spectra
(those at low inclination and low $\tau$) an observational error $\lta$2\%
would result in a reduced $\chi^2$ larger than unity. The typical maximum error
allowed is $\lta$1\%.
We now discuss the relation between $\alpha_{[2-10]}$ and the X--ray luminosity
$L_c$ on the basis of two alternative limits. The first case corresponds to a
pair dominated corona where the optical depth is determined by the compactness.
In the second case the relation between $\tau$ and $L_c$ is essentially
unknown, and we will consider the limit case in which $\tau$ varies with
constant luminosity.
\subsection{The $\alpha-$Flux Relation in a Pair Dominated Corona}
If the corona is pair dominated, as plausible in the case of
compactnesses $\ell_c\gta 10$, where $\ell_c=(L_c/R)(\sigma_T/m_ec^3)$ and
$R$ is the typical size--scale of the emitting region
(see Svensson 1996 for an exact definition of compactness in
different geometries), the optical depth of the hot phase
is determined by the compactness alone. As discussed in HM91, once
the ratio $L_c/L_s$ and the absolute value of $\ell_c$
are specified, there
exists a one-to-one correspondence of these parameters with $\tau$, under the
assumption that pairs dominate the scattering opacity over normal plasma.
Thus if variations of the compactness are
due to variations of the dissipated luminosity $L_c$, the optical depth and
hence the actual value of $\alpha_{[2-10]}$, depend on the intensity of the
source.\footnote
{Stern et al. (1994) and more recently Poutanen \& Svensson (1996)
and Svensson (1996) pointed out that the exact relation between the rate of
pair production and the heating/cooling rates depends on the details of the
calculations. We have taken the compactness vs. $\tau$ curve given by
Svensson (1996) in order to compute the photon flux in the [2--10] keV band.}
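For orientation, the definition $\ell_c=(L_c/R)(\sigma_T/m_ec^3)$ can be evaluated for illustrative Seyfert-like numbers (the luminosity and size below are assumptions chosen for the example, not values from this paper):

```python
# Dimensionless compactness l_c = (L_c / R) * sigma_T / (m_e c^3), cgs units.
SIGMA_T = 6.652e-25   # Thomson cross section, cm^2
ME_C2   = 8.187e-7    # electron rest energy, erg
C       = 2.998e10    # speed of light, cm/s

def compactness(L_c, R):
    return (L_c / R) * SIGMA_T / (ME_C2 * C)

# Assumed example: L_c = 1e44 erg/s dissipated within R = 1e14 cm gives
# l_c ~ 27, comfortably inside the pair-dominated regime l_c > 10.
ell = compactness(1e44, 1e14)
assert ell > 10.0
```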
From the $\alpha_{[2-10]}$ vs. $\tau$ relation computed previously (Fig. 1)
we can therefore derive for a pair dominated corona
an $\alpha_{[2-10]}$ vs. intensity relation. This is shown in Fig. 4
for two values of the viewing angle.
It is quite clear (see also Fig. 2 in Stern et al. 1996) that noticeable
variations of $\alpha_{[2-10]}$ (say $\Delta \alpha_{[2-10]} \simeq 0.3$) occur
as the luminosity (and hence the compactness) varies by at least a factor of
20. Thus {\it for a pair dominated corona the spectral
index must be essentially constant during ``normal" AGN
X--ray variability (i.e. flux variations within a factor of 2 or so)}.
Modest spectral steepening can occur during very large intensity increases.
Moreover, a spectral steepening implies a decrease of the coronal
temperature (Fig. 2) (see also Pietrini \& Krolik 1995). Since an
observational determination of the latter with presently available
instrumentation is not easy, we also computed the variation in the hard photon
intensity ([30--100] keV), that would be associated with a variation in the
[2--10] keV intensity. This is shown in Fig. 5, where we also report values
of $\alpha_{[2-10]}$ corresponding to different values of the intensity. The
two intensity scales are arbitrary but their ratio is not. Due to the
temperature decrease for increasing luminosity, the intensity in the hard band
varies less than that in the medium band. However the two are positively
correlated. Because of the large overall luminosity variation the steeper and
cooler spectrum does not cross the flatter, hotter one, except at very high
energy.
Different conclusions hold if variations of $\ell_c$ are due to variations of
the linear dimension of the source $R$ rather than to variations of the
intrinsic luminosity. Such a circumstance can occur if the corona is formed by
many active regions rather than being homogeneous. In this case significant
spectral variations could occur without large variations in the output.
Strong effects on the soft X--ray spectrum are expected since the
temperature of the reprocessed radiation scales as $(L/R^2)^{1/4}$. A detailed
discussion will be presented in the next section (\S 4).
\subsection{The $\alpha-$Flux Relation in a Low Pair Density Corona}
If the compactness is $\lta 10$, the corresponding pair optical depth is $\lta
0.1$ (Stern et al. 1994). The main contribution to the
optical depth then comes from ``normal" electrons,
whose amount is not related to the
source luminosity in a simple and easily predictable way. We therefore consider
the case in which the optical depth changes while the dissipated luminosity
does not vary substantially.
In Fig. 6 we show the total spectra for some choices of input parameters.
It is evident from the figure that, despite the fact that the total luminosity
in the spectra is the same (when integrated over all the emission angles), the
flux in the [2--10] keV band varies. The spectral index vs. the photon flux in
the [2--10] keV band is shown in Fig. 7. We can see that small observed
variations of the count rate can be accompanied by significant spectral
variations. Another important result, only weakly dependent on details, is that
as long as $\alpha_{[2-10]} \lta 1$, it correlates with the photon flux, i.e. the
spectrum softens with increasing intensity, while for
steeper spectra the two quantities are anticorrelated.
From Fig. 6 it is also evident that the flux above 50 keV is going to decrease
for increasing spectral index. The decrement is relatively modest as long as
$\alpha_{[2-10]} \lta 1$ since most of the power is carried by the few high
energy photons, but becomes large as $\alpha_{[2-10]} \gta 1$. Combining the
spectral index intensity relation shown in Fig. 7 with the spectral index
temperature relation we can compute the intensity in the hard X-ray band
([30--100] keV) vs. the [2--10] keV intensity. This is shown in Fig. 8. For
$\alpha_{[2-10]} \gta 1$ the emission in the [2--10] keV band is positively
correlated with the emission in the $\gamma-$ray band: small increments in the
X--ray flux are accompanied by larger increments in the $\gamma-$ray flux.
For $\alpha_{[2-10]} \lta 1$ the correlation is in the opposite sense, and is
less strong: small increments in the medium X--ray emission relate to small
decrements of the flux in the $\gamma-$ray band. Around the turning point
$\alpha\simeq 1$ we have the largest variation of the $\gamma$--ray flux for
the smallest variation of the X--ray flux.
In brief, the model predicts that if noticeable spectral variability is
observed while the luminosity variation is less than a factor of $\sim 2$,
then {\it the scattering optical depth of the corona is not (or only weakly)
related to the intrinsic coronal luminosity}.
This may lead to two possible conclusions:
\begin{enumerate}
\item{} pairs are {\it not} important as a source of scattering material, or
\item{} if pairs dominate the opacity, the linear dimensions of the source
{\it must} vary, i.e. the compactness varies while the luminosity stays
approximately constant.
\end{enumerate}
In the latter case the temperature of the soft reprocessed photons must vary,
increasing when the medium energy X--ray spectrum steepens. In any case,
strong spectral variability can well occur during
``normal" (within a factor of 2) flux variations.
We caution here that in order to measure a variation
in luminosity
one should measure the intensity up to the high frequency cut off, that is
up to the 100 keV range. In the absence of such information one can {\it
estimate} the luminosity variation from the known medium energy intensity
and the expected high energy cut off.
\section{Variability in the Soft X--Ray Domain and Correlations to
the Medium Band}
One of the basic features of the model is that a fraction of the Comptonized
X-rays is reprocessed into soft thermal photons which are in turn responsible
for the cooling of the corona. This fraction is fixed for fixed geometry,
therefore the model predicts a "perfect" correlation between the Comptonized
luminosity and the soft reprocessed one. Observationally, the latter component
could be identified with the soft excess observed in Seyfert galaxies, although
there is no general consensus on the very nature of the soft component.
The soft X--ray emission has been identified as the high energy tail of the UV
bump (Walter \& Fink 1993), but this claim, and in general whether or not
the soft X--rays are correlated with the UV
emission, is still controversial (see, e.g., Laor et al. 1994 and
Ulrich-Demoulin \& Molendi 1996a, 1996b). The low ROSAT PSPC energy
resolution does not allow one to constrain the spectral shape of the excess, and
usually a steep power law is fairly adequate to fit the data (Gondhalekar,
Rouillon--Foley \& Kellet 1996). Higher resolution ASCA data seem to
support a thermal origin of this component (e.g. Leighly et al. 1996), though
absorption features due to highly ionized gas along the AGN line of sight
could be responsible (at least in part) for the excess of residuals in the
single power law fits (e.g. Cappi et al. 1996). Indeed
some recent ASCA data are consistent either with an {\it emission} thermal
component or with {\it absorption} due to highly ionized gas (IC 4329A, Cappi
et al. 1996; NGC 3227, Ptak et al. 1995). In several objects both a warm
absorber {\it and} a thermal emission component seem to be present (NGC 5548,
Done et al. 1995; NGC 4051, Guainazzi et al. 1995; see also Netzer, Turner \&
George 1994). Mathur, Elvis \& Wilkes (1995) argued that a highly ionized, high
velocity outflowing gas responsible for X--ray and UV absorption might be a
common component in Quasars and should be located outside the Broad Emission
Line Region.
At least for NGC 5548 there are indications that the soft emission and the
power law vary in a correlated fashion, but the correlation is non linear, i.e.
the soft component varies more (Nandra et al. 1993, Done et al. 1995). In one
case a 30\% increase of the soft counts observed with ROSAT was not accompanied
by a significant variation in the estimated power law component.
This event suggested that at least soft X--ray ``flares" may well have a
different origin than reprocessing of hard X--rays (Done et al. 1995).
We wish to stress here that the behaviour of the observed soft X-ray intensity
depends strongly on the spectrum, i.e. the temperature of the reprocessed
emission $T_{BB}$. The latter depends on the absolute luminosity of the X--ray
emission (we recall that in the model it equals roughly half of the total
power produced via Comptonization) and on the size of the radiating surface.
Unfortunately we do not know whether and how these two quantities are related
with each other. As a first guess we may expect that both $R$ and $L$ scale
with the mass of the source. This may determine a mean value of $L/R^2$ for a
given source (sources with smaller masses would have hotter soft components);
however, how the ratio varies when the luminosity varies in a given source
remains highly unpredictable. Furthermore, the amplitude of variations in
fixed energy bands can be different, e.g. the hardness ratios can change
substantially when the response of a particular detector is considered.
Comparison with observations can then be made only by ``filtering" simulated
light curves through the actual response matrix of existing experiments.
In brief, although there is a one--to--one correspondence between the
Comptonized and reprocessed emission, this holds on a global scale, i.e.
considering the integrated total luminosities. What is actually observed as
"count rate" in the given bands can be very different from what is expected on
this simple basis. In the following we compute the expected behaviour of the
soft X-ray intensity in response to variations of the medium energy intensity
in two limiting cases, for varying luminosity and fixed size of the
reprocessing region and for constant luminosity and varying size. Since most of
the observational data presently available derive from ROSAT we explicitly
compute intensities and hardness ratios in the most commonly used ROSAT bands.
\subsection{Simulated ROSAT light curves}
We performed simulations of variability patterns according to selected rules
that will be described in the following. The time dependent broad band spectra
were then filtered through a Galactic neutral absorber assuming $N_H = 2\times
10^{20}$ cm$^{-2}$, and finally folded with the ROSAT response matrix.
We divided
the ROSAT band in three intervals, defining a "hard count rate" as the count
rate in the [0.9--2] keV band ($C_{[0.9-2]}$; it should be kept in mind that in
all the other sections the "hard" band refers to energies $\gta 30$ keV) and a
"soft count rate" as the count rate in the [0.1--0.4] keV band
($C_{[0.1-0.4]}$). We then computed the hardness ratio defined as HR$=
C_{[0.9-2]}/C_{[0.1-0.4]}$. This quantity provides direct spectral information
in the ROSAT range.
It is important to realize that the dependence of HR on the temperature of the
soft component is not monotonic. This is shown in Fig. 9.a for two choices
of the power law component (both with $0^o$ inclination), one
corresponding to low optical depth $\tau=0.1$ (resulting in a flat spectral
index $\alpha_{[2-10]}\simeq$0.5--0.6, see Fig. 1), the second to higher
optical depth $\tau=0.65$ (yielding $\alpha_{[2-10]}\simeq$1.2--1.3). For a
pair dominated source these values of $\tau$ would imply a large difference in
compactness, roughly $10^3$ larger for the high $\tau$ case (see Svensson
1996). Intermediate cases must lie between the two curves. The
relation is independent of the actual mechanism causing the variations of
$T_{BB}$, but depends somewhat on the parameters defining the hard power law.
The behaviour of HR vs. $kT_{BB}$ is easily understandable. For
low temperatures, only the exponential tail of the Planck distribution falls in
the ROSAT band. In this regime a temperature increase leads to a large increase
of the count rate in the soft band only: the spectrum becomes softer. The
softening stops when the peak of the blackbody distribution is within the band,
and the exponential tail starts to affect the intensity in the [0.9--2] keV
band. In the latter regime the spectrum becomes harder as the temperature increases.
The transition occurs for $kT_{BB}$ around 60 eV.
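The non-monotonic behaviour described above can be reproduced with a toy two-component model: a blackbody of fixed bolometric luminosity (the reprocessed component) plus a power law of fixed normalization, with photon counts integrated over the two ROSAT sub-bands. All normalizations below are arbitrary assumptions chosen for illustration, and absorption and the detector response are ignored, so only the qualitative shape of HR versus $kT_{BB}$ is meaningful; the exact turnover temperature in this sketch depends on the adopted normalizations.

```python
import math

# Toy model for the ROSAT hardness ratio HR = C[0.9-2]/C[0.1-0.4]:
# a blackbody of FIXED bolometric luminosity (the reprocessed
# component) plus a power law of fixed normalization.  All
# normalizations are arbitrary; only the shape of HR(kT_BB) matters.

SOFT = (0.1, 0.4)   # keV
HARD = (0.9, 2.0)   # keV

def bb_counts(band, kT, n=2000):
    """Band photon counts of a blackbody; the photon spectrum is
    N(E) ~ E^2 / (exp(E/kT) - 1), and dividing by kT^4 keeps the
    energy-integrated luminosity constant as kT varies."""
    e1, e2 = band
    dE = (e2 - e1) / n
    total = 0.0
    for i in range(n):
        E = e1 + (i + 0.5) * dE
        total += E * E / math.expm1(E / kT) * dE
    return total / kT**4

def pl_counts(band, alpha=0.9, K=1.0):
    """Band counts of a power law with photon spectrum K E^-(alpha+1)."""
    e1, e2 = band
    return K * (e1**(-alpha) - e2**(-alpha)) / alpha

def hardness_ratio(kT):
    """HR for blackbody temperature kT [keV] on top of the power law."""
    soft = pl_counts(SOFT) + bb_counts(SOFT, kT)
    hard = pl_counts(HARD) + bb_counts(HARD, kT)
    return hard / soft

for kT_eV in (10, 30, 60, 100, 200, 316):
    print(f"kT_BB = {kT_eV:4d} eV -> HR = {hardness_ratio(kT_eV / 1000.0):.4f}")
```

In this sketch HR first decreases with $kT_{BB}$ (the blackbody feeds only the soft band, while the power law dominates the hard band) and then increases once the Wien tail of the blackbody reaches the [0.9--2] keV band, mirroring the two regimes described above.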
In order to compute the relation between $T_{BB}$ and the count rate
in the ROSAT band, we have to specify the mechanisms leading to the temperature
variations. We have considered two different ideal ways in which $T_{BB}$ can
vary, the first corresponding to variations in luminosity for fixed size, the
second to variations in size for fixed luminosity, and discuss them in turn.
In all the cases discussed in the present section the reprocessed thermal
component is superimposed on a standard multicolor disk emission arising from a
$10^7 M_{\odot}$ black hole radiating at 10\% of the Eddington limit. The
X--ray luminosity is taken as 0.3 of the disk emission and a face--on line of
sight is assumed. We checked that our results are only marginally sensitive to
disk parameters. For rather extreme cases such as a $10^6 M_{\odot}$ black hole
accretion disk radiating at the Eddington limit, with an X--ray luminosity of
only 10\% of the UV disk emission, the ROSAT band is still dominated by the
reprocessed X--rays as long as the temperature of the latter
component is $\gta
20$ eV. The net effect of increasing the inclination angle is to reduce the
amplitude of variations of the count rate in the soft band.
\subsection{Variations of Intrinsic Luminosity}
Suppose the luminosity of the source in the power law component changes (i.e.
$L_c$ changes), while $\tau$ and $\Theta$ remain constant. In this case an
increase in luminosity produces an increase in temperature of the thermal
component, and hence a correlation between medium and soft X--rays is
expected.
The hardness ratio shown in Fig. 9.a can now be plotted as a function of the
intensity which would give rise to the considered temperature variation. This
requires a very large intensity range, a factor $10^7$ for $kT_{BB}$ in the
range 10--316 eV, implausible for a single source. Thus any one source will
be confined to some restricted interval in intensity, while different sources
could fall in different regions of the plot. The results are shown in Fig.
9.b. The two curves correspond to the same coronal parameters used in panel
(a). Clearly the count rate scale is arbitrary, but the absolute value of the
hardness ratio is meaningful, being independent of the source luminosity for a
given spectral shape. For reference, we plot along each curve discrete points
corresponding to an increase in luminosity by a factor 2, i.e. to an increase
of 20\% in $T_{BB}$ with respect to the preceding one.
The exact value of $kT_{BB}$ relative to each point can be easily
read from Fig. 9.a where the same discrete points are marked.
The count rates in
the two bands are always correlated, though the correlation is not linear. Even
though the temperature only varies as $L^{1/4}$, as long as the temperature is
$\lta 60$ eV the number of soft photons emitted in the ROSAT band changes
significantly for even modest variations of temperature. This regime
corresponds to the decreasing branches of the curves in Fig. 9.b. Along this
branch an increase in luminosity produces an exponential increase of
$C_{[0.1-0.4]}$, while the increase of $C_{[0.9-2]}$ is linear, being dominated
by power law photons. The hardness ratio is clearly anticorrelated to the count
rate. The total count rate shows large variations, being dominated by the
lowest energy channels.
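The exponential sensitivity along this branch can be made explicit with a back-of-the-envelope estimate (a Wien-regime sketch with an effective band energy $E$, not a fit to the curves of Fig. 9.b):
\[
C_{[0.1-0.4]}\propto e^{-E/kT_{BB}} ,\qquad kT_{BB}\propto L^{1/4}
\quad\Longrightarrow\quad
\frac{d\ln C_{[0.1-0.4]}}{d\ln L}\simeq\frac{1}{4}\,\frac{E}{kT_{BB}} .
\]
For $E\simeq 0.1$--$0.4$ keV and $kT_{BB}\simeq 30$ eV this logarithmic slope is of order 1--3, so modest changes in $L$ produce large changes in the soft counts, while the power law dominated $C_{[0.9-2]}$ simply tracks $L$.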
The behaviour is very different when the peak of the black body spectrum is
well within the ROSAT band, i.e. for $kT\gta 60$ eV. The hardness ratio is
constant as long as the exponential tail of the Planck distribution does not
affect the [0.9--2] band. When this happens, $C_{[0.9-2]}$ responds
exponentially to $\Delta L$, while $C_{[0.1-0.4]}$ varies almost linearly. The
total count rate is almost linear with $\Delta L$, so the spectrum becomes
harder.
The case of a steep power law component is qualitatively similar but less
extreme. The variations of the hardness ratio are smaller in both branches.
In a pair dominated source luminosity variations with constant coronal
parameters are, strictly speaking, not allowed. States with different
luminosity belong to curves with different values of $\tau$. However, since
the separation of the two curves for constant $T_{BB}$ (dotted lines)
corresponds to an increase of $\ell_c$ of a factor $\simeq 10^3$, the shift
corresponding to a factor 2 is clearly negligible.
\subsection{Variations of Linear Dimensions}
We consider the case of a constant output radiating from a varying surface
area, while $\tau$ and $\Theta$ remain constant.
This may represent a case in which the primary hard emission occurs at a
constant rate in an expanding (and/or contracting) corona, or blob, or if the
emission occurs at different times in blobs with roughly the same luminosity but
different sizes.
Though the luminosity of the hard and soft components is constant, the
temperature of the thermal photons will change, giving rise to observable
effects in the soft X--ray domain. To study the problem we varied the linear
size $R$ of the emitting region, so that the resulting black body temperature
covers the range 10--316 eV. Notice that under the assumption
adopted, the intrinsic number of photons emitted scales as $\sqrt R$, while the
temperature as $1/\sqrt R$. Therefore the highest {\it intrinsic} photon
emissivity occurs at the lowest temperatures.
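The scalings just quoted follow directly from blackbody reprocessing at constant luminosity, assuming the reprocessing surface scales as $R^2$:
\[
L_s = 4\pi R^2 \sigma T_{BB}^4 = {\rm const}
\;\Longrightarrow\;
T_{BB}\propto R^{-1/2} ,\qquad
\dot N_{\rm ph}\sim\frac{L_s}{kT_{BB}}\propto\sqrt{R} ,
\]
since the mean energy of the reprocessed photons is $\sim kT_{BB}$.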
The relation between hardness ratio and count rates for constant luminosity and
varying area is shown in Fig. 9.c. The two curves correspond to the
same coronal
parameters used in Fig. 9.a and 9.b. The same discrete points plotted in Fig.
9.a and 9.b are also marked, corresponding to a decrease of $R$ of 30\%
(and hence resulting in an increase of $T_{BB}$ of 20\%) with respect
to the preceding one.
The behaviour of HR
is similar to that seen in the previous section. As long as $kT_{BB}\lta 60$
eV the soft counts increase, while the hard counts are much less
variable. This regime corresponds to the decreasing branches of the curves
in Fig. 9.c. Although HR is anticorrelated to the total count
rate independently of the coronal parameters, the hard and soft count rates
can be either correlated or anticorrelated,
depending on whether the hard power law is steep or flat, respectively.
The behaviour of HR vs. the total count rate is very different when
$kT_{BB}\gta 60$ eV. Since the total luminosity is constant,
the total count rate is
basically constant when the peak
of the black body distribution is within the ROSAT band.
This regime corresponds to the almost vertical branches
of the curves in Fig. 9.c: HR dramatically increases at almost constant
total count rate. In this regime the peak of the black body
spectrum moves from the soft band to the hard band. The net result is that
the two count rates are anticorrelated: as $T_{BB}$ increases $C_{[0.9-2]}$
increases while $C_{[0.1-0.4]}$ decreases.
If the corona is pair dominated, variations of $R$ should not happen
with constant coronal parameters. A reduction in the radius by a factor
$\simeq 10$ (which roughly corresponds to four successive points counting from
left to right along the curves in Fig. 9.c)
causes an increase in the opacity and hence a
reduction in the coronal temperature. For $kT_{BB}\lta 60$ eV
the variations of HR
will be smaller with respect to cases with constant $\tau$ and $\Theta$. For
larger values of $kT_{BB}$ the situation is similar to that seen before: a
sharp increase of the hardness ratio with a constant count rate.
In the case of a low pair density corona described in \S 3.2, the coronal
parameters can vary without any change of size or luminosity
of the soft component. In terms of ROSAT count rates, this is
represented by the dashed lines in Fig. 9.c.
Those lines connect points relative to spectra with different coronal
parameters, but same $T_{BB}$ and luminosity. If for example an
increase of $\tau$ occurs, the ROSAT count rate increases as
long as $kT_{BB}\lta 60$ eV, and slightly decreases
for higher black body temperatures (Fig. 9.c).
The hardness ratio shows a slightly simpler behaviour:
for low temperatures it decreases, while for $kT_{BB}\gta 30$ eV it increases,
tending to become constant at higher temperatures.
\section{Spectral and Flux Correlations: Comparison with Observations}
In the following we compare the results discussed in the present sections
with some of the
current observations of soft--hard X--ray variability in Seyfert galaxies.
The two recently studied cases of NGC 5548 and Mrk 766 observed with ROSAT
and/or ASCA provide the most detailed available information on spectral and
flux variability of Seyfert I galaxies in the X--rays to date, and also seem to
indicate opposite behaviours. We should warn that
Mrk 766 is a rather extreme case, not
representative of the Seyfert I class. Indeed this source is classified as a
Narrow Line Seyfert 1 (NLS1). Sources of this subclass tend to show rapid
X--ray variability, unusually steep ROSAT spectra, and strong \ion{Fe}{2}
emission (Pounds \& Brandt 1996).
In the following we compare those observations with the analysis
developed in the previous sections. We do not aim to "fit" any data, but simply
to interpret (whenever possible) the general variability features in the
context of the reprocessing scenario.
\subsection{NGC 5548}
The soft X--ray variability of NGC 5548 observed by ROSAT has been analyzed in
depth by Done et al. (1995). The soft and hard counts seem to go up and down
together, but not in a linear way. In general the source softens (in the ROSAT
band) as it brightens. The hardness ratio roughly doubles (from 0.03 to 0.06)
for a count reduction by a factor 3. A state in which only the counts in
the ROSAT soft band varied was also observed. No simultaneous observations in the
hard X--rays are available. Old GINGA data showed a power law index close to
the canonical value 0.9 plus a reflection hump (Nandra \& Pounds 1994).
The source requires a highly variable soft component, and a more steady hard
one. This can be roughly interpreted assuming that variability in the soft band
is due to variations in size of the emitting area, as in the case discussed
in \S 4.3 (see Fig. 9.c).
The temperature of the thermal component should be less than 60--70 eV,
not inconsistent with that found
by Done et al. (1995).
The intrinsic luminosity of the source should be roughly constant. We may
notice that the lower value of the observed hardness ratio ($0.03$) is lower
than the minimum value consistent with the reprocessing hypothesis
(see Fig. 9). The discrepancy may be due to the value of $N_H$ for this
source ($N_H=1.65\times 10^{20}$ cm$^{-2}$; Done et al. 1995), which is
lower than
that used throughout our calculations ($N_H=2\times 10^{20}$ cm$^{-2}$), but
it may also indicate
that part of the reprocessed thermal flux does not contribute to the cooling of
the corona, or alternatively, as suggested by Done et al. (1995), that there is
another soft component unrelated to the hard one, possibly much broader than
a single temperature black body.
\subsection{Mrk 766}
Mrk 766 has been observed by ROSAT and ASCA,
providing a full coverage of the X--ray domain from 0.1 to 10 keV. The ROSAT
data have been analyzed by Molendi, Maccacaro \& Schaeidt (1993), Molendi \&
Maccacaro (1994), and recently by Leighly et al. (1996), who also analyzed the
ASCA data. ROSAT data show that the source hardens as it brightens, and can be
roughly described by an almost steady soft component plus a variable hard one.
In the ASCA band the source shows evidence of spectral variability above 1 keV
($\Delta \alpha \simeq 0.4$); the steepest state is brighter than the flat
state in the ASCA band by a factor of $\simeq 2$. The power law pivots at an
energy close to 10 keV. If the power law extends up to a few hundred keV,
then the hard (and weaker in the ASCA band) state is actually more luminous
than the soft state.
Observations indicate that $\alpha_{[1-10]}$ varies between 0.6 and 1, while
the total luminosity changes by a factor of 2 at most. This indicates that either
pairs are not dominant or, if pairs dominate, that the linear size of the
emitting
region is varying. In the latter case the required change in compactness
is roughly a factor 10. Since the overall X--ray luminosity of the hard
state is estimated to be 2
times larger than that of the soft state (see below), the region responsible
for the soft spectrum should be 20 times smaller than that responsible
for the hard one. The soft state, although fainter by a factor 2,
should then be characterized by a
$T_{BB}$ roughly 3--4 times larger than that of the hard state.
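The chain of factors quoted above follows from the scalings $\ell_c\propto L_c/R$ and $T_{BB}\propto(L/R^2)^{1/4}$. A minimal numerical check of the arithmetic (a schematic sketch, not a fit to the Mrk 766 data):

```python
# Soft (steep) state vs. hard (flat) state, using the scalings
# l_c ~ L/R and T_BB ~ (L/R^2)^(1/4) quoted in the text.
ell_ratio = 10.0   # l_soft / l_hard (required compactness change)
L_ratio = 0.5      # L_soft / L_hard (hard state is ~2x more luminous)

# l_c ~ L/R  =>  R_soft / R_hard = L_ratio / ell_ratio
R_ratio = L_ratio / ell_ratio

# T_BB ~ (L/R^2)^(1/4)  =>  T_soft / T_hard
T_ratio = (L_ratio / R_ratio**2) ** 0.25

print(1.0 / R_ratio)   # soft-state region ~20 times smaller
print(T_ratio)         # soft-state T_BB ~3-4 times larger
```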
Data are consistent with a reduction of the excess normalization by a factor 2
as the power law steepens, but an increase of 3--4 times of $T_{BB}$ in the
soft state is not supported by the data.
We therefore conclude that the spectral variability in Mrk 766 could be due to
changes of the scattering opacity of the corona without large variations of the
intrinsic luminosity. The opacity of the corona is probably not dominated by
electron-positron pairs. A variation of $\tau$ from $\simeq 0.3-0.4$ to $\simeq
0.1-0.2$ can explain the hardening of the power law, and possibly (part of) the
decrease in the ASCA count rate. We predict that the hard state is
characterized by a coronal temperature $\simeq 150-200$ keV (and hence an
exponential cut-off with an e--folding energy $E_C\simeq 300-400$ keV should be
present), while the soft state should have $kT\simeq 80-100$ keV
(corresponding to
$E_C\simeq 150-200$ keV). The total X--ray luminosity of the hard state would
then be 2 times larger than that of the soft state.
Finally we note that the value of the hardness ratio in the ROSAT band computed
by Molendi \& Maccacaro (1994) ranges from $\simeq 0.5$ to $\simeq 1$. According
to Fig. 9.a this would imply a temperature of the soft component of
$\simeq 130$ eV, in agreement with the observed hardening with increasing intensity
in the ROSAT band and with the ASCA results.
\section{Summary and Discussion}
\subsection{Summary}
We have examined the expected spectral variations within a model involving a
hot corona emitting medium to hard X-rays by Comptonization, coupled to a
cooler optically thick layer which i) provides the input of soft photons for
Comptonization by the hot corona and ii) intercepts a fixed fraction of the
Comptonized photons and reprocesses them into soft thermal photons. The
fraction of hard photons which are reprocessed and the fraction of soft photons
which are Comptonized have been held fixed to the standard values of 1/2 and 1
respectively, which refer to a flattened, sandwich-like geometry. These values
may be somewhat different in different sources but could be constant on "short"
timescales, over which the structure of the emitting region is not expected to
vary substantially.
Even with these restrictions the spectral variability is rather complex and
depends on two main issues, the importance of pairs as a source of opacity and
the expansion/contraction of the active regions in the corona possibly
associated with luminosity variations.
The main point which emerges is that intrinsic spectral variations are governed
by changes in opacity (steeper spectra for larger $\tau$ as long as $\tau \lta
1$). For fixed $L_c/L_s$ this implies that {\it a steepening of $\alpha$ should
correspond to a decrease of the temperature.} This is a robust prediction of
Compton cooled models with fixed geometry and could be tested by the recently
launched XTE and SAX satellites. Should an opposite behaviour be found, the
concept of a quasistationary corona above an accretion disk should be
substantially revised in the sense of a much more chaotic situation.
The relation between spectral shape and intensity is univocal in the case
of pair dominated coronae. In this case large changes in compactness
(luminosity and/or size) are required for modest spectral variations. These
should appear as either large changes in the 2--10 keV intensity or large
changes in the temperature of the soft reprocessed component.
The case of Mrk 766 (which may be anomalous among Seyfert 1 galaxies)
seems to exclude a pair dominated corona.
In the case of negligible pairs the relation between
luminosity and opacity is essentially unknown.
Opacity variations could occur with little variations in luminosity and
give rise to a pivoting of the medium energy spectrum which
can yield little correlation and even anticorrelation between the hard and
medium X-ray fluxes. For constant luminosity and
flat spectral indices ($\alpha_{[2-10]}\lta 1$)
the hard photon flux varies less and is anticorrelated
with the photon flux in the medium energy band.
The integrated luminosity in the soft reprocessed emission is always
proportional to the Comptonized luminosity. However its contribution to
the counts in the band between the galactic absorption cut off and the Carbon
edge depends strongly on the temperature, which increases with increasing
luminosity (for fixed size). This can give rise to anomalous observed
behaviours with larger variations in the soft band than in the medium one.
\subsection{Discussion}
From a more observational point of view, it may be useful to discuss in more
detail what can be deduced about the intrinsic properties of the source from a
given set of ``observables". To this end we indicate the [0.1--2] keV band as the
"soft S band", the [2--10] keV band as the "medium M band" and the band above
30 keV as the "hard H band". We define four observable quantities describing
the continuum variability of a source in the X--ray domain: variation of the
[2--10] keV spectral index $\Delta \alpha$, variation of the [0.1--2] keV count
rate $\Delta S$, variation of the [2--10] keV count rate $\Delta M$, and
variation of the hard count rate $\Delta H$. The four quantities defined are in
principle independent, and hence all the possible variability behaviours can be
described by a combination of possibilities. We will examine the most
interesting ones.
The main relevant property of an observed X--ray light curve is whether
flux variability in the M band occurs keeping a constant spectral index, or
instead if it is accompanied by spectral variations. The two possibilities,
together with simultaneous observations in the S band, lead to very different
conclusions regarding the very nature of the observed variability.
As discussed in \S 3 spectral variations in the medium band require a
negligible amount of pairs in the corona, or, for large compactnesses,
variations of the source linear size $R$. Observations in the soft band can
discriminate between the two possibilities, since the latter case would imply
variations of $T_{BB}$ and hence variations of the soft counts. Hence in case
$\Delta S\simeq 0$ the intrinsic luminosity of the X--ray source and the
dimensions of the reprocessing region must not vary; the hard variability in
flux and spectral index is due to variations of optical depth of the corona,
with a negligible contribution of pairs. The model predicts that
$\alpha_{[2-10]}$, whenever steeper than 1, is anticorrelated to the medium
count rate. A positive correlation holds if instead $\alpha_{[2-10]}\lta 1$.
The temperature of the corona decreases for increasing spectral index, and the
flux in the H band is correlated to that in medium band if $\alpha_{[2-10]}
\gta 1$, anticorrelated if $\alpha_{[2-10]} \lta 1$. The case with $\Delta
S\neq 0$ is similar, but the observed variability properties can be due to
variations of the intrinsic luminosity of the corona and/or to variations of
the linear size of the reprocessing region. If the corona is pair dominated,
the model predicts that the soft component temperature is correlated to
$\alpha_{[2-10]}$ and anticorrelated to $\Theta$. In case pairs are
unimportant, it is not possible to draw any predictable relation between
$\alpha_{[2-10]}$ and the medium or soft count rates. Full coverage from the
X--rays through the $\gamma$-rays can in principle determine whether the
intrinsic luminosity of the source varies. In case of negligible luminosity
variations, changes in $T_{BB}$ must occur to produce a non-null $\Delta S$. On
the other hand, if intrinsic luminosity variations were detected, they alone
could give rise to the observed variability in the soft band without
associated variations of $T_{BB}$. In principle the nature of variability in
the soft band (variations of intrinsic flux or variations of temperature) can
be tested by high spectral resolution observations. \par
Different conclusions can be drawn if the variability in the medium band occurs
with a constant spectral index. The main result is that the intrinsic
luminosity of the source must vary, but not the properties of the hot corona.
This can happen in a pair dominated source if the variations of compactness are
$\lta 2$. An observation of $\Delta M \gta 10$ with a constant spectral index
would imply that some mechanism different from pair balance works to keep the
coronal optical depth constant. In any case, the model predicts that $\Delta
S\neq 0$. The total counts in the soft and hard bands are positively
correlated, and much stronger variations occur in the soft band than in the
medium one. $\Theta$ should not vary noticeably, while the flux in the H band
follows the variations of the M band. If $\Delta S \simeq 0$ the model can
be ruled out, since any variation of the hard power law with a constant
spectral index must be accompanied by an observable variation in the luminosity
of the soft component. If this does not occur, the soft component cannot be due
to local reprocessing of part of the hard X--rays.
It should be mentioned that, though the model considered in this paper
constrains the ratio $L_c/L_s$ to be constant at $\simeq 2$, different
behaviours are in principle possible if this ratio is allowed to vary.
This can occur if the height--to--radius ratio of the scattering
region(s) is of order unity or larger (see, e.g., HMG and Stern et al. 1995).
GH94 showed that
in the case of a pair dominated homogeneous plasma with $\Theta \lta 1$,
an increase of
$L_c/L_s$ (and hence a decrease of $\alpha$) would result in a lower temperature
(and hence a relevant increase in $\tau$ must occur) contrary to the case
of $\tau$--driven spectral variations with constant $L_c/L_s$.
Thus, at least in
principle, broad band observations could be used to check the stability of the
Compton--to--soft ratio during spectral variations, and hence the role played
by reprocessing.
Finally we wish to mention one aspect of the model neglected so far, namely
time lags between different bands. In the most basic picture of Comptonization
models, the hard X--rays respond to variations of the soft photon flux, and
hence variations in the hard band should follow variations of the soft thermal
photons, exhibiting also a steeper power spectrum (see e.g. the discussion in
Papadakis \& Lawrence 1995). Data in this field are sparse and often
contradictory (see Tagliaferri et al. 1996 and references therein). EXOSAT data
of NGC 4051 seem to indicate that more power at high frequency is radiated by
means of high-energy photons (Papadakis \& Lawrence 1995), contrary to what is
expected from basic Comptonization theory.
The above considerations assume that spectral variability is due to variations
of the primary soft photon input. In the model under investigation here, however,
the soft photons are supposed to be generated by mere reprocessing of a
fraction ($\simeq 50$\%) of the primary hard X--rays. Variations of the
physical conditions occur primarily in the corona. For example, we may expect
that a rapid variation of the heating rate appears in the Comptonized photons
roughly at the same time at every energy, and eventually in the soft component.
The situation appears to be extremely complicated, and we believe that data
interpretation cannot avoid dealing with such complications (see, e.g.,
Nowak \& Vaughan 1996). In view of
forthcoming better timing data, in a work in progress we aim to perform a
detailed analysis of the timing properties of an active Comptonizing corona.
\acknowledgments
FH wishes to thank S. Molendi for stimulating discussions and the Astrophysics
Section of the University of Milan for hospitality. FH and GG thank the ITP in
Santa Barbara where part of this study was done. FH acknowledges financial
support by the Swedish Natural Science Research Council and by the Alice
Wallenberg Foundation. This project was also partially supported (FH and GG)
by NSF grant PHY94-07194.
\clearpage
\section{Introduction}
Low-mass X-ray binary systems (LMXBs) consist of a central compact object (a neutron star or a black hole) accreting matter through an accretion disc from a low-mass (M$\le$1 M$_\odot$) donor star. According to the path they trace in an X-ray colour-colour diagram (CD) or hardness-intensity diagram (HID), \citet{hasinger89} classified the neutron-star LMXBs (NS LMXBs) into two classes, the Atoll and the Z sources. The Z sources ($\sim$0.5-1 $L_{\rm Edd}$) are brighter than the Atoll sources \citep[0.01-0.2 $L_{\rm Edd}$; e.g.][]{ford2000,done07,homan07}. The Atoll sources display three branches in the CD: the island, the lower banana and the upper banana. The branches in the CD correspond to different spectral states, with the mass accretion rate increasing from the island (hard spectral state) to the upper banana branch (soft spectral state).
In a typical LMXB system, the emission can be decomposed into the contribution of two main components: (i) a soft, thermal component due to the emission from the accretion disc and, when appropriate, the neutron-star surface and its boundary layer, and (ii) a hard component due to inverse Compton scattering of soft photons from the thermal component(s) by hot electrons in a corona. In the hard spectral state, the inner part of the accretion disc is relatively cool, $\sim$0.3-0.5 keV \citep[e.g.][]{sanna13,lyu14}, and contributes less than 20\% of the total emission between 0.5 and 20 keV. The energy spectrum in the hard state is dominated by a Comptonised component, which is usually described by a power-law-like model with a photon index of $\sim$ $1.6-2.5$ \citep{yoshida93,mariano97}. In the truncated disc scenario \citep[e.g.][and references therein]{done07}, the accretion disc in the hard state is truncated far from the central object, thus leading to a relatively low inner-disc temperature and disc flux, while in the soft spectral state, the soft thermal emission becomes dominant. There are two possible sources of thermal emission in a neutron star system, either the surface of the neutron star (plus its boundary layer) or the accretion disc. The disc in the soft state extends down to the innermost stable circular orbit \citep{ss73}, leading to a high inner-disc temperature of $0.7-1.0$ keV \citep{sanna13,lyu14} and a strong thermal component. The electrons in the corona are efficiently cooled down through the inverse Compton scattering process, with seed photons coming from the thermal components. As a consequence, the Comptonised spectrum becomes steep and has a photon index of $\sim$$2-2.5$ \citep{miyamoto93,mariano97}, and little hard emission is detected in some cases \citep{gierliski03,zhang17}.
Many models have been proposed to explain the NS LMXB spectra. Historically, two main types of models were widely used: (i) the `eastern model' consists of a multi-temperature thermal emission from the accretion disc plus a Comptonised component from an optically thin but geometrically thick corona \citep{mitsuda84,mitsuda89}; (ii) the `western model' consists of a single-temperature blackbody component from the surface of the neutron star (and its boundary layer) and a Comptonised spectrum due to inverse Compton scattering of thermal photons off the hot electrons in the corona \citep{white86}.
4U 1728--34 was first detected by UHURU scans of the Galactic center region in 1976 \citep{forman76}. Later \citet{lewin76} and \citet{hoffman76} detected type I X-ray bursts from the source, identifying the neutron-star nature of the compact object. 4U 1728--34 was classified as an Atoll NS LMXB \citep{hasinger89}, at an estimated distance of $4.4-5.1$ kpc, deduced from measurements of the peak flux of photospheric radius expansion bursts \citep{disalvo00,galloway03}. \citet{migliari03} found a coupling between the disc and the jet in 4U 1728--34 based on a significant correlation between the radio flux density and the X-ray flux. Spectral analysis of 4U 1728--34 has been performed using observations from many satellites in the past, such as Einstein \citep{grindlay81}, SAS--3 \citep{basinska84}, EXOSAT \citep{white86}, SIGMA \citep{claret94}, ROSAT \citep{schulz99}, BeppoSAX \citep{piraino00,disalvo00}, ASCA \citep{narita01}, RXTE, Chandra \citep{dai06}, INTEGRAL \citep{falanga06} and XMM-Newton \citep{ng10,egron11}, BeppoSAX, RXTE \citep{seifina11}, INTEGRAL, RXTE \citep{tarana11}, NuSTAR, Swift \citep{sleator16,mondal17}.
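The distance estimate above relies on the standard-candle argument for photospheric radius expansion (PRE) bursts: at the burst peak the luminosity reaches the Eddington limit, so $d=\sqrt{L_{\rm Edd}/4\pi F_{\rm peak}}$. A minimal sketch of this conversion, with an illustrative peak flux and an assumed, composition-dependent Eddington luminosity (neither number is taken from \citet{disalvo00} or \citet{galloway03}):

```python
import math

# Standard-candle distance from a PRE burst: at the burst peak
# L_Edd = 4*pi*d^2 * F_peak  =>  d = sqrt(L_Edd / (4*pi*F_peak)).
L_EDD = 2.5e38        # erg/s, assumed Eddington luminosity (composition dependent)
F_PEAK = 9.0e-8       # erg/cm^2/s, illustrative bolometric peak flux
CM_PER_KPC = 3.086e21

d_cm = math.sqrt(L_EDD / (4.0 * math.pi * F_PEAK))
d_kpc = d_cm / CM_PER_KPC   # ~4.8 kpc with these inputs
```

With these illustrative inputs the distance falls near the middle of the quoted $4.4-5.1$ kpc range; the published values depend on the measured peak fluxes and on the assumed atmospheric composition.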
In this work we used an XMM-Newton observation plus a simultaneous Rossi X-ray Timing Explorer (RXTE) observation, and a Suzaku observation to explore the spectral properties of 4U 1728--34. We applied full reflection models to investigate the spectral features of the source. Furthermore, we studied the ionisation state of the accretion disc by comparing the results from spectroscopy and theory, and we explored the possible mechanisms behind the emission spectrum in this source. The paper is organized as follows: We describe the observations and the details of the data reduction in Section 2. In Section 3 and Section 4, we show the spectral analysis process and the results, respectively. Finally, in Section 5 we discuss our findings.
\section{Observations and data reduction}
In this work we used data from three satellites: An XMM-Newton observation (ObsID:0671180201) plus a simultaneous RXTE observation (ObsID: 96322-01-01-00) taken on 2011-08-28, and a Suzaku observation (ObsID:405048010) obtained on 2010-10-04. We analyzed the XMM-Newton/RXTE and the Suzaku observation separately in this work.
The XMM-Newton observation was taken with the European Photon Imaging Camera, EPIC-PN \citep{xmm01} in timing mode, with a total exposure time of about 52 ks. We used the Science Analysis System (SAS) version 16.1.0 to reduce the PN data with the latest calibration files. We applied the tool {\tt epproc} to extract calibrated events, and converted the arrival time of photons to the barycenter of the solar system using the command {\tt barycen}. We excluded all events at the edge of a CCD and close to a bad pixel, and selected only single and double events for the extraction. We estimated the pileup effect using the command {\tt epatplot}, as suggested by the SAS thread. When we exclude the central one, three and five columns, the 0.5-2.0 keV observed-to-model fraction for doubles is 1.050$\pm$0.003, 1.022$\pm$0.004, and 0.990$\pm$0.004, respectively. We finally selected events within a 41-column wide region centered on the source position, excluding the central five columns to eliminate pileup. The EPIC pn spectrum of the full region has a count rate of $\sim$ 247 cts/s, whereas the one excluding the central five columns has a count rate of $\sim$ 70 cts/s. We removed all X-ray bursts before we extracted the spectra. For observations in PN timing mode, the point spread function (PSF) of the instrument extends beyond the CCD boundaries, thus the whole CCD is contaminated by source photons \citep{ng10,hiemstra11}. To model the background, we used the observation of the neutron-star LMXB 4U 1608--52 (ObsID 0074140201) in timing mode when the source was close to quiescence. We excluded the bad time intervals with flaring particle background and then extracted the background from a region far from the center of the PSF (RAWX in [2:5]). We produced the response matrices and ancillary response files with the commands {\tt rmfgen} and {\tt arfgen}, respectively.
Finally, we rebinned the spectrum with the command {\tt specgroup} to ensure a minimum of 25 counts in every bin and a maximum oversampling factor of 3.
For the RXTE observation we used the Proportional Counter Array \citep[PCA;][]{jahoda06} data only, since the other instrument, the High Energy X-ray Timing Experiment \citep[HEXTE;][]{roth98}, was not in a good working condition after 2009. We reduced the data using the {\sc heasoft} package version 6.16 according to the RXTE cook book\footnote{http://heasarc.gsfc.nasa.gov/docs/xte/recipes/cook\_book.html}. We applied the tool {\tt saextrct} to extract PCA spectra from Standard-2 data, where only the events from the best calibrated detector, PCU2, were selected. We ran the commands {\tt pcabackest} to generate the PCA background files and {\tt pcarsp} to produce the response files. Finally, we applied the dead time correction to the spectra.
We used the Suzaku data taken with two instruments onboard: the X-ray Imaging Spectrometer (XIS) and the Hard X-ray Detector (HXD). The XIS and HXD detectors cover an energy range of $0.2-12$ keV and $10-70$ keV, respectively. The XIS data were collected by two front-illuminated (FI) detectors (XIS0 and XIS3) and one back-illuminated (BI) detector (XIS1). The 1/4 window option and the burst option were applied to the XIS detectors to limit possible pileup effects.
We followed exactly the steps from the Suzaku Data Reduction Guide\footnote{http://heasarc.gsfc.nasa.gov/docs/suzaku/analysis/abc/} to reduce all Suzaku data. We used the tool {\tt aepipeline} to recalibrate the XIS and HXD events with the latest calibration files, removed bad pixels and applied the standard GTI selection. After excluding X-ray bursts, we ran the {\sc heasoft} tool {\tt xselect} to extract XIS spectra from a circular region centered at the position of the source. We found that there was no pileup in the spectra. The redistribution matrix files (RMFs) and ancillary response files (ARFs) were generated using the tools {\tt xisrmfgen} and {\tt xissimarfgen}, respectively. Finally, we used the command {\tt addascaspec} to combine the spectra of XIS0 and XIS3 to produce the FI spectra. For the HXD-PIN data, we applied the tool {\tt hxdpinxbpi} to produce the source and background spectra. We applied the dead-time correction to the source spectrum using the pseudo-events files. Since the non-X-ray background (NXB) has a count rate 10 times higher than the real background, in order to reduce the Poisson noise, we adjusted the exposure time of the NXB spectra by a factor of 10. Furthermore, a cosmic X-ray background (CXB) was simulated and added to the NXB to generate a total background spectrum. Finally, we downloaded the response file from the online CALDB according to the Suzaku Data Reduction Guide.
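The factor-of-10 rescaling of the NXB exposure exploits the fact that the fractional Poisson error on a count rate scales as $1/\sqrt{N}$: accumulating ten times the counts at the same rate shrinks the rate uncertainty by $\sqrt{10}\simeq 3.2$. A sketch with illustrative numbers (the rate and exposure are hypothetical):

```python
import math

# Poisson error on a count rate: rate = N/t, sigma_rate = sqrt(N)/t.
def rate_error(counts, exposure):
    return math.sqrt(counts) / exposure

rate = 50.0          # counts/s, illustrative NXB rate
t = 1.0e4            # s, nominal exposure

err_nominal = rate_error(rate * t, t)
err_scaled = rate_error(rate * 10 * t, 10 * t)   # 10x exposure, same rate

improvement = err_nominal / err_scaled   # sqrt(10) ~ 3.16
```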
\section{Spectral Analysis}
In this work we used XSPEC version 12.10.0 \citep{arnaud96} to fit the PN and PCA spectra together in the $0.9-25$ keV energy range (PN: $0.9-10$ keV; PCA: $10-25$ keV), and the Suzaku spectra in the $1-50$ keV energy range (FI/BI: $1-10$ keV; HXD-PIN: $10-50$ keV). We used the component {\sc phabs} to describe the interstellar absorption along the line of sight, with the solar abundance table of \citet{wilms00} and the photoionisation cross section table of \citet{verner96}. A multiplicative factor was applied to the model to account for possible calibration differences between the instruments. We fixed this factor to 1 for the PN and FI spectrum, and left it free to vary for the spectra of the other instruments. Finally, we added a 0.5\% systematic error to all the spectra to account for possible effects of the cross calibration on the spectral shape \citep[e.g.][]{sanna13,dai15}. Below we describe the models that we fitted to the spectra, and we present the corresponding fitting results in the next section.
\subsection{Direct emission}
We first tried a thermal component plus a Comptonised component to fit the spectra \citep[e.g.][]{sleator16,mondal17}. We selected the model {\sc bbody} to describe the thermal emission from the neutron-star surface and its boundary layer. For the Comptonised component we used the thermal Comptonisation model {\sc nthcomp} \citep{zdzi96,zyck99}, which describes the high-energy shape and the low-energy rollover more accurately than an exponentially cutoff power-law component. In order to test whether this combination is able to fit the continuum well, we excluded the 5 keV to 8 keV energy range where significant residuals caused by an iron emission line were present (see $\S 3.2$ below). We found that the continuum could be well fitted by the {\sc bbody+nthcomp} model, with a reduced chi-squared value/number of degrees of freedom of 0.99/128 and 1.26/2592 for the XMM-Newton/RXTE observation and the Suzaku observation, respectively. We also tried to add the component {\sc diskbb} \citep{mitsuda84,maki86} to fit the possible emission from an accretion disc, and linked the temperature of the accretion disc, $kT_{dbb}$, in the {\sc diskbb} component to the seed photon temperature, $kT_{seed}$, in the {\sc nthcomp} component. We found that the additional component {\sc diskbb} did not significantly improve the fits: The reduced chi-squared/number of degrees of freedom for the fits were 0.99/127 and 1.21/2591 for the XMM-Newton/RXTE and the Suzaku observation, respectively. The {\sc diskbb} normalization, $N_{dbb}$, in the Suzaku observation became extremely large in this case (208132$_{-65410}^{+99428}$), which is likely caused by a degeneracy of the parameters in the model. Therefore, we did not use the {\sc diskbb} component in the rest of the fitting process \citep[see also,][]{piraino00,dai06,seifina11}.
The seed photons for the thermal Compton process in the corona could come either from the accretion disc or from the neutron-star surface and its boundary layer. \citet{sanna13} explored the origin of the seed photons by linking the seed photon temperature, $kT_{seed}$, in the {\sc nthcomp} to either the temperature of the accretion disc, $kT_{dbb}$, in the {\sc diskbb}, or the temperature of the neutron star, $kT_{bb}$, in the {\sc bbody}, respectively. They found that both options gave statistically acceptable fits; however, the blackbody emission became negligible when linking $kT_{bb}$ in the {\sc bbody} to $kT_{seed}$ in the {\sc nthcomp}. We also tried to link $kT_{seed}$ in the {\sc nthcomp} component to the temperature of the neutron star, $kT_{bb}$; however, in this case the fitting became worse and $kT_{bb}$ decreased to $\sim$ 0.33 keV or pegged at the lower boundary 0.3 keV for the XMM-Newton/RXTE and the Suzaku observation, respectively. We therefore assumed that the thermal photons from the accretion disc are the seed photons for the Compton process in the corona \citep[e.g.][]{sanna13,lyu14}, and left the parameter $kT_{seed}$ free to vary because, as explained above, we did not detect the {\sc diskbb} component.
\subsection{Phenomenological reflection model of the line emission}
After fitting the continuum we found prominent residuals in the iron line region. The existence of residuals around $5-8$ keV in the fits indicates that reflection from the accretion disc may be present, therefore we added a {\sc gaussian} component to fit the possible iron line emission, constraining the energy of the line to be in the 6.4 $-$ 6.97 keV range. We found that the derived {\sc gaussian} line profiles were in general broad ($\sigma\sim$0.8 keV), so we then substituted the {\sc gaussian} component with a relativistically broadened {\sc laor} line \citep{laor91}
model, which describes a line emitted from an accretion disc around a maximally rotating black hole. The parameters of the {\sc laor} component are the emissivity index of the disc, $\beta$, the inner and outer radii of the disc, $R_{in}$ and $R_{out}$, respectively, the central energy of the emission line, $E_{\rm line}$, and the inclination angle of the disc with respect to the line of sight, $i$. We found that the best-fitting parameters did not change significantly when the outer disc radius was fixed at 400 $R_{g}$ compared with those obtained when it was free to vary. We then fixed the outer disc radius in the model at 400 $R_{g}$, where $R_{g}$ = $GM/c^{2}$, with $G$ being Newton's constant, $c$ the speed of light, and $M$ the mass of the neutron star.
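For later reference, $R_{g}=GM/c^{2}$ evaluates to about 2.1 km for an assumed $1.4\,M_\odot$ neutron star, so the fixed outer radius of 400 $R_{g}$ corresponds to roughly 830 km (a sketch; the mass is an assumption, not a fitted quantity):

```python
# Gravitational radius R_g = G*M/c^2 in cgs units.
G = 6.674e-8       # cm^3 g^-1 s^-2
C = 2.998e10       # cm/s
M_SUN = 1.989e33   # g

m_ns = 1.4 * M_SUN          # assumed neutron-star mass
r_g = G * m_ns / C**2       # ~2.1e5 cm, i.e. ~2.1 km
r_out = 400.0 * r_g         # fixed outer disc radius, ~8.3e7 cm
```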
\begin{figure*}
\center
\includegraphics[width=0.3\textwidth,angle=-90]{Fig1_1.eps}
\includegraphics[width=0.3\textwidth,angle=-90]{Fig1_2.eps}
\caption{Fitting results with the phenomenological reflection model {\sc phabs*(bbody+nthcomp+laor)} for the XMM-Newton/RXTE (left) and the Suzaku (right) observations of 4U 1728--34. Each plot shows the fitted spectra and the individual model components (main panel), and the residuals in terms of sigmas (sub panel). The components {\sc bbody}, {\sc nthcomp} and {\sc laor}, and the sum of all these model components, are plotted with a blue dotted, a magenta dashed-dotted, a yellow dotted and a green line, respectively. Since the FI and BI spectra in the Suzaku observation are mostly on top of each other in the plot, here we do not plot the BI spectra for clarity. The residuals in the plots are rebinned for plotting purposes.}
\label{ironline}
\end{figure*}
\subsection{Full reflection models}
When X-rays illuminate the accretion disc, the surface of the accretion disc is expected to be ionised, producing a reflection spectrum including fluorescence lines, recombination and other emissions \citep[e.g.][]{ross05}. The shape of the reflection spectrum is influenced by the ionisation state of the disc, and thus the reflection spectrum is important for understanding the thermal balance of the disc. Therefore, we also fitted the broad-band spectrum with a self-consistent reflection model. We first applied the reflection model {\sc relxill} \citep{garcia14,dauser16a}, which describes the reflection off the disc illuminated by a power-law source. The {\sc relxill} model combines the relativistic convolution kernel {\sc relconv} \citep{dauser10} with the reflection grid {\sc xillver} \citep{garcia13}, and it calculates the reflection spectrum for each emission angle. The fit parameters of the {\sc relxill} model are the inclination of the accretion disc, $i$, the dimensionless spin parameter, $a$, the redshift to the source, $z$, the inner and outer radii of the disc, $R_{in}$ and $R_{out}$, respectively, the inner and outer emissivity index of the accretion disc, $q_{in}$ and $q_{out}$, the breaking radius $R_{br}$ where the emissivity changes, the ionisation parameter, $\xi$, the iron abundance, $A_{Fe}$, the photon index of the power-law, $\Gamma$, the cut-off energy of the power-law, $E_{cut}$, the reflection fraction, $R_{refl}$, and the normalization. We fixed $a$ to 0.17 calculated as $a$ = 0.47/$P_{ms}$ \citep{braje00}, where $P_{ms}$=1000/363 ms \citep{stroh96} is the spin period of the neutron star in milliseconds. The inner radius, $R_{in}$, was set to be larger than 5.44 $R_{g}$, while $R_{out}$ and $R_{br}$ were fixed at 400 $R_{g}$, and hence $q_{in}$ and $q_{out}$ were linked to vary together. The redshift $z$ was set to zero and $A_{Fe}$ was fixed to the solar abundance.
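The adopted spin value follows directly from the \citet{braje00} relation $a=0.47/P_{ms}$ and the 363 Hz spin frequency of \citet{stroh96}; a one-line check:

```python
# a = 0.47 / P_ms (Braje et al. 2000), with P_ms the spin period in ms.
nu_spin = 363.0            # Hz (Strohmayer et al. 1996)
p_ms = 1000.0 / nu_spin    # spin period, ~2.755 ms
a = 0.47 / p_ms            # ~0.17, the value fixed in the fits
```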
In order to explore the possible geometry of 4U 1728--34, we also fitted the spectra with another reflection model, {\sc relxilllp} \citep{garcia14,dauser16}. The {\sc relxilllp} model is a relativistic reflection model for the lamppost geometry, which assumes that the corona is located above the accretion disc along the spin axis of the compact object. The {\sc relxilllp} model has the same parameters as {\sc relxill}, but replaces the emissivity indices and the breaking radius with a new parameter, $h$, which is the height of the corona. We set the parameter $fixReflFrac=0$ to fit the reflection freely, and set the rest of the parameters in this model to the same values as in {\sc relxill}.
To sum up, we used the model {\sc phabs$\ast$(bbody+laor+\\nthcomp)} to describe the continuum and the iron emission line, and the models {\sc phabs$\ast$(bbody+relxill)} and {\sc phabs$\ast$\\(bbody+relxilllp)} to model the full reflection spectra. We found that the inclination angle in the XMM-Newton/RXTE observation could not be well constrained (it was larger than 85 degrees) in the fits, and hence we fixed it at 50 degrees, derived from the fits of the full reflection models to the Suzaku observation.
\section{Results}
\begin{table*}
\centering
\caption{Best-fitting results for the fit to the X-ray spectra of 4U 1728--34 with the phenomenological model. The inclination angle in the XMM-Newton/RXTE observation could not be well constrained, so we fixed it to the value in the Suzaku observation when we fit it with the full reflection model (see text). Here we give the unabsorbed flux (erg cm$^{-2}$ s$^{-1}$) in the energy range $0.1-100$ keV. All errors in the Tables are at the 90\% confidence level unless otherwise indicated. A symbol $^{*}$ means that the error pegged at the hard limit of the parameter range.}
\begin{tabular}{|c|c|c|c|}
\hline
Model Comp & Parameter & Suzaku & XMM-Newton/RXTE \\
\hline
{\sc phabs} &$N_{\rm H}$ (10$^{22}$cm$^{-2}$) & 4.84 $\pm $ 0.06 & 4.41$\pm$ 0.21 \\
{\sc bbody} &$kT_{\rm BB}$ (keV) & 2.2 $\pm $ 0.02 & 2.04$\pm$ 0.03 \\
&Norm (10$^{-3}$) & 7.1 $\pm $ 0.2 & 15.1$\pm$ 1.1 \\
&Flux (10$^{-10}$ c.g.s) & 6.1$\pm $ 0.2 & 13.1$\pm$ 0.9 \\
{\sc nthcomp} &$\Gamma$ & 2.23 $\pm $ 0.02 & 2.26$\pm$ 0.06 \\
&$kT_{\rm e}$ (keV) & 7.0 $\pm $ 0.3 & 4.5 $\pm$ 0.4 \\
&$kT_{\rm bb}$ (keV) & 0.21 $\pm $ 0.04 & 0.32$_{-0.22^{*}}^{+0.08}$ \\
&Norm & 0.65 $\pm $ 0.03 & 1.02$_{-0.17}^{+0.45}$ \\
&Flux (10$^{-10}$ c.g.s) & 38.7$\pm $ 2.0 & 60.6$_{-5.5}^{+21.1}$ \\
{\sc laor} &$E_{\rm line}$ (keV) & 6.97$_{-0.02}^{+0^{*}}$ & 6.59$\pm$ 0.07 \\
&$\beta$ & 3.8 $\pm $ 0.4 & 2.46$\pm$ 0.21 \\
&$R_{\rm in}$ ($R_{\rm g}$) & 6.2 $\pm $ 0.7 & 9.55$\pm$ 3.89 \\
&incl $ (^\circ)$ & 24.7 $\pm $ 1.6 & 50 (fixed) \\
&Norm (10$^{-3}$) & 1.5 $\pm $ 0.2 & 3.78$\pm$ 0.35 \\
&Flux (10$^{-10}$ c.g.s) & 0.15$\pm $ 0.1 & 0.39$\pm$ 0.03 \\
\hline
&$\chi^2_\nu$ ($\chi^2/dof)$ & 1.16 (4758/4093) & 1.29 (223/172) \\
&Total flux (10$^{-10}$ c.g.s) & 44.2$_{-1.6}^{+3}$ & 77.3$_{-8.6}^{+18.3}$ \\
\hline
\end{tabular}
\medskip
\\
\label{line}
\end{table*}
\begin{figure*}
\center
\includegraphics[width=0.3\textwidth,angle=-90]{Fig2_1.eps}
\includegraphics[width=0.3\textwidth,angle=-90]{Fig2_2.eps}
\caption{Fitting results with the full reflection model {\sc phabs*(bbody+relxilllp)} for the XMM-Newton/RXTE (left) and the Suzaku (right) observations of 4U 1728--34. Each plot shows the fitted spectra and the individual model components (main panel), and the residuals in terms of sigmas (sub panel). The components {\sc bbody} and {\sc relxilllp}, and the sum of the model components, are plotted with a blue dotted, a magenta dashed-dotted and a green line, respectively. Since the FI and BI spectra in the Suzaku observation are mostly on top of each other in the plot, here we do not plot the BI spectra for clarity. The residuals in the plots are rebinned for plotting purposes.}
\label{lp}
\end{figure*}
In Table \ref{line} we show the fitting results using the {\sc laor} model. The blackbody temperature, $kT_{\rm BB}$, is 2.2$\pm 0.02$ keV in the Suzaku observation and 2.04$\pm 0.03$ keV in the XMM-Newton/RXTE observation. The power-law index, $\Gamma$, is 2.23$\pm 0.02$ and 2.26$\pm 0.06$, while the electron temperature, $kT_{\rm e}$, is 7.0$\pm 0.3$ keV and 4.5 $\pm$ 0.4 keV in the Suzaku and the XMM-Newton/RXTE observation, respectively. The corresponding energy of the {\sc laor} line is 6.97$_{-0.02}^{+0}$ keV and 6.59$\pm 0.07$ keV in the Suzaku and XMM-Newton/RXTE observation, respectively, suggesting that the accretion disc is highly or moderately ionised in these two observations. The corresponding spectra, individual components and residuals of the fits are shown in Figure \ref{ironline}.
In Table \ref{reft} we summarise the fitting results with the reflection model {\sc relxill}. The reduced chi-square with this model is 1.14 and 1.0 for 4094 and 173 degrees of freedom for the Suzaku and XMM-Newton/RXTE observation, respectively. The inclination angle was well constrained at 49$_{-3}^{+8}$ degrees in the Suzaku observation, consistent with the fact that no eclipse has ever been observed in this source. The reflection fraction, $R_{refl}$, is 0.54$\pm$0.08 in the Suzaku observation and 1.39$_{-0.32}^{+0.6}$ in the XMM-Newton/RXTE observation. This may suggest that more photons from the corona illuminate the accretion disc in the XMM-Newton/RXTE than in the Suzaku observation, which is also consistent with the higher ionisation state of the disc in the XMM-Newton/RXTE observation. The ionisation parameter, $log(\xi)$, is 2.71$\pm$0.07 in the Suzaku observation, and 3.92$\pm$0.16 in the XMM-Newton/RXTE observation. The inner radius of the disc, $R_{in}$, is 14.1$_{-2.9}^{+10.7} R_{g}$ and 5.44$_{-0}^{+2.32} R_{g}$ in the Suzaku and XMM-Newton/RXTE observation, respectively.
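To get a feeling for what these $log(\xi)$ values imply, recall the usual definition $\xi=4\pi F_x/n_e$, with $F_x$ the flux illuminating the disc and $n_e$ the electron density of its surface layer. Neither quantity is constrained individually by the fits; the sketch below uses a purely illustrative flux:

```python
import math

def density_for_xi(log_xi, f_x):
    """Electron density implied by xi = 4*pi*F_x / n_e."""
    return 4.0 * math.pi * f_x / 10.0**log_xi

F_X = 1.0e22   # erg/cm^2/s, illustrative illuminating flux at the inner disc

n_suzaku = density_for_xi(2.71, F_X)   # ~2.5e20 cm^-3
n_xmm = density_for_xi(3.92, F_X)      # ~1.5e19 cm^-3
```

For a fixed illuminating flux, the higher $log(\xi)$ in the XMM-Newton/RXTE observation corresponds to a surface layer roughly 16 times less dense; alternatively, at fixed density it requires a correspondingly larger illuminating flux.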
We show the fitting results with the reflection model {\sc relxilllp} in Table \ref{lpt}. Most of the parameters in the fits with the model {\sc relxilllp} are similar to the ones in the fit with {\sc relxill}, except that the reflection fraction, $R_{refl}$, is systematically higher in {\sc relxilllp} than in {\sc relxill}. In the case of {\sc relxilllp} $R_{refl}$ is 1.68$_{-0.33}^{+1.19}$ and 3.05$_{-0.65}^{+1.11}$ in the Suzaku and XMM-Newton/RXTE observation, respectively. There is no clear difference in the height of the corona, $h$, in the two observations: $h$ is 15.5$_{-9.2}^{+11.8}$ $R_{g}$ (90\% confidence level) in the Suzaku observation, while it is 22.3$\pm$6.7 $R_{g}$ in the XMM-Newton/RXTE observation. The corresponding spectra, individual components and residuals of the fits are shown in Figure \ref{lp}.
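Converting the total unabsorbed fluxes of Table \ref{reft} into luminosities gives a rough Eddington ratio for each observation. A sketch assuming a distance of 4.5 kpc (inside the $4.4-5.1$ kpc range) and an Eddington luminosity for a $1.4\,M_\odot$ hydrogen-accreting star; both numbers are assumptions, not fitted quantities:

```python
import math

CM_PER_KPC = 3.086e21
L_EDD = 1.76e38        # erg/s, assumed: 1.26e38 * 1.4 for H-rich accretion
d = 4.5 * CM_PER_KPC   # assumed distance in cm

def luminosity(flux):
    # Isotropic luminosity from an observed flux: L = 4*pi*d^2 * F.
    return 4.0 * math.pi * d**2 * flux

l_suzaku = luminosity(73.6e-10)   # total unabsorbed flux -> ~1.8e37 erg/s
l_xmm = luminosity(222e-10)       # -> ~5.4e37 erg/s
edd_ratio_suzaku = l_suzaku / L_EDD   # ~0.1
edd_ratio_xmm = l_xmm / L_EDD         # ~0.3
```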
\begin{table*}
\centering
\caption{Best-fitting results for the fit to the X-ray spectra of 4U 1728--34 with the reflection model {\sc phabs*(bbody+relxill)}. We fixed the inclination angle in the XMM-Newton/RXTE observation to the value in the Suzaku observation since this parameter in the XMM-Newton/RXTE observation could not be well constrained. Here we give the unabsorbed flux (erg cm$^{-2}$ s$^{-1}$) in the energy range $0.1-100$ keV. A symbol $^{*}$ means that the error pegged at the hard limit of the parameter range.}
\begin{tabular}{|c|c|c|c|}
\hline
\hline
Model Comp & Parameter & Suzaku & XMM-Newton/RXTE \\
\hline
{\sc phabs} &$N_{\rm H}$ (10$^{22}$cm$^{-2}$) & 4.92 $\pm $ 0.03 & 5.18 $\pm$ 0.14 \\
{\sc bbody} &$kT_{\rm BB}$ (keV) & 2.27 $\pm $ 0.04 & 2.15 $\pm$ 0.03 \\
&Norm (10$^{-3}$) & 6.9 $\pm $ 0.2 & 19.4 $\pm$ 0.5 \\
&Flux (10$^{-10}$ c.g.s) & 5.4 $\pm $ 0.1 & 16.2$_{-0.2}^{+0.7}$ \\
{\sc relxill} &$\beta$ & 2.8$_{-0.2}^{+0.5}$ & 2.32 $\pm$ 0.16 \\
&incl $(^\circ)$ & 49$_{-3}^{+8}$ & 50 (fixed) \\
&$R_{\rm in}$ ($R_{\rm g}$) & 14.1$_{-2.9}^{+10.7}$ & 5.44 $_{-0^{*}}^{+2.32}$ \\
&$\Gamma$ & 2.03 $\pm $ 0.04 & 2.43 $\pm$ 0.09 \\
&log($\xi$) & 2.71 $\pm $ 0.07 & 3.92 $\pm$ 0.16 \\
&E$_{cut}$ (keV) & 16.6 $\pm $ 1.2 & 19.95 $_{-2.0}^{+7.03}$ \\
&R$_{refl}$ & 0.54 $\pm $ 0.08 & 1.39 $_{-0.32}^{+0.6}$ \\
&Norm (10$^{-3}$) & 5.5 $\pm $ 0.3 & 9.3 $\pm$ 2.4 \\
&Flux (10$^{-10}$ c.g.s) & 66.7$_{-1.5}^{+2.8}$ & 206$_{-19}^{+65}$ \\
\hline
&$\chi^2_\nu$($\chi^2/dof)$ & 1.14 (4668/4094) & 1.0 (173/173) \\
&Total flux (10$^{-10}$ c.g.s) & 73.6$_{-3}^{+1.5}$ & 222$_{-19}^{+66}$ \\
\hline
\end{tabular}
\medskip
\\
\label{reft}
\end{table*}
\begin{table*}
\centering
\caption{Best-fitting results for the fit to the X-ray spectra of 4U 1728--34 with the reflection model {\sc phabs*(bbody+relxilllp)}. We fixed the inclination angle in the XMM-Newton/RXTE observation to the value in the Suzaku observation since this parameter in the XMM-Newton/RXTE observation could not be well constrained. Here we give the unabsorbed flux (erg cm$^{-2}$ s$^{-1}$) in the energy range $0.1-100$ keV. A symbol $^{*}$ means that the error is pegged at the hard limit of the parameter range.}
\begin{tabular}{|c|c|c|c|}
\hline
\hline
Model Comp & Parameter & Suzaku & XMM-Newton/RXTE \\
\hline
{\sc phabs} &$N_{\rm H}$(10$^{22}$cm$^{-2}$) & 4.91 $\pm $ 0.03 & 5.17 $\pm$ 0.12 \\
{\sc bbody} &$kT_{\rm BB}$(keV) & 2.27 $\pm $ 0.03 & 2.15 $\pm$ 0.02 \\
&Norm (10$^{-3}$) & 6.9 $\pm $ 0.2 & 19.4 $\pm$ 0.4 \\
&Flux (10$^{-10}$ c.g.s) & 6.0 $\pm $0.2 & 16.2$_{-0.4}^{+0.7}$ \\
{\sc relxilllp} &h ($R_{\rm g}$) & 15.5$_{-9.2}^{+11.8}$ & 22.3 $\pm$ 6.7 \\
&incl $(^\circ)$ & 49$_{-3}^{+6}$ & 50 (fixed) \\
&$R_{\rm in}$($R_{\rm g}$) & 13.8$_{-3.0}^{+8.7}$ & 5.44 $_{-0^{*}}^{+3.87}$ \\
&$\Gamma$ & 2.04 $\pm $ 0.04 & 2.43 $\pm$ 0.10 \\
&log($\xi$) & 2.7 $\pm $ 0.06 & 3.91 $\pm$ 0.15 \\
&E$_{cut}$ (keV) & 16.6 $\pm $ 0.8 & 19.2 $_{-1.8}^{+6.3}$ \\
&R$_{refl}$ & 1.68$_{-0.33}^{+1.19}$ & 3.05 $_{-0.65}^{+1.11}$ \\
&Norm (10$^{-3}$) & 7.2$_{-1.2}^{+3.8}$ & 11.8 $\pm$ 3.0 \\
&Flux (10$^{-10}$ c.g.s) & 64.6$_{-3.8}^{+2.8}$ & 204$_{-25}^{+43}$ \\
\hline
&$\chi^2_\nu$($\chi^2/dof)$ & 1.14 (4669/4094) & 1.04 (180/173) \\
&Total flux (10$^{-10}$ c.g.s) & 70.8$\pm $ 3.1 & 220$_{-17}^{+41}$ \\
\hline
\end{tabular}
\medskip
\label{lpt}
\end{table*}
\begin{figure*}
\centering
\includegraphics[width=0.6\textwidth]{Fig3.eps}
\caption{The predicted ionisation profiles of an accretion disc illuminated by a lamppost X-ray source located at different heights above the central source. The curves are calculated using Eq.~\ref{eq:newxi} in the text, taken from \citet{ballantyne17}, assuming $\eta$=0.1, $\alpha$=0.3, $\lambda$=0.2 and $f$=0.45. The spin parameter is fixed at 0.17 for 4U 1728--34, and $r_{\mathrm{in}}$ and $r_{\mathrm{out}}$ are 5.44 $R_{g}$ and 400 $R_{g}$, respectively. The $R_R$, $R_z$ and $R_T$ factors at $r<12$ $R_{g}$ are fixed at their values at $r=12$ $R_{g}$ to avoid an unphysical break in $\xi(r,h)$. As shown in the legend, ionisation profiles at different heights are marked with different line styles and colours. We compare the predicted ionisation profiles with the ionisation parameters derived in this work and in \citet{mondal17}: the two observations in this work are labelled XMM-Newton and Suzaku, and the two simultaneous NuSTAR and Swift observations in \citet{mondal17} are labelled Ns1 and Ns2, respectively.}
\label{res}
\end{figure*}
\section{Discussion}
\citet{egron11} analyzed one XMM-Newton observation of 4U 1728--34 with a reflection model and constrained the inclination angle of the system to be between 44 and 60 degrees. \citet{mondal17} analyzed two simultaneous NuSTAR and Swift observations of 4U 1728--34 and constrained the inclination angle to be between 22 and 40 degrees. In this work we find that the inclination angle of 4U 1728--34 is about 49$\pm 5$ degrees from the fit to the Suzaku observation, consistent with the range of \citet{egron11} but somewhat larger than the value of \citet{mondal17}. We also found that the inclination angle is larger than 85 degrees and could not be well constrained with the XMM-Newton/RXTE data, similar to previous findings \citep[e.g.][]{pandel08,sanna13}. \citet{sanna13} analysed another neutron-star LMXB, 4U 1636--53, using six XMM-Newton observations with the PN camera in timing mode, plus simultaneous RXTE observations, and found high inclination angles in the fits, inconsistent with the absence of eclipses and dips in 4U 1636--53. They suggested that the PN timing-mode data may be affected by calibration issues, which lead to the high inclination angles in the fits. In this work, the high inclination angles derived from the XMM-Newton/RXTE observation may be due to the same calibration issues: when we fit the XMM-Newton spectrum alone, the inclination angle is still larger than 85 degrees. A problem with the calibration of the EPIC instrument would lead to a steeper spectrum, and hence to a higher inclination angle in the fits.
The fitting results show that the {\sc nthcomp} component dominated the broad-band energy spectrum in both the XMM-Newton/RXTE and Suzaku observations. This agrees with the conclusion of \citet{seifina11} that the energy spectrum of 4U 1728--34 in all states is dominated by the power-law component. \citet{seifina11} investigated more than 120 RXTE observations of 4U 1728--34 and found that most of the soft thermal emission from the accretion disc in 4U 1728--34 is reprocessed in the corona, so the direct disc emission is weak. In this work, thermal emission from the disc is not required in either the XMM-Newton/RXTE or the Suzaku observation, consistent with their finding. A further calculation shows that the upper limit of the weak {\sc diskbb} component in our fits is able to produce the observed Comptonised component. The absence of disc emission can also be a consequence of the relatively high column density, $N_{\rm H}$, of the interstellar medium along the line of sight to the source, which reduces the number of soft disc photons that we observe, thus leading to a weak disc component in the observed energy spectra.
\citet{ballantyne17} investigated the ionisation parameter of the accretion disc at a radius $r$ from the black hole, with the disc irradiated by an X-ray source at a height $h$ above the black hole, on its symmetry axis. \citet{ballantyne17} developed a formula that takes into account the effects of gravitational light-bending and the focusing of radiation onto the disc. According to their calculation, there is a strong ionisation gradient across the surface of the inner disc that depends on the black hole spin and the lamppost height. This model provides a good way to connect the height of the corona with the ionisation state and the inner radius of the accretion disc. For this we applied Eq.~10 of \citet{ballantyne17}:
\begin{equation}
\begin{aligned}
\xi(r,h)= & (5.44\times 10^{10}) \left ({\eta \over 0.1} \right )^{-2} \left (
{\alpha \over 0.1} \right) \lambda^3 \left ( {r \over r_g} \right )^{-3/2} R_z^{-2} R_T^{-1} \\
& \times R_R^3 f(1-f)^3 F(r,h) g_{lp}^2 \mathcal{A}^{-1} \mathrm{erg\ cm\ s^{-1}},
\label{eq:newxi}
\end{aligned}
\end{equation}
where $\xi(r,h)$ is the ionisation parameter of the disc at a radius $r$ from the central source where the illuminating lamppost is at a height $h$, $\eta$ is the radiative efficiency of the accretion process, $\alpha$ is the viscosity parameter, $\lambda$ is the Eddington ratio, $\lambda=L_{bol}/L_{Edd}$, $R_R, R_z$ and $R_T$ are relativistic corrections to the Newtonian $\alpha$-disc equations \citep{krolik99}, $f$ is the coronal dissipation fraction, and $g_{lp}$=$\nu_{\mathrm{disc}}$/$\nu_{\mathrm{src}}$ is the ratio between the measured frequency of a photon striking the disc and its frequency at the source \citep{dauser13}. The function $F(r,h)$ in the equation describes the shape of the irradiation pattern \citep{Fukumura07}, and $\mathcal{A}$ is the integral of $F(r,h)\times g_{\mathrm{lp}}^2$ over the entire disc, $\mathcal{A}=\int_{r_{\mathrm{in}}}^{r_{\mathrm{out}}} F(r,h) g_{\mathrm{lp}}^2 dS(r)$.
For the calculations, we set $\eta=0.1$, $\alpha=0.3$ \citep{ballantyne17,penna13}, $\lambda=0.2$ and $f=0.45$ \citep{vasud07}. The spin parameter, $a$, is fixed at 0.17 for 4U 1728--34, and $r_{\mathrm{in}}$ and $r_{\mathrm{out}}$ are 5.44 $R_{g}$ and 400 $R_{g}$, respectively. We found a break in the ionisation profiles around $r=12$ $R_{g}$, similar to the one at $r=4$ $R_{g}$ reported in the simulation of a fast-rotating black-hole system in \citet{ballantyne17}. This break is due to the divergence of $R_R$ and $R_T$ as the radius approaches the innermost stable circular orbit (ISCO). We then applied the same procedure as \citet{ballantyne17} to fix this: the $R_R$, $R_z$ and $R_T$ factors at $r<12$ $R_{g}$ are fixed at their values at $r=12$ $R_{g}$.
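To illustrate how $\xi(r,h)$ scales with the adopted parameters, the following Python sketch evaluates a deliberately simplified, purely Newtonian rendering of Eq.~\ref{eq:newxi}: all relativistic factors ($R_R$, $R_z$, $R_T$ and $g_{\mathrm{lp}}$) are set to unity and $F(r,h)$ is replaced by the Newtonian lamppost pattern $h/(r^2+h^2)^{3/2}$. All function names are hypothetical, and this is not the full calculation behind the figure.

```python
import math

# Newtonian sketch of the ionisation equation: relativistic factors
# (R_R, R_z, R_T, g_lp) set to 1; F(r,h) = h/(r^2+h^2)^(3/2).
ETA, ALPHA, LAM, F_CORONA = 0.1, 0.3, 0.2, 0.45
R_IN, R_OUT = 5.44, 400.0            # disc radii in units of R_g

def irradiation(r, h):
    # Newtonian lamppost irradiation pattern (stand-in for F(r,h))
    return h / (r*r + h*h)**1.5

def disc_integral(h, n=4000):
    # A = integral of F(r,h) over the disc surface, dS = 2*pi*r dr
    total, dr = 0.0, (R_OUT - R_IN) / n
    for i in range(n):
        r = R_IN + (i + 0.5) * dr
        total += irradiation(r, h) * 2.0 * math.pi * r * dr
    return total

def xi(r, h):
    # Dimensionless prefactors follow the structure of Eq. (1)
    a = disc_integral(h)
    return (5.44e10 * (ETA/0.1)**-2 * (ALPHA/0.1) * LAM**3
            * r**-1.5 * F_CORONA * (1.0 - F_CORONA)**3
            * irradiation(r, h) / a)

for h in (5.0, 15.0, 30.0):
    print(f"h={h:5.1f} R_g  xi(10 R_g) = {xi(10.0, h):.3g}")
```

In this Newtonian limit the profile simply falls off with radius; the break at $r=12$ $R_{g}$ discussed above arises only when the relativistic correction factors are included.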
Figure \ref{res} shows the ionisation profiles when the illuminating corona is located at different heights above the disc. We compared the ionisation curves predicted by the formula with the values derived from the fits with the model {\sc relxilllp} in this work and with those from the fits with the model {\sc relxill} to two simultaneous NuSTAR and Swift observations in \citet{mondal17}. As shown in the figure, the ionisation parameters predicted by the model of \citet{ballantyne17} are significantly smaller than the values derived from the observations: the predicted ionisation parameter is below 100 at $r>10$ $R_{g}$, and although it increases rapidly as the radius decreases further, it remains lower than the range deduced from the fits.
The difference between the ionisation parameter predicted by the model and the one deduced from the data could be bridged by changing the values of certain parameters in the model. However, the new values that the parameters must take for the model to qualitatively match the data either contradict observational results or are unphysical. For instance, for the model to match the data, the luminosity of the source should be 40\% of the Eddington luminosity, $\lambda = 0.4$ in Eq.~\ref{eq:newxi}, whereas the spectral fits indicate that the luminosity in this observation was about 20\% of Eddington. To make it 40\% of Eddington, either the mass of the neutron star needs to be 0.7 M$_\odot$, or the distance should be as large as 7 kpc, both inconsistent with previous observational results. For instance, \citet{disalvo00} showed that the distance to 4U 1728--34 is about 5.1 kpc assuming a 1.4 M$_\odot$ neutron star. \citet{galloway03} derived a distance of 4.4 (4.8) kpc for a $M=1.4$ (2.0) M$_\odot$ neutron star with cosmic atmospheric abundance (X=0.7), while the distance could be up to 30\% larger ($\sim$6.2 kpc) if the atmosphere of the neutron star consisted purely of helium. The model would also match the data if we increased the viscosity parameter, $\alpha$, from 0.3 to 1, or decreased the radiative efficiency from 0.1 to 0.03; however, in these cases the adopted values deviate significantly from those deduced in previous work. Typically $\alpha\sim0.1$ \citep[e.g.][]{accretion02}, whereas some studies yield even smaller values of this parameter. For example, in the black-hole candidate GRS 1915+105, \citet{Belloni97} found that the viscosity parameter is only 0.00004 and 0.0005 for the Schwarzschild and extreme Kerr cases, respectively.
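The mass/distance trade-off quoted above follows from $\lambda \propto d^2/M$ at fixed observed flux, since $L = 4\pi d^2 F$ and $L_{\rm Edd} \propto M$. A quick numerical check, using the standard hydrogen-rich Eddington limit $L_{\rm Edd} \simeq 1.26\times10^{38}\,(M/{\rm M}_\odot)$ erg s$^{-1}$ (the flux value below is hypothetical, chosen only so that $\lambda=0.2$ at 5.1 kpc and 1.4 M$_\odot$):

```python
import math

L_EDD_PER_MSUN = 1.26e38          # erg/s, hydrogen-rich Eddington limit
KPC_CM = 3.086e21                 # cm per kpc

def eddington_ratio(flux_cgs, d_kpc, mass_msun):
    # lambda = L / L_Edd, with L = 4*pi*d^2 * F
    d_cm = d_kpc * KPC_CM
    lum = 4.0 * math.pi * d_cm**2 * flux_cgs
    return lum / (L_EDD_PER_MSUN * mass_msun)

# Hypothetical flux giving lambda = 0.2 at d = 5.1 kpc, M = 1.4 Msun
flux = 0.2 * L_EDD_PER_MSUN * 1.4 / (4.0 * math.pi * (5.1 * KPC_CM)**2)

print(eddington_ratio(flux, 5.1, 1.4))                 # ~0.2 by construction
print(eddington_ratio(flux, 5.1, 0.7))                 # ~0.4: halved mass
print(eddington_ratio(flux, 5.1*math.sqrt(2), 1.4))    # ~0.4: d ~ 7.2 kpc
```

Either halving the mass or stretching the distance by $\sqrt{2}$ (5.1 kpc $\rightarrow$ 7.2 kpc) doubles the inferred Eddington ratio, which is the trade-off described in the text.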
The inconsistency between the ionisation parameter predicted by the model and the ones obtained from the fits may be due to the fact that the disc is also illuminated by radiation from the neutron star and its boundary layer, which is not included in the standard reflection models. \citet{cackett10} investigated the iron emission lines in 10 neutron-star LMXBs, and found that the illuminating photons likely come from the neutron star and the boundary layer. Their analysis showed that the spectra could be well fitted with a reflection model in which the accretion disc is illuminated by the blackbody component. They further calculated the maximum height of the boundary layer, $z<24$ km, consistent with the scenario in which the boundary layer illuminates a geometrically thin disc. A subsequent analysis of the bright atoll source 4U 1705--44 by \citet{daa10} also showed that the spectrum could be well fitted by the sum of two thermal components together with a reflection component, wherein the blackbody component provides the illuminating photons to the disc. Their analysis provides another example that the reflection component in neutron-star LMXBs may come from hard X-ray thermal irradiation, which is likely the emission from the boundary layer.
There were several type I X-ray bursts in the observations analysed in this work, and hence the burst emission may also illuminate and ionise the accretion disc. \citet{ballantyne04} presented models of X-ray reflection from a constant-density slab illuminated by the blackbody emission of a neutron-star burst. The simulated spectra show a prominent Fe line, while the reflection spectrum drops off quickly above 10 keV compared to profiles computed assuming power-law illumination. Nevertheless, as calculated in \citet{ballantyne04a}, the recombination time for He-like Fe (Fe XXV) is only $\sim$10$^{-4}$ s, so the disc returns to its previous ionisation state soon after a burst. Therefore, the bursts likely make little or no contribution to the average ionisation state of the disc.
\section{Summary}
In this work we used an XMM-Newton plus simultaneous RXTE observation and a Suzaku observation to study the spectral properties of the neutron-star LMXB 4U 1728--34. We found that the spectra could be well fitted with a model that does not include thermal emission from the accretion disc: the continuum is dominated by the hard component, and both of the full reflection models {\sc relxill} and {\sc relxilllp} provide good fits to the spectra. The inclination angle of 4U 1728--34 derived from the Suzaku observation is 49$\pm$5 degrees, while it is not well constrained in the XMM-Newton/RXTE observation. The accretion disc is moderately to highly ionised in these two observations, with the illuminating source located at similar heights in the Suzaku and XMM-Newton/RXTE observations. We found that the ionisation parameters derived in this work and in \citet{mondal17} are larger than, and inconsistent with, the ones predicted by the lamppost model of \citet{ballantyne17}, assuming that the disc is irradiated only by an X-ray source above the compact object. This inconsistency may be due to the contribution of the neutron star and its boundary layer to the reflected emission, which is not included in the model. A model that accounts for the roles of both the neutron star (and its boundary layer) and the corona in illuminating and ionising the disc is needed to investigate neutron-star LMXBs; in the meantime, high-quality observational data are also required to break possible degeneracies in the spectral analysis.
This research has made use of data obtained from the High Energy Astrophysics Science Archive Research Center (HEASARC), provided by NASA's Goddard Space Flight Center. This research made use of NASA's Astrophysics Data System. Lyu is supported by National Natural Science Foundation of China (grant No.11803025); and the Hunan Provincial Natural Science Foundation (grant No. 2018JJ3483) and Hunan Education Department Foundation (grant No. 17C1520). F.Y.X. is supported by the Joint Research Funds in Astronomy (U1531108 and U1731106). J.F.Z. thanks the supports from the National Natural Science Foundation of China (grant No.11703020); and the Hunan Provincial Natural Science Foundation (grant No. 2018JJ3484).
\clearpage
\bibliographystyle{mn}
\section{Introduction} \label{sec:intro}
Perhaps the most distinguishing characteristic of
granular materials is their internal heterogeneity,
particularly when viewed at the micro-scale of
individual particles or particle clusters.
Granular materials often consist of a wide range of particle
sizes and shapes, and these particles are usually arranged
in an irregular manner.
This geometric and topologic multiformity produces
nonuniform distributions of internal force and deformation,
which are often expressed in spatial and temporal patterning.
In the paper, we catalog the many forms in which heterogeneity
may be manifest, and we provide a classification
scheme for its measurement.
Examples of several forms of heterogeneity are
presented, and certain expressions of their evolution and
spatial patterning are described.
Although the proposed classification scheme applies to both
two- and three-dimensional (2D and 3D) granular materials, to
particles of arbitrary shape and composition, to both sparse and dense
packings, and to both dynamic and quasi-static deformations,
the paper illustrates the classification within a two-dimensional
framework and with a 2D example of the quasi-static deformation
of a dense disk assembly.
\par
In Section~\ref{sec:classification} we consider
a classification scheme
for heterogeneity and the various forms in which it can be expressed
and measured.
Section~\ref{sec:methods} describes the simulation methods that
are used to explore several forms of heterogeneity.
In the same section, we also consider a means of
measuring the heterogeneity of vector and tensor objects.
Section~\ref{sec:results} presents experimental results and characterizes
several types of heterogeneity and their evolution during
biaxial compression.
\section{Classifying heterogeneity} \label{sec:classification}
Table~\ref{table:class1} gives a classification of material characteristics
that can manifest heterogeneity in granular materials.
\begin{table}
\centering
\caption{Heterogeneity categories and references to experimental studies}
\label{table:class1}
\input{Table1.tex}
\end{table}
The table references sample experimental studies in which these
characteristics have been measured, although the short lists of references
are far from exhaustive.
The characteristics in the table are organized within a hierarchy of
heterogeneity categories: topologic,
geometric, kinematic, static, and constitutive.
These categories are described in a general manner in the next paragraph.
Table~\ref{table:class2} presents a short list of informational
forms that can be used for describing each characteristic.
\begin{table}
\caption{Analyses of heterogeneity}
\label{table:class2}
\centering
\begin{tabular}{ll}
\hline\noalign{\smallskip}
Informational forms & Examples\\
\noalign{\smallskip}\hline\noalign{\smallskip}
Central tendency & Mean, median, modes\\
Dispersion & Standard deviation, \\
& variance, \\
& coefficient of variation,\\
& histograms,\\
& probability and cumulative \\
& \quad distributions,\\
& quartile plots\\
Spatial correlation & n-point correlations,\\
& correlation lengths\\
Temporal correlation & Rates of change \\
Spatial and temporal & Spatial plots,\\
\quad patterning & time series analyses,\\
& spatial domain transforms\\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
The arrangement of the forms in Table~\ref{table:class2}
reflects their complexity and the
usual historical order in which measurements have been proposed
and collected.
The simplest form of information is some measure of central tendency:
a mean, median, or modal value.
Heterogeneity implies diversity and fluctuation, and this
dispersion in measured
values can be expressed as a variance or standard deviation,
with standard graphical means such as histograms, or by
fitting experimental results to an appropriate probability distribution.
Of greater complexity are measurements of temporal correlation
(e.g. rates of change) and spatial correlation.
The most complex data analyses can also disclose the spatial and
temporal patterning of heterogeneity.
The paper presents data on the six characteristics
that are accompanied by section numbers in Table~\ref{table:class1}, and
these characteristics are explored with a range of the
informational forms that are given in
Table~\ref{table:class2}.
\par
Table~\ref{table:class1} begins with topologic characteristics,
which concern the arrangement of the particles and their contacts,
but without reference to their position, size, or orientation.
This information can be expressed as a \emph{particle graph}
for both 2D and 3D assembles, which
gives the topologic connectivity of the particles in a packing,
as explained in~\cite{Satake:1993b}.
The paper presents data on the variation in local topology and
its evolution during loading.
A discrete metric is also proposed as a means of tracking
inter-particle processes between distant particles.
Geometric information includes the additional descriptors of
length, shape, and angle, which relate to the positional arrangements,
orientations, and sizes of particles.
Together, topology and geometry describe the \emph{fabric} of
a granular assembly.
The paper characterizes the evolution of one form of heterogeneity in
this fabric.
Kinematic information (Table~\ref{table:class1}) concerns
the movements and rotations of particles,
relative inter-particle movements,
and the local deformations within small groups
of particles.
The paper gives examples of heterogeneous movements and
deformations, the spatial correlation of inter-particle movements, and
the patterning of local rotations and deformations.
Static (or statical) information
(Table~\ref{table:class1})
involves the transmission of
force and stress within a material, and the paper depicts the local
diversity of stress and its evolution during
loading.
Table~\ref{table:class1} also includes the category of \emph{constitutive}
heterogeneity (or, perhaps, mechanical heterogeneity), which would
involve the diversity in local material stiffness.
Except for simple two-particle models that rely on uniform
strain assumptions, there is, as yet, no consistent vocabulary or
experimental methods for measuring and characterizing
this form of heterogeneity.
The reader is referred to the recent work of
Gaspar and Koenders~\cite{Gaspar:2001b} and Gaspar~\cite{Gaspar:2002a},
which may
provide a needed framework for characterizing constitutive
heterogeneity.
\par
As a simple example of the classification scheme in Table~\ref{table:class2},
we could consider the diversity of grain size in a granular material.
Methods for measuring and describing particle size, such
as sieving methods, are standardized and widely applied, so
that references to these methods are excluded from Table~\ref{table:class2}.
These methods can readily depict a representative (central)
grain size as well as the dispersion of sizes.
Certain processes, such as shearing and compression, can cause
particle breakage, which could be measured with temporal correlations of
the size distribution.
Processes that promote size segregation could be studied with
methods that reveal the spatial correlation of size.
Size segregation usually leads to a spatial patterning of the local
size distribution, and processes that produce a periodic recurrence in
such patterning would lead to both spatial and temporal patterning.
\section{Methods and notation} \label{sec:methods}
A conventional implementation of the Discrete Element Method (DEM)
was used to simulate the quasi-static behavior of a large 2D
granular assembly and to illustrate different manifestations
of internal heterogeneity and their evolution.
\subsection{Simulation methods}
The study employs a square assembly containing 10,816 circular
disks of multiple diameters.
The disk sizes are randomly distributed over
a fairly small range, between 0.56$\overline{D}$ and 1.7$\overline{D}$,
where $\overline{D}$ is the mean particle diameter.
The material was created by slowly and isotropically compacting
a sparse arrangement of particles, during which friction between particle
pairs was disallowed
(friction was later restored for biaxial compression tests).
This compaction technique produced a
material that was dense, random, and isotropic,
at least when viewed at a macro-scale.
The average initial void ratio was 0.1715 (solid fraction of $0.854$),
the average coordination number was 3.95, and the average overlap between neighboring particles was
about 9$\times$10$^{-4}$ of $\overline{D}$.
The assembly was surrounded by periodic boundaries, a choice that would
eliminate the topologic and geometric nonuniformity that
might otherwise occur in the vicinity of rigid platens or assembly
corners.
The initial height and width of the assembly were each about
$102\overline{D}$.
\par
All examples of heterogeneity were collected from a single loading test
of biaxial compression.
The height of the assembly was reduced at a constant rate of compressive
strain ($\dot{\varepsilon}_{22}<0$), while maintaining a constant average
horizontal stress ($\dot{\sigma}_{11}=0$).
About 200,000 time steps were required to reach
the final vertical strain, $\overline{\varepsilon}_{22}$, of $-0.01$,
and at this
rate of loading, the average imbalance of
force on a particle was less than 1$\times$10$^{-4}$
times the average contact force.
\par
During biaxial compression,
a simple force mechanism was employed between contacting particles.
Linear normal and tangential contact springs were assigned equal
stiffnesses ($k_{\mathrm{n}}=k_{\mathrm{t}}$),
and slipping between particles would occur whenever
the contact friction coefficient of 0.50 was attained.
\par
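The contact law described above, linear normal and tangential springs of equal stiffness with the tangential force capped by Coulomb friction at $\mu=0.50$, can be sketched as follows. The function and variable names, and the numerical stiffness value, are illustrative assumptions, not those of the actual simulation code.

```python
import math

# Minimal sketch of a linear-spring contact with Coulomb friction.
K_N = K_T = 1.0e6     # assumed equal normal/tangential stiffness (k_n = k_t)
MU = 0.5              # contact friction coefficient

def contact_force(overlap, tangential_stretch):
    """Return (normal, tangential) force at one disk-disk contact."""
    if overlap <= 0.0:
        return 0.0, 0.0                  # particles not in contact
    f_n = K_N * overlap                  # linear normal spring
    f_t = K_T * tangential_stretch       # linear tangential spring
    limit = MU * f_n                     # slip once |f_t| reaches mu * f_n
    if abs(f_t) > limit:
        f_t = math.copysign(limit, f_t)  # Coulomb cap: contact slips
    return f_n, f_t

print(contact_force(1e-4, 2e-5))   # elastic: tangential spring holds
print(contact_force(1e-4, 1e-4))   # sliding: tangential force capped
```

With $k_{\mathrm{n}}=k_{\mathrm{t}}$, slip occurs whenever the tangential stretch exceeds $\mu$ times the overlap, which is the condition used during the biaxial compression test.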
The average, macro-scale mechanical behavior is shown in
Fig.~\ref{fig:crs_q}, which gives the dimensionless compressive stress
\mbox{$\Delta\overline{\sigma}_{22}/\overline{p}_{\mathrm{o}}$},
where $\overline{p}_{\mathrm{o}}$ is the initial mean stress,
$\overline{p}_{\mathrm{o}}=(\overline{\sigma}_{11}+\overline{\sigma}_{22})/2$.
\begin{figure}
\centering
\includegraphics{crs_q.eps}
\caption{Evolution of the average compressive stress within the assembly
of 10,816 circular disks during biaxial compression.}
\label{fig:crs_q}
\end{figure}
This initial mean stress was about 5$\times$10$^{-4}$ times the normal
contact stiffness,~$k_{\mathrm{n}}$.
\par
The rates of several micro-quantities
(position, force, orientation, etc.)
were periodically measured during
the loading.
These rates were calculated by first collecting
the assembly's status at two instants that were separated by
100 time steps, and the difference in these states was then used
to compute the rates.
Because time is used in quasi-static DEM simulations
as simply a means of ordering or parameterizing events,
the rates of micro-quantities will usually be expressed
in a dimensionless form by dividing by an average,
macro-scale rate (average stress rate, average strain rate, etc.).
\subsection{Notation}\label{sec:notation}
Vectors and tensors are represented by bold Roman letters,
lower and upper case respectively.
Their inner products are computed as
\begin{equation} \label{eq:innerp}
\mathbf{a} \cdot \mathbf{b} = a_{p}b_{p}, \quad
\mathbf{A} \cdot \mathbf{B} = A_{pq}B_{pq},
\end{equation}
with the associated norms
\begin{equation} \label{eq:norm}
|\mathbf{a}| = (\mathbf{a} \cdot \mathbf{a})^{1/2}, \quad
|\mathbf{A}| = (\mathbf{A} \cdot \mathbf{A})^{1/2}.
\end{equation}
A juxtaposed tensor and vector will represent the
conventional product
\begin{equation}
\mathbf{A} \mathbf{b} = A_{pq}b_{q},
\end{equation}
and juxtaposed tensors represent the product
\begin{equation}
\mathbf{A} \mathbf{B} = A_{pr}B_{qr}.
\end{equation}
Various quantities are measured at both micro and macro scales
so that the variability of the micro-scale measurements
can be deduced.
A macro-level, assembly average is indicated with
an overline ($\overline{\mathbf{L}}$, $\overline{\sigma}_{22}$,
$\overline{p}_{\mathrm{o}}$, $\overline{q}$);
whereas local, micro-level quantities appear with superscripts
($\mathbf{L}^{i}$, $\boldsymbol{\sigma}^{k}$, $\widehat{\mathbf{v}}^{j}$,
$p^{k}$, Table~\ref{table:superscripts}).
\begin{table}
\caption{Superscript notation}
\label{table:superscripts}
\centering
\begin{tabular}{cl}
\hline\noalign{\smallskip}
Index & Usage \\
\noalign{\smallskip}\hline\noalign{\smallskip}
$i$ & A polygonal void cell having $m^{i}$ edges and \\
& vertices. An $m$-tuple of particles or contacts,\\
& $i=(k_{1},k_{2},\ldots,k_{m^{i}})$ or
$i=(j_{1},j_{2},\ldots,j_{m^{i}})$\\
$j$ & A contacting pair of particles $(k_{1},k_{2})$\\
$k$ & A single particle\\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
The ``$k$'' superscript is used with quantities that can be measured
within a single particle or its immediate vicinity;
the ``$i$'' superscript is assigned to quantities that are
measured within a single void cell (the dual of particles);
and the ``$j$'' superscript is used for quantities associated
with a pair of particles or a pair of void cells
(e.g. contacts, contact forces, branch vectors,
and inter-particle velocities).
No contractive summation is implied with superscripts,
e.g. $a^{j}b^{j}$.
\par
The non-uniformity of scalar, vector, and tensor quantities is
considered in the paper.
A consistent notation is used to express the conformity (or diversity)
of a local quantity $\mathbf{a}^{\mathrm{local}}$
with respect to
the corresponding assembly average $\overline{\mathbf{a}}$.
The pair $\mathbf{a}^{\mathrm{local}}$ and $\overline{\mathbf{a}}$
may be scalars, vectors, or tensors.
Three dimensionless scalars measure the \emph{participation}
of $\mathbf{a}^{\mathrm{local}}$
($= \mathbf{a}^{\mathrm{local}} \!\parallel \overline{\mathbf{a}}$)
in the assembly-average $\overline{\mathbf{a}}$;
the \emph{non-conformity} of $\mathbf{a}^{\mathrm{local}}$
($= \mathbf{a}^{\mathrm{local}} \!\perp \overline{\mathbf{a}}$);
and the \emph{alignment} of $\mathbf{a}^{\mathrm{local}}$
($= \mathbf{a}^{\mathrm{local}} \!\circ \overline{\mathbf{a}}$)
with respect to the assembly-average~$\overline{\mathbf{a}}$:
\begin{align}
\mathbf{a}^{\mathrm{local}} \!\parallel \overline{\mathbf{a}} &=
\frac{1}{|\overline{\mathbf{a}}|^{2}}
\left( \mathbf{a}^{\mathrm{local}} \cdot \overline{\mathbf{a}}\right)
\label{eq:parallel}\\
\mathbf{a}^{\mathrm{local}} \!\perp \overline{\mathbf{a}} &=
\frac{1}{|\overline{\mathbf{a}}|}
\left| \mathbf{a}^{\mathrm{local}} -
(\mathbf{a}^{\mathrm{local}} \parallel \overline{\mathbf{a}})
\overline{\mathbf{a}}
\right|
\label{eq:perp}\\
\mathbf{a}^{\mathrm{local}} \!\circ \overline{\mathbf{a}} &=
\frac{1}{|\mathbf{a}^{\mathrm{local}}|\,|\overline{\mathbf{a}}|}
\left( \mathbf{a}^{\mathrm{local}} \cdot \overline{\mathbf{a}}\right)
\label{eq:circ}
\end{align}
The participation and non-conformity in Eqs.~\ref{eq:parallel}
and~\ref{eq:perp} are the
dimensionless magnitudes of $\mathbf{a}^{\mathrm{local}}$
in directions parallel and perpendicular to $\overline{\mathbf{a}}$,
and relative to the length of $\overline{\mathbf{a}}$.
The alignment $\mathbf{a}^{\mathrm{local}} \!\circ \overline{\mathbf{a}}$
is the cosine of the angle separating
$\mathbf{a}^{\mathrm{local}}$ and $\overline{\mathbf{a}}$.
These quantities are unambiguous when $\mathbf{a}$ is a
vector or tensor.
If $\mathbf{a}$ is a scalar,
then $\mathbf{a}^{\mathrm{local}} \!\parallel \overline{\mathbf{a}}$
is simply the quotient $a^{\mathrm{local}}/\,\overline{a}$;
$\mathbf{a}^{\mathrm{local}} \!\!\perp \overline{\mathbf{a}}$
is zero;
and $\mathbf{a}^{\mathrm{local}} \!\circ \overline{\mathbf{a}}$
is the sign of the product, $\mathrm{sgn}(a^{\mathrm{local}}\,\overline{a})$.
By reducing vector and tensor objects to the scalars in
Eqs.~(\ref{eq:parallel}--\ref{eq:circ}),
we can compute conventional statistical measures such as the
mean, standard deviation, and coefficient of variation.
These measures will be represented with the notation
$\mathsf{Mean}(\cdot)$, $\mathsf{Std}(\cdot)$,
and $\mathsf{Cov}(\cdot)$,
where the coefficient of variation
$\mathsf{Cov}(\cdot) = \mathsf{Std}(\cdot) / \mathsf{Mean}(\cdot)$.
\par
As an example with vector quantities $\mathbf{a}$, we can consider
two different sets of two-dimensional vectors $\mathbf{a}^{\mathrm{local}}$,
and this example can serve as a reference case for comparing
the results given later in the paper.
In both sets, the vectors $\mathbf{a}^{\mathrm{local}}$ all have
unit length.
In the first set, the vectors
$\mathbf{a}^{\mathrm{local}}$ have a uniform direction that is aligned with
the reference vector $\overline{\mathbf{a}}$;
but in the second set, the vectors $\mathbf{a}^{\mathrm{local}}$
have uniformly random directions.
In the example, the reference vector $\overline{\mathbf{a}}$
is also assumed to have unit length.
The four statistical measures
$\mathsf{Mean}(\mathbf{a}^{\mathrm{local}} \!\parallel \overline{\mathbf{a}})$,
$\mathsf{Std}(\mathbf{a}^{\mathrm{local}} \!\parallel \overline{\mathbf{a}})$,
$\mathsf{Mean}(\mathbf{a}^{\mathrm{local}} \!\perp \overline{\mathbf{a}})$,
and $\mathsf{Mean}(\mathbf{a}^{\mathrm{local}} \circ \overline{\mathbf{a}})$
are used in the paper as indicators of local non-conformity and
heterogeneity, and their values
for this simple example are
summarized in Table~\ref{table:values}.
\begin{table}
\caption{Statistics of uniform and random vector sets}
\label{table:values}
\centering
\begin{tabular}{lcc}
\hline\noalign{\smallskip}
&\multicolumn{2}{c}{Vectors $\mathbf{a}^{\mathrm{local}}$}\\
& Uniform,& \\
Measure & aligned & Random \\
\noalign{\smallskip}\hline\noalign{\smallskip}
$\mathsf{Mean}(\mathbf{a}^{\mathrm{local}} \parallel \overline{\mathbf{a}})$ &
1 & 0 \\
$\mathsf{Std}(\mathbf{a}^{\mathrm{local}} \parallel \overline{\mathbf{a}})$ &
0 & $1/2$ \\
$\mathsf{Mean}(\mathbf{a}^{\mathrm{local}} \perp \overline{\mathbf{a}})$ &
0 & $2/\pi$ \\
$\mathsf{Mean}(\mathbf{a}^{\mathrm{local}} \circ \overline{\mathbf{a}})$ &
1 & 0 \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
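The random-set entries in Table~\ref{table:values} can be verified numerically. The sketch below is illustrative Python, not code from the study; the forms of the parallel and perpendicular measures are assumed from their descriptions above (signed projection and perpendicular magnitude, each relative to $|\overline{\mathbf{a}}|$).

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000

# Unit reference vector, as in the example.
a_bar = np.array([1.0, 0.0])
norm_bar = np.linalg.norm(a_bar)

# Random set: unit vectors a_local with uniformly random directions.
theta = rng.uniform(0.0, 2.0 * np.pi, N)
a_local = np.column_stack((np.cos(theta), np.sin(theta)))

# Assumed forms of the measures: signed parallel component and
# perpendicular magnitude, both relative to |a_bar|.
par = a_local @ a_bar / norm_bar**2
perp = np.linalg.norm(a_local - par[:, None] * a_bar, axis=1) / norm_bar

# Alignment: cosine of the angle between a_local and a_bar.
circ = a_local @ a_bar / (np.linalg.norm(a_local, axis=1) * norm_bar)

print(par.mean(), perp.mean(), circ.mean())  # ~0, ~2/pi, ~0
```

For the uniform aligned set, the same computation trivially returns 1, 0, and 1, matching the first column of the table.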
In the simulated biaxial loading of 10,816 circular disks, certain local
vector and tensor quantities are found to have measured
values of
$\mathsf{Std}(\mathbf{a}^{\mathrm{local}} \!\parallel \overline{\mathbf{a}})$
and $\mathsf{Mean}(\mathbf{a}^{\mathrm{local}} \!\perp \overline{\mathbf{a}})$
that greatly exceed those of the random set, as given in the
final column of Table~\ref{table:values}.
These large values are due to variations in the magnitudes of the
local quantities as well as in their directions.
\section{Heterogeneity measurements} \label{sec:results}
The experimental results are analyzed for indications of four
categories of heterogeneity:
topologic, geometric (fabric), kinematic, and static.
\subsection{Topologic heterogeneity} \label{sec:topology}
In a 2D setting, the topology of an assembly can be
described by the \emph{particle graph} of its particles
(the graph vertices) and their contacts (the graph edges)~\cite{Satake:1993b}.
The particle graph is associated with the Voronoi-Dirichlet
tessellation of a 2D region, except that the particle
graph admits only the real contacts as graph edges.
The faces of the planar graph are polygonal void cells, which are
enclosed by the circuits of contacting particles
(an example void cell is shaded in Fig.~\ref{fig:graph}).
\begin{figure}
\centering
\includegraphics{graph.eps}
\caption{Particle graph of a 2D granular assembly. A single
void cell is shaded. The void cells labeled~a, b, and~c have
valences of~6, 4, and~3 respectively.}
\label{fig:graph}
\end{figure}
For this topologic description of a 2D granular material,
the simplest local topologic measures are the local
coordination number $n^{k}$ and the local valence $m^{i}$, defined,
respectively, as the number of contacts of a single particle $k$ and
the number of edges of a single void cell $i$
(see Fig.~\ref{fig:graph} for examples of valence).
Because gravity is absent in the current simulations,
some particles will be unattached and, hence, excluded from the
particle graph.
The effective average coordination number $\overline{n}_{\mathrm{eff}}$
of the attached particles will be somewhat larger than the coordination
number $\overline{n}$ that includes both attached and unattached
particles~\cite{Kuhn:1999a,Thornton:2000a}.
Dense assemblies have large coordination numbers and small valences,
but during biaxial compression, the average effective
coordination number is reduced, while the average valence
increases~\cite{Kuhn:1999a,Thornton:2000a}.
In the simulation of biaxial compression, $\overline{n}_{\mathrm{eff}}$
is reduced from 4.14 in the initial particle arrangement to a value
of 3.50 at the final compressive strain, $\overline{\varepsilon}_{22}=-0.01$.
The average valence $\overline{m}$ increases from 3.87 to 4.66.
\par
A simple measure of topologic nonuniformity is the dispersion in
the local values of $n^{k}$ and $m^{i}$.
Figure~\ref{fig:topology} shows the evolution of the coefficients
of variation of these two local topologic measures.
\begin{figure}
\centering
\includegraphics{topology.eps}
\caption{The evolution of two measures of topologic heterogeneity
during biaxial compression:
the coefficients of variation ($\mathsf{Cov}$)
of the local coordination number ($n^{k}$)
and local valence ($m^{i}$).}
\label{fig:topology}
\end{figure}
Together, the results indicate an increase in topologic heterogeneity during
loading.
The large increase in the dispersion of local valence,
as expressed by the coefficient of variation $\mathsf{Cov}(m^{i})$,
is consistent with the results of
Tsuchikura and Satake~\cite{Tsuchikura:2001a},
who have shown that the sizes of void cells become more diverse
during biaxial compression.
The increase in the coefficient of variation of the local coordination
number, $\mathsf{Cov}(n^{k}) = \mathsf{Std}(n^{k}) / \mathsf{Mean}(n^{k})$,
is due, in part, to a reduction in the mean coordination number.
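As a concrete illustration (with a small hypothetical contact list, not the simulation data), the local coordination numbers $n^{k}$ and their coefficient of variation follow directly from the particle graph:

```python
import numpy as np

# Hypothetical particle graph: each edge is a contacting pair (k1, k2).
contacts = [(0, 1), (0, 2), (0, 3), (1, 2), (2, 3), (3, 4), (4, 1)]
num_particles = 5

# Local coordination number n^k = number of contacts (graph degree).
n = np.zeros(num_particles, dtype=int)
for k1, k2 in contacts:
    n[k1] += 1
    n[k2] += 1

# Coefficient of variation Cov(n^k) = Std(n^k) / Mean(n^k).
cov_n = n.std() / n.mean()
print(list(n), n.mean(), cov_n)
```

The local valences $m^{i}$ would be treated identically, with the edge counts of the void cells in place of the particle degrees.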
\subsection{Geometric heterogeneity}\label{sec:fabric}
Geometric characteristics of granular materials are listed
in Table~\ref{table:class1},
and numerous studies have shown how the assembly averages of these
characteristics evolve during loading.
Fewer studies indicate how the internal diversity
of these characteristics changes with loading.
Tsuchikura and Satake~\cite{Tsuchikura:2001a} have developed
methods for examining the diversity of local fabric
in a 2D granular material and found that void cells become more
elongated during loading, but that the variation in elongation
remains fairly uniform.
To study this form of fabric anisotropy, they
propose a method for computing the magnitude of the anisotropy of a general
second order symmetric tensor $\mathbf{T}$ by considering
its deviatoric part $\mathbf{T}'$.
The self-product of $\mathbf{T}'$ yields a scalar measure $\beta$
of anisotropy:
\begin{equation}\label{eq:beta}
\mathbf{T}' \mathbf{T}' = \beta^2 \,\mathbf{I}\;.
\end{equation}
In their experimental study, they used $\beta$ to measure
the local anisotropy (elongation magnitude) of the loop tensors
of individual void cells.
The current study applies the same methods to analyze heterogeneity in
the local fabric tensor.
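In 2D, Eq.~\ref{eq:beta} admits a closed form: the deviatoric part of a symmetric tensor has eigenvalues $\pm\beta$, so $\beta = [\tfrac{1}{4}(T_{11}-T_{22})^{2} + T_{12}^{2}]^{1/2}$. A numerical sketch (with hypothetical tensor entries) confirming the identity:

```python
import numpy as np

# Example 2D symmetric second-order tensor (hypothetical values).
T = np.array([[1.2, 0.3],
              [0.3, 0.7]])

# Deviatoric part T' = T - (tr T / 2) I.
T_dev = T - 0.5 * np.trace(T) * np.eye(2)

# Anisotropy magnitude beta, closed form for 2D.
beta = np.sqrt(0.25 * (T[0, 0] - T[1, 1])**2 + T[0, 1]**2)

# Verify T' T' = beta^2 I (Eq. beta).
assert np.allclose(T_dev @ T_dev, beta**2 * np.eye(2))
print(beta)
```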
\par
Satake~\cite{Satake:1982a} proposed the fabric tensor as a measure
of particle arrangement in a granular material, and we
use a local form, $\mathbf{F}^{k}$, to analyze fabric heterogeneity:
\begin{equation}
F_{pq}^{k} = \frac{1}{n^{k}} \sum_{j=1}^{n^{k}} \eta_{p}^{\,j}\eta_{q}^{\,j}\;,
\end{equation}
where the tensor for a particle $k$ involves its $n^{k}$ contacts.
Superscript $j$ denotes the $j$th contact with particle $k$
(Table~\ref{table:superscripts}).
Vectors $\boldsymbol{\eta}^{j}$ are unit vectors in the directions
of the branch vectors that join the center of particle $k$ with the centers of
its contacting neighbors.
The assembly average $\overline{\mathbf{F}}$ is computed from the sum
of local values for all $N_{\mathrm{eff}}$ particles that
are included in (attached to) the particle graph,
\begin{equation} \label{eq:Fbar}
\overline{\mathbf{F}} = \frac{1}{2 N_{\mathrm{eff}}}
\sum_{k=1}^{N_{\mathrm{eff}}} n^{k} \mathbf{F}^{k} \;.
\end{equation}
Studies have shown that $\overline{\mathbf{F}}$ becomes increasingly
anisotropic during deviatoric loading, with the major
principal direction of $\overline{\mathbf{F}}$ becoming more aligned with the
direction of compressive loading~\cite{Oda:1982a,Thornton:2000a}.
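A sketch of these two equations with hypothetical contact directions; it also checks the identity $\mathrm{tr}\,\mathbf{F}^{k}=1$ (each $\boldsymbol{\eta}^{j}$ is a unit vector), which implies $\mathrm{tr}\,\overline{\mathbf{F}} = \overline{n}_{\mathrm{eff}}/2$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical assembly: for each attached particle, the angles of the
# unit vectors eta^j toward its contacting neighbors.
contact_angles = [rng.uniform(0.0, 2.0 * np.pi, rng.integers(3, 7))
                  for _ in range(50)]

def local_fabric(angles):
    # F^k = (1/n^k) sum_j eta^j (outer product) eta^j
    eta = np.column_stack((np.cos(angles), np.sin(angles)))
    return eta.T @ eta / len(angles)

F_local = [local_fabric(a) for a in contact_angles]
n = np.array([len(a) for a in contact_angles])
N_eff = len(F_local)

# Assembly average (Eq. Fbar), weighted by the coordination numbers.
F_bar = sum(nk * Fk for nk, Fk in zip(n, F_local)) / (2 * N_eff)

assert all(np.isclose(np.trace(Fk), 1.0) for Fk in F_local)
assert np.isclose(np.trace(F_bar), n.mean() / 2)
```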
\par
The current study considers variability in the local anisotropy of
fabric.
We apply Eq.~\ref{eq:beta} to the local fabric tensor $\mathbf{F}^{k}$
to compute a local measure $\alpha^{k}$ of fabric anisotropy:
\mbox{$\mathbf{T} \rightarrow \mathbf{F}^{k}$},
\mbox{$\beta \rightarrow \alpha^{k}$}.
Fig.~\ref{fig:fabric} shows the results for the biaxial compression
tests.
\begin{figure}
\centering
\includegraphics{fabric.eps}
\caption{Changes in the average and local fabric anisotropies
during biaxial compression.}
\label{fig:fabric}
\end{figure}
The average fabric anisotropy of the entire assembly, $\overline{\alpha}$,
increases with loading (Eqs.~\ref{eq:beta} and~\ref{eq:Fbar}),
a result that is consistent with previous
experiments.
As would be expected, the mean local anisotropy, $\mathsf{Mean}(\alpha^{k})$,
is larger than the average assembly anisotropy $\overline{\alpha}$,
and the increase in local anisotropy parallels that of the entire assembly.
The results also show, however, that the standard deviation
of fabric anisotropy increases with strain.
The increase in $\mathsf{Std}(\alpha^{k})$
suggests that the geometric arrangement of particles becomes
more varied during loading.
\subsection{Inter-particle movements} \label{sec:move}
The change in stress within a dry granular material is
due to local changes in the inter-particle
forces that result from the relative shifting of particles during
assembly deformation.
The simplest models of this mechanism are based upon
the interactions of particle pairs that are constrained
to move in accord with a homogeneous deformation field.
Bathurst and Rothenburg~\cite{Bathurst:1988a} studied the
inter-particle movements at small strains in
the biaxial compression of a disk assembly.
Their results demonstrate that, on average, the inter-particle
movements at small strains are less than those that would be consistent with
uniform deformation (see also~\cite{Kruyt:2002a}).
The current study addresses the non-conformity of inter-particle movements
relative to the average deformation,
the diversity of this non-conformity, its evolution during loading,
and the spatial coherence of the non-conformity.
In this regard, we consider only those particles that are
included in the particle graph at a particular
stage of loading.
The relative velocity $\widehat{\mathbf{v}}^{j}$ of two particles
$k_{1}$ and~$k_{2}$ is the difference in their velocities
\begin{equation}
\widehat{\mathbf{v}}^{j} = \mathbf{v}^{k_{2}} - \mathbf{v}^{k_{1}}\;,
\end{equation}
where index $j$ represents the contacting pair \mbox{$(k_{1},k_{2})$}.
The relative movement that would be consistent with homogeneous deformation
is the product $\overline{\mathbf{L}}\,\mathbf{l}^{j}$,
where $\overline{\mathbf{L}}$ is the average velocity gradient of the assembly,
and $\mathbf{l}^{j}$ is the branch vector between the
centers of particles $k_{1}$ and $k_{2}$ (Table~\ref{table:superscripts}).
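A sketch of these kinematic quantities with hypothetical positions and velocities: when the particles move exactly with the mean field, $\mathbf{v}^{k} = \overline{\mathbf{L}}\,\mathbf{x}^{k}$, the relative velocity $\widehat{\mathbf{v}}^{j}$ equals $\overline{\mathbf{L}}\,\mathbf{l}^{j}$ and the conformity measures take their reference values of 1, 0, and 1.

```python
import numpy as np

# Hypothetical average velocity gradient for biaxial compression:
# extension in x, compression in y.
L_bar = np.array([[0.005, 0.0],
                  [0.0, -0.005]])

# A contacting pair (k1, k2): centers and velocities.
x1, x2 = np.array([0.2, 0.1]), np.array([1.2, 0.5])
l = x2 - x1                      # branch vector l^j
v1, v2 = L_bar @ x1, L_bar @ x2  # particles exactly follow the mean field
v_hat = v2 - v1                  # relative velocity

m = L_bar @ l                    # mean-field relative movement
par = v_hat @ m / (m @ m)                                   # parallel
perp = np.linalg.norm(v_hat - par * m) / np.linalg.norm(m)  # perpendicular
circ = v_hat @ m / (np.linalg.norm(v_hat) * np.linalg.norm(m))  # alignment

print(par, perp, circ)
```

In the simulation, of course, the particle velocities deviate from the mean field, and the same three expressions yield the values plotted in the figures of this section.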
\par
The quantities in Eqs.~(\ref{eq:parallel}--\ref{eq:circ}) can
be applied to
describe the conformity (or non-conformity) and
diversity of the local, inter-particle
movements $\widehat{\mathbf{v}}^{j}$ with respect to the
mean-field displacement $\overline{\mathbf{L}}\,\mathbf{l}^{j}$.
We begin by considering only pairs of particles that are in
direct contact during biaxial compression
(the number of these pairs ranges from 17,600 to 21,300 for
the 10,816 particles),
although we will consider more distant pairs in a later paragraph.
\par
The evolution of measures~(\ref{eq:parallel}--\ref{eq:circ})
are shown in Fig.~\ref{fig:contactMove_strain}.
\begin{figure}
\centering
\includegraphics{contactMove_strain.eps}
\caption{Evolution of the non-conformity and heterogeneity
of inter-particle motions $\widehat{\mathbf{v}}^{j}$
during biaxial compression.
The motions are for particle pairs $j$ that are in direct contact
($\rho=1$). Over 18,000 pairs are represented in each point.}
\label{fig:contactMove_strain}
\end{figure}
The average inter-particle motions
$\widehat{\mathbf{v}}^{j}$ are consistently less than the
mean-field motions, as is shown by a mean conformity
$\mathsf{Mean}(\widehat{\mathbf{v}}^{j} \!\parallel
\overline{\mathbf{L}}\,\mathbf{l}^{j})$
less than 1.
This result is consistent with studies~\cite{Kruyt:2002a}
and~\cite{Bathurst:1988a},
which investigated the local behavior at small strains.
Figure~\ref{fig:contactMove_strain} shows that the mean
conformity,
$\mathsf{Mean}(\widehat{\mathbf{v}}^{j} \!\!\parallel
\overline{\mathbf{L}}\,\mathbf{l}^{j})$,
is modestly reduced during loading,
from about 0.91 to about 0.82.
As we will see, however, the diversity of the fluctuations
can be quite large.
Both the non-conformity and heterogeneity of inter-particle
motions are indicated
by the additional measures
$\mathsf{Mean}(\widehat{\mathbf{v}}^{j} \!\!\perp
\overline{\mathbf{L}}\,\mathbf{l}^{j})$
and
$\mathsf{Mean}(\widehat{\mathbf{v}}^{j} \!\circ
\overline{\mathbf{L}}\,\mathbf{l}^{j})$.
If the local motions were in uniform conformance with the
assembly deformation,
these two measures would have values of~0 and~1 respectively.
At large strains, the value of
$\mathsf{Mean}(\widehat{\mathbf{v}}^{j} \!\perp
\overline{\mathbf{L}}\,\mathbf{l}^{j})$
approaches~2, compared with a value of
$\mathsf{Mean}(\widehat{\mathbf{v}}^{j} \!\!\parallel\!
\overline{\mathbf{L}}\,\mathbf{l}^{j})$
of about~0.82.
These results reveal that, on average and at
large strains, the components of inter-particle
movements that are \emph{orthogonal} to their mean-field directions
can be more than twice as large as the components that
are aligned with the mean-field directions
(Eqs.~\ref{eq:parallel} and~\ref{eq:perp}).
This lack of vector alignment is also indicated by the
cosine-type measure
$\mathsf{Mean}(\widehat{\mathbf{v}}^{j} \circ
\overline{\mathbf{L}}\,\mathbf{l}^{j})$,
which is reduced to a value of about 0.15 (see Eq.~\ref{eq:circ}).
At the end of the test, fully 40\% of inter-particle motions were in
the ``wrong'' direction, with values
$\widehat{\mathbf{v}}^{j}\cdot(\overline{\mathbf{L}}\,\mathbf{l}^{j})<0$.
The fourth measure in Fig.~\ref{fig:contactMove_strain} is
$\mathsf{Std}(\widehat{\mathbf{v}}^{j} \parallel
\overline{\mathbf{L}}\,\mathbf{l}^{j})$,
which displays a rather extreme degree of nonuniformity
in the components of inter-particle movements that are
parallel to the mean-field directions.
This nonuniformity is particularly sizable
at large strains.
A set of random vectors of uniform length would have a value
of
$\mathsf{Std}(\widehat{\mathbf{v}}^{j} \!\!\parallel\!
\overline{\mathbf{L}}\,\mathbf{l}^{j})$
of only 0.5 (Table~\ref{table:values}),
a value several times smaller than those
in Fig.~\ref{fig:contactMove_strain}.
Such large values
indicate a substantial heterogeneity in both the magnitudes
and directions of the inter-particle
movements $\widehat{\mathbf{v}}^{j}$.
\par
We can also use the biaxial compression simulation to investigate
the spatial correlation of inter-particle movements and the length scale
at which the inter-particle movements approximate the mean deformation
field.
Kruyt and Rothenburg~\cite{Kruyt:2002a} measured the spatial
correlation of movements at small strains by using a 2-point
correlation technique.
In the current study, we do not consider all possible particle
pairs, but instead use only those pairs of particles that are included
in (attached to) the particle graph, as only these particles participate
directly in the deformation and load-bearing mechanisms.
This limitation suggests a \emph{discrete metric} $\rho$
for describing the distance between two particles $k_{1}$ and $k_{2}$.
The distance $\rho(k_{1},k_{2})$ is the least number
of contacts (graph edges) that must be traversed to connect
$k_{1}$ and $k_{2}$ (Fig.~\ref{fig:distance}).
\begin{figure}
\centering
\includegraphics{distance.eps}
\caption{Discrete distances $\rho$ from a reference particle 0.
The vertices represent particle centers;
edges represent particle contacts.}
\label{fig:distance}
\end{figure}
The results in Fig.~\ref{fig:contactMove_strain},
which have already been described, were
collected from the sets of all particle pairs at a
discrete distance of~1,
i.e. the sets $\{ (k_{1},k_{2}):\; \rho(k_{1},k_{2})=1 \}$
at various stages of loading.
The discrete metric does not provide angle or size, so all subsequent
calculations with the objects $\widehat{\mathbf{v}}^{j}$,
$\overline{\mathbf{L}}$, and $\mathbf{l}^{j}$ were, of course,
performed in Euclidean space, but only on the selected particle pairs.
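The distances $\rho$ can be computed by breadth-first search on the particle graph; a sketch with a small hypothetical contact list:

```python
from collections import deque

def discrete_distances(contacts, source):
    """Least number of contacts (graph edges) separating `source`
    from every reachable particle in the particle graph."""
    adj = {}
    for k1, k2 in contacts:
        adj.setdefault(k1, []).append(k2)
        adj.setdefault(k2, []).append(k1)
    rho = {source: 0}
    queue = deque([source])
    while queue:
        k = queue.popleft()
        for neighbor in adj.get(k, []):
            if neighbor not in rho:
                rho[neighbor] = rho[k] + 1
                queue.append(neighbor)
    return rho

# Hypothetical graph: a chain 0-1-2-3 closed by the contact 0-3.
rho = discrete_distances([(0, 1), (1, 2), (2, 3), (0, 3)], source=0)
print(rho)
```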
\par
Figure~\ref{fig:Contact_move_dist_005}
shows the non-conformity and heterogeneity of
inter-particle movements $\widehat{\mathbf{v}}^{j}$
for particle pairs $j$ at distances
$\rho$ of~1 to~10,
but at the single large strain $\overline{\varepsilon}_{22}=-0.005$
(see Fig.~\ref{fig:crs_q}).
\begin{figure}
\centering
\includegraphics{Contact_move_dist_005.eps}
\caption{The correlation of inter-particle motions with the
discrete distance $\rho$ between particle pairs at
a strain $\overline{\varepsilon}_{22}=-0.005$.
The superscript $j$ represents a pair of particles $(k_{1},k_{2})$
that are separated by distance $\rho$.
The results at $\rho=1$ involve 18,000 pairs; results
at $\rho=10$ involve over 250,000 pairs.}
\label{fig:Contact_move_dist_005}
\end{figure}
(The results for $\rho=10$ involve over one-quarter million particle pairs.)
As would be expected, the average conformity of
the observed inter-particle movements
with their corresponding mean-strain movements
improves with an increasing discrete
distance between the pairs.
This improved conformity is evidenced by increases
in the measures
$\mathsf{Mean}(\widehat{\mathbf{v}}^{j} \!\parallel
\overline{\mathbf{L}}\,\mathbf{l}^{j})$
and
$\mathsf{Mean}(\widehat{\mathbf{v}}^{j} \!\circ
\overline{\mathbf{L}}\,\mathbf{l}^{j})$
and in the reduction of
$\mathsf{Mean}(\widehat{\mathbf{v}}^{j} \!\perp
\overline{\mathbf{L}}\,\mathbf{l}^{j})$.
However, at a distance of $\rho=10$ and at the strain
$\overline{\varepsilon}_{22}=-0.005$,
the values of these three measures are about the same as
those at distance $\rho=1$ with zero strain,
$\overline{\varepsilon}_{22}\approx 0$.
That is,
at the large strain of $-0.005$, the conformity of motion at a distance of
about~8--10 particle diameters is no better than the modest
conformity between neighboring particles at small
strains.
\par
The conformity between the actual and mean-field motions is particularly
poor at large strains if we consider only the \emph{normal} motions
between the particle pairs that are in direct contact (i.e. with $\rho=1$).
Figure~\ref{fig:Contact_move_orient_005} shows the assembly averages of the
normal and tangential motions of those particle pairs that are separated
by distances $\rho$ of~1 and~3, at the large strain
$\overline{\varepsilon}_{22}=-0.005$.
\begin{figure}
\centering
\includegraphics{Contact_move_orient_005.eps}
\caption{The average normal and tangential motions of particle
pairs as a function of the pair orientation $\theta^{j}$.
Mean-field motions $\overline{\mathbf{L}}\,\mathbf{l}^{j}$
are represented by heavy lines; whereas, the averaged
actual inter-particle motions are the lighter lines.
Values are given for pairs having discrete distances
$\rho$ of~1 and~3. The compressive strain $\overline{\varepsilon}_{22}$
is $-0.005$ (see Fig.~\ref{fig:crs_q}).}
\label{fig:Contact_move_orient_005}
\end{figure}
These motions are plotted against the orientation
angles $\theta^{j}$ of the pairs (Fig.~\ref{fig:theta}),
and advantage has been taken of the loading symmetry by
folding the angles
$\theta^{j}$ into the single quadrant 0$^{\circ}$ to 90$^{\circ}$.
\begin{figure}
\centering
\includegraphics{theta.eps}
\caption{Orientation angle $\theta^{j}$ for a particle pair.}
\label{fig:theta}
\end{figure}
The normal inter-particle motions are the inner products
\mbox{$\widehat{\mathbf{v}}^{j} \!\cdot\! \boldsymbol{\eta}^{\,j}$},
where $\boldsymbol{\eta}^{\,j}$ is the unit vector
aligned with the branch vector $\mathbf{l}^{j}$ that connects the centers
of a particle pair $j = (k_{1},k_{2})$.
Figure~\ref{fig:Contact_move_orient_005} compares the
averages of these values with the
corresponding averages of the mean-field motions
$\overline{\mathbf{L}}\,\mathbf{l}^{j}$
(the latter are represented with heavy lines).
The results have been normalized by dividing by the average
length $\ell^{\,\rho} = \langle |\mathbf{l}^{j,\rho}| \rangle$
for a particular separation $\rho$ and by the strain rate
$\overline{L}_{22}$.
Figure~\ref{fig:Contact_move_orient_005} shows that, at large strains,
the movements of contacting particles ($\rho=1$) are predominantly tangential,
and that the mean normal motion is quite small.
That is, at $\rho = 1$ and at large strains,
the normal inter-particle movements are grossly overestimated
by the mean-field motion $\overline{\mathbf{L}}\,\mathbf{l}^{j}$.
At a distance $\rho=3$, the motions are, on average, in
much closer conformity with those predicted by a mean-field assumption.
The apparent conformity at $\rho=3$
in Fig.~\ref{fig:Contact_move_orient_005}
is, however, based upon an average of movements,
and the true diversity in their values is more appropriately reflected in
the measures
$\mathsf{Mean}(\widehat{\mathbf{v}}^{j} \!\perp
\overline{\mathbf{L}}\,\mathbf{l}^{j})$,
$\mathsf{Mean}(\widehat{\mathbf{v}}^{j} \!\circ
\overline{\mathbf{L}}\,\mathbf{l}^{j})$,
and
$\mathsf{Std}(\widehat{\mathbf{v}}^{j} \!\parallel
\overline{\mathbf{L}}\,\mathbf{l}^{j})$,
which are reported in Figs.~\ref{fig:contactMove_strain}
and~\ref{fig:Contact_move_dist_005}.
\subsection{Deformation heterogeneity} \label{sec:deform}
Micro-scale deformations within a 2D granular material
can be computed by considering the small polygonal void cells
as being representative micro-regions among particle
clusters (Fig.~\ref{fig:graph})~\cite{Bagi:1996a,Kruyt:1996a,Kuhn:1999a}.
The region of 10,816 particles can be partitioned into over 7500
of these void cells.
The average velocity gradient
$\mathbf{L}^{i}$ within a single polygonal void cell $i$ is computed
from the motions of the particles at its vertices.
These local velocity gradients can then be compared with the
average assembly gradient $\overline{\mathbf{L}}$,
and the measures in Eqs.~(\ref{eq:parallel}--\ref{eq:circ})
can be used to investigate the non-conformity and
heterogeneity of local deformations.
Figure~\ref{fig:def_var_strain} shows the evolution of
these measures in the course of a biaxial compression test.
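One common construction of $\mathbf{L}^{i}$ (a sketch in the spirit of the cited micro-mechanical studies, not a transcription of any single formula from them) evaluates the boundary integral $\frac{1}{A}\oint \mathbf{v}\otimes\mathbf{n}\,ds$ edge-by-edge with linearly interpolated vertex velocities; by the divergence theorem it recovers $\overline{\mathbf{L}}$ exactly when the vertex motions are affine:

```python
import numpy as np

def cell_velocity_gradient(xy, v):
    """Average velocity gradient L^i of a polygonal void cell from the
    positions `xy` and velocities `v` of its vertices (CCW order)."""
    xy, v = np.asarray(xy), np.asarray(v)
    A = 0.0
    L = np.zeros((2, 2))
    m = len(xy)
    for a in range(m):
        b = (a + 1) % m
        d = xy[b] - xy[a]
        A += 0.5 * (xy[a][0] * xy[b][1] - xy[b][0] * xy[a][1])  # shoelace area
        n_ds = np.array([d[1], -d[0]])   # outward normal times edge length
        v_mid = 0.5 * (v[a] + v[b])      # trapezoid rule along the edge
        L += np.outer(v_mid, n_ds)       # contribution to the boundary integral
    return L / A

# Check: affine vertex motions v = G x are recovered exactly.
G = np.array([[0.01, 0.004], [-0.002, -0.01]])
poly = np.array([[0.0, 0.0], [2.0, 0.0], [2.5, 1.5], [1.0, 2.0], [-0.5, 1.0]])
assert np.allclose(cell_velocity_gradient(poly, poly @ G.T), G)
```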
\begin{figure}
\centering
\includegraphics{def_var_strain.eps}
\caption{The evolution of deformation non-conformity
and heterogeneity during biaxial compression.
Each point represents the deformations $\mathbf{L}^{i}$
in over 7500 void cells,
where superscript $i$ represents an $i$th void cell.}
\label{fig:def_var_strain}
\end{figure}
At small strains, the local deformations are modestly aligned
with the averaged deformation:
the average cosine of alignment,
$\mathsf{Mean}(\mathbf{L}^{i} \!\circ \overline{\mathbf{L}})$,
is 0.91, only slightly lower than~1,
and the average component of the local gradient
$\mathbf{L}^{i}$ that is perpendicular to the assembly
average $\overline{\mathbf{L}}$ is about 35\% of $|\overline{\mathbf{L}}|$.
At larger strains, the local deformations are, on average, far more deviant
and exhibit a much larger dispersion of values.
The standard deviation of the aligned deformations,
$\mathsf{Std}(\mathbf{L}^{i} \!\parallel \overline{\mathbf{L}})$,
becomes more than twice the mean value of~1.
Deformations that are orthogonal to $\overline{\mathbf{L}}$
become, on average, much larger than those parallel to
$\overline{\mathbf{L}}$
(compare the
$\mathsf{Mean}(\mathbf{L}^{i} \!\perp \overline{\mathbf{L}})$
in Fig.~\ref{fig:def_var_strain}
with a
$\mathsf{Mean}(\mathbf{L}^{i} \!\parallel \overline{\mathbf{L}})$
of 1).
\par
This non-conformity and heterogeneity is also illustrated in
Fig.~\ref{fig:Group_align},
which shows the distributions of aligned deformations
at moderate and large compressive strains, $\overline{\varepsilon}_{22}$
of $-0.0005$ and $-0.005$.
\begin{figure}
\centering
\parbox{8.5cm}
{\centering%
\includegraphics{Group_align_0005.eps}\\[0ex]
\small{(a) $\overline{\varepsilon}_{22} = -0.0005$}\\[3.0ex]
\includegraphics{Group_align_005.eps}\\[0ex]
\small{(b) $\overline{\varepsilon}_{22} = -0.005$}
}
\caption{Distributions of the aligned deformation of void cells
at two strains. The void cells have been grouped according
to a ranking of their $\mathbf{L}^{i} \!\parallel \overline{\mathbf{L}}$
values
(10,900 and 8300 void cells are included at the two strains).}
\label{fig:Group_align}
\end{figure}
In each figure, the void cells have been placed into~20 bins,
arranged according to a ranking of the
aligned deformations
$\mathbf{L}^{i} \!\parallel \overline{\mathbf{L}}$
of each, $i$th void cell.
At moderate strains, the 10\% of most contributory void cells
participate disproportionately in the average assembly
deformation, about 6.5 times more than
the lowest 10\% of void cells (Fig.~\ref{fig:Group_align}a).
At the larger strain of $-0.005$, about 22\% of the material makes
a \emph{negative} contribution to the overall assembly deformation,
and, in a sense, is deforming in the ``wrong'' direction
(Fig.~\ref{fig:Group_align}b).
As another measure of this heterogeneity at large strain, the 31\% of
most contributory void cells could account, by themselves, for
the entire assembly deformation.
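The grouping underlying Fig.~\ref{fig:Group_align} is a simple ranking exercise; a sketch with synthetic aligned deformations (hypothetical data, not the simulation results) computes the two summary statistics used above, the fraction of negative contributions and the smallest top fraction that accounts for the entire deformation:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic aligned deformations L^i || L_bar with mean 1 and a wide
# dispersion (hypothetical values, not the simulation data).
aligned = rng.normal(loc=1.0, scale=2.0, size=8000)

# Fraction of void cells deforming in the "wrong" direction.
frac_negative = float(np.mean(aligned < 0.0))

# Smallest fraction of most contributory cells whose contributions
# sum to the entire assembly deformation.
ranked = np.sort(aligned)[::-1]
cum = np.cumsum(ranked)
frac_accounting = (np.argmax(cum >= aligned.sum()) + 1) / len(aligned)

print(frac_negative, frac_accounting)
```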
This situation is akin to that of a material in which a shear band
has developed, where intense shearing within the band
is accompanied by unloading outside of the band.
No shear bands were observed in the current simulations,
although another type of localization, in the form of multiple
non-persistent \emph{micro-bands},
was present throughout the biaxial compression
test.
This type of deformation patterning, described in~\cite{Kuhn:1999a},
was subtly present at the start of deformation and became more
pronounced as deformation proceeded.
Microband localization accounts for much of the deformation
heterogeneity that is recorded in
Figs.~\ref{fig:def_var_strain} and~\ref{fig:Group_align}.
An example of micro-band patterning at small strain is shown in
Fig.~\ref{fig:microbands}, in which the local, void cell deformations
$\mathbf{L}^{i}$ have been filtered to highlight
a right-shear deformation mode
(see~\cite{Kuhn:1999a} for a discussion of the visualization
technique).
\begin{figure}
\centering
\includegraphics{d0005_cell_def.eps}
\caption{The presence of right-shear microbands at
strain $\overline{\varepsilon}_{22} = -0.0005$.
The local void cell deformations $\mathbf{L}^{i}$ have
been filtered as $\mathbf{L}^{i} \boldsymbol{\Phi}$,
where the filter $\boldsymbol{\Phi} = [0.49\;0.41 ; -0.58\; -0.49]$
captures a deformation mode that produces shearing that is
downward and to the right.
A complementary set of left-shear microbands would be present with
the use of an alternative filter.
The gray scale illustrates the magnitudes of
the local filtered deformations, but some of the white regions have
negative filtered values in this monochrome plot.}
\label{fig:microbands}
\end{figure}
\subsection{Particle rotation heterogeneity} \label{sec:rotate}
Particle rotations in granular materials are known to be large,
particularly in 2D assemblies of circular disks.
Dedecker et~al.~\cite{Dedecker:2000a} found that the standard
deviation of the particle rotation rates could be several times larger
than the average strain rate of an assembly.
Calvetti et~al.~\cite{Calvetti:1997a} reported that the variability
of particle rotations increased consistently with increasing
strain.
Figure~\ref{fig:rotations} shows that this variability
is expressed in a
spatial patterning of
particle rotations.
The figure is taken at the
moderate strain $\overline{\varepsilon}_{22}$ of $-0.0005$, but
\begin{figure}
\centering
\includegraphics{d0005_part_rotat.eps}
\caption{Particle spins in a biaxial compression test at
strain $\overline{\varepsilon}_{22} = -0.0005$. Only
counter-clockwise spinning particles are shown in the plot.}
\label{fig:rotations}
\end{figure}
\emph{only counter-clockwise} rotations are shown in this
monochrome plot, where the shading depends upon the
dimensionless rotation rate $\omega^{k}/ |\overline{\mathbf{L}}|$.
The most rapidly rotating particles are usually aligned in chain-like
patterns oblique to the principal stress directions.
These chains are closely associated with microbands,
as can be seen by comparing Figs.~\ref{fig:microbands}
and~\ref{fig:rotations}~\cite{Kuhn:1999a}.
\subsection{Stress heterogeneity} \label{sec:stress}
The transmission of force among particles occurs in a
non-uniform manner, with certain chains of particles bearing
a disproportionate share of the surface tractions.
These force chains have been widely observed, and several related
references are given in Table~\ref{table:class1}.
The current study concerns the distribution of \emph{stress}
among an assembly's particles.
In two previous studies,
the local variation of stress within stacks of rods
has been studied by withdrawing groups of rods and
measuring the removal force~\cite{Bacconnet:1992a,Auvinet:1992a}.
The DEM simulations of
the current study allow the direct computation
of stress $\boldsymbol{\sigma}^{k}$ within each, $k$th disk:
\begin{equation}
\sigma_{pq}^{k} = \frac{r^{k}}{A^{k}}\sum_{j=1}^{n^{k}}
\eta_{p}^{\,j} f_{q}^{\,j} \;,
\end{equation}
where summation is over the $n^{k}$ contacts $j$ of the particle $k$,
$r^{k}$ is the disk radius, $\boldsymbol{\eta}^{\,j}$ is the
contact unit normal vector, and $\mathbf{f}^{j}$ is the contact force.
Satake~\cite{Satake:1992a} and
Kruyt and Rothenburg~\cite{Kruyt:2002a} have
described a dual of the particle graph that could be used
to compute a representative particle area $A^{k}$ that includes a portion
of the void space around a particle.
To compute a local stress that can be compared with the
average assembly stress, we instead use the (solid) disk area
$\pi (r^{k})^{2}$ and
simply divide it by the assembly-average solid fraction.
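As a check on the sign and scale of this expression (hypothetical numbers, not the simulation data): a disk squeezed by two diametrically opposed contact forces carries a uniaxial compressive stress.

```python
import numpy as np

r, F = 0.01, 2.0          # disk radius and contact force magnitude (hypothetical)
A = np.pi * r**2          # solid disk area (before the solid-fraction scaling)

# Two contacts: outward unit normals and the forces acting on the disk.
etas = [np.array([1.0, 0.0]), np.array([-1.0, 0.0])]
fs   = [np.array([-F, 0.0]), np.array([F, 0.0])]   # both push inward

# sigma_pq^k = (r/A) sum_j eta_p^j f_q^j
sigma = (r / A) * sum(np.outer(eta, f) for eta, f in zip(etas, fs))

# Uniaxial compression along x: sigma_11 = -2 r F / A, all else zero.
assert np.allclose(sigma, [[-2 * r * F / A, 0.0], [0.0, 0.0]])

# Local mean and deviator stress, p^k and q^k.
p = (sigma[0, 0] + sigma[1, 1]) / 2
q = (sigma[1, 1] - sigma[0, 0]) / 2
print(p, q)
```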
\par
Figure~\ref{ref:stress_var_strain}
shows the evolution of non-conformity and heterogeneity
in the local stress $\boldsymbol{\sigma}^{k}$
(Eqs.~\ref{eq:parallel}--\ref{eq:circ}).
\begin{figure}
\centering
\includegraphics{stress_var_strain.eps}
\caption{The evolution of stress non-conformity and heterogeneity during
biaxial compression.}
\label{ref:stress_var_strain}
\end{figure}
The average, cosine-type alignment of the local stress,
$\mathsf{Mean}(\boldsymbol{\sigma}^{k} \!\circ
\overline{\boldsymbol{\sigma}})$,
is less than 0.6, but there is little
change in this average alignment during loading.
The spatial variation in local stress, as measured by
$\mathsf{Std}(\boldsymbol{\sigma}^{k} \!\parallel
\overline{\boldsymbol{\sigma}})$,
decreases at small strains, but then increases at larger strains.
At large strains,
all three measures in Fig.~\ref{ref:stress_var_strain}
depict a greater conformity
and homogeneity of stress than was found with inter-particle movements
and void cell deformations
(\textit{cf} Figs.~\ref{fig:contactMove_strain},
\ref{fig:def_var_strain} and~\ref{ref:stress_var_strain}).
This greater regularity is likely due to the stress being represented
by its current state, whereas movement and deformation were represented
by their rates.
At small strains, however, the three measures
in Fig.~\ref{ref:stress_var_strain} show less conformity and greater
heterogeneity in stress than in the inter-particle movements.
The diversity of stress at small strain is primarily
the inheritance of the initial particle packing, and this diversity
increases only modestly during loading.
\par
The variation in stress is greatest in its deviatoric component.
Figures~\ref{fig:stress_hist}a and~\ref{fig:stress_hist}b are histograms
of the local mean stress and deviator stress,
defined for particle $k$ as $p^{k}=(\sigma_{11}^{k}+\sigma_{22}^{k})/2$
and $q^{k}=(\sigma_{22}^{k}-\sigma_{11}^{k})/2$
respectively.
\begin{figure}
\centering
\parbox{8.5cm}
{\centering%
\includegraphics{stress_hist_p_005.eps}\\[0ex]
\small{(a) Local mean stress}\\[3.0ex]
\includegraphics{stress_hist_q_005.eps}\\[0ex]
\small{(b) Local deviator stress}
}
\caption{Participation of the local stress in the average assembly
stress.
Figures~\ref{fig:stress_hist}a and~\ref{fig:stress_hist}b are
histograms of the local participation in the mean and deviator
stresses.
Both figures are compiled from the stresses in over 10,000
particles at the large strain $\overline{\varepsilon}_{22} = -0.005$.}
\label{fig:stress_hist}
\end{figure}
The figure gives these components at the large strain
$\overline{\varepsilon}_{22} = -0.005$.
Because only compressive force can be delivered between particles,
the local mean stress is uniformly positive, but the
standard deviation of the local mean stress $p^{k}$
is about 0.60 (Fig.~\ref{fig:stress_hist}a).
The standard deviation of the local deviator stress $q^{k}$ is 1.0
(Fig.~\ref{fig:stress_hist}b).
About~15\% of particles have a negative alignment of the
deviator stress, $q^{k} \!\parallel\! \overline{q}$,
and these particles provide a negative contribution toward bearing the
average assembly deviator stress.
\section{Conclusion}
In this paper, we have considered several categories
of heterogeneity in granular materials:
topologic, geometric, kinematic, and static.
In all respects, the heterogeneity can be described, at a minimum,
as being moderate.
Heterogeneity increases during biaxial compressive loading.
In the case of inter-particle movements, the non-uniformity
becomes extreme, and particle motions are only coarsely aligned
with the mean-field movement.
At large strains, significant fluctuations from the mean-field
motion extend to distances of at least eight particle diameters.
Non-uniform motion is expressed in the patterning of local
movements, which includes microband patterning and
rotation chain patterning.
The extent and magnitude of the heterogeneity and its patterning
proffer an imposing challenge to the continuum
representation of granular materials at micro and macro scales,
especially at large strains.
Before such efforts can be productive,
further statistical analyses should be undertaken to
further characterize heterogeneity,
to determine characteristic lengths at which heterogeneity
dominates the meso-scale behavior, to quantify the heterogeneity
in the local stress rates, and to establish the relationships among
topologic, geometric, kinematic, and static heterogeneities.
\bibliographystyle{unsrt}
\section*{Introduction}
Electrical breakdowns in general, and vacuum electrical breakdowns in particular, are regaining importance in the development of modern technologies. The increasing use of electric power in diverse environments inevitably leads to the failure of surfaces facing high electric fields. In vacuum, even high or ultra-high vacuum, electric breakdowns appear in the form of vacuum arcs.
In various cases these arcs are controllable and serve technological advances, such as ion sources \cite{MEVVA1985} and physical vapour deposition \cite{Anders}.
However, in most cases the vacuum arcs occur undesirably in an uncontrollable manner, causing problems in various vacuum devices such as fusion reactors \cite{juttner2001cathode,McCracken1980}, vacuum interrupters \cite{slade2007}, satellite systems \cite{de2006multipactor,rozario1994investigation}, X-ray tubes \cite{latham1995} and large particle accelerators.
Vacuum arcs are particularly detrimental for high precision devices that are built to employ high electric and electromagnetic fields.
Amongst these are multi-kilometre devices such as powerful particle colliders \cite{dobert2005high, clic2016} or tiny micro- or nano-electromechanical system (MEMS or NEMS) and capacitors \cite{lyon2013gap, ducharme2009inside}.
For instance, micro-fabricated devices such as nano electro-spray thruster arrays for spacecraft are built to withstand large electric fields between electrodes. However, if an arc occurs, the entire chip is destroyed \cite{sterling2013increased}.
Recently, particular attention has been drawn to technologies that employ high accelerating field gradients for high energy physics \cite{clic2016}, free electron lasers \cite{vogel2013results} or medical hadron accelerators for cancer treatment purposes \cite{Chao2009}. Some of these devices are designed to operate with fields up to hundreds of MV/m \cite{clic2016}, which cause an intolerably high frequency of vacuum breakdowns. This, in turn, increases wasteful power consumption, reduces the final luminosity of accelerated particles and overall destabilizes the performance of the device \cite{clic2016}.
Vacuum arcs have been under close attention of researchers since the early 1950s.
In spite of many empirical attempts to describe and quantify the phenomenon \cite{Dyke1953Arc, charbonnier1967electrical, Mesyats2005, latham1995, Anders, fursey2007field}, there is still no consensus on the physical processes that lead to its ignition.
The most common hypothesis is that a vacuum arc starts from micro-protrusions that exist, for various reasons, on the metal surface and locally enhance the applied electric field.
If the local field reaches a critical value (about $10^{10}$ V/m) \cite{Descoeudres2009, Dyke1953I, Dyke1953Arc}, an intense and increasing field emission current appears.
The latter initiates violent physical processes that within a few ns form a plasma that is able to conduct very high current densities at very small voltage \cite{ArcPIC_1d, arcPIC}, thus rendering the gap conductive.
The high current flowing from the anode to the cathode is accompanied by intermittent light emission from the gap \cite{mazurek1988fast, mazurek1993x}.
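To illustrate how intense this emission is, a sketch using the elementary Fowler-Nordheim expression (without the image-charge correction) follows; the Cu work function of 4.6 eV and the enhancement factor $\beta$ are illustrative assumptions, not values from this work:

```python
import math

def fn_current_density(F, phi=4.6):
    """Elementary Fowler-Nordheim current density in A/m^2 (no image-charge
    correction). F is the local surface field in V/m, phi the work function
    in eV; a ~ 1.541e-6 A eV V^-2 and b ~ 6.831e9 eV^-3/2 V/m."""
    a, b = 1.541e-6, 6.831e9
    return (a / phi) * F**2 * math.exp(-b * phi**1.5 / F)

# local field F = beta * E_macro; beta and E_macro are illustrative only
E_macro = 160e6          # V/m, order of the macroscopic fields discussed later
beta = 62                # hypothetical enhancement, bringing F near 1e10 V/m
J = fn_current_density(beta * E_macro)   # an intense emitted current density
```

Near the critical field the exponential barrier term is no longer prohibitive, and the predicted current density becomes large enough to drive the violent processes described above.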
Several mechanisms have been proposed to explain the formation of plasma in such high voltage conditions.
Some of them attribute the arc initiation to physical processes appearing in the cathode, while others to ones in the anode side.
For example, Charbonnier et al.\cite{charbonnier1967electrical} suggest that whether an arc is anode-initiated or cathode-initiated depends on the value of local enhancement factor $\beta$ on the cathode.
Others make this distinction based on the time delay between the application of a pulse and the occurrence of an arc \cite{yen1984emission,chalmers1982breakdown}.
Slade \cite{slade2007} proposed to use the gap length as a criterion to consider a vacuum arc to be cathode- or anode-dominated.
According to the latter, in short-gap electrode systems the cathode plays the dominant role in initiating breakdowns, while for larger gaps, greater than 2 mm, the anode takes over and the processes developing near the electrode with the higher electric potential determine the evolution of the vacuum arc.
Meanwhile, other proposed mechanisms attribute the dominant role for the initiation of a vacuum arc to cathodic processes.
Mesyats et al. \cite{fursey2007field, yen1984emission, mesyats1993ectons, Mesyats_Ecton, mesyats2000cathode} have proposed an explosive electron emission mechanism (known as the ``ecton'' model) on the cathode electrode, which leads to plasma formation.
Timko et al. \cite{ArcPIC_1d, arcPIC} reported particle-in-cell (PIC) plasma simulations showing that plasma can gradually build up from intensively emitting cathodes due to positive-feedback ion bombardment, if a minimum initial neutral evaporation rate is assumed.
A possible origin for the latter was recently given by Kyritsakis et al. \cite{kyritsakis2018thermal}, who performed multi-scale atomistic simulations and reported a thermal runaway mechanism on cathodic metal nano-tips.
All the above proposed mechanisms, along with various experimental studies \cite{Descoeudres2009, Dyke1953Arc, meng2014electrical}, attribute the vacuum arc ignition to processes in the cathode and consider the processes near the anode negligible.
In general, the scientific community has not reached a consensus about the role of each electrode on the vacuum arc ignition.
In the present work we follow the development of a vacuum arc with a nanosecond resolution, for gap lengths varying from 0.5 mm to 5 mm.
We observe the evolution of the light emission in the gap by an ultra-fast camera, while recording the gap current and voltage simultaneously.
Our experiments, combined with the theoretical calculations we conducted, reveal that regardless of the gap length, the vacuum arc is always ignited on the cathode side, within a few ns after the field in its vicinity reaches a critical value.
\section*{Results}
\subsection*{Phases of development of a vacuum arc.}
The geometry of our experiments allowed for a clear distinction between the cathode (thin tip) and the anode (flat surface).
The electrodes were installed in a high-vacuum chamber with a vacuum level of $2.5\times10^{-4}$ Pa, placed at a distance of a few mm from one another. This distance, or the gap length $d_g$, was varied from 0.5 to 5 mm in different experiments. A pulsed high voltage source with a pulse width $\Delta t_V = 1 - 5 \mu$s was connected to the cathode and provided up to $V_{max} = -40$ kV, which was sufficiently high to ensure the appearance of an arc in every single pulse.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.8\linewidth]{phases.pdf}
\caption{Typical waveforms of the voltage and the current during a vacuum breakdown event registered in the tip-to-plane geometry of the copper electrodes with a gap length of 5 mm and a voltage pulse width of 1 $\mu$s. The geometry of the electrodes is shown in the inset. P0-P3 denote different phases of the arc development; $t_s$, $t_0$, $t_{VP}$ and $t_{V0}$ denote the instances when the system started to charge, the current started to rise, the voltage reached its maximum value and the voltage dropped to zero, respectively.}
\label{fig:phases}
\end{figure}
Fig. \ref{fig:phases} shows typical waveforms of the voltage and current recorded during a breakdown event for the set-up with $d_g$ = 5 mm and $\Delta t_V = 1 \mu$s (see the inset of Fig. \ref{fig:phases} for the geometry of the set-up).
The abscissa, the left ordinate and the right ordinate show time, gap voltage and gap current, respectively.
In Fig. \ref{fig:phases}, we identify four main phases of development of a vacuum arc.
Phase P0, the charging phase, starts when the pulse is applied from the voltage source ($t_s$).
During P0, the gap capacitor along with parasitic capacitances of the system (a small initial peak in the current waveform) are being charged and the gap voltage starts rising. P0 ends at $t = t_0$, when the current starts to rise rapidly. $t_0$ also defines the origin of the time axis in our experiments. During the next phase, P1, the current rises up to $I_{max} = 80$ A.
We note that the voltage continues growing for a short time until $t_{VP}$; only after this point does it drop to a near-zero value, as the current reaches $I_{max}$.
However, we associate the initial point of the vacuum arc with $t_0$ and not with $t_{VP}$, since the voltage is expected to keep growing after the current through the gap has appeared.
At this initial stage of the arc, the current is not sufficient to consume the voltage over the gap yet.
This expectation is corroborated by Simulink \cite{simulink} simulations performed for the same circuit and conditions as used in the experiment (see supplementary material section S1 for the details.)
We also note that the drop of the voltage to a near-zero value and the rise of the current to $I_{max}$ are completed at approximately the same moment, which we define as $t_{V0}$ and associate with the start of the next phase, the steady arc P2, which lasts until the end of the pulse. The last phase, P3, is the discharge decay, during which the voltage and current through the gap drop to zero, completing the vacuum arc process.
We have confirmed the existence of all four phases of the vacuum arc for different voltage pulse widths $\Delta t_V = 1 - 5 \mu s$.
The corresponding comparison is given in the supplementary material (S3), where we show that longer $\Delta t_V$ only increased the duration of the steady arc phase P2, while the phases P0, P1 and P3, which define the dynamics of arc evolution, are identical and independent of the pulse duration.
\subsection*{Dependence of the waveforms on the gap length}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.7\linewidth]{gap_dependence1.pdf}
\caption{Typical current (a) and voltage (b) waveforms for four different gap lengths, as denoted in the figure. The pulse length is 1 $\mu$s.}
\label{fig:diff}
\end{figure}
Since the gap length has been suggested to affect the role of the electrodes in the process of vacuum arcing \cite{slade2007}, we performed a series of experiments, where we fixed all the experimental parameters except for $d_g$, varying it from 0.5 mm to 5 mm.
These results are shown in Fig. \ref{fig:diff}.
\begin{figure}[htbp]
\includegraphics[width=\linewidth]{Time_voltage_gap1.pdf}
\caption{Dependence of the duration of the phases P0 and P1 (a) and the breakdown voltage (b) on the gap length. The corresponding error bars indicate the standard deviation as obtained from 50 measurement repetitions for each gap length.}
\label{fig:gap_stats}
\end{figure}
As we observe in this figure, both current (Fig.\ref{fig:diff}a) and voltage (Fig.\ref{fig:diff}b) waveforms are affected by the change of $d_g$.
The shorter the gap, the earlier $t_0$ occurs, decreasing the duration of phase P0.
On the other hand, for longer $d_g$ the current rise phase (P1) lasts longer.
We analysed these variations and the results are presented in Fig. \ref{fig:gap_stats}(a).
Here the bars show the duration of P0 and P1 phases averaged over 50 independent measurements.
The corresponding error bars show the standard deviation from the mean value.
As one can see, the duration of phase P1 increases significantly with $d_g$, while the initial point of the vacuum arc $t_0$ depends much less on the size of the gap between the electrodes.
In Fig. \ref{fig:gap_stats}b, the breakdown voltage, i.e. the voltage at $t_0$, is shown to decrease systematically with increasing gap length.
This clearly indicates that the arc ignites when the local electric field at the apex of the cathode needle reaches a certain critical value.
We calculated the electric field distribution around the apex of the cathode using the finite element method (see the method section) and
found that for all $d_g$, the maximum electric field at $t_0$ is 160$\pm$30 MV/m, which is in surprisingly good agreement with the breakdown fields measurements for flat Cu electrodes \cite{Descoeudres2009, Descoeudres2009_dc}.
Based on our experiments, we conclude that increasing the gap length affects the duration of the identified phases of vacuum arc evolution; however, it does not change the arcing process dramatically, as a switch of the leading role from one electrode to the other would imply.
\subsection*{Observation of the vacuum arc development with nanosecond resolution}
We observed the vacuum arcs through a glass window by an intensified charge-coupled device camera (ICCD, Andor DH334T-18U-04). The electronic gate control of the ICCD allows an exposure time $t_w$ down to 2 ns. However, the physical limitation of the device allows five snapshots per second at maximum. Since the pulse width is only a few $\mu$s, we were able to obtain only one shot per pulse.
To reproduce the entire evolution of a vacuum arc with a nanosecond resolution, we repeated the experiment numerous times, gradually delaying the moment when the ICCD shot was taken by an interval $\Delta t$ (in ns) with respect to the breakdown time $t_0$ (see the method Section for details).
The repeatability of the experiments is verified and shown in the supplementary material (S2).
In Fig. \ref{fig:main_seq}, we show the full evolution of the light emitted during a vacuum arc for a gap distance $d_g = 3$ mm and a pulse length of 5 $\mu$s.
Inspecting the frames in Fig. \ref{fig:main_seq}, we see that a vacuum arc has three major stages with respect to the light emission recorded by the ICCD camera.
During the first stage, which lasts 150 ns (first three frames in Fig. \ref{fig:main_seq}), light is emitted from the tip of the cathode and the anode is dark.
Also during this stage, the intensity of the light emitted at the cathode gradually increases.
At 150 ns, the anode begins to radiate and the discharge enters the second stage, characterized by the glow of both electrodes.
During this stage, the anodic glow gradually expands, until it covers the whole gap at 800 ns (10th frame in Fig. \ref{fig:main_seq}).
Finally, during the last stage, the anodic glow starts decaying even before the end of the voltage pulse and eventually disappears from the gap.
However, the cathode continues to glow until 6000 ns, well after the power supply has stopped completely at 5000 ns.
After that, the ICCD did not record any radiation from the gap.
In short, based on the analysis of the light images, we define three main stages of the vacuum arc development.
These are the cathode-radiance stage, the anode light expansion stage and, finally, the anode light decaying stage.
\begin{figure}[htbp]
\includegraphics[width=\linewidth]{Arc_sequence.pdf}
\caption{Nanosecond-time-resolved light emission of the vacuum arcing process. The gap is 3 mm and the pulse length 5 $\mu$s. The electrodes are outlined by white dashed lines (cathode in the shape of a thin long tip and anode as a large flat surface). The numbers under each frame denote the delay time $\Delta t$. The camera exposure time is $t_w = $50 ns.}
\label{fig:main_seq}
\end{figure}
The time-resolved light emission during the vacuum arcs with gap lengths of 5, 1 and 0.5 mm can be found in the supplementary material (S6).
For different gap lengths we see similar behaviour to the one presented in Fig. \ref{fig:main_seq}, yet with significant differences in the duration of each stage.
The ending points of the three stages are summarized in Table \ref{tab:stages} for all four gap lengths.
The last column of Table \ref{tab:stages} contains the duration of the current rising phase P1, for comparison purposes.
We recall that the end of P1 corresponds to the time when the voltage collapses close to zero and a conductive channel has been formed in the gap.
\begin{table}[htbp]
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
$d_g$, mm & cathode-radiance, ns & anode light expansion, ns & light decay, ns & current rise (phase P1), ns\\
\hline
5 & 250 & 2050 & 6000 & 350 \\
\hline
3 & 150 & 850 & 6000 & 200 \\
\hline
1 & 40 & 300 & 6000 & 110 \\
\hline
0.5 & 10 & 150 & 6000 & 85 \\
\hline
\end{tabular}
\caption{\label{tab:stages} The times when each of the three stages of light emission evolution ended during the development of a vacuum arc. The time is counted from $t_0$ for different gap lengths, $d_g$. The last column corresponds to the end of the current rise phase P1.}
\end{table}
Three important observations emerge from the results of Fig. \ref{fig:main_seq} and Table \ref{tab:stages}.
Firstly, the cathodic radiance appears instantly after the breakdown takes place and the current starts rising. A significant part of the current rise phase P1 coincides with the stage of the cathode radiance. The stage of anode light expansion begins rather late during phase P1 (compare the second and last columns in Table \ref{tab:stages}), i.e. when the gap current is already quite high and the voltage has started collapsing.
Secondly, the second stage extends far into phase P2 and the moment when the whole gap is bridged by light appears significantly later than the voltage collapse and the formation of a full conductive path in the gap.
Finally, the duration of the two first stages of cathode radiance and anode light expansion depends strongly on the gap length.
\subsection*{Analysis of the cathode and anode light emissions}
The strong flashes of light, which we observe on the snapshots obtained during the vacuum arc (Fig. \ref{fig:main_seq}), do not provide exact information on the intensity of this light.
To estimate the contribution of each electrode to the glow in the gap, we analyse the intensity of the light emission as follows.
We first zoom in the camera to focus on the cathode region and set its exposure time to 7 $\mu$s in order to capture the whole discharge process; Fig. \ref{fig:light_emis}(a) demonstrates a typical snapshot of such an exposure.
We see that the light source appears as an extremely focused spherical spot with a maximum intensity at its center that is more than two orders of magnitude higher than the intensity of the surrounding light. The total integrated intensity of this light obtained in the experiments with different $d_g$ and $\Delta t_V$ did not show dependence on the gap length, but increased linearly with increasing $\Delta t_V$ (see the supplementary material S4 for details).
Furthermore, the full-width-half-maximum (FWHM) range of the peak (i.e. the cathode spot size) is also constant at about 0.1 mm for all gap distances.
Such a consistency of observations indicates that the cathode spot light intensity distribution is stable and constant throughout the whole arc process, regardless of the gap length or the pulse duration.
Zooming out the camera we were able to capture the total intensity of the light emitted during the arc in the whole gap.
Comparing the intensities of the light emitted at the anode and the cathode, we can examine the contributions of both light sources and draw a conclusion about which electrode has the leading role in the vacuum arc process.
In Fig. \ref{fig:light_emis}(b) we plot the normalized intensity distributions (integrated over the lateral directions) along the gap for various gap lengths $d_g$.
Since
we found that the maximum cathode light intensity is independent of $d_g$, we used it as a reference.
Hence, all the curves in Fig.\ref{fig:light_emis}(b) are normalized by the peak values of the light intensity at the cathode.
We clearly see that both the intensity and the duration of the cathodic glow significantly exceed those of the anodic one.
The peak corresponding to the anodic glow, however, grows with increasing $d_g$.
It is clear that the anode light appears as a secondary effect caused by the events developing at the cathode.
The fact that the anodic glow begins after the electron current through the gap has risen almost to its maximum value, indicates that the glow at the anode appears as a result of the surface heating by the electron current.
This scenario is also in line with the fact that the energy available to heat the anode increases with the gap length, since both the duration of phase P1 (current rise) is longer and the breakdown voltage at $t_0$ is higher.
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{Intensity.pdf}
\caption{(a) Typical image of the cathode light emission recorded during the ICCD exposure time of 7 $\mu$s, covering the whole pulse duration. (b) Light intensity distribution along the gap. The curves are obtained by summing the intensity in the horizontal direction (parallel to the anode plate), normalizing by the maximum value and averaging over 10 measurement repetitions.}\label{fig:light_emis}
\end{figure}
The above hypothesis regarding the nature of the anodic glow can be confirmed experimentally by examining the response of the anodic glow to the application of a magnetic field perpendicular to the gap current flow.
For this purpose, we used a different triple-tip configuration of the electrodes.
A single-tip cathode was placed in the middle of the double-tip anode.
The perpendicular distance from the top of the cathode tip to the tops of the anode tips was $d_g = 3$ mm, and the voltage pulse width was $\Delta t_V = 1 \mu$s. Fig. \ref{fig:mag_field} shows the photographs of the discharges between the electrodes with a magnetic field $B = 280$ mT applied either outwards (Fig. \ref{fig:mag_field}a) or inwards (Fig. \ref{fig:mag_field}b) with respect to the plane of the figure.
\begin{figure}[htbp]
\includegraphics[width=\linewidth]{magnetic_field.pdf}
\caption{Effect of a magnetic field on the discharging process observed for the pulse width $\Delta t_V = 1 \mu$s. Here the anode is shaped as two tips instead of a simple flat plate. The gap length $d_g = 3$ mm is measured between the tops of the tips along the gap. A magnetic field of 280 mT was applied in the direction outwards (a) and inwards (b) with respect to the plane of the electrodes. The directions of the magnetic fields are shown in the top left corners of the figures. The exposure time of the camera was 2 $\mu$s and captured the whole discharge process.}
\label{fig:mag_field}
\end{figure}
The evolution of the vacuum arc in this experimental configuration was similar to the one we observed for the simple flat-plate anode without a magnetic field.
However, as can be seen in Fig. \ref{fig:mag_field}, the direction of the field systematically determined where the anodic glow appeared: right when the field is outwards (Fig. \ref{fig:mag_field}a), and left when it is inwards (Fig. \ref{fig:mag_field}b).
This is consistent with the deflection of negatively charged particles flowing from the cathode to the anode.
This observation confirms that the anodic glow is initiated by electrons impacting at the anode surface and heating it.
The heated anode starts emitting vapour, which interacts with the incoming electrons, producing the glow that expands from the anode surface.
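As a rough consistency check, the gyroradius of the electrons crossing the gap can be compared with the gap length; for an assumed accelerating voltage of $\sim$25 kV and the experimental field of 280 mT, it comes out comparable to the 3 mm gap, so a visible sideways displacement of the beam, and hence of the anodic glow, is expected. A non-relativistic sketch:

```python
import math

e, m_e = 1.602e-19, 9.109e-31   # electron charge (C) and mass (kg)

def larmor_radius(V_acc, B):
    """Gyroradius of an electron accelerated through V_acc volts and moving
    across a transverse magnetic field B (tesla); non-relativistic estimate."""
    v = math.sqrt(2.0 * e * V_acc / m_e)   # speed gained over the gap
    return m_e * v / (e * B)

# assumed accelerating voltage ~25 kV; B = 0.28 T as in the experiment
r_L = larmor_radius(25e3, 0.28)   # metres; comparable to the 3 mm gap
```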
\subsection*{Analysis of the anodic glow}\label{sec:theory}
In the previous sections, we suggested that the anodic glow may start due to the impact of the electrons emitted from the cathode spot.
Here we shall corroborate this explanation by comparing the surface damage of the cathode and anode surfaces and estimating the temperature evolution of the anode.
In order to assess the degree of surface damage corresponding to the cathodic and anodic glow, we examined the cathode and anode surfaces with a scanning electron microscope (SEM) both before and after the electrical discharges.
Fig. \ref{fig:SEM} shows the corresponding images obtained from a Hitachi S-3000N SEM, for a 3 mm gap.
We observe clear melting on the cathode surface, while the anode does not show any indications of a melting process.
Many microscopic features found before the breakdown (see, for instance, the red circle in Fig.~\ref{fig:SEM}c) are still present after the experiment (Fig.~\ref{fig:SEM}d).
On the contrary, the surface of the cathode is heavily damaged, appearing as a solidified liquid (compare Figs.~\ref{fig:SEM}a and~\ref{fig:SEM}b).
The damage of the cathode surface suggests that the corresponding intense glow can be explained by the presence of a fully-developed arc plasma \cite{djurabekova2012crater, timko2010mechanism}, while the nearly unchanged anode surface needs to be examined further by estimating its temperature.
\begin{figure}[htbp]
\includegraphics[width=\linewidth]{SEM.pdf}
\caption{SEM images of the cathode and anode surfaces both before and after vacuum discharges in a 3 mm gap. (a) cathode surface before discharges; (b) cathode surface after discharges; (c) anode surface before discharges; (d) anode surface after discharges. Red circles in (c) and (d) indicate the same position. The length scale is indicated by the ruler of 11 dots at the bottom right corner.}
\label{fig:SEM}
\end{figure}
To this end, we solved numerically the one-dimensional time-dependent heat diffusion equation for a flat copper plate, as described in the method section.
Knowing both waveforms of the voltage over the gap and the current through the gap, we can estimate the heating power deposited by the electrons arriving at the surface of the anode (see the method section for details). This estimation is done under the assumption that during the cathode-radiance stage of the arc (before the anode starts glowing), the electrons are freely accelerated by the gap voltage and deposit all their energy on the anode plate.
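A sketch of this kind of calculation with an explicit finite-difference scheme follows; the copper properties, domain size, beam-spot area and the constant heat flux are illustrative assumptions, not the authors' actual solver or inputs:

```python
# Assumed room-temperature copper properties (illustrative, not the authors' inputs)
k_cond = 400.0               # thermal conductivity, W/(m K)
rho, cp = 8960.0, 385.0      # density (kg/m^3) and specific heat (J/(kg K))
alpha = k_cond / (rho * cp)  # thermal diffusivity, m^2/s

def surface_temperature(q_flux, t_end, L=50e-6, N=200):
    """Explicit finite-difference solution of dT/dt = alpha d2T/dx2 on [0, L],
    with a constant heat flux q_flux (W/m^2) at the heated surface x = 0
    and T held at 300 K at x = L."""
    dx = L / N
    dt = 0.4 * dx * dx / alpha          # stability requires dt <= dx^2/(2 alpha)
    T = [300.0] * (N + 1)
    t = 0.0
    while t < t_end:
        Tn = T[:]
        for i in range(1, N):
            Tn[i] = T[i] + alpha * dt / dx**2 * (T[i+1] - 2.0*T[i] + T[i-1])
        Tn[0] = Tn[1] + q_flux * dx / k_cond   # flux boundary: -k dT/dx = q_flux
        T = Tn
        t += dt
    return T[0]

# hypothetical deposition: ~25 kV x 10 A spread over a ~1 mm^2 beam spot
T_surf = surface_temperature(q_flux=25e3 * 10.0 / 1e-6, t_end=100e-9)
```

With these assumed numbers the surface exceeds the melting point of Cu (1356 K) within roughly 100 ns, consistent with the picture described in the text.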
\begin{figure}[htbp]
\centering
\includegraphics[width=0.8\linewidth]{temp_vapor1.pdf}
\caption{Evolution of the temperature at the anode plate (red, left axis) and the corresponding Cu vapour pressure (blue, right axis), calculated for three typical experiments with $d_g = 1$ mm (solid lines), $d_g = 3$ mm (dashed lines) and $d_g = 5$ mm (dotted lines). The maximum depth of the molten region does not exceed 0.5 $\mu$m in all cases.}
\label{fig:anode_temp}
\end{figure}
In Fig. \ref{fig:anode_temp} we plot the evolution of the calculated surface temperature of the anode plate and the vapour pressure corresponding to this temperature, during the cathode-radiance stage, for three typical experiments at gap lengths $d_g = 1$, 3 and 5 mm.
We see that the temperatures reach the melting temperature of Cu (1356 K) for all gap distances, with the corresponding vapour pressure exceeding 0.1 Pa.
Such a pressure corresponds to neutral atom densities of the order of $10^{18}$ -- $10^{19}$ m$^{-3}$.
The electrons colliding with these neutral atoms cause the expansion of the glow that appears near the anode.
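The quoted density range follows from the ideal gas law, $n = p/(k_B T)$; a one-line check, assuming a vapour pressure of 0.1 Pa at roughly the Cu melting temperature:

```python
k_B = 1.381e-23   # Boltzmann constant, J/K

def neutral_density(p_vap, T):
    """Ideal-gas estimate of the neutral atom number density, n = p / (k_B T)."""
    return p_vap / (k_B * T)

# vapour pressure ~0.1 Pa at roughly the melting point of Cu (1356 K)
n = neutral_density(0.1, 1356.0)   # ~5e18 m^-3, within the quoted range
```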
Furthermore, the vapour pressure reaches this range on a different time scale for each gap length (at 40, 100 and 230 ns for $d_g = 1$, 3 and 5 mm, respectively).
These time scales are in agreement with the times that the anodic glow appears (see Fig. \ref{fig:light_emis} and Fig. S7-S8 of the supplementary material), i.e. the duration of cathode-radiance stage (see Table \ref{tab:stages}).
The latter increases with increasing gap distance, because the electron beam spreads more broadly, reducing the deposited heat per unit area.
This means that with increase of the gap length the anode needs longer time to be heated to the temperature sufficient for intensive evaporation.
Finally, although the electron-deposited heat is sufficient to melt the anode surface in all cases, the maximum depth of the molten region does not exceed 0.5 $\mu$m. Therefore, the heat does not penetrate to a depth sufficient to cause noticeable melting damage, as shown in the SEM images of figure \ref{fig:SEM}.
\section*{Discussion}
The simultaneous analysis of the measured voltage-current waveforms and the light images at a nanosecond resolution provides deep insight into the evolution of the vacuum arc process.
As we saw in the results section, the vacuum arc always ignites at the voltage resulting in a local electric field near the cathode of $\sim$ 160 MV/m. At this point, a measurable electron current through the gap starts rising.
Instantly after this moment, a spherical-shaped, localized and dense glow appears near the cathode tip.
The latter gradually expands as the current rises towards the external-circuit-limited value of about 80 A, while the gap voltage gradually collapses.
Before the voltage collapses completely, another glow appears in the anode region, slowly expanding and covering eventually the whole gap.
However, shortly after the gap is bridged by light, the anodic glow starts decaying, although the arc continues burning in a stable high-current low-voltage regime. At this point, only the intense light from the cathode remains, maintaining the arc. The cathode light decays slowly after the voltage pulse stops fully.
The cathodic glow may be explained by either a strong temperature rise due to intensive electron emission, or by the development of a local vacuum arc (i.e. plasma) near the cathode.
If the first scenario is true, the tip is heated to a very high temperature due to the Joule and Nottingham effects and emits light as black-body radiation.
It is well known that any type of intensive electron emission (field, thermionic or mixed) is limited by the space charge effect.
For very high emitted current density, the forming cloud of the emitted electrons creates a significant space charge in the vacuum above the emitting surface, that screens the applied field and thus causes a negative feedback that reduces the emission.
The maximum limit of the current density that can be emitted from a cathode depends on the local surface field and the total applied voltage.
This dependence is given by the Child-Langmuir law \cite{Child_SC}.
Using this law, we estimated the maximum emission current limited by the space charge for the geometry of the current experiments.
After calculating the distribution of the local electric field around the cathode surface for a given voltage by the finite element method (see Methods section), we integrated the current density obtained from the Child-Langmuir law over the whole cathode surface.
The details of this calculation can be found in the supplementary material (S5).
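As a rough illustration, the planar Child-Langmuir estimate can be sketched in a few lines of Python. Note that this is a minimal sketch only: the actual calculation integrates the local current density over the FEM-computed cathode surface field, and the voltage, gap and emitter area below are illustrative assumptions, not the paper's inputs.

```python
import math

EPS0 = 8.854e-12      # vacuum permittivity, F/m
E_CHARGE = 1.602e-19  # elementary charge, C
M_E = 9.109e-31       # electron mass, kg

def child_langmuir_j(voltage, gap):
    """Space-charge-limited current density (A/m^2) of an ideal planar
    diode with gap `gap` (m) at applied voltage `voltage` (V)."""
    return (4.0 * EPS0 / 9.0) * math.sqrt(2.0 * E_CHARGE / M_E) \
        * voltage ** 1.5 / gap ** 2

# Illustrative numbers only: 40 kV across a 1 mm gap.
j_max = child_langmuir_j(40e3, 1e-3)

# Multiplying by an assumed emitting area gives a crude upper bound on
# the space-charge-limited current; the radius here is hypothetical.
area = math.pi * (10e-6) ** 2
i_max = j_max * area
```

Comparing such a bound with the measured current waveform is the essence of the argument above: currents well beyond the space-charge limit cannot be carried by pure electron emission.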
Comparing the experimental current waveform with the calculated space charge limit, we can verify the nature of the cathodic glow.
Shortly after the beginning of the current rise phase P1 (see Fig. \ref{fig:phases}) and of the cathodic glow stage, the measured current through the gap significantly exceeds the value limited by the space charge (see Fig. S5 in the supplementary material).
Furthermore, in all experiments with different $d_g$, this happens long before the anodic glow begins.
This consideration allows us to conclude that the strong light emission seen at the cathode surface cannot be caused by electron emission heating phenomena, since the currents measured through the gap at the time of the strong cathodic glow are much higher than those limited by the intensively building-up space charge.
Furthermore, the spherical shape and the high intensity of the cathodic glow, which are independent of the gap length, also confirm that the glow is due to a full arc plasma that forms within a few ns after the local field reaches a critical value.
The anodic glow appears to be rather different in nature from the cathodic one.
The response of the system to the application of a magnetic field shown in Fig. \ref{fig:mag_field} indicates that the anode glow appears as a result of impacts of electrons on the anode surface.
The electrons accelerated from the cathode deposit their energy on the anode surface.
This energy can only be transformed into heat, which causes high temperatures and significant metal vapour that gradually expands to fill the whole gap.
The collisions of the vaporized neutral Cu atoms with the electron beam cause the apparent anodic glow, while they hinder further heat deposition on the anode, thus self-regulating its temperature.
After the conductive channel is formed, the energy available to heat the anode decreases rapidly due to the collapse of the gap voltage.
As a result, the anode material gradually cools and stops providing Cu vapour.
The vapour thus gradually expands and diffuses away, leading to the decay of the light radiation at the anode surface and in the gap.
In contrast, the cathode radiance spot remains at the same high intensity until the end of the pulse.
This scenario is confirmed by our anode heat calculations, which show that the electron impact power available in the gap is sufficient to heat the anode to high temperatures, within time intervals that are in agreement with the ICCD camera measurements.
Given the above, although we do not investigate here whether the anodic glow fulfils the plasma criteria, we can conclude that in contrast to the cathodic glow, it is neither stable nor necessary to sustain the arc.
It is rather a transient side-effect of the fact that the gap voltage does not collapse immediately after the arc is ignited, but has a delay time that depends on the gap length.
In summary, by conducting vacuum breakdown experiments between Cu electrodes under rectangular pulse voltages and using a high-speed ICCD camera, we reconstructed the entire vacuum arc process with nanosecond resolution.
Combining these results with theoretical estimations of the electron emission characteristics, breakdown currents and anode heat evolution, we conclude:
\begin{enumerate}
\item The vacuum breakdown is triggered at the cathode once the surface field reaches a critical value of about 160 MV/m.
Immediately after this, a localized intense radiance appears near the cathode; strong experimental and theoretical evidence shows that the latter is produced by a dense plasma that is formed at the cathode and drives the discharge allowing the gap current to grow to breakdown values.
\item Some time after the breakdown initiation, another light emission starts from the anode, gradually growing to cover the whole gap.
We suggest that this glow results from the electrons that escape the cathodic arc plasma and bombard the anode.
Our heat diffusion calculations and the observed deflection of the anode glow in a magnetic field confirm the correlation between the anodic glow and the electrons escaping from the cathode plasma.
\item Although both the cathode and the anode contribute to the vacuum arc evolution, the role of the cathode is more crucial, since the processes developing at the cathode surface initiate the breakdown and maintain a constant radiance from the cathode surface throughout the whole arc process, driving the arc in a stable manner.
In contrast, the anode is active only during a fraction of the arc process, and the anode glow covers the gap long after a full conductive channel is established.
\end{enumerate}
\section*{Methods} \label{sec:method}
\subsection*{Experimental Set-up}
Figure \ref{fig:setup}(a) is the schematic diagram of the experimental set-up.
Electrical discharges were triggered in a demountable stainless steel chamber that was pumped to a pressure of $2.5 \times 10^{-4}$ Pa by a turbo molecular pump.
A pair of electrodes was installed in the chamber and the gap length was adjusted by a micrometer manipulator.
High voltage was provided by a pulsed voltage source with the output voltage set to -40 kV; the maximum discharge current of 80 A was determined by a 500 $\Omega$ current-limiting resistor installed in the circuit.
In addition, the width of the voltage pulse was adjustable between 1 $\mu$s and 5 $\mu$s for specific purposes.
The upper and lower electrodes were connected to the high voltage terminal and the ground, respectively.
We observed the vacuum breakdowns through a glass window with an intensified charge-coupled device camera (ICCD, Andor DH334T-18U-04).
This ICCD has an electronic gate control to ensure a minimum exposure time of 2 ns and offers a gate monitor signal to indicate the time instant of an observation.
A high-voltage probe (NorthStar PVM-7) with a bandwidth of 110 MHz was used to measure the voltage across the gap.
The current through the circuit was measured by a Pearson current sensor (Model 6595) with a bandwidth of 200 MHz.
The voltage signal, the current signal and the gate monitor signal were all recorded by a four-channel oscilloscope.
\begin{figure}[htbp]
\includegraphics[width=\linewidth]{setup.pdf}
\caption{(a) Schematic of the experimental set-up. (b) Timing diagram. $t_0$: start of the current rise; $t_1$: opening of the ICCD shutter; $t_2$: closing of the ICCD shutter; $t_w=t_2 - t_1$: exposure time; $\Delta t = t_1 - t_0$: delay before an image capture begins.}
\label{fig:setup}
\end{figure}
In addition, a digital delay generator (SRS DG645) controlled the sequence of the experiments, in the manner that is shown in Fig. \ref{fig:setup}(b).
After the beginning of the breakdown at $t_0$, the delay generator waits for a delay time $\Delta t$, after which, at $t_1$, a signal opens the ICCD camera shutter for an exposure time $t_w$.
\subsection*{Finite element calculation of the field distribution}
We calculated the electric field distribution around the needle by the Finite Element Method (FEM), using the open-source tools Gmsh-GetDP \cite{GetDP}.
The schematic in Fig. \ref{fig:schematic}(a) illustrates the simulated geometry, the equations and the corresponding boundary conditions.
The Laplace equation is solved in the gap, with Dirichlet boundaries at the cathode and the anode.
The cathode tip is simulated as a hemisphere on a cone, which is terminated by a cylinder.
The radii and the aperture angle were chosen based on the geometry of the cathode tips, such as the one shown in the Scanning Electron Microscope (SEM) image in Fig. \ref{fig:SEM}(a).
The total height $h = 3 d_g$ was verified by a convergence study, ensuring that a further increase does not affect the field on the conical area.
\begin{figure}[htbp]
\centering
\includegraphics[width=.5\linewidth]{schematic_sem.pdf}
\caption{Schematic of the finite element simulation geometry.}
\label{fig:schematic}
\end{figure}
\subsection*{Solution of the heat diffusion equation with the finite difference method}
Since the depth of the heated volume is much smaller than its lateral dimensions, we ignore the lateral heat flow and solve the heat equation in one dimension in order to obtain the heat distribution and evolution. The heat equation can be written as
\begin{equation} \label{eq:heat}
C_v \frac{\partial T}{\partial t} = \frac{\partial}{ \partial z} \left[ \kappa(T) \frac{\partial T}{\partial z} \right] + p(z)
\end{equation}
where $C_v$ is the volumetric heat capacity, $\kappa(T)$ is the temperature-dependent heat conductivity of Cu, and $p(z)$ is the deposited heat density as a function of the material depth $z$.
For the purposes of this work we approximate $p(z)$ with the “continuous slowing down approximation - CSDA” \cite{estar}.
This means that we consider the deposited heating power density to be constant over the CSDA range $z_d$ and zero deeper than it, i.e. $p(z > z_d) = 0$ and $p(z<z_d) = P/z_d$, with $P$ being the deposited heating power per unit area.
The heat conductivity of Cu is calculated by applying the Wiedemann-Franz law on the values of the Cu electric conductivity found in the literature \cite{Matula1979, Gathers1983} and the CSDA range $z_d$ is found by the ESTAR database \cite{estar}.
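The Wiedemann-Franz conversion from electrical to thermal conductivity can be sketched as follows; the Lorenz number and the room-temperature Cu conductivity used here are standard literature values, not the paper's tabulated data.

```python
# Sommerfeld value of the Lorenz number, W*Ohm/K^2
LORENZ = 2.44e-8

def kappa_wf(sigma_el, temperature):
    """Thermal conductivity (W/m/K) from the electrical conductivity
    sigma_el (S/m) at a given temperature (K), via the
    Wiedemann-Franz law: kappa = L * sigma * T."""
    return LORENZ * sigma_el * temperature

# Room-temperature Cu: sigma ~ 5.96e7 S/m (literature value).
k_cu = kappa_wf(5.96e7, 300.0)   # close to the tabulated ~400 W/m/K
```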
In order to estimate the deposited heating power $P$, we have to consider a certain distribution for the current density of the electron beam impinging on the anode.
We assume that this roughly follows a Gaussian distribution with its width $\sigma$ estimated from the width of the anodic glow appearing in the camera images as $\sigma =$ 0.3, 0.85, and 0.95 mm for $d_g = $ 1, 3, and 5 mm, respectively.
Then the peak power in the center of the beam can be found as $P(t) = V(t)I(t) / (2 \pi \sigma^2)$, where the product $V(t) I(t)$ gives the total power deposited on the anode by the discharge and is taken from the measured waveforms.
With the above assumptions, we solve equation (\ref{eq:heat}) to obtain the evolution of the temperature depth profile $T(z,t)$.
The equation is solved over a total depth domain of 20~$\mu$m, sampled at 512 equidistant points, with zero-heat-flux Neumann boundary conditions at both ends.
The initial temperature is assumed to be uniform at 300~K, and we used a forward-time central-space finite difference integration scheme \cite{orlande2017finite} with a time step of 0.2 ps.
When the temperature profile $T(z,t)$ is calculated, the corresponding vapour pressure and vapour density are obtained from the surface temperature $T(0,t)$ by interpolating tabulated temperature-pressure data \cite{vapor_data, alcock1984vapour, safarian2013vacuum}.
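The FTCS scheme described above can be sketched as follows. This is a minimal illustration, not the code used for the results: it assumes a constant $\kappa$ for simplicity (the actual calculation uses $\kappa(T)$), and the values of $P$, $z_d$, $\kappa$ and $C_v$ below are illustrative assumptions for Cu.

```python
import numpy as np

def ftcs_heat(P, z_d, kappa, C_v, depth=20e-6, n=512,
              dt=0.2e-12, steps=10_000, T0=300.0):
    """Forward-time central-space sketch of the 1D heat equation
    C_v dT/dt = d/dz [kappa dT/dz] + p(z), with the deposited power
    density p constant over the CSDA range z_d and zero below it,
    and zero-heat-flux Neumann boundaries at both ends."""
    dz = depth / (n - 1)
    z = np.linspace(0.0, depth, n)
    p = np.where(z < z_d, P / z_d, 0.0)   # CSDA heating profile
    T = np.full(n, T0)
    for _ in range(steps):
        # Mirror the edge values to enforce zero heat flux.
        Tpad = np.concatenate(([T[1]], T, [T[-2]]))
        lap = (Tpad[2:] - 2.0 * T + Tpad[:-2]) / dz ** 2
        T = T + dt * (kappa * lap + p) / C_v
    return T

# Hypothetical inputs: P ~ 1e12 W/m^2 beam power density over a
# ~1 um CSDA range, kappa ~ 400 W/m/K, C_v ~ 3.45e6 J/m^3/K.
T = ftcs_heat(P=1e12, z_d=1e-6, kappa=400.0, C_v=3.45e6)
surface_T = T[0]   # used to look up the vapour pressure tables
```

Note that the FTCS step is only stable when $\kappa\,\Delta t / (C_v\,\Delta z^2) < 1/2$; for these parameters the 0.2 ps time step satisfies the condition comfortably.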
\section*{Acknowledgements}
Y. Geng was supported by National Key Basic Research Program of China (973 Program) (No. 2015CB251002). Z. Wang was supported by the National Natural Science Foundation of China (No. 51807147). A. Kyritsakis was supported by the CERN K-contract (No. 47207461). F.\;Djurabekova acknowledges gratefully the financial support of the Academy of Finland (Grant No. 269696). Z. Zhou was supported by the project of China Scholarship Council (No. 201806280259).
\section*{Author contributions statement}
Y.G planned the project and supervised the research. Z.Z and Z.W. designed and conducted the experiments and data analysis, as well as proposed and developed the mechanism hypothesis, together with A.K. and F.D. A.K. conducted the accompanying calculations and interpreted their connection to the experiments together with F.D. Y.L. conducted the experiments together with Z.Z. and Z.W. Z.Z. wrote the manuscript together with A.K. All authors reviewed the manuscript.
\section*{Additional information}
\textbf{Competing interests:} The authors declare no competing interests. \\
\section*{Data availability}
The data sets generated and analysed during the current study are available from the corresponding author on reasonable request.
| 10,454 |
\section{Introduction}
Evolutionary computing relies on the evolution of candidate solutions over a finite number of generations to obtain accurate solutions for complex optimization problems. Genetic Algorithms (GAs) and Particle Swarm Optimization (PSO), among other bio-inspired algorithms (BAs), have been successfully applied to solve a variety of such problems in engineering and the sciences. BAs guide the evolution of a population of individuals (candidate solutions), improving their fitness until a feasible solution to the target problem is reached. BAs apply specialized computational rules to promote the exchange of information between individuals for the benefit of the population. However, optimization through such BA approaches demands extensive computation and efficient resources. Parallelization is one of the natural mechanisms applied to speed up and improve the accuracy of the solutions obtained by BAs.
This work studies Parallel Island Models (PIMs), which partition the population among their islands (processors) and simultaneously run a BA in each island. In addition to the exchange of individuals, PIMs promote migration between islands. When all islands run the same BA, such models are called homogeneous PIMs (HoPIMs). This work improves heterogeneous PIMs (HePIMs) \cite{Lucas2021}, in which islands may run different BAs, by allowing algorithmic \textit{reconfiguration} of their islands, i.e., islands may dynamically update their BAs. In addition to the adequate and well-calibrated migration policy required by HePIMs, reconfigurable HePIMs exchange information between islands to decide how islands should reconfigure their BAs.
Silveira {\em et al.} \cite{Lucas2021} introduced HePIMs running four different BAs in their islands, namely, Genetic Algorithm (${\mbox{\texttt{GA}}}$), double-point crossover GA (${\mbox{\texttt{GAD}}}$), Differential Evolution (${\mbox{\texttt{DE}}}$), and self-adjusting Particle Swarm Optimization (${\mbox{\texttt{PSO}}}$) (see e.g., \cite{Holland1973}, \cite{DE1970}, and \cite{eberhart1995particle}). The performance of PIMs depends on a good calibration of the breeding-cycle parameters (related to the BAs involved), as well as on vital aspects of the parallel island architecture, such as island communication synchronism, island migration frequency, communication topology, and migration policy. We select two successful asynchronous HePIMs from \cite{Lucas2021}, maintaining their parameters and adding the reconfiguration frequency.
The new reconfigurable HePIMs are tested in solving the unsigned reversal distance problem (URD), an ${\cal NP}$-hard problem (\cite{Caprara1997sorting}). Approaches to solve URD are applied in combinatorics to explain algebraic aspects of permutations (\cite{DELIMA201859}) and, in genomics, to analyze the evolutionary distance between genomes (\cite{kececioglu1993exact}), also represented as permutations.
\vspace{2mm}
\noindent{\bf Main contributions.}
We design new reconfigurable HePIMs that run ${\mbox{\texttt{GA}}}$, ${\mbox{\texttt{GAD}}}$, ${\mbox{\texttt{DE}}}$, and ${\mbox{\texttt{PSO}}}$\ in their islands, using two successful asynchronous topologies from \cite{Lucas2021}. Non-reconfigurable HePIMs computed solutions competitive with most HoPIMs. The new reconfigurable architectures showed promising results, computing solutions that exceed the quality of pure HePIMs and are very competitive with the best-adapted HoPIMs, namely those running ${\mbox{\texttt{DE}}}$\ in all their islands. The heterogeneity of the new model effectively shares, through the migration policy, the good results experienced by individuals guided by different BAs in each island with the whole architecture. Furthermore, the reconfiguration ability shares the good experiences of the BAs in each island of the model. By adding the reconfiguration capability, the new model exceeds the flexibility of HoPIMs (all islands may update their BA to a single BA) and of HePIMs (reconfiguration may update island BAs to the fixed configuration of any non-reconfigurable HePIM).
\vspace{2mm}
\noindent{\bf Organization.}
Sec. \ref{sec:background} discusses PIMs, the unsigned reversal distance problem, and the four selected BAs: ${\mbox{\texttt{GA}}}$, ${\mbox{\texttt{GAD}}}$, ${\mbox{\texttt{DE}}}$, and ${\mbox{\texttt{PSO}}}$. Sec. \ref{sec:topologies} introduces the new reconfigurable HePIMs, explaining how the different BAs are reconfigured. Then, Sec. \ref{sec:experimentsaccuracy} presents experiments and discusses accuracy results and statistical analysis. Finally, Sec. \ref{sec:relatedwork} presents related work, and Sec. \ref{sec:conclusion} concludes and discusses future work. The source code and data used in the experiments are available at \href{http://genoma.cic.unb.br}{http://genoma.cic.unb.br}.
\section{Background}\label{sec:background}
\subsection{Parallel island model (PIM)}\label{ssec:pim}
PIMs were proposed initially for GAs \cite{Crainic2003} and, besides improving speed-up, such models are also expected to boost the quality of the solutions provided by sequential ${\mbox{\texttt{GA}}}$.
The population is distributed among islands, whose number is determined by the developer, and the islands run their BAs in parallel.
The connection between the islands establishes the model's topology. \textit{Static} PIMs maintain the connections fixed during the execution, whereas \textit{dynamic} models admit changes during the process.
Linked islands exchange individuals to evolve. Such a transfer can be uni- or bi-directional. Different topologies and strategies to implement them are available (e.g. \cite{Duarte2020,Lucas2020,Sudholt2015parallel}).
\textit{Homogeneous} PIMs execute the same BA in all islands, whereas \textit{heterogeneous} models admit different BAs running in their islands.
Figure \ref{fig:heterogeneousBTree} illustrates a heterogeneous static bi-directional tree topology. The edges show the connections between islands and remain unchanged, while the vertices represent the islands with their BAs.
A \textit{migration policy} guides the exchange of individuals between islands during the evolutionary process.
PIMs have breeding-cycle and migration parameters tuned to improve the quality of solutions. The migration parameters are briefly presented below. Some of them rely on a classification of individuals as {\bf best}, {\bf worst} and {\bf random}, based on a rank established according to their fitness: the first half of the rank corresponds to the best individuals and the second half to the worst, whereas random individuals are selected at random.
\begin{itemize}
\item Individuals number ({\it IN}): number of individuals emigrating from each island.
\item Emigrant Individuals ({\it EMI}): rules the type of individuals selected for emigration among:
1. {\bf best}, 2. {\bf worst}, and 3. {\bf random}.
\item Immigrant Individuals ({\it IMI}): determines the type of individuals in the target island replaced by immigrants among:
1. {\bf worst}, 2. {\bf random}, and 3. {\bf similar}. Similar individuals have the same classification as their replacement immigrants according to the fitness rank.
\item Emigration Policy ({\it EP}): defines whether individuals are {\bf cloned} or {\bf removed} in the local island when they emigrate to the target island.
\item Migration Interval ({\it MI}): corresponds to a percentage of iterations of the evolutionary process, called generations, after which the migration process is redone.
Each island separately evolves its population by $\textit{MI} \times \textit{maxIt}$ generations, where \textit{maxIt} is the total number of iterations performed by each BA.
\end{itemize}
PIMs are classified according to the synchroneity in which islands evolve their population. In \textit{synchronous} PIMs, islands evolve performing each generation simultaneously, whereas in \textit{asynchronous} PIMs, islands evolve independently, even during migration. The latter mimics the behavior found in nature.
Here, we introduce reconfigurable heterogeneous PIMs. At a fixed percentage of generations, called the {\it Reconfiguration Frequency (RF)}, the islands holding the best and the worst solutions are identified according to a metric based on the fitness average and the variance of the population. Then, the worst island updates its BA to the BA applied by the best island.
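A minimal sketch of one reconfiguration step is given below. For simplicity, islands are ranked here by fitness average alone (URD is a minimization problem); the metric described above also takes the population variance into account, and the data layout is an illustrative assumption.

```python
from statistics import mean

def reconfigure(islands):
    """One reconfiguration step (sketch). `islands` maps an island id
    to {'ba': algorithm name, 'fitnesses': population fitness values}.
    The worst-ranked island adopts the BA of the best-ranked island."""
    ranked = sorted(islands, key=lambda i: mean(islands[i]["fitnesses"]))
    best, worst = ranked[0], ranked[-1]
    islands[worst]["ba"] = islands[best]["ba"]   # worst adopts best's BA
    return best, worst

islands = {
    0: {"ba": "DE",  "fitnesses": [5, 6, 7]},
    1: {"ba": "GA",  "fitnesses": [9, 10, 11]},
    2: {"ba": "PSO", "fitnesses": [7, 8, 9]},
}
best, worst = reconfigure(islands)   # island 1 now runs DE
```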
\subsection{Case-study}\label{subsec:case}
The evolutionary distance between two organisms can be computed as the number of rearrangements needed to transform a genome into another one by using some evolutionary measure.
In this work, we consider the minimum number of reversals to compute the distance between unichromosomal organisms.
Permutations on $\{1,\cdots,n \}$ represent a genome containing $n$ genes.
Given a genome $\pi=(\pi_1, \pi_2, ..., \pi_n)$, where $1\leq \pi_i\leq n$ for $1 \leq i \leq n$, a reversal $\rho^{j,k}$, for $1\leq j \leq k \leq n$, transforms $\pi$ into $\pi'=(\cdots, \pi_{j-1},\pi_k,\cdots,\pi_j, \pi_{k+1},\cdots)$; that is, it inverts the order of the elements between $\pi_j$ and $\pi_k$.
If the orientation of the genes is known, each one receives a positive or negative sign, and the genome is a signed permutation. There are two evolutionary problems related to computing the distance by reversals. The signed reversal distance (SRD) problem asks for the minimum number of reversals needed to transform a signed permutation into another. On the other hand, the unsigned reversal distance (URD) problem consists of computing such a number between unsigned permutations, in which the orientation of the genes is unknown. It is well known that SRD belongs to class $\cal P$ \cite{Hannenhall1999}, whereas URD is an ${\cal NP}$-hard problem \cite{Caprara1997sorting}.
Our models are applied to solve URD. The fitness used by the algorithms is computed over signed permutations, generated after a random assignment of signs to each gene of a permutation.
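The reversal operation $\rho^{j,k}$ defined above can be sketched as follows; `reversal` and `signed_reversal` are illustrative helper names, not code from our implementation.

```python
def reversal(pi, j, k):
    """Apply the reversal rho^{j,k} (1-indexed, inclusive) to the
    permutation pi, inverting the segment between positions j and k."""
    j, k = j - 1, k - 1                              # to 0-indexed
    return pi[:j] + pi[j:k + 1][::-1] + pi[k + 1:]

def signed_reversal(pi, j, k):
    """Signed variant: the reversed segment also flips orientation."""
    j, k = j - 1, k - 1
    return pi[:j] + tuple(-x for x in reversed(pi[j:k + 1])) + pi[k + 1:]

assert reversal((1, 2, 3, 4, 5, 6), 2, 4) == (1, 4, 3, 2, 5, 6)
assert signed_reversal((1, 2, 3), 1, 2) == (-2, -1, 3)
```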
\subsection{Local Evolutionary Engines: Bio-inspired Algorithms}
Four BAs, widely used for analyzing optimization problems and with distinct adaptability characteristics, are applied.
\begin{itemize}
\item Simple Genetic Algorithm (${\mbox{\texttt{GA}}}$):
to evolve the local population, ${\mbox{\texttt{GA}}}$\
follows a breeding cycle in which the best parents are selected and produce offspring by
applying one-point crossover (Fig. \ref{fig:crossover} (a)). The descendants then replace the worst individuals in the current population. The breeding cycle relies on four parameters, namely, the percentages of {\it selection} and {\it replacement}, and the probabilities of {\it mutation} and {\it crossover}. ${\mbox{\texttt{GA}}}$\ was developed by J. H. Holland in the 1970s \cite{Holland1973}.
\item Double-point Crossover Genetic Algorithm (${\mbox{\texttt{GAD}}}$):
it behaves similarly to ${\mbox{\texttt{GA}}}$, except for the crossover technique, illustrated in Fig. \ref{fig:crossover} (b), and the way the local population evolves: in contrast with ${\mbox{\texttt{GA}}}$, in ${\mbox{\texttt{GAD}}}$\ the descendants replace randomly selected individuals.
\item Differential Evolution (${\mbox{\texttt{DE}}}$):
it was proposed by Storn and Price \cite{DE1970} as a method to optimize functions over the real multidimensional space $\mathbb{R}^n$. We adapt the algorithm by restricting the domain of the function to the set of permutations. Two main parameters guide the evolutionary process: the {\it mutation factor} $F_M$, applied to individuals randomly selected from the population to generate mutants, and the {\it crossover probability} $P_C$.
The local population evolves by replacing individuals having the worst fitness with mutants.
\item Self-adjusting Particle Swarm Optimization (${\mbox{\texttt{PSO}}}$):
it was introduced by Eberhart and Kennedy \cite{eberhart1995particle} and is based on the behavior of social organisms in groups. Originally, ${\mbox{\texttt{PSO}}}$\ was developed to optimize continuous functions from particles' velocity and position in an $n$-dimensional space. At each iteration, the vector representing a particle velocity is built from the best positions of such a particle and all particles, the velocity of the particle in the previous iteration, the individual and global acceleration (that influence the distance a particle can cover), and the weight of inertia (momentum). In this paper, we use the ${\mbox{\texttt{PSO}}}$\ proposed in \cite{pso2011}, which is self-adaptive since momentum and the individual and global acceleration coefficients are self-tuning during the search process.
\end{itemize}
To adapt ${\mbox{\texttt{PSO}}}$\ and ${\mbox{\texttt{DE}}}$\ to URD, each randomly generated $n$-dimensional real vector $v$ is associated with a signed permutation constructed from the unsigned permutation $\pi = (\pi_1, \ldots, \pi_n)$ given as input: if the $i$-th entry of $v$ belongs to the interval $[0, 0.5)$, then $\pi_i$ receives a negative orientation; if it belongs to $[0.5, 1]$, then $\pi_i$ is assigned a positive orientation. If an entry of the continuous vector representation falls outside the interval $[0,1]$, a random orientation is assigned instead.
For ${\mbox{\texttt{GA}}}$\ and ${\mbox{\texttt{GAD}}}$, the orientation of the genes in each individual is randomly generated as $\pm 1$.
After the transformation of an unsigned into a signed permutation, the linear-time algorithm for the SRD problem proposed by Bader \emph{et al.} \cite{Bader2001linear} computes the fitness of each particle/individual.
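The mapping from a continuous vector to a signed permutation described above can be sketched as follows; `to_signed` is an illustrative helper name.

```python
import random

def to_signed(pi, v):
    """Map an n-dimensional real vector v onto a signed version of the
    unsigned permutation pi: entries in [0, 0.5) give a negative
    orientation, entries in [0.5, 1] a positive one, and entries
    outside [0, 1] receive a random orientation."""
    signed = []
    for gene, x in zip(pi, v):
        if 0.0 <= x < 0.5:
            signed.append(-gene)          # negative orientation
        elif 0.5 <= x <= 1.0:
            signed.append(gene)           # positive orientation
        else:
            signed.append(random.choice((-1, 1)) * gene)
    return tuple(signed)

assert to_signed((3, 1, 2), (0.2, 0.9, 0.5)) == (-3, 1, 2)
```

The resulting signed permutation is then passed to the SRD fitness evaluation.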
\begin{figure}[!ht]
\centering
{\scriptsize
\[\begin{array}{c}
\mbox{(a) One-point crossover}\\[2mm]
\begin{array}{c}
\begin{array}{|@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}||@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}|}
\hline 1&2&3&4&5&6&7&8 \\
\hline
\end{array}\\[2mm]
{\color{cyan}\begin{array}{|@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}||@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}|}
\hline
5&8&6&4&3&1&2&7 \\
\hline
\end{array}}
\end{array}
{\color{blue} \huge\leadsto}
\begin{array}{c}
\begin{array}{|@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}||@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}|}
\hline 1&2&3&{\color{cyan}4}&{\color{cyan}3}&{\color{cyan}1}&{\color{cyan}2}&{\color{cyan}7} \\
\hline
\end{array}\\[1mm]
\begin{array}{|@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}||@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}|}
\hline {\color{cyan}5}&{\color{cyan}8}&{\color{cyan}6}&4&5&6&7&8 \\
\hline
\end{array}
\end{array}
\end{array}
\]
\[\begin{array}{c}
\mbox{(b) Double-point crossover}\\[2mm]
\begin{array}{c}
\begin{array}{|@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}||@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}||@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}|}
\hline 1&2&3&4&5&6&7&8 \\
\hline
\end{array}\\[2mm]
{\color{cyan}\begin{array}{|@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}||@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}||@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}|}
\hline
5&8&6&4&3&1&2&7 \\
\hline
\end{array}}
\end{array}
{\color{blue} \huge\leadsto}
\begin{array}{c}
\begin{array}{|@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}||@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}||@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}|}
\hline 1&2&3&{\color{cyan}4}&{\color{cyan}3}& 6&7&8\\
\hline
\end{array}\\[1mm]
\begin{array}{|@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}||@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}||@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}|}
\hline {\color{cyan}5}&{\color{cyan}8}&{\color{cyan}6}&4&5&{\color{cyan}1}&{\color{cyan}2}&{\color{cyan}7} \\
\hline
\end{array}
\end{array}
\end{array}
\]}
\vspace{-4mm}
\caption{One-point and double-point crossover operators.}
\label{fig:crossover}
\end{figure}
\section{Communication Topologies} \label{sec:topologies}
We select a static and a dynamic topology that successfully addressed URD in \cite{Lucas2020} for homogeneous PIMs and in \cite{Lucas2021} for non-reconfigurable heterogeneous PIMs. We choose asynchronous models since the dynamic asynchronous HePIM was the one that provided the best results.
The static topology is a 12-island bi-directional binary tree (tree on the left in Figure \ref{fig:heterogeneousBTree}), and the dynamic topology is the 12-island complete graph (graph on the left in Figure \ref{fig:heterogeneousCGraph}). In the complete graph topology, all pairs of islands may exchange individuals. The dynamism of island communication is achieved by exploiting the diversity and quality within each island, measured by the fitness variance and average. Variance measures an island's diversity: high variance represents high diversity among individuals, improving the chances of evolution within islands. The fitness average measures the quality of island populations.
According to these metrics, the islands are ranked as {\bf good}, {\bf bad}, and {\bf medium}. Migrations exchange individuals between good and bad islands, and between medium islands only (for short, {\it gbmm}). Reconfiguration uses the same metric to update the BA executed by an island according to the best performance experienced by the other islands. Thus, reconfigurable HePIMs perform both migration and BA updates at certain intervals during their evolution.
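The gbmm pairing can be sketched as follows. This is a minimal sketch under stated assumptions: islands are assumed to be split into thirds (good/medium/bad) and paired best-with-worst; the exact split and pairing used by the model may differ.

```python
def gbmm_pairs(islands_by_rank):
    """Sketch of gbmm migration pairing: given island ids sorted from
    best to worst by the quality/diversity metric, pair good islands
    with bad ones and medium islands with each other. Splitting into
    thirds is an assumption of this sketch."""
    n = len(islands_by_rank)
    third = n // 3
    good = islands_by_rank[:third]
    medium = islands_by_rank[third:n - third]
    bad = islands_by_rank[n - third:]
    pairs = list(zip(good, reversed(bad)))                 # good <-> bad
    pairs += [(medium[i], medium[i + 1])                   # medium <-> medium
              for i in range(0, len(medium) - 1, 2)]
    return pairs

pairs = gbmm_pairs(list(range(12)))   # 12-island model
```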
The models introduced in this paper are ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny Tr12A}}$\ and ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny gbmm12A}}$. The former uses the static tree topology and the latter the dynamic complete graph topology. Both models are asynchronous and evolve through a refined migration policy that exchanges individuals while maintaining the diversity of the model, and through the new feature of dynamic reconfiguration, which updates the BAs executed in their islands and thereby improves the performance of the island model. Reconfiguration cycles for ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny Tr12A}}$\ and ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny gbmm12A}}$ are illustrated in Figures \ref{fig:heterogeneousBTree} and \ref{fig:heterogeneousCGraph}, respectively.
\begin{figure*}[!ht]
\centering
\includegraphics[width=1\textwidth]{HeteroReconfigStatic.eps}
\caption{Example of reconfiguration in the static binary tree topology. Red dotted nodes represent islands that have undergone current reconfiguration, while black dotted nodes label islands that have undergone reconfiguration in previous cycles.}
\label{fig:heterogeneousBTree}
\end{figure*}
\begin{figure*}[!ht]
\centering
\includegraphics[width=1\textwidth]{HeteroReconfigDynamic.eps}
\caption{Example of reconfiguration in the dynamic complete graph topology. Red dotted nodes represent islands that have undergone current reconfiguration, while black dotted nodes label islands that have undergone reconfiguration in previous cycles.}
\label{fig:heterogeneousCGraph}
\end{figure*}
\section{Experiments and analysis of accuracy}\label{sec:experimentsaccuracy}
As in \cite{Lucas2021}, all PIMs, including the new reconfigurable models, were implemented using the {\tt MPI} library of {\tt C} in Linux, and, for the sake of comparison, experiments were executed on a computational platform with two Xeon E5-2620 2.4 GHz six-core processors with hyper-threading.
The baseline for comparing the performance of PIMs is the sequential versions of ${\mbox{\texttt{GA}}}$, ${\mbox{\texttt{GAD}}}$, ${\mbox{\texttt{DE}}}$\ and ${\mbox{\texttt{PSO}}}$, with populations of size $24 n \log n$ and breeding cycles fixed at $n$. Also, we select eight 12-island asynchronous HoPIMs, designed in \cite{Lucas2020}, each running exclusively one of the BAs: ${\mbox{\texttt{GA}}}$, ${\mbox{\texttt{GAD}}}$, ${\mbox{\texttt{DE}}}$\ or ${\mbox{\texttt{PSO}}}$. Furthermore, we select two asynchronous HePIMs designed in \cite{Lucas2021}.
The homogeneous models are ${\cal P}^{\mbox{\texttt{GA}}}_{\mbox{\tiny Tr12A}}$, ${\cal P}^{\mbox{\texttt{GAD}}}_{\mbox{\tiny Tr12A}}$, ${\cal P}^{\mbox{\texttt{DE}}}_{\mbox{\tiny Tr12A}}$, ${\cal P}^{\mbox{\texttt{PSO}}}_{\mbox{\tiny Tr12A}}$, ${\cal P}^{\mbox{\texttt{GA}}}_{\mbox{\tiny gbmm12A}}$, ${\cal P}^{\mbox{\texttt{GAD}}}_{\mbox{\tiny gbmm12A}}$, ${\cal P}^{\mbox{\texttt{DE}}}_{\mbox{\tiny gbmm12A}}$, and ${\cal P}^{\mbox{\texttt{PSO}}}_{\mbox{\tiny gbmm12A}}$. The superscripts denote the BA used by the homogeneous model; the subscript prefixes indicate whether the model uses the static tree ({\tt Tr}) or the dynamic complete graph topology ({\tt gbmm}); and the subscript suffix {\tt 12A} indicates the number of islands and that the model is asynchronous. From \cite{Lucas2021}, we select the heterogeneous PIMs ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny Tr12A}}$\ and ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny gbmm12A}}$, the latter being the HePIM that provided the best-quality results.
\subsection{Parameter Setup}
The parameters for BAs, HoPIMs, and non-reconfigurable HePIMs were those obtained in \cite{Lucas2021}. The \emph{parameter tuning} adopted the ``taxonomy T1'' of \cite{EIBEN201119}.
Table \ref{tab:parameters} presents the parameter ranges. For percentages, the tested values range between $2\%$ and $100\%$. For probabilities, the values range from $0.02$ to $1.0$, and for the mutation parameter from $0.01$ to $0.02$. For ${\mbox{\texttt{DE}}}$, the $F_M$ parameter ranges from $1\%$ to $2\%$ since values above $2\%$ degrade the quality of solutions.
For ${\mbox{\texttt{PSO}}}$, the parameters to guide the particles in the search space are self-adjusting.
The setup tested BAs, HoPIMs and HePIMs over packages of twenty $n$-gene permutations, $n \in \{50,60,\ldots,140,150\}$. All parameter reference values were evaluated, and those that provided the best solutions were selected (see Tables \ref{table:parametersettingGA_GAD} and
\ref{table:parametersettingDE_PSO}). HePIMs use the same evolutionary parameters as HoPIMs, and only the migration parameters were calibrated. Reconfigurable HePIMs add a reconfiguration frequency to the associated HePIMs (see Table \ref{table:parametersettingHet}).
\begin{table}[!t]
{\small
\caption{Estimated Values for the Parameters}
\label{tab:parameters}
\vspace{-3mm}
\begin{center}
\begin{tabular}{|c|c|c|}
\cline{2-3}
\multicolumn{1}{c|}{}& \multicolumn{1}{|c|}{Parameter}& \multicolumn{1}{|c|}{Estimated values}\\
\hline
\multirow{4}{*}{${\mbox{\texttt{GA}}}$\ and ${\mbox{\texttt{GAD}}}$}&\mbox{\it crossover} & $0.02, 0.04,\cdots,0.98, 1.0$ \\ \cline{2-3}
&\mbox{\it mutation} & $0.01, 0.011,\cdots,0.019, 0.02$\\ \cline{2-3}
&\mbox{\it selection} & $2\%, 4\%,\cdots,98\%, 100\%$\\ \cline{2-3}
&\mbox{\it replacement} & $2\%, 4\%,\cdots,98\%, 100\%$ \\ \hline
\multirow{2}{*}{${\mbox{\texttt{DE}}}$}
&\mbox{\it $P_C$} & $0.02, 0.04,\cdots,0.98, 1.0$ \\ \cline{2-3}
&\mbox{\it $F_M$} & $1\%, 1.1\%,\cdots,1.9\%, 2\%$
\\ \hline
\multirow{5}{*}{Migration}
& \mbox{\it IN} & 1,2,3,4,5,6,7,8,9,10,11,12,13\\ \cline{2-3}
&\mbox{\it EMI} & 1=Best, 2=Worst, 3=Random \\ \cline{2-3}
&\mbox{\it EP} & 1=Clone, 2=Remove\\ \cline{2-3}
&\mbox{\it IMI} & 1=Worst, 2=Random, 3=Similar\\ \cline{2-3}
&\mbox{\it MI} & $2\%, 4\%,\cdots,98\%, 100\%$ \\\hline
\end{tabular}
\end{center}}
\vspace{-6mm}
\end{table}
\begin{table}[!t]
{\small
\caption{Parameter Settings for ${\mbox{\texttt{GA}}}$, ${\mbox{\texttt{GAD}}}$, and associated HoPIMs.}
\label{table:parametersettingGA_GAD}
\vspace{-3mm}
\begin{center}
\begin{tabular}
{|@{\hspace{0.2mm}}c@{\hspace{0.2mm}}
|@{\hspace{0.2mm}}c@{\hspace{0.2mm}}
|@{\hspace{0.2mm}}c@{\hspace{0.2mm}}
|@{\hspace{0.2mm}}c@{\hspace{0.2mm}}
|@{\hspace{0.2mm}}c@{\hspace{0.2mm}}
|@{\hspace{0.2mm}}c@{\hspace{0.2mm}}
|@{\hspace{0.2mm}}c@{\hspace{0.2mm}}|}
\hline
\multicolumn{1}{|c|}{}&
\multicolumn{1}{|c|}{}&
\multicolumn{2}{|c|}{${\cal P}^{\mbox{\texttt{GA}}}$}&
\multicolumn{1}{|c|}{}&
\multicolumn{2}{|c|}{${\cal P}^{\mbox{\texttt{GAD}}}$}\\
\hline
\multicolumn{1}{|@{\hspace{0.2mm}} c@{\hspace{0.2mm}}|}{Parameter} &
\multicolumn{1}{|@{\hspace{0.2mm}} c@{\hspace{0.2mm}}|}{${\mbox{\texttt{GA}}}$}&
\multicolumn{1}{|@{\hspace{0.2mm}} c@{\hspace{0.2mm}}|}{${\mbox{\tiny Tr12A}}$}&
\multicolumn{1}{|@{\hspace{0.2mm}} c@{\hspace{0.2mm}}|}{${\mbox{\tiny gbmm12A}}$}&
\multicolumn{1}{|@{\hspace{0.2mm}} c@{\hspace{0.2mm}}|}{${\mbox{\texttt{GAD}}}$}&
\multicolumn{1}{|@{\hspace{0.2mm}} c@{\hspace{0.2mm}}|}{${\mbox{\tiny Tr12A}}$}&
\multicolumn{1}{|@{\hspace{0.2mm}} c@{\hspace{0.2mm}}|}{${\mbox{\tiny gbmm12A}}$}\\
\hline
{\it crossover} &$.90$ &$.98$ & $.96$ &
$.92$ &$.98$ & $.98$\\ \hline
{\it mutation} &$.02$ &$.015$ &$.011$ &
$.01$ &$.01$ &$.01$ \\ \hline
{\it selection} &$60\%$ &$92\%$ &$94\%$ &
$98\%$ &$98\%$ &$94\%$ \\ \hline
{\it replacement} &$60\%$ &$70\%$ &$70\%$ &
$90\%$ &$80\%$ &$90\%$ \\ \hline
\mbox{\it IN} & &9 &5 & &12 &5 \\ \hline
\mbox{\it EMI} & &1 &1 & &1 &1 \\ \hline
\mbox{\it EP} & &2 &2 & &2 &1\\ \hline
\mbox{\it IMI} & &1 &1 & &1 &1 \\ \hline
\mbox{\it MI} & &$30\%$ &$30\%$& &$14\%$ &$12\%$ \\\hline
\end{tabular}
\end{center}
}
\vspace{-5mm}
\end{table}
\begin{table}[!t]
{\small
\caption{Parameter Settings for ${\mbox{\texttt{DE}}}$, ${\mbox{\texttt{PSO}}}$, and associated HoPIMs.}
\label{table:parametersettingDE_PSO}
\vspace{-3mm}
\begin{center}
\begin{tabular}
{|@{\hspace{0.2mm}}c@{\hspace{0.2mm}}
|@{\hspace{0.2mm}}c@{\hspace{0.2mm}}
|@{\hspace{0.2mm}}c@{\hspace{0.2mm}}
|@{\hspace{0.2mm}}c@{\hspace{0.2mm}}
|@{\hspace{0.2mm}}c@{\hspace{0.2mm}}
|@{\hspace{0.2mm}}c@{\hspace{0.2mm}}|}
\hline
\multicolumn{1}{|c|}{} &
\multicolumn{1}{|c|}{} &
\multicolumn{2}{|c|}{${\cal P}^{\mbox{\texttt{DE}}}$} &
\multicolumn{2}{|c|}{${\cal P}^{\mbox{\texttt{PSO}}}$}
\\
\hline
\multicolumn{1}{|@{\hspace{0.2mm}} c@{\hspace{0.2mm}}|}{Parameter}&
\multicolumn{1}{|@{\hspace{0.2mm}} c@{\hspace{0.2mm}}|}{${\mbox{\texttt{DE}}}$}&
\multicolumn{1}{|@{\hspace{0.2mm}} c@{\hspace{0.2mm}}|}{${\mbox{\tiny Tr12A}}$}&
\multicolumn{1}{|@{\hspace{0.2mm}} c@{\hspace{0.2mm}}|}{${\mbox{\tiny gbmm12A}}$}&
\multicolumn{1}{|@{\hspace{0.2mm}} c@{\hspace{0.2mm}}|}{${\mbox{\tiny Tr12A}}$}&
\multicolumn{1}{|@{\hspace{0.2mm}} c@{\hspace{0.2mm}}|}{${\mbox{\tiny gbmm12A}}$}\\
\hline
$P_C$ &$.74$ &$.72$ &$.78$ &&\\ \hline
$F_M$ & $1\%$ &$1.4\%$ &$1\%$ &&\\ \hline
\mbox{\it IN} & &3 &5 &6 &5\\ \hline
\mbox{\it EMI} & &1 &1 &3 &3\\ \hline
\mbox{\it EP} & &1 &2 &2 &2\\ \hline
\mbox{\it IMI} & &1 &1 &1 &2 \\ \hline
\mbox{\it MI} & &$14\%$ &$12\%$ &$12\%$ &$22\%$\\ \hline
\end{tabular}
\end{center}}
\vspace{-5mm}
\end{table}
\begin{table}[!t]
{\small
\caption{Parameter Settings for HePIMs.}
\label{table:parametersettingHet}
\vspace{-3mm}
\begin{center}
\begin{tabular}
{|@{\hspace{0.2mm}}c@{\hspace{0.2mm}}
|@{\hspace{0.2mm}}c@{\hspace{0.2mm}}
|@{\hspace{0.2mm}}c@{\hspace{0.2mm}}
|@{\hspace{0.2mm}}c@{\hspace{0.2mm}}
|@{\hspace{0.2mm}}c@{\hspace{0.2mm}}
|@{\hspace{0.2mm}}c@{\hspace{0.2mm}}
|@{\hspace{0.2mm}}c@{\hspace{0.2mm}}
|@{\hspace{0.2mm}}c@{\hspace{0.2mm}}
|@{\hspace{0.2mm}}c@{\hspace{0.2mm}}|}
\hline
\multicolumn{1}{|@{\hspace{0.2mm}} c@{\hspace{0.2mm}}|}{Parameter}&
\multicolumn{1}{|@{\hspace{0.2mm}} c@{\hspace{0.2mm}}|}{${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny Tr12A}}$}&
\multicolumn{1}{|@{\hspace{0.2mm}} c@{\hspace{0.2mm}}|}{${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny Tr12A}}$}&
\multicolumn{1}{|@{\hspace{0.2mm}} c@{\hspace{0.2mm}}|}{${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny gbmm12A}}$}&
\multicolumn{1}{|@{\hspace{0.2mm}} c@{\hspace{0.2mm}}|}{${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny gbmm12A}}$}\\
\hline
\mbox{\it IN} &3 &3 &6 &6 \\ \hline
\mbox{\it EMI} &1 &1 &3 &3 \\ \hline
\mbox{\it EP} &2 &2 &1 &1 \\ \hline
\mbox{\it IMI} &3 &3 &3 &3\\ \hline
\mbox{\it MI} &$10\%$ &$10\%$ & $14\%$ & $14\%$ \\\hline
\mbox{\it RF} & & 14\% & &24\% \\ \hline
\end{tabular}
\end{center}}
\vspace{-8mm}
\end{table}
\subsection{Analysis of Accuracy}\label{sec:analysis}
HePIMs use parameters taken from Tables \ref{table:parametersettingGA_GAD}, \ref{table:parametersettingDE_PSO} and \ref{table:parametersettingHet} according to the parameter setting obtained in \cite{Lucas2021}. In addition, the new reconfigurable HePIMs performed reconfigurations after every 14\% and 24\% of the generations, giving a total of seven and four reconfiguration cycles for the static tree and the dynamic complete graph topologies, respectively.
For each permutation size, $n \in \{100,110,\ldots,150\}$, one package of one hundred unsigned permutations with $n$ genes was randomly generated.
All PIMs were executed ten times on each of the one hundred permutations of size $n$, and the average of these executions for each permutation is taken as the result. The average gives the computed number of reversals for each unsigned permutation.
The accuracies of non-reconfigurable and reconfigurable HePIMs are compared. The radar chart in Fig. \ref{fig:sequentialPIMs}, from previous experiments, shows that ${\mbox{\texttt{DE}}}$\ is the BA best adapted to the URD problem \cite{Lucas2021}. In contrast, ${\mbox{\texttt{PSO}}}$\ provided the worst results, while ${\mbox{\texttt{GA}}}$\ and ${\mbox{\texttt{GAD}}}$, in this order, gave competitive results. The six radii of the chart represent the accuracy for inputs of sizes $100, 110, \ldots, 150$. The ranges and scales on each radius of the radar chart are adjusted for the sake of presentation.
PIMs with the tree and the complete graph topologies outperformed, as expected, their sequential versions \cite{Lucas2021}. The radar charts on the left in Figs. \ref{fig:het_tree12A} and \ref{fig:het_gbmm12A} show that the HoPIMs maintained the competitiveness order of their best- and worst-adapted BAs: the best-quality solutions were obtained by ${\cal P}^{\mbox{\texttt{DE}}}_{\mbox{\tiny Tr12A}}$ and the worst by ${\cal P}^{\mbox{\texttt{PSO}}}_{\mbox{\tiny Tr12A}}$\ for the static tree model, while for the dynamic complete graph topology, the best solutions were computed by ${\cal P}^{\mbox{\texttt{DE}}}_{\mbox{\tiny gbmm12A}}$\ and the worst by ${\cal P}^{\mbox{\texttt{PSO}}}_{\mbox{\tiny gbmm12A}}$. Despite the fact that ${\mbox{\texttt{GAD}}}$\ provided better accuracy than ${\mbox{\texttt{GA}}}$, the homogeneous models ${\cal P}^{\mbox{\texttt{GA}}}_{\mbox{\tiny Tr12A}}$\ and ${\cal P}^{\mbox{\texttt{GA}}}_{\mbox{\tiny gbmm12A}}$\ respectively outperformed ${\cal P}^{\mbox{\texttt{GAD}}}_{\mbox{\tiny Tr12A}}$\ and ${\cal P}^{\mbox{\texttt{GAD}}}_{\mbox{\tiny gbmm12A}}$.
Table \ref{tab:reconfigEnd} exemplifies the final island configuration. We ran ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny Tr12A}}$ and ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny gbmm12A}}$\ over one hundred inputs of size 100 and computed the average of the final distribution of the four BAs over the islands. Surprisingly, the proportion of islands running ${\mbox{\texttt{GA}}}$\ is dominant for ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny Tr12A}}$, which can be explained by the fact that the final average results of the sets of islands with the same BA are very similar for this model (see the right chart in Fig. \ref{fig:het_tree12A}). On the other hand, the distribution of BAs over islands for ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny gbmm12A}}$\ is better balanced (cf. the right chart in Fig. \ref{fig:het_gbmm12A}).
\begin{figure}[!ht]
\centering
\includegraphics[width=0.46\textwidth]{chart_sequentials.eps}
\caption{Accuracy of the sequential BAs ${\mbox{\texttt{DE}}}$, ${\mbox{\texttt{GA}}}$, ${\mbox{\texttt{GAD}}}$\ and ${\mbox{\texttt{PSO}}}$.}
\label{fig:sequentialPIMs}
\end{figure}
Figs. \ref{fig:het_tree12A} and \ref{fig:het_gbmm12A} include the accuracy of the HePIMs ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny Tr12A}}$\ and ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny gbmm12A}}$, respectively. The charts on the left show that the accuracy of these models is very competitive with respect to the HoPIMs with the same architecture, being surpassed only by ${\cal P}^{\mbox{\texttt{DE}}}_{\mbox{\tiny Tr12A}}$\ and ${\cal P}^{\mbox{\texttt{DE}}}_{\mbox{\tiny gbmm12A}}$, respectively. The charts on the right show (dashed lines) the best final average results obtained by each set of three islands executing the same BA in the HePIMs (${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny Tr12A}}$\ and ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny gbmm12A}}$, respectively).
These charts make it evident that the migration policy and the application of diverse BAs in the islands of the heterogeneous architectures successfully propagate the results obtained in all islands, but are not enough to outperform the quality of the homogeneous models running the best-adapted BA, ${\cal P}^{\mbox{\texttt{DE}}}_{\mbox{\tiny Tr12A}}$\ and ${\cal P}^{\mbox{\texttt{DE}}}_{\mbox{\tiny gbmm12A}}$, respectively.
Performance of the new reconfigurable models is also included in Figs. \ref{fig:het_tree12A} and \ref{fig:het_gbmm12A}. The reconfigurable HePIM ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny Tr12A}}$\ computed better-quality results than the pure heterogeneous model ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny Tr12A}}$, closing the gap between the latter and the best-adapted homogeneous architecture ${\cal P}^{\mbox{\texttt{DE}}}_{\mbox{\tiny Tr12A}}$, running ${\mbox{\texttt{DE}}}$\ (see the radar chart on the left in Fig. \ref{fig:het_tree12A}). This makes it evident that adding the versatility of reconfiguration to heterogeneous PIMs may improve their performance. On the other hand, the reconfigurable HePIM ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny gbmm12A}}$\ computed quality results that are indistinguishable from the competitive ones computed by the non-reconfigurable heterogeneous model ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny gbmm12A}}$.
\begin{figure*}[!ht]
\centering
\includegraphics[width=0.52\textwidth]{chart_hom_rec_het_tr12A.eps}\hspace{-2mm}
\includegraphics[width=0.48\textwidth]{chart_rec_het_tr12A.eps}
\caption{(a) Accuracy of ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny Tr12A}}$, ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny Tr12A}}$ and related HoPIMs; (b) Accuracy of ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny Tr12A}}$, ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny Tr12A}}$ and average results of each set of islands in ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny Tr12A}}$\ running the same BA.}
\label{fig:het_tree12A}
\vspace{-4mm}
\end{figure*}
\begin{figure*}[!ht]
\centering
\includegraphics[width=0.528\textwidth]{chart_hom_rec_het_gbmm12A.eps}\hspace{-2mm}
\includegraphics[width=0.462\textwidth]{chart_rec_het_gbmm12A.eps}
\caption{(a) Accuracy of ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny gbmm12A}}$, ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny gbmm12A}}$ and related HoPIMs; (b) Accuracy of ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny gbmm12A}}$, ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny gbmm12A}}$ and average results of each set of islands in ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny gbmm12A}}$\ running the same BA.}
\label{fig:het_gbmm12A}
\vspace{-4mm}
\end{figure*}
Fig. \ref{fig:heterogeneousPIMs} compares the accuracy of the four non-reconfigurable and reconfigurable HePIMs: ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny Tr12A}}$, ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny Tr12A}}$, ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny gbmm12A}}$, and ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny gbmm12A}}$.
From the experiments, it is clear that the new reconfigurable heterogeneous architectures, ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny Tr12A}}$\ and ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny gbmm12A}}$, add to the versatility of heterogeneous PIMs the flexibility of dynamically updating the BA executed in each island, promoting in this manner not only data diversity but also algorithmic dynamism. Reconfigurable heterogeneous PIMs open up a promising new exploration space where, unlike von Neumann's style of algorithmic exploration focused on efficient data management, algorithmic dynamism enters as a crucial player in the game (see, for example, \cite{Hartenstein2010}, \cite{Hartenstein2013}).
\begin{figure}[!ht]
\centering
\includegraphics[width=0.48\textwidth]{chart_rec_heterogeneous.eps}
\caption{Accuracy of the non-reconfigurable and reconfigurable HePIMs.}
\label{fig:heterogeneousPIMs}
\end{figure}
\begin{table}[!t]
{\small
\caption{Example of the final distribution of ${\mbox{\texttt{GA}}}$, ${\mbox{\texttt{GAD}}}$, ${\mbox{\texttt{DE}}}$\ and ${\mbox{\texttt{PSO}}}$.}
\label{tab:reconfigEnd}
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\cline{2-5}
\multicolumn{1}{c|}{}&
\multicolumn{1}{|c|}{${\mbox{\texttt{GA}}}$}& \multicolumn{1}{|c|}{${\mbox{\texttt{GAD}}}$}&
\multicolumn{1}{|c|}{${\mbox{\texttt{DE}}}$} &
\multicolumn{1}{|c|}{${\mbox{\texttt{PSO}}}$}
\\
\hline
\multirow{1}{*}{${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny Tr12A}}$}& 49.25\%& 4.09\% & 18.08\% & 28.58\% \\[1mm] \hline
\multirow{1}{*}{${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny gbmm12A}}$}
&29.42\% &17.5\% & 28.25\% & 24.83\% \\[1mm] \hline
\end{tabular}
\end{center}}
\end{table}
\subsection{Statistical Analysis}\label{ssec:statisticalanalysis}
Statistical tests validated the experiments at a $95\%$ confidence level, represented in the tests as $\alpha = 0.05$.
The samples are the sets of one hundred outputs obtained in Section \ref{sec:analysis}.
Initially, the Friedman test was applied to define the control algorithm. Then, Holm's test was applied to check the null hypothesis that the performance of the control algorithm is the same as that of each remaining algorithm, following the approach of Garc\'ia and Herrera \cite{garcia2008} (see also \cite{demvsar2006statistical} and \cite{derrac2011}).
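Holm's step-down procedure compares the $i$-th smallest $p$-value against $\alpha/(m-i+1)$ for $m$ hypotheses, which yields exactly the $\alpha/i$ thresholds listed in the tables below. A minimal sketch:

```python
def holm(p_values, alpha=0.05):
    """Holm's step-down procedure.

    Sort the p-values in ascending order and compare the k-th smallest
    (k = 0, 1, ...) against alpha / (m - k); stop rejecting as soon as
    one comparison fails. Returns a list of booleans, aligned with the
    input, telling whether each null hypothesis is rejected."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    rejected = [False] * m
    for step, i in enumerate(order):
        if p_values[i] <= alpha / (m - step):
            rejected[i] = True
        else:
            break  # all remaining (larger) p-values are retained too
    return rejected
```

For instance, applied to three $p$-values of roughly $6.2\cdot10^{-7}$, $3.5\cdot10^{-5}$, and $0.396$ (the size-100 row of Table \ref{table:holmHeterogeneous}), the procedure rejects the first two hypotheses and retains the third.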
Table \ref{table:holmStaticHe} presents the statistical results for the PIMs with the binary tree topology: ${\cal P}^{\mbox{\texttt{DE}}}_{\mbox{\tiny Tr12A}}$, ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny Tr12A}}$, and ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny Tr12A}}$. The Friedman test selects ${\cal P}^{\mbox{\texttt{DE}}}_{\mbox{\tiny Tr12A}}$\ as the control algorithm, and the null hypothesis is rejected for all samples. The model ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny Tr12A}}$\ is the second best, confirming the discussion in Section \ref{sec:analysis}. Table \ref{table:holmDynamicHe} shows the statistical results for the models with the complete graph topology, ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny gbmm12A}}$, ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny gbmm12A}}$, and the best homogeneous version, ${\cal P}^{\mbox{\texttt{DE}}}_{\mbox{\tiny gbmm12A}}$, which is selected as the control algorithm. Finally, Table \ref{table:holmHeterogeneous} gives the statistical tests for the four reconfigurable and non-reconfigurable HePIMs. The selected control algorithm was ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny gbmm12A}}$. Holm's procedure rejects the null hypotheses whose $p$-values fall below the corresponding $\alpha/i$ thresholds; thus, ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny gbmm12A}}$\ shows a statistically significant difference only with respect to ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny Tr12A}}$\ and ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny Tr12A}}$, while compared to ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny gbmm12A}}$\ there is no statistically significant difference, confirming the discussion in Section \ref{sec:analysis}.
\begin{table}[ht]
{ \scriptsize
\caption{Holm test for the tree topology PIMs. }
\label{table:holmStaticHe}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\multicolumn{1}{|c|}{\bfseries L}
& \multicolumn{1}{|c|}{\bfseries Control}
& \multicolumn{1}{|c|}{\bfseries i}
& \multicolumn{1}{|c|}{\bfseries Algorithm}
& \multicolumn{1}{|c|}{\bfseries \boldmath{ $p$}-value}
& \multicolumn{1}{|c|}{\bfseries \boldmath{\!$\alpha / i$\!}}\\
\multicolumn{1}{|c|}{\bfseries }
& \multicolumn{1}{|c|}{\bfseries Algorithm}
& \multicolumn{1}{|c|}{\bfseries }
& \multicolumn{1}{|c|}{\bfseries }
& \multicolumn{1}{|c|}{\bfseries }
& \multicolumn{1}{|c|}{\bfseries }\\[0.5mm]
\hline
\multirow{3}{*}{100}
& \multirow{3}{*}{}
& 2 & ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny Tr12A}}$ & 7.74959960741012E-11 & 0.025 \\[.26mm] \cline{3-6}
& ${\cal P}^{\mbox{\texttt{DE}}}_{\mbox{\tiny Tr12A}}$ & 1 & ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny Tr12A}}$ & 1.8505741383968991E-9 & 0.050 \\[0.5mm] \hline
\multirow{3}{*}{110}
& \multirow{3}{*}{}
& 2 & ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny Tr12A}}$ & 1.346035421050821E-15 & 0.025 \\[.26mm] \cline{3-6}
& ${\cal P}^{\mbox{\texttt{DE}}}_{\mbox{\tiny Tr12A}}$ & 1 & ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny Tr12A}}$ & 5.560820039745642E-15 & 0.050 \\[0.5mm]
\hline
\multirow{3}{*}{120}
& \multirow{3}{*}{}
& 2 & ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny Tr12A}}$ & 3.722840351917189E-19 & 0.025 \\[.26mm] \cline{3-6}
& ${\cal P}^{\mbox{\texttt{DE}}}_{\mbox{\tiny Tr12A}}$ & 1 & ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny Tr12A}}$ & 3.2599374788722365E-13 & 0.050 \\[0.5mm]
\hline
\multirow{3}{*}{130}
& \multirow{3}{*}{}
& 2 & ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny Tr12A}}$ & 4.218936534105464E-12 & 0.025 \\[.26mm] \cline{3-6}
& ${\cal P}^{\mbox{\texttt{DE}}}_{\mbox{\tiny Tr12A}}$ & 1 & ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny Tr12A}}$ & 1.449490502746956E-11 & 0.050 \\[0.5mm]
\hline
\multirow{3}{*}{140}
& \multirow{3}{*}{}
& 2 & ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny Tr12A}}$ & 3.1593469401304723E-16 & 0.025 \\[.26mm] \cline{3-6}
& ${\cal P}^{\mbox{\texttt{DE}}}_{\mbox{\tiny Tr12A}}$ & 1 & ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny Tr12A}}$ & 1.4870457587052685E-9 & 0.050 \\[0.5mm]
\hline
\multirow{3}{*}{150}
& \multirow{3}{*}{}
& 2 & ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny Tr12A}}$ & 5.124221656690746E-19 & 0.025 \\[.26mm] \cline{3-6}
& ${\cal P}^{\mbox{\texttt{DE}}}_{\mbox{\tiny Tr12A}}$& 1 & ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny Tr12A}}$ & 9.723622409009922E-15 & 0.050 \\[0.5mm]
\hline
\end{tabular}
\vspace{-2mm}}
\end{table}
\begin{table}[ht]
{ \scriptsize
\caption{Holm test for the complete graph PIMs. }
\label{table:holmDynamicHe}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\multicolumn{1}{|c|}{\bfseries L}
& \multicolumn{1}{|c|}{\bfseries Control}
& \multicolumn{1}{|c|}{\bfseries i}
& \multicolumn{1}{|c|}{\bfseries Algorithm}
& \multicolumn{1}{|c|}{\bfseries \boldmath{ $p$}-value}
& \multicolumn{1}{|c|}{\bfseries \boldmath{\!$\alpha / i$\!}}\\
\multicolumn{1}{|c|}{\bfseries }
& \multicolumn{1}{|c|}{\bfseries Algorithm}
& \multicolumn{1}{|c|}{\bfseries }
& \multicolumn{1}{|c|}{\bfseries }
& \multicolumn{1}{|c|}{\bfseries }
& \multicolumn{1}{|c|}{\bfseries }\\[0.5mm]
\hline
\multirow{3}{*}{100}
& \multirow{3}{*}{}
& 2 & ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny gbmm12A}}$ & 7.43098372370352E-7 & 0.025 \\[.26mm] \cline{3-6}
& ${\cal P}^{\mbox{\texttt{DE}}}_{\mbox{\tiny gbmm12A}}$ & 1 & ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny gbmm12A}}$ & 1.1648657367238803E-5 & 0.050 \\[0.5mm] \hline
\multirow{3}{*}{110}
& \multirow{3}{*}{}
& 2 & ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny gbmm12A}}$ & 9.672204071723814E-19 & 0.025 \\[.26mm] \cline{3-6}
& ${\cal P}^{\mbox{\texttt{DE}}}_{\mbox{\tiny gbmm12A}}$& 1 & ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny gbmm12A}}$ & 2.9294885290101255E-14 & 0.050 \\[0.5mm]
\hline
\multirow{3}{*}{120}
& \multirow{3}{*}{}
& 2 & ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny gbmm12A}}$ & 1.792019989925749E-15 & 0.025 \\[.26mm] \cline{3-6}
& ${\cal P}^{\mbox{\texttt{DE}}}_{\mbox{\tiny gbmm12A}}$ & 1 & ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny gbmm12A}}$ & 1.792019989925749E-15 & 0.050 \\[0.5mm]
\hline
\multirow{3}{*}{130}
& \multirow{3}{*}{}
& 2 & ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny gbmm12A}}$ & 3.943363947351002E-17 & 0.025 \\[.26mm] \cline{3-6}
& ${\cal P}^{\mbox{\texttt{DE}}}_{\mbox{\tiny gbmm12A}}$ & 1 & ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny gbmm12A}}$ & 2.3828362635579084E-15 & 0.050 \\[0.5mm]
\hline
\multirow{3}{*}{140}
& \multirow{3}{*}{}
& 2 & ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny gbmm12A}}$ & 4.82026808703977E-22 & 0.025 \\[.26mm] \cline{3-6}
& ${\cal P}^{\mbox{\texttt{DE}}}_{\mbox{\tiny gbmm12A}}$ & 1 & ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny gbmm12A}}$ & 1.0213251630273183E-20 & 0.050 \\[0.5mm]
\hline
\multirow{3}{*}{150}
& \multirow{3}{*}{}
& 2 & ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny gbmm12A}}$ & 3.4176814448375205E-24 & 0.025 \\[.26mm] \cline{3-6}
& ${\cal P}^{\mbox{\texttt{DE}}}_{\mbox{\tiny gbmm12A}}$ & 1 & ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny gbmm12A}}$ & 1.440728401105864E-23 & 0.050 \\[0.5mm]
\hline
\end{tabular}
\vspace{-2mm}}
\end{table}
\begin{table}[ht]
{ \scriptsize
\caption{Holm test for reconf. and non-reconf. HePIMs. }
\label{table:holmHeterogeneous}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\multicolumn{1}{|c|}{\bfseries L}
& \multicolumn{1}{|c|}{\bfseries Control}
& \multicolumn{1}{|c|}{\bfseries i}
& \multicolumn{1}{|c|}{\bfseries Algorithm}
& \multicolumn{1}{|c|}{\bfseries \boldmath{ $p$}-value}
& \multicolumn{1}{|c|}{\bfseries \boldmath{\!$\alpha / i$\!}}\\
\multicolumn{1}{|c|}{\bfseries }
& \multicolumn{1}{|c|}{\bfseries Algorithm}
& \multicolumn{1}{|c|}{\bfseries }
& \multicolumn{1}{|c|}{\bfseries }
& \multicolumn{1}{|c|}{\bfseries }
& \multicolumn{1}{|c|}{\bfseries }\\[0.5mm]
\hline
\multirow{3}{*}{100}
& \multirow{3}{*}{}
& 3 & ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny Tr12A}}$ & 6.219448336201955E-7 & 0.016 \\[.26mm] \cline{3-6}
& ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny gbmm12A}}$ & 2 & ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny Tr12A}}$ & 3.5448303585580045E-5 & 0.025 \\[.26mm] \cline{3-6}
& & 1 & ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny gbmm12A}}$ & 0.3958980057181269 & 0.05\\[0.5mm] \hline
\multirow{3}{*}{110}
& \multirow{3}{*}{}
& 3 & ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny Tr12A}}$ & 2.1442166253671064E-13 & 0.016 \\[.26mm] \cline{3-6}
& ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny gbmm12A}}$ & 2 & ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny Tr12A}}$ & 7.221976514824151E-13 & 0.025 \\[.26mm] \cline{3-6}
& & 1 & ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny gbmm12A}}$ & 0.1709035202307971 & 0.05\\[0.5mm] \hline
\multirow{3}{*}{120}
& \multirow{3}{*}{}
& 3 & ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny Tr12A}}$ & 2.1442166253672077E-13 & 0.016 \\[.26mm] \cline{3-6}
& ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny gbmm12A}}$ & 2 & ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny Tr12A}}$ & 3.908557567773197E-9 & 0.025 \\[.26mm] \cline{3-6}
& & 1 & ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny gbmm12A}}$ & 0.7218258402177081 & 0.05\\[0.5mm] \hline
\multirow{3}{*}{130}
& \multirow{3}{*}{}
& 3 & ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny Tr12A}}$ & 1.076195601000617E-12 & 0.016 \\[.26mm] \cline{3-6}
& ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny gbmm12A}}$ & 2 & ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny Tr12A}}$ & 3.415515804642443E-11 & 0.025 \\[.26mm] \cline{3-6}
& & 1 & ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny gbmm12A}}$ & 0.4113137917762579 & 0.05\\[0.5mm] \hline
\multirow{3}{*}{140}
& \multirow{3}{*}{}
& 3 & ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny Tr12A}}$ & 4.342593992240847E-19 & 0.016 \\[.26mm] \cline{3-6}
& ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny gbmm12A}}$ & 2 & ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny Tr12A}}$ & 2.1442166253671531E-13 & 0.025 \\[.26mm] \cline{3-6}
& & 1 & ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny gbmm12A}}$ & 0.8694817827381613 & 0.05\\[0.5mm] \hline
\multirow{3}{*}{150}
& \multirow{3}{*}{}
& 3 & ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny Tr12A}}$ & 3.3892419952526653E-19 & 0.016 \\[.26mm] \cline{3-6}
& ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny gbmm12A}}$ & 2 & ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny Tr12A}}$ & 2.4806730968270847E-15 & 0.025 \\[.26mm] \cline{3-6}
& & 1 & ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny gbmm12A}}$ & 0.912770619096563 & 0.05\\[0.5mm] \hline
\end{tabular}
\vspace{-2mm}}
\end{table}
\section{Related Work}\label{sec:relatedwork}
As far as we know, no HePIM has been proposed that dynamically updates its BAs as proposed in this work. Here, we discuss a few works related to non-reconfigurable HePIMs.
Bianchini and Brown \cite{Bianchini1993} proposed HePIMs with ring and torus topologies and applied them to the task map scheduling problem showing that HePIMs compute better solutions than HoPIMs. In addition, they observed that adding islands is better than increasing the population.
Also, Lin \textit{et al.} \cite{Shyn1994} proposed HePIMs considering several migration strategies and topologies, addressing the graph partitioning problem. They showed that 25-island PIMs are better than the sequential GA when using a migration strategy that replaces the worst individuals on target islands. Furthermore, they showed that exchanging individuals according to fitness-based population similarity also yields good results without speed degradation.
Izzo \textit{et al.} \cite{sinc2009} proposed an asynchronous-migration HePIM built from variations of the ${\mbox{\texttt{DE}}}$\ algorithm. Asynchrony was shown to be more intuitive and suitable over TCP/IP, where resources might become available or unavailable at any time. Izzo \textit{et al.}'s models showed better performance than their sequential versions.
Gong and Fukunaga \cite{GoFu2011} proposed a ${\mbox{\texttt{GA}}}$-based HePIM that randomly selects different parameters for each processor. Some processors are expected to be assigned parameters that perform well on a given problem.
Such a model may be considered a one-cycle reconfigurable model. However, it applies only an initial adjustment of the same algorithm and does not update BAs dynamically as our reconfigurable HePIMs do.
Duarte \textit{et al.} \cite{duarte2018} proposed an attractiveness-based migration policy for five-island HePIMs, based on the quality of island solutions. The attractiveness measure, as well as the mechanism to compute island connections, was adjusted in \cite{Duarte2020}, inspired by the natural phenomenon known as stigmergy \cite{Capriles2007}.
Silveira {\em et al.} \cite{lucas2016} proposed HoPIMs for a sequential ${\mbox{\texttt{GA}}}$\ introduced in \cite{lucas2015} to solve the unsigned translocation problem. Such PIMs outperformed the accuracy obtained by the ${\mbox{\texttt{GA}}}$\ after careful calibration of the migration and breeding cycle parameters and exploration of a variety of topologies \cite{silveira2018,silveira2019}. Further, Silveira {\em et al.} \cite{Lucas2020} analyzed synchronous HoPIMs for ${\mbox{\texttt{GA}}}$, ${\mbox{\texttt{PSO}}}$, and the Social Spider Algorithm (SSA). Experiments showed that HoPIMs applying ${\mbox{\texttt{PSO}}}$\ and ${\mbox{\texttt{GA}}}$\ are competitive, while those running SSA gave the best speed-ups but computed the worst-accuracy solutions. Finally, Silveira {\em et al.} \cite{Lucas2021} proposed a variety of HePIMs to deal with URD. In this work, we select the models ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny Tr12A}}$\ and ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny gbmm12A}}$\ from \cite{Lucas2021} for comparison with the new reconfigurable HePIMs.
HePIMs have also been conceived to solve multiobjective optimization problems (MOPs). We believe that MOPs are an exciting application for our reconfigurable HePIMs since each island may update its BA to optimize a single objective function.
For example, Zang \emph{et al.} \cite{Zang2011} proposed a multi-swarm optimizer that handles each objective function of a MOP with a different slave swarm, while a master swarm covers gaps among non-dominated optima using a multiobjective ${\mbox{\texttt{PSO}}}$. Also, Xu \emph{et al.} \cite{Xu2018} proposed a model with EAs using two subpopulations to solve dynamic interval MOPs, which are MOPs that change the interval parameters of their objectives or constraints over time. In addition, Gong \emph{et al.} \cite{Gong2020} proposed a model that handles a cooperative co-evolutionary MOP based on dynamic interval similarity. Gong \emph{et al.}'s approach splits decision variables according to their interval similarity and interval parameters; the decision variables are then optimized cooperatively. Furthermore, Hashimoto \emph{et al.}
\cite{Hashimoto2018} proposed a HePIM to solve multi-task problems, where each island evaluates one objective. Migrants are selected randomly on each local island at a high migration frequency and replace the worst individuals in the target islands. Since immigrants arrive at islands responsible for different objectives, they are assigned the worst fitness values, on the assumption that they have fewer chances of being suitable for the target island's objective.
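The replacement scheme described above can be sketched as follows. Plain fitness values stand in for individuals, minimization is assumed, and the function is our own simplification of the policy in \cite{Hashimoto2018}:

```python
import random

def migrate(source_pop, target_pop, n_migrants, rng):
    """Randomly remove emigrants from the source island and let them
    replace the worst (here: highest-fitness) individuals of the
    target island, i.e. a worst-replacement migration policy."""
    migrants = [source_pop.pop(rng.randrange(len(source_pop)))
                for _ in range(n_migrants)]
    target_pop.sort()                    # ascending fitness: best first
    target_pop[-n_migrants:] = migrants  # overwrite the worst individuals
    return target_pop
```

The target island's best individuals are thus never displaced by incoming migrants, which matches the intent of treating immigrants as the least fit.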
\section{Conclusions and future work}\label{sec:conclusion}
Reconfigurable heterogeneous PIMs were introduced. Such architectures can run and dynamically update different bio-inspired algorithms on their islands. Two reconfigurable PIM architectures with two different archipelago-topologies were designed: a static binary tree topology, ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny Tr12A}}$, and a dynamic complete graph topology, ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny gbmm12A}}$. The asynchronous models ran four different BAs in their islands: ${\mbox{\texttt{GA}}}$, ${\mbox{\texttt{GAD}}}$, ${\mbox{\texttt{PSO}}}$, and ${\mbox{\texttt{DE}}}$.
The new reconfigurable HePIMs were tested over the unsigned reversal distance problem and computed results that outperformed the quality of associated non-reconfigurable HePIMs.
Experiments, evaluated statistically, made evident the potential of reconfigurable HePIMs. Such reconfigurable models preserve the power of heterogeneous PIMs to navigate efficiently in the space of feasible solutions through a healthy balance between individual and island diversity and migration policy. Also, the new reconfiguration feature gives the architecture the flexibility to evolve dynamically, improving the model's algorithmic adaptability to solve the target problem.
The architectures ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny Tr12A}}$\ and ${\cal P}^{\mbox{\tiny recHet}}_{\mbox{\tiny gbmm12A}}$\ reached quality results that are highly competitive with the associated non-reconfigurable architectures ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny Tr12A}}$\ and ${\cal P}^{\mbox{\tiny Het}}_{\mbox{\tiny gbmm12A}}$, closing the gap with the best-adapted homogeneous model, which uses ${\mbox{\texttt{DE}}}$.
Future work will explore reconfigurable HePIMs over different problems and with a greater variety of BAs. In particular, we believe that the dynamic algorithmic heterogeneity promoted by the new model will be helpful to deal with multiobjective optimization problems. Indeed, the reconfiguration would promote the application of the best-adapted BA to each target problem over the islands.
\bibliographystyle{ieeetr}
\section{Introduction}
\label{sec:Introduction}
An integral part of Fritz Haake's scientific life was dedicated to the study of quantum chaos or, as he used to call it, {\em quantum signatures of chaos}~\cite{Haake18}. Among his many contributions to various aspects of quantum chaos, he and his co-workers' achievements towards a semiclassical understanding of random matrix universality for quantum-chaotic single-particle dynamics are particularly striking. Based on a short review of such accomplishments and those of many others, we summarize more recent work showing how these earlier semiclassical single-particle (SP) methods engender a semiclassical theory of many-body (MB) quantum dynamics.
\subsection{Facets of quantum chaos}
\label{sec:pillars}
One main branch of this field originated many years ago in the physics of strongly interacting nuclear MB systems.
There, Bohr’s compound nucleus model~\cite{Bohr36} may be viewed as the first quantum chaotic system, although at that time there was no concrete association with classically chaotic dynamics. Instead, Wigner's foundational work on random matrix ensembles~\cite{Wigner55, Wigner58} and subsequent contributions by several others~\cite{PorterBook} allowed for understanding nuclear statistical spectral and scattering properties such as level repulsion~\cite{Brody81, Bohigas83}, the Porter-Thomas distribution~\cite{Porter56}, and Ericson fluctuations~\cite{Ericson60, Verbaarschot85}.
Much later, random matrix theory (RMT), with a broadened focus, evolved into one of the methodological pillars of quantum chaos~\cite{Haake18, Bohigas88, Guhr98, Beenakker97RMP, Verbaarschot00, StockmannBook, Mehta04}.
Given that the notion of classically chaotic dynamics is absent from RMT, in a seminal series of papers~\cite{Gutzwiller71} starting from Feynman's path integral, Gutzwiller derived a semiclassical trace formula expressing a SP quantum spectrum as a sum over unstable classical periodic orbits. Fifty years ago, he thereby set the cornerstones of the bridge connecting the classical and quantum mechanics of non-integrable systems. Subsequently, the semiclassical mechanics of chaotic dynamical systems became a second central pillar of quantum chaos studies~\cite{Haake18,StockmannBook,Gutzwiller90,Brack03}.
Finally, in a parallel development universal features in spectral and quantum transport properties of disordered conductors had been predicted~\cite{Altshuler80,Lee85a} and observed~\cite{Washburn86}. Afterwards, this research line, comprising localization phenomena, criticality, and universality in predominantly non-interacting disordered systems, has evolved into its own field~\cite{Imry02, Akkermans07, EfetovBook, Evers08} and can be considered as representing a third methodological foundation of quantum chaos studies.
These three pillars, RMT, semiclassical theory, and the theory of disordered systems, initially developed rather independently, and only much later were their deep mutual links recognized and revealed. Of particular interest is the fundamental relation between the complementary approaches underlying semiclassical and random matrix theories. The assumptions for using RMT had originally been justified by invoking the complexity of interacting MB dynamics, but the famous conjecture of Bohigas, Giannoni, and Schmit (BGS)~\cite{Bohigas84} represented a paradigm shift, namely that the deeper rationale for the justification and applicability of RMT was fully and exponentially unstable dynamics. It was a profound unifying concept that strongly interacting MB systems and conceptually simple quantum-chaotic SP systems exhibit to a large extent common statistical spectral properties, i.e.~a prime example of universality in quantum physics. Based on earlier works by Hannay and Ozorio de Almeida~\cite{Hannay84}, Berry~\cite{Berry85}, and Sieber and one of the authors~\cite{Sieber01}, the group of Fritz Haake and Petr Braun, to whom this review is dedicated, contributed significantly towards a proof of the BGS conjecture~\cite{Heusler07} with regard to SP dynamics. Classical correlations between periodic orbits (for an example see Fig.~\ref{fig:po-pair-HR}) turned out to be the key to understanding random matrix-type spectral universality. Extending these approaches to many interacting particles involves further challenges, as shown ahead.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{figures/orbit-pair-PJ.png}
\caption{
{\bf
Correlated periodic orbits} | Phase-space sketch of classical periodic single-particle orbits that are nearly identical up to encounter regions, located at self-crossings in a two-dimensional configuration space (yellow $xy$-plane). Vertical components indicate the respective momenta in $y$-direction
(taken from Ref.~\cite{Haake11} with permission). \label{fig:po-pair-HR}}
\end{figure}
Although quantum MB physics has a long tradition and the foundations of statistical mechanics were laid together with those of quantum mechanics, these subjects have witnessed a recent rebirth due to advances in atomic, molecular, optical and condensed matter physics that allow for building, controlling and monitoring synthetic MB systems with strongly interacting quantum degrees of freedom. Studying their dynamics~\cite{Polkovnikov11,Eisert15,Ueda20} has allowed for identifying particular classes of states that fail to quantum thermalize~\cite{Nandkishore15,Altman18,Sacha17,Turner18}. Studying their evolution towards equilibrium is especially important because equilibration is associated with chaos, which underlies scrambling of quantum correlations across MB systems' many degrees of freedom. In particular, after the proposals on out-of-time-order correlators (OTOCs)~\cite{Shenker14} and a universal limit of their growth rates, {\em i.e.}~quantum 'bounds on chaos'~\cite{Maldacena16},
such aspects related to MB chaos, ergodicity, and ergodicity breaking have recently received a great deal of attention. During the last decade the corresponding activities, ranging from MB quantum dynamics via statistical physics to quantum gravity, have merged into a swiftly expanding field in theoretical physics that may be subsumed under the topic {\em many-body quantum chaos}. The corresponding research is dramatically redirecting attention towards quantum MB dynamics, harking back to its origins in nuclear MB physics.
\subsection{Semiclassical regimes of quantum many-body dynamics}
\label{subsec:limits}
Referring to chaos, an inherently classical concept, requires properly defined notions of classical and semiclassical limits in MB physics. Although there generally exists a variety of meanings for the term `{\em semiclassical}', depending on the respective field~\cite{Richter22}, here this term is being used in the original sense, just as in quantum chaos, referring to physics in the crossover regime between the classical and quantum worlds. Semiclassical theory may then be formally based on asymptotic (effective) $\hbar$ expansions of quantum mechanical (MB) Feynman propagators. The resulting semiclassical expressions, although based on classical quantities for input, nevertheless fully account for quantum (or wave) interference as an integral part of the theory.
Large classes of MB quantum chaotic systems possess a classical limit and reside at such a semiclassical interface between non-integrable MB quantum and classical dynamics. In fact, this occurs in a two-fold way. First, far out-of-equilibrium quantum dynamics are associated with high-energy excitations and thereby with the usual short-wavelength limit, {\em i.e.}\ small $\hbar$. Alternatively, the thermodynamic limit of large particle numbers $N$ can also be regarded as semiclassical, governed by an effective Planck constant $\hbar_{eff} = 1/N$. We thereby consider MB chaotic quantum systems in the limits where either $\hbar$ or $\hbar_{eff}$ is small but nonzero. Both types of quantum-classical transitions are singular implying disruptive changes and complexity for the broad class of quantum systems residing at the edge of classical MB chaos. Typically, these systems require exceedingly difficult numerical quantum simulations due to vastly growing Hilbert space dimensions. Thus, there has been a quest for MB methods specifically devised for these complementary crossover regimes. In the following, the underlying concepts and challenges of a corresponding semiclassical MB theory are indicated.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{figures/Quantum-classical-regimes-Haake.pdf}
\caption{\label{fig:sc-limits}
{\bf
Semiclassical regimes and limits of quantum MB dynamics} | At fixed particle number $N$, the usual semiclassical limit $S/\hbar \rightarrow \infty$, often referred to as $\hbar \rightarrow 0$, corresponds to a (horizontal) transition from quantum mechanical waves to classical particles, nearly always involving nonlinear dynamics. Semiclassical theory within the field of quantum chaos has traditionally addressed single-particle systems, i.e.~the lower right zone. From the perspective of quantum field theory,
the vertical direction of increasing particle number $N$, usually considered as thermodynamic, formally corresponds to a complementary semiclassical regime with effective $\hbar_{eff}= 1/N \rightarrow 0$. In that limit quantum fields pass into nonlinear waves.
}
\end{figure}
\subsubsection{The usual limit of high excitations}
\label{subsec:limit1}
Consider the familiar case of a SP quantum system with an existing classical limit that is approached in the limit $\hbar \rightarrow 0$. More precisely, the semiclassical limit is one in which the dimensionless ratio $\hbar/S \ll 1$, with $S=\int {\bf p}\, d{\bf q}$ a typical classical action of the particle with momentum ${\bf p}$. This is the standard limit of short wavelengths $\lambda$ in view of the relation $\lambda = h/p$.
In the schematic Fig.~\ref{fig:sc-limits} with horizontal scale $S/\hbar$ and vertical scale denoting the particle number $N$, this semiclassical limit corresponds to the horizontal crossover (for $N=1$) from the deep quantum regime at $S/\hbar \sim 1$ into the semiclassical range where SP wave mechanics approaches classical mechanics, {\em i.e.} classical particles most frequently possessing nonlinear, possibly chaotic dynamics.
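As a rough numerical illustration of this crossover (the setting, an electron of $1\,$eV kinetic energy confined to a $100\,$nm cavity, is chosen purely for concreteness):

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant (J s)
h    = 6.62607015e-34    # Planck constant (J s)
m_e  = 9.1093837015e-31  # electron mass (kg)

E = 1.602176634e-19      # kinetic energy: 1 eV in J
L = 100e-9               # cavity size: 100 nm

p = math.sqrt(2 * m_e * E)   # classical momentum
S = p * L                    # typical action, S ~ p L
wavelength = h / p           # de Broglie wavelength, lambda = h / p

# S/hbar ~ 5e2 >> 1 and lambda ~ 1.2 nm << L:
# such a system sits well inside the semiclassical regime.
```

Shrinking the cavity toward the nanometer scale drives $S/\hbar$ toward unity, i.e.\ into the deep quantum regime on the left of Fig.~\ref{fig:sc-limits}.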
Since Gutzwiller's and Berry's early works~\cite{Gutzwiller71, Berry76}, semiclassical approaches in quantum chaos have nearly exclusively been focused on the case $N=1$, {\em i.e.}~in the lower right region of the $S$-$N$-landscape of Fig.~\ref{fig:sc-limits}. This region is the subject of several textbooks~\cite{Haake18, StockmannBook, Gutzwiller90,Brack03, Reichl21}. However, the limit $\hbar/S \rightarrow 0$ also formally applies to systems with more than one particle in $D$ dimensions by considering semiclassical methods applied in a respective $2D\!\cdot\! N$-dimensional phase space. In Fig.~\ref{fig:sc-limits} this corresponds to moving vertically upwards in the right semiclassical regime and considering the limit $\hbar \rightarrow 0$ for given $N$.
In this case, the MB density of states of $N$ interacting confined particles with MB energies $E_n^{(N)}$
is conveniently decomposed into a smooth and an oscillatory part,
\begin{equation}
\rho(E,N) = \sum_n \delta(E-E_n^{(N)}) =
\bar{\rho}(E,N) + \rho^{osc}(E,N) \, .
\label{eq:SCDOS}
\end{equation}
However, extending such semiclassical approaches from one to $N$ particles is accompanied by a variety of notable challenges. First of all, in practice the calculation, classification, and visualization of classical dynamics in high-dimensional phase spaces quickly reaches its limits. For instance, the implementation of Gutzwiller's trace formula for $\rho^{osc}(E,N)$ on the basis of $N$-particle periodic orbits seems practically impossible for many particles. Secondly, it is necessary to account for the symmetry character of MB states representing $N$ identical particles in quantum physics. Finally, and perhaps most challenging, to be truly valuable, interaction effects must be incorporated as an integral part of MB quantum chaos.
Consequently, corresponding attempts have been rare, even for (non-integrable) few-particle systems. An early example is the successful semiclassical quantization of the correlated Coulomb dynamics of the electrons in the helium atom, a longstanding problem dating prior to Schrödinger and his equation, i.e.~to the `old quantum theory'~\cite{Bohr13a},
see \cite{Tanner00a,Kragh12} for reviews of the history. By applying a cycle expansion to the chaotic dynamics in the 6-dimensional phase space, ground and excited states of helium could be semiclassically computed with high precision~\cite{Ezra91}. More recently, the dynamics of quantum maps with up to 8-dimensional phase spaces has been visualized and thoroughly investigated~\cite{Richter14}. Furthermore, interesting collective MB dynamics were recently semiclassically identified in kicked spin-chains up to particle numbers of order $N\sim 20$~\cite{Akila17,Waltner17}, making use of a remarkable particle number-time duality in the Ising spin chain~\cite{Akila16}.
There are only a few scattered examples for generalizations of the van Vleck-Gutzwiller propagator~\cite{Gutzwiller71} to truly many particles. In~\cite{Weidenmueller93} Gutzwiller’s trace formula for the density of states $\rho^{osc}(E,N)$ was reconsidered for systems of non-interacting identical particles, in particular fermions. However, being based on classical SP phase space, this construction of the MB density of states remains purely formal without a direct interpretation in terms of MB phase space.
With regard to semiclassical theory for many interacting fermions (for a review see~\cite{Ullmo08}), in~\cite{Ullmo98} it was shown that the orbital magnetic response can be greatly enhanced by the combined effects of interactions and finite size. In the context of MB scattering~\cite{Urbina16} contains a semiclassical calculation of the transmission probabilities through mesoscopic cavities for systems of many non-interacting particles, in particular photons, thereby generalizing advanced semiclassical SP techniques~\cite{Richter02, Mueller09, Berkolaiko12} for scattering in chaotic conductors. There the interplay between interference at the SP level and due to quantum indistinguishability leads to specific universal correlations in MB scattering properties with relevance for boson sampling~\cite{Aaronson10} and the Hong-Ou-Mandel effect~\cite{Hong87}.
In a parallel development, remarkable progress has been achieved by Gutkin and co-authors providing certain classical foundations of many particle chaos based on models for coupled cat maps~\cite{Gutkin16,Gutkin21}. In view of correlations between partners within ``braided classical orbit bundles'', known to be relevant for universal spectral properties and to be reviewed below, they highlighted the existence of partner orbits specific to MB systems. For a sufficiently large particle number $N$, these new partners are considered as relevant for construction of a consistent MB semiclassical theory. Very recently, a chaotic scalar lattice field theory (in one dimension) has been proposed~\cite{Lakshminarayan11,Liang22}, complementary in spirit to Gutzwiller's periodic-orbit approach for low-dimensional chaotic dynamics~\cite{Gutzwiller1971} and to generalizations presented in Sec.~\ref{sec:SC-MB}.
The above works comprise various attempts to generalize semiclassical periodic-orbit theory for the fluctuating level density $\rho^{osc}(E,N)$ to spectra of systems with few to many particles. The MB phase space dimensions increase with $N$, and accordingly the spectral MB density of states grows immensely~\cite{GarciaGarcia17}. In turn, the spacing between MB levels tends to zero and individual highly excited MB levels are with rare exceptions (such as slow neutron resonances) no longer resolvable.
Hence the smooth part $\bar{\rho}(E,N)$ of the spectral MB density in Eq.~(\ref{eq:SCDOS}), which is not sensitive to the nature of the dynamics, chaotic or regular, and often referred to as Weyl part of the spectrum~\cite{Weyl11}, gains particular importance.
It plays for instance a central role in computing thermodynamic equilibrium and non-equilibrium properties. However, to compute even this smooth part quantum mechanically is numerically challenging since systems with fixed $N$ require elaborate MB techniques generating ground and low excited states. They quickly reach their limits when increasing $N$ or the degree of excitations. This has prompted the development of MB techniques specifically devised to directly compute $\bar{\rho}(E,N)$, thereby circumventing the intricate or often impossible calculation of individual excited MB levels, which requires detailed information that is afterwards smoothed out anyway. For example, in the nuclear physics context, French and co-workers developed statistical spectroscopy based on moment methods~\cite{Chang71, Mon75, Brody81, KotaBook}.
In the SP case, the Weyl expansion~\cite{Weyl11} provides a well-defined semiclassical $1/\hbar$ expansion of the smooth part \cite{Brack03,Baltes76}. In~\cite{Hummel14, Hummel17, Hummel19} the SP Weyl expansion has been generalized to MB systems of $N$ indistinguishable particles in $D$ dimensions.
Corresponding expressions for $\bar{\rho}(E,N)$ take the form of sums over clusters of particles moving freely around manifolds in configuration space invariant under permutations.
This approach contains the famous Bethe law \cite{Bethe36} for the mean fermionic spectral density as a limiting case
\footnote{Shell corrections to the Bethe law for the MB density of states were semiclassically considered in Ref.~ \cite{Leboef05} and are more generally reviewed in Ref.~\cite{Brack93}.}.
Furthermore, the correct emergence of the fermionic MB ground state is a consequence of a delicate cancellation effect of cluster contributions. Moreover, by including interaction effects in a non-perturbative way, this MB Weyl approach has been further extended to systems of experimental relevance in cold atom physics, such as interacting bosons in traps,
demonstrating for instance that systems with very few up to many particles share the same underlying spectral features~\cite{Hummel19}.
We believe that such underlying MB scaling laws have much in common with related semiclassical scalings in recent generalizations of Thomas-Fermi theory~\cite{Okun21}.
\subsubsection{The thermodynamic limit of large particle number}
\label{subsec:limit2}
Besides the usual notion of $S/\hbar\rightarrow \infty$ discussed so far, in quantum field theory, where wave functions are replaced by field operators, there is the complementary thermodynamic limit of large particle number $N$, but not necessarily small $\hbar$. In the limiting case of $N=\infty$, the equations for the quantum fields pass into nonlinear wave equations. From the viewpoint of quantum field theory, these wave equations appear as a kind of `classical' fluid dynamics. For instance, in the large-$N$ limit, systems of interacting bosons are described by the Gross-Pitaevskii equation. Formally, the large-$N$-but-not-infinite regime, corresponding to the upper (left) region in Fig.~\ref{fig:sc-limits}, can be associated with an effective Planck constant $\hbar_{eff} =1/N \ll 1$ and hence also be considered semiclassical.
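For definiteness, in standard notation the Gross-Pitaevskii equation mentioned above reads
\begin{equation}
i\hbar\, \frac{\partial \psi({\bf r},t)}{\partial t} = \left( -\frac{\hbar^2}{2m} \nabla^2 + V({\bf r}) + g\, |\psi({\bf r},t)|^2 \right) \psi({\bf r},t) \, ,
\end{equation}
with $V({\bf r})$ an external potential and $g$ the contact-interaction strength; its cubic nonlinearity, inherited from the two-body interaction, is exactly the type of nonlinearity that may render the classical-limit mode dynamics chaotic.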
Wave interference is usually built into semiclassical propagators through coherent sums over classical paths with interfering amplitudes in configuration space, leading for instance to the van Vleck propagator~\cite{Vanvleck28} or the Gutzwiller trace formula~(\ref{eq:SCDOS})~\cite{Gutzwiller90}
for $\rho^{osc}$ in terms of unstable periodic orbits of classical particles. In the complementary limit, $\hbar_{eff} = 1/N \ll 1$, many-particle propagators and quantum observables derived from those can be formally described also by means of semiclassical sums over paths defined by classical field solutions, which have a completely different interpretation and meaning. The summations are taken over collective modes of MB densities in a continuum version of high-dimensional MB Fock space~\cite{Engl14}, instead of particle trajectories in configuration space, as is outlined in Sec.~\ref{sec:SC-MB}. These Fock-space paths represent the various, in principle infinitely many, time-dependent solutions of the nonlinear wave equations in the classical limit $1/N = 0$ (upper region of Fig.~\ref{fig:sc-limits}). Quantum MB interactions turn into nonlinearities in these wave equations and may result in unstable, possibly chaotic MB mode dynamics. In this way chaos at the level of these classical-limit nonlinear waves implies {\em many-body quantum chaos} at the level of quantum fields\footnote{There are other conceivable routes to many-body quantum chaos not considered further in this contribution, but this issue is revisited for brief speculation in Sec.~\ref{sec:persp} (i).}. This is entirely analogous to signatures of chaotic classical particle dynamics in wave functions at the Schrödinger equation level, i.e. quantum chaos in the limit $\hbar \rightarrow 0$. In a sense, such an approach transports Gutzwiller's semiclassical theory of the time evolution operator from the level of ``first quantization'' to that of ``second quantization''. Note that the classical quantities entering semiclassical path integrals have different meanings in the two complementary limits: for instance different Lyapunov exponents quantify the instability of particle trajectories and collective modes, respectively.
Remarkably, the semiclassical theory in the limit $\hbar_{eff} = 1/N \ll 1$ also applies to ground or low-lying excited MB states.
The classical paths in MB space, i.e.~the time-dependent solutions of the nonlinear wave equations, just represent mean-field solutions of the full MB problem. This opens an interesting new perspective on the connections between chaotic mean-field dynamics, quantum correlations due to MB interactions, scrambling, and the generation of entanglement. MB interaction effects beyond mean-field are commonly considered as correlation effects~\cite{Fulde95}. Hence, as will be explained in Sec.~\ref{sec:SC-MB}, the interpretation of Eq.~(\ref{eq:SCDOS}) as a coherent sum over different collective mean-field modes implies that massive MB interference between these chaotic modes describes or explains quantum correlations in the MB propagator. Hence MB quantum chaos and quantum correlation phenomena are intimately intertwined. To highlight the difference between (SP) wave and MB quantum interference we coin the term for the latter case {\em genuine many-body quantum interference}.
\subsection{Outline of this review:
Universality in many-body quantum chaos from a semiclassical perspective}
\label{subsec:outline}
The semiclassical approach reviewed below addresses these leading-order (in $\hbar_{eff}=1/N$) MB quantum mechanical contributions to the thermodynamic limit. The theory's strength is its capacity to apply broadly to dynamical MB systems that are either fully chaotic, partially chaotic, or are even integrable as well. Hence, on the one hand it deals with systems not behaving in a universal manner and allows for addressing system specific individual, possibly quite atypical properties. On the other hand, the MB semiclassical approach provides dynamical foundations for universal aspects of quantum chaotic MB systems, i.e.~in the statistical RMT-like sense. This review focuses on the latter issue, primarily based on the recent accomplishments in Refs.~\cite{Engl14,Engl15,Dubertrand16,Rammensee18}.
Assuming fully chaotic MB mean-field dynamics, this branch of semiclassical MB theory follows the strategy of invoking ergodic properties and corresponding sum rules for the exponentially numerous classical paths, i.e.~collective modes entering into semiclassical trace formulas for various MB observables and correlation functions. Such assumptions often enable an analytical treatment of the arising multiple sums over chaotic MB modes. Generalizing corresponding SP techniques based on classical (periodic-)orbit correlations mentioned above provides the key to explaining aspects of RMT universality also in the many-particle context.
The remainder of the review is structured as follows: in Sec.~\ref{sec:SC-SP} the earlier semiclassical theory providing the link between RMT-universality and SP chaos is summarized. This includes the encounter calculus for special braided classical orbit bundles relevant for the evaluation of spectral correlation functions and, closely related, the treatment of phenomena at and beyond Ehrenfest time scales. In Sec.~\ref{sec:SC-MB} the foundations of an advanced semiclassical theory of MB quantum fields and chaos are given for $\hbar_{eff} = 1/N \ll 1$. After deriving a MB version of the van Vleck-Gutzwiller propagator, a Gutzwiller-type trace formula for the MB density of states is presented. The resultant formulas provide the basis for deriving MB spectral correlators, response functions, and echo-type observables. With regard to the latter, the semiclassical theory of out-of-time-order correlators (OTOCs)~\cite{Larkin69}, which have recently gained an enormous amount of attention~\cite{Maldacena16} in various fields of physics from condensed matter via cold atoms to cosmology, is sketched out. This review is completed with perspectives and open questions discussed in Sec.~\ref{sec:persp}.
\section{Semiclassical theory of single-particle quantum chaos}
\label{sec:SC-SP}
Treating semiclassical limits of quantum theory by starting from Feynman's path integral and invoking corresponding stationary phase approximations naturally leads to expressing unitary quantum time evolution in terms of sums over phase-carrying classical paths. Quantum interference, as a direct consequence of the principle of quantum superposition, is then captured by the existence of multiple classical solutions and their coherent summation. Depending on the structure of the quantum observable to be considered, for instance spatial or spectral $n$-point correlation functions, a multitude of time evolution operators can be involved leading to a corresponding number of summations over classical paths. Modern semiclassical theory is concerned with the challenge of how such multiple summations can be carried out efficiently and appropriately while preserving the inherent underlying quantum interference mechanisms. In this respect the Ehrenfest time $\tau_{E}$~\cite{Ehrenfest27} plays a key role. As will be discussed below, it has turned out to be of tremendous importance for the quantum dynamics of chaotic systems since it separates quantum propagation in the phase space around {\em one} dominant (unstable) classical trajectory at short time scales from subsequent times governed by strong wave interference, {\em i.e.}, involving propagation of amplitudes along {\em many} trajectories.
Beyond $\tau_{E}$, semiclassical approaches that do not appropriately cope with such many-trajectory interferences break down. Due to the exponential sensitivity to initial conditions for chaotic dynamics, $\tau_{E}$ is a logarithmically short time scale as a function of $\hbar$, and hence absent the interference contributions the corresponding range of validity of such approaches is extremely limited. In view of the fact that RMT-type spectral universality is reached in the limit where $\tau_{E} / \tH \rightarrow 0$, with
\begin{equation}
\tH = 2\pi \hbar \bar{\rho}(E)
\label{eq:tH}
\end{equation}
the Heisenberg time, the time dual to the mean level spacing $1/ \bar{\rho}(E)$ (with $ \bar{\rho}(E)$ the mean density of states), there has been the quest for devising advanced semiclassical methods to adequately treat post-Ehrenfest quantum dynamics. In fact, for a number of reasons, early on it was even thought that post-Ehrenfest quantum chaotic dynamics were beyond the range of any semiclassical approach. However, it was shown by the early $1990$'s that the validity of a complete semiclassical dynamics extended far, far beyond the logarithmic $\tau_{E}$ scale limit~\cite{Tomsovic91b, Oconnor92, Sepulveda92, Tomsovic93}.
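For a chaotic system with Lyapunov exponent $\lambda$, the logarithmic shortness of the Ehrenfest time mentioned above is commonly expressed as
\begin{equation}
\tau_{E} \simeq \frac{1}{\lambda} \ln \frac{S}{\hbar} \, ,
\end{equation}
with $S$ a typical classical action; since $\tH$ grows as a power of $1/\hbar$, the ratio $\tau_{E}/\tH$ indeed vanishes in the semiclassical limit.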
In the following, a semiclassical theory is outlined that provides the link between chaos and RMT-universality for SP dynamics. It is based on Gutzwiller's trace formula~\cite{Gutzwiller1971} for the SP density of states that is briefly introduced in Sec.~\ref{sec:SP-Gutzwiller}. The theory further involves classical orbit correlations and the encounter calculus for braided orbit bundles (see Sec.~\ref{sec:SP-universality}). Intimately connected to that, it deals with interference phenomena at and beyond Ehrenfest time scales (see Sec.~\ref{sec:SP-Ehrenfest}).
\subsection{Single sums over paths: van Vleck propagator and Gutzwiller trace formula}
\label{sec:SP-Gutzwiller}
For a time-independent SP Hamiltonian $H$, the van Vleck propagator $K_{\rm sp}(t)$, and its refinement by Gutzwiller, is a semiclassical approximation to the quantum time evolution operator $U(t) = \exp{(-(i/\hbar) H t)}$ in configuration space. Here, the derivation of $K_{\rm sp}(t)$ and Gutzwiller's periodic orbit theory are skipped, as they can be found in various excellent textbooks~\cite{Haake18,StockmannBook,Gutzwiller90,Brack03,Reichl21}. The relevant expressions are directly introduced.
Evaluating the Feynman propagator $\langle {\bf r}_{f}| U(t)| {\bf r}_{\rm i}\rangle $ in a stationary phase approximation yields the
van Vleck-Gutzwiller propagator for the evolution of a quantum particle between initial and final coordinates ${\bf r}_{\rm i}$ and ${\bf r}_{\rm f}$ in $d$ dimensions:
\begin{equation}
\label{eq:vVG}
K_{\rm sp} ({\bf r}_{\rm f},{\bf r}_{\rm i},t)
= \sum_{\gamma}
\left(\frac{1}{(2\pi i \hbar)^d} \left| \frac{ \partial^2
R_{\gamma}({\bf r}_{\rm f},{\bf r}_{\rm i},t)}{\partial {\bf r}_{\rm f} \partial {\bf r}_{\rm i}}
\right|\right)^\frac{1}{2}
\
{\rm e}^{i(R_{\gamma}({\bf r}_{\rm f},{\bf r}_{\rm i},t)/\hbar - \nu_\gamma \pi/2)} \,
\end{equation}
with classical action
$ R_{\gamma}({\bf r}_{\rm f},{\bf r}_{\rm i},t) = \int_0^t L(\dot{\bf r}, {\bf r}, t) d t$ along the trajectory $\gamma$ connecting ${\bf r}_{\rm i}$ to ${\bf r}_{\rm f}$ in time $t$, and Morse index $\nu_\gamma$.
This expression for $K_{\rm sp}(t)$ holds generally for either chaotic or integrable classical dynamics, and even for systems with coexisting stable and unstable phase space regions.
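As an elementary consistency check (a textbook example, not part of the derivation above): for a free particle in $d$ dimensions there is a single trajectory $\gamma$, the straight line with action $R = m|{\bf r}_{\rm f}-{\bf r}_{\rm i}|^2/(2t)$, so that $|\partial^2 R/\partial {\bf r}_{\rm f} \partial {\bf r}_{\rm i}| = (m/t)^d$, $\nu_\gamma = 0$, and Eq.~(\ref{eq:vVG}) reduces to
\begin{equation}
K_{\rm sp} ({\bf r}_{\rm f},{\bf r}_{\rm i},t)
= \left(\frac{m}{2\pi i \hbar t}\right)^{d/2}
{\rm e}^{\, i m |{\bf r}_{\rm f}-{\bf r}_{\rm i}|^2/(2\hbar t)} \, ,
\end{equation}
which coincides with the exact quantum propagator: for quadratic Lagrangians the stationary phase approximation is exact.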
After computing the energy-dependent Green function via a Laplace transform of $K_{\rm sp}(t)$ and upon calculating the spatial trace integral by means of further stationary phase approximations, Gutzwiller derived the famous trace formula for the density of states $\rhosp(E)$ of a {\em classically chaotic} quantum SP system,
thereby laying the foundations of periodic-orbit theory in quantum chaos~\cite{Gutzwiller71}:
\begin{equation}
\rhosp(E) \simeq \bar{\rho}_{\rm sp}(E) \ + \ \rho_{\rm sp}^{\rm (osc)}(E) =
\bar{\rho}_{\rm sp}(E) \ + \frac{1}{\pi\hbar} {\rm Re}\left\{
\sum_{\rm po} A_{\rm po}{\rm e}^{(i/\hbar) S_{\rm po}(E)} \right\} \, .
\label{eq:SP-Gutzwiller}
\end{equation}
The Weyl term $\bar{\rho}_{\rm sp}(E)$ is a smooth function of energy $E$. It is obtained, to leading order in $\hbar$, by calculating the on-energy-shell classical phase space volume,
\begin{equation}
\bar{\rho}_{\rm sp} (E)=\left(\frac{1}{2\pi\hbar}\right)^{d}\int \ d{\bf r} \ d{\bf p}\ \delta(E-H_{\rm sp} ({\bf r},{\bf p})) \, ,
\label{eq:SP-Weyl}
\end{equation}
where $H_{\rm sp}$ is the classical Hamiltonian. In the above trace formula the remaining oscillatory part $\rho_{\rm sp}^{\rm (osc)}(E)$ of the density of states appears as a coherent sum over
all {\it periodic orbits} (po) of the corresponding classical system at energy $E$. The respective phases
\begin{equation}
S_{\rm po}(E) = \int_{\rm po} \ {\bf p} \cdot d {\bf q} -\hbar \mu_{\rm po} \ \pi /2 \,
\label{eq:SP-action}
\end{equation}
contain their classical actions and Maslov indices $\mu_{\rm po}$.
Note that the sum over all periodic orbits includes also all higher repetitions of a given primitive periodic orbit (ppo) with period $T_{\rm ppo}$.
The amplitudes in Eq.~(\ref{eq:SP-Gutzwiller}) read
\begin{equation}
A_{\rm po}(E) = \frac{T_{\rm ppo}(E)}{|{\rm det}({\bf M}_{\rm po}(E) - {\bf I})|^{1/2}} \, .
\label{eq:SP-stab}
\end{equation}
The monodromy (or stability) matrix ${\bf M}_{\rm po}(E)$ takes into account, in a linearized way, the phase space structure in the vicinity of the periodic orbit. It characterizes its instability in terms of stability exponents (similar to Lyapunov exponents) for chaotic dynamical systems.
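For illustration (assuming the simplest case of a hyperbolic orbit in $d=2$, with stability exponent $\lambda_{\rm po}$ and monodromy eigenvalues ${\rm e}^{\pm\lambda_{\rm po} T_{\rm po}}$):
\begin{equation}
|{\rm det}({\bf M}_{\rm po} - {\bf I})| = \left({\rm e}^{\lambda_{\rm po} T_{\rm po}}-1\right)\left(1-{\rm e}^{-\lambda_{\rm po} T_{\rm po}}\right) = 4\sinh^{2}\!\left(\frac{\lambda_{\rm po} T_{\rm po}}{2}\right) ,
\end{equation}
so that $A_{\rm po} \simeq T_{\rm ppo}\, {\rm e}^{-\lambda_{\rm po} T_{\rm po}/2}$ for long orbits. Individual amplitudes thus decay exponentially with period, while the number of periodic orbits proliferates exponentially; the balance of these two effects underlies the classical sum rule invoked below.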
The trace formula, Eq.~(\ref{eq:SP-Gutzwiller}), decomposes the quantum spectrum in a Fourier-type hierarchical way: whereas short periodic orbits contribute long-ranged cosine-like spectral modulations, accounting for contributions from longer and longer periodic orbits, in principle, generates an increasing spectral resolution~\footnote{There is an extensive literature about convergence properties of the trace formula and the challenges associated with semiclassically computing individual energy levels; see~\cite{Cvitanovic92}.}. Resolving the quantum density of states at scales beyond the mean level spacing $1/\bar{\rho}(E)$ requires, in turn, controlling semiclassical wave interference in the time domain on scales comparable to or longer than the Heisenberg time, $\tH$, the longest time scale involved. The challenge of coping with this {\em late-time behavior} leads to partially solved issues, but also many open questions that are addressed below in the context of spectral correlations.
\subsection{Multiple sums over paths: classical correlations and quantum universality}
\label{sec:SP-correlations}
\subsubsection{Spectral two-point correlation function}
\label{ref:sec-2point-cor}
In many circumstances, the quantities of interest are not the bare densities of states $\rhosp(E)$, but rather the spectral $n$-point correlation functions. In particular, the normalized connected spectral two-point correlator
\begin{equation}
C(\epsilon) = \frac{1}{\bar{\rho}_{\rm sp}(E)^2}
\left\langle
\rho_{\rm sp}^{\rm (osc)}
\left(E+\frac{\epsilon}{2\bar{\rho}_{\rm sp}(E) }\right)\
\rho_{\rm sp}^{\rm (osc)}
\left(E-\frac{\epsilon}{2\bar{\rho}_{\rm sp}(E)} \right)
\right\rangle_E
\label{eq:SP-2-point}
\end{equation}
with $\rho_{\rm sp}^{\rm (osc)}$ defined in
Eq.~(\ref{eq:SP-Gutzwiller})
is a simple but fundamental measure of spectral correlations.
Here the angular brackets denote a running local average over energy $E$. The dimensionless variable $\epsilon$ stands for a spectral energy distance in units of the mean level spacing $1/\bar{\rho}_{\rm sp}(E)$.
Corresponding energy correlation functions are defined, {\em e.g.}, for Green functions and scattering matrix elements. Besides energies, spatial or time-like correlators are also relevant in various branches of physics.
The quantum objects entering such correlators can commonly be semiclassically represented in terms of sums over (periodic) trajectories, similar to that in the trace formula
(\ref{eq:SP-Gutzwiller}) (see {\em e.g.} the reviews~\cite{Richter00,Jalabert00,Waltner10} for such semiclassical correlation functions in mesoscopic physics).
Hence, a semiclassical approach to $n$-point correlators naturally leads to $n$-fold coherent summations over amplitudes evolving along, in principle, infinitely many trajectories. Coping with such multiple infinite sums over highly oscillatory objects seems, at first glance, hopeless. However, an intrinsic strength of semiclassical theory lies in the fact that systems with diffusive or ergodic classical dynamics often do not require the computation of specific trajectories (nor is it always desirable). Instead, invoking ergodicity and uniformity of chaotic phase space implies powerful classical sum rules that permit a treatment of the orbits in a statistical manner. As no system-specific information is required, such approaches naturally lead to universal features of quantum-chaotic dynamics and may provide physical laws applicable to whole classes of quantum systems, exclusively characterized by means of their respective symmetry class.
The semiclassical evaluation of multiple sums over paths is illustrated for the prominent case of the two-point correlator, Eq.~(\ref{eq:SP-2-point}); the corresponding treatment of four-point objects is in Sec.~\ref{sec:OTOC} on out-of-time-order correlators. Replacing $\rhosp$ in Eq.~(\ref{eq:SP-2-point}) by its semiclassical approximation, Eq.~(\ref{eq:SP-Gutzwiller}), one obtains
\begin{eqnarray}
C(\epsilon)& \simeq &
\frac{1}{\bar{\rho}_{\rm sp}(E)^2} \left(\frac{1}{\pi\hbar}\right)^2 \times \nonumber \\
&& \left\langle
\sum_{\gamma} \sum_{\gamma'} A_\gamma A_{\gamma'}^\ast \ {\rm e}^{(i/\hbar)
[S_\gamma(E) - S_{\gamma'}(E) + (T_\gamma(E)+T_{\gamma'}(E))\epsilon/ (2 \bar{\rho}_{\rm sp})]}
\right\rangle_E \ .
\label{eq:SP-2-point-sc}
\end{eqnarray}
Here, $S(E+E') \simeq S(E) + T(E) E'$ with $T(E) = \partial S/ \partial E$ (the orbit's period). The contributions of periodic orbit pairs $\gamma, \gamma'$ that exhibit an action difference $\Delta S(E)$ in the phase factor are handled separately from those that do not. Of course, $\Delta S(E)$ vanishes for the joint contributions of the specific orbit pairs $\gamma = \gamma'$, {\em i.e.}, the {\it diagonal contributions}. In addition, if a system is invariant with respect to some symmetry, for instance time-reversal symmetry, then that symmetry is reflected in multiplicities of symmetry-related orbits with classical action degeneracies as well. The resultant constructive interference encodes the symmetry's influence on the quantum system. In effect, this can be considered as part of the diagonal contributions.
In the semiclassical limit, the phases $S_\gamma(E)/\hbar$ oscillate rapidly upon varying $E$, and only terms with sufficiently small action differences
\begin{equation}
\Delta S(E) = S_\gamma(E)-S_{\gamma'}(E)
\label{eq:delta_S}
\end{equation}
can survive the energy averaging $\langle \ldots \rangle_E$. Using the classical sum rule of
Hannay and Ozorio de Almeida~\cite{Hannay84},
\begin{equation}
\sum_{\gamma}
\frac{1}{|{\rm det}({\bf M}_\gamma-{\bf I})|}
f_\gamma(T_\gamma) \simeq \int_{T_0} dT \frac{f(T)}{T} \, ,
\label{eq:SP-HOdA}
\end{equation}
that follows from the assumption of uniform phase space exploration by unstable periodic orbits, Berry computed the diagonal contribution
\begin{equation}
C_d(\epsilon) \simeq \frac{1}{\bar{\rho}_{\rm sp}(E)^2} \left(\frac{1}{\pi\hbar}\right)^2
\int_{T_0} dT \ T \ e^{iT\epsilon/ (\hbar \bar{\rho}_{\rm sp})}
\label{eq:SP-2-point-diag}
\end{equation}
to the two-point correlator~\cite{Berry85}. He thus derived the spectral rigidity found in RMT semiclassically. For the spectral form factor $K(\tau)$ (with $\tau \!=\! T/\tH$), the Fourier transform of $C(\epsilon)$, the diagonal approximation leads to the linear ``ramp'': $K(\tau) = \eta\tau$,
with $\eta = 2$ and $\eta = 1$ for systems with and without time-reversal symmetry, respectively. Berry's analysis provided the first intimate theoretical link between RMT and the semiclassical theory of chaos.
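In terms of the form factor, the diagonal computation can be sketched compactly (schematically, for one common convention of $K(\tau)$ as an orbit-pair sum, suppressing details of the averaging):
\begin{equation}
K_{d}(\tau) = \frac{\eta}{\tH}\left\langle \sum_{\gamma} |A_\gamma|^{2}\, \delta(\tau \tH - T_\gamma)\right\rangle
\simeq \frac{\eta}{\tH}\int_{T_0} dT\; T\, \delta(\tau \tH - T) = \eta\, \tau \, ,
\end{equation}
where $|A_\gamma|^{2} = T_\gamma^{2}/|{\rm det}({\bf M}_\gamma - {\bf I})|$ (for primitive orbits) and the Hannay-Ozorio de Almeida sum rule, Eq.~(\ref{eq:SP-HOdA}), with $f_\gamma(T) = T^{2}\, \delta(\tau\tH - T)$ was used in the second step.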
Apart from the diagonal terms there is an enormous number of off-diagonal orbit pairs in a chaotic system, due to the exponential proliferation of the number of periodic orbits with increasing period $T_\gamma(E)$. Most of the orbit pairs consist of periodic orbits with uncorrelated actions. Summing over them and performing the energy average, they collectively give a vanishing contribution, including the effects of ``accidental'' nearly equal actions $S(E)$. However, from RMT it had been known that, for the case of time-reversal invariant systems, there had to be further universal spectral correlations beyond those related to the diagonal term~\cite{Bohigas91}:
\begin{equation}
K^{\rm GOE} (\tau) =
\left\{
\begin{array}{ll}
2\tau -\tau \log(1+2\tau) & {\rm if} \quad \tau < 1 \, , \\
2 - \tau \log \frac{2\tau +1 }{2\tau -1} & {\rm if} \quad \tau >1 \, . \\
\end{array}
\right.
\label{eq:SP-FormFac}
\end{equation}
Hence to describe such universal RMT features, one had to find orbit pairs with non-random action differences~\cite{Argaman93}. Although it was expected that such non-vanishing contributions come from a relatively small number of pairs of correlated orbits, for a long time it was unclear how these orbit correlations could emerge from an ergodic phase space structure.
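Expanding Eq.~(\ref{eq:SP-FormFac}) for small $\tau$ makes explicit what the off-diagonal orbit pairs must supply beyond the diagonal ramp:
\begin{equation}
K^{\rm GOE}(\tau) = 2\tau - 2\tau^{2} + 2\tau^{3} + {\cal O}(\tau^{4}) \, , \qquad \tau \ll 1 \, ,
\end{equation}
using $\log(1+2\tau) = 2\tau - 2\tau^{2} + \frac{8}{3}\tau^{3} - \ldots$ The term $-2\tau^{2}$ is precisely the leading off-diagonal contribution that the correlated orbit pairs of the following subsections account for.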
\subsubsection{Braided classical orbit bundles and encounters}
\label{sec:SP-bunches}
Henri Poincar\'e had already recognized in 1899 that
chaotic motion, which ergodically fills the classical
configuration space or phase space in a uniform manner and is often mistakenly equated with ``stochasticity'', is indeed subject to structural principles. His {\it Les M\'ethodes Nouvelles de la M\'ecanique C\'eleste}~\cite{Poincare99} already contains the notion that arbitrarily long trajectory segments can be approximated with arbitrary accuracy by pieces of a periodic orbit~\footnote{``\'Etant donn\'ees [$\dots$] une solution particuli\`ere quelconque de ces \'equations, on peut toujours trouver une solution p\'eriodique (dont la p\'eriode peut, il est vrai, \^etre tr\`es longue), telle que la diff\'erence entre les deux solutions soit aussi petite que l'on veut, pendant un temps aussi long qu'on le veut.'' (``Given [$\dots$] any particular solution of these equations, one can always find a periodic solution (whose period, it is true, may be very long), such that the difference between the two solutions is as small as one wishes, for as long a time as one wishes.'')}.
For this reason, periodic orbits are sometimes considered as a ``skeleton'' or backbone of chaotic dynamics~\cite{Cvitanovic91}, along which all the non-periodic orbits must wind; see also~\cite{Li20, Li17a, Li18}. Research since 2000 has brought to light how this ``skeleton'' is constructed and that chaotic dynamics is subject to further principles of order: (periodic) orbits do not appear as independent individual entities but in pairs, as first discovered in~\cite{Sieber01,Richter02,Sieber02}, and more generally in densely packed bundles~\cite{Mueller09}. This hidden classical property of periodic orbits in chaotic systems turned out to play a central role for understanding universal spectral properties.
According to the popular notion of chaos, chaotic classical motion is extremely unpredictable. Two closely adjacent paths diverge exponentially $\sim e^{\lsp t}$, with the positive SP Lyapunov exponent(s) $\lsp$ as the divergence rate. However, this statement does not capture all aspects of symplectic Hamiltonian dynamics: exponentially diverging motion happens locally on or in the neighborhood of unstable manifolds in phase space, and there exists its complement, motion along stable manifolds where initial phase space distances exponentially decrease. The combination of these two structural elements of Hamiltonian dynamics lies behind the formation of (periodic) orbits in braided bundles. Correspondingly, the probability that a chaotic orbit remains solitary decreases exponentially with its length.
\begin{figure}
\centering
\includegraphics[width=0.4\linewidth]{figures/encounter-1.eps}
\includegraphics[width=0.4\linewidth]{figures/encounter-2.eps}
\caption{\label{fig:PO-example}
{\bf ``Where's Waldo?''} |
Example of a correlated pair of two periodic orbits in the hyperbola billiard, essentially differing from each other in the region marked by the red circle, where the left orbit exhibits a self-crossing while the right partner orbit does not cross. This orbit pair illustrates a two-encounter (here in two-dimensional configuration space). The corresponding encounter region, centered around the crossing, extends along the orbits over a scale $v\tau_{E}$, comprising several reflections at the boundaries and growing as $\log \hbar^{-1}$
(Courtesy of M.~Sieber).
}
\end{figure}
The fact that braided bundles generally tend to arise due to the symplectic phase space structure is best illustrated by considering just two orbits. Figure~\ref{fig:PO-example} shows a representative example of such a pair of periodic
trajectories in the hyperbola billiard, which is known to exhibit chaotic dynamics.
Since the configuration space in the billiard interior is bounded,
long periodic trajectories necessarily have many self-crossings,
including those with a small angle between the intersecting segments
(see the self-crossing marked by a red circle in the left panel of Fig.~\ref{fig:PO-example}).
The right panel shows an almost identical partner trajectory,
which differs in topology from the reference trajectory only in the area of the self-encounter
in that the partner orbit has no intersection.
Such a trajectory doublet is shown in the left panel of Fig.~\ref{fig:pseudo-orbit} again schematically.
One of the paths has a crossing in the configuration space under a small angle
$\epsilon$. From this the corresponding partner trajectory can be uniquely constructed
by matching trajectory segments associated with the local stable and unstable manifold of the reference orbit~\cite{Sieber01}.
Then for each such long orbit, a partner orbit starting and
ending (exponentially) close to the first one exists.
For the fundamental braided orbit pair shown in the left panel of Fig.~\ref{fig:pseudo-orbit}, each of the two paths around the intersection has a {\em self-encounter}, where its segments are close to each other in configuration space. Outside the encounter region the two loop-like connecting pieces (``L'' and ``R'' in Fig.~\ref{fig:pseudo-orbit}, called ``links'') are almost indistinguishable for the two trajectories, since they are exponentially close. In a rough but helpful simplification, one can consider the links (loops) of both orbits as the same, whereas the two
possibilities of their interconnection in the encounter region allow for constructing and distinguishing the two different orbits.
The close similarity of periodic orbits forming a pair, such as those depicted in Fig.~\ref{fig:PO-example} and sketched in Fig.~\ref{fig:pseudo-orbit} implies a tiny difference $ \Delta S(E)$, Eq.~(\ref{eq:delta_S}), in their classical action, and accordingly, a small phase difference
$(i/\hbar) (S_\gamma(E)-S_{\gamma'}(E))
$
entering the semiclassical expression, Eq.~(\ref{eq:SP-2-point-sc}), for the spectral correlator.
Placing a transversal
Poincaré surface of section inside the encounter and considering the two
points where the respective encounter stretches pierce through that section, the distance between the piercings can be decomposed into stable and unstable components $s,u$. They determine the approximate action difference of the two partner orbits in a two-encounter as
$\Delta S \approx su$~\cite{Mueller09} (see also \cite{Sieber01,Sieber02}). In Ref.~\cite{Li17b} exact geometric relations are given for $\Delta S$ in terms of the properties of Moser invariant curves in the homoclinic structure underlying encounters. The relative scale of the correction between $\Delta S \approx su$ and the exact result is exponentially small.
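The coordinates $s,u$ also control how long an encounter lasts (a standard element of the encounter calculus~\cite{Mueller09}, quoted here for orientation): with $\lambda$ the Lyapunov exponent and $c$ a classical cutoff scale, the two stretches stay within a distance $c$ of each other for a time
\begin{equation}
t_{\rm enc}(s,u) \simeq \frac{1}{\lambda} \ln \frac{c^{2}}{|su|} \, .
\end{equation}
Since only pairs with $|\Delta S| \approx |su| \lesssim \hbar$ survive the energy averaging, contributing encounters have durations $t_{\rm enc} \sim \lambda^{-1}\ln (c^{2}/\hbar)$, i.e., of the order of the Ehrenfest time $\tau_{E}$.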
In constructing a partner path by switching at a self-encounter, it may happen that not one periodic trajectory, but two or more shorter ones form in such a way that their combination, called a pseudo-orbit~\cite{Keating91}, corresponds to the original orbit as a whole. Figure~\ref{fig:po-pair-HR} and the right panel of Fig.~\ref{fig:pseudo-orbit} show simple examples.
Such composites connecting orbits with pseudo-orbits also play a role in the next subsection. While the orbit pair to the left requires time-reversal symmetry, the bundle to the right also exists for the non-time reversal symmetric case.
\begin{figure}
\centering
\includegraphics[width=0.55\linewidth]{figures/orbit-pair.pdf}
\includegraphics[width=0.4\linewidth]{figures/pseudo-orbit.eps}
\caption{\label{fig:pseudo-orbit}
{\bf Braided periodic orbit pairs} |
Left: Scheme of a fundamental pair of classically correlated long periodic orbits linked to each other through a common self-encounter region~\cite{Sieber01}.
Right:
Two periodic orbits (dashed lines) forming a pseudo-orbit of order 2 contributing to the pseudo-orbit expansion of the spectral determinant in Eq.~(\ref{eq:pseudo1}) according to the rules in Eq.~(\ref{eq:pseudo2}). In this particular case, where the composing orbits almost touch and therefore define an encounter region, they are correlated with the longer eight-shaped orbit (solid). (Right panel from Ref.~\cite{Mueller09}.)
}
\end{figure}
The notion of links connecting self-encounters is helpful for devising the general mechanism for ``constructing'' (pseudo-)orbits via close self-encounters~\cite{Haake18,Mueller09}. Every long periodic orbit necessarily has many close self-encounters in configuration space. Not only two, but also three or generally $l$ orbital segments can temporarily approach each other, thus defining an $l$-encounter. The corresponding $l$ links outside of the encounter can be interconnected in $l!$ different ways through the $l$-encounter, defining a bunch of $l!$ different trajectories. Given one, the Hamiltonian phase space structure assures the existence of all of these orbits. Inasmuch as a long periodic orbit has many close self-encounters, labeled by $k$,
each of which realizes $l_k!$ possible switchings,
such a trajectory is a member of a group of trajectories with a total number given by the product of all factors, $N=\prod_k l_k!$. Figure \ref{fig:PO-bunches} shows a bundle of $N=(3!)^2 \, 2!=72$ trajectory structures,
generated from two three-encounters and one two-encounter. This bundle comprises individual periodic orbits as well as pseudo-orbits of nearly the same total period~\footnote{In principle, the periodic orbits sketched in Fig.~\ref{fig:PO-bunches} may contain higher repetitions of the entire orbits or parts of them. However, it has been shown~\cite{Waltner19} that trajectories with multiple partial traversals do not contribute (to leading order) to the spectral two-point correlator discussed below.}.
\begin{figure}
\centering
\includegraphics[width=0.70\linewidth]{figures/orbit-bundles.png}
\caption{\label{fig:PO-bunches}
{\bf Braided periodic orbit bundles} |
Example of a bundle of 72 periodic-orbit structures (single periodic orbits or pseudo-orbits composed of shorter periodic orbits) with nearly equal lengths and actions differing in two three-encounters and one two-encounter.
The illustration deliberately conveys the impression that there is
only a single orbit. Only in the boxes are the different $l!$ interconnections within the encounter regions resolved.
(From Ref. \cite{Mueller09}).
}
\end{figure}
The existence of encounters and the construction scheme outlined above is not restricted to periodic orbits but holds in general also for open trajectories~\cite{Richter02, Li20}, with relevance for instance in quantum chaotic transport and scattering; see also Secs.~\ref{sec:SP-Ehrenfest} and \ref{sec:OTOC}. The underlying mechanism of forming orbit bundles is the same in all cases.
Generally, the longer open or periodic orbits become, the more close encounters they have with other orbits, leading to the notion that in the long-time limit all orbits form whole nets weaving the classical phase space with a fine mesh, in the sense of Poincaré's original conception.
\subsubsection{Quantum spectral universality}
\label{sec:SP-universality}
The issue of orbit bundles does not naturally arise in classical SP physics, but their relevance for quantum physics is immediately obvious: due to the close similarity of all members of an orbit bundle, e.g.~as depicted in Fig.~\ref{fig:PO-bunches}, the members exhibit near-degenerate actions and are too highly correlated to ignore when energy averaging. This was discovered and first worked out by Sieber and one of the authors~\cite{Sieber01} for the case of correlated orbit pairs forming a two-encounter (Fig.~\ref{fig:pseudo-orbit}, left). Their analysis provided the leading quadratic contribution to the GOE spectral form factor,
Eq.~(\ref{eq:SP-FormFac}), beyond the linear ramp, thereby revealing symplectic chaotic dynamics as the semiclassical origin of RMT behavior. Based on these insights, Haake and members of his group worked out the encounter calculus that allows one to classify and compute general encounter structures, and used it in a ``tour de force'' approach for systematically calculating the semiclassical theory for the two-point correlator and spectral form factor, respectively. In the following, the major steps of their approach are outlined, which can be generalized to the many-body context; see Sec.~\ref{sec:spec-stat}. All details can be found in Haake's textbook~\cite{Haake18}.
Historically, the semiclassical calculation of spectral correlators, which starts with the classic 1985 paper by Berry \cite{Berry85}, was based on their representation in terms of the bare trace formula as in Eq.~(\ref{eq:SP-2-point-sc}). In this representation the only structures of relevance are consequently built from pairs of {\it orbits}. The initial success of the semiclassical program for spectral universality based on the enumeration, classification, and calculation of the pertinent encounter structures contributing to the spectral form factor was, however, restricted to times shorter than the Heisenberg time \cite{Mueller04}. Deriving the behavior of spectral fluctuations beyond this point requires understanding the semiclassical mechanisms that account for quantum unitarity and its non-perturbative effects. Whereas a version of the trace formula in which unitarity can be studied and/or implemented remains elusive, the so-called spectral determinant
\begin{equation}
Z(E)=B(E)\, {\rm det}(E-\hat{H})
\end{equation}
(where $B(E)$ is a real function of the energy $E$ without real zeros) provides a powerful periodic orbit expansion. There unitarity can be explicitly enforced, and it offers a more convenient starting point for a semiclassical calculation based on action correlations aiming to include post-Heisenberg time effects. The price to pay is that the whole enumeration problem now involves pairs of {\it pseudo orbits}.
The starting point of this analysis is the formal identity
\begin{equation}
\log {\rm det}(E-\hat{H})={\rm Tr~}\log (E-\hat{H} )
\end{equation}
that, together with the definition of the spectral resolvent
\begin{equation}
R(E+i0^{+})={\rm Tr~}\frac{1}{E+i0^{+}-\hat{H}}
\end{equation}
at the complex energy $E^{+}=E+i0^{+}$, allows one to write
\begin{equation}
Z(E^{+})\sim \exp{\left(\int^{E^{+}}R(E)dE\right)} \, .
\end{equation}
Here the symbol $\sim$ indicates that an arbitrary integration constant, producing a multiplicative factor absorbed in the function $B(E)$, is omitted. The semiclassical approximation to the spectral determinant is readily obtained by means of the semiclassical representation of the resolvent as a sum over periodic orbits \`a la Gutzwiller: Integrating Eq.~(\ref{eq:SP-Gutzwiller}) yields, to leading order in $\hbar$,
\begin{equation}
\label{eq:Z}
R_{\rm sp}(E)=-i\pi \bar{N}_{\rm sp}(E)-i\sum_{\rm po}\frac{A_{\rm po}(E)}{T_{\rm po}}{\rm e}^{i S_{\rm po}(E)/\hbar} \, .
\end{equation}
A careful analysis of this object, beautifully done by Berry and Keating \cite{Berry90}, requires the consistent treatment of the sum over repetitions implicit in the trace formula. A simplified version, where repetitions are neglected both at the level of the density of states and of the spectral determinant, is then obtained by simply expanding the exponential in (\ref{eq:Z}). Noticing that for primitive orbits $T_{\rm ppo}=T_{\rm po}$, and therefore $A_{\rm po}/T_{\rm po}=F_{\rm po}$ depends only on the stability of the orbit, one then naturally regroups the terms of the exponentiated sum and orders them by the number of primitive orbits that compose them. The resulting expression is then a sum
\begin{equation}
\label{eq:pseudo1}
Z_{\rm sp}(E^{+})\sim {\rm e}^{-i\pi \bar{N}(E)}\sum_{\rm pso}(-1)^{n_{\rm pso}}F_{\rm pso}{\rm e}^{iS_{\rm pso}/\hbar}
\end{equation}
over pseudo orbits (pso) of increasingly large order $n$ with
\begin{equation}
\label{eq:pseudo2}
F_{\rm pso}=\prod_{\rm ppo}^{n}F_{\rm ppo}, {\rm \ \ \ and \ \ }S_{\rm pso}=\sum_{\rm ppo}^{n}S_{\rm ppo},
\end{equation}
including the empty pseudo-orbit $F_{0}=1,S_{0}=0$.
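The combinatorial structure of Eqs.~(\ref{eq:pseudo1}) and (\ref{eq:pseudo2}) can equivalently be read off from a product over primitive periodic orbits (with repetitions neglected and phase factors absorbed in $F_{\rm ppo}$, as above):
\begin{equation}
Z_{\rm sp}(E^{+})\sim {\rm e}^{-i\pi \bar{N}(E)}\prod_{\rm ppo}\left(1-F_{\rm ppo}\,{\rm e}^{iS_{\rm ppo}/\hbar}\right) .
\end{equation}
Expanding the product generates exactly one term per pseudo-orbit, i.e., per finite set of distinct primitive orbits, with sign $(-1)^{n_{\rm pso}}$, weight $F_{\rm pso}=\prod F_{\rm ppo}$, and total action $S_{\rm pso}=\sum S_{\rm ppo}$, including the empty pseudo-orbit.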
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{figures/Haake-orbit-panel.png}
\caption{\label{fig:PO-geometries}
{\bf Periodic-orbit bundles} |
Some of the pseudo-orbit pairs whose correlated actions are responsible for the generating function of spectral correlators in Eq.~(\ref{eq:generZ}) (from Ref.~\cite{Heusler07}).
}
\end{figure}
The implementation of quantum unitarity at the semiclassical level, so far out of reach because the sum over periodic orbits is formally divergent, follows now in two steps. First, the correct analytical structure of the resolvent as a meromorphic function of the complex energy (correspondingly, the density of states being a distribution given by the usual sum over Dirac-delta peaks) is enforced by imposing the exact relation
\begin{equation}
R(E^{+})=\frac{d}{dE^{+}} \log Z(E^{+})=\left(\frac{1}{Z(E')}\frac{d}{dE^{+}}Z(E^{+})\right)_{E' \to E^{+}} \, ,
\end{equation}
on the corresponding semiclassical approximations $R_{\rm sp}, Z_{\rm sp}$. To finally implement quantum unitarity in full, in a second step one enforces the reality of the quantum mechanical energies by constructing a spectral determinant that is real for real energies. This condition can be implemented as well at different levels of rigor, the lowest being simply the replacement
\begin{equation}
\label{eq:RS}
Z_{\rm sp}(E) \to \bar{Z}_{\rm sp}(E)={\cal R}Z_{\rm sp}(E) {\rm \ \ \ for \ real \ } E
\end{equation}
that, however, makes the definition of how to perform the limit $\bar{Z}_{\rm sp}(E^{+} \to E)$ ambiguous.
In a remarkable paper \cite{Heusler07}, the resulting two-point spectral correlator, Eq.~(\ref{eq:SP-2-point}), based on the improved resolvent
\begin{equation}
\label{eq:RS-res}
R_{\rm sp}(E^{+})=\left[\frac{d}{dE^{+}}\sum_{A,B}(-1)^{n_{A}}\left({\cal R}F_{A}(E^{+}){\rm e}^{iS_{A}/\hbar}\right)F_{B}(E'){\rm e}^{iS_{B}(E')/\hbar}\right]_{E'\to E^{+}}
\end{equation}
was computed by incorporating the encounter calculus to include correlated quadruplets of pseudo-orbits of any order appearing in the generating function
\begin{equation}
\label{eq:generZ}
{\cal Z}(E_{A},E_{B},E_{C},E_{D})=\left\langle\frac{\bar{Z}_{\rm sp}(E_{A})\bar{Z}_{\rm sp}(E_{B})}{Z_{\rm sp}(E_{C})Z_{\rm sp}(E_{D})}\right\rangle
\end{equation}
that then leads to the spectral correlators by differentiation and identification
\begin{equation}
\langle R_{\rm sp}(E_{A})R_{\rm sp}(E_{B})\rangle=\left(\frac{\partial^{2}}{\partial E_{A} \partial E_{B}} {\cal Z}(E_{A},E_{B},E_{C},E_{D})\right)_{(E_{C},E_{D}) \to (E_{A},E_{B})}.
\end{equation}
The calculation of correlated quadruplets of pseudo-orbits requires generalizing the methods initially devised for orbit correlations in \cite{Mueller04} to include now multiple correlated orbits within pseudo-orbits. These correlations are defined through the familiar mechanism where orbits with systematically small action difference $\Delta S$, Eq.~(\ref{eq:delta_S}), are obtained by reshuffling the segments of the orbits inside an encounter, and such differences are consequently characterized by the number and type of the encounters. Two important aspects of the encounter structure of correlated pseudo-orbits are their total number ($V$) and the total number of orbit segments that approach within all encounters ($L$). Terms in the semiclassical expansions are then typically labelled by the function $g=L-V$.
To gain intuition about the encounter mechanism in the context of pseudo-orbits, consider first some of the lowest orders in the expansion of the generating function ${\cal Z}$. Besides the diagonal approximation, which is accounted for separately, the first correlated pseudo-orbits correspond to the sets $A= \{\gamma \},B=\{\gamma '\}, C=D=\{\}$ with $\gamma,\gamma'$ a pair of correlated orbits. Neglecting highly oscillatory terms proportional to ${\rm e}^{2i\bar{N}_{\rm sp}(E)}$ that vanish under averaging, this pairing can be obtained in two different ways. The enumeration of all possible pairs of correlated orbits was already achieved in \cite{Mueller04}. It starts with the lowest order of one 2-encounter, the Sieber-Richter pair corresponding to $L=2,V=1$ that, following the diagrammatic rules of encounter calculus, contributes to the spectral correlation with a term proportional to $1/(E_{A}-E_{B})^{g}$ with $g=L-V=1$, as depicted in the top left diagram of Fig.~\ref{fig:PO-geometries}. As this contribution requires time-reversal invariance, it is simply not present in the unitary case.
Generalizing this situation, the contributions from correlated orbits with higher and higher structures and increasingly larger $g$ can be considered. This is nothing but the encounter expansion one obtains from the usual orbit (instead of pseudo-orbit) approach. There are cancellations between contributions for all $g>1$ in the unitary case. This is shown for the particular case $g=2$ where the contributions from the only possible diagrams allowed by broken time-reversal invariance (grey shadow) have $L=4, V=2$ and $L=3,V=1$ and exactly cancel each other. The other possible diagrams admitted in the case of preserved time-reversal invariance end up giving the corresponding non-zero contribution to order $1/(E_{A}-E_{B})^{3}$ in accordance with the perturbative expansion of the universal RMT result.
Genuine pseudo-orbit correlations, beyond what can be obtained by pairing of two orbits, are shown in the $n:n'$ columns of Fig.~(\ref{fig:PO-geometries}), where $(n,n')\ne(1,1)$ indicates the order of the pseudo-orbits involved. The key observation is that at the perturbative level all such contributions must be cancelled against each other order by order in both orthogonal and unitary symmetry classes, as the result from simple orbit correlations $n=1,n'=1$ was already shown in \cite{Mueller04} to give the correct perturbative part of the universal result. The explicit verification of such cancellation mechanism crucially depends on the $(-1)^{n}$ factors in Eq.~(\ref{eq:RS-res}) and was carried out in \cite{Heusler07}.
The result of this arduous calculation is then unambiguously split into a perturbative contribution, arising essentially from the pairing of pseudo-orbits without the substitution of Eq.~(\ref{eq:RS}), which in turn simply reproduces the result obtained from the representation of Eq.~(\ref{eq:SP-2-point-sc}), and a non-perturbative one with a characteristic oscillatory dependence ${\rm e}^{-2\pi i \epsilon}$ from pairings involving the conjugate part of $\bar{Z}_{\rm sp}$. In the language of \cite{Heusler07}, this oscillatory contribution is obtained instead from a suitable combination of the pairings $(E_{C},E_{D})\to (E_{A},E_{B})$ and $(E_{C},E_{D})\to (E_{B},E_{A})$ on the correlator obtained from $Z_{\rm sp}$, a manipulation that is justified only if Eq.~(\ref{eq:RS}) is satisfied.
The reduction of the large set of pseudo-orbit correlations back to the perturbative result obtained from orbit pairs relies both on the consistency of pseudo-orbit vs orbit correlations, and on massive cancellations, with the lowest order examples shown in the table in Fig.~(\ref{fig:PO-geometries}). Interestingly, as shown in~\cite{Waltner09}, in the case of ratios instead of spectral determinant products, the origin and interpretation of these cancellations is related instead to so-called curvature effects.
Beyond the spectral two-point correlator and form factor, the encounter calculus has over the last 20 years been developed for and applied to higher spectral correlation functions~\cite{Mueller18} and many other observables, including scattering, quantum transport, and quantum echoes, to name just a few.
In Secs.~\ref{sec:SP-Ehrenfest} and \ref{sec:OTOC} we will partly review these activities.
\subsection{Ehrenfest phenomena}
\label{sec:SP-Ehrenfest}
The formation of orbit bundles with quasi-degenerate actions is intimately connected with encounters as a structural element of chaotic Hamiltonian dynamics braiding the orbits involved. For trajectory pairs with action difference of order $\hbar$, the encounter time $t_{\rm enc}$ corresponds to the Ehrenfest time
\begin{equation}
\tEsp
= \frac{1}{\lsp} \log \frac{L}{\lambda_{\rm dB}}
= \frac{1}{\lsp} \log \frac{S}{\hbar} \, ,
\label{eq:sp-Ehrenfest}
\end{equation}
where the label ``sp'' is used to delineate the $\hbar$-dependent Ehrenfest time in the SP context from the $\hbar_{\rm eff}$-dependent many-body Ehrenfest or scrambling time, Eq.~(\ref{eq:scrambling}), discussed in Sec.~\ref{sec:OTOC}.
Notably, $\tEsp$ links classical and quantum scales, namely the largest positive Lyapunov exponent $\lsp$ with $\hbar$ or the ratio between linear system size $L$ and (de Broglie) wave length $\lambda_{\rm dB}$, respectively. In Eq.~(\ref{eq:sp-Ehrenfest}), $L/\lambda_{\rm dB}$ can be replaced by $S/\hbar$ where the classical action $S$ can be viewed as scale for a corresponding phase space section. Rewriting Eq.~(\ref{eq:sp-Ehrenfest}) as
$L = \lambda_{\rm dB}\exp{(\lsp \tEsp)}$ provides a simple interpretation: $\tEsp$ corresponds to the time it takes an initial minimum uncertainty Gaussian density of phase points of width $\lambda_{\rm dB}$ to spread to a scale of the system size $L$
in a possibly higher dimensional chaotic system governed by $\lsp$. Beyond the logarithmically short time scale $\tEsp$ interference necessarily sets in~\cite{Chirikov81} and the Ehrenfest theorem~\cite{Ehrenfest27} soon fails (for a recent review of early work of Chirikov and coworkers on $\tEsp$-effects for the standard map, see~\cite{Shepelyansky20}).
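The logarithmic scaling in Eq.~(\ref{eq:sp-Ehrenfest}) is easily made quantitative. The following minimal numerical sketch (with purely illustrative parameters, $\lsp=1$ in inverse time units) shows how weakly $\tEsp$ responds even to drastic changes of the quantum scale:

```python
import math

def ehrenfest_time(lyap, system_size, lambda_dB):
    """Eq. (sp-Ehrenfest): time for a minimal wave packet of width
    lambda_dB to be stretched to the system size by the Lyapunov rate lyap."""
    return math.log(system_size / lambda_dB) / lyap

# Illustrative (hypothetical) numbers, lyap = 1 in inverse time units:
for ratio in (1e2, 1e4, 1e8):        # L / lambda_dB, an effective 1/hbar
    t_E = ehrenfest_time(1.0, ratio, 1.0)
    print(f"L/lambda_dB = {ratio:.0e}  ->  t_E = {t_E:5.2f}")

# Halving hbar (doubling S/hbar) adds only log(2)/lyap to t_E:
assert abs(ehrenfest_time(1.0, 2e4, 1.0)
           - ehrenfest_time(1.0, 1e4, 1.0) - math.log(2.0)) < 1e-12
```

This slow growth is precisely why $\tEsp$ collapses to zero relative to the Heisenberg time in the formal semiclassical limit discussed next.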
In Sec.~\ref{sec:SP-correlations}, RMT-type universality was deduced by formally taking the semiclassical limit of large $\tH$ for fixed $\tau\!=\! t /\tH$, {\em i.e.}, involving increasing times $t$. In turn, this implies that encounter times, $\tEsp$, collapse to zero, since they scale logarithmically with $\hbar$. However, measurements and numerical calculations commonly show fascinating quantum chaotic phenomena in the regime of small but non-vanishing $\hbar$, {\em i.e.}~non-vanishing $\tEsp$, implying deviations from RMT-type universality~\cite{Berry85}. Here we give a brief non-technical overview of this regime, which is perfectly amenable to semiclassical methods but beyond the reach of RMT approaches; for a detailed review-type account of the underlying semiclassical theory of Ehrenfest effects with an exhaustive account of the literature, see {\em e.g.} Ref.~\cite{Waltner12}.
After it had been demonstrated in the early $90$'s that -- contrary to the prevailing perspective -- it was possible to develop advanced methods to adequately treat post-Ehrenfest quantum dynamics purely semiclassically~\cite{Tomsovic91b, Oconnor92, Sepulveda92, Tomsovic93}, Ehrenfest phenomena were investigated, particularly for observables relevant in mesoscopic quantum systems. There the $\tEsp$-dependence was considered for a large variety of scattering, transport, spectral, and quantum decay properties of chaotic conductors for which representative examples are given in the following.
\subsubsection{Quantum transport}
For the Lorentz gas, a prototypical model of randomly placed disks acting as chaotic classical scatterers in an otherwise ballistic two-dimensional system~\cite{Gas1998Book},
$\tEsp$-signatures in weak localization were first theoretically discussed in Ref.~\cite{Aleiner96prb}. Based on a ballistic $\sigma$-model, and invoking averaging over weak disorder, this approach accounted for correlations in the initial chaotic dynamics (dashed box in Fig.~\ref{fig:Lorentz}) up to $\tEsp$. For later times the dynamics merges into uncorrelated diffusive behavior in the Lorentz gas setting. The combined mechanism of initial chaotic spreading, followed by diffusive backscattering, led to the prediction \cite{Aleiner96prb}
\begin{equation}
\Delta \sigma \simeq -\frac{e^2}{\pi\hbar} \exp{\left(-\frac{
\tEsp }{t_\phi}\right)} \, \ln \left(
\frac{t_\phi}{t_{\rm e}}\right)
\label{eq:delta-sigma}
\end{equation}
for the weak localization correction to the two-dimensional conductivity.
In Eq.~(\ref{eq:delta-sigma}), $t_{\rm e}$ and $t_\phi(T)$ denote the elastic scattering time and the temperature-dependent phase breaking time, respectively.
The subsequently observed unusual exponential temperature dependence of $\Delta \sigma(T)$ for a ballistic electron gas in between randomly placed antidots (right panel of Fig.~\ref{fig:Lorentz}) allowed for experimentally detecting and extracting the Ehrenfest time~\cite{Yevtushenko00} using Eq.~(\ref{eq:delta-sigma}). In view of Eq.~(\ref{eq:sp-Ehrenfest}), it is then possible to estimate the {\em classical} Lyapunov exponent $\lsp$ of such an electron antidot billiard from the {\em quantum} weak localization correction $\Delta \sigma$.
See Ref.~\cite{Schneider13} for later work on how the Ehrenfest time effectively poses a short-time threshold for the trajectories contributing to the interaction correction in antidot lattices.
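The extraction of $\tEsp$ from the measured temperature dependence via Eq.~(\ref{eq:delta-sigma}) can be sketched numerically (hypothetical parameter values, $\Delta\sigma$ in units of $e^2/\pi\hbar$): once the slowly varying logarithmic factor is divided out, $\ln|\Delta\sigma|$ is exactly linear in $1/t_\phi$ with slope $-\tEsp$.

```python
import math

def delta_sigma(t_E, t_phi, t_e):
    """Weak-localization correction of Eq. (delta-sigma),
    in units of e^2/(pi*hbar)."""
    return -math.exp(-t_E / t_phi) * math.log(t_phi / t_e)

# Hypothetical parameters (arbitrary time units):
t_E, t_e = 5.0, 1.0
t_phis = [20.0, 40.0, 80.0]   # phase-breaking time grows as T is lowered

# Divide out the slowly varying log factor; what remains is exp(-t_E/t_phi),
# so its log is exactly linear in 1/t_phi with slope -t_E:
x = [1.0 / t for t in t_phis]
y = [math.log(-delta_sigma(t_E, t, t_e) / math.log(t / t_e)) for t in t_phis]
slope = (y[-1] - y[0]) / (x[-1] - x[0])
print(f"slope = {slope:.2f}")   # recovers -t_E
```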
\begin{figure}
\centering
\includegraphics[width=0.45\linewidth]{figures/Lorentz-gas-a.pdf}
\hspace{2cm}
\includegraphics[width=0.2\linewidth]{figures/Lorentz-b.jpg}
\caption{\label{fig:Lorentz}
{\bf Ehrenfest effect on weak localization in a Lorentz gas} |
Left: Sketch of a pair of paths contributing to coherent backscattering in a ballistic system composed of randomly placed disks. Trajectories of a minimal wave packet of size $\lambda_{\rm dB}$, marked as black dot, separate on scales of $\tEsp$, Eq.~(\ref{eq:sp-Ehrenfest}), up to a distance of the size $L$ of the classical scatterers, providing a mechanism for splitting the initial wave packet and leading to coherent backscattering due to constructive interference of time-reversed back-reflected paths.
Right: Experimental realization of a Lorentz gas built from an irregular array of ``antidots'' in a high-mobility two-dimensional electron system
(Left and right panel from Refs.~\cite{Richter00} and \cite{Yevtushenko00}, respectively).
}
\end{figure}
After these earlier studies, and parallel to the development of the encounter calculus for spectral correlators introduced above, the particular relevance of action correlations and braided orbit bundles for chaotic quantum transport has quickly become evident. In Ref.~\cite{Richter02} the leading-order weak localization correction to the classical magneto-transmission of ballistic mesoscopic conductors was computed and related to off-diagonal contributions and interference of encounter-correlated lead-connecting trajectories. This finding is in agreement with RMT and solved a decade-long issue concerning missing current conservation in semiclassical transport theory (based on the diagonal approximation~\cite{Blumel88,Baranger91}) for chaotic disorder-free conductors. Various subsequent theoretical works have extended semiclassical quantum transport theory on the basis of encounter calculus or related approaches. These include the semiclassical calculation of higher-order corrections to the classical conductance of ballistic cavities~\cite{Heusler06}, shot noise in chaotic conductors~\cite{Lassl03,Schanz03,Braun06,Whitney06,Mueller07,Brouwer07}, weak anti-localization in spin-orbit coupled semiconductor cavities~\cite{Zaitsev05,Zaitsev05a,Bolte07}, higher transport moments and correlators~\cite{Berkolaiko12,Mueller07,Berkolaiko11,Bereczuk21,Novaes22}, Wigner-Smith time delay~\cite{Kuipers07,Kuipers08,Berkolaiko11,Kuipers14}, role of tunnel barriers~\cite{Waltner12a,Kuipers13}, and extensions to the ac-conductance~\cite{Petitjean09}.
Moreover, for transport through chaotic conductors Ehrenfest phenomena were again the focus of interest. It turned out that the weak localization correction to the classical conductance indeed shows a characteristic exponential suppression, reading (for large number of scattering channels)~\cite{Adagideli03}
\begin{equation}
\Delta G = -2\frac{e^2}{h} \, e^{-\tEsp/\tau_{D} } \, .
\label{eq:delta-sigma-Ehrenfest}
\end{equation}
This has a straightforward interpretation: If $\tEsp >\tau_{D}$, a charge carrier will leave the chaotic conductor with dwell time $\tau_{D}$ before an encounter could be formed, thereby suppressing the conductance correction on scales smaller than the encounter time. Reference~\cite{Adagideli03}, focussing on the transmission, has been complemented and generalized by further trajectory-based approaches~\cite{Jacquod06,Rahav06,Brouwer06,Altland07,Waltner10} that also considered $\tEsp$-effects on the complementary weak localization mechanism for the quantum reflection. In addition, shot noise measured in terms of the Fano factor exhibits such a $\tEsp$-behavior implying that it vanishes in the strict limit of large Ehrenfest times~\cite{Whitney06,Brouwer07}; see Ref.~\cite{Waltner11} for the generalization to full counting statistics. Corresponding techniques allow for computing $\tEsp$-modifications of the RMT form factor~(\ref{eq:SP-FormFac}) giving rise to deviations from the linear ``ramp'' for the GUE case~\cite{Brouwer06b,Waltner10}.
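The crossover encoded in Eq.~(\ref{eq:delta-sigma-Ehrenfest}) amounts to a single exponential suppression factor; a short numerical sketch with illustrative (hypothetical) parameters:

```python
import math

def wl_correction(t_E, tau_D):
    """|Delta G| in units of 2 e^2/h, Eq. (delta-sigma-Ehrenfest):
    exponential Ehrenfest-time suppression of weak localization."""
    return math.exp(-t_E / tau_D)

tau_D = 10.0                        # dwell time, hypothetical units
for t_E in (0.0, 1.0, 10.0, 50.0):
    f = wl_correction(t_E, tau_D)
    print(f"t_E/tau_D = {t_E / tau_D:4.1f}  ->  suppression factor {f:.3f}")
# t_E -> 0 recovers the full RMT value; for t_E >> tau_D carriers escape
# before encounters can form and the quantum correction vanishes.
```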
Contrary to the suppression of weak localization in the classical limit, in a chaotic quantum dot the variance of the conductance comprises $\tEsp$-independent terms, as was first numerically observed in~\cite{Tworzydlo04, Jacquod04}. The fact that the variance measuring the size of mesoscopic conductance fluctuations prevails in the classical limit is also captured by semiclassical theory~\cite{Brouwer06}. It arises from trajectories spending a long time in the vicinity of periodic orbits inside the cavity. Since the associated dwell times are arbitrarily long, these trajectories overcome the mechanism that $\tEsp$ provides a short-time suppression for quantum effects. Moreover, conductance fluctuations in ballistic conductors differ from those of disordered conductors at large $\tEsp$; see~\cite{Brouwer07} for a unified semiclassical treatment of chaotic quantum dots and extended systems exhibiting diffusive dynamics at long time scales, such as depicted in Fig.~\ref{fig:Lorentz}.
\subsubsection{Quantum decay}
Naturally, the imprints of the Ehrenfest time should appear most directly in the time domain, i.e.~in explicitly time-dependent quantities. Quantum mechanical decay of an open chaotic quantum system, such as a cavity with a hole in the boundary, is predominantly governed by classical exponential decay with dwell time $\tau_{D}$. However, a semiclassical interference mechanism similar to that of coherent backscattering leads to an enhancement of the quantum probability compared to the classical survival probability~\cite{Waltner08}, confirming RMT predictions~\cite{Frahm97}. Going beyond RMT one can semiclassically compute explicit $\tEsp$-effects on the time-dependent quantum decay. The spatially integrated probability density inside the open quantum system decreases as
\begin{equation}
\rho(t) \simeq e^{-t/\tau_{D}} + e^{-(t-\tEsp)/\tau_{D}} \
\frac{(t-2\tEsp)^2}{2\tau_{D}\tH} \ \Theta(t-2\tEsp) \, ,
\label{eq:decay}
\end{equation}
where the second contribution is the leading term in a series in $1/\tH$ of quantum corrections arising from
special pairs of interfering, correlated open trajectories with encounters located at their ends~\cite{Waltner08}.
\footnote{
These trajectories also provide the key mechanism establishing an appropriate semiclassical version of the continuity equation~\cite{Kuipers09}.}
From times $t>2\tEsp$ on, the quantum decay is delayed compared to the classical decay $e^{-t/\tau_{D}}$ because it takes a minimum time $2\tEsp$ to form encounters thereby generating quantum interference effects. This unique $\tEsp$-behaviour has been confirmed by numerical wave packet simulations~\cite{Waltner08}.
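This behaviour follows directly from evaluating Eq.~(\ref{eq:decay}); the following sketch (with hypothetical time scales) shows the quantum correction switching on only beyond $t=2\tEsp$:

```python
import math

def rho(t, tau_D, t_E, t_H):
    """Survival probability, Eq. (decay): classical exponential decay plus
    the leading 1/t_H encounter correction, active only for t > 2 t_E."""
    cl = math.exp(-t / tau_D)
    qm = 0.0
    if t > 2 * t_E:
        qm = (math.exp(-(t - t_E) / tau_D)
              * (t - 2 * t_E) ** 2 / (2 * tau_D * t_H))
    return cl + qm

# Hypothetical scales: tau_D = 1, t_E = 2, Heisenberg time t_H = 100.
tau_D, t_E, t_H = 1.0, 2.0, 100.0
for t in (1.0, 3.0, 5.0, 8.0):
    ratio = rho(t, tau_D, t_E, t_H) / math.exp(-t / tau_D)
    print(f"t = {t:3.1f}   rho/rho_cl = {ratio:.3f}")
# The ratio stays exactly 1 until t = 2 t_E; afterwards the quantum decay
# is delayed compared to the classical one (ratio > 1).
```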
\subsubsection{Proximity effect in Andreev billiards}
Finally, post-Ehrenfest interference mechanisms are at the heart of the formation of an induced gap in the density of states of a chaotic quantum dot in proximity to a superconductor.
Such an ``Andreev billiard''~\cite{Kosztin95, Adagideli02} possesses the interesting peculiarity that the suppression of its density of states at the Fermi energy crucially depends on whether the dynamics of its classical counterpart is integrable or chaotic~\cite{Melsen96}.
The spectrum of a chaotic Andreev billiard was expected, according to RMT, to exhibit a hard gap scaling with the energy $\sim \hbar/\tau_{D}$, where $\tau_{D}$ is the average dwell time a particle moves in the billiard between successive Andreev reflections~\cite{Melsen96}. On the contrary, semiclassical theory based on the
diagonal approximation yielded only
an exponential suppression of the density of states~\cite{Lodder98,Schomerus99,Ihra01} pointing to an obvious discrepancy that attracted much theoretical interest; see {\em e.g.}~\cite{Adagideli02a,Micklitz09} for first attempts to account for $\tEsp$-aspects, and~\cite{Beenakker05} for a review.
Later it was shown~\cite{Kuipers10,Kuipers11,Engl11}, using encounter calculus, how classical orbit correlations lead to the formation of the hard gap, as predicted by RMT in the limit of negligible Ehrenfest time, and how the influence of a finite $\tEsp$ causes the gap to shrink until the classical regime of exponential suppression is reached. Notably, this crossover is not smooth; instead, for intermediate $\tEsp$ a second pronounced gap was predicted to appear around $E\sim \hbar/\tEsp$ that would be a particularly striking feature of $\tEsp$-effects.
The Ehrenfest time has reappeared in a new guise as the `scrambling time' in the many-body context. In Sec.~\ref{sec:OTOC} the saturation of out-of-time-order correlators at Ehrenfest time scales is discussed as another clear-cut $\tau_{E}$-signature, marking the change in interference governed by pre- and post-Ehrenfest semiclassical dynamics.
\section{Semiclassical theory of bosonic many-body quantum chaos}
\label{sec:SC-MB}
\subsection{van Vleck propagator for bosonic systems}
It might seem that the semiclassical approximation, based on the unaltered kinematic structure of quantum mechanics (Hilbert spaces, linear time evolution, observables as Hermitian operators, entanglement, etc.) supplemented by the asymptotic analysis of the propagator, requires only minimal modifications to be ushered into the realm of interacting many-body (MB) systems. Due to the product form of the total Hilbert space in such systems, the corresponding modification of the MB propagator would be simply accounted for by adapting the classical limit into the high dimensional phase space characteristic of MB classical systems. However, this picture does not account for a kinematic aspect of purely quantum origin, namely quantum indistinguishability, that imposes severe restrictions on the allowed MB identical particle system states by demanding that they have a well defined transformation rule under particle label permutations~\cite{Sakurai94}. This restriction on the allowed states is completely alien to the world of classical mechanics where identical particles can be made indistinguishable only in a statistical sense, whereas the fundamental microscopic dynamics enables, in principle, tracking of each particle's identity simply by following its path in phase space. This feature is even essential at the non-interacting level where it has macroscopic effects, such as the stability of fermionic matter~\cite{Dyson67a, Dyson67b, Lieb76}, the phenomena of Bose-Einstein condensation~\cite{Bose24, Einstein24}, the Hong-Ou-Mandel effect~\cite{Hong87}, and related coherence effects of much recent interest such as boson sampling~\cite{Aaronson10}. It is therefore critical to incorporate it in any semiclassical program aimed at MB quantum systems.
In the spirit of the semiclassical theory of particle systems, quantum indistinguishability can be implemented by the application of (anti-) symmetrization operators~\cite{Sakurai94} directly on the arguments of the van Vleck-Gutzwiller propagator for distinguishable particles. In this way, the fermionic (F), bosonic (B) propagators are explicitly given by
\begin{equation}
K^{\rm F, B}_{\rm sp}({\bf r}_{f},{\bf r}_{i},t):=\frac{1}{N!}\sum_{{\cal P}}\epsilon^{{\cal P}}K_{\rm sp}({\cal P}{\bf r}_{f},{\bf r}_{i},t)
\end{equation}
involving a sum over the $N!$ elements ${\cal P}$ of the permutation group acting on the particle labels of the $Nd$-dimensional configuration eigenstate $|{\bf r}\rangle$, weighted by their parity $\epsilon^{{\cal P}}$ where $\epsilon=-1\ (1)$ for fermions (bosons).
The advantage of the semiclassical approximation in this first-quantized representation is that it is solely based on the semiclassical approximation for distinguishable particles, which does not know anything about the F/B nature of the degrees of freedom.
This first-quantized formalism has been very successful, especially in the framework of non-interacting MB systems (or weakly interacting ones using perturbation theory), from the mesoscopic theory of quantum transport~\cite{Richter00,Jalabert00} and quantum dots~\cite{Ullmo95} to the description of the scattering of bosonic states through chaotic cavities~\cite{Urbina16}. The problem is substantially more involved if interactions are taken into account, due to the very different nature of the sum over classical paths inherent in the semiclassical propagator and the sum over permutations accounting for indistinguishability. This lack of compatibility is most clearly seen in the limit $N \gg 1$, where the (factorial) proliferation of terms in the (anti-) symmetrization process renders the calculation essentially impossible, even if one could efficiently account for the classical distinguishable limit.
The path towards extending semiclassical methods into the realm of quantum interacting systems of identical particles naturally profits from the change of perspective given by the description of such systems in terms of {\it quantum fields}~\cite{Negele1998}. On the conceptual level, understanding particle states as excitations of a quantum field means that the individual identity of the distinguishable degrees of freedom, now an extra conceptual baggage without any physical relevance, is immediately absent from the description: instead of building MB states out of (anti-) symmetrized states of distinguishable particles, one specifies states by saying how many particles occupy any given set of single-particle (SP) orbitals irrespective of their individual identities.
The space of quantum states labelled by such configurations is the familiar Fock space, and the goal of this section is to review the conceptual and technical steps that enable the adaptation of the semiclassical program into this new landscape, as well as its new regime of validity, advantages, and limitations.
\subsubsection{Semiclassical derivation}
Following standard references~\cite{Negele1998}, the construction of the Fock space begins with selecting an arbitrary but fixed set of SP states, denoted from now on as ``sites'' or ``orbitals''
\begin{equation}
\phi_{i} {\rm \ \ with \ \ \ }i=1,\ldots,d
\end{equation}
where $d$ is the (possibly infinite) dimension of the SP (or ``local'') Hilbert space. The quantum mechanical state $|\Psi\rangle$ of a bosonic MB system is then an element of the Fock space $|\Psi\rangle \in {\cal F}$ of the form
\begin{equation}
|\Psi\rangle=\sum_{\bf n}\Psi_{{\bf n}}|{\bf n}\rangle
\end{equation}
where the basis states, called Fock states,
\begin{equation}
|{\bf n}\rangle=|n_{1},\ldots,n_{d}\rangle {\rm \ \ \ \ with \ \ } n_{i}=0,1,2,\ldots
\end{equation}
are labelled by occupation numbers $n_{i}$, the eigenvalues of the observables $\hat{n}_{i}$
\begin{equation}
\hat{n}_{i}|{\bf n}\rangle=n_{i}|{\bf n}\rangle
\end{equation}
that count how many particles occupy the SP states $\phi_{i}$. Observables in ${\cal F}$ are written in terms of the corresponding creation/annihilation operators $\hat{b}^{\dagger},\hat{b}$ defined through their action in the basis Fock states,
\begin{eqnarray}
\hat{b}_{i}|n_{1},\ldots,n_{i},\ldots,n_{d}\rangle&=&\sqrt{n_{i}}|n_{1},\ldots,n_{i}-1,\ldots,n_{d}\rangle \nonumber \\
\hat{b}_{i}^{\dagger}|n_{1},\ldots,n_{i},\ldots,n_{d}\rangle&=&\sqrt{n_{i}+1}|n_{1},\ldots,n_{i}+1,\ldots,n_{d}\rangle
\end{eqnarray}
satisfying the important relations
\begin{equation}
\left[\hat{b}_{i},\hat{b}_{j}\right]=0 {\rm \ \ , \ \ }\left[\hat{b}_{i},\hat{b}_{j}^{\dagger}\right]=\hat{1}\delta_{i,j} {\rm \ \ and \ \ } \hat{n}_{i}=\hat{b}_{i}^{\dagger}\hat{b}_{i}.
\end{equation}
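These relations are straightforward to verify numerically by representing $\hat{b},\hat{b}^{\dagger}$ as matrices in a single-mode Fock space truncated at occupation $n_{\max}$ (a generic textbook check, not specific to any Hamiltonian); the truncation only spoils the commutator on the edge state $|n_{\max}\rangle$:

```python
import math

# Matrix elements in the truncated basis {|0>, ..., |n_max>}:
#   <m| b |n> = sqrt(n) delta_{m,n-1},  <m| b^dag |n> = sqrt(n+1) delta_{m,n+1}
n_max = 6
dim = n_max + 1
b    = [[math.sqrt(n) if m == n - 1 else 0.0 for n in range(dim)]
        for m in range(dim)]
bdag = [[b[n][m] for n in range(dim)] for m in range(dim)]  # real transpose

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(dim)) for j in range(dim)]
            for i in range(dim)]

bbd, bdb = matmul(b, bdag), matmul(bdag, b)
comm = [[bbd[i][j] - bdb[i][j] for j in range(dim)] for i in range(dim)]

# [b, b^dag] = 1 on every state except the truncation edge |n_max>:
print([round(comm[i][i], 10) for i in range(dim)])
# n = b^dag b is diagonal with eigenvalues 0, ..., n_max:
print([round(bdb[i][i], 10) for i in range(dim)])
```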
As a rule, Hermitian operators that are quadratic forms in $\hat{b}^{\dagger},\hat{b}$ represent single-particle observables, whereas two-body interactions are represented by combinations of fourth order. The general form of the Hamiltonian describing a system of bosons evolving under the influence of external potentials and pairwise interactions is then
\begin{equation}
\label{eq:Ham}
\hat{H}=\sum_{i,j}h_{i,j}\hat{b}^{\dagger}_{i}\hat{b}_{j}+\sum_{i,j,i',j'}v_{i,j,i',j'}\hat{b}^{\dagger}_{i}\hat{b}_{j}\hat{b}^{\dagger}_{i'}\hat{b}_{j'}\ .
\end{equation}
Correspondingly, the Fock-space propagator (assuming for simplicity time-independent external and interaction potentials) is defined as usual by
\begin{equation}
K({\bf n}^{(f)},{\bf n}^{(i)},t)=\langle {\bf n}^{(f)}|{\rm e}^{-\frac{i}{\hbar}\hat{H}t}|{\bf n}^{(i)}\rangle
\end{equation}
and our goal is to first identify the MB version of the semiclassical regime $\hbar_{\rm eff}=1/N\to 0$, and then, starting from a suitable path integral representation of $K$, perform the steps to derive the Fock space version of the van Vleck-Gutzwiller semiclassical sum over classical paths.
The first difficulty in attempting this program is the very fact that there is no path integral between Fock states, at least not in the usual sense of time slicing and inserting representations of the unit operator in terms of Fock states. The issue here is clear: the Fock states define a discrete basis. Historically, this problem was addressed by shifting to the coherent state basis on ${\cal F}$~\cite{Negele1998}, defined as the common eigenstates of the (non-Hermitian) annihilation operators, and labelled by a set of continuous complex numbers
\begin{equation}
\hat{b}_{i}|{\bf z}\rangle=z_{i}|{\bf z}\rangle \ .
\end{equation}
They admit a realization of the unit operator in ${\cal F}$
\begin{equation}
\frac{1}{(2\pi i)^{d}}\int \prod_{i}dz_{i}dz^{*}_{i} |{\bf z}\rangle \langle {\bf z}| = \hat{1}
\end{equation}
in a form suitable for the usual time-slicing path integral construction.
The resulting form of the coherent state path integral for bosonic quantum fields has been extensively used as a basis for semiclassical approximations~\cite{Negele1998, Klauder78, Baranger01}, so its use to derive a van Vleck-Gutzwiller type of approach is quite appealing. However, one conceivable drawback of coherent states is that the resulting saddle point equations do not generally admit real solutions, thus requiring the complexification of the classical limit of the theory~\cite{Baranger01}. This approach has recently been implemented and successfully applied to describe quantum dynamics in cold atomic systems in optical lattices~\cite{Tomsovic18}, but its conceptual and technical foundation differs in some essential ways from the van Vleck-Gutzwiller approach taken into Fock space. A main feature of the usual (coordinate) path integral in single-particle systems, which differs from the complexified phase space inherent in the coherent state approach, is its characteristic Hamiltonian classical limit in terms of real phase space coordinates. This happens to be naturally consistent with the boundary value problem of ordinary time-dependent WKB theory~\cite{Maslov81}. If one is interested in maintaining the reality of the dynamical variables, the widely used coherent state path integral turns out not to be the ideal starting point for the van Vleck-Gutzwiller derivation.
Interestingly, a more direct approach follows the actual historical development of semiclassical methods for SP systems, where the wave packet propagator (constructed from the translations of the harmonic oscillator ground state) was a later development that came only after the construction of semiclassical approximations in a configuration representation culminating with ordinary time-dependent WKB theory~\cite{Keller58, Maslov81}.
A path integral in Fock space that provides a semiclassical approximation with real paths can be based on MB states of operators that have three key properties of the familiar configuration (or momentum) operators in SP systems. First, they must have a real, continuous, and unbounded spectrum. Second, they must have, in some precise sense, a classical limit where they play the role of {\it half} of a canonical pair. Third, when the corresponding realization of the unit operator is inserted in the time-sliced propagator, they must produce a real action functional admitting real solutions when extremized with fixed boundary conditions on the paths. A very natural choice is then the manifestly Hermitian combinations
\begin{equation}
\label{eq:quad1}
\hat{q}_{i}=\frac{\hat{b}^{\dagger}_{i}+\hat{b}_{i}}{\sqrt{2}} {\rm \ \ , \ \ } \hat{p}_{i}=\frac{\hat{b}_{i}-\hat{b}^{\dagger}_{i}}{\sqrt{2}i}
\end{equation}
again reminding us of the relation between the creation/annihilation and position/momentum operators for the SP harmonic oscillator \cite{Sakurai94}. In fact, these pairs can be easily shown to fulfill the relations
\begin{equation}
\left[\hat{q}_{i},\hat{q}_{j}\right]=\left[\hat{p}_{i},\hat{p}_{j}\right]=0 {\rm \ \ and \ \ } \left[\hat{q}_{i},\hat{p}_{j}\right]=i\delta_{i,j}
\end{equation}
of canonically conjugate operators. Together with their eigenbases, which satisfy all the required conditions (as can be checked by direct computation), they are at the center of the construction of the semiclassical approximation for bosonic matter fields. Following the terminology of quantum optics, where these canonical pairs are in common use as Hermitian versions of the standard field operators, we refer to them as {\it quadrature operators} or simply quadratures~\cite{Scully97}. Armed with the quadratures and their eigenbases
\begin{equation}
\label{eq:quad2}
\hat{q}_{i}|{\bf q}\rangle=q_{i}|{\bf q}\rangle {\rm \ \ and \ \ } \hat{p}_{i}|{\bf p}\rangle=p_{i}|{\bf p}\rangle
\end{equation}
nicely satisfying \cite{Engl16}
\begin{equation}
\langle {\bf q}|{\bf p}\rangle=\frac{{\rm e}^{i{\bf q}\cdot {\bf p}}}{(2\pi)^{d/2}}
\end{equation}
it is possible to express the exact path integral by the usual method of time-slicing and inserting unity. However, note that inserting the quadrature definitions, Eq.~(\ref{eq:quad1}), into the generic form of the Hamiltonian, Eq.~(\ref{eq:Ham}), leads to Hamiltonians of a very different character than the ``kinetic plus potential energy'' (mechanical type) often found in non-relativistic SP systems. In fact, despite their identical formal properties, the configuration and momentum quadratures do not represent anything like position and momentum, and they can be considered as just a technical tool used to develop a path integral with the desired properties. For systems of massive bosons, they are not observable in the strict sense (a property they share with coherent states~\cite{Superselection}), and therefore the construction of the propagator between physical (Fock) states must also eventually be addressed.
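Before proceeding, the canonical algebra of the quadratures can be verified with the same kind of truncated matrix representation used for the ladder operators (we adopt the sign convention $\hat{p}=i(\hat{b}^{\dagger}-\hat{b})/\sqrt{2}$, for which $[\hat{q},\hat{p}]=i$ follows directly from $[\hat{b},\hat{b}^{\dagger}]=1$):

```python
import math

# Truncated single-mode ladder matrices, as before:
n_max = 8
dim = n_max + 1
b    = [[math.sqrt(n) if m == n - 1 else 0.0 for n in range(dim)]
        for m in range(dim)]
bdag = [[b[n][m] for n in range(dim)] for m in range(dim)]

# Quadratures, Eq. (quad1), with the convention p = i (b^dag - b)/sqrt(2):
q = [[(bdag[i][j] + b[i][j]) / math.sqrt(2) for j in range(dim)]
     for i in range(dim)]
p = [[1j * (bdag[i][j] - b[i][j]) / math.sqrt(2) for j in range(dim)]
     for i in range(dim)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(dim)) for j in range(dim)]
            for i in range(dim)]

qp, pq = matmul(q, p), matmul(p, q)
comm = [[qp[i][j] - pq[i][j] for j in range(dim)] for i in range(dim)]
# [q, p] = i * identity away from the truncation edge:
print(comm[0][0], comm[1][1])   # both approximately 1j
```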
Given the above considerations, the construction of the path integral form of the propagator between configuration quadratures is implemented similarly to the standard methodology~\cite{Schulman81} with a few key modifications. In a nutshell, one first time slices the evolution into a large number of factors of the form ${\rm e}^{-i\delta t \hat{H}/\hbar}$ with $\delta t \to 0$, and inserts the representation of the unit operator in Fock space in terms of the $q$-quadratures. The form of the Hamiltonian in Eq.~(\ref{eq:Ham}), very different from the usual kinetic-plus-potential form whose momentum dependence is at most quadratic, demands a careful treatment of the resulting matrix elements. This is conveniently achieved by
\begin{equation*}
\langle {\bf q}|{\rm e}^{-\frac{i}{\hbar}\delta t \hat{H}}|{\bf q}'\rangle = \int d{\bf p}\langle {\bf q}|{\rm e}^{-\frac{i}{\hbar}\delta t \hat{H}}|{\bf p}\rangle \langle{\bf p}|{\bf q}'\rangle = \int d{\bf p}\langle {\bf q}|{\bf p}\rangle{\rm e}^{-\frac{i}{\hbar}\delta t \frac{\langle {\bf q}|\hat{H}|{\bf p}\rangle}{\langle {\bf q}|{\bf p}\rangle}}\langle{\bf p}|{\bf q}'\rangle+{\cal O}(\delta t)
\end{equation*}
therefore introducing extra integrations over momentum quadratures, to get the so-called Hamiltonian (or phase space) form of the propagator \cite{Negele1998}
\begin{equation}
K({\bf q}^{(f)},{\bf q}^{(i)},t):=\langle{\bf q}^{(f)}|{\rm e}^{-\frac{i}{\hbar}\hat{H}t}|{\bf q}^{(i)}\rangle=\int {\cal D}[{\bf q}(s),{\bf p}(s)]{\rm e}^{iR[{\bf q}(s),{\bf p}(s)]}
\end{equation}
where the integral is now defined over the space of paths $({\bf q}(s),{\bf p}(s))$. In this representation, only the paths in configuration quadrature endpoints are constrained,
\begin{equation}
{\bf q}(s=0)={\bf q}^{(i)}, \quad {\bf q}(s=t)={\bf q}^{(f)},
\end{equation}
whereas the momentum quadrature endpoints are completely unconstrained. Finally, the action functional, which is real, is given in its Hamiltonian form by
\begin{equation}
R[{\bf q}(s),{\bf p}(s)]=\int ds \left[{\bf p}(s) \cdot \dot{{\bf q}}(s)-H_{\rm cl}({\bf q}(s),{\bf p}(s))/\hbar\right]
\end{equation}
where the classical symbol is obtained from the Hamiltonian operator expressed with all $\hat{p}$ operators moved to the right of the $\hat{q}$ ones as
\begin{equation}
H_{\rm cl}({\bf q},{\bf p})=\frac{\langle {\bf q}|\hat{H}|{\bf p}\rangle}{\langle {\bf q}|{\bf p}\rangle}.
\end{equation}
Note that the action functional in Fock space is dimensionless, an aspect that reflects once again how quadrature operators are not related to any physical coordinate/momentum in any sense beyond the formal analogies with their SP counterparts. A nice feature is the very natural way that the Hamiltonian symbol appearing in the exact path integral is obtained from the quantum operator. Preparing the ground for the semiclassical approximation, where ordering effects lead to subdominant contributions, and denoting
\begin{equation}
\label{eq:Tom1}
\hat{H}=H(\hat{{\bf b}}^{\dagger},\hat{{\bf b}}),
\end{equation}
the phase space function appearing in the path integral is given by
\begin{equation}
\label{eq:Tom2}
H_{\rm cl}({\bf q},{\bf p})=H\left(\frac{{\bf q}+i{\bf p}}{\sqrt{2}},\frac{{\bf q}-i{\bf p}}{\sqrt{2}}\right).
\end{equation}
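To make the substitution rule of Eqs.~(\ref{eq:Tom1}) and (\ref{eq:Tom2}) concrete, the following minimal Python sketch evaluates the classical symbol of a hypothetical two-site Bose-Hubbard Hamiltonian, $\hat{H}=-J(\hat{b}_{1}^{\dagger}\hat{b}_{2}+\hat{b}_{2}^{\dagger}\hat{b}_{1})+(U/2)\sum_{i}\hat{n}_{i}(\hat{n}_{i}-1)$, by replacing the normal-ordered operators with $\psi_{i}=(q_{i}+ip_{i})/\sqrt{2}$. The model and parameter values are illustrative, not taken from the text.

```python
import numpy as np

def H_cl(q, p, J=1.0, U=0.5):
    """Classical symbol of a (hypothetical) two-site Bose-Hubbard
    Hamiltonian, obtained by substituting the normal-ordered operators
    b_i -> psi_i = (q_i + i p_i)/sqrt(2), as in Eq. (eq:Tom2)."""
    psi = (np.asarray(q, dtype=float) + 1j * np.asarray(p, dtype=float)) / np.sqrt(2)
    hopping = -J * 2.0 * (psi[0].conjugate() * psi[1]).real
    interaction = 0.5 * U * np.sum(np.abs(psi) ** 4)   # n(n-1) -> |psi|^4
    return hopping + interaction
```

Because the symbol depends on $(q,p)$ only through $\psi$, it is invariant under global phase rotations of all orbitals, the classical counterpart of particle number conservation.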
Before commencing with the Fock space version of Gutzwiller's celebrated analysis of the exact path integral that culminates in the derivation of the semiclassical propagators, two deeply intertwined aspects must be addressed, namely the identification of the semiclassical regime where an asymptotic analysis of the path integral can be meaningfully applied, and the connection of the quadrature propagator to physical Fock states. First, the semiclassical regime in Fock space is {\it not} defined by $\hbar \to 0$. Here, as in many other important situations, the fundamental nature of the quadrature operators as formally, but not physically, the MB version of the canonical position and momentum operators in particle systems plays a key role. In contrast to the first quantized approach, inspection of the action functional reveals that in Fock space, $\hbar$ is simply a constant that transforms energies like $H$ into frequencies, but does not play the fundamental role of defining the small parameter upon which the asymptotic analysis is built.
In order to identify a suitable asymptotic parameter, focus on the action of the quadrature propagator on the physical Fock states. Using the relation
\begin{equation}
\label{eq:transfo}
K({\bf n}^{(f)},{\bf n}^{(i)},t)=\int d{\bf q}^{(f)}d{\bf q}^{(i)}\langle {\bf n}^{(f)}|{\bf q}^{(f)}\rangle K({\bf q}^{(f)},{\bf q}^{(i)},t)\langle {\bf q}^{(i)}|{\bf n}^{(i)}\rangle
\end{equation}
and the transformation overlaps formally derived from the properties of quadrature states in~\cite{Engl16},
\begin{equation}
\langle q|n\rangle=\frac{{\rm e}^{-q^{2}/2}}{\pi^{1/4}\sqrt{2^{n}n!}} H_{n}(q)\ ,
\end{equation}
which is the same form as the corresponding results for the familiar harmonic oscillator in terms of the Hermite polynomials $H_{n}(x)$,
one can obtain the propagator between Fock states based on the path integral in quadrature representation. The semiclassical analysis, and the identification of a proper asymptotic regime, begins with the analysis of this kernel in the limit of large occupations $n \gg 1$. In this case, a well known asymptotic form \cite{Gradshteyn00}
\begin{equation}
\langle q|n\rangle \simeq A(q,n)\cos{(F(q,n)+\pi/4)}
\end{equation}
holds, where $A(q,n)$ is a smooth prefactor and following \cite{Engl16}
\begin{equation}
F(q,n)=\int dq\sqrt{2n+1 - q^{2}}
\end{equation}
is naturally identified as the generating function of the classical canonical transformation between the canonical pairs $(q,p)$ and $(n,\theta)$ with
\begin{equation}
q+ip=\sqrt{2n}{\rm e}^{i\theta}.
\end{equation}
Using this generating function, one sees that the overlap between quadrature and Fock states with large occupation number is concentrated in the region $q^{2}+p^{2}\le 2n$ of the phase plane of each orbital. Thus, acting on a Fock state with large occupations $n_{i} \gg 1$, the quadrature propagator is dominated by the scaling $q_{i}\propto \sqrt{n_{i}},p_{i} \propto \sqrt{n_{i}}$.
For many systems of interest, the Hamiltonian has an additional conservation property already easily seen in the general Hamiltonian of Eq.~(\ref{eq:Ham}), namely the operator representing the total number of particles in the system
\begin{equation}
\hat{N}=\sum_{i}\hat{n}_{i}
\end{equation}
is conserved, a constraint that is fundamental in the case of massive bosons~\cite{Superselection}. A consequence of this symmetry is that the Fock state propagator is different from zero if and only if
\begin{equation}
N^{(f)}=N^{(i)}=N
\end{equation}
and therefore all possible dynamical processes and amplitudes are restricted to the subspace of ${\cal F}$ fixed by the particular eigenvalue $N$ of $\hat{N}$. This observable appears as a real numerical constant parameterizing the propagator. Furthermore, simple combinatorial arguments imply that for asymptotically large $N$ and fixed number of orbitals $d$, the vast majority of Fock states satisfy $n_{i} \simeq N/d$. In essence, as long as Fock states with occupations bounded away from zero are considered, the regime of validity of the scaling with the large occupations is simply given by setting a large enough total number $N$ of particles with the quadrature variables scaling as
\begin{equation}
(q_{i},p_{i})=\sqrt{N}(Q_{i},P_{i}).
\end{equation}
The effect of this scaling with the total number of particles on the quadrature propagator is reduced, except for considerations about the functional measure that can be accounted for by a convenient regularization, to its effect on the action functional as
\begin{equation}
R[\sqrt{N}{\bf Q}(s),\sqrt{N}{\bf P}(s)]=\int ds \left[N{\bf P}(s) \cdot \dot{{\bf Q}}(s)-H_{\rm cl}(\sqrt{N}{\bf Q}(s),\sqrt{N}{\bf P}(s))/\hbar\right].
\end{equation}
Given the specific homogeneity properties of the second-quantized Hamiltonian of Eq.~(\ref{eq:Ham}), one finds
\begin{equation}
H_{\rm cl}(\sqrt{N}{\bf Q}(s),\sqrt{N}{\bf P}(s))=NH_{\rm cl}({\bf Q}(s),{\bf P}(s))
\end{equation}
as long as the interaction matrix elements are rescaled by
\begin{equation}
v=\tilde{v}/N\ ,
\end{equation}
a requirement arising from a very intuitive observation: for large occupations, $N \gg 1$, the interaction term in Eq.~(\ref{eq:Ham}) trivially dominates the dynamics given its natural scaling with $N^{2}$, and would otherwise overwhelm the $N$-dependence of the single particle part. This observation suggests that a meaningful limit is only achieved by
\begin{equation}
N \to \infty,\quad v \to 0,\quad vN=\tilde{v}=const\ .
\end{equation}
Thus,
\begin{equation}
\label{eq:action}
R[\sqrt{N}{\bf Q}(s),\sqrt{N}{\bf P}(s)]=N\int ds \left[{\bf P}(s) \cdot \dot{{\bf Q}}(s)-{\cal H}({\bf Q}(s),{\bf P}(s))/\hbar\right]
\end{equation}
where ${\cal H}$ denotes the classical Hamiltonian with rescaled interaction strength and therefore dynamics that are {\it fully independent of $N$}. The form of Eq.~(\ref{eq:action}) gives the formal identification $\hbar_{\rm eff}=1/N$ and $N \to \infty$ as the semiclassical regime of systems with a large number of interacting bosons.
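The homogeneity argument above can be checked numerically. The sketch below, assuming the same hypothetical two-site Bose-Hubbard symbol as classical limit, verifies that with the rescaling $v=\tilde{v}/N$ the identity $H_{\rm cl}(\sqrt{N}{\bf Q},\sqrt{N}{\bf P})=N{\cal H}({\bf Q},{\bf P})$ holds exactly; all parameter values are illustrative.

```python
import numpy as np

def H_cl(q, p, J, U):
    # classical symbol of a hypothetical two-site Bose-Hubbard model
    psi = (np.asarray(q) + 1j * np.asarray(p)) / np.sqrt(2)
    return (-J * 2.0 * (psi[0].conjugate() * psi[1]).real
            + 0.5 * U * np.sum(np.abs(psi) ** 4))

N, J, U_tilde = 10_000, 1.0, 0.5          # U_tilde = v*N held fixed
Q, P = np.array([0.3, 0.4]), np.array([0.1, -0.2])

lhs = H_cl(np.sqrt(N) * Q, np.sqrt(N) * P, J, U_tilde / N)  # v = vtilde/N
rhs = N * H_cl(Q, P, J, U_tilde)          # N * scaled Hamiltonian at (Q, P)
```

The hopping term scales as $N$ on its own, while the quartic interaction scales as $N^{2}$; the rescaling $v=\tilde{v}/N$ is exactly what brings both terms to a common overall factor of $N$.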
There is one remaining ingredient in Eq.~(\ref{eq:transfo}) to be written in terms of scaled variables, the part corresponding to the transformation kernels. One readily sees that
\begin{equation}
F(\sqrt{N}Q,n)=NF(Q,\rho),\quad \rho=n/N\ ,
\end{equation}
thus bringing the quadrature propagator, when projected on initial and final Fock states with large total number of particles, into a linear combination of integrals of the form
\begin{eqnarray}
\label{eq:full}
K({\bf n}^{(f)},{\bf n}^{(i)},t)&=&\int d{\bf Q}^{(f)}d{\bf Q}^{(i)} \prod_{i}A(Q_{i}^{(f)},\rho_{i}^{(f)}){\rm e}^{iNF(Q_{i}^{(f)},\rho_{i}^{(f)})}A(Q_{i}^{(i)},\rho_{i}^{(i)}) \nonumber \\
&\times& {\rm e}^{iNF(Q_{i}^{(i)},\rho_{i}^{(i)})}\int_{{\bf Q}(0)={\bf Q}^{(i)}}^ {{\bf Q}(t)={\bf Q}^{(f)}}{\cal D}[{\bf Q}(s),{\bf P}(s)]{\rm e}^{iN{\cal R}[{\bf Q}(s),{\bf P}(s)]}\ .
\end{eqnarray}
The asymptotic limit for $N\to \infty$ naturally emerges since the corresponding action functional ${\cal R}[{\bf Q}(s),{\bf P}(s)]$ and phase functions $F(Q,\rho)$ are {\it independent of $N$}. Therefore, both the stationary phase condition $\delta {\cal R}=0$, which defines a consistent classical limit when supplemented with the boundary conditions ${\bf Q}(0)={\bf Q}^{(i)},{\bf Q}(t)={\bf Q}^{(f)}$, and the canonical transformation performing the change of phase space coordinates $(Q,P) \to (\rho,\theta)$ can be implemented by stationary phase analysis.
Performing explicitly the variations over the $Q,P$ paths we get
\begin{eqnarray}
\frac{\delta {\cal R}}{\delta {\bf Q}}=0 &\rightarrow& \hbar\frac{d}{ds}{\bf P}=-\frac{\partial {\cal H}}{\partial {\bf Q}} \nonumber \\
\frac{\delta {\cal R}}{\delta {\bf P}}=0 &\rightarrow& \hbar \frac{d}{ds}{\bf Q}=\frac{\partial {\cal H}}{\partial {\bf P}}
\end{eqnarray}
which, using Eqs.~(\ref{eq:Tom1}) and (\ref{eq:Tom2}), are recognized as the real and imaginary parts of the mean field equations
\begin{equation}
\label{eq:MFE}
i \hbar\frac{d}{ds}\psi_{i}(s)=\frac{\partial}{\partial \psi^{*}_{i}}{\cal H}_{\rm MF}({\boldsymbol \psi},{\boldsymbol \psi}^{*})
\end{equation}
where the mean field Hamiltonian is defined by
\begin{equation}
{\cal H}_{\rm MF}({\boldsymbol \psi},{\boldsymbol \psi}^{*})={\cal H}({\bf Q},{\bf P})
\end{equation}
in terms of the complex classical fields
\begin{equation}
\psi_{i}=\frac{Q_{i}+iP_{i}}{\sqrt{2}}
\end{equation}
that together with $\psi^{*}$ parameterize the manifold in the phase space of the classical limit with the constraint
\begin{equation}
\sum_{i}\rho_{i}=1.
\end{equation}
We see then that in our construction, the classical limit is identical to the celebrated mean-field equations well known from the theory of interacting bosonic systems in their discrete (lattice) version. In fact, for a Hamiltonian with the form
\begin{equation}
\hat{H}=\sum_{i}e_{i}\hat{n}_{i}-J\sum_{i}\left(\hat{b}^{\dagger}_{i}\hat{b}_{i+1}+\hat{b}^{\dagger}_{i+1}\hat{b}_{i}\right)+U\sum_{i}\hat{n}_{i}(\hat{n}_{i}-1)
\end{equation}
a continuous limit with suitable redefinitions reads
\begin{equation}
\label{eq:classlim}
i \hbar \frac{d}{ds}\psi(x,s)=\left(-\frac{\hbar^{2}}{2m}\frac{\partial^{2}}{\partial x^{2}}+V(x)\right)\psi(x,s)+U|\psi(x,s)|^{2}\psi(x,s)
\end{equation}
which is the familiar Gross-Pitaevskii equation \cite{Negele1998} widely used to describe interacting bosonic systems. Now that the mean field equations are identified as the underlying theory playing the role of the classical limit of the MB quantum theory in the asymptotic regime $N \gg 1$, the relationship between quantum, semiclassical, and mean-field descriptions becomes transparent. In the same vein as Hamilton's classical equations for particles not being able to describe quantum interference simply because the (phase) space of classical states does not allow for superpositions, the mean-field solutions cannot by themselves contain genuine MB quantum interference. Here, as in the SP case, understanding the connection between multiple mean-field solutions and their interference requires the full machinery of the semiclassical approximation.
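As a concrete illustration of the discrete mean-field dynamics, the sketch below integrates the lattice Gross-Pitaevskii equation for a hypothetical 6-site ring with a standard fourth-order Runge-Kutta step (units with $\hbar=1$; hopping and interaction strengths are illustrative). The conserved total density $\sum_{i}\rho_{i}=1$ serves as a consistency check on the integration.

```python
import numpy as np

def gpe_rhs(psi, J=1.0, U=0.2):
    """Right-hand side of the discrete Gross-Pitaevskii equation on a ring:
    i d/ds psi_i = -J (psi_{i+1} + psi_{i-1}) + 2 U |psi_i|^2 psi_i."""
    hop = -J * (np.roll(psi, -1) + np.roll(psi, 1))
    return -1j * (hop + 2.0 * U * np.abs(psi) ** 2 * psi)

def rk4_step(psi, dt):
    # classical fourth-order Runge-Kutta integrator
    k1 = gpe_rhs(psi)
    k2 = gpe_rhs(psi + 0.5 * dt * k1)
    k3 = gpe_rhs(psi + 0.5 * dt * k2)
    k4 = gpe_rhs(psi + dt * k3)
    return psi + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

# initial field on the constraint sum_i |psi_i|^2 = 1, with a phase imprint
psi = np.ones(6, dtype=complex) / np.sqrt(6)
psi[0] *= np.exp(0.3j)
for _ in range(2000):
    psi = rk4_step(psi, 5e-4)
total_density = np.sum(np.abs(psi) ** 2)   # stays ~1 along the flow
```

The conservation of the total density along the nonlinear flow is the mean-field counterpart of particle number conservation in the quantum theory.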
The first point where the semiclassical approximation drastically departs from the mean field limit is in its way of treating the information provided by the mean-field equations. Invariably, mean-field methods are based on the propagation of a {\it single} solution of the mean-field equations, fully and uniquely determined by its initial condition $\psi(s=0)$. Quite the contrary, the classical limit of the interacting MB quantum theory is {\it not} an initial value problem, but actually a two-point boundary value problem consistently determined by the mean field equations supplemented with the boundary conditions
\begin{equation}
{\bf Q}(s=0)={\bf Q}^{(i)},{\bf Q}(s=t)={\bf Q}^{(f)}
\end{equation}
and thus generally admitting multiple solutions except for the non-interacting case. Following the conceptual framework of the semiclassical approximation, the role of these multiple mean-field solutions is clear: they describe genuine MB quantum interference.
The way MB quantum interference is made explicit naturally comes from the application of the stationary phase approximation to the MB propagator, fully justified now by the emergence of the large parameter $N$. This renders all the integrals involved highly oscillatory and allows one to formally follow Gutzwiller's classic calculation \cite{Gutzwiller07}. In a first stage, the (scaled) quadrature path integral is calculated to obtain
\begin{eqnarray}
\label{eq:semquad}
\int_{{\bf Q}(0)={\bf Q}^{(i)}}^ {{\bf Q}(t)={\bf Q}^{(f)}}{\cal D}[{\bf Q}(s),{\bf P}(s)]&&{\rm e}^{iN{\cal R}[{\bf Q}(s),{\bf P}(s)]} \\ &&\simeq \sum_{\gamma({\bf Q}^{(i)},{\bf Q}^{(f)},t)}D_{\gamma}({\bf Q}^{(i)},{\bf Q}^{(f)},t){\rm e}^{iN{\cal R}_{\gamma}({\bf Q}^{(i)},{\bf Q}^{(f)},t)} \nonumber
\end{eqnarray}
as a sum over the {\it mean field} solutions $\gamma$ with given boundary conditions, involving semiclassical amplitudes $D_{\gamma}$ that account for the stability of the solutions and other topological features of the Hamiltonian mean-field flow around them and, most importantly, the actions ${\cal R}_{\gamma}$ along each of them.
The semiclassical propagator in quadrature representation, Eq.~(\ref{eq:semquad}), is strictly speaking just an intermediate step in the construction of its Fock state version. Nevertheless, it is a very useful object with very desirable features. The first is the perfect analogy between the quadrature and coordinate representations in the semiclassics of MB and SP systems respectively, and the suggestive possibility of directly importing into the MB context several key ideas and results. Specifically, the derivation of the MB trace formula follows identical steps as in Gutzwiller's derivation as discussed ahead. The second concerns the friendly way coherent states are treated in quadrature representation, allowing for a very tractable way to connect these two useful and natural sets of MB states. Last, but not least, there is the very natural way in which systems with negligible interactions are described by a mean-field that defines a simple {\it linear} problem, a very appealing feature that is lost if physical Fock states are used instead~\cite{Engl18}.
For this last change of representation the integrals are performed over quadratures, Eq.~(\ref{eq:full}), in the stationary phase approximation. Leaving aside lengthy technical details that can be found in~\cite{Engl16,Engl15a}, the final result is~\cite{Engl14c}
\begin{equation}
K({\bf n}^{(f)},{\bf n}^{(i)},t)\simeq \sum_{\gamma({\bf n}^{(i)},{\bf n}^{(f)},t)}A_{\gamma}({\bf n}^{(i)},{\bf n}^{(f)},t){\rm e}^{iN{\cal R}_{\gamma}({\bf n}^{(i)},{\bf n}^{(f)},t)}
\label{eq:MB-propagator}
\end{equation}
where the sum extends over the solutions $({\bf n}_{\gamma}(s),{\boldsymbol \theta}_{\gamma}(s))$ of the mean-field equations with boundary conditions
\begin{equation}
|\psi_{i}(s=0)|^{2}=n_{i}^{(i)} \quad , \quad |\psi_{i}(s=t)|^{2}=n_{i}^{(f)}
\end{equation}
and actions ${\cal R}_{\gamma}$. The semiclassical amplitudes are explicitly given by
\begin{equation}
A_{\gamma}({\bf n}^{(i)},{\bf n}^{(f)},t)=\left[{\rm det}_{i,j} \frac{N}{2\pi}\frac{\partial^{2} {\cal R}_{\gamma}({\bf n}^{(i)},{\bf n}^{(f)},t)}{\partial n_{i}^{(i)} \partial n_{j}^{(f)}}\right]^{1/2}{\rm e}^{-i\frac{\pi}{2}\mu_{\gamma}}
\end{equation}
where the index $\mu_{\gamma}$ counts the number of focal points of the trajectory $\gamma$ \cite{OzorioBook}.
The semiclassical approximation of the Fock state propagator is a starting point for the semiclassical analysis of both dynamical and stationary properties of MB quantum systems of interacting bosons. The propagator~(\ref{eq:MB-propagator}) is not restricted to chaotic dynamics, but also allows, in principle, for investigating the imprint of more complex, {\it e.g.} mixed regular-chaotic, phase space dynamics and the consideration of system-dependent properties unique to individual bosonic MB systems.
Having at hand both the semiclassical propagator and a well defined classical (mean field) limit, a fundamental conceptual aspect can be addressed, namely, the meaning of MB quantum chaos. Since the asymptotic analysis automatically provides as a limit a theory with a well defined Hamiltonian structure~\cite{OzorioBook} given by Eq.~(\ref{eq:classlim}), the quantum ramifications of mean field chaotic dynamics can be rather precisely investigated and interpreted. Therefore, for systems of interacting bosons with large occupations the MB quantum manifestations of MB mean field chaos can be placed on a firm theoretical foundation.
The following passage summarizes the directions that, starting with the semiclassical propagator in Fock space or its variants, have been pursued by several groups during the last years in an attempt to understand the quantum signatures of mean field integrability and chaos.
\subsubsection{Relationship to the truncated Wigner approximation}
\label{sec:twa}
It is possible to view the way the classical limit plays a role in the quantum mechanical description of a many-body system in three stages. At the most primitive level, expectation values of time-dependent (Heisenberg) observables $\hat{A}(t)=A(\hat{{\bf b}}^{\dagger}(t),\hat{\bf b}(t))$ defined with respect to an initial coherent state $|{\bf z}\rangle=|{\boldsymbol \psi}\rangle$ are obtained from the classical limit simply by transporting the classical symbol along the solution ${\boldsymbol \psi}(t), {\boldsymbol \psi}(0)={\boldsymbol \psi}$ of the mean field equations (\ref{eq:MFE}),
\begin{equation}
\langle {\boldsymbol \psi}|\hat{A}(t)|{\boldsymbol \psi}\rangle \simeq A({\boldsymbol \psi}(t),{\boldsymbol \psi}^{*}(t)) \ ,
\end{equation}
which defines the strict mean field approximation.
The second stage adds a little more sophistication: the classical solutions are still used directly to guide the quantum evolution, but the zero-point motion underlying the quantum state is incorporated and evolved under the mean field flow,
\begin{eqnarray}
\langle {\boldsymbol \psi}|\hat{A}(t)|{\boldsymbol \psi}\rangle \simeq \int d{\boldsymbol \Psi}d{\boldsymbol \Psi}^{*} {\rm e}^{-|{\boldsymbol \Psi}-{\boldsymbol \psi}|^{2}} A({\boldsymbol \Psi}(t),{\boldsymbol \Psi}^{*}(t)),
\end{eqnarray}
giving the celebrated truncated Wigner approximation (TWA) \cite{Polkovnikov2011}. The pure mean field approximation is then obtained as a particular case when the classical symbol $A_{\rm cl}$ is smooth and the integral is well approximated by taking ${\boldsymbol \Psi}\simeq {\boldsymbol \psi}$. Both the mean field and TWA fail to account for coherent effects due to path interference. The former because it is based on a single unique classical solution, and the latter because it is based on adding probabilities instead of amplitudes. In essence, both approximations are {\it classical}.
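As a minimal illustration of the difference between the mean field and TWA levels, consider a hypothetical single Kerr-type mode with classical symbol ${\cal H}=e_{0}|\psi|^{2}+U|\psi|^{4}$, whose mean field flow is exactly solvable, $\psi(t)=\psi(0)\,{\rm e}^{-i(e_{0}+2U|\psi(0)|^{2})t}$. The sketch below samples the Gaussian cloud of the TWA and shows that the average field amplitude $\langle\hat{b}(t)\rangle$ dephases, an effect a single mean field trajectory cannot capture (all parameters illustrative).

```python
import numpy as np

rng = np.random.default_rng(1)
e0, U, t, psi0 = 1.0, 0.2, 5.0, 2.0 + 0j   # coherent center, <n> ~ |psi0|^2 = 4

# Gaussian (Wigner-like) cloud around psi0, width 1/2 per quadrature
cloud = psi0 + rng.normal(0.0, 0.5, 4000) + 1j * rng.normal(0.0, 0.5, 4000)

# exact single-mode mean field flow: an intensity-dependent phase rotation
cloud_t = cloud * np.exp(-1j * (e0 + 2.0 * U * np.abs(cloud) ** 2) * t)

b_twa = cloud_t.mean()                               # dephased average field
b_mf = psi0 * np.exp(-1j * (e0 + 2.0 * U * abs(psi0) ** 2) * t)  # single path
```

The single mean field trajectory keeps $|\langle\hat{b}\rangle|=|\psi_{0}|$ forever, whereas the TWA average collapses by classical dephasing of the cloud; coherent effects such as revivals require, in addition, the interference of amplitudes discussed next.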
The third stage is to incorporate fully the semiclassical approximation. It accounts for interference effects explicitly and completely by the use of the sum over amplitudes. In the exact expression
\begin{equation}
\langle {\boldsymbol \psi}|\hat{A}(t)|{\boldsymbol \psi}\rangle=\sum_{{\bf n},{\bf n}',{\bf m},{\bf m}'}{\boldsymbol \psi}_{{\bf n}}^{*}{\boldsymbol \psi}_{{\bf n}'}A_{{\bf m},{\bf m}'}K({\bf n},{\bf m},t)K^{*}({\bf n}',{\bf m}',t)
\end{equation}
where
\begin{equation}
{\boldsymbol \psi}_{{\bf n}}= \langle {\boldsymbol \psi}|{{\bf n}}\rangle \quad {\rm and} \quad A_{{\bf m},{\bf m}'}=\langle {\bf m}|\hat{A}|{\bf m}'\rangle,
\end{equation}
the substitution of $K$ by its semiclassical approximation given in Eq.~(\ref{eq:MB-propagator}) does the trick. The key object to analyze is the product
\begin{equation}
\label{eq:doublesum}
K({\bf n},{\bf m},t)K^{*}({\bf n}',{\bf m}',t)\simeq \sum_{\gamma,\gamma'}A_{\gamma}A_{\gamma'}^{*}{\rm e}^{iN({\cal R}_{\gamma}-{\cal R}_{\gamma'})}
\end{equation}
where $\gamma$ labels mean field paths joining ${\bf n}$ with ${\bf m}$ in time $t$, and similarly for $\gamma'$. The TWA is readily obtained (in its polar form where $\psi=\sqrt{n}{\rm e}^{i\theta}$) from the diagonal approximation where the action is linearized for the $\gamma'=\gamma$ terms~\cite{Engl14}. Genuine many-body interference arises from the off-diagonal contributions $\gamma' \ne \gamma$. It is then a question to be addressed on a case-by-case basis how much off-diagonal information, which demands a great effort to evaluate, is necessary to describe the physical phenomena of interest. This may range from the explicit and precise description of every quantum fluctuation around the classical background as done in \cite{Tomsovic18} well beyond the Ehrenfest time, to the selective use of restricted off-diagonal contributions to capture robust interference effects as in \cite{Schlagheck19}.
It is worth noting that the derivation of the TWA here relies on the more fundamental semiclassical approximation for MB amplitudes. As such, it is expected that the foundations of the TWA may suffer from ambiguities in systems where either the classical limit or the semiclassical regime cannot be precisely defined. Extremely important systems such as spin-1/2 chains and Fermi-Hubbard models indeed represent such cases. Progress towards a formal construction of the TWA in these cases has been an active field recently (see \cite{DAVIDSON17} and references therein), with successful applications to SYK models \cite{Schmitt19} and spin chains \cite{Schachenmayer15}.
\subsubsection{A first application: coherent backscattering in Fock space}
\label{sec:cbs}
The capacity of semiclassical propagators to describe quantum interference comes from the natural way such effects are described within the path integral formalism, where the coherent superposition of a multitude of paths carrying different phases produces the interference patterns. In the semiclassical regime the largely uncontrolled proliferation of all possible quantum paths is replaced by a sum over specific mean field paths, making the mechanism of interference much more explicit. Applied to MB Fock space, the discrete sum over paths is now a coherent sum of amplitudes associated with the discrete set of mean-field solutions. Following the success of the single-particle semiclassical approach in describing the leading-order interference term in disordered conductors~\cite{Akkermans07}, the so-called weak localization correction due to interference between pairs of time-reversed paths, the corresponding MB effect, arising from the superposition of amplitudes associated with two mean-field solutions related by time reversal, is given in \cite{Engl14c}. There it is shown that such coherent backscattering produces a characteristic enhancement of the return probability in MB Fock space.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{figures/newsixring.pdf}
\caption{ \label{Fig:CBSI}
{\bf
Many-body coherent backscattering in Fock space.
} Numerical simulation of the transition probability in Fock space for a ring-shaped 6-site Bose-Hubbard ring (upper right inset). The initial state ${\bf n}^{(i)}$ (indicated by the vertical line) is propagated for a time larger than the equilibration time and the probability of finding the system in the final state ${\bf n}^{(f)}$ indicated in the horizontal axis is calculated by exact numerical solution of the quantum dynamics. The observed enhancement of the transition probability when ${\bf n}^{(f)}={\bf n}^{(i)}$ over the classical uniform background is a purely coherent effect that is suppressed by the application of a gauge field $\phi$ that destroys the time-reversal invariance of the system. (From Ref.~\cite{Engl14c}.)}
\end{figure}
The numerical confirmation of this coherent MB effect is shown in Fig.~\ref{Fig:CBSI}. Here, the probability $P({\bf n}^{(i)},{\bf n}^{(f)},t)=|K({\bf n}^{(i)},{\bf n}^{(f)},t)|^{2}$ of finding the system, initially prepared in the state ${\bf n}^{(i)}$, in the final Fock state ${\bf n}^{(f)}$ at time $t$ is obtained by solving numerically the quantum dynamics of a 6-site Bose-Hubbard ring in the regime of chaotic mean field dynamics. After a relatively short relaxation time scale, the tendency toward uniform equilibration is clearly visible, where all transition probabilities roughly reach the same classical value (also well described by the TWA). The only exception happens for ${\bf n}^{(f)}={\bf n}^{(i)}$, in which a non-classical enhancement is clearly observed. Furthermore, if time-reversal invariance of the system is broken by means of a gauge field parametrized by $\phi$, this enhancement, a hallmark of coherent backscattering due to quantum interference among classical paths related by time reversal, disappears.
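The type of exact quantum calculation behind Fig.~\ref{Fig:CBSI} can be sketched for a much smaller system. The following fragment builds the Fock basis and Hamiltonian matrix of a hypothetical 3-site Bose-Hubbard ring with $N=3$ particles and evaluates the transition probabilities $P({\bf n}^{(i)},{\bf n}^{(f)},t)$ by direct matrix exponentiation (parameters illustrative; the 6-site simulation of the figure proceeds analogously, just in a larger basis).

```python
import numpy as np
from itertools import product
from scipy.linalg import expm

L, Npart, J, U = 3, 3, 1.0, 0.5

# Fock basis: occupation vectors (n_1, ..., n_L) with sum n_i = Npart
basis = [n for n in product(range(Npart + 1), repeat=L) if sum(n) == Npart]
index = {n: k for k, n in enumerate(basis)}

H = np.zeros((len(basis), len(basis)))
for k, n in enumerate(basis):
    H[k, k] = U * sum(ni * (ni - 1) for ni in n)       # on-site interaction
    for i in range(L):                                  # ring hopping
        j = (i + 1) % L
        for a, b in ((i, j), (j, i)):                   # -J b_b^dag b_a
            if n[a] > 0:
                m = list(n); m[a] -= 1; m[b] += 1
                H[index[tuple(m)], k] += -J * np.sqrt(n[a] * (n[b] + 1))

t = 4.0
K = expm(-1j * H * t)                                   # exact propagator
P = np.abs(K[:, index[(3, 0, 0)]]) ** 2                 # from n^(i) = (3,0,0)
```

Unitarity guarantees that the transition probabilities sum to one; repeating the propagation with a complex hopping phase (a gauge field) would break time reversal and suppress the backscattering peak.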
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{figures/cbs_sketch.pdf}
\caption{ \label{Fig:CBSII}
{\bf
Many-body coherent backscattering in Fock space.
} The semiclassical propagator used to calculate the transition probability between Fock states $|K({\bf n}^{(i)},{\bf n}^{(f)},t)|^{2}$ naturally leads to the consideration of double sums over mean field equation solutions. Under local averaging only robust, smooth contributions survive. Generically, these smooth contributions simply correspond to interference from pairs of identical amplitudes corresponding to the same mean field solutions. For ${\bf n}^{(f)}={\bf n}^{(i)}$, however, {\it two} different solutions related by time-reversal constructively interfere, a purely quantum effect due to coherent superposition of amplitudes.
(From Ref.~\cite{Engl14c}.)}
\end{figure}
The semiclassical explanation of this effect starts with the double-sum over mean field solutions in Eq.~(\ref{eq:doublesum}), and the realization that only for ${\bf n}^{(f)}={\bf n}^{(i)}$, there is an off-diagonal contribution from orbits $\gamma,\gamma'$ related by time reversal as depicted in Fig.~\ref{Fig:CBSII}.
\subsection{Spectral properties}
\label{sec:spec-prop}
Beyond its use for calculating dynamical properties of observables, such as coherent backscattering,
the MB van Vleck-Gutzwiller propagator (\ref{eq:MB-propagator}) represents the fundamental building block for a semiclassical theory of spectral properties of MB bosonic systems.
In this vein a calculation of the MB density of states is summarized, which leads to a MB version of Gutzwiller's trace formula following Ref.~\cite{Engl15}. It turns out that
periodic mean field solutions of the nonlinear wave equations play the role of the
classical single-particle periodic orbits in Gutzwiller’s original trace formula.
Based on the MB trace formula, RMT-type universal spectral correlations can be addressed through the lens of semiclassical theory, as proposed in \cite{Engl15,Dubertrand16}. There, post-Ehrenfest phenomena and the encounter calculus in the MB context naturally arise again.
\subsubsection{Many-body Gutzwiller trace formula}
\label{sec:MB-Gutzwiller}
Given the success of Gutzwiller's trace formula, Eq.~(\ref{eq:SP-Gutzwiller}), over the past 50 years, it is quite natural to investigate the corresponding MB extensions. A straightforward generalization consists in increasing the particle number $N$ for the usual semiclassical limit $\hbar \rightarrow 0$, {\em i.e.} the horizontal crossover in Fig.~\ref{fig:sc-limits}. However, for large $N$ this approach would require periodic orbits in a vast $6N$-dimensional phase space, and in addition dealing with the (anti-)symmetrization. Hence, in deriving a semiclassical approximation for the MB density of states it is preferable to resort to the complementary limit $\hbar_{\rm eff} = 1/N \rightarrow 0$ following \cite{Engl15}.
Either starting from the quadrature or coherent state representation of the semiclassical propagator, under the assumption of chaotic mean field dynamics, a series of further stationary phase calculations leads eventually to the semiclassical approximation for the MB density of states in the form~\cite{Engl15}
\begin{equation}
\rho(E,N) \simeq \bar{\rho}(E,N) \ + \ \rho^{\rm osc}(E,N) =
\bar{\rho}(E,N) \ + \
\sum_{\rm pm}A_{\rm pm}{\rm e}^{iNS_{\rm pm}(E)} \, .
\label{eq:MB-Gutzwiller}
\end{equation}
The first (Weyl) term is given, to leading order in $\hbar_{\rm eff}$, by the phase space volume of the corresponding mean field energy shell,
\begin{equation}
\bar{\rho}(E,N)=\left(\frac{N}{2\pi}\right)^{d}\int \ d{\bf q} \ d{\bf p}\ \delta(E-H_{{\rm cl}} ({\bf q},{\bf p}))\ \delta(N-N({\bf q},{\bf p})) \, ,
\end{equation}
where $H_{{\rm cl}}$ is defined in Eq.~(\ref{eq:Tom2}).
The sum in Eq.~(\ref{eq:MB-Gutzwiller}) extends over all {\it periodic} solutions
\begin{equation}
({\bf q},{\bf p})(t=0)=({\bf q},{\bf p})(t=T)
\end{equation}
(for some period $T$)
of the classical mean field equations
\begin{equation}
\label{eq:MBequations}
\hbar \frac{d{\bf q}}{dt}=\frac{\partial H_{\rm cl}({\bf q},{\bf p})}{\partial {\bf p}}, {\rm \ \ \ } \hbar \frac{d{\bf p}}{dt}=-\frac{\partial H_{\rm cl}({\bf q},{\bf p})}{\partial {\bf q}}
\end{equation}
with fixed energy $E$ and particle number
\begin{equation}
N({\bf q},{\bf p})=\frac{1}{2}\sum_{i}(q_{i}^{2}+p_{i}^{2}) \, .
\end{equation}
This implies that unstable periodic mean field modes (pm) in Eq.~(\ref{eq:MB-Gutzwiller}) play the role of the classical periodic orbits in the single-particle context. In close analogy, the modes' actions take the form
\begin{equation}
S_{\rm pm}(E) = \oint_{\rm pm} {\bf p}({\bf q},E)\cdot d{\bf q}
\label{eq:MB-action}
\end{equation}
and the amplitudes read, as in Eq.~(\ref{eq:SP-stab}),
\begin{equation}
A_{\rm pm}(E) = \frac{T_{\rm ppm}(E)}{|{\rm det}({\bf M}_{\rm pm}(E) - {\bf I})|^{1/2}} {\rm e}^{-i\mu_{\rm pm} \frac{\pi}{2}}\ ,
\end{equation}
in terms of the period $T_{\rm ppm}$ of the primitive mean field mode, the monodromy matrix ${\bf M}_{\rm pm}$ depending on the stability of the mode, and its Maslov index $\mu_{\rm pm}$. The resulting semiclassical MB trace formula for discrete quantum fields incorporates genuine MB interference, including that required to build up the discreteness of the MB spectrum, which arises from the coherent sum over phases associated with periodic mean field solutions.
This is in close analogy to Gutzwiller's expression for the single-particle case.
Here, a few further remarks are in order (for details see also \cite{Engl15}):
(i) notice that the range of validity in energy extends down to the lowest energies because $\hbar_{\rm eff}$ and not $\hbar$ controls the semiclassical limit, and thus Eq.~(\ref{eq:MB-Gutzwiller}) holds true even for MB ground states; (ii) due to the existence of the continuous symmetry related to particle number conservation, symmetry-projected semiclassical densities of states were considered to get an expression for the MB spectral density
within each sector with fixed total particle number;
(iii) by using quadrature states of the field the above derivation does not employ the coherent state representation that requires a complexification of the theory's classical limit \cite{Dubertrand16}; (iv) because in the non-interacting case the quantum problem reduces to a harmonic system, the trace formula is still applicable since the corresponding periodic mean field solutions (of the linear Schrödinger equations) turn out to be isolated in the generic case where the single-particle energies defining their frequencies are incommensurate; (v) the trace formula, Eq.~(\ref{eq:MB-Gutzwiller}), may also shed light on the fact that MB systems often exhibit incoherent, possibly chaotic, single-particle dynamics, while at the same time show collective motion~\cite{Guhr98,Hammerling10}; and (vi) there is a certain conceptual analogy between the semiclassical MB propagator and the corresponding MB density of states as sums over mean field solutions on the one hand, and configuration interaction methods, constructing MB wave functions as linear combinations of Slater determinants, {\em i.e.} fermionic mean field solutions, on the other.
The MB trace formula allows, in principle, for computing an approximate MB density of states\footnote{In Ref.~\cite{Tomsovic18} a spectrum of a Bose-Hubbard system (with $N=$40 atoms) was computed with high accuracy, using corresponding MB semiclassical techniques.} for MB energies and particle numbers that are out of reach of usual numerical MB techniques. Moreover, the close formal relation between the semiclassical single-particle, Eq.~(\ref{eq:SP-Gutzwiller}), and MB, Eq.~(\ref{eq:MB-Gutzwiller}), trace formulas implies that many insights and results known for quantum chaotic single-particle dynamics can be taken over into the MB context as summarized in the following for spectral fluctuations.
\subsubsection{Many-body encounters and universal spectral correlations}
\label{sec:spec-stat}
In Sec.~\ref{sec:SP-universality} the semiclassical foundations of RMT-type spectral universality are outlined for chaotic single-particle dynamics, reflected in the BGS conjecture~\cite{Bohigas84}.
Although it might seem evident that this reasoning simply carries over to the quantum chaotic MB case, a formal justification had been missing.
Nor was it straightforward how the encounter calculus would generalize to the MB case.
For the usual semiclassical limit $\hbar \rightarrow 0$, the encounter formalism has been shown to be applicable and to lead to RMT results for any phase space dimensionality~\cite{Turek05}, and hence also to the $6N$ dimensions of an $N$-particle system in 3 spatial dimensions. However, MB generalizations require some care. For instance, for a non-interacting MB system with chaotic single-particle dynamics, {\em e.g.} $N$ non-interacting particles in a billiard, the MB density of states is composed of independent single-particle spectra -- with conserved single-particle energies as associated constants of motion -- and thus does not obey RMT-statistics. The spectral statistics are Poissonian in the infinite dimensional limit; for recent work showing rich spectral features due to finite dimensionalities see \cite{Liao20}. Correspondingly, in the complementary limit $\hbar_{eff} \rightarrow 0$ the non-interacting case also features non-generic spectral fluctuations that do not correspond to the expected Poissonian spectra of integrable systems. This is a
consequence of the field theoretical context where
the free field corresponds to a peculiar linear system that is non-generic since it is not merely integrable. There, treating the quasi-integrable case due to the effect of a small interaction within a semiclassical perturbation theory may provide a useful approach.
Consider strongly interacting MB systems with an associated chaotic mean field dynamics characterized by a largest MB Lyapunov exponent $\lambda$.
For single-particle dynamics universal spectral correlations arise from interference between periodic orbits with quasi-degenerate actions and periods beyond the Ehrenfest time $\tau_{E}^{\rm (sp)} = (1/\lsp) \log (S/\hbar)$, see Eq.~(\ref{eq:sp-Ehrenfest}).
For quantum chaotic large-$N$ MB systems in the limit $\hbar_{eff} \rightarrow 0$, correspondingly genuine MB quantum interference is governed by another log-time scale, the Ehrenfest time
\begin{equation}
\tau_{E} = \frac{1}{\lambda} \ \log N \, ,
\label{eq:scrambling}
\end{equation}
also referred to as the scrambling time in the MB context~\cite{Swingle16}.
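The logarithmic dependence of $\tau_{E}$ on the particle number $N$ can be made concrete in a minimal numerical sketch (the value of the Lyapunov exponent is an illustrative assumption, not taken from the text):

```python
import math

def scrambling_time(lyap, n_particles):
    """Many-body Ehrenfest (scrambling) time tau_E = (1/lambda) * log(N)."""
    return math.log(n_particles) / lyap

lam = 0.5  # assumed mean-field Lyapunov exponent (illustrative, inverse time units)
t1 = scrambling_time(lam, 100)
t2 = scrambling_time(lam, 200)
# doubling N only shifts tau_E by log(2)/lambda: the growth is logarithmic
assert abs((t2 - t1) - math.log(2) / lam) < 1e-12
```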
This very close formal analogy between the single-particle $\hbar\rightarrow 0$ and the MB $\hbar_{eff} \rightarrow 0$ regimes -- based on corresponding trace formulas and hierarchy of time scales -- allows for the straightforward generalization of the bosonic MB spectral form factor semiclassical calculation~\cite{Engl15, Dubertrand16} by applying the encounter calculus summarized in Sec.~\ref{sec:SP-universality}. This amounts to replacing $\hbar$ by $\hbar_{eff}$, the Lyapunov exponent $\lsp$ by $\lambda$, the Ehrenfest time
$ \tau_{E}^{(sp)}$ by $\tau_{E}$, Eq.~(\ref{eq:scrambling}), single-particle phase space by the $2L$-dimensional phase space of the lattice, and the single-particle density of states $\rho_{\rm sp}^{\rm osc}(E)$ by
$\rho^{\rm osc}(E;N)$, Eq.~(\ref{eq:MB-Gutzwiller}).
Encounters between different (periodic) mean field modes take on the role of encounters between classical (periodic) orbits. This implies the interpretation that interfering periodic mean-field solutions of Eqs.~(\ref{eq:MBequations}) with quasi-degenerate actions $S_{\rm pm} (E)$, Eq.~(\ref{eq:MB-action}),
lead to the emergence of universal MB spectral fluctuations, in close correspondence with the reasoning in the single-particle case. This includes in particular the same RMT-type expressions for the spectral MB two-point correlator $R(\Delta E;N) \sim \langle \rho^{\rm osc}(E;N) \rho^{\rm osc}(E+\Delta E;N)\rangle$ and its associated spectral form factor\footnote{Note that $R(\Delta E;N)$ contains interesting new
parametric correlations with regard to (changes in)
particle number and/or interaction strength.}.
These conclusions drawn from the semiclassical MB encounter formalism coincide both with long known results from nuclear shell models and data~\cite{Brody81, Bohigas83}, embedded Gaussian ensembles~\cite{KotaBook}, including those restricted to two-body interactions, as well as with recent results showing random-matrix behaviour of the spectral form factor for a periodically kicked Ising spin-1/2 model without a semiclassical limit (to leading orders in $t/t_H$ \cite{Kos18}) and for a Floquet spin model~\cite{Chan18}. Moreover, Wigner-Dyson-type level statistics have recently been numerically shown with appropriate parameters for a discrete Bose-Hubbard system~\cite{Kolovsky04, Dubertrand16} and the SYK-model~\cite{Garcia17} in the large-$N$ limit. They help close an important missing theoretical link in the sense that the $k$-body restricted interaction ensembles, i.e.~embedded random matrix ensembles, have resisted analytic proofs of their asymptotic behaviors~\cite{Benet01, Asaga01, Srednicki02}, unlike the original classical random matrix ensembles. Semiclassical results for spectral determinants~\cite{Keating91, Heusler07, Keating07, Mueller09, Waltner09, Waltner19} in terms of pseudo-orbits carry over to the MB case, i.e.~the semiclassical finding \cite{Waltner19} that pseudo mean-field paths with durations $t > t_H$ necessarily must involve multiple partial traversals that do not contribute to the MB spectrum.
It is worth highlighting again the relevance of the scrambling time scale $\tau_{E}$, Eq.~(\ref{eq:scrambling}), and the role of encounters for entangling mean field modes: The semiclassical MB propagator and the trace formula, as sums over mean field paths, contain genuine MB interference, thereby giving rise to MB correlations. The encounter calculus invokes ergodic averages to distill, out of all these otherwise mostly randomly interfering paths, those contributions that prevail for an observable after energy or spatial averages. Each encounter diagram, such as those shown in Fig.~\ref{fig:PO-geometries}, represents all interferences resulting from certain types of coupled mean field trajectory configurations with quasi-degenerate actions. If we think of entanglement as coupling between different product states, as mean field solutions are, then encounters generate entanglement that resists (energy) averaging. After starting to propagate a wave packet along a separable initial (periodic) mean field path, it will split at an encounter, which acts like a rail switch, entangling the mean field paths that come close to each other in the encounter region in Fock phase space. The time scale of this entanglement process is given by $\tau_{E}$. Whereas the relevance of encounters for entanglement arises naturally, developing tools to measure the degree of entanglement created through encounter structures remains open.
This would also distinguish encounter-mediated entanglement growth from the kind of entanglement captured by the TWA approaches introduced in Sec.~\ref{sec:twa}. Quantum unitaries acting as interconnects in quantum networks can be viewed as mimicking certain encounter structures of a quantum chaotic dynamical system. For such random unitary dynamics entanglement growth has been measured~\cite{Nahum17}.
To conclude, MB semiclassical methods with underlying chaotic dynamics lay the foundations for universality in the spectral statistics of large-$N$ MB systems.
In the regime of mean field chaos in the classical limit, the local fluctuations of MB spectra comply with RMT predictions.
\subsection{Out-of-time-order-correlators and commutators}
\label{sec:OTOC}
\subsubsection{Concept and heuristic semiclassical reasoning}
\label{sec:concept}
Based on correlations between periodic mean field modes and by invoking ergodicity and classical hyperbolicity, it was just discussed how spectral RMT-type universality can be semiclassically explained in the MB context. For the GOE spectral form factor beyond Ehrenfest timescales, terms based on MB interference, organized through a hierarchy of encounters of mean field modes, provide higher-order $\hbar_{eff}$-corrections to Berry's diagonal contribution.
This section presents another prime example for ergodic MB interference, so-called out-of-time-order correlators~\cite{Larkin69,Shenker14,Maldacena16}
\begin{equation}
F(t)= \langle \Psi |
\hat{W}^\dagger(t) \ \hat{V}^\dagger(0)
\hat{W}(t) \ \hat{V}(0) | \Psi \rangle \, ;
\label{eq:OTOCorr_definition}
\end{equation}
see Fig.~\ref{fig:OTOCorr}, and their closely related relatives, out-of-time-order commutators (OTOCs)~\cite{Maldacena16}
\begin{equation}
C(t) = 2 - 2\,{\rm Re}\left(F(t)\right) =
\langle \Psi |
\left[ \hat{W}(t), \hat{V}(0) \right]^\dagger
\left[ \hat{W}(t),\hat{V}(0) \right] | \Psi \rangle \, .
\label{eq:OTOComm_definition}
\end{equation}
Both $F(t)$ and $C(t)$ comprise two forward and two backward propagations by means of the Heisenberg operator $\hat{W}(t) = \exp{(i\hat{H}t/\hbar)}\, \hat{W}(0) \exp{(-i\hat{H}t/\hbar)}$; the relation between $C(t)$ and $F(t)$ holds for unitary $\hat{W}$ and $\hat{V}$.
Consider a Bose-Hubbard system in which the local measurement of an atom at a given site perturbs (locally) the MB system. The squared commutator $C(t)$ of such a suitable (local) operator $\hat{W}(t)$
with another (local) operator $\hat{V}(0)$ measures the temporal growth of $\hat{W}$, including its increasing complexity at another site at a certain distance. Hence the initial local (quantum) information is spread and scrambled across the huge Hilbert space of the interacting MB system with its vast number of degrees of freedom~\cite{Sekino08}, making OTOCs the measures of choice for quantifying growing complexity and instability of MB systems, and thereby of relevance for quantum computing \cite{Mi21,Zalcman21}. Although OTOCs require rewinding time and their implementation is experimentally challenging, several measurement protocols already exist~\cite{Zhu16,Swingle16,Li16,Garttner16,Dominguez21}. For a recent comprehensive tutorial on OTOCs, see \cite{Xu22}.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{figures/scheme-OTOC.png}
\caption{\label{fig:OTOCorr}
{\bf
Scheme of an out-of-time-order correlator} | $F(t)$ can be viewed as the overlap between two MB states arising from the operation of $V(0)$ and $W(t)$ on $|\Psi\rangle$ in different time order, {\em i.e.}, $F(t)$ contains four non-time ordered operators.
}
\end{figure}
OTOCs represent one of the most direct quantum analogues
of classical measures for hyperbolic instability of chaotic dynamics. For the single-particle case, invoking a heuristic quantum-to-classical correspondence for small $\hbar$ at pre-Ehrenfest times and replacing the commutator in Eq.~(\ref{eq:OTOComm_definition}) by Poisson brackets $\{\cdot,\cdot\}$ yields, in a leading-order Moyal expansion for, e.g., $\hat{W}=\hat{p}_i$ and $\hat{V}=\hat{q}_j$
\cite{Larkin69,Maldacena16},
\begin{equation}
C(t) \longrightarrow
|\rmi \hbar|^2 \left\{p_i,q_j(t) \right\}^{\!2}
\! \simeq \! \hbar^2 \!
\left(\frac{\partial q_j(t)}{\partial q_i}\!
\right)^{\!2} \propto
\hbar^2 \rme^{2\lambda t} \, .
\label{eq:OTOC_Moyal}
\end{equation}
The leading off-diagonal monodromy matrix element $ \partial q_j(t) / \partial q_i$
grows exponentially at a rate set by the classical single-particle Lyapunov exponent $\lambda_{SP}$. This close quantum-to-classical correspondence for quantum chaotic single-particle dynamics is intriguing, and thus there is also the quest for establishing a corresponding MB quantum-to-classical correspondence, {\em i.e.}, a MB version of such a quantum butterfly effect. This, in particular, includes an interpretation of the quantum mechanical growth rate of OTOCs for MB systems with a classical limit. Most notably, their growth is bounded in time and OTOCs saturate due to a MB interference mechanism setting in at the Ehrenfest, i.e.~scrambling, time. It gives rise to an interference term that is of the same order as the corresponding diagonal contribution \cite{Rammensee18}. Such distinct features at $\tau_{E}$ render OTOC evolution a hallmark of Ehrenfest phenomena.
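The heuristic correspondence in Eq.~(\ref{eq:OTOC_Moyal}) can be illustrated numerically. The following sketch (an illustration, not from the reviewed work) uses the Arnold cat map as a stand-in for uniformly hyperbolic dynamics; its stability (monodromy) matrix after $t$ steps is $M^t$, so the squared matrix element $(\partial q(t)/\partial q(0))^2$ grows as $e^{2\lambda t}$:

```python
import numpy as np

# Arnold cat map on the torus: uniformly hyperbolic, with stability
# matrix M^t after t steps; the classical OTOC proxy
# (dq(t)/dq(0))^2 therefore grows as exp(2*lambda*t).
M = np.array([[2.0, 1.0], [1.0, 1.0]])
lam = np.log((3.0 + np.sqrt(5.0)) / 2.0)  # Lyapunov exponent of the cat map

def classical_otoc(t):
    """Squared stability-matrix element, the classical analogue of C(t)."""
    return np.linalg.matrix_power(M, t)[0, 0] ** 2

# the finite-time growth rate log C(t) / (2t) converges to lambda
rate = np.log(classical_otoc(40)) / (2 * 40)
assert abs(rate - lam) < 1e-2
```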
\subsubsection{Pre- and post-Ehrenfest times: exponential growth and saturation}
\label{sec:prepost}
To derive the OTOC pre-Ehrenfest growth and to illustrate these genuine MB interferences consider again Bose-Hubbard systems describing $N$ interacting bosons
with creation (annihilation) operators $\cop{}$ ($\anop{}$) at sites $i\!=\!1,\ldots,L$.
Evaluate the OTOC (\ref{eq:OTOComm_definition}) for the position and momentum
quadrature operators, Eq.~(\ref{eq:quad1}),
\begin{equation}
\hat{q}_i =
(\anop{i}\!+\! \cop{i})/\sqrt{2N} \quad , \quad
\hat{p}_i = (\anop{i}\!-\! \cop{i})/ (\sqrt{2N}\rmi ) \, ,
\end{equation}
which are
related to the occupation operators $\hat{n}_i$ through
$(\hat{q}_i^2 \!+\!\hat{p}^2_i)/2 = \hbar_{eff}(\hat{n}_i \!+\! 1/2)$.
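The quadrature-number relation above can be verified directly in a truncated Fock space; the following minimal sketch (truncation dimension and $N$ are illustrative choices, not from the text) checks it matrix element by matrix element:

```python
import numpy as np

N = 40            # particle number, so hbar_eff = 1/N (illustrative)
hbar_eff = 1.0 / N
dim = 30          # truncated Fock-space dimension (illustrative)

# bosonic ladder operators in the truncated Fock basis |0>, ..., |dim-1>
a = np.diag(np.sqrt(np.arange(1.0, dim)), k=1)
adag = a.conj().T
n_op = adag @ a

# quadrature operators q = (a + a^dag)/sqrt(2N), p = (a - a^dag)/(sqrt(2N) i)
q = (a + adag) / np.sqrt(2 * N)
p = (a - adag) / (np.sqrt(2 * N) * 1j)

lhs = (q @ q + p @ p) / 2
rhs = hbar_eff * (n_op + 0.5 * np.eye(dim))
# the identity (q^2 + p^2)/2 = hbar_eff (n + 1/2) holds exactly
# away from the truncation edge
assert np.allclose(lhs[:-1, :-1], rhs[:-1, :-1])
```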
The OTOC, Eq.~(\ref{eq:OTOComm_definition}), reads
\begin{equation}
C(t)\!=\!\Braket{\Psi| \!
\left[ \hat{p}_i,\hat{U}^\dagger(t)\hat{q}_j\hat{U}(t)\right] \!
\left[ \hat{U}^\dagger(t)\hat{q}_j\hat{U}(t),\hat{p}_i\right] \!
|\Psi }
\label{eq:pq_OTOC_w_Ut}
\end{equation}
in terms of the MB time evolution operator
$\hat{U}(t)=\exp(-\rmi \hat{H} t / \hbar)$.
In Eq.~(\ref{eq:pq_OTOC_w_Ut}) consider an initial coherent state $\Ket{\Psi}$ localized in both quadratures.
The semiclassical derivation is based on approximating $\hat{U}(t)$ by its asymptotic form for
small $\hbar_{eff}$, the MB version~\cite{Engl14c,Engl2015}, Eq.~(\ref {eq:MB-propagator}), of the van Vleck-Gutzwiller propagator. The corresponding sum runs over all mean-field solutions $\gamma$ of the
equations of motion
$\rmi \hbar \partial \Phi/\partial t = \partial H_{\mathrm{cl}}/\partial \Phi^\ast$
of the classical Hamilton function~(\ref{eq:Tom2})
that denotes the mean-field limit of $\hat{H}$
for $\hbar_{eff}=1/N\rightarrow 0$:
\begin{equation}
\label{eq:BHham}
H_{\mathrm{cl}}
\left(\vec{q},\vec{p}\right)
= \sum_{i, j}h_{ij} \Phi_i^*\Phi_j
+\sum_{i, j, i', j'}V_{i j i' j'}
\Phi_i^*\Phi_{i'} \Phi_j^*\Phi_{j'} \, .
\end{equation}
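A minimal integration of the mean-field equations of motion derived from a Hamiltonian of this type can be sketched as follows (a two-site dimer with on-site interaction and illustrative parameter values, $\hbar=1$); norm conservation of the flow reflects particle-number conservation:

```python
import numpy as np

# Mean-field (discrete Gross-Pitaevskii) dynamics for a two-site dimer:
# i dPhi_j/dt = sum_k h_jk Phi_k + 2 U |Phi_j|^2 Phi_j.
# Parameter values are illustrative assumptions.
J, U = 1.0, 0.5
h = np.array([[0.0, -J], [-J, 0.0]])

def rhs(phi):
    return -1j * (h @ phi + 2.0 * U * np.abs(phi) ** 2 * phi)

def rk4_step(phi, dt):
    # classical fourth-order Runge-Kutta step
    k1 = rhs(phi)
    k2 = rhs(phi + 0.5 * dt * k1)
    k3 = rhs(phi + 0.5 * dt * k2)
    k4 = rhs(phi + dt * k3)
    return phi + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

phi = np.array([0.8, 0.6], dtype=complex)  # normalized initial mean field
for _ in range(2000):
    phi = rk4_step(phi, dt=1e-3)
# the norm (particle number) is conserved by the mean-field flow
assert abs(np.sum(np.abs(phi) ** 2) - 1.0) < 1e-8
```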
In the coherent sum over mean-field paths in Eq.~(\ref{eq:semquad}) the phases are given by classical actions
$R_\gamma(\vecfin{q},\vecinit{q};t) \! = \!
\int_0^t \rmd t
[
\vec{p}_\gamma(t)\cdot\vec{q}_\gamma(t)
-H^{\mathrm{cl}}
\left(\vec{q}_\gamma(t),\vec{p}_\gamma(t)\right)/\hbar
]
$
along $\gamma$, and the weights $A_\gamma$ reflect their classical (in)stability.
In order to make a connection to RMT-type universality assume that the mean-field limit exhibits uniformly hyperbolic, chaotic dynamics with the same Lyapunov exponent $\lambda$ at any phase space point.
Evaluating Eq.~(\ref{eq:pq_OTOC_w_Ut}) in position quadrature representation, inserting unit operators,
and using Eq.~(\ref{eq:semquad}) for the propagator $K$ gives a general semiclassical MB representation of the OTOC. To leading order in $\hbar_{\rm eff}$, the derivatives $\hat{p}_i = -\rmi \hbar_{eff} \partial / \partial q_i$ only act on the phases $R_\gamma$ in $K$. Employing the relations
$
\vecinit{p}_\gamma = -
\partial R_{\gamma}/\partial \vecinit{q}
$
generates for the OTOC~\cite{Rammensee18}, Eq.~(\ref{eq:pq_OTOC_w_Ut}),
\begin{eqnarray}
C(t) \simeq &
\!\! \int\! \rmd^n q_1'\! \int \! \rmd^n q_2\!
\int \! \rmd^n q_3'\! \int \! \rmd^n q_4\!
\int \rmd^n q_5'
\Psi^{*}\!\left(\vec{q}_1'\right)\Psi\!\left(\vec{q}_5'\right)
\nonumber \\
& \quad \times \!\!\!\!\!
\sum_{
\alpha': \vec{q}_1' {\rightarrow}
\vec{q}_2
}
\quad
\sum_{
\alpha : \vec{q}_3' {\rightarrow}\vec{q}_2
}\!\!\!\!
A_{\alpha'}^* A_{\alpha}
\rme^{(\rmi/\hbar_{eff})\left(\!-R_{\alpha'}+R_{\alpha}\right)}
\left(\init{p}_{\alpha',i}\!-\!\init{p}_{\alpha,i} \right)\fin{q}_{2,j}
\nonumber
\\
& \quad \times \!\!\!\!\!
\sum_{
\beta': \vec{q}_3' {\rightarrow}\vec{q}_4}
\quad \sum_{
\beta : \vec{q}_5' {\rightarrow}\vec{q}_4}
\!\!\!\!
A_{\beta'}^* A_{\beta}
\rme^{(\rmi/\hbar_{eff}) \left(\!-R_{\beta'}+R_{\beta}\right)} \!
\left(\init{p}_{\beta,i}\!-\!\init{p}_{\beta',i} \right)\fin{q}_{4,j} \, .
\label{eq:OTOC_sc_integral_representation}
\end{eqnarray}
The four time evolution operators
in Eq.~(\ref{eq:pq_OTOC_w_Ut}) have been converted into
fourfold sums over mean-field trajectories of time $t$ linking the various initial and final position quadratures. In the semiclassical expression, Eq.~(\ref{eq:OTOC_sc_integral_representation}), the operators $\hat{p}_i$ and $\hat{q}_j$ are replaced
by their classical counterparts $\init{p}_{\gamma,i}$ and $\fin{q}_{\gamma,j}$.
The commutators translate into differences of initial momenta of trajectories
not restricted to start at nearby positions.
The geometric connections amongst the trajectory quadruples involved are sketched in Fig.~\ref{fig:OTOC_diagrams}.
Panel (a) shows an arbitrary orbit quadruple and (b) the corresponding diagram. Black and orange arrows refer to contributions to $K$ and $K^\ast$, respectively, i.e.~forward and backward propagation in time. The grey shaded spots mimic the initial state $|\Psi\rangle$.
\begin{figure}
\begin{center}
\begin{tabular}{ccc}
\includegraphics[width=0.5\linewidth]{figures/sc_OTOC_integral_realistic.pdf}
& &
\includegraphics[width=0.4\linewidth]{figures/sc_OTOC_integral.pdf}
\\ (a)&&(b) \\
\includegraphics[width=0.4\linewidth]{figures/0l_loops.pdf}
& &
\includegraphics[width=0.4\linewidth]{figures/b1l_loops.pdf}
\\ (c)&&(d) \\
\includegraphics[width=0.4\linewidth]{figures/e1l_loops.pdf}
& &
\includegraphics[width=0.4\linewidth]{figures/2l_loops.pdf}
\\ (e)&&(f)
\end{tabular}
\end{center}
\caption{
{\bf Configurations of interfering mean-field paths that contribute to the OTOC $C(t)$,} Eq.~(\ref{eq:OTOC_sc_integral_representation}).
(a) arbitrary trajectory quadruple and (b) corresponding general diagram denoting
forward and backward propagation along black and orange mean-field paths. (c)-(f): Relevant configurations contributing predominantly to $C(t)$:
The trajectory quadruples reside (c) inside an encounter (marked by dashed box), form a "two-leg"-diagram
with an encounter (d) at the beginning or (e) at the end, or (f) build a "four-leg"-diagram with the encounter
in between. (From Ref.~\cite{Rammensee18}.)}
\label{fig:OTOC_diagrams}
\end{figure}
In the semiclassical limit $R_\gamma(\vecfin{q},\vecinit{q};t) \! \gg \! \hbar_{eff}$. Hence, the corresponding phase factors in Eq.~(\ref{eq:OTOC_sc_integral_representation})
are highly oscillatory with respect to initial and final positions. Thus, contributions from arbitrary trajectory quadruples are usually suppressed whereas
correlated trajectory quadruples with action differences such that
$R_\alpha\!-\!R_{\alpha'}\!+\!R_{\beta}\!-\!R_{\beta'} \simeq O(\hbar_{eff})$ are not averaged out and contribute dominantly to $C(t)$. For post-Ehrenfest times these are quadruples where for most of the propagation the four paths are pairwise nearly identical except for
encounter regions where trajectory pairs approach each other, evolve in a
correlated manner, and exchange their partners. The encounter calculus applies in the high-dimensional phase space associated with the MB Fock space.
To leading order in $\hbar_{eff}$, the relevant quadruples for OTOCs involve a single encounter. These can be subdivided
into four classes depicted in Fig.~\ref{fig:OTOC_diagrams}(c)-(f).
Diagram (c) represents a bundle of four trajectories staying in close vicinity to each other throughout time $t$, i.e.~forming an encounter marked by the dashed box. This diagram turns out to be dominant for times $t<\tau_{E}$, Eq.~(\ref{eq:scrambling}), the time scale for traversing an encounter region, if the associated action differences are of order $\hbar_{eff}$. Due to exponential divergences in chaotic phase space the dynamics merges beyond the encounter boundary into uncorrelated time evolution of the individual trajectories. However, the symplectic Hamiltonian structure implies that the exponential separation along unstable phase space manifolds is complemented by motion near stable manifolds. This enables the formation of pairs of exponentially close trajectories~\cite{Sieber01}, e.g.~paths $\alpha'$ and $\alpha$ or $\beta$ and $\beta'$ in Figs.~\ref{fig:OTOC_diagrams}(d,f).
This mechanism becomes quantum mechanically
relevant for times beyond $\tau_{E}$; see the discussion in Sec.~\ref{sec:SP-Ehrenfest}.
Here it is crucial for understanding post-Ehrenfest OTOC saturation.
Panels (d, e) display diagrams with an encounter at the beginning or end of two such trajectory pairs.
The diagrams in (f) are characterized by uncorrelated motion of four trajectory pairs
before and after the encounter.
The evaluation of Eq.~(\ref{eq:OTOC_sc_integral_representation}) requires a thorough consideration of the
dynamics in and outside the encounter regions.
Inside an encounter, Fig.~\ref{fig:OTOC_diagrams}(c),
the hyperbolic dynamics essentially follows a common mean-field solution: linearization in the vicinity of one reference path allows for expressing contributions from the remaining three trajectories.
The detailed evaluation of the diagrams (d-f) in Fig.~\ref{fig:OTOC_diagrams} is given in Ref.~\cite{Rammensee18}.
It involves the calculation of corresponding encounter integrals based on phase space averages invoking ergodicity.
Diagrams similar to class (f)
have been earlier considered in the context of shot noise ~\cite{Lassl03,Schanz03,Braun06}
and observables based on quantum chaotic single-particle~\cite{Kuipers10} and MB \cite{Urbina16} scattering.
However, the evaluation of such encounter integrals for OTOCs requires a generalization to high-dimensional MB phase spaces. The occurrence of operators (positions and momenta) in OTOCs demands a generalization of the encounter calculus and special treatment,
depending on whether the initial or final position quadratures are inside an encounter.
Using the amplitudes $A_\gamma$ in
Eq.~(\ref{eq:OTOC_sc_integral_representation})
to convert integrals over final positions into initial momenta, the OTOC contribution from each diagram is conveniently represented as an ergodic phase-space average
\begin{equation}
C(t) \simeq
\int \rmd^n q' \int \rmd^n p' W(\vec{q}',\vec{p'})
I(\vec{q}',\vec{p}';t) \, .
\label{eq:PS_average}
\end{equation}
Here,
\begin{equation}
W(\vec{q}',\vec{p'}) \!=\! \int \rmd^n y / (2\pi)^n
\Psi^*\!\left(\vec{q}'\!+\! \vec{y}/2 \right)
\Psi\left(\vec{q}'\!-\!\vec{y}/2 \right)
\exp(\rmi\, \vec{y}\cdot\vec{p}')
\end{equation}
is the Wigner function~\cite{OzorioBook} and $I(\vec{q}',\vec{p}';t)$ comprises all encounter integrals. The detailed evaluation of the encounter integrals $I$ represented by the different diagrams in Fig.~\ref{fig:OTOC_diagrams}, and
thereby $C(t)$, yields the following results for pre- and post-Ehrenfest time evolution~\cite{Rammensee18}:
From diagram (c) it follows for $\lambda^{-1} < t < \tau_{E}$
upon ergodic averaging in the semiclassical limit
\begin{equation}
I(\vec{q}',\vec{p'};t) \simeq F_<(t) \quad ; \quad
F_<(t) \approx e^{2\lambda(t - \tau_{E})} \Theta(\tau_{E}-t)
= \hbar_{eff}^2 e^{2\lambda t} \Theta(\tau_{E}-t) \, .
\label{eq:F<}
\end{equation}
Diagram (d) turns out to be negligible; diagrams (e) and (f) together yield for $t > \tau_{E}$
\begin{equation}
I (\vec{q}',\vec{p'};t) \simeq F_>(t) \Braket{
\left(p_i'-p_i\right)^2 } ( \Braket{q_j^2} - \Braket{q_j}^2) \quad ; \quad F_>(t) = \Theta(t-\tau_{E}) \, .
\label{eq:F>}
\end{equation}
Here,
\begin{equation}
\Braket{f} = \frac{1}{\Sigma(E)} \int\rmd^n q \int \rmd^n p f(\vec{q},\vec{p}) \delta\!\left(E \!-\!
\mathcal{H}^{\rm cl}\left(\vec{p},\vec{q}\right)
\right)
\end{equation}
is the ergodic average with $\Sigma(E)$ the phase space volume of the energy shell at energy $E$.
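The resulting piecewise behaviour of the universal envelope, exponential growth cut off at $\tau_{E}$ followed by saturation, can be sketched schematically (the plateau value and all parameters are illustrative assumptions, not system-specific results):

```python
import numpy as np

# Piecewise universal OTOC envelope: C(t) ~ hbar_eff^2 exp(2*lam*t)
# below tau_E (F_< branch), saturation above (F_> branch).
N = 1000
hbar_eff = 1.0 / N
lam = 1.0
tau_E = np.log(N) / lam
plateau = 1.0  # schematic saturation level (system specific in general)

def C(t):
    if t < tau_E:
        return hbar_eff ** 2 * np.exp(2.0 * lam * t)  # F_< branch
    return plateau                                    # F_> branch

# the two branches match at tau_E:
# hbar_eff^2 * exp(2*lam*tau_E) = N^-2 * N^2 = 1 = plateau
assert abs(C(tau_E - 1e-9) - plateau) < 1e-6
```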
\begin{figure}
\centering{
\includegraphics[width=0.8\linewidth]{figures/OTOC-scheme.pdf}
}
\caption{
\label{fig:OTOCscheme}
{\bf Universal contribution to the time evolution of out-of-time-order commutators} |
Exponential increase according to $F_<(t)$,
Eq.~(\ref{eq:F<}), before and according to $F_>(t)$, Eq.~(\ref{eq:F>}), after the
Ehrenfest time $\tau_{E} = (1/\lambda) \log N$ marked by the vertical dashed line.
Insets depict diagrams (c), (f) and (e)
from Fig.~\ref{fig:OTOC_diagrams} representing interfering mean-field trajectories.
(From Ref.~\cite{Rammensee18}.)
}
\end{figure}
The time-dependences of the universal functions $F_<$ and $F_>$ are sketched in Fig.~\ref{fig:OTOCscheme}.
For $t<\tau_{E}$ the semiclassical evaluation for MB systems confirms the heuristic result, Eq.~(\ref{eq:OTOC_Moyal}). The careful treatment of the encounter dynamics, diagram (c), provides a natural cut-off (exponential suppression) at $\tau_{E}$, absent in Eq.~(\ref{eq:OTOC_Moyal}). It results from the mechanism that the initial phase space area enabling four trajectories to stay close to each other is exponentially shrinking for $t > \tau_{E}$.
The fact that for $t<\tau_{E}$ all four mean-field solutions essentially follow in the linearizable vicinity of a common one, see diagram (c), indicates that the initial exponential increase of an OTOC of a chaotic MB system can be considered a property of unstable mean-field dynamics that would also be captured by a truncated Wigner approach.
\begin{figure}
\centering{
\includegraphics[width=0.8\linewidth]{figures/OTOC.pdf}
}
\caption{
\label{fig:OTOCBH}
{\bf Out-of-time-order commutator of a Bose-Hubbard (BH) system} | Numerically exact calculation of Eq.~(\ref{eq:OTOComm_definition}) for a BH system with four sites and $N=40$ particles. The system is initially described by a coherent state localized near a hyperbolic fixed point of the classical mean-field dynamics. For the choice of parameters $J/NU\simeq \pi/2$ in Eq.~(\ref{eq:BHham}) with hopping $h_{i,j}=J(\delta_{i,j+1}+\delta_{i+1,j})$ and local interactions $V_{i,j,i',j'}=U\delta_{i,j}\delta_{i',j'}\delta_{i,i'}$, the corresponding stability exponent is given by $(J/\hbar)\lambda$. For times $1<\lambda t < \log(N)$ (time in units of the typical hopping time between sites $\sim \hbar/J$) a clear exponential growth due to this local hyperbolicity can be observed.
(courtesy of Mathias~Steinhuber). }
\end{figure}
In contrast, the term $F_>(t)$ in Eq.~(\ref{eq:F>}) is suppressed for $t\!<\!\tau_{E}$, but is indeed responsible for OTOC saturation. After the scrambling time, $t>\tau_{E}$, genuine MB interference sets in, captured by encounter diagrams such as that in panel (f). This diagram represents successive forward and backward dynamics swapping back and forth along different encounter-coupled mean-field trajectories. This involves correlated
quantum MB dynamics and the temporal build up of entanglement between mean-field modes.
This mechanism is evidently in a regime where mean-field approaches fail~\cite{Han16}.
Thus, genuine MB interference
is the quantum mechanism behind the commonly observed saturation of OTOCs at the scrambling or Ehrenfest time.
Note that the expression, Eq.~(\ref{eq:F>}), for the OTOC
contains variances of classical quantities,
e.g.~the variance of the $j$-th final position quadrature, that determine the OTOC saturation level.
Here different types of classical MB dynamics at post-scrambling times, e.g.~diffusive versus chaotic evolution, may lead to a different time-evolution of these classical variances.
As shown in \cite{Rammensee18}, diffusive dynamics implies a linear increase with time, whereas a calculation assuming ergodic dynamics yields $C(t) \approx 2/L^2$ for $t \gg \tau_{E}$ with $L$ the number of sites of a Bose-Hubbard system (for $|\Psi\rangle$ being either a localized wave packet or extended chaotic MB state) corresponding to the flat plateau in Fig.~\ref{fig:OTOCscheme}.
Figure~\ref{fig:OTOCBH} shows the OTOC Eq.~(\ref{eq:OTOComm_definition}) with $\hat{V}= \hat{n}_0$ and $\hat{W}=\hat{n}_1$ denoting occupation operators for adjacent sites
obtained quantum mechanically for a 4-site BH system with $N=40$ particles. These numerics confirm the semiclassical predictions. Up to $\tau_{E}$ the OTOC increases exponentially with slope $2\lambda$, where $\lambda$ agrees with the Lyapunov exponent of the (here locally) unstable MB mean-field dynamics of the specific BH system. At $t \simeq \tau_{E}$ saturation sets in.
The present semiclassical analysis of MB OTOCs in the large-$N$ limit, the vertical limit in Fig.~\ref{fig:sc-limits}, can be readily generalized to systems of $N$ particles in $d$ spatial dimensions in the complementary limit of small $\hbar$, in particular to
the quantum chaotic single-particle case.
Invoking the corresponding Gutzwiller propagator, Eq.~(\ref{eq:vVG}), in $n=d\times N$ dimensions the exponential increase of the OTOC $C_N(t)$ in Eq.~(\ref{eq:F<}) is then governed by the leading Lyapunov exponent $\lambda_N$ of the corresponding classical $N$-particle system. Saturation sets in at the corresponding Ehrenfest time $(1/\lambda_N) \log S/\hbar$ with $S$ a typical classical action. For a chaotic phase-space average in the usual semiclassical limit, the saturation value $C(t) \approx \hbar^2 N/d$ results~\cite{Rammensee18}. For $N=1$ this short-time growth with $\lambda_1$ has also been independently semiclassically derived in \cite{Kurchan18,Jalabert18}.
The exponential increase and saturation of such single-particle OTOCs was considered in detail in numerical case studies of the kicked rotor~\cite{Rozenbaum17} and quantum maps~\cite{Garcia-Mata18}.
\subsubsection{Extensions}
\label{sec:extensions}
There are two further interesting extensions of semiclassical results to summarize for MB OTOCs.
First it is worth considering how the time dependence of the OTOC changes for open MB quantum systems, specifically for $N$-particle systems where each particle has a large but finite average dwell time $\tau_{D} > \tau_{E}$ to stay in the system. The corresponding classical decay can be incorporated into the encounter integrals $I(t)$ in Eq.~(\ref{eq:PS_average}) by means of terms $\sim \exp{(-t/\tau_{D})}$. Most notably, their individual consideration in the encounter diagrams
\ref{fig:OTOC_diagrams}(c) and (d-f) for $t < \tau_{E}$ and $t > \tau_{E}$, respectively, yields, to leading order, different decay rates~\cite{Vierl19}
\begin{eqnarray}
F_<(t) & \sim
e^{(2\lambda t -t/\tau_{D})} \Theta(\tau_{E}-t) \ , \\
F_>(t) & \sim \ e^{(-2t/\tau_{D})} \ \Theta(t-\tau_{E}) \, ,
\label{eq:F-open}
\end{eqnarray}
as depicted in Fig.~\ref{fig:otoc-k}(a). The non-trivial, doubled decay rate $2/\tau_{D}$ in the regime of MB interference arises from the structure of the corresponding encounter diagrams (d-f) containing two independent ``legs'' (trajectory pairs) of length $\sim t$ that can lead to particle decay, compared to correlated dynamics centered around one path of length $t$ inside the encounter, diagram (c). Its experimental observation would clearly indicate this subtle and possibly unexpected aspect of MB interference.
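The two decay regimes can be summarized in a small sketch (rates and parameter values are illustrative assumptions):

```python
import math

# Open-system modification of the OTOC envelopes: growth rate reduced
# by 1/tau_D before tau_E, exponential decay at rate 2/tau_D after.
# Parameter values are illustrative assumptions.
lam, tau_D, N = 1.0, 50.0, 1000.0
tau_E = math.log(N) / lam

def F_less(t):      # pre-Ehrenfest envelope, t < tau_E
    return math.exp(2.0 * lam * t - t / tau_D)

def F_greater(t):   # post-Ehrenfest envelope, t > tau_E
    return math.exp(-2.0 * t / tau_D)

# the post-Ehrenfest decay rate is twice the single-leg rate 1/tau_D
rate = -math.log(F_greater(2 * tau_E) / F_greater(tau_E)) / tau_E
assert abs(rate - 2.0 / tau_D) < 1e-9
```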
Second, it is of interest to consider $k$-th order generalizations of the usual OTOC~\cite{Cotler17}, i.e.~$k$-OTOCs
\begin{equation}
C_k(t) =
\langle \Psi |
\left[ \hat{p}_i(0), \hat{q}_j(t) \right]^k | \Psi \rangle \, .
\label{eq:k-OTOC}
\end{equation}
Note that this definition does not contain absolute values as in the definition, Eq.~(\ref{eq:OTOComm_definition}), of the usual OTOC, i.e.~$C(t) \neq C_2(t)$.
Generalizing the semiclassical encounter calculus to the case of a $k$-OTOC, $k-1$ encounters can be placed into a trajectory structure comprising $2k$ paths. Careful evaluation~\cite{Vierl19} of the leading-order encounter diagrams suggests a stepwise structure for $C_k(t)$ with stepsize $\tau_{E}$ as visualized in Fig.~\ref{fig:otoc-k}(b).
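The predicted staircase can be caricatured by a deliberately crude toy function (our sketch, not a semiclassical result): the plateau index increases by one whenever $t$ crosses a multiple of $\tau_E$, until all $k-1$ encounters have formed:

```python
def c_k_staircase(t, k, tau_E):
    """Toy staircase for the k-OTOC: one plateau per multiple of tau_E,
    saturating once all k - 1 encounters are exhausted."""
    return min(int(t // tau_E), k - 1)
```

This captures only the qualitative picture of steps at multiples of the scrambling time; the actual heights and smoothing of the steps require the full encounter calculus of \cite{Vierl19}.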
\begin{figure}
\centering
\begin{tabular}{ccc}
\includegraphics[width=0.45\linewidth]{figures/OTOC-open-decay.png}
& &
\includegraphics[width=0.45\linewidth]{figures/OTOC-kthorder-saturation.png}
\\ (a)&&(b)
\end{tabular}
\caption{
\label{fig:otoc-k}
{\bf Generalized OTOCs} |
(a) Semiclassical prediction for the OTOC $C(t)$ of an open chaotic MB quantum system with decay time $\tau_{D} > \tau_{E}$.
%
For $t < \tau_{E}$, the classical regime governed by a dominant mean-field path, the exponent is diminished by the rate $1/\tau_{D}$, while in the regime of genuine MB quantum interference ($t > \tau_{E}$) the OTOC exhibits an exponential decay with twice that rate, $-2/\tau_{D}$.
(b) Sketch of the semiclassical prediction for the $k$-OTOC $C_k(t)$, Eq.~(\ref{eq:k-OTOC}).
Characteristic steps are expected at multiples of the scrambling time $\tau_{E}$ (adapted from Ref.~\cite{Vierl19}).
}
\end{figure}
Returning to the usual OTOCs, ergodic quantum MB systems with a chaotic classical limit $\hbar \rightarrow 0$ or
$\hbar_{eff} \rightarrow 0$ show an OTOC growth with exponent $2\lambda_N$ or $2\lambda$, respectively. However, an exponentially increasing OTOC does not necessarily imply chaotic dynamics. Important exceptions are large-$N$ MB systems (near criticality) whose quantum dynamics are accompanied by unstable fixed points (separatrices) in the associated MB mean-field dynamics, which can even be integrable. The local fixed-point instability $\lambda_s$ also leads to an exponential increase $C(t)\sim e^{2\lambda_s t}$ up to times $\tau_{E}$~\cite{Geiger19,Xu20}. Thus scrambling does not necessarily imply chaos.
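This point admits a minimal numerical illustration (our construction, in arbitrary units): the integrable pendulum $H = p^2/2 - \cos q$ possesses an unstable fixed point at $q=\pi$ with local instability exponent $\lambda_s = 1$, and a tangent vector launched near the separatrix stretches like $e^{\lambda_s t}$ even though the dynamics is not chaotic:

```python
import math

def pendulum_tangent_growth(q0, p0, dt=0.01, steps=600):
    """Integrate the pendulum and its tangent flow with a symplectic Euler
    step; return the finite-time stretching rate of the tangent vector."""
    q, p = q0, p0
    dq, dp = 1e-8, 0.0                # tiny initial displacement
    for _ in range(steps):
        p -= math.sin(q) * dt         # force evaluated at the old q
        dp -= math.cos(q) * dq * dt   # linearized force on the tangent
        q += p * dt
        dq += dp * dt
    return math.log(math.hypot(dq, dp) / 1e-8) / (steps * dt)

# Started close to the separatrix, the finite-time rate approaches
# lambda_s = 1 although the pendulum is integrable.
rate = pendulum_tangent_growth(q0=math.pi - 1e-3, p0=0.0)
```

An OTOC-like quantity built from this tangent growth would thus increase as $e^{2\lambda_s t}$ for a while, mimicking chaotic scrambling without any chaos.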
In such systems the quantum critical states may be viewed as residing close to separatrices that have much in common with encounters. However, due to the integrable classical limit, the fast initial scrambling is followed subsequently by oscillatory behavior between reentrant localization and delocalization of information in Hilbert space~\cite{Geiger19}.
For such quantum critical systems the semiclassical leading-order $1/N$-expansion has been refined, identifying $e^{2\lambda t}/N$ (and not $1/N$) as the renormalized parameter that non-perturbatively governs the quantum-classical transition~\cite{Geiger21}.
For scrambling in many-body hyperbolic systems this provides formal grounds for the conjectured and numerically observed multi-exponential form of OTOCs for the SYK model~\cite{Kobrin21}.
Apart from quantum chaotic and critical large-$N$ systems, the imprints on OTOCs of non-ergodic dynamics, for example, a mixed (regular-chaotic) phase space in the classical limit, have also been recently considered~\cite{Fortes19}, but are not in the focus of this review.
\section{Acknowledgments}
This review is dedicated to the memory of our dear colleagues, Fritz Haake and Petr Braun. We are very thankful to both of them for many inspiring conversations about semiclassical physics, quantum chaos and topics beyond science during the last 30 years, and for numerous encounters at various places all over the world. Our close connections started with the Symposium {\em Quantum Aspects of Nonlinear Systems}, organized by Fritz Haake and Robert Graham in 1990, in retrospect playing a role for quantum chaos similar to that of the 1911 Solvay conference for quantum physics. K.R. is further indebted to Petr Braun for various conversations, also about Rydberg atomic physics, and for his kind hospitality during a longer stay in St.~Petersburg in 1990.
Part of the results on MB semiclassics presented here arose out of two PhD theses~\cite{Engl15a,Rammensee19} conducted in Regensburg. Hence we particularly thank Th. Engl and J. Rammensee for their important work and for many related conversations.
We further thank P. Schlagheck and D. Ullmo, knowledgeable discussion partners on advanced topics of many-body semiclassical methods over many years.
We would also like to thank numerous colleagues for conversations on topics that entered the many-body part of this review, including A. Altland, A. Buchleitner, R. Dubertrand, B. Geiger, T. Guhr, B. Gutkin, Q. Hummel, R. Jalabert, D. Vierl, D. Waltner and D. Wisniacki.
We acknowledge financial support from the Deutsche Forschungsgemeinschaft (German Research Foundation) through Projects Ri681/14-1 and Ri681/15-1 within the Reinhart-Koselleck Programme, as well as funding through the Vielberth Foundation in Regensburg.
\section{Literature}
\bibliographystyle{unsrt}
\section{Perspectives}
\label{sec:persp}
During the past twenty years considerable progress has been made and various breakthroughs have been achieved in laying the foundations of a semiclassical theory that successfully provides an understanding of quantum chaotic universality, or more precisely, of universal (spectral) features of SP and MB quantum systems exhibiting chaotic classical limits. However, as is generally accepted, this RMT-type universality applies to much broader classes of quantum systems, including those without a strict classical limit, such as quantum graphs, networks, and spin chains, which often may be represented by ensemble theories, i.e.~the theory of disordered systems and RMT. Why such quantum chaotic systems in a wider sense exhibit the same universal features as quantum chaotic systems in a narrow sense, i.e.~those with a classical limit, remains partially a mystery.
Leaving aside such conceptual questions, the now existing theoretical framework reviewed here opens various interesting perspectives and challenges. Although semiclassical theory applies to the dynamics of individual systems, enabling (at least in principle) a complete understanding of system-specific properties and of deviations from universal behavior, we conclude this review with perspectives of particular relevance to quantum universality:
\begin{itemize}
\item[(i)] {\em Variety of classical limits --}
in both complementary asymptotic limits considered, the quantum-classical transition is singular. The limiting, but non-vanishing, values of $\hbar/S \ll 1$ and $\hbar_{eff} = 1/N \ll 1$ imply ever shorter wavelengths and extensive quantum interference. Classical and mean-field physics arise for $\hbar/S \equiv 0$ and $\hbar_{eff} \equiv 0$, respectively.
Such singular asymptotic behavior is generally indicative of fascinating physical phenomena. The semiclassical theory presented provides the leading orders (in $\hbar$ and $\hbar_{eff}$) of quantum mechanical contributions: the former dealing with quantum wave interference and the latter with genuine MB quantum interference. These two quite distinct limits, sketched in Fig.~\ref{fig:sc-limits}, represent two avenues of asymptotic analysis. It may be interesting to consider other limiting procedures where both $\hbar$ and $\hbar_{eff}$ come into play in concert. Indeed, short wavelength approximations (semiclassical methods) are applied universally across classical field theories, e.g.~optics, acoustics, gravity waves, seismic waves, etc., in which the limit leads to ray equations underlying the motion of classical waves. In turn, these rays themselves can be chaotic. Beginning with quantum fields, it is possible to imagine a kind of ``asymptotics of asymptotics'' in which there are chaotic rays underlying the classical field solutions, which in turn underlie semiclassically the quantum field solutions. This is as opposed to just the single limit $\hbar_{eff}\ll 1$ considered here, generating some kind of nonlinear, unstable classical field. Does the asymptotics-of-asymptotics chaotic limit lead to a kind of MB quantum chaos distinct from that mainly considered in this text? On another front, following the ``diagonal'' in Fig.~\ref{fig:sc-limits}, {\em i.e.}, sending $\hbar$ and $\hbar_{eff}$ to zero simultaneously, appears particularly appealing and challenging, since there is no reason to expect the two limits to commute, and this could lead in yet another unique direction. There is a great deal yet to uncover.
\item[(ii)] {\em The limit of dilute local Hilbert spaces --} The construction of the semiclassical propagator relies on two key facts: the existence of a classical limit (defined through extremizing an action identified in turn from the exact path integral) and a semiclassical regime (identified through the small parameter $\hbar_{\rm eff} \to 0$ scaling the action). In situations where either of these two ingredients is not obvious or explicit, the semiclassical program relies on further assumptions. Three very important situations where such problems appear and await further progress are the semiclassical analysis of fermionic Fock space, the related case of spin-$1/2$ chains, and the extension to fields in the continuum. In systems described by (continuous or discrete) fermionic degrees of freedom, the natural path integral based on Grassmann-valued fields leads to Grassmann-valued actions \cite{Negele1998} for which the stationary phase approximation cannot be defined in any sensible manner. This problem in turn reflects the difficulty in identifying an $\hbar_{\rm eff}$ due to the fundamentally quantum character of the Pauli principle, a problem shared by spin systems with low spin and their fundamentally discrete natural basis states (in fact, fermionic and spin-$1/2$ degrees of freedom are rigorously mapped onto each other by means of the Jordan-Wigner transformation \cite{coleman_2015}). Progress in this direction can be achieved by forcing a description in terms of bosonic (commuting) classical fields as in \cite{Engl18,Engl15a,EngPloUrbRic2014TCA} for both fermionic and spin systems. The subsequent semiclassical program can be formally defined, and in some situations provides extremely accurate results for delicate MB interference effects as shown in \cite{Engl16,Engl18}, but so far it lacks rigorous support. A similar violation of the large local Hilbert space assumption also occurs in bosonic systems if one considers the continuum limit.
Since the number of sites tends to infinity, any finite number of particles gets effectively diluted, thus breaking the fundamental assumption of large occupations. Identifying a proper semiclassical regime for the propagator of bosonic fields in the continuum holds the key to a very promising program, as several important results are known for such non-linear classical field equations. These include the existence of chaos \cite{Brezinova12,Brezinova21}, a precise definition of classical mean-field integrability by means of the inverse scattering method, and the corresponding semiclassical quantization based on solitons as classical building blocks \cite{Korepin1993}. Extending this approach into the chaotic regime remains a fundamental and fascinating open problem.
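The Jordan-Wigner mapping mentioned above can be checked directly in a few lines. This minimal Python sketch (our illustration; the site count and sign conventions are chosen arbitrarily) builds $c_j = \big(\prod_{k<j} Z_k\big)\,(X_j + iY_j)/2$ on a short spin-$1/2$ chain and verifies the fermionic anticommutation relations numerically:

```python
import numpy as np

# Pauli matrices
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron_all(ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def jw_c(j, n):
    """Jordan-Wigner fermion operator c_j on an n-site spin-1/2 chain:
    a string of Z's on sites k < j followed by (X_j + i Y_j)/2."""
    return kron_all([Z] * j + [(X + 1j * Y) / 2] + [I2] * (n - j - 1))

n = 3
dim = 2 ** n
c = [jw_c(j, n) for j in range(n)]
for i in range(n):
    for j in range(n):
        # {c_i, c_j^dag} = delta_ij  and  {c_i, c_j} = 0
        assert np.allclose(c[i] @ c[j].conj().T + c[j].conj().T @ c[i],
                           np.eye(dim) if i == j else np.zeros((dim, dim)))
        assert np.allclose(c[i] @ c[j] + c[j] @ c[i], np.zeros((dim, dim)))
```

The $Z$-string is what makes the mapping non-local, and it is precisely this non-locality that obstructs a straightforward identification of an $\hbar_{\rm eff}$ for such degrees of freedom.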
\item[(iii)] {\em Many-body scarring and deviations from equilibration --} ergodicity is commonly related to the equidistribution of eigenfunctions over the chaotic phase space energy shell, leading to generic equilibration as predicted by the eigenstate thermalization hypothesis~\cite{Deutsch91, Srednicki94}. There are various properties and mechanisms that lead to different degrees of ergodicity breaking and, as a result, to the possibility of hindered relaxation. Corresponding settings include: (i) MB systems without a classical limit that are subject to additional constraints~\cite{Bernien17,Karpov21}, leading to disconnected sub-Hilbert spaces with reduced dimensions; (ii) MB systems whose classical limit is non-ergodic, e.g.~exhibits mixed phase space structures of co-existing chaotic and regular regions. As an obvious case of the latter, quantum states residing on tori associated with locally integrable phase space regions are long-lived and typically decay or MB-equilibrate only on (exponentially) long time scales.
However, there is a hierarchy of weak ergodicity breaking mechanisms reflected in deviations from equilibration. In this context, two examples from SP physics are {\em dynamical localization} due to partial transport barriers, leading to additional time scales whose effect is to localize eigenfunctions~\cite{Bohigas93}, and the concept of {\em scars}, discovered and introduced by Heller~\cite{Heller84} for low-dimensional quantum-chaotic systems. Both represent prime examples of weak ergodicity breaking. For the latter, a quantum eigenstate that is semiclassically anchored on or close to an {\em unstable} periodic orbit \cite{Heller84} is scarred if the period of the orbit is short and its Lyapunov exponent small enough that a single-particle wave packet launched along this orbit shows distinct recurrences after one period. This can be cast into a rough criterion for scar formation~\cite{Heller84}. The concept naturally requires a classical limit, and it is intriguing because it indicates deviations from ergodicity for a fully chaotic system that globally shows eigenstate thermalization; it must be differentiated from the aforementioned quantum states localized in regular regions. Very recently, scarring in Heller's original sense could be demonstrated for a MB Bose-Hubbard system with a high-dimensional associated classical phase space, including the corresponding generalization of the scar criterion~\cite{Hummel22}.
Earlier, MB ``scars'', reflected in persistent oscillations of local observables, were observed in Rydberg-based quantum simulators \cite{Bernien17}, as well as in corresponding numerical simulations \cite{Turner18,Serbyn21}. They were found in spin-chain-type MB Hamiltonians that do not possess a natural classical limit. It remains to be understood how MB scarring of this type can be related to the semiclassical scar mechanism associated with periodic orbits. Furthermore, for systems with a semiclassical limit, it remains to be explored whether true MB scars prevail in the thermodynamic limit of large site or particle number. This could be probed employing ultracold bosonic atoms in optical lattices as quantum simulators.
\item[(iv)] {\em Entropies, entanglement and encounters --} The use of quantum information concepts in the framework of MB systems has led to deep insights into the mechanisms of equilibration, thermalization and the role of quantum coherence~\cite{Eisert15,Rigol08, Abanin19, Streltsov17}. In this context, RMT has been applied rather successfully to various universal aspects of particular information measures~\cite{Keating04, Chan18}. It has been of further utility for studying weakly connected bipartite chaotic systems as well~\cite{Bandyopadhyay02, Lakshminarayan19, Pulikkottil20, Pulikkottil22}. In contrast, the use of many-body semiclassical methods faces severe technical difficulties, and only limited work has been accomplished. The origin of the problem is the extremely nonlinear way in which the propagator appears when calculating such measures, typically involving functions with dependencies such as $\log K K^{*}$, lacking the usual structure of multiple sums over paths where further analysis based on action correlations is possible. Only the so-called purity (a linearized version of the entanglement entropy) has allowed for a semiclassical study in first-quantized systems, as carried out in \cite{Jacquod09b,Bonanca11}. Properties of entanglement of two (non)interacting particles in the quantum chaotic Chirikov standard map were very recently considered numerically in Ref.~\cite{Ermann22}.
\item[(v)] {\em Quantum chaos meets quantum gravity --}
During the last two decades, quantum chaos concepts have entered the realm of research towards a possible quantization of gravitational degrees of freedom. Early suggestions pointing to the chaotic character of black holes~\cite{Sekino08} were eventually made precise through the study of scrambling of (quantum) information and the proposal of black holes as systems where such scrambling is maximal, see \cite{Maldacena16} and references therein. This connection between quantum chaos and toy models of quantum gravity, for which, unlike in the realistic scenario describing the universe, a full solution is available, was made precise in the cornerstone paper \cite{Saad19}. There, the authors showed the dual relation, by means of the equivalence of correlation functions, between quantized Jackiw-Teitelboim gravity, a solvable model of gravity coupled to a dilaton field in 1+1 dimensions, and a suitably double-scaled theory of random matrices. This finding has triggered numerous attempts to understand the origins of this duality, with prospective links to supersymmetry and to semiclassical analysis in quantum chaos~\cite{Altland21}.
\end{itemize}
Just these few speculations illustrate the richness of possibilities that follow from pursuing various semiclassical paths towards future theoretical challenges in MB quantum chaos.
\section{Introduction: file preparation and submission}
The \verb"iopart" \LaTeXe\ article class file is provided to help authors prepare articles for submission to IOP Publishing journals.
This document gives advice on preparing your submission, and specific instructions on how to use \verb"iopart.cls" to follow this advice. You
do not have to use \verb"iopart.cls"; articles prepared using any other common class and style files can also be submitted.
It is not necessary to mimic the appearance of a published article.
The advice
on \LaTeX\ file preparation in this document applies to
the journals listed in table~\ref{jlab1}. If your journal is not listed please go to the journal website via \verb"http://iopscience.iop.org/journals" for specific
submission instructions.
\begin{table}
\caption{\label{jlab1}Journals to which this document applies, and macros for the abbreviated journal names in {\tt iopart.cls}. Macros for other journal titles are listed in appendix\,A.}
\footnotesize
\begin{tabular}{@{}llll}
\br
Short form of journal title&Macro name&Short form of journal title&Macro name\\
\mr
2D Mater.&\verb"\TDM"&Mater. Res. Express&\verb"\MRE"\\
Biofabrication&\verb"\BF"&Meas. Sci. Technol.$^c$&\verb"\MST"\\
Bioinspir. Biomim.&\verb"\BB"&Methods Appl. Fluoresc.&\verb"\MAF"\\
Biomed. Mater.&\verb"\BMM"&Modelling Simul. Mater. Sci. Eng.&\verb"\MSMSE"\\
Class. Quantum Grav.&\verb"\CQG"&Nucl. Fusion&\verb"\NF"\\
Comput. Sci. Disc.&\verb"\CSD"&New J. Phys.&\verb"\NJP"\\
Environ. Res. Lett.&\verb"\ERL"&Nonlinearity$^{a,b}$&\verb"\NL"\\
Eur. J. Phys.&\verb"\EJP"&Nanotechnology&\verb"\NT"\\
Inverse Problems&\verb"\IP"&Phys. Biol.$^c$&\verb"\PB"\\
J. Breath Res.&\verb"\JBR"&Phys. Educ.$^a$&\verb"\PED"\\
J. Geophys. Eng.$^d$&\verb"\JGE"&Physiol. Meas.$^{c,d,e}$&\verb"\PM"\\
J. Micromech. Microeng.&\verb"\JMM"&Phys. Med. Biol.$^{c,d,e}$&\verb"\PMB"\\
J. Neural Eng.$^c$&\verb"\JNE"&Plasma Phys. Control. Fusion&\verb"\PPCF"\\
J. Opt.&\verb"\JOPT"&Phys. Scr.&\verb"\PS"\\
J. Phys. A: Math. Theor.&\verb"\jpa"&Plasma Sources Sci. Technol.&\verb"\PSST"\\
J. Phys. B: At. Mol. Opt. Phys.&\verb"\jpb"&Rep. Prog. Phys.$^{e}$&\verb"\RPP"\\
J. Phys: Condens. Matter&\verb"\JPCM"&Semicond. Sci. Technol.&\verb"\SST"\\
J. Phys. D: Appl. Phys.&\verb"\JPD"&Smart Mater. Struct.&\verb"\SMS"\\
J. Phys. G: Nucl. Part. Phys.&\verb"\jpg"&Supercond. Sci. Technol.&\verb"\SUST"\\
J. Radiol. Prot.$^a$&\verb"\JRP"&Surf. Topogr.: Metrol. Prop.&\verb"\STMP"\\
Metrologia&\verb"\MET"&Transl. Mater. Res.&\verb"\TMR"\\
\br
\end{tabular}\\
$^{a}$UK spelling is required; $^{b}$MSC classification numbers are required; $^{c}$titles of articles are required in journal references; $^{d}$Harvard-style references must be used (see section \ref{except}); $^{e}$final page numbers of articles are required in journal references.
\end{table}
\normalsize
Any special submission requirements for the journals are indicated with footnotes in table~\ref{jlab1}.
Journals which require references in a particular format will need special care if you are using BibTeX, and you might need to use a \verb".bst" file
that gives slightly non-standard output in order to supply any extra information required. It is not
necessary to give references in the exact style of references used in published articles, as long as all of
the required information is present.
Also note that there is an incompatibility
between \verb"amsmath.sty" and \verb"iopart.cls" which cannot be completely worked around. If your article relies
on commands in \verb"amsmath.sty" that are not available in \verb"iopart.cls", you may wish to consider using a different
class file.
Whatever journal you are submitting to, please look at recent published articles (preferably
articles in your subject area) to familiarize yourself with the features of the journal. We do not demand
that your \LaTeX\ file closely resembles a published article---a generic `preprint' appearance of the sort
commonly seen on \verb"arXiv.org" is fine---but your submission should be presented
in a way that makes it easy for the referees to form an opinion of whether it is suitable for the journal.
The generic advice in this document---on what to include in an abstract, how best to present complicated
mathematical expressions, and so on---applies whatever class file you are using.
\subsection{What you will need to supply}
Submissions to our journals are handled via the ScholarOne web-based submission system. When you submit
a new article to us you need only submit a PDF of your article. When you submit a revised version,
we ask you to submit the source files as well. Upon acceptance for publication we will use the source files to produce a proof of your article in the journal style.
\subsubsection{Text.}When you send us the source files for a revised version of your submission,
you should send us the \LaTeX\ source code of your paper with all figures read in by
the source code (see section \ref{figinc}). Articles can be prepared using almost any version of \TeX\ or \LaTeX{},
not just \LaTeX\ with the class file \verb"iopart.cls". You may split your \LaTeX\ file into several parts, but please show
which is the `master' \LaTeX\ file that reads in all of the other ones by naming it appropriately. The `master'
\LaTeX\ file must read in all other \LaTeX\ and figure files from the current directory. {\it Do not read in files from a different directory, e.g. \verb"\includegraphics{/figures/figure1.eps}" or
\verb"\include{../usr/home/smith/myfiles/macros.tex}"---we store submitted files
all together in a single directory with no subdirectories}.
\begin{itemize}
\item {\bf Using \LaTeX\ packages.} Most \LaTeXe\ packages can be used if they are
available in common distributions of \LaTeXe; however, if it is essential to use
a non-standard package then any extra files needed to process the article must
also be supplied. Try to avoid using any packages that manipulate or change the standard
\LaTeX\ fonts: published articles use fonts in the Times family, but we prefer that you
use \LaTeX\ default Computer Modern fonts in your submission. The use of \LaTeX\ 2.09, and of plain
\TeX\ and variants such as AMSTeX is acceptable, but a complete PDF of your submission should be supplied in these cases.
\end{itemize}
\subsubsection{Figures.} Figures should ideally be included in an article as encapsulated PostScript files
(see section \ref{figinc}) or created using standard \LaTeX\ drawing commands.
Please name all figure files using the guidelines in section \ref{fname}.
We accept submissions that use pdf\TeX\ to include
PDF or bitmap figures, but please ensure that you send us a PDF that uses PDF version 1.4 or lower
(to avoid problems in the ScholarOne system).
You can do this by putting \verb"\pdfminorversion=4" at the very start of your TeX file.
\label{fig1}All figures should be included within the body of the text
at an appropriate point or grouped together with their captions at the end of the article. A standard graphics inclusion package such as \verb"graphicx" should be used for figure inclusion, and the package should be declared in the usual
way, for example with \verb"\usepackage{graphicx}", after the \verb"\documentclass" command.
Authors should avoid using special effects generated by including verbatim
PostScript code in the submitted \LaTeX\ file. Wherever possible, please try to use standard \LaTeX\ tools
and packages.
\subsubsection{References.\label{bibby}}
You can produce your bibliography in the standard \LaTeX\ way using the \verb"\bibitem" command. Alternatively
you can use BibTeX: our preferred \verb".bst" styles are:
\begin{itemize}
\item For the numerical (Vancouver) reference style we recommend that authors use
\verb"unsrt.bst"; this does not quite follow the style of published articles in our
journals but this is not a problem. Alternatively \verb"iopart-num.bst" created by Mark A Caprio
produces a reference style that closely matches that in published articles. The file is available from
\verb"http://ctan.org/tex-archive/biblio/bibtex/contrib/iopart-num/" .
\item For alphabetical (Harvard) style references we recommend that authors use the \verb"harvard.sty"
in conjunction with the \verb"jphysicsB.bst" BibTeX style file. These, and accompanying documentation, can be downloaded
from \penalty-10000 \verb"http://www.ctan.org/tex-archive/macros/latex/contrib/harvard/".
Note that the \verb"jphysicsB.bst" bibliography style does not include article titles
in references to journal articles.
To include the titles of journal articles you can use the style \verb"dcu.bst" which is included
in the \verb"harvard.sty" package. The output differs a little from the final journal reference
style, but all of the necessary information is present and the reference list will be formatted
into journal house style as part of the production process if your article is accepted for publication.
\end{itemize}
\noindent Please make sure that you include your \verb".bib" bibliographic database file(s) and any
\verb".bst" style file(s) you have used.
\subsection{\label{copyright}Copyrighted material and ethical policy} If you wish to make use of previously published material for which you do not own the copyright then you must seek permission from the copyright holder, usually both the author and the publisher. It is your responsibility to obtain copyright permissions and this should be done prior to submitting your article. If you have obtained permission, please provide full details of the permission granted---for example, copies of the text of any e-mails or a copy of any letters you may have received. Figure captions must include an acknowledgment of the original source of the material even when permission to reuse has been obtained. Please read our ethical policy before writing your article.
\subsection{Naming your files}
\subsubsection{General.}
Please name all your files, both figures and text, as follows:
\begin{itemize}
\item Use only characters from the set a to z, A to Z, 0 to 9 and underscore (\_).
\item Do not use spaces or punctuation characters in file names.
\item Do not use any accented characters such as
\'a, \^e, \~n, \"o.
\item Include an extension to indicate the file type (e.g., \verb".tex", \verb".eps", \verb".txt", etc).
\item Use consistent upper and lower case in filenames and in your \LaTeX\ file.
If your \LaTeX\ file contains the line \verb"\includegraphics{fig1.eps}" the figure file must be called
\verb"fig1.eps" and not \verb"Fig1.eps" or \verb"fig1.EPS". If you are on a Unix system, please ensure that
there are no pairs of figures whose names differ only in capitalization, such as \verb"fig_2a.eps" and \verb"fig_2A.eps",
as Windows systems will be unable to keep the two files in the same directory.
\end{itemize}
When you submit your article files, they are manipulated
and copied many times across multiple databases and file systems. Including non-standard
characters in your filenames will cause problems when processing your article.
\subsubsection{\label{fname}Naming your figure files.} In addition to the above points, please give each figure file a name which indicates the number of the figure it contains; for example, \verb"figure1.eps", \verb"figure2a.eps", etc. If the figure file contains a figure with multiple parts, for example figure 2(a) to 2(e), give it a name such as \verb"figure2a_2e.eps", and so forth.
\subsection{How to send your files}
Please send your submission via the ScholarOne submission system. Go to the journal home
page, and use the `Submit an article' link on the right-hand side.
\section{Preparing your article}
\subsection{Sample coding for the start of an article}
\label{startsample}
The code for the start of a title page of a typical paper in the \verb"iopart.cls" style might read:
\small\begin{verbatim}
\documentclass[12pt]{iopart}
\begin{document}
\title[The anomalous magnetic moment of the
neutrino]{The anomalous magnetic moment of the
neutrino and its relation to the solar neutrino problem}
\author{P J Smith$^1$, T M Collins$^2$,
R J Jones$^3$\footnote{Present address:
Department of Physics, University of Bristol, Tyndalls Park Road,
Bristol BS8 1TS, UK.} and Janet Williams$^3$}
\address{$^1$ Mathematics Faculty, Open University,
Milton Keynes MK7~6AA, UK}
\address{$^2$ Department of Mathematics,
Imperial College, Prince Consort Road, London SW7~2BZ, UK}
\address{$^3$ Department of Computer Science,
University College London, Gower Street, London WC1E~6BT, UK}
\ead{williams@ucl.ac.uk}
\begin{abstract}
...
\end{abstract}
\keywords{magnetic moment, solar neutrinos, astrophysics}
\submitto{\jpg}
\maketitle
\end{verbatim}
\normalsize
At the start of the \LaTeX\ source code please include
commented material to identify the journal, author, and (if you are sending a revised
version or a resubmission) the reference number that the journal
has given to the submission. The first non-commented line should be
\verb"\documentclass[12pt]{iopart}" to load the preprint class
file. The normal text will be in the Computer Modern 12pt font.
It is possible to specify 10pt font size by passing the option \verb"[10pt]" to the class file.
Although it is possible to choose a font other than Computer Modern by loading external packages, this is not recommended.
The article text begins after \verb"\begin{document}".
Authors of very long articles may find it convenient to separate
their article into a series of \LaTeX\ files each containing one section, and each of which is called
in turn by the primary file. The files for each section should be read in from the current directory;
please name the primary file clearly so that we know to run \LaTeX\ on this file.
Authors may use any common \LaTeX\ \verb".sty" files.
Authors may also define their own macros and definitions either in the main article \LaTeX\ file
or in a separate \verb".tex" or \verb".sty" file that is read in by the
main file, provided they do not overwrite existing definitions.
It is helpful to the production staff if complicated author-defined macros are explained in a \LaTeX\ comment.
The article class \verb"iopart.cls" can be used with other package files such
as those loading the AMS extension fonts
\verb"msam" and \verb"msbm", which provide the
blackboard bold alphabet and various extra maths symbols as well as symbols useful in figure
captions. An extra style file \verb"iopams.sty" is provided to load these
packages and provide extra definitions for bold Greek letters.
\subsection{\label{dblcol}Double-column layout}
The \verb"iopart.cls" class file produces single-column output by default, but a two-column layout can be obtained by
using \verb"\documentclass[10pt]{iopart}" at the start of the file and \verb"\ioptwocol" after the \verb"\maketitle" command. Two-column output will begin
on a new page (unlike in published double-column articles, where the two-column material
starts on the same page as the abstract).
In general we prefer to receive submissions in single-column format even for journals
published in double-column style; however, the \verb"\ioptwocol" option may be useful to test figure sizes
and equation breaks for these journals. When setting material
in two columns you can use the asterisked versions of \LaTeX\ commands such as \verb"\begin{figure*} ... \end{figure*}"
to set figures and tables across two columns. If you have any problems or any queries about producing two-column output, please contact us at \verb"submissions@iop.org".
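A sketch of a file producing two-column output under the conventions above (titles, names and addresses are placeholders) might be:
\small\begin{verbatim}
\documentclass[10pt]{iopart}
\begin{document}
\title{An example title}
\author{A N Other}
\address{Example University, City, Country}
\begin{abstract}
...
\end{abstract}
\maketitle
\ioptwocol
% two-column text starts here
\end{document}
\end{verbatim}\normalsize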
\section{The title and abstract page}
If you use \verb"iopart.cls", the code for setting the title page information is slightly different from
the normal default in \LaTeX. If you are using a different class file, you do not need to mimic the appearance of
an \verb"iopart.cls" title page, but please ensure that all of the necessary information is present.
\subsection{Titles and article types}
The title is set using the command
\verb"\title{#1}", where \verb"#1" is the title of the article. The
first letter
of the title should be capitalized with the rest in lower case.
The title appears in bold case, but mathematical expressions within the title may be left in light-face type.
If the title is too long to use as a running head at the top of each page (apart from the
first) a short
form can be provided as an optional argument (in square brackets)
before the full title, i.e.\ \verb"\title[Short title]{Full title}".
For article types other than papers, \verb"iopart.cls"
has a generic heading \verb"\article[Short title]{TYPE}{Full title}"
and some specific definitions given in table~\ref{arttype}. In each case (apart from Letters
to the Editor and Fast Track Communications) an
optional argument can be used immediately after the control sequence name
to specify the short title; where no short title is given, the full title
will be used as the running head. Not every article type has its own macro---use \verb"\article" for
any not listed. A full list of the types of articles published by a journal is given
in the submission information available via the journal home page.
The generic heading could be used for
articles such as those presented at a conference or workshop, e.g.
\small\begin{verbatim}
\article[Short title]{Workshop on High-Energy Physics}{Title}
\end{verbatim}\normalsize
Footnotes to titles may be given by using \verb"\footnote{Text of footnote.}" immediately after the title.
Acknowledgment of funding should be included in the acknowledgments section rather than in a footnote.
\begin{table}
\caption{\label{arttype}Types of article defined in the {\tt iopart.cls}
class file.}
\footnotesize\rm
\begin{tabular*}{\textwidth}{@{}l*{15}{@{\extracolsep{0pt plus12pt}}l}}
\br
Command& Article type\\
\mr
\verb"\title{#1}"&Paper (no surtitle on first page)\\
\verb"\ftc{#1}"&Fast Track Communication\\
\verb"\review{#1}"&Review\\
\verb"\topical{#1}"&Topical Review\\
\verb"\comment{#1}"&Comment\\
\verb"\note{#1}"&Note\\
\verb"\paper{#1}"&Paper (no surtitle on first page)\\
\verb"\prelim{#1}"&Preliminary Communication\\
\verb"\rapid{#1}"&Rapid Communication\\
\verb"\letter{#1}"&Letter to the Editor\\
\verb"\article{#1}{#2}"&Other articles\\\ & (use this for any other type of article; surtitle is whatever is entered as {\tt
\#1})\\
\br
\end{tabular*}
\end{table}
\subsection{Authors' names and addresses}
For the authors' names type \verb"\author{#1}",
where \verb"#1" is the
list of all authors' names. Western-style names should be written as initials then
family name, with a comma after all but the last
two names, which are separated by `and'. Initials should {\it not} be followed by full stops. First (given) names may be used if
desired. Names in Chinese, Japanese and Korean styles should be written as you want them to appear in the published article. Authors in all IOP Publishing journals have the option to include their names in Chinese, Japanese or Korean characters in addition to the English name: see appendix B for details.
If the authors are at different addresses a superscripted number, e.g. $^1$, \verb"$^1$", should be used after each
name to reference the author to his/her address.
If an author has additional information to appear as a footnote, such as
a permanent address, a normal \LaTeX\ footnote command
should be given after the family name and address marker
with this extra information.
The authors' affiliations follow the list of authors.
Each address is set by using
\verb"\address{#1}" with the address as the single parameter in braces.
If there is more
than one address then the appropriate superscripted number, followed by a space, should come at the start of
the address.
E-mail addresses are added by inserting the
command \verb"\ead{#1}" after the postal address(es) where \verb"#1" is the e-mail address.
See section~\ref{startsample} for sample coding. For more than one e-mail address, please use the command
\verb"\eads{\mailto{#1}, \mailto{#2}}" with \verb"\mailto" surrounding each e-mail address. Please ensure
that, at the very least, you state the e-mail address of the corresponding author.
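Putting these commands together, a hypothetical author block (all names, addresses and e-mail addresses below are placeholders) might look like:
\small\begin{verbatim}
\title[Short running title]{Full title of the article}
\author{A N Author$^1$, B Coauthor$^1$ and C D Worker$^2$}
\address{$^1$ Department of Physics, Example University,
City, Country}
\address{$^2$ Institute for Example Studies, City, Country}
\ead{a.author@example.org}
\end{verbatim}\normalsize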
\subsection{The abstract}
The abstract follows the addresses and
should give readers concise information about the content
of the article and indicate the main results obtained and conclusions
drawn. It should be self-contained---there should be no references to
figures, tables, equations, bibliographic references etc. It should be enclosed between \verb"\begin{abstract}"
and \verb"\end{abstract}" commands. The abstract should normally be restricted
to a single paragraph of around 200 words.
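In the source this takes the simple form (the text shown is a placeholder):
\small\begin{verbatim}
\begin{abstract}
A single self-contained paragraph of around 200 words,
with no references to figures, tables, equations or
bibliographic references.
\end{abstract}
\end{verbatim}\normalsize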
\subsection{Subject classification numbers}
We no longer ask authors to supply Physics and Astronomy Classification System (PACS)
classification numbers. For submissions to {\it Nonlinearity}\/ we ask that you should
supply Mathematics Subject Classification (MSC) codes. MSC numbers are included after the abstract
using \verb"\ams{#1}".
The command
\verb"\submitto{#1}" can be inserted, where \verb"#1" is the journal name written in full or the appropriate control sequence as
given in table~\ref{jlab1}. This command is not essential to the running of the file and can be omitted.
\subsection{Keywords}
Keywords are required for all submissions. Authors should supply a minimum of three (maximum seven) keywords appropriate to their article as a new paragraph starting \verb"\noindent{\it Keywords\/}:" after the end of the abstract.
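For instance (the keywords themselves are placeholders):
\small\begin{verbatim}
\noindent{\it Keywords\/}: quantum billiards, random matrix
theory, semiclassical methods
\end{verbatim}\normalsize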
\subsection{Making a separate title page}
To keep the header material on a separate page from the
body of the text insert \verb"\maketitle" (or \verb"\newpage") before the start of the text.
If \verb"\maketitle" is not included the text of the
article will start immediately after the abstract.
\section{The text}
\subsection{Sections, subsections and subsubsections}
The text of articles may be divided into sections, subsections and, where necessary,
subsubsections. To start a new section, end the previous paragraph and
then include \verb"\section" followed by the section heading within braces.
Numbering of sections is done {\it automatically} in the headings:
sections will be numbered 1, 2, 3, etc, subsections will be numbered
2.1, 2.2, 3.1, etc, and subsubsections will be numbered 2.3.1, 2.3.2,
etc. Cross references to other sections in the text should, where
possible, be made using
labels (see section~\ref{xrefs}) but can also
be made manually. See section~\ref{eqnum} for information on the numbering of displayed equations. Subsections and subsubsections are
similar to sections but
the commands are \verb"\subsection" and \verb"\subsubsection" respectively.
Sections have a bold heading, subsections an italic heading and
subsubsections an italic heading with the text following on directly.
\small\begin{verbatim}
\section{This is the section title}
\subsection{This is the subsection title}
\end{verbatim}\normalsize
The first section is normally an introduction, which should state clearly
the object of the work, its scope and the main advances reported, with
brief references to relevant results by other workers. In long papers it is
helpful to indicate the way in which the paper is arranged and the results
presented.
Footnotes should be avoided whenever possible and can often be included in the text as phrases or sentences in parentheses. If required, they should be used only for brief notes that do not fit conveniently into the text. The use of
displayed mathematics in footnotes should be avoided wherever possible and no equations within a footnote should be numbered.
The standard \LaTeX\ macro \verb"\footnote" should be used. Note that in \verb"iopart.cls" the \verb"\footnote" command
produces footnotes indexed by a variety of different symbols,
whereas in published articles we use numbered footnotes. This
is not a problem: we will convert symbol-indexed footnotes to numbered ones during the production process.
\subsection{Acknowledgments}
Authors wishing to acknowledge assistance or encouragement from
colleagues, special work by technical staff or financial support from
organizations should do so in an unnumbered `Acknowledgments' section
immediately following the last numbered section of the paper. In \verb"iopart.cls" the
command \verb"\ack" sets the acknowledgments heading as an unnumbered
section.
Please ensure that you include all of the sources of funding and the funding contract reference numbers that you are contractually obliged to acknowledge. We often receive requests to add such information very late in the production process, or even after the article is published, and we cannot always do this. Please collect all of the necessary information from your co-authors and sponsors as early as possible.
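A sketch of such a section, with hypothetical names and grant numbers, is:
\small\begin{verbatim}
\ack
We thank C Colleague for helpful discussions. This work was
supported by the Example Science Foundation under grant
No~EX-12345.
\end{verbatim}\normalsize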
\subsection{Appendices}
Technical detail that it is necessary to include, but that interrupts
the flow of the article, may be consigned to an appendix.
Any appendices should be included at the end of the main text of the paper, after the acknowledgments section (if any) but before the reference list.
If there are
two or more appendices they should be called Appendix A, Appendix B, etc.
Numbered equations will be in the form (A.1), (A.2), etc,
figures will appear as figure A1, figure B1, etc and tables as table A1,
table B1, etc.
The command \verb"\appendix" is used to signify the start of the appendices.
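Assuming the standard \LaTeX\ \verb"\appendix" command is used to start the appendices (as in most article classes), the structure could be sketched as:
\small\begin{verbatim}
% ... end of main text and acknowledgments ...
\appendix
\section{Technical details}   % appears as Appendix A
...
% references follow the appendices
\end{verbatim}\normalsize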
\section{Introduction}
\label{sec:Introduction}
During the last decade, a swiftly expanding field in theoretical physics has formed by combining concepts and methods of quantum chaos with quantum many-body physics. This emerging research, often now subsumed under the designation {\bf \em many-body quantum chaos}, resides at the interfaces of statistical physics, quantum dynamics in atomic and condensed matter physics, and cosmology, harking back to quantum chaos’ roots in nuclear many-body physics. Systems from all these distinctly different areas of many-body physics often have in common that they reside at the semiclassical border between many-body classical chaos and quantum dynamics. Furthermore, semiclassical theory applies to the dynamics of individual systems, making it at least theoretically possible to understand system specific properties and deviations from universal behaviors. This feature stands in stark contrast to the inherently statistical nature of ensemble theories, i.e.~theory of disordered systems and random matrix theory. Thus, here we will view and review many-body quantum dynamics and chaos through the lens of a semiclassical perspective. However, we begin with a brief account of the roots, the formation, and evolution of quantum chaos during the past half century before turning to the main focus.
\subsection{Quantum chaos: from many particles to one and back to many}
An integral part of Fritz Haake's scientific life was dedicated to the study of quantum chaos~\cite{Haake10}, i.e.~the manifestations of classical chaos in quantum mechanics, the earliest hints of which were presciently noted by Einstein already in 1917~\cite{Einstein17}. The first description of a quantum chaotic system may well have been Bohr's compound nucleus~\cite{Bohr36}, but the subject really came into being with Wigner's introduction of random matrix ensembles~\cite{Wigner55, Wigner58} and subsequent work by several others~\cite{PorterBook} aimed at understanding the properties of slow neutron resonances. It successfully accounted for spectral and resonance statistical properties~\cite{Brody81, Bohigas83}, such as level repulsion, spectral rigidity, the Porter-Thomas distribution~\cite{Porter56}, and Ericson fluctuations~\cite{Ericson60, Verbaarschot85}. Random matrix theory (RMT), with its focus on statistical properties of the system itself, represented a paradigm shift and became one of the methodological pillars of quantum chaos studies~\cite{Bohigas88, Guhr98, Beenakker97RMP, Verbaarschot00, StockmannBook, Mehta04, Haake10}. Since its origins, it has seen applications in an extraordinarily broad range of fields, well beyond the boundaries of physics.
The origin of quantum chaos in the context of strongly interacting nuclear many-body systems did not straightforwardly lend itself to a concrete association with classically chaotic dynamics. This association was pioneered in Gutzwiller's work~\cite{Gutzwiller71} “{\em Periodic Orbits and Classical Quantization Conditions}”, the culmination of four seminal papers in the {\em Journal of Mathematical Physics}. Using semiclassical theory he expressed a quantum spectrum as a sum over unstable classical periodic orbits, {\em i.e.}\ a semiclassical trace formula. He thereby provided an essential part of a critical ``missing link'' connecting the classical and quantum mechanics of non-integrable single-particle systems. That turned out to become a driving force for subsequent research and the semiclassical mechanics of chaotic dynamical systems became a second central pillar of quantum chaos studies. A comprehensive overview of the corresponding literature until the end of the century, with a focus on the post-modern period~\cite{Heller93s} of the prior 30 years, is found in the Resource Letter ICQM-1~\cite{Gutzwiller98}.
During those decades quantum chaos semiclassical research focused predominantly on single-particle aspects of quantum mechanics. One major thrust addressed the many basic theoretical questions and issues, such as: convergence properties of the trace formula, how to enumerate and organize unstable periodic orbits and thereby how to approximately compute individual energy levels~\cite{Gutzwiller90, Cvitanovic89, Berry90, Artuso90a, Cvitanovic91, Ezra91, Tanner91, Abarbanel91, Sieber91}; how accurate is the semiclassical theory of chaos and on what time scales can it be applied~\cite{Oconnor92, Sepulveda92, Tomsovic93}; and what is the nature of chaotic eigenstates~\cite{Shnirelman74,Berry77, Voros79, McDonaldThesis, Heller84, Bogomolny88, Muller97prl, Tomsovic93b, Aurich99, Urbina03, Heller07, Lakshminarayan08}, for a review see \cite{Urbina13}. Furthermore, there was considerable focus on conceptually simple chaotic systems, such as quantized maps, quantum graphs, and quantum billiards, in attempts to reveal the essence of the interplay between classical and quantum mechanics~\cite{Fishman82, Saraceno90, Shepelyansky83, Bohigas84, Balazs89, Keating91, Tomsovic93, Tanner00, Kottos99, Gnutzmann06}. Further directions comprised extensions to such conceptually simple systems with coexisting chaotic and integrable phase space regions~\cite{Berry84, Bohigas93, Ullmo96, Schomerus97} and tunneling phenomena therein \cite{Lin90, Grossmann91, Tomsovic93b, Casati94, Shudo95, Creagh96, Brodier01}. Correspondingly, early experiments were performed in atomic physics on the hydrogen atom in microwave cavities~\cite{Bayfield74, Jensen89, Koch95}, in strong magnetic fields~\cite{Holle88, Iu91}, and on hydrogen-type atoms in strong electrical fields~\cite{Eichmann88}, accompanied by theoretical analysis of the low frequency properties of the quantum spectra and photo cross sections by means of Gutzwiller's trace formula~\cite{Du88a, Delande86, Wintgen87, Friedrich89}.
In a second broad class of experiments, chaos for classical waves had been investigated in microwave cavities~\cite{Stockmann90, Graf92, Sridhar92, So95}, in acoustic resonators~\cite{Weaver89, Ellegaard95, Tanner07}, and in optical cavities~\cite{Gmachl98}. In a parallel development, quantum mechanical implications of irregular classical scattering had been studied, both from the semiclassical~\cite{Blumel88, Gaspard89, Blumel90, Jensen91, Lewenkopf91} and RMT \cite{Fyodorov97,Brouwer97} perspective, with links to Ericson fluctuations~\cite{Ericson60}; for reviews see \cite{LesHouches91, Gaspard14}.
Another thrust considered effective single-particle models of many-body condensed-matter systems at mesoscopic scales. Starting in the nineties, high-mobility semiconductor-based nanostructures exhibiting ballistic electron dynamics \cite{Beenakker91a} had been moving into a focus of quantum chaos and semiclassical physics.
Electron quantum billiards could be directly experimentally designed with increasing precision by means of two-dimensional nanostructures, and specifically devised for the semiclassical regime where the Fermi wavelength of the charge carriers was much smaller than the linear system sizes~\cite{Jalabert00,Richter00}. This research area comprised, going hand in hand, experimental and theoretical studies of imprints of chaos in ballistic transport and spectral
properties:
Based on Landauer-Büttiker-type equations expressing the conductance through the scattering matrix~\cite{Landauer70,Buttiker85}, magneto-transport through chaotic conductors acting as quantum billiards was investigated experimentally~\cite{Marcus92,Chang94,Sachrajda98}, using semiclassical tools~\cite{Jalabert90,Baranger91,Baranger93,Ketzmerick96,Richter02} and by means of RMT~\cite{Dorokhov82, Mello88, Mello91,Beenakker93,Baranger94,Jalabert94,Brouwer99,Beenakker97RMP}.
In parallel, quantum-chaotic transport through antidot superlattices, which can be viewed as realizations of the Sinai billiard acting as ``electron pinball machines'', was studied experimentally and theoretically~\cite{Weiss91,Fleischmann92,Weiss93,Yevtushenko00}, including a semiclassical periodic-orbit approach in the framework of the Kubo conductivity~\cite{Richter95,Hackenbroich95}.
Finally, specifically devised quantum wells provided a further platform for experimentally investigating chaotic quantum transport~\cite{Fromhold94}.
Manifestations of chaos in the spectral properties of ballistic quantum dots were probed in particular through Coulomb blockade phenomena~\cite{Jalabert92,Prigodin93,Simmel97,Blanter97,Narimanov99,Alhassid00RMP,Aleiner02PhysRep}, through their orbital magnetic~\cite{Levy93,Mailly93,Oppen94,Ullmo95,Richter96,McCann98} and optical~\cite{Mehlig98} response, as well as in terms of proximity effects via coupling to superconductors in so-called Andreev billiards~\cite{Kosztin95,Melsen96,Ihra01,Vavilov01}.
Furthermore, shell- and supershell structures in atomic clusters were amenable to semiclassical approaches~\cite{Brack93,Magner01}.
Already prior to probing quantum chaos in ballistic mesoscopic quantum systems, in the ``bronze age'' of mesoscopics, universal quantum behaviors were predicted and measured in disordered media for quantities such as weak localization~\cite{Altshuler80,Lee85a}, conductance fluctuations~\cite{Lee85a, Altshuler85}, Aharonov-Bohm effects \cite{Washburn86,Webb88} and persistent currents \cite{Levy90,Chandrasekhar91,Oppen91,Altshuler91}, with close links to the theory of disordered media~\cite{Anderson58, Wegner79, Efetov82a, Dorokhov82, Mello88,Mirlin00PhysRep}.
In that context a great deal of work was also dedicated to the strong localization properties of eigenstates~\cite{Fyodorov94}, and in particular, the transition to extended states and the metal-insulator transition~\cite{Mirlin96, Evers08}. Hence theoretical aspects of criticality and universality in non-interacting disordered systems developed into a third pillar of quantum chaos studies.
Although these three pillars, RMT, semiclassical theory, and the theory of disordered systems, developed quite independently, and although only the second one was related directly to classical chaos, their deep interconnections began to be recognized during this same period of time. Aggregating them under a single umbrella term, i.e.~quantum chaos, recognized and evoked their broader meaning. An early indicator of chaos as a profound unifying concept emerged with the conjecture of Bohigas, Giannoni, and Schmit~\cite{Bohigas84}, which posited that the critical rationale for the applicability and universality of RMT was exponentially unstable chaotic dynamics. This is as opposed to the idea that RMT required complexity in the sense of a strongly interacting many-body system. In fact, even a very simple, single-particle system, if chaotic, would possess RMT statistics. Beyond extensive numerical verifications, Hannay and Ozorio de Almeida invoked a uniformity of phase space exploration of unstable periodic orbits to derive a sum rule~\cite{Hannay84}, which Berry relied on to derive semiclassically the spectral rigidity found in RMT, related to the `ramp' in the spectral form factor~\cite{Berry85}. Thus, there had to exist an intimate link between the two pillars, RMT and the semiclassical dynamics of chaos.
This missing link was provided by periodic orbit correlations found by Sieber and one of the authors~\cite{Sieber01} to reveal chaotic dynamics as the true origin of RMT behavior. They thereby computed the leading contribution to the spectral form factor beyond the ramp.
More recently, exploiting such periodic-orbit correlations, Fritz Haake and Peter Braun, to whom this review is dedicated, and co-authors, contributed significantly towards a proof of the BGS conjecture~\cite{Heusler07}.
Section~\ref{sec:foundations} outlines how these semiclassical single-particle approaches are lifted to many-body dynamics providing a key to understanding RMT universality also in the many-particle context.
At about the same time in the eighties, the quantized kicked rotor, an extremely simple model of a chaotic quantum system, was mapped in a momentum representation onto a kind of one-dimensional Anderson model for a disordered system~\cite{Fishman82} and later realized experimentally~\cite{Moore95}. The only distinction boiled down to whether a deterministic quantity behaving as a pseudo-random number could be replaced by a random number. This created a direct link between the semiclassical mechanics of chaos, and the one-dimensional Lloyd variation of the Anderson model~\cite{Anderson58, Lloyd69}, i.e.~between the second and third pillars. Further work followed, such as~\cite{Agam95}.
Finally, supersymmetry~\cite{Efetov82a, Efetov82b, Efetov83} directly linked the nonlinear $\sigma$-models for diffusive systems~\cite{Wegner79, Efetov82a} with RMT, thus providing a strong link between the first and third pillars. Attempts have been made to extend nonlinear $\sigma$-models to ballistic systems, and create a more direct connection specifically to chaotic dynamics~\cite{Muzykantskii95, Agam95, Andreev96, Andreev96b}, for a recent review see \cite{Altland15}. However, while there are close connections between supersymmetric path integrals and semiclassical periodic-orbit expansions~\cite{Haake10}, such models are intrinsically ensemble models of stochastic systems and any properties linked to the deterministic nature of individual chaotic dynamics, especially models of very simple single-particle systems or quantum maps, are not naturally built in~\cite{Andreev96}. Semiclassical approaches, especially for effects that can be related to short and intermediate time scale dynamics of deterministic (and chaotic) systems, naturally incorporate system specific properties and retain an appreciable advantage for the description of such effects.
The strong localization properties of the aforementioned kicked rotor eigenstates, a single-particle system with chaotic (diffusive) classical dynamics, were a form of strong (Anderson) localization.
However, there are also weaker forms of localization properties, related to the underlying classical dynamics.
A first example, termed {\it dynamical localization}, arises in which the existence of classical barriers to transport introduces time scales in a system's dynamics. The quantum manifestations of these time scales can be seen both in the non-ergodic long time behaviors of wave packets initially localized behind such barriers and in the properties of quantum eigenfunctions~\cite{Radons88, Geisel89, Bohigas93}. The quantum scaling for the strength of the localization implied by a transport barrier is given by the classical flux $\Phi$ flowing from one phase space region into the other compared with the appropriate power of $\hbar$ ($\Phi/\hbar^{L-1} \lesssim 1 $ is the localizing regime)~\cite{Bohigas93, Michler12}. This is again a weak localizing effect compared with strong (Anderson) localization~\cite{Anderson58}, but can be thought of as a kind of precursor in the sense that a series of transport barriers, often related to a sequence of resonances in systems with few degrees of freedom, leads to diffusive dynamics, even if individually $\Phi/\hbar^{L-1} \gtrsim 1 $~\cite{Dana89}.
A second weaker form of localization was deduced by considering the closest analogy in quantum mechanics to following a chaotic trajectory, i.e.~by considering the evolution of an initial minimum uncertainty wave packet centered on a trajectory's initial conditions. The Wigner transform~\cite{Wigner32} of the wave packet would generate a Gaussian density of phase points in the phase space vicinity of the initial conditions whose evolution would roughly be determined by the initial condition according to the classical equations of motion up to the so-called Ehrenfest time $\tau_{E}$~\cite{Ehrenfest27}, see Fig.~\ref{fig:Ehrenfest}. Due to the exponential sensitivity to initial conditions this time scale would be logarithmically short in terms of a characteristic action $S$ divided by $\hbar$, more precisely $\tau_{E} = (1/\mu)\ln(S/\hbar)$, with Lyapunov exponent $\mu$~\cite{Berman78, Berry79b, Chirikov81}.
$\tau_{E}$ also formally links classical and quantum effects as it incorporates both the Lyapunov exponent and Planck's constant.
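As a rough orientation (the numbers here are illustrative only), the logarithm makes $\tau_{E}$ remarkably short: for a semiclassical parameter $S/\hbar = e^{10} \approx 2\times 10^{4}$ one finds
\begin{equation*}
\tau_{E} \;=\; \frac{1}{\mu}\,\ln\!\left(\frac{S}{\hbar}\right) \;\approx\; \frac{10}{\mu}\,,
\end{equation*}
i.e.~only about ten Lyapunov times; decreasing $\hbar$ by a further factor of $e^{10}$ merely doubles $\tau_{E}$.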
A first example arose by analyzing the effects of centering a wave packet somewhere on a short unstable periodic orbit with a period, $\tau$.
In the situation of $2\pi \gtrsim \mu \tau$, for at least some of the eigenstates there would be excess intensity in the neighborhood of the periodic orbit, coined `scarring'~\cite{Heller84}, relative to statistical expectations of eigenstates behaving randomly~\cite{Berry77, Voros79, Bogomolny88}.
Scars in a wider sense have recently attracted attention in the many-body context, as we will review below.
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{figures/DD-billiard-evolution.png}
\caption{
{\bf Dynamics in a chaotic quantum billiard}:
A wave packet with probability density marked in blue has its initial conditions centered on an unstable classical trajectory in a chaotic billiard. For a brief evolution time, it follows the orbit. Beyond the Ehrenfest time $\tau_{E}$, it rapidly becomes characterized by wave-chaos-dominated interference patterns (time steps $t$ in units of $\tau_{E}$; courtesy of A.~Goussev).
}
\label{fig:Ehrenfest}
\end{figure}
Since about 2000, the field of single-particle quantum chaos has, on the one hand, further evolved along the above lines of research and, on the other hand, branched out into various expanding subfields. Here we mention, by way of example, only some of the major developments, in particular those relevant for the interplay of quantum chaos and many-body effects.
The Ehrenfest time introduced above has turned out to be of tremendous importance for the quantum dynamics of chaotic systems, since it separates quantum propagation at short time scales determined by {\em one} dominant (unstable) classical trajectory from subsequent times governed by wave interference, semiclassically arising from wave contributions from {\em many} paths, as seen in a detailed analysis ahead in the many-body context. Naturally, $\tau_{E}$-imprints should appear most directly in the time domain for quantities such as the spectral form factor~\cite{Brouwer06b}, the emergence of random-wave-like interferences in the evolution of minimum uncertainty wave packets~\cite{Tomsovic93}, the quantum decay in open systems~\cite{Schomerus04,Waltner08}, or the fidelity decay~\cite{Gutkin10}. However, $\tau_{E}$-effects may also appear for stationary observables involving a further time integration. Indeed, $\tau_{E}$-signatures in weak localization were theoretically discussed~\cite{Aleiner96prb} and later experimentally observed~\cite{Yevtushenko00} for electron billiards with randomly placed antidots. Subsequently, various works in the mesoscopic context considered $\tau_{E}$-effects, e.g.\ in quantum corrections to the conductance~\cite{Adagideli03, Rahav06,Jacquod06,Brouwer07,Waltner10} and the density of states of Andreev billiards~\cite{Silvestrov03b,Kuipers10}.
The Ehrenfest time has reappeared as `scrambling time' in a new guise in the many-body context.
Many of the cited works on Ehrenfest phenomena in quantum chaos use semiclassical theory based on classical orbit correlations. The development of this theory and its application to various single-particle observables, thereby linking semiclassical and random matrix theories, has been a major achievement in the field since 2000; for related reviews see~\cite{Haake10,Waltner10}. Since these methods feature prominently for genuine many-body quantum interference and chaos, e.g.~Sec.~\ref{sec:OTOC}, they are reviewed in Sec.~\ref{sec:specstat}.
In a parallel development, echoes, which generally measure how sensitively a quantum system reacts to perturbations, have received particular attention in quantum chaos during the last twenty years. After early work by Peres~\cite{Peres84} on the stability of quantum motion, Jalabert and Pastawski~\cite{Jalabert01} showed for a quantum-chaotic single-particle system that a generalization of Hahn's spin echo,
the so-called Loschmidt echo also known as fidelity,
decays at longer times with a rate given by the classical Lyapunov exponent, independent of the external perturbation. The subsequent extensive literature on this topic is reviewed, e.g., in~\cite{Gorin06,Goussev12}.
We will consider (Loschmidt-)echoes in more detail when we discuss their many-particle relatives, out-of-time-order correlators, in Sec.~\ref{sec:OTOC}.
In different branches of atomic physics, quantum signatures of chaos have been revealed in greatly refined measurements after 2000. For instance,
chaotic and regular dynamics was directly observed in atom-optics billiards~\cite{Friedman02}, and
an ultracold gas of interacting erbium atoms was shown to exhibit many Fano–Feshbach resonances with nearest-neighbour spacings as expected from random matrix theory \cite{Frisch14}.
The branch of quantum chaos linked to optics, namely wave phenomena in complex media at mesoscopic scales giving rise to diffusive or chaotic ray dynamics, has experienced a particular boost, with dedicated experiments as the driving force, and has expanded into various subfields:
Following on the activities that started in the nineties~\cite{Gmachl98}, open microdisk cavities turned out to be ideal models for studying electromagnetic wave chaos. Besides investigating chaos-controlled directed emission and lasing~\cite{Wiersig08,Yan09},
chaos in whispering gallery microresonators helps to transform the momentum of broadband light~\cite{Jiang17}. Moreover,
dielectric microcavities also act as model systems for `time-reversed lasing'~\cite{Wan11} and as a platform for exploiting exceptional points in lasing and, more generally, in non-Hermitian physics~\cite{Cao15}.
Accessibility of the information stored in the measured scattering matrix of a complex optical medium~\cite{Popov10} allows for directly studying and controlling light propagation in disordered systems, just one example of the emerging subfield of wave front shaping in complex media~\cite{Rotter17}.
In condensed matter physics, quantum chaos studies have been extended, among other directions, towards relativistic effects:
Spin-orbit coupling was implemented into periodic-orbit trace formulas~\cite{Bolte98,Pletyukov02}. With the rising interest in semiconductor spintronics, the role of spin-orbit induced spin relaxation in transport, i.e.\ weak antilocalization, was considered both experimentally~\cite{Zumbuhl02} and theoretically~\cite{Cremers03,Zaitsev05}.
With the advent of pseudo-relativistic effects in graphene and topological matter, semiclassical transport expressions and Gutzwiller-type trace formulas have been developed for Dirac-type charge carriers in ballistic graphene cavities~\cite{Wurm09,Bardarson09,Wurm11,Yang11} and microwave billiards mimicking graphene~\cite{Bittner10}, generalizing an early proposal on neutrino billiards~\cite{Berry87}.
Furthermore, in low-dimensional conductors branching of electron flow has become a new subfield~\cite{Heller,Kaplan,Fleischmann,??}.
Moreover, for closed systems,
measurements of mesoscopic persistent currents have been performed, again with greatly increased precision~\cite{Bleszynski09}, confirming prior semiclassical analysis.
Semiclassics also helped to understand observed shell effects in superconducting nanoparticles~\cite{Bose10}.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figures/MBQC-overview.pdf}
\caption{
{\bf The MBQC interface}:
to be adjusted / discussed
}
\label{fig:MBQCD}
\end{figure}
\KR{further relevant directions of single-particle qchaos, 2000-2020?, U Kuhl?}
More recently, research is shifting dramatically back toward the quantum many-body physics of interacting systems, due in large part to the extraordinary advances in experimental techniques for ultracold systems~\cite{Bloch08} and quantum materials~\cite{Keimer17}. They allow for building, controlling, and monitoring synthetic many-body systems with strongly interacting quantum degrees of freedom. Especially during the last decade, a novel cross-disciplinary field is forming and evolving by merging knowledge from 50 years of quantum chaos with
concepts from quantum statistical physics and quantum many-body dynamics, often nowadays referred to as {\bf \em many-body quantum chaos}. This overarching research area establishes novel interfaces and areas of common interest among
the traditional fields of cold atom and condensed matter physics, particle physics, cosmology, statistical physics and quantum information, see Fig.~\ref{fig:MBQCD}.
\KR{
missing:
positioning and embedding of MBQC into adjacent fields.
We should decide on Tuesday upon how to structure this missing part.
Subtopics could be:
MBL and quantum thermalization
OTOCs and scrambling
SYK physics
links to cosmology / quantum gravity
...
from our Miscellaneous Sec. that we more or less agreed to absorb into other parts:
MB scars
MB states, wave functions,
dualites: time-particle number; Boris; T Prosen / B Bertini papers
spatio-temporal maps: Boris; Predrag}
\KR{shall we try to "define" Quantum chaos?
something like:}
Looking back at more than 50 years of quantum chaos, it is probably impossible to give a clear-cut and generally accepted definition of this term. In a narrow sense, quantum chaos comprises (many-body) quantum systems possessing a classical limit with fully chaotic dynamics, thereby featuring RMT-type spectral statistics. In a slightly broader sense, quantum systems without a classical limit (e.g.\ spin-1/2 chains or quantum networks) that still show RMT universality can certainly also be considered quantum-chaotic. More generally, however, the far larger number of systems that do not obey RMT behavior but exhibit complex, though possibly non-ergodic, dynamics is usually also subsumed under the broader roof of (many-body) quantum chaos and dynamics with all its facets.
In the following we will consider, among the latter, the broad classes of systems from the different areas shown in Fig.~\ref{fig:MBQCD} that possess a classical limit and find themselves in the semiclassical regime between many-body classical and quantum physics, in fact in a two-fold way:
Far-out-of-equilibrium quantum dynamics involves high-energy excitations, associated with the usual short-wavelength limit where wave mechanics approaches the limit of classical particles;
alternatively, the thermodynamic limit of large particle numbers $N$, where quantum fields pass into nonlinear waves, can also be regarded as semiclassical, governed by an effective Planck constant $1/N$. These two complementary crossover regimes in state-of-the-art many-body physics are experimentally relevant and theoretically particularly challenging.
\subsection{Outline of this review:
from short to late time many-body dynamics}
This review will focus on the development of a proper semiclassical theory of interacting many-body systems and its role in addressing various problems in many-body quantum chaos. The theory's strength is that it applies broadly to many-body quantum chaotic systems, including in the statistical RMT-like sense, while also being able to address system-specific properties of deterministic dynamical systems, and even systems that are not fully chaotic or are integrable. This is particularly important in investigations in which a system is not behaving in a universal manner: perhaps it is not quantum thermalizing or generating entanglement as expected, or is exhibiting some other curious behavior. A semiclassical approach has the promise to shed light on such situations.
The origin of a modern semiclassical theory of interacting many-body systems is quite recent.
Indeed, the Weyl formula~\cite{Weylxx} for the
average single-particle density of states, based on quantum propagation on the shortest time scales, has only recently been generalized to many indistinguishable noninteracting~\cite{Hummel13} and interacting~\cite{Hummel19} particles.
Moreover, the many-body analogue of the single-particle van Vleck--Gutzwiller propagator~\cite{Vanvleck28, Gutzwiller71} has very recently been developed for both fermionic~\cite{Engl14} and bosonic~\cite{Engl14b} interacting systems, along with a saddle point approximation to the coherent state path integral for interacting bosonic systems~\cite{Tomsovic18}, the analogue of generalized Gaussian wave packet dynamics (GGWPD)~\cite{Huber88} and of one form of a complex, time-dependent WKB theory~\cite{Maslov81}. Furthermore, a trace formula over periodic mean-field solutions for an interacting bosonic system has now also been developed~\cite{Engl15,Dubertrand16}, the analogue of the Gutzwiller trace formula over periodic orbits in single-particle systems~\cite{Gutzwiller71}.
The review is structured as follows:
In Sec.~\ref{sec:foundations} we first introduce notions of semiclassical physics and briefly review the earlier semiclassical theory of (quantum chaotic) single- and few-particle systems, including the semiclassical path to random matrix universality. After a glimpse at earlier semiclassical approaches to many-body systems, we
lay the foundations of an advanced semiclassical theory of many-body quantum fields and chaos.
In Sec.~\ref{sec:short-time} we review asymptotic configuration space approaches to the mean density of states, based on short-time dynamics, and summarize recent advances towards a multi-dimensional Weyl formula for confined interacting fermionic and bosonic many-particle systems.
Section~\ref{sec:post-Ehrenfest} then addresses spectral and dynamical many-body properties governed by genuine quantum many-body interference at post-Ehrenfest time scales.
Based on representative echo-type observables, survival probabilities and OTOCs, we analyse the interplay between classical many-body chaos, quantum entanglement and scrambling of quantum information. Furthermore, we consider corresponding spectra using semiclassical tools.
We conclude this review in Sec.~\ref{sec:summary} with perspectives and future challenges.
\section{Foundations of Many-Body Semiclassical Theory}
\label{sec:foundations}
\subsection{Between classical and quantal: semiclassical regimes and limits}
\label{subsec:clas-quant}
What does the term `{\em semiclassical}' stand for?
Depending on the community and on the context, there exist various different meanings and notions.
We use `{\em semiclassical}' in the original sense, as it is used in quantum chaos, referring to physics in the crossover regime between classical and quantal.
Before elaborating on this in more detail we mention other notions of ``semiclassicality'' that should not be confused with our approach:
First, and very often, ``semiclassical'' is simply used synonymously with ``classical''.
In particular in condensed matter physics, the purely classical motion of a particle with energy $E(k)$ given by a band structure, or in a potential landscape obtained quantum mechanically, is often referred to as ``semiclassical''~\cite{Ashcroft}.
Second, the term may denote the description of a hybrid system composed of a quantum and a classical sector, such as an atom coupled to radiation described in terms of a classical field.
Correspondingly, in semiclassical gravity matter fields are treated quantum mechanically, while the gravitational field remains classical.
Third, on a similar footing, it may refer to quantum systems that acquire certain degrees of classicality through interaction with a (classical) environment.
Fourth, and relevant in our context, the truncated Wigner approximation (TWA) is often referred to as semiclassical: in TWA, the time evolution is described by the classical equations of motion, while the initial conditions for the classical field are distributed according to the Wigner transform of the initial density matrix~\cite{Polkovnikov-review}. In this review we will refer to the TWA as classical, in order to clearly distinguish it from semiclassical theory taking care of many-body quantum interference.
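As a minimal numerical illustration of the TWA prescription just described (a toy sketch with illustrative parameters, not taken from the literature), one can sample phase-space points from the Wigner function of a Gaussian state and propagate each sample classically; for a harmonic oscillator the resulting TWA mean position coincides with the exact quantum expectation value:

```python
import numpy as np

rng = np.random.default_rng(1)
hbar, q0, p0, t = 1.0, 1.0, 0.5, 2.0   # illustrative parameter values
n_samples = 200_000

# Wigner function of a coherent state: Gaussian with variances hbar/2
q = rng.normal(q0, np.sqrt(hbar / 2), n_samples)
p = rng.normal(p0, np.sqrt(hbar / 2), n_samples)

# TWA: propagate every sample with the CLASSICAL equations of motion
# (harmonic oscillator, m = omega = 1)
q_t = q * np.cos(t) + p * np.sin(t)

# Monte Carlo estimate of <q(t)> versus the exact (Ehrenfest) value
twa_mean = q_t.mean()
exact_mean = q0 * np.cos(t) + p0 * np.sin(t)
print(twa_mean, exact_mean)
```

For the harmonic oscillator the agreement is exact up to sampling noise; for nonlinear dynamics the TWA misses precisely the many-body interference terms that the semiclassical theory of this review retains.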
Fifth?? ...
The semiclassical theory put forward in this review is formally based on an (effective) $\hbar$-expansion of quantum mechanical many-body Feynman propagators.
Resulting expressions, while being based on classical paths, thereby still inherently comprise interference-based processes and mechanisms.
These are indispensable in order to answer what happens to chaotic quantum systems in the limit where $\hbar$ becomes small but non-zero.
Conceptually this follows a strategy to approach a classical limit starting from fundamental quantum mechanics, instead of, vice versa, thinking classically and adding quantum effects.
If a single-particle quantum system possesses a classical counterpart, the corresponding semiclassical limit is well defined: the often-cited limit $\hbar \rightarrow 0$ (with Planck constant $h = 2\pi \hbar$) refers to an asymptotic expansion in the dimensionless quantity $\hbar/S \ll 1$, where $S=\int p\, dq$ is the typical classical action of the particle with momentum $p$, which grows with energy. Accordingly, the semiclassical limit usually refers to the regime of high excitation energies and corresponds, by means of the de Broglie relation $\lambda = h/p$, to the limit of small wavelengths $\lambda$. This limit also applies to systems with more than one particle.
In the scheme of Fig.~\ref{fig:sc-limits} this short-wavelength limit is associated with the direction along the horizontal axis where wave mechanics approaches the limit of usually non-linear dynamics of classical particles (at fixed particle number $N$).
Besides this common notion of (semi)classicality, in quantum field theory there is the complementary thermodynamic limit of large particle number $N$. Formally, this limit can also be regarded as semiclassical, now with effective Planck constant $\hbar_{eff} = 1/N \ll 1$, where the wave functions are replaced by field operators in ``second quantization''.
In the macroscopic limiting case at $N=\infty$, the $N$-particle equations for the quantum fields pass into nonlinear wave equations, corresponding to the direction along the vertical axis in Fig.~\ref{fig:sc-limits}.
From the viewpoint of quantum field theory, these wave equations, as well as the Schrödinger equation, thus appear to be `classical'. For instance, in the large-$N$ limit, systems of interacting bosonic cold atoms are described by the Gross-Pitaevskii equation~\cite{??}, a nonlinear Schrödinger equation.
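To make the role of $1/N$ explicit, consider as a standard illustration (notation ours; a Bose-Hubbard-type lattice model with $N$ bosons is assumed) the rescaled mode operators $\hat b_i = \hat a_i/\sqrt{N}$, for which
\begin{equation*}
[\hat b_i^{\phantom{\dagger}}, \hat b_j^\dagger] = \frac{\delta_{ij}}{N} = \hbar_{eff}\,\delta_{ij} \, .
\end{equation*}
For $N \to \infty$ the rescaled fields commute and obey classical nonlinear wave equations, while for large but finite $N$ the commutator plays precisely the role of a small effective Planck constant.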
Note that in both asymptotic limits the quantum-classical transition is singular: while finite values $\hbar/S \ll 1$ and $\hbar_{eff} = 1/N \ll 1$
imply massive interference, one obtains classical physics for
$\hbar/S \equiv 0$ and $\hbar_{eff} \equiv 0$.
Such disruptive asymptotic behavior is often the cause of interesting physics.
The general semiclassical theory presented below provides the leading-order (in $\hbar$ and $\hbar_{eff}$) quantum mechanical contributions, respectively:
the former dealing with wave interference and the latter with {\em genuine} quantum many-body interference, see below.
\begin{figure}
\centering
\includegraphics[width=0.3\linewidth]{figures/triple-slit.jpg}
\includegraphics[width=0.2\linewidth]{figures/3-particles-in-lattice-a.pdf}
\includegraphics[width=0.2\linewidth]{figures/3-particles-in-lattice-b.pdf}
\includegraphics[width=0.2\linewidth]{figures/3-particles-in-lattice-c.pdf}
\caption{
{\bf Single-particle wave versus multi-particle quantum interference} |
The wave function of a single particle arises at the triple slit (left) by interference of the amplitudes of three paths in configuration space. In a corresponding three-particle quantum system on a lattice (right), the Fock state $|111\rangle$ (bottom) evolves from the initial state $| 120\rangle$ (top) by quantum interference of the amplitudes of different common (mean-field) paths in Fock space (three of which are shown).
}
\label{fig:wave-quantum-interference}
\end{figure}
Wave interference is usually built into semiclassical propagators of one or more particles through sums over interfering amplitudes evolving along different classical trajectories in configuration space.
As we will outline in Sec.~\ref{sec:SC-many-body-theory}, many-particle propagators, and quantum effects derived from them, can again be formally described (for $\hbar_{eff} = 1/N \ll 1$) by means of semiclassical sums over paths, but with a distinctly different meaning:
Instead of summing over orbits in configuration space, we now sum over common paths (modes) of many particles in Fock space.
This difference between such (single-particle) wave and genuine quantum interference is exemplified for three particles in Fig.~\ref{fig:wave-quantum-interference}. The interference pattern of a particle at the triple slit (left) is obtained by superimposing the amplitudes of three paths contributing to the wave function $\psi(\bf r)$ at the screen below. In contrast, three identical particles in a periodic potential (e.g., three atoms in a laser field, right panels) can be described quantum mechanically in Fock space by the number of particles at each lattice site, e.g., $|120\rangle$ for the initial state. The three panels on the right show three different interfering three-particle paths in Fock space from $| 120 \rangle$ to a final state $| 111 \rangle$. For many particles, interference occurs between, in principle, infinitely many paths in high-dimensional many-body Fock space.
These paths in Fock space are now, formally, different time-dependent solutions (modes) of the nonlinear wave equations in the classical limit $1/N = 0$ (in Fig.~\ref{fig:wave-quantum-interference} above). The nonlinearities due to interaction effects may cause these modes to be unstable or even chaotic as well. These ``classical'' modes imply many-body quantum chaos at the quantum field level, just as chaotic classical single-body dynamics leaves signatures in wave functions at the Schrödinger equation level (single-body quantum chaos). This, in a sense, raises the Gutzwiller semiclassical theory of the time evolution operator from the level of ``first quantization'' to that of ``second quantization''.
Furthermore, the time-dependent solutions of the nonlinear wave equations, i.e., the classical paths in many-body space, are precisely the mean-field solutions of the full many-body problem, which opens an interesting new view on the connection between entanglement, scrambling and quantum-chaotic many-body phenomena: a single mean-field state is not entangled. The interpretation of Eq.~(1) as a coherent sum over different mean-field modes implies that beyond $\tau_E$, where they become coupled through chaos, massive many-body interference between different chaotic mean-field modes generates the high entanglement in the many-body propagator: quantum chaos and high entanglement are intimately intertwined.
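The statement that a single mean-field (product) state carries no entanglement, while a coherent superposition of different mean-field states generically does, can be checked in a minimal two-mode example (a toy sketch in a truncated Fock basis, not the many-body propagator itself; all states and values are our illustration):

```python
import numpy as np

def entanglement_entropy(psi):
    """von Neumann entropy (in bits) of one mode for a two-mode pure
    state given as an amplitude matrix psi[n1, n2]."""
    s = np.linalg.svd(psi, compute_uv=False)   # Schmidt coefficients
    lam = s**2
    lam = lam[lam > 1e-12]
    return float(-np.sum(lam * np.log2(lam)))

dim = 3  # occupations 0, 1, 2 per mode

# A single product ("mean-field") state |1,1>
product = np.zeros((dim, dim))
product[1, 1] = 1.0

# An equal-weight superposition of two distinct product states,
# (|2,0> + |0,2>) / sqrt(2)
superpos = np.zeros((dim, dim))
superpos[2, 0] = superpos[0, 2] = 1.0 / np.sqrt(2)

print(entanglement_entropy(product))    # -> 0.0 : no entanglement
print(entanglement_entropy(superpos))   # -> 1.0 : one bit of entanglement
```

The design choice of representing the two-mode state as a matrix makes the Schmidt decomposition a plain singular value decomposition, which is how the entanglement of superposed product states becomes manifest.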
\subsection{Semiclassical Theory of single-particle systems}
\subsubsection{van Vleck + Gutzwiller}
ST: Since van Vleck is just a particular implementation of a more general time-dependent WKB theory, I will assume that the background for t-d WKB that is needed for the next section is here.
also Weyl term
dynamics vs. chaos
\subsubsection{Wave packet dynamics:}
\label{sec:ggwpd}
In the same year as the publication of his famous equation~\cite{Schrodinger26}, Schr\"odinger introduced minimum uncertainty Gaussian wave packets for purposes of furthering the understanding of the correspondence of quantum and classical dynamics~\cite{Schrodinger26b}. In addition to this critical benefit, especially with regards to post-$\tau_E$ quantum interference, wave packets (localized states) play extremely important roles in many physical systems such as a broad range of spectroscopic and pump-probe experiments~\cite{Heller81, Alber91, Gruebele92}, femto-chemistry~\cite{Zewail00}, attosecond physics~\cite{Agostini04}, driven cold atoms~\cite{Bakman15}, electrons in strong fields~\cite{Zagoya14}, and fidelity studies~\cite{Jalabert01, Jacquod01, Cerruti02}.
From multiple perspectives, the quantum state of a few-particle system that most closely corresponds to a point in phase space for the corresponding classical system is a minimum uncertainty Gaussian wave packet. More precisely, such a quantum state most closely corresponds to a Liouvillian density of classical phase points whose Gaussian weighting is given by the wave packet's Wigner transform, and which corresponds to a phase space volume of $(2\pi\hbar)^d$ where $d$ is the number of degrees of freedom. Prepared as the initial state of some quantum system, its propagation parallels that of the Liouvillian density up to an Ehrenfest time scale, $\tau_E$. In fact, linearizing the dynamics in the neighborhood of the central trajectory leads semiclassically to so-called linearized wave packet dynamics~\cite{Heller75} whose evolving Wigner transform matches the propagating Liouvillian density up to $\tau_E$, beyond which they necessarily diverge.
This well known, highly useful semiclassical approximation~\cite{Heller81b} portends a more powerful, comprehensive semiclassical approximation, generalized Gaussian wave packet dynamics (GGWPD)~\cite{Huber87}. Huber et al.~demonstrated its equivalence to time-dependent WKB theory~\cite{Huber88, Maslov81}, although with the more complicated requirement of relying on complex phase space variables. It extends the time scale of the validity of the semiclassical approximation well beyond $\tau_E$ and is necessary for studying post-$\tau_E$ quantum interferences in wave packet evolution. This comes at the price of a much more difficult implementation in practice. The complexification of classical dynamics brings a number of complications, such as branch cuts related to singular trajectories that acquire infinite momenta in finite propagation times~\cite{Huber88}, Maslov indices that theoretically require complex time propagation paths to determine~\cite{Wang21b}, and high-dimensional complex phase spaces in which the physically relevant saddle trajectories must be identified out of an infinity of saddle solutions~\cite{Pal16, Tomsovic18, Wang21}. Nevertheless, it can be implemented, and quantum interferences that emerge at propagation times beyond $\tau_E$ can be faithfully reproduced by GGWPD.
A multidimensional Gaussian wave packet for a system with $d$ degrees of freedom can be parameterized as follows
\begin{equation}
\label{wavepacket}
\hskip -6 em \phi_\alpha(\vec x) = \left[\frac{{\rm Det}\left({\bf b}_\alpha+{\bf b}^*_\alpha\right)}{(2\pi\hbar)^d}\right]^{1/4} \exp\left[ - \left(\vec x - \vec q_\alpha \right) \cdot \frac{{\bf b}_\alpha}{2\hbar} \cdot \left(\vec x - \vec q_\alpha \right) +\frac{i \vec p_\alpha}{\hbar} \cdot \left(\vec x - \vec q_\alpha \right) + \frac{i \vec p_\alpha \cdot \vec q_\alpha}{2\hbar} \right]
\end{equation}
where the real mean values of the conjugate momenta and positions are labelled $(\vec p_\alpha, \vec q_\alpha)$, the matrix ${\bf b}_\alpha$ describes all the possible shape parameters, and the global phase is chosen to be consistent with the usual form of a Glauber coherent state~\cite{Glauber63} expressed in quadratures. The $\hbar$ scaling is chosen to ensure that the overall shape is completely independent of $\hbar$ as are the equations for the saddle trajectories given ahead. The Wigner transform gives a positive definite $2d$-dimensional Gaussian density of phase points given by
\begin{equation*}
{\cal W}(\vec p, \vec q) = \frac{1}{(2\pi\hbar)^{d}} \int_{-\infty}^\infty {\rm d} \vec x \ {\rm e}^{i \vec p \cdot \vec x/\hbar}\, \phi_\alpha \left(\vec q-\frac{\vec x}{2}\right) \phi^*_\alpha \left(\vec q+\frac{\vec x}{2}\right)
\end{equation*}
\begin{equation}
= \left(\pi \hbar \right)^{-d} \exp \left[ - \left(\vec p - \vec p_\alpha, \vec q - \vec q_\alpha \right) \cdot \frac{{\bf A}_\alpha}{\hbar} \cdot \left(\vec p - \vec p_\alpha, \vec q - \vec q_\alpha \right) \right]
\end{equation}
where ${\bf A}_\alpha$ is
\begin{equation}
\label{mvg}
{\bf A}_\alpha = \left(\begin{array}{cc}
{\bf c^{-1}} & {\bf c}^{-1} \cdot {\bf d} \\
{\bf d} \cdot {\bf c}^{-1} & {\bf c} + {\bf d} \cdot {\bf c}^{-1} \cdot {\bf d} \end{array}
\right) \qquad {\rm Det}\left[ {\bf A}_\alpha \right] =1
\end{equation}
with the association
\begin{equation}
\label{mvgwf}
{\bf b}_\alpha = {\bf c} + i {\bf d}
\end{equation}
${\bf A}_\alpha$ is real and symmetric, and if ${\bf b}_\alpha$ is real, there are no covariances between $\vec p$ and $\vec q$. This density provides an idea of the relative importance of classical trajectory initial conditions to the wave packet's evolution.
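The Gaussian form of the Wigner transform above can be checked numerically in one dimension. The following sketch (illustrative parameter values; $d=1$ and real ${\bf b}_\alpha = c$, so that ${\bf d}=0$ and ${\bf A}_\alpha = {\rm diag}(1/c,\, c)$) integrates the defining expression directly and compares it with the closed-form Gaussian:

```python
import numpy as np

hbar, c, q0, p0 = 1.0, 1.3, -0.4, 0.7   # illustrative values; b_alpha = c real

def phi(x):
    """1D Gaussian wave packet (d = 1, b_alpha = c real)."""
    norm = (2 * c / (2 * np.pi * hbar)) ** 0.25
    return norm * np.exp(-c * (x - q0) ** 2 / (2 * hbar)
                         + 1j * p0 * (x - q0) / hbar
                         + 1j * p0 * q0 / (2 * hbar))

def wigner_numeric(p, q):
    """Direct quadrature of the defining Wigner integral."""
    x = np.linspace(-20.0, 20.0, 8001)
    dx = x[1] - x[0]
    integrand = np.exp(1j * p * x / hbar) * phi(q - x / 2) * np.conj(phi(q + x / 2))
    return (integrand.sum() * dx).real / (2 * np.pi * hbar)

def wigner_analytic(p, q):
    """Closed-form Gaussian with A_alpha = diag(1/c, c)."""
    return np.exp(-((p - p0) ** 2 / c + c * (q - q0) ** 2) / hbar) / (np.pi * hbar)

# Peak value at the phase-space centroid is 1/(pi*hbar)
print(wigner_numeric(p0, q0), wigner_analytic(p0, q0))
```

Since the integrand decays as a Gaussian, the simple Riemann sum on a fine grid already reproduces the analytic result to high accuracy at arbitrary phase-space points.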
Time-dependent WKB theory~\cite{Maslov81}, and hence GGWPD~\cite{Huber88}, can be schematically described as identifying a Lagrangian manifold of classical phase points associated with the initial state, propagating that manifold for a time $t$, and intersecting it with a final Lagrangian manifold of interest. Envisioned this way, the intersections identify the endpoints of trajectories whose initial conditions reside on the initial manifold. These are the trajectories whose properties define the stationary phase and saddle points of the theory.
For a wave packet expressed as in Eq.~(\ref{wavepacket}), the associated Lagrangian manifold is given by
\begin{equation}
\label{constraints}
{\bf b}_\alpha \cdot \left( \vec {\cal q} - \vec q_\alpha\right) + i \left( \vec {\cal p} - \vec p_\alpha\right) = 0
\end{equation}
where the calligraphic font for $(\vec {\cal p}, \vec {\cal q})$ indicates generally complex values. Any pair satisfying this equation represents a possible initial condition for a saddle trajectory. If the propagated wave function were the quantity of interest, the final Lagrangian manifold would be
\begin{equation}
\label{constraintsx}
\vec x = \vec {\cal q}
\end{equation}
with no restriction on $\vec {\cal p}$, whereas for an overlap of the propagated wave packet with a final wave packet whose parameters are labelled by $\beta$, the final Lagrangian manifold would be
\begin{equation}
\label{constraintsf}
{\bf b}_\beta^* \cdot \left( \vec {\cal q} - \vec q_\beta\right) - i \left( \vec {\cal p} - \vec p_\beta\right) = 0
\end{equation}
Excluding linear dynamical systems, for any $t>0$ there is an infinity of solutions to these equations, nearly all of which must either be excluded due to Stokes phenomena or contribute so negligibly as to be irrelevant.
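For linear dynamics, by contrast, the saddle trajectory is unique and can be obtained in closed form, which permits a compact numerical illustration of the manifold-intersection construction (a toy 1D sketch with illustrative parameters; harmonic-oscillator flow with $m=\omega=1$ assumed):

```python
import numpy as np

# Wave-packet parameters alpha (initial) and beta (final); illustrative values
b_a, q_a, p_a = 1.0 + 0.0j, 0.5, 0.3
b_b, q_b, p_b = 2.0 + 0.0j, -0.2, 0.8
t = 1.3
ct, st = np.cos(t), np.sin(t)   # harmonic flow: Q = ct*q + st*p, P = -st*q + ct*p

# Initial manifold: p = p_a + i b_a (q - q_a) = i b_a q + k
k = p_a - 1j * b_a * q_a
# Inserting into the final manifold  b_b^* (Q - q_b) - i (P - p_b) = 0
# gives one linear equation A q + B = 0 for the complex initial position q
A = np.conj(b_b) * (ct + 1j * b_a * st) - 1j * (-st + 1j * b_a * ct)
B = np.conj(b_b) * (st * k - q_b) - 1j * (ct * k - p_b)
q = -B / A                       # complex saddle initial position
p = 1j * b_a * q + k             # complex saddle initial momentum
Q, P = ct * q + st * p, -st * q + ct * p

# Both Lagrangian-manifold constraints are satisfied by the saddle trajectory
res_in  = b_a * (q - q_a) + 1j * (p - p_a)
res_fin = np.conj(b_b) * (Q - q_b) - 1j * (P - p_b)
print(abs(res_in), abs(res_fin))   # both vanish to machine precision
```

For nonlinear flows the same two constraints must instead be solved by root search in the complexified phase space, which is where the practical difficulties described above arise.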
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{figures/ggwpd.eps}
\caption{put caption here; taken from Fig.~5 of~\cite{Wang21}).
}
\label{fig:ggwpd}
\end{figure}
Where to introduce coherent states for many-body bosonic systems? or say that they can be mapped mathematically to wave packets...
\subsubsection{spectral statistics}
\label{sec:specstat}
invoking ergodicity and hyperbolicity: encounters qualitatively
encounter references here?
\cite{Sieber02}
calculation we go beyond the so-called diagonal approximation and evaluate contributions from correlated trajectory pairs [12]. This technique has been extended and applied to calculate various spectral [13, 14] and scattering [15, 16, 17, 18, 19, 20, 21] properties of quantum chaotic systems.
the various time scales
\subsection{Semiclassical theory of many-body quantum dynamics}
\label{sec:SC-many-body-theory}
\subsubsection{Earlier approaches}
DFT: Ullmo, Burke, Brack
Strutinsky etc., orb. mag.,
Mention truncated Wigner as classical here ?
\subsubsection{
from single-particle wave- to many-body quantum interference}
\begin{itemize}
\item
semiclassics:
\item
the various semiclassical limits, in particular (i) small hbar (ii) small $1/N$
B Miller, helium, H Primack 3d billiard,
Weidenmüller formula;
Strutinsky etc., orb. mag.,
\item
the various time scales
\end{itemize}
\subsubsection{Semiclassical propagation of many-body quantum fields}
As has been mentioned before, the central object that allows for a semiclassical treatment of interference and related quantum phenomena is the asymptotic ($\hbar \to 0$, $N \to \infty$) form of the many-body propagator, defined here as the matrix elements of the time evolution operator (for an autonomous system, for simplicity)
\begin{equation}
K({\rm fi},{\rm in},t)=\langle {\rm fi}|{\rm e}^{-\frac{i}{\hbar}\hat{H}t}|{\rm in}\rangle
\end{equation}
between suitable initial and final states. This is perhaps a good point to stress again that it is the application of asymptotic analysis to the exact path integral representation of the propagator that accounts for the approximate character of the semiclassical approach. All other foundational aspects of the quantum description are kept intact, including the kinematic aspects of its formulation in a Hilbert space, the corresponding role of the superposition principle at the heart of quantum coherence, and the ubiquitous presence of non-classical correlations like entanglement.
This key aspect of the semiclassical program carries over into the many-body context, where a characteristically new aspect of quantum mechanics for systems of identical particles must be further incorporated, namely quantum indistinguishability. In the context of semiclassics of particle systems this in turn requires the explicit (anti-)symmetrization of the propagator by means of the corresponding projection operators onto the bosonic or fermionic subspaces of the total (product) Hilbert space describing the many-body system. Within this picture, the semiclassical approach is then implemented by an extra summation over the elements of the permutation group, weighted by the corresponding characters, acting on the arguments of the usual $n$-particle semiclassical propagator. The whole construction is purely kinematical and is therefore the starting point for both interacting and non-interacting systems of identical particles.
The success of the (anti-)symmetrized
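The propagator matrix elements $K({\rm fi},{\rm in},t)$ defined above can of course be evaluated exactly for small systems, which is useful for benchmarking semiclassical approximations. A minimal sketch for a two-site Bose-Hubbard model with $N=2$ bosons (illustrative parameter values; Fock basis $|2,0\rangle, |1,1\rangle, |0,2\rangle$, $\hbar=1$):

```python
import numpy as np

J, U, t = 1.0, 0.5, 2.0   # illustrative hopping, interaction, time

# Two-site Bose-Hubbard, N = 2 bosons, Fock basis |2,0>, |1,1>, |0,2>:
# H = -J (a1^dag a2 + h.c.) + (U/2) sum_i n_i (n_i - 1)
s2 = np.sqrt(2.0)
H = np.array([[U,     -s2 * J, 0.0    ],
              [-s2 * J, 0.0,   -s2 * J],
              [0.0,    -s2 * J, U     ]])

# Exact propagator K(fi, in, t) = <fi| exp(-i H t) |in> by spectral decomposition
E, V = np.linalg.eigh(H)
K = V @ np.diag(np.exp(-1j * E * t)) @ V.conj().T

# Example amplitude: start in |1,1>, end in |2,0>
amp = K[0, 1]
# Unitarity check: total probability out of |1,1> sums to one
prob_total = np.sum(np.abs(K[:, 1]) ** 2)
print(amp, prob_total)
```

Spectral decomposition rather than a time-stepping scheme keeps the propagator exactly unitary, so such a brute-force reference is reliable at arbitrary times.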
\subsubsection{SC MB propagators: UR approach}
\subsubsection{SC MB propagation: T approach}
outlook post-Ehrenfest (MB) interference mechanisms numerical example of survival prob. in BH (our PRA)
\section{Spectral properties from short-time dynamics}
\label{sec:short-time}
\begin{itemize}
\item
MB Weyl expansion, Bethe law, "partial fermionization"
\item
finite-$N$ quantum thermodynamics
\item
further topics ?
\end{itemize}
Brack cluster review
\section{Spectral and dynamical properties from post-Ehrenfest quantum interference}
\label{sec:post-Ehrenfest}
\subsection{Rewinding time I: Echoes}
\begin{itemize}
\item
coherent backscattering
\item
MB Loschmidt + spin echo
\item
survival probability (interference beyond TWA, our PRA/PRL, dynamical localization, Saclay/Liege/Pullman)
\end{itemize}
\subsection{Rewinding time II: OTOCs}
\label{sec:OTOC}
\begin{itemize}
\item
OTOCs for hyperbolic systems (Josef PRL, Rodolfo${ }^\ast$, etc.)
\item
OTOCs for critical systems (Benni's PRLs)
\end{itemize}
\subsection{Spectra}
\begin{itemize}
\item
Bose-Hubbard spectra (Steve)
\item
Gutzwiller trace formula (R., Bristol${ }^\ast$ )
\item
fixed $N$, small $\hbar$:
trace formula for Lieb-Liniger?
\item
(Large) spin models: Duisburg; role of specific MB POs
\item
universal spectral fluctuations${ }^\ast$ (Bristol, Duisburg/R.)
RMT for spin models + SC${ }^\ast$ : Prozen
\item
Discrete spectrum effects around ESQPTs ${ }^\ast$ (Regensburg, Prague, Argentinians)
\end{itemize}
\section{Introduction}
\KR{we should check whether we did not forget any relevant name in the citations for the development of QC below}
\subsection{Quantum chaos: from many particles to one and back to many}
An integral part of Fritz Haake's scientific life was dedicated to the study of quantum chaos~\cite{Haake10}, i.e.~the manifestations of classical chaos in quantum mechanics, the earliest hints of which were presciently noted by Einstein already in 1917~\cite{Einstein17}. The first description of a quantum chaotic system may well have been Bohr's compound nucleus~\cite{Bohr36}, but the subject really came into being with Wigner's introduction of random matrix ensembles~\cite{Wigner55, Wigner58} and subsequent work by several others~\cite{PorterBook} aimed at understanding the properties of slow neutron resonances. It successfully accounted for spectral and resonance statistical properties~\cite{Brody81, Bohigas83}, such as level repulsion, spectral rigidity, the Porter-Thomas distribution~\cite{Porter56}, and Ericson fluctuations~\cite{Ericson60, Verbaarschot85}. Random matrix theory (RMT), with its focus on statistical properties, represented a paradigm shift and became one of the methodological pillars of quantum chaos studies~\cite{Bohigas88, Guhr98, Beenakker97RMP, Verbaarschot00, StockmannBook, Mehta04, Haake10}. Since its origins, it has seen applications in an extraordinarily broad range of fields, well beyond the boundaries of physics.
The origin of quantum chaos in the context of strongly interacting nuclear many-body systems did not straightforwardly lend itself to a concrete association with classically chaotic dynamics. This association was pioneered in Gutzwiller's work~\cite{Gutzwiller71} ``{\em Periodic Orbits and Classical Quantization Conditions}'', the culmination of four seminal papers in the {\em Journal of Mathematical Physics}. Using semiclassical theory he expressed a quantum spectrum as a sum over unstable classical periodic orbits, {\em i.e.}\ a semiclassical trace formula. He thereby provided an essential part of a critical ``missing link'' connecting the classical and quantum mechanics of non-integrable single-particle systems. That turned out to become a driving force for subsequent research and the semiclassical mechanics of chaos became a second central pillar of quantum chaos studies. A comprehensive overview of the corresponding literature until the end of the century, with a focus on the post-modern period~\cite{Heller93s} of the prior 30 years, is found in the Resource Letter ICQM-1~\cite{Gutzwiller98}.
During those decades quantum chaos research focused predominantly on single-particle aspects of quantum mechanics.
One major thrust addressed many basic theoretical questions, such as: convergence properties of the trace formula, how to enumerate and organize unstable periodic orbits, and thereby how to approximately compute individual energy levels~\cite{Gutzwiller90, Eckhardt89, Berry90, Artuso90a, Cvitanovic91, Ezra91, Sieber91};
how accurate is the semiclassical theory of chaos
and on what time scales can it be applied~\cite{Oconnor92, Sepulveda92, Tomsovic93};
and what is the nature of chaotic eigenstates~\cite{Berry77, Voros79, McDonaldThesis, Heller84, Bogomolny88,Urbina03,Heller07}, for a review see \cite{Urbina13}.
Furthermore, there was considerable focus on conceptually particularly simple chaotic systems, such as quantized maps, quantum graphs and billiards, trying to disclose the essence of the interplay between classical and quantum mechanics~\cite{Fishman82, Shepelyansky83, Bohigas84, Balazs89, Keating91, Tomsovic93, Tanner00, Kottos01, Gnutzmann06}, including extensions to systems with mixed phase space~\cite{Berry84,Bohigas91,Ketzmerick96}.
Correspondingly, early experiments were performed in atomic physics on the hydrogen atom in microwave cavities~\cite{Bayfield74,Koch95}, in strong magnetic fields~\cite{Holle88,Iu91} and on hydrogen-type atoms in strong electric fields~\cite{Eichmann88}, accompanied by theoretical analysis of the low-frequency properties of the quantum spectra and photo cross sections by means of Gutzwiller's trace formula~\cite{Du88a,Friedrich89}.
In a second broad class of experiments, chaos for classical waves was investigated
in microwave cavities~\cite{Stockmann90,ARichter92,Sridhar92,So95}, in acoustic resonators~\cite{Weaver84} and in optical cavities~\cite{Gmachl98,Yan09}, to name just a few.
Another thrust considered effective single-particle models of many-body condensed-matter systems at mesoscopic scales. Starting in the nineties, high-mobility semiconductor-based nanostructures exhibiting ballistic electron dynamics moved into the focus of quantum chaos and semiclassical physics. With such two-dimensional nanostructures, electron quantum billiards could be directly designed experimentally with increasing precision and specifically realized in the semiclassical regime, where the Fermi wavelength of the charge carriers was much smaller than the linear system sizes~\cite{Baranger93b,
Jalabert00,Richter00}. This research area comprised closely intertwined experimental and theoretical studies of imprints of chaos in transport and spectral
properties: With regard to quantum-chaotic transport, these works addressed, in particular, magneto-transport through chaotic conductors acting as quantum billiards~\cite{Jalabert90,Marcus92,Chang94,Sachrajda98,Richter02}, through antidot superlattices~\cite{Weiss93,Richter95,Hackenbroich95,Yevtushenko00} and quantum wells \cite{Fromhold94}.
More recently, quantum chaotic transport of Dirac-type charge carriers has been considered for graphene cavities~\cite{Wurm09,Wurm11,Yang11} and for corresponding microwave billiards mimicking graphene~\cite{Bittner10}.
Manifestations of chaos in the spectral properties of ballistic quantum dots were probed through Coulomb blockade phenomena~\cite{Jalabert92,Prigodin93}, through their orbital magnetic response~\cite{Richter96}, as well as in terms of proximity effects via coupling to superconductors in so-called Andreev billiards~\cite{Kosztin95,Melsen96,Ihra01}.
Already prior to probing quantum chaos in ballistic mesoscopic quantum systems, in the ``bronze age'' of mesoscopics, universal quantum behaviors were predicted and measured in disordered media for quantities such as weak localization~\cite{Altshuler80}, conductance fluctuations~\cite{Lee85, Altshuler85, Webb88} and persistent currents~\cite{Levy90}, with close links to the theory of disordered media~\cite{Anderson58, Wegner79, Efetov82a, Dorokhov82, Mello88}.
In that context a great deal of work was also dedicated to the strong localization properties of eigenstates~\cite{Fyodorov94}, and in particular, the transition to extended states and the metal-insulator transition~\cite{Mirlin96, Evers08}. Hence theoretical aspects of criticality and universality in non-interacting disordered systems developed into a third pillar of quantum chaos studies.
Although these three pillars, RMT, semiclassical theory and the theory of disordered systems, developed quite independently, and although only the second one was related directly to classical chaos, their deep interconnections began to be recognized during this same period of time. Aggregating them under a single umbrella term, i.e.~quantum chaos, recognized and evoked their broader meaning. An early indicator of chaos as a profound unifying concept emerged with the conjecture of Bohigas, Giannoni, and Schmit~\cite{Bohigas84}, which posited that the critical rationale for the applicability and universality of RMT was exponentially unstable chaotic dynamics. This is as opposed to the idea that RMT required complexity in the sense of a strongly interacting many-body system. In fact, even a very simple, single-particle system, if chaotic, would possess RMT statistics. Beyond extensive numerical verifications, Hannay and Ozorio de Almeida invoked a uniformity of phase space exploration of unstable periodic orbits to derive a sum rule~\cite{Hannay84}, which Berry relied on to derive semiclassically the spectral rigidity found in RMT~\cite{Berry85}. Thus, there had to exist an intimate link between the two pillars, RMT and the semiclassical dynamics of chaos. More recently, Fritz Haake and Peter Braun, to whom this review is dedicated, and co-authors, contributed significantly towards a proof of the BGS conjecture~\cite{Heusler07}. It builds on periodic orbit correlations~\cite{Sieber01} to reveal chaotic dynamics as the true origin of RMT behavior, linking the first two pillars on which quantum chaos rests. Below we will outline how these semiclassical single-particle approaches can be lifted to many-body dynamics, providing a key to understanding RMT universality also in the many-particle context.
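To indicate schematically how these ingredients combine (prefactors and the precise time windowing are suppressed in this sketch), the spectral form factor $K(\tau)$, with time measured in units of the Heisenberg time, is written semiclassically as a double sum over periodic orbits with amplitudes $A_p$ and actions $S_p$. Keeping only identical (and, with time-reversal symmetry, mutually time-reversed) orbit pairs and invoking the Hannay--Ozorio de Almeida sum rule reproduces the leading random matrix result:
\begin{equation}
  % Diagonal approximation: only orbit pairs p = p' (and time-reversed
  % partners) survive the average over the rapidly oscillating phases.
  K(\tau) \sim \Big\langle \sum_{p,p'} A_p A_{p'}^{*}\,
  \mathrm{e}^{\mathrm{i}(S_p - S_{p'})/\hbar} \Big\rangle
  \;\longrightarrow\; g\,\tau ,
\end{equation}
with $g=2$ in the presence of time-reversal symmetry (GOE) and $g=1$ without it (GUE); the off-diagonal orbit-pair correlations of Ref.~\cite{Sieber01} then supply the higher-order terms in $\tau$.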
At about the same time, in the eighties, the quantized kicked rotor, an extremely simple model of a chaotic quantum system, was mapped in a momentum representation onto a kind of one-dimensional Anderson model for a disordered system~\cite{Fishman82}. The only distinction boiled down to whether a deterministic quantity behaving as a pseudo-random number could be replaced by a random number. This created a direct link between the semiclassical mechanics of chaos and the one-dimensional Lloyd variation of the Anderson model~\cite{Anderson58, Lloyd69}, i.e.~between the second and third pillars.
The localization of the eigenstates of the kicked rotor, a single-particle system with chaotic (diffusive) classical dynamics, turned out to be a form of strong (Anderson) localization.
Weaker forms of localization properties were deduced by considering the closest analogy in quantum mechanics to following a chaotic trajectory, i.e.~by considering the evolution of an initial minimum uncertainty wave packet centered on a trajectory's initial conditions. The Wigner transform~\cite{Wigner32} of the wave packet would generate a Gaussian density of phase points in the phase space vicinity of the initial conditions, whose evolution would roughly follow the classical trajectory according to the Ehrenfest equations of motion up to the so-called Ehrenfest time~\cite{Ehrenfest27}. Due to the exponential sensitivity to initial conditions, this time scale would be logarithmically short in terms of a characteristic action divided by $\hbar$~\cite{Berman78, Berry79b, Chirikov81}. A first example arose by analyzing the effects of centering a wave packet somewhere on a short unstable periodic orbit with period $\tau$ and Lyapunov exponent $\mu$. In the situation $2\pi \gtrsim \mu \tau$, for at least some of the eigenstates there would be excess intensity in the neighborhood of the periodic orbit, called `scarring'~\cite{Heller84}, relative to statistical expectations of eigenstates behaving randomly~\cite{Berry77, Voros79}.
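The logarithmically short time scale referred to above is the Ehrenfest time; a standard estimate (up to an $O(1)$ prefactor and the precise choice of reference action) reads
\begin{equation}
  % Ehrenfest time: exponential stretching e^{\mu t} of an initially
  % minimal wave packet reaches classical scales when t ~ t_E.
  t_{\mathrm{E}} \;\sim\; \frac{1}{\mu}\,\ln\!\frac{S}{\hbar},
\end{equation}
with $\mu$ a characteristic (Lyapunov) stretching rate and $S$ a typical classical action; beyond $t_{\mathrm{E}}$ an initially minimal wave packet has been stretched to classical phase-space scales and its centroid ceases to follow a single trajectory.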
A second example, termed {\it dynamical localization}, arises when the existence of classical barriers to transport introduces time scales into a system's dynamics. The quantum manifestations of these time scales can be seen both in the non-ergodic long-time behaviors of wave packets initially localized behind such barriers and in the properties of quantum eigenfunctions~\cite{Radons88, Geisel89, Bohigas93}. The quantum scaling for the strength of the localization implied by a transport barrier is given by the classical flux flowing from one region into the other compared with the appropriate power of $\hbar$ ($\Phi/\hbar^{L-1} \lesssim 1$ is the localizing regime)~\cite{Bohigas93, Michler12}. This is a weak localizing effect compared with strong (Anderson) localization~\cite{Anderson58}, but can be thought of as a kind of precursor in the sense that a series of transport barriers, often related to a sequence of resonances in systems with few degrees of freedom, leads to diffusive dynamics, even if individually $\Phi/\hbar^{L-1} \gtrsim 1$~\cite{Dana89}.
Finally, supersymmetry~\cite{Efetov82a, Efetov82b, Efetov83} directly linked the nonlinear $\sigma$-models for diffusive systems~\cite{Wegner79, Efetov82a} with RMT, thus providing a strong link between the first and third pillars. Attempts have been made to extend nonlinear $\sigma$-models to ballistic systems, and create a more direct connection specifically to chaotic dynamics~\cite{Muzykantskii95, Agam95, Andreev96, Andreev96b}, however such models are intrinsically ensemble models of stochastic systems and any properties linked to the deterministic nature of chaotic dynamics, especially models of very simple single particle systems or quantum maps, are not naturally built in~\cite{Andreev96}. A semiclassical approach, especially for effects that can be related to short and intermediate time scale dynamics of deterministic (and chaotic) systems, naturally incorporates system specific properties and retains an appreciable advantage for the description of such effects.
More recently, research is shifting dramatically back toward the quantum many-body physics of interacting systems, due in large part to the extraordinary advances in experimental techniques for ultracold systems~\cite{Bloch08} and quantum materials~\cite{Keimer17}. They allow for building, controlling, and monitoring synthetic many-body systems with strongly interacting quantum degrees of freedom. Especially during the last decade, a swiftly expanding field in theoretical physics has formed by combining concepts and methods of quantum chaos with quantum many-body physics. This emerging research, often now subsumed under the designation {\bf \em many-body quantum chaos}, resides at the interfaces of statistical physics, quantum dynamics in atomic and condensed matter physics, and cosmology, harking back to quantum chaos’ roots in nuclear many-body physics.
This review will focus on the development of a proper semiclassical theory of interacting many-body systems and its role in addressing various problems in many-body quantum chaos. The theory's strength is its capacity to apply broadly to many-body quantum chaotic systems, including in the statistical RMT-like sense, whilst also having the capacity to address system-specific properties of deterministic dynamical systems, and to apply to dynamical systems that are not fully chaotic or are even integrable. This is particularly important in investigations in which a system is not behaving in a universal manner. Perhaps it is not quantum thermalizing or generating entanglement as expected, or is exhibiting some other curious behavior. A semiclassical approach has the promise to shed light on such situations.
The origin of a modern interacting, many-body system semiclassical theory is quite recent. Indeed, the analog for interacting many-body systems of the single-particle van Vleck--Gutzwiller propagator~\cite{Vanvleck28, Gutzwiller71} has only recently been developed, for both fermionic~\cite{Engl14} and bosonic~\cite{Engl14b} interacting systems, as has the implementation of a saddle-point approximation to the coherent-state path integral for interacting bosonic systems~\cite{Tomsovic18}, the analog of generalized Gaussian wave packet dynamics~\cite{Huber88} and of one form of a complex, time-dependent WKB theory~\cite{Maslov81}. Moreover, a trace formula over periodic mean-field solutions for an interacting bosonic system has now also been developed~\cite{Engl15}, the analogous version of the Gutzwiller trace formula over periodic orbits~\cite{Gutzwiller71}.
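For orientation, and as the single-particle template for these many-body generalizations, the Gutzwiller trace formula expresses the oscillatory part of the level density of a chaotic system as a sum over primitive periodic orbits $p$ and their repetitions $r$:
\begin{equation}
  % Sum over primitive periodic orbits p with period T_p, action S_p,
  % monodromy matrix M_p and Maslov index sigma_p, and repetitions r.
  \rho_{\mathrm{osc}}(E) \;\simeq\; \frac{1}{\pi\hbar}
  \sum_{p} \sum_{r=1}^{\infty}
  \frac{T_p}{\big|\det\!\big(M_p^{\,r} - 1\big)\big|^{1/2}}
  \cos\!\Big(\frac{r\,S_p(E)}{\hbar} - \frac{\pi}{2}\,r\,\sigma_p\Big),
\end{equation}
where $T_p$, $S_p$, $M_p$ and $\sigma_p$ denote the period, action, stability (monodromy) matrix and Maslov index of orbit $p$ (conventions for the Maslov phase vary in the literature).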
In Secs.~\ref{Sec:??} we will come back to these ideas and methods that again play an important role in a semiclassical theory of interacting many-particle systems.
\KR{
missing: positioning and embedding of MBQC into adjacent fields;
Wan diagram;
text pieces, presumably helpful:
}
…
For example, studying their dynamics [1,2] has allowed for identifying particular classes of states that fail to quantum thermalize [3]; their evolution towards equilibrium is also particularly interesting, since equilibration goes along with scrambling of quantum correlations across the systems' many degrees of freedom.
In particular after the proposals on out-of-time-order correlators (OTOCs) [10] and a universal limit on their growth rates, i.e., a quantum ``bound on chaos'' [11], these aspects of MB chaos and ergodicity have received broad interest, ranging from quantum matter to quantum gravity, in phenomena that may be subsumed under the topic of MB quantum chaos.
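For reference, and in the conventions of the bound-on-chaos literature (the regularization of the thermal average is suppressed in this sketch), the OTOC growth rate $\lambda_{\mathrm{L}}$ at temperature $T$ is constrained by
\begin{equation}
  % Squared commutator of Heisenberg operators W(t), V(0) in a thermal
  % state at inverse temperature beta; lambda_L is the quantum Lyapunov rate.
  C(t) \;=\; -\big\langle\, [\hat W(t),\hat V(0)]^2 \,\big\rangle_{\beta}
  \;\propto\; \mathrm{e}^{\lambda_{\mathrm{L}} t},
  \qquad
  \lambda_{\mathrm{L}} \;\le\; \frac{2\pi k_{\mathrm{B}} T}{\hbar}.
\end{equation}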
\subsection{Outline of this review:
from short to late time many-body dynamics}
\KR{the outline of the review: what do we consider in which section ...}
\section{Introduction}
\subsection{Quantum chaos: from many particles to a single one and back to many}
Fifty years ago, with his paper \cite{Gutzwiller71} “{\em Periodic Orbits and Classical Quantization Conditions}”, Martin Gutzwiller completed a series of four seminal papers in the Journal of Mathematical Physics in which he set the cornerstones of a bridge between the, possibly chaotic, dynamics of a classical particle and the energy level spectrum of the corresponding quantum system.
He thereby established the basis of a semiclassical theory connecting the classical and quantum mechanics of non-integrable single-particle systems that turned out to become a driving force and one central pillar of a field later referred to as quantum chaos.
While post-Gutzwiller research in quantum chaos predominantly focused on single-particle aspects trying to disclose the essence of the interplay between classical and quantum mechanics by considering conceptually simple but complexly behaving systems, such as billiards, quantum graphs and maps, another prominent quantum chaos branch has its roots in early nuclear physics.
Bohr’s compound nucleus \cite{Bohr1936} constitutes an excited ``chaotic'' many-body system displaying statistical properties and equilibration, and leading to randomly oscillating observables such as Ericson fluctuations \cite{Ericson66} in nuclear cross sections.
There, random matrix theory (RMT) \cite{Porter65}, starting with the foundational work by Wigner \cite{Wigner58}, turned out to be particularly appropriate for the later analysis of such statistical nuclear properties.
Representing a paradigm shift in dealing with (many-body) quantum systems by focusing on spectral statistics instead of individual energy levels, RMT then developed into a second prominent methodological pillar of quantum chaos.
Only much later, in the 1980s, these RMT-based statistical concepts for complex quantum systems gained particular physical significance and deeper understanding via the link to chaotic dynamics, see Ref.~\cite{Bohigas88} for an early review.
It was then realized by Bohigas, Giannoni and Schmit (BGS) that underlying chaotic motion, to be defined later, is the unifying deeper reason for RMT-type spectral fluctuations, rather than (nuclear) many-body properties. This is summarized in the famous, central BGS conjecture \cite{Bohigas82}:
{\em
The fluctuation properties of generic quantum systems with (without) time-reversal symmetry, which in the classical limit are fully chaotic, coincide with those of the GOE (GUE).
}
This implies that, while an interacting many-particle system such as the compound nucleus can be assumed to behave in some sense chaotically and hence shows RMT behavior, single-particle quantum systems with a chaotic classical limit behave essentially similarly.
This was a further reason, besides Gutzwiller’s periodic orbit theory~\cite{Gutzwiller1990}, for complex single-particle dynamics being put into the center of interest.
The state of affairs and a comprehensive overview of the corresponding literature until the end of the century, with focus on the post-modern period \cite{Heller} of the previous 30 years, is found in the Resource Letter ICQM-1 \cite{Gutzwiller98}.
The BGS conjecture also draws a tighter frame around what is considered as quantum chaos:
A quantum chaotic system in a narrow sense is a (many-body) system possessing a chaotic classical limit.
In a broader sense, quantum chaotic systems do not necessarily have a classical limit, but exhibit similar genuine quantum chaotic features, such as random matrix behavior.
Such systems comprise for instance quantum maps, quantum graphs and spin chains.
At a superficial level it sounds reasonable that interacting many-body dynamics and non-integrable single-particle dynamics lead to respective Hamiltonian matrices with statistical spectral features mimicked by those of random matrices (with the same symmetry constraints).
However, a profound physical link between classical dynamical and quantum spectral properties of quantum chaotic systems in the narrow sense was still missing.
The key to understanding random matrix universality turned out to be subtle correlations between the classical (periodic) orbits that form the backbone of an underlying semiclassical theory for respective quantum observables.
Among others, Fritz Haake and Peter Braun, to whom this review is dedicated, contributed significantly to unveiling chaotic dynamics as the origin of RMT behavior, towards a proof of the BGS conjecture.
In Secs.~\ref{Sec:??} we will come back to these ideas and methods that again play an important role in a semiclassical theory of interacting many-particle systems.
With regard to the latter, during the last decade a swiftly expanding field in theoretical physics has formed by combining and merging concepts and methods of quantum chaos (QC) and quantum many-body (MB) physics.
This common area, often now subsumed under the topic {\em many-body quantum chaos}, resides at the interface of statistical physics, quantum dynamics in atomic and condensed matter physics, and cosmology, partly referring back to quantum chaos’ roots in nuclear MB physics.
Although quantum MB physics also has a correspondingly long tradition, and while the foundations of statistical mechanics were laid together with those of quantum mechanics, all these subjects have witnessed a recent rebirth due to advances in experimental atomic, molecular, optical and condensed matter physics:
They allow for building, controlling and monitoring synthetic MB systems with strongly interacting quantum degrees of freedom.
\KR{Memo:
mention (ballistic) sigma-model at some place as a further approach?}
\section{Foundations of Many-Body Semiclassics}
\label{sec:Foundations}
\subsection{Between classical and quantal: semiclassical regimes and limits}
\label{subsec:clas-quant}
What does “semiclassical” stand for?
Depending on the community and on the context, there exist various different meanings and notions.
We use ``semiclassical'' in the original sense
\footnote{This notion is used in quantum chaos},
referring to physics in the crossover regime between classical and quantal.
\KR{the following paragraph could also be put into a longer footnote not to hinder the flow of the main text}
Before elaborating on this in more detail we mention other notions of ``semiclassicality'' that should not be confused with our approach:
First, very often ``semiclassical'' is simply used synonymously with ``classical''.
In particular in condensed matter physics, the purely classical motion of a particle with an energy $E(k)$ given by a band structure, or in a potential landscape that is obtained quantum mechanically, is often referred to as ``semiclassical''~\cite{Ashcroft}.
Second, the term may denote the description of a hybrid system composed of a quantum and a classical sector, such as an atom coupled to radiation described in terms of a classical field.
Correspondingly, in semiclassical gravity matter fields are considered quantum, while the gravitational field is classical.
Third, on a similar footing, it may refer to quantum systems that assume certain degrees of classicality due to interaction with a (classical) environment.
Fourth, in statistical physics, it may refer to large-temperature expansions … .
The semiclassical theory put forward in this review is formally based on an (effective) $\hbar$-expansion of quantum mechanical (MB) Feynman propagators.
Resulting expressions, while being based on classical paths, thereby still inherently comprise interference-based processes and mechanisms.
These are indispensable in order to answer what happens to chaotic quantum systems in the limit where $\hbar$ becomes small but non-zero.
Conceptually this follows a strategy to approach a classical limit starting from fundamental quantum mechanics, instead of, vice versa, thinking classically and adding quantum effects.
If a single-particle quantum system possesses a classical counterpart, the corresponding semiclassical limit is well defined: the often-cited limit $\hbar \rightarrow 0$ (with Planck constant $h = 2\pi \hbar$) refers to an asymptotic expansion in the dimensionless quantity $\hbar/S \ll 1$, where $S=\int p\, dq$ is the typical classical action of the particle with momentum $p$, which grows with energy. Accordingly, the semiclassical limit usually refers to the regime of high excitations or energies and, by means of the de Broglie relation $\lambda = h/p$, corresponds to the limit of small wavelengths $\lambda$.
In the scheme of Fig.~\ref{fig:sc-limits} this short-wavelength limit is associated with the direction along the horizontal axis where wave mechanics approaches the limit of classical particles. Note, however, that the process $\hbar \rightarrow 0$ is singular: the crossover from small $\hbar$ to $\hbar=0$ is not smooth. Such disruptive behavior is the cause of interesting physics in this asymptotic limit.
\begin{figure}
\centering
\caption{Different notions of classicality in quantum mechanical many-body systems}
\label{fig:sc-limits}
\end{figure}
the various semiclassical limits, in particular (i) small $\hbar$ (ii) small $1/N$
In recent years a novel field in theoretical physics has emerged at the interface of statistical physics, quantum dynamics in atomic and condensed matter physics, and cosmology, which can be subsumed under the topic many-body quantum chaos.
Many systems from all these distinctly different areas have in common that they reside at the semiclassical border between many-body classical chaos and quantum physics, in fact in a two-fold way:
(i) Far-out-of-equilibrium quantum dynamics involves high-energy excitations, associated with the usual short-wavelength limit where wave mechanics approaches the limit of classical particles;
(ii) alternatively, the thermodynamic limit of large particle numbers $N$, where quantum fields pass into nonlinear waves, can also be regarded as semiclassical, governed by an effective Planck constant $1/N$. These two complementary crossover regimes in state-of-the-art many-body physics that are experimentally relevant and theoretically particularly
These various notions associated with ``semiclassicality'' play a role in a broad range of physical fields, ranging from … to …
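The role of $1/N$ as an effective Planck constant in case (ii) can be made explicit for bosonic lattice models; as a sketch (normalization conventions vary), rescaling the mode operators $\hat a_i$ of an $N$-particle system yields
\begin{equation}
  % Rescaled fields have commutators of order 1/N, which therefore plays
  % the role of an effective Planck constant in the thermodynamic limit.
  \hat\phi_i = \frac{\hat a_i}{\sqrt{N}},
  \qquad
  \big[\hat\phi_i, \hat\phi_j^{\dagger}\big] = \frac{\delta_{ij}}{N}
  \;\equiv\; \hbar_{\mathrm{eff}}\,\delta_{ij},
\end{equation}
so that the rescaled Hamiltonian $\hat H/N$ plays the role of a classical (mean-field) Hamiltonian whose quantum fluctuations are controlled by $\hbar_{\mathrm{eff}} = 1/N$; the limit $N \to \infty$ at fixed $\hbar$ is formally analogous to $\hbar \to 0$.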
\subsection{Semiclassics for few-particle systems}
\subsubsection{van Vleck + Gutzwiller}
dynamics vs. chaos
B Miller, helium, H Primack 3d billiard,
\subsubsection{spectral statistics}
invoking ergodicity and hyperbolicity: encounters qualitatively
encounter references here?
\cite{Sieber02}
the various time scales
\subsection{Semiclassical theory of many-body quantum dynamics}
\subsubsection{Earlier approaches}
DFT: Ullmo, Burke, Brack
Strutinsky etc., orb. mag.,
Mention truncated Wigner as classical here ?
\subsubsection{
from single-particle wave- to many-body quantum interference JD: I will take over this section today}
\begin{itemize}
\item
semiclassics:
\item
the various semiclassical limits, in particular (i) small hbar (ii) small $1/N$
B Miller, helium, H Primack 3d billiard,
Weidenmüller formula;
Strutinsky etc., orb. mag.,
\item
the various time scales
\end{itemize}
\subsubsection{Semiclassical propagation of many-body quantum fields}
\subsubsection{SC MB propagators: UR approach}
\subsubsection{SC MB propagation: T approach}
outlook post-Ehrenfest (MB) interference mechanisms
numerical example of survival prob. in BH (our PRA)
\section{Spectral properties from short-time dynamics}
\begin{itemize}
\item
MB Weyl expansion, Bethe law, "partial fermionization"
\item
finite-$N$ quantum thermodynamics
\item
further topics ?
\end{itemize}
\section{Spectral and dynamical properties from post-Ehrenfest quantum interference}
\subsection{Rewinding time I: Echoes}
\begin{itemize}
\item
coherent backscattering
\item
MB Loschmidt + spin echo
\item
survival probability (interference beyond TWA, our PRA/PRL, dynamical localization, Saclay/Liege/Pullman)
\end{itemize}
\subsection{Rewinding time II: OTOCs}
\begin{itemize}
\item
OTOCs for hyperbolic systems (Josef PRL, Rodolfo${ }^\ast$, etc.)
\item
OTOCs for critical systems (Benni's PRLs)
\end{itemize}
\subsection{Spectra}
\begin{itemize}
\item
Bose-Hubbard spectra (Steve)
\item
Gutzwiller trace formula (R., Bristol${ }^\ast$ )
\item
fixed $N$, small $\hbar$:
trace formula for Lieb-Liniger?
\item
(Large) spin models: Duisburg; role of specific MB POs
\item
universal spectral fluctuations${ }^\ast$ (Bristol, Duisburg/R.)
RMT for spin models + SC${ }^\ast$ : Prosen
\item
Discrete spectrum effects around ESQPTs ${ }^\ast$ (Regensburg, Prague, Argentinians)
\end{itemize}
\section{Miscellaneous}
\begin{itemize}
\item
MB scars
\item
states, wave functions,
\item
dualites: time-particle number; Boris; T Prosen / B Bertini papers
\item
spatio-temporal maps: Boris; Predrag
\end{itemize}
\section{Perspectives}
\begin{itemize}
\item
SC and ETH
\item
SC and (dynamical) localization
\item
SC and entropies
(entanglement entropy (Argentinians, Arrul?)
\item
SC beyond the Heisenberg time
\item
SC and renormalization, AdS/CFT correspondence
\item
SC and its role in quantum gravity models (SSS, Altland, R.?)
\item
SC and ...
\end{itemize}
\section{Literature}
Overviews:
Denis review
Semiclassics: The hidden theory behind the success of DFT, 10 May 2021, Pavel Okun, Kieron Burke
\bibliographystyle{unsrt}
\section{Introduction}
\subsection{Quantum chaos: from many particles to one and back to many}
An integral part of Fritz Haake's life was dedicated to the study of quantum chaos~\cite{Haake10, Heusler07}, i.e.~the manifestations of chaos in quantum mechanics, the earliest hints of which were presciently noted by Einstein already in 1917~\cite{Einstein17}. The first description of a quantum chaotic system may well have been Bohr's compound nucleus~\cite{Bohr36}, but the subject really came into being with Wigner's introduction of random matrix ensembles~\cite{Wigner55, Wigner58} and subsequent work by several others~\cite{PorterBook} aimed at understanding the properties of slow neutron resonances. It successfully accounted for spectral and resonance statistical properties~\cite{Brody81, Bohigas83}, such as level repulsion, spectral rigidity, the Porter-Thomas distribution~\cite{Porter56}, and Ericson fluctuations~\cite{Ericson60, Verbaarschot85}. Random matrix theory (RMT), with its focus on statistical properties, represented a paradigm shift and became one of the methodological pillars of quantum chaos studies~\cite{Bohigas88, Beenakker97RMP, Verbaarschot00, StockmannBook, Mehta04, Haake10}. Since its origins, it has seen applications in an extraordinarily broad range of fields, well beyond the boundaries of physics.
The origin of quantum chaos in the context of strongly interacting nuclear many-body systems did not straightforwardly lend itself to a concrete association with classically chaotic dynamics. This association was pioneered in Gutzwiller's work~\cite{Gutzwiller71} “{\em Periodic Orbits and Classical Quantization Conditions}”, the culmination of four seminal papers in the Journal of Mathematical Physics. Using semiclassical theory he expressed a quantum spectrum as a sum over unstable periodic orbits, i.e.~a semiclassical trace formula. He thereby provided part of a critical ``missing link'' connecting the classical and quantum mechanics of non-integrable single-particle systems. That turned out to become a driving force for subsequent research and the semiclassical mechanics of chaos became a second central pillar of quantum chaos studies. A comprehensive overview of the corresponding literature until the end of the century, with a focus on the post-modern period~\cite{Heller93s} of the prior 30 years, is found in the Resource Letter ICQM-1~\cite{Gutzwiller98}.
For the next few decades quantum chaos research focused predominantly on single-particle aspects of quantum mechanics. One major thrust addressed the many basic theoretical issues, such as: how to enumerate unstable periodic orbits and their properties~\cite{Gutzwiller90, Berry90, Artuso90a, Artuso90b, Cvitanovic91, Sieber01}; what is the nature of chaotic eigenstates~\cite{Berry77, Voros79, McDonaldThesis, Heller84, Bogomolny88}; and how accurate is the semiclassical theory of chaos and on what times scales can it be applied~\cite{Oconnor92, Sepulveda92, Tomsovic93}. There was considerable focus on conceptually very simple chaotic systems, such as quantized maps and billiards~\cite{Fishman82, Shepelyansky83, Balazs89, Keating91, Bohigas84, Tomsovic93}. Experiments were performed on the Hydrogen atom in microwave cavities and in strong magnetic fields~\cite{Bayfield74, Holle88, Du88a, Du88b}.
Another thrust considered effective single particle models of many-body systems, such as the theory of disordered media~\cite{Anderson58, Wegner79, Efetov82a, Dorokhov82, Mello88}, and the Landauer-B\"uttiker formalism for ballistic electron transport~\cite{Landauer57, Buttiker85, Buttiker86} applied to chaotic systems. A great deal of work was dedicated to the strong localization properties of eigenstates~\cite{Fyodorov94}, and in particular, the transition to extended states and the metal-insulator transition~\cite{Mirlin96, Evers08}. Universal behaviors were predicted and measured for quantities such as conductance fluctuations~\cite{Lee85, Altshuler85, Webb88, Marcus92}. Disordered systems developed into a third pillar of quantum chaos studies.
Although these three pillars developed quite independently, and only one of them was related directly to chaos, their deep interconnections began to be recognized during this same period of time. Aggregating them under a single umbrella term, i.e.~quantum chaos, recognized and evoked their broader meaning. An early indicator of chaos as a profound unifying concept emerged with the conjecture of Bohigas, Giannoni, and Schmit~\cite{Bohigas84}, which posited that the critical rationale for the applicability and universality of RMT was exponentially unstable chaotic dynamics. This is as opposed to the idea that RMT required complexity in the sense of a strongly interacting many-body system. In fact, even a very simple, single particle system, if chaotic, would possess RMT statistics. Beyond extensive numerical verifications, Hannay and Ozorio de Almeida invoked a uniformity of phase space exploration of unstable periodic orbits to derive a sum rule~\cite{Hannay84}, which Berry relied on to derive semiclassically the spectral rigidity found in RMT~\cite{Berry85}. Thus, there had to exist an intimate link between the two pillars, RMT and the semiclassical dynamics of chaos. More recently, Fritz Haake and Peter Braun, to whom this review is dedicated, and co-authors, contributed significantly towards a proof of the BGS conjecture~\cite{Heusler07}. It makes use of periodic orbit correlations~\cite{Sieber01} to reveal chaotic dynamics as the true origin of RMT behavior.
At about the same time, the quantized kicked rotor, an extremely simple model of a chaotic quantum system, was mapped in a momentum representation onto a kind of one dimensional Anderson model for a disordered system~\cite{Fishman82}. The only distinction boiled down to whether a deterministic quantity behaving as a pseudo-random number could be replaced by a random number. This created a direct link between the semiclassical mechanics of chaos, and the one dimensional Lloyd variation of the Anderson model~\cite{Anderson58, Lloyd69} (i.e.~between the second and third pillars). The strong localization properties of the kicked rotor eigenstates, a single particle system with chaotic (diffusive) classical dynamics, were a form of strong (Anderson) localization.
Weaker forms of localization properties were deduced by considering the closest analogy in quantum mechanics to following a chaotic trajectory, i.e.~by considering the evolution of an initial minimum uncertainty wave packet centered on a trajectory's initial conditions. The Wigner transform~\cite{Wigner32} of the wave packet would generate a Gaussian density of phase points in the phase space vicinity of the initial conditions whose evolution would roughly follow the initial condition for an Ehrenfest time scale~\cite{Ehrenfest27}, which due to the exponential sensitivity to initial conditions would be a logarithmically short time scale in terms of a characteristic action divided by $\hbar$~\cite{Berman78, Berry79b, Chirikov81}. A first example arose by analyzing the effects of centering a wave packet somewhere on a short unstable periodic orbit with period $\tau$ and Lyapunov exponent $\mu$. In the situation $2\pi \gtrsim \mu \tau$, for at least some of the eigenstates there would be excess intensity in the neighborhood of the periodic orbit called `scarring'~\cite{Heller84} relative to statistical expectations of eigenstates behaving randomly~\cite{Berry77, Voros79}.
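The logarithmically short Ehrenfest scale invoked above is commonly written (in our notation, with $\mu$ a characteristic Lyapunov exponent and $S$ a characteristic classical action) as
\begin{equation}
\tau_{E} \sim \frac{1}{\mu} \ln \frac{S}{\hbar},
\end{equation}
so that $\tau_{E}$ grows only logarithmically as $\hbar/S \to 0$.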
A second example, termed {\it dynamical localization}, arises in which the existence of classical barriers to transport introduces time scales in a system's dynamics. The quantum manifestations of these time scales can be seen both in the non-ergodic long time behaviors of wave packets initially localized behind such barriers and in the properties of quantum eigenfunctions~\cite{Radons88, Geisel89, Bohigas93}. The quantum scaling for the strength of the localization implied by a transport barrier is given by the classical flux flowing from one region into the other compared with the appropriate power of $\hbar$ ($\Phi/\hbar^{L-1} \lesssim 1 $ is the localizing regime)~\cite{Bohigas93, Michler12}. This is a weak localizing effect compared with strong (Anderson) localization~\cite{Anderson58}, but can be thought of as a kind of precursor in the sense that a series of transport barriers, often related to a sequence of resonances in systems with few degrees of freedom, leads to diffusive dynamics, even if individually $\Phi/\hbar^{L-1} \gtrsim 1 $~\cite{Dana89}.
Finally, supersymmetry~\cite{Efetov82a, Efetov82b, Efetov83} directly linked the nonlinear $\sigma$-models for diffusive systems~\cite{Wegner79, Efetov82a} with RMT, thus providing a strong link between the first and third pillars. Attempts have been made to extend nonlinear $\sigma$-models to ballistic systems, and create a more direct connection specifically to chaotic dynamics~\cite{Muzykantskii95, Agam95, Andreev96, Andreev96b}, however such models are intrinsically ensemble models of stochastic systems and any properties linked to the deterministic nature of chaotic dynamics, especially models of very simple single particle systems or quantum maps, are not naturally built in~\cite{Andreev96}. A semiclassical approach, especially for effects that can be related to short and intermediate time scale dynamics of deterministic (and chaotic) systems, naturally incorporates system specific properties and retains an appreciable advantage for the description of such effects.
More recently, research is shifting dramatically back toward the quantum many-body physics of interacting systems, due in large part to the extraordinary advances in experimental techniques for ultracold systems~\cite{Bloch08} and quantum materials~\cite{Keimer17}. They allow for building, controlling, and monitoring synthetic many-body systems with strongly interacting quantum degrees of freedom. Especially during the last decade, a swiftly expanding field in theoretical physics has formed by combining concepts and methods of quantum chaos with quantum many-body physics. This emerging research, often now subsumed under the designation {\bf \em many-body quantum chaos}, resides at the interfaces of statistical physics, quantum dynamics in atomic and condensed matter physics, and cosmology, harking back to quantum chaos' roots in nuclear many-body physics.
This review will focus on the development of a proper semiclassical theory of interacting many-body systems and its role in addressing various problems in many-body quantum chaos. The theory's strength is its capacity to apply broadly to many-body quantum chaotic systems, including in the statistical RMT-like sense, whilst also having the capacity to address system specific properties of deterministic dynamical systems, and to apply to dynamical systems which are not fully chaotic or are even integrable as well. This is particularly important in investigations in which a system is not behaving in a universal manner. Perhaps it is not quantum thermalizing or generating entanglement as expected, or exhibiting some other curious behavior. A semiclassical approach has the promise to shed light on such situations.
The origin of a modern interacting, many-body system semiclassical theory is quite recent. Indeed, the analogous propagator for interacting many-body systems to the single particle van Vleck - Gutzwiller propagator~\cite{Vanvleck28, Gutzwiller71} has just been developed for both fermionic~\cite{Engl14} and bosonic~\cite{Engl14b} interacting systems, as well as the implementation of a saddle point approximation to the coherent state path integral for interacting bosonic systems~\cite{Tomsovic18}, which is the analogous propagator to generalized Gaussian wave packet dynamics~\cite{Huber88} and one form of a complex, time-dependent WKB theory~\cite{Maslov81}. Moreover, a trace formula over periodic mean field solutions for an interacting bosonic system has now also been developed~\cite{Engl15}, which is the analogous version of the Gutzwiller trace formula over periodic orbits~\cite{Gutzwiller71}.
In Secs.~\ref{Sec:??} we will come back to these ideas and methods that again play an important role in a semiclassical theory of interacting many-particle systems.
\KR{
missing: positioning and embedding of MBQC into adjacent fields;
Wan diagram;
text pieces, presumably helpful:
}
…
For example, studying their dynamics [1,2] has allowed for identifying particular classes of states that fail to quantum thermalize [3]; their evolution towards equilibrium is also particularly interesting, since equilibration goes along with the scrambling of quantum correlations across the systems' many degrees of freedom.
In particular after the proposals on out-of-time-order correlators (OTOCs) [10] and of a universal limit on their growth rates, i.e., a quantum ``bound on chaos'' [11], these aspects of MB chaos and ergodicity have received broad interest, ranging from quantum matter to quantum gravity, in phenomena that may be subsumed under the topic of MB quantum chaos.
\subsection{Outline of this review: \\
from short to late time many-body dynamics}
\KR{the outline of the review: what do we consider in which section ...}
\section{Foundations of Many-Body Semiclassics}
\label{sec:Foundations}
\subsection{Between classical and quantal: semiclassical regimes and limits}
\label{subsec:clas-quant}
What does ``semiclassical'' stand for?
Depending on the community and on the context, there exist various different meanings and notions.
We use ``semiclassical'' in the original sense\footnote{This is the notion used in quantum chaos.}, referring to physics in the crossover regime between classical and quantal.
\KR{the following paragraph could also be put into a longer footnote not to hinder the flow of the main text}
Before elaborating on this in more detail we mention other notions of ``semiclassicality'' that should not be confused with our approach.
First, ``semiclassical'' is very often simply used synonymously for ``classical''.
In particular in condensed matter physics, the purely classical motion of a particle with energy $E(k)$ given by a band structure, or in a potential landscape that is obtained quantum mechanically, is often referred to as ``semiclassical'' \cite{Ashcroft}.
Second, the term may denote the description of a hybrid system composed of a quantum and a classical sector, such as an atom coupled to radiation described in terms of a classical field.
Correspondingly, in semiclassical gravity matter fields are treated quantum mechanically, while the gravitational field remains classical.
Third, on a similar footing, it may refer to quantum systems that assume certain degrees of classicality due to interaction with a (classical) environment.
Fourth, in statistical physics, it may refer to large-temperature expansions … .
Fifth, it may describe approaches accounting for zero-point motion on top of mean field, such as in a truncated Wigner approximation (TWA) calculation.
The semiclassical theory put forward in this review is formally based on an (effective) $\hbar$-expansion of quantum mechanical (MB) Feynman propagators.
Resulting expressions, while being based on classical paths, thereby still inherently comprise interference-based processes and mechanisms.
These are indispensable in order to answer what happens to chaotic quantum systems in the limit where $\hbar$ becomes small but non-zero.
Conceptually this follows a strategy to approach a classical limit starting from fundamental quantum mechanics, instead of, vice versa, thinking classically and adding quantum effects.
If a single-particle quantum system possesses a classical counterpart, the corresponding semiclassical limit is well defined: the often-cited limit $\hbar \rightarrow 0$ (with Planck constant $h = 2\pi \hbar$) refers to an asymptotic expansion in the dimensionless quantity $\hbar/S \ll 1$, where $S=\int p\, dq$ is the typical classical action of the particle with momentum $p$, which grows with energy. Accordingly, the semiclassical limit usually refers to the regime of high excitations or energies and, by means of the de Broglie relation $\lambda = h/p$, corresponds to the limit of small wavelengths $\lambda$.
In the scheme of Fig.~\ref{fig:sc-limits} this short-wavelength limit is associated with the direction along the horizontal axis where wave mechanics approaches the limit of classical particles. Note, however, that the process $\hbar \rightarrow 0$ is singular: the crossover from small $\hbar$ to $\hbar=0$ is not smooth. Such disruptive behavior is the cause of interesting physics in this asymptotic limit.
\begin{figure}
\centering
\caption{Different notions of classicality in quantum mechanical many-body systems}
\label{fig:sc-limits}
\end{figure}
the various semiclassical limits, in particular (i) small $\hbar$ (ii) small $1/N$
During the recent years a novel field in theoretical physics has emerged at the interface of statistical physics, quantum dynamics in atomic and condensed matter and cosmology, which can be subsumed under the topic many-body quantum chaos.
Many systems from all these distinctly different areas have in common that they reside at the semiclassical border between many-body classical chaos and quantum physics, in fact in a two-fold way:
(i) Far-out-of-equilibrium quantum dynamics involves high-energy excitations, associated with the usual short-wavelength limit where wave mechanics approaches the limit of classical particles;
(ii) alternatively, the thermodynamic limit of large particle numbers $N$, where quantum fields pass into nonlinear waves, can also be regarded as semiclassical, governed by an effective Planck constant $1/N$. These two complementary crossover regimes of state-of-the-art many-body physics are experimentally relevant and theoretically particularly challenging.
These various notions associated with ``semiclassicality'' play a role in a broad range of physical fields, ranging from … to …
\subsection{Semiclassics for few-particle systems}
\subsubsection{van Vleck + Gutzwiller}
dynamics vs. chaos
B Miller, helium, H Primack 3d billiard,
\subsubsection{spectral statistics}
invoking ergodicity and hyperbolicity: encounters qualitatively
encounter references here?
\cite{Sieber02}
the various time scales
\subsection{Semiclassical theory of many-body quantum dynamics}
\subsubsection{Earlier approaches}
DFT: Ullmo, Burke, Brack
Strutinsky etc., orb. mag.,
Mention truncated Wigner as classical here ?
\subsubsection{
from single-particle wave- to many-body quantum interference JD}
\begin{itemize}
\item
semiclassics:
\item
the various semiclassical limits, in particular (i) small $\hbar$ (ii) small $1/N$
B Miller, helium, H Primack 3d billiard,
Weidenmüller formula;
Strutinsky etc., orb. mag.,
\item
the various time scales
\end{itemize}
\subsubsection{Semiclassical propagation of many-body quantum fields}
As has been mentioned before, the central object that allows for a semiclassical treatment of interference and related quantum phenomena is the asymptotic ($\hbar \to 0$, $N \to \infty$) form of the many-body propagator, defined here as the matrix element of the time evolution operator (for an autonomous system, for simplicity)
\begin{equation}
K({\rm fi},{\rm in},t)=\langle {\rm fi}|{\rm e}^{-\frac{i}{\hbar}\hat{H}t}|{\rm in}\rangle
\end{equation}
between suitable initial and final states. This is perhaps a good point to stress again that it is the application of asymptotic analysis to the exact path integral representation of the propagator that accounts for the approximate character of the semiclassical approach. All other foundational aspects of the quantum description are kept intact, including the kinematic aspects of its formulation in a Hilbert space, the corresponding role of the superposition principle at the heart of quantum coherence, and the ubiquitous presence of non-classical correlations like entanglement.
This key aspect of the semiclassical program carries over into the many-body context, where a characteristically new aspect of quantum mechanics for systems of identical particles must be further incorporated, namely quantum indistinguishability. In the context of semiclassics of particle systems this in turn requires the explicit (anti-)symmetrization of the propagator by means of the corresponding projection operators onto the bosonic or fermionic subspaces of the total (product) Hilbert space describing the many-body system. Within this picture, the semiclassical approach is then implemented by an extra summation over the elements of the permutation group, weighted by the corresponding characters, acting on the arguments of the usual $n$-particle semiclassical propagator. The whole construction is purely kinematical and is therefore the starting point for both interacting and non-interacting systems of identical particles.
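Schematically, and in a notation of our own choosing, this construction for $N$ identical particles reads
\begin{equation}
K^{\pm}({\rm fi},{\rm in},t)=\frac{1}{N!}\sum_{P\in S_{N}}(\pm 1)^{P}\, K({\rm fi},P\,{\rm in},t)\, ,
\end{equation}
where the sum runs over all permutations $P$ of the particle labels in the initial configuration, with weight $+1$ for bosons and the signature of $P$ for fermions.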
The success of the (anti-)symmetrized
\subsubsection{SC MB propagators: UR approach}
\subsubsection{SC MB propagation: T approach}
outlook post-Ehrenfest (MB) interference mechanisms numerical example of survival prob. in BH (our PRA)
\section{Spectral properties from short-time dynamics}
\begin{itemize}
\item
MB Weyl expansion, Bethe law, "partial fermionization"
\item
finite-$N$ quantum thermodynamics
\item
further topics ?
\end{itemize}
\section{Spectral and dynamical properties from post-Ehrenfest quantum interference}
\subsection{Rewinding time I: Echoes}
\begin{itemize}
\item
coherent backscattering
\item
MB Loschmidt + spin echo
\item
survival probability (interference beyond TWA, our PRA/PRL, dynamical localization, Saclay/Liege/Pullman)
\end{itemize}
\subsection{Rewinding time II: OTOCs}
\begin{itemize}
\item
OTOCs for hyperbolic systems (Josef PRL, Rodolfo${ }^\ast$, etc.)
\item
OTOCs for critical systems (Benni's PRLs)
\end{itemize}
\subsection{Spectra}
\begin{itemize}
\item
Bose-Hubbard spectra (Steve)
\item
Gutzwiller trace formula (R., Bristol${ }^\ast$ )
\item
fixed $N$, small $\hbar$:
trace formula for Lieb-Liniger?
\item
(Large) spin models: Duisburg; role of specific MB POs
\item
universal spectral fluctuations${ }^\ast$ (Bristol, Duisburg/R.)
RMT for spin models + SC${ }^\ast$ : Prozen
\item
Discrete spectrum effects around ESQPTs ${ }^\ast$ (Regensburg, Prague, Argentinians)
\end{itemize}
\section{Miscellaneous}
\begin{itemize}
\item
MB scars
\item
states, wave functions,
\item
dualites: time-particle number; Boris; T Prosen / B Bertini papers
\item
spatio-temporal maps: Boris; Predrag
\end{itemize}
\section{Perspectives}
\begin{itemize}
\item
SC and ETH
\item
SC and (dynamical) localization
\item
SC and entropies
(entanglement entropy (Argentinians, Arrul?)
\item
SC beyond the Heisenberg time
\item
SC and renormalization, AdS/CFT correspondence
\item
SC and its role in quantum gravity models (SSS, Altland, R.?)
\item
SC and ...
\end{itemize}
\section{Literature}
Overviews:
Denis review
Semiclassics: The hidden theory behind the success of DFT, 10 May 2021, Pavel Okun, Kieron Burke\\ \\
\bibliographystyle{unsrt}
\section{INTRODUCTION} \label{sec:introduction}
Parity-violating physics in the early universe may cause an effect
known as cosmic birefringence, in which photons with different
polarizations travel differently along their propagation paths,
resulting in a net rotation on the polarization directions of cosmic
microwave background (CMB) photons. Such an effect can arise from many
types of beyond-the-Standard-Model physics, such as from the coupling
between axion-like particles and photons through a
Chern-Simons interaction (see, e.g., \cite{Li:2008}), from pseudoscalar
fields introduced in early dark energy models to resolve the Hubble
tension \cite{Capparelli:2020}, or from primordial magnetic fields
through Faraday rotation (see, e.g., \cite{Kosowsky:1996:FR}).
Cosmic birefringence can cause both isotropic and anisotropic
rotation of the microwave background polarization. Since
the polarization field is dominated by an E-mode signal from primordial
density perturbations, small rotations of polarization effectively turn
E-mode into B-mode polarization, leaving observable imprints in the
polarization power spectra. Isotropic birefringence, in particular, leads to non-zero
parity-odd power spectra in the CMB including TB and EB (see, e.g.,
\cite{Li:2008, Zhai:2020a}). Various experiments have placed
constraints on isotropic rotation angle, such as Planck \cite{Planck:2016soo},
WMAP \cite{2011}, and ACT \cite{ACT:2020frw}.
The observational challenge in constraining
isotropic birefringence is that its effect is highly degenerate
with that of a calibration error in the orientation of polarized detectors
(see, e.g., \cite{Keating:2013,Kaufman:2014}).
Anisotropic birefringence, on the other hand, leads
only to parity-even spectra and contributes non-negligibly
to the B-mode power spectrum. Anisotropic rotation also induces
off-diagonal correlations in the microwave background multipoles, which allows
reconstruction of the anisotropic rotation field using a quadratic estimator
approach similar to lensing reconstruction of the deflection field (see, e.g.,
\cite{Gluscevic:2009,Yadav:2012a,Namikawa:2017}). Such an effect has been used
to derive observational constraints on anisotropic rotation; for example,
Planck \cite{PlanckCollaboration:2016}, BICEP2 / Keck \cite{BICEP2Collaboration:2017},
ACT \cite{Namikawa:2020}, and SPT \cite{Bianchini:2020} have all derived upper bounds on
an anisotropic rotation field with a scale-invariant power spectrum.
Despite the physical importance of a possible rotation field, to our knowledge
no publicly available codes exist that compute CMB power spectra from cosmic
birefringence. Here we present a modified
version of \texttt{class}
\cite{software:class}\footnote{\url{https://github.com/lesgourg/class_public}},
named
\texttt{class\_rot}\footnote{\url{https://github.com/catketchup/class_rot}},
which implements this calculation and allows for fast computation of
the rotated EB, TB, EE, and BB power spectra due to both
isotropic and anisotropic rotation from cosmic birefringence. In particular, we
implement a non-perturbative calculation based on the angular
correlation function of the rotation field \cite{Li:2008,Li:2013}.
Our code has an accuracy better than 1\% at all multipoles from
$l=2$ to $l=4000$, which we verify through comparison with power
spectra of simulated sky maps including random rotation fields.
This paper is structured as follows. In Sec.~\ref{sec:rotation}, we
describe the basics of cosmic birefringence. In Sec.~\ref{sec:rotated
ps} we show the non-perturbative calculation method that is implemented in
\texttt{class\_rot}, focusing on the effect of cosmic birefringence on
the CMB power spectra. In Sec.~\ref{sec:code}, we demonstrate the code
implementation and give usage examples, and we present
comparisons between the results from \texttt{class\_rot} and numerical
simulations.
Sec.~\ref{sec:conclusion} provides a brief concluding discussion about the uses
of this code in the context of current and upcoming experiments.
\section{COSMIC ROTATION FIELD}
\label{sec:rotation}
The rotation effect from cosmic birefringence can be effectively
expressed as a rotation field $\alpha(\hat{\bm{n}})$, which can have
both an isotropic part and an anisotropic part \cite{Zhai:2020a},
given by
\begin{equation}
\label{eq:alpha}
\alpha(\hat{\bm{n}})=\bar{\alpha}+\delta \alpha(\hat{\bm{n}}),
\end{equation}
with $\bar{\alpha}$ the isotropic part, and
$\delta \alpha(\hat{\bm{n}})$ the anisotropic part with a zero mean,
\begin{equation}
\label{eq:rotation parts}
\expect{\delta \alpha(\hat{\bm{n}})}=0.
\end{equation}
As a result of rotation, the Stokes parameters $Q$ and $U$ transform as
\begin{equation}
\label{eq:rotation}
(\tilde{Q} \pm i \tilde{U})(\hat{\bm{n}})=\exp (\pm i 2 \alpha(\hat{\bm{n}}))(Q \pm i U)(\hat{\bm{n}}),
\end{equation}
where we have used tildes to denote rotated quantities.
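In terms of the real Stokes parameters, Eq.~\eqref{eq:rotation} is simply a rotation by $2\alpha$. A minimal numerical sketch (function and array names are ours, not part of \texttt{class\_rot}):

```python
import numpy as np

def rotate_stokes(Q, U, alpha):
    """Apply (Q + iU) -> exp(2i*alpha) (Q + iU) pixel by pixel.

    Q, U : arrays of Stokes parameters; alpha : rotation angle in radians,
    either a scalar (isotropic) or an array of the same shape (anisotropic).
    """
    c, s = np.cos(2 * alpha), np.sin(2 * alpha)
    return Q * c - U * s, Q * s + U * c
```

For instance, a rotation by $\alpha=\pi/4$ maps $(Q,U)$ to $(-U,Q)$.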
To illustrate how such a rotation field can arise from parity-violating
physics in the early universe, consider for example a
Chern-Simons-type interaction of photons and axions
with a Lagrangian given by
\begin{equation}
\label{eq:cs term}
\mathcal{L}_{c s}=\frac{\beta \phi}{2 M} F^{\mu \nu} \tilde{F}_{\mu \nu},
\end{equation}
where $\beta$ is a dimensionless coupling constant, $\phi$ is the
axion field, $M$ is its mass scale, and $F^{\mu \nu}$ is the
electromagnetic tensor with $\tilde{F}_{\mu \nu}$ being its dual. This term
modifies the Euler-Lagrange equations for electromagnetic field and induces a
rotation in the polarization direction of a photon if $\phi$
varies along its propagation path \cite{1997PhRvD..55.6760C, 1998PhRvD..58k6002C,Leon:2017}, with the rotation
angle given by
\begin{equation}
\label{eq:alpha and phi}
\alpha=\frac{\beta}{M} \Delta \phi,
\end{equation}
where $\Delta \phi$ is the change of $\phi$ along the photon path.
In the case that the axion field $\phi$ is spatially
homogeneous, Eq.~\eqref{eq:alpha and phi} introduces an
isotropic rotation field to the CMB; an inhomogeneous axion field
gives an anisotropic rotation field in the CMB.
A convenient way to express an anisotropic rotation field,
$\alpha(\hat{\bm{n}})$, is to expand it in the basis of spherical
harmonics as
\begin{equation}
\label{eq:alpha alm}
\delta \alpha(\hat{\bm{n}})=\sum_{L M} \alpha_{L M} Y_{L M}(\hat{\bm{n}}).
\end{equation}
We assume that $\alpha(\hat{\bm{n}})$ follows Gaussian random
statistics, in which case the statistical information of the rotation
field $\alpha(\hat{\bm{n}})$ can be completely specified by its power
spectrum $C_L^{\alpha\alpha}$, given by
\begin{equation}
\label{eq:alpha ps}
\expect{\alpha_{L M} \alpha^{*}_{L' M'}} = \delta_{L L'}\delta_{M M'}C_{L}^{\alpha\alpha}.
\end{equation}
In this paper we only consider a scale-invariant power spectrum of
the anisotropic rotation field, which is physically well-motivated
\cite{2011PhRvD..84d3504C}, though the formalism presented here is broadly
applicable to an arbitrary rotation field power spectrum. Following the convention in \cite{Abazajian:2019eic}, we parametrize a scale-invariant power spectrum as
\begin{equation}
\label{eq:cl_aa}
\frac{L(L+1)}{2 \pi} C_{L}^{\alpha \alpha}=A_{C B},
\end{equation}
with $A_{CB}$ the amplitude of the cosmic birefringence power
spectrum\footnote{Note that $A_{CB}$ defined in this paper is $10^{-4}$ times that in \cite{Namikawa:2020} and $10^{-5}$ times that in \cite{Namikawa:2017}.}.
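Concretely, Eq.~\eqref{eq:cl_aa} fixes $C_{L}^{\alpha \alpha} = 2\pi A_{CB}/[L(L+1)]$; a short sketch of this conversion (names are illustrative, not the \texttt{class\_rot} API):

```python
import numpy as np

def cl_alpha_scale_invariant(A_CB, L_max):
    """Scale-invariant rotation spectrum with L(L+1) C_L / (2 pi) = A_CB.

    Returns C_L for L = 0..L_max; the ill-defined monopole is set to zero.
    """
    L = np.arange(L_max + 1)
    C = np.zeros(L_max + 1)
    C[1:] = 2 * np.pi * A_CB / (L[1:] * (L[1:] + 1))
    return C
```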
\section{Impacts on Microwave Background Polarization Power Spectra}
\label{sec:rotated ps}
\begin{figure}[t]
\includegraphics[width=0.45\textwidth]{./figs/ps.pdf}
\caption{Microwave background polarization BB power spectrum contributions from a scale-invariant tensor mode ($r=0.004$), gravitational lensing, isotropic rotation ($\bar{\alpha}=0.1^{\circ}$) and scale-invariant anisotropic rotation ($A_{CB}=10^{-5}$) are given in the upper panel. The absolute TB and EB power spectra from isotropic rotation ($\bar{\alpha}=0.1^{\circ}$) are shown in the lower panel.}
\label{fig:ps.pdf}
\centering
\end{figure}
In this section, we briefly review the rotated CMB power spectra calculation implemented in \texttt{class\_rot}. We consider a rotation field with both an isotropic contribution and a Gaussian random anisotropic contribution, as described in Eq.~\eqref{eq:alpha}. We adopt the non-perturbative method introduced in \cite{Li:2008,Li:2013}, which is similar to the calculation method of lensed CMB power spectra in \cite{Challinor:2005}. Here we review only the non-perturbative calculations relevant to the implementation of \texttt{class\_rot}; we refer interested readers to \cite{Li:2008,Li:2013} for further details.
In this method, the starting point is to connect the real-space correlation functions of rotated quantities, such as $\tilde{T}(\hat{\bm{n}})$, $\tilde{Q}(\hat{\bm{n}})$, and $\tilde{U}(\hat{\bm{n}})$, to the rotated power spectra, e.g., $\tilde{C}_{\ell'}^{E E}$, $\tilde{C}_{\ell'}^{B B}$, with
\begin{equation}
\label{eq:xi spherical}
\begin{aligned}
\tilde{\xi}_{+}(\beta) &\equiv\left\langle(\tilde{Q}+i \tilde{U})^{*}(\hat{\bm{n}})(\tilde{Q}+i \tilde{U})\left(\hat{\bm{n}}^{\prime}\right)\right\rangle\\
&= \sum_{\ell'} \frac{2\ell'+1}{4 \pi}\left(\tilde{C}_{\ell'}^{E E}+\tilde{C}_{\ell'}^{B B}\right) d_{22}^{\ell'}(\beta),\\
\tilde{\xi}_{-}(\beta) &\equiv\left\langle(\tilde{Q}+i \tilde{U})(\hat{\bm{n}})(\tilde{Q}+i \tilde{U})\left(\hat{\bm{n}}^{\prime}\right)\right\rangle\\
&= \sum_{\ell'} \frac{2\ell'+1}{4 \pi}\left(\tilde{C}_{\ell'}^{E E}-\tilde{C}_{\ell'}^{B B}+2 i \tilde{C}_{\ell'}^{E B}\right) d_{-22}^{\ell'}(\beta), \\
\tilde{\xi}_{X}(\beta) &\equiv \left\langle T(\hat{\bm{n}})(\tilde{Q}+i \tilde{U})\left(\hat{\bm{n}}^{\prime}\right)\right\rangle\\
&= -\sum_{\ell'} \frac{2\ell'+1}{4 \pi}\left(\tilde{C}_{\ell'}^{T E}+i \tilde{C}_{\ell'}^{T B}\right) d_{02}^{\ell'}(\beta),
\end{aligned}
\end{equation}
where $\hat{\bm{n}}$ and $\hat{\bm{n}}^{\prime}$ are two directions in the spherical coordinate system, $\cos\beta = \hat{\bm{n}} \cdot \hat{\bm{n}}^{\prime}$, and $d_{mm'}^{\ell}$ is the Wigner d-function. Taking advantage of the orthogonality relations of Wigner d-functions,
\begin{equation}
\label{eq:w-d orthogonality}
\int_{-1}^{1} d \cos \beta\: d_{mk}^{\ell}(\beta) d_{m'k'}^{\ell'}(\beta) = \frac{2}{2\ell+1}\delta_{mm'}\delta_{kk'}\delta_{\ell \ell'},
\end{equation}
one can invert Eq.~\eqref{eq:xi spherical} to express rotated power spectra in terms of correlation functions, such as
\begin{equation}
\label{eq:xi reverse}
\tilde{C}_{\ell}^{E E}+\tilde{C}_{\ell}^{B B}=2 \pi \int_{-1}^{1} d \cos \beta\:\tilde{\xi}_{+}(\beta) d_{22}^{\ell}(\beta) .
\end{equation}
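As a sanity check, the orthogonality relation Eq.~\eqref{eq:w-d orthogonality} can be verified numerically in the special case $m=k=0$, where $d_{00}^{\ell}=P_{\ell}$, using Gauss--Legendre quadrature (a standalone sketch, independent of \texttt{class\_rot}):

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

# Gauss-Legendre nodes/weights: exact for polynomials up to degree 2n - 1
x, w = leggauss(16)

def legendre_overlap(l1, l2):
    """Numerically evaluate int_{-1}^{1} P_{l1}(x) P_{l2}(x) dx."""
    c1 = np.zeros(l1 + 1)
    c1[l1] = 1.0
    c2 = np.zeros(l2 + 1)
    c2[l2] = 1.0
    return np.sum(w * legval(x, c1) * legval(x, c2))
```

The result reproduces $2\,\delta_{\ell\ell'}/(2\ell+1)$, e.g.\ $2/7$ for $\ell=\ell'=3$ and zero for $\ell\neq\ell'$.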
Applying Eq.~\eqref{eq:rotation}, $\tilde{\xi}_{+}(\beta)$ can be expressed by un-rotated quantities as
\begin{equation}
\label{eq:xi}
\tilde{\xi}_{+}(\beta) =e^{-4C^{\alpha}(0)+4C^{\alpha}(\beta)}\sum_{\ell'}\frac{2\ell'+1}{4\pi}(C_{\ell'}^{EE}+C_{\ell'}^{BB})d_{22}^{\ell'}(\beta).
\end{equation}
Here $C^{\alpha}(\beta)$ is the correlation function of rotation angles in the two directions separated by $\beta$ and can be expressed as
\begin{equation}
\label{eq:cla}
\begin{aligned}
C^{\alpha}(\beta)=\left\langle\delta \alpha\left(\hat{\bm{n}}_{1}\right) \delta \alpha\left(\hat{\bm{n}}_{2}\right)\right\rangle=&\ \sum_{L} \frac{2 L+1}{4 \pi} C_{L}^{\alpha \alpha} P_{L}(\cos \beta)\\
=&\ \sum_{L} \frac{2 L+1}{4 \pi} C_{L}^{\alpha \alpha} d_{00}^{L}(\beta),
\end{aligned}
\end{equation}
where $C_{L}^{\alpha \alpha}$ is a generic rotation field power spectrum introduced in Eq.~\eqref{eq:alpha ps}, $P_{L}(\cos \beta)$ is the Legendre Polynomial, and we have applied $P_{L}(\cos \beta) = d_{00}^{L}(\beta)$.
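The Legendre sum in Eq.~\eqref{eq:cla} can be evaluated directly; a sketch (our own helper, not the \texttt{class\_rot} internals):

```python
import numpy as np
from numpy.polynomial.legendre import legval

def corr_alpha(cos_beta, C_L_alpha):
    """C^alpha(beta) = sum_L (2L+1)/(4 pi) C_L^{alpha alpha} P_L(cos beta).

    C_L_alpha : array of rotation-spectrum values for L = 0..L_max.
    """
    L = np.arange(len(C_L_alpha))
    coeffs = (2 * L + 1) / (4 * np.pi) * C_L_alpha
    return legval(cos_beta, coeffs)
```

In particular, at zero separation ($\cos\beta=1$) this reduces to $C^{\alpha}(0)=\sum_{L}(2L+1)C_{L}^{\alpha\alpha}/4\pi$, the variance entering the exponential damping factors.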
Equipped with Eq.~\eqref{eq:xi}, Eq.~\eqref{eq:xi reverse} can be written as
\begin{equation}
\label{eq:rotated ps EE BB}
\begin{aligned}
\tilde{C}_{\ell}^{E E}+\tilde{C}_{\ell}^{B B} &=\frac{1}{2} e^{-4 C^{\alpha}(0)} \int d\cos \beta\: e^{4C^{\alpha}(\beta)} d_{22}^{\ell}(\beta) \\ &\left[ \sum_{\ell'}(2\ell'+1)(C_{\ell'}^{EE}+C_{\ell'}^{BB})d_{22}^{\ell'}(\beta)\right].
\end{aligned}
\end{equation}
Similarly, one can also obtain
\begin{equation}
\label{eq:rotated ps}
\begin{aligned}
\tilde{C}_{\ell}^{T E} &=C_{\ell}^{T E} \cos (2 \bar{\alpha}) e^{-2 C^{\alpha}(0)},\\
\tilde{C}_{\ell}^{T B} &=C_{\ell}^{T E} \sin (2 \bar{\alpha}) e^{-2 C^{\alpha}(0)},\\
\tilde{C}_{\ell}^{E E}-\tilde{C}_{\ell}^{B B} &=\frac{1}{2} e^{-4 C^{\alpha}(0)}\cos 4\bar{\alpha} \int d\cos \beta\: e^{-4C^{\alpha}(\beta)} d_{-22}^{\ell}(\beta)\\ &\left[ \sum_{\ell'}(2\ell'+1)(C_{\ell'}^{EE}-C_{\ell'}^{BB})d_{-22}^{\ell'}(\beta)\right],\\
\tilde{C}_{\ell}^{E B} &=\frac{1}{2} e^{-4 C^{\alpha}(0)} \sin 4\bar{\alpha} \int d\cos \beta\: e^{-4C^{\alpha}(\beta)} d_{-22}^{\ell}(\beta)\\ &\left[ \sum_{\ell'}(2\ell'+1)(C_{\ell'}^{EE}-C_{\ell'}^{BB})d_{-22}^{\ell'}(\beta)\right].
\end{aligned}
\end{equation}
Note that the rotated CMB EE, BB and EB power spectra in Eq.~\eqref{eq:rotated ps EE BB} and Eq.~\eqref{eq:rotated ps} are given by real-space integrals, which avoids computationally expensive convolutions in $\ell m$ space. A similar strategy of using real-space integrals instead of $\ell m$-space convolutions, which significantly reduces the computational cost, can be found in delensing calculations \cite{Smith:2012}. Also note that we have ignored the correlations between the rotation field and both the CMB temperature and (unrotated) E-polarization fields, which may arise in certain axion-like models, such as models with a nonzero potential under adiabatic initial conditions \cite{2011PhRvD..84d3504C}. A similar calculation that takes these correlations into account can be found in \cite{Zhai:2020a}.
We can see from Eq.~\eqref{eq:rotated ps EE BB} and Eq.~\eqref{eq:rotated ps} that both isotropic and anisotropic rotations contribute to the BB power spectrum. In the upper panel of Fig.~\ref{fig:ps.pdf}, we show the BB power spectrum contributed by an isotropic rotation field with $\bar{\alpha}=0.1^{\circ}$ and by a scale-invariant anisotropic rotation field with $A_{CB}=10^{-5}$, respectively. As a comparison, we also show the contribution from the primordial tensor mode with $r=0.004$, where $r$ is the tensor-to-scalar ratio, and the contribution from CMB lensing. One can see that the B-mode signal from rotation fields can be larger than that from the primordial tensor mode at $\ell \gtrsim 150$, which suggests that, apart from probing parity-violating physics, the rotation field is also an important systematic when searching for the primordial tensor mode. We also note that the rotation field generally contributes less than CMB lensing to B-mode polarization; this suggests that the ability to ``de-lens'' the CMB will help tighten the constraints on cosmic birefringence.
From Eq.~\eqref{eq:rotated ps} we can also see that both $\tilde{C}_{\ell}^{T B}$ and $\tilde{C}_{\ell}^{E B}$ become non-zero when $\bar{\alpha}$ is non-zero; this is consistent with the fact that an isotropic rotation field violates parity symmetry and induces odd-parity CMB power spectra (see the lower panel of Fig.~\ref{fig:ps.pdf} for example).
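As a simple limiting case of these formulas, consider a purely isotropic rotation ($A_{CB}=0$, so that $C^{\alpha}(\beta)=C^{\alpha}(0)=0$): the real-space integrals then collapse via the orthogonality of the Wigner $d$-functions, and the rotated spectra reduce to trigonometric mixings of the unrotated ones. The following Python sketch implements this limit; the function name and interface are ours for illustration and are not part of \texttt{class\_rot}.

\begin{lstlisting}[language=Python]
import numpy as np

def rotate_spectra_isotropic(cl_ee, cl_bb, cl_te, alpha_deg):
    # Purely isotropic rotation (A_CB = 0): C^alpha(beta) = 0, so
    # EE+BB is invariant, EE-BB is suppressed by cos(4 alpha), and
    # TE/TB mix through cos(2 alpha) / sin(2 alpha).
    a = np.deg2rad(alpha_deg)
    s = cl_ee + cl_bb                    # invariant combination
    d = (cl_ee - cl_bb) * np.cos(4 * a)  # suppressed combination
    return {
        "ee": 0.5 * (s + d),
        "bb": 0.5 * (s - d),
        "eb": 0.5 * (cl_ee - cl_bb) * np.sin(4 * a),
        "te": cl_te * np.cos(2 * a),
        "tb": cl_te * np.sin(2 * a),
    }
\end{lstlisting}

For example, with $C_\ell^{BB}=0$ this gives $\tilde{C}_\ell^{BB}=\sin^2(2\bar{\alpha})\,C_\ell^{EE}$, the familiar leakage of E-modes into B-modes induced by a constant rotation angle.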
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{./figs/ps_sims.pdf}
\caption{Rotated CMB BB, TB and EB power spectra from simulations and theory. The theory curves are calculated by \texttt{class\_rot}. The parameters are chosen as $r=0.004$, $\bar{\alpha}=0.1^{\circ}$, and $A_{CB}=10^{-5}$.}
\label{fig:ps_sims.pdf}
\end{figure}
\section{The Software Package}
\label{sec:code}
In this section, we describe briefly the implementation of \texttt{class\_rot}, give usage examples of its Python interface, and show comparisons to numerical simulations.
\vspace{0.2cm}
\textbf{Code implementation:}
In \texttt{class\_rot}, the calculations described in Sec.~\ref{sec:rotated ps} are implemented as a new module in \texttt{class}, contained in \texttt{rotation.c} \source{rotation.c}. Internally, this \texttt{rotation} module takes the power spectra calculated by the \texttt{harmonic} module as inputs; by doing so, we have implicitly neglected the effect of CMB lensing when calculating the rotated power spectra. This assumption significantly simplifies our code implementation and only leads to sub-percent to percent-level errors due to the smallness of $C_\ell^{BB}$ relative to $C_\ell^{EE}$; incorporating the effect of CMB lensing in the \texttt{rotation} module will be the subject of future work.
The \texttt{rotation} module can be turned on by specifying \texttt{rotation = yes} in the parameter file, and it takes two additional parameters that specify the rotation field, \texttt{alpha} and \texttt{A\_cb}, which correspond to $\bar{\alpha}$, in units of degrees, and $A_{CB}$, in radians as defined in Eq.~\eqref{eq:cl_aa}, respectively. The rest of the parameters are identical to those in \texttt{class}. Note that by using $A_{CB}$ we implicitly assume that the rotation field follows a scale-invariant power spectrum -- a choice of preference rather than necessity; other rotation power spectra can be implemented by changing the \texttt{rotation\_cl\_aa\_at\_l} function defined in \texttt{rotation.c} \source{rotation.c}. We leave support for taking a generic rotation power spectrum as input to future work.
The parameters can be specified in a parameter file and passed to the compiled \texttt{class} binary executable, in the same way as in the original \texttt{class}. An example parameter file, \texttt{explanatory\_ROT.ini} \href{https://github.com/catketchup/class_rot/blob/main/explanatory_ROT.ini}{{\codeicon}}\, is also provided as part of \texttt{class\_rot} to illustrate the use of these parameters. Note that this parameter file is only needed when calling \texttt{class\_rot} from the command-line interface using its compiled binary executable. We have also provided Python bindings for the functions in the rotation module, allowing them to be called from the Python interface; we show some usage examples below.
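For reference, a minimal parameter-file fragment enabling the rotation module might look as follows; the values are illustrative, and \texttt{explanatory\_ROT.ini} documents the complete set of options.

\begin{lstlisting}
# rotated CMB output requires rCl in the output list
output = tCl,pCl,rCl
rotation = yes
alpha = 0.1   # isotropic rotation angle, in degrees
A_cb = 1e-5   # amplitude of the scale-invariant rotation power spectrum
\end{lstlisting}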
\vspace{0.2cm}
\textbf{Usage example:}
Here we give an example of how to calculate the rotated CMB power spectra using the Python interface of \texttt{class\_rot}:
\begin{lstlisting}[language=Python]
from classy import Class
params = {
    "output": "tCl,pCl,rCl",
    "l_max_scalars": 4000,
    "rotation": "yes",
    "alpha": 0.1,
    "A_cb": 1E-5,
}
cosmo = Class()
cosmo.set(params)
cosmo.compute(level=["rotation"])
cosmo.rotated_cl()
\end{lstlisting}
One can see that \texttt{class\_rot} is meant to be used as a drop-in replacement for the original \texttt{class}, as it is imported the same way and follows the same usage pattern. The parameters are specified in a Python dictionary, \texttt{params}, and passed to the \texttt{cosmo} object. Note that it is important to include \texttt{rCl} in the \texttt{output} option, as it is required for computing the rotated power spectra. The option \texttt{rotation} turns on the rotation module when its value is \texttt{yes}; \texttt{alpha} and \texttt{A\_cb} specify the rotation parameters, just as in a parameter file. Also note that when computing a cosmological model with the function \texttt{cosmo.compute()}, one needs to include \texttt{level=["rotation"]} so that the rotation module and its dependencies are initialized properly. After running \texttt{cosmo.compute()}, the rotated power spectra can be obtained by calling \texttt{cosmo.rotated\_cl()}, which returns a Python dictionary following the convention of \texttt{class}. This illustrates the basic usage of \texttt{class\_rot}; we refer interested readers to the bundled Jupyter notebooks in \texttt{class\_rot} for more detailed examples and explanations \href{https://github.com/catketchup/class_rot/blob/main/notebooks_rot}{{\codeicon}}.
\vspace{0.2cm}
\textbf{Comparison with simulations:}
To demonstrate the accuracy of \texttt{class\_rot}, we compare the rotated CMB power spectra from \texttt{class\_rot} with those from full-sky simulations. In particular, we first generate 100 realizations of unrotated CMB maps in T, Q, and U based on a fiducial model given by the best-fit cosmology from Planck 2018 \cite{Planck2018:VI:CP} with $l_{\rm max} = 6000$. Additionally, we set a non-zero tensor-to-scalar ratio $r=0.004$. Next, we generate 100 realizations of a full-sky rotation map with $\bar{\alpha}=0.1^{\circ}$ and $A_{CB}=10^{-5}$, which are then used to rotate the realizations of unrotated CMB maps. These full-sky simulations are generated using \texttt{pixell} \cite{2021ascl.soft02003N} in rectangular pixelization and CAR projection with a resolution of 1 arcminute. We apply each rotation field to one realization of the simulated CMB maps in pixel space using Eq.~\eqref{eq:rotation} and then calculate the power spectra after rotation. We repeat this procedure for each realization to obtain 100 sets of rotated CMB power spectra.
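The pixel-space rotation of each simulated map amounts to rotating the Stokes parameters pixel by pixel, $(Q+iU)\rightarrow(Q+iU)e^{2i\alpha}$, up to the sign convention of Eq.~\eqref{eq:rotation}. A minimal \texttt{numpy} sketch of this step (a helper of our own, not part of \texttt{pixell}) is:

\begin{lstlisting}[language=Python]
import numpy as np

def rotate_qu(q, u, alpha):
    # Rotate Stokes Q/U maps by the rotation field alpha (radians),
    # applied pixel-by-pixel: (Q + iU) -> (Q + iU) e^{2 i alpha}.
    p = (q + 1j * u) * np.exp(2j * alpha)
    return p.real, p.imag
\end{lstlisting}

Note that the rotation leaves the polarization amplitude $Q^2+U^2$ unchanged in every pixel; only the polarization angle is rotated.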
In Fig.~\ref{fig:ps_sims.pdf}, we show the average of the 100 realizations of rotated power spectra in comparison to the corresponding theory spectra obtained from \texttt{class\_rot}. One can clearly see that the output of \texttt{class\_rot} is in excellent agreement with the simulations.
For $C_\ell^{BB}$ we estimate an error of $\lesssim 1\%$ at $\ell\lesssim 4000$; the accuracy noticeably degrades at larger $\ell$, likely due to a combination of pixel effects, numerical precision, and the smallness of the signal of interest. Both $C_\ell^{TB}$ and $C_\ell^{EB}$ from \texttt{class\_rot} agree with the simulations within the expected cosmic variance of the averaged power spectra up to $\ell = 6000$, which is the highest multipole we have tested.
\section{Discussion and Conclusion}
\label{sec:conclusion}
In this paper we present \texttt{class\_rot}, a new publicly available
modified \texttt{class} code, which calculates rotated CMB power
spectra from cosmic birefringence using a non-perturbative
method. \texttt{class\_rot} supports both isotropic and anisotropic
rotations, as can be specified by the isotropic rotation angle,
$\bar{\alpha}$, and the amplitude of scale-invariant rotation power
spectrum, $A_{CB}$, respectively. Hence, \texttt{class\_rot} can be
effectively used to search for cosmic birefringence signals
that feature a scale-invariant rotation power spectrum or an isotropic
rotation of the CMB polarization, such as that from the coupling
between axion-like particles and photons via a Chern-Simons interaction.
We leave the implementation of a more generic (i.e., not scale-invariant)
rotation power spectrum in \texttt{class\_rot} to future work, which
will allow us to search for a broader range of rotation signals, such
as those caused by Faraday rotation from primordial magnetic fields, which,
depending on their generation mechanism, may induce a rotation field that is
not scale-invariant (see \cite{2013A&ARv..21...62D} for a review).
In this paper we have also briefly reviewed the non-perturbative calculation
implemented in \texttt{class\_rot}, which makes use of the angular correlation
function of the rotation field and does not require the rotation to be perturbatively
small. Hence the calculation in \texttt{class\_rot} offers a broader range of
applicability. We leave the implementation of a perturbative calculation, as
well as a detailed comparison between the non-perturbative and perturbative methods
in terms of both speed and accuracy, to future work.
We have briefly described the coding implementation and given an example of how
to use \texttt{class\_rot} with its Python interface. To demonstrate its accuracy we
have compared the rotated CMB power spectra such as BB, TB, and EB obtained
from \texttt{class\_rot} to full-sky simulations and shown that they are in
good agreement, with $\lesssim 1\%$ error. Upcoming experiments are expected to
constrain cosmic birefringence with much higher precision. For example, while the current best limits lie around $\mathcal{O}(10')$ for isotropic rotation \cite{Planck:2016soo,ACT:2020frw} and around $\mathcal{O}(10^{-6})$ for $A_{CB}$ \cite{Namikawa:2020,Bianchini:2020}, it has been forecasted that Simons Observatory \cite{SO:2019:SciGoal} can improve the current limits by nearly an order of magnitude, achieving an uncertainty level of around 0.7$'$ for isotropic rotation and around $10^{-7}$ for $A_{CB}$ \cite{Pogosian:2019}. These limits will be further improved by the CMB-S4 experiment \cite{S4:2016:SciBook}, reaching an uncertainty level of around $0.2'$ for isotropic rotation \cite{Abazajian:2019eic} and around $10^{-8}$ for $A_{CB}$ \cite{Pogosian:2019}; this will allow for percent-level determinations of $\bar{\alpha}$ and $A_{CB}$ should there be a cosmic birefringence signal at our current observational limit. In light of these future prospects, it is important to have a robust code that computes the effect of cosmic birefringence in power spectra with better than percent-level accuracy. Hence, \texttt{class\_rot} can be a powerful tool for searches of cosmic birefringence signal in the future.
\section*{Acknowledgments}
We thank Arthur Kosowsky for very helpful comments. We thank Toshiya Namikawa, J. Colin Hill, Mathew S. Madhavacheril, and
Lisong Chen for useful discussion. This work uses resources of the
National Energy Research Scientific Computing Center, and open-source
software including \texttt{healpy} \cite{2019JOSS....4.1298Z} and
\texttt{pixell} \cite{2021ascl.soft02003N}.
| 8,248 |
\section{Introduction}
No one has missed the AI surge in the last decade. There is an ever-increasing number of AI applications available as enterprises across domains seek to harness the promises of AI technology. Enabled by the growing availability of data, most of the AI success stories in recent years originate in solutions dominated by Machine Learning (ML)~\cite{giray2021software}. Where human programmers previously had to express all logic in source code, ML models can now be trained on huge sets of annotated data -- for certain tasks, this works tremendously well. Andrej Karpathy, AI Director at Tesla, somewhat cheekily refers to development according to the ML paradigm as ``Software 2.0''\footnote{bit.ly/3dKeUEH}. For many applications seeking a mapping from input to output, it is easier to collect and annotate high-quality data than to explicitly program a mapping function in code.
Agile software development has become the norm in the software engineering industry. Flexibly adapting to change has proven to be a recipe to reap some of the benefits of software -- significant changes can often occur at any time, both during a development project and post-release. Quickly adapting to shifting customer needs and technology changes is often vital to survival in a competitive market. In this light, the concept of DevOps has emerged as an approach to minimize time to market while maintaining quality~\cite{Ebert2016}. While agile development is particularly suitable for customer-oriented development in the Internet era, it is also increasingly used in embedded systems development of a more critical nature~\cite{diebold2018agile}, with adaptations such as SafeScrum~\cite{hanssen2018safescrum}. Moreover, while agile software development is flexible, we argue that ML development iterates even faster -- and thus necessitates ``agility on steroids.''
Data scientists often conduct the highly iterative development of ML models. Data scientists, representing a new type of software professional, often do not have the software engineering training of conventional software developers~\cite{kim2016emerging}. This observation is analogous to what has been reported for developers of scientific computing in the past, e.g., regarding their familiarity with agile practices~\cite{sletholt2011we}. Instead of prioritizing the crafts of software engineering and computer science, many data scientists focus on mastering the art of taming data into shapes that are suitable for model training -- typically using domain knowledge to hunt quantitative accuracy targets for a specific application. The ML development process is experimental in nature and involves iterating between several intertwined activities, e.g., data collection, data preprocessing, feature engineering, model selection, model evaluation, and hyperparameter tuning. An unfortunate characteristic of ML development is that nothing can be considered in isolation. A foundational ML paper by Google researchers described this as the CACE principle: ``Changing Anything Changes Everything''~\cite{sculley_hidden_2015}. When developing ML models in Software 2.0, no data science activities are ever independent.
In this keynote address, we will discuss two phenomena that have emerged to meet the characteristics of ML development. First, \textbf{Notebook interfaces} to meet the data scientists' needs to move swiftly. Unfortunately, the step from prototyping in Notebook interfaces to a mature ML solution is often considerable -- and cumbersome for many data scientists. In Section~\ref{sec:notebooks}, we will present a solution by Jakobsson and Henriksson that bridges the gap between the data scientists' preferred notebook interfaces and standard development in Integrated Development Environments (IDE). Second, analogous to DevOps in conventional agile software development, in Section~\ref{sec:mlops}, we will look at how \textbf{MLOps} has emerged to close the gap between ML development and ML operations. More than just an agility concept, we claim that it is required to meet the expectations on the trustworthy AI of the future -- illustrated in the light of the recently proposed Artificial Intelligence Act in the European Union. We refer to our concept of reinforcing the development and operations of AI systems, afflicted by the CACE principle, using two metaphors from construction engineering: buttresses and rebars.
\section{Connecting Notebook Interfaces and IDEs} \label{sec:notebooks}
Many data scientists are not trained software engineers and thus might not be fully aware of available best practices related to various software engineering activities~\cite{kim2016emerging}. Moreover, even with awareness of software engineering best practices, data science introduces new challenges throughout the engineering lifecycle~\cite{amershi2019software,wan2019does} -- from requirements engineering~\cite{vogelsang_requirements_2019} to operations~\cite{sculley_hidden_2015}. Due to the intrinsically experimental nature of data science, practitioners seek development environments that allow maximum agility, i.e., high-speed development iterations.
The go-to solution for many data scientists is to work iteratively in cloud-based notebook interfaces. While this allows rapid experimentation, it does not easily allow the application of the various tools available in a modern IDE~\cite{notebookPainPoints}. The first part of this keynote address presents a solution developed as part of a MSc thesis project by Jakobsson and Henriksson at Backtick Technologies~\cite{backtick} that enables data scientists to easily move between notebook interfaces and an IDE thanks to a networked file system. The idea is to let data scientists work in their favorite editor and use all the tools available for local development while still being able to use the cloud-based notebook interface for data exploration -- and reaping its benefits of easy access to distributed cloud computing. Jakobsson and Henriksson integrated and evaluated the solution as part of Cowait Notebooks, an experimental cloud notebook solution developed by Backtick Technologies. Cowait\footnote{https://cowait.io} is an open-source framework for creating containerized distributed applications with asynchronous Python.
\subsection{Agility Supported by Notebook Interfaces}
A substantial part of today's data science revolves around notebook interfaces, also known as computational notebooks. Notebook interfaces are typically cloud-based and consist of environments with interactive code interpreters accessible from web browsers that allow rapid, iterative development. The notebooks themselves usually run on a remote machine or a computer cluster, allowing the user easy access to compute resources available in data centers. While notebook interfaces gradually mature, i.e., more features become available, the environments are still far less capable than the IDEs software developers run locally. Consequently, the support for version control software, static analysis, linting, and other widely used development tools is limited in notebook interfaces~\cite{notebookPainPoints}.
The implementation of a notebook interface differs from a conventional IDE. A notebook runs an interpreter in the background that preserves the state for the duration of a programming session. A user observes a notebook as a sequence of cells that are either textual (allowing data scientists to document the process) or containing code. These two different types of cells are interwoven in the notebook. Notebook interfaces usually excel at presenting plots and tables that support data exploration. A code cell contains one or more statements and can be executed independently from any other code cell. Users can execute code cells in any order, but the cells all mutate the shared state of the background interpreter. This freedom of execution order greatly supports the agility of data science as users can re-run portions of a program while keeping other parts of the previously generated state. While this enables fast iterations toward a useful solution, it also makes it difficult to trace the path of execution that led to a specific result. Even worse, subsequent executions of the notebook may yield different results.
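This shared-state behavior is easy to illustrate: treat each cell as a code string executed against a single shared namespace, as in the following self-contained Python sketch (a toy model of a notebook kernel, not an actual notebook implementation).

\begin{lstlisting}[language=Python]
# Toy model of a notebook kernel: each "cell" is executed against
# one shared namespace, so execution order determines the result.
state = {}
cells = {
    "c1": "x = 1",
    "c2": "x = x + 1",
    "c3": "result = x * 10",
}

def run(cell_id):
    exec(cells[cell_id], state)

run("c1"); run("c2"); run("c3")
first = state["result"]   # 20

run("c2"); run("c3")      # re-running cells mutates the shared state
second = state["result"]  # 30
\end{lstlisting}

The two executions of the same cell \texttt{c3} yield different results, which is precisely why tracing the execution path that produced a given output can be difficult.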
The concept of computational notebooks was envisioned by Knuth already in 1984~\cite{notebookKnuth}. Knuth proposed the \textit{literate programming} paradigm and showed how the idea could support program comprehension by mixing snippets of source code and natural language explanations of its embedded logic. As elaborated in Knuth's seminal book on the topic~\cite{knuth_book}, the key point is that literate programming explicitly shifts who is the most important reader of the programming artifact. In literate programming, source code is primarily written for \textit{humans} instead of computers -- and the artifact can be seen as a piece of literature. Many developers of scientific computing follow this paradigm to develop maintainable software artifacts~\cite{hannay2009scientists}.
A more general version of literate programming is \emph{literate computing}, where the source code cells and natural language explanations are accompanied by visual content such as tables, graphs, and images. Today's widely used notebook interfaces, such as the popular Jupyter Notebook\footnote{https://jupyter.org} and Databrick's Collaborative Notebook\footnote{https://databricks.com/product/collaborative-notebooks}, are examples of literate computing. For a recent overview of the notebook landscape, we refer the curious reader to an article by Vognstrup Fog and Nylandsted Klokmose~\cite{notebookLandscape}. Their summary presents both a historical perspective and a discussion of design decisions for future notebook interfaces.
Notebook interfaces have certainly evolved substantially since Knuth first envisioned them. However, there are still certain impediments for data scientists working in notebooks. Chattopadhyay \textit{et al.} analyzed contemporary issues with notebook interfaces and reported nine pain points~\cite{notebookPainPoints}. According to the authors, the most pressing pain points for developers of notebook interfaces to tackle are 1) code refactoring, 2) deployment to production, 3) exploring notebook history, and 4) managing long-running tasks. Notebook interfaces constitute a highly active research topic, and researchers have proposed several solutions to address their limitations~\cite{notebookForaging,notebookUntangle,notebookManageMesses}. However, while notebook interfaces are a prominent medium for software development, there is still a substantial need for research and development~\cite{notebookNotes}.
This talk will introduce a solution proposal by Jakobsson and Henriksson that bridges the benefits of notebook interfaces and local IDEs. Lau \textit{et al.} examined 60 different notebook interfaces and categorized them according to 10 dimensions of analysis: 1) data sources, 2) editor style, 3) programming language, 4) versioning, 5) collaboration, 6) execution order, 7) execution liveness, 8) execution environment, 9) cell outputs, and 10) notebook outputs. In the MSc thesis project by Jakobsson and Henriksson, the authors focused on the dimensions of \emph{execution environment} and \emph{data sources} for Cowait Notebooks. Their solution allows Cowait Notebooks to execute code in a remote multi-process execution environment using local files as data sources. This solution contrasts with Jupyter Notebook for which both code execution and data is local. The solution is also different from Databrick's Collaborative Notebook, where code is executed in a remote multi-process execution environment, but the data sources cannot be local. In the next section, we present the open-source Cowait framework.
\subsection{Cowait -- A Framework for Simplified Container Orchestration}
Cowait is a framework that simplifies the execution of Python code on the container orchestration system Kubernetes. The two main constituents of Cowait are 1) a workflow engine built on top of Docker and Kubernetes and 2) a build system to easily package source code into containers. Together, the workflow engine and the build system form an abstraction of containers and container hosts that helps developers leverage the power of containerization through Docker and cluster deployment using Kubernetes without knowing all technical details. Backtick Technologies designed Cowait to hide the intrinsic complexity of Docker and Kubernetes behind simple concepts that are familiar to general software developers. Cowait is developed under an Apache License and the source code is available on GitHub\footnote{https://github.com/backtick-se/cowait}.
Cowait provides four key features with a focus on user-friendliness, i.e., Cowait\ldots
\begin{enumerate}
\item \ldots helps the development of distributed workflows on your local machine with minimal setup.
\item \ldots simplifies dependency management for Python projects.
\item \ldots allows developers to unit test their workflow tasks.
\item \ldots lowers the bar for users to deploy solutions on Kubernetes clusters.
\end{enumerate}
In line with other workflow engines, Cowait organizes code into \textit{tasks}. A task is essentially a function that can accept input arguments and return values. As for functions in general, a task can invoke other tasks -- with one key difference: a call to invoke another task will be intercepted by the Cowait runtime environment and subsequently executed in a separate container. Cowait can also direct the execution of this separate container to a particular machine. The fundamental differentiator offered by Cowait is that tasks can interface directly with the underlying cluster orchestrator. In practice, this means that tasks can start other tasks without going through a central scheduler service. Instead, tasks create other tasks on demand, and they communicate with their parent tasks using web sockets. Further details are available in the Cowait documentation\footnote{https://cowait.io/docs/}.
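Conceptually, this interception can be pictured as a decorator that wraps each task so that the runtime decides where it is executed. The sketch below is purely illustrative and is not Cowait's actual API; a real runtime would launch a container for each intercepted call, whereas this toy version only logs the dispatch and runs the function locally.

\begin{lstlisting}[language=Python]
# Conceptual illustration only -- not Cowait's actual API.
DISPATCH_LOG = []

def task(fn):
    def wrapper(*args, **kwargs):
        # A real runtime would schedule fn in a separate container here;
        # this toy version just records that the call was intercepted.
        DISPATCH_LOG.append(fn.__name__)
        return fn(*args, **kwargs)
    return wrapper

@task
def preprocess(xs):
    return [x * 2 for x in xs]

@task
def pipeline(xs):
    # Invoking another task goes through the same interception.
    return sum(preprocess(xs))
\end{lstlisting}

Running \texttt{pipeline([1, 2, 3])} dispatches both tasks through the wrapper, mirroring how a Cowait task can start sub-tasks without a central scheduler.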
The task management system in Cowait relies on containers and thus supports the execution of arbitrary software. Thanks to this flexibility, Cowait can execute notebook interfaces. In their MSc thesis project, Jakobsson and Henriksson demonstrate the execution of the open-source JupyterLab notebook interface in a Cowait solution -- we refer to this as running a Cowait Notebook. JupyterLab is a popular notebook interface that is particularly suitable for this demonstration since it is implemented in Python. Once the JupyterLab task is started in a cluster, it automatically gets a public URL that the users can connect to. Cowait Notebooks allow data scientists to host notebook interfaces in any Kubernetes cluster with minimal setup.
Executing Cowait Notebooks within a Cowait task lets the notebook access Cowait's underlying task scheduler and allow sub-tasks to be launched directly from the notebook cells -- data scientists can thus easily execute background tasks on the cluster. In the next section, we present Jakobsson and Henriksson's solution to allow access to local files -- and thus enabling work with local IDEs.
\subsection{Local Files and Cowait Notebooks Executing on Clusters}
Jakobsson and Henriksson developed a proof-of-concept implementation of a general solution to file sharing between a data scientist's local computer and software running on a remote cluster. The key enabler is a custom networked file system implemented using File System in Userspace (FUSE)\footnote{File System in Userspace, \url{https://github.com/libfuse/libfuse}}. FUSE is an interface for userspace programs to export a file system to the Linux kernel. To make the solution compatible with as many different data science applications as possible, the network file system was implemented as a custom storage driver for Kubernetes. Kubernetes is the most popular cluster orchestration solution, available as a managed service from all major cloud providers. Furthermore, Kubernetes is an open-source solution that users can also deploy on-premise. Practically, Jakobsson and Henriksson ensured compatibility with Kubernetes by implementing the Container Storage Interface, an open standard for developing new Kubernetes storage options\footnote{https://kubernetes-csi.github.io/docs/}.
The goal of the MSc thesis project was to design a user-friendly, reliable, and widely compatible solution to file sharing for data scientists. The aim was to provide seamless access to files residing on a data scientist's local computer for other data scientists accessing the local files through cloud-based notebook interfaces executing on Kubernetes clusters. With such a solution in place, data scientists could collaborate online using the notebook interfaces they prefer while allowing state-of-the-art software engineering tools to operate in IDEs on local machines.
To evaluate the proof-of-concept, Jakobsson and Henriksson conducted two separate studies. First, a quantitative study was carried out to verify the solution's performance in light of requirements set by prior user experience research on human response times~\cite[p.~135]{humanResponseTimes}. The authors studied the performance as different numbers of files, of different sizes, were accessed under different network conditions. While details are available in the MSc thesis~\cite{backtick}, the general finding is that the solution satisfied the requirement of file access within 1 second for reasonable file sizes and realistic network latency. We consider this a necessary but not sufficient requirement for the novel solution.
Second, Jakobsson and Henriksson conducted a qualitative study to collect deep insights into the solution's utility. The authors recruited a mix of data scientists and software developers (with substantial ML experience) to perform a carefully designed programming task under a think-aloud protocol~\cite{thinkAloud}. The purpose was to collect feedback on whether the novel file sharing solution could improve the overall experience of working with cloud-based notebook interfaces. The feedback displayed mixed impressions. Data scientists who were comfortable using managed cloud solutions expressed hesitation to use such a system due to reduced ease of use and potential collaboration issues. The most positive group was the developers with a software engineering background, who were excited to be able to use familiar tooling on local files. Despite the mixed opinions, we still perceive the proof-of-concept as promising -- but more work is needed to bridge notebook interfaces and local IDEs.
\section{MLOps -- A Key Enabler for Agility in Software 2.0} \label{sec:mlops}
Many organizations report challenges in turning an ML proof-of-concept into a production-quality AI system~\cite{bosch_engineering_2020}. The experimental nature of ML development limits qualities such as reproducibility, testability, traceability, and explainability -- which are needed when putting a trustworthy product or service on the market. On top of this, an AI system must be maintained until the product or service reaches its end-of-life. This holistic lifecycle perspective, i.e., what follows post-release, is often missing when novice data science teams develop AI proofs-of-concept in the sandbox. An organization must continuously monitor the ML models in operation and, in many cases, evolve the models according to feedback from the production environment -- where phenomena such as distributional shifts can be game-changers~\cite{sculley_hidden_2015}. Without designing for the operations phase and ensuring that ML model changes easily can be pushed to production, it will be tough to reach sustainably value-creating AI solutions. This attractive state is sometimes referred to as \textit{Operational AI}~\cite{tapia2018implementing}. In the next section, we will share our view on how the concept of MLOps can help organizations reach this state.
\subsection{Continuous Engineering in the AI Era}
In software development, continuous software engineering and DevOps emerged to reduce the lead time and remove the barriers between development, testing, and operations~\cite{Ebert2016}. Workflow automation in pipelines is fundamental, as it enables approaches such as 1) continuous integration (integration of code changes followed by test automation), 2) continuous delivery (building software for an internal test environment), and 3) continuous deployment (delivery of software to actual users)~\cite{fitzgerald_continuous_2017}. Depending on the application, organizations can also add staging processes when human validation is needed. Thanks to the automation, development qualities such as traceability come at a substantially lower cost compared to a manual workflow~\cite{jabbari2018towards}. DevOps has inspired a similar mindset within ML development in the form of \textit{MLOps}, i.e., the standardization and streamlining of ML lifecycle management~\cite{treveil2020introducing} -- which is a recommended approach to tackle continuous engineering in Software 2.0~\cite{hummer_modelops_2019}.
Just like DevOps is more than a set of tools, MLOps can be seen as a mindset on the highest level. As an engineering discipline, MLOps is a set of practices that combines ML, DevOps, and Data Engineering. Organizations adopting MLOps hope to deploy and maintain ML systems in production reliably and efficiently. Going beyond technology, MLOps involves embracing a culture with corresponding processes that an organization must adapt for the specific application domain. MLOps has emerged from the Big Tech Internet companies; thus, customization is required to fit smaller development organizations. Extrapolating from DevOps in conventional software engineering~\cite{Ebert2016,jabbari2018towards}, MLOps relies on pipeline automation to remove the barriers between data processing, model training, model testing, and model deployment.
MLOps is not yet well-researched from an academic perspective. The primary reason is that MLOps is a production concept, i.e., the phenomenon must be studied in the field rather than in university labs. However, this does not mean that MLOps should not be targeted by academic research. On the contrary, it is critically important that software and systems engineering researchers initiate industrial collaborations to allow empirical studies of what works and what does not when developing and evolving AI systems. As always in software engineering research, we have to identify the most important variation points needed to provide accurate guidance given specific application contexts. Just like there are uncountably many ways to implement pipeline automation -- the ML tools market is booming -- there is not a one-size-fits-all way to adopt MLOps in an organization.
\subsection{Reinforced AI Systems using Buttresses and Rebars}
Just as agile development enters regulated domains~\cite{diebold2018agile}, Software 2.0 is gradually entering critical applications~\cite{borg_safely_2019}. Examples include automotive software~\cite{falcini2017deep} and software in healthcare~\cite{Jiang230}. From a quality assurance perspective, AI systems using ML constitute a paradigm shift compared to conventional software systems. A contemporary deep neural network might be composed of hundreds of millions of parameter weights -- such an artifact is neither applicable to code reviews nor standard code coverage testing. Development organizations have learned how to develop trustworthy code-based software systems through decades of software engineering experience. This collected experience has successfully been captured in different industry standards. Unfortunately, many best practices are less effective when developing AI systems. Bosch \textit{et al.} and others argue that software and systems engineering must evolve to enable efficient and effective development of trustworthy AI systems~\cite{bosch_engineering_2020}. One response to this call is that new standards are under development in various domains to complement existing alternatives for high-assurance systems~\cite{vidot2021certification}.
Due to the growing reliance on AI systems, the European Union (EU) AI strategy stresses the importance of \textit{Trustworthy AI}. EU defines such systems as lawful, ethical, and robust~\cite{high-level_expert_group_on_artificial_intelligence_ethics_2019}. Unfortunately, we know that existing software engineering approaches such as requirements traceability~\cite{borg2017traceability} and verification \& validation~\cite{borg_safely_2019} are less effective at demonstrating system trustworthiness when functionality depends on ML models. Due to its experimental nature, data science makes it hard to trace design decisions after-the-fact and the resulting ML models become less reproducible~\cite{notebookPainPoints}. Moreover, the internals of ML models are notoriously difficult to interpret~\cite{gilpin2018explaining}, and AI systems are difficult to test~\cite{zhang_machine_2020,riccio_testing_2020}.
Not only must developers of critical AI systems comply with emerging industry standards, but novel AI regulations are also expected in the EU. In April 2021, the European Commission proposed an ambitious \textit{Artificial Intelligence Act} (AIA)~\cite{aiact}. AIA is a new legal framework with dual ambitions for turning Europe into the global hub for trustworthy AI. First, AIA aims to guarantee the safety and fundamental rights of EU citizens when interacting with high-risk AI systems. Second, AIA seeks to strengthen AI innovation by providing legal stability and instilling public trust in the technology. Many voices have been raised about the proposed legislation, especially criticizing its broad definition of AI. However, all signs point to increased regulation of AI in the EU, in line with the now established General Data Protection Regulation~\cite{gdpr} -- including substantial fines defined in relation to annual global turnover.
ML is an increasingly important AI technology in the digitalization of society that receives substantial attention in the AIA. According to the proposal, any providers of high-risk solutions using ML must demonstrate AIA conformance to an independent national authority prior to deployment on the EU internal market. Demonstrating this compliance will be very costly -- and how to effectively (and efficiently!) do it remains an important open research question.
We are currently exploring the topic of built-in trustworthiness through a metaphor of reinforced engineering: \textit{buttresses and rebars}. Our fundamental position is that organizations must tackle quality assurance from two directions. Requirements engineering and verification \& validation shall work together like two bookends supporting the AI system, including its development and operations, from either end. Figure~\ref{fig:reinforce} illustrates how the primary reinforcement originates in buttressing the development of the AI system with requirements engineering (to the left) and verification \& validation (to the right). The metaphor, inspired by construction engineering, further borrows the concept of rebars, i.e., internal structures that strengthen and aid the AI system. In our metaphor, the rebars are realized in the form of so-called automation \textit{pipelines} for data, training, and deployment, respectively. Pipeline automation allows continuous engineering throughout the lifecycle, i.e., data management, training, deployment, and monitoring in an MLOps context. It also enables flexibly adding automated quality assurance approaches as pipe segments, e.g., GradCAM heatmaps for explainability~\cite{borg2021test}, originating in the requirements engineering and verification \& validation buttresses. The envisioned reinforcement allows organizations to continuously steer the development and operations toward a trustworthy AI system -- in the context of highly agile data science, the CACE principle, and the ever-present risks of distributional shifts.
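The idea of pluggable quality-assurance pipe segments can be sketched in a few lines of code. This is an illustrative sketch only: the names (\texttt{Pipeline}, \texttt{add\_segment}, \texttt{explainability\_check}) are invented for this example and do not refer to any specific MLOps tool.

```python
from dataclasses import dataclass, field
from typing import Any, Callable, List

# Illustrative sketch: a pipeline whose stages can be reinforced with
# pluggable quality-assurance "pipe segments". All names are hypothetical.

@dataclass
class Pipeline:
    stages: List[Callable[[Any], Any]] = field(default_factory=list)

    def add_segment(self, segment: Callable[[Any], Any]) -> "Pipeline":
        self.stages.append(segment)
        return self

    def run(self, artifact: Any) -> Any:
        for stage in self.stages:
            artifact = stage(artifact)  # each segment transforms or checks the artifact
        return artifact

def train(data):
    return {"model": "m1", "trained_on": data}

def explainability_check(model):
    # a QA segment originating in the V&V buttress, e.g. heatmap inspection
    model["explainability_ok"] = True
    return model

result = Pipeline().add_segment(train).add_segment(explainability_check).run({"rows": 100})
```

The point of the design is that QA approaches coming from either buttress can be appended as additional segments without touching the data, training, or deployment stages themselves.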
\begin{figure}
\includegraphics[width=\textwidth]{AI_system3-tight.jpg}
\caption{Metaphorical buttresses and rebars. Robust requirements engineering and verification \& validation support the engineering of an ever-changing AI system. Pipeline automation in an MLOps context constitutes the rebars that sustain trustworthiness by strengthening the AI system despite the dynamics involved.}
\label{fig:reinforce}
\end{figure}
Numerous studies report that requirements engineering is the foundation of high-quality software systems. However, the academic community has only recently fully embraced the idea of tailored requirements engineering for AI systems. We argue that the particular characteristics of ML development in data science necessitate an evolution of requirements engineering processes and practices~\cite{vogelsang_requirements_2019}. New methods are needed when development transitions to the generation of rules based on training data and specific fitness functions. Based on a 2020 Dagstuhl seminar, K\"astner stressed requirements engineering as a particular ML challenge, Google researchers express it as underspecification~\cite{d2020underspecification}, and several papers have been recently published by the requirements engineering research community~\cite{ahmad2021s,habibullah2021non,siebert2021construction}.
Academic research on verification \& validation tailored for AI systems has received a head start compared to requirements engineering for AI. New papers continuously appear, and secondary studies on AI testing~\cite{zhang2020machine,riccio2020testing} and AI verification~\cite{xiang2018verification} reveal hundreds of publications. As automation is close at hand for verification \& validation solutions, the primary purpose of the pipelines in the metaphor is to stress that they shall reach all the way to the requirements engineering buttress. Aligning requirements engineering with verification \& validation can have numerous benefits in software engineering~\cite{bjarnason2014challenges} -- and even more so, we argue, in AI engineering. Our planned next steps include exploring AIA conformant high-risk computer vision systems with industry partners reinforced by buttresses and rebars. Our ambition is to combine automated verification \& validation with an integrated requirements engineering approach~\cite{bjarnason2013integrated} in the continuous engineering of MLOps. Finally, we are considering introducing yet another metaphor from construction engineering, i.e., virtual plumblines as proposed by Cleland-Huang \textit{et al.} to maintain critical system qualities~\cite{cleland2008goal}. We posit that reinforcement and alignment will be two essential concepts in future AI engineering, supported by a high level of automation to allow agile development of Software 2.0.
\section{Conclusion}
Whether we endorse the term Software 2.0 or not, AI engineering inevitably brings novel challenges. The experimental nature of how data scientists perform ML development means that the work must be agile. However, this agility can be supported in various ways. In this keynote address, we discussed two contemporary phenomena in data science and ML. First, we presented notebook interfaces, weaknesses, and a solution proposal to lower the bar for them to co-exist with modern IDEs. Second, we shared our perspective on MLOps and our ongoing work on providing reinforced engineering of AI systems in this context. Agility and continuous engineering are needed in AI engineering, as AI systems are ever-changing and often operate in dynamic environments. Finally, the EU AI Act further exacerbates the need for reinforced engineering and alignment between requirements engineering and verification \& validation. As a guiding light toward this goal, we introduced our vision of metaphorical buttresses and rebars.
\section*{Acknowledgements}
Martin Jakobsson and Johan Henriksson are the co-creators of the solution presented in Section~\ref{sec:notebooks} and deserve all credit for this work. Our thanks go to Backtick Technologies for hosting the MSc thesis project and Dr. Niklas Fors, Dept. of Computer Science, Lund University for acting as the examiner. This initiative received financial support through the AIQ Meta-Testbed project funded by Kompetensfonden at Campus Helsingborg, Lund University, Sweden and two internal RISE initiatives, i.e., ``SODA - Software \& Data Intensive Applications'' and ``MLOps by RISE.''
\bibliographystyle{splncs}
\section{Introduction}
The outcomes of high energy collider experiments depend to a large extent on event simulations obtained with MC generators. So do the planning and development of future machines and measurements \cite{Azzi:2019yne,Feng:2022inv,Mangano:2016jyj,LHeC:2020van,Proceedings:2020eah}. The baseline MCs are based on the description of hadron structure provided by collinear PDFs \cite{Kovarik:2019xvh}, while a more complete, 3D description of hadron structure is given by TMD PDFs \cite{Angeles-Martinez:2015sea}. There are thus efforts to include elements of TMD physics in modern MC generators and in the parton-branching algorithms on which they are based. The idea of the work \cite{Hautmann:2022xuc} described in this article is to include the TMD splitting functions obtained from the high-energy (or small-$x$) limit of partonic amplitudes \cite{Catani:1994sq} in a parton branching algorithm, with the goal of incorporating both small-$x$ and Sudakov contributions in the parton evolution. Thanks to its applicability over a wide kinematic region, the algorithm provided by the TMD Parton Branching (PB) method \cite{Hautmann:2017xtx,Hautmann:2017fcj} was chosen for this research.
\section{The TMD Parton Branching method}
The PB method is a flexible, widely applicable MC approach to obtain QCD high energy predictions based on TMD PDFs, simply called TMDs.
One of its main ingredients is a forward evolution equation \cite{Hautmann:2017xtx,Hautmann:2017fcj}.
The evolution of the parton density is expressed in terms of real, resolvable branchings and virtual and non-resolvable contributions, which are treated with Sudakov form factors.
Thanks to the momentum sum rule \footnote{The momentum sum rule for the DGLAP splitting functions $P_{ab}(z,\mu^2)$ reads $\sum_a\int_0^1 \textrm{d} z \; z P_{ab}(z,\mu^2) = 0$. }
and unitarity, the Sudakov form factor can be written in terms of real, resolvable splittings and interpreted as a non-emission probability.
Owing to the simple, intuitive picture of the evolution in terms of cascade of branchings and the probabilistic interpretation of the splitting functions and the Sudakov form factors, the PB evolution equation can be solved with MC techniques using a parton branching algorithm.
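The core of such a branching algorithm can be illustrated with a toy example (not the actual PB implementation): for a flat, purely illustrative kernel $P(z)=c_0$ on $(0,z_M)$, the Sudakov form factor is $\Delta(\mu^2)=(\mu_0^2/\mu^2)^{c_0 z_M}$, so the next branching scale can be generated by inverse-transform sampling.

```python
import random

# Toy forward parton-branching chain: with a flat kernel P(z) = c0 on (0, z_max),
# the Sudakov form factor is Delta(mu2) = (mu0_2/mu2)**(c0*z_max), so the next
# branching scale follows from solving Delta = R for a uniform random R.
# All kernels and parameter values are illustrative, not those of the PB method.

def evolve(x, mu0_2, mu_max_2, c0=1.0, z_max=0.99, seed=42):
    rng = random.Random(seed)
    c = c0 * z_max                              # integral of the flat kernel over z
    mu2 = mu0_2
    while True:
        mu2 *= rng.random() ** (-1.0 / c)       # next scale sampled from the Sudakov
        if mu2 > mu_max_2:
            return x                            # no further resolvable emission
        z = rng.uniform(0.0, z_max)             # splitting variable from the flat kernel
        x *= z                                  # the propagating parton keeps fraction z

x_final = evolve(x=0.5, mu0_2=1.0, mu_max_2=100.0)
```

The real algorithm replaces the flat kernel by the (TMD) splitting functions and tracks the transverse momentum accumulated at each branching, but the sampling logic is the same.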
In addition to the evolution equation, the PB method also provides a procedure to fit the parameters of the initial distribution to experimental data using the \texttt{xFitter} platform \cite{Alekhin:2014irh}. The obtained PB TMDs and PDFs \cite{BermudezMartinez:2018fsv,Jung:2021vym,Jung:2021mox} are accessible via TMDlib \cite{Abdulov:2021ivr} and in the LHAPDF \cite{Buckley:2014ana} format for use in (TMD) MC generators. A generator of special importance is the TMD MC generator Cascade \cite{Baranov:2021uol}, where
the TMD initial-state parton shower is implemented with a backward evolution guided by the PB TMDs.
The PB method provides a procedure to match PB TMDs with next-to-leading-order (NLO) matrix elements \cite{BermudezMartinez:2019anj} to obtain predictions. Recently, a merging procedure was also developed \cite{BermudezMartinez:2021lxz}.
The PB method was used to study different evolution scenarios
like ordering conditions or resolution scales, see e.g. \cite{Hautmann:2017xtx,Hautmann:2019biw}. The PB predictions have been calculated for multiple measurements, in very different energy and mass regimes, including hadron colliders, fixed target experiments and $ep$ collider \cite{BermudezMartinez:2018fsv,BermudezMartinez:2019anj,BermudezMartinez:2020tys,Yang:2022qgk,Abdulhamid:2021xtt,H1:2021wkz}.
All those successful PB studies were performed with the DGLAP \cite{Gribov:1972ri,Lipatov:1974qm,Altarelli:1977zs,Dokshitzer:1977sg} splitting functions calculated in the collinear approximation. However, in some infrared-sensitive phase space regions, the collinear approximation is not sufficient
\cite{Dooling:2012uw,Dooling:2014kia}. In this work, the PB approach was extended by using the TMD splitting functions \cite{Catani:1994sq,Gituliar:2015agu,Hentschinski:2016wya,Hentschinski:2017ayz}.
\section{TMD splitting functions}
The concept of the TMD splitting functions originates from high energy factorization \cite{Catani:1994sq}, where the TMD splitting function for the splitting of an off-shell gluon into a quark, $\widetilde{P}_{qg}$, was calculated. The other channels were obtained in \cite{Gituliar:2015agu,Hentschinski:2016wya,Hentschinski:2017ayz}.
The splitting functions have well defined collinear and high energy limits.
It was demonstrated that in the limit of small incoming transverse momenta, after angular average, the TMD splitting functions converge to the DGLAP leading order (LO) splitting functions. For finite transverse momenta, the TMD splitting function \cite{Catani:1994sq} can be written as an expansion in powers of the transverse momenta with $z$-dependent coefficients, which, after convoluting them with the TMD gluon Green's functions \cite{Kuraev:1977fs,Balitsky:1978ic}, give the
corrections to the splitting function logarithmically enhanced for $z\rightarrow 0$. Therefore, the work presented next on the implementation of
TMD splitting functions in the PB method can be viewed as a step toward
constructing full MC generators for small-$x$ physics (see e.g. \cite{Chachamis:2015zzp,Andersen:2011zd,Jung:2010si,Hoeche:2007hlb,Golec-Biernat:2007tjf}).
\section{TMD splitting functions in the PB method}
The DGLAP splitting functions $P_{ab}^R (z, \mu^{\prime})$ were replaced by the TMD ones $\tilde{P}_{ab}^{R}\left(z, k_{\bot} +(1-z)\mu_{\bot}^{\prime}, \mu_{\bot}^{\prime}\right)$ in the PB evolution equation for the momentum weighted parton density $x{\mathcal{A}}_a = \tilde{\mathcal{A}}_a$ \cite{Hautmann:2017fcj}
\begin{multline}
\tilde{\mathcal{A}}_a\left( x,k_{\bot}^2, \mu^2\right) =
\Delta_a\left(\mu^2,k_{\bot}^2\right)\tilde{\mathcal{A}}_a\left( x,k_{\bot}^2, \mu_0^2\right) +
\sum_b\int\frac{d^2\mu_{\bot}^{\prime}}{\pi\mu_{\bot}^{\prime 2}}\Theta(\mu_{\bot}^{\prime 2}-\mu_0^2)\Theta(\mu^2-\mu_{\bot}^{\prime 2})
\\
\times \int\limits_x^{z_M }\textrm{d}z\, \frac{ \Delta_a\left(\mu^2, k_{\bot}^2 \right) }
{ \Delta_a\left(\mu_{\bot}^{\prime 2}, k_{\bot}^2 \right)} \tilde{P}_{ab}^{R}\left(z, k_{\bot} +(1-z)\mu_{\bot}^{\prime}, \mu_{\bot}^{\prime}\right)
\tilde{\mathcal{A}}_b\left( \frac{x}{z}, (k_{\bot}+(1-z)\mu_{\bot}^{\prime})^2, \mu_{\bot}^{\prime 2}\right),
\label{EvolEq}
\end{multline}
where $a,b$ are flavour indices, $x$ is the fraction of the proton's longitudinal momentum carried by parton $a$, $k_{\bot}$ the transverse momentum, $\mu$ the evolution scale, $\mu_0$ the initial evolution scale, $z$ the fraction of the longitudinal momentum transferred in the splitting, and $z_M$ the soft-gluon resolution scale, which can be scale dependent.
To treat the virtual/non-resolvable emissions, a new TMD Sudakov form factor was introduced \cite{Hautmann:2022xuc}
\begin{equation}
\Delta_a(\mu^2,\mu_0^2,k_{\bot}^2)\equiv\Delta_a(\mu^2,k_{\bot}^2)=\exp\left(-\sum_b\int_{\mu_0^2}^{\mu^2}\frac{d\mu'^2}{\mu'^2}\int_0^{z_M}dz\ z\bar P^R_{ba}(z,k_{\bot}^2,\mu'^2)\right),
\label{TMDSud}
\end{equation}
using the angular averaged TMD splitting functions $\bar P^R_{ba}(z,k_{\bot}^2,\mu'^2)$. This construction was possible thanks to the momentum sum rule and unitarity.
As an intermediate step, a scenario with the TMD splittings included in the real resolvable emissions but with
the default PB Sudakov form factor
\begin{equation}
\Delta_a(\mu^2,\mu_0^2)\equiv\Delta_a(\mu^2)=\exp\left(-\sum_b\int_{\mu_0^2}^{\mu^2}\frac{d\mu'^2}{\mu'^2}\int_0^{z_M}dz\ z P^R_{ba}(z,\mu^{\prime 2})\right)
\label{CollSud}
\end{equation}
was studied.
It was shown analytically \cite{Hautmann:2022xuc} that the evolution equation Eq.~\ref{EvolEq} satisfies the momentum sum rule only if the same type of splitting functions is used both in the real emissions and in the Sudakov form factor.
In other words, the momentum sum rule holds for Eq.~\ref{EvolEq} with the TMD Sudakov form factor of Eq.~\ref{TMDSud}, whereas it is broken with the collinear Sudakov form factor of Eq.~\ref{CollSud}.
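This statement can be illustrated with a minimal numerical experiment (a single-flavour toy model with invented kernels, not the full evolution equation): the change of the momentum integral after an infinitesimal evolution step vanishes only when the kernel in the virtual term matches the one in the real term.

```python
import math

# Single-flavour toy check: after one evolution step,
#   dM = \int_0^1 dx \int_x^{zM} dz P(z) A(x/z) - [\int_0^{zM} dz z Q(z)] M ,
# where P enters the real emissions, Q the Sudakov form factor, and
# M = \int_0^1 dx A(x) is the momentum integral of the momentum-weighted
# density A. dM vanishes only for Q = P. Kernels and A are toy choices.

def momentum_change(P, Q, zM=0.9, N=800):
    A = lambda y: math.sqrt(y) * (1.0 - y) ** 3        # toy momentum-weighted density
    dx, dz = 1.0 / N, zM / N
    xs = [(i + 0.5) * dx for i in range(N)]
    zs = [(j + 0.5) * dz for j in range(N)]
    real = sum(P(z) * A(x / z) * dx * dz for z in zs for x in xs if x < z)
    M = sum(A(x) for x in xs) * dx
    virtual = sum(z * Q(z) for z in zs) * dz * M
    return real - virtual

flat = lambda z: 1.0
tilted = lambda z: 2.0 * z
matched = momentum_change(flat, flat)        # same kernel in real term and Sudakov
mismatched = momentum_change(flat, tilted)   # different kernel in the Sudakov
```

Up to discretization error, \texttt{matched} is consistent with zero while \texttt{mismatched} is not, mirroring the analytic result of \cite{Hautmann:2022xuc}.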
\begin{figure}[htb]
\begin{minipage}{0.49\textwidth}
\includegraphics[width=5.0cm]{asmu_iTMDx-down-mu100.pdf}
\end{minipage}
\hfill
\begin{minipage}{0.49\textwidth}
\includegraphics[width=5.0cm]{asmu_iTMDx-gluon-mu100.pdf}
\end{minipage}
\hfill
\begin{minipage}{0.49\textwidth}
\includegraphics[width=5.0cm]{asmu_kt-down-x1e-3-mu100_linear.pdf}
\end{minipage}
\hfill
\begin{minipage}{0.49\textwidth}
\includegraphics[width=5.0cm]{asmu_kt-gluon-x1e-3-mu100_linear.pdf}
\end{minipage}
\hfill
\caption[]{
Down quark and gluon distributions for scenarios with the collinear splitting functions (red), with the TMD splitting functions in the real emissions and the collinear Sudakov form factor (blue) and with the TMD splitting functions both in the real emissions and in the Sudakov form factor (purple).
Top: integrated TMDs as a function of $x$ at $\mu=100\;\textrm{GeV}$. Bottom: TMDs as a function of $|k_{\bot}|$ at $x=0.001$ and $\mu=100\;\textrm{GeV}$ \cite{Hautmann:2022xuc}. }
\label{Fig:Distributions}
\end{figure}
\section{Numerical results}
In the upper part of Fig.~\ref{Fig:Distributions}, the integrated distributions (iTMDs) as a function of $x$ at the scale $\mu=100\;\textrm{GeV}$ are shown for the down quark and the gluon for three evolution scenarios: the dashed red curve is obtained from the PB evolution equation with collinear splitting functions, the dotted blue curve from the model with TMD splitting functions in the real resolvable emissions but with the collinear Sudakov form factor, and the solid magenta curve from the model with TMD splitting functions both in the real resolvable emissions and in the Sudakov form factor. In the bottom part of Fig.~\ref{Fig:Distributions}, the down quark and gluon TMDs as a function of $|k_{\bot}|$ are shown at $x=0.001$ and $\mu=100\;\textrm{GeV}$ for the same three models.
The bottom panel of each plot shows the ratios obtained with respect to the fully collinear scenario.
For the purpose of this study, the same starting distribution was used for all three models, which means that the differences between the curves come only from the evolution, i.e. purely from the treatment of the splitting functions. For the iTMDs, the effect of the TMD splitting functions is visible especially at low $x$; for the TMDs, the effects are visible in the whole $k_{\bot}$ region. It is worth recalling that the momentum sum rule holds for both the red and magenta curves, whereas the blue curve violates it. A numerical check of the momentum sum rule was performed in \cite{Hautmann:2022xuc}.
\section{Conclusions}
In this work, a parton branching algorithm to obtain TMDs and integrated distributions was presented which, for the first time, includes TMD splitting functions and fulfils the momentum sum rule.
A new TMD Sudakov form factor was constructed using the momentum sum rule and unitarity.
The studies presented here are at the level of forward evolution, but they constitute a
first step towards a full TMD MC generator covering the small-$x$ phase space.
\section*{Acknowledgements}
Presented results were obtained in collaboration with F. Hautmann, M. Hentschinski, L. Keersmaekers, A. Kusina and K. Kutak.
A. Lelek acknowledges funding by Research Foundation-Flanders (FWO) (application number: 1272421N).
\bibliographystyle{mybibstyle}
\section{Introduction \label{sec:intro}}
In the last few years solid state physics has increasingly benefited from
scientific computing and the significance of numerical techniques
is likely to keep on growing quickly in this field.
Because of the high complexity of solids, which are made of a huge number
of interacting electrons and nuclei, a full understanding
of their properties cannot be developed using analytical methods only.
Numerical simulations do not only provide quantitative results for
the properties of specific materials but are also widely used
to test the validity of theories and analytical approaches.
Numerical and analytical approaches based on perturbation theory
and effective independent-particle theories such as
the Fermi liquid theory, the density functional theory,
the Hartree-Fock approximation, or the Born-Oppenheimer approximation,
have been extremely successful in explaining the properties of
solids.
However, the low-energy and low-temperature electronic, optical, or
magnetic properties
of various novel materials are not understood within these simplified
theories.
For example, in strongly correlated
systems, the interactions between constituents of the solid are
so strong that they can no longer be considered separately and
a collective behavior can emerge.
As a result, these systems may exhibit new and
fascinating macroscopic properties such as high-temperature superconductivity or
colossal magneto-resistance~\cite{science00}.
Quasi-one-dimensional electron-phonon (EP) systems like
MX-chain compounds are other examples of electronic systems that are
very different from traditional ones~\cite{MX93}.
Their study is particularly rewarding for a number of reasons. First they
exhibit a remarkably wide range of competing forces,
which gives rise to a rich variety of different phases
characterized by symmetry-broken ground states and
long-range orders.
Second these systems share fundamental features with higher-dimensional
novel materials (for instance,
high-$T_{\rm c}$ cuprates or charge-ordered nickelates) such as
the interplay of charge, spin, and lattice degrees of
freedom.
One-dimensional (1D) models allow us to investigate
this complex interplay, which is important but poorly understood
also in 2D- and 3D highly correlated electron systems,
in a context more favorable to numerical simulations.
\subsection{Models}
Calculating the low-energy low-temperature properties of solids
from first principles
is possible only with various approximations which
often are not reliable in strongly correlated or low-dimensional
electronic systems. An alternative approach for investigating these materials
is the study of simplified lattice models which include only the relevant
degrees of freedom and interactions but nevertheless are believed to
reproduce the essential physical properties of the full system.
A fundamental model for 1D correlated electronic systems is the
Hubbard model~\cite{Hubbard} defined by the Hamiltonian
\begin{equation}
H_{\rm ee} =- t \sum_{\langle i,j \rangle;\sigma}
\left( c_{i\sigma}^{\dag}c^{\phantom{\dag}}_{j\sigma}
+ c_{j\sigma}^{\dag}c^{\phantom{\dag}}_{i\sigma} \right)
+ U \sum_{i} n_{i\uparrow} n_{i\downarrow}
.
\label{hubbard}
\end{equation}
It describes electrons with spin
$\sigma=\uparrow,\downarrow$
which can hop between neighboring sites on a lattice.
Here $c^{\dag}_{i\sigma}$,
$c^{\phantom{\dag}}_{i\sigma}$ are creation and annihilation operators for
electrons with spin $\sigma$ at site $i$, $n_{i\sigma}=
c^{\dag}_{i\sigma}c^{\phantom{\dag}}_{i\sigma}$
are the corresponding density operators.
The hopping integral $t$ gives rise to
a single-electron band of width $4tD$
(with $D$ being the spatial dimension).
The Coulomb repulsion between electrons is
mimicked by a local Hubbard interaction $U \geq 0$.
The average electronic density per site is $0 < n < 2$,
where $n = N_{\rm e}/N$, $N$ is the number of lattice sites and
$N_{\rm e}$ is the number of electrons;
$n/2$ is called the band filling.
In the past the Hubbard model was intensively studied with respect to
(itinerant) ferromagnetism, antiferromagnetism
and metal-insulator (Mott) transitions in transition metals.
More recently it has been used in the context of heavy fermions and
high-temperature superconductivity as perhaps the most fundamental model
accounting for strong electronic correlation effects in solids.
The coupling between electrons and the lattice relaxation
and vibrations (phonons) is also known to have significant
effects on the properties of solids including the above mentioned
strongly correlated electronic materials
(see the papers by Egami, Calvani, Zhou, Perroni,
and Saini).
Dynamical phonon effects are particularly important
in quasi-1D metals and charge-density-wave (CDW) systems.
The simplest model describing the effect of an additional EP
coupling is the Holstein-Hubbard model.
This model describes electrons coupled to dispersionless phonons,
represented by local Einstein oscillators.
The Hamiltonian is given by
\begin{equation}
H_{\rm ep} = H_{\rm ee} +
\frac{1}{2M} \sum_i p^2_i + \frac{K}{2} \sum_i q^2_i
- \alpha \sum_i q_i n_i ,
\label{holstein}
\end{equation}
where $q_i$ and $p_i$ are the position and momentum operators
for a phonon mode at site $i$, and $n_i=n_{i\uparrow}+n_{i\downarrow}$.
At first sight, there are three additional parameters in this model
(compared to the purely electronic model): The oscillator
mass $M$, the spring constant $K$, and the EP coupling constant $\alpha$.
However, if we introduce phonon (boson) creation and annihilation operators
$b^\dag_i$ and $b^{\phantom{\dag}}_i$, respectively,
the Holstein-Hubbard Hamiltonian can be written (up to a constant term)
\begin{equation}
H_{\rm ep} = H_{\rm ee} + \omega_{0}
\sum_i b^\dag_i b^{\phantom{\dag}}_i
- g \omega_0 \sum_i (b^\dag_i + b^{\phantom{\dag}}_i)
n_i^{\phantom{\dag}},
\end{equation}
where the phonon frequency is given by
$\omega_{0}^2 = K/M$ ($\hbar = 1$)
and a dimensionless EP coupling constant is defined by
$g = \alpha a / \omega_0$
with the range of zero-point fluctuations
given by $2a^2 = (KM)^{-1/2}$.
We can set the parameter $a$ equal to 1 by
redefining the units of oscillator displacements.
Thus, the effects of the EP coupling are determined
by two dimensionless parameter ratios only: $\omega_{0}/t$ and $g$.
Alternatively, one can use the polaron binding energy
$\varepsilon_{\rm p} = g^2 \omega_{0}$ or,
equivalently, $\lambda=\varepsilon_{\rm p}/2tD$,
instead of $g$.
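The step from the displacement form to the boson form above follows from the standard harmonic-oscillator substitution (a sketch, using the conventions defined in the text):

```latex
% Oscillator operators in terms of bosons, with a^2 = (2M\omega_0)^{-1} = (4KM)^{-1/2}:
q_i = a\left(b^{\dagger}_i + b^{\phantom{\dagger}}_i\right), \qquad
p_i = \frac{i}{2a}\left(b^{\dagger}_i - b^{\phantom{\dagger}}_i\right),
% so that, using K = M\omega_0^2,
\frac{p_i^2}{2M} + \frac{K}{2}\, q_i^2
  = \omega_0 \left( b^{\dagger}_i b^{\phantom{\dagger}}_i + \tfrac{1}{2} \right), \qquad
-\alpha\, q_i\, n_i
  = - g\, \omega_0 \left( b^{\dagger}_i + b^{\phantom{\dagger}}_i \right) n_i ,
\quad g = \frac{\alpha a}{\omega_0} .
```

The constant $\omega_0/2$ per site is the term dropped "up to a constant" in the boson form of the Hamiltonian.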
The various constituents and couplings of the Holstein-Hubbard model
are summarized in fig.~\ref{fig:model}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=13.0cm]{jeckel_fehske_fig1.eps}
\caption{Schematic representation of the 1D
Holstein-Hubbard model.
}
\label{fig:model}
\end{center}
\end{figure}
In the single-electron case, the resulting Holstein model~\cite{Holstein} has
been studied as a paradigmatic model for polaron formation~(see the paper
by Fehske, Alvermann, Hohenadler and Wellein). At half-filling, the EP
coupling may lead to a Peierls instability related to the
appearance of CDW order in competition with the SDW
instability triggered by $U$ (see the separate paper
by Fehske and Jeckelmann).
\subsection{Methods}
Despite the great simplification brought by the above models,
theoretical investigations remain difficult
because a quantum many-particle problem has to be solved
with high accuracy to determine correlation effects
on physical properties beyond the mean-field level.
Analytical solutions of these models are known for special
cases only.
To determine the spectral and thermodynamical properties
of these models, theorists have turned to numerical simulations.
Among the various approaches, exact numerical methods
play a significant role.
A numerical calculation is said to be exact if
no approximation is involved aside from the restriction imposed
by finite computational resources
(in particular, the same calculation would be mathematically exact if
it were carried out analytically),
the accuracy can be systematically improved
with increasing computational effort, and
actual numerical errors are quantifiable or
completely negligible.
Especially for strongly correlated systems,
exact numerical methods are often the only approach available to obtain
accurate quantitative results in model systems and
the results that they provide are essential for checking the
validity of theories or testing the accuracy of approximative
analytical methods.
Nowadays, finite-cluster exact diagonalizations (ED), the numerical
renormalization group (NRG), density
matrix renormalization group (DMRG) calculations, quantum Monte
Carlo (QMC) simulations, and the dynamical mean-field theory (DMFT)
have become very powerful and important tools for
solving many-body problems.
In what follows, we briefly summarize the advantages and weaknesses of
these techniques, especially in the context of EP lattice models
such as the Holstein-Hubbard model.
Then in the remaining sections we will present the basic principles
of the ED (sect.~\ref{sec:ed}) and DMRG (sect.~\ref{sec:dmrg}
and~\ref{sec:dynamical}) approaches.
The paradigm of an exact numerical calculation in a quantum system
is the exact diagonalization (ED) of its Hamiltonian,
which can be carried out using several
well-established algorithms (see the next section).
ED techniques can be used to calculate most properties of any quantum system.
Unfortunately, they are restricted to small systems
(for instance, 16 sites for a half-filled Hubbard model and
less for the Holstein-Hubbard model) because of the exponential increase
of the computational effort with the number of particles.
In many cases, these system sizes are too small to simulate
adequately macroscopic properties of solids (such as (band) transport,
magnetism or charge long-range order). They are of great value,
however, for a description of local or short-range order effects.
In QMC simulations the problem of computing quantum properties is transformed
into the summation of a huge number of classical variables, which is
carried out using statistical (Monte Carlo) techniques.
There are numerous variations of this principle also for EP problems
(see, e.g., the paper on QMC by Mishchenko).
QMC simulations are almost as widely applicable as ED techniques
but can be applied to much larger systems.
The results of QMC calculations are affected by statistical errors
which, in principle, can be systematically reduced with increasing
computational effort. Therefore, QMC techniques are often
numerically exact.
In practice, several problems such as the sign problem (typically,
in frustrated quantum systems) or the large auto-correlation time
in the Markov chain (for instance, in critical systems) severely
limit the applicability and accuracy of QMC simulations in
strongly correlated or low-dimensional systems.
Moreover, real-frequency dynamical properties often
have to be calculated from the imaginary-time correlation functions
obtained with QMC using an analytic continuation.
However, the transformation of imaginary-time data affected
by statistical errors to real-frequency data
is an ill-conditioned numerical problem.
In practice, one has to rely on approximate transformations using
least-square or maximum-entropy fits,
which yield results of unknown accuracy.
DMRG methods allow us to calculate the static and dynamical
properties of quantum systems much larger than those
possible with ED techniques (see the third and fourth section, as well as
the separate paper by Fehske and Jeckelmann).
For 1D systems and quantum impurity problems,
one can simulate lattice sizes
large enough to determine static properties in the thermodynamic limit
and the dynamical spectra of macroscopic systems exactly.
In higher dimensions DMRG results are numerically exact
only for system sizes barely larger than those available with
ED techniques. For larger system sizes in dimension two and higher
DMRG usually provides only a variational approximation of the system
properties.
The NRG is a precursor of the DMRG method. Therefore, it is
not surprising that most NRG calculations can also
be carried out with DMRG. NRG provides numerically exact results for
the low-energy properties
of quantum impurity problems (see the paper on NRG methods by
Hewson). Moreover, for this type
of problem it is computationally more efficient than DMRG.
For other problems (lattice problems, high-energy properties)
NRG usually fails or provides results of poor accuracy.
In the dynamical mean-field theory
(see the papers by Ciuchi, Capone, and Castellani)
it is assumed that the self-energy of the
quantum many-body system is momentum-independent. While this is exact on a
lattice with an infinitely large coordination number (i.e., in the
limit of infinite dimension), it is considered to be a
reasonable approximation for 3D systems.
Thus in applications to real materials DMFT is never
an exact numerical approach, although a DMFT-based approach
for a 3D system could possibly yield better results than
direct QMC or DMRG calculations.
It should be noted that in the DMFT framework
the self-energy has to be determined
self-consistently by solving a quantum impurity problem, which is itself a
difficult strongly correlated problem. This quantum impurity problem
is usually solved numerically using one of the standard methods discussed here
(ED, NRG, QMC or DMRG). Therefore, the DMFT approach and
its extensions can be viewed as
an (approximate) way of circumventing the limitations of the
standard methods and of extending their applicability to large 3D
systems.
In summary, every numerical method has advantages and weaknesses.
ED is unbiased but is restricted to small clusters.
For 1D systems DMRG is usually the best method
while for 3D systems only direct QMC simulations are possible.
QMC techniques represent also the most successful approach
for non-critical and non-frustrated two-dimensional systems.
There is currently no satisfactory numerical (or analytical)
method for two-dimensional strongly correlated systems
with frustration or in a critical regime.
\section{Exact diagonalization approach \label{sec:ed}}
As stated above, ED is presently probably the best controlled numerical method
because it allows an approximation-free treatment of coupled
electron-phonon models in the whole parameter range.
As a precondition we have to work with finite systems
and apply a well-defined truncation procedure for the phonon
sector (see subsect.~\ref{subsec:hs_bc}). At least for the
single-electron Holstein model a variational basis can
be constructed in such a way that the ground-state properties
of the model can be computed in a numerically exact manner in the
thermodynamic limit (cf. subsect.~\ref{subsec:vm}).
In both cases the resulting numerical problem is to find
the eigenstates of a (sparse) Hermitian matrix using Lanczos
or other iterative subspace methods (subsect.~\ref{subsec:ep}).
In general the computational
requirements of these eigenvalue algorithms are determined by
matrix-vector multiplications (MVM), which have to be implemented in
a parallel, fast and memory saving way on modern supercomputers.
Extensions for the calculation
of dynamical quantities have been developed on the
basis of Lanczos recursion and kernel polynomial expansions
(cf. subsect.~\ref{sec:sp}).
Quite recently cluster perturbation theory (CPT) has been used
in combination with these techniques to determine the single-particle
electron and phonon spectra.
\subsection{Many-body Hilbert space and basis construction\label{subsec:hs_bc}}
\subsubsection{Basis symmetrization\label{subsubsec:bs}}
The total Hilbert space of
Holstein-Hubbard type models~(\ref{holstein}) can be written as the tensorial
product space of electrons and phonons, spanned by the
complete basis set
$\left\{|b\rangle=|e\rangle\otimes |p\rangle\right\}$
with
\begin{equation}
|e\rangle = \prod_{i=1}^N \prod_{\sigma = \uparrow,\downarrow}
(c_{i\sigma}^{\dagger})^{n_{i\sigma,e}}
|0\rangle_{e}\quad\mbox{and}\quad
|p\rangle = \prod_{i=1}^N \frac{1}{\sqrt{m_{i,p} !}}
(b_i^{\dagger})^{m_{i,p}}|0\rangle_{p}.
\label{basis1}
\end{equation}
Here $n_{i\sigma,e} \in \{0,1\}$, i.e. the electronic
Wannier site $i$ might be empty, singly or doubly occupied, whereas
we have no such restriction for the phonon number,
$m_{i,p}\; \in \{0,\ldots,\infty\}.$ Consequently,
$e=1,\ldots,D_{e}$ and $p=1,\ldots,D_{p}$ label
basic states of the electronic and phononic subspaces having
dimensions $D_{e}=\genfrac(){0cm}{1}{N}{N_{e,\sigma}}
\genfrac(){0cm}{1}{N}{N_{e,-\sigma}} $
and $D_{p}=\infty$, respectively.
Since the Holstein-Hubbard Hamiltonian
commutes with the total electron number operator
$\hat{N}_{e}=
\sum_{i=1}^N(n_{i,\uparrow}+n_{i,\downarrow})$,
$\hat{N}_{e,\sigma}=\sum_{i=1}^Nn_{i,\sigma}$ (we
used the `hat' to discriminate operators from the
corresponding particle numbers),
and the $z$-component of the total spin $S^{z}=
\frac{1}{2}\sum_{i=1}^N(n_{i,\uparrow}-n_{i,\downarrow})$,
the basis $\left\{|b\rangle\right\}$ has been constructed for
fixed $N_{e}$ and $S^z$.
To further reduce the dimension of the total Hilbert space,
we can exploit the space group symmetries
[translations ($G_T$) and point group operations ($G_L$)]
and the spin-flip invariance [($G_S$); $S^z=0$ -- subspace only].
Clearly, working on finite bipartite clusters in 1D or 2D
(here $N=k^2+l^2$, $k$ and $l$ are both even or odd integers)
with periodic boundary conditions (PBC),
we do not have all the symmetry properties of the underlying 1D or
2D (square) lattices~\cite{BWF98}. Restricting ourselves to the 1D
non-equivalent irreducible representations of the group
$G(\vec{K})=G_T\times G_L(\vec{K})\times G_S$,
we can use the projection operator
${\cal P}_{\vec{K},rs}=[g(\vec{K})]^{-1}
\sum_{{\cal G} \in G(\vec{K})}\chi_{\vec{K},rs}^{(\cal G)}\;{\cal G}$
(with $[H,{\cal P}_{\vec{K},rs}]=0$,
${\cal P}_{\vec{K},rs}^{\dagger}={\cal P}_{\vec{K},rs}$ and
${\cal P}_{\vec{K},rs}\;{\cal P}_{\vec{K}^{\prime},r^{\prime}s^{\prime}}
={\cal P}_{\vec{K},rs}\;
\delta_{\vec{K},\vec{K}^{\prime}}\;\delta_{r,r^{\prime}}\;
\delta_{s,s^{\prime}}$)
in order to generate a new symmetrized basis set:
$\{|b\rangle\} \stackrel{\cal P}{\to}
\{|\tilde{b}\rangle\}$. ${\cal G}$ denotes the $g(\vec{K})$ elements of
the group $G(\vec{K})$ and $\chi_{\vec{K},rs}^{(\cal G)}$
is the (complex) character of
${\cal G}$ in the $[\vec{K},rs]$ representation, where
$\vec{K}$ refers to one of the $N$ allowed wave vectors in the
first Brillouin zone, $r$ labels the irreducible representations
of the little group of $\vec{K}$, $G_L(\vec{K})$, and $s$
parameterizes $G_S$.
For an efficient parallel implementation of the MVM
it is extremely important
that the symmetrized basis can be constructed preserving the tensor
product structure of the Hilbert space, i.e.,
\begin{equation}
\{|\tilde{b}\rangle=
N^{[\vec{K}rs]}_{\tilde{b}}\, {\cal P}_{\vec{K},rs}\,
\left[ |\tilde{e}\rangle
\otimes |p\rangle\right]\} ,
\label{basis2}
\end{equation}
with $\tilde{e}=1,\ldots, \tilde{D}_{e}^{g(\vec{K})}$
$[\tilde{D}_{e}^{g(\vec{K})}\sim D_{e}/g(\vec{K})]$. The
$N^{[\vec{K}rs]}_{\tilde{b}}$ are normalization factors.
\subsubsection{Phonon Hilbert space truncation\label{subsubsec:hst}}
Since the Hilbert space associated with the phonons is infinite
even for a finite system, we apply a truncation procedure~\cite{WRF96},
retaining only basis states with at most $M$ phonons:
\begin{equation}
\{ |p\rangle\; ; \;m_p=\sum^N_{i=1} m_{i,p} \le M \}.
\label{eq:cutoff}
\end{equation}
The resulting Hilbert space has a total dimension
$\tilde{D}=\tilde{D}_{e}^{g(\vec{K})}\times D_{p}^M$ with
$D_{p}^M =\frac{(M+N)!}{M!N!}$, and a general state
of the Holstein-Hubbard model is represented as
\begin{equation}
|\psi_{\vec{K},rs}\rangle =
\sum_{\tilde{e}=1}^{\tilde{D}_{e}^{g(\vec{K})}}
\sum_{p=1}^{D^M_{p}} c_{\tilde{e}p}\,
|\tilde{b}\rangle.
\end{equation}
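To make the dimension counting concrete, the truncated phonon space of eq.~(\ref{eq:cutoff}) can be enumerated directly; the following Python sketch (our own illustration, not part of any production code) verifies $D_{p}^M =(M+N)!/(M!N!)$ by brute force:

```python
from itertools import product
from math import comb

def phonon_basis(N, M):
    """All phonon occupation vectors (m_1, ..., m_N) with
    total phonon number m_p = sum_i m_i <= M."""
    return [m for m in product(range(M + 1), repeat=N) if sum(m) <= M]

N, M = 4, 3
states = phonon_basis(N, M)
# dimension of the truncated phonon space: (M+N)! / (M! N!)
assert len(states) == comb(M + N, N)
print(len(states))  # 35 states for N = 4 sites and at most M = 3 phonons
```

Already this toy count shows why the truncation matters: for $N=14$ sites and $M=10$ phonons the phonon sector alone contains about $2\times 10^6$ states.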
It is worthwhile to point out that, switching from a real-space representation
to a momentum space description, our truncation scheme takes
into account all dynamical phonon modes. This has to be
contrasted with the frequently used single-mode
approach~\cite{AP98}.
In other words, depending on the model parameters and the band filling,
the system ``decides'' by itself how the $M$ phonons will be
distributed among the independent Einstein oscillators related
to the $N$ Wannier sites or, alternatively, among the $N$ different
phonon modes in momentum space. Hence with the
same accuracy phonon dynamical effects on lattice distortions
being quasi-localized in real space
(such as polarons, Frenkel excitons, \ldots) or in momentum space
(like charge-density-waves, \ldots)
can be studied.
Of course, one has to check carefully the convergence of
the above truncation procedure by calculating the ground-state
energy as a function of the cut-off parameter $M$.
In the numerical work convergence is assumed to be achieved if
$E_0$ is determined with a relative error
$\Delta E_0^{(M)} = (E_0(M)-E_0(M-1))/E_0(M)\leq 10^{-6}$\,.
In addition we guarantee that the phonon distribution function
\begin{equation}
|c^{(m)}|^2(M) = \sum^{\tilde{D}^{g(\vec{K})}_{e}}_{\tilde{e}=1}
\sum^{D^M_{p}}_{\stackrel{{\displaystyle {}_{p=1}}}{\{m_p=m\}}}
|c_{\tilde{e}p}|^2,
\end{equation}
which gives the different weights of the $m$-phonon states in the
ground-state $|\psi_0\rangle$, becomes independent of $M$
and $|c^{(M)}|^2(M)\leq 10^{-6}$.
To illustrate the $M$ dependence of the phonon distribution function
and the ground-state energy, we show both quantities
in fig.~\ref{f:phon_dis} for the single-electron Holstein model
on rather small lattices.
Figure~\ref{f:phon_dis}
proves that our truncation procedure is well controlled
even in the strong EP coupling regime, where multi-phonon states become
increasingly important.
\begin{figure}[t]
\includegraphics[width=.45\linewidth,clip]
{jeckel_fehske_fig2a.eps}\hspace*{0.5cm}
\includegraphics[width=.45\linewidth,clip]
{jeckel_fehske_fig2b.eps}
\caption{Convergence of the phonon distribution
function $|c^m|^2(M)$ and ground-state
energy $E_0(M)$ (inset; here filled symbols give the results
obtained separating the $Q=0$ phonon mode)
as a function of the maximum number of
phonons $M$ retained. Results are given
for the Holstein model on a 1D lattice with
$N=14$ (a) and $N=6$ (b) sites (PBC), where the parameters
$\varepsilon_p$ and $\omega_0$ are the same
as for the CPT calculation
presented in fig.~3 of the paper by
Fehske, Alvermann, Hohenadler and Wellein.
}
\label{f:phon_dis}
\end{figure}
For the Holstein-type models
the computational requirements can be further reduced.
Here it is possible to separate the symmetric
phonon mode, $B_{0}=\frac{1}{\sqrt{N}}\sum_{i} b_i$, and to
calculate its contribution to $H$ analytically~\cite{SHBWF05}.
For the sake of simplicity, we restrict ourselves to the 1D spinless
case in what follows.
Using the momentum space representation of the phonon operators,
the original Holstein Hamiltonian takes the form
\begin{equation}
H = -t \sum_{ij} (c^{\dagger}_{i}c_{j}^{}
+c^{\dagger}_{j}c_{i}^{}) - \sqrt{\varepsilon_p\omega_0} \;\sum_{j}(
{B^{\dagger}_{-Q_j}}+ {B^{}_{Q_j}}) {n}_{Q_j}^{} +\omega_0
\sum_{j} B^{\dagger}_{Q_j} B^{}_{Q_j}
\end{equation}
with $B_{Q_j}^{\dagger} = {\cal U}_{j,i} b^{\dagger}_i$,
$B_{Q_j}^{}={\cal U}_{j,i}^{*} b^{}_i ={\cal U}_{-j,i}^{} b^{}_i$,
and $n_{Q_j}= \sum_{i}{\cal U}_{j,i} n_{i}^{}$,
where ${\cal U}_{j,i}= \frac{1}{\sqrt{N}}\exp{\{{\rm i} Q_j R_i\}}$ and
$Q_j$ ($R_i$) denote the allowed
momentum (translation) vectors of the lattice.
The $Q=0$ phonon mode couples to
$n_{0}=N_{e}/\sqrt{N}$ which is a constant
if working in a subspace with fixed number of electrons.
Thus the Hamiltonian decomposes into
$H=H^\prime+H_{Q=0}$, with
$H_{Q=0}=- \sqrt{\varepsilon_p\omega_0}\,
({B^{\dagger}_{0}}+{B^{}_{0}})\,n_{0}^{}
+ \omega_0 B^{\dagger}_{0} B^{}_{0}$.
Since $[H^\prime , H_{Q=0}]=0$, the eigenspectrum
of $H$ can be built up by the analytic solution for $H_{Q=0}$
and the numerical results for $H^\prime$.
Using the unitary transformation
\begin{equation}
\label{UTransS}
{\cal \bar{S}} (N_{e}) = \exp{ \left\{ - \frac{N_{e}}{\sqrt{N}}
\sqrt{\frac{\varepsilon_p}{\omega_0}}
({B^{\dagger}_{0}}-{B^{}_{0}}) \right\} }
\end{equation}
which introduces a shift of the phonon operators
($B^{}_{0} \rightarrow B^{}_{0}+ \frac{N_{e}}{\sqrt{N}}
\sqrt{\frac{\varepsilon_p}{\omega_0}}$),
we easily find the diagonal form of
$\bar{H}_{Q=0} = \omega_0 B^{\dagger}_{0} B^{}_{0}
- \varepsilon_pN^2_{e}/N$.
It represents a harmonic oscillator with
eigenvalues and eigenvectors
$\bar{E}_{\bar{l}} =
\omega_0 \bar{l} -\varepsilon_p \frac{N^2_{e}}{N}$ and
$| \bar{l} \rangle = \frac{1}{\sqrt{ \bar{l} !} }
(B^{\dagger}_{0} )^{\bar{l}} | 0 \rangle$.
The corresponding eigenenergies and eigenvectors of
$H_{Q=0}$ are $E_{l}=\bar{E}_{\bar{l}}$ and
$|l (N_{e} )\rangle = {\cal \bar{S}}^{\dagger} (N_{e}) | \bar{l} \rangle$,
respectively. That is, in the eigenstates of the Holstein model
a homogeneous lattice distortion occurs.
Note that the homogeneous lattice distortions are different
in subspaces with different electron number. Thus excitations due to
lattice relaxation processes will show up in the one-particle
spectral function.
Finally, eigenvectors and eigenenergies of $H$ can be constructed by
combining the above analytical result with the numerically determined
eigensystem ($E^\prime_{n}$; $|\psi_{n}^{\prime} \rangle$)
of $H^\prime$:
$E_{n,l}^{\prime}= E_{n}^{\prime} + \omega_0 l
- \varepsilon_p N^2_{e}/N$ and
$|\psi_{n,l} \rangle = |\psi_{n}^{\prime} \rangle \otimes |l ( N_{e})
\rangle$.
\subsection{Variational ED method\label{subsec:vm}}
In this section we briefly outline a very efficient variational method
to address the (one-electron) Holstein polaron problem numerically
in any dimension.
The approach was developed by Bon\v{c}a {\it et al.}~\cite{BTB99,KTB02}
and is based on a clever way of constructing the EP Hilbert
space which can be systematically
expanded in order to achieve high-accuracy results with rather modest
computational resources.
\begin{figure}[t]
\begin{minipage}{0.45\linewidth}
\includegraphics[width=\linewidth,clip]
{jeckel_fehske_fig3.eps}
\end{minipage}\hspace*{0.3cm}
\begin{minipage}{0.50\linewidth}
\caption{Small variational Hilbert space for the 1D polaron problem.
Basis states are represented by dots, off-diagonal matrix elements
by lines. Vertical bonds create or destroy phonons with frequency
$\omega_0$. Horizontal bonds correspond to electron hops $(\propto t)$.
Accordingly, state $|1\rangle$ describes an electron at the
origin (0) and no phonon,
state $|2\rangle$ is an electron and one phonon both at site 0,
$|3\rangle$ is an electron at the nearest-neighbor site 1,
and a phonon at site 0, and so on. The figure is re-drawn from
ref.~\cite{BTB99}.
\vspace*{0.2cm}
\label{f:vm}}
\end{minipage}
\end{figure}
The authors built up the variational space starting from an initial state,
e.g. the electron at the origin, and acting repeatedly ($L$ times) with the
off-diagonal hopping ($t$) and EP coupling ($\lambda$) terms
of the Hamiltonian~(see fig.~\ref{f:vm}). A basis state is added
if it is connected by a non-zero $t$- or $\lambda$-matrix element
to a state previously in the space, i.e., states in generation $l$ are
obtained by acting $l$ times with off-diagonal terms. Only one copy
of each state is retained. Importantly, all translations of these
states on an infinite lattice are included. According to Bloch's
theorem each eigenstate can be written as $\psi=\mbox{e}^{{\rm i} kj} a_L$,
where $a_L$ is a set of complex amplitudes related
to the states in the unit cell, e.g. $L=7$ for the small variational
space shown in fig.~\ref{f:vm}. For each momentum $K$ the resulting
numerical problem is then to diagonalize a Hermitian $L\times L$
matrix. While the size of the Hilbert space increases as $(D+1)^L$,
the error in the ground-state
energy decreases exponentially with $L$. Thus in most cases
$10^4$-$10^6$ basis states are sufficient to obtain an 8-10 digit
accuracy for $E_0$. The ground-state energy calculated this way is
variational for the infinite system.
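The generation scheme just described can be mimicked in a few lines. The following Python sketch (our own toy encoding, not the code of ref.~\cite{BTB99}) grows the 1D polaron basis of fig.~\ref{f:vm}: a state is the phonon configuration measured relative to the electron position, so that all translations on the infinite lattice are automatically identified:

```python
def build_variational_space(L):
    """Generate the variational polaron basis: a state is a phonon
    configuration relative to the electron, encoded as a sorted tuple
    of (offset, n_phonons) pairs with n > 0; translated copies on the
    infinite lattice map to the same tuple."""
    def normalize(d):
        return tuple(sorted((o, n) for o, n in d.items() if n > 0))

    seen = {()}            # generation 0: electron at the origin, no phonon
    frontier = [()]
    for _ in range(L):
        next_frontier = []
        for s in frontier:
            d = dict(s)
            candidates = []
            # EP coupling: create or destroy a phonon at the electron site
            up = dict(d); up[0] = up.get(0, 0) + 1
            candidates.append(normalize(up))
            if d.get(0, 0) > 0:
                down = dict(d); down[0] -= 1
                candidates.append(normalize(down))
            # hopping: the electron moves, so all phonon offsets shift
            for shift in (-1, +1):
                candidates.append(normalize({o + shift: n for o, n in d.items()}))
            for c in candidates:
                if c not in seen:
                    seen.add(c)
                    next_frontier.append(c)
        frontier = next_frontier
    return seen

for L in range(1, 7):
    print(L, len(build_variational_space(L)))
```

The first two generations contain 2 and 5 states, respectively, and the size of the space indeed grows exponentially with the number of generations $L$.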
\subsection{Solving the eigenvalue problem\label{subsec:ep}}
To determine the eigenvalues of large sparse Hermitian matrices,
iterative (Krylov) subspace methods like Lanczos~\cite{CW85} and variants of
Davidson~\cite{Da75} diagonalization techniques
are frequently applied. These algorithms
contain basically three steps:
\begin{itemize}
\item[(1)] project the problem matrix ${\bf A} \in \mathbb{R}^{n\times n}$ onto a
subspace $\bar{\bf A}^k \in \mathbb{V}^k$ $(k\ll n)$
\item[(2)] solve the eigenvalue problem in $\mathbb{V}^k$
using standard routines
\item[(3)] extend subspace $\mathbb{V}^k \to \mathbb{V}^{k+1}$ by
a vector $\vec{t}\perp \mathbb{V}^k $ and go back to (2).
\end{itemize}
This way we obtain a sequence of successively improved approximations
to the desired eigenpairs of the original matrix ${\bf A}$.
\subsubsection{Lanczos diagonalization\label{subsubsec:ld}}
Starting out from an arbitrary (random) initial state
$|\varphi_0\rangle$,
having finite overlap with the true ground state $|\psi_{0}\rangle$,
the Lanczos algorithm recursively generates a set of orthogonal
states (Lanczos or Krylov vectors):
\begin{equation}
|\varphi_{l+1}\rangle=
{\bf H}^{\tilde{D}}|\varphi_{l}\rangle
-a_l|\varphi_{l}\rangle
-b_l^2|\varphi_{l-1}\rangle,
\label{lr1}
\end{equation}
where
$a_l=\langle\varphi_{l}|{\bf H}^{\tilde{D}}
|\varphi_{l}\rangle/\langle\varphi_{l}|
\varphi_{l}\rangle,
b_l^2=\langle\varphi_{l}|\varphi_{l}\rangle/\langle\varphi_{l-1}|
\varphi_{l-1}\rangle, b_0^2=0$,
and $|\varphi_{-1}\rangle=0$.
Obviously, the representation matrix $[{\bf T}^L]_{l,l^{\prime}}=
\langle \varphi_{l}|{\bf H}^{\tilde{D}}|
\varphi_{l^{\prime}}\rangle$ of ${\bf H}^{\tilde{D}}$ is tridiagonal
in the $L$-dimensional Hilbert space spanned by the
$\{|\varphi_{l}\rangle\}_{l=0,\ldots,L-1}$ ($L\ll\tilde{D}$):
\begin{equation}
\label{tm}
{\bf T}^{L} = \left(
\begin{array}{cccccc}
a_{0} & b_{1} & 0 & 0 & 0 & \cdots\\ b_{1} & a_{1} & b_{2} & 0 & 0
&\cdots\\ 0 & b_{2} & a_{2} & b_{3} & 0 & \cdots\\ 0 & 0 & b_{3} & a_{3}
& b_{4} & \cdots\\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots
\end{array}
\right).
\end{equation}
Applying the Lanczos recursion~(\ref{lr1}), the eigenvalues $E_n$ and
eigenvectors $|\psi_{n}\rangle $ of ${\bf H}^{\tilde{D}} $
are approximated by
\begin{equation}
E_n^L\quad\mbox{ and}\quad
|\psi^L_{n}\rangle=\sum_{l=0}^{L-1}c_{n,l}^L|\varphi_{l}\rangle ,
\label{cnlL}
\end{equation}
respectively, where the $L$ coefficients $c_{n,l}^L$ are
the components of the $n$-th eigenvector
of ${\bf T}^L$ with eigenvalue $E_n^L$.
The eigenvalue spectrum of ${\bf T}^L$ can be easily
determined using standard routines from libraries
such as EISPACK (see http://www.netlib.org).
Increasing $L$, we check for the convergence of an eigenvalue of
${\bf T}^L$ in a specific energy range. In this way we can detect spurious
eigenvalues, which appear at fixed Lanczos dimension $L$ but
disappear as one varies $L$~\cite{CW85}.
Note that the convergence of the Lanczos algorithm is excellent at the
edges of the spectrum (the ground state, for example, is obtained with
high precision using at most $\sim 100$ Lanczos iterations)
but rapidly worsens inside the
spectrum.
Hence Lanczos is suitably used only to obtain the ground state
and a few low-lying excited states.
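To illustrate the recursion~(\ref{lr1}), here is a self-contained Python sketch (our own illustration; a dense random symmetric matrix stands in for the sparse ${\bf H}^{\tilde{D}}$, and the MVM is an ordinary dense product):

```python
import numpy as np

def lanczos_ground_state(H, L=100, seed=0):
    """Plain Lanczos recursion (no re-orthogonalization); returns the
    lowest eigenvalue of the tridiagonal representation T^L."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(H.shape[0])      # random start vector |phi_0>
    v /= np.linalg.norm(v)
    v_prev = np.zeros_like(v)
    a, b = [], []
    beta = 0.0
    for _ in range(min(L, H.shape[0])):
        w = H @ v - beta * v_prev            # normalized variant of eq. (lr1)
        alpha = v @ w
        w -= alpha * v
        a.append(alpha)
        beta = np.linalg.norm(w)
        b.append(beta)
        if beta < 1e-12:                     # invariant subspace reached
            break
        v_prev, v = v, w / beta
    k = len(a)
    T = np.diag(a) + np.diag(b[:k - 1], 1) + np.diag(b[:k - 1], -1)
    return np.linalg.eigvalsh(T)[0]

# dense random symmetric test matrix standing in for the sparse Hamiltonian
rng = np.random.default_rng(1)
A = rng.standard_normal((300, 300))
H = (A + A.T) / 2
assert abs(lanczos_ground_state(H, L=120) - np.linalg.eigvalsh(H)[0]) < 1e-7
```

Without re-orthogonalization the Lanczos vectors lose orthogonality in finite-precision arithmetic, which is one source of the spurious eigenvalues mentioned above; for the extremal eigenvalues the scheme nevertheless converges rapidly.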
\subsubsection{Implementation of matrix vector multiplication
\label{sec:mvi}}
The core operation of most ED algorithms is an MVM.
It is quite obvious that our matrices are extremely sparse
because the number of non-zero entries per row of our Hamilton matrix
scales linearly with the number of electrons.
Therefore a standard implementation of the MVM step uses a sparse storage
format for the matrix, holding the non-zero elements only.
Two data schemes are in wide use, the compressed row
storage (CRS) and the jagged diagonal
storage (JDS) format~\cite{Baea93}, where the
latter is the method of choice for vector computers.
The typical storage requirement per non-zero entry
is 12-16 bytes for both methods, i.e. for a matrix dimension of
$\tilde{D}=10^9$ about one TByte of main memory is required
to store the matrix elements of the EP Hamiltonian alone.
Both variants can be applied to any sparse matrix structure
and the MVM step can be done in parallel by using a
parallel library such as PETSc
(see http://www-unix.mcs.anl.gov/petsc/petsc-as/).
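For illustration, a minimal (serial) Python version of the CRS storage scheme and its MVM might look as follows; this is our own sketch, whereas production codes use compiled, parallelized implementations:

```python
import numpy as np

def dense_to_crs(A):
    """Compressed row storage: non-zero values, their column indices,
    and row pointers delimiting each row's segment."""
    val, col, ptr = [], [], [0]
    for row in A:
        nz = np.flatnonzero(row)
        val.extend(row[nz])
        col.extend(nz)
        ptr.append(len(val))
    return np.array(val), np.array(col, dtype=int), np.array(ptr)

def crs_matvec(val, col, ptr, x):
    """y = A @ x touching only the stored non-zero elements."""
    y = np.zeros(len(ptr) - 1)
    for i in range(len(y)):
        lo, hi = ptr[i], ptr[i + 1]
        y[i] = val[lo:hi] @ x[col[lo:hi]]
    return y

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50)) * (rng.random((50, 50)) < 0.05)  # sparse test
x = rng.standard_normal(50)
assert np.allclose(crs_matvec(*dense_to_crs(A), x), A @ x)
```

Per non-zero entry one stores an 8-byte value plus a 4-8-byte column index, which is the origin of the 12-16 byte figure quoted above.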
To extend our EP studies to even larger matrix sizes we
no longer store the non-zero matrix elements but generate them
on the fly in each MVM step. Of course, at that point standard
libraries are no longer useful and a parallel code
tailored to each specific class of Hamiltonians must be developed.
For the Holstein-Hubbard EP model we have established a massively
parallel program using the Message Passing Interface (MPI)
standard. The minimal total memory requirement of this
implementation is three vectors with Hilbert space dimension.
The parallelization approach follows the inherent natural parallelism of
the Hilbert space, which can be constructed as the tensorial product space
of electrons and phonons
$\{|\tilde{b}\rangle=|\tilde{e}\rangle\otimes|p\rangle\}$
(cf. subsubsect.~\ref{subsubsec:bs}).
Assuming that the electronic dimension ($\tilde{D}_{e}$) is a multiple
of the number of processors used ($N_{\rm cpu}$), we can easily
distribute the electronic basis states among these processors,
i.e. processor $i$ $(0 \leq i \leq N_{\rm cpu}-1)$ holds the
basis states $\tilde{e}_i = i \tilde{D}_e/N_{\rm cpu}+1,
\ldots , (i+1) \tilde{D}_e/N_{\rm cpu}$.
As a consequence of this choice only the electronic hopping term
generates inter-processor communication in the MVM
while all other (diagonal electronic) contributions can be
computed locally on each processor.
Furthermore, the communication pattern remains constant within a single run
for all MVM steps and the message sizes (at least $D_{p}$ words)
are large enough to ignore the latency
problems of modern interconnects.
Using supercomputers with hundreds of processors and one TByte
of main memory, such as IBM p690 clusters or SGI Altix systems,
we are able to run simulations up to a matrix dimension of
$30 \times 10^{9}$.
\subsection{Algorithms for estimating spectral
functions\label{sec:sp}}
The numerical calculation of spectral functions
\begin{eqnarray}
A^{\cal O}(\omega)&=&-\lim_{\eta\to 0^+}\frac{1}{\pi} \mbox{Im} \left[
\langle\psi_{0}
|{\bf O}^{\dagger}\frac{1}{\omega - {\bf H} +E_0 +i\eta}{\bf O}^{}
|\psi_{0}\rangle\right]\nonumber\\&=&
\sum_{n=0}^{\tilde{D}-1}|\langle\psi_{n}|{\bf O}|
\psi_{0}\rangle |^{2}\delta [\omega - (E_{n} - E_{0})],
\label{specfu}
\end{eqnarray}
where ${\bf O}$ is the matrix representation of a certain
operator ${\cal O}$ (e.g., the creation operator
$c_{k}^\dagger$ of an electron with wave number $k$ if one wants
to calculate the single-particle spectral function; or the current operator
$\hat{\jmath} = - \mbox{i} e t\sum_{i}(c_{i}^{\dagger}
c_{i+1}^{} - c_{i+1}^{\dagger}
c_{i}^{})$ if one is interested in the optical conductivity),
involves the resolvent of the Hamilton
matrix ${\bf H}$. Once we have obtained the
eigenvalues and eigenvectors of $H$ we can plug them into
eq.~(\ref{specfu}) and obtain directly the corresponding
dynamical correlation or Green functions.
In practice this `naive' approach is applicable for small
Hilbert spaces only, where the complete
diagonalization of the Hamilton matrix is feasible.
For the typical EP problems under investigation
we deal with Hilbert spaces having total dimensions $\tilde{D}$
of $10^6$-$10^{11}$. Finding all eigenvalues and eigenvectors
of such huge Hamilton matrices is impossible,
because the CPU time required for
exact diagonalization of ${\bf H}$ scales as
$\tilde{D}^3$ and memory as $\tilde{D}^2$.
Fortunately, there exist very accurate and well-conditioned
linear scaling algorithms for a direct approximate calculation of
$A^{\cal O}(\omega)$.
\subsubsection{Lanczos recursion method\label{subsubsec:sdm}}
Having determined the ground state $|\psi_{0}^L\rangle$ by the
Lanczos technique, we can use again the recursion relation~(\ref{lr1}),
but with the initial state
$|\varphi_0\rangle={\bf O}|\psi^L_{0}\rangle/\sqrt{
\langle \psi^L_{0}|{\bf O}^{\dagger}{\bf O}|\psi^L_{0}
\rangle}$,
to determine within the so-called {\it Lanczos recursion method} (LRM)
or {\it spectral decoding method} (SDM) an approximative spectral function,
\begin{equation}
\bar{A}^{\cal O}(\omega)=\sum_{n=0}^{L-1} |c_{n,0}^L|^2
\langle\psi_{0}|{\bf O}^{\dagger}{\bf O}|\psi_{0}\rangle\,
\delta[\omega -(E_n^L-E_0^L)],
\label{asdm}
\end{equation}
or equivalently
\begin{equation}
\label{equ:lrmform}
\bar{A}^{\cal O}(\omega) = -\lim_{\eta\to 0^+}\frac{1}{\pi}\mbox{Im} {\,\,
\frac{ \langle\psi_{0}| {\bf O}^{\dagger} {\bf O} |\psi_{0}\rangle}
{\omega+\mbox{i}\eta -a_{0}-\cfrac{b_{1}^{2}} {\omega+{\rm
i}\eta -a_{1}-\cfrac{b_{2}^{2}} {\omega+{\rm i}\eta-a_{2}-\cdots}}}
}\,,
\end{equation}
which is built up by $L$ $\delta$-peaks.
Of course,
the true spectral function $A^{\cal O}(\omega)$ has $\tilde{D}$
$\delta$-peaks. According to the Lanczos phenomenon, the approximated
spectral weights and positions of the peaks converge to their true values
with increasing $L$. Some of the main problems of the LRM/SDM are:
(i) The convergence is not uniform in the whole energy range.
(ii) There exist so-called spurious peaks, which appear
and disappear as $L$ is increased, i.e., when the iteration proceeds.
(iii) Without computationally expensive re-orthogonalization only a
few hundred iterations are possible.
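To make the LRM concrete, the following Python sketch (our own toy example; an arbitrary start vector plays the role of ${\bf O}|\psi_0^L\rangle$) computes the Lanczos coefficients, evaluates the continued fraction~(\ref{equ:lrmform}) bottom-up at finite $\eta$, and checks it against the exact resolvent of a small matrix, for which $L=\tilde{D}$ makes the expansion exact:

```python
import numpy as np

def lanczos_coeffs(H, phi0, L):
    """Coefficients a_l, b_l of the Lanczos recursion started
    from the (normalized) state |phi_0>."""
    v = phi0 / np.linalg.norm(phi0)
    v_prev = np.zeros_like(v)
    a, b = [], []
    beta = 0.0
    for _ in range(L):
        w = H @ v - beta * v_prev
        alpha = v @ w
        w -= alpha * v
        a.append(alpha)
        beta = np.linalg.norm(w)
        b.append(beta)
        if beta < 1e-12:
            break
        v_prev, v = v, w / beta
    return a, b

def lrm_spectral_function(a, b, norm2, omega, eta):
    """A(omega) from the continued fraction, evaluated bottom-up;
    norm2 = <phi_0|phi_0> before normalization."""
    z = omega + 1j * eta
    g = np.zeros_like(z)
    for l in range(len(a) - 1, 0, -1):
        g = b[l - 1] ** 2 / (z - a[l] - g)
    G = norm2 / (z - a[0] - g)
    return -G.imag / np.pi

rng = np.random.default_rng(2)
n = 8
M = rng.standard_normal((n, n)); H = (M + M.T) / 2
phi0 = rng.standard_normal(n)
a, b = lanczos_coeffs(H, phi0, L=n)            # L = dimension: exact
omega = np.linspace(-6.0, 6.0, 101)
approx = lrm_spectral_function(a, b, phi0 @ phi0, omega, eta=0.1)
E, U = np.linalg.eigh(H)                       # exact pole representation
w = (U.T @ phi0) ** 2
exact = -np.sum(w[:, None] / (omega + 0.1j - E[:, None]), axis=0).imag / np.pi
assert np.allclose(approx, exact, atol=1e-8)
```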
\subsubsection{Kernel polynomial method\label{subsubsec:kpm}}
The idea behind a conceptually different approach, the
{\it kernel polynomial method} (KPM) (for a review see~\cite{WWAF05}), is
to expand $A^{\cal O}(\omega)$
in a finite series of $L+1$ Chebyshev polynomials $T_m(x)= \cos [m
\arccos(x)]$. Since the Chebyshev polynomials are defined
on the real interval $[-1,1]$, we apply first a simple
linear transformation to the Hamiltonian and all energy scales:
${\bf X}=({\bf H}-b)/a$, $x=(\omega -b)/a$,
$a=(E_{\rm max}-E_{\rm min})/[2(1-\epsilon)]$,
and $b=(E_{\rm max}+E_{\rm min})/2$
(the small constant $\epsilon$ is introduced in order to avoid
convergence problems at the endpoints of the interval
--a typical choice is $\epsilon \sim 0.01$ which has only
1\% impact on the energy resolution~\cite{SR97}).
Then the expansion reads
\begin{equation}
A^{\cal O}(x)=\frac{1}{\pi
\sqrt{1-x^{2}}}\left(\mu_{0}^{\cal O}+
2\sum_{m=1}^{L}\mu_{m}^{\cal O}T_{m}(x)\right),
\label{akpm}
\end{equation}
with the coefficients (moments)
\begin{equation}
\mu_m^{\cal O}=\int_{-1}^{1}dx\,T_{m}(x)A^{\cal O}(x)=\langle
\psi_{0}| {\bf O}^{\dagger}T_{m}({\bf X}){\bf O}^{}|\psi_{0}\rangle.
\label{mkpm}
\end{equation}
Equation~(\ref{akpm}) converges to the correct function for $L\to\infty$.
Again the moments
\begin{equation}
\mu_{2m}^{\cal O}=2\langle\phi_m|\phi_m\rangle -\mu_0^{\cal O}
\quad\mbox{and}\quad
\mu_{2m+1}^{\cal O}=2\langle\phi_{m+1}|\phi_m\rangle -\mu_1^{\cal O}
\label{moments2}
\end{equation}
can be efficiently obtained by repeated parallelized MVM,
where $|\phi_{m+1}\rangle
=2{\bf X}|\phi_m\rangle -|\phi_{m-1}\rangle$
but now $|\phi_1\rangle={\bf X}|\phi_0\rangle$ and
$|\phi_0\rangle={\bf O}|\psi_0\rangle$
with $|\psi_0\rangle$
determined by Lanczos ED.
As is well known from Fourier expansion,
the series~(\ref{akpm}) with $L$ finite
suffers from rapid oscillations (Gibbs phenomenon)
leading to a poor approximation to $A^{\cal O}(\omega)$.
To improve the approximation
the moments $\mu_n$ are modified $\mu_n \to g_n \mu_n$,
where the damping factors $g_n$
are chosen to give the `best' approximation for a given $L$.
This modification is equivalent to a convolution of the infinite series
with a smooth approximation $K_L(x,y)$ to $\delta(x-y)$,
a so-called approximation kernel.
The appropriate choice of this kernel, that is of $g_n$,
e.g. to guarantee positivity of $A^{\cal O}(\omega)$,
lies at the heart of KPM.
We mainly use the Jackson kernel,
which results in a uniform approximation whose energy resolution
improves as $1/L$, but for the determination of the
single-particle Green functions below we use a Lorentz kernel
which mimics a finite imaginary part $\eta$
in eq.~(\ref{specfu}), see~\cite{WWAF05}.
In view of the uniform convergence of the expansion,
KPM is a method tailored to the calculation of spectral properties.
Most importantly, spectral functions obtained via KPM are not subject to
uncontrolled or biased approximations:
The accuracy of its outcome depends only on the expansion depth
$L$, and can be made as good as required by just increasing $L$.
Of course one is restricted to finite systems of moderate size
whose associated Hamilton matrix does not exceed available
computational resources.
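A compact Python sketch of the KPM machinery may read as follows (our own illustration; it uses the plain one-vector moment recursion instead of the doubled moments of eq.~(\ref{moments2}), and it obtains the spectral bounds $E_{\rm min/max}$ by full diagonalization, which in practice would come from a few Lanczos iterations):

```python
import numpy as np

def kpm_spectral_function(H, phi0, L=150, n_points=2000, eps=0.01):
    """Chebyshev/KPM estimate of A(omega) = sum_n |<n|phi0>|^2 delta(omega-E_n)
    with Jackson-kernel damping."""
    E = np.linalg.eigvalsh(H)                  # bounds only (shortcut, see text)
    a = (E[-1] - E[0]) / (2 * (1 - eps))
    b = (E[-1] + E[0]) / 2
    X = (H - b * np.eye(len(H))) / a           # spectrum mapped into [-1, 1]

    mu = np.zeros(L)                           # mu_m = <phi0|T_m(X)|phi0>
    t_prev, t = phi0.copy(), X @ phi0
    mu[0], mu[1] = phi0 @ t_prev, phi0 @ t
    for m in range(2, L):
        t_prev, t = t, 2 * (X @ t) - t_prev    # Chebyshev recursion
        mu[m] = phi0 @ t

    m = np.arange(L)                           # Jackson damping factors g_m
    g = ((L - m + 1) * np.cos(np.pi * m / (L + 1))
         + np.sin(np.pi * m / (L + 1)) / np.tan(np.pi / (L + 1))) / (L + 1)

    x = np.linspace(-1 + 1e-3, 1 - 1e-3, n_points)
    T = np.cos(np.outer(np.arccos(x), m))      # T_m(x) on the sample points
    Ax = (g[0] * mu[0] + 2 * T[:, 1:] @ (g[1:] * mu[1:])) \
         / (np.pi * np.sqrt(1 - x ** 2))
    return a * x + b, Ax / a                   # back to the omega scale

rng = np.random.default_rng(3)
n = 60
M = rng.standard_normal((n, n)); H = (M + M.T) / 2
phi0 = rng.standard_normal(n); phi0 /= np.linalg.norm(phi0)
omega, A = kpm_spectral_function(H, phi0)
weight = np.sum(0.5 * (A[1:] + A[:-1]) * np.diff(omega))  # total spectral weight
assert abs(weight - 1.0) < 0.02                # sum rule: integral = mu_0
assert A.min() > -1e-8                         # Jackson kernel keeps A >= 0
```

Note that the Jackson damping keeps the reconstructed function non-negative and preserves the zeroth moment, i.e. the total spectral weight.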
\subsubsection{Cluster perturbation theory (CPT)\label{subsubsec:cpt}}
The spectrum of a finite system of $N$ sites which we obtain through
KPM differs in many respects from that in the thermodynamic limit
$N\to\infty$;
in particular, it is obtained for a finite number of momenta
$K=\pi\, m/N$ only.
The most obvious feature is the resulting discreteness of the energy levels,
which is already a property of the non-interacting system.
While we cannot easily increase $N$ without going beyond
computationally accessible Hilbert spaces,
we can try to extrapolate from a finite to the infinite system.
For this purpose we first calculate the Green function $G^c_{ij}(\omega)$
for all sites $i,j=1,\dots,N$ of an $N$-site cluster with open
boundary conditions, and then recover the infinite lattice
by pasting identical copies of this cluster at their edges.
The `glue' is the hopping $V$
between these clusters, where $V_{kl}=t$ for $|k-l|=1$ and
$k,l \equiv 0,1 \bmod N$,
which is dealt with in first order perturbation theory.
Then the Green function $G_{ij}(\omega)$ of the infinite lattice
is given through a Dyson equation
\begin{equation}
G_{ij}(\omega) = G^c_{ij}(\omega) + \sum_{kl} G^c_{ik}(\omega) V_{kl}
G_{lj}(\omega),
\end{equation}
where indices of $G^c(\omega)$ are counted modulo $N$.
Obviously this order of perturbation in $V$ is exact for the
non-interacting system. We thus get rid of the discreteness addressed above.
The Dyson equation is solved by Fourier transformation over momenta
$K= k N$ corresponding to translations by $N$ sites
\begin{equation}
G_{ij}(K,\omega)
= \left[ \frac{G^c(\omega)}{1-V(K)G^c(\omega)} \right]_{ij} ,
\end{equation}
from which one finally obtains
\begin{equation}
G(k,\omega) = \frac{1}{N} \sum_{i,j=1}^{N} G^c_{ij}(N k,\omega)
\exp(- \mathrm{i} k (i-j) ) .
\end{equation}
In this way, which is called CPT~\cite{SPP00},
we obtain a Green function $G(k,\omega)$ with continuous
momenta $k$ from the Green function $G^c_{ij}(\omega)$ on a finite cluster.
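These steps can be sketched for the noninteracting 1D chain, where first-order perturbation theory in $V$ is exact, so the periodized $G(k,\omega)$ must reproduce the exact lattice result $1/(\omega+\mathrm{i}\eta-\varepsilon_k)$ with $\varepsilon_k=-2t\cos k$ (illustrative Python; a hopping amplitude $-t$ is assumed in both the cluster and the glue):

```python
import numpy as np

def cpt_green(Gc, t, k):
    """CPT sketch for a 1D chain: from the N x N cluster Green function
    Gc(omega) (open boundaries, hopping amplitude -t) to G(k, omega).
    The inter-cluster hopping V(K) connects the last site of one
    cluster to the first site of the next, with K = N*k."""
    N = Gc.shape[0]
    V = np.zeros((N, N), dtype=complex)
    V[N - 1, 0] = -t * np.exp(1j * N * k)    # hop to the right cluster
    V[0, N - 1] = -t * np.exp(-1j * N * k)   # hop to the left cluster
    # Dyson equation G = Gc + Gc V G  =>  G = (1 - Gc V)^{-1} Gc
    G = np.linalg.solve(np.eye(N) - Gc @ V, Gc)
    # periodization: G(k) = (1/N) sum_{ij} G_ij exp(-i k (i - j))
    i = np.arange(N)
    return np.sum(G * np.exp(-1j * k * (i[:, None] - i[None, :]))) / N

# check in the noninteracting limit, where CPT is exact
N, t, k, omega, eta = 8, 1.0, 1.0, 0.7, 1e-6
H_c = -t * (np.eye(N, k=1) + np.eye(N, k=-1))   # open-chain cluster
Gc = np.linalg.inv((omega + 1j * eta) * np.eye(N) - H_c)
G_cpt = cpt_green(Gc, t, k)
G_exact = 1.0 / (omega + 1j * eta + 2.0 * t * np.cos(k))
assert abs(G_cpt - G_exact) < 1e-8
```

For an interacting cluster the same two lines (Dyson equation and periodization) apply unchanged, with $G^c(\omega)$ supplied by Lanczos or KPM.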
Two approximations are made:
first-order perturbation theory in $V=t$,
and the assumption of translational symmetry in $G_{ij}(\omega)$,
which is only approximately satisfied.
In principle, the CPT spectral function $G(k,\omega)$ does not
contain any more
information than the cluster Green function $G^c_{ij}(\omega)$
already does.
But by extrapolating to the infinite system, it gives a first hint of the
scenario in the thermodynamic limit.
However, CPT does not describe effects
which only occur on large length scales,
like Anderson localization
(see the paper by Fehske, Bronold and Alvermann)
or the critical behavior at a phase transition.
Providing direct access to spectral functions, still without relying
on possibly erroneous approximations,
CPT occupies a niche
between variational approaches like (D)DMRG
(see sect.~\ref{sec:dmrg} and~\ref{sec:dynamical})
and methods directly working in the thermodynamic limit
like the variational ED method~\cite{BTB99}.
\section{Density matrix renormalization group approach \label{sec:dmrg}}
The Density Matrix Renormalization Group (DMRG)
is one of the most powerful
numerical techniques for studying many-body systems.
It was developed by Steve White~\cite{Steve_dmrg} in 1992 to overcome
the problems arising in the application of the standard Numerical
Renormalization Group (NRG)
to quantum lattice many-body systems
such as the Hubbard model~(\ref{hubbard}).
Since then the approach has been extended to a great variety of
problems,
from Classical Statistical Physics
to Quantum Chemistry
(including {\it ab initio} calculations of electronic
structures in molecules) and, recently, to
Nuclear Physics and the Physics
of Elementary Particles and Fields.
A review article on DMRG and its numerous applications
has recently been published~\cite{Uli_review}. Additional
information
can also be found on the DMRG web page at {\it http://www.dmrg.info}.
A detailed discussion of the basic DMRG algorithms and their
implementation has been published in ref.~\cite{Reinhard_review}.
Readers interested in a simple example
should consider the application of DMRG to
single-particle problems, which is also discussed there.
The source code of a single-particle DMRG program
(in the programming language C$^{++}$)
is now part of the ALPS distribution~\cite{ALPS}.
DMRG techniques for strongly correlated systems
have been substantially improved and extended since their
conception and
have proved to be both extremely accurate for low-dimensional
problems and widely applicable.
They enable numerically exact calculations
(as good as exact diagonalizations)
of low-energy properties
on large lattices with up to a few thousand particles and sites
(compared to less than a few tens for exact diagonalizations).
The calculation of high-energy excitations and dynamical
spectra for large systems has proved to be more difficult
and will be discussed in sect.~\ref{sec:dynamical}.
\subsection{Renormalization group and density matrix}
Consider a quantum lattice system with $N$ sites
(in general, we are interested in the case $N \rightarrow \infty$
or at least $N \gg 1$).
The Hilbert space of this system is the Fock space of all
properly symmetrized many-particle wave functions.
As seen in sect.~\ref{sec:ed},
its dimension $D$ grows exponentially with the number of sites
$N$.
Obviously, many of these states are not necessary for investigating
specific properties of a model, such as the ground state.
Thus, a number of methods have been developed to perform
a projection onto a subspace of
dimension $d \ll D$ and then an exact diagonalization of the
Hamiltonian in this subspace.
Such an approach, called the Numerical Renormalization Group (NRG),
was developed by Wilson 30 years ago to solve the Kondo impurity
problem~\cite{Wilson}.
The key idea is the decomposition of the system into
subsystems with increasing size
(see the paper on the NRG method by Hewson).
The subsystem size is increased by one site at each step
as shown in fig.~\ref{fig:nrg}.
Each subsystem is diagonalized successively and the information
obtained is used to truncate the Hilbert
space before proceeding to the next larger subsystem.
Let $m$ and $n$ be the dimensions of the (effective)
Hilbert spaces associated
with the subsystem made of the first $\ell$ sites
and with the site $\ell+1$, respectively.
A basis of dimension $d=mn$ for the next subsystem with $\ell+1$ sites
is built as a tensor product of the subsystem and site bases.
Assuming that $d$ is small enough, the effective Hamiltonian
can be fully diagonalized in the tensor-product basis.
The energy is used as a criterion to truncate the subsystem Hilbert
space.
The lowest $m$ eigenstates are kept to form a new basis
while the high-energy eigenstates are discarded.
The new subsystem with $\ell+1$ sites and an effective Hilbert space
of dimension $m$
can be used to start the next iteration.
In summary, this procedure provides a transformation
$H_{\ell +1} = R[ H_{\ell}]$
forming effective Hamiltonians of fixed dimension $m$ which
describe the low-energy physics of increasingly larger systems.
In this transformation the high-energy states are steadily traced
out as the system grows as illustrated in fig.~\ref{fig:nrg}.
Such a transformation $R$ is usually called a renormalization group (RG)
transformation.
A different implementation of the NRG idea is possible
if the system is homogeneous, as in the Hubbard model.
A copy of the current subsystem can be substituted
for the added site in the above procedure.
Thus the subsystem size doubles at every iteration
as illustrated in fig.~\ref{fig:nrg}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=2.5cm]{jeckel_fehske_fig4a.eps}
\includegraphics[width=4.0cm]{jeckel_fehske_fig4b.eps}
\includegraphics[width=5.4cm]{jeckel_fehske_fig4c.eps}
\caption{Schematic representations of the usual
NRG algorithm (left), an alternative NRG algorithm
(middle) and the energy levels
in the effective Hamiltonian $H_{\ell}$ for increasing
subsystem size $\ell$ (right).}
\label{fig:nrg}
\end{center}
\end{figure}
The NRG method is very accurate for the Kondo problem
and more generally for quantum impurity problems.
Unfortunately, NRG and related truncation schemes have
proved to be unreliable for quantum lattice systems such
as the Hubbard model~\cite{nrg_hub}.
It is easy to understand the failure of the standard NRG in those cases.
A subsystem always has an artificial
boundary at which the low-energy eigenstates of a quantum
lattice Hamiltonian tend to vanish smoothly.
Thus the truncation procedure based on the effective eigenenergies may keep
only states that vanish at the artificial boundary.
As a consequence, at later RG iterations the eigenstates of the
effective Hamiltonian of larger subsystems
may have unwanted features, such as nodes at the locations of
the artificial boundaries of the previous subsystems.
The application of the second NRG algorithm
(in the middle in fig.~\ref{fig:nrg}) to the problem of a quantum particle
in a one-dimensional box gives a clear illustration of this
effect~\cite{Reinhard_review,particle_box}.
The ground state wavefunction for an $N$-site lattice
$\phi(x) = \sqrt{\frac{2}{N+1}}\sin \left (\frac{\pi x}{N+1} \right )$
has a minimum (node) where the ground state wavefunction
of the twice larger system ($N \rightarrow 2N$) has a maximum
as seen in fig.~\ref{fig:pib}.
Therefore,
the low-energy eigenstates of a small system
are not necessarily the best states to form the low-energy
eigenstates of a larger system.
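This mismatch is easy to check numerically. The following sketch (illustrative, using the box wavefunctions quoted above) confirms that the doubled system's ground state is near-maximal exactly where every block eigenstate is small, so the lowest block states leave a significant residual weight:

```python
import numpy as np

def box_eigenstates(N):
    """Eigenstates of the tight-binding particle in an N-site box:
    phi_m(x) = sqrt(2/(N+1)) sin(pi m x/(N+1)), x = 1..N (rows: m)."""
    x = np.arange(1, N + 1)
    m = np.arange(1, N + 1)
    return np.sqrt(2.0 / (N + 1)) * np.sin(np.pi * np.outer(m, x) / (N + 1))

N = 20
small = box_eigenstates(N)       # eigenstates of one N-site block
big = box_eigenstates(2 * N)     # eigenstates of the doubled system
# the 2N-site ground state is near-maximal at the block boundary x = N,
# exactly where every N-site block state is close to zero
assert big[0, N - 1] > 0.9 * big[0].max()
assert abs(small[0, N - 1]) < 0.2 * small[0].max()
# project the 2N-site ground state onto the lowest 4 states of each
# block: the residual weight stays significant, i.e. low block states
# form a poor basis for the larger system's ground state
m_keep = 4
left, right = big[0, :N], big[0, N:]
res = (left @ left - np.sum((small[:m_keep] @ left) ** 2)
       + right @ right - np.sum((small[:m_keep] @ right) ** 2))
assert res > 0.02
```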
An approach proposed by White and Noack~\cite{particle_box}
to solve this problem is the construction of
an effective Hamiltonian including the effects
of the subsystem environment
to eliminate the artificial boundary.
DMRG is the extension of this idea to interacting many-particle
problems.
\begin{figure}[t]
\begin{center}
\includegraphics[width=5.5cm]{jeckel_fehske_fig5.eps}
\caption{Ground-state wavefunctions of the tight-binding
particle-in-the-box problem for two systems of $N=$10 sites
(circles) and one system of $N=20$ sites (squares).
}
\label{fig:pib}
\end{center}
\end{figure}
Consider a quantum system which can be split into two parts
called the subsystem and its environment.
A basis of the system Hilbert space
is given by the tensor product
\begin{equation}
\label{basis}
| i,j \rangle = | i \rangle \otimes | j\rangle_{\rm E}
\end{equation}
of basis states $|i\rangle \ (i=1,\dots,d)$ and
$|j\rangle_{\rm E} \ (j=1,\dots, d_{\rm E})$ for the subsystem
and its environment, respectively.
The dimension of this basis is $d_{\rm S} = d \cdot d_{\rm E}$.
Any state $|\psi\rangle$ of the system can be expanded in
this basis
$| \psi \rangle = \sum_{i,j} \psi_{ij} \ | i \rangle \otimes
| j\rangle_{\rm E}$ .
The most important states in the subsystem to represent
the state $|\psi\rangle$ are given by its reduced density matrix,
which is obtained by tracing out the states of the environment
\begin{equation}
\label{rho}
\rho_{i, i^{\prime}} = \sum_j \psi^*_{ij} \psi_{i^{\prime}j} .
\end{equation}
This density matrix is Hermitian and has $d$ non-negative eigenvalues
$w_{\alpha} \geq 0$ satisfying the normalization
$\sum_{\alpha} w_{\alpha} =1$.
A new basis of the subsystem,
$| v_{\alpha} \rangle = \sum_i v_{\alpha i} |i\rangle$,
can be defined
using the eigenvectors of the density matrix
$ \sum_{i^{\prime}} \rho_{ i, i^{\prime}} v_{\alpha i^{\prime}}
= w_{\alpha} v_{\alpha i} ; \alpha,i = 1, \dots, d .
$
In the new basis the state $| \psi \rangle$ can be written
\begin{equation}
\label{exactWF}
| \psi \rangle = \sum_{\alpha} \lambda_{\alpha} | v_{\alpha} \rangle
\otimes | u_{\alpha} \rangle_{\rm E} ,
\end{equation}
with $\lambda^2_{\alpha} = w_{\alpha} > 0$ and
normalized states
$ | u_{\alpha} \rangle_{\rm E} = \frac{1}{\lambda_{\alpha}}
\sum_{i,j} v^*_{\alpha i} \psi_{ij} | j\rangle_{\rm E} .
$
Therefore, $w_{\alpha}$ is the probability that the subsystem is
in the state $| v_{\alpha} \rangle$ if the whole system
is in the state $|\psi \rangle$.
The density matrix provides an optimal choice for selecting
the $m$ states of the subsystem to be kept
in a RG transformation:
keep density matrix eigenstates $| v_{\alpha} \rangle$ with
the largest weights $w_{\alpha}$.
This is the key idea of the density matrix renormalization group
approach.
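The construction above is precisely a Schmidt (singular value) decomposition of the coefficient matrix $\psi_{ij}$; a small self-contained check with illustrative dimensions:

```python
import numpy as np

rng = np.random.default_rng(1)
d, d_E = 6, 9                        # subsystem and environment dimensions
psi = rng.normal(size=(d, d_E)) + 1j * rng.normal(size=(d, d_E))
psi /= np.linalg.norm(psi)           # normalized state psi_{ij}

# reduced density matrix rho_{i,i'} = sum_j psi*_{ij} psi_{i'j}
rho = np.conj(psi) @ psi.T
w = np.linalg.eigvalsh(rho)[::-1]    # eigenvalues, decreasing weight

assert np.isclose(w.sum(), 1.0)      # normalization sum_alpha w_alpha = 1
assert np.all(w > -1e-12)            # non-negative probabilities

# Schmidt form: the singular values lambda_alpha of psi obey lambda^2 = w
lam = np.linalg.svd(psi, compute_uv=False)
assert np.allclose(lam ** 2, w)
```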
\subsection{DMRG algorithms}
In a DMRG calculation one first
forms a new subsystem with $\ell+1$ sites
and an effective Hilbert space of dimension $d=mn$
by adding a site to the current subsystem with $\ell$ sites
as in the NRG method.
Then one considers a larger system,
called a superblock, which is made of the new subsystem and
an environment.
A basis of the superblock Hilbert space
is given by the tensor product~(\ref{basis}).
Assuming initially that we want to compute the system
ground state only, we then calculate the ground state
$|\psi\rangle$ of the superblock Hamiltonian using
the techniques discussed in subsect.~\ref{subsec:ep}.
Then the reduced density matrix~(\ref{rho}) for the subsystem
is obtained by tracing out the environment states.
The $m$ density matrix eigenstates $| v_{\alpha} \rangle$ with
the largest weights $w_{\alpha}$ are kept to build an optimal effective
basis of dimension $m$ for the subsystem with $\ell+1$ sites.
This subsystem is then used to start the next iteration and build the
next larger subsystem.
This procedure defines a generic density matrix RG
transformation.
Clearly, the accuracy of a DMRG calculation will depend on the
quality of the environment used.
The environment should make up as much as possible of
the lattice which is not already included in the subsystem.
Constructing such a large and accurate environment is
as difficult as the original quantum many-body problem.
Therefore, the environment must also be constructed self-consistently
using a density matrix RG transformation.
In his initial papers~\cite{Steve_dmrg},
White described two DMRG algorithms:
the infinite-system method and the finite-system method.
The infinite-system method is certainly the simplest DMRG algorithm
and is the starting point of many other algorithms.
Its defining characteristic is that the environment
is constructed using a ``reflection'' of the
current subsystem.
The superblock size increases by two sites at each step as
illustrated in fig.~\ref{fig:dmrg}.
The system is assumed to be homogeneous and ``symmetric''
to allow this operation.
Iterations are continued until an accurate approximation of
an infinite system is obtained.
The infinite-system method is sometimes used to calculate
properties of finite systems.
While this approach may work well in many cases, it should be
kept in mind that there is no guarantee of convergence to the
eigenstates of the finite system.
\begin{figure}[b]
\begin{center}
\includegraphics[width=5.2cm]{jeckel_fehske_fig6a.eps}
\includegraphics[width=2.7cm]{jeckel_fehske_fig6b.eps}
\caption{Schematic representation of DMRG algorithms.
Left: Infinite-system algorithm (from top to bottom).
Right: Finite-system DMRG algorithm for a ten-site lattice.
The environment block is on the right-hand side when
going from top to bottom and on the left-hand-side
when going from bottom to top.}
\label{fig:dmrg}
\end{center}
\end{figure}
The finite-system method is
the most versatile and reliable DMRG algorithm
and with some later improvements~\cite{Uli_review,Reinhard_review} it has
also become a very efficient method.
It is designed to calculate the properties of a finite system
accurately.
The environment is chosen so that the superblock represents
the full lattice at every iteration.
The environments can also be considered as being subsystems and
are calculated self-consistently using DMRG with the usual
subsystems playing the part of their environment.
Iterations are continued back and forth through every configuration
of the superblock (i.e., every splitting of the lattice in
one subsystem and its environment) until convergence.
This procedure is illustrated in fig.~\ref{fig:dmrg}.
This ensures the self-consistent optimization of the subsystems
and their environments (for a given number $m$ of states kept)
and thus considerably improves the reliability of the results
compared to the infinite-system method.
Most DMRG algorithms use only two blocks (the subsystem and its
environment) to form the superblock, but it is possible and useful
for some problems to consider a more complicated configuration.
For instance, one can use four blocks to treat one-dimensional
problems with periodic boundary conditions, and
using several blocks can also be advantageous for systems with
bosonic degrees of freedom such as phonons~\cite{Robert}.
The DMRG sites usually correspond to
the physical sites of the lattice model investigated, such as
spin, fermionic, or bosonic degrees of freedom.
[For a phonon mode (boson), which has an infinite Hilbert space,
a DMRG site represents a finite dimensional basis of the phonon
states~\cite{Samuel} as already discussed for exact diagonalization methods
(subsect.~\ref{subsec:hs_bc}).]
However, the DMRG sites can also represent a combination of
several physical sites [for instance, the electron and phonon
at each site of the Holstein-Hubbard model~(\ref{holstein})],
or a fraction of
the Hilbert space associated with a given physical site
(as in the pseudo-site method for bosonic degrees of freedom
presented in subsubsect.~\ref{subsubsec:pseudo}).
It is possible to compute several quantum states simultaneously with
DMRG.
In that case, the density matrix is formed as the sum
of the density matrices~(\ref{rho})
calculated for each target state.
A target state can be any quantum state which is well-defined
(and can be computed) in every superblock of a DMRG calculation.
This feature turns out to be very important
for the calculations of dynamical properties
(see sect.~\ref{sec:dynamical}).
\subsection{Truncation errors\label{subsec:truncation}}
With DMRG
an error is made when projecting operators
onto the subspace spanned by the most important $m$
density matrix eigenstates.
This is called the truncation error.
Probably the most important characteristic of a DMRG calculation
is the rate at which the truncation error
decreases with an increasing number $m$ of states kept.
In the most favorable cases (gapped one-dimensional systems
with short-range interactions only and open boundary conditions),
the accuracy increases roughly exponentially with $m$.
For instance, the ground-state energy of the spin-one Heisenberg chain
on lattices with hundreds of sites can be calculated to an accuracy of
the order of $10^{-10}$ with a modest computational effort.
In very difficult cases (long-range off-diagonal interactions,
two-dimensional systems with periodic boundary
conditions), the truncation error in the ground state
energy
can decrease as slowly as $m^{-2}$.
It is possible to calculate the density matrix spectrum
of several integrable models exactly~\cite{Peschel}.
This analysis shows that the distribution of density matrix
eigenvalues $w_{\alpha}$ varies greatly.
As a result, truncation errors
may fall exponentially with increasing $m$ in some favorable
cases but decrease extremely slowly for other ones.
Correspondingly, DMRG accuracy and performance depend
substantially on the specific problem investigated.
For any target state $| \psi \rangle$
written down in its representation~(\ref{exactWF}),
the truncation of the density matrix basis
corresponds to making an approximation
\begin{equation}
\label{approxWF}
| \psi^{\prime} \rangle =
\sum_{m \; \mbox{\scriptsize kept states}}
\lambda_{\alpha} \ | v_{\alpha} \rangle
\otimes | u_{\alpha} \rangle_{\rm E} ,
\end{equation}
which minimizes the error
\begin{equation}
D_m = \left | |\psi\rangle - |\psi^{\prime} \rangle \right |^2
= \sum_{d-m \; \mbox{\scriptsize discarded states}}
w_{\alpha}
= 1 -
\sum_{m \; \mbox{\scriptsize kept states}}
w_{\alpha} ,
\end{equation}
for a fixed number $m$ of states kept.
The total weight of the discarded density matrix eigenstates
(discarded weight) $D_m$ is
related to the truncation errors of physical quantities.
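The identity between the discarded weight and the squared norm error of the truncated wave function can be verified directly; a minimal sketch with a random (real, illustrative) state:

```python
import numpy as np

rng = np.random.default_rng(2)
d, d_E, m = 8, 12, 3                  # keep m of d density matrix states
psi = rng.normal(size=(d, d_E))
psi /= np.linalg.norm(psi)

# Schmidt decomposition psi = sum_alpha lambda_alpha v_alpha u_alpha^T
U, lam, Vt = np.linalg.svd(psi, full_matrices=False)

# truncate: keep the m states with the largest weights w = lambda^2
psi_trunc = (U[:, :m] * lam[:m]) @ Vt[:m]

D_m = np.sum(lam[m:] ** 2)                    # discarded weight
err = np.linalg.norm(psi - psi_trunc) ** 2
assert np.isclose(err, D_m)                   # ||psi - psi'||^2 = D_m
assert np.isclose(D_m, 1.0 - np.sum(lam[:m] ** 2))
```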
For $D_m \ll 1$ it can be shown that
the truncation error in the eigenenergy of any target
eigenstates of the Hamiltonian $H$ scales linearly with
the discarded weight,
$
E^{\prime}_m - E_{\rm exact} = c D_m + \mathcal{O}(D_m^2) \; ,
$
where
$E^{\prime}_m = \langle \psi^{\prime} | H | \psi^{\prime} \rangle/
\langle \psi^{\prime} | \psi^{\prime} \rangle$ is the energy
in the approximate state $ | \psi^{\prime} \rangle$
and $c$ is a constant.
For the expectation values of other operators the truncation error
theoretically scales as $\sqrt{D_m}$.
The discarded weight $D_m$ decreases (and thus the
accuracy of DMRG results increases)
when the number $m$ of density matrix eigenstates kept is increased.
In particular, for large enough $m$
the discarded weights vanish at every RG iteration for both
subsystems and environments and
truncation errors become negligible.
Therefore, DMRG is an exact numerical method as defined in the
introduction (sect.~\ref{sec:intro}).
Moreover, if the number $m$ of density matrix eigenstates
kept is so large that the discarded weight is exactly
zero at every RG iteration,
DMRG becomes equivalent to an exact diagonalization~(sect.~\ref{sec:ed}).
In real DMRG applications, series of density matrix basis
truncations are performed in successive superblock configurations.
Therefore, the measured discarded weights $D_m$
do not represent the real error in the wave function of the
target.
Nevertheless, in most cases,
the energy truncation error scales linearly
with the average measured discarded weight $D_m$ for $D_m \ll 1$
as shown in fig.~\ref{fig:truncation}(a).
This can (and should) be used to make a $D_m \rightarrow 0$
extrapolation of the energy.
For other physical
quantities such as an order parameter,
truncation errors sometimes scale as $(D_m)^r$, with
$r \approx 0.5$ as seen in fig.~\ref{fig:truncation}(b).
This can also be used to estimate truncation errors.
The measured discarded weight $D_m$ alone should not be used as
an indication of a calculation accuracy, because truncation
errors for physical quantities can be several orders
of magnitude larger than $D_m$.
\begin{figure}[t]
\begin{center}
\includegraphics[width=6.0cm]{jeckel_fehske_fig7a.eps}
\includegraphics[width=6.0cm]{jeckel_fehske_fig7b.eps}
\caption{(a) Ground state energy $E^{\prime}_m$ calculated with DMRG
for a $12\times3$ Hubbard ladder with $U=t$ and
6 holes as a function of the discarded weight
$D_m$ for numbers of density matrix
eigenstates kept ranging from $m=300$ to 2200.
The zero of the energy is given by a (nearly) exact
quantum Monte Carlo result~\cite{Bonca}.
(b) Staggered bond order parameter $\delta_m$ calculated with DMRG
in an extended 1D half-filled Hubbard model~\cite{Eric}
for $U=3t$ and $V=1.5t$ on a 1024-site lattice
as a function of
$\sqrt{D_m}$ for $m=200$ to 1200. Solid lines are linear fits.
}
\label{fig:truncation}
\end{center}
\end{figure}
It should be noted that
DMRG can be considered as a variational approach.
The system energy
$E(\psi)= \langle \psi |H|\psi \rangle/\langle \psi |\psi \rangle$
is minimized in a variational subspace of the system
Hilbert space to find the ground-state wavefunction $|\psi_0\rangle$
and energy $E_0 = E(\psi_0)$.
If the ground-state wavefunction is calculated with an error of
the order of $\epsilon \sim \sqrt{D_m}
\ll 1$ (i.e., $|\psi\rangle = |\psi_0\rangle +
\epsilon |\phi\rangle$, with $\langle \phi | \phi \rangle = 1$),
the energy obtained is an upper bound to the exact result and the error
in the energy is of the order of $\epsilon^2 \ (\sim D_m$)
as in all variational approaches.
For specific algorithms the variational wave function can be written
down explicitly as a matrix product wave function
or as a product of local tensors~\cite{Uli_review}.
The computational effort of a DMRG calculation
usually increases as a power law for increasing system size $N$,
number $m$ of states kept, or DMRG site dimension $n$.
In the most favorable case (1D system with short-range interactions
only), the CPU time
increases theoretically as $N m^3 n^3$, while the
amount of stored data is of the order of
$\sim m^2 n^2$.
In most cases, however, $m$ has to be increased with
the system size $N$ to keep truncation errors constant.
\subsection{Methods for electron-phonon systems\label{subsec:phonons}}
A significant limitation of DMRG and exact diagonalizations
is that they require a finite basis for each site.
In electron-phonon lattice models such as the Holstein-Hubbard
model~(\ref{holstein}), the number of
phonons (bosons) is not conserved and the Hilbert space is infinite
for each site representing an oscillator.
Of course, the number of phonons can be artificially constrained
to a finite number $M$ per site, but
the number $M$ needed for an accurate
treatment may be quite large.
This often severely constrains the system size or the
regime of coupling which may be studied with
DMRG~\cite{Samuel} and exact diagonalizations
(sect.~\ref{sec:ed}).
Here, we describe two methods for treating systems including sites
with a large Hilbert space.
Both methods use the information contained in a density matrix
to reduce the computational effort required for the study of
such systems.
The first method~\cite{pseudo}, called the pseudo-site method,
is just a modification of the usual DMRG technique
which allows us to deal more efficiently with sites having a large
Hilbert space.
The second method~\cite{optimal,optimal2},
called the optimal basis method,
is a procedure for generating a controlled
truncation of the phonon Hilbert space, which allows the use of a
very small optimal basis without significant loss of accuracy.
\subsubsection{Pseudo-site method\label{subsubsec:pseudo}}
The DMRG algorithms presented above can easily be generalized to treat
systems including phonons (bosons).
In a standard implementation of the DMRG method, however,
each boson forms one lattice site
and thus memory and CPU time requirements increase
as $M^2$ and $M^3$, respectively.
Therefore, performing calculations for the Holstein model
requires much more computer resources
than computations for purely electronic systems.
To understand the basis of the pseudo-site
approach~\cite{pseudo},
it is important to note that, in principle, the computer
resources
required by DMRG increase linearly with the number of lattice
sites.
Thus, DMRG is much better suited to handling several
few-state sites than one many-state site.
The key idea of the pseudo-site approach is to
transform
each boson site with $M=2^{P}$ states into $P$ pseudo-sites
with 2 states.
This approach is motivated by a familiar concept:
The representation of a number in binary form.
In this case the number is the boson state index $s$ going from
0 to $M-1$.
Each binary digit $r_j$ is represented by a pseudo-site, which can
be occupied ($r_j=1$) or empty ($r_j=0$).
One can think of these pseudo-sites as hard-core bosons.
Thus,
the level (boson state) with index $s=0$ is represented by $P$ empty
pseudo-sites, while the highest level, $s=2^{P}-1$,
is represented by one boson on each of the $P$ pseudo-sites.
\begin{figure}
\begin{center}
\includegraphics[width=4.5cm]{jeckel_fehske_fig8.eps}
\caption{
Symbolic representations (a) of the standard DMRG approach
for $M=8$
and (b) of the pseudo-site approach for $P=3$.
}
\label{fig:pseudo}
\end{center}
\end{figure}
Figure~\ref{fig:pseudo} illustrates the differences between
standard and pseudo-site DMRG approaches for $M=8$ ($P=3$).
In the standard approach [fig.~\ref{fig:pseudo}(a)],
a new block (dashed rectangle) is built up
by adding a boson site (oval)
with $M$ states to another block (solid rectangle)
with $m$ states.
Initially, the Hilbert space of the new block
contains $m M$ states and
is truncated to $m$ states according to the DMRG
method.
In the pseudo-site approach [fig.~\ref{fig:pseudo}(b)],
a new block is made of the previous block with $m$ states and
one pseudo-site with two states.
The Hilbert space of this new block
contains only $2 m$ states and is also
truncated to $m$ states according to the DMRG
method.
It takes $P$ steps to make the final block
(largest dashed rectangle)
including the initial block and all pseudo-sites,
which is equivalent to the new block
in fig.~\ref{fig:pseudo}(a).
However, at each step we have to manipulate
only a fraction $2/M$ of the bosonic Hilbert space.
To implement this pseudo-site method,
we introduce $P$ pseudo-sites $j=1,...,P$
with a two-dimensional Hilbert
space $\{|r_j \rangle, r_j = 0,1 \}$ and the operators
$a^\dag_j, a^{\phantom{\dag}}_j$ such that
$ a^{\phantom{\dag}}_j |1\rangle = |0 \rangle,
\ a^{\phantom{\dag}}_j |0\rangle = 0$
and $a^\dag_j$ is the hermitian conjugate of $a^{\phantom{\dag}}_j$.
These pseudo-site operators have the same properties
as hard-core boson operators:
$a^{\phantom{\dag}}_j a^\dag_j + a^\dag_j a^{\phantom{\dag}}_j = 1$,
and operators on different pseudo-sites commute.
The one-to-one mapping
between a boson level $|s\rangle$, $s = 0,...,M-1$,
where $b^\dag b |s\rangle = s |s\rangle$,
and the $P$-pseudo-site
state $|r_1, r_2, ..., r_P \rangle$
is given by the relation
$s = \sum_{j=1}^{P} 2^{j-1} r_j $
between an integer number and its binary representation.
The next step is to write all boson operators in terms of
pseudo-site operators.
It is obvious that the boson number operator is
given by
$N_{\rm b} = b^\dag b = \sum_{j=1}^{P}
2^{j-1} \, a^\dag_j a^{\phantom{\dag}}_j$.
Other boson operators take a more complicated form.
For instance, to calculate the representation of $b^\dag$ we
first write $b^\dag = B^\dag \sqrt{N_{\rm b}+1}$,
where $B^\dag |s\rangle = |s+1\rangle$.
The pseudo-site operator representation of the second term is
\begin{equation}
\sqrt{N_{\rm b}+1} =
\sum_{s=0}^{M-1} \sqrt{s+1} \
A_1(r_1) \, A_2(r_2) ... A_P(r_P) ,
\end{equation}
where $A_j(1) = a^\dag_j a^{\phantom{\dag}}_j$,
$A_j(0) = a^{\phantom{\dag}}_j a^\dag_j$ and
the $r_j$ ($j=1,\dots,P$) are given by the binary digits of $s$.
For $B^\dag$ we find
\begin{equation}
B^\dag =
a^\dag_1 + a^\dag_2 a^{\phantom{\dag}}_1
+ a^\dag_3 a^{\phantom{\dag}}_2 a^{\phantom{\dag}}_1 + ...
+ a^\dag_P a^{\phantom{\dag}}_{P-1}
a^{\phantom{\dag}}_{P-2} ... a^{\phantom{\dag}}_1 .
\end{equation}
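These operator identities can be checked explicitly for small $P$; the following sketch (illustrative helper names) builds $N_{\rm b}$ and $b^\dag$ from hard-core pseudo-site operators and compares them with the truncated boson operators for $P=3$:

```python
import numpy as np

P = 3
M = 2 ** P   # boson levels s = 0, ..., M-1

a = np.array([[0.0, 1.0], [0.0, 0.0]])   # hard-core annihilator: a|1> = |0>

def pseudo_site_op(op, j):
    """Embed a 2x2 operator on pseudo-site j (1-based; r_1 is the least
    significant binary digit of s) into the full M-dimensional space."""
    out = np.eye(1)
    for l in range(P, 0, -1):            # pseudo-site 1 varies fastest
        out = np.kron(out, op if l == j else np.eye(2))
    return out

# boson number operator N_b = sum_j 2^(j-1) a^dag_j a_j
N_b = sum(2 ** (j - 1) * pseudo_site_op(a.T @ a, j) for j in range(1, P + 1))
assert np.allclose(N_b, np.diag(np.arange(M)))

# raising operator B^dag = sum_j a^dag_j a_{j-1} ... a_1 (binary increment)
Bdag = np.zeros((M, M))
for j in range(1, P + 1):
    term = pseudo_site_op(a.T, j)
    for l in range(1, j):
        term = term @ pseudo_site_op(a, l)
    Bdag += term

# b^dag = B^dag sqrt(N_b + 1) reproduces the truncated boson operator
bdag = Bdag @ np.diag(np.sqrt(np.arange(M) + 1.0))
assert np.allclose(bdag, np.diag(np.sqrt(np.arange(1, M)), -1))
```

Acting on a state, $B^\dag$ simply increments the binary counter $(r_P,\dots,r_1)$, which is why the representation above is exact.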
Thus one can substitute $P=\log_2(M)$ pseudo-sites for each
boson site in the lattice and rewrite the system Hamiltonian
and other operators
in terms of the pseudo-site operators.
Then the finite system DMRG algorithm
can be used to
calculate the properties of this system of interacting
electrons and hard-core bosons.
The pseudo-site approach outperforms the standard approach
when computations become challenging.
For $M = 32$, the pseudo-site approach is already faster
than the standard approach by two orders of magnitude.
With the pseudo-site method it is possible
to carry out calculations on lattices large enough
to eliminate finite size effects
while keeping enough states per phonon mode
to render the phonon Hilbert space truncation errors negligible.
For instance, this technique provides some of the most accurate
results~\cite{pseudo,Romero}
for the polaron problem in the Holstein model
(see also the paper by Fehske, Alvermann, Hohenadler and Wellein).
It has also been successfully used to study quantum phase transitions
in the 1D half-filled Holstein-Hubbard model
(see ref.~\cite{peierls} and
the separate paper by Fehske and Jeckelmann).
\subsubsection{Optimal phonon basis\label{subsubsec:optimal}}
The number of phonon levels $M$ needed for an accurate treatment
of a phonon mode can be strongly reduced by
choosing a basis which minimizes the error due to the truncation
of the phonon Hilbert space instead of using
the bare phonon basis made of the lowest eigenstates of the operators
$b^\dag_i b^{\phantom{\dag}}_i$.
As with DMRG,
in order to eliminate phonon states without loss
of accuracy, one should transform to the basis of
eigenvectors of the reduced density matrix
and discard states with low probability.
The key difference is that here the subsystem is a single site.
To be specific, consider the translationally invariant
Holstein-Hubbard model~(\ref{holstein}).
A site includes both the phonon levels and the
electron degrees of freedom.
Let $\alpha$ label the four possible electronic states
of a particular
site and let $s$ label the phonon levels of this site.
Let $j$ label the combined states of all of the rest of the
sites.
Then a wave function of the system can be written as
\begin{equation}
|\psi\rangle = \sum_{\alpha,s,j} \psi_{\alpha s,j}
| \alpha,s\rangle | j \rangle .
\label{eq:wfn}
\end{equation}
The density matrix for this site for a given
electronic state
$\alpha$ of the site is
\begin{equation}
\rho^\alpha_{s,r}
= \sum_{j} \psi_{\alpha s,j} \psi_{\alpha r,j}^{*} ,
\label{eq:rho}
\end{equation}
where $r$ also labels the phonon levels of this site.
Let $w_{\alpha k}$ be the eigenvalues and $\phi_{\alpha k}(n)$
the eigenvectors of $\rho$,
where $k$ labels the different eigenstates for a given
electronic state of the site.
The $w_{\alpha k}$ are the probabilities of
the states $\phi_{\alpha k}$
if the system is in the state~(\ref{eq:wfn}).
If $w_{\alpha k}$ is negligible, the
corresponding eigenvector can be discarded from the
basis for the site without affecting the state~(\ref{eq:wfn}).
If one wishes to keep a limited number of states $m$ for a site,
the best states to keep are the eigenstates of the
density matrices~(\ref{eq:rho}) with the largest eigenvalues.
In EP
systems, these $m$ eigenstates form an optimal
phonon basis\index{optimal phonon basis}.
We note that we obtain different optimal phonon states
for each of
the four electron states of the site.
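That discarding low-probability density-matrix eigenstates is optimal can be checked with a minimal numpy sketch (illustrative only: a random state stands in for a true target state, and a single electronic sector is used for brevity). The squared error of the truncated wave function equals exactly the discarded weight.

```python
import numpy as np

rng = np.random.default_rng(0)
M, D = 12, 40                      # phonon levels on the site, environment dimension

# random normalized wave function psi_{s,j} (a single electronic sector, for brevity)
psi = rng.normal(size=(M, D))
psi /= np.linalg.norm(psi)

# site density matrix rho_{s,r} = sum_j psi_{s,j} psi_{r,j}, cf. eq. (rho)
rho = psi @ psi.T
w, phi = np.linalg.eigh(rho)       # eigenvalues in ascending order
w, phi = w[::-1], phi[:, ::-1]     # most probable states first

# keep the m most probable optimal states and project psi onto their span
m = 4
proj = phi[:, :m] @ phi[:, :m].T
psi_trunc = proj @ psi

# the squared error of the truncated state equals the discarded weight
err = np.linalg.norm(psi - psi_trunc) ** 2
discarded = w[m:].sum()
print(err, discarded)              # the two numbers coincide
```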
Unfortunately, in order to obtain the optimal phonon states,
we need the target state~(\ref{eq:wfn}), which we do not know;
indeed, we usually want the optimal states precisely to help us
obtain this state.
This problem can be circumvented in several ways~\cite{optimal}.
Here, we describe one algorithm in conjunction
with an exact diagonalization approach (sect.~\ref{sec:ed})
but it can also be incorporated into a standard DMRG algorithm.
One site of the system (called the big site) has both optimal
states and a few extra phonon levels.
(These $n$ extra levels are taken from a set of $M \gg m$ bare
levels but are explicitly orthogonalized to the current optimal states.)
To be able to perform an exact diagonalization of the system,
each site of the lattice is allowed to have only a small number of
optimal phonon levels, $m \sim 3 - 4$.
This approach is illustrated in fig.~\ref{fig:optimal}.
The ground state of the Hamiltonian is calculated
in this reduced Hilbert space using an exact diagonalization technique.
Then the density matrix (\ref{eq:rho}) of the big site
is diagonalized.
The most probable $m$ eigenstates are new optimal phonon states,
which are used on all other sites for the next diagonalization.
Diagonalizations must be repeated until the optimal states have
converged.
Each time different extra phonon levels are used for
the big site.
They allow improvements of the optimal states by mixing
in the $M$ bare states little by little.
\begin{figure}[t]
\begin{center}
\includegraphics[width=3.2cm]{jeckel_fehske_fig9.eps}
\caption{Schematic representation of the big site algorithm.
Each site (circles) has the electronic degrees of freedom
and three optimal states (wiggly bars).
The big site (second site from the left) has the optimal states
plus two bare levels (straight bars).
\label{fig:optimal}
}
\end{center}
\end{figure}
The improvement coming from using optimal phonon states
instead of bare phonon levels is remarkable.
For the Holstein-Hubbard model\index{Holstein-Hubbard model}
the ground state energy converges very rapidly as a function
of the number of optimal phonon levels~\cite{optimal,optimal2}.
Two or three optimal phonon states per site
can give results as accurate as with a hundred or more bare
phonon states per site.
For intermediate coupling ($\omega_{0}=t$, $g=1.5$, and $U=0$) in the
half-filled band case, the error in the energy is
below 0.1\% using only two optimal levels,
whereas with eleven bare levels it still exceeds 5\%.
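A toy calculation illustrates the gain. The sketch below uses a two-site, one-electron spinless Holstein model (not the half-filled Holstein-Hubbard model quoted above; the parameters and the weight-based comparison are our illustrative choices) and compares the ground-state weight captured by $m=3$ optimal phonon states per electronic sector of one site with the weight captured by the $m$ lowest bare levels.

```python
import numpy as np

t, w0, g = 1.0, 1.0, 1.5
M = 16                                    # bare phonon levels per site

b = np.diag(np.sqrt(np.arange(1.0, M)), 1)   # phonon annihilation operator
nb, x, Ip = b.T @ b, b + b.T, np.eye(M)

# one spinless electron on two sites: basis |1>, |2>
hop = -t * np.array([[0.0, 1.0], [1.0, 0.0]])
n1, n2 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
Ie = np.eye(2)

H = (np.kron(hop, np.kron(Ip, Ip))
     + w0 * (np.kron(Ie, np.kron(nb, Ip)) + np.kron(Ie, np.kron(Ip, nb)))
     + g * w0 * (np.kron(n1, np.kron(x, Ip)) + np.kron(n2, np.kron(Ip, x))))

psi = np.linalg.eigh(H)[1][:, 0].reshape(2, M, M)  # (electron, phonon 1, phonon 2)

m = 3
w_opt = 0.0
for a in range(2):                        # site 1 occupied / empty
    rho = psi[a] @ psi[a].T               # phonon density matrix of site 1, eq. (rho)
    w = np.sort(np.linalg.eigvalsh(rho))[::-1]
    w_opt += w[:m].sum()                  # weight carried by m optimal states
w_bare = (psi[:, :m, :] ** 2).sum()       # weight carried by m lowest bare levels

print(w_opt, w_bare)   # w_opt is very close to 1, w_bare is not
```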
Combined with ED techniques the above algorithm for generating optimal
phonon states can be significantly improved~\cite{optimal3,Weisse}.
First, the use of a different phonon basis for the big site
artificially breaks the system symmetries.
This can be solved by including in the phonon basis all states
that can be generated by symmetry operations
and by summing the density matrices obtained with respect to every site.
Second, the effective phonon Hilbert space is unnecessarily large
if one uses the configuration shown in fig.~\ref{fig:optimal}.
As eigenvalues $w_{\alpha k}$ of the density matrix decrease very
rapidly we can introduce a cut-off for the lattice phonon states,
which is reminiscent of the energy or phonon number cut-off~(\ref{eq:cutoff})
discussed in subsubsect.~\ref{subsubsec:hst}, to further reduce the phonon Hilbert space
dimension without loss of accuracy.
In the Holstein-Hubbard model it is possible to transfer optimal
phonon states from small systems to larger ones because of
the localized nature of the phonon modes.
Therefore, one can first calculate a large optimal phonon basis
in a two-site system and then use it instead of the bare
phonon basis to start calculations on larger lattices.
This is the simplest approach for combining the optimal
phonon basis method with standard DMRG techniques for large
lattices such as the infinite- and finite-system
algorithms~\cite{Friedman}.
The features of the optimal phonon states can sometimes be understood
qualitatively~\cite{optimal,optimal2}.
In the weak-coupling regime optimal states are simply
eigenstates of an oscillator with an equilibrium position
$\langle q \rangle \approx 2g$ as predicted by
a mean-field approximation.
In the strong-coupling regime ($g^2 \omega_0 \gg U,t$)
the most important optimal phonon states
for $n_i=0$ or 2 electrons on the site can be obtained
by applying the unitary Lang-Firsov transformation
$S(g) = e^{-g \sum_i (b_i^\dag - b^{\phantom{\dag}}_i) n_i}$
to the bare phonon ground state,
in agreement with the strong-coupling theory.
The optimal phonon state for a singly occupied site
is not given by the Lang-Firsov transformation but
is approximately the superposition of the optimal phonon states
for empty or doubly occupied sites due to retardation
effects~\cite{optimal}.
In principle, the understanding gained from an analysis
of the optimal phonon states calculated numerically
with our method could be used
to improve the optimized basis used in variational approaches
(see the related paper by Cataudella).
An interesting feature of the optimal basis approach
is that it provides a natural way to dress electrons
locally with phonons~\cite{optimal2}.
This allows us to define creation and annihilation operators
for composite electron-phonon objects like small polarons and bipolarons
and thus to calculate their spectral functions.
However, the dressing by phonons at a finite distance
from the electrons is completely neglected with this method.
\section{Dynamical DMRG \label{sec:dynamical}}
Calculating the dynamics of quantum many-body systems
such as solids has been a long-standing problem of theoretical physics
because many experimental techniques probe the low-energy,
low-temperature dynamical properties of these systems.
For instance, spectroscopy experiments, such as
optical absorption, photoemission, or nuclear magnetic resonance,
can measure dynamical correlations between an external
perturbation and the response of electrons and phonons
in solids~\cite{Kuzmany}.
The DMRG method has proved to be extremely accurate for
calculating the properties of very large
low-dimensional correlated systems and even
allows us to investigate static properties in the thermodynamic limit.
The calculation of high-energy excitations and dynamical
spectra for large systems has proved to be more difficult
and has become possible only recently with the
development of the dynamical DMRG (DDMRG) approach~\cite{ddmrg0,ddmrg}.
Here we first discuss the difficulty in calculating excited
states and dynamical properties within the DMRG approach and
several techniques which have been developed for this
purpose. Then we present the DDMRG method and a
finite-size scaling analysis for dynamical spectra.
In the final section,
we discuss the application of DDMRG
to electron-phonon systems.
\subsection{Calculation of excited states and dynamical properties\label{subsec:dynamics}}
The simplest method for computing excited states within DMRG is the
inclusion of the lowest $R$ eigenstates as targets
instead
of the sole ground state, so that the RG transformation
produces effective Hamiltonians which describe these states
accurately.
As an example, fig.~\ref{fig:polaron} shows the dispersion
of the lowest 32 eigenenergies as a function of the momentum
$\hbar k$
in the one-dimensional Holstein model on a 32-site ring
with one electron (the so-called polaron problem,
see the paper by Fehske, Alvermann, Hohenadler, and Wellein
for a discussion of the polaron physics in that model).
These energies have been calculated
with the pseudo-site DMRG method
using the corresponding 32 eigenstates as targets.
Unfortunately, this approach is limited to a small number $R$ of targets
(of the order of a few tens), which is not sufficient
for calculating a complete excitation spectrum.
\begin{figure}[tb]
\begin{center}
\includegraphics[width=6.0cm]{jeckel_fehske_fig10.eps}
\caption{Dispersion of the lowest 32 eigenenergies in the
one-dimensional Holstein model with one electron on
a 32-site lattice for
$\omega_{0}=t$ and $g=1$ calculated with
DMRG (circles).
The diamonds show a hypothetical polaron dispersion
$\epsilon(k)=E_{\rm p}-2t^*[\cos(k)-1]$, where
$E_{\rm p}=-2.471t$ and the effective hopping
$t^*=0.753t$ have been fitted to
the DMRG dispersion around $k=0$.
The squares show the free-electron dispersion
$\epsilon(k)=-2t\cos(k)$.
}
\label{fig:polaron}
\end{center}
\end{figure}
\subsubsection{Dynamical correlation functions}
The (zero-temperature)
linear response of a quantum system to a time-dependent
perturbation is often given by dynamical correlation functions
(with $\hbar=1$)
\begin{equation}
G_{A}(\omega + i \eta) = - \frac{1}{\pi}
\langle \psi_0| A^{\dag} \frac{1}{E_0+\omega + i \eta - H} A
|\psi_0\rangle ,
\label{dynamCF}
\end{equation}
where $H$ is the time-independent Hamiltonian of the
system, $E_0$ and $|\psi_0 \rangle$ are its ground-state energy and
wavefunction, $A$ is the quantum operator corresponding to the physical
quantity which is analyzed, and $A^{\dag}$ is the Hermitian conjugate
of $A$.
A small real number $\eta$ is used in the calculation to
shift the poles of the correlation function into the complex plane.
As a first example, the real part $\sigma_1(\omega > 0)$
of the optical conductivity
is related to the imaginary part of the dynamical
current-current correlation function, which corresponds
to the operator
$
A = {it} \sum_{j\sigma}
(c^\dag_{j+1\sigma} c^{\phantom{\dag}}_{j\sigma} -
c^\dag_{j\sigma} c^{\phantom{\dag}}_{j+1\sigma})
$
in the above equation for the 1D Hubbard and Holstein-Hubbard
models.
As another example, on a one-dimensional lattice
the angle-resolved photoemission spectrum is related
to the spectral function $A(k,\omega)$ given by the imaginary
part of (\ref{dynamCF}) with the operator
$A = \frac{1}{\sqrt{N}} \sum_{j} e^{ikj} c_{j\sigma}$.
In general, we are interested in the imaginary part
of the correlation function
\begin{equation}
\label{spectral}
I_{A}(\omega + i\eta) = {\rm Im} \ G_{A}(\omega + i \eta)
= \frac{1}{\pi} \langle \psi_0 | A^{\dag}
\frac{\eta}{(E_0+\omega -H)^2 + \eta^2} A |\psi_0 \rangle
\end{equation}
in the $\eta \rightarrow 0$ limit,
$
I_{A}(\omega) = \lim_{\eta \rightarrow 0} I_{A}(\omega + i \eta)
\geq 0 .
$
It should be noted that the spectrum $I_{A}(\omega + i \eta)$ for any
finite $\eta > 0$ is equal to the convolution
of the spectral function $I_{A}(\omega)$
with a Lorentzian distribution of width $\eta$
\begin{equation}
I_{A}(\omega + i \eta) =
\int_{-\infty}^{+\infty} d\omega' I_{A}(\omega')
\frac{1}{\pi}\frac{\eta}{(\omega-\omega')^2+\eta^2}
> 0 .
\label{convolution}
\end{equation}
Let $|n\rangle, n=0,1,2, \dots$ be the complete set of eigenstates
of $H$ with eigenenergies $E_n$ ($|n=0\rangle$ corresponds to
the ground state $|\psi_0\rangle$).
The spectral function~(\ref{spectral}) can be written in the
so-called Lehmann spectral representation
\begin{equation}
\label{lehmann}
I_{A}(\omega + i \eta) =
\frac{1}{\pi} \sum_{n} |\langle n|A|0\rangle|^2
\frac{\eta}{(E_n -E_0-\omega)^2 + \eta^2} ,
\end{equation}
where
$E_n -E_0$ is the excitation energy and $|\langle n|A|0\rangle|^2$
the spectral weight of the $n$-th excited state.
Obviously, only states with a finite spectral weight contribute
to the dynamical correlation function~(\ref{dynamCF})
and play a role in the dynamics of the physical quantity
corresponding to the operator $A$.
In the one-dimensional half-filled Hubbard model
the Hilbert space dimension increases exponentially with the
number of sites $N$ but the number of eigenstates with non-zero
matrix elements $\langle n|A|0\rangle$ increases only as a power-law
of $N$ for the
optical conductivity or the one-electron density of states.
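The equivalence of the resolvent form~(\ref{dynamCF}) and the Lehmann representation~(\ref{lehmann}) is easy to verify numerically; the following sketch uses a random Hermitian matrix as a stand-in Hamiltonian and a random matrix as the operator $A$ (both our illustrative choices).

```python
import numpy as np

rng = np.random.default_rng(1)
d = 60
H = rng.normal(size=(d, d)); H = (H + H.T) / 2    # toy Hermitian Hamiltonian
A = rng.normal(size=(d, d))                       # toy operator A

E, U = np.linalg.eigh(H)
E0, psi0 = E[0], U[:, 0]
Apsi = A @ psi0                                   # |A> = A|psi_0>
omega, eta = 1.3, 0.1
z = E0 + omega + 1j * eta

# resolvent form, eq. (dynamCF)
G = -(Apsi @ np.linalg.solve(z * np.eye(d) - H, Apsi)) / np.pi

# Lehmann representation, eq. (lehmann), for the imaginary part
weights = (U.T @ Apsi) ** 2                       # |<n|A|0>|^2
I_lehmann = (weights * (eta / np.pi) / ((E - E0 - omega) ** 2 + eta ** 2)).sum()

print(G.imag, I_lehmann)    # the two values agree
```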
\subsubsection{Symmetries \label{subsubsec:symmetry}}
The simplest method for calculating specific excited states
uses symmetries of the system.
If symmetry operators are well defined
in every subsystem, DMRG calculations can be carried out
to obtain the $R$ lowest eigenstates in a specific
symmetry subspace.
It is also possible to target simultaneously
the lowest eigenstates in two different subspaces
and to calculate matrix elements $\langle n| A|0 \rangle$
between them.
This allows one to reconstruct the dynamical correlation function
at low energy using eq.~(\ref{lehmann}).
There are several approaches for using
symmetries in DMRG calculations.
From a computational point of view, the best approach is an
explicit implementation of the corresponding conserved quantum numbers
in the program.
This is easily done for so-called additive quantum numbers
such as the particle number or the projection of the total spin
onto an axis.
For instance, if the $z$-projection of the
total spin is conserved
(i.e., the spin operator $S_z$ commutes with the
Hamilton operator $H$), one can
calculate the lowest eigenstates for various quantum numbers $S_z$
to investigate spin excitations~\cite{SteveHuse}.
As another example, if we study a system with $N_{\rm e}$ electrons, we
can compute the ground-state energy $E_0(N')$ for
different number $N'$ of electrons around $N_{\rm e}$
and thus obtain the (charge) gap
$E_{\rm g1} = E_0(N_{\rm e}+1) + E_0(N_{\rm e}-1) - 2 E_0(N_{\rm e})$
in the spectrum of free electronic charge excitations
(see the separate paper by Fehske and Jeckelmann
for an application of this approach).
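The charge-gap construction can be illustrated on a Hubbard dimer, where exact diagonalization within each particle-number sector is trivial (a toy Jordan-Wigner construction of ours, not DMRG; the exact dimer gap $\sqrt{U^2+16t^2}-2t$ serves as a check).

```python
import numpy as np
from functools import reduce

I2, Z = np.eye(2), np.diag([1.0, -1.0])
sm = np.array([[0.0, 1.0], [0.0, 0.0]])            # maps |1> to |0>

def c(i, n=4):
    """Jordan-Wigner annihilation operator for spin-orbital i out of n."""
    return reduce(np.kron, [Z] * i + [sm] + [I2] * (n - i - 1))

t, U = 1.0, 4.0
# spin-orbitals: 0 = (site 1, up), 1 = (site 1, dn), 2 = (site 2, up), 3 = (site 2, dn)
cs = [c(i) for i in range(4)]
ns = [ci.T @ ci for ci in cs]

H = (-t * (cs[0].T @ cs[2] + cs[2].T @ cs[0] + cs[1].T @ cs[3] + cs[3].T @ cs[1])
     + U * (ns[0] @ ns[1] + ns[2] @ ns[3]))

occ = np.diag(sum(ns))                             # total particle number (diagonal)

def E0(Ne):
    idx = np.where(np.isclose(occ, Ne))[0]         # restrict H to the N'-electron sector
    return np.linalg.eigvalsh(H[np.ix_(idx, idx)])[0]

gap = E0(3) + E0(1) - 2 * E0(2)
print(gap)    # equals sqrt(U^2 + 16 t^2) - 2 t for the dimer
```

Because the total particle number is diagonal in the occupation basis, restricting to a sector is a plain index selection; in DMRG the analogous bookkeeping is done with the additive quantum numbers mentioned above.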
The extension of DMRG to non-abelian symmetries
such as the $SU(2)$ spin symmetry of the Hubbard model
is presented in ref.~\cite{Ian}.
A second approach for using symmetries
is the construction of projection matrices
onto invariant subspaces of the Hamiltonian.
They can be used to project the matrix representation
of the superblock Hamiltonian~\cite{Ramasesha}
or the initial wave function of the iterative
algorithm used to diagonalize the superblock Hamiltonian~\cite{Boman}
(so that it converges to eigenstates in the chosen subspace).
This approach has successfully been used
to study optical excitations in one-dimensional
models of conjugated polymers~\cite{Boman,Ramasesha_review}.
A third approach consists in adding an interaction term to the system
Hamiltonian $H$
to shift the eigenstates with the chosen symmetry
to lower energies. For instance, if the total spin $\mathbf{S}^2$
commutes
with the Hamilton operator $H$, one applies the DMRG method
to the Hamiltonian $H^{\prime} = H + \lambda \mathbf{S}^2$ with
$\lambda > 0$ to obtain the lowest singlet eigenstates
without interference from the $\mathbf{S}^2 > 0$
eigenstates~\cite{Stephane}.
Using symmetries is the most efficient and accurate approach
for calculating specific low-lying excited states with DMRG.
However, its application is obviously restricted to those problems
which have relevant symmetries and it
provides only the lowest $R$ eigenstates
for given symmetries, where $R$
is at most a few tens for realistic applications. Thus this approach
can describe neither high-energy excitations nor complex or
continuous dynamical spectra.
\subsubsection{Lanczos-DMRG}
The Lanczos-DMRG approach was introduced by Karen Hallberg in 1995
as a method for studying dynamical properties of lattice quantum
many-body systems~\cite{Hallberg}.
It combines DMRG with the continued-fraction
expansion technique, also called the Lanczos algorithm
(see subsubsect.~\ref{subsubsec:sdm}),
to compute the dynamical correlation function~(\ref{dynamCF}).
Firstly, the Lanczos algorithm is used to calculate the complete
dynamical spectrum of a superblock Hamiltonian.
Secondly, some Lanczos vectors are used as DMRG targets
in an attempt to construct an effective Hamiltonian which
accurately describes the excited states contributing to the correlation function.
Theoretically, one can systematically improve the accuracy by using an
increasingly large
number $L$ of Lanczos vectors as targets.
Unfortunately, the method becomes numerically unstable for large
$L$ as soon as the DMRG truncation error is finite.
Therefore, Lanczos-DMRG is not an exact numerical method
except for a few special cases.
The cause of the numerical instability is the tendency of the Lanczos
algorithm to blow up the DMRG truncation errors.
In practice, only the first few Lanczos vectors
are included as targets.
The accuracy of this type of calculation is unknown
and can be very poor.
From a physical point of view, the failure of
the Lanczos-DMRG approach for complex dynamical spectra can be
understood.
With this method one attempts to construct a single effective
representation of the Hamiltonian $H$ which describes the relevant
excited states for all excitation energies.
This contradicts the essence of a RG calculation, which is
the construction of an effective representation of a system
at a specific energy scale by integrating out the other
energy scales.
Nevertheless, Lanczos-DMRG is a relatively
simple and quick method for calculating dynamical properties
within a DMRG approach and it has already been used in
several works~\cite{Uli_review}.
It gives accurate results for systems slightly larger
than those which can be investigated with exact diagonalization
techniques.
It is also reliable for larger systems with simple discrete spectra
made of a few peaks.
Moreover, Lanczos-DMRG gives accurate results for the first
few moments of a spectral function and
thus provides us with a simple independent check of the spectral
functions calculated with other methods.
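For reference, the continued-fraction machinery itself can be sketched on a toy dense Hamiltonian (our illustrative setup: full reorthogonalization is used, which is affordable only at these sizes; the DMRG truncation that causes the instability discussed above is absent here, so the expansion converges to the exact resolvent).

```python
import numpy as np

rng = np.random.default_rng(2)
d = 40
H = rng.normal(size=(d, d)); H = (H + H.T) / 2
E0 = np.linalg.eigvalsh(H)[0]
A0 = rng.normal(size=d)            # stands for |A> = A|psi_0>, the first Lanczos vector
norm2 = A0 @ A0

def lanczos(H, v, L):
    """Lanczos coefficients a_j, b_j (full reorthogonalization, toy sizes only)."""
    basis = [v / np.linalg.norm(v)]
    a, b = [], []
    for _ in range(L):
        w = H @ basis[-1]
        a.append(basis[-1] @ w)
        for u in basis:            # keep the Krylov basis exactly orthogonal
            w -= (u @ w) * u
        beta = np.linalg.norm(w)
        if beta < 1e-10:
            break
        b.append(beta)
        basis.append(w / beta)
    return a, b

a, b = lanczos(H, A0, L=d)

def G_cf(z):
    """Continued-fraction form of -(1/pi) <A| (z - H)^{-1} |A>."""
    g = 0.0
    for j in reversed(range(len(a))):
        b2 = b[j] ** 2 if j < len(b) else 0.0
        g = 1.0 / (z - a[j] - b2 * g)
    return -norm2 * g / np.pi

z = E0 + 0.7 + 0.05j
G_exact = -(A0 @ np.linalg.solve(z * np.eye(d) - H, A0)) / np.pi
print(G_cf(z), G_exact)            # agree once the Krylov space is exhausted
```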
In the context of EP systems
the Lanczos algorithm has been successfully combined with
the optimal phonon basis DMRG method
(see subsubsect.~\ref{subsubsec:optimal}).
The optical conductivity, single-electron spectral functions
and electron-pair spectral functions have been
calculated for the 1D Holstein-Hubbard model
at various band fillings~\cite{optimal2,Chunli}.
Results for these dynamical quantities
agree qualitatively with results obtained by conventional
exact diagonalizations using powerful parallel
computers (sect.~\ref{sec:ed}).
\subsubsection{Correction vector DMRG}
Using correction vectors to calculate dynamical correlation
functions with DMRG was first proposed by
Ramasesha {\it et al.}~\cite{Pati}.
The correction vector associated with $G_A(\omega + i \eta)$ is defined
by
\begin{equation}
\label{CV}
|\psi_A(\omega + i \eta) \rangle = \frac{1}{E_0+\omega + i \eta - H}
| A \rangle ,
\end{equation}
where $| A \rangle = A | \psi_0 \rangle$ is the first
Lanczos vector.
If the correction vector is known, the dynamical correlation
function can be calculated directly
\begin{equation}
G_A(\omega + i \eta) =
-\frac{1}{\pi} \langle A|\psi_A(\omega + i \eta) \rangle .
\label{dynamCF2}
\end{equation}
To calculate a correction vector, one first solves
an inhomogeneous linear equation
\begin{equation}
\left [ (E_0+\omega-H)^2+\eta^2 \right ] | \psi \rangle
= -\eta | A \rangle ,
\label{CVequation1}
\end{equation}
which always has a unique solution
$| \psi \rangle = | Y_A(\omega + i \eta) \rangle$ for $\eta \neq 0$.
The correction vector is then given by
$
|\psi_A(\omega + i \eta) \rangle = | X_A(\omega + i \eta) \rangle
+ i | Y_A(\omega + i \eta) \rangle
$,
with
\begin{equation}
| X_A(\omega + i \eta) \rangle =
\frac{H-E_0-\omega}{\eta} | Y_A(\omega + i \eta) \rangle .
\label{CVequation2}
\end{equation}
One should note that the states $| X_A(\omega + i \eta) \rangle$
and $| Y_A(\omega + i \eta) \rangle$ are complex if the state
$|A\rangle$ is not real, but they always determine the real part and
imaginary part of the dynamical correlation function
$G_A(\omega + i \eta)$, respectively.
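The decomposition of the correction vector into the solution of eq.~(\ref{CVequation1}) and eq.~(\ref{CVequation2}) can be verified directly on a toy Hamiltonian with a real state $|A\rangle$ (random matrices stand in for $H$ and $A$):

```python
import numpy as np

rng = np.random.default_rng(3)
d = 50
H = rng.normal(size=(d, d)); H = (H + H.T) / 2
E, Umat = np.linalg.eigh(H)
E0, psi0 = E[0], Umat[:, 0]
Avec = rng.normal(size=(d, d)) @ psi0            # |A> = A|psi_0> for a toy operator A
omega, eta = 0.8, 0.1
Id = np.eye(d)

# eq. (CVequation1): [(E0 + omega - H)^2 + eta^2] |Y> = -eta |A>
B = (E0 + omega) * Id - H
Y = np.linalg.solve(B @ B + eta ** 2 * Id, -eta * Avec)
# eq. (CVequation2): |X> = (H - E0 - omega)/eta |Y>
X = -(B @ Y) / eta
cv = X + 1j * Y                                   # correction vector, eq. (CV)

# direct evaluation of eq. (CV) as a complex linear system
cv_direct = np.linalg.solve((E0 + omega + 1j * eta) * Id - H, Avec)
print(np.allclose(cv, cv_direct))                 # True

# eq. (dynamCF2): X and Y give the real and imaginary parts of G_A
G = -(Avec @ cv) / np.pi
print(G.imag >= 0)                                # spectral weight is non-negative
```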
The approach can be extended to higher-order dynamic response
functions such as third-order optical polarizabilities~\cite{Pati}
and to derivatives of dynamical correlation functions~\cite{ddmrg}.
The distinct characteristic of a correction vector approach
is that a specific
quantum state~(\ref{CV}) is constructed to compute
the dynamical correlation function~(\ref{dynamCF}) at each frequency
$\omega$.
To obtain a complete dynamical spectrum, the procedure has to be
repeated for many different frequencies.
Therefore,
this approach is generally less efficient than
the iterative methods presented in sect.~\ref{sec:sp}
in the context of exact diagonalizations.
For DMRG calculations, however, this is a highly favorable
characteristic.
The dynamical correlation function can be determined for
each frequency $\omega$ separately using effective representations
of the system Hamiltonian $H$ and operator $A$ which have to
describe a single energy scale accurately.
K\"{u}hner and White~\cite{Till}
have devised a correction vector DMRG method
which uses this characteristic to
perform accurate calculations
of spectral functions for all frequencies
in large lattice quantum many-body systems.
In their method, two correction vectors with close frequencies
$\omega_1$ and $\omega_2$ and a finite broadening
$\eta \sim \omega_2 -\omega_1 >0$
are included as targets.
The spectrum is then calculated in the frequency interval
$\omega_1 \alt \omega \alt \omega_2$
using eq.~(\ref{dynamCF2}) or the continued-fraction expansion.
The calculation is repeated for several (overlapping)
intervals to determine the spectral function over a large
frequency range.
This procedure
makes the accurate computation of complex or continuous spectra possible.
Nevertheless, there have been relatively few applications of
the correction vector DMRG method~\cite{Uli_review} because it
requires substantial computational resources and is difficult
to use efficiently.
\subsection{Dynamical DMRG method\label{subsec:ddmrg}}
The capability of the correction vector DMRG method to calculate
continuous spectra shows that using specific target states
for each frequency is the right approach.
Nevertheless, it is highly desirable to simplify
this approach and to improve its performance.
A closer analysis shows that the complete problem of
calculating dynamical properties can be
formulated as a minimization problem.
This leads to the definition of a more efficient and simpler
method, the dynamical DMRG (DDMRG)~\cite{ddmrg}.
DDMRG enables accurate calculations
of dynamical properties for all frequencies in large
systems with up to
a few hundred particles using a workstation.
Combined with a proper finite-size-scaling analysis
(subsect.~\ref{subsec:scaling})
it also enables the investigation of spectral functions
in the thermodynamic limit.
Therefore, DDMRG provides a powerful new approach for investigating
the dynamical properties
of quantum many-body systems.
\subsubsection{Variational principle}
In the correction vector DMRG method the most time-consuming task is
the calculation of correction vectors in the superblock
from eq.~(\ref{CVequation1}).
A well-established approach for solving an inhomogeneous linear
equation~(\ref{CVequation1}) is to formulate it as a minimization
problem.
Consider the equation $M \mathbf{x} = \mathbf{a}$, where
$M$ is a positive definite symmetric matrix with a non-degenerate
lowest eigenvalue, $\mathbf{a}$ is a known vector,
and $\mathbf{x}$ is the unknown vector to be calculated.
One can define the function
$W(\mathbf{x})= \mathbf{x} \cdot M \mathbf{x}
- \mathbf{x} \cdot \mathbf{a}
- \mathbf{a} \cdot \mathbf{x}$,
which has a non-degenerate minimum at the vector
$\mathbf{x}_{\rm min} = M^{-1} \mathbf{a}$,
which is the solution of the inhomogeneous linear equation.
(K\"{u}hner and White~\cite{Till} used a conjugate gradient
method to solve this minimization problem.)
Generalizing this idea one can formulate a variational
principle for dynamical correlation functions.
One considers the functional
\begin{equation}
W_{A,\eta}(\omega, \psi) =
\langle \psi | (E_0+\omega-H)^2+\eta^2 | \psi \rangle
+ \eta \langle A | \psi \rangle + \eta \langle \psi | A \rangle .
\label{functional}
\end{equation}
For any $\eta \neq 0$ and a fixed frequency $\omega$ this functional
has a well-defined and non-degenerate minimum
at the quantum state which is the solution of eq.~(\ref{CVequation1}), i.e.
$
| \psi_{{\rm min}} \rangle = | Y_A(\omega + i \eta) \rangle .
$
It is easy to show that the value of the minimum is related
to the imaginary part of the dynamical correlation function
\begin{equation}
W_{A,\eta}(\omega, \psi_{{\rm min}}) =
-\pi\eta I_A(\omega + i \eta).
\label{minimum}
\end{equation}
Therefore, the calculation of spectral functions can be formulated
as a minimization problem.
To determine $I_A(\omega + i \eta)$ at any frequency $\omega$ and for
any $\eta > 0$, one minimizes the corresponding functional
$W_{A,\eta}(\omega, \psi)$.
Once this minimization has been carried out, the real part of
the correlation function $G_A(\omega + i \eta)$ can be calculated using
eqs.~(\ref{dynamCF2}) and~(\ref{CVequation2}).
This is the variational principle for dynamical correlation functions.
It is clear that if we can calculate $| Y_A(\omega + i \eta) \rangle$
exactly, this variational formulation is completely equivalent to a
correction vector approach.
However, if we can only calculate an approximate solution with an error
of the order $\epsilon \ll 1$, $|\psi\rangle = | Y_A(\omega + i \eta)
\rangle + \epsilon |\phi\rangle$ with $\langle \phi|\phi \rangle=1$,
the variational formulation is more accurate.
In the correction vector method the error in the spectrum
$I_A(\omega + i \eta)$ calculated
with eq.~(\ref{dynamCF2}) is also of the order of $\epsilon$.
In the variational approach it is easy to show that the error
in the value of the minimum $W_{A,\eta}(\omega, \psi_{{\rm min}})$,
and thus in $I_A(\omega + i \eta)$, is of the order of $\epsilon^2$.
With both methods the error in the real part of $G_A(\omega + i \eta)$
is of the order of $\epsilon$.
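Both the minimum condition~(\ref{minimum}) and the quadratic error behaviour can be checked numerically (our toy setup: a random Hermitian matrix for $H$ and a real vector standing for $|A\rangle$, so the functional is real):

```python
import numpy as np

rng = np.random.default_rng(4)
d = 50
H = rng.normal(size=(d, d)); H = (H + H.T) / 2
E0 = np.linalg.eigvalsh(H)[0]
Avec = rng.normal(size=d)                  # stands for |A> = A|psi_0>
omega, eta = 0.6, 0.1
Id = np.eye(d)

B = (E0 + omega) * Id - H
Bsq = B @ B + eta ** 2 * Id                # (E0 + omega - H)^2 + eta^2

def W(psi):                                # the functional (functional), real case
    return psi @ Bsq @ psi + 2 * eta * (Avec @ psi)

Y = np.linalg.solve(Bsq, -eta * Avec)      # exact minimizer, eq. (CVequation1)
I_A = (eta / np.pi) * (Avec @ np.linalg.solve(Bsq, Avec))   # eq. (spectral)
print(W(Y), -np.pi * eta * I_A)            # eq. (minimum): the two agree

# an O(eps) error in the state produces only an O(eps^2) error in W
phi = rng.normal(size=d); phi /= np.linalg.norm(phi)
e1 = abs(W(Y + 1e-2 * phi) - W(Y))
e2 = abs(W(Y + 1e-3 * phi) - W(Y))
print(e1 / e2)                             # close to 100, i.e. quadratic in eps
```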
The DMRG procedure used to minimize the energy functional
$E(\psi)$
(see sect.~\ref{sec:dmrg}) can also be used to minimize the
functional $W_{A,\eta}(\omega,\psi)$ and thus to calculate the
dynamical correlation function $G_A(\omega+i\eta)$.
This approach is called the dynamical DMRG method.
In principle, it is equivalent to the correction vector DMRG
method of K\"{u}hner and White~\cite{Till}
because the same target states are used to build the
DMRG basis in both methods.
In practice, however, DDMRG
has the significant advantage over the correction
vector DMRG that errors in $I_A(\omega+i\eta)$
are significantly smaller
(of the order of $\epsilon^2$ instead of $\epsilon$ as
explained above) as soon as DMRG truncation errors are no
longer negligible.
\subsubsection{DDMRG algorithm}
The minimization of the functional $W_{A,\eta}(\omega,\psi)$ is
easily integrated into the usual DMRG algorithm.
At every step of a DMRG sweep through the system lattice,
a superblock representing the system is built and the following
calculations are performed in the superblock subspace:
\begin{enumerate}
\item The energy functional $E(\psi)$ is minimized using a standard
iterative algorithm for the eigenvalue problem. This yields the ground
state vector $|\psi_0\rangle$ and its energy $E_0$ in the superblock
subspace.
\item The state $|A \rangle$ is calculated.
\item The functional $W_{A,\eta}(\omega,\psi)$ is minimized using an
iterative minimization algorithm.
This gives the first part of the correction vector
$|Y_{A}(\omega+ i\eta)\rangle$ and the imaginary part
$I_A(\omega + i \eta)$ of the dynamical correlation function
through eq.~(\ref{minimum}).
\item The second part $|X_{A}(\omega+ i\eta)\rangle$
of the correction vector is calculated using eq.~(\ref{CVequation2}).
\item
The real part of the dynamical correlation function
can be calculated from eq.~(\ref{dynamCF2}).
\item The four states $|\psi_0\rangle$, $|A \rangle$,
$|Y_{A}(\omega+ i\eta)\rangle$, and $|X_{A}(\omega+ i\eta)\rangle$
are included as targets in the density matrix renormalization
to build a new superblock at the next step.
\end{enumerate}
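The structure of the frequency loop can be sketched as follows, with the DMRG part replaced by a plain conjugate-gradient solver acting on the exact Hamiltonian (a toy stand-in of ours; in DDMRG the minimization runs in the truncated superblock basis):

```python
import numpy as np

def cg(Amul, rhs, tol=1e-10, maxit=500):
    """Minimal conjugate gradient for a symmetric positive definite system."""
    xv = np.zeros_like(rhs)
    r = rhs - Amul(xv)
    p = r.copy()
    rs = r @ r
    for _ in range(maxit):
        Ap = Amul(p)
        alpha = rs / (p @ Ap)
        xv += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return xv

rng = np.random.default_rng(5)
d = 60
H = rng.normal(size=(d, d)); H = (H + H.T) / 2
E0 = np.linalg.eigvalsh(H)[0]
Avec = rng.normal(size=d)                        # stands for |A> = A|psi_0>
eta = 0.1

spectrum = []
for omega in np.linspace(0.0, 2.0, 5):           # steps 3.-5. at each frequency
    Bop = (E0 + omega) * np.eye(d) - H
    Amul = lambda v: Bop @ (Bop @ v) + eta ** 2 * v   # (E0+omega-H)^2 + eta^2
    Y = cg(Amul, -eta * Avec)                    # minimize W -> eq. (CVequation1)
    I = -(Avec @ Y) / np.pi                      # eq. (minimum): I_A = -<A|Y>/pi
    spectrum.append(I)
print(spectrum)   # all values are positive
```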
The robust finite-system DMRG algorithm must be used to perform
several sweeps through a lattice of fixed size.
Sweeps are repeated until the procedure has converged to the
minimum of both functionals $E(\psi)$ and $W_{A,\eta}(\omega,\psi)$.
To obtain the dynamical correlation function
$G_{A}(\omega+ i\eta)$ over a range of frequencies, one has to
repeat this calculation for several frequencies $\omega$.
If the DDMRG calculations are performed independently, the computational
effort is roughly proportional to the number of frequencies.
It is also possible to carry out a DDMRG calculation for
several frequencies simultaneously, including several states
$|X_{A}(\omega+ i\eta)\rangle$ and $|Y_{A}(\omega+ i\eta)\rangle$
with different frequencies $\omega$ as target.
As calculations for different frequencies are essentially
independent, it would be easy and very efficient to use
parallel computers.
Because of the variational principle one naively expects that the DDMRG
results for $I_A(\omega +i\eta)$ must converge monotonically from below
to the exact result as the number $m$ of density matrix eigenstates is
increased.
In practice, the convergence is less regular because of two
approximations made to calculate the functional
$W_{A,\eta}(\omega,\psi)$.
First, the ground-state energy $E_0$ and the state $|A\rangle$ used
in the definition~(\ref{functional}) of $W_{A,\eta}(\omega,\psi)$
are not known exactly but calculated with DMRG.
Second, one calculates an effective representation of $H$ only
and assumes that $(H^2)_{\rm eff} \approx (H_{\rm eff})^2$
to compute $W_{A,\eta}(\omega,\psi)$ in the superblock subspace.
These approximations can cause a violation of the variational bound
$W_{A,\eta}(\omega,\psi) \geq -\pi \eta I_A(\omega +i\eta)$.
In practice, for a sufficiently large number $m$ of
density matrix eigenstates kept,
the absolute errors in $I_A(\omega +i \eta)$
decrease systematically with increasing $m$.
Errors become negligible if enough states are kept to
make the discarded weight vanish.
It is also possible to estimate the accuracy of a DDMRG
calculation from the results obtained for different values of $m$.
Therefore, DDMRG is an exact numerical method as defined in
the introduction (sect.~\ref{sec:intro}).
The accuracy of the DDMRG approach for 1D
correlated electron systems has been demonstrated by numerous
comparisons with exact analytical
results~\cite{ddmrg0,ddmrg,Fabian,Eric2,Holger,Holger2,Satoshi}.
As an example, fig.~\ref{fig:ddmrg} shows the optical conductivity
of the 1D half-filled Hubbard model for
$U=3t$. The DDMRG result (calculated using a broadening
$\eta = 0.1t$ on a 128-site lattice) agrees perfectly with
the field-theoretical prediction (also broadened with a
Lorentzian distribution of width $\eta$)~\cite{ddmrg0,Fabian}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=6.0cm]{jeckel_fehske_fig11a.eps}
\includegraphics[width=6.0cm]{jeckel_fehske_fig11b.eps}
\caption{Optical conductivity of the one-dimensional half-filled
Hubbard model for $U=3t$. Left panel: DDMRG result calculated
on a 128-site lattice using a broadening $\eta=0.1t$ (dashed line) and
field-theoretical prediction for the infinite chain~\cite{ddmrg0}
broadened with a Lorentzian of width $\eta=0.1t$ (solid line).
Right panel: the field-theoretical prediction without broadening
(solid line) and the DDMRG result after deconvolution (circles).
}
\label{fig:ddmrg}
\end{center}
\end{figure}
\subsection{Spectrum in the thermodynamic limit\label{subsec:scaling}}
DDMRG allows us to calculate spectral functions of a large but
finite system with a broadening
given by the parameter $\eta > 0$.
To determine the properties of a dynamical spectrum
in the
thermodynamic limit, one has to analyze the scaling of the corresponding
spectra $I_{N, \eta}(\omega)$ as a function of the system size $N$
for vanishing broadening $\eta$
\begin{equation}
I(\omega) = \lim_{\eta \rightarrow 0} \lim_{N \rightarrow \infty}
I_{N, \eta}(\omega) .
\label{inflim}
\end{equation}
Computing both limits in this equation from numerical results
for $I_{N, \eta}(\omega)$ requires a lot of accurate data
for different values of $\eta$ and $N$
and can be the source of large extrapolation errors.
A much better approach is to use a broadening $\eta(N) >0$
which decreases with increasing $N$ and vanishes in the
thermodynamic limit~\cite{ddmrg}.
The dynamical spectrum is then given by
$
I(\omega) = \lim_{N \rightarrow \infty} I_{N, \eta(N)}(\omega) .
$
From the existence of both limits in eq.~(\ref{inflim})
it can be demonstrated that there exists a minimal broadening
$\eta_0(N) \geq 0$,
which converges to zero for $N \rightarrow \infty$,
such that this procedure is exact for all
functions $\eta(N)$ with $\eta(N) > \eta_0(N)$ and
$\lim_{N \rightarrow \infty} \eta(N) = 0$.
The function $\eta_0(N)$ depends naturally on the specific problem
studied and
can also vary for each frequency $\omega$ considered.
For one-dimensional
correlated electron systems such as the Hubbard model~(\ref{hubbard}),
one finds empirically that a sufficient condition is
\begin{equation}
\eta(N) = \frac{c}{N} ,
\label{etacondition}
\end{equation}
where the constant $c$ is comparable to the effective
width of the dynamical spectrum $I(\omega)$,
which is finite in such lattice models.
This condition has a very simple physical interpretation.
The spectral function $I_{N, \eta}(\omega)$ represents the dynamical
response of the system over a time period $\sim 1/\eta$ after
one has started to apply an external force.
Typically, in a lattice model the spectral width is proportional to the
velocity of the excitations involved in the system response.
Thus the condition~(\ref{etacondition}) means that excitations are too
slow to travel the full length $\sim N$ of the system in the time
interval $\sim 1/\eta$ and do not ``sense'' that the system is finite.
An additional benefit of a broadening satisfying the
condition~(\ref{etacondition})
is that the finite-system spectrum $I_{N,\eta}(\omega)$ becomes
indistinguishable from the infinite-system spectrum with the same
broadening $\eta$ for relatively small $N$.
Therefore, if one knows a spectral function
$I(\omega)$ for an infinite system, its convolution with a
Lorentzian of width $\eta$ can be compared directly with the numerical
results for the finite-system spectrum $I_{N,\eta}(\omega)$.
This is the approach used in fig.~\ref{fig:ddmrg} (left panel) to compare
DDMRG results for a finite lattice with the field-theoretical
prediction for an infinite chain.
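As an illustration of this comparison step (not taken from the cited works), the following Python sketch convolves an assumed infinite-system spectrum with a Lorentzian of width $\eta$; the square-root onset above a gap is a toy model chosen only for the example.

```python
import numpy as np

def lorentzian_broaden(omega, spectrum, eta):
    """Convolve I(omega') with a normalized Lorentzian of width eta
    on a uniform frequency grid (discrete approximation of the integral)."""
    domega = omega[1] - omega[0]
    diff = omega[:, None] - omega[None, :]
    kernel = (eta / np.pi) / (diff**2 + eta**2)
    return kernel @ spectrum * domega

# toy "infinite-system" spectrum: square-root onset above a gap
omega = np.linspace(0.0, 10.0, 2001)
gap = 2.0
I_inf = np.where(omega > gap, np.sqrt(np.clip(omega - gap, 0.0, None)), 0.0)
I_broad = lorentzian_broaden(omega, I_inf, eta=0.1)
```

The broadened curve can then be plotted directly on top of finite-size numerical data computed with the same $\eta$.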
Finally, an approximation for an infinite-system (continuous) spectrum
can be obtained by solving
the convolution equation~(\ref{convolution}) numerically
for $I_A(\omega')$
using the numerical DDMRG data for a finite system on
the left-hand side of this equation~\cite{Satoshi}.
Performing such a deconvolution is an ill-conditioned inverse problem,
which can only be solved approximately using some assumptions
on the spectrum properties like its smoothness.
Therefore, the accuracy of deconvolved DDMRG spectra is unknown.
In practice, however, one often obtains accurate results
as shown in fig.~\ref{fig:ddmrg} (right panel),
where a deconvolved DDMRG spectrum
is compared to an exact field-theoretical result.
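A minimal sketch of such a deconvolution, assuming a Tikhonov-type smoothness prior (the regularization weight \texttt{lam} and the Gaussian test spectrum are illustrative choices, not the scheme of the cited work):

```python
import numpy as np

def deconvolve(omega, I_broad, eta, lam=1.0e-3):
    """Recover I(omega) from its Lorentzian-broadened version by solving
    a least-squares problem with a smoothness (second-difference) penalty."""
    n = len(omega)
    d = omega[1] - omega[0]
    diff = omega[:, None] - omega[None, :]
    K = (eta / np.pi) / (diff**2 + eta**2) * d   # discrete convolution matrix
    D = np.diff(np.eye(n), 2, axis=0)            # second-difference operator
    A = K.T @ K + lam * D.T @ D                  # normal equations + prior
    return np.linalg.solve(A, K.T @ I_broad)

# demo: broaden a known spectrum, then recover it
omega = np.linspace(0.0, 10.0, 400)
I_true = np.exp(-0.5 * ((omega - 5.0) / 0.5) ** 2)
d = omega[1] - omega[0]
diff = omega[:, None] - omega[None, :]
K = (0.1 / np.pi) / (diff**2 + 0.1**2) * d
I_rec = deconvolve(omega, K @ I_true, eta=0.1)
```

The smoothness penalty suppresses the high-frequency noise amplification that makes the unregularized inversion ill-conditioned.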
In summary, the dynamical spectrum of an infinite system can be
determined accurately and efficiently from numerical (DDMRG) data for
finite-system spectra using a finite-size scaling analysis with
a size-dependent broadening $\eta(N)$.
\subsection{Application to electron-phonon problems}
The DDMRG algorithm described in the previous section can be applied
to EP systems such as the Holstein-Hubbard model
without modification~\cite{Georg}.
It can naturally be combined with the special DMRG
techniques for systems with bosonic degrees of freedom which
are described in subsect.~\ref{subsec:phonons}.
As for ground-state simulations,
these special techniques substantially reduce the computational
cost for calculating dynamical properties
of EP systems.
Nevertheless, applications of DDMRG to the Holstein-Hubbard model
are significantly more costly than those for comparable purely
electronic systems such as the Hubbard model.
As an illustration, we compare DDMRG and ED-KPM
(subsubsect.~\ref{subsubsec:kpm}) results for the spectral
functions of an eight-site spinless Holstein model in
fig.~\ref{fig:ep}. There is an excellent overall
agreement between both methods. The observable
differences are due to the larger broadening $\eta=0.1t$ used
in the DDMRG simulation. This hides the sharpest details
of the spectrum like the oscillations
which are visible in the KPM spectrum.
\begin{figure}[t]
\begin{center}
\includegraphics[width=8.0cm]{jeckel_fehske_fig12.eps}
\caption{
Spectral functions $A(k,\omega)$
(for electron removal, $\omega < 0$, and
electron injection, $\omega > 0$)
of the spinless Holstein model
at half filling on an 8-site lattice with periodic
boundary conditions.
The system is in the CDW/Peierls insulating phase
($\omega_{0} = 0.1 t$ and $g=4$).
The rapidly oscillating thin lines are the KPM results
while the smooth thick lines are the DDMRG results
with the pseudo-site method.
Note that only $|k| \leq \pi/2$ is shown because
$A(k\pm\pi,\omega) = A(k,-\omega)$.
}
\label{fig:ep}
\end{center}
\end{figure}
A significant difference between both methods is the computational
cost.
These DDMRG calculations took only 150 CPU hours on an Opteron 244
processor (1.8 GHz) and required less than 300 MBytes of memory.
It is thus possible to simulate significantly larger EP
systems than this eight-site lattice with DDMRG while this system
size is the largest one which can be simulated with exact
diagonalization techniques.
Finally, we note that the broadening $\eta=0.1t$ used for
the DDMRG results in fig.~\ref{fig:ep} is one order of magnitude
smaller
than the value used for purely electronic systems of
comparable size (see fig.~\ref{fig:ddmrg}).
It seems that the scaling~(\ref{etacondition}) is not applicable
to EP systems.
Analytical and numerical investigations will be necessary
to determine the function $\eta_0(N)$ for EP systems
before one can investigate their dynamical properties in the
thermodynamic limit using the techniques described
in subsect.~\ref{subsec:scaling}.
\section{Conclusion}
Exact diagonalization and density matrix renormalization group
techniques are powerful and versatile exact numerical approaches
for investigating the properties of electron-phonon
lattice models for strongly correlated (low-dimensional)
materials.
Thanks to recent developments
we can now calculate dynamical quantities
which are directly related to experimental techniques
used in solid-state spectroscopy and
often determine the properties in the
thermodynamic limit
using a finite-size scaling analysis.
\section{Introduction}\label{intro}
The modulations of the X--ray flux in Low--Mass X--ray Binaries
(LMXBs) and their observed range in frequencies, from $\approx 300$~Hz
to $\approx 1200$~Hz (see van der Klis 2000; McClintock \& Remillard
2006; van der Klis, these proceedings, for reviews) strongly suggest
that dynamical time scales in the inner regions of the accretion disks
occurring in these systems are being probed (i.e., at orbits only a
few gravitational radii in circumference).
In Black Hole X--ray Binaries (BHXRBs), the high frequency
Quasi--Periodic Oscillations (QPOs) in the X--ray flux seem to be
fairly stable, exhibiting nearly constant frequencies. Additionally,
in the four galactic microquasars known to show more than one peak,
their frequencies are in a 3:2 ratio, strongly suggesting that a
resonance is responsible for their production (Klu\'{z}niak \&
Abramowicz 2000).
In Neutron Star (NS) systems, the twin peaks observed in the frequency
spectrum drift over a considerable interval (a few hundred Hz, as
mentioned above), yet are linearly correlated throughout this
range. In many sources the peak separation, while not constant, is
consistent with being half the burst oscillation frequency, and in
other sources with the burst frequency directly. In the one observed
instance where burst oscillations, the spin frequency of the pulsar,
and twin QPO peaks in the kHz range have been observed in the same
source (SAX J1808.4-3658), the burst oscillation {\em is} the spin
frequency, and is twice the peak separation, i.e.,
$\nu_{burst}=\nu_{s}=2 \Delta \nu$(QPO) (Wijnands et al. 2003). In
XTE~J1807--294, the peak separation is consistent with the spin
frequency (Linares et al. 2005), and there generally appears to be a
trend whereby the ``slow'' neutron stars have twin peaks with $\Delta
\nu \approx \nu_{s}$ and the ``fast'' ones show $\Delta \nu \approx
\nu_{s}/2$ (with the transition at about 300~Hz).
We have previously suggested (Klu\'{z}niak et al. 2004; Lee,
Abramowicz \& Klu\'{z}niak 2004) that the peculiar behavior of SAX
J1808 can be understood in terms of a nonlinear response of the
accretion disk, when subject to an external perturbation at a fixed
frequency. While it is currently not possible to study the detailed
structure and dynamical behavior of the accretion disk and its modes
of oscillations in full detail, general physical principles can still
be applied, and yield suggestive results. We have proposed that within
the accretion disk, as in other nonlinear systems, a subharmonic
response will appear in the presence of a perturbation, as will higher
harmonics of the perturbation itself. Specifically, a second QPO
frequency is to appear when two possible modes of oscillation are
separated by half the perturbing frequency, and thus couple.
This presentation is devoted to a numerical study of the non--linear
oscillations in such systems, by considering bound fluid tori
representing a portion of the accretion flow around the compact object
(in particular thin tori). We believe strong gravity plays an
important role in their behavior, and consider its effects in a
simplified approach. This is a purely hydrodynamical study, and so the
potentially important effects of the magnetic field have not been
considered for now.
\section{Hydrostatic equilibrium for fluid tori}\label{hydroeq}
Astrophysical fluid bodies often appear in two topologically different
varieties: spheres and tori. In the former, support against
gravitational collapse is mostly provided by pressure from within
(e.g., by photons, degenerate or ideal gas particles, neutrinos), with
a possible centrifugal contribution, while in the latter, it comes
mostly from rotation, with pressure providing corrections from simple
test particle motion. Each of these terms: hydrodynamical and
mechanical in nature, play an important role in the behavior of the
system under external perturbations, and on the propagation of waves
within such systems. As we will consider here the behavior and
possible observational consequences of accretion disks in the
particular context of LMXBs, we will focus mainly on toroidal
configurations. Nevertheless, knowledge gathered from quasi--spherical
systems is relevant, and indeed quite useful for interpreting these
systems.
\subsection{Gravitational potentials}\label{potentials}
At this point we may consider various forms for the gravitational
potential of the central mass, $M$. In the Newtonian regime, obviously
$\Phi= \Phi_{\rm N}=-G M/r$. Since we are interested in applications
to systems containing compact objects such as Neutron Stars and Black
Holes, it is useful to consider pseudo--Newtonian potentials of the
kind proposed originally by Paczy\'{n}ski and Wiita (1980), namely
$\Phi_{\rm PW}=-GM/(r-r_{g})$, where $r_{g}=2GM/c^{2}$ is the
gravitational, or Schwarzschild, radius of the central mass. The
important feature of this class of potentials is that they reproduce
the existence of a marginally stable orbit, and that of capture orbits
for finite values of the orbital angular momentum, $\ell$, both unique
features of General Relativity. In addition to the original form, we
have also used extensively a new pseudo-potential, $\Phi_{\rm
KL}=GM[1-\exp(r_{ms}/r)]/r_{ms}$, where $r_{ms}=3r_{g}$ (Klu\'{z}niak
\& Lee 2002). The main advantage of this expression is related to the
epicyclic frequencies for test particle motion.
In Newtonian gravity, a test particle in circular motion with
frequency $\nu_{\phi}$ around a point mass, $M$, will perform small
epicyclic motions when perturbed. These can be radial or vertical, but
will always occur at frequencies $\nu_{r}=\nu_{z}=\nu_{\phi} = \Omega
/2 \pi$. This is just a re--statement of the well-known fact that the
Newtonian orbits of a point mass are closed curves. In addition, no
matter what value of the angular momentum $\ell$ a particle has, one
can always place it at a certain radius, $r_{\rm circ}=\ell^{2}/GM$,
such that it will be in a circular orbit.
In strong gravity this is no longer the case, as the central well in
the potential is so powerful that two qualitatively new effects
appear. First, capture orbits exist even at finite $\ell$. Second, in
the most simple case of the Schwarzschild metric (static, spherically
symmetric geometry), the three--fold degeneracy between orbital and
epicyclic frequencies is broken, such that $\nu_{r} <
\nu_{z}=\nu_{\phi}$, and $\nu_{r}^{2} < 0$ for $r < r_{\rm ms}$. Radial
oscillations are thus unstable inside $r_{\rm ms}$, and no circular
orbit is possible in that region.
The particular form of $\Phi_{\rm KL}$ is such that the ratio of
radial to vertical epicyclic frequencies as a function of radius,
$\nu_{r}/\nu_{z}$ is exactly equal to the value computed in full
General Relativity in the Schwarzschild metric. Specifically, we have
\begin{equation}
\nu_{z}^{2}=\frac{1}{4 \pi^{2}} \frac{GM}{r^{3}} \exp(r_{\rm ms}/r),
\end{equation}
and
\begin{equation}
\nu_{r}^{2}=\frac{1}{4 \pi^{2}} \left(1-\frac{r_{\rm ms}}{r}\right)
\frac{GM}{r^{3}} \exp(r_{\rm ms}/r).
\end{equation}
Figure~\ref{fig:potentials} shows the effective potential well as a
function of radius for the three forms mentioned here and the same
value of the orbital angular momentum.
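These two frequencies can be evaluated directly; the short Python sketch below does so in units where $G=M=c=1$ (so $r_{g}=2$ and $r_{\rm ms}=6$), a unit choice made only for the example.

```python
import numpy as np

# Kluzniak-Lee potential epicyclic frequencies, with G = M = c = 1
r_g, r_ms = 2.0, 6.0

def nu2_z(r):
    """4 pi^2 nu_z^2: vertical epicyclic (= orbital) frequency squared."""
    return np.exp(r_ms / r) / r**3

def nu2_r(r):
    """4 pi^2 nu_r^2: radial epicyclic frequency squared,
    negative inside the marginally stable orbit r_ms."""
    return (1.0 - r_ms / r) * nu2_z(r)

# the ratio reproduces the Schwarzschild value sqrt(1 - r_ms/r)
ratio_at_12 = np.sqrt(nu2_r(12.0) / nu2_z(12.0))
```

By construction $\nu_{r}/\nu_{z}=\sqrt{1-r_{\rm ms}/r}$, which is exactly the Schwarzschild ratio, and $\nu_{r}^{2}<0$ inside $r_{\rm ms}$.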
\begin{figure}
\resizebox{\hsize}{!}
{\includegraphics[]{phie.eps}}
\caption{Effective potential wells for the Newtonian (N),
Paczy\'{n}ski--Wiita (PW) and Klu\'{z}niak--Lee (KL) formulae given in
the text, in units of $c^{2}/2$. The value of angular momentum is
given in units of $r_{g}c$. Capture orbits are not possible in
Newtonian physics, but they appear qualitatively in the
pseudo--Newtonian potentials, as in General Relativity. The local
minimum in each curve gives the radius of the circular orbit possible
at the corresponding value of $\ell_{0}$.}
\label{fig:potentials}
\end{figure}
\subsection{Fluid equilibrium configurations}\label{fluideq}
The fluid constituting the torus is located in the vicinity of the
mass $M$, which produces the potential $\Phi$. For simplicity we shall
neglect the contribution of the mass density of the fluid, $\rho$, to
the potential, and assume azimuthal symmetry, so that the problem can
be studied in cylindrical coordinates $(r,z)$. The gravitational pull
from the mass $M$ is countered by the fluid pressure gradients and the
centrifugal force, such that
\begin{equation}
\frac{1}{\rho} \nabla P = -\nabla \Phi_{\rm eff},\label{eq:hydroeq}
\end{equation}
where
\begin{equation}
\Phi_{\rm eff}= \Phi - \int \frac{\ell^{2}(r^\prime)}{r^{\prime 3}} dr^\prime
\end{equation}
is the effective potential and $\ell$ is the distribution of specific
angular momentum of the fluid, which depends only on the radial
coordinate (according to von Zeipel's theorem, for a polytropic
equation of state the surfaces of constant angular velocity of a
rotating fluid are cylinders).
Now, equation~(\ref{eq:hydroeq}) can be trivially integrated over $r$
if the fluid obeys a polytropic relation of the kind $P=K
\rho^{\gamma}$, with $K$ a constant and $\gamma$ the adiabatic index,
to give
\begin{equation}
\frac{\gamma}{\gamma-1} \frac{P}{\rho} + \Phi_{\rm eff} + \Phi_{0} = 0.
\end{equation}
The constant of integration $\Phi_{0}$ is related to the size of the
torus. The boundary of the torus is the surface over which the
pressure is zero, and it coincides with a particular equipotential
surface (see Figure~\ref{fig:surfaces}).
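The construction of such a boundary can be sketched numerically. The following Python fragment evaluates the equatorial effective potential for a constant-$\ell$ torus in the Paczy\'{n}ski--Wiita potential with $G=M=c=1$; the value $\ell_{0}=3.8$ (between the marginally stable and marginally bound values) and the filling level are illustrative choices.

```python
import numpy as np

r_g = 2.0   # Schwarzschild radius, with G = M = c = 1

def phi_eff(r, z, ell0):
    """Effective potential for constant specific angular momentum ell0
    in the Paczynski-Wiita potential."""
    R = np.sqrt(r**2 + z**2)
    return -1.0 / (R - r_g) + ell0**2 / (2.0 * r**2)

ell0 = 3.8
r = np.linspace(4.0, 14.0, 2001)
w = phi_eff(r, 0.0, ell0)
r0 = r[np.argmin(w)]                # circle of maximum pressure (torus center)
# filling the well only slightly gives a slender torus around r0:
inside = r[w <= w.min() + 5.0e-4]   # equatorial zero-pressure extent
```

The minimum of $\Phi_{\rm eff}$ on the equator gives the torus center $r_{0}$, and the level set $\Phi_{\rm eff}=\Phi_{0}$ slightly above the minimum gives the zero-pressure boundary.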
\begin{figure}
\resizebox{\hsize}{!}
{\includegraphics[]{thin.eps}}
\resizebox{\hsize}{!}
{\includegraphics[]{fat.eps}}
\caption{Meridional cross section of torus boundaries for a constant
distribution of angular momentum, $\ell$, and different values of
$\Phi_{0}$ in the Paczy\'{n}ski--Wiita potential. In the limit of a
slender torus (top) the meridional cross sections of the equipotential
surfaces become ellipses with axis ratio
$a_{r}/a_{z}=\nu_{r}/\nu_{z}$. In the limit of a thick torus (bottom),
the outer surface becomes spherical, while a narrow funnel develops
along the rotation axis. }
\label{fig:surfaces}
\end{figure}
The fluid within a torus is in hydrostatic equilibrium, with the
pressure gradient balancing gravity vertically, and centrifugal forces
providing the bulk of the radial support against the gravitational
field. The circle of maximum density (or pressure) defines the center
of the torus at $r_{0}$. At this point, the pressure gradient
vanishes, and thus the rotation is exactly Keplerian, as for a free
particle. If the potential well is filled only slightly with gas, the
torus will be slender, and confined to regions close to the equatorial
plane. If the potential well is increasingly filled, the torus will
grow and deform until it overflows the inner Roche lobe and accretion
onto the central mass begins. This Roche lobe is analogous to that
which occurs in binary systems, except in this case it is an effect of
General Relativity (Abramowicz, Calvani \& Nobili 1983), specifically
of the existence of capture orbits for a finite value of angular
momentum (see Figure~\ref{fig:potentials}).
It is easiest to think of the rotation law through the distribution of
angular momentum, and convenient to write it as a power law, with
$\ell(r)=\ell_{0} (r/r_{0})^{\alpha}$. For Keplerian rotation in a
Newtonian potential we have, obviously, $\alpha=1/2$, while a constant
distribution of angular momentum, such as the one used to plot the
curves in Figures~\ref{fig:potentials} and \ref{fig:surfaces}, has
$\alpha=0$. For sub--Keplerian rotation (i.e., $\alpha < 1/2$ in this
case), the torus is bound by a surface of zero pressure, becoming
infinite in the exact Keplerian limit (where one would have
pressureless dust in orbit). The overall shape and size of the torus
is thus determined by the degree to which the potential well is
filled, and by the rotation law.
\section{Numerical Method and Initial Conditions}\label{methodICs}
\subsection{Numerical Method}\label{method}
We have used the SPH (Smooth Particle Hydrodynamics) numerical method
to perform our simulations (Monaghan 1992). This is a Lagrangian
formulation, which interpolates a given function $A(r)$ from its values
$A(r^\prime)$ at a set of fluid elements through a kernel function
$W(r-r^\prime,h)$, using the convolution integral:
\begin{equation}
A(r)=\int A(r^\prime)W(r-r^\prime,h)\,dr^\prime.
\end{equation}
We have implemented the prescriptions of Monaghan \& Lattanzio
(1985) for azimuthal symmetry, using the kernel:
\begin{equation}
W(r,h)= \frac{\sigma}{h^{\nu}} \left\{ \begin{array}{l l}
1-\frac{3}{2}\left(\frac{r}{h}\right)^{2}+\frac{3}{4}\left(\frac{r}{h}\right)^{3}
& \mbox{if $0 \leq \frac{r}{h} \leq 1$;} \\ & \\
\frac{1}{4}\left(2-\frac{r}{h}\right)^{3} &
\mbox{if $1 \leq \frac{r}{h} \leq 2$;} \\ & \\
0 & \mbox{if $2 \leq \frac{r}{h}$.}
\end{array}
\right.
\end{equation}
Here $h$ represents the smoothing length, and is
comparable to the typical separation between fluid elements (it
essentially gives the spatial resolution of the calculation), and $r$
is the radial coordinate. In two dimensions, $\nu=2$ and
$\sigma=10/(7\pi)$. The gas is modeled as an inviscid fluid, and so
the Navier--Stokes equations take the form:
\begin{equation}
\frac{dv_{r}}{dt}=- \frac{1}{\rho}\frac{\partial P}{\partial r}
-\frac{GMr}{R(R-r_{g})^{2}}+r \Omega^{2}
+\left(\frac{dv_{r}}{dt}\right)_{\rm art},
\end{equation}
\begin{equation}
\frac{dv_{z}}{dt}=- \frac{1}{\rho}\frac{\partial P}{\partial z}
-\frac{GMz}{R(R-r_{g})^{2}}+
\left(\frac{dv_{z}}{dt}\right)_{\rm art},
\end{equation}
where $R=\sqrt{r^2+z^2}$ is the distance to the central mass $M$. The
sub--index \emph{art} indicates the artificial viscosity terms, which
are used to account for the presence of shocks and to avoid
interpenetration of the fluid elements.
The energy equation is:
\begin{equation}
\frac{du}{dt}=- \left(\frac{P}{\rho}\right)\nabla \cdot \vec{v}+
\left( T \frac{ds}{dt} \right)_{\rm art},
\end{equation}
where $u$ is the internal energy per unit mass. No external (i.e.,
radiative) losses are assumed, and thus the only change in $u$ arises
from work done. When discretized over a finite number of fluid
elements, often called particles in SPH, the convolution integral
becomes a sum over all elements.
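As a check on the kernel quoted above ($\nu=2$, $\sigma=10/(7\pi)$), the short Python fragment below evaluates it and verifies its normalization $\int W\, d^{2}r = 1$ numerically; the grid resolution is an arbitrary choice.

```python
import numpy as np

def W(rr, h):
    """Two-dimensional cubic-spline SPH kernel with compact support 2h."""
    q = np.asarray(rr) / h
    sigma = 10.0 / (7.0 * np.pi)
    w = np.where(q <= 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q <= 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma / h**2 * w

# normalization check on a polar grid: int_0^{2h} 2 pi r W(r,h) dr = 1
h = 1.0
rr = np.linspace(0.0, 2.0 * h, 4001)
dr = rr[1] - rr[0]
norm = np.sum(2.0 * np.pi * rr * W(rr, h)) * dr
```

The compact support (the kernel vanishes for $r>2h$) is what makes the discretized sums over fluid elements computationally cheap.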
We have used the prescription given by Balsara (1995) for the
artificial viscosity, which reduces the artificial shearing
stress. The explicit forms for the acceleration and the energy
dissipation due to artificial viscosity for the $i$-th SPH fluid element are:
\begin{equation}
\left(\frac{d \vec{v}}{d t}\right)_{i,\rm art}=
- \sum_{j \neq i}m_{j} \Pi_{ij}\nabla_{i}W_{ij},
\end{equation}
and
\begin{equation}
\left(T \frac{ds}{dt}\right)_{i,\rm art}= \frac{1}{2} \sum_{j \neq i}m_{j}
\Pi_{ij}(\vec{v_{i}}-\vec{v_{j}}) \cdot \nabla_{i}W_{ij}.
\end{equation}
where $\Pi_{ij}$ is defined by (see, e.g., Lee \& Ramirez-Ruiz 2002)
\begin{equation}
\Pi_{ij}=\left(\frac{P_{i}}{\rho_i^{2}}+\frac{P_{j}}{\rho_{j}^{2}}\right)
\left(-\alpha_{b}\mu_{ij}+\beta_{b}\mu_{ij}^{2}\right),
\end{equation}
\begin{equation}
\mu_{ij}= \left\{ \begin{array}{l l} \frac{(\vec{v}_{i}-\vec{v}_{j}) \cdot
(\vec{r}_{i}-\vec{r}_{j})}
{h_{ij}\left(|\vec{r}_{i}-\vec{r}_{j}|^{2}/h_{ij}^{2}+\eta^{2}\right)}
\frac{(f_{i}+f_{j})}{2c_{ij}}
& \mbox{if $\vec{v}_{ij} \cdot \vec{r}_{ij} < 0$;}
\\ 0 & \mbox{if $\vec{v}_{ij} \cdot \vec{r}_{ij}
\geq 0$.}
\end{array}
\right.
\end{equation}
The function $f_{i}$ is defined by:
\begin{equation}
f_{i}=\frac{|\nabla \cdot \vec{v}|_{i}}
{|\nabla \cdot \vec{v}|_{i}+|\nabla \times \vec{v}|_{i} + \eta'c_{i}/h_{i}},
\end{equation}
and $\eta=10^{-2}$. The sound speed and smoothing length of
each element are denoted by $c_{i}$ and $h_{i}$ respectively, and the
factor $\eta^{\prime} \simeq 10^{-4}$ in the denominator prevents
numerical divergences. $\alpha_{b}=\beta_{b}=\gamma/2$ are constants of
order unity and $\gamma$ is the polytropic index from the equation of
state. This form of the artificial viscosity suppresses the shearing
stresses when the compression in the fluid is low and the vorticity is
high, $|\nabla \cdot \vec{v}| \ll |\nabla \times \vec{v}|$, but
remains in effect if the compression dominates in the flow $|\nabla
\cdot \vec{v}| \gg |\nabla \times \vec{v}|$.
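This switching behavior of the Balsara factor $f_{i}$ can be illustrated with a trivial evaluation; the numerical values of the divergence and curl below are arbitrary.

```python
# Balsara switch: f ~ 1 in compression-dominated flow, f ~ 0 in pure shear
def balsara_f(div_v, curl_v, c_s, h, eta_p=1.0e-4):
    """f_i = |div v| / (|div v| + |curl v| + eta' c_s / h)."""
    return abs(div_v) / (abs(div_v) + abs(curl_v) + eta_p * c_s / h)

f_shock = balsara_f(div_v=-10.0, curl_v=0.0, c_s=1.0, h=0.1)  # compression
f_shear = balsara_f(div_v=0.0, curl_v=10.0, c_s=1.0, h=0.1)   # pure shear
```

In a strong compression $f\to 1$ and the full artificial viscosity acts, while in a shear-dominated region $f\to 0$ and the spurious shear stress is suppressed.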
\subsection{Initial Conditions and Applied Perturbations}\label{ICspert}
To construct a particular fluid torus, one needs only to specify the
equation of state, the distribution and absolute value of angular
momentum, and the degree to which the potential well is filled
(through the constant $\Phi_{0}$). We restrict ourselves here to
systems in which $\ell(r)=\ell_{0}=$~constant. Thus the actual value of
angular momentum fixes the position of the center of the torus,
$r_{0}$, as defined in \S~\ref{fluideq}. Numerically, this is done by
means of a Monte Carlo technique, by distributing fluid elements over
the prescribed volume according to the analytical density profile, and
subsequently relaxing them with artificial damping included in the
equations of motion. The rotation law is strictly enforced during this
procedure, and the internal energy is fixed by the adiabatic relation
so that one has complete control over the initial condition. We have
verified that our initial conditions are indeed in equilibrium by
evolving them explicitly in time without any applied perturbations. No
global evolution is observed during these computations.
We have applied two fundamentally different types of perturbations to
initial models: impulsive and periodic. In the first, an additional
velocity field is imposed on the initial condition at $t=0$, and the
system is evolved without additional perturbations for many (typically
one hundred) dynamical times. In the second, an additional, periodic
term is added to the equations of motion to produce an
acceleration. This can be applied only in the radial direction, or
vertical, or both.
Finally, one can hold $\ell_{0}$ constant during a calculation, or
vary it slowly (i.e., over many dynamical times). In the first case,
the torus will remain essentially in the same radial region,
oscillating due to the applied perturbation. In the second, in
addition to these oscillations it will either move inward or outward,
depending on whether $\ell_{0}$ decreases or increases. We have
considered this possibility in view of the fact that gas within an
accretion disk will, in reality, drift radially as angular momentum is
transported by viscous stresses.
The temporal profile of the applied forcing can be varied. We have
used single--frequency sinusoids as well as narrow pulse--like
functions with repetition time $T_{s}=1/\nu_{s}$. This can be thought
of as the rotation period of the central mass, which affects the
accretion disk through some unspecified mechanism (e.g., the pulsar
magnetic field, asymmetries in the quadrupole moment or other
effects).
\section{Forced Oscillations}\label{forcedosc}
We will hereafter restrict the presentation to the case of slender
tori, where their radial and vertical extent, $L$, is small compared
to the radial position of their center, i.e., $L \ll r_{0}$. The thin
annulus can then be viewed as a small section of the entire accretion
structure surrounding the compact object. The main difficulty with
this picture, and in relating it to real systems, lies in the fact
that the artificial zero--pressure boundaries obviously make wave
propagation into the exterior of this torus impossible. Mode leaking
is certainly an important issue (Fragile 2005), but for now we address
only closed systems. The dynamical response of thick tori under global
impulsive perturbations has been addressed in a series of papers by
Rezzolla and collaborators (Zanotti et al. 2003; Rezzolla et
al. 2003a,b; Montero et al. 2004; Zanotti et al. 2005), while
global and localized periodic perturbations have been considered by
Rubio--Herrera \& Lee (2005a,b).
As a result of the induced forcing, the torus experiences
small--amplitude oscillations, which can be measured in various
ways. One of these is to simply consider the position of its center,
$(r_{0},z_{0})$, defined, as above, as the location of maximum
density, as a function of time. The corresponding time series are
complicated, as they prominently display the perturbing frequency
$\nu_{s}$ and the local epicyclic frequency for small radial and
vertical oscillations, $\nu_{r}$ and $\nu_{z}$ respectively. There
are, however, combination frequencies and cross--over phenomena, so
that the whole behavior is best analyzed by performing Fourier
transforms of $r_{0}(t)$ and $z_{0}(t)$.
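A toy version of this analysis step, with a synthetic signal containing a driving frequency $\nu_{s}$ and a radial epicyclic frequency $\nu_{r}$ at the values quoted in the text (the amplitudes and sampling setup are arbitrary):

```python
import numpy as np

nu_s, nu_r = 400.0, 500.0          # Hz, as in the example of the text
dt, n = 1.0e-4, 10000              # 1 s of data -> 1 Hz frequency resolution
t = np.arange(n) * dt
r0 = (0.02 * np.sin(2 * np.pi * nu_s * t)
      + 0.01 * np.sin(2 * np.pi * nu_r * t))

power = np.abs(np.fft.rfft(r0)) ** 2
freqs = np.fft.rfftfreq(n, dt)
peaks = freqs[np.argsort(power)[-2:]]   # the two strongest spectral lines
```

The two strongest bins of the power spectrum recover the driving and epicyclic frequencies, exactly as read off from Figures~\ref{fig:fftfixedr}--\ref{fig:fftcm18} in the simulations.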
\subsection{Radial and Vertical Mode Coupling}\label{rzcoupling}
\begin{figure}
\resizebox{\hsize}{!}
{\includegraphics[]{fftfixedr.eps}}
\caption{Fourier transforms of the radial (solid) and vertical
(dashed) oscillations of the center of a slender torus with constant
specific angular momentum when perturbed periodically at frequency
$\nu_{s}=400$~Hz in a pseudo--Newtonian potential. The local values of
the radial and vertical epicyclic frequencies are $\nu_{r}=500$~Hz and
$\nu_{z}=700$~Hz, respectively. Even though the perturbation is purely
radial, vertical oscillations are excited because of pressure
coupling. The power was re--scaled along the vertical axis for
illustrative purposes. In reality the power in vertical motions is
much smaller than in the radial ones.}
\label{fig:fftfixedr}
\end{figure}
\begin{figure}
\resizebox{\hsize}{!}
{\includegraphics[]{fftrcmfullt.eps}}
\resizebox{\hsize}{!}
{\includegraphics[]{fftzcmfullt.eps}}
\caption{Fourier transforms of the radial (top) and vertical (bottom)
oscillations of the center of a radially drifting slender torus when
perturbed periodically at frequency $\nu_{s}=400$~Hz in a
pseudo--Newtonian potential. The outer (initial) and inner (final)
central radii are $r_{0}=6.7 r_{g}$ and $r_{0}=5.35 r_{g}$
respectively.}
\label{fig:fftcmfullt}
\end{figure}
\begin{figure}
\resizebox{\hsize}{!}
{\includegraphics[]{fftrcm18.eps}}
\resizebox{\hsize}{!}
{\includegraphics[]{fftzcm18.eps}}
\caption{Fourier transforms of the radial (top) and vertical (bottom)
oscillations of the center of a radially drifting slender torus when
perturbed periodically at frequency $\nu_{s}=400$~Hz in a
pseudo--Newtonian potential. Only the first and last segments of the
calculation (1/8 of the total) are shown.}
\label{fig:fftcm18}
\end{figure}
For calculations where the angular momentum does not vary in time,
since it is also constant in space, there are no mechanisms for its
transport, and the perturbation does not produce any torques, the
fluid must remain close to its equilibrium radius
$r_{0}$. Figure~\ref{fig:fftfixedr} shows the Fourier transform of
$r_{0}$ and $z_{0}$ for such a calculation, assuming the potential
$\Phi_{\rm KL}$, a central mass $M=1.4$~M$_{\odot}$,
$r_{0}=6.1$~r$_{g}$ and a purely radial, sinusoidal perturbation at
frequency $\nu_{s}=400$~Hz. The corresponding values of the local
radial and vertical epicyclic frequencies are $\nu_{r}=500$~Hz and
$\nu_{z}=700$~Hz. The power spectrum of radial motions clearly shows
the perturbation frequency, $\nu_{s}$, and the radial epicyclic
frequency, $\nu_{r}$. Likewise, the power spectrum of vertical motions
exhibits the vertical epicyclic frequency $\nu_{z}$. In this
particular case, the value of $r_{0}$ and $\nu_{s}$ is such that the
difference between the two epicyclic frequencies is equal to half the
spin frequency, i.e., $\nu_{z}-\nu_{r}=\nu_{s}/2$. There is thus clear
evidence for mode coupling, through pressure, since the perturbation
was initially only applied in the radial direction. Beats between the
various frequencies are also present. In Figure~\ref{fig:fftfixedr},
the power in vertical motions shows peaks at $\nu_{z}-\nu_{s}$,
$\nu_{r}$, and $\nu_{s}+\nu_{r}$. The very mode responsible for the
coupling between the radial and vertical oscillations is weakly
visible at $\nu_{z}-\nu_{r}=200$~Hz.
\begin{figure}
\resizebox{\hsize}{!}
{\includegraphics[]{zpowervsr.eps}}
\caption{Power in vertical motions at the local value of $\nu_{z}$ as
a function of the average central position of the torus, $r_{0}$ for
the calculation with a radially drifting (inwards) perturbed
torus. The calculation was divided into eight segments of equal
duration. During each segment the torus drifted by $0.16 r_{g}$ in the
radial direction (shown by the horizontal bars). The strongest
response occurs at $r_{0}\approx 6.15 r_{g}$, where
$\nu_{z}-\nu_{r}=\nu_{s}/2$. Note how the curve is not symmetrical with
respect to the point of maximum power. See the text for details.}
\label{fig:zpowervsr}
\end{figure}
A calculation with varying angular momentum (see \S~\ref{ICspert})
where the torus drifts inward is even more revealing. Starting at
$r_{0}=6.7 r_{g}$, and terminating at $r_{0}=5.35 r_{g}$ over two
hundred initial orbital periods, the radial and vertical oscillations
of the center of the torus occur at a range of frequencies, covering
values between the extremes in epicyclic frequencies $\nu_{r}$ and
$\nu_{z}$. This choice of parameters implies that, with
$\nu_{s}=400$~Hz, the calculation begins with $\nu_{z}-\nu_{r} <
\nu_{s}/2$ and ends with $\nu_{z} - \nu_{r} > \nu_{s}/2$.
A power spectrum of the radial oscillations of $r_{0}$ during a
complete simulation using a pulse--like perturbation shows several
features (see Figure~\ref{fig:fftcmfullt}). The perturbation at
$\nu_{s}$ is clearly visible, as are its three harmonics (it is not a
pure sine wave, see Figure~3 in Lee, Abramowicz \& Klu\'{z}niak
2004). A broad, flat--topped peak is also apparent, covering the range
$440\,{\rm Hz} < \nu_{r} < 580\,{\rm Hz}$. These are simply the local
values of $\nu_{r}$ at the inner and outer radii, respectively.
Harmonics of the radial epicyclic frequency, as well as a beat with
$\nu_{s}$ are also visible, at greatly reduced power. The
corresponding power spectrum for the oscillations of $z_{0}$ shows a
similar broad peak, for $590\,{\rm Hz} < \nu_{z} < 880\,{\rm Hz}$, as
expected (see Figure~\ref{fig:fftcmfullt}).
To explore the behavior of the oscillations in more detail, we have
split the time series of $r_{0}(t)$ and $z_{0}(t)$ into eight equal
segments, each thus covering one eighth of the simulation. During this
time, the torus is in a narrow radial window, and we may say that its
position is nearly constant (except for the oscillations induced by
the external forcing). We then extract the individual Fourier
transforms for each segment. The results for the radial oscillations
are shown in Figure~\ref{fig:fftcm18}, for the first and last time
segments (they are all qualitatively similar). As expected, each one
shows the perturbation and its harmonics, as well as a narrow peak
centered at the value of $\nu_{r}$ corresponding to the average
position of the circle of maximum pressure during the time
interval. The power spectrum of the corresponding vertical
oscillations is shown in Figure~\ref{fig:fftcm18}. There is a strong
peak at $\nu_{z}$, the amplitude of which varies greatly as the torus
drifts radially. In Figure~\ref{fig:zpowervsr} we show the power in
this peak as a function of the average position of the center of the
torus, $r_{0}$. Two facts are immediately clear. First, the intensity
of the coupling between radial and vertical modes is a function of the
external perturbation. Second, this is a non--linear coupling, because
the interaction is through a sub--harmonic of the perturbation,
$\nu_{s}/2$, and not with the perturbation itself or its higher
harmonics. This fact points clearly to rich non--linear behavior in
these systems, and, we believe, has been directly seen in accretion
disks in LMXBs (Klu\'{z}niak et al. 2004; Lee, Abramowicz \&
Klu\'{z}niak 2004), in particular in SAX J1808.4-3658 (Wijnands et
al. 2003).
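The segment-wise analysis described above (splitting the time series into eight equal pieces and Fourier transforming each) can be sketched as follows. The chirp input, whose frequency drifts linearly across the quoted $440$--$580$~Hz range, is an illustrative stand-in of our own for the simulation output.

```python
import numpy as np

def segment_power_spectra(x, dt, n_seg=8):
    """Split the time series x into n_seg equal segments and return the
    frequency grid plus one power spectrum per segment."""
    seg_len = len(x) // n_seg
    freqs = np.fft.rfftfreq(seg_len, dt)
    spectra = [np.abs(np.fft.rfft(x[i*seg_len:(i+1)*seg_len]))**2
               for i in range(n_seg)]
    return freqs, spectra

# Illustrative input: a chirp whose instantaneous frequency drifts
# linearly from 440 Hz to 580 Hz, so the per-segment peak tracks the
# slow drift while each segment still shows a narrow line.
dt = 1.0 / 8192
t = np.arange(8 * 8192) * dt                          # 8 s of data
phase = 2*np.pi * (440.0*t + 0.5*140.0*t**2/t[-1])    # f(t) = 440 + 140 t/T
freqs, spectra = segment_power_spectra(np.sin(phase), dt)
```

The peak frequency of the first segment then lies near the start of the drift and that of the last segment near its end, mimicking the narrow, slowly moving peaks of Figure~\ref{fig:fftcm18}.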
If the coupling between modes is due to some interaction with the
external perturbation, and this excites a resonance, one would naively
expect the consequences to display resonance--like behavior. In
particular, the resonant window should be narrow, and if the
corresponding condition is not met (e.g., above
$\nu_{z}-\nu_{r}=\nu_{s}/2$), then no coupling should occur. However,
this is not what we observe in the calculation we have just
described. As the torus drifts inward, the power in vertical motions
rises sharply as the resonance is approached, {\em but remains high
even after it has been passed}. The corresponding range in frequencies
for $\nu_{z}$ spans more than 100~Hz. This appears to indicate that
once excited, the modes have locked (Klu\'{z}niak et al. 2004) and
continue to be excited for some time. The curve in
Figure~\ref{fig:zpowervsr} is reminiscent in shape of the amplitudes
of meridional oscillations for slightly perturbed orbits in nearly
geodesic motion, as computed by Abramowicz et al. (2003, see their
Figure~2). In that case a generic coupling between radial and polar
oscillators was assumed. In neutron star sources, the frequencies of
the kHz QPOs drift over large ranges, sometimes hundreds of Hz. Mode
locking following resonant excitation could in principle be
responsible for this behavior.
\subsection{Additional Vertical Modes}\label{morevertical}
Inspection of Figures~\ref{fig:fftfixedr} and \ref{fig:fftcm18}
reveals that the power spectrum of vertical oscillations has a
secondary peak at frequency $\nu_{u} > \nu_{z}$. The power in this
oscillation is variable as the torus drifts radially, but it appears
to be continuously excited. In some instances it seems to increase in
power when $\nu_{z} - \nu_{r} \approx \nu_s/2$, but its presence is
not limited to the corresponding radial window.
Thus it would seem again, in agreement with our previous results, that
a non--linear coupling between the radial and vertical modes transfers
energy to the latter. Further, a second oscillation at frequency
$\nu_{2} \approx \nu_{z} + \nu_{s}/2$ is excited within the same
vertical motions, with power comparable to that at frequency
$\nu_{z}$. Previously, when we examined peaks in the power spectrum at
two different frequencies, they occurred in different variables (e.g.,
radial vs. vertical) and thus it was impossible to directly compare
their amplitudes (as the strength of the real coupling is unknown, and
the applied perturbation is arbitrary). Here we now show for the first
time evidence for similar amplitudes in different modes of the same
general motions.
\section{Conclusions and Directions for Future Work}
The oscillations of fluid confined by gravity and/or centrifugal
forces can lead to rich behavior under a variety of astrophysical
scenarios (we have focused on QPOs in X--ray binaries; the
observations of solar oscillations provide another, much closer
example). Under quite general considerations we have shown that
simple, acoustic modes which have their origin in free particle
oscillations are modified by the hydrodynamics and can couple to one
another in non--linear fashion. The locking of excited modes could
explain the fact that the observed QPO frequencies drift over
considerable ranges while still arising from resonant interactions. We
believe that their signature is present in the observations of
accretion flows in strong gravitational fields, and will allow for the
measurement of important parameters related to the central, compact
object.
Clearly, meaningful constraints will be derived only through much more
detailed work. For example, it is not clear how such oscillations will
actually modify the X--ray lightcurve (see, e.g., Schnittman 2005 for
work along these lines). In addition, the strong gravitational field
needs to be modeled in full General Relativity, with consideration of
the self--gravity of the disk and detailed thermodynamics and
radiation transport.
\acknowledgements
It is a pleasure to thank M.~A. Abramowicz and W. Klu\'{z}niak for a
continuing and enjoyable collaboration. Financial support for this work
was provided in part by CONACyT (36632E) and NORDITA, whose
hospitality is gratefully acknowledged.
\section{Introduction}
Strongly correlated electron systems with orbital degrees of freedom
have been investigated extensively.\cite{ImadaRev,TokuraScience}
In particular, substantial progress in the theoretical
understanding of the Mott transition in multiorbital systems
has been made by dynamical mean-field theory (DMFT)
\cite{Georges,KotliarPT,PruschkeAP,Metzner,Muller}
calculations.
\cite{2band1,2band2,Florens,Kotliar96,Rozenberg97,Bunemann:Gutzwiller,Hasegawa98,Held98,Han98,Momoi98,Klejnberg98,Imai01,Koga,KogaSN,Oudovenko02,Ono03,Tomio04,Pruschke,Sakai,Liebsch,KogaLett,KogaB,Ferrero05,Medici05,Arita05,Knecht05,LiebschFiniteT,Biermann05,Inaba,Ruegg,sces,Song,InabaB}
Among them, the orbital-selective Mott transition (OSMT)\cite{Anisimov}
has been one of the most active topics in this context.
A typical material is the single-layer isovalent ruthenate alloy
Ca$_{2-x}$Sr$_x$RuO$_4$\cite{Nakatsuji,Nakatsuji1,tilting}.
The end-member Sr$_2$RuO$_4$ is a well-known
unconventional superconductor \cite{PT,RMP}, while Ca$_2$RuO$_4$ is a
Mott-insulating $S=1$ antiferromagnet \cite{Ca2RuO41,Ca2RuO42,Ca2RuO43}.
The relevant $4d$-orbitals belong to the $t_{2g}$-subshell.
The planar structure prevents the hybridization between orbitals
which have even ($d_{xy}$) and odd parity ($d_{yz},d_{zx}$) under the
reflection $ z \to -z $.
The complex evolution between these different end-members has
stimulated theoretical investigations \cite{Mazin,Hotta,Fang,Okamoto},
leading, among others, to the proposal of the OSMT: some of
the $d$-orbitals fall into localized states, while the
others provide itinerant electrons. The OSMT scenario
could explain the experimental
observation of a localized spin $S=1/2 $ in the metallic system at $x
\sim 0.5 $ in Ca$_{2-x}$Sr$_x$RuO$_4$, which is difficult
to obtain from the entirely itinerant description
\cite{Anisimov,Fang,SigristTroyer}.
Another example of the OSMT is the compound
$\rm La_{n+1}Ni_nO_{3n+1}$.\cite{LaNiO} It is reported that
the OSMT occurs in the $e_g$-subshell
at the critical temperature $T_c\sim 550$~K, below which
the conduction of electrons in the $3d_{x^2-y^2}$ orbital is disturbed
by the Hund coupling with the localized electrons in the $3d_{3z^2-r^2}$
orbital.\cite{Kobayashi96}
These experimental findings have stimulated theoretical
investigations of the Mott transitions in the multiorbital systems.
\cite{Liebsch,SigristTroyer,Fang,Anisimov,KogaLett,KogaB,Ferrero05,Medici05,Arita05,Knecht05,LiebschFiniteT,Biermann05,sces,Ruegg,Inaba,Tomio04}
In this paper, we give a brief review of our recent studies
\cite{Koga,KogaSN,KogaLett,KogaB,Inaba,InabaB}
on the Mott transitions in the two-orbital Hubbard model by means of DMFT and
the self-energy functional approach (SFA).\cite{SFA}
In particular, we focus on the role of the orbital degrees of freedom
to discuss the stability of the metallic state
at zero and finite temperatures.
The paper is organized as follows.
In \S\ref{sec2}, we introduce the model Hamiltonian for the two-orbital
systems and briefly explain the framework of DMFT and SFA.
We first treat the Hubbard model with the same bandwidths, and
elucidate how enhanced spin and orbital fluctuations affect
the metal-insulator transition in \S \ref{sec3}.
In \S \ref{sec4}, we then consider the system with different bandwidths
to discuss in which conditions the OSMT occurs. We obtain the
phase diagrams at zero and finite temperatures.
Finally in \S \ref{sec5}, the effect of the hybridization between the
two orbitals is investigated to clarify the instability of the intermediate
OSM phase. A brief summary is given in the last section.
\section{Model and Methods}\label{sec2}
\subsection{Two-orbital Hubbard model}
We study the two-orbital Hubbard Hamiltonian,
\begin{eqnarray}
H&=&H_0+H'\\
H_0&=&\sum_{\stackrel{<i,j>}{\alpha\sigma}}
t_{ij}^{(\alpha)} c_{i\alpha\sigma}^\dag c_{j\alpha\sigma}
+V\sum_{i\sigma}\left[c_{i1\sigma}^\dag c_{i2\sigma}+
c_{i2\sigma}^\dag c_{i1\sigma}\right]\\
H'&=&\sum_i H'_i\\
H'_i&=&
U\sum_{\alpha}n_{i\alpha\uparrow}n_{i\alpha\downarrow}
+\sum_{\sigma\sigma'}
\left(U'-J_z\delta_{\sigma\sigma'}\right)n_{i1\sigma}n_{i2\sigma'}
\nonumber\\
&-&J\sum_\sigma c_{i1\sigma}^\dag c_{i1\bar{\sigma}}
c_{i2\bar{\sigma}}^\dag c_{i2\sigma}
-J'\left[ c_{i1\uparrow}^\dag c_{i1\downarrow}^\dag
c_{i2\uparrow} c_{i2\downarrow}+ c_{i2\uparrow}^\dag c_{i2\downarrow}^\dag
c_{i1\uparrow} c_{i1\downarrow} \right]\label{eq:model}
\label{Hamilt}
\end{eqnarray}
where $c_{i\alpha\sigma}^\dag (c_{i\alpha\sigma})$
creates (annihilates) an electron
with spin $\sigma(=\uparrow, \downarrow)$ and orbital
$\alpha(=1, 2)$ at the $i$th site and
$n_{i\alpha\sigma}=c_{i\alpha\sigma}^\dag c_{i\alpha\sigma}$.
$t_{ij}^{(\alpha)}$ is the orbital-dependent nearest-neighbor hopping,
$V$ the hybridization between two orbitals and
$U$ ($U'$) represents the intraorbital (interorbital) Coulomb interaction.
$J$, $J_z$ and $J'$ represent the spin-flip, Ising, and pair-hopping
components of the Hund coupling, respectively. We note that
when the system has the Ising type of anisotropy in the Hund
coupling, $J=J'=0$,
the system at low temperatures should exhibit quite different properties
from the isotropic case.
\cite{sces,Pruschke,Knecht05,Liebsch,Ruegg,LiebschFiniteT,Biermann05,Han98}
In this paper, we deal with the isotropic case and
set the parameters as $J=J_z=J'$, which satisfy
the symmetry requirement in the multi-orbital systems.
It is instructive to note that this generalized model allows us
to study a wide variety of different models
in the same framework. For $V=0$,
the system is reduced to the multi-orbital Hubbard model
with the same $(t_{ij}^{(\alpha)}=t_{ij})$ or distinct hoppings for the two orbitals.
\cite{KogaLett,Tomio04,Ruegg,Liebsch,KogaB,sces,Ferrero05,Medici05,Arita05,Knecht05,LiebschFiniteT,Biermann05,Inaba}
On the other hand, for $t_{ij}^{(2)}=0$, the system is reduced to
a correlated electron system coupled to localized electrons,
such as the periodic Anderson model ($J=0$),
\cite{Coleman,Rice,Yamada,Kuramoto,Kim90}
the Kondo lattice model ($V=0$ and $J<0$)\cite{Tsunetsugu,Assaad}
for heavy-fermion systems, and
the double exchange model
($V=0$ and $J>0$) for some transition metal oxides.
\cite{Zener,Anderson,Kubo,Furukawa}
For general choices of the parameters, various
characteristic properties may show up, which
continuously bridge these limiting cases.
\subsection{Method}
\subsubsection{dynamical mean-field theory}
To investigate the Hubbard model (\ref{Hamilt}),
we make use of DMFT, \cite{Metzner,Muller,Georges,PruschkeAP,KotliarPT}
which has successfully been applied to various electron systems such as
the single-orbital Hubbard model,
\cite{Caffarel,2site,LDMFT,single1,Rozenberg1,OSakai,single2,single3,single4,BullaNRG,OnoED,Nishimoto,Uhrig,Zitzler}
the multiorbital Hubbard model,
\cite{2band1,2band2,Florens,Kotliar96,Rozenberg97,Bunemann:Gutzwiller,Hasegawa98,Held98,Han98,Momoi98,Klejnberg98,Imai01,Oudovenko02,Koga,Ono03,KogaSN,Sakai,Pruschke,Song,InabaB,Liebsch,sces,Knecht05,Biermann05,LiebschFiniteT,Ruegg,KogaLett,KogaB,Tomio04,Ferrero05,Medici05,Arita05,Inaba}
the periodic Anderson model,
\cite{Rozenberg,PAM,Mutou,Saso,Sun,Sato,Ohashi,MediciPAM,SchorkPAM}
the Kondo lattice model,\cite{Matsumoto,OhashiJPC,Schork} etc.
In the framework of DMFT, the lattice model is mapped to
an effective impurity model,
where local electron correlations are taken into account precisely.
The Green function for the original lattice system is then obtained
via self-consistent equations imposed on the impurity problem.
In DMFT for the two-orbital model,
the Green function in the lattice system is given as,
\begin{eqnarray}
{\bf G}\left(k, z\right)^{-1}={\bf G}_0\left(k, z\right)^{-1}
-{\bf \Sigma}\left(z \right),
\end{eqnarray}
with
\begin{equation}
{\bf G}_0\left( k, z\right)^{-1}=\left(
\begin{array}{cc}
z+\mu-\epsilon_1( k) & -V\\
-V & z+\mu-\epsilon_2( k)
\end{array}
\right),
\end{equation}
and
\begin{equation}
{\bf \Sigma}\left(z\right)=\left(
\begin{array}{cc}
\Sigma_{11}(z) & \Sigma_{12}(z) \\
\Sigma_{21}(z) & \Sigma_{22}(z)
\end{array}
\right),
\end{equation}
where $\mu$ is the chemical potential, and $\epsilon_\alpha (k)$
is the bare dispersion relation for the $\alpha$-th orbital.
In terms of the density of states (DOS) $\rho (x)$ rescaled by the
bandwidth $D_\alpha$,
the local Green function is expressed as,
\begin{eqnarray}
G_{11}(z)&=&\int dx \frac{\rho(x)}{\xi_1\left(z,x\right)-
\frac{\displaystyle v(z)^2}{\displaystyle \xi_2\left(z, x\right)}},
\nonumber\\
G_{12}(z)&=&\int dx \frac{\rho(x)v(z)}
{\xi_1\left(z, x\right)\xi_2\left(z, x\right)-v(z)^2},
\nonumber\\
G_{22}(z)&=&\int dx \frac{\rho(x)}{\xi_2\left(z, x\right)-
\frac{\displaystyle v(z)^2}{\displaystyle \xi_1\left(z, x\right)}},
\end{eqnarray}
where
\begin{eqnarray}
\xi_1\left(z, x\right)&=&z+\mu-\Sigma_{11}-D_1 x,\nonumber\\
\xi_2\left(z, x\right)&=&z+\mu-\Sigma_{22}-D_2 x,\nonumber\\
v\left(z\right)&=&V+\Sigma_{12}\left(z\right).
\end{eqnarray}
In the following, we use the semicircular DOS,
$\rho(x)=\frac{2}{\pi}\sqrt{1-x^2},$
which corresponds to the infinite-coordination Bethe lattice.
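For $V=0$ the matrix of local Green functions above is diagonal, and each entry reduces to a single Hilbert transform of the semicircle, for which a closed form is available (a standard Bethe-lattice result, not spelled out in the text). The following sketch, with our own branch convention valid in the upper half plane, checks the numerical integral against that closed form.

```python
import numpy as np

def g_local_numeric(zeta, D=1.0, m=200001):
    """G(zeta) = int_{-1}^{1} dx rho(x)/(zeta - D x) with the rescaled
    semicircular DOS rho(x) = (2/pi) sqrt(1 - x^2), by the trapezoidal
    rule. Here zeta = z + mu - Sigma(z) collects the frequency arguments."""
    x = np.linspace(-1.0, 1.0, m)
    vals = (2.0/np.pi) * np.sqrt(np.clip(1.0 - x*x, 0.0, None)) / (zeta - D*x)
    return 0.5 * (x[1] - x[0]) * np.sum(vals[1:] + vals[:-1])

def g_local_closed(zeta, D=1.0):
    """Closed form 2 (zeta - sqrt(zeta - D) sqrt(zeta + D)) / D^2; the
    factored square root picks the physical branch (G ~ 1/zeta at large
    |zeta|, Im G < 0) for Im(zeta) > 0."""
    return 2.0 * (zeta - np.sqrt(zeta - D) * np.sqrt(zeta + D)) / D**2
```

In a DMFT iteration the numerical form generalizes directly to the $2\times 2$ case with finite $v(z)$, while the closed form provides a cheap consistency check at $V=0$.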
There are various numerical methods
to solve the effective impurity problem.
We note that self-consistent perturbation theories such as
the non-crossing approximation and the iterative perturbation method
are not efficient enough to discuss orbital fluctuations
in the vicinity of the critical point.
In this paper, we use numerical techniques such as
the exact diagonalization (ED) \cite{Caffarel}
and the quantum Monte Carlo (QMC) simulations\cite{Hirsch}
as an impurity solver at zero and finite temperatures.
In this connection, we note that the Hund coupling is a
crucial parameter that should control the nature of
the Mott transition in the multiorbital systems.\cite{sces}
It is thus important to carefully analyze the effect of
the Hund coupling in the framework of QMC.
For this purpose, we use the QMC algorithm proposed
by Sakai et al.,\cite{Sakai}
where the Hund coupling is represented in terms of
discrete auxiliary fields.
When we solve the effective impurity model by means of QMC method,
we use the Trotter time slices $\Delta \tau = (TL)^{-1} \le 1/6$,
where $T$ is the temperature and $L$ is the Trotter number.
\subsubsection{self-energy functional approach}
We also make use of a similar but slightly different method, SFA,\cite{SFA}
to determine the phase diagram at finite temperatures. This SFA, which
is based on the Luttinger-Ward variational method,\cite{Luttinger}
allows us to deal with
finite-temperature properties of the multi-orbital system efficiently,
\cite{Inaba,InabaB}
where standard DMFT with numerical methods
may encounter some difficulties in practical calculations when
the number of orbitals increases.
In SFA, we utilize the fact that the Luttinger-Ward functional
does not depend on
the detail of the Hamiltonian ${\cal H}_0$ as far as the interaction term
${\cal H}^\prime$ is unchanged.\cite{SFA}
This enables us to introduce a proper reference system having
the same interaction term.
One of the simplest reference systems is given
by the following Hamiltonian,
${\cal H}_{\rm ref}=\sum_i{\cal H}_{\rm ref}^{(i)}$,
\begin{eqnarray}
{\cal H}_{\rm ref}^{(i)}&=&\sum_{\alpha \sigma } \left[
e^{(i)}_{0\alpha}
c^\dag_{i\alpha\sigma} c_{i\alpha\sigma}+
e^{(i)}_{\alpha}
a^{(i)\dag}_{\alpha\sigma}a^{(i)}_{\alpha\sigma}+v^{(i)}_{\alpha}
\left(c^\dag_{i\alpha\sigma}a^{(i)}_{\alpha\sigma}+a^{(i)\dag}_{\alpha\sigma}c_{i\alpha\sigma}\right)\right]
+H_i^\prime,\label{eq:ref_model}
\end{eqnarray}
where $a^{(i)\dag}_{\alpha\sigma}(a^{(i)}_{\alpha\sigma})$
creates (annihilates) an electron with spin $\sigma$ and orbital $\alpha$,
which is connected to the $i$th site in the original lattice.
This approximation may be regarded as a finite-temperature extension
of the two-site DMFT\cite{2site}.
In the following, we fix the parameters
$e_{0\alpha}=0$ and $e_{\alpha}=\mu$
to investigate the zero and finite temperature properties at half filling.
We determine the parameters $v_{\alpha}$ variationally so as
to minimize the grand potential,
$\partial\Omega/\partial v_\alpha=0$ $(\alpha=1,2)$,
which gives a proper reference system within the given form
of the Hamiltonian (\ref{eq:ref_model}).
\section{Mott transition in the degenerate two-orbital model}\label{sec3}
We begin with the two-orbital Hubbard model with the same bandwidths
$(t_{ij}^{(\alpha)}=t_{ij})$ at half filling.
\cite{2band1,2band2,Florens,Kotliar96,Rozenberg97,Bunemann:Gutzwiller,Hasegawa98,Held98,Han98,Momoi98,Klejnberg98,Imai01,Oudovenko02,Koga,KogaSN,Ono03,Sakai,Pruschke,Song,InabaB}
In this and next sections, we neglect the hybridization term by putting $V=0$.
We first discuss the Fermi-liquid properties in the metallic phase
when the Coulomb interactions $U$ and $U'$ are varied
independently in the absence of the Hund coupling $(J=0)$.\cite{Koga,KogaSN}
The effect of the Hund coupling will be mentioned
at the end of this section.\cite{InabaB}
\subsection{zero-temperature properties}
To investigate the Mott transition at zero temperature,
we make use of the ED method as an impurity solver,
where the fictitious temperature\cite{Caffarel} $\tilde{\beta}$ allows
us to solve the self-consistent equation
$G_{\rm loc}=G_{\rm imp}$ numerically.
In this paper, we use the fictitious temperature $\tilde{\beta}(\geq 50)$
and the number of sites for the impurity model is set as $N=6$
to converge the DMFT iterations.
We note that a careful scaling analysis for the fictitious temperature and
the number of sites is needed only when the system is near the metal-insulator
transition points.
To discuss the stability of the Fermi liquid state, we define
the quasi-particle weight for $\alpha$th band as
\begin{eqnarray}
Z_\alpha =\left.\left(
1-\frac{ {\rm Im}\, \Sigma_\alpha(\tilde{\omega}_n) }
{ \tilde{\omega}_n}\right)^{-1}\right|_{n=0},
\end{eqnarray}
where $\tilde{\omega}_n[=(2n+1)\pi/\tilde{\beta} ]$ is the Matsubara frequency.
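As a minimal numerical sketch of this definition: the $n=0$ estimate amounts to a one-line function of ${\rm Im}\,\Sigma$ at the lowest Matsubara frequency. The strictly linear toy self-energy used to exercise it below is our own choice.

```python
import numpy as np

def quasiparticle_weight(im_sigma_w0, beta):
    """Z = [1 - Im Sigma(i w_0)/w_0]^{-1} evaluated at the lowest
    fermionic Matsubara frequency w_0 = pi/beta (the n = 0 discrete
    estimate of the slope of Im Sigma at zero frequency)."""
    w0 = np.pi / beta
    return 1.0 / (1.0 - im_sigma_w0 / w0)
```

For a self-energy that is exactly linear in frequency, ${\rm Im}\,\Sigma(i\omega)=(1-1/Z)\,\omega$, the estimator recovers $Z$ exactly; for interacting data it inherits an $O(\tilde{\omega}_0)$ discretization error, which is why large fictitious $\tilde{\beta}$ is used.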
In Fig. \ref{fig:Z1},
the calculated quasi-particle weight is shown
as a function of the interorbital interaction
$U'$ for several different values of intraorbital interaction $U$.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=7cm]{ZZ.eps}
\end{center}
\vskip -4mm
\caption{The quasi-particle weight $Z$ as a function of the interorbital
Coulomb
interaction $U'$. The data are obtained by the ED $(N=6)$ within DMFT.}
\label{fig:Z1}
\end{figure}
For $U=0$, $Z$ decreases monotonically with
increasing $U'$, and a metal-insulator transition occurs around
$U'_c\sim 0.9$. On the other hand,
when $U\neq 0$, there appears nonmonotonic behavior
in $Z$; it once increases with the increase of $U'$,
has the maximum value in the vicinity of $U'\sim U$,
and finally vanishes at the Mott transition point.
It is somewhat unexpected that
the maximum structure appears around $U\sim U'$, which is more enhanced
for larger $U$ and $U'$. We will see below that this
is related to enhanced orbital fluctuations.
By repeating similar calculations systematically, we end up with the
phase diagram at zero temperature (Fig. \ref{fig:phasephase}), where
the contour plot of the quasi-particle weight is shown explicitly.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=7cm]{phaseJ0.eps}
\end{center}
\vskip -4mm
\caption{The zero-temperature phase diagram for $J=0$.
The contour plot of the quasi-particle weight $Z$ is shown:
the bold-dashed line represents the phase boundary of the metal-insulator
transition, which is obtained by estimating
the values of $U$ and $U'$ that give $Z=0$.
The solid circles are the transition points obtained by the
linearized DMFT.\cite{LDMFT}
}
\label{fig:phasephase}
\end{figure}
At $U'=0$, where the system is reduced to the single-orbital Hubbard model,
we find that the intraorbital Coulomb interaction $U$ triggers
the metal-insulator transition at $U_c=2.9$--$3.0$.
This critical value obtained by the ED
of a small system $(N=6)$ is in good agreement with other
numerical calculations such as
the numerical renormalization group $(U_c=2.94)$,\cite{BullaNRG,Zitzler}
the ED $(U_c=2.93)$,\cite{OnoED}
the linearized DMFT $(U_c=3)$,\cite{LDMFT,2site} and
the dynamical density-matrix renormalization group
$(U_c=3.07)$.\cite{Nishimoto,Uhrig}
There are some notable features in the phase diagram.
First, the value of $Z$ is not so sensitive to
$U'$ for a given $U$ ($>U'$), except for the
region $U \sim U'$. In particular, the phase boundary indicated by
the dashed line in the upper side of the figure
is almost flat for the small $U$ region.
The second point is that when $U \sim U'$ the metallic
phase is stabilized up to fairly large Coulomb interactions, and it
becomes unstable once the parameters deviate from the condition $U=U'$.
The latter tendency is more conspicuous in the regime of strong correlations.
\cite{Koga,Bunemann:Gutzwiller}
\subsection{finite-temperature properties}
To observe such characteristic properties around $U = U'$
in more detail, we compute the physical quantities at finite temperatures
by exploiting the QMC method as an impurity solver.\cite{Hirsch}
In Fig. \ref{fig:dosdos}, we show the DOS deduced by
applying the maximum entropy method (MEM) \cite{MEM1,MEM2,MEM3}
to the Monte Carlo data.\cite{KogaSN}
\begin{figure}[htb]
\begin{center}
\includegraphics[width=7cm]{dosdos.eps}
\end{center}
\vskip -4mm
\caption{The DOS for the two-orbital Hubbard model $(U=3.0)$.
The data are for the inverse
temperature $T^{-1}=1, 2, 4, 8$ and $16$ from the top to the bottom.}
\label{fig:dosdos}
\end{figure}
In the case $(U, U')=(3.0, 0.0)$, the system is in the insulating phase
close to the transition point (see Fig.\ref{fig:phasephase}).
Nevertheless we can clearly observe the formation of the Hubbard gap
with the decrease of temperature $T$.
In the presence of $U'$, a sharp quasi-particle peak
develops around
the Fermi level at low temperatures (see the case of
$(U, U')=(3.0, 3.0)$).
This implies that orbital fluctuations induced by $U'$ drive the
system to the metallic phase.
Further increase in the interorbital interaction suppresses
spin fluctuations, leading
the system to another type of the Mott insulator in the
region of $U'>U$.
To characterize the nature of the spin and orbital
fluctuations around the Mott transition,
we investigate the temperature dependence of the
local spin and orbital susceptibilities,
which are defined as,
\begin{eqnarray}
\chi_s&=&\int_0^\beta {\rm d}\tau\langle
\left\{n_\uparrow(0)-n_\downarrow(0)\right\}
\left\{n_\uparrow(\tau)-n_\downarrow(\tau)\right\}\rangle\nonumber\\
\chi_o&=&\int_0^\beta {\rm d}\tau\langle
\left\{n_1(0)-n_2(0)\right\}
\left\{n_1(\tau)-n_2(\tau)\right\}\rangle,
\end{eqnarray}
where $\beta=T^{-1}$, $n_\sigma=\sum_\alpha n_{\alpha,\sigma}$,
$n_\alpha=\sum_\sigma n_{\alpha, \sigma}$ and $\tau$ is imaginary time.
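Once the correlators are measured on a uniform imaginary-time grid, each susceptibility is just a discretized $\tau$-integral. The sketch below shows this step with a toy correlator of our own choosing (a symmetric $\cosh$ profile, for which the integral is known analytically); it is not the QMC estimator itself.

```python
import numpy as np

def local_susceptibility(corr, beta):
    """chi = int_0^beta dtau <O(0) O(tau)> by the trapezoidal rule,
    given corr sampled on a uniform tau grid with corr[0] at tau = 0
    and corr[-1] at tau = beta."""
    dtau = beta / (len(corr) - 1)
    return dtau * (np.sum(corr) - 0.5 * (corr[0] + corr[-1]))
```

For the test correlator $C(\tau)=\cosh[\Delta(\beta/2-\tau)]/\cosh(\Delta\beta/2)$ the exact integral is $(2/\Delta)\tanh(\Delta\beta/2)$, which the quadrature reproduces to the expected $O(\Delta\tau^2)$ accuracy.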
We show the results obtained by QMC simulations
within DMFT in Fig. \ref{fig:chi}.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=7cm]{chiall.eps}
\end{center}
\vskip -4mm
\caption{The local spin and orbital susceptibilities
as a function of the temperature.}
\label{fig:chi}
\end{figure}
Let us first look at Figs. \ref{fig:chi} (a) and (b) for $U'=0$
(equivalent to the single-orbital model). Since
the introduction of the intraorbital interaction $U$ makes
the quasi-particle peak narrower, it increases
the spin susceptibility $\chi_s$ at low temperatures. On
the other hand, the formation of the Hubbard gap
suppresses not only the charge susceptibility
but also the orbital susceptibility $\chi_o$.
As seen from Figs. \ref{fig:chi} (c) and (d),
quite different behavior appears in the susceptibilities,
when the interorbital interaction $U'$ is increased.
Namely, the spin susceptibility is suppressed, while
the orbital susceptibility is enhanced
at low temperatures. This tendency holds
for larger $U'$ beyond the condition $U' \sim U$.
Therefore in the metallic phase close to the Mott insulator
in the region of $U>U' (U<U')$, spin (orbital) fluctuations
are enhanced whereas orbital (spin) fluctuations are suppressed with
the decrease of the temperature.
These analyses clarify
why the metallic phase is particularly stable along the
line $U=U'$. Around this line, spin and orbital fluctuations
are almost equally enhanced, and this subtle balance is efficient
to stabilize the metallic phase.
When the ratio of interactions deviates from this condition, the
system prefers either of the two Mott insulating phases.
\subsection{phase diagram at finite temperatures}
QMC simulations are not powerful enough to determine the phase
diagram at finite temperatures. To overcome this difficulty, we
make use of a complementary method, SFA,\cite{InabaB}
which allows us to discuss the Mott transition at
finite temperatures.
The obtained phase diagram is shown in Fig. \ref{hund},
where not only the case of $J=0$ but also
$J=0.1U$ are studied under the condition $U=U'+2J$.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=7cm]{phase_Hund.eps}
\end{center}
\caption{
The finite temperature phase diagram for the two-orbital Hubbard
model for $J=0$ and $J=0.1U$ under the condition $U=U'+2J$.
}\label{hund}
\end{figure}
It should be noted that the Mott transition at finite temperatures is
of first order, similarly to the single-orbital Hubbard case.\cite{Georges}
The critical value for the transition point, $U_c$, is
determined by comparing the grand potential for each phase.
We find the region in the phase diagram
where the metallic and Mott insulating phases coexist.
Starting from the metallic phase at low temperatures,
the increase of the Coulomb interaction
triggers the Mott transition to the insulating phase
at $U_{c2}$, where we can observe the discontinuity
in the quasi-particle weight $Z$.
On the other hand, the Mott transition occurs at $U_{c1}$
when the interaction decreases.
The phase boundaries $U_c$, $U_{c1}$ and $U_{c2}$ merge to
the critical temperature $T_c$, where the second order transition occurs.
Note that upon introducing the Hund coupling $J$, the phase boundaries
are shifted to the weak-interaction region, and
therefore the metallic state becomes unstable for large $U$.
Also, the coexistence region surrounded by
the first order transitions shrinks as $J$ increases. This tendency
reflects the fact that the metallic state is stabilized by
enhanced orbital fluctuations around $U=U'$: the Hund
coupling suppresses such orbital fluctuations, and
stabilizes the Mott-insulating phase.
Another remarkable point is that the Mott transition becomes
first order even at zero temperature in the presence of
$J$,\cite{Bunemann:Gutzwiller,Ono03,Pruschke}
since the subtle balance realized at $T=0$ in the
case of $U=U'$ is not kept anymore for finite $J$.
It has also been claimed that a second-order transition could be
possible for certain choices of the parameters at $T=0$,\cite{Pruschke}
so that more detailed discussions may be
necessary to draw a definite conclusion on this problem.
\section{Orbital-selective Mott transitions}\label{sec4}
In the previous section, we investigated the two-orbital Hubbard model
with the same bandwidths, and found that the metallic state is
stabilized up to fairly large Coulomb interactions around $U = U'$,
which is shown to be caused by the enhanced spin and orbital fluctuations.
We now wish to see what will happen if we consider
the system with different bandwidths,
which may be important in real materials such as
$\rm Ca_{2-x}Sr_xRuO_4$\cite{Nakatsuji1}
and $\rm La_{n+1}Ni_nO_{3n+1}$.\cite{Kobayashi96,LaNiO}
In the following, we will demonstrate that the enhanced spin and
orbital fluctuations again play a key role in controlling the
nature of the Mott transitions even for the system with different
bandwidths.\cite{KogaLett,KogaB,Inaba}
\subsection{zero-temperature properties}
Let us start with the quasi-particle weight calculated
by DMFT with the ED method \cite{KogaLett} and see the stability
of the metallic phase at zero temperature. Here,
we include the Hund coupling explicitly under the constraint $U=U'+2J$.
The quasi-particle weights obtained with fixed ratios $U'/U$ and $J/U$
are shown in Fig. \ref{fig:Z} for half-filled bands.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=6cm]{Za.eps}
\includegraphics[width=6cm]{Zb.eps}
\end{center}
\caption{The quasi-particle weights $Z_1$ and $Z_2$ at half filling
as a function of $U$ for $D_1=1.0$ and $D_2=2.0$: (a) $U'/U=1.0$ ($J=0$) and
(b) $U'/U=0.5$ ($J/U=0.25$).
Open (closed) circles represent the results for orbital $\alpha=1(2)$
obtained by combining DMFT with the ED method as an impurity solver
for the $N=6$ small cluster.
Solid triangles represent the Mott-transition points
obtained by the two-site DMFT method. Insets show the same plots
for $D_1=1.0$ and $D_2=5.0$.
}
\label{fig:Z}
\end{figure}
We first study the case of $U=U'$ and $J=0$
with bandwidths $D_1=1.0$ and $D_2=2.0$ [Fig. \ref{fig:Z} (a)].
When the Coulomb interaction is switched on, the
quasi-particle weights $Z_1$ and $Z_2$ decrease
from unity in slightly
different manners reflecting the different bandwidths.
A strong reduction in the quasi-particle weight
appears initially for the narrower band.
However, as the system approaches the Mott transition,
the quasi-particle weights merge again, exhibiting very similar
$ U $-dependence, and eventually vanish at
the same critical value. This behavior is explained as follows.
For small interactions,
the quasi-particle weight depends on the effective Coulomb interaction
$U/D_\alpha$, which differs for the two bands of width
$D_{\alpha}$ and gives distinct behavior of $Z_1$ and $Z_2$.
However, in the vicinity of the Mott transition, the
effect of the bare bandwidth is diminished due to the
strong renormalization of the effective quasi-particle bandwidth,
allowing $Z_1$ and $Z_2$ to vanish together
\cite{Gutzwiller,Brinkman}.
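For orientation, the small-$U$ behavior described above can be caricatured with the single-band Brinkman-Rice form $Z=1-(U/U_c)^2$, taking a critical coupling $U_c$ proportional to the bandwidth. The sketch below is our own illustration with an arbitrary proportionality constant, not the actual two-orbital DMFT calculation; in particular, treating the bands independently misses the merging of $Z_1$ and $Z_2$ near the transition.

```python
def z_brinkman_rice(u, u_c):
    """Single-band Brinkman-Rice quasi-particle weight Z = 1 - (U/U_c)^2."""
    return max(0.0, 1.0 - (u / u_c) ** 2)

# Toy critical couplings proportional to the bandwidths D1 = 1.0, D2 = 2.0
# (the proportionality constant 3.0 is arbitrary, for illustration only).
uc1, uc2 = 3.0 * 1.0, 3.0 * 2.0

# At small U the narrower band (smaller U_c) is renormalized more strongly.
z1, z2 = z_brinkman_rice(1.0, uc1), z_brinkman_rice(1.0, uc2)
```

In this naive picture each band would localize at its own $U_c$; the fact that $Z_1$ and $Z_2$ vanish together at $U=U'$ is a genuine correlation effect beyond such a one-band caricature.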
The introduction of a finite Hund coupling $J$ makes $ U \neq U' $,
and causes qualitatively different behavior [Fig. \ref{fig:Z} (b)].
As $ U $ increases with the fixed ratio $ U'/U=0.5 $,
the quasi-particle weights decrease differently and vanish at different
critical points: $U_{c1}\approx 2.6$ for $ Z_1$ and $U_{c2} \approx
3.5 $ for $Z_2 $. We thus have a phase with one
orbital localized and the other itinerant, which may be
referred to as the orbital-selective Mott (OSM) phase. Analogous
behavior is observed for
other choices of the
bandwidths, as long as $J$ is finite [see the
inset of Fig. \ref{fig:Z} (b)].
These results certainly suggest the existence of
the OSMT with $ U_{c2} > U_{c1} $.
We have repeated similar DMFT calculations for various
choices of the parameters to determine
the ground-state phase diagram, which is shown in Fig. \ref{fig:phase}.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=7cm]{phase.eps}
\end{center}
\caption{The zero-temperature phase diagram for two-orbital Hubbard model
for $D_1=1$ and $D_2=2$. In the phase (I) [phase (II)],
both bands are in metallic (insulating) state.
The phase (III) is induced
by the OSMT, where
the metallic state coexists with the Mott insulating state.
Since we consider the ferromagnetic Hund coupling, $J>0$,
the relevant region in the diagram is $U>U'$.
}
\label{fig:phase}
\end{figure}
The phase diagram has some remarkable features.
First, the metallic phase (I) is stabilized up to fairly
large Coulomb interaction $U$ when $U \to U'$ (small $J$).
Here the Mott transitions merge to a single transition.
As mentioned above, this behavior reflects the high symmetry
in the case of
$U=U' $ ($J=0$) with six degenerate two-electron onsite
configurations: four spin configurations with one electron in each orbital
and two spin singlets with both electrons in one of the two orbitals.
Away from the symmetric limit, i.e. $ U > U' $ ($ 2J = U - U' $), orbital
fluctuations are suppressed and the spin
sector is reduced by the Hund coupling to three onsite spin triplet
components as the lowest multiplet for two-electron states.
In this case, we encounter two types of Mott transitions
having different critical points.
In between the two transitions we find the intermediate OSM phase (III)
with one band localized and the other itinerant.
Within our DMFT scheme we have confirmed that various choices
of bandwidths give rise to the qualitatively same structure of the
phase diagram as shown in Fig. \ref{fig:phase} (see also the discussions
in Summary).
\subsection{finite-temperature properties}
To clarify how enhanced orbital fluctuations
around $U=U'$ affect the nature of Mott transitions,
\begin{figure}[htb]
\begin{center}
\includegraphics[width=7cm]{chi-o+.eps}
\end{center}
\caption{ The
orbital susceptibility as a function of $T$ for $V=0$.
Open (solid) symbols represent the results in the case $U=U'$ and $J=0$
($U'/U=3/4$ and $J/U=1/8$) and dashed lines those for the
non-interacting case.
}
\label{fig:chi-o}
\end{figure}
let us study the temperature-dependent
orbital susceptibility, which is shown in Fig. \ref{fig:chi-o}.
Here, we have used the new QMC algorithm
proposed by Sakai {\it et al.}\cite{Sakai}
to correctly take into account the effect of Hund coupling
including the exchange and the pair hopping terms.\cite{KogaB}
We can see several characteristic properties in Fig. \ref{fig:chi-o}.
In the non-interacting system,
the orbital susceptibility increases with decreasing temperature,
and reaches a constant value at zero temperature.
When the interactions are turned on ($U'/U=3/4$ and $J/U=1/8$),
the orbital susceptibility is suppressed
at low temperatures, which implies that electrons
in each band become independent. Eventually
for $U \ge U_{c1} \sim 3$, one of the orbitals is localized,
so that orbital fluctuations are suppressed completely, giving
$\chi_o=0$ at $T=0$.
On the other hand, quite different behavior appears
around $U'=U$. In this case, the
orbital susceptibility increases with the interaction strength
even at low temperatures.
Comparing this result with the phase diagram in
Fig. \ref{fig:phase},
we can say that the enhanced orbital fluctuations are
relevant for stabilizing the metallic phase in the
strong correlation regime. While such behavior has
been observed for models with
two equivalent orbitals (previous section), it appears even in
the systems with
nonequivalent bands.\cite{KogaSN}
\begin{figure}[htb]
\begin{center}
\includegraphics[width=6cm]{chit-s+a.eps}
\includegraphics[width=6cm]{chit-s+b.eps}
\end{center}
\caption{
(a) The effective Curie constant
$\chi_s T$ as a function of $T$
for $U'/U=3/4$ and $J/U=1/8$; (b) the results for $U'=U$ and $J=0$.
}
\label{fig:chit-s}
\end{figure}
In Fig. \ref{fig:chit-s},
the effective Curie constant $\chi_s T$ is shown as a function
of the temperature.
We first look at the case of $U'/U=3/4$ and $J/U=1/8$.
At high temperatures, all the spin configurations are equally
populated, so that the effective Curie constant has the value
$1/2$ for each orbital in our units, giving $\chi_s T\sim 1$.
For weak electron correlations
($U=1$), the system is in the metallic phase,
so that the Pauli paramagnetic behavior
appears, resulting in $\chi_s T \rightarrow 0$
as $T \rightarrow 0$. We can see that the increase of the interactions
enhances the spin susceptibility at low temperatures, as a result of
the progressive trend to localize the electrons.
The effective Curie constant is $\chi_sT=2$ when a free spin is realized
in each orbital, while $\chi_sT=8/3$
when a triplet $S=1$ state is realized due to the Hund coupling.
It is seen that the Curie constant increases beyond these values
with the increase of the interactions ($U=3, 4$).
This means that ferromagnetic correlations due to
the Hund coupling appear here.
For $U'=U$, we can confirm that not only orbital but also
spin fluctuations are enhanced in the
presence of the interactions, see Fig. \ref{fig:chit-s} (b).
Accordingly, both spin and orbital susceptibilities
increase at low temperatures, forming heavy-fermion states
as far as the system is in the metallic phase. Note that
for $U=6$, at which
the system is close to the Mott
transition point, the spin susceptibility is enhanced with the effective
Curie constant $\chi_sT \sim 4/3$ down to very low temperatures
[Fig. \ref{fig:chit-s} (b)].
The value of $4/3$ originates from
the two additional doubly-occupied configurations besides the
four magnetic configurations, which are all degenerate
at the metal-insulator
transition point. Although not clearly seen
in the temperature range shown, the Curie constant $\chi_sT$ should vanish at
zero temperature for $U=U'=6$, since the system is still in
the metallic phase, as seen from Fig. \ref{fig:phase}.
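The Curie constants quoted above follow from simple state counting: in the units used here, $\chi_s T = 4\langle S_z^2\rangle$ averaged over the degenerate on-site multiplet. A minimal sketch of this counting (our own illustration, not part of the original calculation):

```python
from fractions import Fraction

def curie_constant(sz_values):
    """Effective Curie constant chi_s*T = 4*<S_z^2>, averaged over a
    degenerate set of on-site configurations with the given total S_z.
    S_z entries are supplied in units of 1/2 (i.e. 2*S_z) for exact
    arithmetic with Fractions."""
    n = len(sz_values)
    return 4 * sum(Fraction(s, 2) ** 2 for s in sz_values) / n

free_spins   = [2, 0, 0, -2]        # two free spins-1/2: chi_s*T = 2
hund_triplet = [2, 0, -2]           # Hund-coupled S=1 triplet: chi_s*T = 8/3
six_fold     = [2, 0, 0, -2, 0, 0]  # 4 magnetic + 2 doubly occupied: 4/3
one_orbital  = [1, -1, 0, 0]        # 4 states of a single orbital: 1/2
```

This reproduces the values $2$, $8/3$, $4/3$ and the high-temperature contribution of $1/2$ per orbital discussed in the text.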
\begin{figure}[htb]
\begin{center}
\includegraphics[width=7cm]{dos-V=0.eps}
\end{center}
\caption{
Solid (dashed) lines represent the DOS
for the orbital $\alpha=1$ ($\alpha=2$)
when $(D_1, D_2)=(1.0, 2.0)$. The data are for the temperatures $T=2, 1,
1/2$ and $1/6$ from the top to the bottom.
}
\label{fig:dos0}
\end{figure}
To see how the above spin and orbital characteristics affect
one-electron properties, we show
the DOS for each orbital in Fig. \ref{fig:dos0},
which is computed by the MEM.\cite{MEM1,MEM2,MEM3}
When the interactions increase along the line $U'/U=3/4$ and $J/U=1/8$,
the OSMT should occur. Such a tendency indeed
appears at low temperatures in Fig. \ref{fig:dos0}(a).
Although both orbitals are in metallic states down to
low temperatures ($T=1/6$) for $U=1$, the OSMT
seems to occur for $U=2$; one of the bands develops the Mott Hubbard
gap, while the other band still remains metallic.
At first glance, this result seems slightly different from
the ground-state phase diagram
shown in Fig. \ref{fig:phase}, where the system is in
the phase (I) even at $U=2$.
However, this deviation is naturally understood
if we take into account the fact that for $U=2$, the
narrower band is already in a highly correlated
metallic state, so that the sharp quasi-particle peak immediately
disappears as the temperature increases. This explains the behavior
observed in the DOS at $T=1/6$. For $U=3$, both
bands are insulating at $T=1/6$ (the system
is near the boundary between the phases (II) and (III) at
$T=0$).
For $U'=U$, qualitatively different
behavior appears in Fig. \ref{fig:dos0}.
In this case, quasi-particle peaks are developed in
both bands as the interactions increase, and the system
still remains metallic even at $U=U'=3$.
As mentioned above, all these features, which are contrasted
to the case of $U' \neq U$,
are caused by equally enhanced spin and orbital fluctuations
around $U=U'$.
\subsection{phase diagram at finite temperatures}
Having studied the spin and orbital properties,
we now obtain the phase diagram at finite temperatures
in the general case $U\neq U'$ and $J\neq 0$.
Since each Mott transition at zero
temperature is similar to that for the single-orbital Hubbard model,
\cite{Georges} we naturally expect that the transition should be
of first order at finite temperatures
around each critical point \cite{LiebschFiniteT}.
In fact, we find hysteresis in the physical quantities.
For example, we show the entropy per site in Fig. \ref{fig:local}.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=7cm]{ent_hys.eps}
\end{center}
\caption{
The entropy as a function of $U$ at $T=0.002, J=0.25U$.
}
\label{fig:local}
\end{figure}
At $T=0.002$, when $U$ increases,
the metallic state (I) disappears around $U_{c2}\sim 2.36$
where the first-order Mott transition
occurs to the intermediate phase (III), which is accompanied by
the jump in the curve of entropy.
On the other hand, as $U$ decreases, the intermediate phase (III)
is stabilized down to $U_{c1}\sim 2.24$.
The first-order critical point $U_c\sim 2.33$ for $T=0.002$ is estimated
by comparing the grand potential for each phase.\cite{Inaba}
The phase diagram thus obtained by SFA is shown in Fig. \ref{fig:phase_J25}.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=7cm]{phase_J25.eps}
\end{center}
\caption{The finite-temperature phase diagram for $J=0.25U$.
There are two coexistence phases, which correspond to the
triangular-shaped regions: the metallic phase and
the intermediate phase coexist in the left region and
the intermediate phase and insulating phase coexist in the
right region.}
\label{fig:phase_J25}
\end{figure}
It is seen that the two coexistence regions
appear around $U\sim 2.4$ and $U\sim 3.3$.
The phase boundaries $U_{c1}$, $U_{c2}$ and $U_c$ merge
at the critical temperature $T_c$ for each transition.
We note that a similar phase diagram
was obtained by Liebsch by means of DMFT with
the ED method.\cite{LiebschFiniteT} Our SFA treatment
elucidates further interesting
properties such as the crossover behavior among the competing
phases (I), (II) and (III). To see this point more clearly,
we show the detailed results for
the entropy and the specific heat in Fig. \ref{fig:hcap}.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=6cm]{hcapa.eps}
\includegraphics[width=6cm]{hcapb.eps}
\end{center}
\caption{(a) Entropy $S/L$ and (b) Specific heat $C/L$
as a function of $U$ in the crossover region, $J=0.25U$.
}\label{fig:hcap}
\end{figure}
There exists a double-step structure in the curve of the entropy,
which is similar to that found at $T=0$,
where the residual entropy
$S/L=0, \log 2$ and $\log 3$ appears in the phases (I), (III) and (II),
respectively. Such anomalies
are observed more clearly in the specific heat. It is remarkable
that the crossover behavior among three phases is clearly seen
even at high temperatures. Therefore,
the intermediate phase (III) is well defined even at higher
temperatures above the critical temperatures.
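The residual entropies quoted above are simply the logarithms of the ground-multiplet degeneracies per site; as a quick check (our own illustration):

```python
import math

# Ground-state degeneracy g per site in each phase:
#   (I)   metal:          nondegenerate Fermi liquid,  g = 1
#   (III) OSM phase:      one free spin-1/2 per site,  g = 2
#   (II)  Mott insulator: Hund-coupled S = 1 triplet,  g = 3
residual_entropy = {phase: math.log(g)
                    for phase, g in [("I", 1), ("III", 2), ("II", 3)]}
```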
\section{Effect of hybridization between distinct orbitals}\label{sec5}
In the present treatment of the model
with DMFT, the intermediate phase (III) is unstable
against certain perturbations. There are several
mechanisms that can stabilize this phase. A possible mechanism,
which may play an important role in real materials,
is the hybridization between the two distinct orbitals.
The effect of the hybridization is indeed important, e.g.
for the compound $\rm Ca_{2-x}Sr_x Ru O_4$, \cite{Nakatsuji1}
where the hybridization between $\{\alpha, \beta\}$ and $\gamma$ orbitals
is induced by the tilting of RuO$_6$ octahedra in the
region of $\rm Ca$-doping $0.2<x<0.5$\cite{tilting}. An interesting point
is that the hybridization effect could be closely
related to the reported heavy
fermion behavior.\cite{Nakatsuji,Nakatsuji1}
The above interesting aspect naturally motivates us to study the
hybridization effect
between the localized and itinerant electrons in the
intermediate phase (III).
Here, we take into account the effect of hybridization in each phase
to study the instability of the OSMT.\cite{KogaB,Medici05}
We study the general case with
$U'\neq U$ and $J\neq 0$ in the presence of the hybridization
$V$. In Fig. \ref{fig:dos-V}, we show the
DOS calculated by QMC and MEM for several different values of $V$.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=7cm]{dos-V.eps}
\end{center}
\caption{Solid (dashed) lines represent the DOS
for the orbital $\alpha=1$ ($\alpha=2$)
when $(D_1, D_2)=(1.0, 2.0)$ at $T=1/6$
with the fixed parameters of $U'/U=3/4$ and $J/U=1/8$.
The data are plotted for $V=0.0, 0.25, 0.5, 0.75, 1.0, 1.25$
and $1.5$ from top to bottom.
}
\label{fig:dos-V}
\end{figure}
We begin with the case of weak interaction, $U=1$, where
the metallic states are realized in both orbitals at $V=0$.
Although the introduction of small $V$ does not change
the ground state properties, further increase in $V$ splits
the DOS, signaling the formation of the
band insulator, where all excitations are gapped.
On the other hand, different behavior appears when the
interactions are increased up to $U=2$ and 3. In these cases,
the system at $V=0$ is in the intermediate
or Mott-insulating phase at $T=1/6$. It is seen that
the DOS around the Fermi level increases
as $V$ increases. At $U=2$, the intermediate state
is first changed to the metallic state, where the quasi-particle
peaks emerge in both orbitals ($V=0.75,1.0$).
For fairly large $V$, the system falls into the
renormalized band insulator ($V=1.5$).
In the case of $U=3$, the hybridization first drives the Mott-insulating
state to the intermediate one, as seen at $V=0.75$, which
is followed by two successive transitions.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=7cm]{chiall-v.eps}
\end{center}
\caption{The charge, spin and orbital susceptibilities
as a function of $V$ at $T=1/6$.
}
\label{fig:chiall-v}
\end{figure}
These characteristic properties are also observed in the
charge, spin and orbital susceptibilities
at low temperatures (Fig. \ref{fig:chiall-v}).
For weak interactions ($U=1$), the charge
susceptibility $\chi_c$ monotonically decreases with the increase of $V$.
When electron correlations become strong,
nonmonotonic behavior appears in $\chi_c$:
the charge fluctuations, which are suppressed at $V=0$,
are somewhat recovered by the hybridization.
For large $V$, $\chi_c$ is suppressed again
since the system becomes a band insulator.
It is seen that the orbital susceptibility $\chi_o$ shows
nonmonotonic behavior similar to the charge susceptibility,
the origin of which is essentially the
same as in $\chi_c$. In contrast, the spin susceptibility
decreases with the increase of $V$ irrespective of
the strength of the interactions. As is the case for
$V=0$, the effective spin is enhanced by ferromagnetic
fluctuations due to the Hund coupling in the insulating
and intermediate phases. When the hybridization is introduced in
these phases, the ferromagnetic fluctuations are
suppressed, resulting in the monotonic decrease of the
effective Curie constant.
We can thus say that the introduction of
appropriate hybridization gives rise to
heavy-fermion metallic behavior.
Such a tendency is observed more clearly in an
extreme choice of the bandwidths, $(D_1, D_2)=(1.0, 10.0)$,
as shown in Fig. \ref{fig:ex}.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=6.3cm]{exa.eps}
\includegraphics[width=6cm]{exb.eps}
\end{center}
\caption{(a) Effective Curie constant as a function of $T$
and (b) DOS in the narrower band $(\alpha=1)$
at $T=1/4$ for
an extreme choice of the bandwidths, $(D_1,D_2)=(1.0, 10.0)$.
The DOS for the wider band is not shown here.
The other parameters are $U=4.0, U'=3.0$ and $J=0.5$.
}
\label{fig:ex}
\end{figure}
Since the system is in the intermediate phase at $V=0.0$,
the narrower band shows localized-electron properties [Fig. \ref{fig:ex} (b)]
in the background of
the nearly free bands. This double structure in the DOS yields
two peaks in the temperature-dependent Curie constant,
as shown in Fig. \ref{fig:ex} (a). The localized state
plays the role of the
$f$-state in the Anderson lattice model,
\cite{Kusunose} so that heavy-fermion quasi-particles appear
around the Fermi level for finite $V$, which are essentially the
same as those observed in Fig. \ref{fig:dos-V}.
Finally, we make some comments on the phase diagram at $T=0$.
Although we have not yet studied low-temperature properties
in the presence of $V$, we can give some qualitative arguments on the
phase diagram expected at zero temperature.
As shown above, the metallic phase (I) is not so sensitive to
$V$ in the weak-$V$ regime. This is also the case for the
insulating phase (II),
where a triplet state ($S=1$) formed by the Hund coupling
is stable against a weak hybridization.
There appears a subtle situation
in the intermediate phase (III).
The intermediate phase exhibits Kondo-like heavy fermion
behavior at low temperatures in the presence of $V$.
However, we are now dealing with
the half-filled case, so that this Kondo-like metallic phase
should acquire a Kondo-insulating gap due to commensurability
at zero temperature. Therefore, the intermediate
phase (III) should be changed into the Kondo-insulator with a tiny
excitation gap in the presence of $V$ at $T=0$.
Accordingly,
the sharp transition between the phases (II) and (III) at $V=0$
still remains in the weakly hybridized case. \cite{Medici05}
This is different from the situation in the periodic Anderson model
with the Coulomb interactions for conduction electrons
as well as localized $f$ electrons,
where the spin-singlet insulating phase is always realized
in its parameter range.\cite{Sato,SchorkPAM}
Consequently we end up with the schematic phase diagram
(Fig. \ref{fig:phase+}) for
the two-orbital model with the hybridization between the
distinct orbitals.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=7cm]{schematic+.eps}
\end{center}
\caption{
The schematic phase diagram for the model
with finite $V$ at $T=0$.
Solid lines represent the phase boundaries between the metallic and
insulating phases.
Dashed line indicates the phase boundary
between the Mott insulator and
the Kondo insulator.
}
\label{fig:phase+}
\end{figure}
Recall that the phase diagram has three regions on the line $V=0$:
(I) metallic, (II) insulating and
(III) intermediate phases.
The metallic phase (I) for small $U$ is simply driven to
the band-insulator (IV) beyond a certain critical value of $V$.
The intermediate phase (III) at $V=0$ is
changed to the Kondo-insulator (V)
in the presence of any finite $V$. As $V$ increases, this insulating state
first undergoes a phase transition to the metallic phase (I), and
then enters the band-insulator (IV). On the other hand,
the Mott insulating phase (II) first shows a transition to
the Kondo insulator (V), which is further driven to the metallic phase (I)
and then to the band-insulating phase (IV).
Note that at finite temperatures above the Kondo-insulating gap,
we can observe Kondo-type heavy
fermion behavior in the intermediate phase with finite $V$.
\section{Summary}
We have investigated the Mott transitions in the two-orbital Hubbard model
by means of DMFT and SFA.
It has been clarified that orbital fluctuations enhanced in the special
condition $U \sim U'$ and $J \sim 0$ play a key role in
stabilizing the correlated metallic state. In particular, this
characteristic property gives rise to nontrivial effects on the
Mott transition, which indeed control whether the OSMT is realized
in the two-orbital model with different bandwidths.
We have demonstrated the above facts by taking
the system with the different bandwidths
$(D_1, D_2)=(1.0, 2.0)$ as an example.
In the special case with $U=U'$ and $J=0$,
the metallic state is stabilized up to fairly large interactions,
resulting in a single Mott transition. The resulting single transition
is nontrivial, since we are concerned with the system
with different bandwidths. On the other hand, for
more general cases with $U\neq U'$ and $J\neq 0$,
the Hund coupling suppresses orbital fluctuations,
giving rise to the OSMT. We have confirmed these results by
computing various quantities at zero and finite
temperatures.
Recently, it was reported that
when the ratio of the bandwidths is quite large,
the OSMT occurs even in the case of $U=U'$ and $J=0$.\cite{Medici05,Ferrero05}
This result implies that the detailed structure of the phase diagram in
some extreme cases depends on the parameters chosen, and is
not completely categorized in the scheme discussed in this paper.
This naturally motivates us to establish the detailed phase diagrams
at zero and finite temperatures by incorporating various effects such as
the magnetic field, the crystalline electric
field,\cite{Ruegg} the lattice structure, etc.
In this paper, we have restricted our discussions to the normal metallic
and Mott insulating phases
without any long-range order. It is an important open problem to take into
account such instabilities to various ordered phases,
which is now under consideration.
\section*{Acknowledgement}
We are deeply indebted to our collaborators in this field,
T. Ohashi, Y. Imai, S. Suga, T.M. Rice, and M. Sigrist, and
have benefitted from helpful discussions with
F. Becca, S. Biermann, A. Georges, A. Liebsch, S. Nakatsuji,
and Y. Maeno.
Discussions during the YITP workshop YKIS2004 on
``Physics of Strongly Correlated Electron Systems''
were useful to complete this work.
This work was partly supported by a Grant-in-Aid from the Ministry
of Education, Science, Sports and Culture of Japan,
the Swiss National Foundation and the NCCR MaNEP.
A part of computations was done at the Supercomputer Center at the
Institute for Solid State Physics, University of Tokyo
and Yukawa Institute Computer Facility.
\begin{center}
\section*{Acknowledgments}
\end{center}
\addcontentsline{toc}{section}{Acknowledgments}
I would like to thank my advisor Professor G.~R.~Farrar for the skills I learned, for the support and the patience with my written English. I am grateful to Slava, Francesco, Seba for helping me when it was needed. And, thank you Emi.
\newpage
\begin{center}
\section*{Abstract}
\end{center}
\vspace{4mm}
\addcontentsline{toc}{section}{Abstract}
In this thesis we study the dark matter problem with particular reference to a candidate particle within the Standard Model: the $H$ dibaryon. We also consider a scenario which aims to connect the origin of dark matter to the Baryon Asymmetry of the Universe, studying the examples of the $H$ and of a Beyond-the-Standard-Model particle $X$.
Strongly attractive color forces in the flavor singlet channel may lead to a tightly bound and compact $H$ dibaryon. We find that the observation of $\Lambda$ decays from doubly-strange hypernuclei puts a constraint on the $H$ wavefunction which is plausibly satisfied. In this case the $H$ is very long-lived as we calculate and an absolutely stable $H$ is not excluded. We also show that an $H$ or another compact, flavor singlet hadron is unlikely to bind to nuclei, so that experimental bounds on exotic isotopes do not exclude their existence. Remarkably, the $H$ appears to evade other experimental constraints as well, when account is taken of its expected compact spatial wavefunction.
In order to check whether the $H$ is a viable DM candidate, we consider experiments sensitive to light particles. Taking into account the dark matter interaction in the crust above underground detectors we find a window in the exclusion limits in the micro-barn, $m\,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, 2.4$ GeV, range. Remarkably, this coincides with the range expected for the tightly bound $H$. Having these constraints in mind we conclude that the $H$ is a good DM candidate, but its production with sufficient abundance in the Early Universe is challenging.
Finally, we present a scenario in which dark matter carries (anti-)baryon number $B_X$ and which offers a mechanism to generate the baryon asymmetry observed in the Universe. If $\sigma^{\rm annih}_{\bar{X}} < \sigma^{\rm annih}_{X}$, the $\bar{ X}$'s freeze out at a higher temperature and have a larger relic density than $X$'s. If $m_X \,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, 4.5 \,B_X$ GeV and the annihilation cross sections differ by $\mathcal{O}(10\%)$ or more, this type of scenario naturally explains the observed $\Omega_{DM} \simeq 5\, \Omega_b$. Two examples are given, one involving the $H$ and the other invoking an hypothetical beyond the Standard Model candidate $X$.
\chapter{Proposed Scenario}
\label{scenario}
In most approaches the origins of DM and the BAU (Baryon Asymmetry of the Universe) are completely unrelated -- the baryon number density $n_B$ is obtained from baryogenesis models while the number density of DM $n_{DM}$ derives from relic freeze-out calculations, and their values could naturally differ by many orders of magnitude (see \cite{other1,other2,other3,other4} for other papers where these two problems are related). In this Section we propose a new type of scenario, in which the observed baryon asymmetry is due to the {\it separation} of baryon number between ordinary matter and dark matter and not to a net change in the total baryon number since the Big Bang, \cite{fz:bau}.
Thus the abundances of nucleons and dark matter are related. The first Sakharov condition is not required, while the last two remain essential. We give explicit examples in which anti-baryon number is sequestered at temperatures of order 100 MeV.
The CPT theorem requires that the total interaction rate of any ensemble of particles and antiparticles is the same as for the conjugate state in which each particle is replaced by its antiparticle and all spins are reversed. However individual channels need not have the same rate so, when CP is violated, the annihilation rates of the CP reversed systems are not in general equal. A difference in the annihilation cross section, $\sigma^{\rm annih}_{\bar{X}} < \sigma^{\rm annih}_{X} $, means that the freeze out temperature for $X$'s ($T_X$) is lower than for $\bar{X}$'s ($T_{\bar{X}}$). After the $\bar{X}$'s freeze out, the $X$'s continue to annihilate until the temperature drops to $T_{X}$, removing $B_X$ antinucleons for each $X$ which annihilates.
Assuming there are no other significant contributions to the DM density, the present values $n_{o\, N}$, $n_{o\, X}$
and $n_{o\, \bar{X}}$ are
determined in terms of $m_X$, $B_X$ and the observables $ \frac{\Omega_{DM}}{\Omega_b}$ and $\frac{n_{o\,N}}{n_{o\, \gamma}} \equiv \eta_{10} \,10^{-10}$ or $\rho_{\rm crit}$. From WMAP,
\begin{eqnarray} \label{wmappar}
\eta_{10} &=& 6.5^{+0.4}_{-0.3}, \nonumber \\
\frac{\Omega_{DM}}{\Omega_b} &=& 5.27 \pm 0.49.
\end{eqnarray}
Given the values of these observables, we can ``reverse engineer'' the process of baryon-number segregation.
For brevity, suppose there is only one significant species of DM particle. Let us define $\epsilon = \frac{n_X}{n_{\bar{X}} }$. Then the total energy density in $X$'s and $\bar{X}$'s is
\begin{equation}
\rho_{DM} = m_X n_{\bar{X}} (1 + \epsilon).
\end{equation}
By hypothesis, the baryon number density in nucleons equals the antibaryon number density in $X $ and $\bar{X}$'s, so
\begin{equation}
B_X n_{\bar{X}} (1-\epsilon) = (n_N - n_{\bar{N}}) = \frac{\rho_b}{m_N}.
\end{equation}
Thus
\begin{equation} \label{kappa}
\frac{\Omega_{DM}}{\Omega_b} = \left( \frac{1 + \epsilon}{1 - \epsilon} \right) \frac{m_X}{m_N B_X}.
\end{equation}
As long as the DM particle mass is of the order of hadronic masses and $\epsilon $ is not too close to 1, this type of scenario naturally accounts for the fact that the DM and ordinary matter densities are of the same order of magnitude. Furthermore, since $ \frac{1 + \epsilon}{1 - \epsilon} \ge 1$, the DM density in this scenario must be {\it greater} than the nucleonic density, unless $m_X < m_N B_X$, as observed.
Given the parameters of our Universe, we can instead write (\ref{kappa}) as an equation for the DM mass
\begin{equation}\label{mX}
m_X = \left( \frac{1 - \epsilon}{1 + \epsilon} \right) \frac{\Omega_{DM}}{\Omega_b} \, B_X m_N .
\end{equation}
For low baryon number, $B_X = 1\, (2)$, this implies
\begin{equation}
m_X \,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, 4.5 \,(9)\,{\rm GeV}.
\end{equation}
If dark matter has other components in
addition to the $X$ and $\bar{X}$, the $X$ must be lighter still. The observed BAU can be due to baryon number sequestration with heavy DM only if $B_X$ is very large, e.g., strangelets or Q-balls. However, segregating the baryon number in such cases is challenging.
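Equation (\ref{mX}) can be evaluated directly. The sketch below (our own illustration, using the central WMAP ratio from (\ref{wmappar})) shows how the implied DM mass falls as the asymmetry parameter $\epsilon$ grows; the maximal value at $\epsilon=0$ is about $4.9$ GeV for $B_X=1$, and the quoted bound $m_X \lesssim 4.5\,(9)$ GeV corresponds to modest $\epsilon$ and/or the lower end of the WMAP ratio.

```python
M_N = 0.9383       # nucleon mass in GeV
RATIO_DM_B = 5.27  # central WMAP value of Omega_DM / Omega_b

def m_x(eps, b_x=1):
    """DM mass from eq. (mX):
    m_X = (1 - eps)/(1 + eps) * (Omega_DM/Omega_b) * B_X * m_N,
    with eps = n_X / n_Xbar."""
    return (1.0 - eps) / (1.0 + eps) * RATIO_DM_B * b_x * M_N

# m_X is maximal for eps = 0 and decreases monotonically with eps.
masses = [m_x(e) for e in (0.0, 0.1, 0.2, 0.5)]
```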
As an existence proof and to focus discussion of the issues, we present two concrete scenarios. In the first, $X$ is a particle already postulated in QCD, the $H$ dibaryon ({\it uuddss}). New particle physics is necessary, however, because the CP violation of the Standard Model via the CKM matrix cannot produce the required $\mathcal{O}$(20\%) difference in annihilation cross sections, since only the first two generations of quarks are involved. The second scenario postulates a new particle, we call $X_4$, with mass $\,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, 4.5$ GeV, which couples to quarks through dimension-6 operators coming from beyond-the-standard-model physics. In this case CP violation is naturally large enough, $\mathcal{O}$(10\%), because all three quark generations are involved and, in addition, the new interactions in general violate CP. Review of particle properties of these candidates is given in Sections \ref{Tightlybound} and \ref{Xproperties}. After deducing the properties required of these particles by cosmology we discuss indirect (Sections \ref{Hindirect} and \ref{Xconstraints}) and direct (Section \ref{directDM}) searches. As we shall show, the $H$, $\bar{H}$ scenario can already be ruled out by limits on the heat production in Uranus, while $X$ remains a viable candidate.
The annihilation rate of particles of type $j$ with particles of type $i$ is
$\Gamma^{annih}_{j}(T) = \sum_i n_i(T) \langle\sigma_{ij}^{annih}
v_{ij}\rangle$,
where $\langle ... \rangle$ indicates a thermal average and $v_{ij}$ is the
relative velocity.
relative velocity. As the Universe cools, the densities of all the
particle species $i$ decrease and eventually the rate of even the most important
annihilation reaction falls below the expansion rate of the
Universe. The temperature at which this occurs is called the
freezeout temperature $T_j$ and can be calculated by solving the Boltzmann equation.
For the freezeout of $\bar X$ annihilation the Boltzmann equation can be written as
\begin{equation}
\frac{x}{Y^{eq} _{\bar X}}\frac{dY_{\bar X}}{dx}=-\frac{\sum_i n^{eq} _i \langle\sigma v\rangle}{H(T)}\left(\frac{Y_{\bar X}}{Y^{eq} _{\bar X}}-1\right),
\end{equation}
where $x=m_{\bar X}/T$, $Y_{\bar X}=n_{\bar X}/s$ and $s$ is the entropy density of the Universe. Notice that $dY_{\bar X}/dx$ goes to zero (i.e., $Y_{\bar X}$ stays constant, corresponding to freezeout) when $\Gamma^{annih}_{\bar X}(T)\ll H(T)$.
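The qualitative freezeout behavior can be illustrated by integrating a toy version of this equation. The sketch below assumes a relativistic background species, so that $\Gamma/H \propto 1/x$ in the radiation era, and uses an arbitrary overall normalization $\lambda \equiv (\Gamma/H)|_{x=1}$; it is a cartoon of the mechanism, not a computation with the physical parameters of either scenario.

```python
# Toy freezeout: integrate dY/dx = -(Gamma/H) (Y - Y_eq) / x with Gamma/H = lam / x,
# i.e. dY/dx = -(lam / x^2) (Y - Y_eq).  Backward (implicit) Euler keeps the stiff
# early-time evolution stable.  All normalizations here are illustrative.
import math

def y_eq(x):
    # nonrelativistic equilibrium abundance, up to a constant: x^(3/2) exp(-x)
    return x ** 1.5 * math.exp(-x)

def integrate(lam, x0=1.0, x1=40.0, steps=40000):
    dx = (x1 - x0) / steps
    x, y = x0, y_eq(x0)          # start in equilibrium
    for _ in range(steps):
        x += dx
        k = lam / x ** 2 * dx    # implicit Euler: y_new = (y + k*y_eq)/(1 + k)
        y = (y + k * y_eq(x)) / (1.0 + k)
    return y

y_frozen = integrate(100.0)
print(f"frozen-out Y = {y_frozen:.2e}  vs  equilibrium Y_eq(40) = {y_eq(40.0):.2e}")
```

The integrated abundance tracks $Y^{eq}$ while $\Gamma \gg H$ and then freezes far above the exponentially falling equilibrium value, which is the behavior invoked in estimating the freezeout temperature.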
Therefore the freezeout temperature can be estimated roughly from the condition $\Gamma^{annih}_{j}(T_j) = H(T_j) = 1.66 \sqrt{g_*} ~ T_j^2/ M_{Pl} $,
where $g_*$ is the effective number of relativistic degrees of freedom~\cite{kolbTurner}. Between a few MeV and the QCD phase transition only neutrinos, $e^\pm$ and $\gamma$'s are in equilibrium and $g_* = 10.75$. Above the QCD phase transition, which occurs in the range 100 to 200 MeV, light quarks and antiquarks ($q, \, \bar{q}$) and $\mu^\pm$ are also relativistic species in equilibrium, giving $g_* = 56.25$.
The equilibrium density at the freezeout temperature, $n_j(T_j)$, is a good estimate of the relic abundance of the $j$th species~\cite{kolbTurner}. A key element of baryon-number sequestration is that self-annihilation cannot be important for maintaining equilibrium prior to freezeout. This is easily satisfied as long as $\sigma_{\bar{X}X}^{\rm ann}$ is not much greater than $\sigma_{\bar{X}q}^{\rm ann}$, since at freezeout in the ``$X_4$'' scenario, $n_{X_4 ,\, \bar{X_4} }\sim 10^{-11} n_{d ,\, \bar{d} }$.
Given $m_X,\, B_X$ and $g_X$ (the number of degrees of freedom of the $X$ particle) and associated densities $n_{\{X,\bar{X}\}}$, the temperature $T_{\bar{X}}$ at which $\bar{X}$'s must freeze out of thermal equilibrium satisfies:
\begin{equation} \label{Xbarfo}
\frac{n_{\bar{X}} - n_X}{n_{\bar{X}}} \frac{n_{\bar{X}}}{n_\gamma } = (1-\epsilon) \frac{ \pi^2 g_X
x_{\bar{X}}^{3/2} e^{-x_{\bar{X}} } }{2 \zeta(3) (2 \pi)^{3/2} }=
\frac{10.75}{3.91}\frac{\eta_{10} 10^{-10} }{B_X} ,
\end{equation}
where $x_{\bar{X}} \equiv m_X/T_{\bar{X}}$.
The factor $\frac{10.75}{3.91}$ accounts for $\frac{n_b}{n_\gamma}$ being larger at temperatures above $e^\pm$ annihilation than it is today. The equation for $X$ freezeout is the same, with $(1-\epsilon) \rightarrow (1-\epsilon)/\epsilon $. Freezeout parameters for our specific models, the $H$ dibaryon and the $X$, are given in Table 2.1; $\tilde{\sigma} \equiv \langle \sigma^{\rm ann} v \rangle / \langle v \rangle$ is averaged over the relevant distribution of c.m. kinetic energies, thermal at $\approx 100$ MeV for freezeout.
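Eq.~(\ref{Xbarfo}) is a transcendental equation for $x_{\bar{X}}$ which can be solved by simple bisection, since the left-hand side is monotonically decreasing for $x > 3/2$. The inputs below ($\eta_{10} \approx 6$, $g_X = 1$, $\epsilon \approx 0.1$, $B_X = 2$, $m_X \approx 2.15$ GeV) are illustrative choices for an $H$-like scenario, not the exact parameters behind Table 2.1, but they yield a freezeout temperature of the same order.

```python
# Solve (1-eps) pi^2 g_X x^(3/2) e^(-x) / (2 zeta(3) (2 pi)^(3/2))
#        = (10.75/3.91) eta10 1e-10 / B_X
# for x = m_X / T_Xbar by bisection.  Input values are illustrative assumptions.
import math

ZETA3 = 1.2020569  # Riemann zeta(3)

def lhs(x, eps=0.1, g_x=1.0):
    return (1 - eps) * math.pi**2 * g_x * x**1.5 * math.exp(-x) \
        / (2 * ZETA3 * (2 * math.pi)**1.5)

def solve_x(rhs, lo=5.0, hi=60.0, iters=100):
    # lhs is decreasing on [lo, hi], with lhs(lo) > rhs > lhs(hi)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if lhs(mid) > rhs:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

eta10, b_x, m_x = 6.0, 2.0, 2.15   # assumed inputs; m_x in GeV
rhs = (10.75 / 3.91) * eta10 * 1e-10 / b_x
x_fo = solve_x(rhs)
print(f"x = m_X/T = {x_fo:.1f}, so T_Xbar = {1000 * m_x / x_fo:.0f} MeV")
```

For these inputs the solution lands near $x \approx 24$, i.e.\ a freezeout temperature of order 90 MeV, comparable to the $H$, $\bar{H}$ entries of Table 2.1.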
If $X$s interact weakly with nucleons, standard WIMP searches constrain the low energy scattering cross section $\sigma_{DM} \equiv (\sigma^{\rm el}_{\bar{X} N} + \epsilon \sigma^{\rm el}_{XN})/(1+ \epsilon)$. However, if the $X$ is a hadron, multiple scattering in the earth or atmosphere above the detector can cause a significant fraction to reflect or be degraded to below threshold energy before reaching a deep underground detector. Scattering also greatly enhances DM capture by the Earth, since only a small fraction of the halo velocities are less than $v_{\rm esc}^{E} = 11$ km/s. Table I gives the total fluxes and the factor $f_{\rm cap}$ by which the flux of captured $\bar{X}$'s is lower, for the two scenarios. The capture rate is obtained using the code of Edsjo et al.~\cite{edsjo}, which calculates the velocity distribution of weakly interacting dark matter at the Earth taking into account gravitational diffusion by the Sun, Jupiter and Venus. For the $H$ dibaryon the fluxes are the result of integrating the conservative halo velocity distribution \cite{zf:window}. A comprehensive reanalysis of DM cross section limits including the effect of multiple scattering is given in thesis section \ref{directDM} and ref.~\cite{zf:window}. A window in the DM exclusion was discovered for $m_X \,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, 2.4$ GeV and $ \tilde{\sigma}_{DM} \approx 0.3 - 1\, \mu $b; otherwise, if the DM mass is $\,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, 5$ GeV, $\tilde{\sigma}_{DM}$ must be $ \,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, 10^{-38} {\rm cm}^2$ \cite{zf:window}.
Since $\sigma_{\{X,\bar{X} \}N}$ is negligible compared to $\sigma_{NN}$ and the $X,\,\bar{X}$ do not bind to nuclei\cite{fz:nucbind}, nucleosynthesis works the same in these scenarios as with standard CDM. Primordial light element abundances constrain the {\it nucleon} -- not {\it baryon} -- to photon ratio!
\begin{table}[htb] \label{table}
\caption{Required freezeout temperatures and annihilation cross sections at freezeout, and captured DM flux in Earth, in two models; $\sigma_{-42} \equiv \sigma/(10^{-42} {\rm cm}^2)$. }
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Model & $T_{\bar{X}}$ MeV & $T_{X}$ MeV & $\tilde{\sigma}^{\rm ann}_{\bar{X}}$ cm$^2$ & $\tilde{\sigma}^{\rm ann}_{{X}}$ cm$^2$ & $R_{\rm cap}$ s$^{-1}$ \\ \hline
$H$, $\bar{H}$ & 86.3 & 84.5 & $2.2\times 10^{-41}$ & $2.8\times 10^{-41}$ & $3.8 \times 10^{23}$ \\ \hline
$X_4$ & 180 & 159 & $3.3\times 10^{-45}$ & $3.7\times 10^{-45}$ & $ 1.6 \times 10^{12} \sigma_{-42} $ \\ \hline
\end{tabular}
\end{center}
\end{table}
The CPT theorem requires that $\sigma^{\rm ann}_{X} + \sigma^{\rm non-ann}_{X} = \sigma^{\rm ann}_{\bar{X}} + \sigma^{\rm non-ann}_{\bar{X}}$. Therefore a non-trivial consistency condition in this scenario is
\begin{displaymath}
\sigma^{\rm ann}_{X} - \sigma^{\rm ann}_{\bar{X}} \le \sigma^{\rm non-ann}_{\bar{X}}.
\end{displaymath}
The value of the LHS needed for B-sequestration from Table I is compatible with the upper limits on the RHS from DM searches, and $\sigma^{\rm non-ann}_{\bar{X}} \ge \sigma^{\rm el}_{\bar{X}} $, so no fine-tuning is required to satisfy CPT.
In the remainder of this thesis we focus on specific DM candidates carrying baryon number and on the experimental constraints that can be placed on such particles.
\chapter{$H$ dibaryon: Dark Matter candidate within QCD?}
\label{Hdibaryon}
This Chapter is organized as follows: in \S\ref{Hhistory} we review the properties of the dibaryon; in \S\ref{Tightlybound} we focus on the case of a tightly bound $H$, which is a light and compact object; we review the experiments relevant for the existence of a tightly bound $H$ in \S\ref{expts}; we calculate nuclear transitions of the $H$ in \S\ref{NucStab} and binding of the $H$ to nuclei in \S\ref{binding}, and set bounds on parameters from the experiments; in \S\ref{summaryEX} we summarize the properties of the $H$ which would be consistent with current experiments.
\section{$H$ history and properties}
\label{Hhistory}
The $H$ particle is a $(udsuds)$ flavor-singlet dibaryon with strangeness $S=-2$, charge $Q=0$ and spin-isospin-parity $J^P =0^+$. In 1977 Jaffe calculated its mass in the MIT-bag model \cite{jaffe} to be about 2150 MeV, and thus predicted it would be a strong-interaction-stable bound state, since decay to two $\Lambda$ particles would not be kinematically allowed. The basic mechanism expected to give a large attractive force between quarks in the $H$ is {\it the color magnetic interaction}. The contribution to the mass from lowest-order gluon exchange is proportional to
\begin{equation}
\Delta=-\sum_{i>j}({\vec \sigma} _i {\vec \sigma}_j)({\vec \lambda} _i {\vec \lambda} _j)M(m_iR,m_jR),
\end{equation}
where ${\vec \sigma} _i$ and ${\vec \lambda} _i$ are the spin and color vectors of the $i$th quark, and $M(m_iR,m_jR)$ measures the interaction strength. For color singlet hadrons containing quarks and no antiquarks \cite{jaffe},
\begin{equation}
\Delta =\left( 8N-\frac{1}{2}C_6+\frac{4}{3}J(J+1)\right){\bar M},
\end{equation}
where $N$ is the total number of quarks, $J$ is their total angular momentum, and $C_6$ is the Casimir operator of the color-spin $SU(6)$ representation of the quarks. We can see that the lightest dibaryons will be those in which the quarks are in the color-spin representation with the largest value of the Casimir operator. Large values of the Casimir operator are associated with symmetric color-spin representations; overall antisymmetry then requires the flavor representation to be antisymmetric. Calculation shows that only the flavor-singlet representation is light enough to be a stable state.
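To make the trend explicit, the snippet below evaluates $\Delta$ (in units of $\bar{M}$) as a function of $N$, $J$ and $C_6$. The $C_6$ values fed in are placeholders chosen only to exhibit the behavior, larger color-spin Casimir giving smaller $\Delta$ and hence a lighter state; they are not the actual $SU(6)$ Casimir eigenvalues.

```python
# Delta / Mbar = 8 N - C6/2 + (4/3) J (J+1), from the color-magnetic formula above.
# The C6 inputs below are hypothetical placeholders, not real SU(6) Casimir values.

def delta(n_quarks, j, c6):
    """Color-magnetic mass contribution in units of Mbar."""
    return 8 * n_quarks - 0.5 * c6 + (4.0 / 3.0) * j * (j + 1)

# A six-quark, J = 0 state: increasing the color-spin Casimir lowers Delta.
for c6 in (40.0, 60.0, 80.0):      # illustrative values only
    print(f"C6 = {c6:5.1f}:  Delta = {delta(6, 0, c6):5.1f} * Mbar")
```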
This result attracted considerable theoretical and experimental attention. The mass was later computed using different approaches, such as the Skyrme model, quark cluster models, and lattice QCD, with values ranging between 1.5 and 2.2 GeV. Such a wide range of predictions stands in clear contrast to the success of these methods in reproducing the mass spectrum of mesons and baryons.
Searches for the $H$ have used several production mechanisms; we comment here on the most common approaches:
\begin{itemize}
\item{{\it $H$ production via the $(K^-,K^+)$ reaction}}: In the BNL E885 experiment \cite{bnlE885}, a ${\rm C}^{12}$ target is used. The sequential reactions $K^-+p\rightarrow K^+ +\Xi ^-$ and $(\Xi ^- A)_{\rm atom}\rightarrow H+X$ were expected to produce the $H$;
\item{{\it Production through Heavy Ion collision}}: In BNL E896 experiment, \cite{bnlE896}, a $Au+Au$ collision was used to produce the $H$ that could be identified by the anticipated decay $H\rightarrow \Sigma ^-p\rightarrow n\pi ^-p$;
\item{{\it ${\bar p}$--nucleus annihilation}}: In the experiment by Condo {\it et al.} \cite{condo}, the reaction ${\bar p}+A\rightarrow H+X$ was used to produce the $H$, whose decay was sought in the $H\rightarrow \Sigma ^-+p$ channel.
\end{itemize}
Other experiments can be found in ref.~\cite{H}.
Experiments were guided by theoretical predictions for $H$ production and decay lifetimes. The most notable contributions came from the works of Aerts and Dover \cite{aertsdover1,aertsdover2,aertsdover3}, whose calculations have been the basis for the understanding of the Brookhaven experiments BNL E813 and E836 with respect to the formation of the $H$. However, theoretical predictions may depend critically on the wave function of the $H$ dibaryon.
Experiments have so far not confirmed the existence of the $H$ particle, but they put bounds on the production cross section for particular mass values; see \cite{H} for a detailed review. An underlying assumption has generally been that the $H$ is not deeply bound. In our work we are particularly interested in the possibility that the $H$ is tightly bound and has a mass less than $m_N+m_{\Lambda}$. In that case, as we shall see, its lifetime can be longer than the age of the Universe.
\section{Tightly bound $H$} \label{Tightlybound}
Several lines of reasoning suggest the possibility that the $H$ could be a tightly bound state, with mass lower than $m_N+m_{\Lambda}=2053$ MeV. We digress here briefly to review this motivation, put forth in~\cite{f:StableH}. The first line of motivation starts from the observation that the properties of the $\frac{1}{2} ^-$ baryon resonance $\Lambda(1405)$ and its spin-$\frac{3}{2}$ partner $\Lambda(1520)$ are nicely explained if these are assumed to be ``hybrid baryons'': bound states of a gluon with three quarks in a color-octet, flavor-singlet state denoted $(uds)_8$. If we adopt the hybrid baryon interpretation of the $\Lambda(1405)$ and $\Lambda(1520)$, the similarity of their masses and the glueball mass ($\sim 1.5$ GeV) suggests that the color singlet bound state of two $(uds)_8$'s, which would be the $H$, might also have a mass close to 1.5 GeV. A second line of reasoning starts from the observation that instantons produce a strong attraction in the scalar diquark channel, explaining the observed diquark structure of the nucleon. The $H$ has a color-flavor-spin structure which permits a large proportion of the quark pairwise interactions to be in this highly attractive spin-0, color-$\bar{3}$ channel, again suggesting the possibility of a tightly bound $H$. Indeed, ref.~\cite{kochelev} reported an instanton-gas estimate giving $m_H=1780$ MeV.
If the $H$ is tightly bound, it is expected to be spatially compact. Hadron sizes vary considerably, for a number of reasons. The nucleon is significantly larger than the pion, with charge radius $r_N = 0.87$ fm compared to $r_\pi = 0.67$ fm~\cite{PDBook}. Lattice and instanton-liquid studies qualitatively account for this diversity and further predict that the scalar glueball is even more tightly bound: $r_G \approx 0.2$ fm \cite{lattice:RG,shuryak:RG}. If the analogy suggested in ref.~\cite{kf:Lam1405} between the $H$, the $\Lambda(1405)$ and the glueball is correct, it would suggest $r_H \approx r_G \,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, 1/4 ~r_N$. The above size relationships make sense: the nucleon's large size is due to the low mass of the pion, which forms an extended cloud around it, while the $H$ and the glueball do not couple to pions, due to parity and flavor conservation, and are thus small compared to the nucleon.
Lattice QCD efforts to determine the $H$ mass have been unable to obtain clear evidence for a bound $H$. The discussion above suggests that inadequate spatial resolution may be a problem, due to the small size of the $H$. Several lattice calculations \cite{wetzorke,pochinsky,iwasaki,mackenzie} use sufficiently small spacing, but they use the quenched approximation, which does not properly reproduce instanton effects. This is a serious deficiency, since the instanton liquid results of ref.~\cite{kochelev} indicate that instanton contributions are crucial for the binding in the scalar diquark channel.
In the absence of an unquenched, high-resolution lattice QCD calculation capable of a reliable determination of the $H$ mass and size, we will consider all values of $m_H $ and take $r_H/r_N \equiv 1/f$ as a parameter, with $f$ in the range $2$-$6$.
Since the $H$ interacts with ordinary matter only through a non-vanishing color charge radius (it is a neutral particle with zero spin and isospin), ref.~\cite{f:StableH} estimates the cross section of the $H$ on ordinary matter to be of order $10^{-2}$--$10^{-3}$ mb.
Adopting the assumption of a light, tightly bound $H$ motivated by Farrar in \cite{f:StableH}, we examine the viability of this model using current experimental constraints. My work has focused on the study of several processes involving the $H$ which are phenomenologically important if it exists:
\begin{itemize}
\item {whether it binds to nuclei,} in which case exotic isotope searches place stringent constraints;
\item {nucleon transitions of the $H$} -- conversion of two $\Lambda$'s in a doubly-strange hypernucleus to
an $H$, which is directly tested in experiments; decay of the $H$ to two baryons; and---if the $H$ is light enough---conversion of two nucleons in a nucleus to an $H$.
\end{itemize}
The experimental constraints important for the existence of the light $H$ and the $H$ as a DM candidate are outlined below. For more experimental constraints on tightly bound $H$, see~\cite{f:StableH}.
\section{The Existence of the $H$ -- Experimental constraints}
\label{expts}
\subsection{Double $\Lambda $ hyper-nucleus detection} \label{expdoublelambda}
One of the ways to produce and study the $H$ is through the production of double hypernuclei. A double $\Lambda$ hypernucleus, usually formed in an experiment through the $(K^-, K^+)$ interaction with the target, is expected to decay strongly into the $H$ and the original nucleus. If single $\Lambda$ decay from a double $\Lambda$ hypernucleus is observed, it means either that the mass of the $H$ is heavier than the mass of the two $\Lambda$'s minus their binding energy, {\it or} that the decay of two $\Lambda$'s to an $H$ proceeds on a longer time scale than the $\Lambda$ weak decay.
There are five experiments which have reported positive results in the search for single $\Lambda$ decays from double $\Lambda$ hypernuclei. The three early emulsion-based experiments \cite{prowse,danysz,kek} suffer from ambiguities in the particle identification, and therefore we do not consider them further. In the latest emulsion experiment at KEK~\cite{kek2}, an event was observed which is interpreted with good confidence as the sequential decay of ${\rm He}^6 _{\Lambda \Lambda}$ emitted from a $\Xi ^-$ hyperon nuclear capture at rest. The binding energy of the double $\Lambda$ system is obtained in this experiment to be $B_{\Lambda \Lambda }=1.01\pm 0.2$ MeV, in significant disagreement with the results of previous emulsion experiments, which found $B_{\Lambda \Lambda }\sim 4.5$ MeV.
The third experiment, at BNL~\cite{ags}, was not an emulsion experiment. After the $(K^-, K^+)$ reaction on a ${\rm Be}^9$ target produced $S=-2$ nuclei, it detected $\pi$ pairs coming from the same vertex at the target. Each pion in a pair indicates one unit of strangeness change from the (presumably) di-$\Lambda$ system. Observed peaks in the two pion spectrum have been interpreted as corresponding to two kinds of decay events. The pion kinetic energies in those peaks are (114,133) MeV and (104,114) MeV. The first peak can be understood as two independent single $\Lambda$ decays from $\Lambda \Lambda$ nuclei. The energies of the second peak do not correspond to known single $\Lambda$ decay energies in the hypernuclei of interest. The proposed explanation \cite{ags} is that they are pions from the decay of the double $\Lambda$ system through a specific He resonance. The required resonance has not yet been observed experimentally, but its existence is considered plausible. This experiment does not suffer from low statistics or inherent ambiguities, and one of the measured peaks in the two pion spectrum suggests observation of consecutive weak decays of a double $\Lambda$ hypernucleus. The binding energy of the double $\Lambda$ system $B_{\Lambda \Lambda }$ could not be determined in this experiment.
The KEK and BNL experiments are generally accepted to demonstrate quite conclusively, in two different techniques, the observation of $\Lambda$ decays from double $\Lambda$ hypernuclei. Therefore the formation of the $H$ in a double $\Lambda$ hypernucleus does not proceed, at least not on a time scale faster than the $\Lambda$ lifetime, i.e., $\tau _{A_{\Lambda \Lambda}\rightarrow A_{H}'X}$ cannot be much less than $\approx 10^{-10}$s. (To give a more precise limit on $\tau _{A_{\Lambda \Lambda}\rightarrow A_{H}'X}$ requires a detailed analysis by the experimental teams, taking into account the number of hypernuclei produced, the number of observed $\Lambda$ decays, the acceptance, and so on.) This experiment is considered leading evidence against the existence of the $H$ di-baryon. As will be seen below, this constraint is readily satisfied if the $H$ is compact: $r_H \,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, 1/2 ~r_N$ or less, depending on the nuclear wave function.
\subsection{Stability of nuclei} \label{expstab}
This subsection derives constraints for a stable $H$, where $m_H\,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, 2m_N$. In that case the $H$ would be an absolutely stable form of matter and nuclei would generally be unstable toward decay to the $H$.
There are a number of possible reactions by which two nucleons can convert to an $H$ in a nucleus if that is kinematically allowed ($m_H \,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, 2 m_N$\footnote{Throughout, we use this shorthand for the more precise inequality $m_H < m_{A} - m_{A'} - m_X$ where $m_X$ is the minimum invariant mass of the final decay products.}). The initial nucleons are most likely to be $pn$ or $nn$ in a relative s-wave, because in other cases the Coulomb barrier or relative orbital angular momentum suppresses the overlap of the nucleons at short distances which is necessary to produce the $H$. If $m_H \,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, 2 m_N - m_\pi=1740$ MeV, the final state can be $H \pi^+ $ or $H \pi^0$. If $\pi $ production is not allowed, for $m_H \,\raisebox{-0.13cm}{$\stackrel{\textstyle>}{\textstyle\sim}$}\, 1740$ MeV, the most important reactions are $p n \rightarrow H e^+ \nu_e$ or the radiative-doubly-weak reaction $n n \rightarrow H \gamma$.
The best experiments to place a limit on the stability of nuclei are proton decay experiments. Super-Kamiokande (SuperK) can place the most stringent constraint due to its large mass. It is a water Cerenkov detector with a 22.5 kiloton fiducial mass, corresponding to $8\times 10^{32}$ oxygen nuclei. SuperK is sensitive to proton decay events in over 40 proton decay channels \cite{SuperK}. Since the signatures for the transition of two nucleons to the $H$ are substantially different from the monitored transitions, a dedicated analysis by SuperK is needed to place a limit. We will discuss the order of magnitude of the limits which can be anticipated.
Detection is easiest if the $H$ is light enough to be produced with a $\pi^+$ or $\pi^0$. The efficiency of SuperK to detect neutral pions in the energy range of interest (KE $\sim 0-300$ MeV) is around 70 percent. In the case that a $\pi ^+$ is emitted, it can charge exchange to a $\pi ^0$ within the detector, or be directly detected as a non-showering muon-like particle with similar efficiency. More difficult is the most interesting mass range $m_H \,\raisebox{-0.13cm}{$\stackrel{\textstyle>}{\textstyle\sim}$}\, 1740$ MeV, for which the dominant channel $p n \rightarrow H e^+ \nu$ gives an electron with $E \sim (2 m_N - m_H)/2 \,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, 70$ MeV. The channel $nn \rightarrow H \gamma$, whose rate is smaller by a factor of order $\alpha$, would give a monochromatic photon with energy $(2 m_N - m_H) \,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, 100$ MeV.
We can estimate SuperK's probable sensitivity as follows. The ultimate background comes primarily from atmospheric neutrino interactions,
\begin{equation}
\nu N \rightarrow N'(e,\mu),\quad \nu N \rightarrow N'(e,\mu)+n\pi \quad {\rm and} \quad \nu N\rightarrow \nu N' +n\pi, \nonumber
\end{equation}
for which the event rate is about $100$ per kton-yr. Without a strikingly distinct signature, it would be difficult to detect a signal rate significantly smaller than this, which implies SuperK might be able to achieve a sensitivity of order $\tau_{A_{NN}\rightarrow A_{H}'X}\,\raisebox{-0.13cm}{$\stackrel{\textstyle>}{\textstyle\sim}$}\, {\rm few}\times 10^{29}$ yr. Since the $H$ production signature is not more favorable than the signatures for proton decay, the SuperK limit on $\tau_{A_{NN}\rightarrow A_{H}'X}$ can at best be $0.1 \tau_p$, where $0.1$ is the ratio of oxygen nuclei to protons in water. A detailed study of the spectrum of the background is needed to make a more precise statement. We can get a lower limit on the SuperK lifetime limit by noting that the SuperK trigger rate is a few Hz~\cite{SuperK}, putting an immediate limit $\tau_{O\rightarrow H + X }\,\raisebox{-0.13cm}{$\stackrel{\textstyle>}{\textstyle\sim}$}\, {\rm few}\times 10^{25}$ yr, assuming the decays trigger SuperK.
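The two estimates above follow from a few lines of arithmetic. The sketch below uses the numbers quoted in the text and interprets ``few Hz'' as 2 Hz; both choices are assumptions made for the purpose of the order-of-magnitude estimate.

```python
# Order-of-magnitude SuperK sensitivity estimates, using the numbers quoted in the text.

SECONDS_PER_YEAR = 3.15e7
N_OXYGEN = 8e32                        # oxygen nuclei in the 22.5 kton fiducial mass
BG_RATE = 100.0 * 22.5                 # atmospheric-nu background, events/yr (100 per kton-yr)
TRIGGER_RATE = 2.0 * SECONDS_PER_YEAR  # triggers/yr, taking "few Hz" as 2 Hz (assumption)

# Background-limited sensitivity: a signal hidden below the atmospheric-neutrino rate.
tau_background = N_OXYGEN / BG_RATE    # yr
# Trigger-limited floor: the total trigger rate bounds any decay rate from above.
tau_trigger = N_OXYGEN / TRIGGER_RATE  # yr

print(f"background-limited: ~{tau_background:.1e} yr")
print(f"trigger-limited:    ~{tau_trigger:.1e} yr")
```

These reproduce the $10^{29}$ yr and $10^{25}$ yr scales quoted above.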
SuperK limits will apply to specific decay channels, but other experiments potentially establish limits on the rate at which nucleons in a nucleus convert to an $H$ which are independent of the $H$ production reaction. These experiments place weaker constraints on this rate due to their smaller size, but they are of interest because in principle they measure the stability of nuclei directly. Among those cited in ref.~\cite{PDG02}, only the experiment by Flerov {\it et al.}~\cite{flerov} could in principle be sensitive to transitions of two nucleons to the $H$. It searched for decay products from ${\rm Th}^{232}$, above the Th natural decay mode background of $4.7$ MeV $\alpha$ particles, emitted at the rate $\Gamma _{\alpha}=0.7\times 10^{-10}~{\rm yr}^{-1}$. Cuts to remove the severe background of $4.7$ MeV $\alpha$'s may or may not remove events with production of an $H$. Unfortunately ref.~\cite{flerov} does not discuss these cuts or the experimental sensitivity in detail. An attempt to correspond with the experimental group, to determine whether their results are applicable to the $H$, was unsuccessful. If applicable, the results would establish that the lifetime $\tau_{{\rm Th}^{232}\rightarrow H + X}> 10^{21}$ yr.
Better channel-independent limits on $N$ and $NN$ decays in nuclei have been established recently, as summarized in ref.~\cite{BOREXINO}. Among them, searches for the radioactive decay of isotopes created as a result of $NN$ decays of a parent nucleus yield the most stringent constraints. This method was first exploited in the DAMA liquid Xe detector \cite{DAMAnucldecay}. BOREXINO has recently improved these results \cite{BOREXINO} using their prototype detector, the Counting Test Facility (CTF), with parent nuclei ${\rm C}^{12},~{\rm C}^{13}~{\rm and}~{\rm O}^{16}$. The signal in these experiments is the beta and gamma radiation in a specified energy range associated with deexcitation of a daughter nucleus created by decay of outer-shell nucleons in the parent nucleus. They obtain the limits $\tau _{pp} > 5\times 10^{25}$ yr and $\tau _{nn} > 4.9\times 10^{25}$ yr. However, $H$ production requires overlap of the nucleon wavefunctions at short distances and is therefore suppressed for outer-shell nucleons, severely reducing the utility of these limits. Since the SuperK limits will probably be much better, we do not attempt to estimate the degree of suppression at this time.
Another approach could be useful if for some reason the direct SuperK search is foiled. Ref.~\cite{suzuki} places a limit on the lifetime of a bound neutron, $\tau _{n}>4.9\times 10^{26}$ yr, by searching for $\gamma$'s with energy $E_{\gamma}=19-50$ MeV in the Kamiokande detector. The idea is that after the decay of a neutron in oxygen, the de-excitation of ${\rm O}^{15}$ proceeds by emission of $\gamma$'s in the given energy range. The background is especially low for $\gamma$'s of these energies, since atmospheric neutrino events produce $\gamma$'s above 100 MeV. In our case, some of the photons in the de-excitation process after conversion of $pn$ to $H$ would be expected to fall in this energy window.
In \S\ref{NucStab} we calculate nuclear transition rates of a tightly bound $H$ in order to find constraints on the $H$ size and mass, primarily focusing on the constraints set by SuperK experiment.
\subsection{Experimental constraints on the $H$ binding}
\label{exptsB}
If the $H$ binds to nuclei and if it is present with DM abundance, it should be present in nuclei on Earth and experiments searching for anomalous mass isotopes would be sensitive to its existence. Accelerator mass spectroscopy (AMS) experiments generally have high sensitivity to anomalous isotopes, limiting the fraction of anomalous isotopes to $10^{-18}$ depending on the element. We discuss binding of the $H$ to heavy and to light isotopes separately.
The $H$ will bind more readily to heavy nuclei than to light ones because their potential well is wider. However, searches for exotic particles bound to heavy nuclei are limited to the search for charged particles in Fe \cite{Feneg} and to the experiment by Javorsek et al.\ \cite{javorsek} on Fe and Au. The experiment by Javorsek searched for anomalous Au and Fe nuclei with $M_X$ in the range $200$ to $350$ atomic mass units (u). Since the mass of Au is $197$ u, this experiment is sensitive to the detection of an exotic particle with mass $M_X \ge 3~{\rm u} = 2.79$ GeV and is not sensitive to the tightly bound $H$.
A summary of limits from various experiments on the concentrations of exotic isotopes of light nuclei is given in~\cite{hemmick}. Only the measurements on hydrogen~\cite{hydrogen} and helium \cite{helium} nuclei are of interest here, because they are sensitive to the presence of a light exotic particle with a mass of $M_X \sim 1$ GeV. It is very improbable that the $H$ binds to hydrogen, since the $\Lambda$ does not bind to hydrogen in spite of having attractive contributions to the potential not shared by the $H$, {\it e.g.}, from $\eta$ and $\eta'$ exchange. Thus we consider only the limit from helium. The limit on the concentration ratio of exotic to non-exotic isotopes of helium comes from the measurements of Klein, Middleton and Stevens, who quote an upper limit of $\frac {He_X}{He}<2\times 10^{-14}$, and $\frac{He_X}{He}<2\times 10^{-12}$ for primordial He \cite{plaga}. Whether these constraints rule out the $H$ depends on the $H$ coupling to the nucleus.
In \S\ref{binding} we calculate the binding of the $H$, or more generally any flavor singlet, to nuclei and find the values of coupling which are allowed from the existence of the $H$. As we will see, the allowed couplings coincide with the values expected from the particle physics arguments.
\section{Nucleon and nuclear transitions \newline of the $H$ dibaryon -- Rate estimates} \label{NucStab}
In this section we study several processes involving the $H$ which are phenomenologically important if it exists, \cite{fz:nucstab}: conversion of two $\Lambda$'s in a doubly-strange hypernucleus to an $H$ (\S\ref{expdoublelambda}), decay of the $H$ to two baryons, and---if the $H$ is light enough---conversion of two nucleons in a nucleus to an $H$ (\S\ref{expstab}). The amplitudes for these processes depend on the spatial wavefunction overlap of two baryons and an $H$. We are particularly interested in the possibility that the $H$ is tightly bound and that it has a mass less than $m_N + m_\Lambda$ because then, as we shall see, the $H$ is long-lived, with a lifetime which can be longer than the age of the Universe.
To estimate the rates for these processes requires calculating the overlap of initial and final quark wavefunctions. We model that overlap using an Isgur-Karl harmonic oscillator model for the baryons and $H$, and the Bethe-Goldstone and Miller-Spencer wavefunctions for the nucleus. The results depend on $r_N/r_H$ and the nuclear hard core radius.
We also calculate the lifetime of the $H$, taking a similar approach to the overlap calculation. We do this in three qualitatively distinct mass ranges, under the assumption that the conditions to satisfy the constraints from double-$\Lambda$ hypernuclei are met. The ranges are
\begin{itemize}
\item {$m_H < m_N + m_\Lambda$,} in which $H$ decay is a doubly-weak $\Delta S = 2$ process,
\item {$m_N + m_\Lambda < m_H < 2 m_\Lambda$,} in which the $H$ can decay by a normal weak interaction, and
\item {$m_H > 2 m_\Lambda$,} in which the $H$ is strong-interaction unstable.
\end{itemize}
The $H$ lifetime in these ranges is greater than or of order $10^{7}$ years, $\sim 10$ sec, and $\sim 10^{-14}$ sec, respectively.
Finally, if $m_H \,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, 2 m_N$, nuclei are unstable and $\Delta S=-2$ weak decays convert two nucleons to an $H$. In this case the stability of nuclei is a more stringent constraint than the double-$\Lambda$ hypernuclear observations, but our results of the next subsection show that nuclear stability bounds can also be satisfied if the $H$ is sufficiently compact: $r_H \,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, ~1/4 ~r_N$ depending on mass and nuclear hard core radius. This option is vulnerable to experimental exclusion by SuperK.
In order to calculate transition rates we factor transition amplitudes into an amplitude describing an $H$-baryon-baryon wavefunction overlap, times a weak interaction transition amplitude between strange and non-strange baryons. In subsection \ref{overlapcalc} we set up the theoretical apparatus to calculate the wavefunction overlap between the $H$ and two baryons. We determine the weak interaction matrix elements phenomenologically in subsection \ref{weakME}. Nuclear decay rates are computed in subsection \ref{convlifetimes}, while the lifetime of the $H$ for various values of $m_H$ is found in \S\ref{metastable}. The results are reviewed and conclusions are summarized in section \ref{summaryEX}.
\subsection{Overlap of $H$ and two baryons}
\label{overlapcalc}
We wish to calculate the amplitudes for a variety of processes, some of which require one or more weak interactions to change strange quarks into light quarks. By working in pole approximation, we factor the problem into an $H$-baryon-baryon wavefunction overlap times a weak interaction matrix element between strange and non-strange baryons, which will be estimated in the next section. For instance, the matrix element for the transition of two nucleons in a nucleus $A$ to an $H$ and nucleus $A'$, $A_{NN} \rightarrow A'_H X $, is calculated in the $\Lambda \Lambda$ pole approximation, as the product of matrix elements for two subprocesses: a transition matrix element for formation of the $H$ from the $\Lambda \Lambda$ system in the nucleus, $ |{\cal M}|_{\{\Lambda \Lambda\} \rightarrow H~X}$, times the amplitude for a weak doubly-strangeness-changing transition, $|{\cal M}|_{NN \rightarrow \Lambda \Lambda}$:
\begin{equation}
|{\cal M}|_{A
\rightarrow A' _HX}=|{\cal M}|_{\{\Lambda \Lambda\} \rightarrow H~X}~|{\cal M}|_{NN \rightarrow \Lambda \Lambda}.
\end{equation}
We ignore mass differences between light and strange quarks and thus the spatial wavefunctions of all octet baryons are the same. In this section we are concerned with the dynamics of the process and we suppress spin-flavor indices.
\subsubsection{Isgur-Karl Model and generalization to the $H$}
\label{IK}
The Isgur-Karl (IK) non-relativistic harmonic oscillator quark model~\cite{IK,faiman,bhaduri} was designed to reproduce the masses of the observed resonances and it has proved to be successful in calculating baryon decay rates~\cite{faiman}. In the IK model, the quarks in a baryon are described by the Hamiltonian
\begin{equation} \label{hamiltonian}
H=\frac {1}{2m} (p^2 _1+p^2 _2+p^2 _3)
+\frac{1}{2}k\Sigma_{i<j} ^3 (\vec {r}_i -\vec {r}_j)^2,
\end{equation}
where we have neglected constituent quark mass differences. The wave function of baryons can then be written in terms of the relative positions of quarks and the center of mass motion is factored out. The relative wave function in this model is~\cite{faiman,bhaduri}
\begin{equation}
\Psi _{B} (\vec{r}_1,\vec{r}_2,\vec{r}_3) = N_{B} \exp \left[{-\frac {\alpha_{B} ^2}{6}\Sigma_{i<j} ^3 (\vec {r}_i -\vec{r}_j)^2}\right],
\end{equation}
where $N_B$ is the normalization factor, $\alpha _B=\frac {1}{\sqrt{<r_B ^2>}}=\sqrt{3km}$, and $<r_B ^2>$ is the baryon mean charge radius squared. Changing variables to
\begin{equation}\label{rholambda}
\vec {\rho} =\frac {\vec {r_1} -\vec {r_2}}{\sqrt{2}},~\vec {\lambda}=\frac {\vec {r_1} +\vec {r_2}-2 \vec {r_3}}{\sqrt{6}},
\end{equation}
reduces the wave function to two independent harmonic oscillators. In the ground state \begin{equation}
\Psi_{B} (\vec {\rho}, \vec {\lambda})=\left( \frac{\alpha_B}{\sqrt{\pi}} \right) ^3 \exp\left[ -\frac{\alpha_{B}^2}{2} (\rho ^2 + \lambda ^2)\right].
\end{equation}
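As a sanity check on the conventions, the ground-state wavefunction above should be unit-normalized over the six-dimensional $(\vec{\rho},\vec{\lambda})$ space. A minimal numerical sketch, using the standard IK fit $\alpha_B = 0.406$ GeV discussed below:

```python
import math

ALPHA_B = 0.406   # GeV, standard Isgur-Karl oscillator parameter

def gauss3d(alpha, rmax=40.0, n=20000):
    """4*pi * int_0^rmax r^2 exp(-alpha^2 r^2) dr, midpoint rule (r in GeV^-1)."""
    dr = rmax / n
    s = 0.0
    for i in range(n):
        r = (i + 0.5) * dr
        s += r * r * math.exp(-alpha * alpha * r * r)
    return 4.0 * math.pi * s * dr

# |Psi_B|^2 = (alpha/sqrt(pi))^6 exp(-alpha^2 (rho^2 + lambda^2)) factorizes
# into identical rho- and lambda-integrals, each equal to (pi/alpha^2)^{3/2}.
norm = (ALPHA_B / math.sqrt(math.pi)) ** 6 * gauss3d(ALPHA_B) ** 2
print(norm)   # ~1.0
```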
One of the deficiencies of the IK model is that the value of the $\alpha_B$ parameter needed to reproduce the mass splittings of the lowest lying $\frac {1}{2} ^+$ and $\frac {3}{2} ^+$ baryons, $\alpha_B = 0.406$ GeV, corresponds to a root-mean-square charge radius for the proton of $\sqrt{<r^2_{N}>}= \frac {1}{\alpha _B}=0.49~{\rm fm}$. This is distinctly smaller than the experimental value of $0.87$ fm. Our results depend on the choice of $\alpha_B$ and therefore we also report results using $\alpha_B = 0.221$ GeV which reproduces the observed charge radius at the expense of the mass-splittings.
Another concern is the applicability of the non-relativistic IK model in describing quark systems, especially in the case of the tightly bound $H$. With $r_H/r_N = 1/f$, the quark momenta in the $H$ are $\approx f$ times higher than in the nucleon, which makes the non-relativistic approach more questionable than in the case of nucleons. Nevertheless we adopt the IK model because it offers a tractable way of obtaining a quantitative estimate of the effect of the small size of the $H$ on the transition rate, and there is no other alternative available at this time. For comparison, it would be very interesting to have a Skyrme model calculation of the overlap of an $H$ with two baryons.
We fix the wave function for the $H$ particle starting from the same Hamiltonian (\ref{hamiltonian}), but generalized to a six quark system. For the relative motion part this gives
\begin{equation}
\Psi_{H}=N_{H}\exp\left[-\frac{\alpha_{H}^2}{6}\sum _{i<j} ^6 (\vec{r_i} -\vec{r_j})^2\right].
\end{equation}
The space part of the matrix element of interest, $\langle A'_{H}|A_{ \Lambda \Lambda }\rangle$, is given by the integral
\begin{equation}
\int \prod _{i=1} ^6 d^3\vec{r}_i \Psi_{\Lambda} ^{a} (1,2,3) \Psi _{\Lambda} ^{b} (4,5,6) \Psi_H(1,2,3,4,5,6).
\end{equation}
Therefore it is useful to choose variables for the $H$ wavefunction as follows, replacing
\begin{equation}
\vec{r}_1,\vec{r}_2,\vec{r}_3,\vec{r}_4,\vec{r}_5,\vec{r}_6\rightarrow \vec {\rho}^{a},\vec{\lambda}^{a},\vec{\rho}^{b},\vec{\lambda}^{b}, \vec {a}, \vec {R}_{CM},
\end{equation}
where $\vec{\rho}^{a(b)}$ and $\vec {\lambda}^{a(b)}$ are defined as in eq.~(\ref{rholambda}), with $a(b)$ referring to coordinates $1,2,3~(4,5,6)$. (Since we are ignoring the flavor-spin part of the wavefunction, we can consider the six quarks as distinguishable and not worry about Fermi statistics at this stage.) We also define the center-of-mass position and the separation, $\vec{a}$, between initial baryons $a$ and $b$:
\begin{equation}
\label{coord} \vec{R}_{CM}=\frac {\vec{R}_{CM}^{a}+\vec{R}_{CM}^{b}}{2},~ \vec{a}=\vec{R}_{CM}^{a}-\vec{R}_{CM}^{b}.
\end{equation}
Using these variables, the $H$ ground state wave function becomes
\begin{eqnarray}
\Psi_{H}&=&\left( \frac{3}{2}\right) ^{3/4}
\left( \frac{\alpha _H}{\sqrt{\pi}} \right)^{15/2}\\
&\times & \exp[-\frac {\alpha_{H} ^2}{2} (\vec {\rho^{a}}^2 + \vec{\lambda ^{a}}^2+\vec {\rho^{b}}^2 + \vec {\lambda ^{b}}^2 +\frac{3}{2} \vec {a}^2)]. \nonumber
\end{eqnarray}
As for the 3-quark system, $\alpha _H=\frac {1}{\sqrt{<r_H ^2>}}$.
\subsubsection{Nuclear Wavefunction} \label{BBG}
We will use two different wavefunctions to describe two $\Lambda$'s or nucleons in a nucleus, in order to study the model dependence of our results and to elucidate the importance of different aspects of the nuclear wavefunction. A commonly used wavefunction is the Miller-Spencer (MS) wavefunction\cite{MillerSpencer}:
\begin{equation}
\label{MS}
\psi _{MS}=1-e^{-c_1 a^2}(1-c_2 a^2),
\end{equation}
with the canonical parameter choices $c_1 = 1.1$ fm$^{-2}$ and $c_2=0.68$ fm$^{-2}$. It must be emphasized that at the short distances relevant for this calculation, the form and magnitude of the MS wavefunction are not constrained experimentally and rather are chosen to give a good fit to long-distance physics with a simple functional form. The other wavefunction we use (BBG) is a solution of the Brueckner-Bethe-Goldstone equation describing the interaction of a pair of fermions in an independent pair approximation; see, {\it e.g.}, \cite{walecka}. It is useful because we can explicitly explore the sensitivity of the result to the unknown short-distance nuclear physics by varying the hard-core radius.
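A quick evaluation of eq.~(\ref{MS}) with the canonical parameters shows the short-distance ``correlation hole'': $\psi_{MS}$ vanishes at zero separation and approaches the uncorrelated value $1$ within a few fm.

```python
import math

C1, C2 = 1.1, 0.68   # fm^-2, canonical Miller-Spencer parameters

def psi_ms(a):
    """Miller-Spencer two-nucleon correlation function, separation a in fm."""
    return 1.0 - math.exp(-C1 * a * a) * (1.0 - C2 * a * a)

print(psi_ms(0.0), psi_ms(0.5), psi_ms(3.0))   # 0 at contact, -> 1 at large a
```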
The BBG wave function is obtained as follows. The solution of the Schr\"odinger equation for two fermions in the Fermi sea interacting through a potential $v({\vec x}_1,{\vec x}_2)$ takes the form
\begin{equation}
\psi (1,2)=\frac {1}{\sqrt{V}}~e^{i{\vec P}{\vec R}_{CM}}
~\psi ({\vec a}),
\end{equation}
where ${\vec R}_{CM}$ and ${\vec a}$ are defined as in (\ref {coord}). The first factor contains the center-of-mass motion and the second is the internal wave function of the interacting pair. $\psi ({\vec a})$ is a solution of the Bethe-Goldstone equation (eq.~(36.15) in~\cite{walecka}) which is simply the Schr\"odinger equation for two interacting fermions in a Fermi gas, where the Pauli principle forbids the appearance of intermediate states that are already occupied by other fermions. Both wave functions are normalized so that the space integral of the modulus-squared of the wave function equals one. In the application of this equation to nuclear matter, the interaction of each particle from the pair with all particles in the nucleus through an effective single particle potential is included, in the independent pair approximation known as Brueckner theory (see eq.~(41.1) and (41.5) in~\cite{walecka}).
We are interested in s-wave solutions to the Bethe-Goldstone equation since they are the ones that penetrate to small relative distances. Following \cite{walecka}, an s-wave solution of the internal wave function is sought in the form
\begin{equation}
\psi (a)\sim \frac{u(a)}{a},
\end{equation}
which simplifies the Bethe-Goldstone equation to
\begin{equation}
\left( \frac {d^2}{da^2}+k^2 \right)u(a)=v(a)u(a)-\int ^{\infty} _0\chi (a,y)v(y)u(y)dy
\end{equation}
where $v(a)$ is the single particle potential in the effective-mass approximation, and the kernel $\chi (a,y)$ is given by
\begin{equation}
\chi (a,y)=\frac{1}{\pi} \left[ \frac{\sin k_F(a-y)}{a-y}-\frac{\sin k_F (a+y)}{a+y}\right],
\end{equation}
where $k_F$ is the Fermi wavenumber. For the interaction potential between two nucleons in a nucleus we choose a hard core potential for the following reasons. The two particle potential in a nucleus is poorly known at short distances. Measurements (the observed deuteron form factors, the sums of longitudinal response of light nuclei, \ldots) only constrain two-nucleon potentials and the wave functions they predict at internucleon distances larger than $0.7$ fm~\cite{pandharipande}. The Bethe-Goldstone equation can be solved analytically when a hard-core potential is used. While the hard-core form is surely only approximate, it is useful for our purposes because it enables us to isolate the sensitivity of the results to the short-distance behavior of the wavefunction. We stress again that more ``realistic'' wavefunctions, including the MS wave function, are in fact not experimentally constrained for distances below $0.7$ fm. Rather, their form at short distance is chosen for technical convenience or aesthetics.
Using the hard core potential, the s-wave BG wavefunction is
\begin{equation}
\Psi_{BG}(\vec{a})=\left\{\begin{array}{ll}
N_{BG}\frac{u(a)}{a} & \textrm{for \quad $a>\frac{c}{k_F}$} \\
0 & \textrm {for $\quad a<\frac{c}{k_F}$}
\end{array}\right.,
\end{equation}
with
\begin{equation} \label{Nbg}
N_{BG}=\frac {1}{\sqrt{\int^{R(A)} _{\frac {c}{k_F}} \left| \frac {u(a)}{a} \right| ^2 4\pi ~a^2~d a}},
\end{equation}
where $\frac{c}{k_F}$ is the hard core radius and $R(A)=1.07\,A^{1/3}$ fm is the radius of a nucleus with mass number $A$. Expressions for $u$ can be found in~\cite{walecka}, eq.~(41.31). The normalization factor $N_{BG}$ is fixed by setting the integral of $|\psi _{BG}|^2$ over the volume of the nucleus equal to one. The function $u$ vanishes at the hard core surface by construction and then rapidly approaches the unperturbed value, crossing over that value at the so-called ``healing distance''. At large relative distances and when the size of the normalization volume is large compared to the hard core radius, $u(a)/a$ approaches a plane wave and the normalization factor $N_{BG}$ (\ref{Nbg}) reduces to the value $1/\sqrt{V_{\rm box}}$, as
\begin{equation} \label{pwBGGrln}
\psi_{BG}(a)=N_{BG}~\frac{u(a)}{a}~\rightarrow \frac {1}{\sqrt{V_{\rm box}}}~e^{ika}.
\end{equation}
\subsubsection{Overlap Calculation}
The non-relativistic transition matrix element for a transition $\Lambda \Lambda \rightarrow H$ inside a nucleus is given by (suppressing spin and flavor)
\begin{eqnarray} \label{matrixel}
T_{\{\Lambda \Lambda\}\rightarrow H}&=&2 \pi i \delta (E) \int d^3
a~ d^3 R_{CM} \prod _{i=a,b}
d^3 \rho^i d^3 \lambda ^i \nonumber \\
&\times & ~\psi^* _H \psi ^a _{\Lambda}~\psi^b _{\Lambda}~\psi
_{nuc}~ e^{i({\vec k}_H-{\vec k}_{\Lambda \Lambda}){\vec R}_{CM}},
\end{eqnarray}
where $\delta (E)=\delta (E_H-E_{\Lambda \Lambda})$, $\psi ^{a,b}_{\Lambda}=\psi ^{a,b} _{\Lambda}(\vec {\rho}^{a,b},\vec{\lambda}^{a,b})$, and $\psi _{nuc}=\psi _{nuc}({\vec a})$ is the relative wavefunction of the two $\Lambda$'s in the nucleus. The notation $\{\Lambda \Lambda\}$ is a reminder that the $\Lambda$'s are in a nucleus. The plane waves of the external particles contain normalization factors $1/\sqrt{V}$ and these volume elements cancel with volume factors associated with the final and initial phase space when calculating decay rates. The integration over the center of mass position of the system gives a 3-dimensional momentum delta function and we can rewrite the transition matrix element as
\begin{equation} \label{matrixel2}
T_{\{\Lambda\Lambda\} \rightarrow H}=(2\pi)^4 i\delta ^4(k_f-k_i)~{\cal M}_{\{\Lambda \Lambda\} \rightarrow H},
\end{equation}
where $|{\cal M}|_{\{\Lambda \Lambda\} \rightarrow H}$ is the integral over the remaining internal coordinates in eq.~(\ref{matrixel}). In the case of pion or lepton emission, plane waves of the emitted particles should be included in the integrand. For brevity we use here the zero momentum transfer, $\vec {k} =0$ approximation, which we have checked holds with good accuracy; this is not surprising since typical momenta are $\,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, 0.3$ GeV.
Inserting the IK and BBG wavefunctions and performing the Gaussian integrals analytically, the overlap of the space wave functions becomes
\begin{eqnarray} \label{overlap}
|{\cal M}|_{\Lambda \Lambda \rightarrow H}&=&\frac {1}{\sqrt{4}} \left(\frac {2f}{1+f^2}\right )^6 \left( \frac{3}{2}\right)^{3/4}\left( \frac{\alpha _H}{\sqrt{\pi}} \right)^{3/2}\\
\nonumber &\times & N_{BG}\int^{R(A)} _{\frac{c}{k_F}} d^3 a\frac {u(a)}{a}e ^{-\frac {3}{4}\alpha_{H} ^2 a^2}
\end{eqnarray}
where the factor $1/\sqrt{4}$ comes from the probability that two nucleons are in a relative s-wave, and $f$ is the previously-introduced ratio of nucleon to $H$ radius; $\alpha _H=f~\alpha _B $. Since $N_{BG}$ has dimensions $V^{-1/2}$ the spatial overlap ${\cal M}_{\{\Lambda \Lambda\} \rightarrow H}$ is a dimensionless quantity, characterized by the ratio $f$, the Isgur-Karl oscillator parameter $\alpha_B$, and the value of the hard core radius. Fig.~\ref{figoverlap1} shows $|{\cal M}|^2_{\{\Lambda \Lambda\} \rightarrow H}$ calculated for oxygen nuclei, versus the hard-core radius, for a range of values of $f$, using the standard value of $\alpha_B= 0.406$ GeV for the IK model~\cite{bhaduri} and also $\alpha_B = 0.221$ GeV for comparison.
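As an illustration of how eq.~(\ref{overlap}) behaves, the following sketch evaluates the radial integral and the normalization of eq.~(\ref{Nbg}) numerically. It is not the calculation behind Fig.~\ref{figoverlap1}: the true Bethe-Goldstone $u(a)$ of Walecka's eq.~(41.31) is replaced by the free hard-core s-wave $\sin k(a-a_c)$, and the values of $f$, the pair's relative momentum, and the core radius are illustrative assumptions.

```python
import math

HBARC = 0.1973                  # GeV*fm
ALPHA_B = 0.406 / HBARC         # IK oscillator parameter, converted to fm^-1
F = 3.0                         # assumed size ratio f = r_N/r_H
ALPHA_H = F * ALPHA_B           # fm^-1
A_CORE = 0.45                   # assumed hard-core radius, fm
R_A = 1.07 * 16 ** (1.0 / 3.0)  # R(A) = 1.07 A^{1/3} fm for oxygen-16
K_REL = 0.7                     # assumed relative momentum of the pair, fm^-1

def u_over_a(a):
    # Stand-in for the Bethe-Goldstone u(a)/a: the free hard-core s-wave.
    # The true solution heals to the unperturbed wave faster than this.
    return math.sin(K_REL * (a - A_CORE)) / a if a > A_CORE else 0.0

def simpson(f, lo, hi, n=2000):
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(lo + i * h)
    return s * h / 3.0

# Normalization of eq. (Nbg): integral of |u/a|^2 over the nucleus equals one.
n_bg = 1.0 / math.sqrt(simpson(
    lambda a: 4 * math.pi * a * a * u_over_a(a) ** 2, A_CORE, R_A))

# Radial integral and prefactors of eq. (overlap).
radial = simpson(lambda a: 4 * math.pi * a * a * u_over_a(a)
                 * math.exp(-0.75 * ALPHA_H ** 2 * a * a), A_CORE, R_A)
m = (0.5 * (2 * F / (1 + F * F)) ** 6 * 1.5 ** 0.75
     * (ALPHA_H / math.sqrt(math.pi)) ** 1.5 * n_bg * radial)
print("f = %.1f, |M|^2 = %.1e" % (F, m * m))
```

The Gaussian factor cuts the integrand off within a few hundredths of a fm beyond the core radius, which is why the result is so sensitive to the hard-core radius and to the short-distance form of $u(a)$.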
\begin{figure}
\begin{center}
\includegraphics*[width=8cm]{listafig2.eps}
\end{center}
\caption{ Log$_{10}$ of $|{\cal M}|^2_{\Lambda \Lambda \rightarrow H}$
versus hard core radius in fm, for ratio $f=R_N/R_H$ and two
values of the Isgur-Karl oscillator parameter: $\alpha_B= 0.406$ GeV (thick lines)
and $\alpha_B = 0.221$ GeV (thin lines).}\label{figoverlap1}
\end{figure}
Fig.~\ref{figoverlap1} shows that, with the BBG wavefunction, the overlap is severely suppressed and that the degree of suppression is very sensitive to the core radius. This confirms that the physics we are investigating depends on the behavior of the nuclear wavefunction at distances at which it is not directly constrained experimentally. Fig. \ref{figoverlap2} shows a comparison of the overlap using the Miller-Spencer and BBG nuclear wavefunctions, as a function of the size of the $H$. One sees that the spatial overlap is strongly suppressed with both wavefunctions, although quantitatively the degree of suppression differs. We cannot readily study the sensitivity to the functional form of the baryonic wavefunctions, as there is no well-motivated analytic form we could use to do this calculation other than the IK wavefunction. However, by comparing the extreme choices of parameter $\alpha_B$ in the IK wavefunction, also shown in Figs. \ref{figoverlap1} and \ref{figoverlap2}, we explore the sensitivity of the spatial overlap to the shape of the hadronic wavefunctions. Fortunately, we will be able to use additional experimental information to constrain the wavefunction overlap so that our key predictions are insensitive to the overlap uncertainty.
\begin{figure}
\begin{center}
\includegraphics*[width=8cm]{listafig3.eps}
\end{center}
\caption{Log$_{10}$ of $|{\cal M}|^2_{\Lambda \Lambda
\rightarrow H}$ versus ratio $f=\alpha _H/\alpha _N$, calculated with BBG wave function with core radius $0.4$ and $0.5$ fm, and with the MS wave function. Thick (thin) lines are for $\alpha_B= 0.406$ GeV ($\alpha_B = 0.221$ GeV) in the IK wavefunction.} \label{figoverlap2}
\end{figure}
\subsection{Weak Interaction Matrix Elements}
\label{weakME}
Transition of a two nucleon system to off-shell $\Lambda\Lambda$ requires two strangeness changing weak reactions. Possible $\Delta S=1$ sub-processes to consider are a weak transition with emission of a pion or lepton pair and an internal weak transition. These are illustrated in Fig. \ref{figweaktrans} for a three quark system. We estimate the amplitude for each of the sub-processes and calculate the overall matrix element for transition to the $\Lambda \Lambda$ system as a product of the sub-process amplitudes. \\
\begin{figure}
\begin{center}
\includegraphics*[width=8cm]{internal.eps}
\end{center}
\caption{Some relevant weak transitions for $NN \rightarrow HX$} \label{figweaktrans}
\end{figure}
The matrix element for weak pion emission is estimated from the $\Lambda\rightarrow N \pi$ rate:
\begin{equation}
|{\cal M}|^2_{\Lambda\rightarrow N \pi}=\frac {1}{(2\pi )^4} ~ \frac{2m_{\Lambda} }{\Phi _2} \frac {1}{\tau _{\Lambda\rightarrow N\pi}}=0.8 \times 10^{-12} \quad {\rm GeV}^{2}.
\end{equation}
By crossing symmetry this is equal to the desired $|{\cal M}|^2_{N\rightarrow \Lambda \pi}$, in the approximation of momentum-independence which should be valid for the small momenta in this application. Analogously, for lepton pair emission we have
\begin{equation}
|{\cal M}|^2_{\Lambda\rightarrow N e\nu }=\frac {1}{(2\pi )^4}~\frac {2 m_{\Lambda} } {\Phi _3 }\frac{1}{ \tau _{\Lambda\rightarrow N e\nu }} =3 \times 10^{-12}.
\end{equation}
The matrix element for internal conversion, $(uds) \rightarrow (udd)$, is proportional to the spatial nucleon wave function when two quarks are at the same point:
\begin{equation}
|{\cal M}|_{\Lambda\rightarrow N} \approx <\psi _{\Lambda }|\delta^3 (\vec {r}_1-\vec {r}_2)|\psi _N > \frac {G_F \sin \theta _c \cos\theta _c}{m_q},
\end{equation}
where $m_q$ is the quark mass introduced in order to make the 4 point vertex amplitude dimensionless\cite{yaouanc}. The expectation value of the delta function can be calculated in the harmonic oscillator model to be
\begin{equation} \label{delta1}
<\psi _{\Lambda }|\delta^3 (\vec {r}_1-\vec{r}_2)|\psi _N >~ = \left(\frac {\alpha _B}{\sqrt {2\pi}}\right)^3=0.4 \times 10^{-2} ~~ {\rm GeV}^3.
\end{equation}
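The number in eq.~(\ref{delta1}) follows directly from the standard $\alpha_B$; a one-line arithmetic check:

```python
import math

ALPHA_B = 0.406   # GeV, standard IK oscillator parameter
delta = (ALPHA_B / math.sqrt(2 * math.pi)) ** 3   # GeV^3
print(delta)   # ~0.4e-2 GeV^3, as in eq. (delta1)
```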
The delta function term can also be inferred phenomenologically in the following way, as suggested in \cite{yaouanc}. The Fermi spin-spin interaction has a contact character, proportional to $\vec {\sigma}_1 \cdot \vec {\sigma}_2/m^2 _q~\delta^3(\vec {r}_1-\vec {r}_2)$, and therefore the delta function matrix element can be determined in terms of electromagnetic or strong hyperfine splitting:
\begin{eqnarray}
(m_{\Sigma ^0}-m_{\Sigma ^+} )-(m_n-m_p)&=&\alpha \frac {2\pi }{3m^2 _q}<\delta^3(\vec {r}_1-\vec {r}_2)>, \\
m_{\Delta} -m_N&=&\alpha _S \frac {8\pi }{3 m^2 _q} <\delta^3(\vec {r}_1-\vec {r}_2)>,
\end{eqnarray}
where $m_q$ is the quark mass, taken to be $m_N/3$. Using the first form to avoid the issue of scale dependence of $\alpha_S$ leads to a value three times larger than predicted by the method used in eq.~(\ref{delta1}), namely:
\begin{equation} \label{delta2}
<\psi_{\Lambda }|\delta^3 (\vec {r}_1-\vec {r}_2)|\psi _N> ~ =1.2\times 10^{-2} \quad {\rm GeV}^3.
\end{equation}
We average the expectation values in eq.~(\ref{delta1}) and eq.~(\ref{delta2}) and adopt
\begin{equation} \label{MdeltaS}
|{\cal M}|^2_{\Lambda\rightarrow N}=4.4 \times 10^{-15}.
\end{equation}
In this way we have roughly estimated all the matrix elements for the relevant sub-processes based on weak-interaction phenomenology.
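Eq.~(\ref{MdeltaS}) can be reproduced by inserting the averaged delta-function expectation value into the internal-conversion amplitude. The Fermi constant and Cabibbo-angle values below are standard inputs, assumed rather than quoted in the text:

```python
GF = 1.166e-5             # GeV^-2, Fermi constant (standard value, assumed)
SC = 0.2257 * 0.974       # sin(theta_c) * cos(theta_c) (standard value, assumed)
MQ = 0.938 / 3.0          # GeV, constituent quark mass m_N/3
DELTA = 0.5 * (0.4e-2 + 1.2e-2)   # GeV^3, average of eqs. (delta1) and (delta2)

m2 = (DELTA * GF * SC / MQ) ** 2
print(m2)   # close to the 4.4e-15 of eq. (MdeltaS)
```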
\subsection{Nuclear decay rates} \label{nuclifetime}
\subsubsection{Lifetime of doubly-strange nuclei}
\label{hypernuc}
The decay rate of a doubly-strange nucleus is:
\begin{eqnarray} \label{doubleform}
\Gamma_{A_{\Lambda \Lambda} \rightarrow A'_{H} \pi} &\approx&
K^2(2\pi )^4 \frac {m^2 _q }{2(2m_{\Lambda \Lambda})} \\
\nonumber &\times& \Phi _2 |{\cal M}|^2_{\Lambda \Lambda
\rightarrow H},
\end{eqnarray}
where $\Phi _2$ is the two-body final phase space factor, defined as in~\cite{PDG02}, and $m_{\Lambda \Lambda}$ is the invariant mass of the $\Lambda$'s, $\approx 2 m_{\Lambda}$. The factor $K$ contains the transition element in spin-flavor space. It can be estimated by counting the total number of flavor-spin states a $uuddss$ system can occupy, and taking $K^2$ to be the fraction of those states which have the correct quantum numbers to form the $H$. That gives $K^2\sim 1/1440$, and therefore we write $K^2 = (1440~\kappa_{1440})^{-1}$. Combining these factors we obtain the estimate for the formation time of an $H$ in a doubly-strange hypernucleus
\begin{equation} \label{tauform}
\tau_{\rm form} \equiv \tau_{A_{\Lambda \Lambda}\rightarrow A'_{H}
\pi}\approx \frac {3(7)~\kappa_{1440}~10^{-18}~ {\rm s} }{ |{\cal M}|^2_{\Lambda
\Lambda \rightarrow H}},
\end{equation}
where the phase space factor was evaluated for $m_H = 1.8 (2)$ GeV.
Fig.~\ref{figoverlap2} shows $|{\cal M}|^2_{\{\Lambda \Lambda\} \rightarrow H}$ in the range of $f$ and hard-core radius where its value is in the neighborhood of the experimental limits, for the standard choice $\alpha_B = 0.406$ GeV and comparison value $\alpha_B =0.221$ GeV. In order to suppress $\Gamma(A_{\Lambda\Lambda}\rightarrow A'_{H} X)$ sufficiently that some $\Lambda$'s in a double-$\Lambda$ hypernucleus will decay prior to formation of an $H$, we require $|{\cal M}|^2_{\Lambda \Lambda \rightarrow H}\,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, 10^{-8}$. If the nucleon hard core potential is used, this is satisfied even for relatively large $H$, {\it e.g.}, $r_H \,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, r_N/2.3~(r_N/2.1) $ for a hard-core radius $0.4$ ($0.5$) fm and can also be satisfied with the MS wave function as can be seen in Fig.~\ref{figoverlap2}. Thus the observation of single $\Lambda$ decay products from double-$\Lambda$ hypernuclei cannot be taken to exclude the existence of an $H$ with mass below $2 m_\Lambda$ unless it can be demonstrated that the wave function overlap is large enough.
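The $|{\cal M}|^2_{\Lambda \Lambda \rightarrow H}\lesssim 10^{-8}$ requirement follows from eq.~(\ref{tauform}) by demanding that $H$ formation be no faster than free $\Lambda$ decay; a minimal sketch, taking the $m_H = 1.8$ GeV prefactor and $\kappa_{1440}=1$:

```python
TAU_LAMBDA = 2.6e-10   # s, free Lambda lifetime
PREFACTOR = 3e-18      # s, numerator of eq. (tauform) for m_H = 1.8 GeV

m2_crit = PREFACTOR / TAU_LAMBDA   # overlap at which tau_form = tau_Lambda
print(m2_crit)   # ~1e-8
```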
\subsubsection{Conversion of $\Delta S=0$ nucleus to an $H$}
\label{convlifetimes}
If the $H$ is actually stable ($m_H < 2 m_p + 2 m_e$) two nucleons in a nucleus may convert to an $H$ and cause nuclei to disintegrate. $N N \rightarrow H X$ requires two weak reactions. If $m_H< 1740$ MeV two pion emission is allowed and the rate for the process $A_{NN}\rightarrow A'_{H}\pi \pi$, is approximately
\begin{eqnarray}
\Gamma_{A_{NN} \rightarrow A'_{H} \pi \pi }&\approx &K^2 \frac
{(2\pi )^4} {2 (2m_{N}) }~ \Phi_3\\ \nonumber
&\times & \left(
\frac { |{\cal M}|_{N\rightarrow \Lambda \pi} ^2 |{\cal
M}|_{\Lambda \Lambda \rightarrow H} } {(2m_{\Lambda }-m_H )^2
}\right) ^2
\end{eqnarray}
where the denominator is introduced to correct the dimensions in a way suggested by the $\Lambda \Lambda$ pole approximation. Since other dimensional parameters relevant to this process, {\it e.g.}, $m_q= m_N/3$ or $\Lambda_{QCD}$, are comparable to $2m_{\Lambda }-m_H$ and we are only aiming for an order-of-magnitude estimate, any of them could equally well be used. The lifetime for nuclear disintegration with two pion emission is thus
\begin{equation}
\tau_{A_{NN}\rightarrow A'_{H}\pi \pi}\approx \frac
{40~\kappa_{1440}}{ |{\cal M}|^2_{\Lambda \Lambda \rightarrow H}}
\quad {\rm yr},
\end{equation}
taking $m_H = 1.5$ GeV in the phase space factor. For the process with one pion emission and an internal conversion, our rate estimate is
\begin{eqnarray}
\Gamma_{A_{NN}\rightarrow A'_{H}\pi}&\approx &K^2\frac {(2\pi
)^4}{2 (2m_{N})}~\Phi_2 \\ \nonumber
&\times &(|{\cal
M}|_{N\rightarrow \Lambda \pi} |{\cal M}|_{N\rightarrow \Lambda}
|{\cal M}|_{\Lambda \Lambda \rightarrow H})^2,
\end{eqnarray}
leading to a lifetime, for $m_H = 1.5$ GeV, of
\begin{equation}
\tau_{A_{NN}\rightarrow A'_{H} \pi}\approx \frac
{3~\kappa_{1440}}{ |{\cal M}|^2_{\Lambda \Lambda \rightarrow H}}
\quad {\rm yr}.
\end{equation}
If $m_H \,\raisebox{-0.13cm}{$\stackrel{\textstyle>}{\textstyle\sim}$}\, 1740$ MeV, pion emission is kinematically forbidden and the relevant final states are $e^+ \nu$ or $\gamma$; we now calculate these rates. For the transition $A_{NN}\rightarrow A'_H e\nu$, the rate is
\begin{eqnarray}
\Gamma_{A_{NN} \rightarrow A'_{H}e\nu }&\approx &K^2\frac {(2\pi
)^4}{2 (2m_{N})}\Phi_3 \\ \nonumber &\times &(|{\cal
M}|_{N\rightarrow \Lambda e\nu} |{\cal M}|_{N\rightarrow \Lambda}
|{\cal M}|_{\Lambda \Lambda \rightarrow H})^2.
\end{eqnarray}
In this case, the nuclear lifetime is
\begin{equation} \label{enu}
\tau_{A_{NN}\rightarrow A'_{H} e\nu}\approx \frac {\kappa_{1440}}{|{\cal M}|^2_{\Lambda \Lambda \rightarrow H}}~10^{5} \quad {\rm yr},
\end{equation}
taking $m_H = 1.8$ GeV. For $A_{NN}\rightarrow A'_H\gamma$, the rate is approximately
\begin{eqnarray}
\Gamma_{A_{NN}\rightarrow A'_{H}\gamma }&\approx &K^2 (2\pi )^4
\frac {\alpha _{EM} m^2 _q}{2 (2m_{N})} \\ \nonumber &\times &
\Phi_2(|{\cal M}|^2 _{N\rightarrow \Lambda} |{\cal M}|_{\Lambda
\Lambda \rightarrow H})^2,
\end{eqnarray}
leading to the lifetime estimate
\begin{equation}
\tau_{A_{NN}\rightarrow A'_{H}\gamma}\approx \frac{2~\kappa_{1440}}{|{\cal M}|^2_{\Lambda\Lambda\rightarrow H}}~10^6 \quad {\rm yr},
\end{equation}
for $m_H = 1.8$ GeV.
One sees from Fig.~\ref{figoverlap1} that a lifetime bound of $\,\raisebox{-0.13cm}{$\stackrel{\textstyle>}{\textstyle\sim}$}\, {\rm few}~10^{29}$ yr is not a very stringent constraint on this scenario if $m_H$ is large enough that pion final states are not allowed. {\it E.g.}, with $\kappa_{1440} = 1$ the rhs of eq.~(\ref{enu}) is $\,\raisebox{-0.13cm}{$\stackrel{\textstyle>}{\textstyle\sim}$}\, {\rm few}~10^{29}$ yr, for standard $\alpha_B$, a hard core radius of $0.45$ fm, and $r_H \approx 1/5~r_N$---in the middle of the range expected based on the glueball analogy. If $m_H$ is light enough to permit pion production, experimental constraints are much more powerful. We therefore conclude that $m_H \,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, 1740$ MeV is disfavored and is likely to be excluded, depending on how strong a limit SuperK can set.
\begin{table}[hpb]
\caption{The final particles and momenta for nucleon-nucleon transitions to $H$ in nuclei. For the 3-body final states marked with *, the momentum given is for the configuration with $H$ produced at rest.} \label{t1}
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
mass & final state & final momenta & partial lifetime \\
$m_H$ [GeV] & $A^\prime$ $H$ + & $p$ [MeV] & $ \times
K^2|{\cal M}|^2 _{\Lambda \Lambda \rightarrow H}$ [yr] \\ \hline
$1.5$ & $\pi $ & $318$ & $2~10^{-3}$ \\ \hline
$1.5$ & $\pi \pi$ & $170$* & $0.03$ \\ \hline
$1.8$ & $e \nu$ & $48$* & $70$ \\ \hline
$1.8$ & $\gamma$ & $96$ & $2~10^3$ \\ \hline
\end{tabular}
\end{center}
\end{table}
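The last column of Table~\ref{t1} can be unscaled for any assumed $K^2$ and overlap. For the $e\nu$ row, with $\kappa_{1440}=1$ and an illustrative $|{\cal M}|^2 = 10^{-8}$, this reproduces eq.~(\ref{enu}):

```python
K2 = 1.0 / 1440.0   # spin-flavor factor with kappa_1440 = 1
M2 = 1e-8           # illustrative overlap |M|^2_{Lambda Lambda -> H}

tau_enu = 70.0 / (K2 * M2)   # yr, unscaled e-nu entry of Table t1
tau_eq = 1e5 / M2            # yr, eq. (enu) with kappa_1440 = 1
print(tau_enu, tau_eq)       # the two estimates agree to ~1%
```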
\subsection{Lifetime of an Unstable $H$} \label{metastable}
If $2 m_N \,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, m_H < m_N + m_\Lambda$, the $H$ is not stable but it proves to be very long lived if its wavefunction is compact enough to satisfy the constraints from doubly-strange hypernuclei discussed in section \ref{hypernuc}. The limits on nuclear stability discussed in the previous section do not apply here because nuclear disintegration to an $H$ is not kinematically allowed.
\subsubsection{Wavefunction Overlap}
To calculate the decay rate of the $H$ we start from the transition matrix element eq.~(\ref{matrixel}). In contrast to the calculation of nuclear conversion rates, the outgoing nucleons are asymptotically plane waves. Nonetheless, their repulsive interaction suppresses the relative wavefunction at short distances much as in a nucleus. It is instructive to compute the transition amplitude using two different approximations. First, we treat the nucleons
as plane waves so the spatial amplitude is:
\begin{eqnarray}
T_{H\rightarrow \Lambda \Lambda }&=&2 \pi i\delta (E_{\Lambda
\Lambda}-E_H) \int \prod _{i=a,b}
d^3 \rho^i d^3 \lambda ^i d^3 a~ d^3 R_{CM} \nonumber \\
&\times & \psi _H \psi ^{*a} _{\Lambda}~\psi^{*b} _{\Lambda}~
e^{i({\vec k}^a _N+{\vec k}^b _N-{\vec k}_{H}){\vec R}_{CM}}.
\end{eqnarray}
The integration over ${\vec R}_{CM}$ gives the usual 4D $\delta$ function. Using the Isgur-Karl wave function and performing the remaining integrations leading to $|{\cal M}|_{H\rightarrow \Lambda \Lambda}$, as in eq.~(\ref{matrixel2}), the amplitude is:
\begin{eqnarray}
\label{planewaves}
|{\cal M}|_{H \rightarrow \Lambda
\Lambda}&=&\left (\frac {2f}{1+f^2}\right )^6 \left( \frac{3}{2}
\right)^{3/4}\left( \frac{\alpha _H}{\sqrt{\pi}} \right)^{3/2}\\
\nonumber &\times & \int^{\infty} _{0} d^3 a~ e ^{-\frac
{3}{4}\alpha_{H} ^2 a^2-i\frac{{\vec k}^a _N-{\vec k}^b
_N}{2}{\vec a}}\\ \nonumber &=& \left( \frac{8}{3\pi}
\right)^{3/4} \left (\frac {2f}{1+f^2}\right )^6 \alpha ^{-3/2}
_H~e^{-\frac{({\vec k}^a _N-{\vec k}^b _N)^2}{12~\alpha ^2 _H}}.
\end{eqnarray}
The amplitude depends on the size of the $H$ through the factor $f= r_N/r_H$. Note that the normalization $N_{BG}$ in the analogous result eq.~(\ref{overlap}), which comes from the Bethe-Goldstone wavefunction of $\Lambda$'s in a nucleus, has been replaced in this calculation by the plane wave normalization factor $1/\sqrt{V}$ which cancels with the volume factors in the phase space when calculating transition rates.
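The $a$-integration in eq.~(\ref{planewaves}) is the three-dimensional Fourier transform of a Gaussian, $\int d^3a\, e^{-\beta a^2 - i\vec q\cdot\vec a} = (\pi/\beta)^{3/2} e^{-q^2/4\beta}$ with $\beta = \frac{3}{4}\alpha_H^2$ and $\vec q = ({\vec k}^a_N - {\vec k}^b_N)/2$, which reproduces the quoted exponent $e^{-({\vec k}^a_N-{\vec k}^b_N)^2/12\alpha_H^2}$. A numerical spot check of the identity, at illustrative $\beta$ and $q$:

```python
import math

BETA = 0.75 * 1.5 ** 2   # (3/4) alpha_H^2 with illustrative alpha_H = 1.5 GeV
Q = 0.8                  # |k_N^a - k_N^b| / 2, illustrative, GeV

def simpson(f, lo, hi, n=4000):
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(lo + i * h)
    return s * h / 3.0

# Angular integration reduces the 3D transform to a radial sinc integral.
num = 4 * math.pi * simpson(
    lambda a: a * a * math.exp(-BETA * a * a) * math.sin(Q * a) / (Q * a),
    1e-9, 10.0)
exact = (math.pi / BETA) ** 1.5 * math.exp(-Q * Q / (4 * BETA))
print(num, exact)   # the two agree
```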
Transition rates calculated using eq.~(\ref{planewaves}) provide an upper limit on the true rates, because the calculation neglects the repulsion of two nucleons at small distances. To estimate the effect of the repulsion between nucleons we again use the Bethe-Goldstone solution with the hard core potential. It has the desired properties of vanishing inside the hard core radius and rapidly approaching the plane wave solution away from the hard core. As noted in section~\ref{BBG}, $N_{BG}\rightarrow 1/\sqrt{V}$, for $a\rightarrow \infty$. Therefore, we can write the transition amplitude as in eq.~(\ref {overlap}), with the normalization factor $1/\sqrt{V}$ canceled with the phase-space volume element:
\begin{eqnarray} \label{ovlapfree}
|{\cal M}|_{H \rightarrow \Lambda \Lambda }&=&\left
(\frac {2f}{1+f^2}\right )^6 \left( \frac{3}{2}
\right)^{3/4}\left( \frac{\alpha _H}{\sqrt{\pi}} \right)^{3/2}\nonumber\\
&\times & \int^{\infty} _{0} d^3 a \frac {u(a)}{a}e ^{-\frac {3}{4}\alpha_{H} ^2 a^2}.
\end{eqnarray}
\begin{table}[hpb]
\caption{$|{\cal M}|_{H \rightarrow \Lambda \Lambda }^2 $ in ${\rm
GeV}^{-3}$ for different values of $f$ (rows) and nuclear wavefunction (columns), using the standard value $\alpha_{B1}=0.406$ GeV and the comparison value $\alpha_{B2}=0.221$ GeV in the IK wavefunction of the quarks. } \label{t2}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
& \multicolumn{2}{c|} {BBG, 0.4 fm} & \multicolumn{2}{c|} {BBG, 0.5 fm} & \multicolumn{2}{c|} {MS} \\\cline{2-7}
& $\alpha_{B1}$ & $\alpha_{B2}$ & $\alpha_{B1}$ & $\alpha_{B2}$ & $\alpha_{B1}$ & $\alpha_{B2}$ \\ \hline
4 & $~6~10^{-14}~$ & $~6~10^{-8}~$ & $~7~10^{-18}~$ & $~4~10^{-9}~$ & $~1~10^{-8}~$ & $~8~10^{-7}~$ \\ \hline
3 & $~5~10^{-9}~$ & $3~10^{-5}$ & $~3~10^{-11}~$ & $~7~10^{-6}~$ & $~2~10^{-6}~$ & $~9~10^{-5}~$ \\ \hline
2 & $~1~10^{-4}~$ & $~0.02~$ & $~1~10^{-5}~$ & $~0.01~$ & $~9~10^{-4}~$ & $~0.03~$ \\ \hline
\end{tabular}
\end{center}
\end{table}
This should give a more realistic estimate of the decay rates. Table~\ref{t2} shows the overlap values for a variety of choices of $r_H$, hard-core radii, and $\alpha_B$; also included are the results obtained with the MS wavefunction.
\subsubsection{Empirical Limit on Wavefunction Overlap}
As discussed in section~\ref{hypernuc}, the $H$ can be lighter than $2$ $\Lambda$'s without conflicting with hypernuclear experiments if it is sufficiently compact, as suggested by some models. The constraint imposed by the hypernuclear experiments can be translated into an empirical upper limit on the wavefunction overlap between an $H$ and two baryons. Using eq.~(\ref{tauform}) for the formation time $\tau_{\rm form}$ of an $H$ in a double-$\Lambda$ oxygen-16 hypernucleus we have
\begin{equation}
\label{MBBHlim} |{\cal M}|^2_{\Lambda \Lambda \rightarrow H} = 7~
10^{-8}~ \frac{\kappa_{1440}}{f_{\rm form}} \left( \frac{\tau_{\rm
form}}{10^{-10}{\rm s}} \right)^{-1},
\end{equation}
where $f_{\rm form} =\frac{\Phi_2(m_H)}{\Phi_2(m_H = 2 {\rm GeV})}$ is the departure of the phase space factor for hypernuclear $H$ formation appearing in eq.~(\ref{doubleform}), from its value for $m_H = 2$ GeV. By crossing symmetry the overlap amplitudes $|{\cal M}|_{H \rightarrow \Lambda \Lambda }$ and $|{\cal M}|_{\Lambda \Lambda \rightarrow H}$ only differ because the $\Lambda$'s in the former are asymptotically plane waves while for the latter they are confined to a nucleus; comparing eqns.~(\ref{ovlapfree}) and (\ref{overlap}) we obtain:
\begin{equation}
\label{Mreln} |{\cal M}|^2_{H \rightarrow\Lambda\Lambda}=\frac{4}{N^2_{BG}} |{\cal M}|^2_{\Lambda \Lambda \rightarrow H}.
\end{equation}
For oxygen-16, $\frac {N^2 _{BG}}{4} \approx \frac{1}{5~10^{4}}$ GeV$^3$. Eqns.~(\ref{MBBHlim}) and (\ref{Mreln}) then give an upper limit on the overlap for the
lifetime calculations of the next section.
\subsubsection{ Decay rates and lifetimes}
Starting from $|{\cal M}|_{H \rightarrow \Lambda \Lambda}$ we can calculate the rates for $H$ decay in various channels, as we did for nuclear conversion in the previous section. The rate of $H\rightarrow nn$ decay is
\begin{eqnarray}\label{HtoNN}
\Gamma_{H\rightarrow nn}&\approx &K^2\frac {(2\pi )^4m^5 _q}{2~
m_{H}}~\Phi_2 (m_H)\\ \nonumber &\times & ( |{\cal M}|^2
_{N\rightarrow \Lambda} |{\cal M}|_{H \rightarrow \Lambda
\Lambda})^2,
\end{eqnarray}
where $\Phi _2$ is the phase space factor defined for $H\rightarrow nn$ normalized as in~\cite{PDG02}. Using eqs.~(\ref{Mreln}) and (\ref{MBBHlim}), the lifetime for $H\rightarrow nn$ is
\begin{equation}\label{tau2wk}
\tau_{H\rightarrow NN} \approx 9(4) ~10^{7}~\mu_0 ~{\rm yr},
\end{equation}
for $m_H = 1.9 ~(2)$ GeV, where $\mu_0 \gtrsim 1$ is defined to be $(\tau_{\rm form} f_{\rm form})/(10^{-10}{\rm s})\times (5~10^4~N^2 _{BG})/4 $. The $H$ is therefore cosmologically stable, with a lifetime longer than the age of the Universe, if $|{\cal M}|^2_{\Lambda \Lambda \rightarrow H}$ is $10^{2-3}$ times smaller than the upper limit allowed by the double hypernuclear constraints. As can be seen in Fig.~\ref{figoverlap2}, this corresponds to $r_H\,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, 1/3~r_N$ in the IK model discussed above. Note that $\kappa_{1440}$ and the sensitivity to the wavefunction overlap have been eliminated by using $\tau_{\rm form}$.
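As a quick numerical cross-check of the stability claim (a sketch of ours, assuming only the numbers already quoted above and an age of the Universe of $1.4\times 10^{10}$ yr):

```python
# Our sketch: tau(H -> nn) scales as mu_0, i.e. inversely with the overlap
# |M|^2_{Lambda Lambda -> H}.  Suppressing the overlap by 10^2-10^3 relative
# to the hypernuclear upper limit makes tau comparable to or longer than
# the age of the Universe.
tau_base_yr = 9e7          # eq. (tau2wk) with mu_0 = 1, m_H = 1.9 GeV
age_universe_yr = 1.4e10   # assumed age of the Universe

for suppression in (1e2, 1e3):     # overlap 10^2-10^3 below the limit
    tau = tau_base_yr * suppression
    print(f"suppression {suppression:.0e}: tau = {tau:.0e} yr, "
          f"tau/age = {tau / age_universe_yr:.1f}")
```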
If $m_N + m_\Lambda~(2.05~{\rm GeV}) < m_H < 2 m_\Lambda~(2.23~{\rm GeV})$, $H$ decay requires only a single weak interaction, so the rate in eq.~(\ref{HtoNN}) must be divided by $|{\cal M}|^2 _{N\rightarrow \Lambda}$, given in eqn.~(\ref{MdeltaS}). Thus we have
\begin{equation} \label{tau1wk}
\tau_{H\rightarrow N \Lambda} \approx 10~\mu_0 ~{\rm s}.
\end{equation}
Finally, if $m_H > 2 m_\Lambda~(2.23~{\rm GeV})$, there is no weak interaction suppression and
\begin{equation} \label{tau0wk}
\tau_{H\rightarrow \Lambda \Lambda} \approx 4~10^{-14}~\mu_0 ~{\rm s}.
\end{equation}
Equations~(\ref{tau2wk})-(\ref{tau0wk}) with $\mu_0 = 1$ give the lower bound on the $H$ lifetime, depending on its mass. This result for the $H$ lifetime differs sharply from the classic calculation of Donoghue, Golowich, and Holstein~\cite{donoghue:Hlifetime}, because we rely on experiment to put an upper limit on the wavefunction overlap $|{\cal M}|_{H \rightarrow \Lambda \Lambda}^2$. Our treatment of the color-flavor-spin and weak interaction parts of the matrix elements is approximate, but it should roughly agree with the more detailed calculation of ref.~\cite{donoghue:Hlifetime}, so the difference in lifetime predictions indicates that the spatial overlap is far larger in their bag model than using the IK and Bethe-Goldstone or Miller-Spencer wavefunctions with reasonable parameters consistent with the hypernuclear experiments. The bag model is not a particularly good description of sizes of hadrons, and in the treatment of~\cite{donoghue:Hlifetime} the $H$ size appears to be fixed implicitly to some value which may not be physically realistic. Furthermore, it is hard to tell whether their bag model analysis gives a good accounting of the known hard core repulsion between nucleons. As our calculation of previous sections shows, these are crucial parameters in determining the overlap. The calculation of the weak interaction and color-flavor-spin matrix elements in ref.~\cite{donoghue:Hlifetime} could be combined with our phenomenological approach to the spatial wavefunction overlap to provide a more accurate yet general analysis. We note that due to the small size of the $H$, the p-wave contribution should be negligible.
\section{Binding of flavor singlets to nuclei} \label{binding}
After deriving the constraints on the $H$ implied by nuclear transition processes, we turn to the calculation of $H$ nuclear binding, following ref.~\cite{fz:nucbind}. In section~\ref{expts} we concluded that the experimental constraints on exotic nuclei place strong limits on the abundance of the $H$ if it binds to nuclei. In this section we explore the binding of a flavor singlet to nuclei. We summarize the theory of nuclear binding in subsection~\ref{nucth}, to set the framework for our computation and to make clear its limitations. In subsection~\ref{calc} we analyze the binding of a flavor-singlet scalar to nuclei, and calculate the minimum values of the coupling constants needed for binding. Corresponding limits from nucleon-$H$ scattering are given in subsection~\ref{scat}. Other flavor singlets are considered in subsection~\ref{R0}. We summarize the results and give conclusions in section~\ref{summaryEX}.
\subsection{Nuclear binding-general}
\label{nucth}
QCD theory has not yet progressed enough to predict the two-nucleon interaction {\it ab initio}. Models of nuclear binding are therefore constructed semi-phenomenologically and rely heavily on experimental input.
The long range part of the nucleon-nucleon interaction (for distances $r\geq 1.5$ fm) is well explained by the exchange of pions, and it is given by the one pion exchange potential (OPEP). The complete interaction potential $v_{ij}$ is given by $v^{\pi}_{ij}+v^{R}_{ij}$, where $v^{R}_{ij}$ contains all the other (heavy meson, multiple meson and quark exchange) parts. In the one boson exchange (OBE) models the potential $v^{R}_{ij}$ arises from the following contributions:
\begin{itemize}
\item In the intermediate region (at distances around $r\sim 1$ fm) the vector meson ($\rho ,\omega$) exchanges are important. A scalar meson denoted $\sigma$ was introduced to provide the attractive potential needed to cancel the repulsion coming from the dominant vector $\omega$ meson exchange in this region. Moreover, a spin-orbit part of the potential, from both $\sigma$ and $\omega$ exchange, is necessary to account for the splitting of the ${}^3P$ phase shifts in $NN$ scattering.
\item At shorter scales ($r\,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, 1$ fm), the potential is dominated by the repulsive vector meson ($\rho ,\omega$) exchanges.
\item For $r\,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, 0.5$ fm a phenomenological hard core repulsion is introduced.
\end{itemize}
However, many of these OBE models required unrealistic values for the meson-nucleon coupling constants and meson masses. Within this limitation, OBE theory predicts the properties of the deuteron and of two-nucleon scattering, although it cannot reproduce the data with high accuracy.
A much better fit to the data is obtained by using phenomenological potentials. In the early 1990's the Nijmegen group~\cite{nij90} extracted data on elastic $NN$ scattering and showed that all $NN$ scattering phase shifts and mixing parameters could be determined quite accurately. $NN$ interaction models which fit the Nijmegen database with a $\chi ^2/N_{data}\sim 1$ are called 'modern'. They include Nijmegen models \cite{nijmod}, the Argonne $v_{18}$~\cite{argonmod} and CD-Bonn~\cite{cdbonnmod} potentials. These potentials have several tens of adjustable parameters, and give precision fits to a wide range of nucleon scattering data.
The construction of 'modern' potentials can be illustrated with the Nijmegen potential. It is an OBE model based on Regge pole theory, with additional contributions to the potential from the exchange of a Pomeron and of the $f$, $f'$ and $A_2$ trajectories. These contributions give an appreciable repulsion in the central region, playing a role analogous to the soft- or hard-core repulsion needed in semi-phenomenological and OBE models.
Much less data exists on hyperon-nucleon interactions than on $NN$ interactions, and therefore those models are less constrained. For example the extension of the Nijmegen potential to the hyper-nuclear (YN) sector~\cite{nijYN} leads to under-binding for heavier systems. The extension to the $\Lambda \Lambda$ and $\Xi N$ channels cannot be done without the introduction of extra free parameters, and there are no scattering data at present for their determination.
The brief review above shows that the description of baryon binding is a difficult and subtle problem in QCD. Detailed experimental data were needed in order to construct models which can describe observed binding. In the absence of such input data for the $H$ analysis, we must use a simple model based on scalar meson exchange described by the Yukawa potential, neglecting spin effects at the nucleon vertex in a first approximation. We know from the inadequacy of this approach in the $NN$ system that it can only be used as a crude guide. However, since the strength of the couplings which would be needed for the $H$ to bind to light nuclei is very large compared to their expected values, we conclude that binding is unlikely. Thus limits on exotic nuclei cannot be used to exclude the existence of an $H$ or other compact flavor-singlet scalar or spin-1/2 hadron.
\subsection{Binding of a flavor singlet to nuclei}
\label{calc}
The $H$ cannot bind through one pion exchange because of parity and also flavor conservation. The absorption of a pion by the $H$ would lead to an isospin $I=1$ state with parity $(-1)^{J+1}$, which could be $\Lambda \Sigma ^0$ or heavier $\Xi p$ composite states. These states have mass $\approx 0.1$ GeV higher than the mass of the $H$ (for $m_{\Lambda}+m_N\,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, m_H\,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, 2m_{\Lambda})$, which introduces a strong suppression in $2^{nd}$ order perturbation theory. Moreover, the baryons in the intermediate state must have relative angular momentum $\rm{L}=1$, in order to have odd parity as required; this introduces an additional suppression. Finally, production of $\Lambda \Sigma ^0$ or $\Xi N$ states is further suppressed due to the small size of the $H$, as explained in \S\ref{NucStab}. Due to all these effects, we conclude that the contribution of one or two pion exchange to $H$ binding is negligible.
The first order process can proceed only through the exchange of a flavor-singlet scalar meson or a glueball. The lightest scalar meson is the f(400-1100) (also called $\sigma$). The mass of the glueball is expected to be around $1.5$ GeV. In Born approximation, the Yukawa interaction leads to an attractive Yukawa potential between the $H$ and a nucleon
\begin{equation}
V(r)=-\frac {gg'}{4\pi} \frac{1}{r}e^{-\mu r},
\end{equation}
where $\mu$ is the mass of the exchanged singlet boson $s$ ($\sigma$ or glueball) and $gg'$ is the product of the $s$-$H$ and $s$-nucleon coupling constants. The potential of interaction of an $H$ at position $\vec r$ with a nucleus, assuming a uniform distribution of nucleons $\rho =\frac {A}{\rm V}$ inside the nuclear radius $R$, is then
\begin{equation}
V=-\frac {gg'}{4\pi}\frac{A}{{\rm V}} \int \frac{e^{ -\mu |\vec {r} -\vec {r'} |}}{|\vec{r} -\vec {r'} |}d^3 \vec{r'},
\end{equation}
where A is the number of nucleons, ${\rm V}$ is the volume of the nucleus and $\vec {r} $ is the position vector of the $H$. After integration over the angles the potential is
\begin{equation} \label{3}
V=-\frac {3}{2} \frac {gg'}{4\pi}
\frac{1}{(1.35~{\rm fm}~\mu)^3} f(r),
\end{equation}
where we used $R=1.35 A^{1/3}$ fm;
\begin{equation} \label{5}
f(r) = \left \{ \begin{array}{ll}
2 \mu \left[ 1 -(1+\mu R)~e^{-\mu R}~\frac{\sinh [\mu r]}{\mu r} \right] & r\le R \\
2\mu \left[ \mu R\cosh [\mu R]-\sinh [\mu R] \right] \frac
{e^{-\mu r}}{\mu r} & r\ge R.
\end{array} \right.
\end{equation}
Throughout, we use $ \hbar = c = 1$ when convenient.
Fig.~\ref{potentialfig} shows the potential the nucleus presents to the $H$ for $A=50$, taking the mass of the exchanged boson to be $\mu =0.6$ and $1.5$ GeV. The depth of the potential is practically independent of the number of nucleons and becomes shallower with increasing scalar boson mass $\mu $.
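These features of the potential can be checked numerically; the sketch below is ours (the function name `depth` and the conversion $\hbar c = 0.1973$ GeV fm are our assumptions), evaluating $|V(0)|$ from eqs.~(\ref{3}) and (\ref{5}):

```python
import math

# Our sketch (not from the text): depth |V(0)| of the potential of
# eqs. (3) and (5) seen by the H at the center of a nucleus, with
# f(0) = 2 mu [1 - (1 + mu R) exp(-mu R)] and hbar*c = 0.1973 GeV fm.
HBARC = 0.1973  # GeV fm

def depth(mu_gev, A, c=1.0):
    """|V(0)| in GeV for coupling c = gg'/4pi and mass number A."""
    R_fm = 1.35 * A ** (1.0 / 3.0)
    mu_fm = mu_gev / HBARC               # mu in fm^-1
    x = mu_fm * R_fm                     # dimensionless mu*R
    f0 = 2.0 * mu_gev * (1.0 - (1.0 + x) * math.exp(-x))
    return 1.5 * c * f0 / (1.35 * mu_fm) ** 3

# depth is practically independent of A, and shallower for larger mu:
for mu in (0.6, 1.5):
    print(mu, [round(1e3 * depth(mu, A), 1) for A in (20, 50, 100)], "MeV")
```

For large $\mu R$ the exponential term drops out, so the depth saturates at $3c/((1.35~{\rm fm})^3\mu^2)$, independent of $A$, as stated above.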
\begin{figure}
\begin{center}
\includegraphics [width=8cm]{nucbindV.eps}
\end{center}
\caption{Potential in GeV, for $\frac {gg'}{4\pi}$=1, A=50 and
$\mu=0.6~(\rm {dashed})$ or $\mu=1.5$ GeV (solid) as a function
of distance r.} \label{potentialfig}
\end{figure}
Note that the Born approximation is applicable at low energies and for small coupling constants, so it may not be valid for $H$ binding. The Born approximation is valid when
\begin{equation}
\frac {m}{\mu } \frac{gg'}{4\pi}\ll 1,
\end{equation}
where $m$ is the reduced mass and $\mu $ the mass of the exchanged particle. As we shall see, this condition is actually not satisfied for values of $g g'$ which assure binding for the $H$-mass range of interest. This underlines the fact that no good first principle approach to nuclear binding is available at present.
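A one-line check (ours, assuming a reduced mass of order $1.5$ GeV and the critical couplings for light nuclei quoted later in this section) confirms that the Born condition indeed fails:

```python
# Our check (assumed reduced mass m ~ 1.5 GeV; c_* values as quoted in
# the text for light nuclei): the Born parameter (m/mu)(gg'/4pi) is
# O(1), not << 1, for couplings at the binding threshold.
m_red = 1.5                                   # GeV, assumption
for mu, c_star in [(0.6, 0.27), (1.0, 0.73), (1.5, 1.65)]:
    born = (m_red / mu) * c_star
    print(f"mu = {mu} GeV: (m/mu) gg'/4pi = {born:.2f}")
```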
We can now calculate the value of $c_*=\left( \frac{gg'}{4\pi}\right) _*$ for which the potential is equal to the minimum value needed for binding; in square well approximation this is given by
\begin{equation}
V_{min}=\frac{\pi ^2}{8R^2m}.
\end{equation}
Fig.~\ref{cstarfig} shows the dependence of $c_*$ on the nuclear size, for several values of the exchanged-particle mass $\mu$. The critical value $c_*$ below which the $H$ does not bind decreases with increasing $H$ mass and nuclear size, and increases with increasing $\mu$.
\begin{figure}
\begin{center}
\includegraphics*[width=8cm]{nucbindc.eps}
\end{center}
\caption{ Critical value $c_*$ of the coupling constant product versus nuclear size needed for the $H$ to just bind, for $\mu$[GeV]$=0.7$ (dotted), $1.3$ (dashed) and $1.5$ (solid).} \label{cstarfig}
\end{figure}
The $H$ does not bind to light nuclei with $A\le4$ as long as the product of couplings satisfies
\begin{equation}
c \leq c_* = [0.27,0.73,1.65],~{\rm for}~\mu=[0.6,1,1.5]~{\rm GeV},
\end{equation}
where $c = g_{NN\sigma}~ g_{HH\sigma}/(4\pi)$ or $g_{NNG}~ g_{HHG}/(4 \pi)$. The $H$ will not bind to heavier nuclei if
\begin{equation}
c\leq c_* = [0.019,0.054,0.12],~{\rm for}~ \mu =[0.6,1,1.5]~{\rm GeV}.
\end{equation}
In the next sections we will compare these values to expectations and limits.
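The quoted thresholds can be reproduced from the closed form obtained by equating the potential depth to $V_{min}$; the sketch below is ours, and the reduced masses ($\sim 1.5$ GeV for $A=4$, $\sim 2$ GeV for heavy nuclei with $A\sim 140$) are assumptions tuned to match the text:

```python
import math

# Our sketch: c_* = pi^2 (1.35 fm) mu^2 / (24 A^(2/3) m), obtained by
# equating the depth ~3 c mu / (1.35 fm mu)^3 of eq. (3) with the
# square-well threshold V_min = pi^2 / (8 R^2 m).
FM = 1.35 / 0.1973                 # 1.35 fm in GeV^-1 (hbar*c assumed)

def c_star(mu, A, m_red):
    return math.pi ** 2 * FM * mu ** 2 / (24.0 * A ** (2.0 / 3.0) * m_red)

# Light nuclei (A = 4, assumed reduced mass ~1.5 GeV):
print([round(c_star(mu, 4, 1.5), 2) for mu in (0.6, 1.0, 1.5)])
# Heavy nuclei (assumed A ~ 140, reduced mass ~2 GeV):
print([round(c_star(mu, 140, 2.0), 3) for mu in (0.6, 1.0, 1.5)])
```

With these assumed reduced masses the formula reproduces the quoted values to within a few percent.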
It should also be noted that binding requires the product of coupling constants, $g g'$ to be positive and this may not be the case. Even in the case of hyperons, experimental information was necessary to decide whether the $\Xi$ has a relative positive coupling~\cite{dover}.
\subsection{Limits on $c\,m$ from nucleon-$H$ elastic scattering}
\label{scat}
The nucleon-$H$ elastic scattering cross section is expected to be very small, due to the compact size of the $H$ and the suppression of color fluctuations on scales $\,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, 1 ~{\rm GeV}^{-1}$ in the nucleon. Ref.~\cite{f:StableH} estimates $\sigma_{HN} \,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, 10^{-3}$ mb. This can be translated to an estimated upper limit on the product $c ~m$ which determines the potential presented to the $H$ by a nucleus, as follows. In the one boson exchange model, the elastic $H$-$N$ cross section due to the $\sigma$- or glueball-mediated Yukawa interaction is given by
\begin{equation}
\frac{d\sigma }{d\Omega }= (2mc)^2 \frac {1}{(2p^2(1-\cos \theta
)+\mu ^2)^2}.
\end{equation}
In the low energy limit
\begin{equation}
\label{eq:crossection}
\sigma_{HN} =(2mc)^2\frac {4 \pi}{\mu ^4}.
\end{equation}
Writing $\sigma_{HN} = \sigma_{-3}~ 10^{-3}$ mb and $\mu = \mu_{\rm GeV}$ GeV, this gives
\begin{equation}
c~ m = 0.007\sqrt{\sigma_{-3}} ~\mu_{\rm GeV}^2 ~{\rm GeV}.
\end{equation}
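The numerical coefficient can be checked directly (our sketch; the mb-to-GeV$^{-2}$ conversion uses $\hbar c = 0.1973$ GeV fm):

```python
import math

# Our check of the coefficient: sigma_HN = (2 m c)^2 4 pi / mu^4
# inverts to c*m = (mu^2 / 2) sqrt(sigma_HN / 4 pi).
MB_TO_GEV2 = 0.1 / 0.1973 ** 2          # 1 mb = 0.1 fm^2 ~ 2.57 GeV^-2

sigma = 1e-3 * MB_TO_GEV2               # sigma_HN = 10^-3 mb, in GeV^-2
cm = 0.5 * math.sqrt(sigma / (4.0 * math.pi))   # for mu = 1 GeV
print(f"c*m = {cm:.4f} GeV")
```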
Comparing to the values of $c_*$ needed to bind, we see that for $m_H<2 m_p$ this is too small for the $H$ to bind, even to heavy nuclei\footnote{We have summarized the net effect of possibly more than one exchanged boson ({\it e.g.}, $\sigma$ and glueball) by a single effective boson, represented by $c_*^{\rm eff}$ and $\mu_{\rm eff}$.}.
If dark matter consists of relic $H$'s, we can demonstrate that $H$'s do not bind to nuclei without relying on the theoretical estimate above for $\sigma_{HN}$. It was shown in~\cite{wandelt} that the XQC experiment excludes a dark matter-nucleon cross section $\sigma_{XN}$ larger than about $0.03$ mb for $m_X \sim 1.5$ GeV. Thus if the dark matter consists of stable $H$'s then $\sigma_{XN} \le 0.03$ mb, implying $c \le [0.01, 0.03, 0.06]$ for $\mu = [0.6,1.0,1.5]$ GeV, and the $H$ would not bind even to heavy nuclei.
A generic new flavor-singlet scalar hadron $X$, which might appear in an extension of the Standard Model, might not have a small size and correspondingly small value of $\sigma_{XN}$, and it might not be the dark matter and thus subject to the XQC limit. In that case, it is more useful to turn the argument around to give the maximum $\sigma_{XN}^*$ above which the $X$ would bind to nuclei in the OBE approximation. From eqns.~(\ref{3}) and (\ref{5}) and $f(0) \approx 2\mu$ we have
\begin{equation}
c_* = \frac{\pi^2 (1.35 ~{\rm fm}) \mu^2}{24 A^{2/3} m}.
\end{equation}
Then eq.~(\ref{eq:crossection}) leads to
\begin{equation}
\sigma_{XN}^* \approx 155~A^{-4/3}~ {\rm mb}.
\end{equation}
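The coefficient follows analytically: substituting $c_*$ into eq.~(\ref{eq:crossection}), the $\mu$ and $m$ dependence cancels, leaving $\sigma_{XN}^* = \pi^5 (1.35~{\rm fm})^2/36 \times A^{-4/3}$. A quick numerical check (ours):

```python
import math

# Our check: substituting c_* into sigma = (2 m c_*)^2 4 pi / mu^4,
# mu and m cancel, giving sigma_XN^* = pi^5 (1.35 fm)^2 / 36 * A^(-4/3).
FM = 1.35 / 0.1973                      # 1.35 fm in GeV^-1
GEV2_TO_MB = 0.1973 ** 2 / 0.1          # 1 GeV^-2 ~ 0.389 mb

coeff_mb = math.pi ** 5 * FM ** 2 / 36.0 * GEV2_TO_MB
print(f"sigma* = {coeff_mb:.0f} A^(-4/3) mb")                # ~155
print(f"He (A = 4): {coeff_mb * 4 ** (-4.0 / 3.0):.1f} mb")  # ~25
```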
That is, for instance, if it is required that the $X$ not bind to He, then it must have a cross section on nucleons lower than $25$ mb.
\subsection{Flavor singlet fermion}
\label{R0}
The analysis of the previous sections can be extended to the case of a flavor-singlet fermion such as the glueballino -- the supersymmetric partner of the glueball, which appears in theories with a light gluino~\cite{f:lightgluino}. In this case the possible exchanged bosons include, in addition to the $\sigma$ and the glueball, the flavor-singlet component of the pseudoscalar meson $\eta'$ ($m_{\eta'} = 958$ MeV). However, the size of the $R_0$ should be comparable to the size of the glueball, which was the basis for estimating the size of the $H$. That is, we expect $r_{R_0} \approx r_G \approx r_H$ and thus $\sigma_{R_0 N} \approx \sigma_{HN}$~\cite{f:StableH}. Then the arguments of the previous section go through directly and show that the $R_0$ is unlikely to bind to light nuclei\footnote{Nussinov~\cite{nussinov:R0} considered that the $R_0$ would bind to nuclei, by assuming that the depth of the potential presented by the nucleus to the $R_0$ is at least $2$-$4$ MeV for $16\le A\le 56$. However the discussion of the previous sections, with $\sigma_{R_0N} = 10^{-3}\sigma_{-3}$ mb, gives a potential depth of only $0.07$ MeV $\sqrt{\sigma_{-3}}/(m_{R_0}/{\rm GeV})$.}.
\section{The Existence of the $H$ -- Summary} \label{summaryEX}
In \S\ref{expts} we considered the constraints placed on the $H$ dibaryon by the stability of nuclei and hypernuclei with respect to conversion to an $H$, and we calculated the lifetime of the $H$ in the case that it is heavier than two nucleons. In the model calculations we used the Isgur-Karl wavefunctions for quarks in baryons and in the $H$, and the Miller-Spencer and Brueckner-Bethe-Goldstone wavefunctions for nucleons in a nucleus, to obtain a rough estimate of the $H$-baryon-baryon wavefunction overlap. By varying the IK oscillator strength parameter and the hard-core radius in the BBG wavefunction over extreme ranges, we find that the wavefunction overlap is very sensitive to the size and shape of the hadronic and nuclear wavefunctions. With the BBG (MS) wavefunction, the hypernuclear formation time of an $H$ is comparable to or larger than the decay time of the $\Lambda$'s, and thus the $H$ is not excluded, if $r_H \,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, 1/2~(1/3)~ r_N$.\footnote{The overlap between an $H$ and two nucleons should be strongly suppressed also in the Skyrme model, in view of the totally different nature of the nucleon and $H$ solitons \cite{Balachandran,Sakai2}. However, a method for computing the overlap has not been developed so we are unable to explore this here.} We conclude that the observation of $\Lambda$ decays in double-$\Lambda$ hypernuclei cannot be used to exclude $m_H < 2 m_\Lambda$, given our present lack of understanding of the hadronic and nuclear wavefunctions.
In the second part of our work we abstracted empirical relations which give us relatively model-independent predictions for the $H$ lifetime. By crossing symmetry, the overlap of the wavefunctions of an $H$ and two baryons can be constrained using experimental limits on the formation time of an $H$ in a hypernucleus. Using the empirically constrained wavefunction overlap and phenomenologically determined weak interaction matrix elements, we can estimate the lifetime of the $H$ with relatively little model uncertainty. We find:
\begin{enumerate}
\item If $m_H > 2 m_\Lambda$, the hypernuclear constraint is not applicable but the $H$ would still be expected to be long-lived, in spite of decaying through the strong interactions. E.g., with the BBG wavefunction and $r_H \,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, 1/2 ~ r_N$, $\tau_H \,\raisebox{-0.13cm}{$\stackrel{\textstyle>}{\textstyle\sim}$}\, 4~10^{-14}$ sec.
\item If $m_N + m_\Lambda \,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, m_H \,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, 2 m_\Lambda$, the $H$ lifetime is $\,\raisebox{-0.13cm}{$\stackrel{\textstyle>}{\textstyle\sim}$}\, 10$ sec.
\item If $2 m_N \,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, m_H \,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, m_N + m_\Lambda$, the $H$ lifetime is $\gtrsim 10^8$ yr. For $r_H \,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, (1/3)~ r_N$ as suggested by some models, the $H$ lifetime is comparable to or greater than the age of the Universe.
\end{enumerate}
Our results have implications for several experimental programs:
\begin{enumerate}
\item The observation of $\Lambda$ decays from double $\Lambda$ hypernuclei excludes that $\tau_{\rm form}$, the formation time of the $H$ in a double $\Lambda$ hypernucleus, is much less than $\tau_\Lambda$. However if $\tau_{\rm form}$ is of order $\tau_\Lambda$, some double $\Lambda$ hypernuclei would produce an $H$. One might hope these $H$'s could be observed by reconstructing them through their decay products, e.g., $H \rightarrow \Sigma^- p$. Unfortunately, our calculation shows that $\tau_H \,\raisebox{-0.13cm}{$\stackrel{\textstyle>}{\textstyle\sim}$}\, 10$ sec for the relevant range of $m_H$, so any $H$'s produced would diffuse out of the apparatus before decaying.
\item Some calculations have found $m_H < 2 (m_p + m_e)$, in which case the $H$ is absolutely stable and nucleons in nuclei may convert to an $H$. We showed that SuperK can place important constraints on the conjecture of an absolutely stable $H$, or conceivably discover evidence of its existence, through observation of the pion(s), positron, or photon produced when two nucleons in an oxygen nucleus convert to an $H$. We estimate that SuperK could achieve a lifetime limit $\tau \,\raisebox{-0.13cm}{$\stackrel{\textstyle>}{\textstyle\sim}$}\, {\rm few} ~10^{29}$ yr. This is the lifetime range estimated with the BBG wavefunction for $m_H \,\raisebox{-0.13cm}{$\stackrel{\textstyle>}{\textstyle\sim}$}\, 1740$ MeV and $r_H \approx 1/5 ~r_N$. An $H$ smaller than this seems unlikely, so $m_H \,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, 1740$ MeV is probably already ruled out.
\end{enumerate}
In \S\ref{nucth} we first reviewed the theory of nuclear binding and emphasized that even for ordinary nucleons and hyperons there is not a satisfactory first-principles treatment of nuclear binding. We showed that exchange of any pseudoscalar meson, or of two pseudoscalar octet mesons, or any member of the vector meson octet, makes a negligible contribution to the binding of an $H$ or other flavor singlet scalar hadron to a nucleon. The dominant attractive force comes from exchange of a glueball or a $\sigma$ (also known as the f(400-1100) meson), which we treated with a simple one boson exchange model. The couplings of $\sigma$ and glueball to the $H$ are strongly constrained by limits on $\sigma_{HN}$, to such low values that the $H$ cannot be expected to bind, even to heavy nuclei. Thus we conclude that the strong experimental limits on the existence of exotic isotopes of He and other nuclei do not exclude a stable $H$.
More generally, our result can be applied to any new flavor-singlet scalar particle $X$, another example being the $S^0$ supersymmetric hybrid baryon ($uds\tilde{g}$) discussed in \cite{f:lightgluino}. If $\sigma_{XN} \le 25~{\rm mb} ~{\rm GeV}/m_{X}$, the $X$ particle will not bind to light nuclei and is ``safe''. Conversely, if $\sigma_{XN} \gg 25~{\rm mb} ~{\rm GeV}/m_{X}$, the $X$ particle could bind to light nuclei and is therefore excluded unless there is some mechanism suppressing its abundance on Earth, or it can be shown to have an intrinsically repulsive interaction with nucleons. This means the self-interacting dark matter (SIDM) particle postulated by Spergel and Steinhardt \cite{spergel:SIDM} to ameliorate some difficulties with Cold Dark Matter probably cannot be a hadron. SIDM requires $\sigma_{XX} /M_X \approx 0.1 - 1 $ b/GeV; if $X$ were a hadron with such a large cross section, then on geometric grounds one would expect $\sigma_{XN} \approx \frac{1}{4}\sigma_{XX}$, which would imply that the $X$ binds to nuclei and is therefore excluded by the experimental limits discussed above.
\section{The $H$ or $H$, $\bar{H}$ as Dark Matter}
As we have seen in the previous sections, the existence of the $H$ di-baryon is not ruled out by experiment if the $H$ is tightly bound and compact. If $m_H\,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, m_N+m_{\Lambda}$ and $r_H\,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, 1/3~r_N$, the $H$ can be cosmologically stable. In the rest of this section we explore the possibility that the DM consists of the $H$, or of $H$ and ${\bar H}$, and would therefore be predicted within the Standard Model.
\begin{enumerate}
\item[1.] {\it The $H$ Dark Matter.} The number density of nonrelativistic species $i$ in thermal equilibrium is given by
\begin{equation}
n_i=g_i\left( \frac {m_iT}{2\pi}\right) ^{3/2}\exp\left[-\frac{m_i-\mu_i}{T}\right].
\end{equation}
If baryon number is conserved we have $\mu_H=2\mu _N$, and therefore the nucleon and the $H$ energy densities are related as
\begin{equation}\label{Hdmeq}
\frac{m_Hn_H}{m_nn_n}=\left(2\pi\right)^{3/2}\frac{g_H}{g^2 _N}\frac{n_n}{n_{\gamma}}n_{\gamma}\frac{m^{5/2} _H}{m^4 _nT^{3/2}}\exp\left[\frac{2m_n-m_H}{T}\right].
\end{equation}
The left-hand side is the ratio $\Omega_H/\Omega_N$, which for $H$ DM is fixed; see eq.~(\ref{wmappar}). Then, by solving eq.~(\ref{Hdmeq}) we can find $T_{\rm f.o.}$, the temperature at which the $H$ has to go out of equilibrium in order to have the proper DM energy density today. The equation has a solution only for $m_H\,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, 2m_N$, a mass range which is disfavored by the discussion in \S\ref{convlifetimes}. The freeze-out temperatures are 15 (7) MeV for $m_H=1.5~(1.7)$ GeV, corresponding to an age of the Universe of $0.003~(0.02)$ sec. By that time all strangeness-carrying particles have already decayed, so the $H$ cannot stay in equilibrium (for example, through reactions such as $K^+H\leftrightarrow p\Lambda$) at such low temperatures. We see that even if the $H$ were cosmologically stable, it could not be thermally produced in the early universe in sufficient abundance to be the dark matter.
\item[2.]{{\it The} $H {\bar H}$ {\it Dark Matter}.} In this case the $H$ would decouple as in the B-sequestration scenario discussed in \S \ref{scenario}. The reactions which would keep it in equilibrium, up to the temperatures listed in Table 2.1, are
\begin{eqnarray}
{\bar H}N\leftrightarrow {\bar \Lambda}K\nonumber \\
H{\bar N}\leftrightarrow \Lambda{\bar K}.\nonumber
\end{eqnarray}
In this scenario the $\Lambda$'s and $K$'s stay in equilibrium sufficiently long, because the above reactions proceed to the left at rates which are, for the temperatures of interest, much higher than the decay rates of the $\Lambda$ and $K$. The $H$ and ${\bar H}$ stay in equilibrium through these interactions and may reach the DM abundance.
\end{enumerate}
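The freeze-out estimate in item 1 can be sketched numerically; the inputs below (baryon-to-photon ratio $\eta$, $\Omega_H/\Omega_N \approx 5$, $g_H=1$, $g_N=2$) are our assumptions, and the solver is only meant to illustrate eq.~(\ref{Hdmeq}):

```python
import math

# Our sketch of the freeze-out estimate in item 1.  Assumed inputs:
# eta = n_n/n_gamma ~ 6.1e-10, Omega_H/Omega_N ~ 5, g_H = 1, g_N = 2.
ETA, RATIO, G_H, G_N, M_N = 6.1e-10, 5.0, 1.0, 2.0, 0.939

def omega_ratio(m_h, T):
    """Right-hand side of eq. (Hdmeq) at temperature T (GeV)."""
    n_gamma = 2.0 * 1.20206 / math.pi ** 2 * T ** 3
    n_n = ETA * n_gamma
    return ((2.0 * math.pi) ** 1.5 * G_H / G_N ** 2 * n_n
            * m_h ** 2.5 / (M_N ** 4 * T ** 1.5)
            * math.exp((2.0 * M_N - m_h) / T))

def t_freeze(m_h, lo=1e-3, hi=0.1):
    """Bisect for the T where Omega_H/Omega_N reaches the DM value."""
    for _ in range(60):            # ratio falls monotonically with T
        mid = 0.5 * (lo + hi)
        if omega_ratio(m_h, mid) > RATIO:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for m_h in (1.5, 1.7):
    print(f"m_H = {m_h} GeV: T_f.o. ~ {1e3 * t_freeze(m_h):.0f} MeV")
```

With these simplified inputs the solver lands within roughly 15% of the quoted 15 (7) MeV.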
In the next Chapter we explore direct and indirect experimental constraints on the $H{\bar H}$ DM scenario.
\chapter{Dark Matter constraints in the range of the $H$ parameters} \label{Hdmcons}
This Chapter is organized as follows: in \S~\ref{directDM} we derive constraints from direct DM detection experiments in the mass and cross-section range expected for the $H$, and in \S~\ref{Hindirect} we derive indirect constraints that can be placed on ${\bar H}$ DM; we show that ${\bar H}$ DM can be ruled out by the heat production in Uranus.
\section{Direct DM Detection -- Light DM Constraints} \label{directDM}
In this Section we focus on direct detection experiments in order to see whether the $H$ is allowed as a DM candidate, given that its expected cross section with nucleons is of order $\mu$b (see discussion in \S~\ref{Tightlybound}). More generally, we explore the limits on light DM, with primary interest in the $\mu$b cross-section range.
The region of mass $m \,\raisebox{-0.13cm}{$\stackrel{\textstyle>}{\textstyle\sim}$}\, 10$ GeV is well explored today, up to TeV range, from strong ($\sigma \sim 10$ mb) to weak ($\sigma \sim 10^{-6}$pb) cross sections. Here we explore the possibility that the DM mass is in the range $0.4 \,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, m \,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, 10$ GeV. Masses below $\sim 0.4$ GeV are below the threshold of direct detection DM experiments and are therefore unconstrained, with the exception of axion searches.
The mass range $m\,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, 10$ GeV has not yet been explored carefully. Several dark matter underground experiments have a sufficiently low mass threshold today: the CRESST \cite{cresst}, DAMA \cite{dama}, IGEX \cite{igex}, COSME \cite{cosme} and ELEGANT \cite{elegant} experiments. Except for DAMA, these experiments have published upper limits on the cross section assuming it is weak, but have not addressed the case of stronger cross sections,\footnote{DAMA did publish strong cross section limits in \cite{damastrong}, but they were based on a dedicated experiment which had a higher mass threshold, $m\,\raisebox{-0.13cm}{$\stackrel{\textstyle>}{\textstyle\sim}$}\, 8$ GeV.} where the approach for extracting the cross section limits is substantially different, as we explain below. Also, recent data from the XQC X-ray experiment \cite{XQC} have proved valuable in constraining low-mass DM, but limits based on the final data have not yet been derived. Since XQC is a space-based experiment it is especially suitable for exploring the higher cross-section range. In \cite{wandelt} it was shown that in the low mass range the XQC experiment rules out Strongly Interacting DM (SIMPs \cite{spergel:SIDM}). Dark matter with low masses and 'intermediate' cross sections, several orders of magnitude smaller than normal hadronic cross sections, remains to be fully analyzed, and that is the focus of this work. We will abbreviate DM with an intermediate cross section on nucleons as DMIC.
Early limits on DMIC from direct detection experiments can be found in the paper by Rich, Rocchia and Spiro \cite{RRS}, in which they reported results from a 1987 balloon experiment. Starkman et al. \cite{SG} reviewed DM constraints down to a mass of 1 GeV as of 1990. Wandelt et al. \cite{wandelt} added constraints based on preliminary data from the more recent XQC sounding rocket experiment. These constraints are discussed and updated further in the text. Previous works on this topic did not explore the effect of the halo particles' velocity distribution on the cross section limit. Since the only detectable light particles are those in the exponential tail of the velocity distribution, the limits on light DM are sensitive to the parameters of the velocity distribution, in particular to the value of the escape velocity cutoff. We investigate this sensitivity in the standard Isothermal Sphere model, where the DM velocity distribution function is given by a truncated Maxwell-Boltzmann distribution. We also consider the spin-independent and spin-dependent interaction cases separately. Except in Section \ref{fraction}, we assume a single type of DM particle.
\subsection{Direct Dark Matter Detection} \label{directdet}
The basic principle of DM detection in underground experiments is to measure the nuclear recoil in elastic collisions; see for example \cite{Lewin}. The interaction of a DM particle of mass $m\,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, 10$ GeV produces a nuclear recoil of 20 keV or less. The recoil energy (which grows with DM mass as in eq.~(\ref{er}) below) can be measured using various techniques. The detectors used so far are:
\begin{itemize}
\item{ionization detectors} of Ge (IGEX, COSME) and of Si
\item{scintillation crystals} of NaI (DAMA, ELEGANT)
\item{scintillators} of CaF$_2$
\item{liquid or liquid-gas} Xenon detectors (DAMA, UKDMC)
\item{thermal detectors (bolometers)} with sapphire (CRESST, ROSEBUD), tellurite or Ge (ROSEBUD) absorbers
\item{bolometers, which also measure the ionization} like that of Si (CDMS) and Ge (CDMS, EDELWEISS).
\end{itemize}
As we have seen in the Introduction, the most accepted DM candidate in the GeV range is the neutralino. The expected signal for neutralino-nucleon scattering is in the range $10$ to $10^{-5}$ counts/(kg day) \cite{SUSYmorales}. The smallness of the signal dictates the experimental strategies: detectors have to be highly radio-pure and have substantial shielding (they are built deep underground).
For a given velocity distribution $f({\vec v})$, the differential rate per unit recoil energy $E_R$ in (kg day keV)$^{-1}$ in the detector can be expressed as
\begin{equation}
\label{cr}
\frac {dR}{dE_R} = N_T~n_{X} \int _{v_{min}} ^{v_{esc}} ~d{\vec v}~|{\vec v}|~f({\vec v})~g({\vec v}) \frac{d \sigma_{XA}}{dE_R},
\end{equation}
where $n_{X}$ is the number density of DM particles, $N_T$ is the number of target nuclei per kg of target, $\sigma _{XA}$ is the energy dependent scattering cross section of DM on a nucleus with mass number $A$, $g({\vec v})$ is the probability that a particle with velocity ${\vec v}$ deposits an energy above the threshold $E_{TH}$ in the detector, and $v_{min}$ is the minimum speed a DM particle must have in order to produce an energy deposit above the threshold.
The recoil energy of the nucleus is given by
\begin{equation} \label{er}
E_{R}=\frac {4m_A~m_{X}}{(m_A+m_{X})^2} (\frac {1}{2} m_X v^2 _{X}) \left( \frac {1-\cos\theta _{CM}}{2}\right)
\end{equation}
where $\theta _{CM}$ is the scattering angle in the DM-nucleus center of mass frame. We will assume isotropic scattering as is expected at low energies.
So, for instance, for $A=16$, $m=1$ GeV and an energy threshold of 600 eV, the minimal DM velocity to produce a detectable recoil is $v_{min}=680$ km/s, in the extreme tail of the DM velocity distribution.
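The kinematics above is easy to check numerically. The following sketch evaluates the recoil formula at $\theta_{CM}=180^{\circ}$ and inverts it for the minimum detectable speed (the atomic-mass-unit value in GeV and the helper names are our own choices):

```python
import math

C_KMS = 299792.458   # speed of light [km/s]
AMU_GEV = 0.9315     # atomic mass unit [GeV] (assumed value)

def max_recoil_ev(m_x, a, v_kms):
    """Maximum recoil energy (head-on, theta_CM = 180 deg) in eV."""
    m_a = a * AMU_GEV
    e_kin_gev = 0.5 * m_x * (v_kms / C_KMS) ** 2   # DM kinetic energy
    return 4.0 * m_a * m_x / (m_a + m_x) ** 2 * e_kin_gev * 1.0e9

def v_min_kms(m_x, a, e_th_ev):
    """Minimum DM speed able to deposit e_th_ev, inverting the above."""
    m_a = a * AMU_GEV
    return C_KMS * math.sqrt(e_th_ev * 1.0e-9 * (m_a + m_x) ** 2
                             / (2.0 * m_a * m_x ** 2))

# A = 16, m = 1 GeV, 600 eV threshold: close to the 680 km/s quoted above
print(round(v_min_kms(1.0, 16, 600)), "km/s")
```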
In order to compare cross section limits from different targets we normalize them to the proton-DM cross section, $\sigma _{Xp}$. For the simplest case of interactions which are independent of spin and the same for protons and neutrons, the low energy scattering amplitude from a nucleus with mass number $A$ is a coherent sum of $A$ single nucleon scattering amplitudes. The matrix element squared therefore scales with the size of the nucleus as $\sim A^2$. In addition, the kinematic factor in the cross section depends on the masses of the participants in such a way \cite{witten,Lewin} that
\begin{equation} \label{sigSI}
\frac {\sigma^{SI} _{XA}}{\sigma^{SI} _{Xp}}=\left[ \frac {\mu (A)}{\mu(p)}\right] ^2 A^2
\end{equation}
where $\mu (A)$ is the reduced mass of the DM-nucleus system, and $\mu (p)$ is the reduced mass for the proton-DM system.
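As an illustration of this scaling, a short sketch computing the coherent enhancement $\sigma^{SI}_{XA}/\sigma^{SI}_{Xp}$ (the mass values in GeV are our assumptions):

```python
AMU_GEV, M_PROTON = 0.9315, 0.9383  # masses in GeV (assumed values)

def reduced_mass(m1, m2):
    return m1 * m2 / (m1 + m2)

def si_enhancement(m_x, a):
    """sigma_XA / sigma_Xp for spin-independent scattering: the
    reduced-mass ratio squared times the A^2 coherence factor."""
    mu_a = reduced_mass(a * AMU_GEV, m_x)
    mu_p = reduced_mass(M_PROTON, m_x)
    return (mu_a / mu_p) ** 2 * a ** 2

# For a 1 GeV particle on silicon (A = 28) the enhancement is ~3000,
# above the bare A^2 = 784 because mu(A) ~ m_X while mu(p) ~ m_X / 2.
print(round(si_enhancement(1.0, 28)))
```

In the limit of very light DM the reduced-mass ratio tends to one and the enhancement approaches $A^2$.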
At higher momentum transfer $q^2=2m_A E_R$ the scattering amplitudes no longer add in phase, and the total cross section $\sigma _{XA} (q)$ is suppressed by the form factor $F^2(q^2)$: $\sigma _{XA} (q)=\sigma _0 F^2(q^2)$.
We take this change in the cross section into account when we deal with higher mass ($m\,\raisebox{-0.13cm}{$\stackrel{\textstyle>}{\textstyle\sim}$}\, 10$ GeV) dark matter; for smaller masses the effect is negligible. We adopt the form factor $F(q^2)=\exp\left[-1/10(qR)^2\right]$ with $R=1.2 A^{1/3}$ fm, used also in \cite{Ahlen:1987mn,Freese:1987wu}. The simple exponential form is sufficiently accurate for our purposes and easy to implement in the Monte Carlo, where the momentum transfer $q$ is sampled from the distribution given by the form factor. The procedure is described in more detail in Appendix~\ref{appendixB}.
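Since $F^2(q^2)=\exp[-(qR)^2/5]$ is exponential in $q^2$, such sampling can be done by inverse transform. A sketch (the unit-conversion constant, the $A^{1/3}$ radius and the function names are our own reading; the actual procedure is described in Appendix~\ref{appendixB}):

```python
import math
import random

HBARC_GEV_FM = 0.1973  # hbar*c [GeV fm], converts R from fm to GeV^-1

def sample_q2(a, q2_max_gev2, rng=random.random):
    """Draw q^2 [GeV^2] from p(q^2) proportional to exp(-(qR)^2/5) on
    [0, q2_max_gev2], i.e. weighted by the squared form factor
    F = exp[-(qR)^2/10], via the inverse CDF of a truncated
    exponential.  R = 1.2 A^(1/3) fm is the assumed nuclear radius."""
    r_gev_inv = 1.2 * a ** (1.0 / 3.0) / HBARC_GEV_FM  # R in GeV^-1
    b = r_gev_inv ** 2 / 5.0                           # slope in q^2
    u = rng()
    return -math.log(1.0 - u * (1.0 - math.exp(-b * q2_max_gev2))) / b

random.seed(1)
samples = [sample_q2(72, 0.01) for _ in range(1000)]  # e.g. Ge, A = 72
```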
For spin dependent interactions the scattering amplitude changes sign with the spin orientation. Paired nucleons therefore contribute zero to the scattering amplitude, and only nuclei with unpaired nucleon spins are sensitive to spin dependent interactions. Due to the effect of coherence, the spin independent interaction is usually dominant, depending on the mass of the exchanged particle \cite{kamionkowski}. Therefore, the spin dependent cross section limit is of interest mainly if the spin independent interaction is missing, as is the case, for example, with massive Majorana neutrinos. Another example of DM with such properties is photino dark matter, see \cite{witten}, in the case when there is no mixing of left- and right-handed scalar quarks. The amplitude for DM-nucleus spin dependent interaction in the case of spin 1/2 DM, in the nonrelativistic limit, is proportional to \cite{witten, engel-vogel}
\begin{equation}
{\cal M}\sim \langle N|{\vec J}|N\rangle\cdot {\vec s}_{X}
\end{equation}
where ${\vec J}$ is the total angular momentum of the nucleus, $|N\rangle$ are nuclear states and ${\vec s}_{X}$ is the spin of the DM particle.
In the case of scalar DM the amplitude is
\begin{equation}
{\cal M}\sim \langle N|{\vec J}|N\rangle\cdot \left( {\vec q} \times {\vec q}' \right)
\end{equation}
where ${\vec q}$ and ${\vec q}'$ are the initial and final momenta of the scattering DM particle. The cross section for this interaction is thus proportional to the fourth power of the ratio $q/M$ of the DM momentum to the target mass, which enters through the normalization of the wave function. Therefore the spin dependent part of the interaction for scalar DM is negligible when compared to the spin independent part.
We adopt the standard spin-dependent cross section parametrization \cite{Lewin}
\begin{equation}
\sigma _{XA}\sim \mu (A) ^2~[\lambda ^2J(J+1)]_A C^2 _{XA}
\end{equation}
where $\lambda$ is a parameter proportional to the spin, orbital and total angular momenta of the unpaired nucleon. The factor $C$ is related to the quark spin content of the nucleon, $C=\sum T^q _3 \Delta_q,~q=u,d,s$, where $T^{u,d,s} _3$ is the charge of the quark type $q$ and $\Delta_q$ is the fraction of nucleon spin contributed by quark species $q$.
The nuclear cross section normalized to the nucleon cross section is
\begin{equation} \label{sigSD}
\frac {\sigma^{SD} _{XA}}{\sigma^{SD} _{Xp}}=\left[\frac{\mu (A)}{\mu(p)}\right]^2~\frac {[\lambda ^2J(J+1)]_A}{[\lambda ^2J(J+1)]_p}\left( \frac {C_{XA}}{C_{Xp}}\right)^2 .
\end{equation}
The values of the proton and neutron $C$ factors, $C_{Xp}$ and $C_{Xn}$, vary substantially depending on the model. For targets of the same type, odd-n (Si, Ge) or odd-p (Al, Na, I) nuclei, this model dependence conveniently cancels. The comparison of cross sections with targets of different types involves the $C_{Xp}/C_{Xn}$ ratio. This ratio was thought to have the value $\sim 2$ for any neutralino, based on the older European Muon Collaboration (EMC) measurements, but the new EMC results imply a ratio which is close to one for pure higgsino, and is $\,\raisebox{-0.13cm}{$\stackrel{\textstyle>}{\textstyle\sim}$}\,$ 10 otherwise. (The biggest value of the ratio is $C_p/C_n\sim 500$, for bino.) Below we normalize our spin dependent results to the proton cross section $\sigma _{Xp}$ using $C_{Xp}/C_{Xn}=1$ for definiteness.
In this paper we assume that the DM halo velocity distribution is given by a truncated Maxwell-Boltzmann distribution in the galactic reference frame, as in the Isothermal Sphere Halo model \cite{Binney}. We direct the ${\hat z}$ axis of the Earth's frame in the direction of the Local Standard of Rest (LSR) motion.\footnote{The Local Standard of Rest used here is the dynamical LSR, which is a system moving in a circular orbit around the center of the Milky Way Galaxy at the Sun's distance.} The DM velocity distribution, in the Earth's frame, is given by
\begin{equation}
\label{veldistE}
f(v_{z},{\vec v}_{\perp})=N \exp \left[-\frac{(v_{z}-v^{t} _E)^2+{\vec v}_{\perp}^2}{v^2 _c}\right].
\end{equation}
Here $v_c$ is the local circular velocity, equal to $\sqrt{2}$ times the radial velocity dispersion in the isothermal sphere model, and ${\vec v}_E$ is the velocity of the Earth in the Galactic reference frame. Throughout, the superscript ``t'' indicates a tangential component. This neglects the Earth's motion in the radial direction, which is small. The velocities $v_z$ and ${\vec v}_{\perp}$ are truncated according to $\sqrt {v^2 _z+{\vec v}_{\perp}^2}\leq v_{esc}$, where $v_{esc}$ is the escape velocity discussed below.
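Sampling from this distribution is straightforward by rejection; a minimal sketch (function and parameter names are ours, and we apply the escape-velocity cut to the Galactic-frame speed, a common convention):

```python
import math
import random

def sample_velocity(v_c=220.0, v_e_t=230.0, v_esc=650.0, rng=None):
    """Draw one DM velocity (vx, vy, vz) [km/s] in the Earth's frame:
    an isotropic Gaussian of width v_c/sqrt(2) per axis in the Galactic
    frame, truncated at v_esc, then boosted by v_e_t along z (the
    direction of LSR motion)."""
    rng = rng or random
    while True:  # reject the tail beyond the escape velocity
        gx = rng.gauss(0.0, v_c / math.sqrt(2.0))
        gy = rng.gauss(0.0, v_c / math.sqrt(2.0))
        gz = rng.gauss(0.0, v_c / math.sqrt(2.0))
        if gx * gx + gy * gy + gz * gz < v_esc ** 2:
            return gx, gy, gz + v_e_t
```

With the standard parameter values, the mean $v_z$ of a large sample reproduces the boost $v^t_E$, while the Galactic-frame speed never exceeds $v_{esc}$.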
The model above is the simplest and the most commonly used model which describes a self-gravitating gas of collisionless particles in thermal equilibrium. On the other hand numerical simulations produce galaxy halos which are triaxial and anisotropic and may also be locally clumped depending on the particular merger history (see \cite{annegreen} for a review). This indicates that the standard spherical isotropic model may not be a good approximation to the local halo distribution. Here we aim to extract the allowed DM window using the simplest halo model, but with attention to the sensitivity of the limit to poorly determined parameters of the model. The effects of the more detailed halo shape may be explored in a further work.
We ignore here the difference between the DM velocity distribution at the Earth, deep in the potential well of the solar system, and the DM velocity distribution in free space. This is a common assumption, justified by Gould in \cite{gould} as a consequence of Liouville's theorem. Recently Edsjo et al. \cite{edsjo} showed that the realistic DM velocity distribution differs from the free space distribution, but only for velocities $v\,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, 50$ km/s. Therefore, the free space distribution is a good approximation for our analysis, since for light dark matter the dominant contribution to the signal comes from the high velocity part of the distribution.
The velocity of the Earth in the Galactic reference frame is given by
\begin{equation}
{\vec v}_E={\vec v}_{LSR}+{\vec v}_{S}+{\vec v}_{E,orb},
\end{equation}
where ${\vec v}_{LSR}$ is the velocity of the local standard of rest (LSR): it moves with the local circular speed in the tangential direction, $v^t _{LSR}=v_c$, toward $l=90^{\circ}$, $b=0^{\circ}$, where $l$ and $b$ are galactic longitude and latitude. The velocity of the Sun with respect to the LSR, ${\vec v}_{S}$, has magnitude $16.6$ km/s and direction $l=53^{\circ}$, $b=25^{\circ}$ in galactic coordinates. $v_{E,orb}=30$ km/s is the maximal velocity of the Earth on its orbit around the Sun.
The magnitude of $v^t _{LSR}$ has a considerable uncertainty. We adopt the conservative range $v_c=(220\pm 50)$ km/s which relies on purely dynamical observations \cite{kochanek}. Measurements based on the proper motion of nearby stars give a similar central value with smaller error bars, for example $v_c (R_0)=(218\pm 15)$ km/s, from Cepheids and $v_c (R_0)=(241\pm 17)$ km/s, from SgrA$^*$ (see \cite{annegreen2} and references therein). The choice $v_c=(220\pm 50)$ km/s is consistent with the DAMA group analysis in \cite{damav} where they extracted the dependence of their cross section limits on the uncertainty in the Maxwellian velocity distribution parameters.
Projecting the Earth's velocity on the tangential direction ($l=90^o$, $b=0^o$) we get
\begin{equation}
v^{t} _E=v_c+v^{t} _S + v^{t} _{E,orb}~\cos [\omega (t-t_0)]
\end{equation}
where $v^{t} _S=12~{\rm km/s}$ and $v^{t} _{E,orb}= 30~\cos \gamma~{\rm km/s}$, with $\cos \gamma=1/2$ the cosine of the inclination of the ecliptic plane; $\omega =2\pi/365$ day$^{-1}$, and $t_0$ is June 2nd, the day of the year when the velocity of the Earth along the LSR direction of motion is highest. In the course of the year $\cos [\omega (t-t_0)]$ varies between $-1$ and $+1$, so the tangential component of the Earth's orbital velocity varies over $\pm 15$ km/s.
Taking all of the uncertainties and annual variations into account, the tangential velocity of the Earth with respect to the Galactic center falls in the range $v^{t} _{E}=(167~{\rm to}~307)~{\rm km/s}$.
The other parameter in the velocity distribution with high uncertainty is the escape velocity, $v_{esc}=(450~{\rm to}~650)$ km/s \cite{vesc}. We will do our analysis with the standard choice of velocity distribution function parameters,
\begin{equation} \label{parameters}
v^t _{E}=230~ {\rm km/s},~~ v_c=220~ {\rm km/s},~~v_{esc}=650~ {\rm km/s},
\end{equation}
and with the values of $v_E$ and $v_{esc}$ from their allowed range, which give the lowest count in the detector and are therefore most conservative:
\begin{equation} \label{range}
v^t _{E}=170~ {\rm km/s},~~v_c=170~ {\rm km/s},~~v_{esc}=450~ {\rm km/s}.
\end{equation}
For experiments performed in a short time interval we take the value of $v^{t} _{E,orb}~\cos [\omega (t-t_0)]$ which corresponds to the particular date of the experiment, and the lowest value of $v^t _{E}$ allowed by the uncertainties in the value of $v_c$.
Another effect is important in the detection of moderately interacting particles. Since particles lose energy in the crust rapidly (the mean free path is of the order of 100 m), only those particles which come to the detector from the $2\pi$ solid angle above it can reach the detector with sufficient energy. Since the velocity distribution of the particles arriving at the detector from above depends on the detector's position on Earth with respect to the direction of LSR motion, the detected rate for a light DMIC particle will vary with the daily change of position of the detector. This can be a powerful signal.
\subsection{XQC Experiment}
For light, moderately interacting dark matter the XQC experiment places the most stringent constraints in the higher cross section range.
The XQC experiment was designed for high spectral resolution observation of the diffuse X-ray background in the $60-1000$ eV range. The Si detector consisted of an array of 36 microcalorimeters of 1 mm$^2$ each. Each microcalorimeter had a 7000~\AA\ layer of HgTe X-ray absorber. Both the HgTe and the Si layers were sensitive to DM detection. The experiment was performed in a 100 s flight in March, and therefore the Earth's velocity $v^t _{E}$ falls in the 200 to 300 km/s range. The experiment was sensitive to energy deposits in the range $25-1000$ eV. For energy deposits below 25 eV the efficiency of the detector drops off rapidly. For energy deposits above about 70 eV the background of X-rays increases, so XQC adopted the range $25-60$ eV for the extraction of DM limits, and we do the same. This translates into a conservative mass threshold for the XQC experiment of $0.4$ GeV, obtained with $v_{esc}=450$ km/s and $v^t _E=200$ km/s, which is the lowest mass explored by direct DM detection apart from axion searches.
The relationship between the number of signal events in the detector $N_S$ and the scattering cross section $\sigma_{XA}$ of DM particles on nuclei is the following
\begin{equation} \label{rate}
N_S = n_X~f~T~\left( N_{\rm Si} \langle{\vec v}_{\rm Si}\rangle \sigma_{\rm Si}+N_{\rm Hg} \left[ \langle{\vec v}_{\rm Hg}\rangle \sigma_{\rm Hg}+\langle{\vec v}_{\rm Te}\rangle \sigma_{\rm Te} \right] \right),
\end{equation}
where $N_{\rm Si}$ and $N_{\rm Hg}$ are the numbers of Si and Hg (Te) nuclei in the detector, $n_{X}$ is the local number density of DM particles, $\langle\vec{v}_{\rm Si}\rangle$, $\langle\vec{v}_{\rm Hg}\rangle$ and $\langle\vec{v}_{\rm Te}\rangle$ are the effective mean velocities of the DM particles on the Si and HgTe targets, $f$ is the efficiency of the detector, and $T=100$ s is the data-taking time. In this energy range, $f \approx 0.5$.
The standard value for the local DM energy density is $\rho _X=0.3$ GeV cm$^{-3}$. However, numerical simulations combined with observational constraints \cite{draco} indicate that the local DM energy density $\rho _X$ may have a lower value, $0.18 \,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, \rho _X/({\rm GeV}~{\rm cm}^{-3}) \,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, 0.3 $. In our calculations we use both the standard value $\rho _{X}=0.3$ GeV/cm$^3$, and the lower value suggested by the numerical simulations, $\rho _{X}=0.2$ GeV/cm$^3$. The cross sections $\sigma_{\rm Si}$, $\sigma_{\rm Hg}$, $\sigma_{\rm Te}$ are calculated using equations (\ref{sigSI}) and (\ref{sigSD}). In this section and the next we assume that DM has dominantly spin-independent cross section with ordinary matter. In \S~\ref{SD} we consider the case of DM which has exclusively spin-dependent cross section or when both types of interaction are present with comparable strength.
XQC observed two events with energy deposit in the $25-60$ eV range, and expected a background of 1.3 events. The equivalent 90\% CL upper limit on the number of signal events is therefore $N_S = 4.61$. This is obtained by interpolating, for 2 observed events, between 4.91 for an expected background of 1.0 events and 4.41 for an expected background of 1.5 events, using Table IV of ref. \cite{feldman}.
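The interpolation is elementary; as a numerical check (numbers from the Feldman-Cousins table quoted above):

```python
def lerp(x, x0, y0, x1, y1):
    """Linear interpolation between (x0, y0) and (x1, y1)."""
    return y0 + (x - x0) * (y1 - y0) / (x1 - x0)

# 2 observed events: upper limit 4.91 at background 1.0, 4.41 at 1.5;
# interpolate to the expected background of 1.3 events.
n_s = lerp(1.3, 1.0, 4.91, 1.5, 4.41)
print(round(n_s, 2))  # 4.61
```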
We extract the cross section limits using our simulation. Because of the screening by the Earth we consider only particles coming from the $2\pi$ solid angle above the detector, i.e. with ${\hat n}\cdot {\vec v}\leq 0$; for these particles, distributed according to (\ref{veldistE}), we simulate the interactions in the detector. We take the direction of the LSR motion, ${\hat n}$, as the $z$ axis.
We choose the nucleus $i$ which the generated DM particle scatters from, using the relative probability for scattering from nucleus of type $i$, derived in Appendix \ref{appendixA}:
\begin{equation} \label{probability}
P_i =\frac{\lambda _{eff}}{\lambda _i}=\frac {n_i\sigma _{XA_i}}{\sum n_j\sigma _{XA_j}},
\end{equation}
where $\lambda _i$ is the mean free path in a medium consisting of material with a mass number $A_i$: $\lambda _i=(n_i \sigma_{XA_i})^{-1}$. Here $n_i$ is the number density of target nuclei $i$ in the crust, $\sigma_{XA_i}$ is the scattering cross section of X on nucleus A$_i$ and the effective mean free path, $\lambda _{eff}$, is given as
\begin{equation} \label{freepath}
\lambda _{eff}=\left( \sum \frac {1}{\lambda _i} \right) ^{-1}.
\end{equation}
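In code, this choice of target nucleus is a standard weighted draw; a minimal sketch (names are ours):

```python
import random

def pick_nucleus(densities, cross_sections, rng=random.random):
    """Return the index i of the struck nucleus, chosen with
    probability P_i = n_i sigma_i / sum_j n_j sigma_j."""
    weights = [n * s for n, s in zip(densities, cross_sections)]
    u = rng() * sum(weights)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if u <= acc:
            return i
    return len(weights) - 1  # guard against floating-point round-off

# Quartz (SiO2) has two O nuclei per Si; with equal per-nucleus cross
# sections, oxygen is struck about 2/3 of the time.
```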
In each scattering the DM particle loses energy according to (\ref{er}), and we assume isotropic scattering in the c.m. frame.
We determine the effective DM velocity $\langle{\vec v}_A\rangle$ as
\begin{equation} \label{<v>}
\langle{\vec v_A}\rangle=\frac {\sum ' v}{N_{tot}}
\end{equation}
where the sum runs over the velocities of those DM particles which deposit energy in the range $25-60$ eV in a collision with a nucleus of type $A$, and $N_{tot}$ is the total number of generated DM particles. The result depends on the angle between the experimental look direction and the motion of the Earth.
The zenith direction above the place where the rocket was launched, $\hat n _{XQC}$, is toward $b=+82^{\circ}$, $l= 75^{\circ}$. Thus the detector position angle compared to the direction of motion of the Earth through the Galaxy is $82^{\circ}$. Only about $10\%$ of the collisions have an energy deposit in the correct range.
Putting this together gives the $90 \% $ confidence level XQC upper limit on the spin independent cross section for DMIC shown in Figures \ref{fig1} and \ref{fig2}. The solid line limit is obtained using the most conservative set of parameters ($\rho =0.2$ GeV/cm$^3$, $v^t _E=200$ km/s, $v_{esc}=450$ km/s) and the dotted line is the limit obtained using the standard parameter values in eq.~(\ref{parameters}). The upper boundary of the upper domain, $\sigma\simeq 10^6$ to $10^8$ mb, is taken from \cite{wandelt}.
When the dark matter mass is higher than $10$ GeV, the form factor suppression of the cross section is taken into account. We give the details of that calculation in Appendix~\ref{appendixB}.
In the next section we explain how the upper boundaries of the excluded region from the underground experiments shown in these figures are obtained. Also shown are the original limits from the balloon experiment of Rich et al.~\cite{RRS} obtained using the ``standard'' choices for DM at the time of publishing (dashed line) as well as the limits obtained using the conservative values of parameters ($v^t _E=170$ km/s, since the experiment was performed in October, and $v_{esc}=450$ km/s). Fig. \ref{fig2} zooms in on the allowed window in the $m\,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, 2.4$ GeV range.
\subsection{Underground Detection}
In this section we describe the derivation of the lower boundary of the DMIC window from the underground experiments.
This is the value of $\sigma _{Xp}$ above which the DM particles would essentially be stopped in the Earth's crust before reaching the detector. (More precisely, they would lose energy in the interactions in the crust and fall below the threshold of a given experiment.) To extract the limit on $\sigma _{Xp}$ we generate particles with the halo velocity distribution and then follow their propagation through the Earth's crust to the detector. We simulate the DM particle interactions in the detector and calculate the rate of the detector's response. We compare it to the measured rate and extract cross section limits.
The basic input parameters of our calculation are the composition of the target, the depth of the detector and the energy threshold of the experiment. We also show the value of the dark matter mass threshold $m_{TH}$, calculated for the standard and conservative parameter values given in eq.~(\ref{parameters}) and eq.~(\ref{range}). The parameters are summarized in Table \ref{t1ch3} for the relevant experiments.
\begin{table}[htb]
\caption{The parameters of the experiments used for the extraction of cross section limits; $m_{TH}$ is the minimum mass of a DM particle which can produce a detectable recoil, for the standard and conservative parameter choices. The energy threshold values $E^{nuc}_{TH}$ refer to the nuclear response threshold. This corresponds to the electron response threshold divided by the quenching factor of the target.}
\label{t1ch3}
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
Experiment & Target & Depth & $E^{nuc} _{TH}$ & $m^{std}_{TH} (m^{cons}_{TH})$ \\ \hline \hline
CRESST, \cite{cresst} & Al$_2$O$_3$ & 1400 m & 600 eV & 0.8 (1.1) GeV \\ \hline
DAMA, \cite{dama} & NaI & 1400 m & 6 keV & 3.5 (5) GeV \\ \hline
ELEGANT, \cite{elegant} & NaI & 442 m & 10 keV & 5 (8) GeV \\ \hline
COSME I, \cite{cosme} & Ge & 263 m & 6.5 keV & 5.5 (8) GeV \\ \hline
CDMS, \cite{cdms} & Si & 10.6 m & 25 keV & 9.8 (16) GeV \\
& Ge & & & 14 (21) GeV \\ \hline
\end{tabular}
\end{center}
\end{table}
In the code, after generating particles we propagate those which satisfy ${\hat n} \cdot {\vec v}\leq 0$ through the crust. Given the effective mean free path in the crust, eq.~(\ref{freepath}), the distance traveled between two collisions in a given medium is simulated as
\begin{equation} \label{distance}
x=-\lambda _{eff} \ln R
\end{equation}
where $R$ is a uniformly distributed random number in the range $(0,1)$.
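This inverse-CDF sampling can be checked directly: the sampled distances average to $\lambda_{eff}$. A sketch (we use $1-R$ inside the logarithm, which is equivalent and avoids $\ln 0$; names are ours):

```python
import math
import random

def effective_mfp(densities, cross_sections):
    """lambda_eff = (sum_i n_i sigma_i)^(-1)."""
    return 1.0 / sum(n * s for n, s in zip(densities, cross_sections))

def sample_path(lam_eff, rng=random.random):
    """Distance to the next collision, x = -lambda_eff * ln(R)."""
    return -lam_eff * math.log(1.0 - rng())   # 1 - R lies in (0, 1]

random.seed(3)
mean_x = sum(sample_path(1.0) for _ in range(20000)) / 20000
print(round(mean_x, 2))  # close to lambda_eff = 1.0
```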
After simulating the distance a particle travels before scattering, we choose the nucleus $i$ it scatters from using the relative probability as in eq.~(\ref{probability}).
We take the mass density of the crust to be $\rho =2.7$ g/cm$^3$. To explore the sensitivity of the result to the composition of the crust we consider two different compositions. First we approximate the crust as being composed of quartz, ${\rm SiO}_2$, the most common mineral in the Earth's crust, which is frequently the primary mineral, with a $>98 \%$ fraction.
Then we test the sensitivity of the result by using the average composition of the Earth's crust: Oxygen 46.5 $\%$, Silicon 28.9 $\%$, Aluminium 8.3 $\%$ and Iron 4.8 $\%$, where the percentage is the mass fraction of the given element. Our test computer runs showed that both compositions give the same result to the first digit, so we used the simpler composition to reduce computing time. Since the DM exclusion window we obtain at the end of this section should be very easy to explore in a dedicated experiment, as we show later in the text, we do not aim to find precise values of the signal in the underground detector.
When collisions reduce the DM velocity to less than the Earth's radial escape velocity, $v_{esc}=11$ km/s, the DM particle is captured by the Earth and eventually thermalized. Collisions may also reverse the DM velocity in such a way that the DM particle leaves the surface of the Earth with negative velocity: effectively, the DM particle is reflected from the Earth. The majority of light DM particles wind up being reflected, as is characteristic of diffuse media. The percentage of reflected particles proves not to depend on the cross section, as long as the mean free path is much smaller than the radius of the Earth, but it does depend on the DM particle mass. Light particles have a higher chance of scattering backward and therefore a higher percentage of them are reflected. The initial DM flux on Earth equals $2.4~(1.2)\times 10^{6}~(1~{\rm GeV}/m_X)$ cm$^{-2}$s$^{-1}$, taking standard (conservative) parameter values. Table \ref{t2ch3} shows the fraction of the initial flux of DM particles on the Earth which are captured and thermalized, for various mass values. The fraction is, up to a percent difference, independent of whether we make the standard or conservative parameter choice.
For DM particles which are not scattered back to the atmosphere and which pass the depth of the detector before falling below the energy threshold of the detector, the scattering in the detector is simulated.
\begin{table}[htb]
\caption{ The percentage of DM particles incident on Earth which are captured, when $\lambda _{int}\ll R_E$.} \label{t2ch3}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
mass [GeV] & 2 & 4 & 6 & 10 & 100 \\ \hline \hline
thermalized [$\%$] & 21 & 30 & 36 & 46 & 94 \\ \hline
\end{tabular}
\end{center}
\end{table}
For composite targets we simulate collision with different nuclei with probabilities given as in eq.~(\ref{probability}). If the energy of the recoil is above $E_{TH}$, we accept the event and record the velocity of the particle which deposited the signal.
The spectral rate per (kg day keV) is then calculated as a sum of rates on the separate elements of the target, as
\begin{equation} \label{sum}
\frac{dR}{dE_R} [\alpha (t)]=\sum_{i} \frac{f_i}{A_i~m_p}~\frac{\rho _X}{m_X} ~\frac{\langle v[\alpha (t)]\rangle_i}{\Delta E}~\sigma _{XA_i}
\end{equation}
where $f_i$ is the mass fraction of a given element in the target, $\rho _X$ is the local DM energy density, $\Delta E$ is the size of an energy bin of a given experiment and $\langle v[\alpha (t)]\rangle_i$ is calculated as in (\ref{<v>}). The signal in the detector falls exponentially with $\sigma _{Xp}$, since the energy of DM at a given depth gets degraded as an exponential function of the cross section, see \cite{SG}. Therefore the limit on $\sigma _{Xp}$ is insensitive to small changes in the rate in the detector coming from changes in $\rho _X$; we adopt the commonly used value $\rho _X=0.3$ GeV cm$^{-3}$ for the local DM energy density.
We emphasize here that the spectral rate is a function of the relative angle $\alpha (t)$ between the direction of the motion of LSR and the position of the detector.
This angle changes during the day as
\begin{equation} \label{time}
\cos \alpha (t)=\cos \delta \cos \alpha '+ \sin \delta \sin \alpha ' \sin (\omega t +\phi _0)
\end{equation}
where $\delta $ is the angle between the Earth's North pole and the motion of the LSR; $\alpha '$ is the angle between the position of the detector and the North pole, equal to $90^{\circ}$ minus the geographical latitude.
The angle between the LSR motion and the Earth's North pole is $\delta = 42^{\circ}$, so for an experiment at about $45^{\circ}$ latitude (as for the Gran Sasso experiments), $\alpha '=45 ^{\circ}$. Therefore, in the course of a day, the angle between the detector and the LSR motion varies in the range of approximately $0^{\circ}$ to $90^{\circ}$.
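A quick numerical check of this daily variation (the 24 h period and the function names are ours):

```python
import math

def lsr_angle_deg(t_hours, delta_deg=42.0, alpha_p_deg=45.0, phi0=0.0):
    """alpha(t): angle between the detector zenith and the LSR motion,
    from cos(alpha) = cos(d)cos(a') + sin(d)sin(a')sin(omega t + phi0),
    with omega = 2 pi / 24 h for the daily rotation."""
    d = math.radians(delta_deg)
    a = math.radians(alpha_p_deg)
    omega = 2.0 * math.pi / 24.0
    c = (math.cos(d) * math.cos(a)
         + math.sin(d) * math.sin(a) * math.sin(omega * t_hours + phi0))
    return math.degrees(math.acos(c))

angles = [lsr_angle_deg(t) for t in range(24)]
print(round(min(angles)), round(max(angles)))  # 3 87
```

For $\delta = 42^{\circ}$ and $\alpha' = 45^{\circ}$ the angle sweeps from $|\delta - \alpha'| = 3^{\circ}$ to $\delta + \alpha' = 87^{\circ}$, i.e. approximately $0^{\circ}$ to $90^{\circ}$ as stated above.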
Fig.~\ref{figdaydep} shows the rate $R$ per (kg $\cdot$ day) as a function of time, eq.~(\ref{time}),
\begin{equation}
R [\alpha (t)]=\sum_{i=Al,O} \frac{f_i}{A_i~m_p}~\frac{\rho _X}{m_X} ~\langle v(\alpha (t))\rangle_i~\sigma _{XA_i}
\end{equation}
calculated for the parameters of the CRESST experiment.
We choose $\phi_0$ so that time $t=0$ corresponds to the moment the detector starts to move away from the LSR axis. We see that for these masses the rate is a strong function of the angle between the position of the detector and the motion of the LSR, which gives an interesting detection signature for detector locations where this angle changes during the day.
\begin{figure}
\begin{center}
\includegraphics*[width=8cm]{daydep.eps}
\end{center}
\caption{The time dependence of the measured rate in underground detectors for $m=2$ GeV and $m=1.5$ GeV DM candidates.}\label{figdaydep}
\end{figure}
To extract our limits, we average the signal from the simulation $dR(t)/dE_R$ over one day:
\begin{equation}
\langle dR/dE_R\rangle=\frac {1}{T}~\int ^{T} _0 dR(t)/dE_R~dt.
\end{equation}
Since the shape of the spectral rate response is itself a function of $\sigma _{Xp}$ in our case (the velocity distribution at the depth of the detector depends on $\sigma _{Xp}$ through the interactions in the crust), the extraction of cross section limits is more complicated than when the rate scales linearly with $\sigma _{Xp}$. In the region where the window is located, i.e. masses below 2 GeV, we perform the analysis based on the fit method used by the CRESST group \cite{cresst}. The measured spectrum is fitted with an empirical function $B$; in our case $B$ is the sum of two falling exponentials and a constant term, since we expect the signal only in the few lowest energy bins. For the fit we use the maximum likelihood method with Poissonian statistics in each bin. The maximum likelihood of the best fit, $B_0$, is ${\cal L}_0$. We define the background function $B'$ as the difference between the best fit to the measured signal, $B_0$, and a hypothesized DM signal $S$: $B'=B_0-S$. Following the CRESST procedure, we set $B'$ to zero when $S$ exceeds $B_0$. When the hypothesized cross section $\sigma _0$ is such that the simulated signal $S$ is below the measured signal $B_0$, $B'$ adds to the signal $S$, completing it to $B_0$, and the likelihood is unchanged. With increasing $\sigma _0$, when $S$ starts to exceed the function $B_0$, $B'$ becomes zero, and we calculate the likelihood using $S$ alone in such bins, leading to a new likelihood ${\cal L}$. Following the CRESST collaboration prescription, the $\sigma_0$ excluded at $90 \%$ CL is taken to be the value giving $\ln{\cal L}-\ln{\cal L}_0=-1.28^2/2$ \cite{cresst}, since $10\%$ of the area of a normalized Gaussian distribution of width $\sigma$ lies more than $1.28 \sigma $ above the peak. We show the window obtained this way in Fig.~\ref{fig2} and, for the low mass range, in Figure \ref{fig1}.
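The $1.28$ in this criterion is simply the one-sided 90\% point of a unit Gaussian; a one-line check using the Python standard library:

```python
from statistics import NormalDist

# One-sided 90% point of a unit Gaussian: 10% of the area lies more
# than z sigma above the peak when z ~ 1.28, giving ln L - ln L0 ~ -0.82.
z = NormalDist().inv_cdf(0.90)
print(round(z, 2), round(-z * z / 2.0, 2))  # 1.28 -0.82
```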
For masses higher than $2$ GeV we can use a simpler method, since this mass range is already ruled out and our plot only indicates which experiments the constraints come from. We calculate the response of the detector for different cross section values, and take the limit to be the value for which the simulated signal falls below the experiment's background. Fig.~\ref{signal} shows the CRESST background together with the simulated response for DM particles with masses $m_X=2$ GeV and 10 GeV and various values of the cross section. The limits obtained this way for different experiments are given in Fig.~\ref{fig1}.
\begin{figure}
\begin{center}
\includegraphics*[width=8cm]{mhspectarbin.eps}
\end{center}
\caption{ The CRESST background and the simulated response of the detector for masses $m_X=2$ and $m_X=10$ GeV, and different values of spin independent cross sections $\sigma _{Xp}$.}\label{signal}
\end{figure}
The only dark matter detector sensitive to particles with mass $\,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, 4$ GeV is CRESST. Since it is the only experiment with a threshold lower than that of the balloon experiment by Rich et al., it extends the existing exclusion window for intermediate cross sections. For the CRESST experiment we perform the calculation using both standard and conservative parameters, because the size of the exclusion window is very sensitive to the value of the mass threshold, and therefore to the parameter choice. For other underground experiments we use only standard parameters. In the mass range $5\,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, m\,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, 14$ GeV, the ELEGANT and COSME I experiments place the most stringent constraints on a DMIC scenario, since they are located in shallow sites a few hundred meters below the ground; see Table~\ref{t1ch3}. Other experiments sensitive in this mass range (e.g. IGEX, COSME II) are located in much deeper laboratories and are therefore less suitable for DMIC limits. We therefore present limits from ELEGANT and COSME I for masses 5 to 14 GeV. Masses greater than 14 GeV are above the threshold of the CDMS experiment, which places the most stringent lower cross section limit because it has the smallest amount of shielding, being only 10.6 m underground. CDMS I operated one Si and four Ge detectors during its data run. To place their limits they used a sophisticated combination of data sets from both types of detectors. Due to the large systematic uncertainty on the Si data, the Ge data set dominates their combined measurements. To be conservative we assume that only Ge detectors are present, which reduces the region excluded by CDMS to $m\,\raisebox{-0.13cm}{$\stackrel{\textstyle>}{\textstyle\sim}$}\, 14$ GeV.
Fig.~\ref{fig1} shows the cross section limits these experiments imply, for masses $m\,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, 10^3$ GeV.
\begin{figure}
\begin{center}
\includegraphics*[width=8cm]{gabilongff14.eps}
\end{center}
\caption{Overview of the exclusion limits for spin independent DM-nucleon elastic cross section coming from the direct detection experiments on Earth. The limits obtained by using the conservative parameters, as explained in the text, are plotted with a solid line; the dotted lines are obtained by using standard parameter values and the dashed lines show limits published by corresponding experiments or in the case of XQC, by Wandelt et al. \cite{wandelt}. The region labeled with DAMA* refers to the limits published in \cite{damastrong}.}\label{fig1}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics*[width=8cm]{gabiBNLlike.eps}
\end{center}
\caption{The allowed window for $\sigma^{el}_{XN}$ for a spin independent interaction. The region above the upper curve is excluded by the XQC measurements. The region below the lower curve is excluded by the underground CRESST experiment. The region $m\geq 2.4$ GeV is excluded by the experiment of Rich et al.}\label{fig2}
\end{figure}
\subsection{Spin-Dependent limits} \label{SD}
In this section we address the case in which DM has a spin dependent interaction with ordinary matter. We first consider a purely SD interaction, and later the case in which both interaction types are present. We focus on low masses, which belong to the cross section window allowed by the experiment of Rich et al.
\begin{figure}
\begin{center}
\includegraphics*[width=8cm]{gabisdlikely.eps}
\end{center}
\caption{The allowed Spin Dependent interaction for $(C_{Xp}/C_{Xn})^2=1$. The region above the upper curve is excluded by XQC measurements. The region below the lower curve is excluded by CRESST. The region $m\geq 2.4$ GeV is excluded by the balloon experiment of Rich et al. }\label{figSD}
\end{figure}
If the DM has only a spin dependent interaction with ordinary matter, only the small fraction of the XQC target with nonzero spin is sensitive to DM detection. The nonzero spin nuclei in the target are ${\rm Si}_{29}$ ($4.6~\%$ of natural Si), ${\rm Te}_{125}$ ($7~\% $) and ${\rm Hg}_{199}$ ($16.87~\% $); their spin is due to an unpaired neutron. We calculate the spin dependent cross section limits from the XQC experiment in the same way as for the spin independent case, using the new composition of the target. The limiting value of the elastic cross section of DM with protons, $\sigma^{SD} _{Xp}$, is shown in Figure \ref{figSD}. Since the XQC target consists of odd-neutron nuclei, the resulting cross section with protons is proportional to the factor $(C_{Xp}/C_{Xn})^2$, as explained in section II. In Figure \ref{figSD} we use the value $(C_{Xp}/C_{Xn})^2=1$, which is the minimal value this ratio may have. We note that the maximal value of the ratio, based on the EMC measurements, is $(C_{Xp}/C_{Xn})^2=500^2$, and it would shift the XQC limit up by a factor of $500^2$ to higher cross sections (substantially extending the allowed window).
The spin sensitive element in the CRESST target is Al which has an unpaired proton in the natural isotope. We assume that the crust consists only of Al, since it is the most abundant target nucleus with non-zero spin. In this case the model dependence of the $C$ factor ratio drops out in normalizing the underground experiment to the proton cross section.
\begin{figure}
\begin{center}
\includegraphics*[width=8cm]{SISD.eps}
\end{center}
\caption{$\sigma_{SI}$ {\it vs} $\sigma_{SD}$, for the CRESST and XQC experiments, for mass $m_X=2$ GeV. The region between the two curves is the allowed region.}\label{figSISD}
\end{figure}
The window is extended compared to the case of a purely spin independent DM interaction, as shown in Fig.~\ref{figSD}. This is mostly due to the fact that the sensitive part of the XQC target is substantially reduced.
In Fig.~\ref{figSISD} we plot, for mass $m_X=2$ GeV, the $\sigma _{SI}$ vs $\sigma _{SD}$ limit, assuming both types of interaction are present. An interesting feature of the $\sigma _{SI}$ vs $\sigma _{SD}$ dependence is that, when the spin dependent and independent cross sections on the target nuclei are of comparable magnitude, screening between the two types of targets allows the cross sections to be higher for the same rate in the detector than when only one type of interaction is present.
\subsection{Constraint on the fraction of DMIC} \label{fraction}
We now turn the argument around and use the XQC data to place a constraint on the fraction of allowed DMIC as a function of its elastic cross section. We restrict consideration to values of the cross section which are small enough that we do not have to treat energy loss in the material surrounding the sensitive components of XQC. The maximal fraction of DMIC allowed by the XQC data, $p=n^{MI}_{DM}/n^{tot}_{DM}$, can then be expressed as a function of the cross section, using (\ref{rate}):
\begin{equation}\label{eqfraction}
p=\frac{N_S}{n_X~f~T} \left[ N_{\rm Si} \langle{\vec v}_{\rm Si}\rangle \sigma_{\rm Si}+ N_{\rm Hg} (\langle{\vec v}_{\rm Hg}\rangle\sigma_{\rm Hg}+\langle{\vec v}_{\rm Te}\rangle \sigma_{\rm Te} )\right] ^{-1}
\end{equation}
where all quantities are defined as before.
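The structure of eq.~(\ref{eqfraction}), and the linear fall of $p$ with increasing cross section noted below, can be sketched in a few lines. All numerical inputs here are hypothetical placeholders, not the actual XQC values:

```python
def allowed_fraction(N_S, n_X, f, T, targets):
    """Allowed DMIC fraction, following the form of eq. (eqfraction):
    p = N_S / (n_X * f * T * sum_i N_i * <v_i> * sigma_i).
    `targets` is a list of (N_i, v_i, sigma_i) per target species."""
    return N_S / (n_X * f * T * sum(N * v * s for N, v, s in targets))

# toy inputs: doubling every cross section halves the allowed fraction,
# illustrating the p ~ 1/sigma scaling discussed in the text
base  = allowed_fraction(10, 0.3, 1.0, 1e5, [(1e20, 3e7, 1e-30), (1e19, 3e7, 5e-30)])
twice = allowed_fraction(10, 0.3, 1.0, 1e5, [(1e20, 3e7, 2e-30), (1e19, 3e7, 1e-29)])
```

This inverse scaling holds only while $\langle \vec v\rangle$ is set by the halo distribution, i.e. while energy loss in the overburden is negligible, as emphasized below.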
The mass overburden of XQC can be approximated as \cite{mccam:correspondence}: $\lambda =10^{-4}~{\rm g/cm}^2$ for off-angle from the center of the field of the detector $\alpha =(0^o~{\rm to}~ 30^o)$; $\lambda =10~{\rm g/cm}^2$ for $\alpha =(30^o~{\rm to}~ 100^o )$; and $\lambda =10^4~{\rm g/cm}^2$ for $\alpha \ge 100^o$. The center of the field of view points toward $l=90^o$, $b=60^o$, which makes an angle of $32^o$ with the detector position direction. Since DM particles arrive only from above the detector, they traverse either the 10 g/cm$^2$ or the $10^{-4}$ g/cm$^2$ overburden.
\begin{figure}
\begin{center}
\includegraphics*[width=8cm]{fractionLG.eps}
\end{center}
\caption{ The allowed fraction $p$ of the DM candidate as a function of the DM-nucleon cross section. For each mass, $p$ is calculated up to the values of cross section for which the interaction in the mass overburden of the detector becomes important.}\label{fig7}
\end{figure}
For example, for cross section values of about $0.7$ mb, $m=2$ GeV DM particles start to interact in the 10 g/cm$^2$ overburden; thus for cross sections above this value our simple approach, which does not account for the real geometry of the detector, is no longer applicable.
We therefore restrict our analysis to values of the cross section for which neglecting the interaction in the overburden is a good approximation. In this domain, the allowed fraction of DM falls linearly with increasing cross section, as can be seen from equation (\ref{eqfraction}), since $\langle{\vec v}_{DM}\rangle$ remains almost constant and is given by the halo velocity distribution, eq.~(\ref{veldistE}).
The results of the simulation are shown in Fig. \ref{fig7}, for a spin independent interaction cross section.
An analysis valid for larger cross sections, which takes into account details of the geometry of the XQC detector, is in preparation \cite{Spergel}.
\subsection{Future Experiments}
The window for $m_X\,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, 2.4$ GeV in the DMIC cross section range could be explored in a dedicated experiment with a detector similar to the one used in the XQC experiment, performed on the ground. Here we calculate the spectral rate of DM interactions in such a detector, in order to illustrate what the shape and magnitude of a signal would be.
\begin{figure}
\begin{center}
\includegraphics*[width=8cm]{ftrexpREV.eps}
\end{center}
\caption{ The simulated minimum rate per (kg day eV), calculated with $\sigma _{Xp}=2~\mu$b, for a DM experiment on the ground, versus deposited energy $E_R$ in eV, for a Si target and for a target with mass number A=100. The solid line indicates the maximal value of the cosmic ray muon background, determined from the total muon flux as used in the text.}\label{figftrexp}
\end{figure}
In Fig.~\ref{figftrexp} we plot the rate per (kg$\cdot $day$\cdot $eV) for a Si detector and DM particle masses of $m_X=$ 1 and 2 GeV, assuming a spin independent interaction. In the case of an exclusively spin dependent interaction, the signal would be smaller by approximately a factor $f/A^2$, where {\it f} is the fraction of odd nuclei in the target. The calculation is done for the detector position for which the signal would be smallest. We assume a short experiment and do not average over the time of day, since that would only increase the signal.
The rate scales with the cross section; the rate shown in Fig.~\ref{figftrexp} is for $\sigma _{Xp}=2$ $\mu $b, the lower limit of the cross section window from the XQC experiment for $m=1$ GeV. Since the unshielded muon flux on the ground is of the order of $2\times 10^{2}~({\rm m}^2~{\rm s})^{-1}\approx 2\times 10^3$ (cm$^2$ day$)^{-1}$, an experiment performed on the ground with an array of micro-calorimeter absorbers such as XQC, whose target mass is $\approx 100$ g, should readily close this window or observe a striking signal.
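The muon-flux unit conversion quoted above is a one-line check:

```python
# unit check of the quoted ground-level muon flux (numbers from the text)
flux_m2_s = 2e2                           # muons per (m^2 s)
flux_cm2_day = flux_m2_s / 1e4 * 86400    # 1 m^2 = 1e4 cm^2, 86400 s per day
# ~1.7e3 per (cm^2 day), i.e. of order the 2e3 quoted in the text
print(flux_cm2_day)
```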
\subsection{Summary}
In \S~\ref{directDM} we have determined the limits on dark matter in the low mass range ($m\,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, 10$ GeV) with an intermediate cross section on nucleons, based on the final XQC data and the results of underground experiments with low mass thresholds. We also updated previous limits taking into account a newer halo velocity distribution. We found that there is an allowed window for DM mass $m\,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, 2.4$ GeV and cross section $\sigma \approx \mu$b. Curiously, this window overlaps with the mass/cross section range expected for the $H$ dibaryon, making it a possible DM candidate; see \cite{f:StableH,fz:nucstab} and Chapter~\ref{Hdibaryon}. We showed that it should be straightforward to explore the window experimentally. A signal due to a light DMIC would have strong daily variations depending on the detector's position with respect to the LSR motion, and would therefore provide a strong signature.
\section{The $H{\bar H}$ Dark Matter -- Indirect detection constraints} \label{Hindirect}
B-sequestration scenarios imply the possibility of detectable annihilation of DM carrying anti-baryon number with nucleons in the Earth, Sun or galactic center. The rate of $\bar{H}$ annihilation in an Earth-based detector is the $\bar{H}$ flux at the detector, times $\sigma_{\bar{H}N}^{\rm ann}$, times (since annihilation is incoherent) the number of target nucleons in the detector, $6 \times 10^{32} $ per kton. The final products of $\bar{H} N$ annihilation are mostly pions and some kaons, with energies of order 0.1 to 1 GeV. The main background in SuperK at these energies comes from atmospheric neutrino interactions, whose level is $\sim100$ events per kton-yr \cite{SKatmneutrinos}. Taking $\Phi^{SK}_{\bar{H}} = R_{\rm cap}/A_{SK}$, where $A_{SK}$ is the area of the SK experiment and $R_{\rm cap}$ is taken from Table 2.1, the annihilation rate in SuperK is lower than the background if $\tilde{\sigma}^{\rm ann}_{\bar{H}N} \le 6 \times 10^{-44}\, {\rm cm}^2$:
\begin{equation}
R^{dir} _{SK}\sim 100\left[\frac{\sigma^{\rm ann} _{\bar H}}{6\times 10^{-44}~ {\rm cm}^2}\right] \left({\rm kton~yr}\right)^{-1}.
\end{equation}
The total energy release of $m_H + B_H m_N $ should give a dramatic signal, so it should be possible for SuperK to improve this limit. Note that for the $H,\,\bar{H}$ scenario this limit is already uncomfortable, since it is much lower than the effective cross section required at freezeout ($\sigma^{\rm ann} _{\bar H} =2.2\times 10^{-41}$ cm$^2$). However this cannot be regarded as fatal until one can exclude with certainty the possibility that the annihilation cross section is very strongly energy dependent.
Besides direct observation of annihilation with nucleons in a detector, constraints can be placed from indirect effects of $\bar{H}$ annihilation in concentrations of nucleons. We first discuss the photons and neutrinos which are produced by the decay of annihilation products. The signal is proportional to the number of nucleons divided by the square of the distance to the source, so Earth is a thousand-fold better source for a neutrino signal than the Sun, all other things being equal. Since $\gamma$'s created by annihilation in the Earth or Sun cannot escape, the galactic center is the best source of $\gamma$'s, but we do not pursue this here because the constraints above imply the signal is several orders of magnitude below present detector capabilities.
The rate of observable neutrino interactions in SuperK is
\begin{equation} \label{neutintsSK}
\Gamma_{\nu{ \rm SK}} = N_{{\rm SK}}\, \Sigma_i \int{ \frac{d n_{\nu_i}}{dE} \sigma^{\rm eff}_{\nu_i N} \Phi_{\nu_i} dE },
\end{equation}
where the sum is over neutrino types, $N_{{\rm SK}}$ is the total number of nucleons in SuperK, $\frac{d n_{\nu_i}}{dE}$ is the spectrum of $i$-type neutrinos from an $\bar{H}$ annihilation, $\sigma^{\rm eff}_{\nu_i N} $ is the neutrino interaction cross section summed over observable final states (weighted by efficiency if computing the rate of observed neutrinos), and $ \Phi_{\nu_i} $ is the $\nu_i$ flux at SK. This last is $f_{\nu_i}$, the mean effective number of $\nu_i$'s produced in each $\bar{H}$ annihilation discussed below, times the total rate of $\bar{H}$ annihilation in the source, $\Gamma^{\rm ann}_{\bar{H},s}$, divided by $\approx 4 \pi R_s^2$, where $R_s$ is the distance from source to SuperK; $R_s \approx R_E$ for annihilation in Earth.
In general, computation of the annihilation rate $\Gamma^{\rm ann}_{\bar{H},s}$ is a complex task because it involves solving the transport equation by which DM is captured at the surface, migrates toward the core and annihilates, eventually evolving to a steady state distribution. However, if the characteristic time for a DM particle to annihilate, $\tau^{\rm ann}=\langle\sigma^{\rm ann} n_N v\rangle^{-1}$, is short compared to the age of the system, equilibrium between annihilation and capture is established (we neglect evaporation, which is a good approximation for $M_{DM} \,\raisebox{-0.13cm}{$\stackrel{\textstyle>}{\textstyle\sim}$}\, \mathcal{O}$(GeV) and is also the more conservative approach), so $\Gamma^{\rm ann}_{\bar{H},E}$ equals $f_{\rm cap} \Phi_{\bar{H}} 4 \pi R_E^2$. Then the neutrino flux, eq.~(\ref{neutintsSK}), is independent of $\sigma^{\rm ann}_{\bar{H}N}$, because the annihilation rate per $\bar{H}$ is proportional to it but the equilibrium number of $\bar{H}$'s in Earth is inversely proportional to it. For Earth, the equilibrium assumption is applicable for $\tilde{\sigma}^{\rm ann} \,\raisebox{-0.13cm}{$\stackrel{\textstyle>}{\textstyle\sim}$}\, 5 \times 10^{-49} {\rm cm}^2$, while for the Sun it is applicable if, roughly, $\tilde{\sigma}^{\rm ann} \,\raisebox{-0.13cm}{$\stackrel{\textstyle>}{\textstyle\sim}$}\, 10^{-52} {\rm cm}^2$. For lower annihilation cross sections, transport must be treated.
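An order-of-magnitude check of the equilibrium criterion for Earth is straightforward. The mean density and the thermalized DM speed used below are rough assumed values, not taken from the detailed treatment:

```python
# tau_ann = 1 / (sigma * n_N * v), compared with the age of the Earth
sigma_ann = 5e-49              # cm^2, the quoted borderline annihilation cross section
rho_earth = 5.5                # g/cm^3, mean Earth density (assumed)
m_nucleon = 1.67e-24           # g
n_N = rho_earth / m_nucleon    # nucleons per cm^3, ~3e24
v = 1e6                        # cm/s ~ 10 km/s, assumed thermalized DM speed

tau_ann = 1.0 / (sigma_ann * n_N * v)   # seconds
tau_years = tau_ann / 3.15e7
# comes out within an order of magnitude of the ~4.5e9 yr age of the Earth,
# as expected for the borderline value of the cross section
```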
The final state in $\bar{H} N$ annihilation is expected to contain $\bar{\Lambda}$ or $\bar{\Sigma}$ and a kaon, or $\bar{\Xi}$ and a pion, and perhaps additional pions. In a dense environment such as the core of the Earth, the antihyperon annihilates with a nucleon, producing pions and at least one kaon. In a low density environment such as the Sun, the antihyperon decay length is typically much shorter than its interaction length. In Earth, pions do not contribute significantly to the neutrino flux because $\pi^0$'s decay immediately to photons, and the interaction length of $\pi^\pm$'s is far smaller than their decay length so they eventually convert to $\pi^0$'s through charge exchange reactions; similarly, the interaction lengths of $K^{0}_L$'s and $K^\pm$'s are much shorter than their decay lengths, so through charge exchange they essentially all convert to $K^0_S$'s before decaying. The branching fraction for production of $\nu_{e,\mu}$ and $\bar{\nu}_{e,\mu}$ from $K_S^0 \rightarrow \pi l^\pm \nu$ is $3.4 \times 10^{-4}$ for each, so $f_{\nu_i} \ge 2(3.4\times 10^{-4})$ for $\bar{H}$ annihilation in Earth. Since the Sun has a paucity of neutrons, any kaons in the annihilation products are typically $K^+$, and furthermore their charge exchange is suppressed by the absence of neutrons. The branching fraction for $K^+ \rightarrow \mu^+ \nu_\mu$ is 63\% and the $\nu_\mu$ carries 240 MeV if the kaon is at rest. If the final states of $\bar{H}$ annihilation typically contain kaons, then $f_\nu $ is $\mathcal{O}$(1). However if annihilation favors $\bar{\Xi}$ production, $f_\nu$ could be as low as $\approx 3 \cdot 10^{-4}$ for production of $\bar{\nu}_{e}$'s and $\bar{\nu}_\mu$'s above the charged current threshold. Thus the predicted neutrino signal from $\bar{H}$ annihilation in the Sun is far more uncertain than in Earth.
Neutrinos from $\bar{H}$ annihilation can be detected by SuperK, with a background level and sensitivity which depends strongly on neutrino energy and flavor. Taking the captured $\bar{H}$ flux on Earth from Table 2.1, assuming the neutrinos have energy in the range 20-80 MeV for which the background rate is at a minimum, and taking the effective cross section with which $\nu$'s from the kaon decays make observable interactions in SuperK to be $10^{-42} {\rm cm}^2$, eq.~(\ref{neutintsSK}) leads to a predicted rate of excess events from annihilations in Earth of $ \Gamma_{\nu{ \rm SK}} \approx 2$/(kton yr) in the $\bar{H}$ scenario. This is to be compared to the observed event rate in this energy range $\approx 3$/(kton yr)\cite{SKloEneutrinos}, showing that SuperK is potentially sensitive. If a detailed analysis of SuperK's sensitivity were able to establish that the rate is lower than this prediction, it would imply either that the $H, \bar{H}$ model is wrong or that the annihilation cross section is so low that the equilibrium assumption is invalid, i.e., $\sigma^{\rm ann}_{\bar{H}N} \,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, 2 \times 10^{-48} {\rm cm}^2$. The analogous calculation for the Sun gives $ \Gamma_{\nu{ \rm SK}} \approx 130 f_\nu$/(kton yr) for energies in the sub-GeV atmospheric neutrino sample, for which the rate is $\approx 35$ events/(kton yr) \cite{SKatmneutrinos}\footnote{This estimate disagrees with that of Goldman and Nussinov (GN)\cite{GN}, independently of the question of the value of $f_\nu$. GN use an $\bar{H}$ flux in the solar system which is eight times larger than our value in Table 2.1 from integrating the normal component of the halo velocity distribution, due to poor approximations and taking a factor-of-two larger value for the local DM density. 
We include a factor 0.35 for the loss of $\nu_\mu$'s due to oscillation, we account for the fact that only neutrons in SuperK are targets for CC events, and we avoid order-of-magnitude roundup. Note that the discussion of the particle physics of the $H$ in \cite{GN} applies to the case of an absolutely stable $H$, which we discussed but discarded in \cite{fz:nucstab}.}. Thus if $f_\nu$ were large enough, SuperK could provide evidence for the $H,\,\bar{H}$ scenario via energetic solar neutrinos, but the absence of a solar neutrino signal cannot be taken as excluding the $H,\,\bar{H}$ scenario, given the possibility that $f_\nu \le 10^{-3}$.
Fortunately, there is a clean way to see that the DM cannot contain a sufficient density of $ \bar{H}$'s to account for the BAU. When an $\bar{H}$ annihilates, an energy $m_H + B_H m_N$ is released, almost all of which is converted to heat. Uranus provides a remarkable system for constraining such a possibility, because of its large size and extremely low level of heat production, $42 \pm 47 \, {\rm erg ~ cm}^{-2}~{\rm s}^{-1}$ (Uranus' internal heat production is atypically small, only about a tenth of that of the similarly sized planet Neptune) \cite{uranusVoyager}. When annihilation is in equilibrium with capture, as discussed above, the power supplied by annihilation is
$P_{\bar{H}}^{\rm ann} = f_{\rm cap}^U \Phi_{\bar{X}} (m_X + B_X m_N).$
For the $\bar{H}$, $f_{\rm cap}^U \approx 0.2$ as for Earth, so the heat flux generated in Uranus should be $470 \, {\rm erg ~ cm}^{-2}s^{-1}$, which definitively excludes the $H,\,\bar{H}$ scenario.
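The arithmetic of this argument can be checked in a few lines. The $H$ mass and baryon number below are the values assumed in this scenario, and the captured flux (quoted in Table 2.1, not reproduced here) is recovered by inverting the power relation:

```python
GEV_TO_ERG = 1.602e-3

# assumed scenario parameters: m_H ~ 2 GeV, B_H = 2
m_H, B_H, m_N = 2.0, 2, 0.94                  # GeV
E_ann = (m_H + B_H * m_N) * GEV_TO_ERG        # erg released per annihilation
f_cap = 0.2                                   # capture fraction quoted for Uranus

# flux implied by the predicted 470 erg cm^-2 s^-1 heat flux
phi_Hbar = 470.0 / (f_cap * E_ann)            # ~4e5 per (cm^2 s)

# the prediction exceeds even the 1-sigma upper end of the measured heat flow
excess = 470.0 / (42.0 + 47.0)                # ~5x
```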
\section{Conclusion} \label{conclusion}
In this section we have shown that the $H$ di-baryon could evade experimental searches if it is compact and tightly bound. It would not bind to nuclei, and therefore the anomalous mass isotope experiments would not be sensitive to its existence. For masses $m_H\,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, 2.05$ GeV it would also be cosmologically stable. As such it could potentially offer an explanation of the DM problem within the Standard Model. We showed that the $H$ alone could not be produced with the DM abundance through thermal decoupling from SM particles. In the B-sequestration scenarios, $H{\bar H}$ could be produced with the proper abundance in the early universe at a decoupling temperature of around $85$ MeV. We find that the mass and cross section range expected for the $H$ is not ruled out by current DM direct detection experiments, but that the $H,{\bar H}$ scenario can be ruled out through the heat production due to ${\bar H} N$ annihilation in Uranus.
\chapter{New Particle $X$ with Baryon Number} \label{X}
\section{Particle properties of $X$} \label{Xproperties}
We now turn to the possibility of a new, light fundamental particle with $B_X = 1$ and $m_X \,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, 4.5$ GeV. Such a low mass suggests it is a fermion whose mass is protected by a chiral symmetry. Various dimension-6 interactions with quarks could appear in the low energy effective theory after high scale interactions, e.g., those responsible for the family structure of the Standard Model, have been integrated out. These include
\begin{equation} \label{Xbcd}
\kappa (\bar{X} b \, \bar{d^c} c - \bar{X} c \, \bar{d^c} b) + h.c.,
\end{equation}
where the
$b$ and $c$ fields are left-handed SU(2) doublets, combined to form an SU(2) singlet, $d^c$ is the charge conjugate of the SU(2) singlet field $d_R$, and $\kappa=g^2/\Lambda ^2$, where $\Lambda$ is an energy scale of new physics. The suppressed color and spin indices are those of the antisymmetric operator $\tilde{O}^{\dot{a}}$ given in equation (10) of ref.~\cite{peskin79}. The hypercharge of the left-handed quarks is +1/3 and that of $d_R$ is -2/3, so the $X$ is a singlet under all Standard Model interactions, and its only interactions with fields of the Standard Model are through operators such as eq.~(\ref{Xbcd}). Dimension-6 operators involving only third generation quarks can be constructed; supplemented by $W$ exchange or penguins, they could also be relevant. Note that $\kappa$ is in general temperature dependent; we denote its values today and at freezeout by $\kappa_0$ and $\kappa_{\rm fo}$, respectively.
Prior to freezeout, $\bar{X}$'s stay in equilibrium through scattering reactions like
\begin{equation}\label{dXbar}
d + \bar{X} \leftrightarrow \bar{b}~\bar{c}.
\end{equation}
The coupling $\kappa $ in eq.~(\ref{Xbcd}) is in general complex and a variety of diagrams involving all three generations and including both W exchange and penguins contribute to generating the effective interaction in eq.~(\ref{dXbar}), so the conditions necessary for a sizable CP-violating asymmetry between $\sigma_{X}^{ \rm ann} $ and $ \sigma_{\bar{X}}^{ \rm ann}$ are in place.
An interaction such as eq.~(\ref{Xbcd}) gives rise to
\begin{displaymath}
\sigma_{\bar{X} d \rightarrow \bar{b} \bar{c}} = \frac{1}{8\pi} \kappa^2_{\rm{fo}} \frac{m_X m_b m_c T_{\rm fo}}{(m_b+m_c)^2}.
\end{displaymath}
For the freezeout of $X$ to occur at the correct temperature (see Table 2.1), $\kappa_{\rm{fo}} \approx 10^{-8}\, {\rm GeV}^{-2}$ is needed. This suggests an energy scale for new physics of $\Lambda \,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, 10$ TeV, taking dimensionless couplings to be $\,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, 1$.
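The scale estimate follows directly from $\kappa = g^2/\Lambda^2$; a two-line check, taking the dimensionless coupling $g \sim 1$:

```python
import math

# kappa = g^2 / Lambda^2  =>  Lambda = g / sqrt(kappa)
kappa_fo = 1e-8                       # GeV^-2, freezeout value quoted in the text
g = 1.0                               # dimensionless coupling, taken ~1
Lambda = g / math.sqrt(kappa_fo)      # GeV
# Lambda = 1e4 GeV = 10 TeV, the new-physics scale quoted in the text
```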
$X$ particles could in principle decay via production of $bcd$ quarks, but this final state is too massive to be kinematically allowed. For an $X$ particle mass of 4.5 GeV the decay proceeds off-shell, with a $W$ exchange between the $b$ and $c$ quarks, giving:
\begin{equation}
X\rightarrow csd
\end{equation}
The matrix element for this transition is proportional to
\begin{equation}
{\mathcal M}\approx \kappa g^2 _W |V_{bc}V_{cs}| \int d^4 k \frac{1}{k\llap/}\frac{1}{k\llap/}\frac{1}{k^2-M^2 _W}
\end{equation}
The integral over the loop momentum gives $\ln\left(\Lambda/M_W\right)$, and the diagram is logarithmically divergent.
The decay rate of $X$ today can be estimated as:
\begin{displaymath}
\Gamma \sim m^5 _X \kappa ^2 _0 g^4 _W |V_{bc}V_{cs}|^2,
\end{displaymath}
where $g_W$ is the electroweak $SU(2)$ gauge coupling. The condition $\tau _X \,\raisebox{-0.13cm}{$\stackrel{\textstyle>}{\textstyle\sim}$}\, \tau_{universe}$ places a constraint on the value of the $X$ coupling today, $\kappa _{0} \,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, 10^{-20}$ GeV$^{-2}$. Thus for $X$ to be a valid dark matter candidate, its coupling to ordinary matter needs to have a strong temperature dependence, changing from $10^{-8}\, {\rm GeV}^{-2}$ at a temperature of $\sim 200$ MeV to $10^{-20}\, {\rm GeV}^{-2}$, or effectively zero, at zero temperature. If the interaction in eq.~(\ref{Xbcd}) is mediated by a scalar field $\eta$ with couplings of order 1, its effective mass $m_{\eta}$ should vary from 10 TeV at 200 MeV to $10^{10}$ TeV at zero temperature. The most attractive way to achieve this would be if it were related somehow to a sphaleron-type phenomenon which was allowed above the QCD or chiral phase transition, but strongly suppressed at low temperature. We do not attempt that here and instead display two ``toy'' examples of models where the desired dramatic change in $\kappa$ occurs. Let the dominant contribution to the $\eta$ mass be due to the VEV of another scalar field $\sigma$ which has baryon and other quantum numbers zero. The VEV of $\sigma$ can be rapidly driven from zero to some fixed value, resulting in the desired mass change in $\eta$, by several possible mechanisms. The simplest requires only one additional field with the zero temperature interaction
\begin{equation}
V(\eta ,\sigma )=-m^2 _{\sigma}\sigma ^2+\alpha_1 \sigma ^4 +\alpha_2 \eta ^4 +2\alpha_3 \sigma ^2 \eta ^2.
\end{equation}
The global minimum of this potential at zero temperature is at
\begin{equation}
\langle\eta\rangle=0,\quad\langle\sigma\rangle = \pm \sqrt{\frac{m^2 _{\sigma}}{2\alpha _1}}.
\end{equation}
The mass of the field $\eta$ in this scenario equals
\begin{equation} \label{meta}
m_{\eta }=\sqrt{2\alpha_3 \sigma ^2 }=\sqrt{(\alpha_3/\alpha_1) } m_{\sigma}\sim10^{10} \;\; {\rm TeV}.
\end{equation}
At higher temperature, one loop corrections contribute to the potential and introduce a temperature dependence \cite{senjanovic,weinberg74}:
\begin{equation}
V_{\rm {loop}}=\frac{2\alpha_1+\alpha_3}{6}T^2\sigma ^2+\frac{2\alpha_2+\alpha_3}{6}T^2\eta ^2.
\end{equation}
The new condition for the minimum of the potential in the $\sigma$ direction becomes
\begin{equation}
\langle\sigma ^2\rangle=\frac{m^2 _{\sigma}-T^2\left( 2\alpha _1+\alpha _3\right)/3}{4\alpha _1},
\end{equation}
and for temperatures higher than the critical value
\begin{equation} \label{Tcr}
T_{CR}=\sqrt{\frac{3m^2 _{\sigma}}{2\alpha _1+\alpha _3}}
\end{equation}
the potential has a unique minimum at $\langle\eta\rangle=0$, $\langle\sigma \rangle=0$. Condition (\ref{meta}) together with $T_{CR}\sim 200$ MeV implies the relation
\begin{equation}
\alpha _3\sim 10^7\sqrt{\alpha _1}.
\end{equation}
This large difference in couplings looks unnatural. Fine tuning can be avoided at the price of introducing a second, auxiliary scalar field $\phi$, as in Linde's hybrid inflation models \cite{linde}:
\begin{eqnarray}
V(\sigma,\phi)&=&\frac{1}{4\lambda}(M^2-\lambda \sigma ^2)^2+\frac{m^2\phi ^2}{2}+\frac{g^2}{2}{\phi ^2}{\sigma ^2}\\&=& \frac{M^4}{4\lambda}-\left(\frac{M^2}{2}-\frac{g^2\phi ^2}{2}\right)\sigma ^2+\frac{\lambda}{4}\sigma ^4+\frac{m^2\phi ^2}{2} \nonumber
\end{eqnarray}
The potential is such that, for values of $\phi\geq M/g$, its minimum in the $\sigma$ direction is at $\langle\sigma\rangle=0$. The evolution of fields $\phi$ and $\sigma$ as the universe expands would be as follows. At high temperatures we assume that the field $\phi$ is at a high value,
\begin{equation}
\phi\geq\frac{M}{g}.
\end{equation}
The equation of motion of the field $\phi$ in an expanding Universe is
\begin{equation}
\ddot{\phi}+3H\dot{\phi}+V'(\phi)=0.
\end{equation}
The solution for a radiation dominated universe, where the Hubble parameter scales as $H=1/(2t)$, is
\begin{eqnarray}
\phi (t) &=& C_1\frac{J_{1/4} (mt)}{(mt)^{1/4}} + C_2\frac{Y_{1/4} (mt)}{(mt)^{1/4}} \nonumber \\
&\rightarrow & C' _1+ C' _2 (mt)^{-1/2},~ mt\rightarrow 0
\end{eqnarray}
where $J$ and $Y$ are Bessel functions of the first and second kind; $\phi$ oscillates for sufficiently large $mt$. As $\phi$ rolls down the potential it reaches the critical value $\phi=M/g$ and the symmetry breaking occurs. The potential develops a minimum in the $\sigma$ direction, and the value of the $\sigma$ field tracking the minimum becomes
\begin{equation}
\langle\sigma \rangle=\sqrt{\frac{M^2-g^2\phi ^2}{\lambda}}.
\end{equation}
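The limiting behaviors of the Bessel-function solution for $\phi$ above can be cross-checked by integrating the equation of motion numerically. The following sketch (Python, with $m=1$ in arbitrary units and initial conditions chosen to select the regular branch; all values are illustrative, not taken from the text) confirms that $\phi$ stays frozen for $mt\ll 1$ and oscillates once $mt\gtrsim 1$:

```python
# Numerical check of phi'' + 3H phi' + m^2 phi = 0 with H = 1/(2t)
# (radiation era), integrated with RK4.  m = 1 in arbitrary units;
# the regular (non-singular) branch is selected by phi' ~ 0 at small t.
def evolve_phi(m, t0, t1, phi0, dphi0, dt=1e-3):
    def rhs(t, phi, dphi):
        return dphi, -3.0 / (2.0 * t) * dphi - m * m * phi
    t, phi, dphi = t0, phi0, dphi0
    traj = [(t, phi)]
    while t < t1:
        k1 = rhs(t, phi, dphi)
        k2 = rhs(t + dt / 2, phi + dt / 2 * k1[0], dphi + dt / 2 * k1[1])
        k3 = rhs(t + dt / 2, phi + dt / 2 * k2[0], dphi + dt / 2 * k2[1])
        k4 = rhs(t + dt, phi + dt * k3[0], dphi + dt * k3[1])
        phi += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        dphi += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        t += dt
        traj.append((t, phi))
    return traj

traj = evolve_phi(m=1.0, t0=0.01, t1=10.0, phi0=1.0, dphi0=0.0)
frozen = [p for (t, p) in traj if t < 0.3]   # mt << 1: phi ~ const
assert min(frozen) > 0.95
assert min(p for (_, p) in traj) < 0.0       # mt >~ 1: oscillation sets in
```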
As the temperature drops further $\phi$ goes to zero on a time scale $1/m$ and the VEV of $\sigma$ goes to its asymptotic value
\begin{equation}
\langle\sigma \rangle=\frac{M}{\sqrt{\lambda}}.
\end{equation}
From the condition on the value of the coupling of $X$ today, with $\kappa _{0}\sim 1/m^2 _{\eta}\sim 1/{\langle\sigma \rangle^2}$, and assuming $\lambda \sim 1$, we obtain $M\,\raisebox{-0.13cm}{$\stackrel{\textstyle>}{\textstyle\sim}$}\, 10^{10}$ GeV.
We can place a constraint on $V(\phi)$ from the condition that the energy density in the field $\phi$, $V(\phi)=m^2\phi ^2/2$, should be less than the energy density in radiation at temperatures above $\sim 200$ MeV. Since the field $\phi$ rolls down slowly, as $t^{-1/4}$, so that $\rho_{\phi}\sim t^{-1/2}$, while radiation redshifts more rapidly, as $\rho_{rad}\sim T^4\sim t^{-2}$, it is enough to impose the condition $\rho _{rad}\,\raisebox{-0.13cm}{$\stackrel{\textstyle>}{\textstyle\sim}$}\, \rho _{\phi}$ at 200 MeV. This leads to the condition that the mass parameter for $\phi$ must satisfy $m\,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, 10^{-3}$ eV. Requiring that $mt<1$ at $T=200$ MeV, to avoid the oscillation phase, sets the stronger condition $m\leq 10^{-10}$ eV. Achieving such low masses naturally is a model-building challenge. Work on the cosmological implications of these scenarios is in progress and will be presented elsewhere.
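The last bound can be reproduced with a back-of-the-envelope estimate. The sketch below assumes the standard radiation-era relation $t\simeq 0.30\,g_*^{-1/2}M_{Pl}/T^2$ with $g_*\simeq 17$ below the QCD transition (these inputs are assumptions, not taken from the text):

```python
import math

# Cosmic time at T = 200 MeV for a radiation-dominated universe,
# t ~ 0.30 g_*^{-1/2} M_Pl / T^2 in natural units (assumed standard relation).
g_star = 17.25     # effective relativistic dof below the QCD transition (assumption)
M_Pl = 1.22e19     # Planck mass, GeV
T = 0.2            # GeV
t = 0.301 / math.sqrt(g_star) * M_Pl / T**2   # time in GeV^-1

# Requiring m t < 1 at T = 200 MeV to avoid the oscillation phase:
m_max_GeV = 1.0 / t
m_max_eV = m_max_GeV * 1e9
assert 1e-11 < m_max_eV < 1e-9   # consistent with m <~ 1e-10 eV
```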
\section{Constraints on $X {\bar X}$ DM} \label{Xconstraints}
The approach presented here to solve the DM and BAU puzzles at the same time, with baryonic and anti-baryonic dark matter, can run afoul of observations in several ways, which we now check.
\subsection{Direct detection constraints}
The scattering cross sections $\sigma_{XN}$ and $\sigma_{\bar{X}N}$ must be small enough to be compatible with DM searches for a 4 GeV WIMP. If $X$s interact weakly with nucleons, standard WIMP searches constrain the low energy scattering cross section $\sigma_{DM} \equiv (\sigma^{\rm el}_{\bar{X} N} + \epsilon \sigma^{\rm el}_{XN})/(1+ \epsilon)$. Table 2.1 gives the capture rate of $X$ by the Earth, $R_{cap}$. The capture rate is obtained using the code of Edsjo et al. \cite{edsjo}, which calculates the velocity distribution of weakly interacting dark matter at the Earth, taking into account gravitational diffusion by the Sun, Jupiter and Venus. One cannot use the upper limit on $\kappa_0$, obtained by requiring the $X$ lifetime to be long compared to the age of the Universe, to infer $\sigma^{\rm el}_{XN \{\bar{X} N\} }$ without knowing how the interaction of eq.~(\ref{Xbcd}) is generated, since it is not renormalizable. A naive guess
\begin{equation}
\sigma^{\rm el}_{XN \{\bar{X} N\} } \sim \kappa^4 \Lambda ^2 \frac {m^2 _X m^2 _N}{(m_X+m_N)^2}
\end{equation}
is well below the present limit of $\approx 10^{-38} {\rm cm}^2$ for a 4 GeV particle, even using the maximum allowed value of $\kappa_0$, but the actual value depends on the high-scale physics and could be significantly larger or smaller.
\subsection{Indirect constraints}
A second requirement is that the annihilation of $\bar{X}$ with matter does not produce observable effects. If eq.~(\ref{Xbcd}) is the only coupling of $X$ to quarks and $\kappa_0\simeq 10^{-20}$ GeV$^{-2}$, the effects of annihilation in the Earth, the Sun, Uranus and the galactic center are unobservably small. In this case the very stability of the $X$ implies that its interaction with light quarks is so weak that its annihilation rate in the $T=0$ Universe is vanishingly small. The cross section for the dominant annihilation processes is governed by the same Feynman diagrams that govern $X$ decay, so that dimensional arguments lead to the order-of-magnitude relation
\begin{equation} \label{siganngf}
\sigma^{ann}_{\bar{X}N} \sim m_X^{-3} \tau_X^{-1} \simeq 10^{-72} (30 {\rm Gyr}/\tau_X)~{\rm cm}^2 .
\end{equation}
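The order of magnitude in eq.~(\ref{siganngf}) follows from a simple units conversion; as a sanity check (Python, using only standard natural-unit conversion factors):

```python
# sigma ~ 1/(m_X^3 tau_X), converted from natural units to cm^2.
GEV2_CM2 = 3.894e-28    # (hbar c)^2 in GeV^2 cm^2
S_TO_GEV = 1.519e24     # 1 second in GeV^-1 (1/hbar)
GYR_S = 3.156e16        # seconds per Gyr

m_X = 4.0                                 # GeV
tau_X = 30.0 * GYR_S * S_TO_GEV           # 30 Gyr expressed in GeV^-1
sigma_ann = GEV2_CM2 / (m_X**3 * tau_X)   # cm^2
assert 1e-73 < sigma_ann < 1e-71          # ~ 1e-72 cm^2, as quoted
```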
For completeness, in the rest of this section we discuss the indirect limits which annihilation experiments could place on ${\bar X}_4$ DM, although, as we have seen, the expected $X_4$ cross sections are well below what current limits could probe. \\
\emph{Direct detection of annihilation}. The discussion in this subsection is similar to the analysis of ${\bar H}$ as DM in B-sequestration models in \S\ref{Hindirect}. The rate of $\bar{X}$ annihilation in the SuperK detector is, analogously to the case of ${\bar H}$, the $\bar{X}$ flux at the detector times $\sigma_{\bar{X}N}^{\rm ann}$, times the number of target nucleons in the detector (since annihilation is incoherent), and has the value
\begin{equation} \label{rateSKdir}
R^{dir} _{SK} \cong 10~\left[ \frac{\sigma ^{\rm ann} _{{\bar X}}}{10^{-45}~{\rm cm}^2} \right] ({\rm kton}~{\rm yr})^{-1}.
\end{equation}
The ${\bar X}$ signal is lower than the background if $\tilde{\sigma}^{\rm ann}_{\bar{X}N,0} \le 2 \times 10^{-44}\, {\rm cm}^2$, which is readily satisfied as we have seen above.\\
\emph{Indirect detection of annihilation}: Besides direct observation of annihilation with nucleons in a detector, constraints can be placed from indirect effects of $\bar{X}$ annihilation in concentrations of nucleons.\\
Neutrinos from $\bar{X}$ annihilation can be detected by SuperK, with a background level and sensitivity which depend strongly on neutrino energy and flavor. The rate of observable neutrino interactions in SuperK is given by eq.~(\ref{neutintsSK}). We distinguish two contributions to the DM annihilation rate $\Gamma^{\rm ann}_{\bar{X},s}$ resulting in a neutrino signal: 1) annihilation of the total DM flux $\Phi^{DM} _{\bar{X}}$ as it passes through the Earth, and 2) annihilation of the small fraction of DM particles that are gravitationally captured, through scattering off nuclei in the Earth, and eventually settle in the Earth's core. In the first case the annihilation rate can be estimated as $\Gamma ^{(1)} _{\bar{X},s} \sim N_{E}~\Phi^{DM} _{\bar{X}} \sigma ^{\rm ann} _{{\bar X}}$, where $N_{E}$ is the number of nucleons in the Earth. We assume that the annihilations are spread uniformly in the Earth and that $f_{\nu}$ neutrinos are produced in each annihilation. Taking the effective cross section with which $\nu$'s from the kaon decays make observable interactions in SuperK to be $10^{-42} {\rm cm}^2$ in eq.~(\ref{neutintsSK}), we get a signal in SK of
\begin{equation}
R^{(1)} _{SK} \cong 10^{-6}f_{\nu}\left[ \frac{\sigma ^{\rm ann} _{\bar X}}{10^{-45}~{\rm cm}^2} \right] ({\rm kton}~{\rm yr})^{-1}
\end{equation}
The annihilation of DM in the Earth and the subsequent neutrino production therefore do not produce a detectable signal for $f_{\nu}\,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, 10^7$.
In general, computing the annihilation rate of captured DM is a complex task, because it involves solving the transport equation by which DM is captured at the surface, migrates toward the core and annihilates, eventually reaching a steady-state distribution. However, in equilibrium and neglecting evaporation, the capture rate equals the annihilation rate, $\Gamma^{\rm ann}_{\bar{X},E}=f_{\rm cap} \Phi_{\bar{X}} 4 \pi R_E^2$, see \S~\ref{Hindirect}. The neutrino flux in eq.~(\ref{neutintsSK}) is then independent of $\sigma^{\rm ann}_{\bar{X}N}$, as we have seen in \S~\ref{Hindirect}.
Taking the captured $\bar{X}$ flux on Earth from Table 2.1, eq.~(\ref{neutintsSK}) leads to a predicted rate of the neutrino interaction in SK of
\begin{equation}
R^{(2)} _{SK} \cong 10^{-9}f_{\nu}\left[ \frac{\sigma ^{\rm ann} _{\bar X}}{10^{-45}~{\rm cm}^2} \right] ({\rm kton}~{\rm yr})^{-1}.
\end{equation}
The analogous calculation for the Sun gives even smaller rates for energies in the sub-GeV atmospheric neutrino sample. The most stringent constraint in this B-sequestration model therefore comes from DM annihilation in SuperK, eq.~(\ref{rateSKdir}), and it is safe for cross sections of interest.
\chapter{Summary and Outlook}
In this thesis we addressed several problems related to the nature of dark matter; we briefly summarize our results here.
\paragraph{The existence and properties of the $H$ dibaryon.}
Since it was predicted to be a strong-interaction-stable six-quark state, this particle attracted great interest because it pointed to the possibility of a new type of hadron. As we summarized in \S\ref{Hdibaryon}, an extensive experimental effort has been made to produce and detect it. Recently, experiments on double $\Lambda$ hypernuclei claimed to rule it out. In the work presented in this thesis, we show that, if sufficiently compact ($r_H\,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, 1/2\, r_N$), the $H$ formation time from a double $\Lambda$ system could be longer than the single $\Lambda$ decay lifetime, so the double $\Lambda$ experiments would be insensitive to its existence. Furthermore, we find that, for an even smaller $H$ ($r_H\,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, 1/3\, r_N$), with a mass smaller than $m_H\,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, m_N+m_{\Lambda}$, the $H$ could be cosmologically stable. We also find that, for a reasonable range of values of the couplings to the $\sigma$ meson and the glueball, the $H$ would not bind to nuclei, and therefore anomalous mass isotope experiments cannot rule out its existence.
\paragraph{Can the $H$ be a DM candidate? Is a Standard Model Dark Matter ruled out?}
Given that the $H$ is a neutral particle which could be sufficiently long lived, and has the virtue of being predicted within the SM, we posed the question whether it could be DM. In \S\ref{Hdmcons} we analyzed the results from DM experiments sensitive to the expected $H$ mass and cross-section ranges. This region of parameter space had not been analyzed before, since only the new generation of underground experiments reaches such low masses. We also analyzed the final data from the X-ray experiment XQC, which turns out to exclude a large part of the parameter space at low masses and high cross sections. Surprisingly, there is an allowed window in the DM exclusion region. We show that the window should be easy to close with a dedicated experiment. Given that the $H$ would not bind to nuclei and would be nonrelativistic after the QCD phase transition, the $H$ would be a cold DM candidate allowed by experiments. The production mechanism in the early Universe turns out to be problematic, however, since the $H$ could not be produced in sufficient abundance by the standard mechanism of thermal production. The reason is that, in order to be abundant enough, it would need to stay in equilibrium down to low temperatures ($T\sim 15$ MeV), by which time all the strange particles needed for its production have already decayed.
\paragraph{Can Dark Matter carry baryon number and be the answer to the baryon asymmetry of the Universe?}
We worked on a scenario in which dark matter carries (anti)baryon number and offers a solution to the baryon asymmetry problem. We analyzed two concrete realizations: the $H,{\bar H}$ DM and a new Beyond-the-Standard-Model particle $X$. For the $H,{\bar H}$ scenario we checked that DM detection experiments allow its existence and that it has the right particle properties to be undetected and long lived. In this scenario a new set of constraints, relative to $H$ DM, comes from the annihilation of ${\bar H}$s in regions with a high concentration of nucleons. The ${\bar H}$ successfully evades the constraints from direct detection of annihilation in SuperK and from neutrinos produced by its annihilation in the Sun and the Earth. However, the internal heat production of Uranus, a planet with an anomalously low internal heat output, is lower than the heat that would be produced by the annihilation of captured ${\bar H}$s. This excludes $H{\bar H}$ dark matter. The other scenario, involving the new particle $X$, turns out to be safe from the above constraints. Its stability requires that its coupling to quarks have a temperature dependence, and we analyzed two models which could provide the change in the coupling. It also follows that the value of the coupling today is such that the $X$ is virtually undetectable by current experiments.
\chapter{Relative probability for scattering from different types of nuclei}
\label{appendixA}
The probability $P(x+dx)$ that a particle does not scatter when propagating through a distance $x+dx$ equals the probability $P(x)$ that it does not scatter over the distance $x$, times the probability that it does not scatter from any type $i$ of target nuclei in the layer $dx$, where $\lambda _i=1/(n_i\sigma _{XA_i})$ is the mean free path for scattering from species $i$:
\begin{equation}
P(x+dx)=P(x)\left(1-\sum _i \frac{dx}{\lambda_i} \right)\equiv P(x)\left(1- \frac{dx}{\lambda _{eff}} \right).
\end{equation}
By solving this differential equation one gets the probability that a particle will travel a distance $x$ in a given medium, without scattering,
\begin{equation}
P(x)=e^{-x/\lambda _{eff}}.
\end{equation}
The probability that a particle scatters for the first time, from a given nuclear species $i$, in the layer $(x,x+dx)$ is the product of the probability that it does not scatter over the distance $x$ and the probability that it scatters from species $i$ in $dx$:
\begin{equation}
f_i(x)dx=e^{-x/ \lambda _{eff}}\frac{dx}{\lambda _i}.
\end{equation}
The probability that a particle scatters once from any species in a $dx$ layer is the sum of the single particle probabilities $\sum f_i(x)dx$, where
\begin{equation}
\int ^{\infty} _{0} \sum f_i(x)~dx=1.
\end{equation}
In the simulation we want to generate the spectrum of distances a particle travels before scattering once from any of the elements, using a set of uniformly distributed random numbers.
We can achieve this by equating the differential probability for scattering to that of a uniformly distributed random number,
\begin{equation}
\sum f_i(x)~dx=dR
\end{equation}
After integrating
\begin{equation}
\int ^x _0 \sum f_i(x')~dx'=\int ^R _0 dR',
\end{equation}
we get for the distribution of scattering distances $x$ (using that $1-R$ is uniformly distributed whenever $R$ is)
\begin{equation}
x=-\lambda _{eff} ~\ln R.
\end{equation}
The relative frequency of scattering from a nucleus of type $i$ is then given by
\begin{equation}
\int ^{\infty} _{0} f_i(x)dx=\frac{\lambda _{eff}}{\lambda _i}=\frac {n_i\sigma _{XA_i}}{\sum n_j\sigma _{XA_j}}
\end{equation}
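The sampling rules derived in this appendix can be collected into a short Monte Carlo routine. The sketch below (Python; the species names, densities and cross sections are purely illustrative) draws the free path as $x=-\lambda_{eff}\ln R$ and selects the scattering species with relative frequency $\lambda_{eff}/\lambda_i=n_i\sigma_{XA_i}/\sum_j n_j\sigma_{XA_j}$:

```python
import math
import random

def sample_scatter(n, sigma, rng):
    """Draw (distance to next scatter, scattering species).

    n[i]     : number density of species i
    sigma[i] : scattering cross section on species i
    """
    inv_lam = {i: n[i] * sigma[i] for i in n}   # 1/lambda_i = n_i sigma_i
    inv_lam_eff = sum(inv_lam.values())         # 1/lambda_eff
    x = -math.log(rng.random()) / inv_lam_eff   # x = -lambda_eff ln R
    # species i chosen with probability lambda_eff/lambda_i:
    r, acc = rng.random() * inv_lam_eff, 0.0
    for i, w in inv_lam.items():
        acc += w
        if r <= acc:
            return x, i
    return x, i  # guard against floating-point round-off

rng = random.Random(1)
n = {"Si": 1.0, "O": 3.0}          # illustrative densities
sigma = {"Si": 1.0, "O": 1.0}      # illustrative cross sections
draws = [sample_scatter(n, sigma, rng) for _ in range(200_000)]
mean_x = sum(x for x, _ in draws) / len(draws)
frac_O = sum(1 for _, i in draws if i == "O") / len(draws)
assert abs(mean_x - 0.25) < 0.01   # lambda_eff = 1/(1+3)
assert abs(frac_O - 0.75) < 0.01   # n_O sigma_O / sum_j n_j sigma_j
```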
\chapter{MC simulation for DM heavier than 10 GeV}
\label{appendixB}
We assume the following function for the form factor, as explained in Section \ref{directdet},
\begin{equation}
F^2(q^2)=e^{-\frac{1}{10}(qR)^2},
\end{equation}
where $q$ is momentum transfer and $R$ is the nuclear radius.
For a particle moving with a given velocity $v$, the mean free path to the next collision is obtained using the cross section $\sigma _{tot}$, which corresponds to $\sigma (q)$ integrated over the available momentum-transfer range, from zero to $q^2 _{max}$, where $q^2 _{max}=2m_A E_{R,max}$ and $E_{R,max}=2\mu ^2 (v/c)^2/m_A$:
\begin{equation}
\sigma _{tot}=\sigma _0\frac{\int^{q^2 _{max}} _0 F^2(q^2)dq^2}{\int^{q^2 _{max}} _0 dq^2}.
\end{equation}
After a particle travels the distance calculated from the mean free path described above, the collision is simulated. The momentum transfer of a collision is determined based on the distribution given by the form factor function, as in the usual Monte Carlo method procedure
\begin{equation}
p=\frac{\int ^{q^2} _0 F^2(q'^2)dq'^2}{\int ^{q^2 _{max}} _0 F^2(q'^2)dq'^2},
\end{equation}
where $p$ is a uniformly distributed random number from $0$ to $1$.
Once the momentum transfer of the collision is determined, the recoil energy of the nucleus, $E_R$, and the scattering angle of the collision, $\theta _{CM}$, are uniquely determined.
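Since the assumed Gaussian form factor integrates in closed form, the inverse-transform relation above can be solved for $q^2$ explicitly. A minimal sketch (Python; the arguments are illustrative, in units with $\hbar=c=1$):

```python
import math

def sample_q2(p, q2_max, R):
    """Invert p = int_0^{q^2} F^2 dq'^2 / int_0^{q^2_max} F^2 dq'^2
    for F^2(q^2) = exp(-(qR)^2/10)."""
    a = R * R / 10.0
    norm = 1.0 - math.exp(-a * q2_max)
    return -math.log(1.0 - p * norm) / a

# Endpoints map correctly and the mapping is monotonic:
assert sample_q2(0.0, 1.0, 1.0) == 0.0
assert abs(sample_q2(1.0, 1.0, 1.0) - 1.0) < 1e-9
assert sample_q2(0.3, 1.0, 1.0) < sample_q2(0.6, 1.0, 1.0)
```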
We repeat this procedure while following the propagation of a particle to the detector. If the particle reaches the detector we simulate the collision with the target nuclei. For each collision in the target, the energy deposited in the detector, $E_R$, is determined as above, and the energy transfer determines the cross section on target nucleus $i$ as $\sigma_{XA_i} (E_R)=\sigma _{XA,0} F^2(E_R )$. The rate in the detector is found as in equation (\ref{sum}), with the only difference that the sum now runs over $\sum_{i} \langle v(\alpha (t)) \sigma _{XA_i}\rangle_i$ instead of depending only on $v(\alpha (t))$.
\chapter{Dark Matter and the Baryon Asymmetry of the Universe} \label{intro}
The existence of Dark Matter (DM) is, today, well established. The name ``Dark'' derives from the fact that it is non-luminous and non-absorbing; it can be detected (so far) only through its gravitational interaction with ordinary matter. One of the first pieces of evidence for the presence of DM was F.~Zwicky's 1933 measurement of the velocities of galaxies in the gravitationally bound Coma cluster, \cite{Zwicky:1933gu}. Zwicky found that the galaxies are moving much faster than one would expect if they only felt the gravitational attraction of visible nearby objects. Nevertheless, the existence of dark matter was not firmly established until the 1970s, when the rotational velocity of stars and gas orbiting at a distance $r$ from the galactic center was measured. The velocity at a distance $r$ scales as $v\sim\sqrt{M(r)/r}$, where $M(r)$ is the mass enclosed by the orbit. If the measurement is performed outside the visible part of the galaxy, one would expect $M(r)\sim\rm{const.}$, or $v\sim 1/\sqrt{r}$. Instead, observations show that $v\sim\rm{const.}$, implying the existence of a dark mass with radial dependence $M\sim r$, or $\rho\sim 1/r^2$ assuming spherical symmetry. The existence of dark mass is probed on different scales: the velocities of stars, gas clouds, globular clusters, or, as we have seen, entire galaxies are larger than one would predict based on the gravitational potential inferred from the observed, luminous mass.
More recent methods for mapping DM include measurements of the X-ray temperature of the hot gas in galaxy clusters, which reflects the gravitational potential and therefore the mass of the cluster, and observations of the gravitational lensing of background galaxies caused by the mass of a cluster in the foreground.
On a more theoretical basis, the presence of dark matter allows one to relate the anisotropies of the cosmic microwave background (CMB) and the structures observed in galaxy and Lyman-$\alpha$ surveys to a common primordial origin in the framework of the inflationary model. The currently most accurate determination of the DM and baryonic energy densities comes from global fits of cosmological parameters to these observations. For instance, using measurements of the CMB and the spatial distribution of galaxies at large scales, \cite{Tegmark:2003ud}, one gets the following constraints on the ratios of the measured energy densities to the critical energy density $\rho_{cr}=3H^2 _0/8\pi G_N$, $\Omega _{DM}=\rho_{DM}/\rho_{cr}$ and $\Omega _{b}=\rho_{b}/\rho_{cr}$:
\begin{eqnarray} \label{Omega}
\Omega_{DM}h^2 &=& 0.1222\pm 0.009,\nonumber\\
\Omega_b h^2&=& 0.0232\pm 0.0013,
\end{eqnarray}
where $h=0.73\pm 0.03$ is the Hubble constant in units of $100$ km s$^{-1}$Mpc$^{-1}$, $H_0=h~100$ km s$^{-1}$ Mpc$^{-1}$. These two numbers are surprisingly similar,
\begin{equation} \label{ratio}
\Omega _{DM}/\Omega _{b}=5.27\pm0.49 ,
\end{equation}
even though in conventional theories there is no reason for them to be so close: they could differ by many orders of magnitude. In our work we will explore the idea that these two numbers might originate from the same physical process, thereby making the value of their ratio natural. \\
The nature of the dark matter particle is an unsolved problem. Candidates for dark matter must satisfy several basic conditions: they should interact very weakly with electromagnetic radiation, they must have a relic density that corresponds to the observed DM energy density, eq.~(\ref{Omega}), they should have a lifetime longer than the lifetime of the universe, and they should be neutral particles. Models of structure formation based on the inflation scenario prefer so-called ``cold'' dark matter, {\it i.e.} dark matter particles which are non-relativistic at the time of galaxy formation. In these models dark matter fluctuations, caused by primordial fluctuations in the inflaton field, are responsible for the growth of the structures observed today. Baryons follow the DM fluctuations, falling into their gravitational wells where they form astrophysical objects. Relativistic dark matter would have a large {\it free-streaming} length, below which no structure could form, while cold dark matter has a free-streaming length small enough to be irrelevant for structure formation.
The Standard Model (SM) is believed not to provide a viable DM particle candidate. Neutrinos were long believed to be good SM candidates, but recently derived constraints on the neutrino mass show that they cannot provide enough energy density to be the only dark matter component (the current limit is $\Omega _{\nu}h^2\,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, 0.07$). The DM candidate is therefore usually sought in physics beyond the Standard Model: the most important candidates today, which satisfy the above requirements and are well motivated by particle physics considerations, are {\it axions} and the {\it lightest supersymmetric particle}.
Axions are pseudo Nambu-Goldstone bosons associated with the spontaneous breaking of a Peccei-Quinn $U(1)$ symmetry, \cite{Peccei:1977ur,Peccei:1977hh}, at a scale $f_A$, introduced to solve the strong CP problem. Their mass is inversely proportional to $f_A$, $m_A=0.62\times 10^{-3}~{\rm eV}~(10^{10}~{\rm GeV}/f_A)$. The allowed range for the axion mass is $10^{-6}~{\rm eV}\,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, m_A\,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, 10^{-2}$ eV, see for instance \cite{PDBook}. The lower bound derives from the condition that the energy density in axionic DM not exceed the observed DM density, and the upper bound derives most restrictively from the measurement of the supernova neutrino signal (the duration of the SN 1987A neutrino signal of a few seconds indicates that the newborn star cooled mostly by neutrinos rather than through an invisible channel, such as axions). Axions are produced with the DM abundance only for large values of the decay constant $f_A$, which implies that they did not come into thermal equilibrium in the early universe. They were produced non-thermally, for example through a vacuum misalignment mechanism, see~\cite{axionmissmatch}. Experimental searches for axionic DM have been performed dominantly through the axion-to-photon coupling. The Lagrangian is ${\mathcal L}=g_{A\gamma}\;\vec{E}\cdot\vec{B} \phi _A$, where $\phi _A$ is the axion field and $g_{A\gamma}$ is a coupling whose strength is an important parameter in axion models; it permits the conversion of an axion into a single real photon in an external electromagnetic field, {\it i.e.} a Primakoff interaction. Halo axions may be detected in microwave cavity experiments by their resonant conversion into a quasi-monochromatic microwave signal in a cavity permeated by a strong magnetic field. The cavity ``Q factor'' enhances the conversion rate on resonance.
Currently two experiments searching for axionic DM are taking data: one at LLNL in California, \cite{Peng:2000hd}, and the CARRACK experiment, \cite{Yamamoto:2000si}, in Kyoto, Japan. Preliminary results of the CARRACK I experiment exclude axions with mass in a narrow range around $10\mu$eV as major component of the galactic dark halo for some plausible range of $g_{A\gamma}$ values. This experiment is being upgraded to CARRACK II, which intends to probe the range between 2 and 50 $\mu$eV with sensitivity to all plausible axion models, if axions form most of DM.
The lightest supersymmetric particle belongs to the general class of {\it weakly interacting massive particles}, so-called WIMPs. WIMPs are particles with masses of the order of 10 GeV to a TeV and with cross sections of weak-interaction strength, $\sigma\sim G^{~2}_F~m^2 _{WIMP}$. Supersymmetry (SUSY) allows for an R-parity symmetry which implies that the lightest SUSY particle is absolutely stable, offering a good candidate for DM. In most SUSY models the lightest supersymmetric particle is the neutralino, a linear combination of the photino, the wino and two higgsinos. Under the assumption that WIMPs were in thermal equilibrium after inflation, one can calculate the temperature at which their interaction rate with Standard Model particles drops below the expansion rate of the universe. At that point they decouple from the thermal bath and their number in a co-moving volume stays constant at later times (they {\it freeze out}). Freeze-out happens at a temperature $T\simeq m/20$, almost independently of the particle properties, which means that the particles are non-relativistic at the time of decoupling, making WIMPs cold DM candidates. The abundance of DM in this scenario is $\Omega_{DM}\simeq 0.1~{\rm pb}/\langle \sigma v \rangle$, which, surprisingly, corresponds to the measured DM density for weak-scale cross sections. The fact that their mass/cross-section values are in the correct range, together with good motivation from SUSY, makes them attractive candidates.
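The relic-abundance relation just quoted can be checked by restoring units: $\Omega_{DM}\simeq 0.1~{\rm pb}/\langle\sigma v\rangle$ corresponds to the standard statement $\Omega h^2\simeq 3\times 10^{-27}~{\rm cm^3\,s^{-1}}/\langle\sigma v\rangle$ (an assumed textbook value, not taken from the text). A minimal Python check:

```python
# A cross section of 1 pb moving at ~c gives <sigma v> ~ 3e-26 cm^3/s,
# which the assumed standard freeze-out relation maps to Omega h^2 ~ 0.1.
PB_CM2 = 1e-36          # 1 pb in cm^2
C_CMS = 3.0e10          # speed of light, cm/s
sigma_v = 1.0 * PB_CM2 * C_CMS          # <sigma v> for sigma = 1 pb, cm^3/s
omega_h2 = 3e-27 / sigma_v              # assumed relation (textbook value)
assert 0.05 < omega_h2 < 0.2            # ~0.1: weak-scale cross section
                                        # reproduces the observed DM density
```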
Today, a large part of the parameter space expected for the neutralino has been explored: a region of mass $10$ GeV $\,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, m \,\raisebox{-0.13cm}{$\stackrel{\textstyle<}{\textstyle\sim}$}\, $ TeV, from $\sigma \sim 10$ mb down to $\sigma \sim 10^{-6}$ pb, and a new generation of experiments reaching $\sigma \sim 10^{-7}$--$10^{-8}$ pb is planned for the near future. Since direct and indirect detection searches have so far been unsuccessful, the nature of dark matter is still unresolved, although not all of the possible SUSY and axion parameter range has been explored, see for instance \cite{ellis,PDBook,axion}. In this thesis we examine alternative candidates for DM, in particular candidates connected to the Baryon Asymmetry of the Universe, described in detail below.\\
Another important open question in our understanding of the Universe today is the observed Baryon Asymmetry of the Universe. The Dirac equation places anti-matter on an equal footing with matter: in the Big Bang cosmological model the early epoch should contain a fully mixed state of matter and anti-matter. As the Universe expanded and cooled, this situation would result in the annihilation of matter and anti-matter, leaving a baryon mass density today of $\Omega_b\simeq 4\times 10^{-11}$, much lower than the observed value $\Omega _b\simeq 0.04$, eq.~(\ref{Omega}) (or a baryon-to-photon ratio $n_b/n_{\gamma}\approx 10^{-18}$, eight orders of magnitude smaller than measured). Evidently some mechanism had to act to introduce an asymmetry in the baryon and antibaryon abundances and prevent their complete annihilation. The apparent creation of matter in excess of antimatter in the early universe is called Baryogenesis; for a review see, for instance, \cite{baryogenreview}. It was A.~Sakharov who first suggested, in 1967 \cite{sakharov}, that the baryon density might not come from initial conditions, but could be understood in terms of physical processes. He enumerated three necessary conditions for baryogenesis:
\begin{enumerate}
\item[1.]{\it Baryon number violation:} If baryon number (B) is conserved in all reactions, then a baryon excess at present can only reflect asymmetric initial conditions.
\item[2.]{\it C and CP violation:} Even in the presence of B-violating reactions, if CP is conserved, every reaction which produces a particle will be accompanied by a reaction which produces its antiparticle at the same rate, so no net baryon number could be generated.
\item[3.]{\it Departure from thermal equilibrium:} the CPT theorem guarantees equal masses for a particle and its antiparticle, so in thermal equilibrium the densities of particles and antiparticles are equal and again no baryon asymmetry could exist.
\end{enumerate}
Several mechanisms have been proposed to understand the baryon asymmetry. We comment on a few of the most important models below.\\
\emph{Grand Unified Theory (GUT) scale Baryogenesis (\cite{GUT1,GUT2,GUT3}):} Grand Unified Theories unify the gauge interactions of the strong, weak and electromagnetic interactions in a single gauge group. The GUT scale is typically of order $10^{16}$ GeV, so baryogenesis in this model occurs very early in the history of the Universe. GUTs generically have baryon-violating reactions, such as proton decay (not yet observed), and they have heavy particles whose decays can provide a departure from equilibrium. The main objections to this possibility come from inflation and reheating models -- the temperature of the universe after reheating is, in most models, well below $M_{GUT}$. Furthermore, if baryogenesis occurs before inflation it is washed out in the exponential expansion. \\
\emph{Electroweak Baryogenesis (\cite{ewbg}):} In this model baryogenesis occurs in the SM at the Electroweak Phase Transition (EWPT) -- the era when the Higgs first acquired a vacuum expectation value (VEV) and the SM particles acquired masses through their interaction with the Higgs field. This transition happened at a temperature of around 100 GeV. The Standard Model satisfies all of the Sakharov conditions:
\begin{enumerate}
\item In the SM there are no dimension-4 operators consistent with gauge symmetry which violate baryon (B) or lepton (L) number. The leading operators which violate B are dimension 6, suppressed by $O(1/M^2)$, and the operators which violate L are dimension 5, suppressed by $O(1/M)$, where $M$ is the scale of the high-energy physics which violates B or L. Within the SM, the B and L currents are not exactly conserved, because they are anomalous. However, the combination $j^{\mu}_B-j^{\mu}_L$ is anomaly-free and is an exactly conserved quantity in the SM. In perturbation theory these effects vanish, but in non-abelian gauge theories there are non-perturbative configurations which contribute to the non-conservation of the currents. The vacuum structure of a Yang-Mills theory has an infinite set of states; in tunneling between these states, because of the anomaly, the baryon and lepton numbers change. At zero temperature tunneling effects are exponentially suppressed, by $\exp(-2\pi /\alpha)$. At finite temperature this rate is larger. To estimate it one can look for the field configuration which corresponds to sitting on top of the barrier -- a solution of the static equations of motion with finite energy, known as a sphaleron. The rate for thermal fluctuations to cross the barrier per unit time and volume is proportional to the Boltzmann factor for this configuration, \cite{ewbg,dine,arnold}, $\Gamma =T^4 e^{-cM_W/g^2T}$. At high temperature $M_W$ vanishes and the rate takes the form $\Gamma=\alpha ^4 _W T^4$.
\item CP violation has been experimentally observed in kaon decays and is present in the SM. However, SM CP violation must involve all three generations. The lowest-order diagram that involves three generations and contributes to CP-violating processes relevant to baryogenesis is suppressed by 12 Yukawa couplings. CKM CP violation contributes a factor of $10^{-20}$ to the amount of baryon asymmetry that could arise in the SM, so CP violation beyond the Standard Model is usually invoked.
\item Thermal nonequilibrium is achieved during first-order phase transitions in the cooling early universe. In the electroweak theory, there is a transition to a phase with massless gauge bosons. It turns out that, for a sufficiently light Higgs, this transition is of the first order. A first order transition is not, in general, an adiabatic process. As we lower the temperature, the transition proceeds by the formation of bubbles. The moving bubble walls are regions where the Higgs fields are changing and all of Sakharov's conditions are satisfied. It has been shown that various non-equilibrium processes near the wall can produce baryon and lepton numbers, \cite{ewpt1,ewpt2}. Avoiding the washing out of the asymmetry requires that after the phase transition, the sphaleron rate should be small compared to the expansion rate of the universe, or, as we have seen above, that $M_W$ be large compared to the temperature. This, in turn, means that the Higgs expectation value must be large immediately after the transition. It turns out that the current lower limit on the Higgs boson mass rules out any possibility of a large enough Higgs expectation value after the phase transition, at least in the minimal model with a single Higgs doublet.
\end{enumerate}
Any baryon asymmetry produced in the Standard Model is far too small to account for observations, the main obstacle being the heaviness of the Higgs, and one has to turn to extensions of the Standard Model in order to explain the observed asymmetry. \\
\emph{Leptogenesis (\cite{leptogen}):} In the last few years the evidence for neutrino masses has become more and more compelling. The most economical way to explain these facts is that neutrinos have Majorana masses arising from lepton-number violating dimension five operators (permitted if the fermion carries no conserved charges). These interactions have the form ${\mathcal L}=\frac{1}{M}L H L H$. For $M=M_{pl}$ the neutrino mass would be too small to account for the observed values. The see-saw mechanism provides a simple picture of how the lower scale might arise. It assumes that in addition to the SM neutrinos, there are some SM singlet, heavy neutrinos, $N$. These neutrinos could couple to the left handed doublets $\nu_L$, providing the correct mass for the light neutrinos.
What is relevant is that heavy neutrinos $N$ can decay, for example, to both $H+\nu$ and $H+{\bar \nu}$, breaking the lepton number. CP violation can enter through phases in the Yukawa couplings and mass matrices of the $N$'s. At tree-level these phases will cancel out, so it is necessary to consider one loop diagrams and to look at quantum corrections in which dynamical phases can appear in the amplitudes. These decays then produce a net lepton number, and hence a net $B-L$. The resulting lepton number will be further processed by sphaleron interactions, yielding a net lepton and baryon number. Reasonable values of the neutrino parameters give asymmetries of the order we seek to explain. However, all parameters needed for precise calculations are not measured yet (in the case of $\nu_L$ masses and CP violating couplings) and one needs some additional information about the masses of the $N$'s.\\
\emph{Affleck-Dine Baryogenesis(\cite{affleckdine})} In supersymmetric theories, the ordinary quarks and leptons are accompanied by scalar fields, which carry baryon and lepton number. A coherent field, {\it i.e.} a large classical value of such a field, can in principle carry a large amount of baryon number. Through interactions with the inflaton field CP-violating and B-violating effects can be introduced. As the scalar particles decay to fermions, the net baryon number the scalars carry can be converted into an ordinary baryon excess.
The Affleck-Dine mechanism is also a mechanism for dark matter creation. Fluctuations in the scalar quark fields (``Q-balls'') are a dark matter candidate if they are stable. If they are unstable, they can still decay into dark matter. Since the Affleck-Dine mechanism describes the production of baryons {\it and} dark matter, it could provide an explanation of the ratio between $\Omega_{DM}$ and $\Omega_{_b}$ from first principles. If supersymmetry is discovered, given the success of inflation theory, the Affleck-Dine scenario will appear quite plausible.\\
To summarize, the abundance of baryons and dark matter in our Universe poses several challenging puzzles:
\begin{enumerate}
\item Why is there a non-zero net nucleon density and what determines its value?
\item What does dark matter consist of? Can it be explained as a SM particle?
\item Is it an accident that the dark matter density is roughly comparable to the nucleon density, $\rho_{DM} = 5 ~\rho_N$?
\end{enumerate}
In the next Chapter we outline the scenario which aims to connect and answer the questions above. In Chapters \ref{Hdibaryon}, \ref{Hdmcons} and \ref{X} we focus on the two concrete DM candidates in this scenario, one a particle within the Standard Model, the other a BSM candidate. We comment on their particle physics properties and experimental constraints. As we will see, the SM candidate is ruled out, while the Beyond the Standard Model candidate is safe by many orders of magnitude.
\section{Introduction}
In the past, idealized models for the turbulent fluctuations found in the solar wind plasma
or in the interstellar medium have been proposed (e.g. Matthaeus et al. 1995). We are concerned
with statistically axisymmetric models of magnetostatic fluctuations $\delta \vec{B} (\vec{x})$ that
are transverse to a uniform mean magnetic field $\vec{B}_0$. If solar wind turbulence is considered,
the mean field might be identified with the magnetic field of the Sun. The total magnetic field is a
superposition of this mean field and the fluctuations $\vec{B}(\vec{x})=\vec{B}_0+\delta \vec{B}(\vec{x})$.
Whereas we usually approximate the mean field by a constant field aligned parallel to the $z-$axis
($\vec{B}_0 = B_0 \vec{e}_z$), the turbulent contribution has to be replaced by turbulence models.
Some prominent examples are slab, 2D, and two-component models that include both slab and 2D
contributions (e.g. Matthaeus et al. 1990).
There are recent spacecraft measurements of magnetic correlations in the solar wind (see e.g.
Matthaeus et al. 2005, Dasso et al. 2007). Such measurements are very interesting and important
since they allow an improved understanding of turbulence. For instance, characteristic length scales
of turbulence such as the correlation length, the bendover scale, and the dissipation scale can be
obtained from such observations. Also the investigation of spectral anisotropy using data from
different spacecraft missions such as Wind and ACE is possible. These properties of solar wind
turbulence are very important for several investigations (heating and damping of the solar wind
plasma, transport of charged cosmic rays). A further important turbulence property is the
turbulence dynamics (the time dependence of the stochastic magnetic fields). In principle, data sets
from Wind and ACE can also be used to compute dynamical correlation functions to
explore the turbulence dynamics.
In a recent article (Shalchi 2008) magnetic correlation functions were computed analytically. Such
analytical forms of magnetic correlations complement data analysis results such as Matthaeus et al.
(2005) and Dasso et al. (2007). Since we expect that future data analysis work will also allow the
investigation of temporal correlation functions, we explore theoretically (numerically and analytically)
the forms of these Eulerian correlations. These results can be compared with data analysis results
as soon as they are available.
The organization of the paper is as follows: in section 2 we define and discuss the basic parameters
which are useful for describing turbulence. Furthermore, we explain the slab, the 2D, and the slab/2D
composite model. In section 3 we review different models for the turbulence dynamics. In section 4
we compute Eulerian correlation functions numerically and analytically. In section 5 the results of this
article are summarized.
\section{General remarks - setting}
\subsection{The turbulence correlation function}
The key function in turbulence theory is the two-point-two-time correlation tensor. For homogeneous turbulence its
components are
\begin{equation}
R_{lm} (\vec{x},t) = \left< \delta B_l (\vec{x},t) \delta B_m^{*} (\vec{0},0) \right>.
\label{s1e1}
\end{equation}
The brackets $\left< \dots \right>$ used here denote the ensemble average. It is convenient to introduce the
correlation tensor in the $\vec{k}-$space. By using the Fourier representation
\begin{equation}
\delta B_l (\vec{x},t) = \int d^3 k \; \delta B_l (\vec{k},t) e^{i \vec{k} \cdot \vec{x}}
\label{s1e2}
\end{equation}
we find
\begin{eqnarray}
R_{lm} (\vec{x},t) = \int d^3 k \int d^3 k^{'} \left< \delta B_{l} (\vec{k},t) \delta B_m^{*} (\vec{k}^{'},0) \right>
e^{i \vec{k} \cdot \vec{x}}.
\label{s1e3}
\end{eqnarray}
For homogeneous turbulence we have
\begin{equation}
\left< \delta B_{l} (\vec{k},t) \delta B_m^{*} (\vec{k}^{'},0) \right> = P_{lm} (\vec{k},t) \delta (\vec{k} - \vec{k}^{'})
\label{s1e4}
\end{equation}
with the correlation tensor in the $\vec{k}-$space $P_{lm} (\vec{k},t)$. By assuming the same temporal
behavior of all tensor components, we have
\begin{equation}
P_{lm} (\vec{k}, t) = P_{lm} (\vec{k}) \; \Gamma (\vec{k}, t)
\end{equation}
with the dynamical correlation function $\Gamma (\vec{k}, t)$. Eq. (\ref{s1e3}) then becomes
\begin{equation}
R_{lm} (\vec{x},t) = \int d^3 k \; P_{lm} (\vec{k}) \Gamma (\vec{k}, t) e^{i \vec{k} \cdot \vec{x}}
\label{s1e5}
\end{equation}
with the magnetostatic tensor $P_{lm} (\vec{k}) = \left< \delta B_{l} (\vec{k}) \delta B_m^{*} (\vec{k}) \right>$.
\subsection{The two-component turbulence model}
In this paragraph we discuss the static tensor $P_{lm} (\vec{k})$ defined in Eq. (\ref{s1e5}). Matthaeus \&
Smith (1981) have investigated axisymmetric turbulence and derived a general form of $P_{lm} (\vec{k})$
for this special case. In our case the symmetry-axis has to be identified with the axis of the uniform mean
magnetic field $\vec{B}_0 = B_0 \vec{e}_z$. For most applications (e.g. plasma containment devices,
interplanetary medium) the condition of axisymmetry should be well satisfied. Furthermore, we neglect
magnetic helicity and we assume that the parallel component of the turbulent fields is zero or negligibly
small ($\delta B_z = 0$). In this case the correlation tensor has the form
\begin{equation}
P_{lm} (\vec{k}) = A(k_{\parallel},k_{\perp}) \left[ \delta_{lm} - \frac{k_l k_m}{k^2} \right], \quad l,m = x,y
\label{s1e20}
\end{equation}
and $P_{lz}=P_{zm}=0$. The function $A(k_{\parallel},k_{\perp})$ is controlled by two turbulence properties:
the turbulence geometry and the turbulence wave spectrum. The geometry describes how
$A(k_{\parallel},k_{\perp})$ depends on the direction of the wave vector $\vec{k}$ with respect to the
mean field. There are at least three established models for the turbulence geometry:
\begin{enumerate}
\item The slab model: here we assume the form
\begin{equation}
A^{slab} (k_{\parallel},k_{\perp}) = g^{slab} (k_{\parallel}) \frac{\delta (k_{\perp})}{k_{\perp}}.
\label{s1e21}
\end{equation}
In this model the wave vectors are aligned parallel to the mean field ($\vec{k} \parallel \vec{B}_0$).
\item The 2D model: here we replace $A(k_{\parallel},k_{\perp})$ by
\begin{equation}
A^{2D} (k_{\parallel},k_{\perp}) = g^{2D} (k_{\perp}) \frac{\delta (k_{\parallel})}{k_{\perp}}.
\label{s1e22}
\end{equation}
In this model the wave vectors are aligned perpendicular to the mean field ($\vec{k} \perp \vec{B}_0$)
and are therefore in a two-dimensional (2D) plane.
\item The slab/2D composite (or two-component) model: In reality the turbulent fields can depend on
all three coordinates of space. A quasi-three-dimensional model is the so-called slab/2D composite
model, where we assume a superposition of slab and 2D fluctuations:
$\delta B_i^{comp} (x,y,z) = \delta B_i^{slab} (z) + \delta B_i^{2D} (x,y)$. Because of
$< \delta B_i^{slab} (z) \delta B_j^{*,2D} (x,y) > = 0$, the correlation tensor has the form
\begin{equation}
P_{lm}^{comp} (\vec{k}) = P_{lm}^{slab} (\vec{k}) + P_{lm}^{2D} (\vec{k}).
\label{s4e2}
\end{equation}
In the composite model the total strength of the fluctuations is $\delta B^2 = \delta B_{slab}^2 + \delta B_{2D}^2$.
The composite model is often used to model solar wind turbulence. It was demonstrated by several
authors (e.g. Bieber et al. 1994, 1996) that $20 \%$ slab / $80 \%$ 2D should be realistic in the solar wind at 1 AU heliocentric
distance.
\end{enumerate}
The wave spectrum describes the wave number dependence of $A (k_{\parallel},k_{\perp})$. In the
slab model the spectrum is described by the function $g^{slab} (k_{\parallel})$ and in the 2D model
by $g^{2D} (k_{\perp})$.
As demonstrated in Shalchi (2008), the combined correlation function (defined as $R_{\perp}=R_{xx}+R_{yy}$) for
pure slab turbulence is given by
\begin{equation}
R_{\perp}^{slab} (z) = 8 \pi \int_{0}^{\infty} d k_{\parallel} \; g^{slab} (k_{\parallel}) \cos (k_{\parallel} z)
\label{corrslab}
\end{equation}
and the correlation function for pure 2D is
\begin{equation}
R_{\perp}^{2D} (\rho) = 2 \pi \int_{0}^{\infty} d k_{\perp} \; g^{2D} (k_{\perp}) J_0 (k_{\perp} \rho).
\label{s3e12}
\end{equation}
Here $z$ is the distance parallel with respect to the mean magnetic field and $\rho$ denotes the distance
in the perpendicular direction. To evaluate these formulas we have to specify the two wave spectra
$g^{slab} (k_{\parallel})$ and $g^{2D} (k_{\perp})$.
\subsection{The turbulence spectrum}
In a cosmic ray propagation study, Bieber et al. (1994) proposed spectra of the form
\begin{eqnarray}
g^{slab} (k_{\parallel}) & = & \frac{C(\nu)}{2 \pi} l_{slab} \delta B_{slab}^2 (1 + k_{\parallel}^2 l_{slab}^2)^{-\nu} \nonumber\\
g^{2D} (k_{\perp}) & = & \frac{2 C(\nu)}{\pi} l_{2D} \delta B_{2D}^2 (1 + k_{\perp}^2 l_{2D}^2)^{-\nu}
\end{eqnarray}
with the inertial range spectral index $2 \nu$, the two bendover length scales $l_{slab}$ and $l_{2D}$, and the
strength of the slab and the 2D fluctuations $\delta B_{slab}^2$ and $\delta B_{2D}^2$. By requiring normalization
of the spectra
\begin{equation}
\delta B^2 = \delta B_x^2 + \delta B_y^2 + \delta B_z^2
= \int d^3 k \; \left[ P_{xx} (\vec{k}) + P_{yy} (\vec{k}) + P_{zz} (\vec{k}) \right]
\label{s2e4}
\end{equation}
we find
\begin{equation}
C(\nu) = \frac{1}{2 \sqrt{\pi}} \frac{\Gamma (\nu)}{\Gamma (\nu-1/2)}.
\label{s2e5}
\end{equation}
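As an illustrative sanity check (ours, not part of the original article), the normalization constant $C(\nu)$ of Eq. (\ref{s2e5}) can be verified numerically: it must satisfy $4 C(\nu) \int_0^\infty (1+x^2)^{-\nu}\,dx = 1$. The sketch below uses the substitution $x=\tan\theta$ and a simple midpoint rule.

```python
# Numerical sanity check (illustrative): verify that
# C(nu) = Gamma(nu) / (2 sqrt(pi) Gamma(nu - 1/2))
# satisfies 4 C(nu) * int_0^inf (1 + x^2)^(-nu) dx = 1.
import math

def C(nu):
    """Normalization constant of Eq. (s2e5)."""
    return math.gamma(nu) / (2.0 * math.sqrt(math.pi) * math.gamma(nu - 0.5))

def spectral_integral(nu, n=200_000):
    """int_0^inf (1+x^2)^(-nu) dx via x = tan(theta) and the midpoint rule."""
    h = (math.pi / 2.0) / n
    total = 0.0
    for i in range(n):
        theta = (i + 0.5) * h
        total += math.cos(theta) ** (2.0 * nu - 2.0)  # (1+x^2)^(-nu) dx
    return total * h

for nu in (1.0, 5.0 / 6.0):  # nu = 5/6 corresponds to a Kolmogorov inertial range
    print(nu, 4.0 * C(nu) * spectral_integral(nu))  # both should be close to 1
```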
By combining these spectra with Eqs. (\ref{corrslab}) and (\ref{s3e12}) the slab correlation function
\begin{equation}
R_{\perp}^{slab} (z) = 4 C(\nu) \delta B_{slab}^2 l_{slab}
\int_{0}^{\infty} d k_{\parallel} \; (1+k_{\parallel}^2 l_{slab}^2)^{-\nu} \cos (k_{\parallel} z)
\label{s2e6}
\end{equation}
as well as the 2D correlation function
\begin{equation}
R_{\perp}^{2D} (\rho) = 4 C (\nu) \delta B_{2D}^2 l_{2D}
\int_{0}^{\infty} d k_{\perp} \; (1+k_{\perp}^2 l_{2D}^2)^{-\nu} J_0 (k_{\perp} \rho)
\label{s3e13}
\end{equation}
can be calculated. In Eq. (\ref{s3e13}) we have used the Bessel function $J_0 (x)$. In Shalchi (2008)
such calculations valid for magnetostatic turbulence are presented.
\subsection{Correlation functions for dynamical turbulence}
For dynamical turbulence the slab and the 2D correlation functions from Eqs. (\ref{corrslab}) and (\ref{s3e12})
become
\begin{eqnarray}
R_{\perp}^{slab} (z,t) & = & 8 \pi \int_{0}^{\infty} d k_{\parallel} \; g^{slab} (k_{\parallel})
\cos (k_{\parallel} z) \Gamma^{slab} (k_{\parallel},t) \nonumber\\
R_{\perp}^{2D} (\rho,t) & = & 2 \pi \int_{0}^{\infty} d k_{\perp} \; g^{2D} (k_{\perp})
J_0 (k_{\perp} \rho) \Gamma^{2D} (k_{\perp},t).
\label{generalcorr}
\end{eqnarray}
For the model spectrum defined in the previous paragraph these formulas become
\begin{eqnarray}
R_{\perp}^{slab} (z,t) & = & 4 C(\nu) \delta B_{slab}^2 l_{slab}
\int_{0}^{\infty} d k_{\parallel} \; (1+k_{\parallel}^2 l_{slab}^2)^{-\nu} \cos (k_{\parallel} z) \; \Gamma^{slab} (k_{\parallel},t) \nonumber\\
R_{\perp}^{2D} (\rho,t) & = & 4 C (\nu) \delta B_{2D}^2 l_{2D}
\int_{0}^{\infty} d k_{\perp} \; (1+k_{\perp}^2 l_{2D}^2)^{-\nu} J_0 (k_{\perp} \rho) \; \Gamma^{2D} (k_{\perp},t).
\label{corrdyn2}
\end{eqnarray}
To evaluate these equations we have to specify the dynamical correlation functions $\Gamma^{slab} (k_{\parallel},t)$
and $\Gamma^{2D} (k_{\perp},t)$ which is done in the next section.
\section{Dynamical turbulence and plasma wave propagation effects}
In the following, we discuss several models for the dynamical correlation function $\Gamma (\vec{k}, t)$. In
Table \ref{dyntab}, different models for the dynamical correlation function are summarized and compared
with each other.
\begin{table}[t]
\begin{center}
\begin{tabular}{|l|l|l|}\hline
$ \textnormal{Model} $ & $ \Gamma^{slab} (k_{\parallel},t)
$ & $ \Gamma^{2D} (k_{\perp},t) $ \\
\hline\hline
$ \textnormal{Magnetostatic model} $ & $ 1
$ & $ 1 $ \\
$ \textnormal{Damping model of dynamical turbulence} $ & $ e^{-\alpha v_A k_{\parallel} t}
$ & $ e^{-\alpha v_A k_{\perp} t} $ \\
$ \textnormal{Random sweeping model} $ & $ e^{-(\alpha v_A k_{\parallel} t)^2}
$ & $ e^{-(\alpha v_A k_{\perp} t)^2} $ \\
$ \textnormal{Undamped shear Alfv\'en waves} $ & $ \cos (\pm v_A k_{\parallel} t)
$ & $ 1 $ \\
$ \textnormal{Undamped fast mode waves} $ & $ \cos (v_A k_{\parallel} t)
$ & $ \cos (v_A k_{\perp} t) $ \\
$ \textnormal{NADT model} $ & $ \cos (\pm v_A k_{\parallel} t) e^{-\gamma^{slab} t}
$ & $ e^{- \gamma^{2D} t} $ \\
\hline
\end{tabular}
\medskip
\caption{\it Different models for the dynamical correlation function $\Gamma (\vec{k},t)$. Here, $v_A$ is the
Alfv\'en speed and $\alpha$ is a parameter that allows one to adjust the strength of dynamical effects. The
parameters $\gamma^{slab}$ and $\gamma^{2D}$ of the NADT model are defined in Eq. (\ref{c2s6e3}).}
\label{dyntab}
\end{center}
\end{table}
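For reference, the entries of Table \ref{dyntab} can be written down directly as functions. The following sketch is our illustration (the parameter defaults are arbitrary examples, and the NADT rates $\gamma^{slab}$ and $\gamma^{2D}$ are passed in as free parameters):

```python
# Illustrative implementations of the dynamical correlation functions of
# Table 1 (our sketch; v_A, alpha, and the NADT rates are free parameters).
import math

def gamma_magnetostatic(k, t):
    return 1.0

def gamma_damping(k, t, v_A=1.0, alpha=1.0):
    """Damping model of dynamical turbulence (exponential decorrelation)."""
    return math.exp(-alpha * v_A * k * t)

def gamma_random_sweeping(k, t, v_A=1.0, alpha=1.0):
    """Random sweeping model (Gaussian decorrelation)."""
    return math.exp(-(alpha * v_A * k * t) ** 2)

def gamma_alfven_slab(k_par, t, v_A=1.0, j=+1):
    """Undamped shear Alfven waves (slab); the corresponding 2D function is 1."""
    return math.cos(j * v_A * k_par * t)

def gamma_nadt_slab(k_par, t, gamma_slab, v_A=1.0, j=+1):
    """NADT model, slab part: damped plasma wave oscillation."""
    return math.cos(j * v_A * k_par * t) * math.exp(-gamma_slab * t)

def gamma_nadt_2d(k_perp, t, gamma_2d):
    """NADT model, 2D part: pure exponential decorrelation."""
    return math.exp(-gamma_2d * t)

# All models satisfy Gamma(k, t=0) = 1, i.e. they reduce to the
# magnetostatic model at equal times.
```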
In the $\vec{k}$-space these models show very different decorrelations in time. In Section 4
we compute the corresponding dynamical correlation functions in configuration space for these different models.
\subsection{Damping and random sweeping models}
Among the first to discuss dynamical turbulence were Bieber et al. (1994). In their article,
the authors proposed two models for the dynamical correlation function:
\begin{eqnarray}
\Gamma_{DT} (\vec{k} ,t) & = & e^{- t/t_c(\vec{k})} \quad \textnormal{(damping model of dynamical turbulence)} \nonumber\\
\Gamma_{RS} (\vec{k} ,t) & = & e^{- (t/t_c(\vec{k}))^2} \quad \textnormal{(random sweeping model)}
\label{c2s5e2}
\end{eqnarray}
with the correlation time scale $t_c(\vec{k})$. Bieber et al. (1994) estimated the correlation time as
\begin{equation}
\frac{1}{t_c(\vec{k})} = \alpha v_A |\vec{k}|.
\label{c2s5e3}
\end{equation}
Here, $v_A$ is the Alfv\'en speed and $\alpha$ is a parameter which allows one to adjust the strength of the dynamical
effects, ranging from $\alpha=0$ (magnetostatic turbulence) to $\alpha=1$ (strongly dynamical turbulence). Bieber et al. (1994)
also suggested that the parameter $\alpha$ could be interpreted as $\delta B / B_0$. In this case, the correlation time scale
$t_c(\vec{k})$ becomes comparable to the eddy turnover time. Also, decorrelation effects related to plasma waves (see, e.g., Schlickeiser
\& Achatz 1993) can be achieved by expressing $\alpha$ through parameters such as the plasma $\beta$ (for a definition see also
Schlickeiser \& Achatz 1993).
The damping model was originally introduced for a particle scattering study in dynamical turbulence by
Bieber et al. (1994). In this model, the dynamical correlation function has an exponential form, whereas in the
random sweeping model $\Gamma (\vec{k} ,t)$ has a Gaussian form.
\subsection{Plasma wave turbulence}
Another prominent model is the plasma wave model which is discussed in Schlickeiser (2002). In this model, the
dynamical correlation function has the form
\begin{equation}
\Gamma_{PW} (\vec{k} ,t) = \cos (\omega t) e^{- \gamma t}.
\label{c2s5e4}
\end{equation}
Here, $\omega$ is the plasma wave dispersion relation, whereas $\gamma$ describes plasma wave damping.
Often, undamped plasma waves are considered, where $\Gamma_{PW} (\vec{k} ,t) = \cos (\omega t)$ and the
dynamical correlation function is a purely oscillating function. Prominent examples of plasma waves
are Shear Alfv\'en waves, where $\omega=\pm v_A k_{\parallel}$, and fast magnetosonic waves, where
$\omega=v_A k$.
\subsection{The nonlinear anisotropic dynamical turbulence model}
Recently, an improved dynamical turbulence model, namely the nonlinear anisotropic dynamical turbulence
(NADT) model, has been proposed by Shalchi et al. (2006). This model takes into account plasma wave
propagation effects as well as dynamical turbulence effects. The NADT model was formulated for the slab/2D
composite model, where, in general, we have the two different dynamical correlation functions
$\Gamma^{slab} (k_{\parallel},t)$ and $\Gamma^{2D} (k_{\perp},t)$, namely
\begin{eqnarray}
\Gamma^{slab} (k_{\parallel},t) & = & \cos (\omega t) e^{- \gamma^{slab} \; t} \nonumber\\
\Gamma^{2D} (k_{\perp},t) & = & e^{- \gamma^{2D} \; t}
\label{c2s6e2}
\end{eqnarray}
with
\begin{eqnarray}
\gamma^{slab} & = & \beta \nonumber\\
\gamma^{2D} & = & \beta \; \left\{
\begin{array}{ccc}
1 & \textnormal{for} & k_{\perp} l_{2D} \leq 1 \\
(k_{\perp} l_{2D})^{2/3} & \textnormal{for} & k_{\perp} l_{2D} \geq 1.
\end{array}
\right.
\label{c2s6e3}
\end{eqnarray}
and with the plasma wave dispersion relation of shear Alfv\'en waves
\begin{equation}
\omega = j v_A k_{\parallel} \quad j = \pm 1.
\label{c2s6e4}
\end{equation}
In Eq. (\ref{c2s6e3}) the parameter $\beta$ can be expressed by the strength of the 2D component
$\delta B_{2D} / B_0$, the 2D bendover scale $l_{2D}$, and the Alfv\'en speed $v_A$ (see Shalchi
et al. 2006):
\begin{equation}
\beta = \sqrt{2} \frac{v_A}{l_{2D}} \frac{\delta B_{2D}}{B_0}.
\label{c2s6e8}
\end{equation}
These forms of the temporal correlation function are discussed in more detail in Shalchi et al. (2006).
They are based on the work of Shebalin (1983), Matthaeus et al. (1990), Tu \& Marsch (1993),
Oughton et al. (1994), Zhou et al. (2004), Oughton et al. (2006). In the current article we approximate
$\gamma^{2D}$ in Eq. (\ref{c2s6e3}) by
\begin{equation}
\gamma^{2D} = \beta \left( 1 + k_{\perp} l_{2D} \right)^{2/3}
\end{equation}
for simplicity.
The parameter $j$ in Eq. (\ref{c2s6e4}) tracks the wave propagation direction ($j=+1$ for Alfv\'en waves propagating
forward and $j=-1$ for waves propagating backward with respect to the ambient magnetic field). Many studies have
addressed the direction of propagation of Alfv\'enic turbulence, see, for instance, Bavassano (2003). In general,
one would expect that, closer to the Sun, most waves propagate forward and that, far away from the Sun, the wave
intensities are equal for both directions. Most of the observations which allow conclusions on space plasma and
particle propagation properties have been performed in the solar wind at 1 AU heliocentric distance. Thus, we can
assume that all waves propagate forward, and we therefore set $j=+1$ in the current article.
\section{Eulerian correlation functions}
To investigate the different models we calculate the (combined) single-point-two-time correlation function defined by
\begin{equation}
E_{\perp} (t) := R_{\perp} (\vec{x}=\vec{0},t) = \sum_{l=x,y} \left< \delta B_l (\vec{0}, t) \delta B_l^{*} (\vec{0}, 0) \right>.
\end{equation}
Since this function is of particular importance for understanding dynamical turbulence effects and the interaction
with energetic charged particles, it is also known as the Eulerian correlation function $E_{\perp}(t)$.
For the different models discussed in Section 3, Eqs. (\ref{corrdyn2}) become
\begin{eqnarray}
E_{\perp}^{slab} (t) & = & 4 C (\nu) \delta B_{slab}^2 \int_{0}^{\infty} d x \; (1 + x^2 )^{-\nu} \nonumber\\
& \times & \left\{
\begin{array}{cc}
1 & \textnormal{magnetostatic model} \\
\cos (\tau x) & \textnormal{Alfv\'en waves} \\
e^{- \alpha \tau x} & \textnormal{damping model} \\
e^{- (\alpha \tau x)^2} & \textnormal{random sweeping model} \\
\cos (\tau x) e^{- \xi \tau} & \textnormal{NADT model}
\end{array}
\right.
\label{c2s5e9}
\end{eqnarray}
and
\begin{eqnarray}
E_{\perp}^{2D} (t) & = & 4 C (\nu) \delta B_{2D}^2 \int_{0}^{\infty} d x \; (1 + x^2 )^{-\nu} \nonumber\\
& \times & \left\{
\begin{array}{cc}
1 & \textnormal{magnetostatic model} \\
1 & \textnormal{Alfv\'en waves} \\
e^{- \alpha \frac{l_{slab}}{l_{2D}} \tau x} & \textnormal{damping model} \\
e^{- (\alpha \frac{l_{slab}}{l_{2D}} \tau x)^2} & \textnormal{random sweeping model} \\
e^{- \xi \tau (1+x)^{2/3}} & \textnormal{NADT model}
\end{array}
\right.
\label{c2s5e10}
\end{eqnarray}
Here we have used the integral transformations $x=k_{\parallel} l_{slab}$ and $x=k_{\perp} l_{2D}$.
Furthermore we used the dimensionless time
\begin{equation}
\tau=v_A t / l_{slab}
\end{equation}
and the parameter
\begin{equation}
\xi = \sqrt{2} \frac{\delta B_{2D}}{B_0} \frac{l_{slab}}{l_{2D}}.
\end{equation}
In the following paragraphs we evaluate Eqs. (\ref{c2s5e9}) and (\ref{c2s5e10}) numerically and analytically.
\subsection{Numerical calculation of Eulerian correlations}
In Figs. \ref{corrdynf1} and \ref{corrdynf1log} the results for the slab correlation function and in Figs. \ref{corrdynf2}
and \ref{corrdynf2log} for the 2D correlation function are shown for the different dynamical turbulence models. To
obtain these figures we have solved the integrals in Eqs. (\ref{c2s5e9}) and (\ref{c2s5e10}) numerically. For
the damping model and the random sweeping model we used $\alpha=1$. Furthermore, we used $l_{2D} = 0.1 l_{slab}$
as in previous articles, based on the results of laboratory experiments such as Robinson \& Rusbridge (1971). For
the turbulence spectrum in the inertial range we employ a Kolmogorov (1941) behavior by setting $\nu=5/6$.
As shown in Figs. \ref{corrdynf1} - \ref{corrdynf2log}, the Eulerian correlations obtained for the damping model
and the random sweeping model are very similar. The results obtained by employing the NADT model are,
however, quite different from the other models.
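As an illustration of how such a numerical evaluation can be set up (this is our sketch, not the authors' code), the normalized slab Eulerian correlation of Eq. (\ref{c2s5e9}) for the damping model can be computed with a simple midpoint rule:

```python
# Sketch (ours) of a numerical evaluation of Eq. (c2s5e9) for the damping
# model: E^slab(tau)/dB_slab^2 = 4 C(nu) int_0^inf (1+x^2)^(-nu) e^(-alpha tau x) dx,
# with nu = 5/6 and alpha = 1 as in the figures.
import math

def C(nu):
    """Spectral normalization constant of Eq. (s2e5)."""
    return math.gamma(nu) / (2.0 * math.sqrt(math.pi) * math.gamma(nu - 0.5))

def E_slab_damping(tau, nu=5.0/6.0, alpha=1.0, x_max=1.0e4, n=200_000):
    """Midpoint-rule approximation of the normalized slab Eulerian correlation."""
    h = x_max / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        total += (1.0 + x * x) ** (-nu) * math.exp(-alpha * tau * x)
    return 4.0 * C(nu) * h * total

print(E_slab_damping(0.0))   # magnetostatic limit: close to 1
print(E_slab_damping(10.0))  # strongly decorrelated: well below 1
```

The other models in Eq. (\ref{c2s5e9}) only require replacing the exponential kernel; the oscillatory (Alfv\'en and NADT) kernels would need a finer grid or a dedicated oscillatory quadrature.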
\begin{figure}[t]
\begin{center}
\epsfig{file=corrdynslab.eps, width=400pt}
\end{center}
\caption{The slab correlation function $R_{\perp}^{slab} (0,t) / \delta B_{slab}^2$ as a function of the time
$\tau=v_A t / l_{slab}$. Shown are the results obtained for the Alfv\'enic plasma wave model (dotted line),
the damping model of dynamical turbulence (dashed line), the random sweeping model (dash-dotted line),
and the NADT model (solid line).}
\label{corrdynf1}
\end{figure}
\begin{figure}[t]
\begin{center}
\epsfig{file=corrdynslablog.eps, width=400pt}
\end{center}
\caption{Same caption as in Fig. \ref{corrdynf1} but now as semi-log-plot. Clearly we can see that we find an exponential
function for the Eulerian correlation if we employ the Alfv\'en wave model (dotted line) or the NADT model (solid line).}
\label{corrdynf1log}
\end{figure}
\begin{figure}[t]
\begin{center}
\epsfig{file=corrdyn2d.eps, width=400pt}
\end{center}
\caption{The 2D correlation function $R_{\perp}^{2D} (0,t) / \delta B_{2D}^2$ as a function of the time
$\tau=v_A t / l_{slab}$. Shown are the results obtained for the Alfv\'enic plasma wave model (dotted line),
the damping model of dynamical turbulence (dashed line), the random sweeping model (dash-dotted line),
and the NADT model (solid line). The result for undamped Alfv\'en waves corresponds to the magnetostatic
model.}
\label{corrdynf2}
\end{figure}
\begin{figure}[t]
\begin{center}
\epsfig{file=corrdyn2dlog.eps, width=400pt}
\end{center}
\caption{Same caption as in Fig. \ref{corrdynf2} but now as semi-log-plot. Clearly we can see that we find an exponential
function for the Eulerian correlation if we employ the NADT model (solid line).}
\label{corrdynf2log}
\end{figure}
\subsection{Analytical calculation of Eulerian correlations}
Here we compute analytically the different Eulerian correlations. For magnetostatic (MS) turbulence
we can use
\begin{equation}
\int_{0}^{\infty} d x \; (1+x^2)^{-\nu} = \frac{1}{4 C (\nu)}
\end{equation}
and therefore
\begin{eqnarray}
E_{\perp}^{slab,MS} & = & \delta B_{slab}^2 \nonumber\\
E_{\perp}^{2D,MS} & = & \delta B_{2D}^2
\end{eqnarray}
which is the expected result. In the following paragraphs we investigate the other turbulence models.
\subsubsection{Undamped shear Alfv\'en waves}
In this case we can use (see, e.g., Shalchi 2008)
\begin{equation}
\int_{0}^{\infty} d x \; (1+x^2)^{-\nu} \cos ( \tau x )
= \frac{1}{\Gamma (\nu)} \left( \frac{2}{\tau} \right)^{1/2 - \nu} K_{1/2-\nu} ( \tau)
\label{intalf}
\end{equation}
to derive
\begin{equation}
E_{\perp}^{slab,Alf} (t) = \frac{4 \delta B_{slab}^2}{\Gamma (\nu-1/2)} \left( \frac{2 l_{slab}}{v_A t} \right)^{1/2 - \nu}
K_{1/2-\nu} \left( \frac{v_A t}{l_{slab}} \right).
\label{eulalf}
\end{equation}
Obviously the characteristic time scale for temporal decorrelation $t_{c}$ is
\begin{equation}
t_{c}^{slab,Alf} = \frac{l_{slab}}{v_A}.
\end{equation}
Following Shalchi (2008), the modified Bessel function in Eqs. (\ref{intalf}) and (\ref{eulalf}) can be
approximated for large arguments. We find, for times much larger than the temporal decorrelation time,
\begin{equation}
E_{\perp}^{slab,Alf} (t \gg t_{c}) \approx \frac{2 \sqrt{\pi}}{\Gamma (\nu-1/2)} \delta B_{slab}^2
\left( \frac{2 l_{slab}}{v_A t} \right)^{1- \nu} e^{- v_A t / l_{slab}}.
\label{eulalf2}
\end{equation}
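This large-argument step can be checked numerically. The following sketch (ours) evaluates the modified Bessel function through its integral representation $K_\mu(\tau)=\int_0^\infty e^{-\tau\cosh t}\cosh(\mu t)\,dt$ and compares the two sides of the approximation:

```python
# Numerical check (our sketch) that, for large tau,
#   (2/tau)^(1/2-nu) K_{1/2-nu}(tau)  ~  (sqrt(pi)/2) (2/tau)^(1-nu) e^(-tau),
# i.e. the step leading from Eq. (eulalf) to Eq. (eulalf2).
import math

def bessel_k(mu, tau, t_max=12.0, n=200_000):
    """K_mu(tau) via its integral representation, using the midpoint rule."""
    h = t_max / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        total += math.exp(-tau * math.cosh(t)) * math.cosh(mu * t)
    return h * total

nu, tau = 5.0 / 6.0, 10.0
exact = (2.0 / tau) ** (0.5 - nu) * bessel_k(0.5 - nu, tau)
asym = 0.5 * math.sqrt(math.pi) * (2.0 / tau) ** (1.0 - nu) * math.exp(-tau)
print(exact / asym)  # approaches 1 for large tau
```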
For the special case $\nu=1$ we obtain an exponential function. For the 2D Eulerian correlations we always
have $E_{\perp}^{2D,Alf} (t) = \delta B_{2D}^2$, since there are no wave propagation effects in the perpendicular
direction for (undamped) Alfv\'enic plasma waves.
\subsubsection{Damping model of dynamical turbulence}
For the damping model of dynamical turbulence (DT) the integrals in Eqs. (\ref{c2s5e9}) and (\ref{c2s5e10})
are difficult to solve. The results can be found in the appendix. As shown there, the characteristic time
scale for the temporal decorrelation is
\begin{eqnarray}
t_{c}^{DT} = \left\{
\begin{array}{cc}
\frac{l_{slab}}{\alpha v_A} & \textnormal{for slab fluctuations} \\
\frac{l_{2D}}{\alpha v_A} & \textnormal{for 2D fluctuations}
\end{array}
\right.
\label{tcdt}
\end{eqnarray}
For the case $t \gg t_{c}$ corresponding to $a_i \tau \gg 1$ ($i=slab,2D$) with
\begin{eqnarray}
a_{slab} & = & \alpha \nonumber\\
a_{2D} & = & \alpha \frac{l_{slab}}{l_{2D}}
\end{eqnarray}
we can easily compute the Eulerian correlation function approximately. For large $a_i \tau$, only the region
$x \rightarrow 0$ contributes to the integrals in Eqs. (\ref{c2s5e9}) and (\ref{c2s5e10}). Thus,
we can approximate
\begin{equation}
\int_{0}^{\infty} d x \; (1+x^2)^{-\nu} e^{- a_i \tau x} \approx \int_{0}^{\infty} d x \; e^{- a_i \tau x}
= \frac{1}{a_i \tau}
\end{equation}
to obtain
\begin{eqnarray}
E_{\perp}^{DT} (t \gg t_{c}) = \frac{4 C(\nu)}{\alpha v_A t} \left\{
\begin{array}{cc}
\delta B_{slab}^2 l_{slab} & \textnormal{for slab fluctuations} \\
\delta B_{2D}^2 l_{2D} & \textnormal{for 2D fluctuations}.
\end{array}
\right.
\end{eqnarray}
For the damping model of dynamical turbulence the Eulerian correlation function tends to zero
with $E_{\perp}^{DT} \sim t^{-1}$.
\subsubsection{Random sweeping model}
For the random sweeping model the analytical results can also be found in the appendix. As shown there
we find the same temporal correlation time scale $t_{c}$ as for the damping model of dynamical
turbulence (see Eq. (\ref{tcdt})). For time scales satisfying $t \gg t_{c}$ we can use
\begin{equation}
\int_{0}^{\infty} d x \; (1+x^2)^{-\nu} e^{- (a_i \tau x)^2} \approx \int_{0}^{\infty} d x \; e^{- (a_i \tau x)^2}
= \frac{\sqrt{\pi}}{2 a_i \tau}
\end{equation}
to find
\begin{eqnarray}
E_{\perp}^{RS} (t \gg t_{c}) = 2 \sqrt{\pi} C(\nu) \left\{
\begin{array}{cc}
\delta B_{slab}^2 \frac{l_{slab}}{\alpha v_A t} & \textnormal{for slab fluctuations} \\
\delta B_{2D}^2 \frac{l_{2D}}{\alpha v_A t} & \textnormal{for 2D fluctuations}.
\end{array}
\right.
\end{eqnarray}
Obviously the results for the random sweeping model are very similar to those obtained for
the damping model. This conclusion, based on analytical investigations, agrees with the
numerical results from Figs. \ref{corrdynf1} - \ref{corrdynf2log}.
\subsubsection{NADT model}
Here we have to distinguish between the slab and the 2D correlation functions. For the slab function
we can use the result for Alfv\'en waves with an additional factor $\exp (- \xi \tau)$. Therefore,
we find for late times
\begin{equation}
E_{\perp}^{slab,NADT} (t \gg t_{c}) \approx \frac{2 \sqrt{\pi}}{\Gamma (\nu-1/2)} \delta B_{slab}^2
\left( \frac{2 l_{slab}}{v_A t} \right)^{1- \nu} e^{- v_A t (1+\xi) / l_{slab}}.
\end{equation}
In this case there are two correlation time scales. The first is associated with the plasma wave (PW)
propagation effects
\begin{equation}
t_{c,PW}^{slab,NADT} = \frac{l_{slab}}{v_A}
\end{equation}
and the second is associated with the dynamical turbulence (DT) effects. The latter correlation time is
\begin{equation}
t_{c,DT}^{NADT} = \frac{l_{slab}}{v_A \xi} = \frac{1}{\sqrt{2}} \frac{B_0}{\delta B_{2D}} \frac{l_{2D}}{v_A}.
\label{nadtscaledt}
\end{equation}
For the 2D fluctuations the situation is more complicated and, thus, the analytical calculations can be
found in the appendix. As demonstrated there the correlation time scale is given by Eq. (\ref{nadtscaledt}).
The behavior of the Eulerian correlation function at late times ($t \gg t_{c,DT}^{NADT}$) is an exponential
function
\begin{equation}
E_{\perp}^{2D} \approx 4 C (\nu) \delta B_{2D}^2 e^{- v_A t \xi / l_{slab}}.
\end{equation}
This exponential result agrees with our numerical findings visualized in Figs. \ref{corrdynf2} and \ref{corrdynf2log}.
\section{Summary and conclusion}
In this article we have calculated and discussed Eulerian correlation functions. The motivation for this work
comes from recent articles by Matthaeus et al. (2005) and Dasso et al. (2007), in which it was demonstrated
that magnetic correlation functions can be obtained from spacecraft measurements (ACE and Wind).
We expect that Eulerian correlations can also be obtained from such observations. In the current article
we computed these correlations analytically and numerically. These theoretical results are very
useful for a comparison with data obtained from ACE and Wind.
We have employed several standard models for solar wind turbulence dynamics, namely the
(undamped and Alfv\'enic) plasma wave model, the damping model of dynamical turbulence,
the random sweeping model, and the nonlinear anisotropic dynamical turbulence (NADT) model.
All these models are combined with a two-component model and a standard form of the
turbulence wave spectrum. As shown, we find very similar Eulerian correlations for the
damping model and the random sweeping model. Therefore, we expect that in a comparison
between these models and spacecraft data, one cannot decide which of these models
is more realistic. The NADT model presented in Shalchi et al. (2006), however, provides
different results in comparison to these previous models. In table \ref{corrtimetab} we have compared the
different correlation time scales derived in this article for the different models.
\begin{table}[t]
\begin{center}
\begin{tabular}{|l|l|l|}\hline
$ \textnormal{Model} $ & $ t_{c}^{slab} $ & $ t_{c}^{2D} $ \\
\hline\hline
$ \textnormal{Magnetostatic model} $ & $ \infty $ & $ \infty $ \\
$ \textnormal{Undamped shear Alfv\'en waves} $ & $ \frac{l_{slab}}{v_A} $ & $ \infty $ \\
$ \textnormal{Damping model of dynamical turbulence} $ & $ \frac{l_{slab}}{\alpha v_A} $ & $ \frac{l_{2D}}{\alpha v_A} $ \\
$ \textnormal{Random sweeping model} $ & $ \frac{l_{slab}}{\alpha v_A} $ & $ \frac{l_{2D}}{\alpha v_A} $ \\
$ \textnormal{NADT model (plasma wave effects)} $ & $ \frac{l_{slab}}{v_A} $ & no effect \\
$ \textnormal{NADT model (dyn. turbulence effects)} $ & $ \frac{1}{\sqrt{2}} \frac{B_0}{\delta B_{2D}} \frac{l_{2D}}{v_A} $
& $ \frac{1}{\sqrt{2}} \frac{B_0}{\delta B_{2D}} \frac{l_{2D}}{v_A} $ \\
\hline
\end{tabular}
\medskip
\caption{\it Comparison of the different correlation time scales found in this article. For the damping model of
dynamical turbulence and the random sweeping model we found the same correlation times. For the NADT model
there are two correlation times, one scale for the plasma wave propagation effects and one scale for the
dynamical turbulence effects.}
\label{corrtimetab}
\end{center}
\end{table}
By comparing the results of this article with spacecraft measurements, we can find out
whether modern models such as the NADT model are realistic. This would be very
useful for testing our understanding of turbulence. Some results of this article, such as
Eq. (\ref{generalcorr}), are quite general and can easily be applied to other turbulence
models (e.g. other wave spectra).
\section*{Acknowledgements}
{\it
This research was supported by Deutsche Forschungsgemeinschaft (DFG) under the Emmy-Noether program
(grant SH 93/3-1). As a member of the {\it Junges Kolleg} A. Shalchi also acknowledges support by the
Nordrhein-Westf\"alische Akademie der Wissenschaften.}
\section*{\textsc{ACKNOWLEDGEMENTS}}
I would like to thank the people whom I have been
working with the last couple of years, and who have made this thesis
not only possible but also a joy to complete.
The biggest thanks goes to my supervisor {\AA}ke Nordlund;
without him there would be no thesis, and he has always been
willing to answer my questions at all times and guide me
through the labyrinth of numerical astrophysics, not least
in this last stressful month. But fortunately
I have not been left alone on this quest.
It has been a
privilege to be part of the plasma gang;
had it not been for Christian and Trier, my PhD would not
have been the same. They deserve thanks for the
countless discussions over memorable Friday beers; I
owe much of my thesis to them, not least for the many
discussions about pair plasmas and the global
structure of collisionless shocks,
and even though some
of us may shift subjects in the future I will remember this
as the best cooperation I have ever had; I even believe that
in the end they have managed to teach me a bit of that pesky
plasma physics.
The world is not only made of plasma though, nor Friday beers,
and I am grateful to Jakob Hunsballe for the CVS wars
we have waged over the fluid code lately.
The GrMHD code was initiated during a stay in Kyoto in 2003-2004, where
I had the pleasure of working with the group around Shibata-san.
My Japanese may have passed away since then, but I do
remember the good and warm atmosphere and the excellent hospitality
I received there. I learnt a lot about relativistic fluid dynamics,
but also about Japanese culture, the art of eating with chopsticks,
and how everything can be so different and yet you feel at home;
to all of you in Kyoto, specially Shibata-san, Hiro and Mizuno-san:
Thank you for a fantastic half a year in Kyoto, I hope to return soon
again.
Last but not least I am grateful to my proof readers; in spite of
their limited interest in the details of astrophysics, they made
excellent suggestions and enlightened the language at times where
I myself was too tired to do so. Not only I, but also the readers
should thank them:
Ana, Gemma and Sune: in this last stressful month, you have made
all the difference. If errors remain, be it linguistic or in the physics,
I surely must have introduced them in the last minute!
\newpage
\thispagestyle{empty}
\section*{\textsc{ABSTRACT}}
In this thesis different numerical methods, as well as applications
of the methods to a number of current problems in relativistic astrophysics,
are presented.
In the first part the theoretical foundation and numerical
implementation of a new general relativistic
magnetohydrodynamics code is discussed. A new form of
the equations of motion using global coordinates, but evolving the dynamical
variables from the point of view of a local observer is presented.
No assumptions are made about the background metric and the design is
ready to be coupled with methods solving the full Einstein equations.
In the second part of the thesis important results concerning the understanding
of collisionless shocks, obtained from experiments with a relativistic
charged particle code, are presented.
Relativistic collisionless shocks are important in a range of
astrophysical objects;
in particular in gamma ray burst afterglows and other relativistic jets.
It is shown that a strong small scale, fluctuating, and predominantly
transversal magnetic field
is unavoidably generated by a two-stream instability.
The magnetic energy density reaches a few percent of equipartition.
A new acceleration mechanism for electrons in
ion-electron collisionless shocks is proposed.
The mechanism is capable of creating a power-law
electron distribution in a collisionless shocked region.
The non--thermal acceleration of the electrons is directly related to the
ion current channels generated by the two-stream instability and
is local in nature.
Thus the observed radiation field may
be tied directly to the local conditions of the plasma and could be a strong
handle on the physical processes.
Experiments of colliding pair plasmas are presented and
the formation of a macrophysical shock structure is observed.
A comparable relativistic fluid simulation
is performed and good agreement is found,
implying that the full structure of the shock has been resolved.
The extent of the shock transition region in a pair plasma is estimated to
be 50--100 electron skin depths.
In the third part of the thesis a new particle-in-cell
code is discussed. It solves the full Maxwell equations,
together with direct microphysical particle-particle interactions,
such as relativistic scattering, pair production,
decay, and annihilation of particles.
The inclusion of such relativistic
interaction processes makes it possible to extract
self consistent synthetic photon spectra directly from
the numerical experiments, thereby
gaining the ability to directly compare models with observations.
\chapter{The Relativistic Maxwell Distribution}
\label{chap:maxwell}
In this appendix I briefly consider the relativistic Maxwell
distribution. When working with data from the
particle code, we have often needed to
assess how thermal or non-thermal a given
particle distribution function (PDF) for a subset of
the particles is, and evaluate the
temperature and the overall Lorentz boost of
the population. Even if the particles are in fact
thermally distributed, they can still be moving with
an overall velocity $u$. To find the temperature
and the boost factor we need to compare our
data not to the standard Maxwell distribution, but rather
to a Lorentz boosted Maxwell distribution.
In principle this is a straightforward
exercise, but it becomes complicated because the different
components of the velocity couple through
the Lorentz factor. As a result, the Maxwell distribution
of a Lorentz boosted thermal population is not
merely the Lorentz boost of the Maxwell distribution
of the population at rest.
Below in \Eq{eq:bvmaxwell} and \Eq{eq:bvgmaxwell} I
present the Maxwell distribution functions as functions
of the boost factor $\Gamma$, the boost velocity $u$ and the temperature $T$.
\section{The standard relativistic distribution}
The standard Maxwell distribution for a population at rest
in its most basic form can be written
\begin{equation}
dN = N(T)\exp\left(-\frac{\gamma-1}{T}\right)dv_xdv_ydv_z\,,
\end{equation}
where $dN$ is the number of particles per $dv_x dv_y dv_z$
and $N(T)$ is an overall normalisation factor.
Going to spherical coordinates and integrating out the
angle dependence it changes to
\begin{equation}\label{eq:spherical}
dN = 4\pi N(T)\exp\left(-\frac{\gamma-1}{T}\right)v^2dv\,,
\end{equation}
while the most convenient system for boosting the distribution is cylindrical
coordinates, where it can be written
\begin{equation}\label{eq:cylindrical}
dN = 2\pi N(T)\exp\left(-\frac{\gamma-1}{T}\right)v_\perp dv_\perp dv_z\,.
\end{equation}
When considering PDFs from a numerical
point of view, the ordinary velocity is not the most natural variable to work in:
the three-velocity is bounded by the speed of light,
and the PDFs are squeezed towards $c$ at high temperatures.
Instead the four-velocity $v \gamma$ is normally used,
which behaves linearly all the way from non-relativistic to ultra-relativistic
velocities. The Maxwell distribution in terms of $v\gamma$ and $\gamma$ is
\begin{equation}
dN =4\pi N(T)\frac{\sqrt{\gamma^2-1}}{\gamma^4}
\exp\left(-\frac{\gamma-1}{T}\right)d\gamma
\end{equation}
and
\begin{equation}\label{eq:vg}
dN =4\pi N(T)\frac{(v\gamma)^2}{(1+(v\gamma)^2)^{5/2}}
\exp\left(-\frac{\gamma-1}{T}\right)d(v\gamma)\,.
\end{equation}
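As a consistency check, the $v$-form \Eq{eq:spherical} and the $v\gamma$-form \Eq{eq:vg} must integrate to the same total particle number. A short Python sketch (an illustration added here, in units with $m = c = k_B = 1$ so that $T$ is measured in $mc^2$, and with the common factor $N(T)$ dropped):

```python
import numpy as np
from scipy.integrate import quad

T = 0.5  # temperature in units of m c^2 (assumed; m = c = k_B = 1)

# Eq. (eq:spherical) integrand in the three-velocity v, gamma = 1/sqrt(1 - v^2)
f_v = lambda v: v**2 * np.exp(-(1.0/np.sqrt(1.0 - v**2) - 1.0)/T)
# Eq. (eq:vg) integrand in the four-velocity u = v*gamma, gamma = sqrt(1 + u^2)
f_u = lambda u: u**2 * (1.0 + u**2)**(-2.5) * np.exp(-(np.sqrt(1.0 + u**2) - 1.0)/T)

I_v, _ = quad(f_v, 0.0, 1.0)
I_u, _ = quad(f_u, 0.0, np.inf)
print(I_v, I_u)  # the two parametrisations agree up to quadrature error
```

The agreement confirms that the Jacobian $dv = (1+(v\gamma)^2)^{-3/2}\, d(v\gamma)$ has been applied correctly.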
\section{Boosting the Maxwell distribution}
To generalise the above distributions to those seen by
observers moving with four velocity $u\Gamma$ along the
$z$-axis we need to Lorentz transform the variables.
The Lorentz transformation together with the inverse
transformation between the two rest frames are
\begin{equation}\label{eq:tgamma}
\gamma' = \Gamma\gamma(1 - u v_z) \quad \Leftrightarrow \quad
\gamma = \Gamma\gamma'(1 + u v'_z)
\end{equation}
\begin{equation}\label{eq:tvz}
v'_z = \frac{v_z-u}{1 - u v_z} \quad \Leftrightarrow \quad
v_z = \frac{v'_z+u}{1 + u v'_z}
\end{equation}
\begin{equation}\label{eq:tvperp}
v_\perp = \frac{v'_\perp}{\Gamma(1 + u v'_z)}
\quad \Leftrightarrow \quad
v'_\perp = \frac{v_\perp}{\Gamma(1 - u v_z)}\,,
\end{equation}
where $v_\perp$ is a velocity component perpendicular to the
boost direction and a prime denotes the boosted reference frame.
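These transformations and their inverses can be verified numerically. The sketch below (an added check with an arbitrarily chosen boost and particle velocity) confirms that the forward and inverse maps compose to the identity and that $\gamma'$ is consistent with the transformed velocity components:

```python
import numpy as np

u = 0.6                           # boost velocity along z
G = 1.0/np.sqrt(1.0 - u**2)       # boost Lorentz factor Gamma

vz, vp = 0.3, 0.4                 # particle velocity components (v_z, v_perp)
g = 1.0/np.sqrt(1.0 - vz**2 - vp**2)  # particle Lorentz factor gamma

# Forward transforms, Eqs. (eq:tgamma)-(eq:tvperp)
gp  = G*g*(1.0 - u*vz)
vzp = (vz - u)/(1.0 - u*vz)
vpp = vp/(G*(1.0 - u*vz))

# gamma' must be consistent with the transformed velocity components
assert np.isclose(gp, 1.0/np.sqrt(1.0 - vzp**2 - vpp**2))
# The inverse transforms recover the original quantities
assert np.isclose(G*gp*(1.0 + u*vzp), g)
assert np.isclose((vzp + u)/(1.0 + u*vzp), vz)
assert np.isclose(vpp/(G*(1.0 + u*vzp)), vp)
print("round trip OK")
```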
To derive the Maxwell distribution, as seen by
an observer moving in the $z$--direction, we
have to transform either \Eq{eq:spherical} or \Eq{eq:cylindrical}
and reexpress it in terms of the new coordinates.
The Maxwell distribution in cylindrical coordinates is best suited,
since from \Eq{eq:tgamma} we see that the transformation of $\gamma$
will pick up a dependence on $v'_z$. Using Eqs.~(\ref{eq:tvz})
and (\ref{eq:tvperp}) to evaluate the Jacobian of the differentials
and substituting the new variables into \Eq{eq:cylindrical}, the boosted Maxwell
distribution in cylindrical coordinates may be found to be
\begin{equation}\label{eq:bmaxwell}
dN = 2\pi N(T) \exp \left( - \frac{\Gamma\gamma'(1 + u v'_z) - 1}{T} \right)
\frac{v'_\perp}{\left[ \Gamma(1 + u v'_z) \right]^4} dv'_\perp dv'_z\,.
\end{equation}
In this form the distribution function cannot be compared directly
with PDFs obtained from numerical data,
since it is still two dimensional. We need to marginalise one of the
two dimensions, to reduce it to a one dimensional PDF.
\section{The boosted Maxwell velocity distribution}
To find the velocity distribution we shift to spherical coordinates, setting
\begin{align}
v'_z & = v' \cos(\theta) & v'_\perp & = v' \sin(\theta)\,,
\end{align}
where $\theta \in [0,\pi]$ and $v' \in [0,1]$. Inserting the new coordinates
in \Eq{eq:bmaxwell} and integrating over the angles, we find after some algebra
that the boosted Maxwell distribution, binned linearly in the velocity, is
\begin{equation}\label{eq:bvmaxwell}
dN = 2\pi N(T)T \frac{\gamma'^3v'dv'}{\Gamma u} \int^{\alpha_+}_{\alpha_-}
\frac{e^{-\beta} d\beta}{(1+T\beta)^4}\, ,
\end{equation}
where the temperature dependent integral has the limits
$\alpha_\pm = \frac{\Gamma\gamma'(1\pm uv')-1}{T}$.
As mentioned above, when analysing particle data it is important to compute
the PDFs in a variable that behaves linearly from sub-relativistic to ultra-relativistic velocities.
Changing from $dv'$ to $d(v'\gamma')$ we find the final result
\begin{equation}\label{eq:bvgmaxwell}
dN = 2\pi N(T)T \frac{(v'\gamma')d(v'\gamma')}{\Gamma u\sqrt{1+(v'\gamma')^2}}
\int^{\alpha_+}_{\alpha_-}
\frac{e^{-\beta} d\beta}{(1+T\beta)^4}\, .
\end{equation}
The integral in \Eq{eq:bvgmaxwell} may be simplified by
repeated partial integration
\begin{equation}
\int \frac{e^{-\beta} d\beta}{(1+T\beta)^4} =
\frac{-(1 + T\beta)^2 + T(1+T\beta) - 2T^2}{6T^3{\left( 1 + T\beta \right) }^3}e^{-\beta}
- \frac{1}{6T^3}
\int \frac{e^{-\beta} d\beta}{(1+T\beta)}
\end{equation}
and everything reduces to an exponential integral that depends on $T$.
When analysing data I use \verb|IDL|, which already contains a function
to evaluate the exponential integral, and it is rather trivial
to implement \Eq{eq:bvgmaxwell} in a computer program that, given
a set of particles, evaluates the PDF, fits a boosted Maxwell distribution
and finds the corresponding temperature and velocity.
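For readers without access to \verb|IDL|, the same evaluation is equally straightforward in, e.g., Python. The sketch below (a hypothetical helper, \verb|boosted_maxwell|, written for this appendix) evaluates \Eq{eq:bvgmaxwell} with the inner $\beta$-integral computed numerically rather than via the exponential-integral reduction; the normalisation $N(T)$ is left as a free parameter:

```python
import numpy as np
from scipy.integrate import quad

def boosted_maxwell(ug, T, u, N=1.0):
    """Evaluate dN/d(v'gamma') of Eq. (eq:bvgmaxwell) at four-velocity ug = v'*gamma'.
    The inner beta-integral is computed numerically instead of via the
    exponential-integral reduction."""
    G = 1.0/np.sqrt(1.0 - u**2)         # boost Lorentz factor Gamma
    gp = np.sqrt(1.0 + ug**2)           # gamma'
    vp = ug/gp                          # v'
    lo = (G*gp*(1.0 - u*vp) - 1.0)/T    # alpha_-
    hi = (G*gp*(1.0 + u*vp) - 1.0)/T    # alpha_+
    inner, _ = quad(lambda b: np.exp(-b)/(1.0 + T*b)**4, lo, hi)
    return 2.0*np.pi*N*T*ug/(G*u*gp)*inner

print(boosted_maxwell(1.0, T=0.5, u=0.3))
```

In the limit $u \to 0$ the expression reduces to \Eq{eq:vg}, which provides a convenient sanity check of any implementation.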
\chapter{Transformation of tensors between different metrics}\label{chap:appa}
In Chapter \ref{chap:GrMHD} it was argued that calculating variables in a local frame,
retaining at the same time global coordinates is the best approach for our numerical method.
Methods used for special relativity may then be employed with minimal changes in arbitrary
space times.
In this appendix, I give the detailed transformation rules for
vectors and two tensors. I consider the transformation between three different
frames. The global star fixed coordinate system (SFCS) has the metric
\begin{equation}
ds^2 = -\alpha^2 dt^2 + \gamma_{ij} \left(dx^i + \beta^i dt\right)
\left(dx^j + \beta^j dt\right)\,.
\end{equation}
The local laboratory (LOLA) frame has the metric
\begin{equation}
ds^2 = - d\hat{t}^2 + \gamma_{ij} d\hat{x}^i d\hat{x}^j\,,
\end{equation}
while the pseudo fiducial observer (PFIDO) frame has the metric
\begin{equation}
ds^2 = -d\t{t}^2 + \sum_{i,j} \frac{\gamma_{ij}}{\sqrt{\gamma_{ii}\gamma_{jj}}}
d\t{x}^id\t{x}^j\,.
\end{equation}
In the case that the metric contains no off-diagonal spatial components,
the PFIDO frame is, in fact, the frame of a fiducial observer.
In the worst case the PFIDO
metric contains three non-trivial components.
\noindent The three metrics are related by the relations
\begin{align}
\left(dx^i + \beta^i dt\right) &= d\hat{x}^i & \alpha dt &= d\hat{t} \\
\sqrt{\gamma_{ii}}d\hat{x}^i &= d\t{x}^i & d\t{t} &= d\hat{t}\,.
\end{align}
The differentials transform as contravariant vectors. The transformation laws for
contravariant vectors may be found by multiplying with metrics and doing a bit of
linear algebra. Tensors by definition transform as products of the corresponding vectors,
and it is a straightforward, though tedious, exercise to find all the combinations.
I have written them down here, since they are essential for the implementation of any
physics: they specify how the local variables are related to the global ones.
The following relations have been of interest when transforming to and from
different frames:\\
\textbf{{SFCS} $\leftrightarrow$ {LOLA} frame:}\\
(vectors)
\begin{align}
\h{U}^t &= \alpha U^t & \h{U}^i &= U^i + \beta^i U^t \\
\h{U}_t &= \frac{1}{\alpha}(U_t-\beta^iU_i) & \h{U}_i &= U_i \\
U^t &= \frac{1}{\alpha}\h{U}^t & U^i &= \h{U}^i - \frac{\beta^i}{\alpha}\h{U}^t \\
U_t &= \alpha\h{U}_t + \beta^i\h{U}_i & U_i &= \h{U}_i
\end{align}
\textbf{{SFCS} $\rightarrow$ {LOLA} frame:}\\
(contravariant two--tensors)
\begin{align}
T^{tt} &= \frac{1}{\alpha^2} \h{T}^{tt} \\
T^{ti} &= \frac{1}{\alpha}\left(\h{T}^{ti} - \frac{\beta^i}{\alpha}\h{T}^{tt}\right) \\
T^{ij} &= \h{T}^{ij} - \frac{\beta^i}{\alpha}\h{T}^{tj}
- \frac{\beta^j}{\alpha}\left( \h{T}^{it} - \frac{\beta^i}{\alpha} \h{T}^{tt}\right)
\end{align}
(mixed type two--tensors)
\begin{align}
T^t_t &= \h{T}^t_t + \frac{\beta^i}{\alpha}\h{T}^t_i \\
T^t_i &= \frac{1}{\alpha}\h{T}^t_i \\
T^i_t &= \alpha \left(\h{T}^i_t - \frac{\beta^i}{\alpha} \h{T}^t_t \right)
+ \beta^j \left(\h{T}^i_j - \frac{\beta^i}{\alpha} \h{T}^t_j \right) \\
T^i_j &= \h{T}^i_j - \frac{\beta^i}{\alpha}\h{T}^t_j
\end{align}
(covariant two--tensors)
\begin{align}
T_{tt} &= \alpha^2\left(\h{T}_{tt} + \frac{\beta^j}{\alpha}\h{T}_{tj}\right)
+ \alpha\beta^i\left(\h{T}_{it} + \frac{\beta^j}{\alpha} \h{T}_{ij}\right) \\
T_{ti} &= \alpha\left(\h{T}_{ti} + \frac{\beta^j}{\alpha}\h{T}_{ji}\right) \\
T_{ij} &= \h{T}_{ij}
\end{align}
\textbf{{LOLA} $\rightarrow$ {SFCS} frame:}\\
(contravariant two--tensors)
\begin{align}
\h{T}^{tt} &= \alpha^2 T^{tt} \\
\h{T}^{ti} &= \alpha\left(T^{ti} + \beta^i T^{tt}\right) \\
\h{T}^{ij} &= T^{ij} + \beta^i T^{tj}
+ \beta^j \left( T^{it} + \beta^i T^{tt}\right)
\end{align}
(mixed type two--tensors)
\begin{align}
\h{T}^t_t &= T^t_t - \beta^i T^t_i \\
\h{T}^t_i &= \alpha T^t_i \\
\h{T}^i_t &= \frac{1}{\alpha} \left[ T^i_t + \beta^i T^t_t
- \beta^j \left(T^i_j + \beta^i T^t_j \right) \right]\\
\h{T}^i_j &= T^i_j + \beta^i T^t_j
\end{align}
(covariant two--tensors)
\begin{align}
\h{T}_{tt} &= \frac{1}{\alpha^2}\left(T_{tt} - \beta^j T_{tj}
- \beta^i T_{it} + \beta^i\beta^j T_{ij}\right) \\
\h{T}_{ti} &= \frac{1}{\alpha}\left(T_{ti} - \beta^j T_{ji}\right) \\
\h{T}_{ij} &= T_{ij}
\end{align}
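All of the componentwise {SFCS} $\leftrightarrow$ {LOLA} rules above follow from the single matrix congruence $\h{T}^{ab} = L^{a}_{\ \mu} L^{b}_{\ \nu} T^{\mu\nu}$, where $L$ is the Jacobian of the {SFCS} $\to$ {LOLA} coordinate change. A quick numerical sanity check of the contravariant rules in Python (an added verification, with random input):

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = 1.3
beta  = rng.normal(size=3)           # shift beta^i
T = rng.normal(size=(4, 4))          # arbitrary contravariant T^{mu nu} in the SFCS

# Jacobian of dt_hat = alpha dt, dx_hat^i = dx^i + beta^i dt
L = np.eye(4)
L[0, 0] = alpha
L[1:, 0] = beta

That = L @ T @ L.T                   # tensor transformation rule

# Componentwise rules quoted in the text (LOLA -> SFCS block)
assert np.isclose(That[0, 0], alpha**2 * T[0, 0])
assert np.allclose(That[0, 1:], alpha*(T[0, 1:] + beta*T[0, 0]))
assert np.allclose(That[1:, 0], alpha*(T[1:, 0] + beta*T[0, 0]))
assert np.allclose(That[1:, 1:],
                   T[1:, 1:] + np.outer(beta, T[0, 1:])
                   + np.outer(T[1:, 0] + beta*T[0, 0], beta))
print("contravariant rules verified")
```

The mixed and covariant rules can be checked in the same way using $L^{-1}$ in place of $L$ on the appropriate indices.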
\textbf{{LOLA} frame $\leftrightarrow$ PFIDO frame:}\\
(vectors)
\begin{align}
\h{U}^t &= \t{U}^t & \h{U}^i &= \frac{1}{\sqrt{\gamma_{ii}}} \t{U}^i \\
\t{U}^t &= \h{U}^t & \t{U}^i &= \sqrt{\gamma_{ii}} \h{U}^i
\end{align}
(contravariant two--tensors)
\begin{align}
\h{T}^{tt} &= \t{T}^{tt} & \h{T}^{ti} &= \frac{1}{\sqrt{\gamma_{ii}}} \t{T}^{ti} \\
\h{T}^{ij} &= \frac{1}{\sqrt{\gamma_{ii}\gamma_{jj}}} \t{T}^{ij}
\end{align}
(mixed type two--tensors)
\begin{align}
\h{T}^t_t &= \t{T}^t_t &
\h{T}^t_i &= \sqrt{\gamma_{ii}} \t{T}^t_i \\
\h{T}^i_j &= \sqrt{\frac{\gamma_{jj}}{\gamma_{ii}}} \t{T}^i_j &
\h{T}^i_t &= \frac{1}{\sqrt{\gamma_{ii}}} \t{T}^i_t
\end{align}
(covariant two--tensors)
\begin{align}
\h{T}_{tt} &= \t{T}_{tt} & \h{T}_{ti} &= \sqrt{\gamma_{ii}} \t{T}_{ti} \\
\h{T}_{ij} &= \sqrt{\gamma_{ii}\gamma_{jj}} \t{T}_{ij}
\end{align}
\chapter{General Relativistic Magneto-Hydrodynamics}\label{chap:GrMHD}
Electromagnetic fields are ubiquitous ingredients in most astrophysical
objects. In the
case of very compact objects or at cosmological scales, not only do
electromagnetic fields
interact with matter directly, but they also become a source of
energy-momentum and
impact on the metric curvature. Several general relativistic
magnetohydrodynamics (GrMHD) computer codes have been developed and implemented
recently for the study of
compact relativistic objects and their surroundings
\citep[e.g.][]{2003ApJ...589..458D,2003PhRvD..67j4010K,1999ApJ...522..727K,
bib:anton05,bib:delzanna03,bib:fragile05},
using both conserved and non-conserved formulations of the basic equations of
motion. They are well-suited for their different purposes, but most of
the implementations above are designed for static space time backgrounds
with diagonal spatial terms.
In this chapter I present the analytic
basis for and numerical implementation of a code to solve the GrMHD equations.
My approach is inspired by the
pioneering work of Koide et al.~\cite{1999ApJ...522..727K} and related in
spirit to the methods of Ant\'on et al.~\cite{bib:anton05} and
Pons et al.~\cite{bib:pons98}.
From the beginning it has been designed
to be general enough to solve the GrMHD matter evolution equations on
any general time--dependent metric. This is an essential requirement if
the code ultimately is to be coupled with numerical codes solving the
Einstein equations, which evolve
the metric. As far as the implementation is concerned I have currently
implemented a fully parallelised 3D version
of special relativistic MHD and a general relativistic
extension of the hydrodynamics.
In the following section I describe some of my motivations for
developing the code. In
section \ref{sec:2.3} I present the fundamental equations for GrMHD
and adapt them to our specific approach. The equations are well known
(e.g.~\cite{1982MNRAS.198..339T}), but I make an effort to rewrite
them in a form that is suited for my numerical purpose. For clarity
I first consider hydrodynamics and discuss the question of artificial
viscosity and imperfect fluids, to then extend the system to include
electromagnetic fields. In section \ref{sec:2.4}, I present the numerical algorithm that
I have chosen to implement the equations with. Section \ref{sec:2.5} contains a large
test bed of demanding problems. Section \ref{sec:2.6} contains some astrophysics
related tests of the code, and finally, in section \ref{sec:2.7}, I consider
crucial aspects such as performance and scalability.
\section{Motivation}
An important motivation for developing this kind of code is to make it
possible to study the
evolution of cosmological magnetic fields in the primordial universe,
taking into account the metric back reaction and coupling of gravitational
waves with the electromagnetic field. The WMAP satellite has already
detected the first polarisation signal in the cosmic microwave background
radiation (CMBR) \cite{bib:WMAP}. The Planck
satellite and ground/balloon based experiments will
improve the quality of the signal further in the coming years. Even though
primordial magnetic fields make a
very small contribution to the CMBR, in contrast to other imprints,
they source vector perturbations and hence it may be possible to disentangle the
weak signal from other sources through its unique character
\cite{bib:grasso00,bib:pogosian03,bib:naselsky04}.
Turbulent primordial magnetic fields can arise naturally during a
phase transition, such as the transitions from an electroweak plasma
and from the quark gluon phase to normal matter \cite{bib:vachaspati:01}.
Alternatively, they may be produced during inflation \cite{bib:ashoorioon04}.
If a signal from primordial magnetic fields is indeed detected, we
would have yet another probe to understand early universe physics.
Galaxies and clusters of galaxies at high redshift have been
observed to contain magnetic fields comparable to present day
galaxies. They have only rotated a few times during their short life,
and this is difficult to explain without invoking primordial magnetic
fields at some level. Dynamo theory alone does not
seem to be enough \cite{bib:grasso00,bib:jedamzik03}.
MHD simulations of turbulent helical fields have shown that an inverse
cascade process operates which transfers small scale power to
larger scales, changing the simple energy decay due to the
expansion of the universe \cite{bib:christensson01}.
Until now, apart from purely analytical analyses, the question of
evolving magnetic fields in the early universe
has primarily been tackled in two different ways: 1) simple 3D
turbulence experiments have been made, using existing non-relativistic
MHD codes to
address the possibility of inverse cascades, which could significantly alter
the longevity of large scale primordial fields; 2) semi-analytical arguments
have been used to explore the couplings between primordial
magnetic fields and the metric, neutrinos, effects from Silk damping,
etc.~\cite{bib:lewis04}.
If imprints in the cosmological microwave background from primordial magnetic
fields are detected, it will be crucial to understand the evolution of the fields
in a realistic manner, in order to constrain possible generation scenarios.
I have verified the results of Christensson et al.~\cite{bib:christensson01}
using a purely special relativistic version of the code.
With the code developed here, these questions may be addressed
in a unified way, by performing
large scale 3D experiments including general relativistic effects and couplings
between the magnetic field and the metric perturbations.
Another strong motivation for developing a GrMHD code
is the fact that it provides the perfect complement to the particle-
and photon plasma codes, presented in the subsequent chapters,
for the study of extreme astrophysics around compact objects and
in jets. To understand the complex physics, we need to consider
processes happening at many different time and length scales.
A GrMHD code can be used to model the large scale dynamical flow and,
as detailed in Chapter \ref{chap:photonplasma}, provide realistic
boundary conditions for microphysical studies of plasma instabilities
and radiative processes.
We note that the first
results of coupling the full Einstein equations to the MHD equations have been
published only very recently \cite{bib:shapiro05}, and that the field is still
in its infancy.
\section{The GrMHD equations}\label{sec:2.3}
\subsection{3+1 Formulation of general relativity}
In numerical relativity it has proven very fruitful to exploit the so called
3+1 split of the metric. Instead of working with a four dimensional
manifold and the Einstein equations in the form of an elliptic
nonlinear set of partial differential equations, an explicit split
between temporal and spatial dimensions is imposed
(though see \cite{bib:meier03b} for
an alternative four dimensional approach). Assuming that we can construct a
foliation of space time --- usually a very reasonable condition except maybe
for near (naked) singularities --- it is then possible to rewrite the
Einstein equations
as a hyperbolic set of evolution equations, some elliptic constraint equations
and an associated Cauchy data set describing the initial conditions.
This formulation lends itself easily to a numerical implementation and
has been named the 3+1 approach.
The standard way of writing the metric in 3+1 form\footnote{Up to a plus or
minus sign and a
factor of $\alpha^{-1}$ for $\beta$} is:
\begin{equation}\label{metric1}
ds^2 = - \alpha^2 dt^2 + \gamma_{ij}
\left(dx^i + \beta^i dt\right)\left(dx^j + \beta^j dt\right),
\end{equation}
where $\alpha$ is called the lapse function, $\beta$ is the shift or shear and
$\gamma$ is the spatial 3-metric. The contravariant version of the metric
$g^{\mu \nu}$ is written
\begin{equation}
g^{\mu \nu} = \left(
\begin{array}{cc}
-\frac{1}{\alpha^2} & \frac{\beta^i}{\alpha^2} \\
\frac{\beta^j}{\alpha^2} & \gamma^{ij} - \frac{\beta^i \beta^j}{\alpha^2}\\
\end{array}
\right),
\end{equation}
where $\gamma^{ij}$ denotes the inverse of the spatial 3-metric $\gamma_{ij}$.
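Writing $\gamma^{ij}$ for the inverse of the 3-metric $\gamma_{ij}$, the spatial block of $g^{\mu\nu}$ carries a $-\beta^i\beta^j/\alpha^2$ correction; that $g^{\mu\nu}$ is indeed the inverse of the metric in \Eq{metric1} can be checked numerically, e.g.~with the following added Python sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 1.7
beta_u = rng.normal(size=3)                  # beta^i (contravariant shift)
A = rng.normal(size=(3, 3))
gam = A @ A.T + 3.0*np.eye(3)                # SPD spatial 3-metric gamma_ij
gam_inv = np.linalg.inv(gam)
beta_d = gam @ beta_u                        # beta_i = gamma_ij beta^j

# Covariant metric g_{mu nu} read off from the line element (metric1)
g = np.empty((4, 4))
g[0, 0] = -alpha**2 + beta_u @ beta_d
g[0, 1:] = g[1:, 0] = beta_d
g[1:, 1:] = gam

# Contravariant metric g^{mu nu} in 3+1 form
ginv = np.empty((4, 4))
ginv[0, 0] = -1.0/alpha**2
ginv[0, 1:] = ginv[1:, 0] = beta_u/alpha**2
ginv[1:, 1:] = gam_inv - np.outer(beta_u, beta_u)/alpha**2

print(np.allclose(g @ ginv, np.eye(4)))  # prints True
```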
This form of the metric has the same number of degrees of freedom,
namely ten, as in the obvious form $g_{\mu\nu}$. Here they are spread out as
one for the lapse, three for the shear and finally six in the spatial curvature.
Therefore, any metric which is not null may be written in this form.
In this thesis I only consider the evolution of matter and fields in
a background
space time, although through the Einstein equations they are
sources for the metric fields. Thus, it is important to leave $\alpha$, $\beta$
and $\gamma_{ij}$ unspecified, making the design ready for
integration with evolving metric fields.
\subsection{Different coordinate systems}
The global coordinate system \Eq{metric1} is often called the \emph{star fixed
coordinate system} ({SFCS}), because in most applications
it is asymptotically flat and, therefore, connected to inertial
observers at infinity.
If we consider instead local observers who do not observe any shear
and measure time in terms of local clocks, their line element must be given as
\begin{equation}\label{metric2}
ds^2 = - d\hat{t}^2 + \gamma_{ij} d\h{x}^i d\h{x}^j.
\end{equation}
This coordinate system is denoted the \emph{local laboratory frame} ({LOLA} frame), and
I write any quantity in this coordinate system with a hat.
In the {LOLA} frame $\gamma_{ij}$ is almost diagonal in many interesting cases,
and one could then easily rescale the problem, as done by Koide et
al.~\cite{1999ApJ...522..727K}, to evolve matter and fields as seen by local
observers, or FIDOs\footnote{FIDOs are fiducial observers whose metric
is defined as that seen by observers in local inertial frames.}, instead.
I have done so, but to keep my approach general I have exploited the idea
of always rescaling with the diagonal of the metric, even though the metric may well be
non-diagonal.
Because the off-diagonal terms in the spatial part of the metric are often
comparable in size to the diagonal ones, this rescaling effectively normalises
the metric. Since the metric is almost a FIDO metric, I have
named it the \emph{pseudo FIDO frame} ({PFIDO}) frame. In this frame the
metric tensor is given as
\begin{align}\label{eq:metric3}
ds^2 &= -d\t{t}^2 + \tilde{\gamma}_{ij} d\t{x}^id\t{x}^j\,, \\
\tilde{\gamma}_{ij} &= \frac{\gamma_{ij}}{\sqrt{\gamma_{i\,i}\gamma_{jj}}}\,,
\end{align}
and there are only three non-trivial terms in the {PFIDO} metric, because
the diagonal terms in \Eq{eq:metric3} have been normalised to unity.
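As a minimal illustration (a Python/numpy sketch, with purely illustrative matrix entries rather than any physical metric), the diagonal rescaling of \Eq{eq:metric3} amounts to:

```python
import numpy as np

def pfido_metric(gamma_ij):
    """Normalise a spatial 3-metric with its own diagonal:
    tilde(gamma)_ij = gamma_ij / sqrt(gamma_ii * gamma_jj)."""
    d = np.sqrt(np.diag(gamma_ij))
    return gamma_ij / np.outer(d, d)

# illustrative spatial metric with one off-diagonal term (not a real space time)
g = np.array([[1.5, 0.0, 0.3],
              [0.0, 2.0, 0.0],
              [0.3, 0.0, 4.0]])
gt = pfido_metric(g)
# the diagonal of gt is now unity; only the off-diagonal entries remain non-trivial
```

Note that, in contrast to a full transformation to a FIDO frame, no matrix inversion is needed; the rescaling is purely elementwise.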
The central idea of my numerical scheme is to use the {PFIDO} frame
to measure all physical quantities. The {PFIDO} frame is only
defined locally and we still need to use the global coordinates connected
to the {SFCS} to measure distances. The general way to construct
an equation is first to derive it in the {SFCS} and then to transform
the tensors and vectors from the {SFCS} to the {PFIDO} frame, while
keeping the derivatives and differentials with respect to the {SFCS}.
It is central that the transformation from the {SFCS} to the {PFIDO}
frame is completely linear and simple, even for generally evolving
coordinates. Had we, instead, chosen to go all the
way to a FIDO frame in
the general case, we would have had to invert a matrix, the metric, at every
point for every time step. The {PFIDO} frame is
a healthy in-between, which gives us almost all of the advantages of
the FIDO frame but at a much lower cost.
Intuitively it is clear that, when going to a local frame of reference,
the curvature
of space only manifests itself as extra external Coriolis-like forces, giving
some extra terms in the evolution equations below. From a numerical viewpoint
there is an added benefit when we consider space times with a strong shift or
frame dragging,
i.e.~points where $\beta^i$ is large. The standard example of this is the Kerr
metric in Boyer--Lindquist coordinates.
Inside the ergosphere, from the point of view of the {SFCS}, everything
rotates around the black hole in the same direction as the spin of the hole.
The closer we are to the event horizon, the faster the rotation induced by
the shift. From the point of view of a local observer in the {PFIDO} frame,
though, there is no shift and the locally defined velocity is much smaller.
The locally defined velocity is the truly interesting velocity, since it
arises from physical processes, while the apparently high velocity seen by an
observer attached to the {SFCS} is partly due to the geometrical structure of
the background space time, and hence partly an artefact of the chosen
reference frame. Near the horizon the frame dragging velocity induced by the
shift can be much greater than the local velocity, and we can run into
problems with numerical cancellations smearing out variations in the local
velocity. This is avoided by working in the {PFIDO} frame.
From the line elements \Eq{metric1} and \Eq{eq:metric3} we may derive the
transformation laws. In particular we have
\begin{align}
\alpha dt & = d\t{t}\,, \\
\sqrt{\gamma_{ii}} (dx^i + \beta^i dt) & = d\t{x}^i\,,
\end{align}
Since coordinate differentials are contravariant vectors, any
contravariant vector $U^\mu$ transforms like
\begin{equation}\label{eq:trans1}
\t{U}^t = \alpha U^t\,, \quad \t{U}^i = \sqrt{\gamma_{i\,i}} \left(U^i + \beta^i U^t\right).
\end{equation}
It is a matter of linear algebra to show that covariant
vectors transform like
\begin{equation}\label{eq:trans2}
\t{U}_t = \frac{1}{\alpha}\left(U_t - \beta^i U_i \right)\,, \quad
\t{U}_i = \frac{1}{\sqrt{\gamma_{i\,i}}} U_i\,.
\end{equation}
Tensors, by their very definition, transform as products of vectors.
We refer the reader to App.~\ref{chap:appa} for a complete list of
transformation relations that have proven useful when deriving the
equations in this chapter.
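The transformation laws \Eq{eq:trans1} and \Eq{eq:trans2} are simple enough to check numerically. The following Python sketch (with illustrative numbers, not taken from any physical metric) verifies that the contraction of a contravariant with a covariant vector is left invariant by the {SFCS} to {PFIDO} map:

```python
import numpy as np

def to_pfido_contravariant(Ut, Ui, alpha, beta, gdiag):
    """Eq. (trans1): U~^t = alpha U^t,  U~^i = sqrt(gamma_ii) (U^i + beta^i U^t)."""
    return alpha * Ut, np.sqrt(gdiag) * (Ui + beta * Ut)

def to_pfido_covariant(U_t, U_i, alpha, beta, gdiag):
    """Eq. (trans2): U~_t = (U_t - beta^i U_i)/alpha,  U~_i = U_i / sqrt(gamma_ii)."""
    return (U_t - beta @ U_i) / alpha, U_i / np.sqrt(gdiag)

# illustrative lapse, shift and metric diagonal
alpha, beta, gdiag = 0.8, np.array([0.1, -0.2, 0.05]), np.array([1.5, 2.0, 4.0])
Ut,  Ui  = 1.3,  np.array([0.4, -0.1, 0.2])    # a contravariant vector
U_t, U_i = -1.1, np.array([0.3, 0.2, -0.4])    # an independent covariant vector

tUt, tUi   = to_pfido_contravariant(Ut, Ui, alpha, beta, gdiag)
tU_t, tU_i = to_pfido_covariant(U_t, U_i, alpha, beta, gdiag)

# the scalar contraction U^mu V_mu is the same in both frames
print(Ut*U_t + Ui @ U_i, tUt*tU_t + tUi @ tU_i)
```

This invariance is exactly the linear-algebra statement behind \Eq{eq:trans2}: the covariant transformation is the inverse transpose of the contravariant one.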
\subsection{Basic equations}
The basic fluid equations follow from conservation laws.
The conservation of the baryon current gives
\begin{equation}\label{eq:baryoncons}
\nabla_\mu \left( \rho U^\mu \right)=0\,,
\end{equation}
where $\nabla_\mu$ is the covariant derivative, $\rho$ is the rest mass density
and
$U^\mu$ is the four velocity in the {SFCS} coordinate system.
The conservation of the energy--momentum tensor $T^\mu_\nu$ leads to a similar
expression
\begin{equation}
\nabla_\mu T^\mu_\nu = 0.
\end{equation}
The form of the fluid energy--momentum tensor that we have chosen
to use is given as
\begin{equation}\label{eq:tmunu}
T^\mu_{(HD)\nu}=\rho h U^\mu U_\nu + \delta^\mu_\nu P-2\eta\sigma^\mu_\nu\,,
\end{equation}
where $P$ is the pressure, $h = 1 + e_{int}
+ P/\rho$ is the relativistic enthalpy,
$e_{int}$ is the internal energy,
and $\eta\sigma^\mu_\nu$ is the shear
viscosity, with the shear tensor defined as
\begin{equation}\label{eq:shear}
\sigma^{\mu \nu} = \frac{1}{2}\left( h^{\mu\alpha}\nabla_{\alpha} U^\nu
+ h^{\nu\alpha}\nabla_{\alpha} U^\mu \right)\,,
\end{equation}
where $h^{\mu\nu}$ projects into the fluid rest frame
\begin{equation}
h^{\mu\nu} = U^\mu U^\nu + g^{\mu\nu}.
\end{equation}
We consider the energy--momentum tensor in mixed form as the basic
hydrodynamical object to evolve, because even for general
metrics the pressure term disappears in \Eq{eq:tmunu} for off-diagonal
components \citep{2003ApJ...589..444G}.
This is not the case for the purely co-- or
contravariant versions.
The energy momentum tensor of the electromagnetic field is
\begin{equation}
T^\mu_{(EM)\nu} = F^{\mu\sigma} F_{\sigma\nu}
- \frac{1}{4}\delta^\mu_\nu F^{\kappa\sigma} F_{\kappa\sigma}\,,
\end{equation}
where $F^{\mu\nu}$ is the electromagnetic field strength tensor.
We can simplify the covariant derivatives significantly by using
the following identities
\begin{subequations}\label{coderiv}
\begin{align}
\nabla_\mu \left( f U^\mu \right) &= \frac{1}{\sqrt{-||g||}}
\partial_\mu \left(\sqrt{-||g||}\, f U^\mu \right)\,, \\
\nabla_\mu T^\mu_\nu &= \frac{1}{\sqrt{-||g||}}
\partial_\mu \left(\sqrt{-||g||}\, T^\mu_\nu \right)
- \frac{1}{2}T^{\kappa \sigma} \partial_\nu g_{\kappa \sigma}\,, \\
\nabla_\mu F_\nu^{\phantom{\nu}\mu} &=
\frac{1}{\sqrt{-||g||}}
\partial_\mu \left(\sqrt{-||g||}\, F_\nu^{\phantom{\nu}\mu} \right)\,,
\end{align}
\end{subequations}
where $f$ is a scalar function, $U^\mu$ a vector, $T^\mu_\nu$ any
symmetric tensor, $F_\nu^{\phantom{\nu}\mu}$ any antisymmetric tensor,
and $||g||$ is the determinant of the metric.
\subsection{Selecting evolution variables}
We have chosen our field variables with respect to the
{PFIDO} frame and the basic evolution variables
take the form
\begin{align}\label{eq:D}
D &= \gamma \rho {\t U}^t = \gamma \rho W\,, \\ \label{eq:E}
\mathcal{E} &= - \gamma {\t T}^t_t - D = \gamma \left( \rho h W^2 - P - \rho W \right)\,, \\
\label{eq:Pi}
\P_i &= \sqrt{\gamma_{i\,i}} \gamma {\t T}^t_i = \sqrt{\gamma_{i\,i}} \gamma \rho h W {\t U}_i\, ,
\end{align}
where $W={\t U}^t$ is the Lorentz factor of the fluid with respect to the
{PFIDO} frame and $\gamma=\sqrt{||\gamma||}$ is the square root of the determinant
of the spatial metric.
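The definitions in Eqs.~(\ref{eq:D})--(\ref{eq:Pi}) may be sketched as follows (Python, assuming the ideal gas enthalpy $h = 1 + e_{int} + P/\rho$ used later in this chapter; the numbers in the usage note are illustrative only):

```python
import numpy as np

def prim2cons(rho, eint, W, tU_i, Gamma, sqrt_detg, gdiag):
    """Conserved variables D, E, P_i of Eqs. (D)-(Pi) from the primitives.
    Assumes an ideal-gas equation of state: P = (Gamma-1) rho eint."""
    P = (Gamma - 1.0) * rho * eint
    h = 1.0 + eint + P / rho                       # relativistic enthalpy
    D = sqrt_detg * rho * W                        # conserved mass density
    E = sqrt_detg * (rho*h*W**2 - P - rho*W)       # energy with rest mass removed
    P_i = np.sqrt(gdiag) * sqrt_detg * rho * h * W * tU_i
    return D, E, P_i
```

For a fluid at rest ($W=1$, ${\t U}_i=0$) this gives $\mathcal{E} = \gamma\rho e_{int}$: the subtraction of $D$ has removed the rest mass energy density exactly, which is the point of the definition.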
Looking at \Eq{coderiv} and \Eq{eq:trans1} we see that the reason for
choosing the factor $\gamma$ in front of the variables in
Eqs.~(\ref{eq:D})--(\ref{eq:Pi})
is to cancel out $\sqrt{-||g||}$ in \Eq{coderiv}.
The relativistic mass density $D$ is subtracted in the
definition of the total fluid energy density in order to cancel
the rest mass energy
density, which could otherwise jeopardise a numerical implementation when the
flow is non--relativistic.
\subsection{Hydrodynamic equations}
In order to highlight the physical content I first write
down the equations of motion in the
case where there are no electromagnetic fields:
$T^{\mu\nu}=T^{\mu\nu}_{(HD)}$. To find the equations of motion, we use
\Eqs{eq:trans1}{eq:trans2} and their extension to mixed-type
two-tensors (see App.~\ref{chap:appa}), together with the rules for covariant
derivatives, \Eq{coderiv}, and the fundamental conservation laws in the
{SFCS}, \Eq{eq:baryoncons} and \Eq{eq:tmunu}:
\begin{align}\label{eq:Deqofm}
\partial_t D &= -\partial_j D \overline{v}^j\,, \\ \nonumber
\partial_t \left[\mathcal{E} + \Sigma^t_t\right]
&= -\partial_j\left[\left(\mathcal{E} + \gamma P\right)\overline{v}^j + \overline{\Sigma v}_t^j\right] \\ \nonumber
&\phantom{=}+\frac{1}{\alpha}\left[\P_i\left(\partial_t+\overline{v}^j\partial_j\right)
+\Sigma^t_i\partial_t + \overline{\Sigma v}_i^j\partial_j\right]\beta^i
\\ \nonumber &\phantom{=}
-\left[DhW\left(\partial_t+\overline{v}^j\partial_j\right)
+\gamma P \left(\partial_t-\beta^j\partial_j\right)\right.
\\ \nonumber &\qquad\qquad\qquad\qquad\left.
+\Sigma^t_t\partial_t+\overline{\Sigma v}_t^j\partial_j\right]\ln\alpha
\\ \label{eq:Eeqofm} &\phantom{=}
-\partial_j\left(\gamma P \beta^j\right) + \left( \beta^i\mathcal{M}_i-\mathcal{M}_t \right)\,,
\\ \label{eq:Peqofm}
\partial_t \left[\P_i + \Sigma^t_i \right] &=
-\partial_j \left[ \P_i \overline{v}^j + \overline{\Sigma v}^j_i \right]
-\partial_i \left[ \alpha\gamma P \right] + \alpha\mathcal{M}_i\, ,
\end{align}
where the normal three-velocity has the usual definition
\begin{equation}
{\t v}^\mu = \frac{{\t U}^\mu}{{\t U}^t} = \frac{{\t U}^\mu}{W}\,,
\end{equation}
the transport velocity is the three velocity seen from the {SFCS}
\begin{equation}
\overline{v}^i = \frac{\alpha}{\sqrt{\gamma_{i\,i}}} {\t v}^i - \beta^i = \frac{U^i}{U^t}\,,
\end{equation}
the geometrical terms $\mathcal{M}_\mu$ are
\begin{equation}
\mathcal{M}_\mu = \frac{1}{2} \gamma T^{\kappa\sigma}\partial_\mu g_{\kappa\sigma}\,,
\end{equation}
and the viscosity terms are
\begin{align}
\Sigma^t_t &= - \gamma {\t \sigma}^t_t\,, \\
\Sigma^t_i &= \sqrt{\gamma_{i\,i}} \gamma {\t \sigma}^t_i\,, \\
\overline{\Sigma v}^j_t &= - \gamma \left[ \frac{\alpha}{\gamma_{jj}} {\t \sigma}^j_t
-\beta^j {\t \sigma}^t_t \right]\,, \\
\overline{\Sigma v}^j_i &= \sqrt{\gamma_{i\,i}} \gamma \left[ \frac{\alpha}{\gamma_{jj}} {\t \sigma}^j_i
-\beta^j {\t \sigma}^t_i \right]\,.
\end{align}
Even though the evolution equation for the energy has become a bit more
complicated than in the special relativistic case ($\alpha=\gamma=\sqrt{\gamma_{i\,i}}=1$,
$\beta=0$), it represents a substantial simplification in that relations between
the different variables reduce almost to the special relativistic form.
Hence, for example, the Lorentz factor $W$ may be computed as
\mbox{$W=[1+{\t \gamma}_{ij}\t U^i\t U^j]^{1/2}$}, bearing in mind
that the diagonal is already normalised. Consider a
space time without any off-diagonal spatial components but
with an arbitrary shift, for example Boyer--Lindquist coordinates in an
extreme astrophysics context or the uniform curvature gauge
in a cosmological context. In these examples, the shear viscosity is identical
to the special relativistic form. This is because the {PFIDO} frame
reduces to a FIDO frame of reference. To handle
coordinate systems that penetrate the event horizon of a black hole, for
example the Kerr--Schild coordinates, we need at least one off-diagonal
spatial component \cite{bib:cook00}. In this case extra terms
in the shear tensor arise, but changes are minimal.
\subsection{Artificial viscosity}\label{sec:av}
It was argued by Anninos \& Fragile \cite{2003ApJS..144..243A} that
in order to make a consistent
relativistic finite difference code with artificial viscosity (AV)
it is crucial to use a viscosity that has been defined in a physically
sensible manner; otherwise it will break down for flows with high Lorentz factors.
An efficient AV should be covariant in its definition, so that
the code can easily be adapted to general relativity; it should be physically
meaningful, respect energy conservation, and reduce to a normal Newtonian
AV formulation in the non relativistic limit.
We know of no implementation so far that has respected all of the
above points.
Indeed, the prevalent approach seems to be to construct a mock-up
``viscous pressure''
using the prescription $P \rightarrow P + Q_{visc}$ and then to include a
directional dependence, such that the effective energy-momentum tensor takes the
form
\begin{equation}\label{eq:viscp}
T^{\mu\nu}_{(HD)} = (\rho h + Q_{visc})U^\mu U^\nu + g^{\mu\nu} P + Q^{\mu\nu}\,.
\end{equation}
Such a viscosity may be able to deal
with mildly relativistic shocks but it does not even reduce properly
in the non relativistic limit.
A general imperfect fluid
energy--momentum tensor may be written
\begin{align}
T^{\mu\nu}_{(HD)} &= \rho h U^\mu U^\nu + g^{\mu\nu} P + Q^{\mu\nu}\,, \\
Q^{\mu\nu} &= - 2\eta\sigma^{\mu\nu} -\xi\theta h^{\mu\nu} \,,
\end{align}
where $\eta$ and $\xi$ are the shear and bulk viscosity coefficients,
\mbox{$\theta = \nabla_\mu U^\mu$}
is the expansion of fluid world lines, and $\sigma^{\mu\nu}$
is the spatial shear tensor (see \Eq{eq:shear}).
In the non relativistic limit we find that
\begin{align}
T^{tt} - D &\rightarrow \frac{1}{2}\rho v^2 + \rho e_{int}\,, \\
T^{ti} &\rightarrow \rho v^i\,,
\end{align}
which shows that any consistent shear viscosity should reduce as
\begin{align}
\left(Q^{ti},Q^{ij}\right) &\rightarrow \left(v^j \tau_{ij},\tau_{ij}\right) \\
\tau_{ij} &= \nu_{ij} \left( \partial_i v^j + \partial_j v^i\right)
\end{align}
in the non relativistic limit. Here $\nu_{ij}$ is some viscosity coefficient,
which could depend on the numerical
grid spacing $dx^i$, the local sound speed
and other factors. Neither the viscous pressure
formulation (\Eq{eq:viscp})
nor the bulk viscosity $\xi\theta h^{\mu\nu}$ reduce properly in the limit.
Only the shear viscosity $\eta \sigma^{\mu\nu}$ does so.
The shear viscosity is included directly in the energy-momentum tensor
and it is by construction covariant and preserves energy and momentum.
\subsection{Electromagnetic fields}
The $3+1$ formulation of Maxwell's equations was originally derived by
Thorne \& MacDonald \cite{1982MNRAS.198..339T} and may be written (see also
Baumgarte \& Shapiro \cite{2003ApJ...585..921B})
\begin{align}
\partial_i \gamma E^i &= 4\pi \gamma \rho_e\,,\\ \label{eq:solenoid}
\partial_i \gamma B^i &= 0\,,\\
\partial_t \gamma E^i &= \epsilon^{ijk}\partial_j(\alpha \gamma B_k)-4\pi\alpha\gamma J^i
+ \partial_j\left[\beta^j\gamma E^i-\beta^i\gamma E^j\right]\,,\\
\partial_t \gamma B^i &= -\epsilon^{ijk}\partial_j(\alpha \gamma E_k)
+ \partial_j\left[\beta^j\gamma B^i-\beta^i\gamma B^j\right]\,,
\end{align}
where $E^i$, $B^i$, $\rho_e$ and $J^i$ are the electric field, magnetic field,
charge density and current density as seen by observers in the {SFCS}
frame. With the goal of simplifying the equations, we absorb
the determinant of the 3-metric
in the definition of the different fields. Furthermore we use the fields as seen
by observers in the {PFIDO} frame. The Maxwell equations then become
\begin{align}
\partial_i \mathcal{E}^i &= 4\pi \overline{\rho}_e\,,\\
\partial_i \mathcal{B}^i &= 0\,,\\
\label{eq:Ampere}
\partial_t \mathcal{E}^i &= \epsilon^{ijk}\partial_j(\alpha \mathcal{B}_k)-4\pi\alpha\overline{J}^i
+ \partial_j\left[\beta^j\mathcal{E}^i-\beta^i\mathcal{E}^j\right]\,,\\
\label{eq:Faraday}
\partial_t \mathcal{B}^i &= -\epsilon^{ijk}\partial_j(\alpha \mathcal{E}_k)
+ \partial_j\left[\beta^j\mathcal{B}^i-\beta^i\mathcal{B}^j\right]\,,
\end{align}
where $\mathcal{B}^i=\frac{\gamma}{\sqrt{\gamma_{i\,i}}}\t B^i=\gamma B^i$, $\mathcal{B}_i=\sqrt{\gamma_{i\,i}}\gamma\t B_i=\gamma B_i$,
$\mathcal{E}^i=\frac{\gamma}{\sqrt{\gamma_{i\,i}}}\t E^i=\gamma E^i$, $\mathcal{E}_i=\sqrt{\gamma_{i\,i}}\gamma\t E_i=\gamma E_i$,
$\overline{\rho}_e=\gamma\t\rho_e$ and $\overline{J}^i=\frac{\gamma}{\sqrt{\gamma_{i\,i}}}\t J^i$. Except for
the shift terms and some lapse factors, this equation set is identical to the
special relativistic Maxwell equations.
The energy and momentum equations are modified in the presence of
electromagnetic fields, reflecting the transfer between fields and fluids:
\begin{equation}
\nabla_\mu T^\mu_{(HD)\nu} = - \nabla_\mu T^\mu_{(EM)\nu} = F_{\nu\mu}\mathcal{J}^\mu\,,
\end{equation}
where $\mathcal{J}^\mu$ is the four current vector. After some algebra we find
\begin{align}\nonumber
\partial_t\mathcal{E} &=\ldots+\gamma \left[\beta^i F_{i\mu}\mathcal{J}^\mu-F_{t\mu}\mathcal{J}^\mu\right]
=\ldots+\frac{\alpha}{\gamma} \overline{J}^i \mathcal{E}_i \\ \label{eq:energy}
&= \ldots+\frac{\alpha}{\gamma} \overline{J} \cdot \vec{\mathcal{E}}\,, \\ \nonumber
\partial_t\P_i &=\ldots+\alpha \gamma F_{i\mu}\mathcal{J}^\mu
=\ldots+\frac{\alpha}{\gamma}\left[\epsilon_{ijk}\overline{J}^j\mathcal{B}^k
+ \overline{\rho}_e\mathcal{E}_i\right] \\ \label{eq:momentum}
&=\ldots+\frac{\alpha}{\gamma}\left[\overline{J}\times\vec{\mathcal{B}}
+\overline{\rho}_e\vec{\mathcal{E}}\right]_i\,.
\end{align}
It is worth noting that the result practically reduces to special
relativity except
for the prefactor $\alpha \gamma^{-1}$.
\subsection{Ohm's Law}
If we consider relativistic MHD, we have to supply an Ohm's law to link
the electric and magnetic fields with the current density. A relativistic
version of the standard non-relativistic Ohm's law may be written
\cite{bib:lichnerowicz67,2003ApJ...585..921B,bib:meier04}
\begin{align}\nonumber
\eta_c J_i &= U^\nu F_{i\nu} \\
&= \alpha E_i U^t + \epsilon_{ijk}\left(U^j + \beta^j U^t\right)
B^k\,,
\end{align}
where $\eta_c$ is the resistivity. Using \Eq{eq:trans1} it reduces to
\begin{align}\nonumber
\mathcal{E}_i &= \frac{\eta_c}{W}\overline{J}_i - \frac{1}{\sqrt{\gamma_{i\,i}}}\epsilon_{ijk}{\t v}^j\mathcal{B}^k\\
\label{eq:ohmslaw}
&= \frac{\eta_c}{W}\overline{J}_i - \frac{1}{\sqrt{\gamma_{i\,i}}}\left[{\t v}\times\vec{\mathcal{B}}\right]_i\,.
\end{align}
Except for the Lorentz factor $W$ and the single geometric factor, this is
identical to the standard non relativistic result.
In this thesis the ideal MHD condition will not be used directly,
since resistivity is applied in the code. However, in the ideal MHD
limit $\eta_c=0$,
Faraday's law \Eq{eq:Faraday} in the {SFCS} may be reduced to
\citep{2003ApJ...585..921B}
\begin{equation}
\partial_t\gamma B^i = \partial_j\left((U^t)^{-1}U^i\gamma B^j
-(U^t)^{-1}U^j\gamma B^i\right)\,,
\end{equation}
which in our notation is
\begin{equation}\label{eq:idealmhd}
\partial_t\mathcal{B}^i = \partial_j\left(\overline{v}^i\mathcal{B}^j
-\overline{v}^j\mathcal{B}^i\right)\,.
\end{equation}
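In flat space and one dimension, the induction equation \Eq{eq:idealmhd} involves only the derivative along $y$. The following sketch (centred differences with periodic boundaries; illustrative only, and unrelated to the staggered mesh used in the actual code) makes explicit that the $y$ component of $\mathcal{B}$, and hence the divergence, is left untouched:

```python
import numpy as np

# Right-hand side of dB^i/dt = d_j(vbar^i B^j - vbar^j B^i) in 1D (y only),
# flat space, periodic boundaries. Fields and velocities are illustrative.
N, L = 64, 1.0
dy = L / N
y = np.arange(N) * dy
vbar = np.stack([0.3*np.ones(N), 0.5*np.ones(N), np.zeros(N)])  # transport velocity
B = np.stack([np.sin(2*np.pi*y), np.ones(N), np.zeros(N)])      # magnetic field

def ddy(f):
    # second-order centred difference, periodic in y
    return (np.roll(f, -1) - np.roll(f, 1)) / (2*dy)

def induction_rhs(B, vbar):
    # only the j = y term survives when nothing varies along x and z
    return np.array([ddy(vbar[i]*B[1] - vbar[1]*B[i]) for i in range(3)])

rhs = induction_rhs(B, vbar)
# rhs[1] vanishes identically, so d_i B^i = d_y B^y stays zero
```

On the staggered mesh of the actual code the same antisymmetric flux structure is what keeps \Eq{eq:solenoid} satisfied to machine precision.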
\section{The Numerical Algorithm}\label{sec:2.4}
I have used the equations of motion Eqs.~(\ref{eq:Deqofm}),
(\ref{eq:energy}), (\ref{eq:momentum}) and (\ref{eq:Faraday}), together
with \Eq{eq:Ampere} for the current density and an Ohm's law \Eq{eq:ohmslaw},
as a basis for the general relativistic code. Even though many mathematically
equivalent forms of the equations of motion exist, they may lead to numerical
implementations with radically different success rates. In this section,
I detail some of the concepts I have used to deal with the problems
that inevitably arise when solving a set of equations numerically.
The most important choice is whether to exploit the
characteristic structure of the equations or to use finite
differencing directly. In keeping with the tradition in Copenhagen
I have chosen the latter. This has helped to develop the code
in a relatively short time span, and I am indebted to the non relativistic
codes developed in Copenhagen for the techniques and tricks I have reused.
The next fundamental
choice is the form of the equations. Either we can use a flux conservative
or a non conservative formulation. There are benefits to both: In the flux
conservative formulation, the Rankine-Hugoniot jump conditions are automatically
satisfied across shock fronts even if the model does not resolve the shocks
entirely. This is not the case for a non conservative formulation.
On the other hand, in a flux conservative formulation one of the conserved
variables is the total energy. It contains contributions both from the
fluid and from the electromagnetic fields. If the plasma is strongly dominated
by the electromagnetic fields, the internal energy, the difference
between the total and electromagnetic energies, can be swamped by numerical
noise and round off. Another problem --- albeit technical --- is that in
the MHD case the conservative variables are related algebraically to the
so-called primitive variables through a sixth order polynomial. There is no
analytic solution to this problem, and an expensive numerical root-finding
method has to be used.
I have chosen a cross-breed solution: I use conservative variables for
the hydrodynamics, while in the case of MHD I do not include the
magnetic energy and momentum in the total energy $\mathcal{E}$ and covariant
momentum $\P_i$. The basic reason for not
using fully conservative variables is the problems with magnetically
dominated plasmas. As an added benefit, I circumvent the problem of
finding primitive variables through non analytic methods. Nonetheless,
at every time step it is still necessary
to find the four velocity $\t U^\mu$ and enthalpy $h$ from the
total hydrodynamic energy $\mathcal{E}$ and covariant momentum $\P_i$.
\subsection{Primitive variables}
Given the dynamical variables $D$, $\mathcal{E}$ and $\P_i$ in
Eqs.~(\ref{eq:D})-(\ref{eq:Pi}) together with the equation of state
for an ideal gas
\begin{equation}
P = (\Gamma - 1)\rho e_{int} = \frac{\Gamma - 1}{\Gamma} \rho (h-1)\,,
\end{equation}
where $\Gamma$ is the adiabatic index, I define two derived quantities
\begin{align}\label{eq:X}
X &\equiv \frac{\mathcal{E}}{D}
= (h-1)W + W - 1 - \frac{\Gamma-1}{\Gamma}\frac{h-1}{W}\,, \\
Y &\equiv \frac{\P_i \P^i}{D^2} \label{eq:Y}
= h^2 (W^2 - 1).
\end{align}
Solving \Eq{eq:Y} for $W$ and inserting the solution into \Eq{eq:X},
a fourth order polynomial in $h_m=h-1$ may be constructed whose coefficients
contain only $X$, $Y$ and $\Gamma$, viz.
\begin{align}\nonumber
h_m^4 + 2[\Gamma+1]\,h_m^3 +
[1 + \Gamma(4 - \Gamma X(2 + X) + 2Y) ]\,h_m^2 + \quad\quad \\ \nonumber
2\Gamma[1 - \Gamma X(X + 2) + (1 + \Gamma) Y ]\,h_m + \quad \\ \label{eq:root}
\Gamma^2 (1 + Y)(Y - X^2 - 2 X) & = 0.
\end{align}
When the desired root has been found, it is trivial from \Eq{eq:Y} to
obtain $\t U^i\t U_i = W^2 - 1$ and then any other desired quantity.
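A numerical round trip makes a convenient sanity check of the recovery: build $X$ and $Y$ from known primitives, solve the quartic obtained by eliminating $W$ between \Eq{eq:X} and \Eq{eq:Y}, and confirm that $h$ and $W$ come back. The sketch below selects the real root closest to the known answer, which a production solver of course cannot do; it is only meant to verify the algebra.

```python
import numpy as np

# Known primitives (illustrative): Gamma = 5/3, h = 2, W = 2
Gamma, h, W = 5.0/3.0, 2.0, 2.0
hm = h - 1.0
Y = h**2 * (W**2 - 1.0)                                   # Eq. (Y)
X = hm*W + W - 1.0 - (Gamma - 1.0)/Gamma * hm/W           # Eq. (X)

# quartic in h_m obtained by eliminating W between Eqs. (X) and (Y)
coeffs = [1.0,
          2.0*(Gamma + 1.0),
          1.0 + Gamma*(4.0 - Gamma*X*(2.0 + X) + 2.0*Y),
          2.0*Gamma*(1.0 - Gamma*X*(X + 2.0) + (1.0 + Gamma)*Y),
          Gamma**2 * (1.0 + Y) * (Y - X**2 - 2.0*X)]
roots = np.roots(coeffs)
real = roots[np.abs(roots.imag) < 1e-8].real
hm_rec = real[np.argmin(np.abs(real - hm))]               # pick the known root
W_rec = np.sqrt(1.0 + Y/(1.0 + hm_rec)**2)                # back out W from Eq. (Y)
print(hm_rec, W_rec)   # should recover h_m = 1, W = 2
```

The same elimination underlies the stable root formulae used in the code; only the root-finding strategy differs.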
Fourth order polynomials
may be solved iteratively using a range of different root-finding methods,
such as the Newton--Raphson method. I tried this; even though it most
often worked flawlessly and was fast, for certain corner cases it was
both unstable
and slow. Slowness in a few cases may be acceptable, but if the method crashes,
the simulation crashes. Stability is the key. An alternative is to use
an analytic formula for the roots, but
great care has to be taken. In any na{\"\i}ve implementation, for example
taking directly the output from Mathematica, the coefficients will cancel
numerically at the slightest difference in scale between the four velocity and
the Lorentz boost,
and the result will be imprecise. In the end I settled on a method detailed
in \cite{bib:stegun} to reformulate the problem in terms of the roots of one
third order and four second order polynomials. I find the roots using
stable formulae from \cite{bib:numrecip}, which guard against cancellations.
With this approach the code runs most tests using single precision variables,
and only for the most extreme cases (high Lorentz boost and very low pressure)
do we have to fall back to double precision. The solver is not only rock solid
but also very fast. Properly implemented, with no if-statements and all
calculations
vectorised, it takes approximately 20\% of a time step, and therefore
does not in any way dominate the cost. Note that a related approach
has been reported in \cite{bib:delzanna03}.
\subsection{Artificial viscosity}
I do not try to solve the Riemann problem at cell boundaries, either exactly
or approximately. Instead, I use finite difference derivatives. To stabilise
the algorithm it is critical to add AV. During the development of the code I have
tried many different formulations both inspired by the non relativistic codes
developed in Copenhagen, classical formulations of AV and the self consistent AV
detailed in \cite{2003ApJS..144..243A}. In the end I settled for an AV based on a
physical model of shear viscosity derived from the energy momentum tensor
of an imperfect fluid (see section \ref{sec:av}). To determine the viscosity
coefficient $\eta$ in front of the shear viscosity in \Eq{eq:tmunu} I
have extended the prescription already used in the non relativistic
codes in Copenhagen \cite{bib:stagger}, and use a Richtmyer--Morton type
hyperviscosity that depends on the local conditions in the fluid:
\begin{align}\label{eq:visceta}
\eta_{ij} &= \Delta x_{ij} \left[ \nu_1 c_s + \nu_3 |\overline{v}| + \nu_2 \Delta l
|\partial_\mu \t U^\mu|_{< 0}\right]\,, \\
\Delta x_{ij} &= \frac{1}{2} Dh \left[\Delta x^i + \Delta x^j \right]\,,
\end{align}
where $c_s$ is the relativistic sound speed, $\Delta l = \max(\Delta x^i)$ and
$|\cdot|_{<0}$ means that the strong shock viscosity is only operative
where the fluid is compressed. Except for the sound speed,
the only other changes in the coefficient $\eta_{ij}$ compared
to \cite{bib:stagger} are the use of
$Dh$, as seen by an observer in the local {PFIDO} frame, instead of the
mass density $\rho$, and the use of
the divergence of the four velocity in the relativistic case instead of
the normal divergence of the spatial three velocity in the non
relativistic case. It is non trivial to find the time derivative of the
Lorentz boost $W$. We found by experimenting with different, mathematically
equivalent prescriptions, that by far the most stable formulation is
\begin{equation}
\partial_t W = \frac{1}{2W}\partial_t\left(\t U^i \t U_i\right)\,.
\end{equation}
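This prescription is an exact identity, since $W^2 = 1 + \t U^i \t U_i$; its stability advantage is purely numerical. A finite-difference check on a toy flat-space trajectory (illustrative functions only):

```python
import numpy as np

# Check dW/dt = (1/2W) d(U^i U_i)/dt, which follows from W^2 = 1 + U^i U_i.
def U(t):
    # an arbitrary smooth spatial four-velocity trajectory (illustrative)
    return np.array([0.5*np.sin(t), 0.2*t, 0.1])

def W(t):
    u = U(t)
    return np.sqrt(1.0 + u @ u)

t, dt = 0.7, 1e-5
dW_direct = (W(t + dt) - W(t - dt)) / (2*dt)                        # dW/dt
dUU = (U(t + dt) @ U(t + dt) - U(t - dt) @ U(t - dt)) / (2*dt)      # d(U.U)/dt
dW_formula = dUU / (2*W(t))
print(dW_direct, dW_formula)
```

In the code the difference is that the right-hand side reuses velocity time derivatives that are already stored, instead of differencing $W$ itself.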
The shear viscosity, given in \Eq{eq:shear}, contains time
derivatives of the four velocity too. In the code I use a third order
Runge--Kutta integrator for the normal dynamical variables.
I evaluate the four velocity derivatives by explicit
differencing, storing old velocities from three sub time steps back in time.
This way I get third order accurate time derivatives. Unfortunately they are
not correctly time centred and I speculate that some of the problems
I see in the test problems below for high Lorentz boosts may be due to
the time derivatives lagging approximately half a full time step compared to the
rest of the terms. In the energy and the momentum equations (\ref{eq:Eeqofm})
and (\ref{eq:Peqofm}) AV terms arise both on the right hand side and
in the time derivative. I have currently not included the time derivative
of the shear viscosity in the code.
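The viscosity coefficient of \Eq{eq:visceta} can be sketched as follows (Python; a simplified scalar-input version, not the stagger code's actual implementation, with the default coefficients taken from the test runs below):

```python
import numpy as np

def av_coefficient(D, h, dx, cs, vbar_mag, divU, nu1=0.029, nu2=0.55, nu3=0.029):
    """Shear-viscosity coefficient eta_ij of Eq. (visceta).
    dx is the grid spacing per direction; divU is the four-divergence
    of the velocity; the nu2 term acts only under compression (|.|_{<0})."""
    shock = nu2 * max(dx) * max(-divU, 0.0)      # strong shock term, compression only
    dx_ij = 0.5 * D * h * np.add.outer(dx, dx)   # Delta x_ij = (1/2) D h (dx^i + dx^j)
    return dx_ij * (nu1 * cs + nu3 * vbar_mag + shock)
```

In an expanding region ($\partial_\mu \t U^\mu > 0$) only the weak terms proportional to $\nu_1$ and $\nu_3$ survive, exactly as intended.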
\subsection{The magnetic field}
The equations are evolved on a staggered mesh (see below) and
the divergence free condition \Eq{eq:solenoid} of the magnetic field
is naturally conserved.
To raise the entropy in magnetically driven shocks I use the exact
same formulation for the resistivity $\eta_c$ as in \cite{bib:stagger},
since the Maxwell
equations by construction comply with special relativity; the only change
has been to substitute a relativistically correct expression for the fast mode
speed.
Ohm's law \Eq{eq:ohmslaw} and Amp\`ere's law \Eq{eq:Ampere} are used to derive
the electric field and the current density, respectively. We use an explicit
time derivative to evaluate the displacement current. Even though it lags
behind by half a time step, like the time derivatives of the four velocity,
it has proven very effective in limiting the magnetically driven wave speeds
except when the Alfv\'en velocity becomes close to the speed of light.
The magnetic part of the code is calculated following the scheme
\begin{itemize}
\item{Calculate the resistivity $\eta_c$. It is proportional to $\nu_B \nu_3$.}
\item{Estimate the electric field:
$\mathcal{E}_i^\star = - \frac{1}{\sqrt{\gamma_{i\,i}}}\left[{\t v}\times\vec{\mathcal{B}}\right]_i$.}
\item{Calculate ${\mathcal{E}^\star}^i$ and find the displacement current
using an explicit time derivative.}
\item{Calculate an estimate for the current
$\alpha {\overline{J}^\star}^i = \epsilon^{ijk}\partial_j(\alpha \mathcal{B}_k)
+ \partial_j\left[\beta^j{\mathcal{E}^\star}^i-\beta^i{\mathcal{E}^\star}^j\right]$.}
\item{Lower the current and find the final electric field
$\mathcal{E}_i = \frac{\eta_c}{W}\overline{J}_i^\star + \mathcal{E}_i^\star$.}
\item{Use the displacement current to update the current
$\overline{J}^i = {\overline{J}^\star}^i - \frac{1}{\alpha}\partial_t {\mathcal{E}^\star}^i$.}
\item{Proceed by calculating Faraday's law \Eq{eq:Faraday} and the energy and
momentum sources, Eqs.~(\ref{eq:energy}) and (\ref{eq:momentum}).}
\end{itemize}
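The first steps of the scheme above can be sketched in flat space ($\alpha=1$, $\beta=0$, unit metric diagonal), for fields varying along $y$ only (Python; illustrative field values, and with the factor $4\pi$ absorbed into the current as in the scheme):

```python
import numpy as np

# Flat-space, 1D sketch of the E-field/current part of the scheme above.
# With uniform v and B the curl vanishes, so J* = 0 and E reduces to -v x B.
N, dy = 32, 1.0/32
v = np.tile([[0.2], [0.0], [0.1]], N)          # local three-velocity, shape (3, N)
B = np.tile([[0.0], [0.5], [0.3]], N)          # magnetic field
eta_c, W = 0.05, 1.0/np.sqrt(1.0 - 0.05)       # resistivity; W from v.v = 0.05

def ddy(f):
    # centred difference along y, periodic boundaries
    return (np.roll(f, -1, axis=-1) - np.roll(f, 1, axis=-1)) / (2*dy)

E_star = -np.cross(v.T, B.T).T                 # step 2: estimate E* = -v x B
curlB = np.cross([0.0, 1.0, 0.0], ddy(B).T).T  # only the d_y terms of curl(B) survive
J_star = curlB                                 # step 4: current estimate (4 pi absorbed)
E = eta_c/W * J_star + E_star                  # step 5: Ohm's law closes the loop
```

For this uniform configuration $J^\star$ vanishes and $E$ equals $-v\times B$ exactly, as it must in the ideal limit.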
I have tested different variations of the scheme above, using the full
version of the current density $\overline{J}^i$, including the displacement current,
to find the final electric field. Even though formally better, this turned out to be
less stable, giving short wavelength oscillations while yielding essentially the same results.
\section{Testing the Code}\label{sec:2.5}
I have implemented an MHD version of the above equations, currently
restricted to special relativity. A pure HD version has been made
for general relativity with diagonal metrics.
To test the code, I have applied a battery of tests that are presented
below. In all tests I have used a 3 dimensional version of the code.
The boundary conditions are implemented
in the $y$ direction by design, and therefore our 1D box has the
size $(1,N_y,1)$. If not stated otherwise,
in all runs, the weak shock viscosity coefficients are
$\nu_1 = \nu_3 = 0.029$, the strong shock viscosity coefficient is
$\nu_2 = 0.55$,
the magnetic resistivity coefficient (see \cite{bib:stagger}) is $\nu_B=1$ and
the Courant limit is $C_{dt}=0.3$. The code can handle more extreme problems
by tuning the different numbers, but I feel that it is important that the code
``just works''; in real physical applications the results should not rely too
much on
the tuning of these technical parameters, since that would question
the validity of the
results. As an example, by just decreasing the Courant limit
and the weak viscosity $\nu_1$, I am able to run the wall shock test with
$W_{inflow}=5$ and obtain
satisfactory results. Only in two of the magnetic tests have I tuned the
coefficients to facilitate the comparison with other codes.
\subsection{Hydrodynamical tests}
The code has been developed without extending any preexisting relativistic fluid
dynamics code, and it is important to demonstrate that it can correctly solve
a variety of purely hydrodynamical problems. Fortunately, the analytic solution to
hydrodynamic shock tubes is known \cite{bib:pons00,bib:marti94,bib:thompson86}. I
have used the \verb|RIEMANN| program published by Mart\'i and M\"uller
\cite{bib:marti03} to generate the analytic solutions.
\subsubsection{Blast waves}
The blast wave is a problem with two domains initially at rest with a
discontinuous jump in the density and pressure.
A blast wave is launched at the interface
with a very thin shell of high density.
The fluid separates into five different states:
the two initial states at the left and right boundaries, a rarefaction wave, the contact
discontinuity and a shock wave. This setup is ideal for testing how diffusive
the scheme is, since the shock wave, for suitable parameters, is very thin.
The initial states for the three problems we consider are shown in Table I.
\begin{center}
\begin{tabular}{llllllll}
\hline\hline
Table I & \multicolumn{2}{c}{Problem I} &
\multicolumn{2}{c}{Problem II} &
\multicolumn{3}{c}{Problem III} \\
Blast waves & Left & Right & Left & Right & Left & Center & Right \\
\hline
Pressure & 13.33& 0.001 & 100 & 0.1 & 100 & 0.01 & 100 \\
Density & 10 & 1 & 1 & 1 & 1 & 1 & 1 \\
Gas Gamma &\multicolumn{2}{c}{5/3} &
\multicolumn{2}{c}{1.4} &
\multicolumn{3}{c}{5/3} \\
\hline
\end{tabular}
\end{center}
\begin{figure}[th]
\begin{center}
\epsfig{figure=problemI.eps,width=\textwidth}
\caption{Problem I: A mildly relativistic blast wave problem. Notice the slight
oscillation at the edge of the shock front. This is due to the large jump in
pressure at that point.}
\label{fig:probI}
\end{center}
\end{figure}
\begin{figure}[th]
\begin{center}
\epsfig{figure=problemIIa.eps,width=\textwidth}
\caption{Problem II: A relativistic blast wave problem. Our code has
some problems maintaining sharp contact discontinuities at points
where a high density blob is moving away from a low density area, such as
just behind the high density shell.}
\label{fig:probII}
\end{center}
\end{figure}
\begin{figure}[tphb]
\begin{center}
\epsfig{figure=rhomax.eps,width=0.5\textwidth}
\caption{Problem III: Colliding blast waves. The evolution of the maximum
in density as a function of time is shown.
A resolution of 8000 points is needed to resolve the very thin shell
of high density that is created when the two blast waves collide, and to
accurately calculate the post shock profile, while with 2000 points
we marginally resolve the preshock solution at $t=0.26$.}
\label{fig:probIII}
\end{center}
\end{figure}
\begin{figure}[tphb]
\begin{center}
\epsfig{figure=problemIII.eps,width=\textwidth}
\caption{Problem III: The solution at $t=0.2$, just before the two
shock waves collide.}
\label{fig:probIIIa}
\end{center}
\end{figure}
\begin{figure}[tphb]
\begin{center}
\epsfig{figure=problemIIIb.eps,width=\textwidth}
\caption{Problem III: The system at the collision, at $t=0.265$.
Notice that we have changed the scale of both the $x$- and $y$-axes
to reflect the large change in density and to visualise the thin structures.
See Fig.~\ref{fig:probIIIa} for the legend.}
\label{fig:probIIIb}
\end{center}
\end{figure}
\begin{figure}[tphb]
\begin{center}
\epsfig{figure=problemIIIc.eps,width=\textwidth}
\caption{Problem III: The system after the collision, at $t=0.3$.
See Fig.~\ref{fig:probIIIa} for legend.}
\label{fig:probIIIc}
\end{center}
\end{figure}
\begin{figure}[tphb]
\begin{center}
\epsfig{figure=problemV.eps,width=\textwidth}
\caption{Problem IV: The wall shock problem. The solution is shown at $t=2$ and
the resolution is 200 points.}
\label{fig:probIV}
\end{center}
\end{figure}
\begin{figure}[tphb]
\begin{center}
\epsfig{figure=problemVb.eps,width=\textwidth}
\caption{Problem IV: The same as in Fig.~\ref{fig:probIV}, but the
resolution is 400 points. Notice that the number of points in the shock
interface stays the same for different resolutions: about 3, 2 and 1.5
points for the different velocities.}
\label{fig:probIVb}
\end{center}
\end{figure}
Problem I, shown in Fig.~\ref{fig:probI}, is a classic shock tube
against which most relativistic codes
have been tested (see \cite{bib:marti03} for a compilation). Ideally
the right state should have zero pressure, but for numerical reasons
we have set it to $0.001$. A small weakness of the code is already
visible in this test. When a high mass density region separates from
a low density region, such as at the contact discontinuity in
Fig.~\ref{fig:probI}, there
is a certain amount of stickiness. This is in fact a feature that prevents
low density regions from developing into true vacuums, which would crash the
code, but it also makes the advection of high density blobs more diffusive at
the trailing edge. The shock velocity is maintained to a very high precision
and the rarefaction wave is near perfect too, even at low resolutions.
Problem II is a more relativistic variation of problem I. The shock wave
is propagating
with $0.92 c$. At $t=0.4$ the shell has a thickness of $\Delta y=0.023$
or 11 grid zones at a resolution of 500 points. The AV spreads out the
discontinuity over 6 points, which explains why the shock wave is under-resolved
at this resolution; 2000 points are needed to get a reasonable solution at
$t=0.4$. Notice also that the diffusion in the density affects the flat
profiles of pressure and velocity.
Problem III is the most extreme shock tube. To make a different
setup I have removed the rigid boundaries and instead imposed periodic
boundaries (see Fig.~\ref{fig:probIIIa}). A similar problem was considered
by Mart\'i and M\"uller \cite{bib:marti94}.
Compared to problem II, the pressure in the right zone is also lowered, and the
equation of state is more sensitive to the pressure.
When the two shock waves collide at $t=0.26$ a very dense shell is created.
To track the evolution in an easy way, I have plotted the maximum density as a
function of time in Fig.~\ref{fig:probIII} for different resolutions.
To resolve the preshock state reasonably well, at least 2000 points are needed,
while 8000 points are necessary to resolve the high density region and
the post shocks. In \cite{bib:marti94} 4000 points were
needed using a shock capturing PPM method to accurately model their problem.
\subsubsection{The wall shock}
The last hydrodynamical problem I have tested against is the wall shock. A cold
fluid comes in from the right and hits a wall at the left edge where it is
reflected.
The inflow density is $\rho=1$ and the adiabatic index is $\Gamma=4/3$.
When reflected, a warm dense medium builds up. Figs.~\ref{fig:probIV} and
\ref{fig:probIVb} show the solution at different resolutions and
time $t=2$ for mildly relativistic
velocities of $v_s=0.9$ and downwards. The analytic
solution to the wall shock problem may be found in \cite{2003ApJS..144..243A}
and \cite{bib:marti03}.\\[2ex]
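The cold wall-shock solution can in fact be written in closed form: for a cold inflow with speed $v$ and Lorentz factor $W$ brought to rest at the wall, the relativistic jump conditions give a compression ratio $\sigma=(\Gamma W+1)/(\Gamma-1)$, a post-shock specific internal energy $\varepsilon=W-1$ (the kinetic energy is fully thermalised) and a shock receding at $V_s=(\Gamma-1)Wv/(W+1)$. A minimal Python sketch of these formulas (my own illustration for reference, not part of the code):

```python
from math import sqrt

def wall_shock(v, Gamma=4.0 / 3.0, rho0=1.0):
    """Analytic post-shock state for a cold inflow with speed v (in units
    of c) reflected to rest at a wall, from the relativistic jump
    conditions."""
    W = 1.0 / sqrt(1.0 - v * v)                   # Lorentz factor of inflow
    sigma = (Gamma * W + 1.0) / (Gamma - 1.0)     # rho_shocked / rho_inflow
    eps = W - 1.0                                 # kinetic energy thermalised
    v_shock = (Gamma - 1.0) * W * v / (W + 1.0)   # speed of receding shock
    p = (Gamma - 1.0) * sigma * rho0 * eps        # ideal-gas post-shock pressure
    return sigma, eps, v_shock, p

# Inflow at v = 0.9 with Gamma = 4/3, as in the figures above:
# compression ~ 12.18, shock speed ~ 0.209.
sigma, eps, v_shock, p = wall_shock(0.9)
```

For the $v_s=0.9$ inflow of Figs.~\ref{fig:probIV} and \ref{fig:probIVb} this gives a compression of about 12.2 and a shock receding at roughly $0.21c$.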
It is clear from the above tests that the code is working very well up to
a Lorentz factor of about $W=2.5$. For higher Lorentz factors the current
artificial viscosity implementation
becomes problematic. I believe there are two problems with the current
implementation: We use explicit time derivatives for the four
velocities, but precisely because
they are explicit, at a given time step $t$ they are evaluated at $t-\frac{1}{2}dt$,
and if the fluid is highly relativistic this makes a difference. In the
wall shock, I observe that only by decreasing the Courant limiter from
the stock $0.3$ to $0.01$ can I
reach an inflow velocity with a Lorentz factor of $3.5$. Anninos \& Fragile
\cite{2003ApJS..144..243A} have developed, to the best of my knowledge,
the only explicit AV based code that can handle high Lorentz factors. This is
possible, because they include the time derivatives of the viscosity.
\subsection{Magnetohydrodynamical tests}
To validate the magnetic aspects of the code, I have performed
a range of tests. Unfortunately,
in relativistic MHD no analytic solution to the Riemann problem is known,
and I have to rely on comparison with tests considered by other groups
using different codes and methods. Komissarov published in 1999 a
testbed \cite{bib:komissarov99} (hereafter K99) with different shock tubes.
Unfortunately there were some errors in the tables, which are corrected in
\cite{bib:komissarov02}. Some of the tests were used by De Villiers and
Hawley \cite{2003ApJ...589..458D} and Gammie et al.\ \cite{2003ApJ...589..444G}
to validate their respective GrMHD codes. I have continued this trend by
performing the same tests as in \cite{2003ApJ...589..458D}. They augmented
the testbed of K99 with an Alfv\'en pulse test that tests for correct wave
speed of Alfv\'en waves at different background fluid speeds and degrees of
magnetisation, and a
more complete set of magnetosonic shocks. Presented below are tests of
magnetosonic shocks, magnetised shock tubes and similar Alfv\'en pulses.
\begin{figure}[tphb]
\begin{center}
\includegraphics[width=0.4\textwidth]{SlowShock.eps}
\includegraphics[width=0.4\textwidth]{FastShockI.eps}
\includegraphics[width=0.4\textwidth]{FastShockII.eps}
\includegraphics[width=0.4\textwidth]{FastShockIII.eps}
\caption{Problem V: Magnetosonic shocks. The slow shock is top left,
fast shock I is top right, fast shock II bottom left and fast shock III is
bottom right. The buildup in the right side of the slow shock is due to
interaction with the boundary. In the other shocks, the solution is close
to perfect and buildup does not occur.}
\label{fig:probV}
\end{center}
\end{figure}
\begin{figure}[tphb]
\begin{center}
\includegraphics[width=0.4\textwidth]{BrioWu.eps}
\includegraphics[width=0.4\textwidth]{komshock.eps}
\caption{Problem VI: Magnetised shock tubes. To the left is the relativistic
version of the Brio \& Wu shock tube, to the right the K99 shock tube 2. Compared
to Figs.~6 and 7 in \cite{2003ApJ...589..458D} and Fig.~6 in
\cite{bib:komissarov99} it is clear that most of the different waves have
the correct amplitude, but there are problems with too high wave speed
and therefore errors in the rarefaction wave. This is most pronounced
for the K99 shock tube to the right.}
\label{fig:probVI}
\end{center}
\end{figure}
\begin{figure}[tphb]
\begin{center}
\includegraphics[width=0.8\textwidth]{alfven.eps}
\caption{Problem VII: Alfv\'en pulse test. We start two Alfv\'en pulses
at $y=1.5$. The wave speeds depend on the background fluid velocity
and the degree of magnetisation. We begin to get significant errors when
$v_A \gtrsim 0.7c$. In all figures, the time is selected to have the two
waves line up. This is not the case for ALF1 and ALF3.}
\label{fig:probVII}
\end{center}
\end{figure}
\subsubsection{Magnetosonic shocks}
In Fig.~\ref{fig:probV} I present a collection of four different standing
magnetosonic shock waves. The parameters of the different waves may be found in
table II and have been taken from \cite{2003ApJ...589..458D}.
In the most extreme shock, the Fast Shock III, we had to decrease the Courant
limit to $C_{dt}=0.1$ and the shock viscosities to $(\nu_1,\nu_2)=(0.001,0.03)$.
For all cases the solution is in excellent agreement with the analytical
solution. Only in the case of the slow shock has a slight overdensity built
up, which is propagating away from the shock wave. This might be due
to a relaxation of slightly imperfect initial conditions, and the solution has
instead settled to a new static solution with a small difference in the
parameters.
In the cases of the fast shocks there is an initial perturbation too,
but only a small temporary ripple. In the cases of the Fast Shock II and III
(the lower plots in Fig.~\ref{fig:probV}) the ripple has already been advected
out of the box, while in the case of the Fast Shock I it can still be seen at
the right edge of the figure.
\begin{figure}
\begin{center}
\begin{small}
\begin{tabular}{lllll}
\hline\hline
Table II & \multicolumn{2}{c}{Slow Shock ($V_s = 0.5$)} &
\multicolumn{2}{c}{Fast Shock I ($V_s = 0$)} \\
& Left & Right & Left & Right \\
\hline
Pressure & 10 & 55.33 & 2.015 & 5.135 \\
Density & 1 & 3.322 & 1.406 & 2.714 \\
Four Vel &(0,1.53,0) &(0,0.957,-0.682)&(0,1.78,0.114) &(0,0.922,0.403)\\
Mag Field&(0,10,18.28)&(0,10,14.49)&(0,3.33,2.5)&(0,3.33,4.52)\\
Gamma &\multicolumn{4}{c}{4/3}\\
$t_{\textrm{final}}$ &\multicolumn{2}{c}{2.0}
&\multicolumn{2}{c}{2.5}\\
Grid size &\multicolumn{2}{c}{512}
&\multicolumn{2}{c}{1024}\\
\hline
\hline
& \multicolumn{2}{c}{Fast Shock II ($V_s = 0.2$)} &
\multicolumn{2}{c}{Fast Shock III ($V_s = 0.2$)} \\
& Left & Right & Left & Right \\
\hline
Pressure & 2.015 & 2.655 & 2.015 & 34.99 \\
Density & 1.406 & 1.725 & 1.406 & 8.742 \\
Four Vel &(0,1.78,0.114)&(0,1.479,0.28)&(0,3.649,0.114)&(0,0.715,0.231)\\
Mag Field&(0,3.33,2.5)&(0,3.33,3.25)&(0,3.33,2.5)&(0,3.33,6.52)\\
Gamma &\multicolumn{4}{c}{4/3}\\
$t_{\textrm{final}}$ &\multicolumn{2}{c}{2.5}
&\multicolumn{2}{c}{2.5}\\
Grid size &\multicolumn{2}{c}{1024}
&\multicolumn{2}{c}{512}\\
\hline
\hline
& \multicolumn{2}{c}{Relativistic Brio \& Wu} &
\multicolumn{2}{c}{Shock tube 2 from K99} \\
& Left & Right & Left & Right \\
\hline
Pressure & 1.0 & 0.1 & 30 & 1.0 \\
Density & 1.0 & 0.13 &1.0 & 0.1 \\
Mag Field&(0,0.75,1.0)&(0,0.75,-1.0)&(0,0,20)&(0,0,0)\\
Gamma &\multicolumn{4}{c}{4/3}\\
$t_{\textrm{final}}$ &\multicolumn{4}{c}{1.0} \\
Grid size &\multicolumn{2}{c}{2048}
&\multicolumn{2}{c}{512}\\
\hline
\end{tabular}
\end{small}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\begin{small}
\begin{tabular}{lllllllll}
\hline
\hline
\multicolumn{9}{c}{Table III: Alfv\'en pulse tests}\\
Test &$\beta$&$v^y$&$v_a^+$&$v_a^-$&$10^4\times A^+$&$10^4\times A^-$&time&$B^z$\\
\hline
ALF1 &0.05&0.0&1.04(0.85)&-1.04(-0.85)&5.0(5.0)&5.0(5.0)&1.17&7.8\\
ALF2 &0.315&0.249&0.48(0.47)&0.00(0.00)&4.8(4.7)&5.2(5.3)&2.13&5.4\\
ALF3 &0.1&0.8&1.09(0.95)&0.33(0.34)&3.8(2.5)&6.2(7.5)&1.46&10.7\\
ALF4 &0.315&0.088&0.33(0.33)&-0.17(0.17)&0.49(0.49)&0.51(0.51)&4.04&5.0\\
\hline
\end{tabular}
\end{small}
\end{center}
\end{figure}
\subsubsection{Magnetised shock tubes}
I have performed two magnetised shock tube tests and the parameters
may be found in table II. The first is a relativistic
version of the classic shock tube of Brio and Wu \cite{bib:briowu88}.
The shock tube is not very extreme, and with a resolution of 2048 points,
just like in \cite{2003ApJ...589..458D}, we clearly resolve all shock waves.
The solution is shown at $t=1$. Comparing with Fig.~7 in
\cite{2003ApJ...589..458D}
we see that the wave speeds are wrong. The right rarefaction wave has reached
$y=1.1$ and is superluminal while it should have reached $y=0.9$.
The left rarefaction wave is in good agreement with Fig.~7
in \cite{2003ApJ...589..458D} propagating with $v\approx0.68$.
I was only able to obtain a stable solution of shock tube 2 of K99
by lowering the viscosity to $\nu_1=0.001$ and enhancing the magnetic resistivity
to $\nu_B=3.0$. The shock tube is at the limit of the code's capability; small
oscillations in the density $\rho$ just behind the shock wave in
Fig.~\ref{fig:probVI} are evident and there are large errors in the
rarefaction wave, which propagates superluminally at $v=1.2c$.
The forward going shock wave is only slightly wrong, propagating with
$v\approx 1$ where it should be moving with $v=0.95$.
\subsubsection{Alfv\'en Pulse test}
The test is conceptually very simple. In a background with constant magnetic
field and velocity in the $y$ direction we set up a small square pulse
in the perpendicular velocity component $v^z$. It splits into two waves
that travel with the Alfv\'en velocity. The test is presented in
\cite{2003ApJ...589..458D} and although simple
in concept it will easily reveal any errors in the wave speed. Since we use
a direct finite difference technique to solve for the magnetic field it
is critical that the displacement current is calculated correctly when
the Alfv\'en speed approaches the speed of light. It is already
evident from the shock tube test above that this is not always
the case, and this test has been invaluable during the implementation,
for assessing different schemes to calculate the displacement current.
Initially there is a constant background magnetic field
$\mathcal{B}^y$ and a constant background fluid velocity $v^y$. On top of
that a small square pulse with transverse velocity $v^z$ is
superimposed. The pulse will split in two waves travelling with
the Alfv\'en velocity, given by \cite{2003ApJ...589..458D}
\begin{equation}\label{eq:pulse}
v_a^{\pm}=\frac{v^y\pm\xi\sqrt{\xi^2 + W^{-2}}}{1+\xi^2}\,,
\end{equation}
where $\xi^2 = |b|^2/(\rho hW^2)$ and $b^\mu$ is the magnetic field measured in
the fluid rest frame. The size of the magnetic field in the fluid rest frame
in a flat space time is related to $\mathcal{B}^i$ as
\begin{equation}
|b|^2 = \frac{1}{W^2}\mathcal{B}^2 + \left[v^i \mathcal{B}_i\right]^2.
\end{equation}
Notice that there is a factor of $4\pi$ in difference
with \cite{2003ApJ...589..458D},
due to different conventions for $\mathcal{B}^i$.
We may parametrise the problem
using the definition $\beta=\sqrt{2P/|b|^2}$, i.e.\ the square root of the ratio of
gas to magnetic pressure in the fluid rest frame. For an ideal equation of
state, in terms of $\beta$ and $P$, $\xi$ is written
\begin{equation}\label{eq:xi}
\xi^2 = \frac{2P}{\rho + \frac{\Gamma}{\Gamma - 1} P}\frac{1}{\beta^2 W^2}.
\end{equation}
To facilitate comparison, I have used the same box size, $0<y<3$, pressure
$P=\frac{1}{3}\times 10^{-2}$, background density $\rho=1$ and perturbation
amplitude
$A_0=10^{-3}$ as in \cite{2003ApJ...589..458D}. The adiabatic index is
relativistic with $\Gamma=4/3$. The pulses are set up with a square
wave in $v^z$
\begin{equation}
v^z = \left\{ \begin{array}{ll}
A_0 & \textrm{if }1\le y<1.5 \\
-A_0 & \textrm{if }1.5\le y<2 \\
0 & \textrm{elsewhere}
\end{array}\right.
\end{equation}
and for fixed $\rho$ and $P$ the Alfv\'en velocities $v_a^\pm$ only depend on
$\beta$ and $v^y$. The parameters are given in table III and
Fig.~\ref{fig:probVII} shows $v^z$ at the time given in table III.
The times have been selected to those moments in time where the two pulses
line up exactly one after the other, and by visual inspection it is easy to see
how the test fares.
The amplitudes of the two waves are inversely proportional to their
Lorentz factors \cite{2003ApJ...589..458D}
\begin{equation}\label{eq:apm}
\frac{A^+}{A^-} = \frac{W(v_a^-)}{W(v_a^+)}
\end{equation}
and because the starting amplitude $A_0$ is very small, the waves should not
interact with each other.
Then, the sum of the amplitudes is equal to the initial amplitude
$A_0 = A^+ + A^-$. In table III, I have given the measured velocities
and amplitudes together with the expected ones derived from
Eqs.~(\ref{eq:pulse}) and (\ref{eq:apm}).
The tests are selected
to highlight different regimes of \Eq{eq:pulse}. In ALF1, we have a very low
$\beta$ and consequently the Alfv\'en velocity is close to the speed of
light. The code does not fare well, showing $22\%$ disagreement with the
expected value. In ALF2 the background fluid velocity is selected such that
one pulse is frozen. It can be verified from the figure that the test is
passed. In ALF3 $v^y=0.8$ and both pulses are travelling to
the right. For the fast moving pulse the wave speed is again too high, with
a $15\%$ overshoot, and the amplitudes are wrong as well.
In ALF4 I have adjusted $v^y$ to yield two pulses with $v^+_a = -2 v^-_a$,
and there are no problems with the test.
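The expected values in table III follow directly from Eqs.~(\ref{eq:pulse}), (\ref{eq:xi}) and (\ref{eq:apm}); a small Python sketch (my own illustration with a hypothetical function name, not part of the code) reproduces, e.g., the ALF2 entries:

```python
from math import sqrt

def alfven_speeds(beta, vy, P=1.0e-2 / 3.0, rho=1.0, Gamma=4.0 / 3.0):
    """Expected Alfven wave speeds v_a^+/- and amplitude fractions
    A^+/A_0 and A^-/A_0 for the pulse test."""
    W2 = 1.0 / (1.0 - vy * vy)                      # W^2 of background flow
    # xi^2 from Eq. (xi): 2P / (rho + Gamma/(Gamma-1) P) / (beta^2 W^2)
    xi2 = 2.0 * P / (rho + Gamma / (Gamma - 1.0) * P) / (beta**2 * W2)
    root = sqrt(xi2) * sqrt(xi2 + 1.0 / W2)
    vp = (vy + root) / (1.0 + xi2)                  # v_a^+ from Eq. (pulse)
    vm = (vy - root) / (1.0 + xi2)                  # v_a^-
    r = sqrt((1.0 - vp**2) / (1.0 - vm**2))         # W(v_a^-)/W(v_a^+)
    # A^+/A^- = r and A^+ + A^- = A_0, so split A_0 accordingly:
    return vp, vm, r / (1.0 + r), 1.0 / (1.0 + r)

# ALF2: beta = 0.315, v^y = 0.249 -> v_a^+ ~ 0.47, v_a^- ~ 0.00
vp, vm, Ap, Am = alfven_speeds(0.315, 0.249)
```

With $A_0=10^{-3}$ the amplitude fractions translate into the tabulated $10^4\times A^\pm$ values of $4.7$ and $5.3$ for ALF2.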
The tests indicate that the code begins to significantly overestimate
the Alfv\'en velocity when $v_a \gtrsim 0.75$, but in all cases the sum of the
amplitudes is conserved. This is in accordance with the results from the
shock tubes, where the correct jumps were observed, albeit propagating with
different velocities.\\[2ex]
The many tests presented in this section document both the strengths and
weaknesses of the code. It is essential to know the limits of the code, not
only in terms of stability, but also when to trust the physical models
produced using it.
It is clear that there are some stability
problems with high Lorentz boosts, that the code is too viscous when advecting
high density blobs away from low density areas, and that it
overestimates the Alfv\'en speed when the latter is relativistic.
The cures to these problems are twofold:
\begin{itemize}
\item{The time derivatives of four velocities and the electric field have
to be properly centred.}
\item{The time derivatives of the shear viscosity in Eqs.~(\ref{eq:Eeqofm}) and
(\ref{eq:Peqofm}) have to be included.}
\end{itemize}
On the positive side the results all show flux conservation and reproduction of
the proper jump conditions across discontinuities both in HD and MHD tests.
We can successfully model problems with severe pressure and density contrasts
and in most cases faithfully resolve sharp
features with very few points. This is done without showing oscillatory
behaviour. Even though the largest fraction of the CPU time is spent
calculating the shear viscosity, the stability and the sharpness
of discontinuous features improved fundamentally when I shifted from using a
``mockup viscosity'' to a fully physically motivated one.
The two points above are not fundamental or insurmountable
in any way and will be addressed in future work.
\section{Astrophysical Applications}\label{sec:2.6}
We can already apply the code to the understanding of
mildly relativistic phenomena. Here I present first
results from two applications
related to the areas which motivated the development of the code.
\subsection{Decaying magnetic fields in the early universe}
In the introduction, we considered the evolution of magnetic fields
in the early universe. Many analytical studies show that,
at best, it will be very hard to find traces or fingerprints of
primordial magnetic fields in the cosmic microwave background radiation,
but these analyses do not take into account the non linear coupling
between the different wave modes and the possibility of inverse cascades
transferring energy from the small to the large scales.
Christensson et al.\ \cite{bib:christensson01,bib:christensson02} argued
that, in fact, if a turbulent helical magnetic field was created, for
example at the electroweak phase transition, it would undergo
an inverse cascade, while a non-helical field would not.
As a nontrivial 3D test of the code I have initialised a simple
turbulent non-helical magnetic field and a turbulent velocity
field with power spectra given as
\begin{align}
P_\mathcal{B}(k) &= \left< |\mathcal{B}_k|^2\right> = P_0 k^{n_B}
\exp\left[-\left(\frac{k}{k_c}\right)^4\right]\,, \\
P_v(k) &= \left< |v_k|^2\right> = P_0 k^{n_v}
\exp\left[-\left(\frac{k}{k_c}\right)^4\right]\,,
\end{align}
where the index $k$ indicates the Fourier transform and due to causality,
the exponents are constrained to $n_v\ge0$, $n_B\ge2$. In accordance with
\cite{bib:christensson01}, I have taken them to be at their minimal value.
The cut-off $k_c$ is introduced to limit numerical noise near the Nyquist
frequency. In a $96^3$ run, with a box size of $[0,2\pi]^3$, where $k=1$
corresponds to the largest mode in the box, I found that a value of
$k_c=10$ was sufficient to quench the numerical noise. To generate
proper divergence free initial conditions, I first calculate the
corresponding vector potential and then take the curl. The initial
magnetic and kinetic energy are both $5\times10^{-3}$ and the average density is
$\rho=1$. The internal energy is initialised such that the sound speed
is relativistic, $c_s^2 = 1/3$.
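The vector-potential construction guarantees a divergence free field, since $\nabla\cdot(\nabla\times A)=0$. A short Python/NumPy sketch of such an initialisation (an illustration with hypothetical names, not the actual setup code; the angular dependence of $|k\times A_k|$ is ignored):

```python
import numpy as np

def turbulent_B(n=32, n_B=2, k_c=10.0, seed=1):
    """Random magnetic field with P_B(k) ~ k^n_B exp(-(k/k_c)^4), built as
    B = curl A in Fourier space so that div(B) = 0 by construction."""
    k1 = np.fft.fftfreq(n, d=1.0 / n)                # integer wavenumbers
    kx, ky, kz = np.meshgrid(k1, k1, k1, indexing='ij')
    k = np.sqrt(kx**2 + ky**2 + kz**2)
    # |A_k| ~ k^(n_B/2 - 1), so |B_k|^2 ~ k^2 |A_k|^2 ~ k^n_B (up to angles).
    amp = np.where(k > 0.0, k**(0.5 * n_B - 1.0) * np.exp(-(k / k_c)**4), 0.0)
    rng = np.random.default_rng(seed)
    Ax, Ay, Az = (amp * np.exp(2j * np.pi * ph)
                  for ph in rng.random((3, n, n, n)))
    Bx = 1j * (ky * Az - kz * Ay)                    # B_k = i k x A_k
    By = 1j * (kz * Ax - kx * Az)
    Bz = 1j * (kx * Ay - ky * Ax)
    div_max = np.abs(kx * Bx + ky * By + kz * Bz).max()  # k . B_k -> 0
    return [np.fft.ifftn(f).real for f in (Bx, By, Bz)], div_max
```

The spectral divergence $k\cdot B_k$ cancels identically term by term, so it vanishes to rounding error regardless of the random phases.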
\begin{figure}[!t]
\begin{center}
\includegraphics[width=0.49\textwidth]{Powerspec.eps}
\includegraphics[width=0.49\textwidth]{efit.eps}
\caption{Evolution of the power spectrum and the total energy
density for a turbulent magnetic field. The curves to the left are, in
order of decreasing amplitude, for $t=[0,3,9,12,15,18]$.}
\label{fig:pspec}
\end{center}
\end{figure}
Simulations of turbulence are very
sensitive to the type and amount of viscosity used, since it can alter
the long time evolution of high frequency modes significantly. I have
made a series of runs with less and less viscosity and noticed that,
with coefficients $\nu_{1,3}<0.003$ and $\nu_2 < 0.06$, there was no change
in the decay rate of the spectrum. The left panel of Fig.~\ref{fig:pspec}
shows the magnetic power spectrum at different times for a run
with $\nu_{1,3}=0.0003$ and $\nu_2=0.006$. Correspondingly, the right panel
shows the evolution of the magnetic energy in the box, which I find
decays as $E_M \propto t^{-1}$.
Comparing my results with \cite{bib:christensson01} I find good
agreement. They found an $E_M \propto t^{-1.1}$ scaling
law. The main difference between my runs and theirs is that they evolve
the non relativistic equations, while I use the relativistic equations.
\subsection{Relativistic jets}
\begin{figure}[tphb]
\begin{center}
\includegraphics[width=\textwidth]{jet11_50_final.eps}
\includegraphics[width=\textwidth]{jet11_100_final.eps}
\includegraphics[width=\textwidth]{jet11_150_final.eps}
\includegraphics[width=\textwidth]{jet11_200_final.eps}
\caption{From left to right: Density, pressure and four velocity.
From top to bottom: The jet at $t=500,1000,1500,2000$.}
\label{fig:jet}
\end{center}
\end{figure}
Relativistic jets seem ubiquitous in the universe, found over a
large range of scales from sub-parsec to kiloparsec
\cite{bib:marti03}. They are one of
the purest displays of special relativity at work. While the previous
application tested the impact of the viscosity in the code, a jet
is an excellent test of the code's capability to handle strong shocks.
Taking into account the large computational resources a
3D jet requires, I have chosen to make a 2D jet.
At the moment, only Cartesian geometry is implemented in the code and
I have constructed a slab jet, which is periodic in the $z$-direction.
The injection happens in the $y$-direction,
where rigid boundaries are imposed,
while there are periodic boundaries in the $x$-direction. To avoid
significant collision of the bow shock with itself, the computational
domain is a square with $(N_x,N_y)=(800,800)$.
I have tried to inject the jet both in a purely hydrodynamic medium void of
magnetic fields and in a medium with a parallel magnetic field $\mathcal{B}^y$. As
expected the main difference was further collimation of the jet due to
magnetic confinement. Similar experiments have been reported by other authors
\cite{bib:komissarov99,bib:koide96}. The jet has an injection radius of
$R_j = 5$ cells, the density contrast is $\rho_{ambient}/\rho_{jet}=10$ and
the pressure is $P=1$. There is pressure equilibrium between the jet and
the ambient medium. We inject the jet with a Lorentz factor of $W=1.5$
and the adiabatic index is set to be relativistic with $\Gamma=4/3$.
In Fig.~\ref{fig:jet} a sequence of snapshots is shown.
The large resolution and thin injection
radius make it possible to follow the jet until it becomes unstable
and decays.
At $t=500$ we see a classic jet. At the jet head there is a Mach shock,
and material that has passed through the head is slowly
forming a backflow along the jet, building up a shear layer.
Furthest out is the bow shock. The jet is unstable, and at later
times the jet head disintegrates into a number of vortices and loses
most of the kinetic energy. The perturbation runs backwards, slowly
unwinding the spine of the jet.
\section{Code Implementation: Performance and Scalability}\label{sec:2.7}
To successfully exploit modern massively parallel computers and
clusters of computers,
which at national centres of supercomputing often consist of hundreds of CPUs,
the numerical
implementation has to be carefully crafted. The program has to run at the
optimal speed for small problems on a single CPU on a variety of architectures,
while at the same time,
it is important to distribute the workload evenly over all CPUs in the
machine (be it a large shared memory computer or a cluster of off-the-shelf workstations).
\subsection{The stagger code}
All of the fluid dynamics codes currently in use in Copenhagen are based on or derived from
a common base code, the so called \emph{stagger code}. The first version was made by
Nordlund and Galsgaard in 1995
\cite{bib:stagger}. The GrMHD code makes use of the same basic principles. The
equations of motion Eqs.~(\ref{eq:Ampere}), (\ref{eq:Faraday}), (\ref{eq:energy}),
(\ref{eq:momentum}) are solved with finite differences through direct differentiation, and the variables
are staggered on the grid. Scalar variables $D$ and $\mathcal{E}$, and derived scalar
quantities are centred in each cell. The
primary vector quantities $\P_i$ and $\mathcal{B}^i$ are calculated on the faces of the cell while
the derived vector quantities $\mathcal{E}^i$ and $J^i$ are calculated on the edges (see
Fig.~\ref{fig:yee}). The boundary conditions are implemented as in \cite{bib:stagger}.
\begin{figure}[thb]
\begin{center}
\epsfig{figure=yee.eps,width=0.7\textwidth}
\caption{The basic staggering of different quantities on the grid. The figure
was adapted from \cite{bib:trier}. To make the figure visually easier to
understand I have on purpose drawn a left handed coordinate system.
I use a right handed coordinate system in the code.}
\label{fig:yee}
\end{center}
\end{figure}
The differentiation operators are sixth order in space. Derivatives are
calculated at half grid points and use a stencil of 6 points. In many cases this
gives a natural placement of the variables, since the different quantities already
are staggered in space. For example, the electric current $J^i$ is located at the
edges and, according to Eq.~(\ref{eq:Ampere}), is the curl of $\mathcal{B}^i$ (for
the sake of simplicity in this example we disregard the displacement current, $\alpha$ and shear $\beta^i$). $\mathcal{B}^i$ is located at the face of each
cell, and in the code the calculation of the current can be packed into three simple lines:
\begin{align}\nonumber
J^x &= \textrm{ddydn}(\mathcal{B}^z) - \textrm{ddzdn}(\mathcal{B}^y)\\ \label{eq:curlB}
J^y &= \textrm{ddzdn}(\mathcal{B}^x) - \textrm{ddxdn}(\mathcal{B}^z)\\ \nonumber
J^z &= \textrm{ddxdn}(\mathcal{B}^y) - \textrm{ddydn}(\mathcal{B}^x)
\end{align}
In some cases, most notably the complicated viscosity operator, the differentiation does not
place the variables at the desired position on the grid, and interpolation has to be done.
The corresponding interpolation operator is of fifth order.
It also uses a stencil of 6 points \cite{bib:stagger}. A crucial addition to
the original method described in \cite{bib:stagger}, which has later been
employed in most of the stagger based codes, is the use of
exponentials and logarithms to produce geometric interpolation. As an analogy, the
geometric mean of two numbers may be rewritten in terms of the arithmetic mean of the
logarithms
\begin{equation}
G(a,b)=\sqrt{a\cdot b} = \exp\left[\frac{1}{2}\left(\ln a +\ln b \right)\right]
= \exp\left[H(\ln a, \ln b)\right].
\end{equation}
Geometric interpolation has two very appealing qualities, when dealing with
discontinuities across shocks. First of all, positive definite quantities,
such as the density and the energy, stay positive. Secondly, geometric
interpolation is a much better measure
when the density or pressure is changing with orders of
magnitude over a few points. This happens at shock fronts and
surface transitions.
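As a toy illustration of the difference (my own example, not the actual fifth order routine), compare the two means across a density jump of three orders of magnitude:

```python
from math import exp, log, sqrt

def arithmetic_mid(a, b):
    # Plain arithmetic mean of two neighbouring values.
    return 0.5 * (a + b)

def geometric_mid(a, b):
    # G(a, b) = sqrt(a*b), computed as the exponential of the arithmetic
    # mean of the logarithms; positive inputs always give a positive result.
    return exp(0.5 * (log(a) + log(b)))

# A density jump of three orders of magnitude across one zone:
rho_l, rho_r = 1.0e-3, 1.0
mid_arit = arithmetic_mid(rho_l, rho_r)   # 0.5005, dominated by the high side
mid_geom = geometric_mid(rho_l, rho_r)    # ~0.0316, the geometric mean
```

The arithmetic midpoint is completely dominated by the high density side, while the geometric one interpolates the order of magnitude, which is the relevant quantity at a shock front or surface transition.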
To make the code easily readable and hide all the loops,
where the different interpolations
and differentiations are done, all operators are hidden in
a set of subroutines. In fact,
Eq.~(\ref{eq:curlB}) corresponds exactly to the simple version of the code. In the
production version, we make an effort to optimise memory references and reuse the cache
memory at each CPU, but even with full parallelisation
the $\nabla\times\mathcal{B}^i$ term
only expands to
\begin{small}
\begin{verbatim}
!-----------------------------------------------------------------
! Electric current I = curl(B)
!-----------------------------------------------------------------
do kk=ks,ke
call ddydn_set(Bz,Jx) ! Jx = ddydn(Bz) - ddzdn(By)
call ddzdn_sub(By,Jx)
call ddxdn_set(By,Jz) ! Jz = ddxdn(By) - ddydn(Bx)
call ddydn_sub(Bx,Jz)
call ddzdn_set(Bx,Jy) ! Jy = ddzdn(Bx) - ddxdn(Bz)
call ddxdn_sub(Bz,Jy)
end do
\end{verbatim}
\end{small}
The reason why sixth order differentiation and fifth order interpolation
operators are used in the stagger code is a question of balance between
precision and computational load.
The highest wavenumber a given method can resolve depends not only on the
resolution of the mesh, but
also on the order of the interpolation and differentiation operators. It was found
empirically by Nordlund and Galsgaard \cite{bib:stagger} that sixth order
operators give effectively better resolution than fourth order ones, even
after the somewhat larger computational cost of the sixth order
operations is taken into account.
Maron \cite{bib:maron04} made a formal investigation of the effective resolution
of different methods and orders and found that
a fourth order scheme can resolve up to 0.24 of the maximal wave number
$k_{max}$, while a sixth order scheme resolves waves with up to
$0.34\,k_{max}$. Going to eighth
order the maximal wave number is $0.4\,k_{max}$. At even higher orders the gain is
negligible, once the added communication
and the number of ghost zones that
have to be allocated are taken into account.
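The sixth order staggered derivative itself is compact: matching Taylor expansions through fifth order at the half grid point yields the standard weights $75/64$, $-25/384$ and $3/640$. A one dimensional Python sketch (my own illustration; the actual \verb|ddxdn|-type routines are Fortran and three dimensional):

```python
import numpy as np

def ddxdn(f, h):
    """Sixth-order staggered derivative: d[i] approximates f' at the half
    grid point (i - 1/2)*h, using the six samples f[i-3..i+2]."""
    a, b, c = 75.0 / 64.0, -25.0 / 384.0, 3.0 / 640.0
    d = np.zeros_like(f)
    d[3:-2] = (a * (f[3:-2] - f[2:-3]) +
               b * (f[4:-1] - f[1:-4]) +
               c * (f[5:] - f[0:-5])) / h
    return d
```

By construction the stencil is exact for polynomials through fifth degree (and, by its antisymmetry about the half grid point, sixth), so for example the derivative of $x^3$ is reproduced at the staggered points to machine precision.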
\subsection{The paper code: Optimal parallelisation and cache reuse}
Together with J.~Hunsballe \cite{bib:jaab} I performed what essentially
amounts to a complete rewrite of the stagger code.
We still retain the qualities that have been described above. The basic
physical equations are the same, and artificial viscosity is implemented
in the same manner.
We use the same high order interpolation and differentiation operators.
The difference is
in the technical details: our goal has been to
produce a very high level, object oriented code that is easily readable,
runs at the highest possible speed, and scales to hundreds of CPUs.
In the stagger code the basic scope for any operator (such as a differentiation operator)
has been the full three dimensional array. For example, to interpolate the density to face
values one would write:
\begin{verbatim}
call xdn_set(rho,xdnr)
call ydn_set(rho,ydnr)
call zdn_set(rho,zdnr)
\end{verbatim}
The problem with this approach is that, on modern computers, the bandwidth between the main
memory and the CPU is much lower than the computational power. There is also a large latency
involved: it can easily take 200 clock cycles from the moment the CPU asks for a specific
block of memory until it is actually delivered. To alleviate this problem, there is a small
amount of very fast memory, the cache, often integrated directly on the CPU. On current
high end architectures, such as the Itanium, Power, Alpha, Sparc and Mips based machines the
cache size is of the order of 3-8 MB per CPU, while a normal Opteron or Pentium based CPU
has 1 MB of cache. On a problem that is small by today's standards, such as a $128^3$ mesh
per CPU, the amount of cache taken by just one array is already 8 MB. That means
that at this problem size the \emph{stagger code} is already limited by
memory bandwidth rather than by the speed of
the CPU. In the new Paper Code, the basic scope is instead a slice in
the $x-y$ plane (see Fig.~\ref{fig:cache}); hence the name. The above lines of code would
then be written:
\begin{verbatim}
do kk=ks,ke
call xdn_set(rho,xdnr)
call ydn_set(rho,ydnr)
call zdn_set(rho,zdnr)
end do
\end{verbatim}
where the \verb|kk|-loop runs over the papers.
The code is almost identical, since we hide the loop index in a global
variable, but the
characteristics are radically different. Because we use a sixth order scheme,
only 5 ``papers'' of $\rho$ have to be kept in the cache for a $z$-operator,
which needs values from different papers, to keep the CPU running at maximal speed.
Even for a $512^3$ problem 5 papers take up only 5 MB of memory. By testing on an
SGI Altix machine with 3 MB of cache, we have found that performance only starts to decrease
around $400^3$, while at
$1024^2\times20$ performance has fallen to $2/3$ of maximum.
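The difference in scope can be caricatured in a few lines of Python (a hypothetical sketch with simple second order operators, not the production Fortran). The whole-array style streams the full cube through main memory once per operator; the paper style applies every operator that needs a given plane while that plane is still cache-resident, and both produce identical results:

```python
import numpy as np

def xdn(a):
    # Interpolate half a cell down in x (second order, periodic).
    return 0.5 * (a + np.roll(a, 1, axis=0))

def ydn(a):
    # Interpolate half a cell down in y.
    return 0.5 * (a + np.roll(a, 1, axis=1))

def whole_array_scope(rho):
    # stagger code style: one full sweep through memory per operator.
    return xdn(rho), ydn(rho)

def paper_scope(rho):
    # Paper code style: the kk-loop over papers; each x-y plane is
    # loaded once and every operator is applied while it is in cache.
    xdnr, ydnr = np.empty_like(rho), np.empty_like(rho)
    for kk in range(rho.shape[2]):
        plane = rho[:, :, kk]
        xdnr[:, :, kk] = xdn(plane)
        ydnr[:, :, kk] = ydn(plane)
    return xdnr, ydnr
```

In the sketch only the loop structure differs, just as in the Fortran; the payoff is that the paper loop touches each plane of `rho` once for all operators instead of once per operator.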
\begin{figure}[thb]
\begin{center}
\epsfig{figure=cache.eps,width=\textwidth}
\caption{The basic scope for any calculation in the new Paper Code is the $x-y$ plane giving
optimal reuse of cache, vectorisation and simple implementation of boundary conditions.}
\label{fig:cache}
\end{center}
\end{figure}
All modern CPUs are able to vectorise and pipeline simple instructions.
By default, we have therefore chosen the
innermost dimension, the $x$-direction, to be as simple as possible, with
periodic boundary conditions. Then, the compiler will be able to schedule
essentially all operations as SIMD instructions. The middle dimension,
the $y$-direction, carries no special role and is
the best place to calculate boundary conditions, for problems that only
contain boundaries in one direction.
This way any computational load from the boundary is spread evenly over
all the papers.
So far in Copenhagen we have had good access to shared memory machines.
By far the easiest way to parallelise the code is then to use OpenMP.
However, shared memory machines are relatively expensive and limited in size.
A few versions exist of the stagger code that use MPI to run on clusters.
One of the major current technology trends is the integration of
two (Intel, AMD, Sun) or more (IBM) CPU cores on a single piece of
silicon. We can only expect that CPUs in the future will be massively
multi-threaded. The optimal approach to parallelisation will then be
a hybrid one with OpenMP inside a single CPU node and MPI between nodes.
Current and future parallelisation strategies have been sketched
in Fig.~\ref{fig:parallel}.
\begin{figure}[thb]
\begin{center}
\epsfig{figure=parallel.eps,width=\textwidth}
\caption{Parallelisation strategies. We have demonstrated perfect scalability
up to 128 CPUs with our current OpenMP implementation. Future implementations
will be based on a hybrid OpenMP/MPI model. An added benefit of a hybrid model is
the improved cache reuse for very large box sizes (i.e. $1024^3$ and beyond).}
\label{fig:parallel}
\end{center}
\end{figure}
\begin{figure}[thb]
\begin{center}
\epsfig{figure=perf_brahe.ps,width=0.49\textwidth}
\epsfig{figure=perf_coloumbia.ps,width=0.49\textwidth}
\caption{Scalability of the paper code: Results of scaling a simple HD
experiment on an SGI Altix machine. To the left is the strong scalability
for a $256^3$ experiment on a dedicated machine with a 1.3GHz CPU.
To the right is shown the weak scalability running on a loaded machine with
a 1.6GHz CPU with the experiment size varying according to the number of CPUs.}
\label{fig:scaling}
\end{center}
\end{figure}
In the Paper Code we have effectively hidden the parallel nature of the code.
Each CPU is automatically assigned a number of papers from \verb|ks|
to \verb|ke|, and in the main part of the code, where the physics
is calculated, one only has to consider dependencies in the $z$-direction
and insert synchronisation points as appropriate.
As an example consider the use of geometric means to interpolate the
density to face values
\begin{equation}
\rho_x = \exp(\textrm{xdn}(\ln(\rho))) \,,\,
\rho_y = \exp(\textrm{ydn}(\ln(\rho))) \,,\,
\rho_z = \exp(\textrm{zdn}(\ln(\rho))) \,,
\end{equation}
where $i\textrm{dn}$ denotes interpolation half a point down in the
$i$ direction. This may be coded in two blocks:
\begin{verbatim}
do kk=ks,ke
lnr(:,:,kk) = alog(rho(:,:,kk))
enddo
!$omp barrier !<-- Sync: zdnr=zdn_set(lnr)
! depends on non-local papers
do kk=ks,ke
call xdn_exp(lnr,xdnr)
call ydn_exp(lnr,ydnr)
call zdn_exp(lnr,zdnr)
end do
\end{verbatim}
Notice that geometric interpolation is a common operation; to streamline things
we have made special interpolation operators that automatically apply the exponential.
A barrier works as a synchronisation point: the CPUs have to stop at a
barrier and wait until all of them have arrived. When only a small number
of CPUs is used, the number of barriers is not very important, but with
hundreds of CPUs it is essential that the barrier count is minimised,
since any small disturbance of one CPU makes all the others wait at the
next barrier. To take an example: if there are 100 barriers in each time
step, and each CPU is randomly disturbed once during a time step at a cost
of 1\% of the step, this extra noise gives rise to a 2\% slowdown for two
CPUs. For hundreds of CPUs the same disturbances, since they occur in
random sections, give on average at least a 50\% slowdown.
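This back-of-the-envelope argument can be checked with a small Monte Carlo model (my sketch, under the stated assumptions: each of $N$ CPUs is disturbed once per time step, in a random one of 100 barrier-delimited sections, and every section in which at least one CPU was disturbed stretches the whole step by 1\%):

```python
import random

def mean_slowdown(n_cpus, n_sections=100, cost=0.01, trials=4000, seed=42):
    # Average fractional slowdown per time step: every distinct section
    # hit by a disturbance delays *all* CPUs at the following barrier.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        hit = {rng.randrange(n_sections) for _ in range(n_cpus)}
        total += cost * len(hit)
    return total / trials
```

The model reproduces the numbers above: about 2\% for two CPUs, and $100\,(1-0.99^{100})\,\% \approx 63\%$, i.e.\ well above 50\%, for a hundred CPUs.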
By carefully analysing the numeric implementation, we have found that
in a full update of the cells the
minimum number of barriers needed to calculate any part of the code is 6.
The MHD in the old \emph{stagger code} was logically structured in different
sections, according to the different physics: first the calculation of
velocities from momenta, then the pressure, the viscosity, the stress
tensor, the MHD equations, and at last the equation of motion
for the internal energy. Since all parts need between 4 and 6 barriers,
one ends up with at least 20 barriers.
With the new code, we have applied a ``principle of origami'', folding the
logical structure of the code: after each barrier, we consider all the different
equations and calculate the maximum amount of physics possible.
By threading the 5 small MHD parts, the 6 viscosity parts, etc.\ together,
we end up with only 6 barriers. Recently, we had the
opportunity to have the paper code tested on the NASA Columbia supercomputer,
and the code scaled efficiently up to at least 128 CPUs.
We have implemented a full HD/MHD code, including self gravity \&
turbulent driving,
and a special relativistic HD/MHD version of the code described in this thesis.
Both codes show spectacular performance. The MHD code can update
1.2 million cells per CPU per second and it runs at
30\% of theoretical peak performance on the SGI Altix machine. A normal grid
based MHD code,
even when optimised, performs at
between 5\% and 10\% of peak performance (see \cite{bib:SC04}
for a detailed analysis of five state of the art codes).
In fact, we believe that the paper code is one of the highest
performing codes of its kind. This is both due to the low absolute cost
of evaluating the MHD equations for a single cell and the effectiveness
with which we have implemented the
algorithm. The special relativistic MHD version runs at 250,000 zone
updates per second, which is also well above the numbers quoted in the literature.
\section{Discussion}\label{sec:2.8}
In this chapter I have discussed the theoretical foundation and numerical
implementation of a new general relativistic magnetohydrodynamics code.
When designing a new code without building upon existing work,
it is tempting to adopt an already existing theoretical basis.
Instead, I have derived a new form of the equations of motion that uses
global coordinates while evolving the dynamical variables from the point
of view of a local observer.
This approach makes it possible to employ a highly sophisticated
artificial viscosity, and that is but one example of the possibilities
the new formulation opens up.
The implication of my approach is that any new physics
that is implemented and working in special relativity in the future,
be it a new equation of state, radiative transfer, or a perturbative
implementation of gravity, may easily be reused in the general
relativistic version of the code.
This may be done because the special and the general relativistic
versions are related through the simple formulas given in App.~\ref{chap:appa}.
When deriving the equations of motion, I have not made any assumptions
about the background metric, so that the design is ready to be coupled with
methods solving the full Einstein equations, such as the CactusCode.
This new GrMHD code has been tested on a variety of demanding problems,
and it has been
demonstrated that it is able to deal with huge pressure and density gradients.
It shows some problems in the case of flows with high Lorentz factors, but
these can be addressed and will be solved in the near future.
The tests carried out include both synthetic benchmarks, each of which tests
a certain aspect of the code, and real astrophysical applications.
The computer code is based on a refinement of the current
infrastructure for fluid dynamics used in Copenhagen, which has
been developed together with J.~Hunsballe. It shows a spectacular
performance on modern computer architectures, exploiting up to
30\% of the theoretical peak performance.
The special relativistic versions of the hydrodynamics and
magnetohydrodynamics codes are three dimensional and have been fully
parallelised.
They have been tested and scale to hundreds of CPUs, making it possible
to exploit massive supercomputers at national centres to the full extent.
I plan to employ the code in combination with the other numerical tools
presented in this thesis in order to understand extreme astrophysics near compact
objects. A first joint application of the particle code and
this code is presented in Chapter \ref{chap:global}. Furthermore, observational
cosmology is reaching a level of quality where soon not
everything can be addressed in terms of simple one dimensional linear
perturbation theory, and I plan to employ the code to understand
the non-trivial evolution of magnetic fields in the early universe.
\chapter{The Global Structure of Collisionless Shocks}\label{chap:global}
In collisionless shocks the mean free path of a particle is greater
than the extent of the shock interface. Hence the particle distribution
functions are highly anisotropic and one cannot study them using fluid
methods. Rather, the dominant means of collision is
indirect, mediated by the collective electromagnetic field.
In Chapter \ref{chap:field} it was demonstrated that in collisionless shocks,
where no large scale magnetic field exists beforehand, the resulting
electromagnetic field is largely dictated by the evolution of two-stream
instabilities.
In this chapter I study global charged particle dynamics in relativistic
collisionless $e^+e^-$ pair--plasma shocks numerically, using three dimensional
computer experiments with up to $\sim 10^9$ particles, and present
results on the application and limitations of two dimensional
simulations for the study of the global structure in ion-electron plasmas.
The pair plasma simulations are advanced to a quasi-steady state,
and compared to a fluid model. There is good agreement with the fluid model
for physically relevant quantities, such as bulk density and momentum, which
shows that the evolution can be extrapolated to larger timescales. We
find that the two-stream instability decays over 50-100 electron skin depths
and hence for a pair plasma shock remains firmly in the microphysical
domain.
This type of microphysical experiment may be used to determine
empirically an effective equation of state in a collisionless shock,
and the necessary sink and source terms that describe the conversion
of kinetic to magnetic energy due to the two-stream instability, which
could then be implemented in global fluid models,
leading to more accurate large scale simulations of phenomena such as gamma
ray bursts and relativistic jets from AGN's, where collisionless shocks
are of importance.
The technique would be similar in spirit to the role played by subgrid models
in understanding large scale turbulence, where models
of the small scale behaviour are integrated into the fluid simulations to
extend the dynamical range.
\section{Introduction}
Three dimensional particle-in-cell experiments of
ion-electron collisionless shocks with open boundaries in the
streaming direction have been considered in
Chapters \ref{chap:field} \& \ref{chap:acc}, and by
Fredriksen et al.~\cite{bib:frederiksen2002,bib:frederiksen2004},
Hededal et al.~\cite{bib:hededal2004,bib:hededal2005} and
Nishikawa et al.~\cite{2003ApJ...595..555N},
but as shown in Chapter \ref{chap:acc} the estimated true extent of
a collisionless ion-electron shock is much larger than
the computational domains that have been considered up to now.
In Chapter \ref{chap:acc} I found that for
a mass ratio of $m_i/m_e=16$ the shock extends at least
1500 ion skin depths. Unfortunately, with current
computer technology there is no hope of performing
three dimensional experiments that resolve scales all
the way from sub \emph{electron}-skin depths to 1500 ion skin
depths.
In Chapter \ref{chap:field} a qualitative difference
between ion-electron shocks and pair plasma shocks
was noted: In the case of an ion-electron plasma the
heavier ions disrupt the electron channels and the
electrons form a Debye shielding cloud around
the ion channels. This cloud of electrons stabilises the ion
channels; indeed, this is why the channels survive significantly
longer and why ions from the
upstream and downstream media interact less.
The consequence is that thermalisation of the ions and decay of
the two-stream instability in an
ion-electron dominated shock interface happens
on a fundamentally longer time scale than in a shock interface
dominated by a pair plasma.
Even though full three dimensional ab initio experiments of
ion-electron shocks are out of reach, that is not
so for pair plasma shocks. In a pair plasma the
electrons and positrons generate channels on the same time scale,
and with no shielding they are quickly disrupted. In terms
of the electron plasma time scale $\omega_p^{-1}$, thermalisation is
faster than in the ion-electron case. Furthermore, the electrons
and positrons have the same mass, and therefore many more
skin depths can be resolved in a single box.
\section{Collisionless Pair Plasma Shocks}
The two-stream instability deflects particles in the transverse direction
to the flow, and to correctly describe a collisionless shock the model has to
be fully multi-dimensional. It was shown in Chapter \ref{chap:field}
that the current channels merge in a self similar process, generating
a power-law distributed magnetic field. Medvedev et
al.~\cite{2005ApJ...618L..75M} investigated the problem theoretically
and demonstrated that the magnetic field correlation length in the
shock interface grows with the speed of light for relativistic shocks.
It is therefore necessary that our computational domain perpendicular
to the shock is comparable in size to the longest two-stream unstable
regions in the box. Otherwise the process may be limited numerically, and
with periodic boundaries perpendicular to the shock interface the experiment
reaches a state containing just a
few self-interacting current channels; then it is not entirely clear
whether the saturation of the magnetic field, the decay of the
current channels, and the thermalisation of the particles happen due to
numerical or physical effects.
The basic setup of the numerical experiment has already been described in
the previous chapters. In this section I consider three variants of the
same experiment. All are pure pair plasmas. Initially the dense downstream
medium is at rest. The density jump is a factor 3, and the inflow Lorentz factor
of the upstream medium is 3.
The only differences between the setups are in the box sizes and the
plasma frequencies considered. They all contain initially 8 particles per
species per cell in the medium at rest. In the main experiment, $A$,
the box size is $nx\times ny\times nz=80\times80\times2800$ and
the plasma frequency
is $\omega_p = 0.42$. In the two complementary experiments, $B$ and $C$, the box
sizes are $160\times160\times1400$ and $80\times80\times2800$, with
plasma frequencies of $\omega_p=0.21$ and $\omega_p=0.105$ respectively.
\begin{figure*}[!t]
\begin{center}
\epsfig{figure=maghist.eps,width=0.7\textwidth,angle=270}
\caption{The evolution of the total magnetic energy in the box as a function
of time for the three runs.}
\label{fig:maghist}
\end{center}
\end{figure*}
In experiment $A$ plasma oscillations are resolved with only
2.4 cells, which is close to the Nyquist frequency of the grid, and
indeed there is a higher level of small scale numerical noise than in
experiments $B$ and $C$. While an experiment with a smaller plasma
oscillation frequency would have been preferable, the presented runs are
at the limit of current computer capacities, containing up to a billion
particles, and only experiment $A$ settles to a steady state with
self-similar evolution. Experiments $B$ and $C$
are used to validate the behaviour at early times. In fact, the first
stages of a thermalised shock
are observed in experiment $B$, but it does not separate into distinct states
before reaching the edge of the box.
Specifically, the evolutions of the averaged current and mass
densities agree at early times in the different experiments.
Figure \ref{fig:maghist} shows
the evolution of the total magnetic energy in the box. There is a clear
difference in the initial level of fluctuations between the experiments,
due to the difference in plasma oscillation frequencies, but the growth
rate is the same for the three cases, and the experiments show the
same late time behaviour. We can separate the evolution into
three phases: first an initial inflow phase, where the particles
have not yet undergone the two-stream instability. At around
$t=10\,\omega_p^{-1}$ the two-stream instability commences, and it
is saturated at around $t=40-100\,\omega_p^{-1}$. From then
on the region containing shocked material and a diffuse turbulent
magnetic field expands, while a reverse and a forward shock
are formed, and a slow rise in the total magnetic field is seen.
Both at the forward and the reverse shock interface a permanent
two-stream instability is observed.
\begin{figure*}[!th]
\begin{center}
\epsfig{figure=lscale_2600.eps,width=\textwidth}
\epsfig{figure=hydro_2600.eps,width=\textwidth}
\caption{The large scale structure at $t=1100\,\omega_p^{-1}$.
(a) The average density of electrons in an $x-y$ slice, with
similar plots for $t=1016\,\omega_p^{-1}$ and $t=1184\,\omega_p^{-1}$
as dashed lines. (b) The relative amount of power in the dominating
mode compared to the total power in a two dimensional Fourier
transform of the transverse magnetic field. (c) Average energy
density in the magnetic field. (d) Average four velocity in the $z$-direction
and a similar profile from an HD run. (e) Density from an HD run.}
\label{fig:lscale}
\end{center}
\end{figure*}
The large scale structure at $t=1100\,\omega_p^{-1}$ for experiment $A$ is
shown in Fig.~\ref{fig:lscale}. The average density profile has a
pile-up of matter from $Z=350$ to $700$; this is the shocked area.
In Fig.~\ref{fig:lscale} (a) profiles for earlier
and later times have been overplotted to illustrate how the shocked
area expands with time, with the
forward shock to the right moving faster than the reverse shock to
the left.
Panel (b) in Fig.~\ref{fig:lscale} shows the relative amount of power in the
strongest mode of a Fourier transform of the transverse
magnetic field compared to the integrated power. It has been calculated
by taking the Fourier transform of the magnetic field in the $x-y$ plane,
finding the mode with the largest amplitude, and then dividing its power
by the integrated power.
In the two-streaming shock interface, the largest transverse scale
of the box is dominant, and the ratio is close to 1, while in the centre of the
shocked medium the magnetic field has decayed and power is distributed
over a range of scales. In (c) the magnetic energy in the field is plotted
and the two shock interfaces are clearly seen to be separated.
From Fig.~\ref{fig:lscale} (c) it can be estimated that the two-stream unstable
regions have a width of between 50 and 100 \emph{electron} skin depths.
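The diagnostic in panel (b) is simple to reproduce. The Python sketch below is mine, not the original analysis code; it removes the mean field first, so that the $k=0$ mode does not trivially dominate (an assumption on my part), and counts the $\pm k$ conjugate pair of a real field as one physical mode:

```python
import numpy as np

def dominant_mode_fraction(b_plane):
    # Power spectrum of a transverse field slice, mean removed so the
    # k=0 mode does not trivially dominate (my assumption).
    power = np.abs(np.fft.fft2(b_plane - b_plane.mean())) ** 2
    total = power.sum()
    if total == 0.0:
        return 0.0
    # For a real field the power at k and -k is identical; count the
    # conjugate pair as a single physical mode.
    ix, iy = np.unravel_index(np.argmax(power), power.shape)
    nx, ny = power.shape
    jx, jy = (-ix) % nx, (-iy) % ny
    if (ix, iy) != (jx, jy):
        return (power[ix, iy] + power[jx, jy]) / total
    return power[ix, iy] / total
```

With this convention a field dominated by one transverse scale gives a ratio close to 1, while a field with power spread over many scales gives a small ratio, as in the two regimes described above.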
In order to validate that the jump conditions are satisfied I have used the
fluid code described in Chapter \ref{chap:GrMHD} to setup a similar shock
problem, but in fluid dynamics. I have chosen a relativistic equation of
state with $\Gamma=4/3$ and a density jump of 3, an inflow four velocity of
1.7 and a uniform pressure of $P=0.2$. The velocities and densities are
taken in accordance with the unperturbed states seen in experiment $A$ around
$t=1100\,\omega_p^{-1}$. A priori it is not clear how to measure the pressure,
since there are contributions from the random magnetic field, the two-stream
generated magnetic field, heating
from backscattered particles to the left of the shock, and heating from
particles that were not scattered to the right of the shock.
Instead of trying to measure an ill-defined pressure, I have chosen
the pressure $P$ such that the maximum density in the right shock wave is in
accordance with the particle data.
This is a reasonable approach, because by fixing the pressure according
to one single parameter, we see good
agreement between the two models for the velocity profile
and density profile in the other parts of the shock wave.
Furthermore, the shock profile
is seen to move with a velocity of 0.6 (approximately 50 skin depths in 84
skin depth times; see Fig.~\ref{fig:lscale} (a)),
while the corresponding fluid shock, in good agreement, moves with a
velocity of 0.65.
Naturally, the profiles and bulk velocities do not correspond exactly,
since not all facets of the particle experiment are reflected
in the fluid shock. The hydrodynamical experiment does not include magnetic
fields in any way, since no magnetic fields are present in the initial
state. In contrast to that, in experiment $A$, strong magnetic field
generation at the discontinuities in the velocity profile is seen. Moreover, the
discontinuities are still rather smooth in experiment $A$, due to the
collisionless nature of the shock. To take an
example, some of the upstream particles, coming from the left,
do not scatter in the shocked region. As a result there is a smooth transition
in velocity and density at the forward shock front to the right.
If the box size was larger, and the shock could be followed
for longer times, making the extent of the shocked medium larger compared
to the two-stream interfaces and effective mean free path, this smooth transition
would stay constant and the solution would converge even better to that of a
fluid shock. A detailed comparison between larger experiments,
run for longer timescales, and MHD shock tubes with a range of magnetic
field configurations has to be made to fully understand
the implications of the two-stream instability for the jump
conditions; but this first experiment has shown that it is indeed possible
to recreate the fluid representation of a pair plasma collisionless
shock ab initio, using a particle code and working from first principles.
Recently similar experiments were presented by
Spitkovsky~\cite{bib:spitkovsky}, and his results are in agreement
with our findings.
The relatively short thermalisation length observed in this experiment implies
that in the case of a pair plasma shock for astrophysical applications,
the two-stream instability remains a purely microphysical phenomenon, which
probably has little impact on any observed quantities in astrophysical
shocks, simply due to the small volume in which it takes place.
\section{Collisionless Ion-Electron Shocks and Limits to Current Experiments}
The computer experiments presented in the last section are only a few of
a series of large scale experiments that I have performed during
the last year. The aim is to understand the global structure
in ion-electron dominated collisionless shocks, by performing a series
of three-dimensional experiments with lower and lower mass ratios
in order to finally obtain a thermalised profile, and furthermore with this body
of experiments to be able to rescale the results to realistic mass ratios.
But even using mass ratios down to $m_i/m_e=4$,
with the current computer limitation of around a billion particles, it
is not possible to reach a state in the experiments where both the
ions and the electrons fully thermalise and two shock interfaces emerge.
A related but feasible project is instead to understand collisionless
been considered for some time in the literature (e.g.~Califano
et al.~\cite{1998PhRvE..57.7048C} and Kazimura et al.~\cite{bib:Kazimura}),
but until now no large scale 2D simulations have
been made with open boundaries (though see Medvedev et
al.~\cite{2005ApJ...618L..75M} for an experiment with periodic
boundaries in the streaming direction, and the work
by Hededal \cite{bib:hededalthesis}).
To compare 2D and 3D simulations, in this section I present two
experiments with \emph{exactly} the same initial conditions as
the 3D experiment considered in
Chapter \ref{chap:acc} (from now on experiment $C$).
They both have an inflow Lorentz boost of $\Gamma=15$, an electron
plasma frequency
of $\omega_{p,e}=0.3$ and a mass ratio of $m_i/m_e=16$. I use 8 particles
per species per cell.
Experiment $A$ has the same box size transverse to the
flow as experiment $C$ with $nx\times nz=125 \times 4000$, while
experiment $B$ is much wider with $nx\times nz=1600\times 4000$.
\begin{figure*}[!th]
\begin{center}
\epsfig{figure=final2d.eps,width=\textwidth}
\epsfig{figure=final3d.eps,width=\textwidth}
\caption{From top to bottom: The current density of the ions and electrons
of experiment $B$, experiment $A$ and the averaged current density in the
$y$-direction
of experiment $C$, reported on in Chapter \ref{chap:acc}.
The dashed lines indicate the region used for constructing particle
distribution functions. The figure is shown with the correct aspect ratio
and the snapshots are taken at $t=125\,\omega_{p,e}^{-1}$. Length units
are given in electron skin depths.}
\label{fig:lscale2d}
\end{center}
\end{figure*}
I have selected the two experiments to address the following
questions: 1) How much do 2D and 3D experiments differ, both quantitatively
and in the underlying physical processes? 2) What is the impact
of the narrow boxes that have been considered until now? In nature
a collisionless shock is much wider than the shock interface is long,
and during the instability the different regions, by causality, can
only interact with a finite area of the shock front. But in some of the
3D experiments, constructed to capture the streaming nature of the shock properly,
the boxes have been far too small in the direction transverse to the shock.
In Fig.~\ref{fig:lscale2d} the current densities for the
three experiments are shown at time $t=125\,\omega_{p,e}^{-1}$. We see basic
agreement in morphology between experiment $A$ and experiment $C$,
though the ion current channels in experiment $C$ are thicker and
better structured.
Comparing experiment $A$ and $B$ we see that the larger box size leads to a
much more complex picture than the simple idea of current channels
streaming strictly in the direction of the flow. There is a dazzling
array of interactions going on, with nontrivial interactions between
the channels. In the lower right of experiment $B$ there is an almost
square-shaped complex of current channels, which by itself is wider
than both experiments $A$ and $C$. The filamentary structure is sustained,
but in contrast to the simple toy model presented in
Chapter \ref{chap:field} and by Medvedev et al.~\cite{2005ApJ...618L..75M},
one cannot speak of a merging hierarchy of ion channels, and the lifetime
of an individual channel is quite small.
The process is initially ignited by the two-stream instability, and in
experiment $B$ abundant examples of forming channels may be found, not
only at the initial interface at $Z=200$, but also downstream
of the first generation, due to two-stream like configurations
of the magnetic field. Moreover we observe direct merging and head on collisions
between the individual channels.
Counteracting this process, and partly responsible for the decay of the ion
channels, is the electric potential. Near the centre of a channel
there is a strong overdensity of positive charge, and even after taking
into account the relativistic
time dilation factor -- or, equivalently, the self generated magnetic fields
of the channels -- they will ``explode''. In two dimensions they leave a
cone-like
structure containing two trails of ions, with some symmetry since
everything happens at the speed of light, while in three dimensions a
ring-like structure is created.
A three-dimensional version of this explosion may be seen in
Fig.~\ref{fig:acceleration}B, where, except for the helix structure,
everything is stabilised and symmetric
due to self-interaction of the channel. This electrostatic explosion makes
the evolution of collisionless shocks even more dynamic and intermittent.
It is important to point out that the effect depends
on the effectiveness by which the electrons Debye shield the ion channels,
and therefore on the mass ratio of the experiment. At higher, more realistic
mass ratios, the shielding is more effective and the timescale for the
breakup of the ion channels is longer.
The relatively short time scale of experiment $B$ does not make it
possible to assess the long-term implications of this richer
structure, but it does show
that it is important to have an adequate resolution transverse to
the flow, and not only in the streaming direction, if a full
understanding of collisionless ion-electron shocks is to be obtained.
The highly dynamic nature and evolution along the streaming direction
also show that streaming experiments with open boundaries are essential
to understand the state of the plasma far downstream of a collisionless
shock interface.
\begin{figure*}[!t]
\begin{center}
\epsfig{figure=hist_2000.eps,width=0.49\textwidth}
\epsfig{figure=hist_2050.eps,width=0.49\textwidth}
\caption{Particle distribution function for the electrons in a slice around
$Z=400\,\omega_{p,e}^{-1}$. To the left is shown the PDF for experiment $B$
and to the right the PDF for experiment $C$. The PDF for experiment $A$
is identical to $B$.}
\label{fig:slope}
\end{center}
\end{figure*}
It is important to understand if there are differences in the morphology
of the currents observed in experiments with two- and
three-dimensional shocks. But in order to make a more formal and
quantitative investigation we have to look at the particle statistics.
I have measured the particle distribution functions (PDFs) for the electrons in
a slice of the domain delimited in Fig.~\ref{fig:lscale2d} by dashed lines for
the three experiments. The slice has been selected to lie at the point in
the shock
where the electrons are on the brink of merging into a single continuous
population, but where the form of the PDF is still dominated by remnants of the
initial upstream and downstream populations. The slope of the PDF
indicated in the figure depends on the amount of heating in the populations:
a warmer upstream population is broader in phase space, and consequently
its maximum is lower, giving rise to a steeper slope. It should be
emphasised that the perceived power law seen in the figure is merely
a consequence of the merging populations. This is easily verified
by noting that the ``power law'' breaks at around $v\gamma=15$, the
velocity of the in-streaming population.
We find perfect agreement
between the PDFs of the two-dimensional experiments $A$ and $B$, which both
have a slope index of $2.1$, while experiment $C$ has a slope index of $1.55$
(see Fig.~\ref{fig:slope}). The reason for this significant difference
in the heating rates of the electrons in two-dimensional shocks compared
to three-dimensional ones is the same as the reason for the shorter lifetime
of the ion channels in the two-dimensional experiments.
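A slope index such as those quoted above can be extracted by a least-squares fit in log-log space over the range between the thermal peak and the break at $v\gamma\approx15$. A minimal Python sketch (the synthetic PDF and fit range below are illustrative assumptions, not the actual simulation data):

```python
import numpy as np

def slope_index(v_gamma, pdf, fit_range=(3.0, 15.0)):
    """Least-squares power-law slope of a PDF, N(v*gamma) ~ (v*gamma)**-s,
    fitted in log-log space over the given (illustrative) velocity range."""
    mask = (v_gamma >= fit_range[0]) & (v_gamma <= fit_range[1]) & (pdf > 0)
    logx, logy = np.log10(v_gamma[mask]), np.log10(pdf[mask])
    s, _ = np.polyfit(logx, logy, 1)
    return -s  # slope index, positive by convention

# Illustrative synthetic PDF with a known slope of 2.1 (not simulation data)
x = np.logspace(0.0, np.log10(15.0), 200)
pdf = x**-2.1
print(round(slope_index(x, pdf), 2))  # recovers 2.1
```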
Essentially we can understand the differences between two- and
three-dimensional simulations by considering the \emph{real}
space available in the two cases. 1) There is a difference in the dynamics of
the ion channels: If we make a transverse cut through the flow
in the three-dimensional
case, the channels may be likened to particles in a two-dimensional plane
(see Fig.~\ref{fig:Slice} in Chapter
\ref{chap:field} for an illustration of this).
We have carefully studied the time evolution of this gas of current channels
in the experiment considered in Chapter \ref{chap:field}. It is found
that channels colliding exactly head on will generally not coalesce; in
many cases they instead destabilise, because
of the inertia of each channel. In cases where the impact parameter is
small but non-zero, so that the two channels slightly miss each other, a much smoother
collision process is initiated by the in-spiralling of
the two channels, ultimately leading to the formation of a single channel.
In a two-dimensional simulation, though, the two-dimensional transverse plane
reduces to a one-dimensional line, and consequently two merging channels
always collide head on, making coalescence more difficult, the transverse
velocities, i.e.~the temperature, of the ions higher, and the lifetime of
the channels shorter. 2) There is a difference in the dynamics of the electrons:
The electrons Debye shield the ion channels, and move generally in accordance
with the toy model described in Chapter \ref{chap:acc}. In the three-dimensional
case the paths of two such
electrons have been depicted in Fig.~\ref{fig:acceleration} in
Chapter \ref{chap:acc}.
In general the electrons do not move exactly through the centre of the current
channel, but instead traverse some kind of complicated ellipsoidal path.
In the two-dimensional case though, because there is no ``third dimension''
to miss the centre in, the electrons have to go right through
the centre of the ion channel, and gain the maximum amount of acceleration.
The acceleration is to a large extent potential, and the electron loses
most of the energy climbing back out of the potential well, but because of the
time dependence of the fields there is, statistically, some net momentum
transfer. In the two-dimensional case the electrons have to pass through
the local minimum of the electrostatic potential, and hence the heating
is more effective than in the three-dimensional case.
The experiments presented in this section have shown that, while the
basic morphology and dynamics do carry over from three to two dimensions,
there are some quantitative differences. The heating of electrons and ions
is more effective, and the two-stream generated ion channels are less
stable, in two-dimensional than in three-dimensional experiments. Nonetheless, if we take these
differences into consideration when interpreting two-dimensional experiments,
these experiments are still the most promising tools for understanding
the global structure
of ion-electron collisionless shocks. From the above discussion it is clear that
the extent of the two-streaming region in
an ion-electron collisionless shock, as inferred from future two-dimensional
experiments of the global shock structure,
will most likely
be smaller than in the case of a three-dimensional shock. This conclusion
may be drawn directly from the higher heating rate alone, as observed in
Fig.~\ref{fig:slope}.
\section{Conclusions}
Collisionless shocks arise in many astrophysical objects, and the correct
understanding of relativistic collisionless shocks has implications for the
observations of outflows from compact objects, such as gamma ray burst afterglows
and relativistic jets. We have seen in the preceding chapters that magnetic field
generation and particle acceleration are integral parts of collisionless shocks
in the case of weak or absent large scale magnetic fields.
To understand the impact on observations it is essential to investigate how far
downstream of the initial shock the two-stream unstable region
extends. With this in mind I have, in the current chapter,
discussed the global structure of collisionless shocks.
In the first part of the chapter I have presented a fully three-dimensional
model of colliding pair plasmas using
a particle-in-cell code, and observed the thermalisation of the plasma due to
the collective electromagnetic field, and the formation of a macro physical
shock structure. Comparing the results to a fluid simulation, with the same
initial conditions, good agreement is found, implying that the full structure
of the shock has been resolved.
Crucially, I have estimated that the decay of the two-streaming region and
subsequent thermalisation happen over 50-100 \emph{electron} skin depths.
Thus, for the specific case considered, the two-stream
instability in pair plasmas is a completely microphysical phenomenon. Hence,
the two-stream instability in collisionless shocks comprised
purely of leptonic matter may have few direct observational consequences.
In the second part of the chapter I have considered the global structure
of ion-electron dominated collisionless shocks. With current computer capacities
it is impossible to correctly model the global structure of an ion-electron
shock in three dimensions. Two-dimensional collisionless shocks remain a
promising alternative, and I have investigated their applicability
in understanding
three-dimensional models. It has been
shown that while indeed two-dimensional shocks, for the time being, are
our best hope to grasp numerically the global structure of ion-electron
collisionless shocks, there are some differences, and caution should be
voiced in directly generalising results from two-dimensional experiments to
three dimensions. The ion channels that form
due to the two-stream instability are less stable, and the heating
rate of the electrons is higher. Both factors contribute to a faster
thermalisation than what can be expected from three-dimensional experiments
in the future, and hence give rise to an underestimation of the extent
of the two-stream
unstable region. Nonetheless, the overall physical picture is the same, and these
differences can be taken into account.
\chapter{Magnetic Field Generation in Collisionless Shocks;
Pattern Growth and Transport}\label{chap:field}
In this chapter I present results from three-dimensional particle
simulations of collisionless
shock formation, with relativistic counter-streaming ion-electron plasmas
first published in Frederiksen et al.~\cite{bib:frederiksen2004}.
Particles are followed over many skin depths downstream of the shock. Open
boundaries allow the experiments to be continued for several particle crossing
times. The experiments confirm the generation of strong magnetic and electric
fields by a Weibel-like kinetic streaming instability, and demonstrate that the
electromagnetic fields propagate far downstream of the shock. The magnetic
fields are predominantly transversal, and are associated with merging ion current
channels. The total magnetic energy grows as the ion channels merge, and as the
magnetic field patterns propagate downstream. The electron populations are
quickly thermalised, while the ion populations retain distinct bulk speeds in
shielded ion channels and thermalise much more slowly. The results help us to reveal
processes of importance in collisionless shocks, and may help to explain the origin
of the magnetic fields responsible for afterglow synchrotron/jitter radiation from
Gamma-Ray Bursts.
\section{Introduction}
The existence of a strong magnetic field in the shocked external
medium is required in order to explain the observed radiation in
Gamma-Ray Burst afterglows as synchrotron radiation
\citep[e.g.][]{bib:Panaitescu+Kumar}. Nearly collisionless shocks,
with synchrotron-type radiation present, are also common in many other
astrophysical contexts, such as in supernova shocks, and in jets
from active galactic nuclei. At least in the context of Gamma-Ray
Burst afterglows the observed synchrotron radiation requires the
presence of a stronger magnetic field than can easily be explained by just
compression of a magnetic field already present in the external medium.
Medvedev \& Loeb \citep{1999ApJ...526..697M} showed through a linear
kinetic treatment how a
two-stream magnetic instability -- a generalisation of the Weibel
instability \citep{1959PhRvL...2...83W,bib:YoonDavidson} -- can generate a
strong magnetic field ($\epsilon_B$, defined as the ratio of magnetic energy to
total kinetic energy, is $10^{-5}$-$10^{-1}$ of equipartition value)
in collisionless shock fronts
\citep[see also discussion in][]{2003MNRAS.339..881R}. We
note in passing that this instability is well-known in other plasma
physics disciplines, e.g. laser-plasma interactions
\cite{bib:YangGallantAronsLangdon,1998PhRvE..57.7048C},
and has been applied in the context of pulsar winds
by Kazimura et al.~\cite{bib:Kazimura}.
Using three-dimensional particle-in-cell simulations to study
relativistic collisionless shocks (where an external plasma impacts the
shock region with a bulk Lorentz factor $\Gamma = 5-10$),
Frederiksen et al.~\cite{bib:frederiksen2002},
Nishikawa et al.~\cite{2003ApJ...595..555N}, and
Silva et al.~\cite{2003ApJ...596L.121S}
investigated the generation of magnetic fields by the two-stream
instability.
In these first studies the
growth of the transverse scales of the magnetic field was limited by the
dimensions of the computational domains. The durations of the
Nishikawa et al.~\cite{2003ApJ...595..555N} experiments were less than
particle travel times through the experiments, while
Silva et al.~\cite{2003ApJ...596L.121S} used periodic
boundary conditions in the direction of streaming.
Further, Frederiksen et al.~\cite{bib:frederiksen2002}
and Nishikawa et al.~\cite{2003ApJ...595..555N} used electron-ion ($e^-p$)
plasmas, while experiments reported upon by
Silva et al.~\cite{2003ApJ...596L.121S} were done with $e^-e^+$
pair plasmas.
Here, we report on 3D particle-in-cell simulations
of relativistically counter-streaming $e^-p$
plasmas. Open boundaries are used in the streaming direction, and experiment
durations are several particle crossing times.
Our results can help to reveal the most important
processes in collisionless shocks, and help to explain the observed afterglow
synchrotron radiation from Gamma-Ray Bursts.
We focus on the earliest development in
shock formation and field generation. Late stages in shock formation will be
addressed in Chapter \ref{chap:global}.
\section{Simulations}
Experiments were performed using a self-consistent 3D3V electromagnetic
particle-in-cell code originally developed for simulating reconnection
topologies \citep{bib:HesseKuzenova},
redeveloped by Frederiksen \cite{bib:trier} to obey special relativity
and to be second order accurate in both space and time.
The code solves Maxwell's equations for the electromagnetic
field with continuous sources, with fields and field source
terms defined on a staggered
3D Yee-lattice \citep{bib:Yee}. The sources in Maxwell's equations
are formed by weighted averaging of particle data to the field grid,
using quadratic spline interpolation. Particle velocities and positions are
defined in continuous (${\bf{r}},\gamma{\bf{v}}$)-space,
and particles obey the relativistic equations of motion.
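The thesis does not spell out the particle integrator beyond it being second-order accurate; a common choice in relativistic PIC codes is the Boris push, sketched below purely as an illustration of integrating the relativistic equations of motion (the function, units, and values are assumptions, not the code's actual implementation):

```python
import numpy as np

def boris_push(u, E, B, qm, dt):
    """One relativistic Boris step for u = gamma*v (in units of c).
    qm is the charge-to-mass ratio; E, B are the local field vectors.
    Illustrative sketch only -- not the thesis code's actual integrator."""
    # first half of the electric-field kick
    u_minus = u + 0.5 * qm * dt * E
    # magnetic rotation, norm-preserving by construction
    gamma = np.sqrt(1.0 + np.dot(u_minus, u_minus))
    t = 0.5 * qm * dt * B / gamma
    s = 2.0 * t / (1.0 + np.dot(t, t))
    u_prime = u_minus + np.cross(u_minus, t)
    u_plus = u_minus + np.cross(u_prime, s)
    # second half of the electric-field kick
    return u_plus + 0.5 * qm * dt * E
```

With $E=0$ the step is a pure rotation of $\gamma\mathbf{v}$, so the particle energy is conserved exactly, which is the main virtue of this family of schemes.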
The grid size used in the main experiment was $(x,y,z)=200\times200\times800$,
with 25 particles per cell, for a total of $8\times10^8$ particles,
with ion to electron mass ratio $m_{i}/m_{e} = 16$.
To adequately resolve a significant number of electron and ion
skin-depths ($\delta_e$ and $\delta_i$), the box size was chosen such that
$L_{x,y} = 10\delta_i \sim 40\delta_e$ and $L_z \sim 40 \delta_i
\sim 160\delta_e$. Varying aspect and mass ratios were used in complementary experiments.
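As a consistency check on the quoted dimensions: with $m_i/m_e=16$ the skin depths are related by $\delta_i/\delta_e=\sqrt{m_i/m_e}=4$, which reproduces the electron-skin-depth box sizes above. In Python:

```python
import math

m_ratio = 16                      # m_i / m_e used in the main experiment
skin_ratio = math.sqrt(m_ratio)   # delta_i / delta_e = sqrt(m_i / m_e)

L_xy_i, L_z_i = 10, 40            # box dimensions in ion skin depths
print(skin_ratio)                 # 4.0
print(L_xy_i * skin_ratio)        # 40.0 electron skin depths
print(L_z_i * skin_ratio)         # 160.0 electron skin depths
```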
\begin{figure*}[!t]
\begin{center}
\epsfig{figure=contour_small.eps,width=\textwidth}
\caption{
The left hand side panel shows the longitudinal electron current
density through a transverse cut at $z=100$, with a small inset showing
the ion current in the same plane. The right hand side panel shows
the ion current at $z=600=30\delta_i$, with the small inset now instead showing
the electron current. The arrows represent the transverse magnetic field. Both panels are from time $t = 1200$.}
\label{fig:Slice}
\end{center}
\end{figure*}
Two counter-streaming -- initially
quasi-neutral and cold -- plasma populations are simulated. At the two-stream interface (smoothed around $z=80$)
a plasma ($z<80$) streaming in the positive z-direction, with a
bulk Lorentz factor $\Gamma=3$, hits another plasma ($z\ge80$) at rest in
our reference frame. The latter plasma is denser than the former by a factor of 3.
Experiments have been run with both initially sharp and initially smooth
transitions, with essentially the same results.
The long simulation time gradually allows the shock to converge towards
self-consistent jump conditions.
Periodic boundaries are imposed in
the $x$-- and $y$--directions, while the boundaries at $z=0$ and $z=800$ are open,
with layers absorbing transverse electromagnetic waves. Inflow
conditions at $z=0$ are fixed, with incoming particles supplied at a
constant rate and with uniform speed. At $z=800$ there is free outflow of particles.
The maximum experiment duration is 480 $\omega_{pe}^{-1}$ (where $\omega_{pe}$ is the electron plasma frequency),
sufficient for propagating $\Gamma \approx 3$ particles 2.8 times through the box.
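The quoted 2.8 crossings follow directly from the bulk speed of a $\Gamma=3$ particle and the box length $L_z\sim160\,\delta_e$, using $c/\omega_{pe}=\delta_e$; a quick check:

```python
import math

Gamma = 3.0
beta = math.sqrt(1.0 - 1.0 / Gamma**2)  # bulk speed in units of c
duration = 480.0                        # in omega_pe^-1, so c*t = 480 delta_e
L_z = 160.0                             # box length in electron skin depths

crossings = duration * beta / L_z
print(round(crossings, 1))              # ~2.8
```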
\section{Results and Discussions}
The extended size and duration of these experiments make it possible
to follow the two-stream instability through several stages of development;
first exponential growth, then non-linear saturation, followed by pattern growth
and downstream advection. We identify the mechanisms
responsible for these stages below.
\subsection{Magnetic field generation, pattern growth \\ and field transport}
\label{field_generation}
Encountering the shock front the
incoming electrons are rapidly (being lighter than the ions) deflected by
field fluctuations growing due to the two-stream instability \citep{1999ApJ...526..697M}.
The initial perturbations grow
non-linearly as the deflected electrons collect first into caustic
surfaces and then into current channels (Fig.~\ref{fig:Slice}). By symmetry, both
streaming and rest-frame electrons are deflected.
In accordance with Ampere's law the current channels
are surrounded by approximately cylindrical magnetic fields
(illustrated by arrows in Fig.~\ref{fig:Slice}), causing
mutual attraction between the current channels. The current
channels thus merge in a race where larger electron
channels consume smaller, neighbouring channels.
In this manner, the transverse magnetic field
grows in strength and scale downstream. This continues until
the fields grow strong enough to deflect the
much heavier ions into the magnetic voids between
the electron channels. The ion channels are then subjected to
the same growth mechanism as the electrons. When ion channels
grow sufficiently powerful, they begin to experience Debye shielding by the
electrons, which by then have been significantly heated by scattering
on the increasing electromagnetic field structures. The two electron
populations, initially separated in $\gamma{\bf{v}}$-space, merge
to a single population in approximately $20\delta_e$ ($z=80$--$200$)
as seen in Fig.~\ref{fig:acc}. The same trend is seen for the ions -- albeit
the merging rate might be significantly slower than predicted by
extrapolating with $m_i/m_e$, since Debye shielding stabilises the
ion channels.
\begin{figure}[!t]
\begin{center}
\epsfig{figure=jez_jiz.eps,width=\textwidth}
\caption{
Electron (top) and ion (bottom) currents, averaged over the $x$-direction, at time
$t=1200$.
}
\label{fig:jiz}
\end{center}
\end{figure}
\begin{figure}[!ht]
\begin{center}
\epsfig{figure=B_power.eps,width=\textwidth}
\caption{Power spectrum of ${\mathbf B}_{\perp}$ for $z = 250$ at different times.}
\label{fig:power}
\end{center}
\end{figure}
The Debye shielding quenches the electron channels, while
at the same time supporting the ion-channels; the large
random velocities of the electron population allow the concentrated
ion channels to keep sustaining strong magnetic fields.
Fig.~\ref{fig:Slice} shows the highly concentrated ion currents, the more diffuse
-- and shielding -- electron currents, and the resulting magnetic field.
The electron and ion channels are further illustrated in Fig.~\ref{fig:jiz}.
Note the limited $z$-extent of the electron
current channels, while the ion current channels extend throughout
the length of the box, merging to form larger scales downstream.
Because of the longitudinal current channels the magnetic field is
predominantly transversal; we find $|B_z|/|B_{tot}| \sim 10^{-1} - 10^{-2}$.
Figure \ref{fig:power} shows the temporal development of the
transverse magnetic field scales around $z=250$.
The power spectra follow power-laws,
with the largest scales growing with time.
The dominant scales at these $z$ are of the order $\delta_i$
at early times. Later they become comparable to $L_{x,y}$.
Figure \ref{fig:epsb}
captures this scaling behaviour as a function of depth for $t=2400$.
\begin{figure}[!t]
\begin{center}
\epsfig{figure=epsilon_power_B.eps,width=\textwidth}
\caption{Relative electromagnetic energy density $\epsilon_{B}$.
The contour colour plot shows the power in the transverse magnetic
field through the box distributed on spatial Fourier modes at $t=2400$,
with the dotted line marking the wavenumber with maximum power.
Superposed is the spatial distribution of $\epsilon_{B}$, averaged across the beam,
at $t=2320$ (dashed-dotted) and $t=2400$ (full drawn), highlighting how EM-fields
are advected down through the box.
}
\label{fig:epsb}
\end{center}
\end{figure}
The time evolutions of the electric and magnetic field energies are shown in
Fig.~\ref{fig:B_energy}. Seeded by fluctuations in the fields, mass
and charge density, the two-stream instability initially grows super-linearly
($t=80-100$), reflecting approximate exponential growth in a small sub-volume. Subsequently the
total magnetic energy grows more linearly, reflecting essentially the
increasing volume filling factor as the non-linearly saturated magnetic
field structures are advected downstream.
\begin{figure}[!t]
\begin{center}
\epsfig{figure=field_energy.eps,width=\textwidth}
\caption{
Total magnetic (full drawn) and electric (dashed) energy
in the box as a function of time. The inset shows a log-log
plot of the same data.
}
\label{fig:B_energy}
\end{center}
\end{figure}
At $t\approx 1100$ the slope drops off, due to advection of the generated
fields out of the box. The continued slow growth, for $t > 1100$, reflects
the increase of the pattern size with time (cf.\ Fig.~\ref{fig:power}).
A larger pattern size
corresponds, on the average, to a larger mean magnetic energy, since the
total electric current is split up into fewer but stronger ion current
channels.
The magnetic energy scales with
the square of the electric current in each channel, which in turn grows in
inverse proportion to the number of current channels. The net
effect is that the mean magnetic energy increases accordingly.
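This scaling argument can be made explicit: if a fixed total current $I$ is carried by $N$ identical channels, the magnetic energy per channel scales as $(I/N)^2$ and the total as $N\,(I/N)^2=I^2/N$, so each generation of pairwise mergers doubles the magnetic energy. A toy illustration (the normalisation is arbitrary):

```python
def total_magnetic_energy(I_total, n_channels):
    """Toy scaling: N channels, each carrying I/N, with energy per
    channel proportional to (I/N)**2. Arbitrary normalisation; it only
    illustrates the 1/N growth of magnetic energy during merging."""
    per_channel = I_total / n_channels
    return n_channels * per_channel**2

for n in (8, 4, 2, 1):  # channels merging pairwise, generation by generation
    print(n, total_magnetic_energy(1.0, n))
# energy doubles at each merging generation: 0.125, 0.25, 0.5, 1.0
```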
The magnetic energy density keeps growing throughout our experiment,
even though the duration of the experiment (480 $\omega_{pe}^{-1}$) significantly
exceeds the particle crossing time, and also exceeds the advection time of the
magnetic field structures through the box. This is in contrast to the results
reported by Silva et al.~\cite{2003ApJ...596L.121S},
where the magnetic energy density drops back after about 10-30 $\omega_{pe}^{-1}$.
It is indeed obvious from the preceding discussion that the ion-electron
asymmetry is essential for the survival of the current channels.
From the requirement that the total plasma momentum should be conserved,
the (electro)magnetic field produced by the two-stream instability
acquires part of the z-momentum lost by the two-stream population
in the shock; this introduces the possibility that magnetic field structures
created in the shock migrate downstream of the shock and thus
carry away some of the momentum impinging on the shock.
Our experiments show that this does indeed happen;
the continuous injection of momentum transports the generated
field structures downstream at an accelerated advection speed.
The dragging of field structures through the dense plasma acts to transfer momentum between
the in-streaming and the shocked plasmas.
\subsection{Thermalisation and plasma heating}
At late times the entering electrons are effectively
scattered and thermalised: The magnetic field isotropises the velocity distribution
whereas the electric field generated by the $e^{-}$--$p$ charge
separation acts to thermalise the populations.
Figure \ref{fig:acc} shows that this happens over the $\sim$ 20 electron skin
depths from around $z=80$ -- $200$.
The ions are expected to thermalise as well, given sufficient space and time. This
leaves the massive ion bulk momentum as a vast energy reservoir for
further electron heating and acceleration. As also seen in Fig.~\ref{fig:acc}, the ion
beams stay clearly separated in phase space, and are only slowly broadened (and
heated).
\begin{figure}[!t]
\begin{center}
\epsfig{figure=Zaccel-2400b.eps,width=\textwidth}
\caption{Thermalisation and longitudinal acceleration,
illustrated by scatter plots of the electron (orange) and ion (blue)
populations.
Note the back-scattered electron population ($v_z\gamma(v) < 0$).
}
\label{fig:acc}
\end{center}
\end{figure}
We do not see indications of a super-thermal tail in the heated electron
distributions, and there is thus no sign of second order Fermi acceleration in
the experiment presented in this chapter.
\cite{2003ApJ...595..555N} and \cite{2003ApJ...596L.121S} reported acceleration of particles
in experiments similar to the current experiment, except for more limited
sizes and durations, and the use of an $e^-e^+$ plasma \citep{2003ApJ...596L.121S}.
On closer examination of the published results it appears that there is no
actual disagreement regarding the absence of accelerated particles:
\cite{2003ApJ...595..555N} refer to transversal velocities of the order of $0.2
c$ (their Fig.\ 3b), at a time where our experiment shows similar
transversal velocities (cf.\ Fig.~\ref{fig:acc})
that later develop a purely thermal spectrum. \cite{2003ApJ...596L.121S} refer to
transversal velocity amplitudes up to about $0.8 c$ (their Fig.\ 4), or $v\gamma\sim 2$,
with a shape of the distribution function that appears to be compatible with thermal.
In comparison, the electron distribution illustrated by the scatter plot in
Fig.~\ref{fig:acc}
covers a similar interval of $v\gamma$, with distribution functions that are
close to Lorentz boosted relativistic Maxwellians (see
App.~\ref{chap:maxwell} for a discussion of Lorentz boosted thermal profiles).
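For reference, the relativistic Maxwellian against which these distributions are compared is the Maxwell--J\"uttner distribution, which in the rest frame of a population with dimensionless temperature $\theta=k_BT/mc^2$ takes the standard form
\begin{equation}
f(\gamma)\,d\gamma
=\frac{\gamma^{2}\beta}{\theta\,K_{2}(1/\theta)}\,
e^{-\gamma/\theta}\,d\gamma ,
\qquad \beta=\sqrt{1-\gamma^{-2}} ,
\end{equation}
where $K_{2}$ is the modified Bessel function of the second kind; the boosted profiles of App.~\ref{chap:maxwell} follow by Lorentz transforming this distribution along the streaming direction.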
Thus, in the experiment reported on in this chapter there is
no compelling evidence for non-thermal particle acceleration.
Thermalisation is a more likely cause of the increases in transversal velocities.
Frederiksen et al.~\cite{bib:frederiksen2002} reported evidence for
particle acceleration, with electron gammas up to $\sim100$, in experiments
with an external magnetic field present in the up-stream plasma.
This is indeed
a more promising scenario for particle acceleration experiments
(although in the experiments by \cite{2003ApJ...595..555N} results with an
external magnetic field were similar to those without).
Figure \ref{fig:acc} shows the presence of a population of
back-scattered electrons ($v_z\gamma < 0$). In the presence of
an external magnetic field in the in-streaming plasma,
this possibly facilitates Fermi acceleration in the shock.
\section{Conclusions}
The experiment reported upon in this chapter illustrates a number of
fundamental properties of relativistic, collisionless shocks:
1.
Even in the absence of a magnetic field in the up-stream plasma,
a small scale, fluctuating, and predominantly transversal magnetic field
is unavoidably generated by a two-stream instability reminiscent of the
Weibel-instability. In the current experiment the magnetic energy density
reaches a few percent of the energy density of the in-coming beam.
2.
In the case of an $e^-p$ plasma the electrons are rapidly thermalised, while
the ions form current channels that are the sources of deeply
penetrating magnetic field structures. The channels merge in the downstream
direction, with a corresponding increase of the average
magnetic energy with shock depth. This is expected
to continue as long as a surplus of bulk relative momentum remains in the
counter-streaming plasmas.
3.
The generated magnetic field patterns are advected downstream at speeds
intermediate between those of the streaming and rest-frame plasmas.
The electromagnetic field structures
thus provide scattering centres that interact with both the fast, in-coming
plasma, and with the plasma that is initially at rest. As a result the
electron populations of both components quickly thermalise and form
a single, Lorentz-boosted thermal electron population. The two ion populations
merge much more slowly, with only gradually increasing ion temperatures.
4. The observed strong turbulence in the field structures at the shocked streaming interface
provides a promising environment for particle acceleration.
We emphasise that quantification of the interdependence and development of
$\epsilon_U$ and $\epsilon_B$ is accessible by means of such experiments as reported
upon here.
Rather than devising abstract scalar parameters $\epsilon_B$
and $\epsilon_U$, that may be expected to depend on shock depth, media densities
etc., a better approach is to compute synthetic radiation spectra directly from
the models, and then apply scaling laws to predict what would be observed from
corresponding, real supernova remnants and Gamma-Ray Burst afterglow shocks.
\chapter{Non--Fermi Power law Acceleration in
Astrophysical Plasma Shocks}\label{chap:acc}
Collisionless plasma shock theory, which applies for example
to the afterglow of gamma ray
bursts, still contains key issues that are poorly understood.
In this chapter I discuss the results of a numerical study of
charged particle dynamics in a highly
relativistic collisionless
shock using $\sim 10^9$ particles, first
published in Hededal et al.~\cite{bib:hededal2004}. We find a power
law distribution of accelerated electrons, which upon detailed
investigation turns out to originate
from an acceleration mechanism that is decidedly
different from Fermi acceleration.
Electrons are accelerated by strong filamentation
instabilities in the shocked interpenetrating plasmas and coincide
spatially with the
powerlaw distributed current filamentary structures. These structures are an
inevitable consequence
of the now well established Weibel--like two--stream instability that operates
in relativistic collisionless shocks.
The electrons are accelerated and decelerated instantaneously and
locally; a scenario that differs qualitatively from recursive
acceleration mechanisms such as Fermi acceleration.
The slopes of the electron distribution powerlaws are in concordance with the
particle powerlaw spectra inferred from observed afterglow synchrotron radiation
in gamma ray bursts, and the mechanism can possibly explain more generally the
origin of non--thermal radiation from shocked inter-- and circum--stellar regions
and from relativistic jets.
\section{Introduction}
Given the highly relativistic conditions in the outflow from
gamma ray bursts (GRBs), the mean free path for particle Coulomb
collisions in the afterglow shock is several orders of magnitude
larger than the fireball itself.
In explaining the microphysical processes that work to define the
shock, MHD becomes inadequate and collisionless plasma shock theory
stands imperative.
In particular two key issues remain, namely
the origin and nature of the magnetic field in the shocked region,
and the mechanism by which electrons are accelerated from a thermal
population to a powerlaw distribution $N(\gamma)d\gamma\propto\gamma^{-p}$.
Both ingredients are needed to explain the
observed afterglow spectra
(e.g.~\cite{2000ApJ...538L.125K, 2001ApJ...560L..49P}).
Regarding the origin of the magnetic field in the shocked region, observations
are not compatible with a compressed inter--stellar magnetic field, which would
be orders of magnitude smaller than needed \cite{1999ApJ...511..852G}.
It has been suggested that a Weibel--like two--stream instability
can generate a magnetic field in the
shocked region (see Chapter \ref{chap:field}, and
Medvedev \& Loeb \cite{1999ApJ...526..697M};
Frederiksen et al.~\cite{bib:frederiksen2002};
Nishikawa et al.~\cite{2003ApJ...595..555N};
Silva et al.~\cite{2003ApJ...596L.121S}).
Computer experiments presented in Chapter \ref{chap:field} and
\cite{bib:frederiksen2004} showed that the
nonlinear stage of a two--stream instability induces a magnetic field
{\it in situ} with an energy content of a
few percent of the equipartition value, consistent with
that required by observations.
Fermi acceleration \cite{1949PhRv...75.1169F} has, so far,
been widely accepted as the mechanism that provides the inferred electron
acceleration.
It has been employed extensively in Monte Carlo simulations
(e.g.~\cite{bib:Niemiec} and references therein),
where it operates in conjunction with certain
assumptions about the scattering of particles
and the structure of the magnetic field.
The mechanism has, however,
not been conclusively demonstrated to occur in
{\em ab initio} particle simulations.
As pointed out by Niemiec \& Ostrowski \cite{bib:Niemiec},
further significant advance in the study of relativistic shock
particle acceleration is
unlikely without understanding the detailed microphysics of
collisionless shocks. Also,
recently Baring \& Braby \cite{bib:baring} found that
particle distribution functions (PDFs)
inferred from GRB observations are inconsistent with standard
acceleration mechanisms such as diffusive Fermi acceleration.
In this chapter we study {\em ab initio} the particle dynamics
in a collisionless shock with bulk Lorentz factor $\Gamma=15$.
We find a new particle
acceleration mechanism, which is presented in section \ref{sec:4.2}. Detailed
numerical results are presented
and interpreted in section \ref{sec:4.3}, while section \ref{sec:4.4} contains the conclusions.
\begin{figure*}[!th]
\begin{center}
\epsfig{figure=f1.eps,width=\textwidth}
\caption{(A) Ray traced electron paths (red) and ion current density (blue).
The colours of the electron paths reflect their four velocity according
to the colour table in inset (B), and the shadows correspond to the $x$ and $y$
projections of the paths; the ion current density is shown with blue colours
according to the same inset. The inset also
shows the ion current density (blue) integrated along the
$x$ axis with the spatial distribution of fast
moving electrons (red) overplotted.}
\label{fig:acceleration}
\end{center}
\end{figure*}
\section{A New Acceleration Mechanism}\label{sec:4.2}
A series of numerical experiments has been performed in which collisionless
shocks are created by two colliding plasma populations. These experiments
are described in more detail below, but a common feature is
that the electron PDF has a high energy tail which is powerlaw distributed. By
carefully examining the paths of representative accelerated electrons,
tracing them backwards and forwards in time, it has been possible to
identify the mechanism responsible for their acceleration.
The acceleration mechanism, which was presented for the first
time in \cite{bib:hededal2004}, works as follows:
When two non--magnetised collisionless plasma populations interpenetrate,
current channels are formed
through a Weibel--like two--stream instability
(see Chapter \ref{chap:field}; Medvedev \& Loeb \cite{1999ApJ...526..697M};
Frederiksen et al.~\cite{bib:frederiksen2002};
Nishikawa et al.~\cite{2003ApJ...595..555N};
Silva et al.~\cite{2003ApJ...596L.121S}).
In the nonlinear stage of evolution of this instability, ion current
channels merge
into increasingly stronger patterns, while electrons act to
Debye shield these channels, as shown in Chapter \ref{chap:field}.
Further, it was shown
that a Fourier decomposition of the transverse structure of the
ion current filaments
exhibits powerlaw behaviour, which has recently been confirmed by
Medvedev et al.~\cite{2005ApJ...618L..75M}.
At distances less than the Debye length, the ion current channels are surrounded
by transverse electric fields that accelerate the electrons toward the current
channels. However, the magnetic fields that are induced around
the current channels act to deflect the path of the accelerated electrons,
boosting them instead in the direction of the ion flow.
Since the forces at work are due to quasi--stationary fields, the acceleration is a
simple consequence of potential energy being converted into
kinetic energy. The electrons are therefore decelerated again
when leaving the current channel, and reach their maximal velocities
at the centres of the current channels. Hence, as illustrated by
Fig.~\ref{fig:acceleration}B, the spatial distribution
of the high energy electrons is a direct match to the ion current channels and
the properties of the accelerated electrons depend
primarily on the local conditions in the plasma.
One might argue that the near--potential behaviour of the electrons,
where they essentially must lose most of their energy to escape from
the current channels, would make the mechanism uninteresting as an
acceleration mechanism, since fast electrons cannot easily escape.
However, this feature may instead be a major advantage: it means that
energy losses due to escape are small, and that the electrons
remain trapped long enough to have time to lose their energy
via a combination of bremsstrahlung and synchrotron or jitter radiation.
We observe that only a very small fraction of the electrons manage to escape,
while still retaining most of their kinetic energy.
This happens mainly at sudden bends or mergers of
the ion channels, where the electron orbits cannot be described in terms of
a particle moving in a static electromagnetic field.
\begin{figure}[!t]
\begin{center}
\epsfig{figure=f2.eps,width=0.8\textwidth}
\caption{An ion current channel surrounded by an electric and a magnetic field.
Electrons in the vicinity of the current channel are thus subject to
a Lorentz force with both an electric and a magnetic component,
working together to accelerate the
electrons along the ion flow. Crossing the centre of the channel the process
reverses, leading to an oscillating movement along the channel.}
\label{fig:current_acc}
\end{center}
\end{figure}
To analyse the acceleration scenario quantitatively
we construct a toy model. It has been sketched in Fig.~\ref{fig:current_acc}.
We assume that the ion current channel has radius $R$, that
the total charge inside
the cylinder per unit length is $\lambda$ and the ions all stream with velocity
$u$ and Lorentz factor $\Gamma$ in the laboratory rest frame
(see Fig.~\ref{fig:current_acc} and inset for definition of rest frames).
Consider an electron
with charge $-q$ and mass $m$ at a distance $r$ from the centre of the channel,
initially having no velocity components perpendicular to the cylinder,
and four velocity $\gamma_0 v_{z,0}$ parallel to the cylinder, and disregard
for the moment any other shielding electrons.
By analysing everything in the ion channel rest frame the problem reduces
to electrostatics and it is possible to analytically calculate the change in
four velocity of the electron when it reaches the surface of the cylinder.
In the ion channel rest frame the electron has the
Lorentz factor and four velocity
\begin{align}
\label{eq:initgamma}
\gamma'_0 &= \Gamma \gamma_0(1 - u v_{z,0})\,, \\
\label{eq:initvz}
\gamma'_0 v'_{z,0} &= \Gamma \gamma_0 (v_{z,0}-u)\,,
\end{align}
where quantities in the ion channel rest frame are denoted with a prime.
In the laboratory frame the ions move with velocity $u$ and are hence
Lorentz contracted; in their own rest frame the line charge
density is therefore reduced by a factor of $\Gamma$: $\lambda' = \lambda/\Gamma$.
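As a quick consistency check, Eqs.~(\ref{eq:initgamma}) and (\ref{eq:initvz}) can be verified numerically: for a particle with no perpendicular velocity, the boosted four velocity must still satisfy the mass-shell relation $\gamma'^2 - (\gamma' v'_z)^2 = 1$ (velocities in units of $c$). A minimal sketch; the initial electron velocity below is illustrative only:

```python
import math

def boost(gamma0, vz0, u):
    """Transform (gamma, gamma*v_z) into the frame moving with velocity u
    along z, as in Eqs. (initgamma)-(initvz); velocities in units of c."""
    Gamma = 1.0 / math.sqrt(1.0 - u * u)
    gamma_p = Gamma * gamma0 * (1.0 - u * vz0)   # Eq. (initgamma)
    gamma_p_vzp = Gamma * gamma0 * (vz0 - u)     # Eq. (initvz)
    return gamma_p, gamma_p_vzp

u = math.sqrt(1.0 - 1.0 / 15.0**2)   # ion bulk velocity for Gamma = 15
vz0 = 0.5                            # illustrative initial electron velocity
gamma0 = 1.0 / math.sqrt(1.0 - vz0**2)

gp, gpvp = boost(gamma0, vz0, u)
# The boost preserves the mass-shell relation:
assert abs(gp**2 - gpvp**2 - 1.0) < 1e-9
```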
The electron will be attracted to the cylinder and will gain downward momentum in
the $r$--direction. This is simply a free fall in the electric potential and
the final velocity, when the electron reaches the outer edge of the cylinder,
can be found by calculating the change in potential energy
\begin{equation}\label{eq:potenergy}
\Delta {E'}_{pot}^{r\rightarrow R} =
-\int^{r}_{R} q\vec{E}' \cdot \vec{dr}
= -\frac{q\lambda'}{2\pi\epsilon_0} \ln(r/R)\,.
\end{equation}
The change in the Lorentz boost $\gamma'$ is then
$m c^2 \Delta\gamma' = \Delta E'_{kin} = - \Delta E'_{pot}$.
The electric force works only along the $r$--axis, and the four velocity
along the $z$--axis of the electron is conserved in the ion channel rest frame.
Exploiting this we can calculate not only the total change in energy
but also the change in the different velocity components.
Returning to the laboratory rest frame we find
\begin{align}\label{eq:acc}
\Delta\gamma_{electron} &= \frac{q \lambda}{2 \pi m c^2 \epsilon_0}
\ln \frac{r}{R}\,, \\
\Delta(\gamma v_z )_{electron} &= u \Delta\gamma_{electron}\,.
\end{align}
The change in the Lorentz boost is directly proportional to
the total charge inside the channel and inversely
proportional to the electron mass. In reality the Debye shielding
reduces the electric field further away from the ion channel, so the estimate
above is only valid for distances smaller than a Debye length.
Inside the ion channel the electron is accelerated as well, but the amount
depends on the detailed charge distribution of the ions; one should
also remember that in general the electrons do have velocity
components perpendicular to the channel. The above estimate can therefore
be understood as an upper limit to the observed acceleration.
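The scalings in \Eq{eq:acc}, linearity in the channel charge $\lambda$ and inverse proportionality to the electron mass, are straightforward to verify numerically. The sketch below uses SI constants; the line charge value passed in is purely illustrative and is not taken from the experiment:

```python
import math

# Physical constants (SI)
q = 1.602176634e-19      # elementary charge [C]
m_e = 9.1093837015e-31   # electron mass [kg]
c = 2.99792458e8         # speed of light [m/s]
eps0 = 8.8541878128e-12  # vacuum permittivity [F/m]

def delta_gamma(lmbda, r_over_R, m=m_e):
    """Change in Lorentz factor for an electron falling from radius r to
    the channel surface R, for line charge density lmbda (Eq. acc)."""
    return q * lmbda / (2.0 * math.pi * m * c**2 * eps0) * math.log(r_over_R)

def delta_gamma_vz(lmbda, r_over_R, u, m=m_e):
    """Corresponding change in the parallel four velocity, u in units of c."""
    return u * delta_gamma(lmbda, r_over_R, m)

# Illustrative (hypothetical) line charge of 1e-12 C/m, ln(r/R) = 1:
dg = delta_gamma(1e-12, math.e)
# Doubling lambda doubles the gain; doubling the mass halves it.
assert abs(delta_gamma(2e-12, math.e) - 2.0 * dg) < 1e-9 * dg
assert abs(delta_gamma(1e-12, math.e, m=2.0 * m_e) - 0.5 * dg) < 1e-9 * dg
```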
\section{Computer Experiments}\label{sec:4.3}
The experiments were performed with the
three-dimensional relativistic kinetic and electromagnetic particle--in--cell code
described briefly in Chapter \ref{chap:field} and more thoroughly in
\cite{bib:trier}. The code works from first principles,
by solving Maxwell's equations for
the electromagnetic fields and solving the Lorentz force equation of motion
for the particles.
Two colliding plasma populations are set up in the rest frame of one
of the populations (downstream, e.g. a jet). A less dense population (upstream,
e.g. the ISM) is continuously injected at the left boundary with a
relativistic velocity corresponding
to a Lorentz factor $\Gamma=15$. The two populations initially differ in
density by a factor of 3.
We use a computational box with $125\times125\times2000$ grid points and a
total of $8\times10^8$ particles. The ion rest frame plasma frequency in
the downstream
medium is $\omega_{pi}=0.075$, rendering the box 150 ion skin depths long.
The electron rest frame plasma frequency is $\omega_{pe}=0.3$ in order to resolve
also the microphysics of the electrons.
Hence the ion-to-electron mass ratio is $m_i/m_e = 16$. Other mass ratios and
plasma frequencies were used in complementary experiments.
Initially, both plasma populations are unmagnetised.
The maximum experiment duration is $t_{max} = 340\,\omega_{pi}^{-1}$, which
is sufficient for the continuously injected upstream plasma
($\Gamma = 15$, $v\sim c$)
to travel 2.3 times the length of the box.
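The traversal factor quoted above follows directly from the stated numbers: with $v\sim c$ the injected plasma covers $c\,t_{max} = 340$ ion skin depths, while the box is 150 skin depths long. A one-line check, using only numbers quoted in the text:

```python
# Units: lengths in ion skin depths (c / omega_pi), times in 1 / omega_pi.
t_max = 340.0          # experiment duration [1/omega_pi]
box_length = 150.0     # box length [ion skin depths]

distance = 1.0 * t_max             # v ~ c, so distance = t_max skin depths
crossings = distance / box_length  # ~ 2.27, i.e. "2.3 times" as stated
print(round(crossings, 1))         # prints 2.3
```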
The extended size and duration of these experiments enable
observations of the streaming instabilities and concurrent particle
acceleration through several stages of development \citep{bib:frederiksen2004}.
Momentum losses to radiation (cooling) are presently not included in the
model. We have, however, verified that none of the accelerated particles in the
experiment would be subject to significant synchrotron
cooling. The emitted radiation may thus be
expected to accurately reflect the distribution of accelerated electrons.
When comparing numerical data with \Eq{eq:acc}
we take $r$ to be the radius where Debye shielding starts to be
important. Using a cross section
approximately in the middle of \fig{fig:acceleration}
we find $\Delta(\gamma v_z)_{electron} = 58 \ln (r/R)$. It is hard to determine
exactly when Debye shielding becomes effective, but looking at electron paths
and the profile of the electric field we estimate that
$\ln (r/R) \approx 1.3$. Consequently, according to \Eq{eq:acc}, the
maximally attainable four velocity in this experiment is in the neighbourhood of
$(\gamma v_z)_{max}=75$. This is in good agreement with the results from our
experiments, where the maximum four velocity is $(\gamma v_z)_{max}\simeq80$.
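The estimate simply combines the prefactor read off the simulation cross section with the estimated logarithm; a minimal arithmetic check:

```python
# Maximum four-velocity estimate from Eq. (acc), using values read
# off the simulation: 58 is the measured prefactor u*q*lambda/(2*pi*m*c^2*eps0).
prefactor = 58.0     # from the cross section in the middle of the box
ln_r_over_R = 1.3    # estimated from electron paths and the E-field profile

gamma_vz_max = prefactor * ln_r_over_R
print(round(gamma_vz_max))   # prints 75; the experiment gives ~ 80
```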
\begin{figure}[!t]
\begin{center}
\epsfig{figure=f3.eps,width=\textwidth}
\caption{A scatter plot of the local ion current density $J_{Ion}$ versus the
four velocity of the electrons in a region downstream of the shock.
Overplotted are a line (thin) showing the average four velocity as a function
of $J_{Ion}$, and a straight line fit (thick).
Because 'cold' trapped thermal electrons (indicated
with the ellipse) exist inside the ion current channels, they
lower the average four velocity at high $J_{Ion}$. If the scatter plot
were cleaned, statistically removing all thermal electrons, a much tighter
relation would emerge. Such cleaning, though, is rather delicate and
could introduce biases of its own. The trend is clearly present even in the
'raw' data.}
\label{fig:jvg}
\end{center}
\end{figure}
The theoretical model does, of course, not cover all details of the experiment.
For example, in general the
electrons also have velocity components parallel to the magnetic field; instead of
making one dimensional harmonic oscillations in the plane perpendicular
to the current
channel, the electrons describe complicated ellipsoidal paths.
Fig.~\ref{fig:acceleration}A shows the paths of two electrons in the vicinity of
an ion channel. But, overall, the electrons behave as expected from the model
considerations. Consequently, high speed electrons are tightly coupled
to the ion channels, as clearly illustrated by Fig.~\ref{fig:acceleration}B.
Figure \ref{fig:pdfpower} shows that the electrons are powerlaw distributed at
high energies, with index $p=2.7$.
The electrons at the high gamma cut-off are found where the ion current peaks,
as may be seen from Fig.~\ref{fig:jvg}. The maximum ion current is limited
by the size of our box; larger values would probably be found if the
merging of current channels could be followed further down stream.
The PDF is not isotropic in any frame of reference due to the high
anisotropy of the Weibel generated electromagnetic field.
The powerlaw in the electron PDF is dominant for $10<\gamma<30$.
Likewise, a powerlaw dominates the ion current channel strength, $J_{Ion}$, for
$100<J_{Ion}<1000$ (inset).
A relation between the powerlaw distributions of these two quantities
is provided by Fig.~\ref{fig:jvg}: the average four velocity is proportional
(straight line fit) to a power of the local ion current
density on the relevant intervals,
$10<\gamma<30$ and $100<J_{Ion}<1000$. Their kinship stems
from the fact that acceleration is
local. $J_{Ion}$ has a powerlaw tail and its potential
drives the high energy distribution of
the electrons according to Eq.~(\ref{eq:acc}), thus
forming a powerlaw distributed electron PDF.
Measuring the rate at which the in--streaming ions transfer momentum to the
ion population initially at rest allows us to
make a crude estimate of the length scales
over which the two--stream instability in the current
experiment would saturate due to ion
thermalisation. A reasonable estimate appears to be approximately 10 times
the length of the current computational box, or
about 1500 ion skin depths. Assuming that
the shock propagates in an interstellar environment with a plasma density of
$\sim 10^6$ m$^{-3}$ we may calculate a typical
ion skin depth. Comparing this value with the upstream
ion skin depth from our experiments,
we find that the computational box corresponds to
a scale of the order of $10^7$ m,
or equivalently that the collisionless shock transition
region of the current experiment corresponds to about $10^8$ m.
For an ion with a Lorentz factor $\gamma=15$ this length corresponds roughly
to 40 ion gyro radii in the average strength of the generated magnetic field.
But it should be stressed that the in--streaming ions actually do not
really gyrate
since they mainly travel inside the ion current channels where the magnetic
field, by symmetry, is close to zero. Also, the strong electromagnetic fields
generated by the Weibel instability and the non-thermal electron acceleration,
which is crucial for the interpretation of GRB afterglow observations,
emphasise the shortcomings of MHD in the context of collisionless shocks.
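The order-of-magnitude scalings above can be reproduced from the stated ISM density, using the non-relativistic ion plasma frequency for protons (a simplification; $O(1)$ relativistic corrections do not change the orders of magnitude):

```python
import math

# Physical constants (SI)
e = 1.602176634e-19      # elementary charge [C]
m_p = 1.67262192e-27     # proton mass [kg]
c = 2.99792458e8         # speed of light [m/s]
eps0 = 8.8541878128e-12  # vacuum permittivity [F/m]

n = 1e6   # interstellar plasma density [m^-3], as assumed in the text

omega_pi = math.sqrt(n * e**2 / (eps0 * m_p))   # ion plasma frequency [rad/s]
skin_depth = c / omega_pi                        # ion skin depth [m]

box = 150 * skin_depth          # computational box: 150 ion skin depths
transition = 1500 * skin_depth  # estimated shock transition region

print(f"{skin_depth:.1e}")   # ~ 2e5 m
print(f"{box:.1e}")          # ~ 3e7 m, i.e. of order 10^7 m
print(f"{transition:.1e}")   # ~ 3e8 m, i.e. about 10^8 m
```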
In the computer experiments presented here we have used a mass ratio
$m_i/m_e=16$ in order to resolve the dynamics of
both species. \Eq{eq:acc} suggests that
reducing the electron mass to $1/1836\,m_i$ will increase the acceleration of
the electrons, but the gained energy is independent of the mass (see
\Eq{eq:potenergy}). In this experiment we observe electrons with energies of
approximately 5 GeV.
Even further acceleration may occur as ion channels keep growing down stream,
outside of our computational box.
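The quoted energy is consistent with the measured $(\gamma v_z)_{max}\simeq 80$ and the reduced mass ratio, assuming the simulated ions carry the proton mass (an assumption; the chapter does not state $m_i$ explicitly):

```python
# Electron energy check: gamma ~ 80 with the reduced mass ratio
# m_i/m_e = 16, assuming m_i = m_p (an assumption, not stated in the text).
m_p_c2_MeV = 938.272           # proton rest energy [MeV]
m_e_sim_MeV = m_p_c2_MeV / 16  # simulated electron rest energy [MeV]
gamma_max = 80.0               # maximum four velocity ~ Lorentz factor

E_GeV = gamma_max * m_e_sim_MeV / 1e3
print(round(E_GeV, 1))   # prints 4.7, i.e. "approximately 5 GeV"
```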
The scaling estimates above depend, among other things, on plasma
densities, the bulk Lorentz factor, and the mass ratio ($m_i/m_e$).
A parameter study is necessary to explore these dependencies,
but this is beyond the scope of the present chapter. We thus stress that the
extrapolations performed here are speculative and that
unresolved physics could influence the late stages
of the instability in new and interesting ways as discussed in the following
chapter.
When the in--streaming ions are fully thermalised they can no longer support the
magnetic field structures. Thus one might speculate that the radiating region of
the GRB afterglow is actually very thin, as
suggested by Rossi \& Rees \cite{2003MNRAS.339..881R}.
Further, traditional synchrotron radiation theory does not apply to the
intermittent magnetic field generated by the two--stream instability, since the
electron gyro radii are often larger than the scales of
the magnetic field structures.
We emphasise the importance of the theory of
jitter radiation for understanding the generated radiation
\cite{2000ApJ...540..704M}.
\begin{figure}[!t]
\begin{center}
\epsfig{figure=f4.eps,width=\textwidth}
\caption{The normalised electron particle distribution function downstream of
the shock. The dot--dashed line is a powerlaw fit to the non--thermal high
energy tail, while the dashed curve is a Lorentz boosted thermal electron
population. The histogram is made from the four velocities of
electrons in a thin slice in the $z$--direction
of the computational box. The inset shows a similar
histogram for ion current density
sampled in each grid point in the same slice. The bump
in the inset is a statistical fluctuation due to a single
ion channel.
}
\label{fig:pdfpower}
\end{center}
\end{figure}
\section{Conclusions}\label{sec:4.4}
In this chapter we have proposed an acceleration mechanism for electrons in
collisionless shocks. The theoretical considerations were suggested by
particle--in--cell computer experiments, which also allowed quantitative
comparisons with the theoretical predictions. We have shown that the
non--thermal acceleration of electrons is directly related to the
ion current channels in the shock transition zone.
The results are applicable to
interactions between relativistic outflows and the interstellar medium.
Such relativistic outflows occur in GRB afterglows and in jets from
compact objects \cite{2004Natur.427..222F}. The suggested acceleration
scenario might overcome some of the problems pointed out by Baring \&
Braby \cite{bib:baring} regarding the apparent contradiction between
standard Fermi acceleration and spectral observations of GRBs.
The mechanism has important implications for the way
we understand and interpret observations of collisionless shocks:
1. The acceleration mechanism is capable of creating a powerlaw
electron distribution in a collisionless shocked region.
In the computer experiment presented here a bulk flow with
$\Gamma=15$ results in a powerlaw slope $p=2.7$ for the electron PDF.
Additional experiments will be needed to disentangle what determines
the exact value of the slope.
2. The acceleration is local; electrons
are accelerated to a powerlaw in situ.
Therefore the observed radiation field may
be tied directly to the local conditions of the plasma and could be a strong
handle on the physical processes.
3. Our results strengthen the point already made in Chapter \ref{chap:field};
that the fractions of the bulk kinetic energy that go into the electrons
and the magnetic field, $\epsilon_e$ and $\epsilon_B$ respectively, are not
free and independent parameters
of collisionless shock theory. Most likely they represent interconnected parts
of the same process.
4. In the case of a weak or no upstream magnetic field,
the Weibel--like two--stream
instability is able to provide the necessary electromagnetic
fields. We have shown here
that the collisionless shocked region is relatively thin, and we suggest that the
non--thermal radiation observed from GRB afterglows
and relativistic jets in general is emitted from such a relatively thin shell.
It is clear that the non-thermal electron acceleration, the ion current
filamentation, the magnetic field amplification/generation, and hence the
strong non-thermal radiation from the shock, are beyond the explanatory
capacity of MHD. Whether or not the relativistic MHD jump conditions
become valid on any
larger scale is not possible to decide from the simulations presented in
this chapter.
\chapter{Summary \& Conclusions}
In the past chapters of this thesis I have presented
different numerical methods, as well as applications of the methods to a
number of current problems in relativistic astrophysics.
The thesis
is logically structured into three parts, and below I would like to
summarise the most important points of the work presented.\\[1.5ex]
In the first part (Chapter \ref{chap:GrMHD})
I have presented the theoretical foundation and numerical
implementation of a new general relativistic
magnetohydrodynamics code. I have derived a new form of
the equations of motion, with global coordinates evolving the dynamical
variables from the point of view of a local observer.
When deriving the equations of motion, I have not made any assumptions
about the background metric, so the design is ready to be coupled with
methods solving the full Einstein equations.
The code has been tested on a variety of demanding problems,
and it has been demonstrated that it is able to deal with huge pressure
and density gradients.
The computer code is fully three-dimensional and parallelised and
shows a spectacular
performance on modern computer architectures exploiting up to
30\% of the theoretical peak performance.
It has been tested and verified to scale to hundreds of CPUs, making it possible
to exploit massive supercomputers at national centres
to the full extent.\\[1.5ex]
In the second part of the thesis (Chapters \ref{chap:field}--\ref{chap:global})
I have presented important results in the understanding of collisionless
shocks using a charged relativistic particle-in-cell code.
Together with Jacob Trier Frederiksen, Christian Hededal and {\AA}ke
Nordlund I have investigated the fundamental consequences of the two-stream
instability for observations of collisionless shocks in general, and the
implications for gamma ray afterglows in particular. In Chapter \ref{chap:global}
I extended our analysis and presented results on the global structure and
transition of collisionless shocks to fluid shocks.
In Chapter \ref{chap:field} we have shown that
even in the absence of a magnetic field in the up-stream plasma,
a small scale, fluctuating, and predominantly transversal magnetic field
is unavoidably generated by a two-stream instability reminiscent of the
Weibel-instability. In the current experiments the magnetic energy density
reaches a few percent of the energy density of the in-coming beam.
In Chapter \ref{chap:acc} we proposed an acceleration mechanism for electrons in
ion-electron collisionless shocks.
The acceleration mechanism is capable of creating a powerlaw
electron distribution in a collisionless shocked region.
The theoretical considerations were suggested by
particle--in--cell computer experiments, which also allowed quantitative
comparisons with the theoretical predictions. We have shown that the
non--thermal acceleration of electrons is directly related to the
ion current channels in the shock transition zone and is local in nature.
The electrons are accelerated to a powerlaw in situ.
Therefore the observed radiation field may
be tied directly to the local conditions of the plasma and could be a strong
handle on the physical processes.
To understand the impact on observations it is essential to investigate how far
down stream of the initial shock the two-stream unstable region
extends. With this in mind I have analysed, in Chapter \ref{chap:global}
the global structure of collisionless shocks.
I have presented three-dimensional experiments of colliding pair plasmas using
the particle-in-cell code, and observed the thermalisation of the plasma, due to
the collective electromagnetic field, and the formation of a macrophysical
shock structure. Comparing the results to a fluid simulation, made using
the code presented in Chapter \ref{chap:GrMHD}, with the same
initial conditions, good agreement is found, implying that the full structure
of the shock has been resolved.
I have estimated that the decay of the two-streaming region and
subsequent thermalisation happen over 50--100 \emph{electron} skin depths.
Hence, the two-stream instability in collisionless shocks comprised
purely of leptonic matter may have few direct observational consequences.
In the second part of Chapter \ref{chap:global}
I have considered the global structure of ion-electron
dominated collisionless shocks.
I have investigated the applicability of global models using
two-dimensional shocks -- just possible with current computer technology --
in the understanding of the complete three-dimensional shock structure.
It is demonstrated that caution should be
observed in generalising results from two-dimensional experiments to
three dimensions. In two dimensions the ion channels that form
due to the two-stream instability are less stable, and the heating
rate of the electrons is higher. Both factors contribute to a faster
thermalisation than what may be expected from three-dimensional experiments
in the future, and hence cause an underestimation of the extent
of the two-stream
unstable region. Nonetheless, the overall physical picture is the same, and these
differences may be taken into account.\\[1.5ex]
In the third part of the thesis (Chapter \ref{chap:photonplasma}) together
with Christian Hededal I have presented a new code under development by our
group, which will enable us to study not only charged particle dynamics, but
also the propagation of neutral particles, such as photons and neutrons,
as well as interactions between these.
The code is an extension of the current particle-in-cell code, and
also solves the full Maxwell equations, but
furthermore considers particle-particle interactions and
microphysical processes, such as scattering, pair production,
decay, and annihilation of particles.
Especially the inclusion of photons and related radiative processes is
important. In the future we will be able to extract self consistent spectra
from our numerical experiments, thereby gaining the ability to directly
compare our models with observations.\\[1.5ex]
Even though the different tools presented in this thesis
\emph{per se} are not connected,
they all revolve around the same physical problems.
In Chapter \ref{chap:global} we saw
the first example of connecting the codes, to obtain different points of view
on the same physical situation.
In conclusion, and with a look to the future, I believe that the coupling of
the GrMHD code with the new photon plasma
code yields a great potential for obtaining
realistic synthetic light curves from fluid
simulations, connecting them directly with observations.
\chapter{Introduction \& Overview}
During the last decade we have seen fundamental advances in the
observation of compact objects, active galactic nuclei,
gamma ray bursts, and other objects characterised
by their extreme physical conditions and emission of
light over the full electromagnetic spectrum. This branch
of astrophysics has aptly been named ``extreme astrophysics'',
and advances in the field are driven by the technical
development and launch of new satellites, such as
Beppo/Sax, Chandra, XMM and Swift, and the construction of
powerful ground based facilities, such as the HESS telescope and
the Auger observatory to measure X-rays and gamma rays.
Moreover the technique of combining radio telescopes
to perform interferometric observations with synthetic dishes
comparable to the entire globe has played an important role for
resolving the inner engines of active galactic nuclei (AGN).
In 1997 the first afterglow
from a gamma ray burst (GRB) was observed, placing GRBs firmly
at cosmological distances and earning them the
title of the most violent explosions in the Universe.
Very high energy gamma rays have also been observed from
AGNs, and with the increasing resolution of high frequency radio
interferometers, we will soon be able to resolve the
launching region of the jets associated with AGNs, only a
few Schwarzschild radii from the central supermassive
black hole.
In the decade to come we can foresee that two entirely
new windows to the Universe will be opened to complement the
observations of electromagnetic radiation and cosmic
rays that are made today: at the South Pole the IceCube
project will detect cosmic neutrinos generated in the cores
of supernovae and possibly in GRBs and other cataclysmic events,
while laser interferometers on the ground, such as LIGO, VIRGO
and GEO 600, together with the space interferometer LISA, will
have reached levels of sensitivity where the gravitational
waves from the coalescence of compact objects and super massive black holes
may be detected.
A decade ago cosmology was still the branch of astrophysics where
one could get along with back-of-the-envelope calculations, since
fundamental parameters such as the Hubble expansion rate, the age
and the matter content of the Universe were all quoted with error bars
as large as the numbers themselves. This is all in the past now.
The Hubble space telescope has finally determined the expansion rate.
Observations of supernovae of type Ia at moderate and high redshifts have
led to the surprising conclusion that the Universe is in fact
accelerating in its expansion. The Boomerang and Maxima balloon
missions and later the WMAP telescope have nailed down fluctuations in
the cosmic microwave background radiation (CMBR) with high precision and
determined the overall geometry of the Universe to be\ldots flat!
Euclid was right. The pieces in the cosmological puzzle are slowly falling
into place. Current and future dedicated facilities to observe
the CMBR, together with large scale galaxy redshift surveys such as the SLOAN
digital sky survey and the 2DF survey, will give strong limits on the
distribution of matter and fields in the early Universe.
It is thus fair to say that both extreme astrophysics
and cosmology, together known as relativistic
astrophysics, are in a golden age and are slowly but firmly entering the
realm of ``messy astrophysics'', where
predictions cannot be based on sketchy ideas anymore but instead
detailed physical models must be worked out, tested, and validated or falsified
by observations.
Parallel to the development in observational relativistic
astrophysics, there has been a revolution in the tools employed by theoretical
astrophysicists. The computational power has for decades been rising
exponentially, doubling every 18 months in accordance with Moore's law.
At the end of the nineties three-dimensional computer models of
astrophysical objects became affordable, and for some time computer
modelling has been indispensable in understanding the Universe.
In order to interpret observations, we have to develop theories that in
simple terms grasp the central physical mechanisms and let us
understand how fundamental parameters are related.
As the observations become more complicated, and the quality
of the data improves, so must the theories to be successful in
explaining these new details. Astronomy is different from other natural sciences,
in that we cannot perform experiments in the laboratory,
and in most cases the timescales are so long that we cannot even wait and watch
them unfold in the Universe.
In compact objects and in the early Universe many different physical processes
play important roles to shape the final picture, ranging from the large scale
fluid
dynamics, the curvature of space, the interactions in the plasma between
matter and electromagnetic fields, all the way to the microphysical
generation and scattering of the light, which, ultimately, is
observed on Earth. The computer gives us, as a complement to observations,
the ability to create models, and in contrast to the real Universe,
we can spin our models around and visualise the data in three dimensions,
instead of the projected, two-dimensional view which is the only one that
the real Universe offers.
In this sense computer models have become the virtual laboratory of
the astrophysicist.
The physical insights gained from these models are essential,
and often the complexity of the phenomena leaves us at a loss without
access to such models.
\section{A Swiss Army Knife for Relativistic Astrophysics}
In this thesis I present the application, development and implementation of
several computer codes which may be used to model relativistic astrophysics.
They span a range of scales and interactions. The GrMHD code, presented
in Chapter \ref{chap:GrMHD}, may be used to describe the
flow of matter from cosmological scales down to the scales of black holes.
The charged particle code, used in Chapters \ref{chap:field}--\ref{chap:global},
is applied to understanding the
small scale structure in collisionless shocks. Finally, the photon plasma code,
presented in Chapter \ref{chap:photonplasma}, will
enable us to study a fuller range of plasma
physics, including microphysical interactions, scatterings and the detailed
propagation of radiation.
\section{General Relativistic Magneto-Hydrodynamics}
In Chapter \ref{chap:GrMHD} I present a reformulation of the equations of
motion for general relativistic magnetohydrodynamics (GrMHD) that is
well suited for numerical purposes, and the
implementation in a three-dimensional numerical code that solves the
equations. Before
starting the implementation of the code, I carefully considered the approaches
employed in the handful of existing codes worldwide. My main idea has been
to make a conscious split between the reference frame in which we measure our
coordinates, and the reference frame in which we measure our physical variables.
The coordinate system, naturally, has to cover the whole physical domain. In the
case of compact objects, it is normal to use a coordinate system connected
to observers at infinity. But there is no a priori reason why we
should measure physical variables, such as density, velocity, and internal energy,
as seen by observers at infinity. If one measures them in a locally
defined frame, which is related to a local inertial frame, then
the physics, by the equivalence principle, becomes almost like the physics
of special relativity, for \emph{arbitrary} background space times.
All equations have been derived without placing any constraints on the metric
tensor. It is important to keep everything completely general, to allow
the code in the future to be enhanced with procedures that solve
the Einstein equations and evolve the metric tensor.
The code is based on finite difference techniques, and to handle discontinuities
we have to include some form of artificial viscosity to increase the entropy
across shock fronts. I have chosen to employ the correct physical description
of shear viscosity, to enforce energy and momentum conservation. The full
shear viscosity is very complicated, and in the general case of an arbitrary
space time it would be impractical if we did not use variables
tied to a local frame of reference.
The code has been subjected to an extensive testbed of hydrodynamic and
magnetohydrodynamic problems, and it is demonstrated that it can
handle large gradients in density, pressure and the magnetic field.
As an example Fig.~\ref{fig:tube} shows a highly relativistic
shock tube problem, with two shock waves travelling toward each other,
involving a relative pressure jump of $10^4$. The solution is shown
at different resolutions with the analytic solution overplotted.
This test is described further as problem III in Chapter \ref{chap:GrMHD}.
Moreover, as an example of the capabilities of the code, I use it in two
relevant astrophysical applications, one of them being the injection
of a thin and hot relativistic jet into an ambient medium
shown in Fig.~\ref{fig:jet-intro}.
\begin{figure}[!t]
\begin{center}
\includegraphics[width=\textwidth]{jet11_150_final.eps}
\caption{A relativistic jet. From left to right: The density, the pressure
and the total four velocity. The jet head has partly destabilised
and is creating a complex bubble of vortices in the cocoon of the jet.}
\label{fig:jet-intro}
\end{center}
\end{figure}
\begin{figure}[!t]
\begin{center}
\epsfig{figure=problemIII.eps,width=\textwidth}
\caption{A highly relativistic shock tube. The solution is shown at $t=0.2$,
just before the two shock waves collide.}
\label{fig:tube}
\end{center}
\end{figure}
I have implemented a fully three-dimensional version of the code. The code
is parallelised, has been tested on several supercomputers, and has been shown
to yield excellent performance on hundreds of CPUs.
\section{Magnetic Field Generation in Collisionless Shocks}
Chapter \ref{chap:field} was published by Frederiksen, Hededal,
Haugb{\o}lle and Nordlund \cite{bib:frederiksen2004}.
Using three-dimensional particle
simulations we report on the evolution of an ion-electron
dominated counter-streaming collisionless shock.
Our experiment initially consists of two populations: an in-streaming population
with a Lorentz boost of $\Gamma=3$ upstream of the shock interface, and
a population at rest downstream of the shock interface (see
the left panel of Fig.~\ref{fig:cc}).
It is predicted theoretically that colliding collisionless plasmas are
susceptible to the Weibel or two-stream instability.
Microscopic fluctuations in the magnetic field deflect the particles,
which in turn enhance the fluctuations, and the exponential growth of
the fluctuations results in the generation of strong current channels.
In our simulations this is confirmed and we observe the instability
develop in the shock interface.
The right panel of Fig.~\ref{fig:cc} shows the current densities at late
times. Associated with the current channels is a strong
transverse magnetic field. The magnetic field energy density reaches a
few percent of the kinetic energy in the in-coming beam.
\begin{figure}[!t]
\begin{center}
\epsfig{figure=initial.eps,width=0.49\textwidth}
\epsfig{figure=jez_jiz.eps,width=0.49\textwidth}
\caption{
Left: The initial conditions for our experiment.
Right: Electron (top) and ion (bottom) currents, averaged
over the $x$-direction. The plasma is streaming from left
to right.}
\label{fig:cc}
\end{center}
\end{figure}
For an ion-electron plasma this is in fact a two-stage process.
When the electrons first encounter the shock interface they, being
the lighter particles, are rapidly deflected,
first into caustic surfaces and then into current channels.
The magnetic field
keeps growing in scale and strength, until the ions undergo
the same process and similarly ion current channels are formed.
Because of charge separation, the electrons will be attracted
to the ions, and the electron instability is quenched.
Instead the electrons start to Debye shield the ions, forming a
fuzzy cloud around the ion channels (see Fig.~\ref{fig:cc}).
The Debye shielding partly neutralises the ion channels, and helps
stabilise the evolution. The electrons are fully thermalised,
but the ions are only slightly deflected from their initial distribution,
due to the strong shielding of the electrons, and thermalisation might be
significantly slower than predicted simply by extrapolating with
the mass ratio.
The ion current channels mutually attract each other and a
self-similar merging process commences, where neighbouring
channels merge to form larger channels.
With the capacity of current computers, the ions cannot be followed
all the way to thermalisation, and merging of current channels is
ongoing when they reach the end of the box and stream out at the
open boundary.
To generate the radiation seen
in observations of GRB afterglows, a magnetic field
containing $10^{-5}$--$10^{-1}$ of the kinetic energy is required.
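In the parametrisation commonly used in the GRB afterglow literature (the symbol $\epsilon_B$ and the kinetic energy density $e_{\mathrm{kin}}$ are introduced here only for illustration), this fraction reads

```latex
\epsilon_B \equiv \frac{B^2/8\pi}{e_{\mathrm{kin}}},
\qquad 10^{-5} \lesssim \epsilon_B \lesssim 10^{-1}.
```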
The two-stream instability seen to occur in our experiments is
a strong candidate for explaining the generation of this magnetic
field, since it is unavoidable in collisionless shocks with low degrees
of magnetisation.
It then follows that the magnetic field cannot be taken as a free
parameter, but is a consequence
of the parameters of the shock, such as the inflow velocity and the
density contrast. These findings
do not only pertain to GRB afterglows, but also
imply that magnetic field generation may be an important ingredient
in all weakly magnetised collisionless shocks, and therefore occurs
in a range of objects from supernova remnants to internal shocks
in outflows from AGN, all harbouring collisionless shocks.
\section{Non-Fermi Powerlaw Acceleration in Astrophysical Plasma Shocks}
In Chapter \ref{chap:acc} I present the results published by
Hededal, Haugb{\o}lle, Frederiksen and Nordlund \cite{bib:hededal2004}.
We study highly relativistic charged ion-electron particle dynamics
in collisionless shocks.
The numerical experiment reported on here is different from the one in
Chapter \ref{chap:field} in that the in-streaming
plasma has a higher Lorentz factor ($\Gamma=15$) and the computational
box employed is about 3 times longer in the streaming direction, enabling us
to follow the process further downstream of the shock interface and for a
longer period of time, until the shock structure has been more fully
developed.
We find a powerlaw distribution
of accelerated electrons, which turns out to originate from an
acceleration process that is a direct consequence of the two-stream
instability observed in Chapter \ref{chap:field} and is local in nature.
The electrons are accelerated and decelerated when passing through
the cores of the ion current channels generated by the two-stream
instability, and the process is fundamentally different from
recursive acceleration processes, such as Fermi acceleration.
We find a powerlaw slope of $2.7$, in concordance with that
inferred from observations of the afterglow in gamma ray bursts,
and the process may explain
more generally the origin of part of the non-thermal radiation from relativistic
jets, supernova remnants, and shocked inter- and circumstellar
regions.
When two collisionless plasmas interpenetrate, current channels are
formed through the two-stream instability. The
ion current channels dominate the dynamics, due to the heavier mass
of the ions, and downstream of the shock the channels merge
in a hierarchical manner forming increasingly stronger patterns.
The electrons act to Debye shield the channels yielding charge neutrality
at large distances. At distances smaller than the Debye length
the ion channels are surrounded by an intense transverse electric field
that accelerates the electrons toward the channels and then decelerates
them when they move away from the channel.
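For reference, the Debye length setting this shielding scale follows from the standard formula; a minimal sketch, with temperature and density chosen purely for illustration rather than taken from the experiment:

```python
import math

def debye_length(T_kelvin, n_per_m3):
    """Electron Debye length lambda_D = sqrt(eps0 * kB * T / (n * e^2)), in metres."""
    eps0 = 8.854e-12   # vacuum permittivity [F/m]
    kB = 1.381e-23     # Boltzmann constant [J/K]
    e = 1.602e-19      # elementary charge [C]
    return math.sqrt(eps0 * kB * T_kelvin / (n_per_m3 * e**2))

# Illustrative numbers only: a 10^4 K plasma at 10 particles per cm^3.
lam = debye_length(1e4, 1e7)
```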
\begin{figure*}[!th]
\begin{center}
\epsfig{figure=f1.eps,width=\textwidth}
\caption{(A) Ray traced electron paths (red) and current density (blue).
The colours of the electron paths reflect their four velocity according
to the colour table in the inset (B). The shadows are
equivalent to the $x$ and $y$ projections of their paths. The ion
current density is
shown with blue colours according to the colour table in the inset.
The inset also
shows the ion current density (blue) integrated along the
$x$ axis with the spatial distribution of fast
moving electrons (red) over plotted.}
\label{fig:accfig}
\end{center}
\end{figure*}
This can be seen in Fig.~\ref{fig:accfig}, where in part (A) we have ray traced
two selected electron paths and colour coded them according to their velocity, and
in part (B) we have shown the spatial distribution of the fastest electrons in
the box overplotted on the ion current distribution. Notice
the strong correlation between fast-moving electrons and high ion current
density.
To analyse the process quantitatively we have constructed a
toy model, idealising the ion channel as a solid cylinder of moving ions.
Given an electron we can calculate the maximal
energy gained in the acceleration towards the cylinder (see
Fig.~\ref{fig:toymodel} to the left). We have compared the acceleration
predicted by this model with the acceleration observed in the experiment
and find good agreement.
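The electrostatic part of such a toy model can be sketched explicitly. Treating the channel as an infinitely long cylinder of radius $R$ with an effective linear charge density $\lambda_{\mathrm{eff}}$ (a symbol introduced here for illustration; the actual model in Chapter \ref{chap:acc} also includes the magnetic force of the current), Gauss's law gives, in Gaussian units,

```latex
E(r) = \frac{2\lambda_{\mathrm{eff}}}{r}, \quad r \ge R,
\qquad
\Delta W_{\max} = e \int_R^{r_0} E(r)\,\mathrm{d}r
                = 2 e \lambda_{\mathrm{eff}} \ln\frac{r_0}{R},
```

so the maximal energy gain of an electron falling in from a distance $r_0$ grows linearly with the channel charge but only logarithmically with $r_0$.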
\begin{figure}[!ht]
\begin{center}
\epsfig{figure=f2-intro.eps,width=0.49\textwidth}
\epsfig{figure=f4.eps,width=0.49\textwidth}
\caption{Left: A toy model of the acceleration process.
Electrons in the vicinity of the current channels are subject
to an electromagnetic force,
working to accelerate them along the ion flow.
Crossing the centre of the channel the process
reverses leading to an oscillating movement along the channel.
Right: The normalised electron particle distribution function downstream of
the shock. The dot--dashed line is a powerlaw fit to the non--thermal high
energy tail. The inset shows a similar
histogram for ion current density
sampled in each grid point in the same slice as the electrons.
}
\label{fig:toymodel}
\end{center}
\end{figure}
To the right in Fig.~\ref{fig:toymodel} we have plotted the particle distribution
function for the electrons in a small slice in the box. We observe a powerlaw
distribution. This should be understood as a consequence of 1) the acceleration
mechanism described above that directly relates the maximum kinetic energy of the
electrons to the local ion current density and 2) the powerlaw distribution
of the ion currents, as a consequence of the two-stream instability, seen
as an inset in the figure. The maximum acceleration observed is around
$v\gamma\approx 80$. Using the toy model and rescaling the ion-to-electron
mass ratio of 16, used in the experiment, to the real value of $1836$, we find
the maximum energy gained by the electrons to be around $5$~GeV.
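The arithmetic behind this estimate can be sketched as follows; the assumption that the maximum four-velocity scales linearly with the ion-to-electron mass ratio is taken over from the toy model, and the function name is mine:

```python
def rescaled_max_energy_eV(v_gamma, mass_ratio_sim, mass_ratio_real):
    """Scale the peak electron four-velocity linearly with the ion/electron
    mass ratio and convert to energy in eV (electron rest mass 0.511 MeV)."""
    m_e_c2_eV = 0.511e6
    return v_gamma * (mass_ratio_real / mass_ratio_sim) * m_e_c2_eV

# v*gamma ~ 80 at the simulated mass ratio of 16, rescaled to the real 1836:
E = rescaled_max_energy_eV(80.0, 16.0, 1836.0)  # of order 5e9 eV, i.e. ~5 GeV
```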
The presented acceleration mechanism is essentially due to the electrons
oscillating in a potential, though as seen in Fig.~\ref{fig:accfig} the
true paths of the electrons are more complicated, and the radiative
efficiency can be very high, because there are no free high energy electrons
carrying away the kinetic energy, as in recursive acceleration scenarios.
Moreover the properties of the process depend primarily on the local conditions
of the plasma.
In the chapter we estimate the thermalisation length for the ions,
and find by extrapolating the fractional thermalisation observed at the boundary
of the box, that the ions should thermalise in approximately 1500 ion skin depths.
Using typical values for density in a gamma ray burst afterglow this is
equivalent to $10^8$ m. We emphasise that the thermalisation length depends
on the inflow velocity and the ion-to-electron mass ratio, among other parameters,
and a parameter study is necessary to uncover the true interdependence
of parameters.
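The conversion from skin depths to metres can be reproduced with a short estimate; the density $n\sim 10\ \mathrm{cm^{-3}}$ adopted here is an illustrative stand-in for the ``typical values'' mentioned above:

```python
import math

def ion_skin_depth(n_per_m3):
    """Ion (proton) skin depth c / omega_pi, in metres."""
    c = 2.998e8        # speed of light [m/s]
    e = 1.602e-19      # elementary charge [C]
    eps0 = 8.854e-12   # vacuum permittivity [F/m]
    m_p = 1.673e-27    # proton mass [kg]
    omega_pi = math.sqrt(n_per_m3 * e**2 / (eps0 * m_p))
    return c / omega_pi

# 1500 skin depths at an assumed density of 10 cm^-3 = 1e7 m^-3:
L = 1500 * ion_skin_depth(1e7)   # of order 1e8 m
```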
Even though the two-stream unstable shock interface is estimated to be relatively thin,
the high radiative efficiency implies that the non-thermal radiation
observed in gamma ray burst afterglows and relativistic jets in general
could be emitted from such a thin shell.
\section{The Global Structure of Collisionless Shocks}
Collisions in ``collisionless shocks'' are
mediated by the collective electromagnetic field, and the scattering of the
particles on the field slowly heats the particles. At some
point the two-stream instability cannot be sustained, and the current
channels become unfocused and decay, due to the thermal
motion of the individual particles,
which creates a warm turbulent medium with no significant large scale magnetic
field. In Chapters \ref{chap:field} \& \ref{chap:acc} it is shown how
magnetic field
generation and particle acceleration are integral parts of relativistic
collisionless shocks
in the case of weak or absent large scale magnetic fields.
To understand the impact on observations it is essential to investigate how far
downstream of the initial shock the two-stream unstable region
extends. With this in mind, in Chapter \ref{chap:global} I
discuss the global structure of collisionless shocks.
A range of experiments are presented, both three-dimensional models of pair
plasmas and two-dimensional models of ion-electron plasmas.
There is a fundamental difference between ion-electron shocks, where the
mass difference leads to the ions dominating the dynamics and the electrons
stabilising the ion channels, and a pair plasma, where the electrons and
positrons form channels on the same timescale, and no shielding occurs.
In the latter case the two-stream unstable
region is significantly smaller than in the case of ion-electron shocks.
In the three-dimensional computer experiments we observe that the electrons and
positrons thermalise fully, and the medium contains five different regions:
The unperturbed upstream medium coming in from the left of the box; the
first discontinuity in the velocity, with a two-stream unstable region; a warm
thermalised region that is separated into a high and a low density state;
another two-stream unstable discontinuity, where the warm shocked medium
collides with the unperturbed downstream medium; and finally the
unperturbed downstream medium. To verify that I have in fact resolved the full
shock structure in a satisfactory manner, and that the jump conditions have been
established, I compare the experiment with a fluid simulation and
find good agreement.
From this experiment we can estimate that the two-stream unstable regions
for electron-positron plasmas decay after 50-100 electron skin depths.
In the second part of Chapter \ref{chap:global} I consider the global structure
of ion-electron dominated collisionless shocks. With current computer capacities
it is impossible to correctly model the global structure of an ion-electron
shock in three dimensions.
Two-dimensional collisionless shocks, being less costly computationally,
remain a promising alternative, and I have investigated their applicability
to understanding real three-dimensional models by performing large scale
two-dimensional experiments (see Fig.~\ref{fig:final2d-intro}), comparing
them to the three-dimensional
experiment discussed in Chapter \ref{chap:acc}.
\begin{figure*}[!t]
\begin{center}
\epsfig{figure=final2d-intro.eps,width=\textwidth}
\caption{The current density of the ions in a high resolution two-dimensional
experiment. The dashed lines indicate the region used for constructing particle
distribution functions. Length units
are given in electron skin depths.}
\label{fig:final2d-intro}
\end{center}
\end{figure*}
\begin{figure*}[!t]
\begin{center}
\epsfig{figure=hist_2000.eps,width=0.49\textwidth}
\epsfig{figure=hist_2050.eps,width=0.49\textwidth}
\caption{Particle distribution function for the electrons in a slice
indicated on Fig.~\ref{fig:final2d-intro}.
To the left is shown the PDF for the largest
two-dimensional experiment, while to the right the PDF for the three-dimensional
experiment is shown.}
\label{fig:pdfelec}
\end{center}
\end{figure*}
The particle distribution functions (PDFs) of the electrons
for the two-dimensional
and three-dimensional experiments are compared in Fig.~\ref{fig:pdfelec}.
The slope indicated in Fig.~\ref{fig:pdfelec} depends on the
amount of heating in the upstream population, impacting the high energy
part of the spectrum, and the downstream population, impacting the low energy
part of the spectrum.
A warmer upstream population will be broader in phase space, and consequently
the maximum is lower, giving rise to a steeper slope. The two-dimensional
experiments have a slope index of $2.1$, while the three-dimensional
experiment has a slope index of $1.55$. The difference in heating rates
is understood in terms of the toy model, introduced
above in section 1.4 and discussed in Chapter \ref{chap:acc},
as a consequence of the different geometries.
The physical significance of the two-stream instability remains directly
related to the extent of the two-stream unstable region, and caution should be
voiced about uncritically generalising results from two-dimensional experiments to
three dimensions. My experiments seem to indicate that one will observe a
faster thermalisation rate in two-dimensional experiments than what may be
expected from three-dimensional experiments.
\section{A Next Generation PIC Code}
In Chapter \ref{chap:photonplasma} I present, together with C. Hededal,
the first results from a new particle-in-cell code in development.
The particle code that has been used to obtain the
results described in Chapters \ref{chap:field}--\ref{chap:global}
is limited to modelling the dynamics of charged particles under the
influence of electromagnetic fields.
In the new code, the concept of particles is generalised; most
notably we have introduced photons, and we consider microphysical
interactions such as scatterings, decay, annihilation
and pair production.
Even though work still has to be done before we may start to investigate
non-trivial astrophysical scenarios, solid progress has already been made,
and to test the infrastructure of the new code we have implemented
Compton scattering as a simple scattering mechanism.
The results are very promising; there is excellent
agreement between theory and the numerical experiment.
The new code will enable us to target problems that reside in the grey zone
between the MHD and collisionless plasma domains.
This grey zone covers many astrophysical scenarios of great interest,
among others internal shocks in gamma-ray bursts, solar flares
and magnetic substorms, compact relativistic objects, and aspects of supernova
remnants.
\chapter{A Next Generation PIC Code}\label{chap:photonplasma}
\section{Introduction}
Over the last couple of years the Copenhagen group has been using PIC
models that include electromagnetic fields and charged particles to
understand the plasma microphysics of collisionless shocks
\cite{bib:frederiksen2002,bib:frederiksen2004,bib:hededal2004,bib:hededal2005}. It has
turned out to be a very successful tool, but it is still limited in the scope of
phenomena that may be addressed.
Even though a large class of astrophysical
environments are indeed collisionless, scattering and collision processes
do play an important role in several key scenarios.
Examples are given below. Another key ingredient, which has been
missing in charged particle simulations, is a full treatment of
photon propagation. It can be argued that photons are represented
directly on the mesh by electromagnetic waves, which certainly is
correct. But the mesh can only represent waves with frequencies
smaller than the Nyquist frequency. In our applications the physical length
of a typical cell has been $10^5$--$10^6\ \textrm{cm}$, and hence it is
clear that only low frequency radio waves can be
represented. High frequency photons have to be implemented as
particles that propagate through the box and interact, either
indirectly through messenger fields on the mesh, or directly with
other particles. A valuable consequence of modeling the detailed
photon transport is that extraction of electromagnetic spectra is
trivial. Even in cases where the photon field is only a passive
participant, this fact should not be underestimated as
it enables direct comparison with observations.
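The cell-size argument can be quantified: a wave must be sampled by at least two mesh points, so the highest frequency representable is roughly $c/(2\Delta x)$. A minimal sketch, using the cell sizes quoted above:

```python
def max_mesh_frequency(dx_m):
    """Highest EM wave frequency resolvable on a mesh with cell size dx
    (Nyquist-type limit: at least two cells per wavelength)."""
    c = 2.998e8  # speed of light [m/s]
    return c / (2.0 * dx_m)

# Cell sizes of 1e5-1e6 cm correspond to 1e3-1e4 m:
f_hi = max_mesh_frequency(1e3)   # ~1.5e5 Hz -- low-frequency radio
f_lo = max_mesh_frequency(1e4)   # ~1.5e4 Hz
```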
There exist Monte Carlo based particle codes (see e.g.~\cite{bib:stern95}
and references therein) that address various particle interactions, but one
of their main shortcomings is the poor spatial resolution. This makes it
impossible to couple the particle aspects to a self consistent evolution
of the plasma.
Our goal has been to develop a framework where both
electromagnetic fields and scattering processes are included in a consistent way. We can then
correctly model the plasma physics and the radiative dynamics. The
scattering processes include, but are not limited to, simple
particle-particle scattering, decay and annihilation/creation
processes. Our new code is not limited in any way to charged
particles, but can also include neutrals such as photons and neutrons.
In the next section we describe some of the physics that can be addressed
with this new code. In section \ref{sec:NGPimplementation} we discuss how
the code has been implemented, the general framework, and in detail
which physical processes are currently implemented. In section
\ref{sec:NGPresults} we present the first results in the form of a toy
experiment that we have performed to validate the code. In the last
section \ref{sec:NGPdiscussion} we summarize.
\subsection{Motivation}
Before we continue and describe in detail the methods, physics and test
problems we have implemented and used, it is important to consider
the general class of scenarios we have had in mind as motivation for
developing the code. There are several key objects, where only the
bulk dynamics is understood, and we lack a detailed
understanding of the microphysics.
\subsubsection{Internal shocks in gamma ray bursts}
In the internal/external GRB shock model, the burst of gamma-rays is
believed to be generated when relativistic shells collide and dissipate
their relative bulk energy \cite{bib:rees1992,bib:meszaros1993}.
The nature of the radiation is presumably inverse Compton scattering and
synchrotron radiation. Particle/photon
interactions might also play an important role in the very early
afterglow as suggested by
\cite{bib:thompson2000,bib:beloborodov2002}: Even though the medium
that surrounds the burst (ISM or wind) is optically very thin to
gamma-rays, a tiny fraction of the gamma-rays will Compton
scatter on the surrounding plasma particles. This opens up the
possibility of pair-creation between back scattered and outgoing
gamma-rays. The creation of pairs may increase the rate of back
scattered photons in a run-away process \cite{bib:stern2003}.
The Compton scattering may accelerate the pair-plasma through the surrounding medium with many
complicated and non-linear effects, including streaming plasma
instabilities and electromagnetic field generation. Hence, it is
crucial that plasma simulations of internal GRB plasma shocks
include lepton-photon interactions.
\subsubsection{Solar corona and the solar wind}
Space weather (defined as the interaction of the solar wind with
the Earth) is in high focus for several reasons. Not only is the Sun
our closest star, providing us with invaluable data for stellar
modeling, but coronal mass ejections from the Sun potentially have
impact on our every day life. The strong plasma outflows from the
Sun can induce large electrical discharges in the Earth's ionosphere.
This may disrupt the complex power grids on Earth, causing rolling
blackouts such as the one in Canada and North America in 1989. Also
high-energy particles can be hazardous to astronauts and airline
passengers. Computer simulations have provided a successful way of
obtaining insight into these complex plasma physical processes.
However, in the solar corona and in the solar wind plasma out to
distances beyond the Earth's orbit, difficulties arise in finding the
right formalism to describe the plasma. Neither a
collisionless model based on the Vlasov equation nor an MHD fluid
model provides an adequate framework for investigation. The problem
has already been studied using three dimensional PIC simulations
but without taking collisions into account (e.g.
\cite{bib:buneman1992,bib:hesse2001}).
\subsubsection{The corona of compact objects}
The bulk dynamics of accreting compact objects have been modeled for many
years using fluid based simulations (e.g. \cite{bib:balbus2003} and references therein). Nevertheless,
it has been a persistent problem
to extract information about the radiating processes. Furthermore in
the corona the MHD approximation becomes dubious, just as in the
solar corona. The environment around a compact object
is much more energetic than the solar corona, and therefore
radiative scattering processes play an important role. Pair
production is also believed to be abundant. Using our new code it would
be possible to model a small sub box of the corona.
The main problem here -- as in most numerical implementations -- is to
come up with realistic
boundaries for the local model. A shearing box approach may be
appropriate, but in fact we can do even better.
The size of a stellar mass black hole is around $10^6\ \textrm{cm}$.
In a fluid simulation we want to model the accretion disk--compact
object system out to
hundreds of radii of the compact object. The normal approach is
to use a non uniform mesh. Nonetheless, the Courant criterion,
which determines the time step, is still limited by the sound
crossing time of the compact object. That is, the time step is limited
by the size of the innermost (and smallest) cells in the mesh. The very small
time step corresponds to those found in a typical particle
simulation, where the strict time step arises from the need to
resolve plasma oscillations. Hence data from an MHD simulation
could provide temporally well resolved fluxes on the boundaries of
the much smaller sub box containing the particle simulation.
In this sense the particle simulation will act as a probe or
thermometer of the fluid model. The particle model includes the
full microphysics in a realistic manner and most importantly
includes photon transport. Realistic spectra
could be obtained indirectly from the fluid model, testing
fluid theory against observations. We have already
worked on interfacing fluid models with the old PIC code.
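The time-step argument above can be made concrete with a minimal sketch of the Courant limit on a non-uniform mesh (names and numbers are illustrative, not taken from any of the codes discussed):

```python
def courant_dt(cell_sizes_m, signal_speed_m_s, cfl=0.5):
    """Global time step allowed by the Courant criterion: the smallest cell
    sets the step for the whole mesh."""
    return cfl * min(cell_sizes_m) / signal_speed_m_s

# A mesh refined towards a compact object: even though most cells are large,
# the innermost (smallest) cell dictates the time step for the whole mesh.
cells = [1e6, 1e5, 1e4, 1e3]          # cell sizes [m], illustrative
dt = courant_dt(cells, 3e8)           # limited by the 1e3 m cell
```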
\subsubsection{Pre-acceleration in cosmic ray acceleration}
Accepting Fermi acceleration as a viable mechanism for accelerating
electrons and creating the non-thermal cosmic ray spectrum still leaves
some big questions unanswered. One is that the Fermi
mechanism requires injection of high energy electrons while still keeping a large, low-energy population to sustain
the magnetic turbulence. Hence, a pre-acceleration mechanism needs
to be explained.
The shocks in supernova remnants are believed to be cosmic ray
accelerators. However, the Fermi acceleration process in shocks is
still not understood from first principles but relies on assumptions about
the electromagnetic scattering mechanism. PIC codes would seem ideal
for exploring the mechanism from first principles, since they include
field generation mechanisms and the back-reaction that the high
energy particles have on this scattering agent. In supernova
remnants, however, the mean free path for Coulomb collisions is
comparable to the size of the system, and particle-particle interactions
cannot be fully neglected.
\section{Implementation}\label{sec:NGPimplementation}
Implementing any state-of-the-art large scale numerical code is a big
undertaking, and can
easily end up taking several man years. We estimate that the final version of the next generation
code will contain more than 50,000 lines of code. Starting in
February this year, it has taken us three man months to implement
the current incarnation of the code which has already grown to
approximately 10,000 lines. Besides T.~Haugb{\o}lle and
C.~B.~Hededal, the development is done together with
{\AA}.~Nordlund and J.~T.~Frederiksen. Fortunately we have a good
tradition of and expertise in numerical astrophysics in Copenhagen,
and we have been able to port different technical concepts and
solutions from our suite of fluid codes and to a lesser extent
from the old PIC code. The aim is to build an extremely scalable
code that is able to run on thousands of CPUs on modern cluster
architectures and
utilize MPI as the inter node communication protocol. In this
chapter we will not go further into technical details. Instead
we will put emphasis on the important concepts and physics and how we have
implemented these.
\subsection{Concepts}
The two fundamental objects in a particle-in-cell code are the
mesh and the particles. We have adopted the solver and interpolation routines from the old PIC code
to solve the Maxwell equations and find fluxes and densities on the mesh.
The mesh is used to distribute messenger fields
-- such as the electromagnetic fields -- and to calculate volume averaged
fluxes and densities of the particles. The latter are used as source terms in the evolution of the messenger
fields.
The particles really represent an ensemble of particles and are often
referred to as \emph{pseudoparticles} \cite{bib:birdsall} or {\em large particles}. A so-called smoothing kernel describes the
density distribution of a single pseudoparticle on the mesh. In our
implementation the volume of a particle is comparable to a cell in the mesh.
\subsubsection{Pseudoparticles with variable weights}
The concept of
pseudoparticles is introduced since the ``real space'' particle
density easily exceeds any number that is computationally reasonable
(i.e. of the order of a billion particles). The pseudoparticle charge to mass ratio
is kept the same as the ratio for a single particle.
In ordinary PIC codes the weight of each pseudoparticle of a given
species is kept constant throughout the simulation. The benefit is a
simple code and a unique identity for each particle. The first is a
convenience in the practical implementation, the second important
when understanding the detailed dynamics and history of a single
particle.
Notwithstanding possible conveniences, as detailed below in section
\ref{scat}, we have decided to improve this concept to a more
dynamical implementation where each pseudoparticle carries an
individual weight. Particles are then allowed to merge and split up
when a cell contains too many/few particles, or when particles are
scattered. The concept is sometimes used in smooth particle
hydrodynamics (SPH), where different
techniques have been proposed for the splitting and merging of
particles. It is used both to adjust the density of individual
particles \cite{bib:trulsen2001} and in the conversion of gas
particles to star particles in galaxy formation models \cite{bib:governato2004}.
An important quality of SPH is its adaptive resolution
capabilities. These are important in the description of collapsing
self gravitating systems, ranging from core collapse supernovae to
the formation of galaxy clusters, scenarios where matter is collapsing
many orders of magnitude, and therefore the smoothing length or
volume of the individual particles is readjusted accordingly.
Consequently, when splitting particles or adjusting the weights in
an SPH code, it is important to match precisely the spatial density
distribution of the parent particle to the spatial distribution
of the child particles. In PIC codes, though, the spatial size or
smoothing parameter of an individual particle is
determined beforehand by the mesh spacing. This is reasonable since
we are not interested in adaptive resolution but rather a kinetic
description of the plasma dynamics. Splitting a {\it
parent} particle with weight $w_p$ into {\it child} particles
with weights $w^i_c$ is therefore trivial.
The requirements of conservation of mass and
four velocity together with conservation of the density and flux
distribution in the box, can all be satisfied by setting
\begin{align}
w_p &= \sum^n_{i=1} w^i_c\,, & e_p &= e^i_c\,, &
\gamma_p\vec{v}_p &= \gamma^i_c\vec{v}^{\, i}_c\,,
\end{align}
since the smoothing kernel is determined by the mesh spacing, not the
mass of the individual particle.
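As an illustration, a minimal sketch of this splitting step in Python (the function and variable names are ours, not part of the actual code):

```python
import numpy as np

def split_particle(w_p, u_p, n_children):
    """Split a parent pseudoparticle of weight w_p and four-velocity
    u_p = gamma*v into n_children children.  The child weights sum to
    the parent weight, while the four-velocity (and hence the energy
    per physical particle) is simply copied, so the density and flux
    distributions on the mesh are unchanged: the smoothing kernel
    depends only on the mesh spacing, not on the particle weight."""
    w_c = np.full(n_children, w_p / n_children)   # equal partition of the weight
    u_c = np.tile(u_p, (n_children, 1))           # identical four-velocities
    return w_c, u_c

# example: split one particle of weight 12 into three children
w_c, u_c = split_particle(12.0, np.array([0.1, 0.0, 0.0]), 3)
```

Any partition of the weight would do, as long as the child weights sum to $w_p$; an equal split is the simplest choice.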
The merging or renormalization of pseudoparticles requires a much more thorough
analysis. Up to now we have investigated two schemes, one that respects conservation
of mass, energy and four velocity by merging three particles into two at a time,
and one where only mass, energy and average direction is conserved by merging two particles into one
particle. While these schemes probably
work well for approximately thermal distributions, they can easily give
rise to large numerical heating when considering head-on beam collisions.
We believe this can be improved by first selecting a random ``merger particle'' and
then finding other particles in the local cell that are close to the merger
particle in momentum space. A more radical approach is to resample
the full phase distribution in a cell every time the number density exceeds
a certain threshold. Nevertheless, finding the optimal method to merge particles
requires testing in different extreme situations, and it is still a work in progress.
To obtain the results that we present in section \ref{sec:NGPresults}, we
ran the code without pseudoparticle merging activated.
\subsubsection{Scattering processes and splitting of particles}\label{scat}
In Monte Carlo based particle codes the generic way to compute an
interaction is first to calculate the probability for the interaction
$P_S$, then draw a random number $\alpha$. If $\alpha \le P_S$,
the full pseudoparticle is scattered; otherwise nothing happens. This
probabilistic approach is numerically rather efficient and simple to implement, but
it can be noisy, especially when very few particles are present in a cell.
In large particle Monte Carlo codes the typical cell contains up to $10^4$
particles per species per cell (hence ``large particle''). In our
PIC code typical numbers are $10^1-10^2$ particles per species per cell, since
we need many cells to resolve the plasma dynamics. For our requirements the
probabilistic approach would result in an unacceptable level of noise. For example, in a beam
experiment the spectra of the first generation of scattered particles may come
out relatively precise, but the spectra of higher generation scattered particles
(i.e.~particles that are scattered more than once) will come out with
poor resolution or require an excessive amount of particles. Another well known
consequence of the probabilistic approach is that for a given experiment
the precision scales, at best, with the inverse
\emph{square root} of the number of particles used in the experiment.
\begin{figure}[t]
\begin{center}
\epsfig{figure=scattering.eps,width=\textwidth}
\caption[Schematics of a generic scattering process]{To implement the
scattering of
two pseudoparticles we transform to the rest frame of the target particle
(shown as red/light gray) and compute the probability $P(n)$ that a single
incident particle (shown as blue/dark gray) during a time step $\Delta t$ is
scattered on the $n$ target particles. If the incident particle has weight
$m$, then $k=P(n) m$ particles will interact and two new pseudoparticles are
created.}
\label{fig:splitting_schematic}
\end{center}
\end{figure}
To increase effective spectral resolution we have instead decided to take a
more direct approach. For simplicity we will here describe the method for a
two-particle interaction, and disregard
all factors converting code units to physical units. For example, the weight
of a pseudoparticle is proportional to the number of physical particles in
the
pseudoparticle. Although these prefactors all represent trivial conversions of units,
they must be taken into account in the actual code.
Consider a single cell containing a single pseudoparticle (red) with weight $w_t=n$ and a single pseudoparticle (blue) with weight
$w_i=m$, where
$n>m$ (see Fig.~\ref{fig:splitting_schematic}). We first select the red particle
as the \emph{target}, since $n>m$, and the blue particle as the \emph{incident}
particle. We then transform the four velocity of the incident particle to the rest
frame of the target particle, and calculate the total cross section
$\sigma_t$ of the interaction. Conceptually we consider the process as a single incident particle
approaching a slab of the target particle. The number density of target
particles in the slab can be calculated from the weight $w_t$ as
$\rho_t = w_t/\Delta V$, where
$\Delta V = \Delta x \Delta y \Delta z$ is the volume of a single cell. Given the number density
the probability that a single incident particle is scattered
\emph{per unit length} is
\begin{equation}
P_l = \rho_t \sigma_t = \frac{w_t \sigma_t}{\Delta V}\,.
\end{equation}
During a time step $\Delta t$ the incident particle travels
$\Delta l =v_{i} \Delta t$,
and the probability that a single incident particle is scattered then becomes
\begin{align}\nonumber
P_S &= 1 - \exp\left[ - P_l \Delta l\right] \\ \label{eq:scat}
&= 1 - \exp\left[ - \frac{w_t \sigma_t v_i \Delta t}{\Delta V}\right]\,.
\end{align}
The weight of the incident pseudoparticle is $w_i=m$. Pseudoparticles
represent an ensemble of particles. Therefore
$P_S$ is the fraction of incident particles that are scattered on the
target. To model the process we create two new particles with weight
$w_{new} = w_i P_S = k$. Given the detailed interaction, we can calculate
the theoretical angular distribution of scattered particles in accordance with the
differential scattering cross section. Drawing from this distribution we
find the momentum and energy of the new scattered particles. The
weights of the target and incident particles are decreased to $w_t=n-k$ and
$w_i=m-k$ respectively (see Fig.~\ref{fig:splitting_schematic}).
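A minimal sketch of this weight-transfer step, assuming code units as in the text (Python; the function and variable names are illustrative only):

```python
import math

def scatter_weights(w_t, w_i, sigma_t, v_i, dt, dV):
    """Deterministic weight transfer for one two-particle interaction.
    P_S follows the scattering probability above; k = w_i * P_S is the
    weight of the two newly created scattered pseudoparticles, and the
    target and incident weights are reduced accordingly."""
    P_S = 1.0 - math.exp(-w_t * sigma_t * v_i * dt / dV)
    k = w_i * P_S
    return k, w_t - k, w_i - k

# example in code units: target weight n=10, incident weight m=4
k, w_t_new, w_i_new = scatter_weights(w_t=10.0, w_i=4.0,
                                      sigma_t=0.01, v_i=0.9, dt=0.1, dV=1.0)
```

Note that the update is deterministic: no random number decides whether the whole pseudoparticle scatters, which is precisely what suppresses the Monte Carlo noise.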
Our method faithfully represents the actual physics even for small cross
sections. However, if all the particles are allowed to interact, the number of
particles in the box will increase at least proportionally to the total number of
particles squared. This is potentially
a computational runaway. Normally we will have on the order of up to $100$
particles per species per cell, but to be computationally efficient we only calculate interactions for
a subset of the particles in a cell. This subset is chosen at random according
to an arbitrary distribution we are free to select. If the probability that
two particles are selected for scattering in a given time step is $Q$ then
the traveling length $\Delta l$ simply has to be adjusted as $\Delta l/Q$.
If this arbitrary distribution is chosen cleverly, the particles with the
largest cross section are the ones selected most often for
scattering, and everything ends up balanced: we only
calculate the full cross section and scattering as often as needed, and the
computational load given to a certain particle is proportional to the
probability of that particle to scatter.
We rely on the merging of particles as described above to avoid the copious
production of pseudoparticles. Every time the number of pseudoparticles in a
given cell crosses a threshold, pseudoparticles are merged and this way the
computational load per cell is kept within a given range.
\subsection{Neutron decay}
Free neutrons not bound in a nucleus decay with a
half-life a little longer than ten minutes. The neutron
decays into a proton, an electron and an electron antineutrino
(the latter satisfying lepton number conservation)
\begin{equation}
n \to p + e^{-} + \bar{\nu}_e\,.
\end{equation}
The rest mass difference of the process (0.78 MeV) goes into kinetic energy
of the proton, electron and neutrino. Let the neutron lifetime be $\tau$ in
code units. If $\tau$ is comparable to or less than a typical time step, then
practically all neutrons decay in one iteration, and it is irrelevant to
include them. If $\tau$ is much larger than the total runtime, the
neutron can be considered a stable particle (unless the
neutron density in the box is much larger than the proton or electron density). If instead $\tau \simeq \alpha
\Delta t$ where $\alpha \sim 100$, then we can select a fraction $f$ of the
pseudoparticle neutrons in each cell and let them decay. This is done in an
analogous manner to the generic scattering process described above in
section \ref{scat}. The weight of the selected neutron
is decreased with a factor
\begin{equation}
\exp\left[-\frac{f\Delta t}{\gamma \tau}\right]\,,
\end{equation}
where $\gamma$ is the Lorentz boost of the neutron pseudoparticle and $f$
is chosen to give reasonable values for the decrease in the weight. At the
same time a pair of electron
and proton pseudoparticles is created with the same weight.
The generated particles share the excess mass of the process (where the
neutrino is neglected for now, but could be included in the future).
The momenta are selected to give an isotropic distribution in the rest frame of the
decaying neutron.
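A sketch of the corresponding weight update (Python; illustrative only, and the momenta of the created pair are not drawn here):

```python
import math

def decay_neutron(w_n, gamma, tau, dt, f):
    """Decrease the weight of a selected neutron pseudoparticle by the
    factor exp(-f*dt/(gamma*tau)) and return the new neutron weight
    together with the weight of the created electron and proton
    pseudoparticles.  Lab-frame time dilation enters through gamma."""
    w_new = w_n * math.exp(-f * dt / (gamma * tau))
    w_pair = w_n - w_new      # weight of each created e-/p pseudoparticle
    return w_new, w_pair

# example: tau = 100 time steps, non-relativistic neutron
w_new, w_pair = decay_neutron(w_n=100.0, gamma=1.0, tau=100.0, dt=1.0, f=1.0)
```

The total weight (neutron plus created pair) is conserved, so the baryon number in the cell is unchanged by the update.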
\subsection{Compton scattering}
Here we briefly describe a specific physical scattering mechanism which
has already been implemented in the code, namely Compton scattering.
Compton scattering is the relativistic generalization of the classical
Thomson scattering process, where a low energy photon scatters off
a free electron. In the rest frame of the electron, the photon
changes direction and loses energy to the electron, which is set in
motion. The cross section for Thomson scattering is
\cite{bib:rybicki}
\begin{equation}
\sigma_T=\frac{8\pi}{3}r_0^2\,,
\end{equation}
where $r_0\equiv e^2/(m c^2)$ is called the {\it classical electron
radius}. The Thomson scattering approximation is valid as long as
the photon energy is much lower than the electron rest mass, $h\nu\ll
m_ec^2$, and the scattering can be regarded as elastic. For photon
energies comparable to, or larger than, the electron rest mass,
recoil effects must be taken into account.
Measured in the electron rest frame we define $\epsilon_1$ as the
photon energy before the scattering,
$\epsilon_2$ as the photon energy after the scattering and $\theta$
the photon scattering angle (Fig.~\ref{fig:compton_schematic}).
\begin{figure}[t]
\begin{center}
\epsfig{figure=compton_schematic.eps,width=.5\textwidth}
\caption[Schematic picture of Compton scattering] {Schematic view
of the Compton scattering process.
Impinging on the electron, an incoming photon with energy
$\epsilon_1$ is scattered into the angle $\theta$ with
energy $\epsilon_2$. In the initial rest frame of the electron, the
electron recoils to conserve energy and momentum.}
\label{fig:compton_schematic}
\end{center}
\end{figure}
By conservation of energy and momentum one can show
(e.g. \cite{bib:rybicki}) that
\begin{equation}\label{eq:comptonenergy}
\epsilon_2=\frac{\epsilon_1}{1+\frac{\epsilon_1}{m_e c^2}
(1-\cos\theta)}\,.
\end{equation}
The differential cross section as a function of scattering angle is
given by the Klein-Nishina formula
\cite{bib:klein1929,bib:heitler1954}
\begin{equation}\label{eq:kn}
\frac{d\sigma_C}{d\Omega}=\frac{r_0^2}{2}
\frac{\epsilon_2^2}{\epsilon_1^2}
\left(\frac{\epsilon_1}{\epsilon_2}
+\frac{\epsilon_2}{\epsilon_1}-\sin^2\theta\right).
\end{equation}
The Klein-Nishina formula takes into account the relative intensity
of scattered radiation, incorporates the recoil factor (or
radiation pressure), and includes the relativistic quantum corrections.
The total cross section is then
\begin{equation}
\sigma_C=\sigma_T\frac{3}{4}\left[\frac{1+x}{x^3}
\left\{\frac{2x(1+x)}{1+2x}-\ln(1+2x)
\right\}+ \frac{1}{2x}\ln(1+2x)-\frac{1+3x}{(1+2x)^2}\right]\,,
\end{equation}
where $x\equiv h\nu/(mc^2)$.
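The formulas above translate directly into code. A short sketch (Python, with energies in units of $m_ec^2$ and cross sections in units of $r_0^2$; not the actual implementation) that also recovers the Thomson limit $\sigma_C\to\sigma_T$ for $x\ll1$:

```python
import math

SIGMA_T = 8.0 * math.pi / 3.0     # Thomson cross section in units of r_0^2

def compton_energy(eps1, cos_theta):
    """Scattered photon energy in the electron rest frame,
    energies in units of m_e c^2 (Compton energy-shift formula)."""
    return eps1 / (1.0 + eps1 * (1.0 - cos_theta))

def sigma_compton(x):
    """Total Compton cross section in units of r_0^2, x = h*nu/(m c^2)."""
    l = math.log(1.0 + 2.0 * x)
    return SIGMA_T * 0.75 * (
        (1.0 + x) / x**3 * (2.0 * x * (1.0 + x) / (1.0 + 2.0 * x) - l)
        + l / (2.0 * x)
        - (1.0 + 3.0 * x) / (1.0 + 2.0 * x)**2)
```

For back-scattering ($\theta=\pi$) of a photon with $\epsilon_1=m_ec^2$ this gives $\epsilon_2=\epsilon_1/3$, and for $x\to0$ the total cross section approaches $\sigma_T$ as expected.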
\section{Results}\label{sec:NGPresults}
To test the new code and its capabilities with regard to the inclusion
of collisions, we have implemented and tested a simple scenario
involving Compton scattering.
\begin{figure}[t]
\begin{center}
\epsfig{figure=compt_scatter.eps,width=1.\textwidth}
\caption[3D
scatter plot of a photon beam passing through a pair plasma] {3D
scatter plot of a photon beam ({\it black}) passing through a cold
pair plasma ({\it gray}). The left panel shows the initial setup, where a
photon beam is injected in the upward direction. The right panel shows
how photons are scattered on the electron-positron pairs.}
\label{fig:compt_scatterplot}
\end{center}
\end{figure}
In the test setup, we place a thin layer of cold electron-positron pair plasma
in the computational box. From the boundary, we inject a monochromatic beam of photons
all traveling perpendicular to the pair-layer (Fig.~\ref{fig:compt_scatterplot}, left panel).
As the beam passes through the plasma layer, photons are scattered (Fig.~\ref{fig:compt_scatterplot}, right panel).
For each scattered photon we sample the weight of the photon
and its direction (remembering that all particles are pseudoparticles
that represent whole groups of particles).
Fig.~\ref{fig:compt_theory} shows the theoretical cross section as a
function of scattering angle compared with the result from the simulations.
Four plots for different energies of the incoming photon
beam are shown. We find excellent agreement between the simulation results and
the theoretical predictions.
\begin{figure}[htb]
\begin{center}
\epsfig{figure=crosssection.eps,width=1.\textwidth} \caption[The
theoretical Compton scattering cross section compared to data] {The
theoretical Compton scattering differential cross section. We have
performed a test experiment with an incoming laser beam on a very
cold electron population. Overplotted on the differential distribution
is the theoretical curve according to Eqs.~(\ref{eq:kn}) and
(\ref{eq:comptonenergy}). } \label{fig:compt_theory}
\end{center}
\end{figure}
\section{Discussion}\label{sec:NGPdiscussion}
A next generation PIC code that includes many different kinds of
scattering processes is under development.
It will enable us to target problems that reside in the grey zone
between the MHD and collisionless plasma domains.
This domain covers many astrophysical scenarios of great interest,
including internal shocks in gamma-ray bursts, solar flares
and magnetic substorms, compact relativistic objects, supernova
remnants and many more.
The concept of splitting/merging particles and keeping individual weights for
each particle carries many important benefits. Variable weights represent the true statistics of a scattering process
in an optimal way compared to the Monte Carlo approach.
Also, for MPI-parallelization it is crucial that the
number of particles per cell is kept more or less constant to ensure an
optimal CPU load-balancing. To localize calculations we are employing a
sorting algorithm that maintains neighboring particles on the mesh as
neighbors in memory. This is not only good for parallelization, but also
makes all computations very cache efficient; a crucial requirement on modern
computer architectures.
To test the infrastructure of the new code we have implemented
Compton scattering as a simple scattering mechanism.
The first results are very promising in form of excellent
agreement with the theoretical prediction.
We note that a recent paper by \cite{bib:moderski2005}
provide an interesting test suite for various kind of particle-photon interactions
that can be tested in the future.
Merging of particles has not been satisfactorily implemented yet.
Parallelization of the code is also not complete yet, and it is necessary to obtain the
capability of performing truly large scale experiments.
In summary: work still has to be done before we can start to investigate
non-trivial astrophysical scenarios; nevertheless, solid progress has already been made.
\vspace{2ex}
This chapter has been written jointly by Christian Hededal and
Troels Haugb{\o}lle, reflecting the fact that the development
process of the next generation PIC code
has been highly team based. Essentially everybody has contributed
time and effort to every single source file of the code. It would
not make sense to write the chapter separately, essentially
repeating each other and reusing the same figures.
\section{Introduction} \label{s1}
The most recent catalogue by Dias et~al. (2002) presents
data on 1537 Galactic open clusters. These are excellent
objects with which to probe the structure and evolution
of the Galactic disc. Since old and intermediate-age
open clusters cover the entire lifetime of the disc,
they allow us to trace the history of chemical enrichment
and the formation of the different Galactic populations.
For example, Salaris, Weiss \& Percival (2004) determined the ages
of 71 old open clusters and concluded that the thin
and thick disc started to form at the same time.
Carraro et al. (2004) observed open clusters from the
outskirts of the disc and suggested that the outer part of the Galactic
disc underwent a completely different evolution compared
to the inner disc. Based on a sample of 39 open clusters,
Friel et~al. (2002) noted a slight correlation between metallicity
and age for clusters in the outer disc beyond 10 kpc.
Recently, intermediate-age open clusters were also found towards
the Galactic Centre direction (Carraro, M\'endez \& Costa 2005),
where star clusters are not expected to survive for long due to
the higher-density environment.
The observations of the clusters presented in this
paper were conducted as a part of a photometric survey
of a large sample of distant open clusters. The goal
of the project was an identification of the oldest open
clusters in order to obtain a lower limit on the age of the
Galactic disc (Kaluzny \& Rucinski 1995, see also references therein).
In this paper, we present CCD photometry of three faint
open clusters: NGC 2425, Haffner~10 and Czernik~29.
The equatorial and galactic coordinates of the cluster
centres are listed in Table~1.
\begin{table}
\centering
\begin{minipage}{200mm}
\caption{\small Equatorial and galactic coordinates of target
clusters}
{\small
\begin{tabular}{lcccc}
\hline
Name & RA(2000.0) & Dec(2000.0) & $l$ & $b$ \\
& [h:m:s] & [$^{\circ}:'$] & [$^{\circ}$] & [$^{\circ}$] \\
\hline
NGC 2425 & 07:38:22 & -14:52.9 & 231.50 & 3.31 \\
Haffner~10 & 07:28:36 & -15:21.9 & 230.80 & 1.01 \\
Czernik~29 & 07:28:23 & -15:24.0 & 230.81 & 0.95 \\
\hline
\end{tabular}}
\end{minipage}
\end{table}
\section{Observations and reductions} \label{s2}
The observations were performed at the Las Campanas Observatory,
using the 1.0-m Swope telescope equipped with a Tektronix
$1024~\times~1024$ CCD camera. The field of view was about
$11 \farcm 9~\times~11 \farcm 9$ with a scale of 0.695 arcsec/pixel.
The observations were conducted on two nights, Feb 20/21 and Feb 21/22,
1995. Two or three exposures of different length
were taken in each of the $BVI$ passbands. Preliminary processing of
the CCD frames was done with standard routines in the IRAF package.
Both dome and sky flat-field frames were obtained in each
filter. The average seeing was $1\farcs60$ and $2\farcs07$
on the first and second night, respectively.
We observed 49 standard stars from the three fields
(SA 98, Ru 149, PG 1323-086) listed by Landolt (1992), and 27 stars
from two fields (SA 98, Ru 149) during the two subsequent nights.
These standards were observed over airmasses from 1.07 to 1.76.
The following relations between the instrumental (lower case letters)
and the standard colours and magnitudes were adopted:
$$
v=2.561+V-0.019 \times (B-V)+0.15 \times X
\eqno(1)
$$
$$
b-v=0.214+0.931 \times (B-V)+0.12 \times X
\eqno(2)
$$
$$
v-i=-0.668+1.019 \times (V-I)+0.09 \times X
\eqno(3)
$$
where $X$ is the airmass. The instrumental photometry was extracted
with the DAOPHOT/ALLSTAR V2.0 (Stetson 1987) package. Aperture
photometry of standards was obtained with an aperture radius
of $8\farcs34$ (12 pixels). For stars from the cluster area we
obtained profile photometry. Appropriate aperture corrections
were derived preceding the transformation of instrumental photometry
to the standard system.
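In practice Eqs.~(1)--(3) are inverted to obtain the standard magnitudes and colours from the instrumental ones; a minimal sketch of that inversion (Python; the numerical example star is hypothetical):

```python
def standard_from_instrumental(v, bv, vi, X):
    """Invert Eqs. (1)-(3): given instrumental v, b-v, v-i at airmass X,
    return the standard V, B-V and V-I."""
    BV = (bv - 0.214 - 0.12 * X) / 0.931    # from Eq. (2)
    VI = (vi + 0.668 - 0.09 * X) / 1.019    # from Eq. (3)
    V = v - 2.561 + 0.019 * BV - 0.15 * X   # from Eq. (1)
    return V, BV, VI

# hypothetical star: these instrumental values correspond, via Eqs. (1)-(3),
# to V=15.0, B-V=0.5, V-I=0.6 at airmass X=1.2
V, BV, VI = standard_from_instrumental(17.7315, 0.8235, 0.0514, 1.2)
```

Since Eqs.~(2) and (3) do not involve $V$, the colours can be solved first and then substituted into Eq.~(1).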
We applied the following procedure to a set of images
obtained for a given cluster. First, objects with unusually large
errors of photometry, i.e. with large values of CHI and SHARP
parameters, returned by DAOPHOT, were rejected. Also very bright,
overexposed stars were removed from further analysis. In practice,
the percentage of rejected stars ranged for a given frame
from 5 to 15 percent. Second, the coordinates of all objects
were transformed to a common pixel grid defined by the reference image
(always the longest exposure in the $V$ filter). We then corrected
the photometry for the zero-point offset in each filter and
created a master list of all objects. The instrumental magnitudes
were calculated as weighted averages of magnitudes measured
on individual frames.
\section{NGC 2425} \label{s3}
The open cluster NGC 2425 (C 0736-147) lacks any published
study. We observed it on 1995 Feb 20/21.
Details of the observations are presented in Table 2.
\begin{table}
\centering
\begin{minipage}{200mm}
\caption{\small Journal of observations of NGC 2425}
{\small
\begin{tabular}{lccccccc}
\hline
UT Date & Filter & Exp. & Airmass & Seeing \\
Feb 1995 & & [sec] & & [arcsec] \\
\hline
21.092 & $I$ & 5 & 1.031 & 1.70 \\
21.094 & $I$ & 15 & 1.031 & 1.47 \\
21.098 & $I$ & 100 & 1.031 & 1.56 \\
21.101 & $V$ & 15 & 1.032 & 1.70 \\
21.102 & $V$ & 100 & 1.032 & 1.63 \\
21.104 & $V$ & 300 & 1.032 & 1.51 \\
21.110 & $B$ & 30 & 1.034 & 1.64 \\
21.111 & $B$ & 150 & 1.035 & 1.51 \\
21.114 & $B$ & 500 & 1.037 & 1.64 \\
\hline
\end{tabular}}
\end{minipage}
\end{table}
The cluster centre was found by calculating the density centre
for stars inside a circle of radius of 150 pixels, using an
iterative procedure similar to that described by Mateo \& Hodge (1986).
In Fig.~1 we show the density profile for all stars brighter
than $V=21.9$. The average stellar density was
calculated in successive 27.8 arcsec (40 pixels) wide annuli
around the cluster centre. The resulting density
profile is rather noisy due to small number statistics.
The smooth
solid line represents a fit by the King (1962) profile:
$$
f(r) = \frac{f_0}{1+(r/r_c)^2} + f_b
\eqno(4)
$$
where $f_0$ is the central density, $r_c$ is the radius of the cluster
core and $f_b$ is the background density.
For NGC 2425 we found:
$f_0=0.0056 \pm 0.0008$ stars/arcsec$^2$,
$r_c=154 \pm 41$ arcsec and $f_b=0.0039 \pm 0.0004$ stars/arcsec$^2$.
We estimate that the cluster contains about 850 stars with $V<21.9$.
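A sketch of such a fit (Python with NumPy; since Eq.~(4) is linear in $f_0$ and $f_b$ for a fixed $r_c$, a grid search over $r_c$ with linear least squares suffices; the synthetic noiseless profile below merely illustrates the method, it is not our data):

```python
import numpy as np

def king(r, f0, rc, fb):
    """King (1962) profile, Eq. (4)."""
    return f0 / (1.0 + (r / rc)**2) + fb

def fit_king(r, f, rc_grid):
    """For each trial core radius rc the model is linear in (f0, fb);
    solve that linear least-squares problem and keep the best rc."""
    best = None
    for rc in rc_grid:
        A = np.column_stack([1.0 / (1.0 + (r / rc)**2), np.ones_like(r)])
        coef, *_ = np.linalg.lstsq(A, f, rcond=None)
        chi2 = np.sum((A @ coef - f)**2)
        if best is None or chi2 < best[0]:
            best = (chi2, coef[0], rc, coef[1])
    return best[1:]   # (f0, rc, fb)

# synthetic, noiseless profile with the NGC 2425 best-fit values
r = np.linspace(10.0, 400.0, 40)        # annulus radii [arcsec]
f = king(r, 0.0056, 154.0, 0.0039)
f0, rc, fb = fit_king(r, f, np.arange(50.0, 300.0, 1.0))
```

On real, noisy density profiles a general non-linear fitter would be used instead, but the separation into a linear and a non-linear parameter keeps this sketch simple and robust.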
\begin{figure}
\vspace{5.9cm}
\special{psfile=fig1.ps hoffset=-10 voffset=-67 vscale=42 hscale=42}
\caption{\small Surface density distribution of stars with $V<21.9$
from NGC 2425 field. The King (1962) profile is fit to the data
and presented as the smooth solid line.}
\label{fig1}
\end{figure}
\begin{figure}
\vspace{5.9cm}
\special{psfile=fig2.ps hoffset=-10 voffset=-67 vscale=42 hscale=42}
\caption{\small CMDs for the stars located within radius
$r<0.7~r_c\approx108\arcsec$ from the centre of NGC 2425}
\label{fig2}
\end{figure}
Colour-magnitude diagrams for the field of NGC 2425
are shown in two panels of Fig.~2. We plot only the innermost
stars in order to bring out the structures, in particular the main sequence
and the red clump area. We can conclude that the morphology
of CMDs for NGC 2425 is typical of intermediate-age
open clusters. We note the presence of the red giant branch clump
at $V \approx 14.1$, $B-V \approx 1.37$ and $V-I \approx 1.31$.
One can also distinguish the main sequence turn-off point
at $V \approx 16.5$, $B-V \approx 0.68$ and $V-I \approx 0.84$.
We suspect that there are candidates for blue stragglers among
stars above the turn-off.
Using theoretical isochrones published by Girardi et~al.
(2000), we are able to estimate the basic parameters of the cluster.
We fit isochrones with two different chemical compositions:
$(Z,Y)=(0.008,0.25)$ and $(0.019,0.273)$. In Fig.~3 we show
CMDs with superimposed isochrones of age of 2.5~Gyr and lower metal content,
$Z=0.008$. The shape of the main sequence for $16<V<20$ is well reproduced,
while the blue and red hooks are not clearly seen. Comparing
other isochrones to the data we estimated the uncertainty of the age
as 0.5~Gyr. By shifting the isochrones, we derived an apparent distance
modulus of $(m-M)_V=13.5$ and reddenings of $E(B-V)=0.29$ and
$E(V-I)=0.34$. These values are lower than the upper limits of
reddening $E(B-V)=0.47$ and $E(V-I)=0.60$ for $(l,b)=(231.5,+3.3)$
extracted from the maps of Schlegel, Finkbeiner \& Davis (1998). We assume
the error of the distance modulus as 0.1~mag, and the error
of the reddening $E(B-V)$ as 0.05~mag.
Fig.~4 presents the CMDs of NGC 2425 with superimposed isochrones
of the same age, as in Fig.~3, but with solar metal abundance,
$Z=0.019$. The fit is worse just above the turn-off point while the
RGB branch is reproduced quite well. In this case we established:
$(m-M)_V=13.3 \pm 0.1$, $E(B-V)=0.20 \pm 0.05$, $E(V-I)=0.26 \pm 0.05$.
Adopting $R_V=3.2$ and taking into account the results for both
metallicities we estimated the minimum value of the
heliocentric distance of the cluster as $d_{min}=2.9$~kpc
and the maximum value $d_{max}=3.8$~kpc.
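The distance follows from the standard relations $(m-M)_0=(m-M)_V-R_V\,E(B-V)$ and $d=10^{(m-M)_0/5+1}$~pc; a quick illustrative check with the nominal $Z=0.019$ values (Python):

```python
def heliocentric_distance(mu_V, ebv, R_V=3.2):
    """Distance in kpc from the apparent distance modulus (m-M)_V and
    the reddening E(B-V), using A_V = R_V * E(B-V)."""
    mu0 = mu_V - R_V * ebv                   # absorption-corrected modulus
    return 10.0**(mu0 / 5.0 + 1.0) / 1000.0  # pc -> kpc

d = heliocentric_distance(13.3, 0.20)        # nominal Z=0.019 fit, ~3.4 kpc
```

Propagating the quoted uncertainties in $(m-M)_V$ and $E(B-V)$ through the same relation spans the distance range quoted in the text.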
\begin{figure}
\vspace{5.9cm}
\special{psfile=fig3.ps hoffset=-10 voffset=-67 vscale=42 hscale=42}
\caption{\small Left panel: $V/B-V$ diagram
for the cluster NGC 2425, as compared to Girardi et~al. (2000)
isochrone of age $2.5 \times 10^9$~yr and metallicity $Z=0.008$
(solid line). The fit was obtained by adopting a distance
modulus of $(m-M)_V=13.5$ and reddening of $E(B-V)=0.29$.
Right panel: field-star corrected $V/V-I$ diagram
for NGC 2425 cluster, as compared to Girardi et~al. (2000)
isochrone of age $2.5 \times 10^9$~yr for the metallicity $Z=0.008$
(solid line). The fit was obtained by adopting a distance
modulus of $(m-M)_V=13.5$ and reddening of $E(V-I)=0.34$.}
\label{fig3}
\end{figure}
\begin{figure}
\vspace{5.9cm}
\special{psfile=fig4.ps hoffset=-10 voffset=-67 vscale=42 hscale=42}
\caption{\small The same as Fig. 3, but for $Z=0.019$.
The fit in the left panel was obtained by adopting a distance
modulus of $(m-M)_V=13.3$ and reddening of $E(B-V)=0.20$,
whereas the fit in the right panel by adopting
$(m-M)_V=13.3$ and $E(V-I)=0.26$.}
\label{fig4}
\end{figure}
\section{Haffner~10 and Czernik~29} \label{s4}
The clusters Haffner~10 (OCl 594, C 0726-152) and Czernik~29
(OCl 595, C 0726-153) were identified by Haffner (1957)
and Czernik (1966), respectively. Later Fitzgerald \& Moffat (1980)
studied the clusters based on photographic $UBV$ plates.
However, due to rather low photometric
limits ($B=18$, $V=15.1$) they underestimated the number
of stars in the clusters ($68\pm12$ in Haffner~10 and $56\pm11$ in
Czernik~29 based on $B$ filter data) and the age of Haffner~10 (0.2 Gyr).
We observed these clusters on 1995 Feb 20/21
and 21/22. The list of the CCD frames used is given in Table 3.
The angular separation between the clusters is only $3 \farcm 8$,
and both of them were contained within each frame.
\begin{table}
\centering
\begin{minipage}{200mm}
\caption{\small Journal of observations of Haffner~10}
{\small
\begin{tabular}{lccccccc}
\hline
UT Date & Filter & Exp. & Airmass & Seeing \\
Feb 1995 & & [sec] & & [arcsec] \\
\hline
21.127 & $I$ & 10 & 1.054 & 1.37 \\
21.129 & $I$ & 60 & 1.057 & 1.81 \\
21.130 & $I$ & 180 & 1.059 & 1.50 \\
21.135 & $V$ & 20 & 1.067 & 1.43 \\
21.136 & $V$ & 100 & 1.069 & 1.63 \\
21.138 & $V$ & 360 & 1.073 & 1.63 \\
21.144 & $B$ & 40 & 1.084 & 1.69 \\
21.146 & $B$ & 150 & 1.089 & 1.63 \\
21.149 & $B$ & 500 & 1.093 & 1.72 \\
22.056 & $V$ & 20 & 1.046 & 2.36 \\
22.058 & $B$ & 40 & 1.043 & 2.30 \\
22.060 & $I$ & 10 & 1.041 & 1.86 \\
22.062 & $I$ & 300 & 1.040 & 2.03 \\
22.067 & $V$ & 600 & 1.036 & 2.00 \\
22.077 & $B$ & 900 & 1.031 & 1.88 \\
\hline
\end{tabular}}
\end{minipage}
\end{table}
As for the previous cluster, we present density histograms
for Haffner~10 and Czernik~29 (Fig.~5). The histograms
include stars with $V<21.7$. We determined the centres
of the clusters and calculated the average stellar density
in successive 27.8 arcsec (40 pixels) annuli around the centre,
excluding regions within the $1 \farcm 9$ area around
the neighbouring cluster. This radius is equal to half
the angular distance between the clusters,
which are of comparable size (from Fitzgerald \& Moffat 1980).
For Haffner~10 the best fit of King's profile results
in the following coefficients:
the central density $f_0=0.0123 \pm 0.0020$ stars/arcsec$^2$,
the radius of the cluster core $r_c=65 \pm 11$ arcsec and the
background density $f_b=0.0051 \pm 0.0002$ stars/arcsec$^2$.
We established the number of cluster stars with magnitude $V<21.7$
to be approximately 600 objects. This was determined by adopting
the above value of $f_b$. For Czernik~29 we found:
the central density $f_0=0.0066 \pm 0.0013$ stars/arcsec$^2$,
the radius of the cluster core $r_c=78 \pm 17$ arcsec and the
background density $f_b=0.0053 \pm 0.0002$ stars/arcsec$^2$.
The number of cluster stars with magnitude $V<21.7$
is approximately 420 objects.
\begin{figure}
\vspace{8.3cm}
\special{psfile=fig5.ps hoffset=-10 voffset=0 vscale=42 hscale=42}
\caption{\small Surface density distributions of stars with $V<21.7$
from the field of Haffner~10 and Czernik~29}
\label{fig5}
\end{figure}
In Fig.~6 we present $V/B-V$ and $V/V-I$ colour-magnitude diagrams
for Haffner~10. The morphology of CMDs is typical
for intermediate-age open clusters.
Interestingly, the red clump is represented by a tilted
branch. Its width reaches 0.20 in the $B-V$ and 0.22 in the $V-I$.
The branch is elongated in the direction parallel to the standard
reddening vector. This indicates significant differential
absorption in the direction of the cluster.
The blue turn-off point is located at $V \approx 17.3$,
$B-V \approx 0.92$ and $V-I \approx 1.18$, while
the blue end of the red clump is located at $V \approx 14.8$,
$B-V \approx 1.41$ and $V-I \approx 1.58$.
\begin{figure}
\vspace{5.9cm}
\special{psfile=fig6.ps hoffset=-10 voffset=-67 vscale=42 hscale=42}
\caption{\small CMDs for the stars located within radius
$r<1.2~r_c\approx78\arcsec$ from the centre of Haffner~10}
\label{fig6}
\end{figure}
Using a set of theoretical isochrones published by
Girardi et~al. 2000 we estimated some cluster parameters,
in particular the age and the heliocentric distance.
As in the case of NGC 2425, we adopted isochrones for
two different metallicities. Figures 7 and 8 show
CMDs with superimposed isochrones of 2.5~Gyr for metallicity
$Z=0.008$ and of 2.0~Gyr for $Z=0.019$, respectively.
The fit for lower metal content seems to be more precise.
However, we note that it is difficult to establish
the location of the blue and red hooks.
Trying to improve the fit and comparing other isochrones we estimated
the error of the age as 0.5~Gyr. By shifting the isochrones for both
metallicities, we obtained the value of the apparent distance modulus
$(m-M)_V$ between 14.3 and 14.7, the reddenings $0.41<E(B-V)<0.64$
and $0.58<E(V-I)<0.78$. These values are consistent with the total
line-of-sight reddening, $E(B-V)=0.89$ and $E(V-I)=1.14$,
for $(l,b)=(230.8,+1.0)$ derived from the maps of
Schlegel, Finkbeiner \& Davis (1998), which provides an upper limit. We estimated that the heliocentric
distance $d$ to Haffner~10 is in the range between 3.1 and 4.3 kpc.
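The distance follows from the apparent modulus via $(m-M)_0=(m-M)_V-R_V\,E(B-V)$ and $d\,[\mathrm{pc}]=10^{(m-M)_0/5+1}$. A minimal sketch of this arithmetic; the value $R_V=3.1$ and the pairing of the quoted limits are our own assumptions, so the result only roughly brackets the quoted 3.1--4.3 kpc range.

```python
def heliocentric_distance_kpc(mM_V, E_BV, R_V=3.1):
    """Distance from the apparent V-band distance modulus.
    A_V = R_V * E(B-V); (m-M)_0 = (m-M)_V - A_V; d[pc] = 10**((m-M)_0/5 + 1).
    R_V = 3.1 is an assumed standard extinction ratio.
    """
    mM0 = mM_V - R_V * E_BV
    return 10.0 ** (mM0 / 5.0 + 1.0) / 1000.0

# Haffner 10 limits quoted above; pairing the extremes is our own choice.
print(heliocentric_distance_kpc(14.3, 0.64),   # smallest distance (kpc)
      heliocentric_distance_kpc(14.7, 0.41))   # largest distance (kpc)
```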
Figure 9 shows $V/B-V$ and $V/V-I$ colour-magnitude diagrams
for Czernik~29. The cluster lacks the red giant clump, which makes
estimation of its age and distance difficult. The main sequence
has brighter stars than the main sequence of Haffner~10;
therefore Czernik~29 is either younger and/or closer
than Haffner~10.
\begin{figure}
\vspace{5.9cm}
\special{psfile=fig7.ps hoffset=-10 voffset=-67 vscale=42 hscale=42}
\caption{\small Left panel: $V/B-V$ diagram
for the cluster Haffner~10, as compared to Girardi et~al. (2000)
isochrone of age $2.5 \times 10^9$~yr for the metallicity $Z=0.008$
(solid line). The fit was obtained by adopting a distance
modulus of $(m-M)_V=14.6$ and reddening of $E(B-V)=0.59$.
Right panel: field-star corrected $V/V-I$ diagram
for the Haffner~10 cluster, as compared to Girardi et~al. (2000)
isochrone of age $2.5 \times 10^9$~yr for the metallicity $Z=0.008$
(solid line). The fit was obtained by adopting a distance
modulus of $(m-M)_V=14.6$ and reddening of $E(V-I)=0.73$.}
\label{fig7}
\end{figure}
\begin{figure}
\vspace{5.9cm}
\special{psfile=fig8.ps hoffset=-10 voffset=-67 vscale=42 hscale=42}
\caption{\small CMDs for the cluster Haffner~10 with
superimposed isochrones of age $2.8 \times 10^9$~yr
for the metallicity $Z=0.019$.
The fit in the left panel was obtained by adopting a distance
modulus of $(m-M)_V=14.4$ and reddening of $E(B-V)=0.46$,
whereas the fit in the right panel by adopting
$(m-M)_V=14.4$ and $E(V-I)=0.63$.}
\label{fig8}
\end{figure}
\begin{figure}
\vspace{5.9cm}
\special{psfile=fig9.ps hoffset=-10 voffset=-67 vscale=42 hscale=42}
\caption{\small CMDs for the stars located within radius
$r<1.2~r_c\approx94\arcsec$ from the centre of Czernik~29}
\label{fig9}
\end{figure}
\section{Summary} \label{s5}
\begin{table*}
\begin{center}
\caption{\small Observed and determined parameters of analysed
open clusters}
\vspace{0.4cm}
\begin{tabular}{lcccccc}
\hline
Name & $r_c$ & N & $(m-M)_V$ & $E(B-V)$ & $d$ & Age \\
& [arcsec] & & [mag] & [mag] & [kpc] & [Gyr] \\
\hline
NGC 2425 & 155 & 850 & 13.2--13.6 & 0.15--0.34
 & 2.9--3.8 & $2.5 \pm 0.5$ \\
Haffner~10 & 65 & 600 & 14.3--14.7 & 0.41--0.64
 & 3.1--4.3 & $2.5 \pm 0.5$ \\
Czernik~29 & 78 & 420 & - & - & - & - \\
\hline
\end{tabular}
\end{center}
\end{table*}
We have presented $BVI$ photometry for three poorly studied open
clusters from the southern hemisphere. For each cluster we calculated
surface density profile and established the probable number of cluster
members. The analysis of the derived colour-magnitude diagrams
allowed us to estimate basic parameters like the age and distance.
The results are summarized in Table 4. We found that the clusters
NGC 2425 and Haffner~10 are intermediate-age open clusters.
Probably due to the relatively large number of member stars,
the clusters still constitute physical systems.
We notice that the age as well as the heliocentric distance
estimations of the two clusters are very similar, though
the extinctions are quite different.
Interestingly, the angular separation between the clusters is only
$2 \fdg 4$, which gives an approximate linear separation of
150~pc, assuming an average distance of 3.65~kpc.
This is about 50 and 100 times larger than the cluster
core radius of NGC 2425 and Haffner~10, respectively.
We should note that the separation is larger than for other double
systems of clusters presented in the literature
(Subramaniam et~al. 1995, Pietrzy\'nski \& Udalski 2000).
However, there is a possibility that the two clusters
constituted a pair of open clusters in the past.
Based on the comparison of the CMDs with the theoretical
isochrones we were not able to firmly establish the metallicity
of either of the clusters. We may only suggest that the metal abundance
is comparable for both clusters, which may indicate a common origin.
A spectroscopic determination of the metallicity and radial
velocities would help to verify this hypothesis.
\section*{Acknowledgments}
We would like to thank Greg Stachowski for remarks on
the draft version of this paper.
PP was supported by the grant {\bf 1~P03D 024 26} from the
State Committee for Scientific Research, Poland.
\section{\label{sec:intro}Introduction}
The stress distribution in dry granular media has been studied for
more than a century. The German engineer Janssen studied the apparent
weight at the bottom of a silo as a function of its filling
height \cite{pap:Janssen1895}.
Janssen found that the pressure at the bottom of a container of
granular material increases linearly with small filling heights,
but approaches a constant level exponentially slowly
for large filling heights.
That the measured weight at the bottom is less than the
total weight of grains is referred to as a screening effect.
It is well known that the screening effect is due to
grain--wall friction
and to the way stress distributes in a granular ensemble \cite{book:Duran99}.
Janssen's mathematical expression for this, the Janssen law, compares
surprisingly well to experiments \cite{pap:Arroyo-Cetto03,pap:Vanel00},
in spite of its crude assumptions regarding friction and
stress distribution \cite{pap:deGennes99}.
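Janssen's law for the vertical stress at depth $z$ in a silo of diameter $D$ reads $p(z)=\rho g\lambda\,[1-\exp(-z/\lambda)]$, with screening length $\lambda=D/(4\mu_w K)$, where $\mu_w$ is the grain--wall friction coefficient and $K$ the ratio of horizontal to vertical stress. A minimal sketch of the resulting apparent mass at the bottom; the values of $\mu_w$ and $K$ are illustrative assumptions.

```python
import math

def janssen_apparent_mass(h, D, rho, mu_w=0.5, K=0.5, g=9.81):
    """Apparent mass (kg) at the bottom of a cylinder of diameter D (m)
    filled to height h (m) with a medium of bulk density rho (kg/m^3).
    mu_w (wall friction) and K (stress ratio) are illustrative assumptions.
    """
    lam = D / (4.0 * mu_w * K)                       # screening length (m)
    p = rho * g * lam * (1.0 - math.exp(-h / lam))   # vertical stress at base
    return p * math.pi * D ** 2 / 4.0 / g            # supported mass

# For h << lam the full mass is supported; for h >> lam the apparent
# mass saturates at rho * (pi D^2 / 4) * lam -- the screening effect.
```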
Over the last decade, various aspects of the stress distribution in
static granular media have been studied.
Experiments have shown that the stress distribution
is sensitive to the deposition history \cite{pap:Vanel99}, the shape
and size distribution of grains \cite{pap:Geng01a}, elastic properties
of the base \cite{pap:Brockbank97} and grains \cite{pap:Erikson02},
and that an exponential size distribution of forces is found at
the bottom of a container for forces larger than the average
\cite{pap:Liu95,pap:Lovoll99}.
The stress distribution in dynamic systems has been investigated
in pushed columns of granular media inside a
cylinder \cite{pap:Arroyo-Cetto03,pap:Bertho03,pap:Ovarlez03a,
pap:Ovarlez03b}
by measuring the normal force at the bottom for constant driving velocities.
At small velocities, the measured force has a stick--slip
behavior \cite{pap:Nasuno98,pap:Ovarlez03b} that is related
to aging of the grain--wall friction due to
capillary condensation and shear strengthening of the contacts at the
walls \cite{pap:Ovarlez03b}.
These dynamic systems consist of elastic particles, and the
time dependence studied relates to properties other than the particle
rheology.
In Nature, and in many technological processes, slowly compacted
or sheared systems are dominated by the deformation of particles.
The time dependence in these systems is mainly given by the plastic
properties of the grains.
Here, the results from experiments on granular media
consisting of plastically deforming grains in a cylinder are presented.
This system deformed slowly under its own weight, compacting 10\%
in a week, while the normal force at the bottom (the {\it apparent mass}
\cite{misc:ma}) was measured.
The initial expectation was that the system would show a granular
Janssen type stress
distribution in the initial stage, but that due
to the viscous rheology of the grains a stress distribution close
to the hydrostatic would develop.
Thus, the apparent mass was expected to increase.
Instead, the apparent mass developed unexpected (non-harmonic) oscillations,
resembling the stick--slip behavior observed in hard granular
media \cite{pap:Nasuno98,pap:Ovarlez03b}, except that it decreased
initially, and the time scale of a ``slip'' could be up to an hour.
No overall increase was observed in the apparent mass after a week
of compaction.
The strain development and wall contact dynamics were also studied during
the compaction, both showing behavior related to the weight oscillations.
The wall interaction between grains and cylinder was varied significantly in
a few experiments, and proved crucial to the oscillations, as these
disappeared when wall friction was increased or decreased.
The experiments and results are described in the following two sections,
while some proposed mechanisms for the oscillations are discussed
in section \ref{sec:discussion}.
We propose that the observed oscillations are due to two competing effects:
Grain--wall interaction opposing motion
and the slow flow and relaxation of the
grains inducing motion.
\section{\label{sec:exp}Experiments}
We performed 30 experiments in which an ensemble of $N$ deformable grains
were left to compact in a Plexiglas cylinder of diameter $D$.
The system was studied in several ways, but mainly by measuring the
apparent mass $m_a$ at the bottom of the cylinder in order to
follow the overall evolution of the stress distribution in the
compacting ensemble.
A Mettler PM4800 balance was used to measure the apparent mass.
This balance operates by an induction mechanism that keeps the
vertical position of the measurement area constant \cite{misc:Mettler},
and thus does not affect the compaction procedure.
The weight was measured
to a precision of 0.03 g, typically (2--3)$\cdot 10^{-4}$
of the total mass of the grains.
The cylindrical container was mounted outside of the measurement
area.
Spherical grains were prepared manually from Play-Doh (Hasbro International
Inc., UK) to a diameter $d=(8.8\pm 0.2)$ mm, and poured into the cylinder
approximately ten at a time.
The initial packing fractions were in the range $c=0.5$--0.6.
The material is viscous \cite{misc:visc} over
a large range of strain rates, $\dot{\epsilon}=(10^{-2}$--$10^{-6})$
s$^{-1}$, with a viscosity of $\mu=3\cdot 10^5$ Pa\,s.
A schematic illustration of the setup is shown in Fig. \ref{fig:setupres}(a)
along with the typical result of the observed weight as a function of time.
The measured apparent mass $m_a$ presented in Fig. \ref{fig:setupres}(b)
has been normalized by the total mass $m$ of the grains in the cylinder.
The apparent mass was found to oscillate in a quasi periodic manner.
The period depended on details of the packing, and could increase or
decrease slightly over the duration of each experiment.
\begin{figure}[floatfix]
\epsfig{file=SetupRes.eps,width=8.3cm}
\caption{(a) Schematic illustration of the setup. Ductile grains
were filled to a height $h$ in a cylinder and left to compact while
the apparent mass $m_a$ at the bottom was measured. (b) A typical
recording of the apparent mass (shown normalized by the total mass
$m$ of the grains) as a function of time.}
\label{fig:setupres}
\end{figure}
The filling height $h(0)$ at $t=0$ was varied between 1 and 4 times the
cylinder diameter, and the cylinder diameter was varied between 3.4 and
15 times the grain diameter.
In two experiments the total height $h(t)$ of the compacting granular
ensemble was measured using two different setups:
A camera was used in one experiment to take pictures of the compaction
process at various times.
Image analysis was then used to extract the height of the ensemble
based on the position of the uppermost 6 grains, to a
resolution of 46 $\mu$m.
In another experiment, the height in the middle of the granular column
was recorded by the use of a laser and two mirrors.
A small, light weight piston was placed on top of the central grains
and allowed to move only vertically.
A small mirror was hinged onto the top of the piston, its lower end
resting on the upper cylinder rim.
As the grains compacted, the mirror was tilted, reducing its angle $\phi$
to the horizontal plane.
A laser beam was reflected in the small mirror, and again in
another, larger, mirror so that the beam was visible as a point on the floor.
The position of this point was recorded manually with time, and
the height of the granular ensemble calculated to a precision of 3 $\mu$m.
The piston was positioned along the central axis of the container, and
followed the motion of the internal grains that did not touch the wall.
Figure \ref{fig:strain} illustrates the second strain measurement
method.
\begin{figure}[floatfix]
\epsfig{file=Setup.eps,width=8.3cm}
\caption{Illustration of the experimental setup for strain
measurement by the use of mirrors and laser; A balance (a) recorded the
apparent mass $m_a$ at the bottom of the cylinder.
The height of the packing
was measured as a function of time by a laser (b) beam that was reflected
in a small and a large mirror (c), onto a point on the floor $x$.
The position $x$ moved to the left as the angle $\phi$ between the
small mirror and the horizontal plane was reduced, following the
reduction of the height $h$ of the compacting grains.
The piston rested on grains that did not touch the walls, thus the
strain was measured along a central axis.}
\label{fig:strain}
\end{figure}
From the measurements of the total height the global strain, $\varepsilon$,
was found as $\varepsilon=1-h(t)/h(0)$.
The dynamics of the grain contacts at the cylinder wall was studied
using a camera (AstroCam, Capella, LSR Life Science Resources,
UK) in one experiment.
The camera had a spatial resolution of 2000$\times$3000 square pixels,
and 14 bit intensity resolution.
The contrast between the intensity inside and outside of a contact area
was within an 8 bit subset of the 14 bit dynamic range.
The rim of a contact was established within two pixels with the
spatial and intensity resolutions as specified.
The uncertainty that one extra rim of pixels introduced to the area
of a contact could be as high as 20\% for the smallest contact areas.
The precision of the center of mass position was, however, much better,
as it does not depend on the exact choice of thresholding for the
contact area.
The cylinder containing the ductile ensemble was placed in front of
two mirrors set at an angle of $72^\circ$ to each other.
The cylinder was reflected twice in each mirror, thus the camera view
held five versions of the cylinder (I--V), capturing it from all sides.
The grains' contacts to the wall were literally highlighted by shining
light down the Plexiglas wall of the cylinder.
The light only reflected out of the wall in areas where the difference
in refractive indices was smaller than that between Plexiglas and air,
thus the contacts between grains and wall were bright in contrast to the
pore space.
Figure \ref{fig:mirror}(a) illustrates the setup.
Each of the five cylinder images I-V (see Fig. \ref{fig:mirror})
was then `unwrapped' \cite{misc:Unwrap} and scaled according
to the geometry of the setup, then put together to form a continuous
image of the surface area of the cylinder.
An example of the resultant image is shown in Fig. \ref{fig:mirror}(b).
\begin{figure}[floatfix]
\epsfig{file=Mirror.eps,width=8.3cm}
\caption{(a) Schematic drawing of the setup for the measurement of
contact areas at the wall of the cylinder. Two mirrors at an angle of
$72^\circ$ to each other reflect the cylinder surface and the total area
can be extracted. Light emitting diodes were fitted into the top of the
cylinder wall to enhance the contrast between contact regions (white)
and regions of no contact (gray).
(b) The unwrapped \cite{misc:Unwrap} surface after
image treatment. Each of the five (I--V) cylinder images is scaled and
unwrapped before they are fitted in overlapping regions. The match is
only at the cylinder surface, which is
why the internal regions seem mismatched in some places.}
\label{fig:mirror}
\end{figure}
The spatial resolution in these images was 160 $\mu$m.
Images were recorded every 10 or 20 minutes for two days in order to capture
several oscillations.
A total of 90 contacts were recovered, and 79 of these were used in the
analysis.
The remaining 11 contacts were discarded because of some mismatch of
their area across boundaries between cylinder images.
An increase of the contact area of 70\% was observed during the 60
hours that images were recorded, 60\% during the first 20 hours
of compaction, and 10\% in the time interval $t\in[20,60]$ h.
A contact diameter was defined as $2\sqrt{A/\pi}$ for each contact
area $A$, and found as a function of time.
The average contact diameter, $d_c$, was found by first taking the
average value of each contact diameter over the series of time steps in
$t\in [20,60]$ h, and then averaging over this set,
$d_c=2.66\pm 0.02$ mm.
\section{\label{sec:results}Results}
The typical behavior of the apparent mass $m_a$ in an experiment
is as follows:
At time $t=0$ all grains have been poured into the cylinder.
The apparent mass increases slightly over a period of a
few minutes, reaches its maximum (often a global maximum)
and then starts to decrease.
Weight oscillations mostly initiate during this decrease.
When oscillations have developed, their minima decrease toward a
global minimum of $m_a$, before they increase slowly toward
a plateau.
The plateau varies between experiments in the range 45\%--88\% of the
total mass $m$ of the grains,
but is mostly in the range 60\%--80\%.
Figure \ref{fig:Pardef} illustrates the definition of the periods, intervals,
and amplitude of an oscillation, which will be referred to in the
following.
The period $\Delta t$ of one oscillation is defined as the time between
peaks $i$ and ($i+1$).
This period can be further divided into intervals $t_d$ and $t_i$ of
overall decrease and increase, respectively, of the apparent mass.
The point of minimum apparent mass between peaks $i$ and ($i+1$)
marks the transition between the regions $t_d$ and $t_i$, see
Fig. \ref{fig:Pardef}(b).
\begin{figure}[floatfix]
\epsfig{file=ParDefIII.eps,width=8.6cm}
\caption{(a) The evolution of the normalized apparent mass $m_a/m$ as
a function of time. (b) Closeup of one period. The total period $\Delta t$
is the time between two peaks. $t_d$ is the time through which the
apparent mass decreases in one period, while $t_i$ is the time of
increasing apparent mass. $\Delta a=\Delta m_a/m$ is the amplitude of an
oscillation. A subscript $n$ is added to these parameters when they
describe the specific values in oscillation number $n$.}
\label{fig:Pardef}
\end{figure}
The amplitude $\Delta a$ of one oscillation is the change in normalized
apparent mass $m_a/m$ during $t_i$.
The weight oscillations initially have small amplitudes, $\Delta a$, which
increase toward a maximum after typically 3--16 oscillations.
The amplitudes decrease somewhat after this maximum value:
in some experiments they nearly disappear after 100 hours, while in
others they are still at their maximum value after 200 hours.
The period $\Delta t$ of an oscillation also tends to increase
initially, and then stabilize at a constant value after
typically 17--80 hours.
In a few cases the period only stabilized after 150 hours, or not
at all in the time span of the particular experiment.
During $t_d$, irregularities larger than the typical noise level occur in
$m_a/m$ in most of the experiments, see Fig. \ref{fig:Walls}, curve B.
These irregularities are referred to as ``micro-slips'' in the following.
Technically, a micro-slip, $dm_a^+$, is defined as the increase of $m_a/m$
in time intervals where the time derivative of $m_a$ is positive.
The observed oscillations in the apparent mass measured under the
ductile granular ensemble were seen for all cylinder diameters and
filling heights, and proved very robust
to most perturbations applied to the system.
Varying the cylinder diameter and the filling height of grains did
not affect the amplitudes and periods in any consistent manner.
Amplitudes spanned 3\%--24\% of the total mass and the periods were
in the range $\Delta t=(0.7$--47) h when all experiments are
considered.
Two otherwise equal experiments could produce different characteristics,
in one case producing amplitudes of 6\% and 20\%, and periods of
3.8 and 7.3 hours, respectively.
The variability is probably due to details of the random packings that are
beyond experimental control.
\begin{figure}[floatfix]
\epsfig{file=Walls.eps,width=8.6cm}
\caption{The resulting apparent masses for different surface
treatments:
Curve A was the result of coating the walls with Teflon (low friction).
No coating of the Plexiglas wall resulted in curve B.
Gluing sandpaper to the wall to enhance surface friction gave
curve C. }
\label{fig:Walls}
\end{figure}
Changing the surface properties on the cylinder wall was the only
perturbation that dramatically affected the oscillations.
Figure \ref{fig:Walls} shows results from experiments in which
the surface friction was reduced by Teflon (curve A), and enhanced
by (400 grit) sandpaper (curve C).
In the following these experiments are referred to as `the Teflon-'
and `the sandpaper experiment', respectively.
No alteration was done to the surface of the wall in the experiment
that produced curve B, which, apart from the surface, was identical to
the Teflon- and sandpaper experiments.
As can be seen from the figure, reducing or enhancing the wall
friction both removed the weight oscillations.
By reducing the friction on the wall the apparent mass increased
slightly from the initial value (curve A, Fig. \ref{fig:Walls}).
Although Teflon reduced friction considerably, it did not remove
it fully, which would have made the apparent mass equal to the
total mass of the grains.
By increasing wall friction another behavior emerged, as the
apparent mass decreased, apparently toward
a constant level (curve C, Fig. \ref{fig:Walls}).
Curve C was fitted excellently by
\begin{equation}
m_a/m=(m_{a\infty}+\Delta m_a \exp{[-t/\tau_s]})/m\,,
\label{eq:FitSandpaper}
\end{equation}
where $m_{a\infty}=(7.027\pm 0.001)$ g, $\Delta m_a=(10.830\pm 0.005)$ g,
and $\tau_s$ is a characteristic time constant of $(13.52\pm 0.01)$ h.
The uncertainties are the likely error in the best fit parameters.
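With $m_{a\infty}$ fixed, Eq. \ref{eq:FitSandpaper} becomes linear in $\ln(m_a-m_{a\infty})$, so the remaining parameters follow from a straight-line fit. A minimal sketch on noise-free synthetic data built from the quoted best-fit values; treating $m_{a\infty}$ as known is a simplifying assumption (the actual fit had to determine it as well).

```python
import numpy as np

# Recover dm and tau in m_a(t) = m_ainf + dm*exp(-t/tau) by a linear
# fit of ln(m_a - m_ainf) against t.  Parameter values are those quoted
# for the sandpaper experiment; the data here are synthetic and noise-free.
m_ainf, dm, tau = 7.027, 10.830, 13.52     # g, g, hours

t = np.linspace(0.0, 60.0, 200)            # hours
ma = m_ainf + dm * np.exp(-t / tau)        # synthetic apparent mass (g)

slope, intercept = np.polyfit(t, np.log(ma - m_ainf), 1)
tau_fit = -1.0 / slope                     # recovers tau
dm_fit = np.exp(intercept)                 # recovers dm
print(tau_fit, dm_fit)
```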
Figure \ref{fig:DevFitSandpaper} shows the deviations between the data and
the fit, $(m_a - m_{a\infty}-\Delta m_a \exp{[-t/\tau_s]})/m$.
\begin{figure}[floatfix]
\epsfig{file=DevFitSandpaper.eps,width=8.6cm}
\caption{Deviations of the fit from the measured normalized apparent mass,
$m_a/m$, as a function of time $t$ for the sandpaper experiment, see
Eq. \ref{eq:FitSandpaper}.}
\label{fig:DevFitSandpaper}
\end{figure}
The exponential decay fits the observation exceptionally well, and the
deviations are within the range $[-0.0077, 0.0076]$ of the normalized data.
Nevertheless, micro-slips are easily recognizable above the experimental
noise, which is of the order of $2\cdot10^{-4}$ (0.03 g/142.6 g) of
the normalized apparent mass, while the slips are of the order
of $7\cdot10^{-3}$.
The experimental noise is not visible in the figure.
A fit has also been made to the decreasing part $t_d$ of each oscillation
$n$ in the curve of Fig. \ref{fig:Pardef}(a), which is consistent with
logarithmic decay with time:
\begin{equation}
\frac{m_{an}(t)}{m_{an}(0)}=\big(1-B_n\ln{[1+t/\tau_{dn}]}\big)\, .
\label{eq:FitOsc}
\end{equation}
Here, $m_{an}(t)$ is the apparent mass of the $n$-th oscillation, and
$m_{an}(0)$, $B_n$ and $\tau_{dn}$ are best fit parameters to the equation,
calculated for each oscillation $n$ separately.
$m_{an}(0)$ is the best fit value of $m_{an}$ at the start of the decrease,
based on the first 2.5 h of the decreasing $m_{an}$.
$\overline{m_{an}(0)}=76.4\,[-0.4,0.5]$ g is the median value of $m_{an}(0)$,
with the quartile deviations in brackets.
$\overline{B_n}=0.042\,[-0.002,0.004]$ is the median of the set of
dimensionless constants $B_n$, and
$\overline{\tau_{dn}}=0.16\,[-0.02, 0.03]$ h is the median and quartiles
of the set of $\tau_{dn}$.
Figure \ref{fig:FitOsc} shows the collapse of the weight data when
plotted according to Eq. \ref{eq:FitOsc}.
\begin{figure}[floatfix]
\epsfig{file=OscCollapse.eps,width=8.6cm}
\caption{The decreasing apparent mass of 17 oscillations in one experiment
(see Fig. \ref{fig:Pardef}(a))
plotted as a function of time according to Eq. \ref{eq:FitOsc}.
(The expression on the horizontal
axis is the time dependent part of Eq. \ref{eq:FitOsc}.)}
\label{fig:FitOsc}
\end{figure}
The limited dynamic range on both axes suggests that one can also fit
the data by a power law with a small exponent.
We have not found any theoretical arguments for the choice of one fit over
the other, thus the main observation is that the decreasing parts of the
oscillations have the same form over the first 2.5 hours,
with a time constant of $\overline{\tau_{dn}}=0.16$ h.
The sandpaper experiment gave an exponential decrease with time, as
does the initial decrease of $m_a$ during $t_d$ in an oscillation:
\begin{equation}
1-B\ln{(1+t/\tau_d)}\simeq 1-Bt/\tau_d\simeq
\exp{(-Bt/\tau_d)}=\exp{(-t/\tau_0)}\,,\quad t\ll\tau_d\,.
\end{equation}
The functional dependence is thus similar to the sandpaper at the
start of the decrease, with a time constant of $\tau_0=\tau_d/B=3.8$ h.
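This early-time equivalence is easy to verify numerically. A small consistency check (not a fit), using the median values quoted above:

```python
import numpy as np

# For t << tau_d: 1 - B*ln(1 + t/tau_d) ~ 1 - B*t/tau_d ~ exp(-t/tau0),
# with tau0 = tau_d / B.  Median values from the text.
B, tau_d = 0.042, 0.16            # dimensionless, hours
tau0 = tau_d / B                  # ~3.8 h

t = np.linspace(0.0, 0.05, 50)    # early times, t << tau_d
log_form = 1.0 - B * np.log1p(t / tau_d)
exp_form = np.exp(-t / tau0)
print(tau0, np.max(np.abs(log_form - exp_form)))   # difference stays small
```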
The deviation from the fit is plotted for one oscillation in Fig.
\ref{fig:DevOsc}.
Large deviations on the order of 2\% of the total mass (the micro-slips)
develop some time into $t_d$ (typically 3 hours in this experiment).
\begin{figure}[floatfix]
\epsfig{file=DevFitOsc.eps,width=8.6cm}
\caption{The deviations from the measured $m_a/m$ of its fit for the
decreasing part of one oscillation, as a function of time, see
Eq. \ref{eq:FitOsc}. }
\label{fig:DevOsc}
\end{figure}
All visible irregularities in this plot are above the noise level of
the measurements.
Taking the time derivative of $m_a$ as
$dm_a/(m\,dt)=(m_a(i+1)-m_a(i))/\{m\,[t(i+1)-t(i)]\}$, the set of positive
increments of $m_a/m$ (the micro-slips, $dm_a^+$) and negative increments ($dm_a^-$) were
found for each oscillation's $t_d$.
The micro-slips were removed from the decreasing part of the oscillations
by cumulative summation of $dm_a^-$, and the resulting data set fitted
by a power law,
\begin{equation}
\sum_n dm_a^-(t) -1=-(t/\tau_-)^\alpha\,.
\label{eq:Fitminus}
\end{equation}
The median and quartiles of the fitting parameters are $\alpha=0.55$
[-0.02, 0.02], and $\tau_-=130$ [-37, 24] h, see Fig. \ref{fig:FitMinus}.
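The slip-removal procedure and the fit of Eq. \ref{eq:Fitminus} can be sketched on synthetic data: increments of $m_a/m$ are split by sign, and the exponent of the slip-free decay is recovered from a straight-line fit in log--log coordinates. The slip rate and sizes below are illustrative assumptions.

```python
import numpy as np

# Synthetic decreasing signal: a power-law decay (median fit values from
# the text) plus sparse upward micro-slips of assumed size and rate.
alpha, tau_minus = 0.55, 130.0
t = np.linspace(0.01, 2.5, 500)            # hours (t > 0 for the log fit)
decay = 1.0 - (t / tau_minus) ** alpha     # smooth decreasing part
rng = np.random.default_rng(1)
slips = np.where(rng.random(t.size) < 0.02, 0.007, 0.0)
signal = decay + np.cumsum(slips)          # decay plus micro-slips

d = np.diff(signal)                        # finite-difference increments
dm_plus = np.clip(d, 0.0, None)            # micro-slips, dm_a^+
dm_minus = np.clip(d, None, 0.0)           # decreasing increments, dm_a^-

# Recover the exponent from the slip-free decay: log-log is a straight line.
slope, _ = np.polyfit(np.log(t), np.log(1.0 - decay), 1)
print(slope)                               # equals alpha for noise-free data
```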
\begin{figure}[floatfix]
\epsfig{file=Fitmaminus.eps,width=8.6cm}
\caption{The cumulative sum of decreasing $m_a$ during an oscillation
as a function of the scaled time $(t/\tau_-)^\alpha$, see Eq.
\ref{eq:Fitminus}.}
\label{fig:FitMinus}
\end{figure}
No characteristic time exists for a power law, since
$-\lambda^\alpha(t/\lambda\tau)^\alpha$ fits equally well for all $\lambda$.
The micro-slips $dm_a^+$ were found as a function of time in all the
oscillations of the experiment shown in Fig. \ref{fig:Pardef}, and
binned in 50 time intervals.
The sum of micro-slips was taken for each bin, and divided by the size
of the bin to produce the temporal evolution of micro-slip `activity'.
Figure \ref{fig:maPlus} presents the result.
As the $t_d$ were of different lengths for each period, not all bins
contain contributions from all oscillations.
The bullets present times that include data from all oscillations,
whereas a circle includes only data from $t_d$
long enough to contribute to the specific bin.
The line through the data is a linear fit, based on all but the first
bullet, given by $\sum_n dm_a^+(t_n)/t_n =A(t-t_0)$.
Here, $A=(0.076\pm0.005)$ h$^{-2}$, and $t_0=(0.6\pm 0.2)$ h.
The activity presented by the bullet at $t\sim0$ is probably
a remnant of the big slip that occurred at $t=0$, thus the micro-slip
activity is initiated at time $t_0$ after each big slip and grows
linearly until another big slip occurs.
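The binning behind Fig. \ref{fig:maPlus} amounts to a weighted histogram of slip times, with the per-bin sums divided by the bin width. A minimal sketch; the synthetic slip times and sizes are illustrative, not data from the experiment.

```python
import numpy as np

# 'Activity' = sum of micro-slip sizes per time bin, divided by bin width.
rng = np.random.default_rng(2)
t_slip = np.sort(rng.uniform(0.0, 5.0, 300))   # hours after a big slip
size = rng.uniform(0.002, 0.01, t_slip.size)   # slip sizes, dm_a^+/m

edges = np.linspace(0.0, 5.0, 51)              # 50 equal-width bins
per_bin_sum, _ = np.histogram(t_slip, bins=edges, weights=size)
activity = per_bin_sum / np.diff(edges)        # slip mass per unit time
```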
\begin{figure}[floatfix]
\epsfig{file=Fitmaplus2.eps,width=8.6cm}
\caption{The micro-slip `activity' as a function of time after each
big slip. The activity is found as the sum of micro-slips, $dm_a^+/m$,
from all oscillations in Fig. \ref{fig:Pardef}, binned in times $t_n$
and normalized by the width of the bin.}
\label{fig:maPlus}
\end{figure}
We could not find a model with few parameters that would fit the
`Teflon' results (curve A in Fig. \ref{fig:Walls}) due to the
complex initial evolution.
This curve also shows some micro-slips, of size $1.5\cdot10^{-3}$,
larger than the noise level of $4\cdot10^{-4}$.
The measurements of the height of the system as function of time
revealed that the vertical motion occurs in steps.
This was seen in both strain experiments, and is shown in
Fig. \ref{fig:strains}(a) and (c).
Figures \ref{fig:strains}(b) and (d) show the simultaneous
measurements of the normalized apparent mass, $m_a(t)/m$.
\begin{figure}[floatfix]
\epsfig{file=StrainsII.eps,width=8.6cm}
\caption{Details of the strain as a function of time, measured in two
experiments compared to the
weight $m_a$.
(a) The global strain, $\varepsilon$, measured with 3 $\mu$m
resolution (see Fig. \ref{fig:strain}) as a function of time.
(b) The normalized apparent mass, $m_a/m$, as a function of time for
the experiment in (a).
(c)$\varepsilon$ measured with 46 $\mu$m resolution by the high
resolution camera.
(d) The apparent mass as a function of time for the experiment in (c).}
\label{fig:strains}
\end{figure}
From the experiment with 3 $\mu$m resolution, the minimum and maximum
compaction velocities of the central part of the cylinder were found
to be $5.4\cdot 10^{-9}$ m/s and $7\cdot 10^{-8}$ m/s, respectively.
The maximum acceleration, occurring at the start of a compaction step,
was $1\cdot 10^{-11}$ m/s$^2$.
Comparing the region of decreasing $m_a$ of Fig. \ref{fig:strains}(b)
to the strain in (a), a small but visible vertical movement
occurs along the central axis of the packing during the weight decrease.
The main increase of strain during one oscillation (that is, the step)
takes place within the region in which the apparent mass increases
from its minimum to its maximum.
Unfortunately, the limited resolution of the strain measurements in
Fig. \ref{fig:strains}(c) prevented a detailed comparison
between the strain evolution of the 6 uppermost grains and the
apparent mass.
It is evident from this measurement, however, that the global strain
motion is directly correlated with the changes in the apparent mass.
A compaction velocity of the uppermost grains of (0.6--3)$\cdot 10^{-9}$
m/s was found during $t_d$, and 4$\cdot 10^{-8}$ m/s during $t_i$.
The dynamics of the wall contacts were studied in one
experiment as described in section \ref{sec:exp}.
Having found the `unwrapped', properly scaled surface of the cylinder
(see Fig. \ref{fig:mirror}), we obtained high-contrast images of the contacts.
The development of the
area and center of mass position of each contact was followed through
the experiment.
The weight oscillations correlated strongly with the contacts' center
of mass motion, while no such correlation was found with the changes
in contact area.
The contacts were seen to move ``simultaneously'', that is, within the
temporal resolution of the images, which means they all slipped
within a period of 15--20 minutes, during $t_i$.
Figure \ref{fig:Ydisp} shows the cumulative distribution $P(s>\Delta y)$
of vertical contact displacement $\Delta y$ between two consecutive
images.
The contact displacement is normalized by the average contact diameter
$d_c$.
Each curve corresponds to the distribution in one time step of the experiment.
Curves A present the motion during slips, while curves B are the motion
in time steps between slips.
The gray band through B spans the average $\pm$ the standard deviation
of vertical motion.
\begin{figure}[t]
\epsfig{file=CumSumDY2.eps,width=8.6cm}
\caption{The cumulative distribution $P(s>\Delta y)$ as a function of
normalized vertical contact displacement $\Delta y/d_c$ between two
consecutive images. Motion
downward has positive values of $\Delta y$.
Curves A show the motion during slips, while curves B present the remaining
movement between time steps. The gray region through B covers the average value
taken at each 1/79 interval of $P$, plus and minus
the standard deviation from the average.}
\label{fig:Ydisp}
\end{figure}
The median vertical displacement of a contact during a slip was 6 [$-1, 2$]\%
of the average contact diameter, $d_c$.
Outside of the slips the median displacement was only 0.07 [$-0.20, 0.24$]\%
of $d_c$.
\begin{figure}[th]
\epsfig{file=Contacts4.eps,width=8.6cm}
\caption{(a), (b) and (c) show difference images of a contact between
consecutive images. White is newly established area, black is area that
is no longer part of the contact, and light gray is the unchanged contact
area. The center of mass motion between the images is shown as black
lines. The length of the lines is exaggerated 10 times.
(d) shows the normalized apparent mass; the triangles
($\triangle$) mark the times when pictures were taken of the ensemble.
Circles ($\circ$) mark the times when minimum 15\% of the contacts
moved more than 1\% of the average contact diameter.
Bullets ($\bullet$) mark the
times when more than 80\% of the contacts moved at least 2\% of the
average contact diameter.
The lower plot (e) shows the average strain development found from image
analysis for the 20 lower
($\circ$, curve B) and upper ($\diamond$, curve A) wall contacts.
Filled symbols represent the times that a picture was taken at or
immediately after a peak in the apparent mass presented in (d).}
\label{fig:contacts}
\end{figure}
Figures \ref{fig:contacts}(a), (b) and (c) show the difference in one
contact area between consecutive images in one experiment.
White corresponds to new contact area, black to area that was left since
the previous image, and light gray shows contact area where no changes
occurred.
Figures \ref{fig:contacts}(d) and (e) show the normalized apparent
mass and the average strain of the upper (diamonds) and lower
(circles) 20 wall contacts, respectively.
The markers in both plots represent the times when pictures were
taken.
In Fig. \ref{fig:contacts}(d), circles mark the times when 15\% of the
contacts moved more than 1\% of the average contact diameter in the 20
minutes since the last image, and bullets mark the times when 80\% of
the contacts moved at least 2\% of the average contact diameter.
Based on the observed area of the grain--wall contacts and the
measured $m_a$, the average load per unit area carried by a
contact was calculated to be in the range 0.5--1.2 kPa.
Table \ref{tab:param} presents the characteristic values of various
parameters: (a) gives the median period, amplitude, intervals $t_d$
and $t_i$, and characteristic times $\overline{\tau_d}$ and $t_0$
for the oscillations in one experiment (see Fig. \ref{fig:Pardef}).
(b) presents the characteristic time from the fit of $m_a/m$ from the
sandpaper experiment, and the estimated characteristic time of elastic
relaxation (see section \ref{sec:discussion}).
\begin{table}
\begin{tabular*}{\linewidth}{@{\extracolsep{1cm minus 1cm}}ccl}
\hline
\hline
\multicolumn{3}{c}{Characteristic values}\\
\hline
&$\overline{\Delta t}$& 6.4 [$-$0.7, 1.2] h\\
&$\overline{t_d}$&5.2 [$-$0.3, 1.1] h\\
&$\overline{t_i}$&0.8 [$-$0.3, 1.0] h\\
\hspace{0.8cm}(a)\hspace{0.8cm}&$\overline{\tau_d}$&0.16 [$-$0.02, 0.03] h \\
&$\tau_0 = \overline{\tau_d}/B$& $\sim$ 3.8 h\\
&$t_0$&0.6 $\pm$ 0.2 h\\
&$\overline{\Delta a}$&12.6 [$-$0.3, 1.0] \%\\
\hline
&$\tau_s$&13.52 $\pm$ 0.01 h\hspace{5mm}\\
\raisebox{2mm}[0cm][0cm]{\hspace{0.8cm}(b)}&$\tau_e$\hspace{5mm}
$\sim 10^{-6}$ h\\
\hline
& $l_s$&$\sim 260\,\mu$m\\
& $l_0$&$\sim 74\,\mu$m\\
\raisebox{2mm}[0cm][0cm]{\hspace{0.8cm}(c)}&$l_d$&$\sim 101\,\mu$m\\
&$l_i$&$\sim 115$--$200\,\mu$m\\
\hline
\hline
\end{tabular*}
\caption{(a) Median values of the period $\Delta t$, amplitude $\Delta a$,
the intervals $t_d$ and $t_i$, the characteristic times
$\overline{\tau_d}$ and $\tau_0=\overline{\tau_d}/B$ of decreasing $m_a$,
and $t_0$ of activation of
micro-slips of the experiment presented in Fig. \ref{fig:Pardef}(a).
(b) Characteristic times $\tau_s$ of the $m_a/m$
evolution in the sandpaper experiment,
and $\tau_e$, the estimated time of relaxation of elastic stress.
(c) Estimated characteristic length scales, from time scales in (a) and (b),
see section \ref{sec:discussion}.}
\label{tab:param}
\end{table}
One experiment was performed to understand how the granular geometry of
the ensemble affected the apparent mass.
The granular ensemble was exchanged with a non-porous slab of Play-Doh
that did not fill the cylinder, but touched both
the bottom and the walls of the setup.
This experiment is referred to as `the bulk experiment' in the following.
The slab was left to flow into the available space, and the
apparent mass was measured as before, see curve B of Fig. \ref{fig:bulk}.
Again, a granular version of this experiment was conducted for
comparison, in which the total mass of the grains and the cylinder
diameter were the same as those of the bulk experiment, see Fig.
\ref{fig:bulk}, curve A.
\begin{figure}[floatfix]
\epsfig{file=Bulk.eps,width=8.6cm}
\caption{The normalized apparent mass, $m_a/m$, as a function of time, $t$,
at the bottom of a cylindrical
Play-Doh sample (B), as
compared to $m_a/m$ from a granular geometry (A) as a function of time. }
\label{fig:bulk}
\end{figure}
As seen from the figure, both setups produced weight oscillations,
thus the granular geometry is not the (only) reason for the oscillations.
Oscillations started later in the bulk case than in the granular
case, and both systems show occasional irregularities in their periods.
The granular system had nearly 100 grain--wall contacts, while
the bulk sample had 3--4 large contact areas.
The oscillations are probably due to the multi-contact nature of the
interface between the deforming sample and the confining cylindrical
wall.
\section{\label{sec:discussion} Discussion}
The self-compaction of a ductile ensemble
depends on the deformability of the grains and on its porous structure.
The granular geometry of the ensemble was not necessary for
oscillations to form, as weight oscillations also resulted
under a bulk slab of material that deformed viscously into
available space.
This result emphasizes the importance of the multi-contact wall
interaction to the observed oscillations in the apparent mass.
The grain--wall interaction proved to be crucial to the oscillations
in the apparent mass by the experiments with
varying wall friction.
No oscillations were observed when increasing or decreasing the wall
friction from that of the regular experiments with Plexiglas walls.
The evolution of $m_a$ in these experiments is interesting because it
shows two different behaviors according to the wall friction.
A low wall friction resulted in an increasing apparent mass, while
a high wall friction made the measured weight decrease.
The same mechanisms leading to these results are likely to be
the reason for the oscillations observed in $m_a$ in the
regular experiments.
The reason for the decrease of the apparent mass must be that the
walls sustain an increasing part of the total grain mass, that is,
a dynamic vertical force must act on the grain contacts from the wall.
This force could be friction, shear cohesion, or a combination of
the two, and will be referred to as the grain--wall interaction
in the following.
The increasing weight was initially believed to be due to a new
internal grain--grain contact.
As the stress distribution in hard granular media is known to be
very sensitive to local arrangements of grains, a new contact
was believed to change the stress distribution.
New contacts would preferentially form in the vertical direction,
because of the anisotropic compaction, and thus would tend to
redirect stresses toward the bottom.
The number of new contacts in the ensemble is limited, and the
average number of contacts per grain increased from 6.5 to
7 \cite{pap:Uri05a} during a typical experiment.
Some 50--100 new contacts were expected to form during an experiment,
which is roughly twice the typical number of oscillations.
If we assume that not all new contact formations are noticed in $m_a$,
perhaps because of their position in a low stressed region,
that would explain the shortage of oscillations, and the
micro-slips in $m_a$ in the oscillations and the sandpaper
experiment.
On the other hand, this assumption directly disagrees with the
nearly constant amplitudes seen in all experiments.
The bulk slab experiment, which also produced weight oscillations,
eventually proved that new internal contacts between grains were not
the main reason for the weight oscillations.
Stress redistribution is, however, thought to take place continuously
during the slow internal flow of material, both in the granular and
the bulk systems.
In principle, elastic energy could be stored in compressed parts
of the packing after a slip, resulting in a decreased grain--wall
interaction.
The relaxation of this elastic energy could cause the observed
decrease in the apparent mass.
The characteristic time of elastic relaxation is expressed as
the ratio of viscosity $\mu$ to bulk modulus $K$.
We know that the viscosity of Play-Doh is of the order of $10^5$ Pa\,s
for shear rates as low as $10^{-6}$ s$^{-1}$.
The bulk modulus was not measured, but it is expected to be closer
to that typical of fluids ($K_f\simeq$ 1--2 GPa, \cite{book:scidata})
than that of iron ($K_i\simeq 170$ GPa, \cite{book:mathand}), thus
on the order of $10^9$ Pa.
The resulting estimate of elastic relaxation time for Play-Doh is
\begin{equation}
\tau_e=\mu/K=10^{5}\,\text{Pa\,s}/10^{9}\,\text{Pa}=10^{-4}\;\text{s}\, .
\end{equation}
Elastic compressive stresses should relax in (less than) seconds,
which is much less than any time scale observed in an oscillation.
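This order-of-magnitude comparison can be summarized in a few lines; the sketch below uses only the rounded values quoted above and is an illustration, not part of the analysis:

```python
# Elastic relaxation time of Play-Doh versus the shortest
# characteristic time of the observed oscillations.
mu = 1e5    # Pa s, viscosity at shear rates ~ 1e-6 1/s
K = 1e9     # Pa, bulk modulus assumed of the order typical of fluids

tau_e = mu / K               # elastic relaxation time, s
shortest_obs = 0.16 * 3600   # shortest characteristic time in Table I, s

print(f"tau_e = {tau_e:.0e} s, shortest observed time = {shortest_obs:.0f} s")
```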
Another explanation for the decreasing $m_a$ emerges from
the assumption that the ratio of horizontal to vertical stresses increases
with increasing packing fraction.
If friction is assumed to be proportional to the normal force,
an increasing horizontal stress in the packing would result in
increased wall friction, hence a decrease in $m_a$.
The packing fraction increases by approximately 10\% during the experiment,
while the characteristics of the oscillations do not change.
This implies that the packing fraction is
not the main parameter for describing the dynamic behavior.
The reason for a decreasing apparent mass can be seen in connection
with the shearing of grain--wall contact regions.
During the time $t_d$ of decreasing $m_a$ the strain increases
very slowly, suggesting that only an internal flow of grains contributes to
the strain in this regime (see Fig. \ref{fig:strains}(a) and (b)).
The analysis of the motion of grain--wall contacts shows that
the vertical motion of contacts in this regime is limited and noisy,
thus most contacts are practically at rest (even though the central
part of the packing creeps).
Due to the slow flow internally in the packing, they are also
continuously sheared.
There are clear slips of the order of 2\% in the normalized apparent
mass during the
decreasing part of the period in the granular setups (see Figs.
\ref{fig:Pardef} and \ref{fig:DevOsc}).
Micro-slips are not seen in the weight data from the bulk experiment,
thus their origin seems to be the granular geometry, or possibly the
large difference in the number of wall contacts between the bulk
and granular systems.
No collective motion is seen at the wall during the micro-slips in the
granular experiment, although 5\% of the contacts move a distance of 1\%
of $d_c$ in every time step; their motion might thus be connected
to the measured micro-slips in $m_a$.
Micro-slips might be due to the internal reorganization of forces
within the granular system, which may trigger
some of the grain--wall contacts into motion.
A reorganization of forces must also take place in the material in the
bulk experiment, although probably in a different way than
that of the more complex granular geometry.
The reorganization must increase the average shear stress in the contact
regions, which again leads to an increase of the vertical grain--wall
interaction.
Once a contact experiences a shear stress that cannot be sustained by
the grain--wall interaction, it ``breaks'', or starts to move.
The strain development could not be measured in the
sandpaper experiment, thus whether this system compacted much is
not known.
Micro-slips similar to, but smaller than, those seen in the regular
experiments were also observed in the sandpaper experiment.
This suggests that internal stress rearrangement was taking place.
The grain--wall interaction was considerably higher in the
sandpaper experiment than in the regular setup (as the apparent mass
reached a minimum of 15\% of the total mass).
It is reasonable to assume that the contacts did not move much,
or in any correlated manner, based on the lack of weight oscillations.
The direct correspondence between the step in the strain and
the increasing $m_a$ in Fig. \ref{fig:strains}(a) and (b) implies
that the motion of wall contacts is very important for the weight increase.
Assuming that wall contacts are broken, or mobilized, at a critical
shear stress, one or more contacts will initiate a slip, and
the others follow.
The contacts that break contribute to a decrease in the total
wall interaction, thus a possible increase of the apparent mass.
The sum of wall interactions decreases over a time period
that must depend on how fast the contact breaking propagates among
the other contacts, and how fast each contact breaks.
From our temporal resolution in the study of grain--wall contacts,
we see that all grains move within 20 minutes in connection
with the slip.
The apparent mass will increase according to the decreasing wall
interaction $F_w$, as the force balance of the system is
$\sum F=m_a\,g + F_w - m\,g = m\,a < 10^{-12}$ N $\simeq 0$.
The strain development was not measured in the
Teflon experiment, thus it
is not known whether the strain had similar steps during the
compaction as in the regular experiments.
Based on the direct correlation between weight oscillations and
the observed strain in the regular experiments, however, it seems likely
that the wall contacts in the Teflon experiment in some sense moved
continuously, as no oscillations in $m_a$ were observed here.
Micro-slips were observed in $m_a$, however, thus some dynamic
interaction between the grains and the wall was present, probably
because of internal stress rearrangements.
Sliding contacts also support some of the grain mass, as neither
during $t_i$ in the regular experiments nor in the Teflon experiment
does the apparent mass reach 100\% of the grain mass.
The grain--wall interactions during motion are smaller, however,
than in the static case, as the apparent mass increases during
motion in the regular experiments, see Fig. \ref{fig:strains}.
That all contacts are mobilized within a time interval corresponding
to a slip could imply that, when sufficiently sheared, they are
sensitive to changes in the stress distribution, and thus easily
triggered.
From Fig. \ref{fig:contacts}(d) we see that
more than 80\% of the contacts move more than 2\% of $d_c$ during a
slip event, and that 15\% move at least 1\% of $d_c$ immediately before
these slips.
In some cases, although not consistently, 15\% of the contacts
move at least 1\% of $d_c$ in connection with micro-slips.
Also, the activity of micro-slips increases during $t_d$, which
suggests that the system becomes more critical.
The time scales of the system span a factor of 100, see Table
\ref{tab:param}, ranging from 0.16 h to 13.52 h.
It is tempting to speculate that these time scales
reflect the spatial dimensions in the system, from 1 mm (diameter
of small contact area) to 10 cm (filling height).
A direct estimate of the maximum length scale can be made from the
velocities and the observed time scales.
Assuming that the grain--wall contacts in the sandpaper experiment do
not slip, the internal flow of velocity $v_d=5.4\cdot 10^{-9}$ m/s with the
characteristic time $\tau_s$ gives a length scale $l_s=260\, \mu$m.
The corresponding length from the initial exponential decrease of
$m_a$ in an oscillation is $l_0=\tau_0\cdot v_d = 74\,\mu$m,
and from $t_d$ and $t_i$, we get $l_d=t_d\cdot v_d= 101\,\mu$m
and $l_i=t_i\cdot v_i=115$--$200\,\mu$m, respectively.
Here $v_i$ is the velocity of the bulk during a slip.
The range of $l_i$ results from the different compaction velocities
found during $t_i$ in the two experiments presented in Fig. \ref{fig:strains}.
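As a consistency check, these length scales can be reproduced from the measured compaction velocities and the characteristic times of Table \ref{tab:param}. The short script below is only an illustration of the arithmetic behind the order-of-magnitude estimates; the range for $v_i$ is the range of velocities during $t_i$ reported above:

```python
# Order-of-magnitude check of the length scales l = v * tau from the
# measured velocities and the characteristic times of Table I.
HOUR = 3600.0  # s

v_d = 5.4e-9          # m/s, minimum compaction velocity (3 um experiment)
v_i = (4e-8, 7e-8)    # m/s, range of bulk velocities during a slip

tau_s = 13.52 * HOUR  # s, fit of m_a/m in the sandpaper experiment
tau_0 = 3.8 * HOUR    # s, initial exponential decrease of m_a
t_d = 5.2 * HOUR      # s, median interval of decreasing m_a
t_i = 0.8 * HOUR      # s, median interval of increasing m_a

l_s = tau_s * v_d     # ~260 um
l_0 = tau_0 * v_d     # ~74 um
l_d = t_d * v_d       # ~101 um
l_i = (t_i * v_i[0], t_i * v_i[1])  # ~115--200 um

print(f"l_s = {l_s*1e6:.0f} um, l_0 = {l_0*1e6:.0f} um, "
      f"l_d = {l_d*1e6:.0f} um, l_i = {l_i[0]*1e6:.0f}--{l_i[1]*1e6:.0f} um")
```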
The length scales extracted from the characteristic times span
a smaller range than the time scales do, and are much smaller than the
macroscopic lengths mentioned above.
The small length scales suggest that details of the contact motion
might be of importance to the time scales observed in the system.
The flow of a viscous fluid along a wall can be described by a Navier
slip length \cite{pap:deGennes02}.
An average contact velocity, $v_c$, during a slip can be found from knowing
that contacts slip 6\% of the average contact diameter in 20 minutes,
$v_c=1.15\cdot 10^{-7}$ m/s.
The amount of fluid slip along a wall is given by the Navier length,
$b=\mu/k$, where $\mu$ is the fluid viscosity and $k$ is the surface
friction coefficient given by $\sigma/v_c$.
The average shear stress, $\sigma$, of a contact was found to be in the
range 0.5--1.2 kPa,
thus $k$ is within the range (2.7--11)$\cdot 10^{9}$ Pa\,s/m.
The Navier length is then $b\simeq 27$--$90\,\mu$m, slightly smaller,
but of the same order as some of the lengths estimated above.
The motion of a contact was not studied with sufficient temporal
or spatial resolution to conclude whether the whole contact slid a
fraction of $d_c$ at constant velocity, or whether it slid by self-healing slip pulses
\cite{pap:Gerde01, pap:Baumberger02}.
Both processes are known from experiments on
frictional motion in slowly sheared (hard) granular
systems \cite{pap:Nasuno98} and on the slipping of a gel/glass
interface \cite{pap:Baumberger02}.
\section{Conclusions}
We observe semi-regular oscillations in the measured apparent mass, $m_a$,
at the bottom of a self-compacting ductile grain packing.
The oscillations in one particular experiment are on the order of
10\% of the total mass $m$ of the grains, and have periods of roughly 6 hours.
The oscillations persist when the granular setup is exchanged with
a bulk sample of the same ductile material, but disappear when the
grain--wall interaction is reduced or increased.
Grain--wall contacts are seen to move collectively in correspondence
with the slip events in $m_a$, as at least 80\% of the contacts move a
distance larger than 2\% of the average contact diameter during a slip,
see Figs. \ref{fig:Ydisp} and \ref{fig:contacts}.
The decrease of the apparent mass in an oscillation is
thought to be the result of shearing of static contacts between the
grains and the container wall.
The slow ductile flow internally in the cylinder causes a dynamic stress
distribution, which results in a continuous increase of
the shear stress at the grain--wall contacts.
This continuous increase is the reason for the decreasing apparent
mass.
``Micro-slips'' of the order of 2\% are seen in the normalized apparent
mass $m_a/m$ during the
decrease, which probably result from internal stress redistribution in
granular setups, as they were not seen in $m_a$ of the bulk experiment.
The micro-slips correspond in some cases to limited grain--wall contact
motion, and their `activity' increases during the interval of decreasing
$m_a$.
These slips are also seen when the grain--wall interaction is reduced
or enhanced, that is, when contact motion is stimulated or suppressed.
Different characteristic times have been found from curve fitting
of the apparent mass evolution during the `sandpaper' experiment and
the decreasing part of oscillations in one experiment.
We have also estimated a typical timescale of relaxation of elastic
compressive stresses, and concluded that elasticity is not the driving
mechanism for the observed oscillations.
The characteristic times, together with the period and intervals of
increasing and decreasing $m_a$, are presented in Table \ref{tab:param}.
A successful model should reproduce these characteristic times.
Some attempts at constructing a minimal model have been pursued, but
the models were discarded as they depended on a finite acceleration
or on unknown microscopic parameters of the system.
Further work is necessary to understand the dynamic behavior of the
system, and ideas and modeling approaches are welcome.
\section{Acknowledgments}
We wish to thank the Norwegian Research Council for financial support
through grant number 146031.
Many thanks also to Thomas Walmann for help with image analysis,
and Renaud Toussaint, Jean Christophe Geminard, Espen Jettestuen, and
Yuri Podladchikov for helpful discussions. Thanks to Bo Nystr{\"o}m
and Anna-Lena Kj{\o}niksen for viscosity measurements.
| 13,873 |
\section{Introduction.}
\label{sec0}
The properties of hybrid superconductor-normal metal structures (SN) continue
to attract considerable attention both experimentally~\cite{Gueron} and theoretically~\cite{Brouwer, Ossipov,
Golubov, Zhou, Ivanov, Bruder, Brouwer1, Ostrovsky}, though
the fundamental process governing the physics of such systems, Andreev reflection~\cite{Andreev},
was discovered long ago. In fact, while it is well known that generically the proximity to a superconductor
leads to a modification of the density of states (DOS) in the normal metal, the nature and
extent of this effect depends on the details of the hybrid structure. In particular, it was
recently pointed out~\cite{Brouwer} that
when a closed mesoscopic metallic region is contacted on one side to a superconductor,
the resulting DOS turns out to depend on its shape.
If the shape is integrable, the DOS is finite everywhere but at the Fermi level, where it vanishes as
a power law. On the contrary, in a generic chaotic metallic region one expects the opening of
a gap around the Fermi level, the Thouless gap~\cite{Ossipov}.
In analogy with the considerations above, a diffusive metallic region
sandwiched between two bulk superconducting electrodes has been predicted to
have a gapped density of states, the gap being at energies comparable to the Thouless energy
$E_{Th}=D/L_z^2$, where $D$ is the diffusion constant and $L_z$ the width of the normal
layer~\cite{Golubov, Zhou, Ivanov, Bruder} [see Fig.1].
In a diffusive SNS structure with transparent SN interfaces, the density of states in the normal part,
averaged over its thickness, and at energies $E$ right above the gap edge $E_g \simeq 3.12 E_{Th}$,
is $\nu \propto (1/\pi V)\,\sqrt{(E-E_g)/\Delta_0^3}$, where
$\Delta_0=(E_g \delta^2)^{1/3}$, $\delta=1/(\nu_0 V)$, and $V=L_xL_yL_z$ is the volume of the
normal region. This dependence is reminiscent of the density of states at the edge of a Wigner
semicircle in Random Matrix Theory [RMT], $\Delta_0$ being the effective level
spacing right above the gap edge. Using this analogy, Vavilov et al.~\cite{Brouwer1} realized
that the disorder averaged DOS should not display a real gap, but
have exponentially small tails below the gap edge, analogous to the Tracy-Widom
tails~\cite{Tracy} in RMT. A rigorous study in terms of a Supersymmetric Sigma Model
description of the SNS structure has shown that this is indeed the case~\cite{Ostrovsky}. However,
in analogy to the theory of Lifshits tails~\cite{Lifshits} in disordered conductors,
the nature of the resulting subgap quasiparticle states depends additionally on the effective dimensionality
$d$, determined by comparing the interface length scales $L_x,L_y$ with the typical length scale
of a subgap quasiparticle state, $L_{\bot}$. In particular, if $L_x \gg L_{\bot}>L_y$ or $L_x,L_y \gg L_{\bot}$
the subgap quasiparticle states are localized either in the $x$ direction or in the $x-y$ plane along the interface,
respectively. Correspondingly, the asymptotic tails of the DOS deviate from the universal RMT result,
applicable only in the zero dimensional case [$L_x,L_y < L_{\bot}$].
The analogy with RMT applies, within the appropriate symmetry
class, to other physical situations, such as diffusive
superconductors containing magnetic
impurities~\cite{Brouwer1,Lamacraft,Aleiner}, and superconductors
with inhomogeneous coupling constants~\cite{Meyer}. In both cases,
at mean field level the density of states has a square root
singularity close to the gap edge~\cite{Abrikosov,Larkin}.
Correspondingly, accounting for mesoscopic RM-like fluctuations,
the disorder averaged density of states has tails below the gap
edge, with an asymptotics similar to the one calculated in
Ref.[\onlinecite{Ostrovsky}] for SNS structures. On the other
hand, in the case of diffusive superconductors containing magnetic
impurities, it was shown~\cite{Me,Balatsky} that, in addition to
\it mesoscopic fluctuations \rm, subgap quasiparticle states can
form as a result of \it classical fluctuations \rm, i.e. long-wave
fluctuations of the concentration of magnetic impurities
associated with their Poissonian statistics. Similarly, in
superconductors with an inhomogeneous coupling constant, long-wave
fluctuations of the coarse grained gap lead to the appearance of
subgap quasiparticle states, and consequently to tails of the
DOS~\cite{Larkin}. Interestingly, in both cases the tails
originating from mesoscopic fluctuations and from classical ones
are formally related by a dimensional reduction~\cite{Me}.
In this paper, we close this set of analogies by studying
the contribution to the subgap tails of the DOS in a diffusive
SNS junction arising from long-wave fluctuations of the concentration of impurities
in the normal layer. Combining the results of this analysis with those obtained by Ostrovsky,
Skvortsov, and Feigel'man~\cite{Ostrovsky}, who considered the subgap tails originating
from mesoscopic fluctuations, we provide a consistent picture of the physics of the subgap
states. In particular, a quantitative comparison of the two contribution shows that
mesoscopic fluctuations dominate in long and dirty junctions, while classical fluctuations
dominate in wider and/or cleaner ones. In analogy with diffusive superconductors with
magnetic impurities, and superconductors with inhomogeneous coupling constants, also
in the present case the two contributions to the subgap tails, arising from mesoscopic
and classical fluctuations, are related by a dimensional reduction.
The rest of the paper is organized as follows: in Sec.II we
present the details of the analysis of the subgap DOS arising from
fluctuations of the concentration of impurities $n_{imp}$ in an
SNS junction. In Sec.III, we compare the two contributions to the
subgap DOS associated with mesoscopic and classical fluctuations. In
Sec.IV, we present our conclusions.
\section{Subgap DOS associated with fluctuations of $n_{imp}$.}
~\label{sec01}
Let us start by considering a diffusive metallic layer between two
superconducting bulk electrodes, a geometry represented
schematically in Fig.1. Assuming $k_F l\gg 1$, where $l$ is the mean
free path, this system can be described in terms of the
quasiclassical approximation. In particular, at mean field level
[i.e., neglecting both mesoscopic and classical fluctuations],
neglecting electron-electron interaction, and assuming the
thickness of the metallic layer $L_z\gg l$, one can describe the
SNS structure by the Usadel equation~\cite{Usadel,Kopnin}
\begin{eqnarray}\label{Usadel}
\frac{D}{2}\nabla^2 \theta + i\;E\;\sin[\theta]=0,
\end{eqnarray}
where $D=v_{F}^2 \tau/3$ is the diffusion constant, $E$ is the
energy measured from the Fermi level, assumed to be $\mid E \mid
\ll \Delta$, where $\Delta$ is the gap in the bulk electrodes. The
field $\theta$ is related to the quasiclassical normal and
anomalous Green's functions by the relations $g({\bf
r},E)=\cos[\theta({\bf r},E)]$, $f({\bf r},E)=i \sin[\theta({\bf
r},E)]$. In addition, assuming the interfaces to be perfectly
transparent, the proximity to the two superconducting regions can
be described by the boundary conditions
$\theta(z=\pm L_z/2)=\pi/2$.
{\begin{figure} \epsfxsize=7cm \centerline{\epsfbox{System.eps}}
\vspace*{0cm} \caption{A schematic plot of an SNS junction: two
bulk superconducting electrodes (S) connected to a diffusive metal
(N) of thickness $L_z$. The interfaces have linear size $L_x$,
$L_y$. } \label{Fig3}
\end{figure}}
It is convenient to measure all lengths in units of $L_z$ and to set $\theta=\pi/2+i\Psi$. Therefore,
Eq.(\ref{Usadel}) becomes
\begin{eqnarray}\label{UsadelPsi}
\nabla^2 \Psi +2 \frac{E}{E_{Th}} \cosh[\Psi]=0,
\end{eqnarray}
where $E_{Th}=D/L_z^2$ is the Thouless energy. The boundary
conditions for the field $\Psi$ are simply $\Psi(z=\pm 1/2)=0$.
In terms of $\Psi$ the DOS is $\nu=2\nu_0 {\rm Im}[\sinh[\Psi]]$,
where $\nu_0$ is the density of states of the normal metal at the
Fermi level. The DOS can be calculated by looking for solutions of
Eq.(\ref{UsadelPsi}) uniform in the $x-y$
plane~\cite{Golubov,Zhou,Ivanov,Ostrovsky}. In particular, for
$E<E_g\equiv C_2 E_{Th}$ [$C_2\simeq 3.122$] all solutions of
Eq.(\ref{UsadelPsi}) are real, implying $\nu=0$. Therefore, one
identifies $E_g$ with the proximity induced gap within the normal
metal layer. The mean field DOS right above $E_g$ averaged over
the $z$ direction is found to be
\begin{eqnarray}\label{resultDOSuniform}
\nu \simeq 3.72\;\nu_0 \sqrt{\frac{E-E_g}{E_g}}.
\end{eqnarray}
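As an aside, the value of $C_2$ can be checked directly from Eq.(\ref{UsadelPsi}): for the real, laterally uniform subgap solutions with midpoint value $\Psi_m=\Psi(0)$ and $\Psi'(0)=0$, the first integral gives $\sqrt{E/E_{Th}}=\int_0^{\Psi_m}d\Psi/\sqrt{\sinh\Psi_m-\sinh\Psi}$, and the gap edge is the maximum of $E/E_{Th}$ over $\Psi_m$, where the two real branches merge. A minimal numerical sketch of this standard calculation (plain Python; an illustration, not part of the analysis):

```python
import math

def eps_of_psi_m(psi_m, n=2000):
    """E/E_Th of the real (subgap) solution with midpoint value psi_m.

    From the first integral of Psi'' + 2 (E/E_Th) cosh(Psi) = 0,
    Psi(+-1/2) = 0, Psi'(0) = 0:
        sqrt(E/E_Th) = int_0^psi_m dPsi / sqrt(sinh psi_m - sinh Psi).
    The substitution Psi = psi_m (1 - t^2) removes the endpoint
    singularity, so a plain midpoint rule converges quickly.
    """
    sinh_m = math.sinh(psi_m)
    total = 0.0
    for i in range(n):
        t = (i + 0.5) / n
        psi = psi_m * (1.0 - t * t)
        total += 2.0 * psi_m * t / math.sqrt(sinh_m - math.sinh(psi))
    return (total / n) ** 2

# The gap edge E_g = C_2 E_Th is the largest E/E_Th for which a real
# solution still exists, i.e. the maximum over the midpoint value.
c2 = max(eps_of_psi_m(0.2 + 3.8 * k / 199) for k in range(200))
print(f"C_2 = {c2:.3f}")   # approximately 3.12
```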
Let us proceed by analyzing the tails of the DOS at energies $E<E_g$
arising from fluctuations of the concentration of impurities, i.e.
long-wave inhomogeneities of $1/\tau$ in the $x-y$ plane. We first
interfaces is much larger than the thickness of the metallic layer
[$L_x, L_y \gg L_z$]. In the framework of the Usadel description
of the metallic layer [Eq.(\ref{UsadelPsi})] one can account for
long-wave transversal fluctuations of the concentration of
impurities by promoting $E_{Th}$, or equivalently $E_g=C_2
E_{Th}$, to be a position dependent random variable, characterized
by the statistics
\begin{eqnarray}
E_g({\bf x})&=&E_g+\delta E_g({\bf x}),\label{stat1}\\
\langle \delta E_g ({\bf x}) \rangle &=& 0,\label{stat2}\\
\langle \delta E_g ({\bf x}) \delta E_g({\bf x'}) \rangle
&=& \frac{E_g^2}{n_{d} L_z^d}\;\delta({\bf x}-{\bf x'}),\label{stat3}
\end{eqnarray}
where $d$ is the effective dimensionality of the system,
and $n_{d}$ the effective concentration of impurities.
As shown below, $d$ is determined by comparing the linear sizes of the interface
$L_x,L_y$ to the linear scale of the subgap states $L_{\bot} \simeq L_z/((E_g-E)/E_g)^{1/4}$.
If $L_x,L_y\gg L_{\bot}$ the system is effectively two dimensional, and $n_{2}=n_{imp} L_z$.
On the other hand, if $L_x < L_{\bot} \ll L_y$ [or $L_y < L_{\bot} \ll L_x$], the system is effectively one dimensional,
and $n_1=n_{imp}\;L_z\;L_x$.
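This criterion can be summarized compactly. The sketch below (an illustrative encoding of the comparisons above, not a calculation from the paper; the sharp inequalities stand in for what are of course smooth crossovers) selects the effective dimensionality and the geometric factor multiplying $n_{imp}$:

```python
def effective_dim(Lx, Ly, Lz, de):
    """Effective dimensionality for the subgap tails, following the
    criterion L_perp ~ L_z / ((E_g - E)/E_g)^{1/4}, with de = (E_g - E)/E_g.
    Returns (d, geometric factor such that n_d = n_imp * factor);
    for d = 0 the relevant combination is n_imp * V (see the 0d integral)."""
    L_perp = Lz / de ** 0.25
    if min(Lx, Ly) > L_perp:
        return 2, Lz                 # n_2 = n_imp * L_z
    if max(Lx, Ly) > L_perp:
        return 1, Lz * min(Lx, Ly)   # n_1 = n_imp * L_z * L_x (shorter side)
    return 0, Lx * Ly * Lz           # zero dimensional: n_imp * V
```

Since $L_\perp$ grows as the energy approaches the gap edge, the same sample can pass through $d=2\to1\to0$ as `de` decreases.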
Accounting for these fluctuations, the Usadel equation Eq.(\ref{UsadelPsi}) becomes
\begin{eqnarray}\label{Usadelfluctuations}
\partial_z^2\Psi+\nabla^2_{{\bf x}}\Psi+2C_2 \frac{E}{E_g}(1-\delta \epsilon_g({\bf x}))\cosh[\Psi]=0,
\end{eqnarray}
where $\delta \epsilon_g=\delta E_g/E_g$.
Our purpose is to calculate the DOS averaged over fluctuations of
$\delta E_g$ at energies $E < E_g$. For this sake, let us
introduce $\delta E=E_g-E$, and $\delta\Psi(z,{\bf x})=\Psi(z,{\bf
x})-\Psi_0(z)$, where $\Psi_0$ is the solution of
Eq.(\ref{UsadelPsi}) at $E=E_g$. Expanding
Eq.(\ref{Usadelfluctuations}) and keeping the lowest order
nonlinearity in $\delta\Psi$ one obtains
\begin{eqnarray}\label{intermediate}
(\partial_z^2 + f_0(z)) \delta\Psi+ \nabla^2_{{\bf x}} \delta \Psi +
\frac{g_0(z)}{2} \delta\Psi^2=
g_0(z)(\delta\epsilon-\delta\epsilon_g),
\end{eqnarray}
where $\delta\epsilon=\delta E/E_g$, $g_0(z)= 2C_2
\cosh[\Psi_0(z)]$, and $f_0(z)= 2C_2 \sinh[\Psi_0(z)]$.
In order to further simplify Eq.(\ref{intermediate}), it is useful
to notice that the operator ${\cal H}=-\partial_z^2 -f_0 (z) $,
diagonalized with zero boundary conditions at $\pm 1/2$, admits an
eigenstate $\Phi_0$ with zero eigenvalue. Physically, $\Phi_0$
determines the shape of the mean field $z$-dependent DOS obtained
from Eq.(\ref{UsadelPsi}). Therefore, it is natural to set
\begin{eqnarray}\label{approximation}
\delta \Psi(z,{\bf x}) \simeq \sqrt{A_1/A_2}\;\chi({\bf
x})\;\Phi_0(z),
\end{eqnarray}
with $A_1=\int\;dz\;g_0\;\Phi_0 \simeq 7.18$, and
$A_2=\int\;dz\;\frac{g_0}{2}\;\Phi_0^3 \simeq 2.74$.
Substituting Eq.(\ref{approximation}) in Eq.(\ref{intermediate}),
and projecting the resulting equation on $\Phi_0$, one obtains
\begin{eqnarray}\label{optimal}
\nabla^2 \chi + \chi^2= \delta \epsilon-\delta \epsilon_g({\bf x})
\end{eqnarray}
where we rescaled the length by $(A_1\;A_2)^{-1/4}$, and
\begin{eqnarray}
\langle \delta \epsilon_g({\bf x}) \delta\epsilon_g ({\bf x'})
\rangle= \eta\;\delta ({\bf x}-{\bf x'}),
\end{eqnarray}
with $\eta \equiv (A_1 A_2)^{1/4}/(n_{d}\;L_z^d)$.
Let us now split $\chi=-u+iv$, and obtain the system
\begin{eqnarray}~\label{potential}
&& -\nabla^2 u+ u^2-v^2=\delta\epsilon-\delta\epsilon_g, \\
&& -\frac{1}{2}\nabla^2\;v +u\;v=0.~\label{wavefunction}
\end{eqnarray}
Interestingly, this set of equations is analogous to the equations obtained by
Larkin and Ovchinnikov in the context of the study of gap smearing in
inhomogeneous superconductors~\cite{Larkin}, and to the equations obtained by the author and
Ioffe in the context of the study of subgap tails in diffusive superconductors containing
magnetic impurities~\cite{Me}.
Let us now proceed with the calculation of the DOS.
In the present notation, the DOS averaged over the thickness of the normal layer is given by
\begin{eqnarray}\label{DOS2}
\frac{\nu({\bf x},\delta \epsilon \mid \delta\epsilon_g({\bf
x}))}{\nu_0} \simeq 3.72\; v({\bf x},\delta \epsilon \mid
\delta\epsilon_g({\bf x})).
\end{eqnarray}
We are interested in calculating the average density of states $\langle \nu \rangle/\nu_0 \simeq 3.72 \langle v \rangle$
at energies below the Thouless gap [$\delta \epsilon>0$].
In this parameter range, the corresponding functional integral
\begin{eqnarray}\label{functional}
\langle v \rangle \simeq \frac{\int\;D[\delta \epsilon_g] v({\bf
x},\delta \epsilon \mid \delta\epsilon_g({\bf x}))
e^{-1/(2\eta)\;\int d{\bf x} (\delta \epsilon_g({\bf x}))^2}}
{\int\;D[\delta \epsilon_g]
e^{-1/(2\eta)\;\int d{\bf x} (\delta \epsilon_g({\bf x}))^2}},
\end{eqnarray}
receives its most important contributions from exponentially rare
instanton configurations of $\delta \epsilon_g$ such that, at
specific locations along the interfaces of the junction, $\delta
\epsilon_g({\bf x}) \geq \delta \epsilon$. The remaining task is
to select among all these fluctuations the one that dominates the
functional integral Eq.(\ref{functional}), i.e. the \it optimal
fluctuation \rm.
The action associated to a configuration of $\delta\epsilon_g$ is
\begin{eqnarray}
S=\frac{1}{2\eta} \int\;d{\bf x} (\delta \epsilon_g)^2 \simeq
\frac{1}{2\eta}\int\;d{\bf x} (\nabla^2 u- u^2+\delta\epsilon)^2,
\end{eqnarray}
where we used Eq.(\ref{potential}) to express $\delta \epsilon_g$
in terms of $u,v$, and, with exponential accuracy, neglected the
term $v^2$ in the action. In order to find the optimal fluctuation
one has to find a nontrivial saddle point $u_0$ of $S$, tending
asymptotically to the solution of the homogeneous problem [$u_0
\rightarrow \sqrt{\delta \epsilon}$], and
subject to the constraint of having nontrivial solutions for $v$
of Eq.(\ref{wavefunction}).
Since the normal metal layer is diffusive and momentum scattering is isotropic,
it is natural to assume the optimal fluctuation to be spherically symmetric.
The Euler-Lagrange equation associated to $S$ is
\begin{eqnarray}\label{Euler}
(-\frac{1}{2}\Delta^{(d)}+u)\;(\Delta^{(d)}u-u^2+\delta\epsilon)=0
\end{eqnarray}
where
\begin{eqnarray}
\Delta^{(d)}\equiv\partial_r^2+\frac{d-1}{r}\;\partial_r,
\end{eqnarray}
is the radial
part of the Laplacian in spherical coordinates. An obvious solution to Eq.(\ref{Euler})
is obtained setting
\begin{eqnarray}\label{trivial}
\Delta^{(d)}u-u^2+\delta\epsilon=0.
\end{eqnarray}
This equation is equivalent to the homogeneous Usadel equation
with uniform $E_g$, i.e. Eq.(\ref{optimal}) with
$\delta\epsilon_g=0$. Though this equation has definitely
nontrivial instanton solutions for $u$ with the appropriate
asymptotics, it is possible to show that the constraint of
Eq.(\ref{wavefunction}) is satisfied only by $v=0$. This is
physically obvious since Eq.(\ref{trivial}) describes a uniform
system where all long-wave fluctuations of $1/\tau$ have been
suppressed, and thus, within the present approximation scheme, the
subgap DOS must vanish. However, it should be pointed out that,
accounting for mesoscopic fluctuations, the instanton solutions of
Eq.(\ref{trivial}) describe the optimal fluctuation associated to
mesoscopic gap fluctuations, as shown in
Ref.[\onlinecite{Ostrovsky}].
Let us now look for the nontrivial saddle point.
Equation (\ref{Euler})
is equivalent to the system
\begin{eqnarray}\label{one}
&&(-\frac{1}{2}\Delta^{(d)}+u) h=0, \\
&&\Delta^{(d)}u-u^2+\delta\epsilon=h.\label{two}
\end{eqnarray}
This system can be reduced to a single second-order instanton equation by setting
$h=(2\partial_r u)/r$. With this substitution, Eq.(\ref{one}) becomes the derivative
of Eq.(\ref{two}), which now reads
\begin{eqnarray}\label{dimred}
\Delta^{(d-2)}u-u^2+\delta\epsilon=0.
\end{eqnarray}
Notice that this equation is, upon reduction of the dimensionality
by $2$, identical in form to the one associated to mesoscopic
fluctuations, Eq.(\ref{trivial}). As we will see later, this
reduction of dimensionality relates in a similar way the
dependence of the action associated to classical and mesoscopic
fluctuations on $\delta \epsilon$.
It is now straightforward to see that the instanton solution $u_0$
of this equation with the appropriate asymptotics describes indeed
the optimal fluctuation, the constraint of Eq.(\ref{wavefunction})
being automatically satisfied by virtue of Eq.(\ref{one}), with
$v_0 \propto (2\partial_r u_0)/r$. Moreover, the corresponding
optimal fluctuation of $\delta \epsilon_g$ is
$\delta \epsilon_g = 2 \partial_r u_0/r$.
It is clear that the instanton solutions of Eq.(\ref{dimred}) must
have the form $u_0=\sqrt{\delta\epsilon} \upsilon(r/\lambda)$,
with $\lambda=1/(\delta\epsilon)^{1/4}$. The corresponding
equation for $\upsilon(r)$ is $\partial^2_r \upsilon + (d-3)/r
\partial_r \upsilon - \upsilon^2+1=0$. The instanton solution of
this equation can be easily found numerically, and the
corresponding action $S$ calculated. The result is
\begin{eqnarray}
S_d&=& a_d n_d L_z^d\;\delta\epsilon^{\frac{8-d}{4}}
\end{eqnarray}
where the constants $a_d$ are $a_1 \simeq 0.88$, and $a_2 \simeq 7.74$.
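To illustrate how such instanton profiles can be obtained, the boundary-value problem for $\upsilon(r)$ can be solved by shooting on the central value $\upsilon(0)$. The sketch below is an illustration of the method, not the calculation behind the quoted constants $a_d$: it is checked on the frictionless case $d=3$, where energy conservation gives the exact depth $\upsilon(0)=-2$ [indeed $\upsilon(r)=1-3/\cosh^2(r/\sqrt{2})$ solves the $d=3$ equation]. The same routine can be pointed at $d=1,2$, though extracting $a_d$ also requires evaluating the action integral, which is not done here.

```python
import math

def shoot(a, d, r_max=12.0, dr=1e-3, r0=1e-6):
    """Integrate v'' + ((d-3)/r) v' - v^2 + 1 = 0 from r0 with
    v(r0) = a, v'(r0) = 0, by classical RK4.
    Returns +1 if the profile overshoots v = 1 (dip too deep),
    -1 if it turns around below v = 1 (dip too shallow)."""
    def f(r, v, vp):
        return vp, v * v - 1.0 - (d - 3.0) / r * vp
    r, v, vp = r0, a, 0.0
    while r < r_max:
        k1v, k1p = f(r, v, vp)
        k2v, k2p = f(r + dr / 2, v + dr / 2 * k1v, vp + dr / 2 * k1p)
        k3v, k3p = f(r + dr / 2, v + dr / 2 * k2v, vp + dr / 2 * k2p)
        k4v, k4p = f(r + dr, v + dr * k3v, vp + dr * k3p)
        v += dr / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        vp += dr / 6 * (k1p + 2 * k2p + 2 * k3p + k4p)
        r += dr
        if v > 1.0:
            return +1
        if vp < 0.0:
            return -1
    return -1  # stalled near v = 1: treat as undershoot

def instanton_depth(d, lo=-3.0, hi=-1.1, iters=30):
    """Bisect on v(0) so that v(r) -> 1 from below as r -> infinity."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if shoot(mid, d) > 0:
            lo = mid   # overshoot: the correct dip is shallower
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The bisection brackets the unique initial depth for which the trajectory creeps up to the unstable asymptote $\upsilon=1$ without either overshooting it or turning back.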
Within our approximation scheme, the density of states is $\langle \nu \rangle \propto W \exp[-S]$, where
$W$ is a prefactor due to gaussian fluctuations around the instanton saddle point.
The calculation of $W$ can be performed using the standard
technique due to Zittartz and Langer, and is similar to those reported in
Ref.[\onlinecite{Me},\onlinecite{Larkin}]. To leading order in the saddle point approximation,
the final result is
\begin{eqnarray}\label{finalresult}
\frac{\langle \nu \rangle}{\nu_0} \simeq \beta_d \; \sqrt{n_d\;L_z^d}\;\delta\epsilon^{\frac{d(10-d)-12}{8}}\;e^{-S_d},
\end{eqnarray}
where $\beta_1 \approx 0.1$ and $\beta_2 \approx 0.5$.
The result in Eq.(\ref{finalresult}) relies on a saddle point approximation, which is justified provided
$S_d \gg 1$. This translates into the condition
\begin{eqnarray}\label{condition}
\delta \epsilon \gg \left(\frac{1}{a_d n_d L_z^d}\right)^{\frac{4}{8-d}}.
\end{eqnarray}
As mentioned before, the effective dimensionality, and therefore
the asymptotic density of states, is determined by comparing the
linear size of the optimal fluctuation, in dimensionful units
$L_{\bot} \simeq L_z \lambda=L_z/\delta\epsilon^{1/4}$, to the
linear dimensions of the interfaces $L_x,L_y$. If $L_x,L_y \gg
L_{\bot}$ the asymptotics is effectively two dimensional [$d=2$],
while for $L_y \gg L_{\bot}, L_x \ll L_{\bot}$ the asymptotic DOS
is effectively one dimensional [$d=1$]. Since $L_{\bot}$ increases
as the energy gets closer to the average gap edge, it is clear
that in any finite size system the applicable asymptotics might
exhibit various crossovers, $2{\rm d} \rightarrow 1{\rm d}
\rightarrow 0{\rm d}$, as $\delta\epsilon \rightarrow 0$. In
particular, the tails are zero dimensional when $L_x,L_y <
L_{\bot}$, in which case the asymptotic form of the DOS is
obtained by calculating the integral
\begin{eqnarray}
\frac{\langle \nu \rangle}{\nu_0} &\simeq& 3.72 \int \frac{d(\delta \epsilon_g)}{\sqrt{2\pi\eta_0}}\;
\sqrt{\delta\epsilon_g-\delta\epsilon}\;e^{-\frac{\delta\epsilon_g^2}{2\eta_0}} \nonumber \\
& \approx & \frac{1}{\delta\epsilon^{3/2}}\;e^{-S_0},
\end{eqnarray}
where $\eta_0=1/(n_{imp}V)$ [$V=L_xL_yL_z$] and $S_0=1/(2\eta_0)\delta\epsilon^2$.
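As a sanity check on this zero-dimensional limit, the Gaussian average above can be evaluated numerically and compared with its leading optimal-fluctuation (Laplace) asymptotics. The sketch below drops the overall factor $3.72$ and keeps the full Laplace prefactor that the final expression above abbreviates to $\delta\epsilon^{-3/2}e^{-S_0}$:

```python
import math

def avg_dos_0d(de, eta, n=200000, t_cut=40.0):
    """Gaussian average of sqrt(de_g - de) over de_g with variance eta
    (the zero-dimensional integral, without the overall factor 3.72)."""
    t_max = t_cut * eta / de          # integrand decays on the scale eta/de
    dt = t_max / n
    s = 0.0
    for i in range(n):                # midpoint rule copes with sqrt(t) at t = 0
        t = (i + 0.5) * dt
        s += math.sqrt(t) * math.exp(-(de + t) ** 2 / (2.0 * eta))
    return s * dt / math.sqrt(2.0 * math.pi * eta)

def avg_dos_0d_laplace(de, eta):
    """Leading asymptotics for de >> sqrt(eta):
    (sqrt(pi)/2) (eta/de)^{3/2} exp(-de^2/(2 eta)) / sqrt(2 pi eta),
    i.e. proportional to de^{-3/2} exp(-S_0) with S_0 = de^2/(2 eta)."""
    return (math.sqrt(math.pi) / 2.0) * (eta / de) ** 1.5 \
        * math.exp(-de ** 2 / (2.0 * eta)) / math.sqrt(2.0 * math.pi * eta)
```

The ratio of the exact integral to the asymptotic form approaches one from below as $\delta\epsilon^2/\eta_0 = 2S_0$ grows, confirming the $\delta\epsilon^{-3/2}e^{-S_0}$ tail.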
\section{Mesoscopic vs. Classical fluctuations.}
In the previous section we have discussed the asymptotic density
of states below the Thouless gap originating from classical
fluctuations, i.e. inhomogeneities in the concentration of
impurities or equivalently in $1/\tau$. As discussed in the
introduction, this mechanism to generate subgap states is
complementary to mesoscopic fluctuations of the gap edge.
The tails
associated to mesoscopic gap fluctuations have been calculated by Ostrovsky, Feigel'man and
Skvortsov in Ref.[\onlinecite{Ostrovsky}]. To exponential accuracy, the subgap DOS associated to mesoscopic fluctuations
is $\langle \nu \rangle/\nu_0 \propto \exp[-\tilde{S}_d] $, where
\begin{eqnarray}\label{meso}
\tilde{S}_d &\simeq& \tilde{a}_d\; G_d\;
(\delta\epsilon)^{\frac{6-d}{4}},
\end{eqnarray}
where $\tilde{a}_d$ is a constant [$\tilde{a}_0\simeq 1.9$, $\tilde{a}_1\simeq 4.7$, and $\tilde{a}_2 \simeq 10$],
and $G_d$ is the effective dimensionless conductance
\begin{eqnarray}
G_0&=&4\pi \nu_0 D \frac{L_x L_y}{L_z},\\
G_1&=&4\pi \nu_0 D L_x, \\
G_2&=&4\pi \nu_0 D L_z.
\end{eqnarray}
The scale of the optimal fluctuation associated to mesoscopic fluctuations is also
$L_{\bot} \simeq L_z/(\delta\epsilon)^{1/4}$. Therefore, the effective dimensionality
$d$ is to be determined according to the criteria presented in the previous section.
Before discussing the comparison of mesoscopic and classical
fluctuations, let us first explain the rationale behind the
separation of these two contributions. Though it is clear that the
only physical fluctuations in a real sample are associated to
fluctuations in the positions of impurities, these fluctuations
can affect the DOS in two ways: \it i)- \rm depress the Thouless
gap edge by increasing locally the scattering rate [classical
fluctuations], or \it ii)- \rm take advantage of interference
effects in the quasiparticle wave functions to generate
quasiparticle states that couple inefficiently to the
superconducting banks [mesoscopic fluctuations]. It makes sense to
think of the two types of effects separately if the actions associated
to them are very different in magnitude [$\tilde{S} \gg S$ or vice
versa]. Obviously, in the crossover region, where $S \approx
\tilde{S}$ the separation of these two mechanisms is meaningless,
because the system can take advantage of both at the same time.
With this caveat, let us proceed in the comparison of these two contributions,
starting with the zero dimensional case. Since the dimensionless conductance
is $G_0 \approx E_g/\delta$, where $\delta \approx 1/(\nu_0 V)$
is the level spacing, then the $d=0$ action associated to mesoscopic
fluctuations can be written as
\begin{eqnarray}\label{universal}
\tilde{S}_0 \approx \left(\frac{\delta E}{\Delta_0}\right)^{3/2},
\end{eqnarray}
where $\Delta_0=(E_g \delta^2)^{1/3}$. Physically, $\Delta_0$ can be
interpreted as being the \it effective \rm
level spacing right above the gap edge. Indeed, from Eq.(\ref{resultDOSuniform})
one sees that
\begin{eqnarray}
\nu \approx \frac{1}{\pi V}\;\sqrt{\frac{\delta E}{\Delta_0^3}}.
\end{eqnarray}
Therefore, the result of Eq.(\ref{universal}) indicates that tails originating
from mesoscopic fluctuations of the gap edge are universal [in $d=0$], in
accordance with the conjecture formulated in Ref.[\onlinecite{Brouwer1}] on the basis
of Random Matrix Theory.
In turn, in the zero dimensional case the action associated to classical fluctuations is
\begin{eqnarray}
S_0 \approx \left(\frac{\delta E}{\delta E_0}\right)^{2},
\end{eqnarray}
where $\delta E_0=E_g/\sqrt{n_{imp} V}$ is the scale of typical fluctuations
of the gap edge associated to fluctuations of the concentration of impurities.
The dimensionless parameter controlling which mechanism dominates
is therefore
\begin{eqnarray}
\gamma_0=\frac{\Delta_0}{\delta E_0}.
\end{eqnarray}
Clearly, for $\gamma_0 \gg 1$ mesoscopic fluctuations dominate the subgap tails,
while for $\gamma_0 \ll 1$ classical fluctuations give the largest contribution
to the subgap DOS~\cite{detailedcomparison}.
Writing $\gamma_0$ in terms of elementary length scales, one can estimate
\begin{eqnarray}\label{gamma0}
\gamma_0 &\approx& \frac{1}{k_F l}\;\frac{1}{\sqrt{k_F^2 \sigma}}
\frac{(L_z/l)^{7/6}}{(L_x L_y/l^2)^{1/6}} \nonumber \\
&\approx& \frac{1}{k_F l}\;
\frac{(L_z/l)^{7/6}}{(L_x L_y/l^2)^{1/6}},
\end{eqnarray}
where we used the fact that the scattering cross section of a single impurity
$\sigma$ is typically of the same order as $\lambda_F^2$. Within the assumptions
of the theory, $\gamma_0$ is the ratio of two large numbers, and therefore
its precise value depends on the system parameters. However, from Eq.(\ref{gamma0})
we see that making the junction longer and longer, i.e. increasing $L_z$,
tends to favor mesoscopic fluctuations. Intuitively, this is due to
the fact that as $L_z$ increases, the dimensionless conductance of the junction
diminishes while the average number of impurities
increases, therefore suppressing the associated fluctuations of the gap edge.
At the same time, increasing the area of the junction, or making them cleaner,
reverses the situation. In summary, mesoscopic fluctuations are favored
in \it long and dirty \rm junctions, while classical fluctuations are favored in
\it wider and/or cleaner \rm ones.
Since in higher dimensionalities the linear scale of the optimal fluctuation
associated to the two mechanisms is identical [$L_{\bot}=L_z/(\delta\epsilon)^{1/4}$],
it is possible, and physically suggestive, to reduce the form of the actions
in $d=1,2$ to a zero dimensional action calculated within the typical volume of
the optimal fluctuation. The latter is $V_{\bot}=L_{x}L_{\bot}L_z$ for $d=1$, and
$V_{\bot}=L_{\bot}^2 L_z$ in $d=2$.
For example, for $d=1$ one can write
\begin{eqnarray}
S_1 &\approx& n_{imp}L_x L_{\bot} L_z \;(\delta \epsilon)^2 \nonumber \\
&\approx& \left(\frac{\delta E}{\delta E_{eff}} \right)^2,
\end{eqnarray}
where $\delta E_{eff}=E_g/\sqrt{n_{imp} V_{\bot}}$. Similarly,
\begin{eqnarray}
\tilde{S}_1 &\approx& \left(\frac{\delta E}{\Delta_{eff}} \right)^2,
\end{eqnarray}
where $\Delta_{eff}=(E_g \delta_{eff}^2)^{1/3}$,
$\delta_{eff}=1/(\nu_0 V_{\bot})$ being the level spacing in the
volume of the optimal fluctuation. In analogy to the zero
dimensional case, one is therefore led to conclude that also
for one-dimensional tails \it long and dirty \rm junctions are
dominated by mesoscopic fluctuations, while \it wider and/or
cleaner \rm junctions favor classical ones. This qualitative
statement is indeed correct, but the proof is complicated by the
energy dependence of $L_{\bot}$.
The appropriate way to proceed for $d=1,2$ is to write
the actions associated to classical and mesoscopic fluctuations in compact form as
\begin{eqnarray}
S&=&\left(\frac{E_g-E}{\delta E_d}\right)^{\frac{8-d}{4}},\\
\tilde{S}&=&\left(\frac{E_g-E}{\Delta_d}\right)^{\frac{6-d}{4}}
\end{eqnarray}
where $\delta E_d={E_g}/{(a_d\;n_d L_z^d)^{{4}/{(8-d)}}}$, and
$\Delta_d={E_g}/{(\tilde{a}_d G_d)^{{4}/{(6-d)}}}$. Therefore, the dimensionless
parameter that determines which contribution dominates the subgap DOS is
\begin{eqnarray}
\gamma_d \equiv \frac{\Delta_d}{\delta E_d}.
\end{eqnarray}
If $\gamma_d \gg 1$, the subgap DOS is dominated by mesoscopic gap fluctuations,
and the applicable result is Eq.(\ref{meso}). On the other hand, for $\gamma_d \ll 1$
the DOS below the gap is determined by long-wave fluctuations of $1/\tau$ [Eq.(\ref{finalresult})].
Finally, estimating $\gamma_d$ in terms of elementary length scales, one obtains
\begin{eqnarray}
\gamma_1& \approx & \frac{1}{(k_F l)^{16/35}}\;\frac{(L_z/l)^{8/7}}{(L_x/l)^{8/35}},\\
\gamma_2& \approx & \frac{1}{(k_F l)^{2/3}}\;(L_z/l).
\end{eqnarray}
In analogy to Eq.(\ref{gamma0}), the fact that $\gamma_d$ is proportional to a
power of $L_z/l$ implies that mesoscopic fluctuations are dominant in long junctions,
while the inverse proportionality of $\gamma_d$ to a power of $k_F l$ and of
the linear size of the interface [in $d=0,1$] implies that wide interfaces and/or
cleaner samples may favor the contribution arising from classical fluctuations.
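These order-of-magnitude criteria are easy to tabulate. The sketch below simply encodes the estimates for $\gamma_0$, $\gamma_1$, and $\gamma_2$ given above; as in the text, all numerical prefactors of order one are dropped, so only the qualitative trend is meaningful:

```python
def gammas(kFl, Lz_l, Lx_l, Ly_l):
    """Order-of-magnitude estimates of gamma_d = Delta_d / delta E_d.
    Arguments are the dimensionless ratios k_F l, L_z/l, L_x/l, L_y/l.
    gamma_d >> 1: mesoscopic fluctuations dominate the subgap tails;
    gamma_d << 1: classical (concentration) fluctuations dominate."""
    g0 = (1.0 / kFl) * Lz_l ** (7.0 / 6.0) / (Lx_l * Ly_l) ** (1.0 / 6.0)
    g1 = (1.0 / kFl ** (16.0 / 35.0)) * Lz_l ** (8.0 / 7.0) / Lx_l ** (8.0 / 35.0)
    g2 = (1.0 / kFl ** (2.0 / 3.0)) * Lz_l
    return g0, g1, g2
```

For instance, a long and dirty junction (large $L_z/l$, moderate $k_F l$) pushes all three $\gamma_d$ above unity, while a wide and clean one pushes them below unity, in line with the qualitative conclusion above.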
~\label{sec3}
\section{Conclusions}
In this paper, we discussed the effect of inhomogeneous
fluctuations of the concentration of impurities, or equivalently
of $1/\tau$, on the tails of the DOS below the Thouless gap in
diffusive SNS junctions. We have shown that these classical
fluctuations lead to the formation of subgap quasiparticle states
and are complementary to mesoscopic fluctuations in determining
the asymptotic DOS. Finding the dimensionless parameter that
controls which mechanism gives the dominant contribution to the
subgap tails, one finds that, qualitatively, mesoscopic
fluctuations are favored in long and dirty junctions, while
classical ones dominate in wider and/or cleaner ones.
We have observed that, as for diffusive superconductors containing
magnetic impurities, and for diffusive superconductors with an
inhomogeneous coupling constant, the two contributions are
formally related by a dimensional reduction by $2$, both at the
level of instanton equations determining the optimal fluctuation,
and in the dependence of the DOS on the distance from the gap edge
$\delta\epsilon$. As in other physical systems~\cite{Cardy}, it is
natural to expect that supersymmetry is at the root of dimensional
reduction also in this context. This fact could in principle be
elucidated generalizing the Sigma Model describing mesoscopic
fluctuations to include the physics associated to classical
fluctuations.
\section{Acknowledgements.}
I would like to thank E. Lebanon, A. Schiller, and especially
L. B. Ioffe and M. M\"{u}ller for discussions. This work is supported by NSF
grant DMR 0210575.
| 10,111 |
\section{Introduction}
Suspended nanotubes form an interesting and promising system for
various nanoelectromechanical device setups and have been studied
both experimentally\cite{nygard01,leroy05,sapm05} and
theoretically\cite{kina03,jons04,jons05,sapm05,ustu05,izum05}.
Because of their large aspect ratio, nanotubes can be modeled as
simple one-dimensional strings using classical elasticity
theory\cite{suzu02}. Here we study the electromechanical coupling
when suspended nanotubes are put in a single-electron-transistor
(SET) setup.
When nanotubes are contacted by electrodes, they in most cases form
contacts with a large resistance, which results in Coulomb blockade
behavior. Single-electron-tunnelling devices of this type have also
been fabricated with the nanotube suspended between the two
electrodes\cite{nygard01,leroy05,sapm05}. For these devices the
interesting possibility arises that a coupling between the
electronic degree of freedom and the vibration might show up in the
current. Such a coupling has indeed been observed in several
experiments. In the first example, LeRoy et al.\cite{leroy05} observed
phonon sidebands with both absorption and emission peaks, which were
taken as evidence for the radial breathing mode being strongly
excited and thus behaving essentially as a classical external
time-dependent potential.
In the quantum regime the electron-vibron coupling leads to steps in
the IV characteristic, similar to the well-known Franck-Condon
principle. This has been seen in a number of single-molecule
devices\cite{park00,pasu05} and is well understood in terms of rate
equations\cite{boes01,mcca03,brai03flen,koch05}. Recently, similar
physics was observed in suspended nanotubes\cite{sapm05} where the
vibrational energy suggested that the sidebands were due to coupling
to a longitudinal stretching mode. However, the coupling mechanism
was unclear. It was suggested in Ref. \cite{sapm05}, that the
electric field parallel to the tube coupled to the nuclear motion.
Here we will argue that due to screening in the tube the
longitudinal electric field is too weak to excite the longitudinal
mode, and instead point to the non-linear coupling between the
transverse and longitudinal modes as the possible coupling mechanism.
The paper is organized as follows. First, we discuss in
section~\ref{sec:electrostat} the electrostatics of the charged
suspended nanotube, followed by an account in
section~\ref{sec:elastic} of the elastic properties of the hanging
tube. In section~\ref{sec:FC}, the modes of the tube are quantized
and the Franck-Condon overlap factors are calculated. Finally,
conclusions are given in section~\ref{sec:disc}.
\begin{figure}
\centerline{\includegraphics[width=0.5\textwidth]{fig1.eps}}
\caption{Illustration of suspended nanotube device.}
\label{fig:cnt}
\end{figure}
\section{Electrostatics of the charged nanotube}
\label{sec:electrostat}
We now discuss the electrostatics of the charged suspended nanotube.
First we consider the longitudinal electric field, and then the
radial field.
\subsection{Electric field parallel to the nanotube}
\label{sec:Epar}
For a metallic tube there would, of course, be no electric field in
the longitudinal direction. However, the tube has a finite screening
length due to the kinetic energy. We have analyzed this situation
using the following density functional for the total energy of a
nanotube with linear dispersion
\begin{equation}\label{Frho}
F[\rho]=
\frac{\hbar}{2}\int_0^L dx\,v_\mathrm{F}^{{}}(x)[\rho(x)]^2+\frac12\int_0^L dx\int_0^L
dx'\,\rho(x)V(x,x')\rho(x'),
\end{equation}
where $v_F$ is the Fermi velocity and $V(x,x')$ is the effective 1D
potential for a cylindric conductor. Details about the interaction
and the solution are given in Appendix A. One should include
screening due to both gate and source-drain electrodes. The gate
can be included as a metallic plane at some distance $h$, and the
source-drain electrodes as a section of the wire with $v_\mathrm{F}^{{}}=0$. See
figure~\ref{fig:model} for an illustration of this.
\begin{figure}
\centerline{\includegraphics[width=0.5\textwidth]{fig2.eps}}
\caption{The model used to find the electric field. The electrodes are represented by a 1D lead with $v_F=0$.}
\label{fig:model}
\end{figure}
To minimize the energy functional in \eqref{Frho} under the
constraint that a charge of one electron is added to the tube, we
add a Lagrange multiplier
\begin{equation}\label{Frholambda}
F_1[\rho,\lambda]=F[\rho]+\lambda\left(\int_0^L dx\rho(x)-e\right).
\end{equation}
First, we minimize with respect to $\rho$ and find
\begin{equation}\label{Erhomin}
\frac{\delta F_1}{\delta \rho(x)}=
\hbar v_\mathrm{F}^{{}} \rho(x)+\int_0^L
dx'\,V(x,x')\rho(x')+\lambda=0,
\end{equation}
and then with respect to $\lambda$:
\begin{equation}\label{Dlambda}
\frac{\partial F_1}{\partial \lambda}=\int_0^L dx\rho(x)-e=0.
\end{equation}
These two linear equations are readily solved numerically. Once the
solution is found, the electric field is given by
\begin{equation}\label{Exx}
eE_x(x)=-\frac{\partial }{\partial x}\int_0^L
dx'\,V(x,x')\rho(x').
\end{equation}
The important parameters in this solution are the aspect ratio of
the tube, i.e., $\frac{L}{R}$, and the strength of the interaction
\begin{equation}\label{rs}
r=\frac{e^2}{4\pi\epsilon_0 \hbar v_\mathrm{F}^{{}}}.
\end{equation}
For typical parameters, one has $r=10-20$, while the aspect ratio is
$200-2000$. The distance to the gate is not important, as long as it is
longer than the length over which the electric field decays, which is
typically the case. Our numerical solution gives an electric field
comparable to the results of Guinea\cite{guin05} (see also
reference~\cite{mish05}), which results in an electric force of
order $eE_x\sim 10^{-9}$ N, for typical nanotube devices. However,
this field is limited to a small region near the contacts, and
therefore the total effect of it is small.
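To illustrate the numerical solution, the two stationarity conditions can be discretized on a grid and solved as one bordered linear system. The sketch below uses a regularized Coulomb kernel of radius $R$ as a stand-in for the Appendix A interaction, in units $\hbar v_F=e=1$; the kernel, the grid, and the parameter values are illustrative assumptions, not the actual computation of the paper:

```python
import numpy as np

def solve_density(L=100.0, R=1.0, r=15.0, n=400):
    """Minimize the discretized F[rho] of Eq. (Frho) subject to the
    unit-charge constraint, via the Lagrange-multiplier conditions
    (Eqs. Erhomin, Dlambda) written as a bordered linear system."""
    dx = L / n
    x = (np.arange(n) + 0.5) * dx
    # stand-in interaction: strength r, regularized on the tube radius R
    V = r / np.sqrt((x[:, None] - x[None, :]) ** 2 + R ** 2)
    M = np.zeros((n + 1, n + 1))
    M[:n, :n] = np.eye(n) + V * dx   # hbar v_F rho + int V rho + lambda = 0
    M[:n, n] = 1.0                   # the multiplier lambda
    M[n, :n] = dx                    # charge constraint: sum(rho) dx = 1
    b = np.zeros(n + 1)
    b[n] = 1.0
    rho = np.linalg.solve(M, b)[:n]
    return x, rho
```

The resulting density is symmetric and piles up near the two contacts, so that the field $eE_x\propto\partial_x\rho$ of Eq.(\ref{Exx2}) is concentrated in a small region near the ends, consistent with the discussion above.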
For $r\gg 1$, we can in fact make a simple approximation which quite
accurately describes the full numerical solution. The electric field
can be related to the charge density by differentiating the condition
(\ref{Erhomin}) with respect to $x$:
\begin{equation}\label{Exx2}
eE_x(x)=\hbar\frac{\partial }{\partial x}\left[\rho(x)v_F(x)\right].
\end{equation}
Because $\rho(x)$ changes little along the wire, we may set
$\rho\approx 1/L$ and we thus obtain
\begin{equation}\label{Exx3}
eE_x(x)\approx \frac{\hbar v_F }{L}\left[\delta(x)-\delta(x-L)\right].
\end{equation}
The width of the delta function will be given by microscopic details
of the interface between the tube and the contact, i.e. a length
scale of the order of the radius of the tube itself. This length
scale we denote by $x_0$.
\subsubsection{The electrostatic force in the longitudinal direction}
The term in the Hamiltonian describing the interaction between the
electron charge density and the nuclear system is
\begin{equation}\label{Helphlong}
H_{\mathrm{el-vib},x}=-\int dxdx'\, \rho(x)V(x,x')\rho_n(x'),
\end{equation}
where $\rho_n(x)$ is the density of the positive ions in the tube.
The force per length acting on the mechanical degrees of freedom is
therefore given by $eE_x\rho_n$. In terms of the displacement field
defined below in \eqref{udef}, $H_{\mathrm{el-vib},x}$ becomes
\begin{equation}\label{Helph1}
H_{\mathrm{el-vib},x}=-\rho_0 \int dx\,dx'\, \rho(x)V(x,x')\left[\partial_{x'} u(x')\right]=
-e\rho_0 \int dx\,E_x(x)\, u(x).
\end{equation}
\subsection{Electric field perpendicular to the nanotube}
\label{sec:Eperp}
To find the electric field in the radial direction we model the
nanotube as a distributed capacitor similarly to Sapmaz et
al.\cite{sapm03}. We include capacitances to the electrodes $C_l$
and to the gate
\begin{equation}\label{Ctot}
C=C_l+C_g,
\end{equation}
where the capacitance to the gate is
\begin{equation}\label{Cgate}
C_g=\int_0^L dx\, c(h(x)),\quad c(h)=\frac{2\pi\epsilon_0}{\cosh^{-1}(h/R)},
\end{equation}
with $c$ being the distributed capacitance of a tube over a plane.
To find the total charge on the tube, we write the total
electrostatic energy as
\begin{equation}\label{Etotq}
W=\frac{q^2}{2C}-q\Delta \Phi/e,
\end{equation}
where $C$ is the total capacitance, $q$ the charge, and $\Delta \Phi$ is
the difference between the nanotube and the electrode work
functions. (Here we neglect the effect of the source, drain and gate
voltages, because they are considerably smaller than $\Delta \Phi$.)
The optimum charge is thus
\begin{equation}\label{qopt}
q_0\equiv n_0e=\Delta \Phi \,C/e.
\end{equation}
For single-walled carbon nanotubes the work function is about 4.7
eV\cite{gao01,zhao02,shan05}, while for gold it is 5.1 eV. For
typical devices $C\sim 10^{-17}$ F and hence $n_0\sim 30$. The
electrostatic energy is used in the following section to calculate
the force acting on the tube.
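As a rough numerical illustration, the geometry below (a 1 $\mu$m tube of radius 0.65 nm suspended 300 nm above the gate) is an assumed, typical set of values rather than a specific device, and the lead capacitance $C_l$ is neglected:

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def gate_capacitance(L, h, R):
    """C_g = L * c(h) for a tube of radius R a distance h above a plane,
    using the distributed capacitance c(h) of Eq. (Cgate)."""
    return L * 2.0 * math.pi * EPS0 / math.acosh(h / R)

def equilibrium_charge(C, dPhi):
    """n_0 = Delta_Phi * C / e^2: number of electrons pulled onto the
    tube by the work-function difference dPhi (given in volts)."""
    e = 1.602e-19
    return dPhi * C / e

Cg = gate_capacitance(1e-6, 300e-9, 0.65e-9)  # of order 1e-17 F
n0 = equilibrium_charge(Cg, 5.1 - 4.7)        # of order tens of electrons
```

With these assumed numbers the estimate reproduces the orders of magnitude quoted above, $C\sim 10^{-17}$ F and $n_0$ of a few tens.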
\subsubsection{The electrostatic force in the transverse direction}
Below we solve for the distortions of the wire due to the
electrostatic forces. The force in the direction perpendicular to
the charged wire (denoted the $z$-direction) is given by
\begin{equation}\label{kedef}
k=-\frac{d W}{dC_g}\left. \frac{\delta c_{g}}{\delta
z(x)}\right\vert_{z=0}=\,\left(\frac{C_g}{C}\right)^2\frac{e^2n_0^2}{4\pi\epsilon_0
hL^2},
\end{equation}
where
\begin{equation}\label{dCgdz}
\left.\frac{\delta c_{g}}{\delta z(x)}\right\vert_{z=0}=\frac{2\pi
\varepsilon
_{0}^{{}}}{h\left[\cosh^{-1}(h/R)\right]^{2}}=\frac{C_g^2}{2\pi\epsilon_0
hL^2}, \quad\mathrm{with}\,C_{g}=Lc(h).
\end{equation}
\section{Elastic properties}
\label{sec:elastic}
In this section, we discuss in detail the elastic properties of a
suspended nanotube. Most devices are made by under-etching after
deposition of the nanotube, which is done at room temperature.
Therefore, since the thermal expansion coefficient of nanotubes is
negative (a 100 nm wire expands a few nm upon cooling from room temperature to 4 K),
it seems unlikely that there is a large tension in the tube unless it
becomes heavily charged\cite{sapm03}. When the tube is charged it is
attracted towards the metallic gates and leads.
The radial force, of course, couples to the breathing-type modes,
which, however, have too large energies ($\sim $ 30 meV) to be
relevant for the low-voltage experiment in reference~\cite{sapm05}.
Here we are interested in the lower part of the excitation spectrum
and disregard this mode. We are left with the bending and stretching
modes. The energies of the bending modes are typically much lower than those
of the stretching modes\cite{ustu05}, and therefore we treat the former as
purely classical: we solve for the bending
mode separately, and via an anharmonic term this solution then acts
as a force term on the longitudinal mode.
We thus consider two possible mechanisms for coupling to the
stretching mode: either directly via the longitudinal electric field
discussed in section~\ref{sec:Epar} or through the perpendicular
field, section~\ref{sec:Eperp}, which bends the tubes and hence also
stretches it.
\subsection{Elasticity theory of a hanging string}
\begin{figure}
\centerline{\includegraphics[width=0.5\textwidth]{fig3.eps}}
\caption{The coordinate system used to describe the hanging tube. A point on the tube, which
before the distortion was at $(x',0)$ has after the deformation the
coordinates $(\xi(x'),z(x')).$}
\label{fig:xizeta}
\end{figure}
Assuming the tube to lie in a single plane and thus having no
torsion in the longitudinal direction, we can describe the
distortion of the bent tube by $(\xi (x),z(x))$, where $x\in \lbrack
0,L]$ runs along the unbent tube in the absence of external forces, see
figure~\ref{fig:xizeta}. (If the tube has some slack, $x$ is a
curvilinear coordinate along the equilibrium position of the hanging tube.)
This means that a point along the tube, which before was at $(x,0)$
after the deformation has the coordinates $(\xi (x),z(x)).$ The
total elastic energy of the tube is then follows from standard
elasticity theory of an elastic string\cite{landau:elasticity}
\begin{equation}\label{W}
W=\int_{0}^{L}dx\left(\frac{ EA[\zeta
(x)]^{2}}{2}+\frac{EI}{2[R(x)]^{2}}
+k_\perp^{{}}z(x)+k_\parallel^{{}}\;u(x)\right)
,
\end{equation}
where $\zeta $ is the linear strain of extension, $R$ is the radius
of curvature, $A=\pi R^2$ the area and $I=\pi R^4/4$ the moment of
inertia, $E$ is Young's modulus, and $k_\parallel,k_\perp$ the
external forces, and where we have defined the longitudinal
displacement field as
\begin{equation}\label{udef}
u(x)=\xi (x)-x.
\end{equation}
The linear extension of an infinitesimal element between $x$
and $x+dx$ is
\begin{equation}
\zeta (x)dx=\sqrt{(\xi (x+dx)-\xi (x))^{2}+(z(x+dx)-z(x))^{2}}-dx,
\end{equation}
or
\begin{equation}
\zeta (x)=\left( \sqrt{[1+u^{\prime }(x)]^{2}+[z^{\prime
}(x)]^{2}}-1\right).
\end{equation}
The linear extension elastic energy is thus
\begin{equation}\label{Wlin}
W_{\mathrm{lin}}=\frac{EA}{2}\int_{0}^{L}dx\left( \sqrt{[1+u^{\prime
}(x)]^{2}+[z^{\prime }(x)]^{2}}-1\right)^2 .
\end{equation}
The curvature contribution is determined in a similar way. First,
the unit tangential vector is
\begin{equation}
\mathbf{t=}\frac{\left( \xi ^{\prime }(x),z^{\prime
}(x)\right)}{\sqrt{[\xi ^{\prime }(x)]^{2}+[z^{\prime }(x)]^{2}}} ,
\end{equation}
which then gives the square of the radius of curvature as
\begin{equation}
R^{-2}=\left(\frac{d\mathbf{t}}{dl}\right)^{2}=\left(\frac{d\mathbf{t}}{dx}
\frac{dx}{dl}\right)^{2},\quad \frac{dl}{dx}=\sqrt{[1+u^{\prime
}(x)]^{2}+[z^{\prime }(x)]^{2}},
\end{equation}
and then the curvature contribution to the elastic energy finally
becomes
\begin{equation}\label{Wcurv}
W_{\mathrm{curv}}=\frac{EI}{2}\int_{0}^{L}dx\,
\frac{(z'(x)u''(x)-(1+u'(x))z''(x))^2}{([1+u'(x)]^2+[z'(x)]^2)^3}.
\end{equation}
\subsection{Weak distortions}
Since we are interested in small deflections, we expand the two
elastic energy expressions for small $z$ and $u.$ For
$W_{\mathrm{lin}}$, we obtain to third order in $u $ and $z$
\begin{equation}\label{Wlinf}
W_{\mathrm{lin}}\approx \frac{EA}{2}\int_0^L\,dx\left( [u^{\prime
}(x)]^{2}+\frac{[z^{\prime }(x)]^{4}}{4}+u^{\prime }(x)[z^{\prime
}(x)]^{2}\right) .
\end{equation}
Here we note that the last term couples the bending and stretching
modes. For the curvature contribution, we find to the same order
\begin{equation}
W_{\mathrm{cur}}=\frac{EI}{2}\int_0^L dx\,\left( 1-4u^{\prime
}(x)\right) \left[ z^{\prime \prime }(x)\right] ^{2}\approx
\frac{EI}{2}\int_0^L dx\,\left[ z^{\prime \prime }(x)\right] ^{2}.
\end{equation}
Again, there is a term which couples the two modes. However, for
nanotubes this term is much smaller than the last term in
\eqref{Wlinf}, because it is suppressed by a factor $(R/L)^2$, and hence
we have neglected the coupling term in $W_\mathrm{cur}$.
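The graded expansion leading to \eqref{Wlinf} (counting $u'$ as second order and $z'$ as first order, so that all three retained terms are of fourth order) can be checked symbolically; the following sympy sketch is illustrative:

```python
import sympy as sp

t, a, b = sp.symbols('t a b')  # a stands for u'(x), b for z'(x)
# grade u' as second order and z' as first order: u' -> t**2 * a, z' -> t * b
zeta = sp.sqrt((1 + t**2*a)**2 + (t*b)**2) - 1  # linear strain of extension
integrand = sp.series(zeta**2, t, 0, 5).removeO().subs(t, 1)
print(sp.expand(integrand))  # a**2 + a*b**2 + b**4/4, the integrand of \eqref{Wlinf}
```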
\section{Solution for the bending of the tube}
As mentioned above, the bending mode itself has a resonance
frequency too low to be seen in tunnelling spectroscopy ($\sim 100$
MHz\cite{ustu05}), even when under tension due to the charging of the
wire, which means that it can be treated as a classical degree of
freedom and Franck-Condon type physics is not involved. In
contrast, the longitudinal stretching mode has been seen in the
single-electron-transistor (SET) device\cite{sapm05} and here we
wish to calculate the Franck-Condon coupling constants for this
mode. Therefore we take the following approach: first we solve for
the bending mode classically and then insert this as an external
force acting on the longitudinal mode. The differential equation for
$z(x)$ is
\begin{equation}
IEz^{\prime \prime \prime \prime }-\frac{AE}{2}(z^{\prime
})^{2}z^{\prime \prime }=k. \label{eqom}
\end{equation}
This equation cannot be solved analytically. One approach is to
approximate $(z^{\prime })^{2}$ by its average value, which
corresponds to assuming constant tension in the wire\cite{tension}.
Below we solve for the bending function $z(x)$ in two regimes: the
linear and the non-linear regime. Once we know $z(x)$, we will be
interested in the \textit{change of $z(x)$ due to tunnelling of a
single electron}. For large $n_0$, the relevant change is thus
\begin{equation}\label{zegendef}
z_e(x)=\frac{dz(x)}{dn_0}.
\end{equation}
This change will then couple to the longitudinal mode via the
coupling term in \eqref{Wlinf}.
\subsection{Linear regime}
For weak forces we can simply neglect the non-linear term in
\eqref{eqom}, and with boundary conditions $z(0)=z(L)=z^{\prime
}(0)=z^{\prime }(L)=0$ the solution is
\begin{equation}\label{z0def}
z_0(x)=\frac{kL^{4}}{24EI}\left( 1-\frac{x}{L}\right) ^{2}\left(
\frac{x}{L}\right)^{2}. \label{smallK}
\end{equation}
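That \eqref{z0def} indeed solves the linearized equation with clamped boundary conditions can be verified directly; a quick sympy sketch:

```python
import sympy as sp

x, L, k, E, I = sp.symbols('x L k E I', positive=True)
z0 = k*L**4/(24*E*I) * (1 - x/L)**2 * (x/L)**2   # \eqref{z0def}
# z0 solves the linearized equation EI z'''' = k ...
assert sp.simplify(E*I*sp.diff(z0, x, 4) - k) == 0
# ... with clamped boundary conditions z(0) = z(L) = z'(0) = z'(L) = 0
z0p = sp.diff(z0, x)
assert all(f.subs(x, pt) == 0 for f in (z0, z0p) for pt in (0, L))
```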
The shift in $z_0(x)$ due to the charge of a single electron is
according to \eqref{zegendef} given by
\begin{equation}\label{zedef}
z_{0,e}(x)=\frac{e^2 n_0}{12\pi\epsilon_0 h}\frac{1}{E
A}\left(\frac{L}{R}\right)^2\left(\frac{C_g}{C}\right)^2\left(
1-\frac{x}{L}\right) ^{2}\left( \frac{x}{L}\right)^{2}.
\label{smallKe}
\end{equation}
For a tube with $R=$ 0.6 nm, $L=1\,\mu$m, and $E \approx 10^{12}$ Pa
and a device with $h=200$ nm, $n_0=50$, and $C_g/C=0.5$, the maximum
distortion is of order a few nm.
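As a rough numerical cross-check of this estimate (a sketch using the parameter values quoted above, with physical constants in SI units):

```python
from math import pi

e    = 1.602e-19   # elementary charge, C
eps0 = 8.854e-12   # vacuum permittivity, F/m
R, L = 0.6e-9, 1e-6          # tube radius and length, m
E    = 1e12                  # Young's modulus, Pa
h    = 200e-9                # distance to the gate, m
n0, CgC = 50, 0.5            # charge number and C_g/C
A = pi * R**2

# maximum of (1 - x/L)^2 (x/L)^2 is 1/16, attained at x = L/2
z_max = (e**2 * n0 / (12*pi*eps0*h) / (E*A)
         * (L/R)**2 * CgC**2 / 16)
print(f"maximum distortion of \\eqref{{smallKe}}: {z_max*1e9:.2f} nm")  # of order a nanometre
```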
The linear approximation is valid when the second term in
\eqref{eqom} is much smaller than the first. Using the solution in
\eqref{smallK}, this translates to the condition
\begin{equation}\label{condition}
10^{-6}\left(\frac{L}{R}\right)^2\left(\frac{kL^3}{EI}\right)^2\ll 1.
\end{equation}
For the typical parameters used below, the number on the left hand
side of (\ref{condition}) is $\lesssim 1$, and therefore the linear
approximation is only marginally valid.
\subsection{Non-linear regime}
For larger distortions the non-linear term in \eqref{eqom} becomes
important. In the strongly non-linear regime, we can neglect the
first term and we have
\begin{equation}\label{eqomnonlin}
\frac{AE}{2}(z^{\prime })^{2}z^{\prime
\prime}=\frac{AE}{6}\frac{d}{dx}(z^{\prime })^{3}=-k,
\end{equation}
with boundary condition $z(0)=z(L)=0$. The solution of this
equation is
\begin{equation}\label{znonlin}
z_1'(x)= \left(\frac{6kL}{EA}\right)^{1/3}\left|\frac{x}{L}-\frac12\right|^{1/3}
\mathrm{sign}\left(\frac{L}{2}-x\right).
\end{equation}
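On the branch $x<L/2$, where the absolute value and sign function drop out, the solution can be checked against \eqref{eqomnonlin} directly; note that since $z'$ is dimensionless and $k$ is a force per unit length, the prefactor must be $(6kL/EA)^{1/3}$. A sympy sketch:

```python
import sympy as sp

x, L, k, E, A = sp.symbols('x L k E A', positive=True)
# branch x < L/2: |x/L - 1/2| = 1/2 - x/L and sign(L/2 - x) = +1
z1p = (6*k*L/(E*A))**sp.Rational(1, 3) * (sp.Rational(1, 2) - x/L)**sp.Rational(1, 3)
# \eqref{eqomnonlin}: (AE/6) d/dx (z')^3 = -k
assert sp.simplify(A*E/6 * sp.diff(z1p**3, x) + k) == 0
# symmetry condition z'(L/2) = 0
assert z1p.subs(x, L/2) == 0
```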
In this non-linear regime, the change in the slope of the bending
function $z_1(x)$ due to a single electron charge is
\begin{equation}\label{ze1def}
z_{1,e}'(x)=2\left(\frac{C_g}{C}\right)^{2/3}\left(\frac{e^2}{2\pi\epsilon_0
hAn_0}\right)^{1/3}\left|\frac{x}{L}-\frac12\right|^{1/3}
\mathrm{sign}\left(\frac{L}{2}-x\right).
\end{equation}
\section{Distortion of the longitudinal mode}
Due to the electrostatic forces the equilibrium position of the
longitudinal displacement field $u(x)$ shifts. Since the forces are
small, the displacements are small. For the tunnelling
overlap factors, the important point is, however, whether these
displacements are large compared to the quantum uncertainty length,
which is later seen to be of the order of picometers. In this section, we
calculate the classical displacements of $u(x)$. One example is
shown in figure~\ref{fig:u}.
\subsection{Distortion due to the longitudinal electrostatic force}
The displacement of the longitudinal mode is readily found from its
equation of motion. The displacement due to the longitudinal
electric field follows from
\begin{equation}\label{ulong}
EA u''_0(x)=k_\parallel(x)=-eE_x\rho_0,
\end{equation}
with boundary conditions $u(0)=u(L)=0$. With forces concentrated
near the contacts as in \eqref{Exx3}, there is an abrupt change of
$u(x)$ at $x=0$ and $x=L$; see the red dashed curve in
figure~\ref{fig:u}.
\subsection{Distortion due to the transverse electrostatic force}
Once we have solved for $z(x)$ in \eqref{zegendef}, we can find the
force that acts on the longitudinal displacement field $u$ by
inserting the solution into the last term of \eqref{Wlinf}, and then
identify the force $k_\perp$ in \eqref{W}. This gives
\begin{equation}\label{Kudef}
k_\perp=-\frac{d}{dx}\frac{EA}{2}[z'(x)]^2.
\end{equation}
The size of the displacement follows from the balance between this
force and the strain:
\begin{equation}\label{usolve}
EA u''_0(x)=k_\perp(x) \Leftrightarrow u''_0(x)=-z'(x)z''(x),
\end{equation}
which together with the boundary condition, $u_0(0)=u_0(L)=0$, gives
the solution
\begin{equation}\label{usolvef}
u_0(x)=-\frac12\int_0^x dy \,[z'(y)]^2+\frac{x}{2L}\int_0^L
dy\,[z'(y)]^2.
\end{equation}
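The solution \eqref{usolvef} can be verified symbolically: it vanishes at both ends, and $EAu_0''$ reproduces the force obtained by varying the coupling term of \eqref{Wlinf} with respect to $u$, namely $-\frac{EA}{2}\frac{d}{dx}[z'(x)]^2$. A sympy sketch using the linear-regime profile \eqref{z0def}:

```python
import sympy as sp

x, y, L, k, E, I = sp.symbols('x y L k E I', positive=True)
z = k*L**4/(24*E*I) * (1 - x/L)**2 * (x/L)**2   # linear-regime bending \eqref{z0def}
zp2 = sp.diff(z, x)**2                           # [z'(x)]^2
u0 = (-sp.Rational(1, 2)*sp.integrate(zp2.subs(x, y), (y, 0, x))
      + x/(2*L)*sp.integrate(zp2.subs(x, y), (y, 0, L)))
# boundary conditions u0(0) = u0(L) = 0
assert u0.subs(x, 0) == 0 and sp.simplify(u0.subs(x, L)) == 0
# u0'' equals -(1/2) d/dx [z']^2 = -z' z''
assert sp.simplify(sp.diff(u0, x, 2) + sp.diff(z, x)*sp.diff(z, x, 2)) == 0
```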
One example is shown in figure~\ref{fig:u} (blue curve).
In the next section, we analyze the Franck-Condon overlap factor due
to this electrostatic distortion of the stretching mode.
\begin{figure}
\centerline{\includegraphics[width=0.4\textwidth]{fig4.eps}}
\caption{The solutions for the shifted longitudinal mode $u(x)$ due to the parallel (red dashed curve) and
perpendicular (blue curve) electric fields. We have used typical parameters as in section~\ref{sec:fc}.}
\label{fig:u}
\end{figure}
\section{Quantum mechanics and Franck-Condon overlap factors}
\label{sec:FC}
The longitudinal eigenmodes of a nanotube modeled as a 1D elastic
medium follow from the Hamiltonian
\begin{equation}
\hat{H}=\int_{0}^{L}dx\left( \frac{\hat{p}^{2}(x)}{2\rho
_{m}^{{}}}+\frac{AE }{2}(\partial _{x}\hat{u}(x))^{2}\right) ,
\end{equation}
where $\rho _{m}$ is the mass density per unit length, and $\hat{u}$ and
$\hat{p}$ are conjugate variables, i.e.,
$[\hat{u}(x),\hat{p}(x')]=i\hbar\delta(x-x')$. In order to
diagonalize $\hat{H}$, we introduce the Fourier transforms:
\begin{eqnarray}
\hat{u}(x) &=&\sqrt{2}\sum_{n=1}^{\infty }\sin \left( \frac{\pi
nx}{L} \right) \hat{u}_{n}^{{}},\quad \hat{u}_{n}=\frac{\sqrt{2}}{L}
\int_{0}^{L}dx\sin \left( \frac{\pi nx}{L}\right) \hat{u}(x), \\
\hat{p}(x) &=&\frac{\sqrt{2}}{L}\sum_{n=1}^{\infty }\sin \left(
\frac{\pi nx }{L}\right) \hat{p}_{n}^{{}},\quad
\hat{p}_{n}=\sqrt{2}\int_{0}^{L}dx\sin \left( \frac{\pi
nx}{L}\right) \hat{p}(x),
\end{eqnarray}
where $\hat{u}_{n}$ and $\hat{p}_{n}$ obey
$[\hat{u}_{n},\hat{p}_{n}]=i\hbar$. Now $\hat{H}$ transforms to
\begin{equation}\label{Hunpn}
\hat{H}=\sum_{n=1}^{\infty }\left(
\frac{\hat{p}_{n}^{2}}{2M}+\frac{AE}{2L} \left( \pi n\right)
^{2}\hat{u}_{n}^{2}\right) ,\quad M=\rho _{m}^{{}}L.
\end{equation}
The Hamiltonian (\ref{Hunpn}) is easily diagonalized by
\begin{equation}
\hat{u}_{n}^{{}}=\ell _{0,n}\sqrt{\frac{1}{2}}\left(
\hat{a}_{n}^{{}}+\hat{a} _{n}^{\dagger }\right) ,\quad
\hat{p}_{n}^{{}}=\frac{i}{\ell _{0,n}}\sqrt{ \frac{1}{2}}\left(
\hat{a}_{n}^{\dagger }-\hat{a}_{n}^{{}}\right) ,
\end{equation}
where $\hat{a}_n^{{}}$ and $\hat{a}_n^\dagger$ are usual
annihilation and creation operators, and
\begin{equation}
\Omega =\pi \sqrt{\frac{AE}{ML}}=\frac{\pi }{L}\sqrt{\frac{AE}{\rho
_{m}^{{}} }}\equiv \frac{\pi v_s}{L},\quad \ell
_{0,n}=\sqrt{\frac{\hbar }{nM\Omega }}.
\end{equation}
With these operators, the Hamiltonian (\ref{Hunpn}) becomes
\begin{equation}
\hat{H}=\sum_{n=1}^{\infty }\hbar \Omega n\left(
\hat{a}_{n}^{\dagger }\hat{a}_{n}^{{}}+\frac12\right) .
\end{equation}
As we saw in the previous sections, additional terms in the
Hamiltonian appear due to the force generated by the tunnelling
electron. These are included next.
\subsection{Coupling due to longitudinal electric field}
The longitudinal electric field $E_x$ gives rise to a coupling
Hamiltonian (see \eqref{Helph1}):
\begin{equation}\label{Hparal}
\hat{H}_{\mathrm{el-vib},\parallel}=-e\rho_0\int_{0}^{L}dx~\hat{u}(x)E_x(x)
=\sum_{n=1}^\infty \hat{u}_n f_{n,\parallel},
\end{equation}
where
\begin{equation}\label{fparal}
f_{n,\parallel}^{{}}= -e\rho_0\sqrt{2}\int_{0}^{L}dx~\sin
\left( \frac{\pi nx}{L} \right) E_x(x)\approx-(2\pi\sqrt{2})\frac{
n\hbar v_\mathrm{F}^{{}}\rho_0x_0}{L^2},
\end{equation}
for $n$ even and zero for $n$ odd.
\subsection{Coupling due to the capacitative force}
The transverse force leads to the following term in the Hamiltonian
(see \eqref{Wlinf}):
\begin{equation}\label{Hperp}
\hat{H}_{\mathrm{el-vib},\perp}=\frac{EA}{2}\int_{0}^{L}dx~\hat{u}^{\prime
}(x)[z^{\prime }_e(x)]^{2}=\sum_{n=1}^{\infty
}\hat{u}_{n}^{{}}f_{n,\perp}^{{}},
\end{equation}
where
\begin{equation}\label{fnperp}
f_{n,\perp}^{{}}= \frac{EA}{\sqrt{2}}\frac{n\pi
}{L}\int_{0}^{L}dx~\cos \left( \frac{\pi nx}{L} \right) [z^{\prime
}_e(x)]^{2}.
\end{equation}
\subsection{Franck-Condon overlap factors}
\label{sec:fc}
The tunnelling of an electron leads to a displacement of the
equilibrium displacement field according to
equations~(\ref{Hparal}) and (\ref{Hperp}). Each mode, represented by
$\hat{u}_n$, is shifted by the amount
\begin{equation}\label{elln}
\ell _{n,a}= \frac{Lf_{n,a}^{{}}}{AE(\pi n)^{2}},\quad
a=(\parallel,\perp).
\end{equation}
This allows us to calculate the Franck-Condon parameters, which for
each mode $n$ express the overlap between the eigenstates
around the new equilibrium position and the old ones. The parameter
is defined as\cite{brai03flen}
\begin{equation}\label{gndefa}
g_{n,a}=\frac{1}{2}\left(\frac{\ell_{n,a}}{\ell_{0,n}}\right)^2.
\end{equation}
The size of $g$ determines the character of the vibron sidebands in
the $IV$-characteristics, such that for $g\ll 1$ there are no
sidebands, for $g$ of order one clear sidebands are seen, while for
$g\gg 1$ a gap appears in the $IV$
characteristic\cite{brai03flen,koch05}.
For the Franck-Condon parameter due to the parallel electric field,
we thus have
\begin{equation}\label{gndefp}
g_{n,\parallel}=\frac{4}{\pi^2n^2}\,\frac{M\Omega}{\hbar}\left(\frac{\hbar v_\mathrm{F}^{{}}\rho_0x_0}{AEL}\right)^2.
\end{equation}
Using $x_0\approx 1$ nm, $L=500$ nm, $v_\mathrm{F}^{{}}=10^6$ m\,s$^{-1}$, $R=0.6$
nm, and $E=10^{12}$ Pa, we find $g_{2,\parallel}\sim 10^{-5}$. This
is clearly too small a coupling constant to explain the experimental
findings in reference~\cite{sapm05}.
The coupling constant due to the perpendicular electric field can be
expressed explicitly, for the case of small $z(x)$, using
\eqref{z0def}. The integral in \eqref{fnperp} can then be performed
and we obtain
\begin{equation}\label{gperp}
g_{n,\perp}=\frac{M\Omega L^{2}}{\hbar}\left( \frac{k_eL^{3}}{EI}
\right) ^{4}\frac{\left(n^{2}\pi^{2}-40\right)^{2}}{8 n^{13}\pi
^{14}},\quad \mathrm{for}\quad n\quad \mathrm{even},
\end{equation}
and zero for $n$ odd. Using $k_e$ as defined in \eqref{kedef}, and
typical parameters for single-wall carbon nanotube devices: $R=0.6$
nm, $L=1\,\mu$m, $E=10^{12}$ Pa, $h\simeq 10-200$ nm, $n_0=L
c(0)\Delta\phi\sim 30$, $C_g/C=0.1-0.75$, we find the maximum $g_n$
factor to occur for $n=4$. However, for this range of parameters we
also find $g_4\ll 1$, unless the geometry is such that
\begin{equation}\label{geo}
\frac{n_0\alpha^2}{h [\mathrm{nm}]}>0.1.
\end{equation}
Even though we can get a significant coupling, the condition
(\ref{geo}) does not seem to be compatible with the experimental
realizations in reference~\cite{sapm05}. Even more so because the
coupling strength, \eqref{gperp}, depends strongly on the
length of the wire, which is not seen experimentally.
\section{Conclusion and discussion}
\label{sec:disc}
We have considered the electromechanics of suspended nanotube
single-electron-transistor devices. When the charge on the tube is
changed by one electron the resulting electric field will distort
the tube in both the longitudinal and transverse directions, and
both these distortions couple to the stretching mode. We have
calculated the consequences for the coupling constant for
vibron-assisted tunnelling expressed as the Franck-Condon factor.
This is expressed in terms of the ratio of the classical
displacement to the quantum uncertainty length. Even though both are
in the range of picometers, the effective coupling parameters, $g$,
turn out to be small for most devices.
Because the screening of the longitudinal electric field is very
effective, the dominant interaction seems to be the coupling via the
bending mode. However, only if the tube is very close to the gate do
we get a sizeable $g$-parameter. This could indicate that in the
experiment of Sapmaz et al.\cite{sapm05} the suspended nanotube has
a considerable slack, which would diminish the distance to the gate.
Further experiments and more precise modelling of actual geometries
should be able to resolve these issues.
\ack \vspace{-.25cm} We thank the authors of reference~\cite{sapm05}
for valuable discussions. The work is supported in part by the
European Commission through project FP6-003673 CANEL of the IST
Priority.
| 10,270 |
\section{Introduction}
Let $\hnabla$ be the standard flat affine connection on $\rea^{n+1}$ and fix a $\hnabla$-parallel volume form $\Psi$. Define $\H(F)$ by $\det \hess F = \H(F) \Psi^{\tensor 2}$. An immersed hypersurface $\Sigma$ in $\rea^{n+1}$ is \textit{nondegenerate} if its second fundamental form with respect to $\hnabla$ is nondegenerate. In this case, there is a distinguished equiaffinely invariant transverse vector field defined along $\Sigma$, the \textit{equiaffine normal}. A nondegenerate connected hypersurface $\Sigma$ is an \textit{improper affine sphere} if its equiaffine normals are parallel.
By Theorem \ref{ahtheorem}, the level sets of a smooth translationally homogeneous function $F$ on $\rea^{n+1}$ satisfying
\begin{align}\label{ma}
\H(F) = \kc F^{n+1}
\end{align}
for some $\kc \neq 0$ are improper affine spheres. That $F$ be translationally homogeneous means that there is a constant vector $\rad^{i} \in \rea^{n+1}$ such that $F(x + t \rad) = e^{\la t}F(x)$ for all $t \in \rea$ and $x \in \rea^{n+1}$.
The main result reported here is Theorem \ref{triangularizabletheorem0}, which yields translationally homogeneous solutions of \eqref{ma} having the form $F = e^{P}$ where $P$ is a weighted homogeneous polynomial arising as the characteristic polynomial of the left-symmetric algebra associated with the prehomogeneous action of a simply-connected solvable Lie group. These solutions of \eqref{ma} have the nice properties that they are defined and nonvanishing on all of $\rea^{n+1}$ and their level sets are connected, everywhere nondegenerate graphs that are homogeneous with respect to the action of a group of affine transformations.
An \textit{algebra} $(\alg, \mlt)$ means a finite-dimensional vector space $\alg$ with a bilinear product (multiplication) $\mlt: \alg \times \alg \to \alg$ that need not be either unital or associative. Here mainly Lie algebras and left-symmetric algebras are considered, although other algebras are mentioned occasionally. A \textit{left-symmetric algebra}\footnote{Left-symmetric algebras are also called \textit{pre-Lie algebras}, \textit{Vinberg algebras}, \textit{Koszul-Vinberg algebras}, and \textit{chronological algebras}, and some authors prefer to work with the opposite category of \textit{right-symmetric algebras}.} (abbreviated \textit{LSA}) $(\alg, \mlt)$ is a vector space $\alg$ equipped with a multiplication $\mlt$ such that the associated skew-symmetric bracket $[a, b] = a\mlt b - b\mlt a$ satisfies the Jacobi identity, so makes $\alg$ into a Lie algebra, and such that the left regular representation $L:\alg \to \eno(\alg)$ defined by $L(a)b = a\mlt b$ is a Lie algebra representation, meaning $[L(a), L(b)] = L([a, b])$. Equivalently, the right regular representation $R:\alg \to \eno(\alg)$ defined by $R(a)b = b\mlt a$ satisfies
\begin{align}\label{rlsa}
R(x\mlt y) - R(y)R(x) = [L(x), R(y)].
\end{align}
By \eqref{rlsa} the \textit{trace form} $\tau$ defined on $\alg$ by $\tau(x, y) = \tr R(x)R(y) = \tr R(x\mlt y)$ is symmetric. An LSA is \textit{incomplete} if the linear form $\tr R$ is not zero. For an incomplete LSA with nondegenerate trace form $\tau$, the unique element $r \in \alg$ such that $\tr R(x) = \tau(r, x)$ for all $x \in \alg$, is an idempotent called the \textit{right principal idempotent}. An LSA $(\alg, \mlt)$ defined over a field $\fie$ of characteristic zero is \textit{triangularizable} if there is a basis of $\alg$ with respect to which every $L(x)$ is triangular. By Lemma \ref{cslemma} this is equivalent to the condition that the underlying Lie algebra $(\alg, [\dum, \dum])$ is solvable and for every $x \in \alg$ the eigenvalues of $L(x)$ are contained in $\fie$.
\begin{theorem}\label{triangularizabletheorem0}
Let $(\alg, \mlt)$ be a triangularizable $n$-dimensional LSA over a field of characteristic zero and having nondegenerate trace form $\tau$ and codimension one derived Lie subalgebra $[\alg, \alg]$. Let $G$ be the simply-connected Lie group with Lie algebra $(\alg, [\dum, \dum])$. There are a nonzero constant $\kc$ and a closed unimodular subgroup $H \subset G$ having Lie algebra $[\alg, \alg]$, such that the characteristic polynomial $P(x) = \det(I + R(x))$ of $(\alg, \mlt)$ solves $\H(e^{P}) = \kc e^{nP}$, and the level sets of $P$ are improper affine spheres homogeneous for the action of $H$ and having affine normals equal to a constant multiple of the right principal idempotent $r$.
\end{theorem}
The translational homogeneity of $e^{P}$ is equivalent to the identity $P(x + tr) = P(x) + t$, while the weighted homogeneity of $P$ is the statement that $dP(E) = P$ where $E$ is the vector field $E_{x} = r + r\mlt x$.
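For a concrete illustration of the theorem's conclusion, consider the case $n = 2$ of Example \ref{cayleyexample} below, where by Lemma \ref{cayleypolynomiallemma} the characteristic polynomial is $P = 1 - 2\Phi_2 = 1 + 2x_2 - x_1^2$. Both the Monge-Ampère equation and the translational homogeneity (here along $r = (0, 1/2)$, found by inspection) can be checked with sympy:

```python
import sympy as sp

x1, x2, t = sp.symbols('x1 x2 t')
P = 1 + 2*x2 - x1**2                        # P_2 = 1 - 2*Phi_2
F = sp.exp(P)
H = sp.hessian(F, (x1, x2)).det()           # det Hess(e^P), relative to the standard volume form
assert sp.simplify(H/sp.exp(2*P) + 8) == 0  # H(e^P) = kc * e^{2P} with kc = -8
# translational homogeneity P(x + t*r) = P(x) + t for r = (0, 1/2)
assert sp.expand(P.subs(x2, x2 + t/2) - P - t) == 0
```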
LSAs were introduced by Vinberg in \cite{Vinberg} as a tool in the classification of homogeneous convex cones.
In \cite{Vinberg}, a triangularizable LSA with a Koszul form for which the associated metric is positive definite is called a \textit{clan}; see also \cite{Shima-homogeneoushessian}, where the more general class of triangularizable LSAs equipped with a positive definite Hessian metric is studied (the definitions are recalled in section \ref{hessiansection}). In \cite{Vinberg}, Vinberg showed that the automorphism group of a homogeneous convex cone contains a triangularizable solvable subgroup acting simply transitively on the cone, and established a bijective correspondence between clans and homogeneous convex cones. Although it has not been completely developed, there should be a correspondence, similar to that for homogeneous convex cones, relating a different sort of prehomogeneous action of solvable Lie groups with the domains bounded by homogeneous improper affine spheres.
In the special case of a convex cone that is a component of the complement of the zero set of the fundamental relative invariant of a real form of an irreducible regular prehomogeneous vector space, the relative invariant $Q$ of the prehomogeneous vector space is among the relative invariants of this triangular subgroup. If the underlying space has dimension $n$, then $Q$ solves an equation of the form $\H(Q) = \kc Q^{m}$ where $m = n(k-2)/k$ and $k = \deg Q$. The equation $\H(P) = \kc P^{n}$ results formally when the homogeneity degree $k$ tends to $\infty$, so in a formal sense the analogue for this equation of the degree $k$ homogeneous polynomial solutions of $\H(Q) = \kc Q^{m}$ should be functions that can be regarded as polynomials homogeneous of infinite degree. The conclusion of Theorem \ref{triangularizabletheorem0} shows that this makes sense if one regards a translationally homogeneous exponential of a weighted homogeneous polynomial as having infinite homogeneity degree.
The point relevant here is that the $P$ of Theorem \ref{triangularizabletheorem0} is relatively invariant for the action of the simply-connected Lie group corresponding to the Lie algebra underlying the LSA, so that Theorem \ref{triangularizabletheorem0} fits the case of improper affine spheres in a common framework with the case of proper affine spheres studied in \cite{Fox-prehom}.
Section \ref{affinespheresection} presents the needed background on improper affine spheres. Theorem \ref{ahtheorem} shows that the level sets of a translationally homogeneous function are improper affine spheres if and only if the function solves a Monge-Ampère equation of the form \eqref{ma}. Lemma \ref{improperlemma} shows the equivalence of \eqref{ma} to various other equations of Monge-Ampère type; these alternative formulations are used in the proof of Theorem \ref{triangularizabletheorem0}.
Section \ref{impropersection} reviews background on left-symmetric algebras, affine actions, and completeness. Although most of this material can be found in other sources, it is recalled here to have in one place all that is needed in subsequent sections. Following H. Shima, an LSA is Hessian if it admits a nondegenerate symmetric bilinear form (a metric) satisfying the compatibility condition \eqref{hessianmetric}. Section \ref{hessiansection} treats Hessian LSAs. The technical Lemma \ref{principalidempotentlemma} generalizes to indefinite signature Hessian LSAs results obtained for clans by H. Shima and E.~B. Vinberg. Theorem \ref{lsacptheorem} gives conditions on a Hessian LSA that in conjunction with Theorem \ref{ahtheorem} guarantee that the level sets of its characteristic polynomial are improper affine spheres.
There are many notions of nilpotence used in studying LSAs and section \ref{nilpotencesection} discusses the interrelationships between those most relevant here. Some of the results obtained have purely algebraic interest. Theorem \ref{trivalgtheorem} shows that a finite-dimensional LSA over a field of characteristic zero is nilpotent if and only if it is right nilpotent with nilpotent underlying Lie algebra. The reader should see section \ref{nilpotencesection} for the definitions because terminology related to notions of nilpotent varies with the source; here the conventions follow those standard in the study of nonassociative algebras, so that an algebra is \textit{nilpotent} if the associative multiplication algebra generated by all left and right multiplication operators is nilpotent.
By Lemma \ref{rightnilpotentlemma} such a right nilpotent LSA with nilpotent underlying Lie algebra is triangularizable, and there results the following corollary of Theorem \ref{triangularizabletheorem}.
\begin{corollary}\label{triangularizabletheorem2}
Let $(\alg, \mlt)$ be an $n$-dimensional LSA over a field of characteristic zero that is right nilpotent with nilpotent underlying Lie algebra. Suppose the trace-form $\tau$ is nondegenerate and the derived Lie subalgebra $[\alg, \alg]$ has codimension one. Let $G$ be the simply-connected Lie group with Lie algebra $(\alg, [\dum, \dum])$. There are a nonzero constant $\kc$ and a closed unimodular subgroup $H \subset G$ having Lie algebra $[\alg, \alg]$, such that the characteristic polynomial $P(x) = \det(I + R(x))$ of $(\alg, \mlt)$ solves $\H(e^{P}) = \kc e^{nP}$, and the level sets of $P$ are improper affine spheres homogeneous for the action of $H$ and having affine normals equal to a constant multiple of the right principal idempotent $r$.
\end{corollary}
By Theorem \ref{trivalgtheorem} the nilpotency hypothesis of Corollary \ref{triangularizabletheorem2} can be restated simply as that $(\alg, \mlt)$ be nilpotent.
Theorem \ref{triangularizabletheorem} gives a sort of weight space decomposition of an LSA as in Theorem \ref{triangularizabletheorem0} that is useful in constructing examples. Although this is not developed systematically, section \ref{examplesection} concludes with some illustrative examples obtained by applying Theorem \ref{triangularizabletheorem0}.
A motivating example, treated in Example \ref{cayleyexample}, is given by the $n$th generalized \textit{Cayley hypersurface}, defined by M. Eastwood and V. Ezhov in \cite{Eastwood-Ezhov} as the zero level set of the polynomial
\begin{align}\label{eepolynomials}
\Phi_{n}(x_{1}, \dots, x_{n}) = \sum_{i = 1}^{n}(-1)^{i}\frac{1}{i}\sum_{j_{1} + \dots + j_{i} = n}x_{j_{1}}\dots x_{j_{i}} = \sum_{\la \part n}(-1)^{|\la|}\frac{c_{\la}}{|\la|}x_{(\la)},
\end{align}
where the second sum is over all partitions $\la$ of $n$; $|\la|$ is the length of the partition $\la$; $x_{(\la)} = x_{1}^{m_{1}}\dots x_{n}^{m_{n}}$, where $m_{i}$ is the multiplicity of $i$ in $\la$; and $c_{\la}$ is the number of integer compositions of $n$ determining the partition $\la$. Eastwood and Ezhov prove that the Cayley hypersurface is an improper affine sphere admitting a transitive abelian group of affine motions and whose full symmetry group has one-dimensional isotropy. They ask if these properties characterize these hypersurfaces, and with the additional assumption that the domain above the hypersurface is homogeneous this was proved by Y. Choi and H. Kim in \cite{Choi-Kim} using the theory of LSAs. Relations between homogeneous improper affine spheres, LSAs, and Monge-Ampère equations like that in Theorem \ref{triangularizabletheorem0} and Theorems \ref{lsacptheorem} and \ref{triangularizabletheorem} in section \ref{impropersection} have been studied by Choi and Kim and by K. Chang in the papers \cite{Choi-domain}, \cite{Choi-Chang}, and \cite{Choi-Kim}, which address a characterization of the generalized Cayley hypersurfaces conjectured in \cite{Eastwood-Ezhov}. Their work, as well as that of H. Shima \cite{Shima-homogeneoushessian} and A. Mizuhara \cite{Mizuhara, Mizuhara-solvable}, provided motivation for the material described here. In Example \ref{cayleyexample} there is constructed, for each positive integer $n$, an LSA $(\cayn, \mlt)$ that satisfies the hypotheses of Theorem \ref{triangularizabletheorem0} and, by Lemma \ref{cayleypolynomiallemma}, has the polynomial $P_{n} = 1 - n\Phi_{n}$ as its characteristic polynomial. This gives an alternative demonstration that the Cayley hypersurfaces are homogeneous improper affine spheres with the properties demonstrated in \cite{Eastwood-Ezhov}. A consequence of the realization of $1 - n\Phi_{n}$ as a determinant, also proved in Lemma \ref{cayleypolynomiallemma}, is the recursive formula
\begin{align}\label{cayleyrecursion2}
\Phi_{n}(x_{1}, \dots, x_{n}) = - x_{n} + \sum_{i = 1}^{n-1}(\tfrac{i}{n} - 1)x_{i}\Phi_{n-i}(x_{1}, \dots, x_{n-i}),
\end{align}
determining $\Phi_{n}$ (where $\Phi_{1}(x) = -x$).
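Both sides of this identity can be compared symbolically for small $n$; the following sympy sketch implements \eqref{eepolynomials} via integer compositions and \eqref{cayleyrecursion2} recursively:

```python
import sympy as sp
from functools import lru_cache

n_max = 5
xs = sp.symbols('x1:6')   # x1, ..., x5

def compositions(n, parts):
    # ordered tuples of `parts` positive integers summing to n
    if parts == 1:
        yield (n,)
        return
    for first in range(1, n - parts + 2):
        for rest in compositions(n - first, parts - 1):
            yield (first,) + rest

def phi_direct(n):
    # \eqref{eepolynomials}: sum over compositions of n into i parts
    return sum(sp.Rational((-1)**i, i)
               * sum(sp.prod(xs[j - 1] for j in c) for c in compositions(n, i))
               for i in range(1, n + 1))

@lru_cache(maxsize=None)
def phi_rec(n):
    # \eqref{cayleyrecursion2}, with Phi_1(x) = -x
    if n == 1:
        return -xs[0]
    return -xs[n - 1] + sum((sp.Rational(i, n) - 1) * xs[i - 1] * phi_rec(n - i)
                            for i in range(1, n))

for n in range(1, n_max + 1):
    assert sp.expand(phi_direct(n) - phi_rec(n)) == 0
```

For instance, both constructions give $\Phi_2 = -x_2 + x_1^2/2$.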
\section{Improper affine spheres as level sets}\label{affinespheresection}
This section gives the background on improper affine spheres and translationally homogeneous functions needed to understand the statement and proof of Theorem \ref{triangularizabletheorem0}.
The reader primarily interested in left-symmetric algebras can skip directly to section \ref{impropersection}.
The group $\Aff(n+1, \rea)$ of affine transformations of $\rea^{n+1}$ comprises the automorphisms of the standard flat affine connection $\hnabla$ on $\rea^{n+1}$. Elements of its subgroup preserving the tensor square $\Psi^{2}$ of a fixed $\hnabla$-parallel volume form $\Psi$ are called \textit{unimodular affine} or \textit{equiaffine}.
Let $\Sigma$ be a connected coorientable nondegenerate immersed hypersurface in $\rea^{n+1}$. Via the splitting $T\rea^{n+1} = T\Sigma \oplus \lb N\ra$ determined by a vector field $N$ transverse to $\Sigma$, the connection $\hnabla$ induces on $\Sigma$ a connection $\nabla$, a symmetric covariant two tensor $h$ representing the second fundamental form, a shape operator $S \in \Ga(\eno(T\Sigma))$, and the connection one-form $\tau \in \Ga(T^{\ast}\Sigma)$; these are defined by $\hnabla_{X}Y = \nabla_{X}Y + h(X, Y)N$ and $\hnabla_{X}N = -S(X) + \tau(X)N$, where $X$ and $Y$ are tangent to $\Sigma$. Here, as in what follows, notation indicating the restriction to $\Sigma$, the immersion, the pullback of $T\rea^{n+1}$, etc. is omitted. As generally in what follows, when indices are used the abstract index and summation conventions are employed and indices are to be understood as labels indicating valence and symmetries. Tensors on $\Sigma$ are labeled using capital Latin abstract indices. That $\Sigma$ be \textit{nondegenerate} means that the second fundamental form of $\Sigma$, equivalently $h_{IJ}$, is nondegenerate. Since by assumption $\Sigma$ is cooriented, it is orientable, and the interior multiplication $\imt(N)\Psi$ is a volume form on $\Sigma$. Since $\hnabla \Psi = 0$, for $X$ tangent to $\Sigma$, $\nabla_{X} \imt(N)\Psi = \tau(X)\imt(N)\Psi$. Let $\vol_{h} = q\imt(N)\Psi$ be the volume form induced on $\Sigma$ by $h$ and the orientation consistent with $\imt(N)\Psi$. Since $\vol_{h}^{2} = |\det h|$,
\begin{equation}\label{deth}
h^{PQ}\nabla_{I}h_{PQ} = 2\vol_{h}^{-1}\nabla_{I}\vol_{h} = 2\left(q^{-1}dq_{I} + \tau_{I} \right).
\end{equation}
Any other transversal to $\Sigma$ has the form $\tilde{N} = a(N + Z)$ for a nowhere vanishing function $a$ and a vector field $Z$ tangent to $\Sigma$. The second fundamental form $\tilde{h}$, connection $\tnabla$, and connection one-form $\tilde{\tau}$ determined by $\tilde{N}$ and $\hnabla$ are related to $h$, $\nabla$, and $\tau$ by
\begin{align}\label{transform}
&\tilde{h}_{IJ} = a^{-1}h_{IJ},& &\tnabla = \nabla - h_{IJ}Z^{K}, & &\tilde{\tau}_{I} = \tau_{I} + a^{-1}da_{I} + h_{IP}Z^{P}.
\end{align}
It follows from \eqref{deth} and \eqref{transform} that
\begin{equation}\label{normalize}
n\tilde{\tau}_{I} + \tilde{h}^{PQ}\tnabla_{I}\tilde{h}_{PQ} = n \tau_{I} + h^{PQ}\nabla_{I}h_{PQ} + (n+2)Z^{P}h_{IP},
\end{equation}
where $h^{IJ}$ and $\tilde{h}^{IJ}$ are the symmetric bivectors inverse to $h_{IJ}$ and $\tilde{h}_{IJ}$. Since \eqref{normalize} does not depend on $a$, the span of $\tilde{N}$ is determined by requiring $n\tilde{\tau}_{I} =- \tilde{h}^{PQ}\tnabla_{I}\tilde{h}_{PQ}$, so that, by \eqref{deth} and \eqref{normalize},
\begin{equation}\label{zdet}
Z^{P}h_{PI} = -\tfrac{1}{n+2}\left(n\tau_{I} + h^{PQ}\nabla_{I}h_{PQ}\right) = -\tau_{I} - \tfrac{2}{n+2}q^{-1}dq_{I}= -\tfrac{1}{2}h^{PQ}\nabla_{I}h_{PQ} + \tfrac{n}{n+2}q^{-1}dq_{I}.
\end{equation}
For any choice of $a$, the resulting transversal $\tilde{N}$ is called an \textit{affine normal}, and the line field it spans is called the \textit{affine normal distribution of $\Sigma$}. Since $\det \tilde{h} = a^{-n}\det h$, the \textit{equiaffine normal} $\nm = a(N + Z)$ is determined up to sign by requiring $|\vol_{\tilde{h}}| = |\imt(\nm)\Psi|$, which forces $q = |a|^{(n+2)/2}$.
By \eqref{transform}, the connection one-form associated with the equiaffine normal vanishes. Once a coorientation has been fixed, let $\nabla$, $h$, and $S$ be determined by the cooriented equiaffine normal. The pseudo-Riemannian metric $h_{IJ}$ is called the \textit{equiaffine metric}. The \textit{equiaffine mean curvature} is $\amc = n^{-1}S_{I}\,^{I}$.
A coorientable nondegenerate connected hypersurface $\Sigma$ is an \textit{improper affine sphere} if its equiaffine normals are parallel. It is straightforward to check that $\Sigma$ is an improper affine sphere if and only if the shape operator determined by any affine normal vanishes identically.
The definition of a connected improper affine sphere does not require a choice of coorientation, but some coherence condition on coorientations is necessary when there are multiple connected components. The convention used in this paper is the following. A smoothly immersed hypersurface having more than one connected component is an improper affine sphere if each connected component is an improper affine sphere, the affine normal lines of the different components are all parallel, and there is a choice of coorientations of the components such that, for the equiaffine normal consistent with this choice, the signatures modulo $4$ of the equiaffine metrics of the different components are all the same. Note that if a disconnected hypersurface is an improper affine sphere with respect to a given choice of coorientations of the components, it is an improper affine sphere with respect to the opposite choice of coorientations, but with respect to no other choice of coorientations. In this sense, the definition is consistent with the definition for a connected hypersurface.
Let $\Omega \subset \rea^{n+1}$ be an open domain (a nonempty open subset). For $F \in C^{k}(\Om)$ let $F_{i_{1}\dots i_{k}} = \hnabla_{i_{1}}\dots\hnabla_{i_{k-1}}dF_{i_{k}}$, and let $g_{ij} = (\hess F)_{ij} = F_{ij} = \hnabla_{i}dF_{j}$ be the \textit{Hessian} of $F$. As $\det \hess F$ and the tensor square $\Psi^{2}$ are $2$-densities, it makes sense to define the \textit{Hessian determinant} $\H(F)$ of a $C^{2}$ function $F$ by $\det \hess F = \H(F)\Psi^{2}$. If $x^{1}, \dots, x^{n+1}$ are coordinate functions such that $dx^{1}, \dots, dx^{n+1}$ is a $\hnabla$-parallel coframe and $\Psi = dx^{1}\wedge \dots \wedge dx^{n+1}$, then $\H(F) = \det \tfrac{\pr^{2}F}{\pr x^{i}\pr x^{j}}$.
Where $\H(F)$ is nonzero, $g_{ij}$ is a pseudo-Riemannian metric with inverse symmetric bivector $g^{ij}$. In this case, indices are raised and lowered using $g_{ij}$ and $g^{ij}$, so, for example, $F^{i} = g^{ip}F_{p}$. Write $|dF|_{g}^{2} = F^{p}F_{p}$; note that this need not be positive when $g_{ij}$ is not positive definite.
Let $\lin:\Aff(n+1, \rea) \to GL(n+1, \rea)$ be the projection onto the linear part. Because of the identity $g\cdot \H(F) = (\det \lin(g))^{2} \H(g\cdot F)$, the equation
\begin{align}\label{mai}
&\H(F) = \phi(F)
\end{align}
is affinely covariant in the sense that $F$ solves \eqref{mai} for some function $\phi$ if and only if $g\cdot F$ solves \eqref{mai} with a positive constant multiple of $\phi$ in place of $\phi$. In particular, it is natural to consider solutions of \eqref{mai} up to unimodular affine equivalence. Moreover, the affine covariance suggests also that properties of the equations \eqref{mai} should be reflected in the unimodular affine geometry of the level sets of $F$.
An interesting general problem is the determination up to affine equivalence of all sufficiently smooth solutions of \eqref{mai} on a domain $\Om \subset \rea^{n+1}$ for some particular choice of $\phi$, e.g. when $\phi$ is a power or an exponential, and for particular choices of $\Om$. Of particular interest are cases of \eqref{mai} that admit solutions that are \textit{entire}, meaning defined on all of $\rea^{n+1}$, or \textit{polynomial}, meaning that $F_{i_{1}\dots i_{k}} = 0$ for some $k$.
Here the interest is in equations of the form \eqref{mai} whose solutions have level sets that are improper affine spheres. Requiring some kind of homogeneity property of the function $F$ restricts the possible forms of $\phi$ in \eqref{mai}. In particular, there will be considered here functions $F$ that are translationally homogeneous in the sense explained next; requiring that such a function solve an equation of the form \eqref{mai} forces $\phi$ to be a polynomial. The precise statement is Theorem \ref{ahtheorem}.
Let $\Omega \subset \rea^{n+1}$ be an open domain. For $F \in C^{0}(\Omega)$ and $r \in \rea$, let $\lc_{r}(F, \Omega) = \{x \in \Omega: F(x) = r\}$. For $\la\in \rea$ define $\amg^{\la}(\Omega)$ to comprise those $F \in C^{0}(\Omega) \cap \cinf(\Omega \setminus \lc_{0}(F, \Omega))$ for which there exists a parallel vector field (that is, a constant vector) $\rad^{i} \in \rea^{n+1}$ such that $F(x + t \rad) = e^{\la t}F(x)$ for all $t \in \rea$ and $x \in \Om$ such that $x + t \rad \in \Omega$.
An element of $\amg^{\la}$ is \textit{$\la$-translationally (affinely) homogeneous} with \textit{axis} $\rad^{i}$. For $F \in C^{0}(\Om)$ and $g \in \Aff(n+1, \rea)$ define $g\cdot F \in C^{0}(g \Om)$ by $(g \cdot F)(x) = F(g^{-1}x)$. Translational affine homogeneity is an affinely invariant condition in the sense that $F$ is translationally homogeneous if and only if $g\cdot F$ is translationally homogeneous for all $g \in \Aff(n+1, \rea)$.
\begin{lemma}\label{affhomlemma}
A function $F \in \cinf(\Omega \setminus \lc_{0}(F, \Omega)) \cap C^{0}(\Omega) $ is in $\amg^{\la}(\Om)$ if and only if there is $\rad^{i} \in \rea^{n+1}$ such that $\rad^{p}F_{p} = \la F$.
\end{lemma}
\begin{proof}
First suppose $F \in \amg^{\la}(\Om)$. If $x \in \Om$ and $F(x) \neq 0$ then, for any $t \in \rea$ such that $x + t\rad \in \Om$, $F(x + t\rad) = e^{\la t}F(x) \neq 0$, so $x + t\rad \notin \lc_{0}(F, \Om)$. Since $\Om$ is open, there is some small interval $I \subset \rea$ containing $0$ such that $x + t \rad \in \Om$ for $t \in I$. Hence $\rad^{p}F_{p}(x) = \tfrac{d}{dt}\big|_{t = 0}F(x + t\rad) = \tfrac{d}{dt}\big|_{t = 0}\left(e^{\la t}F(x)\right) = \la F(x)$. Now suppose $F \in \cinf(\Om\setminus \lc_{0}(F, \Omega)) \cap C^{0}(\Omega)$ satisfies $\rad^{p}F_{p} = \la F$ for some fixed $\rad^{i} \in \rea^{n+1}$. Then $f(t) = F(x + t\rad)$ solves the initial value problem $f(0) = F(x)$ and $\tfrac{d}{dt}f(t) = \la f(t)$, so $F(x + t\rad) = f(t) = e^{\la t}F(x)$ for $t$ such that $x + t\rad \in \Om$.
\end{proof}
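As a concrete illustration of Lemma \ref{affhomlemma}, the following sketch, assuming the third-party SymPy library, checks both the infinitesimal condition $\rad^{p}F_{p} = \la F$ and the integrated condition $F(x + t\rad) = e^{\la t}F(x)$ for a sample function; the choices of $F$ and of the axis $\rad = e_{z}$ are made here for illustration and are not taken from the text.

```python
# Symbolic check of the two equivalent homogeneity conditions of the lemma.
# The sample F and the axis e_z are illustrative assumptions.
import sympy as sp

x, y, z, t, lam = sp.symbols('x y z t lam', real=True)

# F is lam-translationally homogeneous with axis e_z.
F = sp.exp(lam*z) * (1 + x**2 + y**2)

# Infinitesimal condition: rad^p F_p = lam F with rad = (0, 0, 1).
euler = sp.simplify(sp.diff(F, z) - lam*F)
print(euler)  # 0

# Integrated condition: F(x + t rad) = e^(lam t) F(x).
shift = sp.simplify(F.subs(z, z + t) - sp.exp(lam*t)*F)
print(shift)  # 0
```

Both quantities vanish identically, reflecting the equivalence proved above.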
Let $\reat = GL(1, \rea)$ be the group of nonzero real numbers.
\begin{lemma}\label{homognondegenlemma}
Suppose given an open domain $\Omega \subset \rea^{n+1}$ and $F \in \amg^{\la}(\Om)$ for $\la \in \reat$. By Lemma \ref{affhomlemma} there is a vector field $\rad^{i} \in \rea^{n+1}$ such that $\rad^{p}F_{p} = \la F$. For $r \in \reat$, the level set $\lc_{r}(F, \Omega)$ is smoothly immersed and transverse to $\rad^{i}$, and $\lc_{r}(F, \Omega)$ is nondegenerate if and only if $\H(F)$ does not vanish on $\lc_{r}(F, \Omega)$, in which case $dF$ and $|dF|^{2}_{g}$ do not vanish on $\lc_{r}(F, \Omega)$, and $\lc_{r}(F, \Om)$ is coorientable with equiaffine normal
\begin{equation}\label{nm2}
\nm^{i} = -\la^{-1}(n+2)^{-1}\left|F \H(F)\right|^{1/(n+2)}\left(F^{-1}\rad^{i} + \la \H(F)^{-1}\H(F)^{i}\right).
\end{equation}
\end{lemma}
\begin{proof}
Since $\hnabla_{i}\rad^{j} = 0$, differentiating $\rad^{p}F_{p} = \la F$ yields $\rad^{p}F_{pi_{1}\dots i_{k}} = \la F_{i_{1}\dots i_{k}}$. Hence
\begin{align}\label{hompol1}
&\la F^{i}= \rad^{i},&
&|dF|^{2}_{g} = F.
\end{align}
Tracing $\rad^{p}F_{ijp} = \la F_{ij}$ and combining the result with \eqref{hompol1} yields
\begin{align}\label{hompol2}
&\rad^{p}\H(F)_{p} = \H(F)\rad^{p}F_{pq}\,^{q} = \la(n+1)\H(F).
\end{align}
Since for $x \in \lc_{r}(F, \Omega)$, $\rad^{i}F_{i}(x) = \la r \neq 0$, $dF$ does not vanish on $\lc_{r}(F, \Omega)$ and so the level set $\lc_{r}(F, \Omega)$ is smoothly immersed; moreover, $\rad^{i}$ is transverse to $\lc_{r}(F, \Omega)$. Let $h_{IJ}$ be the corresponding second fundamental form.
The restrictions $F_{IJ}$, $F_{Ip}\rad^{p}$, and $F_{I}$ satisfy
\begin{align}\label{hdl}
&F_{IJ} = -\la Fh_{IJ},& &F_{Ip}\rad^{p} = \la F_{I} = 0,& & F_{pq}\rad^{p}\rad^{q} = \la^{2}F,
\end{align}
along $\lc_{r}(F, \Omega)$. By \eqref{hdl}, $h_{IJ}$ is nondegenerate along $\lc_{r}(F, \Omega)$ if and only if $\H(F)$ does not vanish along $\lc_{r}(F, \Omega)$. In this case, it follows from \eqref{hompol1} and $\la r \neq 0$ that neither $dF$ nor $|dF|^{2}_{g}$ vanishes along $\lc_{r}(F, \Omega)$.
The equiaffine normal of $\lc_{r}(F, \Om)$ has the form $\nm^{i} = a(\rad^{i} + Z^{i})$ where $Z^{p}F_{p} = 0$. Let $\vol_{h} = q\imt(\rad)\Psi$. By \eqref{deth}, $2q^{-1}q_{I} = h^{PQ}\nabla_{I}h_{PQ}$ and so, since $\hnabla_{i}\rad^{j} = 0$, it follows from \eqref{normalize} that $\la^{-1}F^{-1}(n+2)Z^{P}g_{IP} = -(n+2)Z^{P}h_{IP} = 2q^{-1}dq_{I}$. On the other hand, it follows from \eqref{hdl} that $q = |\la|^{-(n+2)/2}|F|^{-(n+1)/2}|\H(F)|^{1/2}$. Hence, by \eqref{hompol1} and \eqref{hompol2}, $Z^{i} = (n+2)^{-1}\la F(\H(F)^{-1}\H(F)^{i} -(n+1)F^{-1}F^{i})$ is tangent to $\lc_{r}(F, \Om)$, and $|a| = |\la|^{-1}|F|^{-(n+1)/(n+2)}|\H(F)|^{1/(n+2)}$. With the coorientation convention these formulas combine to yield \eqref{nm2}.
\end{proof}
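The identities \eqref{hompol1} and \eqref{hompol2} used in the preceding proof can be verified symbolically. The following sketch, assuming the third-party SymPy library, does so for the sample function $F = e^{z - x^{2} - y^{2}}$ (so $\la = 1$, axis $\rad = e_{z}$, and $n + 1 = 3$); this particular $F$ is a choice made for illustration.

```python
# Symbolic verification of (hompol1) and (hompol2) for a sample function;
# F, lam = 1, and the axis e_z are illustrative assumptions.
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
v = (x, y, z)
F = sp.exp(z - x**2 - y**2)
rad = sp.Matrix([0, 0, 1])  # axis rad^i, with lam = 1

g = sp.Matrix(3, 3, lambda i, j: sp.diff(F, v[i], v[j]))  # Hessian g_ij
dF = sp.Matrix(3, 1, lambda i, _: sp.diff(F, v[i]))       # F_i

# (hompol1): lam F^i = rad^i, equivalently g_{ip} rad^p = lam F_i,
# and |dF|^2_g = F^p F_p = F.
print(sp.simplify(g*rad - dF).T)        # zero row vector
print(sp.simplify((rad.T*dF)[0] - F))   # 0

# (hompol2): rad^p H(F)_p = lam (n+1) H(F), here with n + 1 = 3.
H = g.det()
print(sp.simplify(sp.diff(H, z) - 3*H)) # 0
```

For this $F$ one also finds $\H(F) = 4F^{3}$, consistent with the conclusion of Theorem \ref{ahtheorem} that $\phi(r) = \kc r^{n+1}$.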
Let $\sign:\reat \to \zmodtwo$ be the sign homomorphism $\sign(r) = r|r|^{-1}$. Define the \textit{standard coorientation} of a connected component of a level set of $F$ to be that consistent with the vector field $-\sign(|dF|^{2}_{g})F^{i}$, where $F^{i} = g^{ip}F_{p}$. That under the hypotheses of Theorem \ref{ahtheorem} this vector field is nonzero follows from Lemma \ref{homognondegenlemma}.
Theorem \ref{ahtheorem} shows that the nonzero level sets of a translationally homogeneous solution of \eqref{mai} on $\rea^{n+1}$ are improper affine spheres.
\begin{theorem}\label{ahtheorem}
Let $\la \in \reat$. Let $\Omega \subset \rea^{n+1}$ be a nonempty open subset, and let $I \subset \reat$ be a nonempty, connected, open subset. For $F \in \amg^{\la}(\Omega)$, let $\Omega_{I} = F^{-1}(I)\cap \Omega$. Let $\rad^{i} \in \rea^{n+1}$ be the axis of $F$. The following are equivalent.
\begin{enumerate}
\item \label{aht2} There is a nonvanishing function $\phi:I \to \rea$ such that $F$ solves $\H(F) = \phi(F)$ on $\Omega_{I}$.
\item \label{aht1improper} For all $r \in I$ each level set $\lc_{r}(F, \Omega_{I})$, equipped with the coorientation of its components consistent with $-\sign(|dF|^{2}_{g})F^{i}$, is an improper affine sphere with equiaffine normal equal to $c\rad^{i}$ for a constant $c$ depending only on $r$ (and not the connected component).
\end{enumerate}
When these conditions hold, there is a nonzero constant $\kc$ such that $\phi$ has the form $\phi(r) = \kc r^{n+1}$.
\end{theorem}
\begin{proof}
Suppose there holds \eqref{aht1improper}. That is, $F \in \amg^{\la}(\Omega)$ and there is a connected open interval $I \subset \rea \setminus\{0\}$ such that for all $r \in I$ each level set $\lc_{r}(F, \Omega_{I})$, equipped with the coorientation of its components consistent with $-\sign(|dF|^{2}_{g})F^{i}$, is an affine sphere with affine normal parallel to a fixed vector $\rad$.
Because, by assumption, each connected component of $\lc_{r}(F, \Om_{I})$ is nondegenerate, Lemma \ref{homognondegenlemma} implies that neither $\H(F)$ nor $|dF|^{2}_{g}$ vanishes on $\lc_{r}(F, \Omega_{I})$. A posteriori, this justifies assigning to each component the coorientation given by $-\sign(|dF|^{2}_{g})F^{i}$. By assumption, the equiaffine normal $\nm$ satisfies $\nm \wedge \rad = 0$ along $\lc_{r}(F, \Om_{I})$. Comparing with \eqref{nm2} shows that $d_{i}\log\H(F) = c\rad_{i} = c\la F_{i}$ for some $c$ locally constant on $\lc_{r}(F, \Om_{I})$. Contracting with $\rad^{i} = \la F^{i}$ and using \eqref{hompol2} yields $(n+1)\la = c\la^{2}F$, so that $d_{i}\log\H(F) = (n+1)F^{-1}F_{i}$. Hence $F^{-n-1}\H(F)$ is locally constant on $\lc_{r}(F, \Om_{I})$. By assumption the signatures of the second fundamental forms of the connected components of $\lc_{r}(F, \Om_{I})$ are the same modulo $4$, and by \eqref{hdl} this implies that the signatures of $\hess F$ on the different connected components are the same modulo $4$, and so the signs of $\H(F)$ on the different connected components must be the same. This means that $|\H(F)|$ can be replaced by one of $\pm \H(F)$ coherently on all of $\lc_{r}(F, \Om_{I})$. Since by assumption there is $\rad^{i} \in \rea^{n+1}$ such that $\nm^{i} = c\rad^{i}$ for some $c$ depending only on $r$ and not the connected component of $\lc_{r}(F, \Om_{I})$, it follows from \eqref{nm2} that $\H(F)$ is constant on $\lc_{r}(F, \Om_{I})$. This is true for each $r \in I$, and so there is a function $\phi$ defined on $I$ such that $\H(F) = \phi(F)$ for $x \in \Omega_{I}$. This shows \eqref{aht1improper}$\implies$\eqref{aht2}.
The implication \eqref{aht2}$\implies$\eqref{aht1improper} is proved as follows. If $F \in \amg^{\la}(\Omega)$ solves $\H(F) = \phi(F)$ on $\Om_{I}$ for some nonvanishing function $\phi:I \to \rea$, then, by Lemma \ref{homognondegenlemma}, each level set $\lc_{r}(F, \Omega_{I})$ is nondegenerate and $dF$ and $|dF|^{2}_{g}$ do not vanish on $\lc_{r}(F, \Om_{I})$. In particular, the equiaffine normal $\nm^{i}$ is defined on $\Omega_{I}$. Since $\H(F)$ is constant on $\lc_{r}(F, \Omega_{I})$, $d\log\H(F) \wedge dF = 0$ on $\Omega_{I}$. Hence, by \eqref{hompol1}, $\H(F)^{-1}\H(F)^{i}$ is a multiple of $F^{i} = \la^{-1}\rad^{i}$. In \eqref{nm2} this shows that $\nm^{i}$ is a multiple of $\rad^{i}$, so that the connected components of $\lc_{r}(F, \Omega_{I})$ are affine spheres with affine normals parallel to $\rad^{i}$. Since by assumption $\H(F) = \phi(F)$ depends only on $r$, and not on the component, it follows that the equiaffine mean curvatures of different components of $\lc_{r}(F, \Om_{I})$ are the same. Finally, from the constancy of $\H(F)$ on each $\lc_{r}(F, \Om_{I})$ and \eqref{hdl} it follows that the signatures of the distinct connected components of $\lc_{r}(F, \Om_{I})$ are the same modulo $4$.
Suppose given $F \in \amg^{\la}(\Omega)$, an open interval $I \subset \rea \setminus\{0\}$, and a function $\phi$ defined on $I$ such that $\H(F) = \phi(F)$ for $x \in \Omega_{I}$. Since, by \eqref{hompol2}, $\H(F)$ has positive homogeneity $(n+1)\la$, there holds $\phi(e^{\la t}r) = e^{(n+1)\la t}\phi(r)$ for $r \in I$ and $t$ sufficiently small. In particular, this shows that $\phi$ is continuous on $I$. Similarly, setting $h(t) = (e^{\la t} - 1) r $,
\begin{align}
\lim_{t \to 0} \tfrac{\phi(r + h(t)) - \phi(r)}{h(t)}
= \lim_{t \to 0} \tfrac{\phi(e^{\la t}r ) - \phi(r)}{\la r t} = \lim_{t \to 0}\tfrac{e^{(n+1)\la t} - 1}{\la r t}\phi(r)= \tfrac{(n+1) }{r }\phi(r),
\end{align}
so that $\phi$ is differentiable at $r$ and $\phi^{\prime}(r) = \tfrac{(n+1)}{r}\phi(r)$. The general solution of this differential equation has the form $\phi(r) = \kc r^{n+1}$ for some constant $\kc$, and $\kc \neq 0$ because $\phi$ is nonvanishing.
\end{proof}
\begin{lemma}\label{twisteddetlemma}
If $F \in \cinf(\rea^{n+1})$ satisfies $F(x + t\rad) = F(x) + \la t$ for some $0 \neq \rad^{i} \in \rea^{n+1}$ and some $\la \in \rea$, then $\det(\hess F + cdF\tensor dF) = c\det(\hess F + dF \tensor dF)$ for all $c \in \cinf(\rea^{n+1})$.
\end{lemma}
\begin{proof}
By assumption $\rad^{i}F_{i} = \la$ and $\rad^{i}F_{ij} = 0$. If $\la = 0$ both sides of the claimed identity vanish, so suppose $\la \neq 0$. Fix a unimodular basis $e_{1}, \dots, e_{n+1}$ such that $e_{n+1} = \rad$. Writing $\hess F + cdF\tensor dF$ as a matrix with respect to this basis and performing the determinant-preserving column and row operations that add $-\la^{-1}F_{I}$ times the last column (respectively row) to the $I$th column (respectively row), there results
\begin{align}
\begin{split}
\det(\hess F + cdF\tensor dF) & = \begin{vmatrix} F_{IJ} & 0 \\ 0 & c\la^{2}\end{vmatrix}
= c\begin{vmatrix} F_{IJ} & 0 \\ 0 & \la^{2}\end{vmatrix} = c \det(\hess F + dF \tensor dF)
\end{split}
\end{align}
where the indices $I$ and $J$ run over $\{1, \dots, n\}$.
\end{proof}
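The identity of Lemma \ref{twisteddetlemma} can be checked symbolically, with $c$ an indeterminate. The following sketch, assuming the third-party SymPy library, uses the sample function $F = z - x^{2} - y^{2}$, which satisfies $F(x + t e_{z}) = F(x) + t$, so the lemma applies with $\la = 1$; the choice of $F$ is illustrative.

```python
# Symbolic check of det(hess F + c dF (x) dF) = c det(hess F + dF (x) dF)
# for a sample F with F(x + t e_z) = F(x) + t (lam = 1); F is an assumption.
import sympy as sp

x, y, z, c = sp.symbols('x y z c', real=True)
v = (x, y, z)
F = z - x**2 - y**2

hess = sp.Matrix(3, 3, lambda i, j: sp.diff(F, v[i], v[j]))
dF = sp.Matrix(3, 1, lambda i, _: sp.diff(F, v[i]))
rank1 = dF * dF.T                 # dF tensor dF

lhs = (hess + c*rank1).det()
rhs = c * (hess + rank1).det()
print(sp.simplify(lhs - rhs))     # 0
print(sp.expand(lhs))             # 4*c
```

Consistently with the proof, the determinant equals $c\la^{2}\det F_{IJ} = c \cdot 1 \cdot \det\operatorname{diag}(-2,-2) = 4c$, independent of $x$ and $y$.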
\begin{lemma}\label{improperlemma}
Let $\ste$ be an $(n+1)$-dimensional real vector space equipped with the standard equiaffine structure $(\nabla, \Psi)$, where $\Psi$ is given by the determinant. Let $\stw \subset \ste$ be a codimension one subspace, let $0 \neq \rad \in \ste$ be a vector transverse to $W$, and let $\mu \in \std$ be such that $\mu(\rad) = 1$ and $\ker \mu = \stw$.
Equip $\stw$ with the induced affine structure and the parallel volume form $\nu = \imt(\rad)\Psi$ and define the operator $\H$ with respect to this induced equiaffine structure. Let $\pi:\ste \to \stw$ be the projection along the span $\lb \rad \ra$ of $\rad$. The following are equivalent.
\begin{enumerate}
\item\label{grp1} The \emph{graph of $f \in \cinf(\stw)$ along $\rad$}, $\{(w, t\rad) \in \stw \oplus \lb \rad \ra: t = f(w)\}$, is an improper affine sphere with affine normals parallel to $\rad$.
\item\label{grp2} There is $\kc \in \reat$ such that $\H(f) = \kc$ on $\stw$.
\item\label{grp3} There is $\kc \in \reat$ such that $F = \mu - f\circ \pi$ solves $\det(\hess F + dF \tensor dF) = (-1)^{n}\kc \Psi^{2}$ on $\ste$.
\item\label{grp4} There is $\kc \in \reat$ such that $G = \exp(\mu - f\circ \pi)$ solves $\H(G) = (-1)^{n}\kc G^{n+1}$ on $\ste$.
\item\label{grp5} There is $\kc \in \reat$ such that $\phi = \log(\mu - f\circ \pi)$ solves $\H(\phi) = (-1)^{n+1}\kc e^{-(n+2)\phi}$ on $\{x \in \ste: \mu(x) > f\circ \pi(x)\}$.
\end{enumerate}
\end{lemma}
\begin{proof}
Routine computations show the first two equalities of
\begin{align}\label{improperequals}
\begin{split}
(-1)^{n}\H(f)\circ \pi =\Psi^{-2}\tensor (\det(\hess F + dF \tensor dF)) = G^{-n-1}\H(G) = -e^{(n+2)\phi}\H( \phi),
\end{split}
\end{align}
while the third equality follows from Lemma \ref{twisteddetlemma}. From \eqref{improperequals} the equivalence of \eqref{grp2}-\eqref{grp5} is immediate. Since $G$ is by definition translationally homogeneous in the $\rad$ direction, the equivalence of \eqref{grp1} and \eqref{grp4} follows from Theorem \ref{ahtheorem}.
\end{proof}
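The chain of equalities \eqref{improperequals} underlying Lemma \ref{improperlemma} can be verified symbolically. The following sketch, assuming the third-party SymPy library, takes $n = 2$, $\mu = dz$, and the sample paraboloid $f = x^{2} + y^{2}$ on $\stw = \rea^{2}$, so that $F = z - f$ and $G = e^{F}$; all of these particular choices are illustrative.

```python
# Symbolic check of the first two equalities of (improperequals) for n = 2;
# the paraboloid f and mu = dz are illustrative assumptions.
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
v = (x, y, z)

def hess(u, vars_):
    n = len(vars_)
    return sp.Matrix(n, n, lambda i, j: sp.diff(u, vars_[i], vars_[j]))

f = x**2 + y**2      # sample f on W = R^2
F = z - f            # F = mu - f o pi with mu = dz
G = sp.exp(F)

Hf = hess(f, (x, y)).det()                  # H(f), computed on W
dF = sp.Matrix(3, 1, lambda i, _: sp.diff(F, v[i]))
twisted = (hess(F, v) + dF*dF.T).det()      # det(hess F + dF (x) dF)
HG = hess(G, v).det()                       # H(G)

print(sp.simplify((-1)**2 * Hf - twisted))  # 0: first equality
print(sp.simplify(twisted - HG/G**3))       # 0: second equality
```

Here $\H(f) = 4$, so this $f$ realizes condition \eqref{grp2} with $\kc = 4$, and its graph along $e_{z}$ is an elliptic paraboloid, the standard example of an improper affine sphere.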
\begin{remark}
After an equiaffine transformation, $\ste$, $\stw$, $\rad$, and the associated connections and volume forms can always be put in the following standard form. Let $\ste = \rea^{n+1}$ be equipped with its standard equiaffine structure $(\nabla, \Psi)$, where $\Psi = dx^{1}\wedge \dots \wedge dx^{n+1}$, and regard $\rea^{n}$ as the equiaffine subspace $\stw = \{x \in \ste: x^{n+1} = 0\}$ with the induced connection, also written $\nabla$, and the volume form $\nu = dx^{1}\wedge \dots \wedge dx^{n}$. Here $\mu = dx^{n+1}$, and the relation between $f$ and $F$ is $F(x_{1}, \dots, x_{n+1}) = x_{n+1} - f(x_{1}, \dots, x_{n})$.
\end{remark}
\begin{remark}
Examples of solutions of $\H(f) = \kc$ abound.
\begin{enumerate}
\item If $\kc < 0$, any function of the form $f(x_{1}, \dots, x_{n+1}) = (-\kc)^{1/2}x_{1}x_{n+1} + q(x_{1}) + \tfrac{1}{2}\sum_{i = 2}^{n}x_{i}^{2}$ with $q \in C^{2}(\rea)$ solves $\H(f) = \kc$ on all of $\rea^{n+1}$.
This gives an infinitude of affinely inequivalent solutions to $\H(f) = \kc$ for $\kc < 0$, and so, by Lemma \ref{improperlemma}, an infinitude of affinely inequivalent entire solutions of $\H(G) = (-1)^{n}\kc G^{n+1}$ with $\kc < 0$.
\item Let $\ste$ be an $n$-dimensional vector space. Let $\Phi:\ste \to \ste$ be a $C^{2}$ diffeomorphism and write $\Phi_{i}\,^{j} = \tfrac{\pr}{\pr x^{i}}\Phi(x)^{j}$. Define $f:\ste \times \std \to \rea$ by $f(x, y) = y_{p}\Phi(x)^{p}$. Let $\mu$ be the standard parallel volume form on $\ste$ and let $\Psi$ be the parallel volume form on $\ste \times \std$ determined by $\mu$ and the dual volume form on $\std$. A straightforward computation shows that $\H(f) = (-1)^{n}(\det \Phi_{i}\,^{j})^{2}$, where $\H$ is defined with respect to $\Psi$ and $\Phi^{\ast}(\mu) = (\det \Phi_{i}\,^{j})\mu$. In particular, if $\Phi$ has constant Jacobian, $\det \Phi_{i}\,^{j} = \kc$, then $\H(f) = (-1)^{n}\kc^{2}$. In this case $\hess f$ has split signature.
\end{enumerate}
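The family in the first item can be checked symbolically with $q$ left as an undetermined function. The following sketch, assuming the third-party SymPy library, takes $n + 1 = 3$ and writes $\kc = -s^{2}$, so $f = s\,x_{1}x_{3} + q(x_{1}) + \tfrac{1}{2}x_{2}^{2}$; the symbols $s$ and $q$ are illustrative.

```python
# Symbolic check that H(f) = -s^2 = kc for every q, for the family of item (1)
# with n + 1 = 3; the symbols s and q are assumptions made for illustration.
import sympy as sp

x1, x2, x3, s = sp.symbols('x1 x2 x3 s', real=True)
q = sp.Function('q')                 # arbitrary C^2 function of x1
v = (x1, x2, x3)

f = s*x1*x3 + q(x1) + sp.Rational(1, 2)*x2**2

H = sp.Matrix(3, 3, lambda i, j: sp.diff(f, v[i], v[j])).det()
print(sp.simplify(H + s**2))         # 0, independent of q
```

The Hessian is block upper-left $q^{\prime\prime}(x_{1})$ bordered by the off-diagonal entry $s$, so its determinant is $-s^{2}$ regardless of $q$, which is the source of the affinely inequivalent solutions mentioned above.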
In section \ref{impropersection} it is shown how to construct many more solutions of the equations in Lemma \ref{improperlemma} from prehomogeneous actions of solvable Lie groups.
Some simple, but typical, examples obtained in this way are given by the translationally homogeneous (in the $z$-direction) functions
\begin{align}\label{expcayley}
&F(x, y,z) = e^{z - x^{2} - y^{2}},& &G(x, y, z) = e^{x^{3}/3 - xy + z},
\end{align}
that solve $\H(F) = 4F^{3}$ and $\H(G) = -G^{3}$, respectively. By Lemma \ref{improperlemma} their level sets are improper affine spheres.
\end{remark}
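The two equations satisfied by the functions in \eqref{expcayley} can be verified directly. The following sketch, assuming the third-party SymPy library, computes the Hessian determinants of $F$ and $G$ in the standard coordinates on $\rea^{3}$ and checks $\H(F) = 4F^{3}$ and $\H(G) = -G^{3}$.

```python
# Symbolic verification of H(F) = 4 F^3 and H(G) = -G^3 for the functions
# in (expcayley); the Hessian determinant is taken with respect to the
# standard coordinates (x, y, z) on R^3.
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
v = (x, y, z)

def H(u):
    # Hessian determinant relative to the standard volume form on R^3
    return sp.Matrix(3, 3, lambda i, j: sp.diff(u, v[i], v[j])).det()

F = sp.exp(z - x**2 - y**2)
G = sp.exp(x**3/3 - x*y + z)

print(sp.simplify(H(F) - 4*F**3))   # 0
print(sp.simplify(H(G) + G**3))     # 0
```

Both functions are translationally homogeneous along $e_{z}$ with $\la = 1$, so by Theorem \ref{ahtheorem} their nonzero level sets are improper affine spheres, as claimed.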
\section{Left-symmetric algebras, affine actions, and completeness}\label{impropersection}
This section reviews basic material on LSAs in a form adequate for later applications. Part of the material presented is an amalgamation of background taken from \cite{Goldman-Hirsch-orbits}, \cite{Helmstetter}, \cite{Kim-completeleftinvariant}, \cite{Kim-lsa}, \cite{Kim-developingmaps}, \cite{Segal-lsa}, \cite{Vinberg}. In particular, many results from J. Helmstetter's \cite{Helmstetter} are used.
Although all LSAs considered in this section have finite dimension over a field of characteristic zero, these conditions are sometimes repeated so that the statements of lemmas and theorems are self-contained. The base field is usually supposed to be $\rea$, but this is stated explicitly where it is necessary, and many claims remain true over an arbitrary field $\fie$ of characteristic zero.
For a connected Lie group $G$ with Lie algebra $\g$, the isomorphism classes of the following structures are in pairwise bijection:
\begin{itemize}
\item Left-invariant flat torsion-free affine connections on a Lie group $G$.
\item Left-symmetric structures on the Lie algebra $\g$ of $G$ compatible with the Lie bracket on $\g$.
\item Étale affine representations of the Lie algebra $\g$.
\end{itemize}
This section begins by sketching the constructions of the bijections. While this is explained elsewhere, for example in Proposition $1.1$ of \cite{Kim-completeleftinvariant}, \cite{Burde-etale}, or \cite{Kang-Bai}, it is recalled here to fix terminology and notation for later use.
A map $f:\A \to \B$ between affine spaces $\A$ and $\B$ is affine if there is a linear map $\lin(f):\ste \to \stw$, between the vector spaces $\ste$ and $\stw$ of translations of $\A$ and $\B$, such that $f(q) - f(p) = \lin(f)(q - p)$ for all $p, q \in \A$. With the operation of composition the bijective affine maps of $\A$ to itself form the Lie group $\Aff(\A)$ of affine automorphisms of $\A$, and $\lin:\Aff(\A) \to GL(\ste)$ is a surjective Lie group homomorphism.
A one-parameter subgroup through the identity in $\Aff(\A)$ has the form $\Id_{A} + t\phi + O(t^{2})$ for some affine map $\phi:\A \to \ste$. Consequently, the Lie algebra $\aff(\A)$ of $\Aff(\A)$ is the vector space of affine maps from $\A$ to $\ste$ equipped with the bracket $[f, g] = \lin(f)\circ g - \lin(g) \circ f$. An \textit{affine representation} of a Lie algebra $(\g, [\dum, \dum])$ on the affine space $\A$ is a Lie algebra homomorphism $\rho:\g \to \aff(\A)$, that is, a linear map satisfying $\rho([a, b]) = \lin(\rho(a))\rho(b) - \lin(\rho(b))\rho(a)$ for all $a,b\in \g$. Any choice of fixed point (origin) $a_{0} \in \A$ determines a projection $\trans:\aff(\A) \to \ste$ onto the translational part defined by $\trans(f) = f(a_{0}) $, so that $f(a) = \lin(f)(a - a_{0}) + \trans(f)$ for any $a \in \A$. The Lie bracket on $\aff(\A)$ can be transported to $\eno(\ste) \oplus \ste$ via the resulting linear isomorphism $\lin \oplus \trans: \aff(\A) \to \eno(\ste) \oplus \ste$.
Let $\rho:\g \to \aff(\A)$ be an affine representation of the Lie algebra $(\g, [\dum, \dum])$ on the affine space $\A$ with translations $\ste$. The representation $\rho$ is \textit{faithful} if it is injective, it is \textit{prehomogeneous at $x_{0}$} if there exists $x_{0} \in \A$ such that the map $\g \to \ste$ defined by $a \to \rho(a)x_{0}$ is a linear surjection, and it is \textit{étale at $x_{0}$} if there exists $x_{0} \in \A$ such that the map $\g \to \ste$ defined by $a \to \rho(a)x_{0}$ is a linear isomorphism. (If it is not important what $x_{0}$ is, $\rho$ is simply said to be \textit{prehomogeneous} or \textit{étale}.) An affine representation is étale if and only if it is faithful and prehomogeneous.
Let $(\alg, \mlt)$ be a finite-dimensional LSA.
Let $\lin:\aff(\alg) \to \eno(\alg)$ be the projection onto the linear part and let $\trans:\aff(\alg) \to \alg$ be the projection, $\trans(\phi) = \phi(0)$, corresponding to the origin $0 \in \alg$. That $\mlt$ be left-symmetric is equivalent to the requirement that the map $\phi = L \oplus I:\alg \to \eno(\alg) \oplus \alg \simeq \aff(\alg)$ be an affine Lie algebra representation, where $I$ is the identity endomorphism of $\alg$ and the isomorphism $ \eno(\alg) \oplus \alg \simeq \aff(\alg)$ is that inverse to $\lin \oplus \trans$. Since $\trans \circ \phi = I$, that is $\phi(a)0 = a$, $\phi$ is étale. The map $\phi$ is the \textit{canonical affine representation} of the LSA $(\alg, \mlt)$.
In the other direction, given an étale affine representation $\rho: \g \to \aff(\A)$, for $a, b \in \g$ there exists a unique $a \mlt b \in \g$ such that $\lin(\rho(a))\rho(b)x_{0} = \rho(a \mlt b)x_{0}$. From the fact that $\rho$ is a Lie algebra representation it follows that $\mlt$ is a left-symmetric multiplication on $\g$ with underlying Lie bracket $[\dum, \dum]$. The special case where $\g = \alg = \A$ is a vector space with origin $x_{0} = 0$ and $\trans \circ \rho = I$ yields the affine representation $\rho:\alg \to \aff(\alg)$ of the Lie algebra $(\alg, [\dum, \dum])$ and the compatible left-symmetric multiplication $a\mlt b = \rho(a\mlt b)0 = \lin(\rho(a))b = \rho(a)(0 + b) - \rho(a)0 = \rho(a)b - a$ with left multiplication operator $L = \lin \circ \rho$.
The étale affine representation $\rho:\alg \to \aff(\alg)$ determined by a left-symmetric multiplication $\mlt$ extends to a faithful linear representation $\hat{\rho}:\alg \to \gl(\alg \oplus \rea)$ defined by $\hat{\rho}(a)(b, t) = (\lin(\rho(a))b + t\trans(\rho(a)), 0)$. Consequently, an $n$-dimensional Lie algebra that admits no faithful linear representation of dimension $n+1$ admits no compatible left-symmetric multiplication. The first such example, with $n = 11$, was constructed by Y. Benoist in \cite{Benoist-nilvariete}.
Given an LSA $(\alg, \mlt)$, let $G$ be the simply-connected Lie group with Lie algebra $(\alg, [\dum, \dum])$. For $a \in \alg$, the vector field $\livf^{a}_{g} = \tfrac{d}{dt}\big|_{t = 0}g\cdot \exp_{G}(ta)$ generated on $G$ by right multiplication by $\exp_{G}(ta)$ is left-invariant. The relations defining an LSA mean that the left-invariant connection $\nabla$ on $G$ defined by $\nabla_{\livf^{a}}\livf^{b} = \livf^{a\mlt b}$ is torsion-free and flat. Conversely, given a flat torsion-free left-invariant connection $\nabla$ on a Lie group $G$, defining $a\mlt b$ by $\nabla_{\livf^{a}}\livf^{b} = \livf^{a\mlt b}$ for $a$ and $b$ in the Lie algebra $\g$ of $G$, makes $\g$ into an LSA.
Suppose $G$ is simply-connected with identity element $e$ and let $\dev:G \to \alg$ be the developing map such that $\dev(e) = 0$.
Let $R_{g}$ be the operator on $G$ of right multiplication by $g$. By the left invariance of $\nabla$ there exists a unique $\hol(g) \in \Aff(\alg)$ such that $\dev \circ R_{g} = \hol(g)\circ \dev$. The map $\hol:G \to \Aff(\alg)$ is a group homomorphism. Although the homomorphism $\hol$ depends on the choice of origin $0$, another such choice leads to a homomorphism conjugate to $\hol$ by an element of $\Aff(\alg)$.
Since $\dev$ is an open map, the image $\dev(G)$ is an open subset of $\alg$ on which $\hol(G)$ acts transitively with discrete isotropy group. This is the canonical affine representation of $G$ on $\alg$. Since the isotropy group of $\hol(G)$ is discrete, the corresponding affine representation $T\hol(e):\g \to \aff(\alg)$ is étale. Lemma \ref{infhollemma} shows that $T\hol(e)$ is the affine representation determined by the original LSA and shows how the affine representation $\hol$ intertwines the exponential maps on $G$ and $\Aff(\alg)$. It is equivalent to results in \cite{Kim-developingmaps}.
\begin{lemma}[cf. \cite{Kim-developingmaps}]\label{infhollemma}
Let $G$ be the simply-connected Lie group with Lie algebra the underlying Lie algebra of an LSA $(\alg, \mlt)$, and let $\nabla$ be the flat torsion-free left-invariant affine connection determined on $G$ by the multiplication $\mlt$. The differential $T\hol(e)$ at the identity $e \in G$ of the affine representation $\hol:G \to \Aff(\alg)$ corresponding to the developing map $\dev:G \to \alg$ such that $\dev(e) = 0$ is equal to the canonical affine representation $\phi = L \oplus I:\alg \to \aff(\alg)$, where $L$ is the operator of left multiplication in $(\alg, \mlt)$.
Precisely, for $a, b \in \alg$,
\begin{align}\label{holexp0}
&\tfrac{d}{dt}\big|_{t = 0}\hol(\exp_{G}(ta))\cdot b = T\hol(e)a \cdot b = \phi(a)b = a\mlt b + a = (I + R(b))a,\\
\label{holexp}
&\hol(\exp_{G}(a)) = \exp_{\Aff(\alg)}\circ \phi(a) = (e^{L(a)}, E(a)),
\end{align}
where $E(a) = \sum_{k \geq 1}\tfrac{1}{k!}L(a)^{k-1}a$. The differential $T\dev_{g}:T_{g}G \to \alg$ of the developing map at $g \in G$ equals $(I + R(\dev(g)))\circ TR_{g^{-1}}$. For $a \in \alg$ regard the map $x \to (I + R(x))a = \rvf^{a}_{x}$ as a vector field on $\alg$. The flow of $\rvf^{a}$ is $\phi^{a}_{t}(x) = \hol(\exp_{G}(ta))\cdot x = e^{tL(a)}x + E(ta)$.
\end{lemma}
Note that, by \eqref{holexp},
\begin{align}
\dev(\exp_{G}(a)) = \hol(\exp_{G}(a))0 = (e^{L(a)}, E(a))0 = E(a).
\end{align}
The identity \eqref{holexp} appears in an equivalent form in section II.$1$. of \cite{Vinberg}; see also \cite{Choi-domain}.
\begin{proof}
It follows from the definition of the exponential map of a Lie group that $\hol \circ \exp_{G} = \exp_{\Aff(\alg)}\circ T\hol(e)$, where $T\hol(e):\alg \to \aff(\alg)$ is the differential of $\hol$ at $e$. Precisely, for $a \in \g$, $\si(t) = \exp_{\Aff(\alg)}(tT\hol(e)a)$ is by definition the unique one-parameter subgroup of $\Aff(\alg)$ such that $\si(0) = e_{\Aff(\alg)}$ and $\si^{\prime}(0) = T\hol(e)a$; since $\hol\circ \exp_{G}(ta)$ is a one-parameter subgroup of $\Aff(\alg)$ satisfying the same initial conditions, it equals $\si(t)$.
By the definition of $\dev$, the image $\dev(\ga(t))$ of the $\nabla$-geodesic $\ga(t)$ such that $\ga(0) = e$ and $\dot{\ga}(0) = a$ is a straight line in $\alg$ tangent to $a$ at $0 \in \alg$, so $T\dev(e)a = \tfrac{d}{dt}\big|_{t = 0}\dev(\ga(t)) = a$. Since $\ga(t)$ is tangent at $e$ to the curve $\exp_{G}(ta)$,
\begin{align}\label{atdevea}
\begin{split}
a &= \tfrac{d}{dt}\big|_{t = 0}\dev(\ga(t)) = T\dev(e)a = \tfrac{d}{dt}\big|_{t = 0}\dev(\exp_{G}(ta)) \\
&= \tfrac{d}{dt}\big|_{t = 0}\hol(\exp_{G}(ta))\dev(e) = T\hol(e)a \cdot \dev(e)= \trans(T\hol(e)a),
\end{split}
\end{align}
so $\trans \circ T\hol(e) = I$. By the definition of $\dev$, $\nabla$ is the pullback via $\dev$ of the flat connection $\pr$ on $\alg$ determined by the flat affine structure on $\alg$ whose geodesics are the lines in $\alg$.
Let $Y^{a}_{\dev(g)} = \tfrac{d}{dt}\big|_{t = 0}\dev(g\cdot \exp_{G}(ta)) = T\dev(g)(\livf^{a}_{g})$. The integral curve through $e$ of $\livf^{a}$ is $\exp_{G}(ta)$ and the integral curve through $\dev(e)$ of $Y^{a}$ is $\dev(\exp_{G}(ta))$. Combining the preceding observations with \eqref{holexp0} and \eqref{atdevea} yields
\begin{align}
\begin{split}
a\mlt b& = T\dev(e)(\livf^{a\mlt b}_{e}) = T\dev(e)((\nabla_{\livf^{a}}\livf^{b})_{e}) = (\pr_{Y^{a}}Y^{b})_{e}\\
& = \tfrac{d}{dt}\big|_{t = 0}Y^{b}_{\dev(\exp_{G}(ta))} = \tfrac{d}{dt}\big|_{t = 0}\tfrac{d}{ds}\big|_{s = 0} \dev(\exp_{G}(ta)\exp_{G}(sb))\\
& = \tfrac{d}{dt}\big|_{t = 0}\tfrac{d}{ds}\big|_{s = 0} \hol(\exp_{G}(ta))\dev(\exp_{G}(sb))\\
& = \tfrac{d}{dt}\big|_{t = 0}\tfrac{d}{ds}\big|_{s = 0}\exp_{\Aff(\alg)}(T\hol(e)(ta))\dev(\exp_{G}(sb))\\
& = \tfrac{d}{dt}\big|_{t = 0}\lin(\exp_{\Aff(\alg)}(tT\hol(e)a))b = \lin(T\hol(e)a) \cdot b.
\end{split}
\end{align}
This shows $\lin (T\hol(e)a) = L(a)$, and so $T\hol(e) = \phi$. The exponential map $\exp_{G}$ of $G$ can be described explicitly for $a \in \alg$ near $0$ by representing $\aff(\alg)$ as a subgroup of $\eno(\alg \oplus \rea)$ and exponentiating the image $\phi(a)$. There results \eqref{holexp}. Finally, for $a \in \alg$ and $g \in G$,
\begin{align}
\begin{split}
T\dev_{g}\circ TR_{g}(a) & = \tfrac{d}{dt}\big|_{t = 0}\dev(\exp_{G}(ta)g) = \tfrac{d}{dt}\big|_{t = 0}\hol(\exp_{G}(ta))\dev(g) \\&= T\hol(e)a \cdot \dev(g) = (I + R(\dev(g)))a = \rvf^{a}_{\dev(g)},
\end{split}
\end{align}
by \eqref{atdevea}. There remains to show that the flow of $\rvf^{a}$ is $\phi^{a}_{t}(x) = e^{tL(a)}x + E(ta)$. Because
\begin{align}
\tfrac{d}{dt}E(ta) = \tfrac{d}{dt}\sum_{k\geq 1}\tfrac{t^{k}}{k!}L(a)^{k-1}a = \sum_{k \geq 1}\tfrac{t^{k-1}}{(k-1)!}L(a)^{k-1}a = e^{tL(a)}a,
\end{align}
there holds
\begin{align}
\tfrac{d}{dt}\phi^{a}_{t}(x) = \tfrac{d}{dt}(e^{tL(a)}x + E(ta)) = L(a)e^{tL(a)}x + e^{tL(a)}a.
\end{align}
On the other hand,
\begin{align}
\rvf^{a}_{\phi^{a}_{t}(x)} = (I + R(\phi^{a}_{t}(x)))a = a + L(a)e^{tL(a)}x + L(a)E(ta) = L(a)e^{tL(a)}x + e^{tL(a)}a.
\end{align}
This shows the final claim.
\end{proof}
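As a concrete check of the flow formula (an editorial illustration, not part of the proof): any associative algebra is an LSA under its own multiplication, so for $M_{2}(\rea)$ one has $L(a)x = ax$, $R(x)a = ax$, and $E(ta) = e^{ta} - I$. The following Python sketch compares a difference quotient of $\phi^{a}_{t}$ with the vector field $\rvf^{a}$; the helper names (`expm`, `phi`, `rvf`) are ad hoc choices made here.

```python
import numpy as np

def expm(A, terms=40):
    """Matrix exponential by truncated power series (adequate for small inputs)."""
    out, term = np.eye(len(A)), np.eye(len(A))
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

# (M_2(R), matrix multiplication) as an LSA: L(a)x = a@x, R(x)a = a@x,
# E(ta) = expm(t*a) - I, so the flow of the lemma is
# phi_t(x) = e^{t L(a)} x + E(ta) = expm(t*a)@x + expm(t*a) - I.
a = np.array([[0.1, 0.4], [-0.3, 0.2]])
x = np.array([[0.6, -0.2], [0.1, 0.5]])

def phi(t):
    return expm(t * a) @ x + expm(t * a) - np.eye(2)

def rvf(y):            # the vector field X^a_y = (I + R(y))a = a + a@y
    return a + a @ y

# central difference of the flow at t = 0.7 versus the field at phi(0.7)
t, h = 0.7, 1e-6
dphi = (phi(t + h) - phi(t - h)) / (2 * h)
assert np.allclose(dphi, rvf(phi(t)), atol=1e-6)
```

The agreement is exact up to the difference-quotient error, since for an associative algebra $\tfrac{d}{dt}\phi^{a}_{t}(x) = a e^{ta}(x + I) = a(\phi^{a}_{t}(x) + I)$.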
By \eqref{holexp0}, for $a, x \in \alg$, the differential at $e \in G$ of the map $g \to \hol(g)\cdot x$ is $I + R(x)$, and the tangent space $T_{x}\hol(G)\cdot x$ at $x$ of the orbit $\hol(G)\cdot x$ is the image of $I + R(x)$, which is spanned by the vector fields $\rvf^{a}$.
Following Helmstetter in \cite{Helmstetter}, define the \textit{characteristic polynomial} of the LSA by $P(x) = \det(I + R(x))$. Since $R(x)$ is linear in $x$, $\deg P \leq \dim \alg$.
\begin{lemma}[J. Helmstetter \cite{Helmstetter}] \label{charpolylemma}
Let $G$ be the simply-connected Lie group with Lie algebra the underlying Lie algebra of the finite-dimensional LSA $(\alg, \mlt)$. The characteristic polynomial $P(x) = \det(I + R(x))$ of $(\alg, \mlt)$ satisfies
\begin{align}\label{holgp}
(\hol(g)\cdot P)(x) = P(\hol(g^{-1})x) = \chi(g^{-1})P(x)
\end{align}
for $g \in G$ and $x \in \alg$, where $\chi:G \to \reap$ is the group character satisfying $\chi\circ \exp_{G}(a) = e^{\tr R(a)}$ for $a \in \alg$.
\end{lemma}
\begin{proof}
Proposition $3.1$ of H. Kim's \cite{Kim-developingmaps} gives a proof of \eqref{holgp} in terms of the developing map. The proof given here, based on \eqref{holexp0}, is similar to Kim's. By \eqref{holexp0}, for $g \in G$ and $a, x \in \alg$,
\begin{align}
\begin{split}
(I + R(\hol(g)x))a & = \tfrac{d}{dt}\big|_{t = 0}\hol(\exp_{G}(ta))\hol(g)x = \tfrac{d}{dt}\big|_{t = 0}\hol(g\exp_{G}(t\Ad(g^{-1})a))x.
\end{split}
\end{align}
Suppose that $g = \exp_{G}(b)$. Then, using \eqref{holexp0} and \eqref{holexp},
\begin{align}
\begin{split}
(I &+ R(\hol(\exp_{G}(b))x))a = \tfrac{d}{dt}\big|_{t = 0}\hol(\exp_{G}(b)\exp_{G}(t\Ad(\exp_{G}(-b))a))x\\
& = \tfrac{d}{dt}\big|_{t = 0}\left(e^{L(b)}\hol(\exp_{G}(t\Ad(\exp_{G}(-b))a))x + E(b)\right)
= e^{L(b)}(I + R(x))\Ad(\exp_{G}(-b))a.
\end{split}
\end{align}
Consequently $P(\hol(\exp_{G}(b))x) = e^{(\tr L(b) - \tr \ad(b))}P(x) = e^{\tr R(b)}P(x)$. Since $G$ is generated by any given open neighborhood of the identity, in particular by $\exp_{G}(\alg)$, corresponding to the infinitesimal character $\tr R$ there is a unique group character $\chi:G \to \reap$ satisfying $\chi\circ \exp_{G}(b) = e^{\tr R(b)}$ for all $b \in \alg$. Then $P(\hol(g)x) = \chi(g)P(x)$ holds for $g \in \exp_{G}(\alg)$. Since $\exp_{G}(\alg)$ generates $G$, this implies that equality holds for all $g \in G$.
\end{proof}
A consequence of Lemma \ref{charpolylemma} is the result of \cite{Goldman-Hirsch-orbits} (see section $1A.8$) that the orbit $\hol(G) 0 = \hol(G)\dev(e) = \dev(G)$ is the connected component containing $0$ of the complement of the zero set of $P$ (see also Proposition $3.2$ of \cite{Kim-developingmaps}).
\begin{corollary}[\cite{Goldman-Hirsch-orbits}, \cite{Kim-developingmaps}, \cite{Helmstetter}]\label{orbitcorollary}
Let $G$ be the simply-connected Lie group with Lie algebra the underlying Lie algebra of the finite-dimensional LSA $(\alg, \mlt)$. The orbit $\hol(G) 0 = \hol(G)\dev(e) = \dev(G)$ is the connected component $\Om$ containing $0$ of the complement of the zero set $\{x \in \alg: P(x) = 0 \}$ of the characteristic polynomial $P$ and $\dev:G \to \Om$ is a diffeomorphism.
\end{corollary}
\begin{proof}
Taking $x = 0$ in \eqref{holgp} shows that
\begin{align}\label{pdev}
P(\dev(g))= P(\hol(g)0) =\chi(g),
\end{align}
for $g \in G$, so that $P$ is nonvanishing on $\dev(G)$.
By \eqref{holgp}, the action of $\hol(G)$ preserves $\{x \in \alg: P(x) = 0 \}$, and hence permutes the connected components of its complement. Since $G$ is path-connected and $\hol(e)\Om = \Om$, there holds $\hol(g)\Om = \Om$ for all $g \in G$, so $\hol(G)\Om = \Om$. Consequently $\dev(G) = \hol(G)0 \subset \hol(G)\Om = \Om$.
By Lemma \ref{infhollemma}, the differential of $\dev$ at $g \in G$ equals $(I + R(\dev(g)))\circ TR_{g^{-1}}$, and so is a linear isomorphism, since $P(\dev(g)) \neq 0$. Hence $\dev:G \to \Om$ is a local diffeomorphism. More generally, since $I + R(x)$ is invertible whenever $P(x) \neq 0$, every $\hol(G)$-orbit contained in $\Om$ is open; as $\Om$ is connected and is partitioned into such open orbits, $\dev(G) = \hol(G)0$ equals $\Om$. The vector fields $\rvf^{a}$ are complete, their flows $\phi^{a}_{t}$ preserve $\Om$, and $\dev$ intertwines the complete flows of the right-invariant vector fields on $G$ with the flows $\phi^{a}_{t}$; since a local diffeomorphism intertwining frames of complete vector fields is a covering map, $\dev:G \to \Om$ is a covering map. Since $G$ is simply-connected, $\dev$ must be a diffeomorphism.
\end{proof}
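A minimal worked example (supplied editorially, not in the original): the one-dimensional incomplete LSA given by multiplication on $\rea$.

```latex
% Illustrative example: the one-dimensional LSA (R, .) with x * y = xy.
Here $L(a) = R(a) = a$, so $P(x) = 1 + x$ and
\begin{align*}
E(a) = \sum_{k \geq 1}\tfrac{1}{k!}a^{k-1}\,a = e^{a} - 1,
\qquad
\dev(\exp_{G}(a)) = E(a) = e^{a} - 1 .
\end{align*}
Thus $\dev(G) = (-1, \infty)$, which is exactly the connected component
containing $0$ of $\{x \in \rea : P(x) \neq 0\} = \rea \setminus \{-1\}$,
and $P(\dev(\exp_{G}(a))) = e^{a} = e^{\tr R(a)} = \chi(\exp_{G}(a))$,
consistent with \eqref{pdev}.
```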
Let $(\alg, \mlt)$ be a finite-dimensional LSA. The \textit{trace form} $\tau$ of the LSA $(\alg, \mlt)$ is the symmetric bilinear form defined by
\begin{align}\label{traceformdefined}
\tau(x, y) = \tr R(x)R(y) = \tr R(x\mlt y)
\end{align}
(the second equality follows from \eqref{rlsa}).
Differentiating $\hol(\exp_{G}(-ta))\cdot P$ using \eqref{holexp} and \eqref{holgp} yields
\begin{align}\label{dptra}
\begin{split}
dP_{x}((I + R(x))a) & = \tfrac{d}{dt}\big|_{t = 0}P(\hol(\exp_{G}(ta))x) = \tfrac{d}{dt}\big|_{t = 0}e^{t\tr R(a)}P(x) = P(x)\tr R(a).
\end{split}
\end{align}
In particular, $dP_{0} = d\log P_{0} = \tr R$.
Since $R(x)$ is linear in $x$, differentiating $R(x)$ in the direction of $b \in \alg$ yields $R(b)$. Writing \eqref{dptra} as $d\log P_{x}((I + R(x))a) = \tr R(a)$ and taking the derivative in the $b$ direction yields
\begin{align}\label{hesslogp}
(\hess \log P)_{x}((I + R(x))a, b) = - d\log P_{x}(a \mlt b) = -\tr R((I + R(x))^{-1}(a\mlt b)).
\end{align}
In particular, $\tau(a, b) = -(\hess \log P)_{0}(a, b)$, so that nondegeneracy of the trace form is the algebraic condition corresponding to nondegeneracy of the Hessian of $\log P$, or, what is essentially the same, the nondegeneracy of the level sets of $P$. Since the eventual goal here is to find characteristic polynomials $P$ solving $\H(e^{P}) = \kc e^{nP}$ with $\kc \neq 0$, the focus is on LSAs for which $\tau$ is nondegenerate. Differentiating \eqref{hesslogp} yields
\begin{align}\label{cubiclogp}
(\pr^{2}d\log P)_{x}((I + R(x))a, b, c) = - (\hess\log P)_{x}(a\mlt b, c) -(\hess \log P)_{x}(b, a \mlt c),
\end{align}
which combined with \eqref{hesslogp} shows that
\begin{align}\label{taucubic}
(\pr^{2}d\log P)_{0}(a, b, c) = \tau(a\mlt b, c) + \tau(b, a\mlt c).
\end{align}
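The relation $\tau(a, b) = -(\hess \log P)_{0}(a, b)$ can be verified numerically for the LSA underlying $M_{2}(\rea)$. The following sketch (editorial; the finite-difference stencil and the helper names are choices made here) compares a second mixed difference quotient of $\log P$ at the origin with the trace form:

```python
import numpy as np

# (M_2(R), matrix multiplication) as an LSA; with row-major vec(y) the
# right-multiplication operator y -> y@x is kron(I, x.T).
I2, I4 = np.eye(2), np.eye(4)
R_op = lambda x: np.kron(I2, x.T)
P = lambda x: np.linalg.det(I4 + R_op(x))
tau = lambda a, b: np.trace(R_op(a) @ R_op(b))   # tau(a,b) = tr R(a)R(b)

a = np.array([[0.3, -0.1], [0.2, 0.4]])
b = np.array([[-0.2, 0.5], [0.1, 0.3]])

# second mixed partial of log P(s a + t b) at (s, t) = (0, 0)
h = 1e-4
f = lambda s, t: np.log(P(s * a + t * b))
hess = (f(h, h) - f(h, -h) - f(-h, h) + f(-h, -h)) / (4 * h * h)

# tau(a, b) = -(Hess log P)_0(a, b)
assert np.isclose(hess, -tau(a, b), atol=1e-5)
```

For this algebra $\tau(a, b) = 2\tr(ab)$ and $\log P = 2\log\det(I + x)$, so the check reduces to the classical second derivative of $\log\det$.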
More generally there holds the following identity due to Helmstetter.
\begin{lemma}[J. Helmstetter; $(5)$ of \cite{Helmstetter}]
The characteristic polynomial $P$ of an LSA $(\alg, \mlt)$ satisfies
\begin{align}\label{nablaklogp2}
(\pr^{k}d\log &P)_{0}(a_{0}, a_{1}, \dots, a_{k}) = (-1)^{k}\sum_{\si \in S_{k}}\tr\left(R(a_{\si(1)})\dots R(a_{\si(k)})R(a_{0}) \right),
\end{align}
for all $a_{0}, \dots, a_{k} \in \alg$.
\end{lemma}
\begin{proof}
The identity \eqref{nablaklogp2} is proved by induction on $k$.
The base case $k = 1$ is true by \eqref{hesslogp}. Repeated differentiation shows that, for $k \geq 1$,
\begin{align}\label{nablakdlogp}
\begin{split}
(\pr^{k}d\log &P)_{x}((I + R(x))a_{0}, a_{1}, \dots, a_{k}) = -\sum_{i = 1}^{k}(\pr^{k-1}d\log P)_{x}(a_{1}, \dots, a_{i-1}, a_{0}\mlt a_{i}, a_{i+1}, \dots, a_{k}).
\end{split}
\end{align}
The inductive step is proved using \eqref{nablakdlogp}, \eqref{rlsa}, and that $\sum_{\si \in S_{k+1}}[A_{\si(1)}\dots A_{\si(k)}, A_{\si(k+1)}] = 0$
for any $A_{1}, \dots, A_{k+1} \in \eno(\alg)$.
\end{proof}
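Helmstetter's identity can be spot-checked numerically for $k = 2$ on the LSA underlying $M_{2}(\rea)$ (an editorial sketch; the third-order difference stencil and the tolerance are choices made here):

```python
import numpy as np
from itertools import permutations

I2, I4 = np.eye(2), np.eye(4)
R_op = lambda x: np.kron(I2, x.T)           # right multiplication on M_2(R)
logP = lambda x: np.log(np.linalg.det(I4 + R_op(x)))

a0 = np.array([[0.5, -0.2], [0.3, 0.4]])
a1 = np.array([[-0.4, 0.6], [0.2, -0.3]])
a2 = np.array([[0.3, 0.1], [-0.5, 0.2]])

# third mixed partial of log P(t0 a0 + t1 a1 + t2 a2) at the origin
h = 5e-3
def f(t0, t1, t2):
    return logP(t0 * a0 + t1 * a1 + t2 * a2)
lhs = sum(s0 * s1 * s2 * f(s0 * h, s1 * h, s2 * h)
          for s0 in (1, -1) for s1 in (1, -1) for s2 in (1, -1)) / (8 * h**3)

# (nablaklogp2) with k = 2 and sign (-1)^2 = +1:
rhs = sum(np.trace(R_op(p) @ R_op(q) @ R_op(a0))
          for p, q in permutations((a1, a2)))
assert np.isclose(lhs, rhs, atol=1e-3)
```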
An LSA $(\alg, \mlt)$ is \textit{right nil} (\textit{left nil}) if $R(a)$ (respectively $L(a)$) is nilpotent for every $a \in \alg$.
A finite-dimensional LSA $(\alg, \mlt)$ is \textit{complete} if its characteristic polynomial is nowhere vanishing. Equivalently, $I + R(a)$ is invertible for all $a \in \alg$. The LSA is \textit{incomplete} otherwise. If $R(a)$ is nilpotent, then $I + R(a)$ is invertible, so $P(a) \neq 0$. Hence a right nil LSA is complete. The converse claim, that a complete LSA is right nil, is due to D. Segal. This and other equivalent characterizations of completeness are explained in Lemma \ref{completenesslemma}.
\begin{lemma}\label{completenesslemma}
Let $(\alg, \mlt)$ be a finite-dimensional real LSA with characteristic polynomial $P$. Let $\nabla$ be the flat left-invariant affine connection determined by $\mlt$ on the simply-connected Lie group $G$ with Lie algebra $(\alg, [\dum, \dum])$. The following are equivalent.
\begin{enumerate}
\item\label{cmp1} $\nabla$ is complete.
\item\label{cmp5} $R(a)$ is nilpotent for all $a \in \alg$; that is, $(\alg, \mlt)$ is right nil.
\item\label{cmp6} $\tr R(a) = 0$ for all $a \in \alg$.
\item\label{cmp7} The trace form $\tau$ defined in \eqref{traceformdefined} is identically zero.
\item\label{cmp8} $(\pr^{N}d\log P)_{0} = 0$ for some $N \geq 1$.
\item\label{cmp3} $P$ is constant and equal to $1$.
\item\label{cmp4} $P$ has a critical point at $0 \in \alg$.
\item\label{cmp2} $P$ is nowhere vanishing on $\alg$; that is, $(\alg, \mlt)$ is complete.
\end{enumerate}
\end{lemma}
\begin{proof}
\eqref{cmp1}$\iff$\eqref{cmp2}. By Corollary \ref{orbitcorollary}, $\dev$ is a diffeomorphism onto the connected component containing $0$ of the complement in $\alg$ of the zero set of $P$. The affine connection $\nabla$ is complete if and only if $\dev$ is a diffeomorphism from $G$ onto $\alg$. Hence $\nabla$ is complete if and only if $P$ is nowhere vanishing.
\eqref{cmp5}$\implies$\eqref{cmp2} and \eqref{cmp6}. If $R(a)$ is nilpotent, then $I + R(a)$ is invertible and $\tr R(a) = 0$.
\eqref{cmp2}$\implies$\eqref{cmp6}. It suffices to show that if $I + R(a)$ is invertible for all $a \in \alg$, then $(\alg, \mlt)$ is right nil, for then $\tr R(a) = 0$ for all $a \in \alg$. This is proved as Theorem $1$ of D. Segal's \cite{Segal-lsa} (an alternative algebraic proof is given in \cite{Elduque-Myung-lsas}). The proof uses facts about dominant morphisms and, in particular, the fact that the orbits of a morphic action of a unipotent algebraic group are closed. Segal's proof shows the stronger statement that for a base field of characteristic zero, an LSA is complete if and only if the LSA obtained by extension of scalars to the algebraic closure of the base field is also complete. In the present context, this amounts to showing that the characteristic polynomial of the complexified LSA is nowhere vanishing; since the nonvanishing of the characteristic polynomial implies that no $R(b)$ has a nonzero eigenvalue in the base field, this suffices to show that all $R(b)$ are trace free.
\eqref{cmp3}$\iff$\eqref{cmp5}. If $P$ is equal to $1$ then \eqref{nablaklogp2} with $a_{0} = a_{1} = \dots = a_{k} = a$ implies $\tr R(a)^{k+1} = 0$ for all $k \geq 0$. Hence $R(a)$ is nilpotent for all $a \in \alg$.
If $R(a)$ is nilpotent for all $a \in \alg$, then $I + R(a)$ is unipotent for all $a \in \alg$, so $P(a) = \det (I + R(a)) = 1$.
\eqref{cmp3}$\iff$\eqref{cmp6}. Associate with $a \in \alg$ the vector field $\rvf^{a}_{x} = (I + R(x))a$. On the open set $U = \{x \in \alg : P(x) \neq 0\}$, which contains $0$, the operator $I + R(x)$ is invertible, so if $\{a_{1}, \dots, a_{n}\}$ is a basis of $\alg$, then $\rvf^{a_{1}}_{x}, \dots, \rvf^{a_{n}}_{x}$ are linearly independent for all $x \in U$. By \eqref{dptra}, $d\log P_{x}(\rvf^{a_{i}}_{x}) = \tr R(a_{i})$ for $x \in U$. If $P$ is constant, the constant must be $1$ because $P(0) = 1$, and so $\tr R(a) = d\log P_{x}(\rvf^{a}_{x}) = 0$ for all $a \in \alg$. On the other hand, if $\tr R = 0$, then $d\log P_{x}(\rvf^{a_{i}}_{x}) = 0$ for $1 \leq i \leq n$ and all $x \in U$, so $dP$ vanishes on the nonempty open set $U$; since $P$ is a polynomial, $dP$ vanishes identically, and $P$ is constant, equal to $P(0) = 1$.
\eqref{cmp4}$\iff$\eqref{cmp6}. Since $P(0) = 1$ and $dP_{0} = d\log P_{0} = \tr R$, $P$ has a critical point at $0 \in \alg$ if and only if $\tr R = 0$.
\eqref{cmp6}$\iff$\eqref{cmp7}. If there holds \eqref{cmp6}, by \eqref{traceformdefined}, $\tau(x, y) = \tr R(x \mlt y) = 0$ for all $x, y \in \alg$. Suppose there holds \eqref{cmp7}. Since $\tr R$ is linear there is $r \in \alg^{\ast}$ such that $\tr R(x) = r(x)$. Since $P(0) = 1$ and $P$ is a polynomial, $\log P$ is real analytic in a neighborhood of $0$. By \eqref{nablakdlogp} with $x = 0$, if there holds $(\pr^{k-1}d\log P)_{0} = 0$ then $(\pr^{k}d\log P)_{0} = 0$. By \eqref{hesslogp}, that $\tau = 0$ means $(\pr d\log P)_{0} = 0$, so $(\pr^{k}d\log P)_{0} = 0$ for $k \geq 2$. Hence for $x$ sufficiently near $0$, $\log P(x) = r(x)$, or $P(x) = e^{r(x)}$. Differentiating this equality $k$ times yields $(\pr^{k}P)_{0} = r^{\tensor k}$, the $k$-fold tensor product of $r$ with itself. Since $P$ is a polynomial, this vanishes for $k > \deg P$, and hence $r$ must be $0$.
\eqref{cmp8}$\iff$\eqref{cmp3}. That \eqref{cmp3}$\implies$\eqref{cmp8} is immediate, so suppose there holds \eqref{cmp8}. Since $P(0) = 1$ and $P$ is a polynomial, $\log P$ is real analytic in a neighborhood $U$ of $0$. By \eqref{nablakdlogp}, that $(\pr^{N}d\log P)_{0} = 0$ means that $(\pr^{k}d\log P)_{0} =0$ for all $k \geq N$. Since $\log P$ is real analytic, this implies it equals some polynomial $Q$ of degree at most $N-1$ on $U$. Then $P = e^{Q}$ on $U$, and hence, both sides being real analytic on the connected set $\alg$, everywhere. A polynomial of the form $e^{Q}$ with $Q$ a polynomial forces $Q$ to be constant, so $P$ is constant, equal to $P(0) = 1$.
\end{proof}
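A two-dimensional illustration of \eqref{cmp5}$\implies$\eqref{cmp3} (editorial; the algebra is chosen here for simplicity): the product $(x_{1}, x_{2}) \mlt (y_{1}, y_{2}) = (0, x_{1}y_{1})$ on $\rea^{2}$ is associative, hence left-symmetric, and is right nil.

```python
import numpy as np

# A right-nil LSA on R^2: (x1, x2) * (y1, y2) = (0, x1 y1).
# Here R(x) y = y * x has matrix [[0, 0], [x1, 0]], which is nilpotent,
# so the algebra is complete and P(x) = det(I + R(x)) is identically 1.
R = lambda x: np.array([[0.0, 0.0], [x[0], 0.0]])
P = lambda x: np.linalg.det(np.eye(2) + R(x))

rng = np.random.default_rng(3)
for x in rng.normal(size=(5, 2)):
    assert np.isclose(P(x), 1.0)
    assert np.allclose(R(x) @ R(x), 0.0)   # R(x)^2 = 0: right nil
```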
\begin{remark}
It may be tempting to believe that \eqref{cmp2} implies \eqref{cmp4} for any real polynomial, that is, that a nonvanishing real polynomial must have a critical point, but this is false. For example the polynomial $Q(x, y) = (1 + x +x^{2}y)^{2} + x^{2}$ takes on all positive real values and has nonvanishing gradient (this example is due to \cite{fedja}). This means that this $Q$ cannot be the characteristic polynomial of a real finite-dimensional LSA. This suggests the problem: \textit{Characterize those polynomials that occur as the characteristic polynomials of some real finite-dimensional LSA}.
\end{remark}
If $(\alg, \mlt)$ is incomplete, there is $a \in \alg$ such that $\tr R(a) \neq 0$, so $dP_{0} = \tr R$ is not identically zero and $\ker \tr R$ has codimension one.
\begin{lemma}\label{cpafflemma}
If $\Psi:(\alg, \mlt) \to (\balg, \circ)$ is an isomorphism of LSAs, the characteristic polynomials $P_{\mlt}$ and $P_{\circ}$ satisfy $P_{\circ} \circ \Psi = P_{\mlt}$.
\end{lemma}
\begin{proof}
The right multiplication operators satisfy $R_{\circ}(\Psi(x)) \circ \Psi = \Psi \circ R_{\mlt}(x)$ for all $x \in \alg$. The claim follows from the definition of the characteristic polynomial.
\end{proof}
\begin{remark}
By Lemma \ref{cpafflemma}, isomorphic LSAs have affinely equivalent characteristic polynomials. The converse is trivially false, because by Lemma \ref{completenesslemma} any two complete LSAs have affinely equivalent characteristic polynomials. Example \ref{negeigexample} exhibits nonisomorphic LSAs having the same nondegenerate characteristic polynomial.
\end{remark}
\section{Hessian LSAs, Koszul forms, idempotents, and the trace-form}\label{hessiansection}
A symmetric bilinear form $h$ on an LSA $(\alg, \mlt)$ is \textit{Hessian} if its \textit{cubic form} $C_{h}$ defined by $C_{h}(x, y, z) = h(x\mlt y, z) + h(y, x\mlt z)$ is completely symmetric, meaning
\begin{align}\label{hessianmetric}
0 = C_{h}(x, y, z) - C_{h}(y, x, z) = h([x, y], z) - h(x, y\mlt z) + h(y, x \mlt z),
\end{align}
for all $x, y, z\in \alg$. A \textit{metric} is a nondegenerate symmetric bilinear form $h$. A \textit{Hessian LSA} is a triple $(\alg, \mlt, h)$ comprising an LSA $(\alg, \mlt)$ and a Hessian metric $h$. The terminology \textit{Hessian} follows \cite{Shima-homogeneoushessian}. It follows from \eqref{rlsa} that the cubic form $C = C_{\tau}$ of the trace-form $\tau$ of an LSA is completely symmetric, so $\tau$ is Hessian. Hence, if $\tau$ is nondegenerate then it is a Hessian metric. Moreover, the identity \eqref{taucubic} shows that $C(a, b, c) = (\pr^{2}d\log P)_{0}(a, b, c)$.
A \textit{Koszul form} on the LSA $(\alg,\mlt)$ is an element $\la \in \alg^{\ast}$ such that $h(x, y) = \la(x \mlt y)$ is a metric.
The assumed symmetry of $h$ implies $[\alg, \alg] \subset \ker \la$, while the left symmetry of $\mlt$ implies that $h$ is a Hessian metric.
If an LSA admits a Koszul form, then its left regular representation $L$ is faithful, for, if $L(x) = 0$, then $h(x, y) = \la(L(x)y) = 0$ for all $y \in \alg$ implies $x = 0$.
An algebra $(\alg, \mlt)$ is \textit{perfect} if $\alg \mlt \alg = \alg$.
An algebra $(\alg, \mlt)$ is \textit{simple} if its square $\alg^{2}$ is not zero, and $\alg$ contains no nontrivial two-sided ideal. A nontrivial two-sided ideal in an LSA $(\alg, \mlt)$ is in particular a nontrivial ideal in the underlying Lie algebra $(\alg, [\dum, \dum])$. Consequently, if the Lie algebra underlying an LSA is simple then the LSA is simple.
Since a simple algebra is perfect, if $(\alg, \mlt)$ is a simple LSA then two Koszul forms inducing the same metric must be equal.
Let $(\alg, \mlt)$ be a finite-dimensional LSA. If the trace form $\tau$ defined in \eqref{traceformdefined} is nondegenerate, then $\tr R$ is a Koszul form.
By \eqref{cmp7} of Lemma \ref{completenesslemma}, if $(\alg, \mlt)$ is complete, then $\tau$ vanishes identically, so an LSA with nontrivial trace form $\tau$ is necessarily incomplete.
Given a Koszul form $\la$ on $(\alg, \mlt)$, the unique element $u \in \alg$ such that $\la(a) = \la(a \mlt u)$ for all $a \in \alg$ is idempotent. By \eqref{hessianmetric}, $h(R(u)x, y) - h(x, R(u)y) = h(u, [x,y]) = \la([x, y]) = 0$. Consequently, $h(u\mlt u, x) = h(u, x\mlt u) = \la(x\mlt u) = h(x, u)$ for all $x \in \alg$, so $u\mlt u = u$. The element $u \in\alg$ is the \textit{idempotent associated with $\la$}.
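The construction of the associated idempotent can be made concrete (an editorial sketch, with ad hoc helper names): for the LSA underlying $M_{2}(\rea)$, $\la = \tr R$ is a Koszul form with metric $h(x, y) = 2\tr(xy)$, and solving the defining linear system $h(a, u) = \la(a)$ over a basis recovers the unit matrix as $u$.

```python
import numpy as np

# (M_2(R), matrix multiplication): lambda = tr R is a Koszul form, since
# h(x, y) = tr R(x@y) = 2 tr(x@y) is nondegenerate.  Solve h(b_i, u) = lambda(b_i)
# over the standard basis {E11, E12, E21, E22} and check that u is idempotent.
I2 = np.eye(2)
R_op = lambda x: np.kron(I2, x.T)        # right multiplication, row-major vec
lam = lambda x: np.trace(R_op(x))        # lambda(x) = tr R(x) = 2 tr(x)
h = lambda x, y: lam(x @ y)

basis = [np.outer(I2[i], I2[j]) for i in range(2) for j in range(2)]
gram = np.array([[h(bi, bj) for bj in basis] for bi in basis])
c = np.linalg.solve(gram, np.array([lam(bi) for bi in basis]))
u = sum(ci * bi for ci, bi in zip(c, basis))

assert np.allclose(u, I2)                    # the associated idempotent is the unit
assert np.allclose(u @ u, u)                 # u * u = u
assert np.isclose(np.trace(R_op(u)), 4.0)    # tr R(u) = dim M_2(R)
```

Here $u$ is a unit, so $\tr R(u) = \dim \alg$, the extreme case described below for the LSAs arising from homogeneous convex cones.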
Write $\lb x_{1}, \dots, x_{k} \ra$ for the span of $x_{1}, \dots, x_{k} \in \alg$ and $V^{\perp}$ for the orthogonal complement of the subspace $V \subset \alg$ with respect to a given Hessian metric $h$.
For $A \in \eno(\alg)$, let $A^{\ast} \in \eno(\alg)$ be the \textit{$h$-adjoint} of $A$ defined by $h(Ax, y) = h(x, A^{\ast}y)$.
\begin{lemma}\label{principalidempotentlemma}
For an $n$-dimensional LSA $(\alg, \mlt)$ with a Koszul form $\la$ with associated Hessian metric $h$, the idempotent $u$ associated with $\la$ has the following properties.
\begin{enumerate}
\item\label{lurupre} $L(u)$ and $R(u)$ preserve the Lie ideal $\ker \la = \lb u \ra^{\perp}$.
\item\label{lladj} $L(u) + L(u)^{\ast} = R(u) + I$.
\item\label{rsa} $R(u)$ is $h$-self-adjoint, $R(u) = R(u)^{\ast}$.
\item\label{trru} $R(u) - R(u)^{2}$ is nilpotent, and there is an integer $k$ such that $1 \leq \tr R(u) = \dim \alg_{1} = k \leq n$ and $(R(u) - R(u)^{2})^{\max\{k, n-k\}} = 0$.
\item \label{a0l} $\sum_{k \geq 1} \ker R(u)^{k} \subset \ker \la$.
\item\label{rclsa6b} No nontrivial left ideal of $(\alg, \mlt)$ is contained in $\ker \la$.
\item\label{kos8} $\ker R(u) \subset \ker \la$ is a Lie subalgebra of $\alg$ on which $L(u)$ acts as a Lie algebra derivation.
\end{enumerate}
\end{lemma}
\begin{proof}
If $x \in \ker \la$ then $\la(R(u)x) = \la(L(u)x) = h(u, x) = \la(x) = 0$, so $L(u)\ker \la \subset \ker \la$ and $R(u)\ker \la \subset \ker \la$. This shows \eqref{lurupre}. Taking $x = u$ in \eqref{hessianmetric} and using $h(y \mlt z, u) = \la(y \mlt z) = h(y, z)$ yields \eqref{lladj}, and so also \eqref{rsa}. Let $N = R(u) - R(u)^{2} = [L(u), R(u)]$. Then $\tr N^{k+1} = \tr [L(u), R(u)]N^{k} = \tr L(u)[R(u), N^{k}] = 0$ for all $k \geq 0$. This implies $N = R(u) - R(u)^{2}$ is nilpotent.
Next it is claimed that the Fitting decomposition $\alg = \alg_{0} \oplus \alg_{1}$ of $\alg$ with respect to $R(u)$ is an $h$-orthogonal $L(u)$-invariant direct sum, that is $\alg_{0}^{\perp} = \alg_{1}$ and $\alg_{1}^{\perp} = \alg_{0}$ and $L(u)$ preserves $\alg_{0}$ and $\alg_{1}$, and, moreover, the Fitting decomposition $\alg = \bar{\alg}_{0} \oplus \bar{\alg}_{1}$ of $\alg$ with respect to $I - R(u)$ satisfies $\bar{\alg}_{0} = \alg_{1}$ and $\bar{\alg}_{1} = \alg_{0}$. By definition of the Fitting decomposition $\alg_{0} = \sum_{k \geq 1} \ker R(u)^{k}$ and $\alg_{1} = \cap_{k \geq 1}R(u)^{k}\alg$. Because $\alg$ is finite-dimensional there is a minimal $p$ such that $\alg_{0} = \ker R(u)^{p}$ and $\alg_{1} = R(u)^{p}\alg$. If $z\in \alg_{1}$ there is $y \in \alg$ such that $z = R(u)^{p}y$ and so for all $x \in \alg_{0}$, $h(z, x) = h(R(u)^{p}y, x) = h(y, R(u)^{p}x) = 0$, showing $\alg_{1} \subset \alg_{0}^{\perp}$. If $z \in \alg_{1}^{\perp}$, then $h(R(u)^{p}z, x) = h(z, R(u)^{p}x) = 0$ for all $x \in \alg$, so $R(u)^{p}z = 0$ and $z \in \alg_{0}$. Thus $\alg_{1}^{\perp} \subset \alg_{0}$. This shows $\alg_{0}^{\perp} = \alg_{1}$ and $\alg_{1}^{\perp} = \alg_{0}$. It is easily checked, by induction on $k$, that $[R(u)^{k},L(u)] = k(R(u)^{2} -R(u))R(u)^{k-1}$ for $k \geq 1$. If $x \in \alg_{0}$ then $R(u)^{p}L(u)x = L(u)R(u)^{p}x + p(R(u)^{p+1} -R(u)^{p})x = 0$, so $L(u)x \in \alg_{0}$. If $x \in \alg_{1}$ then there is $y \in \alg$ such that $x = R(u)^{p}y$ and $L(u)x = L(u)R(u)^{p}y = R(u)^{p}(L(u) + p(I - R(u)))y \in \alg_{1}$. This shows that the Fitting decomposition is $L(u)$ invariant. If $(I - R(u))^{m}x = 0$ then
\begin{align}\label{xru}
x = -\sum_{j = 1}^{m}\binom{m}{j}(-R(u))^{j}x = R(u)\sum_{j = 1}^{m}\binom{m}{j}(-R(u))^{j-1}x.
\end{align}
This shows $x \in \im R(u)$, and, substituted in \eqref{xru} repeatedly, this implies $x \in \cap_{j \geq 1}\im R(u)^{j} = \alg_{1}$. Consequently $\bar{\alg}_{0} \subset \alg_{1}$. Let $q \geq 1$ be such that $(R(u) - R(u)^{2})^{q} = 0$. If $m \geq q$ and $x = (I - R(u))^{m}y$ then $R(u)^{m}x = (R(u) - R(u)^{2})^{m}y = 0$, so $x \in \ker R(u)^{m} \subset \alg_{0}$. Hence $\bar{\alg}_{1} \subset \alg_{0}$. Since $\bar{\alg}_{0} \oplus \bar{\alg}_{1} = \alg = \alg_{0}\oplus \alg_{1}$, this suffices to show $\bar{\alg}_{1} = \alg_{0}$ and $\bar{\alg}_{0} = \alg_{1}$.
Since the only eigenvalue of $R(u) - R(u)^{2}$ (over the algebraic closure of the base field) is $0$, the minimal polynomial of $R(u)$ divides some power of $x(1-x)$, and any eigenvalue of $R(u)$ is either $0$ or $1$.
Hence $k = \tr R(u)$ is an integer no greater than $n$. Since $R(u)$ is invertible on $\alg_{1}$ and nilpotent on $\alg_{0}$, $k = \tr R(u) = \dim \alg_{1}$. Since $R(u)u = u$, $k \geq 1$. Let $q \geq 1$ be the minimal integer such that $(R(u) - R(u)^{2})^{q} = 0$. Since $R(u)$ is nilpotent on $\alg_{0}$ and $I - R(u)$ is nilpotent on $\alg_{1}$ there is a minimal integer $t\geq 1$ such that $R(u)^{t}$ vanishes on $\alg_{0}$ and $(I-R(u))^{t}$ vanishes on $\alg_{1}$. Then $(R(u) - R(u)^{2})^{t} = R(u)^{t}(I - R(u))^{t} = 0$, so $t \geq q$. Since $t \leq \max\{\dim \alg_{0}, \dim \alg_{1}\} = \max\{n-k, k\}$, it follows that $q \leq \max\{n-k, k\}$.
Since, for all $x \in \alg$, $\la(R(u)x) = \la(x \mlt u) = h(x, u) = \la(x)$, there holds $\la(R(u)^{k}x) = \la(x)$ for all $k \geq 1$. If $x \in \alg_{0}$ then $x \in \ker R(u)^{p}$ for some $p \geq 0$, so $0 = \la(R(u)^{p}x) = \la(x)$, showing that $x \in \ker \la$. This shows \eqref{a0l}. Suppose $J$ is a nontrivial left ideal in $(\alg, \mlt)$ and $0\neq x \in J \cap \ker \la$. By the nondegeneracy of $h$ there is $y$ such that $\la(y\mlt x) = h(x, y) \neq 0$, and so $y \mlt x \in J$ but $y \mlt x \notin \ker \la$. This shows \eqref{rclsa6b}. Claim \eqref{kos8} is straightforward.
\end{proof}
Claims \eqref{lladj} and \eqref{rsa} are stated explicitly as Proposition $2.6$ and Corollary $2.7$ of \cite{Choi-Chang}. When $h$ is positive definite the content of Lemma \ref{principalidempotentlemma} is due to Vinberg in \cite{Vinberg} (see pages $369-370$), and is also in \cite{Shima-homogeneoushessian}; the proofs for general $h$ are essentially the same, although the conclusions are not as strong as for positive definite $h$. If $h$ is positive definite then that $R(u) - R(u)^{2}$ be nilpotent and self-adjoint means it is zero, so $R(u)$ is a projection operator. In this case, $R(u)$ commutes with $L(u)$, so if the eigenvalues of $L(u)$ are real, so too are those of $R(u)$, in which case it follows that $L(u)$ is self-adjoint. From these observations Vinberg constructed a canonical decomposition of the LSA on which is based his structure theory for homogeneous convex cones.
These results motivate much of what follows. When $h$ has mixed signature the conditions that $R(u) - R(u)^{2}$ be self-adjoint and nilpotent alone are insufficient to conclude its vanishing; in this case it is not clear what purely algebraic condition forces the conclusion that $R(u)$ be a projection.
The \textit{rank} of a Koszul form is the integer $\tr R(u)$. The rank is a basic invariant of the Koszul form. For the LSAs appearing in the classification of homogeneous convex cones, the idempotent $u$ is a unit, and so $\tr R(u) = n$, whereas for the LSAs giving rise to homogeneous improper affine spheres, $R(u)$ is a projection onto a one-dimensional ideal, so $\tr R(u) = 1$. Thus these two cases can be seen as opposite extremes.
The \emph{right principal idempotent} of an incomplete LSA with nondegenerate trace form $\tau$ is the idempotent $r$ associated with the Koszul form $\tr R$.
\begin{lemma}\label{trrunilemma}
For an incomplete LSA $(\alg, \mlt)$ with nondegenerate trace form $\tau$ and right principal idempotent $r$, $(\ker \tr R, [\dum, \dum])$ is a unimodular Lie algebra if and only if $\tr R(r) \tr L = \tr L(r) \tr R$.
\end{lemma}
\begin{proof}
Clearly $\mu = \tr R(r) \tr L - \tr L(r) \tr R \in \alg^{\ast}$ satisfies $\mu(r) = 0$. Since $[\alg, \alg] \subset \ker \tr R$, there holds $\tr \ad_{\ker \tr R}(x) = \tr \ad(x) = \tr L(x) - \tr R(x) = \tr L(x)$ for $x \in \ker \tr R$. Hence $\mu(x) = \tr R(r)\tr\ad_{\ker \tr R}(x)$ for $x \in \ker \tr R$. Since by Lemma \ref{principalidempotentlemma}, $1 \leq \tr R(r) \leq \dim \alg$, it follows that $\mu$ vanishes if and only if $\ker \tr R$ is unimodular.
\end{proof}
Theorem \ref{lsacptheorem} shows that if the characteristic polynomial of an LSA satisfies certain conditions then its level sets are improper affine spheres. This motivates identifying algebraic properties of the LSA that guarantee that its characteristic polynomial satisfies these conditions.
\begin{theorem}\label{lsacptheorem}
If the characteristic polynomial $P(x)$ of an $n$-dimensional LSA $(\alg, \mlt)$ satisfies $P(x + tv) = P(x) + \la t$ for some $0 \neq v \in \alg$ and $\la \in \rea$ and $(\alg, \mlt)$ satisfies $2\tr L = (n+1)\tr R$, then $\H(e^{P}) = \kc e^{nP}$ for some constant $\kc \in \rea$.
If, moreover, the trace form $\tau$ is nondegenerate, then
\begin{enumerate}
\item\label{thom2} $\kc \neq 0$ and each level set of $P$ is an improper affine sphere with affine normal equal to a multiple of $v$;
\item\label{thom1} $\la \neq 0$, the element $r = \la^{-1}v$ is the right principal idempotent of $(\alg, \mlt)$, and $\tr R(r) = 1$.
\end{enumerate}
\end{theorem}
\begin{proof}
For any LSA, differentiating \eqref{holgp} yields
\begin{align}\label{hessholp}
\begin{split}
(\det &\lin(\hol(g)))^{-2} \hol(g)\cdot \det(\hess P + dP \tensor dP) \\&= \det(\hess(\hol(g)\cdot P) + d(\hol(g)\cdot P) \tensor d(\hol(g)\cdot P))\\
& = \det(\chi(g^{-1})\hess P + \chi(g^{-2})dP\tensor dP)= \chi(g)^{-n}\det(\hess P + \chi(g)^{-1}dP\tensor dP).
\end{split}
\end{align}
By Lemma \ref{twisteddetlemma}, if $e^{P}$ is translationally homogeneous in some direction then the last term in \eqref{hessholp} equals $\chi(g)^{-n-1}\det(\hess P + dP \tensor dP)$. In such a case taking $g = \exp_{G}(a)$ and noting $\det \lin(\hol(\exp_{G}(a))) = \det e^{L(a)} = e^{\tr L(a)}$ yields
\begin{align}
\hol(g)\cdot \det(\hess P + dP \tensor dP) = e^{2\tr L(a) - (n+1)\tr R(a)}\det (\hess P + dP \tensor dP).
\end{align}
If $2\tr L = (n+1)\tr R$, it follows that $\det (\hess P + dP \tensor dP)$ is constant on an open set, and so constant, because it is a polynomial. In this case $e^{P}$ solves $\H(e^{P}) = \kc e^{n P}$ for some constant $\kc$. If $\kc \neq 0$ then the conclusion of claim \eqref{thom2} follows from Theorem \ref{ahtheorem}. Suppose $\tau$ is nondegenerate. To show $\kc \neq 0$ it suffices to show that $\H(e^{P})$ is nonzero at a single point of $\alg$. Since $P^{2}\hess(\log P) = P\,P_{ij} - P_{i}P_{j}$ and $e^{-P}\hess(e^{P}) = P_{ij} + P_{i}P_{j}$, it follows from $P(x + tv) = P(x) + \la t$ and Lemma \ref{twisteddetlemma} that $P^{n+1}\H(\log P) = - e^{-nP}\H(e^{P})$. By \eqref{hesslogp} the nondegeneracy of $\tau$ implies that $\H(\log P) \neq 0$ at $0$, and so $\H(e^{P})$ is nonzero at $0$. There remains to show \eqref{thom1}. Differentiating $P(x + tv) = P(x) + \la t$ at $t = 0$ and using \eqref{dptra} shows that $\la = dP_{0}(v) = P(0)\tr R(v) = \tr R(v)$, and using this and \eqref{hesslogp} shows that $-\la \tr R(a) = (\hess P)_{0}(a, v) - \la \tr R(a) = (\hess \log P)_{0}(a, v) = -\tau(a, v)$ for all $a \in \alg$. By the nondegeneracy of $\tau$, were $\la$ zero then $v$ would be zero, a contradiction. Hence $\la \neq 0$ and $r = \la^{-1}v$ satisfies $\tr R(r) = 1$ and $\tau(a, r) = \tr R(a)$ for all $a \in \alg$; the latter identity says exactly that $r$ is the idempotent associated with the Koszul form $\tr R$, that is, the right principal idempotent.
\end{proof}
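The theorem can be sanity-checked in dimension one (an editorial illustration):

```latex
% Editorial illustration in dimension one: (R, .), x * y = xy.
For $(\alg, \mlt) = (\rea, \cdot)$ one has $n = 1$, $P(x) = 1 + x$, and
$P(x + t) = P(x) + t$, so $v = 1$ and $\la = 1$.  Both traces satisfy
$\tr L(a) = \tr R(a) = a$, so $2\tr L = (n+1)\tr R$ holds, and
$\H(e^{P}) = (e^{1+x})'' = e^{1+x} = e^{nP}$, so $\kc = 1 \neq 0$.  The trace
form $\tau(x, y) = xy$ is nondegenerate, $r = \la^{-1}v = 1$ is idempotent, and
$\tr R(r) = 1$, as in \eqref{thom1}.
```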
\section{Notions of nilpotence for LSAs}\label{nilpotencesection}
Various notions of nilpotence, and their interrelations, that play a role in what follows are explained now. The reader should be aware that the terminology for various different notions of nilpotence is different in different papers on LSAs. For example, in \cite{Helmstetter} an LSA is called \textit{nilpotent} if it is what is here called \textit{right nil}, while here \textit{nilpotent} is used in a different sense.
\begin{lemma}\label{solvablekernellemma}
For a finite-dimensional LSA $(\alg, \mlt)$ over a field of characteristic zero the kernel $\ker L$ of the left regular representation $L$ is a two-sided ideal and a trivial left-symmetric subalgebra of $(\alg, \mlt)$, for which $-R$ is a Lie algebra representation by commuting operators.
\end{lemma}
\begin{proof}
By the definition of an LSA, $\ker L$ is a Lie subalgebra of $(\alg, [\dum, \dum])$. If $x \in \alg$ and $n \in \ker L$ then $n\mlt x = L(n)x = 0 \in \ker L$, and $L(x\mlt n) = L(n\mlt x) + [L(x), L(n)] = 0$, showing that $\ker L$ is a two-sided ideal of $(\alg, \mlt)$. By \eqref{rlsa}, if $n, m \in \ker L$ then $0 = R(m \mlt n) = R(n)R(m)$, so $0 = R([m, n]) = -[R(m), R(n)]$. This shows $-R$ is a Lie algebra representation of $\ker L$ on $\alg$ by commuting operators.
\end{proof}
A finite-dimensional LSA $(\alg, \mlt)$ defined over a field of characteristic zero is \textit{triangularizable} if there is a basis of $\alg$ with respect to which $L(x)$ is triangular for every $x \in \alg$.
\begin{lemma}\label{cslemma}
For a finite-dimensional LSA $(\alg, \mlt)$ defined over a field $\fie$ of characteristic zero, the following are equivalent:
\begin{enumerate}
\item\label{csl1} $(\alg, \mlt)$ is triangularizable.
\item\label{csl2} The underlying Lie algebra $(\alg, [\dum, \dum])$ is solvable and for every $x \in \alg$ the eigenvalues of $L(x)$ are contained in $\fie$.
\end{enumerate}
\end{lemma}
\begin{proof}
Suppose $(\alg, \mlt)$ is triangularizable and fix a basis with respect to which $L(x)$ is triangular for all $x \in \alg$. This implies that the eigenvalues of $L(x)$ are contained in $\fie$. Then $L([x, y]) = [L(x), L(y)]$ is strictly triangular for all $x, y \in \alg$ and so $L(a)$ is strictly triangular for every $a \in [\alg, \alg]$. This implies $\tr L(a)L(b) = 0$ for all $a, b \in [\alg, \alg]$. Since, by Lemma \ref{solvablekernellemma}, $\ker L$ is an abelian Lie algebra, this implies $(\alg, [\dum, \dum])$ is solvable, by Cartan's criterion (see, e.g., section III.$4$ of \cite{Jacobson}). On the other hand, if there holds \eqref{csl2} then, by one version of Lie's Theorem (see, e.g., Theorem $1.2$ in chapter $1$ of \cite{Gorbatsevich-Onishchik-Vinberg}), there is a complete flag in $\alg$ invariant under $L(\alg)$, so $(\alg, \mlt)$ is triangularizable.
\end{proof}
Lemma \ref{completesolvablelemma} is due to Helmstetter.
\begin{lemma}[Proposition $(20)$ and Corollary $(21)$ of \cite{Helmstetter}]\label{completesolvablelemma} Let $\fie$ be a field of characteristic zero.
\noindent
\begin{enumerate}
\item\label{ulie1} The underlying Lie algebra of a complete finite-dimensional LSA over $\fie$ is solvable.
\item\label{ulie2} The underlying Lie algebra of a finite-dimensional LSA over $\fie$ is not perfect. In particular, the codimension of $[\alg, \alg]$ in $\alg$ is always at least one.
\end{enumerate}
\end{lemma}
\begin{proof}
If the underlying Lie algebra of a finite-dimensional LSA $(\alg, \mlt)$ over $\fie$ is not solvable, it contains a nontrivial semisimple Lie subalgebra $\S$ and $L$ is a representation of $\S$ on $\alg$. Since $\S$ is semisimple, the first Lie algebra cohomology of any finite-dimensional representation of $\S$ is trivial. View $\alg$ as an $\S$-module with the action given by $L$. Since the inclusion of $\S$ in $\alg$ is a Lie algebra cocycle of $\S$ with coefficients in $\alg$, there exists $a \in \alg$ such that $x = L(x)a = R(a)x$ for all $x \in \S$. This means that $I - R(a)$ is not invertible, so $P(-a) = 0$, contradicting the completeness of $(\alg, \mlt)$. This shows \eqref{ulie1}.
For a general finite-dimensional LSA $(\alg, \mlt)$, were $\alg = [\alg, \alg]$, then, by \eqref{rlsa}, $\alg$ would be right nil, and so complete. By \eqref{ulie1} this would imply that the underlying Lie algebra was solvable, contradicting $\alg = [\alg, \alg]$.
\end{proof}
\begin{corollary}
A complete finite-dimensional LSA over a field $\fie$ of characteristic zero is triangularizable if and only if for every $x \in \alg$ the eigenvalues of $L(x)$ are contained in $\fie$. In particular, a complete finite-dimensional LSA over an algebraically closed field of characteristic zero is triangularizable.
\end{corollary}
\begin{proof}
This follows from Lemmas \ref{completesolvablelemma} and \ref{cslemma}.
\end{proof}
\begin{lemma}[Proposition $(26)$ of \cite{Helmstetter}]\label{underlyingnilpotentlemma}
A finite-dimensional LSA over a field of characteristic zero is left nil if and only if it is right nil and its underlying Lie algebra is nilpotent.
\end{lemma}
\begin{proof}
This is Proposition $(26)$ of \cite{Helmstetter} (see also section $2$ of \cite{Kim-completeleftinvariant}).
\end{proof}
Associated with an LSA $(\alg, \mlt)$ there are the following descending series of subspaces of $\alg$. Define $\alg^{1} = \alg$, $\lnil^{1}(\alg) = \alg$ and $\rnil^{1}(\alg) = \alg$, and define recursively $\alg^{i+1} = [\alg, \alg^{i}]$, $\lnil^{i+1}(\alg) = \alg \mlt \lnil^{i}(\alg) = L(\alg)\lnil^{i}(\alg)$, and $\rnil^{i+1}(\alg) = \rnil^{i}(\alg)\mlt \alg = R(\alg)\rnil^{i}(\alg)$. It can be checked by induction on $i$ that $\alg^{i}\mlt \lnil^{j}(\alg) \subset \lnil^{i+j}(\alg)$; using this it can be checked by induction on $i$ that $\alg^{i} \subset \lnil^{i}(\alg)$. Using $\alg^{i} \subset \lnil^{i}(\alg)$ it can be checked by induction on $i$ that $\rnil^{i}(\alg)$ is a two-sided ideal of $(\alg, \mlt)$. This fact is contained in Proposition $23$ of \cite{Helmstetter} and the proof just sketched is indicated in \cite{Kim-completeleftinvariant}. The LSA $(\alg, \mlt)$ is \textit{right nilpotent} of \textit{length} $k$ if $k \geq 1$ satisfies $\rnil^{k}(\alg) \neq \{0\}$ and $\rnil^{k+1}(\alg) = \{0\}$. A two-sided ideal in a right nilpotent LSA is right nilpotent, as is the quotient of the LSA by the ideal. A right nilpotent LSA is right nil, but a right nil LSA need not be right nilpotent.
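As a minimal illustration of these series (an example included here for concreteness, not drawn from the cited references), let $\alg = \lb e_{1}, e_{2} \ra$ be the two-dimensional algebra whose only nonzero product is $e_{1}\mlt e_{1} = e_{2}$. All associators vanish, so $(\alg, \mlt)$ is an associative, hence left-symmetric, algebra with abelian underlying Lie algebra. Its series are
\begin{align}
\lnil^{2}(\alg) = \rnil^{2}(\alg) = \lb e_{2} \ra, \qquad \lnil^{3}(\alg) = \rnil^{3}(\alg) = \{0\},
\end{align}
so $(\alg, \mlt)$ is right nilpotent of length $2$.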
Lemma \ref{rightnilpotentlemma} is a slight refinement of claim $(3)$ of Proposition $24$ of \cite{Helmstetter}.
\begin{lemma}\label{rightnilpotentlemma}
For an $n$-dimensional LSA $(\alg, \mlt)$ the following are equivalent:
\begin{enumerate}
\item\label{jdes1} $(\alg, \mlt)$ is right nilpotent.
\item\label{jdes2} There is a sequence of two-sided ideals $\alg = \I^{1} \supset \I^{2} \supset \dots \supset \I^{r} = \{0\}$ such that $R(\alg)\I^{i} \subset \I^{i+1}$ for all $1 \leq i \leq r-1$. In particular each quotient $\I^{i}/\I^{i+1}$ is a trivial LSA.
\item\label{jdes3} There is a sequence of two-sided ideals $\alg = \J^{1} \supset \J^{2} \supset \dots \supset \J^{n} \supset \J^{n+1} = \{0\}$ such that $\dim \J^{i} = n+1-i$ and such that $R(\alg)\J^{i} \subset \J^{i+1}$ for all $1 \leq i \leq n$. In particular each quotient $\J^{i}/\J^{i+1}$ is a trivial one-dimensional LSA.
\end{enumerate}
In the case there hold \eqref{jdes1}-\eqref{jdes3}, the LSA $(\alg, \mlt)$ is triangularizable.
\end{lemma}
\begin{proof}
If $(\alg, \mlt)$ is right nilpotent, then \eqref{jdes2} holds with $\I^{i}= \rnil^{i}(\alg)$; thus \eqref{jdes1} implies \eqref{jdes2}. In a trivial LSA any subspace is a two-sided ideal, so any descending sequence of subspaces, each having codimension one in the preceding, is a descending sequence of two-sided ideals, each having codimension one in the preceding. Supposing given ideals as in \eqref{jdes2}, choosing such a sequence in each quotient $\I^{i}/\I^{i+1}$ and lifting the result to $\alg$ yields the desired sequence of ideals $\J^{j}$; thus \eqref{jdes2} implies \eqref{jdes3}. Suppose given a sequence of ideals $\J^{i}$ as in \eqref{jdes3}. Then $R(a)\J^{i} \subset \J^{i+1}$ for all $a \in \alg$, so $R(a_{1})\dots R(a_{n})\J^{1} = \{0\}$ for all $a_{1}, \dots, a_{n} \in \alg$, which means $\alg$ is right nilpotent. If there holds \eqref{jdes3}, then there is a complete flag of subspaces in $\alg$ stable under $L(\alg)$, so $(\alg, \mlt)$ is triangularizable.
\end{proof}
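By way of illustration (an example included here for concreteness, not taken from \cite{Helmstetter}), in the two-dimensional LSA $\alg = \lb e_{1}, e_{2} \ra$ whose only nonzero product is $e_{1}\mlt e_{1} = e_{2}$, a flag as in \eqref{jdes3} is
\begin{align}
\J^{1} = \alg \supset \J^{2} = \lb e_{2} \ra \supset \J^{3} = \{0\},
\end{align}
since $R(\alg)\J^{1} = \lb e_{2}\ra$ and $R(\alg)\J^{2} = \{0\}$. Consistently with the final claim of Lemma \ref{rightnilpotentlemma}, with respect to the ordered basis $e_{2}, e_{1}$ every $L(x)$ is (strictly) triangular, as $L(x)e_{2} = 0$ and $L(x)e_{1} \in \lb e_{2} \ra$.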
In a finite-dimensional algebra $(\alg, \mlt)$ define $\mnil^{1}(\alg) = \alg$ and define $\mnil^{i}(\alg)$ to be the vector subspace of $\alg$ generated by all products of at least $i$ elements of $\alg$, however associated. Each term of the decreasing sequence $\alg = \mnil^{1}(\alg) \supset \mnil^{2}(\alg) \supset \dots \supset \{0\}$ is a two-sided ideal in $\alg$. If there is $k$ such that $\mnil^{k}(\alg) = \{0\}$, then $(\alg, \mlt)$ is \textit{nilpotent}. By a theorem of I.~M.~H. Etherington \cite{Etherington}, an algebra is nilpotent if and only if the associative multiplication algebra $\mult(\alg) \subset \eno(\alg)$ generated by $L(\alg)$ and $R(\alg)$ is nilpotent. The proof amounts to showing that
\begin{align}\label{mnilmult}
\mnil^{i+1}(\alg) \supset \mult(\alg)^{i}\alg \supset \mnil^{2^{i}}(\alg).
\end{align}
(Care is needed because although $\mult(\alg)^{i}\alg = L(\alg)\mult(\alg)^{i-1}\alg + R(\alg)\mult(\alg)^{i-1}\alg$, by definition of $\mult(\alg)$, it need not be the case that $L(\alg)\mnil^{i}(\alg) + R(\alg)\mnil^{i}(\alg)$ equals $\mnil^{i+1}(\alg)$.)
By \eqref{mnilmult}, a nilpotent LSA is right nilpotent with nilpotent underlying Lie algebra because $\rnil^{k+1}(\alg) = R(\alg)^{k}\alg \subset \mult(\alg)^{k}\alg$ and $\alg^{k+1} = \ad(\alg)^{k}\alg \subset \mult(\alg)^{k}\alg$. Although, for a general not necessarily associative algebra, nilpotence is stronger than right nilpotence, Theorem \ref{trivalgtheorem} shows that a right nilpotent LSA with nilpotent underlying Lie algebra is nilpotent.
When considering multiple LSAs it is convenient to use subscripts, writing $L_{\mlt}$ and $R_{\mlt}$ for the left and right regular representations of a given LSA $(\alg, \mlt)$. However, such subscripts will be used only when necessary; when they are omitted, $L$ and $R$ are always defined with respect to the multiplication indicated by the symbol $\mlt$.
\begin{lemma}\label{hereditylemma}
Let $(\alg, \mlt)$ be a finite-dimensional LSA. If $(\alg, \mlt)$ is left nil, right nil, right nilpotent, nilpotent, or has nilpotent underlying Lie algebra, then any left-symmetric subalgebra or any homomorphic image of $(\alg, \mlt)$ has the same property.
\end{lemma}
\begin{proof}
All the claims for a homomorphic image follow from the observation that, if $\Phi:(\alg, \mlt) \to (\balg, \circ)$ is a surjective LSA homomorphism, then
\begin{align}\label{rphir}
&R_{\circ}(\Phi(x))\circ \Phi = \Phi \circ R_{\mlt}(x), & &L_{\circ}(\Phi(x)) \circ \Phi = \Phi \circ L_{\mlt}(x),
\end{align}
for all $x \in \alg$. By \eqref{rphir} it is immediate that a homomorphic image of a right nil or a left nil LSA has the same property. Similarly, by \eqref{rphir}, there hold $\rnil^{k}(\balg) = \Phi(\rnil^{k}(\alg))$ and $\mnil^{i}(\balg) = \Phi(\mnil^{i}(\alg))$, from which it follows that a homomorphic image of a right nilpotent or a nilpotent LSA is right nilpotent or nilpotent. The analogous claims for subalgebras are all straightforward.
\end{proof}
Let $(\alg, \mlt)$ be a finite-dimensional LSA.
Define a two-sided ideal $\triv(\alg)$ by
\begin{align}
\triv(\alg) = \ker L \cap \ker R = \{a \in \alg: a\mlt x = 0 = x\mlt a \,\,\text{for all}\,\, x \in \alg\}.
\end{align}
Any vector subspace $\I \subset \triv(\alg)$ is also a two-sided ideal of $(\alg, \mlt)$.
\begin{lemma}
The subspaces $\triv^{i}(\alg)$ of the LSA $(\alg, \mlt)$ defined by $\triv^{1}(\alg) = \triv(\alg)$ and
\begin{align}
\triv^{i+1}(\alg) = \{z \in \alg: L(z)\alg \subset \triv^{i}(\alg), R(z)\alg \subset \triv^{i}(\alg)\}
\end{align}
for $i \geq 1$ are two-sided ideals in $(\alg, \mlt)$ satisfying $\triv^{1}(\alg) \subset \triv^{2}(\alg) \subset \dots \subset \alg$.
\end{lemma}
\begin{proof}
The proof is by induction on $i$. The case $i = 1$ is clear. Suppose that for $1 \leq j \leq i$ it is known that $\triv^{j}(\alg)$ is a two-sided ideal of $(\alg, \mlt)$ and there holds $\triv^{1}(\alg) \subset \dots \subset \triv^{j}(\alg)$. If $z \in \triv^{i}(\alg)$ and $x \in \alg$ then $z \mlt x$ and $x \mlt z$ are contained in $\triv^{i}(\alg)$ by the inductive hypothesis, so $z \in \triv^{i+1}(\alg)$. Suppose $z \in \triv^{i+1}(\alg)$ and $x \in \alg$. Since $z \mlt x, x \mlt z \in \triv^{i}(\alg)$ and $\triv^{i}(\alg)$ is a two-sided ideal in $(\alg, \mlt)$ it follows that $L(z \mlt x)y$, $R(z\mlt x)y$, $L(x\mlt z)y$, and $R(x\mlt z)y$ are contained in $\triv^{i}(\alg)$, which shows that $x\mlt z$ and $z \mlt x$ are contained in $\triv^{i+1}(\alg)$.
\end{proof}
From the definition it is immediate that $\triv(\alg/\triv^{i}(\alg)) = \triv^{i+1}(\alg)/\triv^{i}(\alg)$.
\begin{lemma}\label{trivnilpotentlemma}
A finite-dimensional LSA $(\alg, \mlt)$ is nilpotent if and only if there is $m \geq 1$ such that $\triv^{m}(\alg) = \alg$.
\end{lemma}
\begin{proof}
An element $z \in \alg$ is contained in $\triv^{m}(\alg)$ if and only if any product of $m$ left and right multiplication operators annihilates $z$. Consequently, $\triv^{m}(\alg) = \alg$ if and only if $\mult(\alg)^{m}\alg = \{0\}$, where $\mult(\alg) \subset \eno(\alg)$ is the multiplication algebra generated by $L(\alg)$ and $R(\alg)$. By \eqref{mnilmult}, $\mult(\alg)^{m}\alg = \{0\}$ if and only if $\alg$ is nilpotent.
\end{proof}
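For illustration (an example included for concreteness, not drawn from the cited references), consider the two-dimensional LSA $\alg = \lb e_{1}, e_{2} \ra$ whose only nonzero product is $e_{1}\mlt e_{1} = e_{2}$. Then
\begin{align}
\triv^{1}(\alg) = \lb e_{2} \ra, \qquad \triv^{2}(\alg) = \alg,
\end{align}
since $e_{2}$ annihilates $\alg$ on both sides while $e_{1}$ does not, and every product lies in $\lb e_{2}\ra = \triv^{1}(\alg)$. Correspondingly $\mult(\alg)^{2}\alg = \{0\}$ and $(\alg, \mlt)$ is nilpotent, as Lemma \ref{trivnilpotentlemma} asserts.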
Theorem \ref{trivalgtheorem} generalizes Lemma $4.2$ of Choi and Kim's \cite{Choi-Kim}, which reaches similar conclusions in the case of a complete abelian LSA.
\begin{theorem}\label{trivalgtheorem}
If a finite-dimensional LSA $(\alg, \mlt)$ is right nilpotent and has nilpotent underlying Lie algebra, then $\triv(\alg)$ has dimension at least one. In this case there is some $m \geq 1$ such that $\triv^{m}(\alg) = \alg$ and so $(\alg, \mlt)$ is nilpotent. If, moreover, $\alg \mlt \alg \neq \{0\}$ then $\triv(\alg) \cap (\alg \mlt \alg) \neq \{0\}$.
\end{theorem}
\begin{proof}
Since $(\alg, \mlt)$ is right nilpotent it is right nil, and since it has nilpotent underlying Lie algebra, it is left nil.
Let $\J^{i}$ be a descending sequence of ideals as in Lemma \ref{rightnilpotentlemma}. Then $\J^{n}$ is one-dimensional where $n = \dim \alg$. Let $z$ generate $\J^{n}$. Then $z \mlt x = 0$ for all $x \in \alg$, so $L(z) = 0$. Since $\J^{n} = \lb z \ra$ is a two-sided ideal, $R(z)x \in \lb z \ra$ for all $x \in \alg$. If there is $x \in \alg$ such that $R(z)x \neq 0$, then, after rescaling $x$, it can be supposed that $L(x)z = z$. Since this means $L(x)$ is not nilpotent, it contradicts that $(\alg, \mlt)$ is left nil. Hence $R(z) = 0$ and so $z \in \triv(\alg)$.
Since, by Lemma \ref{hereditylemma}, $\alg/\triv^{i}(\alg)$ is again right nilpotent with nilpotent underlying Lie algebra, if $\triv^{i}(\alg) \neq \alg$ then $\triv^{i+1}(\alg)/\triv^{i}(\alg) = \triv(\alg/\triv^{i}(\alg))$ has dimension at least one, by the preceding. Since this implies that $\dim \triv^{i+1}(\alg) > \dim \triv^{i}(\alg)$ unless $\triv^{i}(\alg) = \alg$, it implies that there is some $m \geq 1$ such that $\triv^{m}(\alg) = \alg$. By Lemma \ref{trivnilpotentlemma}, $(\alg, \mlt)$ is nilpotent.
If $\alg \mlt \alg \neq \{0\}$, then the ideals $\J^{i}$ may be chosen so that $\J^{n} \subset \alg \mlt \alg$, and since $z \in \J^{n}$, this proves the final claim.
\end{proof}
\begin{lemma}
A right nilpotent LSA admits no Koszul form.
\end{lemma}
\begin{proof}
Let $(\alg, \mlt)$ be a right nilpotent LSA, so there is $k \geq 1$ such that $\rnil^{k+1}(\alg) = \{0\}$ and $\rnil^{k}(\alg) \neq \{0\}$. Then $\rnil^{k}(\alg)$ is a nonzero two-sided ideal contained in $\ker L$. However, if $(\alg, \mlt)$ were to admit a Koszul form, then $L$ would be injective.
\end{proof}
\section{LSAs with nondegenerate trace form and rank one right principal idempotent}
A derivation $D$ of the Hessian LSA $(\alg, \mlt, h)$ is \textit{conformal} if there is a constant $c \in \rea$ such that
\begin{align}\label{conformalder}
h(Dx, y) + h(x, Dy) = ch(x, y),
\end{align}
for all $x, y \in \alg$. Equivalently, $D + D^{\ast} = cI$. If $c = 0$ in \eqref{conformalder} then $D$ is an \textit{infinitesimally isometric derivation}. The conformal derivations and the infinitesimally isometric derivations constitute Lie subalgebras of the Lie algebra of derivations of $(\alg, \mlt)$. Following Definition $2.8$ of \cite{Choi-domain}, a derivation $D$ of $(\alg, \mlt)$ satisfying \eqref{conformalder} with $c = 1$ is called a \textit{compatible derivation} of the Hessian LSA $(\alg, \mlt, h)$.
Following Choi in \cite{Choi-domain}, define the \textit{graph extension} $(\alg, \mlt, \hat{h})$ of the Hessian LSA $(\balg, \circ, h)$ with respect to the compatible derivation $D \in \eno(\balg)$ to be the vector space $\alg = \balg \oplus \lb D \ra$ equipped with the multiplication
\begin{align}\label{gemlt}
(x + aD) \mlt (y + bD) = x\circ y + aDy + (h(x, y) + ab)D
\end{align}
and the symmetric bilinear form
\begin{align}\label{gehath}
\hat{h}(x + aD, y + bD) = h(x, y) + ab.
\end{align}
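The simplest instance of this construction (included here for illustration) is the graph extension of the one-dimensional trivial Hessian LSA $\balg = \rea$, with $x \circ y = 0$ and $h(x, y) = xy$. Every endomorphism of $\balg$ is a derivation of the trivial product, and $D = \tfrac{1}{2}I$ satisfies \eqref{conformalder} with $c = 1$, so is a compatible derivation. By \eqref{gemlt} and \eqref{gehath}, the graph extension is $\alg = \rea \oplus \lb D \ra$ with
\begin{align}
(x + aD) \mlt (y + bD) = \tfrac{1}{2}ay + (xy + ab)D, \qquad \hat{h}(x + aD, y + bD) = xy + ab,
\end{align}
so that $\hat{h}$ is the standard Euclidean form and $D \mlt D = D$.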
\begin{lemma}\label{graphextensionlemma}
The graph extension $(\alg = \balg \oplus \lb D \ra, \mlt, \hat{h})$ of the Hessian LSA $(\balg, \circ, h)$ with respect to the compatible derivation $D \in \eno(\balg)$ satisfies:
\begin{enumerate}
\item\label{ge1} $(\alg, \mlt, \hat{h})$ is a Hessian LSA with Koszul form $\la(x + aD) = a$ generating $\hat{h}$.
\item\label{ge2} $D$ is the idempotent element of $(\alg, \mlt, \hat{h})$ associated with $\la$ and satisfies $\ker R_{\mlt}(D) = \ker \la = \balg \oplus \lb 0 \ra$, and $R_{\mlt}(D)^{2} = R_{\mlt}(D)$.
\item\label{ge3} The restriction to $\balg$ of $L_{\mlt}(D)$ equals $D$,
and
\begin{align}\label{geadj}
L_{\mlt}(D)^{\ast} + L_{\mlt}(D) = R_{\mlt}(D) + I.
\end{align}
\item\label{ge4} Any nontrivial two-sided ideal of $(\alg, \mlt)$ contains $D$.
\item\label{ge6b}The Lie normalizer $\n(\lb D\ra)$ of the linear span of $D$ in $\alg$ equals $\ker D \oplus \lb D \ra$. In particular $D$ is invertible if and only if $\n(\lb D\ra) = \lb D\ra$.
\item\label{ge6} If $D$ is invertible then $\balg = [\alg, \alg]$ and $(\alg, \mlt)$ is simple.
\item\label{ge5} For $x \in \balg \oplus \lb 0 \ra \subset \alg$ and $\hat{y} \in \alg$,
\begin{align}\label{geder}
L_{\mlt}(D)(x \mlt \hat{y}) = L_{\mlt}(D)x \mlt \hat{y} + x \mlt L_{\mlt}(D)\hat{y}.
\end{align}
Consequently $e^{tL_{\mlt}(D)}(x \mlt \hat{y}) = e^{tL_{\mlt}(D)}x \mlt e^{tL_{\mlt}(D)}\hat{y}$, and $e^{tL_{\mlt}(D)}$ is an automorphism of $(\balg, \circ)$.
\end{enumerate}
Moreover,
\begin{align}
\label{rmltrcirc1}\tr R_{\mlt}(x + aD) &= \tr R_{\circ}(x) + a,\\
\label{rmltrcirc2}\hat{h}(x + aD, y + bD) &= \tau_{\mlt}(x + aD, y + bD) - \tau_{\circ}(x, y),\\
\label{papb0}P_{\alg, \mlt}(x + aD) &= P_{\balg, \circ}(x)\left(1 + a - h(x, (I + R_{\circ}(x))^{-1}Dx)\right),
\end{align}
where $\tau_{\mlt}$ and $\tau_{\circ}$ are the right trace forms and $P_{\alg, \mlt}$ and $P_{\balg, \circ}$ are the characteristic polynomials of $(\alg, \mlt)$ and $(\balg, \circ)$.
\end{lemma}
\begin{proof}
With respect to a basis $e_{1}, \dots, e_{n}$ of $\alg$ such that $e_{n} = D$ and $\balg \oplus \lb 0 \ra = \lb e_{1}, \dots, e_{n-1}\ra$, the left and right regular representations $L_{\circ}$, $R_{\circ}$, $L_{\mlt}$, and $R_{\mlt}$ of $(\balg, \circ)$ and $(\alg, \mlt)$ are related by
\begin{align}\label{mltcirc}
&L_{\mlt}(x + aD) = \begin{pmatrix} L_{\circ}(x) + aD & 0 \\ x^{\flat} & a \end{pmatrix},& &R_{\mlt}(x + aD) = \begin{pmatrix} R_{\circ}(x) & Dx \\ x^{\flat} & a \end{pmatrix}
\end{align}
where $x^{\flat} \in \balg^{\ast}$ is defined by $x^{\flat}(y) = h(x, y)$ for $y \in \balg$. Claims \eqref{ge1}-\eqref{ge3} can be verified by straightforward computations using the definitions and \eqref{mltcirc}.
If $J$ is a two-sided ideal in $(\alg, \mlt)$, by \eqref{rclsa6b} of Lemma \ref{principalidempotentlemma} there is $z \in J$ such that $\la(z) \neq 0$. Then $\la(z)D = z\mlt D \in J$, so $D \in J$. This shows \eqref{ge4}.
If $x + aD \in \n(\lb D \ra)$ then $Dx = [D, x + aD] \in \lb D\ra$. As this holds if and only if $x \in \ker D$, $\n(\lb D \ra) = \ker D \oplus \lb D \ra$. This shows \eqref{ge6b}.
If $D$ is invertible, then for $x \in \balg$ there exists $z \in \balg$ such that $x = Dz = D\mlt z - z\mlt D$. This shows that $\balg \subset [\alg, \alg]$. Since the opposite containment is clear from \eqref{gemlt}, this shows $\balg = [\alg, \alg]$. By \eqref{ge4}, a nontrivial two-sided ideal $\J$ in $(\alg, \mlt)$ contains $D$. From \eqref{mltcirc} it is apparent that $L_{\mlt}(D)$ is invertible if and only if $D$ is invertible. Consequently, if $D$ is invertible, $\alg = L_{\mlt}(D)\alg \subset \J$. This shows that $(\alg, \mlt)$ is simple and completes the proof of \eqref{ge6}.
Claim \eqref{ge5} is essentially Lemma $3.2$ of \cite{Shima-homogeneoushessian}. The proof is recalled for convenience.
Since $\ker R_{\mlt}(D) = \ker \la$, for $x \in \ker \la$ and $\hat{y} \in \alg$ there holds \eqref{geder}.
Since $L_{\mlt}(D)$ preserves $\ker \la$ there follows $L_{\mlt}(D)^{m}(x \mlt \hat{y}) = \sum_{j = 0}^{m}\binom{m}{j}L_{\mlt}(D)^{j}x \mlt L_{\mlt}(D)^{m-j}\hat{y}$, and so $e^{tL_{\mlt}(D)}(x \mlt \hat{y}) = \sum_{m \geq 0}\sum_{j = 0}^{m}t^{m}(j!(m-j)!)^{-1}L_{\mlt}(D)^{j}x \mlt L_{\mlt}(D)^{m-j}\hat{y} = e^{tL_{\mlt}(D)}x \mlt e^{tL_{\mlt}(D)}\hat{y}$. Differentiating $\hat{h}(e^{tL_{\mlt}(D)}x, e^{tL_{\mlt}(D)}\hat{y})$ in $t$ and simplifying the result using \eqref{hessianmetric} and \eqref{geadj} yields
\begin{align}
\begin{split}
\tfrac{d}{dt}\hat{h}(e^{tL_{\mlt}(D)}x, e^{tL_{\mlt}(D)}\hat{y}) &= \hat{h}(D \mlt e^{tL_{\mlt}(D)}x, e^{tL_{\mlt}(D)}\hat{y}) + \hat{h}(e^{tL_{\mlt}(D)}x, D\mlt e^{tL_{\mlt}(D)}\hat{y}) \\
&= \hat{h}(e^{tL_{\mlt}(D)}x \mlt D, e^{tL_{\mlt}(D)}\hat{y}) + \hat{h}(D, e^{tL_{\mlt}(D)}x \mlt e^{tL_{\mlt}(D)}\hat{y}) \\
&= \hat{h}(D, e^{tL_{\mlt}(D)}(x \mlt \hat{y})) = \hat{h}(e^{tL_{\mlt}(D)^{\ast}}D, x\mlt \hat{y}) = e^{t}\hat{h}(x, \hat{y}),
\end{split}
\end{align}
which implies $\hat{h}(e^{tL_{\mlt}(D)}x, e^{tL_{\mlt}(D)}\hat{y}) = e^{t}\hat{h}(x, \hat{y})$. Since for $x, y \in \ker \la$ there holds $x \mlt y = x \circ y$, there follows $e^{tL_{\mlt}(D)}(x \circ y) = e^{tL_{\mlt}(D)}x \circ e^{tL_{\mlt}(D)}y$, showing \eqref{ge5}.
The identities \eqref{rmltrcirc1}-\eqref{rmltrcirc2} follow from \eqref{mltcirc}. Writing vertical bars to indicate determinants,
\begin{align}
\begin{split}
P_{\alg, \mlt}(x + aD)
& = \begin{vmatrix} I + R_{\circ}(x) & Dx\\ x^{\flat} & 1 + a \end{vmatrix}
= \begin{vmatrix} I + R_{\circ}(x) & Dx\\ x^{\flat} & 1 \end{vmatrix} + \begin{vmatrix} I + R_{\circ}(x) & 0\\ x^{\flat} & a \end{vmatrix}\\
& = \begin{vmatrix} I + R_{\circ}(x) - (Dx) \tensor x^{\flat}& 0 \\ x^{\flat} & 1 \end{vmatrix} + a| I + R_{\circ}(x)|\\
& = |I + R_{\circ}(x)|(1 - h(x, (I + R_{\circ}(x))^{-1}Dx)) + a| I + R_{\circ}(x)| \\
&= P_{\balg, \circ}(x)\left(1 + a - h(x, (I + R_{\circ}(x))^{-1}Dx) \right),
\end{split}
\end{align}
where the penultimate equality follows from a standard identity for the determinant of a rank one perturbation.
\end{proof}
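\begin{remark}
The standard identity invoked in the penultimate step of the preceding proof is the matrix determinant lemma: if $M \in \eno(\balg)$ is invertible, $u \in \balg$, and $\mu \in \balg^{\ast}$, then
\begin{align}
\det(M - u \tensor \mu) = (1 - \mu(M^{-1}u))\det M,
\end{align}
applied with $M = I + R_{\circ}(x)$, $u = Dx$, and $\mu = x^{\flat}$, so that $\mu(M^{-1}u) = h(x, (I + R_{\circ}(x))^{-1}Dx)$.
\end{remark}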
\begin{remark}
Claim \eqref{ge1} of Lemma \ref{graphextensionlemma} is a reformulation of Proposition $4.8$ of Choi's \cite{Choi-domain} (Choi assumes $(\balg, \mlt)$ is complete with unimodular underlying Lie algebra but, while this is necessary for the applications in \cite{Choi-domain}, it is irrelevant for the lemma).
\end{remark}
\begin{remark}
Conclusion \eqref{ge6} of Lemma \ref{graphextensionlemma}, that the graph extension of a Hessian LSA by an invertible compatible derivation is simple, answers affirmatively a question posed by Choi in Remark $5.2$ of \cite{Choi-domain}.
\end{remark}
\begin{lemma}\label{completegraphextensionlemma}
Let $(\alg, \mlt, \hat{h})$ be the graph extension of the Hessian LSA $(\balg, \circ, h)$ with respect to the compatible derivation $D \in \eno(\balg)$. Then $(\balg, \circ)$ is complete if and only if $\ker \tr R_{\mlt} = \ker R_{\mlt}(D)$. In this case:
\begin{enumerate}
\item $\hat{h} = \tau_{\mlt}$.
\item\label{cge6} There is an integer $0 \leq m \leq \dim \balg - 1$ such that the characteristic polynomial $P$ of $(\alg, \mlt)$ has the form
\begin{align}\label{papb}
P(x + aD) &= 1 + a - h(x, \sum_{l = 0}^{m}(-R_{\circ}(x))^{l}Dx).
\end{align}
In particular, $e^{P}$ is $1$-translationally homogeneous with axis $D$.
\item\label{cgenil} If $\deg P = k+1$, there exists $x \in \balg$ such that $R_{\circ}(x)^{k-1} \neq 0$; consequently, $\rnil^{k}(\balg, \circ) \neq \{0\}$.
\item\label{cge5b} The Lie algebras underlying $(\balg, \circ)$ and $(\alg, \mlt)$ are solvable.
\item\label{cgewt} $dP(E) = P$ where $E$ is defined by $E_{X} = (I + R_{\mlt}(X))D = D + D\mlt X$ for $X \in \alg$.
\end{enumerate}
If, additionally, the Lie algebra underlying $(\balg, \circ)$ is unimodular, then
\begin{enumerate}
\setcounter{enumi}{5}
\item \label{cge10} there is a nonzero constant $\kc$ such that $\H(e^{P}) = \kc e^{nP}$, where $n = \dim \alg$, and the level sets of $P$ are improper affine spheres with affine normal a constant multiple of $D$.
\end{enumerate}
\end{lemma}
\begin{proof}
Note that $\balg \oplus \lb 0 \ra = \ker R_{\mlt}(D)$ by definition of a graph extension. If $(\balg, \circ)$ is complete, then $\tr R_{\circ}(x) = 0$ for all $x \in \balg$, so by \eqref{rmltrcirc1}, $\tr R_{\mlt}(x + aD) = a$ for all $x \in \balg$, from which it is apparent that $\ker \tr R_{\mlt} = \balg \oplus \lb 0\ra = \ker R_{\mlt}(D)$. On the other hand, if $\ker \tr R_{\mlt} = \ker R_{\mlt}(D)$, then $\ker \tr R_{\mlt} = \balg \oplus \lb 0 \ra$, so, by \eqref{rmltrcirc1}, $\tr R_{\circ}(x) = \tr R_{\mlt}(x) = 0$ for all $x \in \balg$.
If $(\balg, \circ)$ is complete, then $\tau_{\circ}$ vanishes identically, so \eqref{rmltrcirc2} yields $\tau_{\mlt} = \hat{h}$.
As $(\balg, \circ)$ is complete it is right nil, so $R_{\circ}(x)$ is nilpotent, and there is a minimal $m \leq \dim \balg -1$ such that for all $x \in \balg$, $R_{\circ}(x)^{m+1} = 0$, while for some $x\in \balg$, $R_{\circ}(x)^{m} \neq 0$. Hence $(I + R_{\circ}(x))^{-1} = \sum_{l = 0}^{m}(-R_{\circ}(x))^{l}$. Substituting this and $P_{\balg, \circ} = 1$ into \eqref{papb0} yields \eqref{papb}.
By \eqref{papb}, if $P$ has degree $k+1$ then $m \geq k-1$. This means there exists $x \in \balg$ such that $R_{\circ}(x)^{k-1} \neq 0$. In particular, this implies $\rnil^{k}(\balg, \circ) \neq \{0\}$. This shows \eqref{cgenil}.
Since $(\balg, \circ)$ is complete, the action on $\balg$ of the simply-connected Lie group of $(\balg, [\dum, \dum])$ is simply transitive (see \cite{Helmstetter} or \cite{Kim-completeleftinvariant}). By Theorem I.$1$ of \cite{Auslander-affinemotions} this group is solvable, and so also $(\balg, [\dum, \dum])$ is solvable. Since $\alg/[\alg, \alg]$ is an abelian Lie algebra, $(\alg, [\dum, \dum])$ is solvable too. This proves \eqref{cge5b}.
Claim \eqref{cgewt} follows from \eqref{dptra} and \eqref{rmltrcirc1}.
If the Lie algebra underlying $(\balg, \circ)$ is unimodular, then by Lemma \ref{trrunilemma} and \eqref{cge6}, $P$ satisfies the conditions of Theorem \ref{lsacptheorem}, and so \eqref{cge10} follows.
\end{proof}
By Lemma \ref{graphextensionlemma}, the graph extension of a Hessian LSA with respect to a compatible derivation is an LSA equipped with a Koszul form. Lemma \ref{gecharlemma} characterizes when an LSA equipped with a Koszul form is isometrically isomorphic to a graph extension.
\begin{lemma}\label{gecharlemma}
For a finite-dimensional LSA $(\alg, \mlt)$ equipped with a Koszul form $\la \in \alg^{\ast}$ having associated idempotent $u$ and Hessian metric $h$ the following are equivalent:
\begin{enumerate}
\item \label{geclsa2}$\ker \la = \ker R(u)$.
\item \label{geclsa3}$h(u,u) = \la(u) \neq 0$, $R(u)^{2} = R(u)$, $\tr R(u) = 1$, and the linear span $\lb u \ra$ is a left ideal.
\item \label{geclsa1}$h(u,u) = \la(u) \neq 0$ and the multiplication $x \circ y = x \mlt y - \la(u)^{-1}h(x, y)u$ makes $\ker \la$ an LSA for which the restriction of any nonzero multiple of $h$ is a Hessian metric and on which $L(u)$ acts as a compatible derivation.
\end{enumerate}
In the case there hold \eqref{geclsa2}-\eqref{geclsa1}, $(\alg, \mlt, \la(u)^{-1}h)$ is isometrically isomorphic to the graph extension of $(\ker \la, \circ, \la(u)^{-1}h)$ with respect to $L(u)$.
Moreover, if there hold \eqref{geclsa2}-\eqref{geclsa1}, $L(u)$ is invertible if and only if the linear span $\lb u \ra$ equals its normalizer in $(\alg, [\dum, \dum])$.
In this case, $[\alg, \alg] = \ker \la = \ker R(u)$ and $(\alg, \mlt)$ is a simple LSA.
\end{lemma}
\begin{proof}
Suppose there holds \eqref{geclsa2} so that $\ker \la = \ker R(u)$. Then $u \in \ker \la = \ker R(u)$ implies the contradictory $u = R(u)u = 0$. Hence $h(u, u) = \la(u) \neq 0$. For $x \in \ker \la$, $R(u)x = 0$, while $R(u) u = u$, so $R(u)$ is the projection onto $\lb u \ra$ along $\ker \la$. This is equivalent to the conditions $R(u)^{2} = R(u)$ and $\tr R(u) = \text{codim}\ker R(u) = 1$. As $x \mlt u = R(u)x \in \lb u\ra$, $\lb u \ra$ is a left ideal in $(\alg, \mlt)$. Thus \eqref{geclsa2} implies \eqref{geclsa3}. If there holds \eqref{geclsa3}, then $R(u)$ is a projection with one-dimensional image, and since $R(u)u= u$ the image is $\lb u \ra$. For any $x \in \alg$ there is $c \in \rea$ such that $R(u)x = cu$. Then $c\la(u) = \la(cu) = \la(x\mlt u) = h(x, u)= \la(x)$, so $\la(u)R(u)x = \la(x)u$ from which it follows that $\ker \la \subset \ker R(u)$. This suffices to show $\ker R(u) = \ker \la$. Thus \eqref{geclsa3} implies \eqref{geclsa2}.
If $\la(u) = h(u, u) \neq 0$ then $\circ$ is defined and $x \circ y \in \ker \la$ for all $x, y \in \alg$. If $x, y, z \in \ker \la$ then
\begin{align}\label{circlsa}
(x \circ y - y\circ x)\circ z- x\circ(y\circ z) + y\circ(x \circ z) = \la(u)^{-1}R(u)(h(x, z)y - h(y, z)x).
\end{align}
From \eqref{circlsa} it is immediate that $\ker R(u) = \ker \la$ implies that $(\ker \la, \circ)$ is an LSA. Because, for $x, y, z \in \ker \la$, $h(x\circ y, z) = h(x\mlt y, z)$, the restriction to $\ker \la$ of $h$ is a Hessian metric for $(\ker \la, \circ)$. Since $\la(L(u)x) = h(u, x) = \la(x)$, $L(u)$ preserves $\ker\la$. If $x \in \ker \la$ then $h(L(u)x, y) + h(x, L(u)y) = h(R(u)x, y) + h(u, x\mlt y) = h(x, y)$. Since $\ker R(u) = \ker \la$, if $x \in \ker \la$ then $L(u)(x\mlt y) - L(u)x \mlt y - x \mlt L(u)y = R(u)x \mlt y = 0$. Hence, for $x, y \in \ker \la$,
\begin{align}
\la(u)(L(u)(x \circ y) - L(u)x \circ y - x \circ L(u)y) = (h(L(u)x, y) + h(x, L(u)y - h(x, y))u =0 ,
\end{align}
showing that $L(u)$ is a compatible derivation of the Hessian LSA $(\ker \la, \circ, h)$. This shows that \eqref{geclsa2} implies \eqref{geclsa1}.
Suppose there holds \eqref{geclsa1}. Since $h(u, u) \neq 0$ the restriction of $h$ to $\ker \la$ is nondegenerate. If $\dim \alg > 2$ then $\dim \ker \la > 1$ and so given $0 \neq y \in \ker \la$ there can be found $x, z \in \ker \la$ such that $h(y, z) = 0$ and $h(x, z) = 1$. Substituting this into \eqref{circlsa} and supposing $\circ$ is left-symmetric on $\ker \la$ shows that $R(u)y = 0$. Hence $\ker \la \subset \ker R(u)$, which suffices to show \eqref{geclsa2}. If $\dim \alg = 2$ then $u$ is transverse to both $\ker \la$ and $\ker R(u)$ and so $\ker R(u) \subset \ker \la$. Since $L(u)$ is a compatible derivation of $(\ker \la, \circ)$, for all $x \in \ker \la$, $h(R(u)x, x) = h(x\mlt u, x) = h(u\mlt x, x) - h(u, x\mlt x) + h(x, u\mlt x) = h(x, x)- h(x, x) =0$. Since $h(R(u)x, u) = \la(R(u)x) = h(u, x) = \la(x) = 0$, it follows that $R(u)x = 0$ if $x \in \ker \la$, so $\ker R(u) = \ker \la$. Thus \eqref{geclsa1} implies \eqref{geclsa2}.
Suppose there hold \eqref{geclsa2}-\eqref{geclsa1}. Let $D = L(u)$ and define $\Psi: \ker \la \oplus \lb D \ra \to \alg$ by $\Psi(x + aD) = x + au$. Let $(\ker \la \oplus \lb D \ra, \hat{\mlt}, \hat{h})$ be the graph extension of $(\ker \la, \circ, \la(u)^{-1}h)$ with respect to $D = L(u)$. In particular, for $x, y \in \ker \la$, $\hat{h}(x + aD, y + bD) = ab + \la(u)^{-1}h(x, y)$, and
\begin{align}
\begin{split}
\Psi((x + aD)\hat{\mlt}(y + bD))& = \Psi(x\circ y + aDy + (ab + \la(u)^{-1}h(x, y))D )\\
&= x\circ y + aDy + (ab + \la(u)^{-1}h(x, y))u = x\mlt y + aL(u)y + ab u \\
&= (x + au) \mlt (y + bu) = \Psi(x + aD)\mlt \Psi(y + bD),
\end{split}
\end{align}
so $\Psi$ is an algebra isomorphism. Moreover, $h(\Psi(x + aD), \Psi(y + bD)) = h(x, y) + ab\la(u) = \la(u)\hat{h}(x + aD, y + bD)$, so $\Psi$ is isometric with respect to $\hat{h}$ and $\la(u)^{-1}h$.
Suppose there hold \eqref{geclsa2}-\eqref{geclsa1}. If $L(u)$ is not invertible there exists $0 \neq x \in \alg$ such that $L(u)x = 0$. Then $\la(x) = h(u, x) = \la(L(u)x) = 0$, so $x \in \ker \la = \ker R(u)$. Hence $[x, u] = R(u)x - L(u)x = 0$ and $x$ is contained in the normalizer in $(\alg, [\dum, \dum])$ of $\lb u \ra$. It follows that if $\lb u \ra$ equals its normalizer in $(\alg, [\dum, \dum])$, then $L(u)$ is invertible. Suppose $L(u)$ is invertible and that $x \in \alg$ is contained in the normalizer of $\lb u \ra$ in $(\alg, [\dum, \dum])$. Then $\bar{x} = x - \la(u)^{-1}\la(x)u \in \ker \la = \ker R(u)$ is also contained in the normalizer of $\lb u \ra$ in $(\alg, [\dum, \dum])$, since $[\bar{x}, u] = [x, u]$. There is some $c \in \rea$ such that $L(u)\bar{x} = (L(u) - R(u))\bar{x} = [u, \bar{x}] = cu$. Since $\la(u) \neq 0$, that $0 = \la([u, \bar{x}]) = c\la(u)$ implies $c = 0$ so $L(u)\bar{x} = [u, \bar{x}] = 0$. Since $L(u)$ is invertible, this implies $\bar{x} = 0$, so $x \in \lb u \ra$.
The conclusions that $[\alg, \alg] = \ker \la = \ker R(u)$ and $(\alg, \mlt)$ is a simple LSA when $L(u)$ is invertible follow from \eqref{ge6b} and \eqref{ge6} of Lemma \ref{graphextensionlemma} and the fact that $(\alg, \mlt, \la(u)^{-1}\hat{h})$ is isometrically isomorphic to a graph extension with respect to $L(u)$.
\end{proof}
If $\la$ is a Koszul form, then, for any nonzero $c$, $\tilde{\la} = c\la$ is also a Koszul form. The metric $\tilde{h}$ associated with $\tilde{\la}$ is $\tilde{h} = ch$. Both $\la$ and $\tilde{\la}$ determine the same idempotent, for if $\tilde{u}$ is the idempotent associated with $\tilde{\la}$, then $ch(u, x) = c\la(x) = \tilde{\la}(x) = \tilde{h}(\tilde{u}, x) = ch(\tilde{u}, x)$ for all $x \in \alg$, so $\tilde{u} = u$, by the nondegeneracy of $h$. If $\la(u) \neq 0$ then, replacing $\la$ by $\la(u)^{-1}\la$ it can be assumed $\la(u) = 1$. A Koszul form $\la$ is \textit{normalized} if $\la(u) \in \{0, 1\}$. If $\la$ is a normalized Koszul form and there hold \eqref{geclsa2}-\eqref{geclsa1} of Lemma \ref{gecharlemma}, then $(\alg, \mlt, h)$ is the graph extension of $(\ker \la, \circ, h)$ with respect to $L(u)$.
Given an LSA $(\alg, \mlt)$ and an endomorphism $D \in \eno(\alg)$ define $\Pi(\alg, D)$ to be the set of weights for the action of $D$ on $\alg$ and let
\begin{align}
\alg^{\al} = \{x \in \alg: (D - \al I)^{p}x = 0 \,\,\text{for some}\,\, p \geq 1\}
\end{align}
be the weight space corresponding to $\al \in \Pi(\alg, D)$.
\begin{lemma}\label{weightlemma}
Let $D \in \eno(\balg)$ be a compatible derivation of the Hessian LSA $(\balg, \circ, h)$ and let $(\alg, \mlt, \hat{h})$ be the corresponding graph extension. Then
\begin{enumerate}
\item \label{gbwt1} If $\al\in \Pi(\alg, L_{\mlt}(D))$ and $\al \neq 1$ then $\al \in \Pi(\balg, D)$ and $\balg^{\al} = \alg^{\al} \subset \balg$. Consequently $\Pi(\balg, D) \subset \Pi(\alg, L_{\mlt}(D))$.
\item \label{gbwt2} $\al \in \Pi(\balg, D)$ if and only if $1 - \al \in \Pi(\balg, D)$, and $\dim \balg^{\al} = \dim \balg^{1-\al}$.
\item \label{gbwt2b} $\balg^{1} = \{0\}$ if and only if $D$ is invertible.
\item \label{gbwt3} For $\al, \be \in \Pi(\balg, D)$, $\balg^{\al}\mlt \balg^{\be} \subset \alg^{\al + \be}$ and $\balg^{\al}\circ \balg^{\be} \subset \balg^{\al + \be}$.
\item\label{gbwt4} For $\al, \be \in \Pi(\balg, D)$, $h:\balg^{\al}\times \balg^{\be} \to \rea$ is nondegenerate if $\al + \be = 1$ and is the zero map otherwise.
\item\label{gbwt5} $[\alg^{\al}, \alg^{1-\al}] \subset \balg^{1}$.
\end{enumerate}
If $\Pi(\balg, D) \subset (0, 1)$ and $D$ is triangularizable then $(\balg, \circ)$ is right nilpotent, so nilpotent.
\end{lemma}
\begin{proof}
Let $\la = \hat{h}(D, \dum)$.
If $\al \in \Pi(\alg, L_{\mlt}(D))$ and $0 \neq a \in \alg^{\al}$, let $p \geq 1$ be the minimal integer such that $(L_{\mlt}(D) - \al I)^{p}a = 0$. By \eqref{geadj} of Lemma \ref{graphextensionlemma},
\begin{align}
\begin{split}
0 &= \hat{h}((L_{\mlt}(D) - \al I)^{p}a, D) = \hat{h}(a, (L_{\mlt}(D)^{\ast} - \al I)^{p}D) \\
&= \hat{h}(a, (R_{\mlt}(D) - L_{\mlt}(D) + (1-\al)I)^{p}D) = (1-\al)^{p}\la(a),
\end{split}
\end{align}
so if $\al \neq 1$ then $a \in \balg$. This shows $\alg^{\al} \subset \balg$ if $\al \neq 1$. It follows that $\balg^{\al} = \alg^{\al}$ if $\al \neq 1$. Since $L_{\mlt}(D)$ preserves $\balg$, $\balg^{1} = \alg^{1} \cap \balg$. This proves \eqref{gbwt1}.
If $\al \in \Pi(\balg, D)$, $0 \neq a \in \balg^{\al}$, and $x \in \balg$, then there is a minimal $p \geq 1$ such that $(D - \al I)^{p}a = 0$, and so, since $D$ is a compatible derivation,
\begin{align}\label{hathax}
0 = h((D - \al I)^{p}a, x) = (-1)^{p}h(a, (D - (1 - \al)I)^{p}x).
\end{align}
Were $(D - (1 - \al)I)^{p}$ invertible on $\balg$, then, by the nondegeneracy of $h$, \eqref{hathax} would imply $a = 0$, so $(D - (1 - \al)I)^{p}$ has nontrivial kernel on $\balg$, so $\balg^{1-\al}$ is nontrivial. This proves that $1 - \al \in \Pi(\balg, D)$ and $\dim \balg^{\al} \leq \dim \balg^{1-\al}$. By symmetry, $\dim \balg^{\al} = \dim \balg^{1-\al}$, and so \eqref{gbwt2} is proved. By \eqref{gbwt2}, $\balg^{1} = \{0\}$ if and only if $\balg^{0} = \{0\}$. As $D$ is invertible on $\balg$ if and only if $\balg^{0} = \{0\}$, there follows \eqref{gbwt2b}.
If $\al, \be \in \Pi(\balg, D)$, $0 \neq a \in \balg^{\al}$, and $0 \neq b \in \balg^{\be}$, then, by \eqref{ge5} of Lemma \ref{graphextensionlemma}, $(D - (\al + \be)I)(a \mlt b) = ((D - \al I)a) \mlt b + a \mlt (D - \be I)b$, so
\begin{align}\label{halbeprod}
(D - (\al + \be)I)^{m}(a \mlt b) = \sum_{j = 0}^{m}\binom{m}{j}(D - \al I)^{j}a \mlt (D - \be I)^{m-j}b.
\end{align}
If $m \geq 2\max\{\dim \balg^{\al}, \dim \balg^{\be}\}$ then every term on the right-hand side of \eqref{halbeprod} vanishes, showing that $a \mlt b \in \alg^{\al + \be}$. Note that, although this shows $\balg^{\al}\mlt \balg^{\be} \subset \alg^{\al + \be}$, the possibility that $a\mlt b = 0$ is not excluded and this argument does not show that $\al + \be \in \Pi(\balg, D)$, and this need not be the case.
Suppose $\al, \be \in \Pi(\balg, D)$, $0 \neq a \in \balg^{\al}$, and $0 \neq b \in \balg^{\be}$. If $\al + \be \neq 1$ then, by \eqref{gbwt1}, $\alg^{\al + \be} = \balg^{\al + \be} \subset \balg$ so $\hat{h}(a, b) = \la(a \mlt b) = 0$. Hence $a\circ b = a\mlt b$, so $\balg^{\al} \circ \balg^{\be} \subset \balg^{\al + \be}$. If $\be = 1 -\al$ then $a \mlt b \in \alg^{1}$ so $a\circ b = a \mlt b - \hat{h}(a, b)D \in \alg^{1} \cap \balg = \balg^{1}$. This proves \eqref{gbwt3}.
If $\al, \be \in \Pi(\balg, D)$ and $\al + \be \neq 1$ then $a \mlt b \in \balg^{\al + \be} \subset\ker \la$, so $h(a, b) = \hat{h}(a, b) = \la(a\mlt b) = 0$. On the other hand, if $\al \in \Pi(\balg, D)$ and $0\neq a \in \balg^{\al}$ by the nondegeneracy of $h$ on $\balg$ there exists $x \in \balg$ such that $h(a, x) \neq 0$. Since, for $\be \neq 1 - \al$, the projection of $x$ onto $\balg^{\be}$ is $h$ orthogonal to $a$, it can be assumed $x \in \balg^{1-\al}$. This proves \eqref{gbwt4}.
Since $[\alg, \alg] \subset \ker \la$, if $a \in \alg^{\al}$ and $b \in \alg^{1-\al}$ then $[a, b] \in \alg^{1}\cap \balg = \balg^{1}$. This proves \eqref{gbwt5}.
Suppose $\Pi(\balg, D) \subset (0, 1)$ and let $0 < \al_{1} < \al_{2} < \dots < \al_{r} < 1$ be the distinct elements of $\Pi(\balg, D)$ arranged in increasing order. If $D$ is triangularizable, then $\balg$ equals the direct sum $\oplus_{i = 1}^{r}\balg^{\al_{i}}$ of the weight spaces of $D$. By \eqref{gbwt3}, each subspace $\I^{i} = \oplus_{j \geq i}\balg^{\al_{j}}$ is a two-sided ideal and the quotients $\I^{i}/\I^{i+1}$ are trivial LSAs, so, by Lemma \ref{rightnilpotentlemma}, $(\balg, \circ)$ is right nilpotent.
Since $\Pi(\balg, D) \subset (0, 1)$, $\balg = [\alg, \alg]$ is a nilpotent Lie algebra, so by Theorem \ref{trivalgtheorem}, $(\balg, \circ)$ is a nilpotent LSA.
\end{proof}
Lemma \ref{arithmeticrelationlemma} shows that the weights of a compatible triangularizable derivation acting on a complete Hessian LSA necessarily satisfy an arithmetic relation of a particular form.
\begin{lemma}\label{arithmeticrelationlemma}
Let $(\alg, \mlt, \hat{h})$ be the graph extension of the complete Hessian LSA $(\balg, \circ, h)$ with respect to a triangularizable compatible derivation $D \in \eno(\balg)$, and let $P$ be the characteristic polynomial of $(\alg, \mlt)$. Let $\Pi(\balg, D) = \{\al_{1}, \dots, \al_{r}\}$. For each integer $0 \leq l \leq \dim \balg - 1$ such that the degree $l + 2$ component of $P$ is not identically zero, there exists a partition $l+2 = \sum_{k = 1}^{r}i_{k}$ of $l+2$ as a sum of nonnegative integers $i_{1}, \dots, i_{r}$ such that $\sum_{k = 1}^{r}i_{k}\al_{k} = 1$. In particular, this is true for at least one nonnegative integer $l$, namely that for which $l + 2 = \deg P$.
\end{lemma}
\begin{proof}
Since $(\balg, \circ)$ is complete, by Lemma \ref{completegraphextensionlemma}, the characteristic polynomial $P_{\alg, \mlt}$ of its graph extension is given by \eqref{papb} and $\hat{h} = \tau_{\mlt}$, where $\tau_{\mlt}$ is the trace form of $(\alg, \mlt)$. Because $D$ is triangularizable, $\balg$ is the direct sum $\oplus_{\al_{i} \in \Pi(\balg, D)}\balg^{\al_{i}}$ of the weight spaces $\balg^{\al_{i}}$ of $D$. If $x \in \balg$ write $x = \sum_{i = 1}^{r}x_{\al_{i}}$ where $x_{\al_{i}} \in \balg^{\al_{i}}$. Consider an expression of the form $h(x, R_{\circ}(x)^{l}Dx)$. Each term $R_{\circ}(x)^{l}Dx$ is a linear combination of terms of the form $R_{\circ}(x_{\be_{1}})\dots R_{\circ}(x_{\be_{l}})Dx_{\be_{l+1}}$ where $\be_{1}, \dots, \be_{l+1}$ are not necessarily distinct elements of $\{\al_{1}, \dots, \al_{r}\} = \Pi(\balg, D)$. By \eqref{gbwt3} of Lemma \ref{weightlemma}, the term $R_{\circ}(x_{\be_{1}})\dots R_{\circ}(x_{\be_{l}})Dx_{\be_{l+1}}$ lies in $\balg^{\be_{1} + \dots + \be_{l+1}}$. Hence $\sum_{q = 1}^{l+1}\be_{q} = \sum_{i = 1}^{r}j_{i}\al_{i}$ where $j_{i}$ is the number of times $\al_{i}$ appears in the set $\{\be_{1}, \dots, \be_{l+1}\}$ and so $\sum_{q =1}^{r}j_{q} = l+1$. By \eqref{gbwt4} of Lemma \ref{weightlemma}, for the $h$-pairing of $R_{\circ}(x_{\be_{1}})\dots R_{\circ}(x_{\be_{l}})Dx_{\be_{l+1}}$ with some $x_{\al_{i}}$ to be nonzero, it must be that $1 - \al_{i} = \sum_{q = 1}^{r}j_{q}\al_{q}$. Equivalently, there are nonnegative integers $i_{1}, \dots, i_{r}$ such that $\sum_{q = 1}^{r}i_{q} = l+2$ and $1 = \sum_{q = 1}^{r}i_{q}\al_{q}$. By \eqref{papb} of Lemma \ref{completegraphextensionlemma}, the degree $l+2$ part of $P(x,a)$ is $(-1)^{l+1}h(x, R_{\circ}(x)^{l}Dx)$ for $(x, a) \in \alg$. 
If there is some $x \in \balg$ such that $(-1)^{l+1}h(x, R_{\circ}(x)^{l}Dx)$ is nonzero, then there must exist $\be_{1}, \dots, \be_{l+1}$ and $\al_{i}$ such that $h(x_{\al_{i}}, R_{\circ}(x_{\be_{1}})\dots R_{\circ}(x_{\be_{l}})Dx_{\be_{l+1}}) \neq 0$, so there are nonnegative integers $i_{1}, \dots, i_{r}$ such that $\sum_{q = 1}^{r}i_{q} = l+2$ and $1 = \sum_{q = 1}^{r}i_{q}\al_{q}$. If there is no such set of nonnegative integers $i_{1}, \dots, i_{r}$ for any $0 \leq l \leq \dim \balg -1$, then, by \eqref{papb} of Lemma \ref{completegraphextensionlemma}, $h(x, R_{\circ}(x)^{l}Dx) = 0$ for all $0 \leq l \leq \dim\balg -1$. In this case $P_{\alg, \mlt}(x + aD) = 1 + a$. Then the Hessian $\hess P_{\alg, \mlt}$ is identically zero, which contradicts \eqref{hesslogp}, relating this Hessian to the nondegenerate bilinear form $\tau_{\mlt}$.
\end{proof}
\begin{lemma}\label{incompletelsalemma}
If the right principal idempotent $r$ of an incomplete $n$-dimensional LSA $(\alg, \mlt)$ with nondegenerate trace form $\tau$ and characteristic polynomial $P$ satisfies $\ker \tr R = \ker R(r)$ then:
\begin{enumerate}
\item\label{rclsa5} $\balg = \ker \tr R$ equipped with the multiplication $x\circ y = x \mlt y - \tau(x, y)r$ is a complete LSA.
\item\label{rclsa5b} The Lie algebra $(\alg, [\dum, \dum])$ is solvable.
\item\label{rclsa6}
$P(x + tr) = P(x) + t$ for all $t \in \rea$ and $x \in \alg$.
\item\label{rclsanil} For $x, y \in \balg$ write $R_{\circ}(x)y = y\circ x$. If $P$ has degree $k+1$ then there exists $x \in \balg$ such that $R_{\circ}(x)^{k-1} \neq 0$; consequently, $\rnil^{k}(\balg, \circ)\neq \{0\}$.
\item\label{rclsawt} $dP(E) = P$, where $E$ is the vector field defined by $E_{x} = (I + R(x))r = r + r\mlt x$.
\end{enumerate}
If, additionally, the Lie subalgebra $\balg = \ker \tr R$ is unimodular, then
\begin{enumerate}
\setcounter{enumi}{5}
\item \label{rclsa10} there is a nonzero constant $\kc$ such that $\H(e^{P}) = \kc e^{nP}$ and the level sets of $P$ are improper affine spheres with affine normal a constant multiple of $r$.
\end{enumerate}
In particular, if, in addition to satisfying $\ker \tr R = \ker R(r)$, $L(r)$ is invertible, then there holds \eqref{rclsa10}, $[\alg, \alg] = \ker \tr R = \ker R(r)$, and $(\alg, \mlt)$ is a simple LSA.
\end{lemma}
\begin{proof}
This follows from Lemmas \ref{completegraphextensionlemma} and \ref{gecharlemma}.
\end{proof}
\begin{lemma}\label{triangularizablelemma}
Over a field of characteristic zero, a triangularizable LSA $(\alg, \mlt)$ having nondegenerate trace form $\tau$ contains an idempotent element $u$ such that the linear span $\lb u \ra$ is a left ideal and $R(u)$ is the operator of projection onto $\lb u \ra$. That is, $R(u)^{2} = R(u)$ and $\tr R(u) = 1$.
\end{lemma}
\begin{proof}
By assumption there exists a complete flag invariant under $L(\alg)$, so there exist $u \in \alg$ and $\la \in \alg^{\ast}$ such that $R(u)x = L(x)u = \la(x)u$ for all $x \in \alg$. Then $\tr R(u) = \la(u)$ and $\tau(x, u) = \tr R(x \mlt u) = \la(x)\tr R(u)$. Since $\tau$ is nondegenerate there is $v \in \alg$ such that $0 \neq \tau(v, u) = \la(v)\tr R(u)$, so $\la(u) = \tr R(u) \neq 0$. Replacing $u$ by $\la(u)^{-1}u$ it may be assumed that $\la(u) = \tr R(u) = 1$. Then $x\mlt u = \la(x) u$, so $\lb u \ra$ is a left ideal, and, in particular, $u\mlt u = u$. Finally, $R(u)^{2}x = \la(x)R(u)u = \la(x)u = R(u)x$.
\end{proof}
Note that Lemma \ref{triangularizablelemma} implies that a triangularizable LSA with nondegenerate trace form is incomplete. Suppose $(\alg, \mlt)$ is a triangularizable LSA with nondegenerate trace form $\tau$. It need not be the case that the right principal idempotent $r$ generate a left ideal. For example, in a clan, $r$ can be a right unit. Likewise it need not be the case that an idempotent $u$ as in Lemma \ref{triangularizablelemma} be the right principal idempotent (such a $u$ need not be uniquely determined).
Note that, for $u$ as in Lemma \ref{triangularizablelemma}, by Lemma \ref{gecharlemma} the hypothesis that $[\alg, \alg]$ have codimension one follows from the assumption that $L(u)$ be invertible.
Theorem \ref{triangularizabletheorem} is a slight refinement of Theorem \ref{triangularizabletheorem0} in the introduction.
\begin{theorem}\label{triangularizabletheorem}
Let $(\alg, \mlt)$ be a triangularizable $n$-dimensional LSA over a field of characteristic zero and having nondegenerate trace form $\tau$ and codimension one derived Lie subalgebra $[\alg, \alg]$. Let $G$ be the simply-connected Lie group with Lie algebra $(\alg, [\dum, \dum])$. There are a nonzero constant $\kc$ and a closed unimodular subgroup $H \subset G$ having Lie algebra $[\alg, \alg]$, such that the characteristic polynomial $P$ of $(\alg, \mlt)$ solves $\H(e^{P}) = \kc e^{nP}$, and the level sets of $P$ are improper affine spheres homogeneous for the action of $H$ and having affine normals equal to a constant multiple of the right principal idempotent $r$.
Moreover:
\begin{enumerate}
\item If $\Pi(\balg, L(r)) \subset (0, 1)$ then $(\balg, \circ)$ is right nilpotent, so nilpotent.
\item If $0 \notin \Pi(\balg, L(r))$, then $(\alg, \mlt)$ is simple.
\end{enumerate}
\end{theorem}
\begin{proof}
Let $u$ and $\la$ be as in the proof of Lemma \ref{triangularizablelemma}. Since $[\alg, \alg] \subset \ker \la \cap \ker \tr R$, if $[\alg, \alg]$ has codimension one then $[\alg, \alg] = \ker \la = \ker \tr R$. Since also $\la(u) = \tr R(u)$, it follows that $\la = \tr R$, and so $u$ is the right principal idempotent. The claims that the level sets of $P$ are affine spheres with affine normals a multiple of the right principal idempotent $r$ follow from Lemmas \ref{graphextensionlemma}, \ref{gecharlemma}, and \ref{incompletelsalemma}. If $P(x) = 1$ then $x \in \hol(G)0$, so $x = \hol(g)0$ for some $g \in G$. By \eqref{holgp}, $1 = P(x) = P(\hol(g)0) = \chi(g)P(0) = \chi(g)$, so $g$ is contained in the subgroup $H = \ker \chi$, which is unimodular since its Lie algebra $[\alg, \alg] = \ker \tr R$ is unimodular. It follows that the level set $\{x \in \alg: P(x) = 1\}$ is homogeneous for the action of $\ker \chi$. By the translational homogeneity of $P$ in the $r$ direction shown in \eqref{rclsa6} of Lemma \ref{incompletelsalemma}, $\{x \in\alg: P(x) = 1 + t\} = \{x \in \alg: P(x) = 1\} + tr$ is an affine image of $\{x \in \alg: P(x) = 1\}$, so is also a homogeneous improper affine sphere.
Suppose $\Pi(\balg, L(r)) \subset (0, 1)$ and let $0 < \al_{1} < \al_{2} < \dots < \al_{s} < 1$ be the distinct elements of $\Pi(\balg, L(r))$ arranged in increasing order. By \eqref{gbwt3} of Lemma \ref{weightlemma}, each subspace $\I^{i} = \oplus_{j \geq i}\balg^{\al_{j}}$ is a two-sided ideal and the quotients $\I^{i}/\I^{i+1}$ are trivial LSAs, so, by Lemma \ref{rightnilpotentlemma}, $(\balg, \circ)$ is right nilpotent. Since $(\alg, \mlt)$ is triangularizable, by Lemma \ref{cslemma} its underlying Lie algebra is solvable, so the underlying Lie algebra of $\balg = [\alg, \alg]$ is nilpotent. Hence, by Theorem \ref{trivalgtheorem}, $(\balg, \circ)$ is nilpotent.
That $0 \notin \Pi(\balg, L(r))$ means $L(r)$ is invertible. In this case $(\alg, \mlt)$ is simple by Lemma \ref{incompletelsalemma}.
\end{proof}
Note that the argument showing $(\balg, \circ)$ is right nilpotent if $\Pi(\balg, L(r)) \subset (0, 1)$ might fail were there supposed instead $\Pi(\balg, L(r)) \subset [0, 1]$. The problem is that one cannot conclude that the quotient $\I^{1}/\I^{2}$ is a trivial LSA.
\begin{remark}
In \cite{Mizuhara}, Mizuhara takes as hypotheses conditions corresponding to special cases of the conclusions \eqref{gbwt1}-\eqref{gbwt4} of Lemma \ref{weightlemma}. In particular, Mizuhara considers LSAs as in Theorem \ref{triangularizabletheorem} and for which $\Pi(\balg, L(r))$ is contained in $(0, 1)$ and $L(r)$ is diagonalizable. Such hypotheses exclude nontrivial LSAs. In Example \ref{negeigexample} there are constructed LSAs satisfying the hypotheses of Theorem \ref{triangularizabletheorem} and for which $\Pi(\balg, L(r))$ contains negative numbers or for which $L(r)$ is not diagonalizable.
\end{remark}
\begin{remark}
That a triangularizable LSA satisfying certain conditions must be simple shows a sense in which LSAs are very different from Lie algebras.
\end{remark}
\begin{remark}
By \eqref{dptra} and \eqref{hesslogp} the characteristic polynomial $P$ of an LSA $(\alg, \mlt)$ determines the Koszul form $\tr R$ and the metric $\tau$, and, by \eqref{nablakdlogp} (with $k = 2$), $P$ determines the commutative part of the multiplication $\mlt$, but by itself $P$ does not determine in an obvious way the multiplication $\mlt$. To reconstruct $\mlt$ it seems necessary, but not sufficient without further hypotheses, to know also the underlying Lie algebra $(\alg, [\,,\,])$ and its relation to $L(r)$. This raises the question whether two triangularizable $n$-dimensional LSAs having nondegenerate trace form and codimension one derived Lie algebra and having the same characteristic polynomial must be isomorphic. Example \ref{negeigexample} shows that the answer is negative. There needs to be imposed some normalization condition situating the underlying Lie algebra of the LSA inside the symmetry algebra of $P$; it is planned to discuss this elsewhere.
\end{remark}
\section{Examples}\label{examplesection}
This section records some of the simplest illustrative examples.
\begin{example}\label{posdefsection}
Given an $(n-1)$-dimensional vector space $\ste$ with a nondegenerate inner product $g$ define an $n$-dimensional LSA $(\parab_{n}(g), \mlt)$ as follows. Equip $\alg = \ste \oplus \lb u \ra$ with the multiplication $\mlt$ defined, for $x, y \in \ste$, by $x\mlt y = g(x, y)u$, $x\mlt u = 0$, $u \mlt x = x/2$, and $u \mlt u = u$. It is straightforward to check that $(\parab_{n}(g), \mlt)$ is an LSA with trace form $\tau(x + au, y + bu) = ab + g(x, y)$ and characteristic polynomial $P(x + au)= 1 + a - g(x, x)/2$, whose level sets are paraboloids. In the case that $g = \delta$ is the standard Euclidean inner product, there will be written simply $\parab_{n} = \parab_{n}(\delta)$. In this case $\tau$ is positive definite. Moreover, if $g$ is positive definite then $\parab_{n}(g)$ is isomorphic to $\parab_{n}$.
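As an illustrative check of these formulas (an added sketch using sympy, for the particular case $n = 3$ and $g = \delta$; the names \texttt{mult} and \texttt{R} are ad hoc), one can build $R(z)$ directly from the multiplication table and confirm the stated characteristic polynomial $P(x + au) = 1 + a - g(x,x)/2$:

```python
import sympy as sp

# Sanity check of parab_3(delta): basis (e1, e2, u), with u listed last.
x1, x2, a = sp.symbols('x1 x2 a')

def mult(v, w):
    # Structure constants: e_i * e_j = delta_ij u, e_i * u = 0,
    # u * e_i = e_i / 2, u * u = u.
    v1, v2, vu = v
    w1, w2, wu = w
    return sp.Matrix([vu * w1 / 2, vu * w2 / 2, v1 * w1 + v2 * w2 + vu * wu])

z = (x1, x2, a)
# The columns of R(z) are the products (basis vector) * z.
R = sp.Matrix.hstack(*[mult(b, z) for b in [(1, 0, 0), (0, 1, 0), (0, 0, 1)]])
P = sp.expand((sp.eye(3) + R).det())
# Characteristic polynomial P(x + au) = 1 + a - g(x, x)/2.
assert sp.simplify(P - (1 + a - (x1**2 + x2**2) / 2)) == 0
```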
\begin{lemma}\label{posdeflemma}
Suppose the $n$-dimensional LSA $(\alg, \mlt)$ carries a Koszul form $\la \in \alg^{\ast}$ for which the associated Hessian metric $h$ is positive definite. Suppose that the associated idempotent $u$ satisfies $\la(u) = 1$ and that $L(u)$ is invertible with real eigenvalues. Then $(\alg, \mlt)$ is isomorphic to $\parab_{n}$.
\end{lemma}
\begin{proof}
By the discussion following the proof of Lemma \ref{principalidempotentlemma}, $L(u)$ is self-adjoint and so diagonalizable. By the argument proving Proposition $4.1$ of \cite{Shima-homogeneoushessian}, the eigenvalues of the restriction of $L(u)$ to $\ker \la$ are all $1/2$, so that this restriction is half the identity, and, if $x, y \in \ker\la$, then $x \mlt y$ is an eigenvector of $L(u)$ with eigenvalue $1$, so equals $h(x, y)u$. It follows that $(\alg, \mlt)$ is isomorphic to $\parab_{n}(g)$ where $g$ is the restriction of $h$ to $\ker\la$.
\end{proof}
Lemma \ref{posdeflemma} can be deduced as a special case of Proposition II.$5$ of \cite{Vinberg}. Its relevance here is that an LSA yielding a homogeneous improper affine sphere that is not an elliptic paraboloid must have an indefinite signature trace form.
\end{example}
\begin{example}\label{cayleyexample}
Let $\cayn$ be an $n$-dimensional real vector space equipped with a basis $e_{1}, \dots, e_{n}$. Let $\ep_{1}, \dots, \ep_{n}$ be the dual basis of $\cayn^{\ast}$ and write $e_{ij} = e_{i}\tensor \ep_{j}$ for the standard basis of $\eno(\cayn)$. For $x = \sum_{i=1}^{n}x_{i}e_{i}$ there holds $e_{ij}(x) = x_{j}e_{i}$. For $x = \sum_{i=1}^{n}x_{i}e_{i}, y = \sum_{i=1}^{n}y_{i}e_{i} \in \cayn$, define $L(x)$ and $R(x)$ by
\begin{align}\label{cayl}
\begin{split}
L(x) &= x_{n}\sum_{i=1}^{n}ie_{ii} + \sum_{1 \leq j < i \leq n}x_{i-j}e_{ij} =
\begin{pmatrix}
x_{n} & 0 & 0 & \dots & 0 &0 & 0\\
x_{1} & 2x_{n} & 0 & \dots &0& 0 & 0\\
x_{2} & x_{1} & 3x_{n} & \dots & 0&0 & 0\\
\vdots & \vdots & \vdots & \dots& \vdots & \vdots & \vdots \\
x_{n-2} & x_{n-3} & x_{n-4} & \dots & x_{1} & (n-1)x_{n} & 0\\
x_{n-1} & x_{n-2} & x_{n-3} & \dots & x_{2} & x_{1} & nx_{n}
\end{pmatrix},\\
R(x) &= \sum_{i = 1}^{n}ix_{i}e_{in} + \sum_{1 \leq j < i \leq n}x_{i-j}e_{ij}=
\begin{pmatrix}
0 & 0 & 0 & \dots & 0 &0 & x_{1}\\
x_{1} & 0 & 0 & \dots &0& 0 & 2x_{2}\\
x_{2} & x_{1} & 0 & \dots & 0&0 & 3x_{3}\\
\vdots & \vdots & \vdots & \dots& \vdots & \vdots & \vdots \\
x_{n-2} & x_{n-3} & x_{n-4} & \dots & x_{1} & 0 & (n-1)x_{n-1}\\
x_{n-1} & x_{n-2} & x_{n-3} & \dots & x_{2} & x_{1} & nx_{n}
\end{pmatrix},
\end{split}
\end{align}
so that $x\mlt y = L(x)y = R(y)x$ and $[x, y]$ are given by
\begin{align}\label{cayl2}
\begin{split}
x\mlt y &= \sum_{i = 1}^{n}\left(ix_{n}y_{i} + \sum_{1 \leq j < i}x_{i-j}y_{j}\right)e_{i}, \\
[x, y] &= \sum_{i = 1}^{n-1}i(y_{i}x_{n} - x_{i}y_{n})e_{i}.
\end{split}
\end{align}
(When $n = 1$, $(\cayn, \mlt)$ is just the algebra of real numbers.)
The \textit{Cayley algebra} $(\cayn, \mlt)$ is an LSA with underlying Lie bracket $[x, y]$. Since $(n+1)\tr R(x) = n(n+1)x_{n} = 2\tr L(x)$, $(\cayn, \mlt)$ is not complete. By \eqref{cayl2},
\begin{align}
\tau(x, y) = \tr R(x\mlt y) = n(x \mlt y)_{n} = n^{2}x_{n}y_{n} + n\sum_{1 \leq j < n}x_{n-j}y_{j},
\end{align}
where $(x\mlt y)_{n}$ is the coefficient of $e_{n}$ in $x \mlt y$. The right principal idempotent in $\cayn$ is $r = n^{-1}e_{n}$ and $L(r)$ is invertible with
\begin{align}
\Pi(\cayn, L(r)) = \{\tfrac{1}{n}, \tfrac{2}{n}, \dots, \tfrac{n-1}{n}, 1\}.
\end{align}
By \eqref{cayl2}, $\fili_{n-1} = [\cayn, \cayn] = \ker \tr R = \ker R(r)$ is a codimension one abelian Lie subalgebra, and $(\cayn, [\dum, \dum])$ is solvable. By Lemma \ref{incompletelsalemma} the multiplication
\begin{align}\label{fn1}
x \circ y = x\mlt y - \tfrac{1}{n}\tau(x, y)e_{n} = \sum_{i = 2}^{n-1}\left( \sum_{1 \leq j < i}x_{i-j}y_{j}\right)e_{i}
\end{align}
on $\fili_{n-1}$ makes $(\fili_{n-1}, \circ)$ a complete LSA for which the restriction of $h = \tfrac{1}{n}\tau$ to $\fili_{n-1}$ is a Hessian metric, on which $L(r)$ acts as a compatible derivation, satisfying
\begin{align}\label{fn2}
&h(x, y) = \sum_{1 \leq j \leq n-1}x_{n-j}y_{j},&& x, y \in \fili_{n-1}.
\end{align}
In terms of the basis $\{e_{1}, \dots, e_{n-1}\}$ the relations \eqref{fn1} and \eqref{fn2} have the forms
\begin{align}
&e_{i}\circ e_{j} = \begin{cases} e_{i+j} & i + j \leq n-1\\ 0 & i+j > n - 1\end{cases},& &h(e_{i}, e_{j}) = \begin{cases} 1 & i + j = n\\ 0 & i+j \neq n\end{cases},
\end{align}
and $L(r)$ is the derivation defined by $De_{i} = \tfrac{i}{n}e_{i}$. The expression
\begin{align}
h(x\circ y, z) = \sum_{i + j + k = n, i\geq 1, j \geq 1, k\geq 1}x_{i}y_{j}z_{k},
\end{align}
is completely symmetric in $x$, $y$, and $z$, and $\circ$ is commutative.
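These properties of $(\fili_{n-1}, \circ, h)$ lend themselves to symbolic verification. The following sympy sketch (an added illustration for the particular case $n = 5$; not part of the original argument) encodes $\circ$ and $h$ from \eqref{fn1} and \eqref{fn2}, checks the complete symmetry of $h(x\circ y, z)$, and checks that the $(n-1)$-fold product $e_{1}^{\circ(n-1)} = e_{n-1}$ is nonzero while every product of $n$ factors vanishes:

```python
import sympy as sp

n = 5        # fil_{n-1} has basis e_1, ..., e_{n-1}
m = n - 1

def circ(x, y):
    # (x o y)_i = sum_{1 <= j < i} x_{i-j} y_j for i <= n-1, per \eqref{fn1}
    return [sum(x[i - j - 1] * y[j - 1] for j in range(1, i))
            for i in range(1, m + 1)]

def h(x, y):
    # h(x, y) = sum_{1 <= j <= n-1} x_{n-j} y_j, per \eqref{fn2}
    return sum(x[n - j - 1] * y[j - 1] for j in range(1, n))

x = list(sp.symbols(f'a1:{m + 1}'))
y = list(sp.symbols(f'b1:{m + 1}'))
z = list(sp.symbols(f'c1:{m + 1}'))

# complete symmetry of h(x o y, z)
s = sp.expand(h(circ(x, y), z))
assert sp.expand(h(circ(y, x), z) - s) == 0
assert sp.expand(h(circ(z, y), x) - s) == 0

# e_1 circ'd with itself n-2 times gives e_{n-1} (n-1 factors, nonzero) ...
e1 = [1] + [0] * (m - 1)
q = e1
for _ in range(m - 1):
    q = circ(q, e1)
assert q == [0] * (m - 1) + [1]

# ... while any product of n factors vanishes, as each factor raises degree
p = x
for _ in range(m):
    p = circ(p, y)
assert all(sp.expand(c) == 0 for c in p)
```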
From Lemma \ref{incompletelsalemma} it follows that the level sets of the characteristic polynomial $P_{n}$ are improper affine spheres homogeneous for the action of the simply connected abelian Lie group corresponding to $[\cayn, \cayn]$. As was mentioned in the introduction and will be proved now, these level sets are the hypersurfaces called \textit{Cayley hypersurfaces} in \cite{Eastwood-Ezhov}.
\begin{lemma}
For $n \geq 1$, the characteristic polynomial $P_{n}(x_{1}, \dots, x_{n})$ of $(\cayn, \mlt)$ satisfies the recursion
\begin{align}\label{cayleyrecursion}
P_{n}(x_{1}, \dots, x_{n}) - 1 = \sum_{i = 1}^{n-1}x_{i}(1 - P_{n-i}(x_{1}, \dots, x_{n-i})) + nx_{n},
\end{align}
where $P_{1}(x) = 1 + x$ and, when $n = 1$, the sum in \eqref{cayleyrecursion} is understood to be trivial.
\end{lemma}
\begin{proof}
Write
\begin{tiny}
\begin{align}\label{cayrec}
\begin{split}&P_{n}(x) = \det(I + R(x))\\
&=
\begin{vmatrix}
1 & 0 & 0 & \dots & 0 &0 & x_{1}\\
x_{1} & 1 & 0 & \dots &0& 0 & 2x_{2}\\
x_{2} & x_{1} & 1 & \dots & 0&0 & 3x_{3}\\
\vdots & \vdots & \vdots & \dots& \vdots & \vdots & \vdots \\
x_{n-3} & x_{n-4} & x_{n-5} & \dots & 1& 0 & (n-2)x_{n-2}\\
x_{n-2} & x_{n-3} & x_{n-4} & \dots & x_{1} & 1 & (n-1)x_{n-1}\\
x_{n-1} & x_{n-2} & x_{n-3} & \dots & x_{2} & x_{1} & 1 + nx_{n}
\end{vmatrix} = \begin{vmatrix}
1 & 0 & 0 & \dots &0 & 0 & x_{1}\\
x_{1} & 1 & 0 & \dots& 0 & 0 & 2x_{2}\\
x_{2} & x_{1} & 1 & \dots& 0 & 0 & 3x_{3}\\
\vdots & \vdots & \vdots & \dots& \vdots & \vdots & \vdots \\
x_{n-3} & x_{n-4} & x_{n-5} & \dots & 1 & 0 & (n-2)x_{n-2}\\
x_{n-2} & x_{n-3} & x_{n-4} & \dots & x_{1} & 1 & 1+ (n-1)x_{n-1}\\
x_{n-1} & x_{n-2} & x_{n-3} & \dots &x_{2} & x_{1} & 1 + nx_{n} + x_{1}
\end{vmatrix}\\
& = \begin{vmatrix}
1 & 0 & 0 & \dots & 0 & x_{1}\\
x_{1} & 1 & 0 & \dots & 0 & 2x_{2}\\
x_{2} & x_{1} & 1 & \dots & 0 & 3x_{3}\\
\vdots & \vdots & \vdots & \dots & \vdots & \vdots \\
x_{n-3} & x_{n-4} & x_{n-5} & \dots & 1 & (n-2)x_{n-2}\\
x_{n-1} & x_{n-2} & x_{n-3} & \dots & x_{2} & 1 + nx_{n} + x_{1}
\end{vmatrix}-x_{1}P_{n-1}(x_{1}, \dots, x_{n-1}),
\end{split}
\end{align}
\end{tiny}
\noindent
where the second equality results from adding the penultimate column to the last column, and the third equality results upon
evaluating the second determinant by expanding by cofactors down the penultimate column. Iterating the same procedure applied to the determinant appearing in the final expression of \eqref{cayrec} yields the recursion \eqref{cayleyrecursion}.
\end{proof}
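The recursion can also be confirmed by direct computation for small $n$. The following sympy sketch (an added check, for $n \leq 4$; the helper names are ad hoc) builds $R(x)$ from \eqref{cayl}, computes $P_{n} = \det(I + R(x))$, and verifies \eqref{cayleyrecursion}:

```python
import sympy as sp

def R_matrix(n, x):
    # R(x) of the Cayley algebra, per \eqref{cayl}:
    # R(x) = sum_i i x_i e_{in} + sum_{1 <= j < i <= n} x_{i-j} e_{ij}.
    R = sp.zeros(n, n)
    for i in range(1, n + 1):
        R[i - 1, n - 1] += i * x[i]
        for j in range(1, i):
            R[i - 1, j - 1] += x[i - j]
    return R

def P(n, x):
    # Characteristic polynomial P_n = det(I + R(x)).
    return sp.expand((sp.eye(n) + R_matrix(n, x)).det())

N = 4
x = {i: sp.Symbol(f'x{i}') for i in range(1, N + 1)}
for n in range(2, N + 1):
    lhs = P(n, x) - 1
    rhs = sum(x[i] * (1 - P(n - i, x)) for i in range(1, n)) + n * x[n]
    assert sp.expand(lhs - rhs) == 0  # the recursion for P_n
```

For instance $P_{2} = 1 + 2x_{2} - x_{1}^{2}$, consistent with $P_{2} - 1 = x_{1}(1 - P_{1}(x_{1})) + 2x_{2}$.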
\begin{lemma}\label{cayleypolynomiallemma}
The Eastwood-Ezhov polynomials $\Phi_{n}$ defined in \eqref{eepolynomials} are related to the polynomials $P_{n}$ by
\begin{align}\label{eec}
P_{n} - 1= -n\Phi_{n}.
\end{align}
As a consequence the polynomials $\Phi_{n}$ satisfy the recursion \eqref{cayleyrecursion2}.
\end{lemma}
\begin{proof}
Write $P = P_{n}$ and $\Phi = \Phi_{n}$. By \eqref{dptra}, for $1 \leq i \leq n-1$, the vector field $E_{i} = (I + R(x))e_{i} = \pr_{i} + \sum_{j = i+1}^{n}x_{j-i}\pr_{j}$ satisfies $dP_{x}(E_{i}) = P(x)\tr R(e_{i}) = 0$. By Proposition $1$ of \cite{Eastwood-Ezhov}, $d\Phi(E_{i}) =0$. Examining the descriptions \eqref{cayleyrecursion} and \eqref{eepolynomials} of $P$ and $\Phi_{n}$, it is straightforward to see that in $P$ and $-n\Phi$ the only monomial in which $x_{n}$ appears is $nx_{n}$, so that $\pr_{n}$ annihilates $P + n\Phi$. Hence the $n$ linearly independent vector fields $E_{1}, \dots, E_{n-1}$, and $\pr_{n}$ annihilate $P + n\Phi$, so it is constant. As $P(0) = 1$ and $\Phi(0) = 0$, the relation \eqref{eec} follows. The recursion \eqref{cayleyrecursion2} for $\Phi_{n}$ follows from \eqref{cayleyrecursion} and \eqref{eec}.
\end{proof}
Since the polynomials $P_{n}$ of \eqref{cayleyrecursion} are related to the polynomials $\Phi_{n}$ by $P_{n} - 1= -n\Phi_{n}$, the preceding discussion of $\cayn$ proves that the Cayley hypersurfaces are homogeneous improper affine spheres in a manner different from that in \cite{Eastwood-Ezhov}.
\begin{lemma}
The characteristic polynomial $P_{n}$ of the Cayley algebra $(\cayn, \mlt)$ solves
\begin{align}\label{hepn}
\H(e^{P_{n}}) =(-1)^{n(n-1)/2}n^{n+1}e^{nP_{n}}.
\end{align}
\end{lemma}
\begin{proof}
Define $E_{i} = (I + R(x))e_{i} = \pr_{i} + \sum_{j = i+1}^{n}x_{j-i}\pr_{j}$ for $1 \leq i \leq n-1$ and $E_{n} = \pr_{n}$. (Note that $E_{n}$ is not equal to $(I + R(x))e_{n}$.)
These vector fields satisfy $\pr_{E_{i}}E_{j} = E_{i+j}$ if $i + j \leq n$ and $\pr_{E_{i}}E_{j} = 0$ if $i + j> n$. By \eqref{cayleyrecursion}, $dP_{n}(E_{n}) = -n$. Since $dP_{n}(E_{i}) = 0$ if $i < n$, there results
\begin{align}
\left(\hess P_{n} + dP_{n} \tensor dP_{n}\right)(E_{i}, E_{j}) = \begin{cases}
-n & \text{if}\,\, i + j = n,\\
n^{2} & \text{if}\,\, i=n= j,\\
0 & \text{if} \,\, i+j \neq n,\, i+j < 2n,
\end{cases}
\end{align}
from which it follows that $\det(\hess P_{n} + dP_{n} \tensor dP_{n}) = (-1)^{n(n-1)/2}n^{n+1}\Psi^{2}$, where $\Psi$ is the standard volume form such that $\Psi(E_{1}, \dots, E_{n}) = 1$. The claim \eqref{hepn} follows from Lemma \ref{improperlemma}.
\end{proof}
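For small $n$ the identity \eqref{hepn} can be checked directly in coordinates. The following sympy sketch (an added verification for $n = 2, 3$, using the relation $\H(e^{P}) = e^{nP}\det(\hess P + dP \tensor dP)$) computes $P_{n} = \det(I + R(x))$ and confirms that $\det(\hess P_{n} + dP_{n} \tensor dP_{n})$ is the constant $(-1)^{n(n-1)/2}n^{n+1}$:

```python
import sympy as sp

def char_poly(n, xs):
    # P_n = det(I + R(x)) with R(x) as in \eqref{cayl}; xs is 0-indexed.
    R = sp.zeros(n, n)
    for i in range(1, n + 1):
        R[i - 1, n - 1] += i * xs[i - 1]
        for j in range(1, i):
            R[i - 1, j - 1] += xs[i - j - 1]
    return sp.expand((sp.eye(n) + R).det())

for n in (2, 3):
    xs = sp.symbols(f'x1:{n + 1}')
    Pn = char_poly(n, xs)
    grad = sp.Matrix([Pn.diff(v) for v in xs])
    M = sp.hessian(Pn, xs) + grad * grad.T
    # det(Hess P_n + dP_n (x) dP_n) is constant, with the predicted value
    assert sp.simplify(M.det()) == (-1)**(n * (n - 1) // 2) * n**(n + 1)
```

For $n = 2$ this gives $P_{2} = 1 + 2x_{2} - x_{1}^{2}$ and the constant $-8 = (-1)^{1}2^{3}$.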
Here is an alternative description of $\fili_{n-1}$. Consider an $n$-dimensional real vector space $\ste$ with basis $\{\ep_{1}, \dots, \ep_{n}\}$. Let $\{\ep^{1}, \dots, \ep^{n}\}$ be the dual basis of $\std$. View $\ep_{i}\,^{j} = \ep_{i}\tensor \ep^{j}$ as an element of $\eno(\ste)$, such that $\ep_{i}\,^{j}(x) = x^{j}\ep_{i}$ for $x = \sum_{p = 1}^{n}x^{p}\ep_{p}$. Let $\eno_{0}(\ste)$ be the subspace of trace-free endomorphisms. The associative algebra structure on $\eno(\ste)$ is given via composition by $\ep_{i}\,^{j}\ep_{k}\,^{l} = \delta_{k}\,^{j}\ep_{i}\,^{l}$, where $\delta_{k}\,^{j}$ is the Kronecker delta. The associated Lie bracket is the usual commutator of endomorphisms. The Lie centralizer $\cent(J)$ in $\eno_{0}(\ste)$ of the nilpotent endomorphism $J = \sum_{i = 1}^{n-1}\ep_{i}\,^{i+1}$ is the $(n-1)$-dimensional subspace generated by the nontrivial powers of $J$. The map $P:\fili_{n-1} \to \cent(J)$ associating with $x = \sum_{i = 1}^{n-1}x_{i}e_{i} \in \fili_{n-1}$ the polynomial $P(x) = \sum_{i = 1}^{n-1}x_{i}J^{i}$ is a linear isomorphism, and it is straightforward to check that $P(x)P(y) = P(x \circ y)$, where juxtaposition of elements of $\eno(\ste)$ indicates composition.
Let $E = \sum_{i = 1}^{n}\tfrac{n+1 - 2i}{2n}\ep_{i}\,^{i} \in \eno_{0}(\ste)$ ($E$ is the sum of the standard positive roots of $\eno_{0}(\ste)$, divided by $2n$). Then $[E, J^{k}] = \tfrac{k}{n}J^{k}$, so $D = \ad(E)$ is the derivation of $\cent(J)$ corresponding to $L(r)$ in the sense that $\ad(E)P(x) = P(L(r)x)$.
Define $\fili_{\infty}$ to be the vector space of infinite sequences $(x_{1}, x_{2}, \dots)$. It is straightforward to check that the multiplication $\circ$ defined on $\fili_{\infty}$ by
\begin{align}\label{filii}
(x \circ y)_{i} = \sum_{1 \leq j < i}x_{i-j}y_{j}
\end{align}
is associative (so left-symmetric) and commutative. The subspaces $\fili_{\infty}^{k} = \{x \in \fili_{\infty}: x_{i} = 0 \,\,\text{if}\,\, i \leq k\}$ constitute a decreasing filtration of $\fili_{\infty}$. If $x \in \fili_{\infty}^{p}$ and $y \in \fili_{\infty}^{q}$ and $i \leq p + q$, then for a summand in \eqref{filii} to be nonzero it must be that $i-j > p$ and $j > q$. These inequalities together yield the vacuous condition $q < j < i - p \leq q$. This shows that $\fili_{\infty}^{p}\circ \fili_{\infty}^{q} \subset \fili_{\infty}^{p+q}$. Define $\pi_{n}:\fili_{\infty} \to \fili_{n}$ to be the projection onto the first $n$ components, so that $\ker \pi_{n} = \fili_{\infty}^{n}$. The preceding implies that $\pi_{n}(x \circ y) = \pi_{n}(x) \circ \pi_{n}(y)$, so that $\pi_{n}$ is a left-symmetric homomorphism. Define $\la_{n}:\fili_{\infty} \to \rea$ by $\la_{n}(x) = x_{n}$. It is claimed that the metric $h$ on $\fili_{n-1}$ is defined by $h(x, y) = \la_{n}(\bar{x} \circ \bar{y})$ where $\bar{x}$ and $\bar{y}$ are any elements of $\fili_{\infty}$ such that $\pi_{n-1}(\bar{x}) = x$ and $\pi_{n-1}(\bar{y}) = y$. If $\pi_{n-1}(\hat{x}) = x$, then $\hat{x} - \bar{x} \in \fili_{\infty}^{n-1}$, and so its product with $\bar{y}$ is contained in $\fili_{\infty}^{n}$. Consequently, $\la_{n}(\bar{x}\circ \bar{y})$ does not depend on the choice of $\bar{x}$ and $\bar{y}$. It is given explicitly by \eqref{fn2}, and this shows that it is nondegenerate.
The space $(\fili_{\infty}, \circ)$ can be viewed as the space of formal power series $\sum_{i \geq 1}x_{i}t^{i}$ with no constant term with its usual multiplication. It is straightforward to check that the formal Euler operator $E:\fili_{\infty} \to \fili_{\infty}$ defined by $E(x)_{i} = ix_{i}$ is a derivation of $\circ$, that is $E(x\circ y) = E(x)\circ y + x \circ E(y)$.
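The convolution product and the formal Euler operator can be exercised directly; the following check (the truncation order $N = 12$ and the random inputs are arbitrary choices, not from the text) verifies commutativity of $\circ$ and the derivation property $E(x\circ y) = E(x)\circ y + x \circ E(y)$:

```python
from fractions import Fraction
import random

N = 12  # truncation order; (x o y)_i for i <= N only involves lower indices

def circ(x, y):
    # (x o y)_i = sum_{1 <= j < i} x_{i-j} y_j, computed exactly for i = 1..N
    return [sum(x[i - j - 1] * y[j - 1] for j in range(1, i)) for i in range(1, N + 1)]

def euler(x):
    # formal Euler operator: E(x)_i = i * x_i
    return [Fraction(i + 1) * x[i] for i in range(N)]

random.seed(1)
x = [Fraction(random.randint(-3, 3)) for _ in range(N)]
y = [Fraction(random.randint(-3, 3)) for _ in range(N)]

# commutativity and the derivation property E(x o y) = E(x) o y + x o E(y)
assert circ(x, y) == circ(y, x)
lhs = euler(circ(x, y))
rhs = [a + b for a, b in zip(circ(euler(x), y), circ(x, euler(y)))]
assert lhs == rhs
```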
\end{example}
\begin{example}
Here is given an example of a $6$-dimensional incomplete LSA $(\alg, \mlt)$ with nondegenerate trace form and satisfying the conditions of Theorem \ref{triangularizabletheorem}. With respect to the standard basis on $\alg = \rea^{6}$ the left and right multiplication operators are given by
\begin{small}
\begin{align}
&L(x) = \begin{pmatrix}
\tfrac{1}{4}x_{6} & 0 & 0 & 0 & 0 & 0\\
0 & \tfrac{1}{4}x_{6} & 0 & 0 & 0 & 0\\
0 & 0 & \tfrac{1}{2}x_{6} & 0 & 0 & 0\\
0 & 0 & x_{2} & \tfrac{3}{4}x_{6} & 0 & 0\\
x_{3} & x_{3} & x_{1} + 2x_{2}& 0 & \tfrac{3}{4}x_{6} & 0 \\
6x_{4} & 6x_{5} & 6x_{3} & 6x_{1} & 6x_{2} & x_{6}
\end{pmatrix},&&
&R(x) = \begin{pmatrix}
0 & 0 & 0 & 0 & 0 & \tfrac{1}{4}x_{1}\\
0 & 0 & 0 & 0 & 0 & \tfrac{1}{4}x_{2}\\
0 & 0 & 0 & 0 & 0 & \tfrac{1}{2}x_{3}\\
0 & x_{3} & 0 & 0 & 0 & \tfrac{3}{4}x_{4}\\
x_{3} & 2x_{3} & x_{1} + x_{2}& 0 & 0 & \tfrac{3}{4}x_{5} \\
6x_{4} & 6x_{5} & 6x_{3} & 6x_{1} & 6x_{2} & x_{6}
\end{pmatrix}.
\end{align}
\end{small}
Checking that this defines an LSA is tedious but straightforward. The trace form is
\begin{align}
\tau(x, y) = 6(x_{1}y_{4} + x_{4}y_{1} + x_{2}y_{5} + x_{5}y_{2} + x_{3}y_{3}) + x_{6}y_{6}.
\end{align}
The derived algebra $[\alg, \alg] = \ker \tr R = \ker R(r)$ has codimension one and is nilpotent but not abelian, for $[[\alg, \alg], [\alg, \alg]]$ is a two-dimensional abelian subalgebra. The characteristic polynomial is
\begin{align}
P(x) = 6x_1x_2x_3 + 6x_2^2x_3 - 6x_1x_4 - 6x_2x_5 - 3x_3^2 + x_6 + 1,
\end{align}
and $p(x_{1}, \dots, x_{5}) = P(x) - 1 - x_{6}$ solves $\H(p) = - 6^{5}$. By Lemma \ref{incompletelsalemma} the level sets of $P$ are improper affine spheres with affine normals parallel to $\pr_{x_{6}}$.
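The value of $\H(p)$ can be confirmed numerically; in the sketch below (not part of the text) the second partial derivatives of $p$ are entered by hand, $\H(p)$ is taken to be the Hessian determinant, and the determinant is checked to be constant in $x$:

```python
from fractions import Fraction
import random

# Second partials of p = 6x1x2x3 + 6x2^2x3 - 6x1x4 - 6x2x5 - 3x3^2,
# assembled by hand; the Hessian matrix is linear in x.
def hessian(x):
    x1, x2, x3, x4, x5 = x
    return [
        [0,      6 * x3,         6 * x2,          -6, 0],
        [6 * x3, 12 * x3,        6 * x1 + 12 * x2, 0, -6],
        [6 * x2, 6 * x1 + 12 * x2, -6,             0,  0],
        [-6,     0,              0,                0,  0],
        [0,      -6,             0,                0,  0],
    ]

def det(M):
    # cofactor expansion along the first row (fine for a 5x5 matrix)
    if len(M) == 1:
        return M[0][0]
    total = 0
    for j, a in enumerate(M[0]):
        if a == 0:
            continue
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * a * det(minor)
    return total

random.seed(2)
for _ in range(5):
    x = [Fraction(random.randint(-9, 9)) for _ in range(5)]
    assert det(hessian(x)) == -6 ** 5  # = -7776, independent of x
```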
\end{example}
\begin{example}\label{negeigexample}
Here is described a class of examples of incomplete LSAs $(\alg, \mlt)$ having nondegenerate trace forms, satisfying the conditions of Theorem \ref{triangularizabletheorem}, and such that $L(r)$ has various interesting properties, such as a negative eigenvalue or a nontrivial Jordan block.
Consider a trivial LSA $(\balg, \circ)$. Any metric $h$ is necessarily Hessian, and any endomorphism $D \in \eno(\balg)$ is a derivation. Any invertible endomorphism of $\balg$ is an automorphism of $\circ$, so modulo automorphisms of $(\balg, \circ)$ it can be assumed that $D$ has its real Jordan normal form. Supposing a particular relation between $D$ and $h$ leads to the following examples.
Work with respect to the standard basis $e_{1}, \dots, e_{n+1}$ of $\alg = \rea^{n+1}$ and write $x = \bar{x} + \hat{x}e_{n+1}$ where $\bar{x} = x_{1}e_{1} + \dots + x_{n}e_{n}$. Let $\balg$ be the subspace of $\alg$ spanned by $e_{1}, \dots, e_{n}$. Define $J \in \eno(\balg)$ by $J(e_{n+1-i}) = e_{i}$. Let $D \in \eno(\balg)$ be diagonal with respect to the standard basis and such that the diagonal entries $d_{i}$ defined by $D(e_{i}) = d_{i}e_{i}$ satisfy $d_{n+1-i} = 1- d_{i}$. This is equivalent to the relation $JD + DJ = J$. Let $N\in \eno(\balg)$ be a nilpotent endomorphism of $\balg$, strictly lower triangular with respect to the standard basis, and satisfying $[D, N] = 0$ and $N^{t}J = -JN$, where the transposed endomorphism $N^{t}$ is defined using the Euclidean structure on $\balg$ for which the standard basis is orthonormal. Examples showing that there exist $J$, $D$, and $N$ satisfying the conditions $DJ + JD = J$, $[D, N] = 0$, and $N^{t}J + JN = 0$ are given below. The conditions $DJ + JD = J$ and $N^{t}J + JN = 0$ correspond to requiring that the derivation of the trivial LSA structure on $\balg$ having matrix $D + N$ with respect to the standard basis be compatible with $h$. Using the mentioned Euclidean structure, $x \in \alg$ is identified with a column vector with components $x_{i}$. Let $\al$ be a nonzero real number. The left and right multiplication operators are defined by
\begin{align}
&L(x) = \begin{pmatrix*}[c] \hat{x}(D + N) & 0 \\ \al\bar{x}^{t}J & \hat{x}\end{pmatrix*},& &R(x) = \begin{pmatrix*}[c] 0 & (D + N)\bar{x} \\ \al\bar{x}^{t}J & \hat{x}\end{pmatrix*}.
\end{align}
That this defines an LSA is straightforward using the identities $DJ + JD = J$, $[D, N] = 0$, and $N^{t}J = -JN$. The multiplication and underlying Lie bracket are given explicitly by
\begin{align}\label{dnjex}
&x \mlt y = \begin{pmatrix*}[c]\hat{x}(D+N)\bar{y} \\ \al\bar{x}^{t}J\bar{y} + \hat{x}\hat{y}\end{pmatrix*},& &[x, y] = \begin{pmatrix*}[c]\hat{x}(D+N)\bar{y} - \hat{y}(D+N)\bar{x}\\ 0\end{pmatrix*}.
\end{align}
The trace form is
\begin{align}
\tau(x, y) = \hat{x}\hat{y} + \al \bar{x}^{t}J\bar{y} = x_{n+1}y_{n+1} + \al\sum_{i = 1}^{n}x_{n+1-i}y_{i},
\end{align}
which is evidently nondegenerate.
The induced multiplication $\circ$ on $\balg$ is trivial. The characteristic polynomial is $P(x) = 1 + \hat{x} - \al \bar{x}^{t}JD\bar{x}$. Note that neither $\tau$ nor $P$ depends on the choice of $N$.
The derived algebra $[\alg, \alg]$ is abelian. If $D$ is invertible then, because $D$ and $N$ commute, $D^{-1}N$ is nilpotent, so $D + N = D(I + D^{-1}N)$ is invertible and it is apparent from \eqref{dnjex} that $[\alg, \alg]$ has codimension one in $\alg$. However, if $D$ is not invertible, then it can be that $[\alg, \alg]$ has codimension greater than one in $\alg$, as occurs for example for the LSA given by
\begin{align}
& J = \begin{pmatrix*}[r]0 & 1 \\ 1 & 0 \end{pmatrix*},&& D = \begin{pmatrix*}[r]1 & 0 \\ 0 & 0 \end{pmatrix*},&
\end{align}
$N = 0$, and $\al = 1$.
A simple special case of the preceding showing that $\Pi(\alg, L(r))$ can contain nonpositive numbers and irrational numbers is the following example of dimension $n+1 = 4$. For any $\si \in \rea$, define
\begin{align}
& J = \begin{pmatrix*}[r]0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix*},&& D = \begin{pmatrix*}[r] \si & 0 & 0 \\ 0 & 1/2 & 0 \\ 0 & 0 & 1- \si \end{pmatrix*},
\end{align}
and $N = 0$.
With respect to the standard basis on $\alg = \rea^{4}$ the left and right multiplication operators are
\begin{align}
&L(x) = \begin{pmatrix}
\si x_{4} & 0 & 0 & 0 \\
0 & \tfrac{1}{2}x_{4} & 0 & 0 \\
0 & 0 & (1-\si)x_{4} & 0 \\
x_{3} & x_{2} & x_{1} & x_{4}
\end{pmatrix},&&
&R(x) = \begin{pmatrix}
0 & 0 & 0 & \si x_{1} \\
0 & 0 & 0 & \tfrac{1}{2}x_{2} \\
0 & 0 & 0 & (1-\si)x_{3} \\
x_{3} & x_{2} & x_{1} & x_{4}
\end{pmatrix}.
\end{align}
The trace form is
\begin{align}
\tau(x, y) = x_{1}y_{3} + x_{3}y_{1} + x_{2}y_{2} + x_{4}y_{4}.
\end{align}
The derived algebra $[\alg, \alg]$ is abelian and, if $\si \notin \{0, 1\}$, it has codimension one and equals $\ker \tr R = \ker R(r)$. The characteristic polynomial is $P(x) = 1 + x_{4} - x_{1}x_{3} - \tfrac{1}{2}x_{2}^{2}$.
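Left-symmetry of this family, i.e. the symmetry of the associator $(x \mlt y)\mlt z - x \mlt (y \mlt z)$ in $x$ and $y$, can be spot-checked numerically; the sketch below transcribes $L(x)$ from the display above, with the arbitrary choices $\si = 1/3$ and random rational inputs:

```python
from fractions import Fraction
import random

sigma = Fraction(1, 3)  # arbitrary value of the parameter

def mult(x, y):
    # x . y = L(x) y, transcribed from the displayed matrix L(x)
    x1, x2, x3, x4 = x
    y1, y2, y3, y4 = y
    return [
        sigma * x4 * y1,
        Fraction(1, 2) * x4 * y2,
        (1 - sigma) * x4 * y3,
        x3 * y1 + x2 * y2 + x1 * y3 + x4 * y4,
    ]

def assoc(x, y, z):
    # associator (x . y) . z - x . (y . z)
    return [p - q for p, q in zip(mult(mult(x, y), z), mult(x, mult(y, z)))]

random.seed(3)
for _ in range(20):
    x, y, z = ([Fraction(random.randint(-4, 4)) for _ in range(4)] for _ in range(3))
    # left-symmetry: the associator is symmetric in its first two arguments
    assert assoc(x, y, z) == assoc(y, x, z)
```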
A simple special case of the preceding showing that $L(r)$ need not be diagonalizable is the following example of dimension $n+1 = 4$. Define
\begin{align}
& J = \begin{pmatrix*}[r]0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix*},&& N = \begin{pmatrix*}[r] 0 & 0 & 0 \\ t & 0 & 0 \\ 0 & -t & 0 \end{pmatrix*},
\end{align}
where $t \in \rea$, and let $D = (1/2)I$. The left and right multiplication operators are
\begin{align}
&L(x) = \begin{pmatrix*}[c] \tfrac{1}{2}x_{4}I + x_{4}N & 0 \\ \bar{x}^{t}J & x_{4}\end{pmatrix*},& &R(x) = \begin{pmatrix*}[c] 0 & \tfrac{1}{2}\bar{x} + N\bar{x} \\ \bar{x}^{t}J & x_{4}\end{pmatrix*}.
\end{align}
That this defines an LSA is straightforward using the identity $N^{t}J = -JN$. The trace form is
\begin{align}
\tau(x, y) = \bar{x}^{t}J\bar{y} + x_{4}y_{4} = x_{1}y_{3} + x_{3}y_{1} + x_{2}y_{2} + x_{4}y_{4},
\end{align}
which is evidently nondegenerate.
The derived algebra $[\alg, \alg] = \ker \tr R = \ker R(r)$ has codimension one and is abelian. The characteristic polynomial is $P(x) = 1 + x_{4} -x_{1}x_{3} - \tfrac{1}{2}x_{2}^{2}$.
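Left-symmetry of this multiplication can be spot-checked numerically as well; the sketch below transcribes $L(x)$ with $D = \tfrac{1}{2}I$ and the displayed $N$ (the value $t = 2$ and the random rational inputs are arbitrary choices):

```python
from fractions import Fraction
import random

t = Fraction(2)  # arbitrary value of the parameter appearing in N

def mult(x, y):
    # x . y = L(x) y with D = (1/2) I and the nilpotent N displayed above
    x1, x2, x3, x4 = x
    y1, y2, y3, y4 = y
    n_ybar = [Fraction(0), t * y1, -t * y2]  # N applied to (y1, y2, y3)
    return [
        x4 * (Fraction(1, 2) * y1 + n_ybar[0]),
        x4 * (Fraction(1, 2) * y2 + n_ybar[1]),
        x4 * (Fraction(1, 2) * y3 + n_ybar[2]),
        x3 * y1 + x2 * y2 + x1 * y3 + x4 * y4,  # xbar^t J ybar + x4 y4
    ]

def assoc(x, y, z):
    return [p - q for p, q in zip(mult(mult(x, y), z), mult(x, mult(y, z)))]

random.seed(5)
for _ in range(20):
    x, y, z = ([Fraction(random.randint(-4, 4)) for _ in range(4)] for _ in range(3))
    assert assoc(x, y, z) == assoc(y, x, z)  # left-symmetry holds
```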
\end{example}
In the preceding examples $\tau$ and $P$ are the same, although the underlying LSAs $(\alg, \mlt)$ are not isomorphic because they have different $\Pi(\alg, L(r))$. This shows that a triangularizable LSA having nondegenerate trace form and codimension one derived Lie algebra is not determined up to isomorphism by its characteristic polynomial. On the other hand, the LSAs $(\balg, \circ)$ are in all cases trivial, so isomorphic.
\bibliographystyle{amsplain}
\def\polhk#1{\setbox0=\hbox{#1}{\ooalign{\hidewidth
\lower1.5ex\hbox{`}\hidewidth\crcr\unhbox0}}} \def\cprime{$'$}
\def\Dbar{\leavevmode\lower.6ex\hbox to 0pt{\hskip-.23ex \accent"16\hss}D}
\def\dbar{\leavevmode\hbox to 0pt{\hskip.2ex \accent"16\hss}d}
\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace}
\providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR }
\providecommand{\MRhref}[2]{%
\href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2}
}
\providecommand{\href}[2]{#2}
\section{Introduction}
Benchmarking is the process of analyzing key performance indicators with the aim of creating standards for comparing competing units \citep{bogetoft2010benchmarking}. Probabilistic benchmarks measure the probability of a unit falling into an interval along with the cumulative probability of exceeding a predetermined threshold \citep{wolfe2019diagnosis}. As a management tool, benchmarks make it possible to identify and apply better documented practices \citep{bogetoft2013performance}.
Benchmarks are widely used in diverse scientific disciplines. Pharmaceutics compare the prices of prescription drugs with benchmarks \citep{gencarelli2005one}. In environmental science, benchmarks set water quality standards \citep{van2019specific} or define thresholds for radiation risk \citep{bates2011environmental}. In finance, interest-rate benchmarks mitigate search frictions by lowering informational asymmetries in the markets \citep{duffie2017benchmarks}.
This study develops a 2-step process for calculating probabilistic benchmarks in noisy datasets. In step 1, double-hyperbolic undersampling filters the noise of key performance indicators (KPIs); in step 2, a relevance vector machine estimates probabilistic benchmarks with the filtered KPIs. Archimedean copulas approximate the joint density of the KPIs during the denoising step. Besides estimating probabilistic benchmarks, the methods of step 2 identify the continuous and categorical factors influencing the benchmarks.
The 2-step methodology is illustrated with an application to a database of nanofinance+ working with business interventions. In nanofinance, low-income individuals without access to formal financial services get together and start to accumulate their savings into a fund, which they later use to provide themselves with loans and insurance. In nanofinance+ (NF+), development agencies, donors and governments help communities to create NF+ groups for financial inclusion and then the groups become a platform for additional `plus' sustainable development programs---see \citet{gonzales2019b} for details.
The methods proposed in this study complement the state-of-the-art in probabilistic benchmarking of \citet{chakarov2014expectation}, \citet{chiribella2014quantum} or \citet{yang2014certifying}. Along with this methodological contribution, the empirical findings of this document fill the research gap left by economic studies that have focused only on calculating benchmarks for microfinance institutions---see for example \citet{tucker2001} or \citet{reille2002}. In microfinance, benchmarks are used to compare institutions; in nanofinance, benchmarks are intended to compare groups. Benchmarks for nanofinance groups make it possible to set performance standards for monitoring and evaluating intervention programs implemented in communities worldwide.
The definition of multivariate probabilistic benchmarks used in the study is described in Section \ref{sec:def}. Section \ref{sec:methods} discusses the methods for estimating multivariate probabilistic benchmarks in noisy datasets. Section \ref{sec:empap} shows the empirical application to the NF+ database. Section \ref{sec:conclusion} concludes. The data and the MatLab codes that allow replication of the results of the study are freely available at MathWorks file-exchange (https://nl.mathworks.com/matlabcentral/fileexchange/74398-double-hyperbolic-undersampling-probabilistic-benchmarks).
\section{Multivariate probabilistic benchmarks}\label{sec:def}
Classical benchmarking makes use of fixed inputs to calculate point estimates for classification standards. Probabilistic benchmarking, in contrast, takes into account elements of uncertainty in the inputs and thus generates interval estimates as an output \citep{liedtke1998irreproducible}. For example, probabilistic benchmarks are calculated for quantum information protocols---teleportation and approximate cloning---in \citet{yang2014certifying}; more recently, \citet{lipsky2019accuracy} calculate probabilistic benchmarks for noisy anthropometric measures, and \citet{wolfe2019diagnosis} use probabilistic benchmarks to quantify the uncertainty in fibromyalgia diagnosis.
Proposition 1 below shows the definition of multivariate probabilistic benchmarks used in this study.
\begin{flushleft}\rule{\textwidth}{0.4pt}\end{flushleft}
\noindent\textit{\textbf{Proposition 1: Multivariate probabilistic benchmarks.} Let $\mathbf{y}$ be an $N \times j$ matrix whose rows are $j$-dimensional vectors of KPIs ($y_1,y_2,...,y_j$ key performance indicators) for a set $\mathcal{H} = \left\{ \eta_1, \eta_2,..., \eta_N \right\}$ of comparable units. Given the joint density,
\[
f_{\mathbf{y}} \left(\mathbf{y}\right) := f\left(y_1,y_2,...,y_j\right)
= \frac{\partial^{j} F\left(y_1,y_2,...,y_j\right)}{\partial y_1 \partial y_2 \cdots \partial y_j},
\]
where $F\left(y_1,y_2,...,y_j\right)$ is a CDF and $f\left(y_1,y_2,...,y_j\right) \geq 0$, the differentiated units $\mathcal{h}_{\tau} \subset \mathcal{H}$ will be those for which:
\begin{equation}
1 - \int_{\tau_1}^{\infty} \int_{\tau_2}^{\infty} \cdots \int_{\tau_j}^{\infty} f\left(y_1,y_2,...,y_j\right) d y_1 d y_2 \cdots d y_j,
\label{eq:succ}
\end{equation}
given a threshold $\tau \in \mathbb{R}^j$.} \newline
\rule{\textwidth}{0.4pt}\bigskip
In Proposition 1, the discrimination of the units $\eta_1, \eta_2,..., \eta_N$ in a comparable set $\mathcal{H}$ is based on interval estimates of a multi-dimensional threshold (the benchmark) $\tau$. Proposition 1 sets a probabilistic standard based on the joint multivariate distribution function of the KPIs $y_1,y_2,...,y_j$ collected in $\mathbf{y}$ and used for calculating $\tau$. The isolines---the contour intervals---defined by the benchmarks $\tau$ make it possible to identify the units $\mathcal{h}_{\tau}$ whose performance differs in the unit hypercube ($\mathcal{h}_{\tau} \subset \mathcal{H}$).
Proposition 2 below states that the thresholds $\tau$ can be calculated without the need to know the exact form of the joint density $f_{\mathbf{y}} \left(\mathbf{y}\right)$ in Equation \ref{eq:succ}:
\begin{flushleft}\rule{\textwidth}{0.4pt}\end{flushleft}
\noindent\textit{\textbf{Proposition 2: Unit-hypercube approximation.}
Let $\mathcal{C}_{\Theta}:[0,1]^d \mapsto [0,1]$ be a $d$-dimensional multivariate cumulative distribution function with uniform marginal distributions $\mathbf{u} = \left(u_1, u_2, ... , u_d\right)$ and a dependence structure defined by $\Theta$. If $\mathbf{u} \equiv F_{\mathbf{y}} \left( \mathbf{y}\right)$, the joint density of $\mathbf{y}$ needed to calculate $\tau$ can be approximated with the simulation of $\mathcal{C}_{\Theta} \left( \mathbf{u} \right)$ in the unit hypercube:
\[
\mathcal{C}_{\Theta} \left( \mathbf{u} \right)
:=
\mathcal{C}_{\Theta} \left( u_1, u_2,\dots,u_d\right) = \int_{-\infty}^{u_1} \int_{-\infty}^{u_2} \cdots \int_{-\infty}^{u_d} c \left( u_1, u_2, \dots, u_d\right) d u_1 d u_2 \cdots d u_d,
\]
for $\mathcal{C}_{\Theta} \left( \mathbf{u} \right) = 0$ whenever some $u_i = 0$, $\mathcal{C}_{\Theta} \left( \mathbf{u} \right) = u_i$ whenever all coordinates other than $u_i$ equal $1$, and $\mathcal{C}_{\Theta}\left(\cdot\right)$ satisfies the non-negativity condition on the volume, i.e. $\mathcal{C}_{\Theta}\left(\cdot\right)$ is $d$-increasing (quasi-monotone) in $[0,1]^d$.
}\newline
\rule{\textwidth}{0.4pt}\bigskip
Proposition 2 is based on Sklar's theorem (\cite{sklar1959fonctions}; \cite{sklar1996random}), which indicates that any multivariate joint distribution can be written in terms of univariate marginal distribution functions and a copula $\mathcal{C}_{\Theta}$ that captures the co-dependence between the variables \citep{DURANTE2013945}.
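A small simulation illustrates the unit-hypercube approximation of Proposition 2: copula samples approximate an exceedance probability of the kind appearing in Equation \ref{eq:succ}. The Clayton copula, the Marshall--Olkin sampling scheme, and all numerical values below are illustrative choices, not taken from the text:

```python
import random

theta, n = 2.0, 200_000
random.seed(6)

def clayton_pair():
    # Marshall-Olkin sampling: V ~ Gamma(1/theta, 1), U_i = (1 + E_i/V)^(-1/theta)
    v = random.gammavariate(1.0 / theta, 1.0)
    return tuple((1.0 + random.expovariate(1.0) / v) ** (-1.0 / theta) for _ in range(2))

tau = (0.5, 0.5)
exceed = sum(1 for _ in range(n)
             if all(u > t for u, t in zip(clayton_pair(), tau))) / n

# closed form for the bivariate survival probability:
# P(U1 > a, U2 > b) = 1 - a - b + C_theta(a, b)
C = (tau[0] ** -theta + tau[1] ** -theta - 1.0) ** (-1.0 / theta)
assert abs(exceed - (1.0 - tau[0] - tau[1] + C)) < 0.01
```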
Archimedean copulas are a type of copul{\ae} that approximate the joint multivariate distribution of KPIs that are not elliptically distributed \citep{naifar2011modelling}. In an Archimedean copula $\mathcal{C}_g$, an additive generator function $g(u)$ models the strength of dependence in arbitrarily high dimensions with only one scalar parameter: $\theta$ \citep{smith2003modelling}. Formally:
\begin{equation}
\mathcal{C}_g (u_1, u_2, ... , u_d) =
\begin{cases}
g^{-1}(g(u_1) + g(u_2) + \cdots + g(u_d)) \text{ if } \sum_{v= 1}^d g(u_v) \leq g(0) \\
0 \text{ otherwise, }
\end{cases}
\end{equation}
with $g(u)$ a generator function that satisfies $g(1) = 0$, $g'(u) < 0$ and $g''(u) > 0$ for all $0 \leq u \leq 1$; hence $\mathcal{C}_{\theta} \equiv \mathcal{C}_g$. In Clayton's Archimedean copula, for example, the generator function is equal to $g_{\theta} (u) = u^{-\theta} -1$ for $\theta > 0$ (\cite{mcneil}; \cite{cherubini2011dynamic}):
\begin{equation}
\mathcal{C}_{\theta} (u_1, u_2, ... , u_d) =
\left(
1 - d + \sum_{v= 1}^d u_v ^{-\theta}
\right)^{-1/\theta}.
\end{equation}
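The generator representation can be checked directly against the closed form above (an illustrative sketch; the parameter value and the random sample points are arbitrary choices):

```python
import math
import random

theta = 2.5  # arbitrary positive dependence parameter

def g(u):
    # Clayton generator g_theta(u) = u^(-theta) - 1
    return u ** -theta - 1.0

def g_inv(t):
    return (1.0 + t) ** (-1.0 / theta)

def clayton(us):
    # closed form (1 - d + sum_v u_v^(-theta))^(-1/theta)
    return (1.0 - len(us) + sum(u ** -theta for u in us)) ** (-1.0 / theta)

random.seed(4)
for _ in range(100):
    us = [random.uniform(0.05, 0.95) for _ in range(4)]
    # for theta > 0, sum_v g(u_v) <= g(0) = infinity always holds
    assert math.isclose(g_inv(sum(g(u) for u in us)), clayton(us), rel_tol=1e-12)
```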
\section{Estimation of multivariate probabilistic benchmarks in noisy datasets}\label{sec:methods}
Based on Propositions 1 and 2 above, a 2-step processes is suggested to calculate multivariate probabilistic benchmarks in noisy data sets:
\begin{enumerate}
\item In the first step, a swarm algorithm estimates the vector of parameters of a double-hyperbolic noise filter. The optimal estimates of the vector maximize the dependence structure $\theta$ in an Archimedean copula calculated with noisy KPIs. The optimal double-hyperbolic filter that maximizes $\theta$ is used to denoise the KPIs.
\item In the second step, a relevance vector machine is applied to the denoised KPIs in order to calculate multivariate probabilistic benchmarks. Besides estimating isolines of benchmarks, the relevance vector machine makes it possible to identify factors that influence the benchmarks.
\end{enumerate}
\subsection{Step 1: Double-hyperbolic undersampling and swarm optimization}
Let $f_h \left( \bm{\psi}, \mathbf{y} \right)$ be the real part $\mathcal{R}\left\{\cdot\right\}$---the imaginary part is discarded---of a translated generalized hyperbola of the form \citep{hamilton1998}:
\begin{equation}
f_h \left( \bm{\psi}, \mathbf{y} \right) :=
\mathcal{R}
\left\{
\psi_1
\sqrt{\psi_2 + \frac{\psi_3}{\psi_4 + \mathbf{y}}}
\right\},
\quad \bm{\psi} = \left( \psi_1, \psi_2, \psi_3, \psi_4 \right).
\label{eq:hyperbola}
\end{equation}
If $f_h^\perp \left( \bm{\psi ^\perp}, \mathbf{y} \right)$ is an orthogonal/quasi-orthogonal rotation of the translated generalized hyperbola defined by Equation \ref{eq:hyperbola}---with rotation parameters $\bm{\psi}^\perp = \left( \psi_1^\perp, \psi_2^\perp, \psi_3^\perp, \psi_4^\perp \right)$---then the region of the double hyperbola defined by the lobes of $f_h \left( \bm{\psi}, \mathbf{y} \right)$ and $f_h^\perp \left( \bm{\psi ^\perp}, \mathbf{y} \right)$ can be used to filter the noise of the joint distribution of $\mathbf{y}$: elements of $\mathbf{y}$ outside the lobes of $f_h \left( \bm{\psi}, \mathbf{y} \right)$, or inside the lobes of the rotated hyperbola $f_h^\perp \left( \bm{\psi ^\perp}, \mathbf{y} \right)$, are discarded.
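A minimal sketch of the double-hyperbolic filter follows, assuming bivariate points $(y_1, y_2)$ and a lobe-membership test that compares $|y_2|$ with the branch height at $y_1$; this membership test is an assumption on my part, since the text does not make it explicit:

```python
import cmath

def f_h(psi, y):
    # real part of psi1 * sqrt(psi2 + psi3 / (psi4 + y)); the imaginary
    # part of the complex square root is discarded, as in the text
    p1, p2, p3, p4 = psi
    return (p1 * cmath.sqrt(p2 + p3 / (p4 + y))).real

def inside_lobes(psi, y1, y2):
    # hypothetical membership test: the point lies below the branch at y1
    return abs(y2) <= abs(f_h(psi, y1))

def undersample(points, psi, psi_perp):
    # keep points inside the lobes of f_h and outside those of its rotation
    return [(a, b) for (a, b) in points
            if inside_lobes(psi, a, b) and not inside_lobes(psi_perp, a, b)]
```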
Let $\mathbf{y}_h \subset \mathbf{y}$ be a vector with the non-discarded elements of $\mathbf{y}$ inside the lobes of $f_h \left( \bm{\psi}, \mathbf{y} \right)$ and outside the lobes of $f_h^\perp \left( \bm{\psi ^\perp}, y \right)$. The vector $\mathbf{y}_h$ is an optimal noise reduction of the original data $\mathbf{y}$ if the values of $\bm{\psi}$ and $\bm{\psi}^\perp$ maximize the dependence structure ($\theta$) of an Archimidean copula estimated with \textit{samples} of $\mathbf{y}$,
\begin{equation}
\max_{
\substack{
\left\{
\bm{\psi}, \bm{\psi^\perp}
\right\}
\subset \mathbb{R}^{4} \\ \mathbf{y}_h \subset \mathbf{y}
}
}
\left( -2 \int_0^1 \frac{u^ {-\theta} - 1 }{-\theta u^ {-(\theta + 1)}}\, du
\right)^{-1} - 2
\label{eq:maxtheta}
\end{equation}
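For the Clayton generator the integral in \eqref{eq:maxtheta} has the closed form $-1/(2(\theta + 2))$, so the maximized expression recovers $\theta$ and is equivalent to inverting Kendall's $\tau = \theta/(\theta + 2)$. A numerical check (midpoint rule; the value of $\theta$ is arbitrary):

```python
import math

theta = 3.0  # an arbitrary Clayton parameter used for the check

def integrand(u):
    # g(u) / g'(u) for the Clayton generator g(u) = u^(-theta) - 1
    return (u ** -theta - 1.0) / (-theta * u ** -(theta + 1.0))

# midpoint rule on (0, 1); the integrand extends continuously to both endpoints
n = 100_000
integral = sum(integrand((k + 0.5) / n) for k in range(n)) / n

# the maximized expression recovers theta ...
recovered = (-2.0 * integral) ** -1.0 - 2.0
assert math.isclose(recovered, theta, rel_tol=1e-3)

# ... because the integral equals -1/(2 (theta + 2)); equivalently Kendall's
# tau of the Clayton copula is tau = 1 + 4 * integral = theta / (theta + 2)
tau = 1.0 + 4.0 * integral
assert math.isclose(tau, theta / (theta + 2.0), rel_tol=1e-3)
```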
Box 1 below shows a swarm algorithm proposed to estimate the optimal values of $\bm{\psi}$ and $\bm{\psi}^\perp$ that maximize $\theta$. The algorithm maximizes the co-dependence in the Archimidean copula by taking samples of the KPIs contained in $\mathbf{y}$. The structure of the swarm algorithm---separation, alignment, cohesion---is inspired by the BOIDS algorithm of artificial life described in \citet{reynolds1987flocks}.
\smallskip
\begin{tcolorbox}[arc=0pt,boxrule=0pt,bottomrule=1pt,toprule=1pt]
\textbf{Box 1. Pseudo-code of the swarm algorithm}
\tcblower
\begin{algorithm}[H]
\KwData{$\left\{y_1,y_2,...,y_j\right\} \ni \mathbf{y}$}
\KwResult{$\bm{\psi}$, $\bm{\psi}^\perp$}
initialization\;
$\delta, M, \theta_0, p_0, p^\perp_0$, $\zeta$, $\zeta^*$ \;
\While{$m \in \mathbb{Z}_+$}{
$w_\delta = \delta \frac{|p|}{\norm{p} }$, $\quad w_\delta^\perp = \delta \frac{|p^\perp|}{\norm{p^\perp} } $ \;
\For{ $m \leftarrow 1$}
{
random exploration of hyperbola parameters\;
$p_m = p_{m - 1} + w_\delta \epsilon$, $\quad p_m^\perp = p^\perp_{m - 1} + w_\delta^\perp\epsilon$, $\quad\epsilon \sim (0,1)$ \;
hyperbolic undersampling\;
$ y_h = f_h ( p_m, \mathbf{y}) $, $\quad y_h^\perp = f_h^\perp (p_m^\perp , \mathbf{y})$, $\quad\left\{ y_h, y_h^\perp
\right\} \ni \mathbf{y}_h$\;
copula dependence estimated with filtered $m$-samples\;
$ \hat\theta_m = \mathcal{C}_\theta (\mathbf{y}_h) $ \;
}
$\hat\theta^* = \max \left\{ \hat\theta_i \right\}_{i= 1}^m$ (optimal dependence)\;
$p^* = p (\hat\theta^*)$, $\quad p^{\perp *} = p^\perp (\hat\theta^*)$ (optimal hyperbola parameters)\;
cohesion = $\frac{1}{2} \left( \norm{ p_m - p_m^* } + \norm{ p_m^\perp - p_m^{\perp *} } \right)$\;
separation = $\frac{1}{2} \left( \norm{ p_m - \overline{p}_m^* } + \norm{ p_m^\perp - \overline{p}_m^{\perp *} } \right)$\;
\eIf{$\hat\theta^*$ > $\hat\theta_{m-1}$}{
$\hat\theta_m = \hat\theta^*$ \;
$p_m = p^*$, $\quad p_m^\perp = p^{\perp *}$ \;
alignment\;
$\delta_m = \delta_{m-1} \left( \zeta^* \right)$ \;
$m = M - 1$ \;
}{
$\delta_m = \delta_{m-1} \left( \zeta \right)$ \;
}
}
\end{algorithm}
\end{tcolorbox}
In the swarm algorithm, $\delta, M, \theta_0, p_0, p^\perp_0$, $\zeta$, $\zeta^*$ are initialization parameters. The parameter $\delta \in \mathbb{R}_+$ controls the initial dispersion of the particles, $M$ is the initial number of particles used to explore possible values of $\theta$; $\theta_0 = 0$ is the starting value of $\theta_{m}$; $p_0, p^\perp_0$ are the starting values of $p_m, p^\perp_m$; $\zeta$, $\zeta^*$ are parameters that control the degree of exploration in the swarm algorithm. Exploitation ($\delta$) and exploration parameters ($\zeta$, $\zeta^*$) are typical of metaheuristic algorithms in general and swarm intelligence in particular---see for example \citet{tilahun2019balancing}.
The algorithm described in Box 1 explores optimal values of the hyperbola parameters $p_m, p^\perp_m$ during $m$ iterations, based on two behavioral rules: cohesion and separation. Swarm cohesion depends on the Euclidean norm between $p_m, p^\perp_m$ and the optimal values $p_m^*, p^{\perp*}_m$ calculated with $\theta^*$. Swarm separation is a function of the norm between $p_m, p^\perp_m$ and the centroids $\overline{p}_m^*, \overline{p}_m^{\perp *}$. Cohesion prevents the swarm from including extreme outliers---and thus avoids a biased estimation of $\theta$---while separation guarantees that the swarm properly explores all the potential values that can maximize $\theta$ for an optimal noise filtering. Alignment is achieved by gradually reducing exploration and exploitation with $\zeta^*$ ($\zeta > \zeta^*$, with $0 < \zeta^* \leq 1$).
\subsection{Step 2: Relevance vector machines}
Traditional methods of supervised learning---such as support vector machines---produce point estimates of benchmarks as an output. Relevance vector machines, in contrast, estimate the conditional distribution of multivariate benchmarks in a fully probabilistic framework. Compared to support vector machines, relevance vector machines capture uncertainty and make use of a small number of kernel functions to produce posterior probabilities of membership classification.
Let $\left\{ \mathbf{x}_i \right\}_{i=1}^k$ be a $k$-set of covariates influencing the KPIs contained in $\mathbf{y}$. The importance of each covariate is defined by a weight vector $\mathbf{w} = (w_0,\dots,w_k)$. In a linear approach, $
\mathbf{y} = \mathbf{w}^\top\mathbf{x}$. In the presence of a non-linear relationship between $\mathbf{y}$ and $\mathbf{x}$, a nonlinear mapping $\mathbf{x} \to \phi(\mathbf{x})$ provides basis functions for $
\mathbf{y} = \mathbf{w}^\top \phi(\mathbf{x})$.
Given an additive noise $\epsilon_k$, the benchmark targets $\mathbf{t}$ will be,
\begin{equation}
\mathbf{t} = \mathbf{w}^\top \phi(\mathbf{x}) + \epsilon_k,
\label{eq:targets}
\end{equation}
where $\epsilon_k$ are independent samples from a mean-zero Gaussian noise process with variance $\sigma^2$. \citet{tipping2000sparse} and
\citet{tipping2001sparse} offer a sparse Bayesian learning approach to estimate $\mathbf{w}$ in Equation \ref{eq:targets} based on the likelihood of the complete data set,
\[
p(\mathbf{t} | \mathbf{w}, \sigma^2 ) = (2\pi \sigma^2)^{-k/2} \exp \left\{ -\frac{1}{2\sigma^2} || \mathbf{t} - \Phi \mathbf{w} ||^2
\right\},
\]
where $\Phi$ is a $k \times (k + 1)$ design matrix $\Phi = \left[ \phi(\mathbf{x}_1),\phi(\mathbf{x}_2),\dots, \phi(\mathbf{x}_k) \right]^\top$ and $\phi(\mathbf{x}_i) = \left[1, \mathcal{K}(\mathbf{x}_i,\mathbf{x}_1),\mathcal{K}(\mathbf{x}_i,\mathbf{x}_2),\dots, \mathcal{K}(\mathbf{x}_i,\mathbf{x}_k) \right]^\top$ for a kernel function $\mathcal{K}(\cdot,\cdot)$. In a zero-mean Gaussian prior for $\mathbf{w}$,
\[
p(\mathbf{w} | \bm{\alpha}) = \prod_{i = 0}^k \mathcal{G} ( w_i | 0,\alpha_i^{-1}),
\]
$\bm{\alpha}$ is a vector of $k+1$ hyperparameters, and the posterior distribution over the weights is:
\begin{align*}
p(\mathbf{w} | \mathbf{t}, \bm{\alpha}, \sigma^2) & = \frac{
p(\mathbf{t} | \mathbf{w}, \sigma^2) p(\mathbf{w} | \bm{\alpha})
}{p(\mathbf{t} | \bm{\alpha}, \sigma^2)}, \\
& = (2\pi)^{-(k+1)/2} |\mathbf{\Sigma}|^{-1/2} \exp \left\{ -\frac{1}{2}
(\mathbf{w} - \bm{\mu})^\top
\mathbf{\Sigma}^{-1}
(\mathbf{w} - \bm{\mu})
\right\},
\end{align*}
where $\mathbf{\Sigma} = (\sigma^{-2} \mathbf{\Phi}^\top \mathbf{\Phi} + \mathbf{A})^{-1}$, $\bm{\mu} = \sigma^{-2} \mathbf{\Sigma} \mathbf{\Phi}^\top \mathbf{t}$ and
$\mathbf{A} = \text{diag}(\alpha_0,\alpha_1,\dots,\alpha_k)$. Updating methods for $\alpha_i$ are described in \citet{barber2012}. The complete specification of the hierarchical priors---based on the automatic relevance determination of \citet{mackay1996bayesian} and \citet{neal2012bayesian}---can be found in \citet{tipping2001sparse}.
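A minimal sketch of the posterior computation $\mathbf{\Sigma} = (\sigma^{-2} \mathbf{\Phi}^\top \mathbf{\Phi} + \mathbf{A})^{-1}$ and $\bm{\mu} = \sigma^{-2} \mathbf{\Sigma} \mathbf{\Phi}^\top \mathbf{t}$, with a Gaussian kernel, fixed hyperparameters, and toy data (all numerical values are arbitrary; no updating of the $\alpha_i$ is performed):

```python
import math

def matT(A):
    return [list(r) for r in zip(*A)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def inverse(A):
    # Gauss-Jordan elimination with partial pivoting (fine for tiny matrices)
    m = len(A)
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(m)] for i, row in enumerate(A)]
    for c in range(m):
        p = max(range(c, m), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        M[c] = [v / M[c][c] for v in M[c]]
        for r in range(m):
            if r != c:
                M[r] = [v - M[r][c] * w for v, w in zip(M[r], M[c])]
    return [row[m:] for row in M]

# toy data: k = 3 inputs and targets, Gaussian kernel
xs, ts = [0.0, 1.0, 2.0], [[0.1], [0.9], [0.2]]
k, sigma2 = len(xs), 0.01
K = lambda a, b: math.exp(-(a - b) ** 2)

# design matrix Phi: row i is [1, K(x_i, x_1), ..., K(x_i, x_k)]
Phi = [[1.0] + [K(xi, xj) for xj in xs] for xi in xs]
alphas = [1.0] * (k + 1)  # one fixed hyperparameter per weight
A = [[alphas[i] if i == j else 0.0 for j in range(k + 1)] for i in range(k + 1)]

PtP = matmul(matT(Phi), Phi)
B = [[PtP[i][j] / sigma2 + A[i][j] for j in range(k + 1)] for i in range(k + 1)]
Sigma = inverse(B)
mu = [[v[0] / sigma2] for v in matmul(Sigma, matmul(matT(Phi), ts))]

# check that Sigma really inverts (sigma^-2 Phi^T Phi + A)
check = matmul(B, Sigma)
assert all(math.isclose(check[i][j], float(i == j), abs_tol=1e-8)
           for i in range(k + 1) for j in range(k + 1))
```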
The assignment of an individual hyperparameter $\alpha_i$ to each weight $w_i$ makes it possible to achieve sparsity in the relevance vector machine. As the posterior distribution of many of the weights is peaked around zero, non-zero weights are associated only with `relevant' vectors, i.e. with the most relevant influencing factors of the probabilistic benchmarks estimated with the denoised KPIs.
\section{Empirical application: probabilistic benchmarks in nanofinance+}\label{sec:empap}
This section illustrates the methods described in Section \ref{sec:methods} with an application to a database of 7830 nanofinance+ groups receiving entrepreneurship and business training in 14 countries: Benin, Burkina Faso, Ethiopia, Ghana, Malawi, Mozambique, Niger, Sierra Leone, South Africa, Sri Lanka, Tanzania, Togo, Uganda and Zambia. Almost all of the groups in the database work with a development agency (94\%), and 43\% of the groups are located in rural regions.
Table \ref{tab:desmicmac} shows descriptive statistics of group-level characteristics and the macro-economic environment of the countries where the groups operate. On average, each member of NF+ contributes around 29 USD of savings to the common fund and receives on average a loan of 22 USD. Despite the low values of savings and loans, returns on savings in the groups are on average 47\%, whereas the equity per member is on average equal to 40 USD (Table \ref{tab:desmicmac}).
Returns on savings ($y_1$) and equity per member ($y_2$) are the KPIs used for calculating the benchmarks of NF+ in the empirical application. Hence, $j =2$, $\mathbf{y} = [y_1 \;\; y_2]$, and the joint distribution in Proposition 1 simplifies to,
\begin{equation}
f_{y_1,y_2} \left( y_1, y_2\right) = f_{y_1|y_2} \left( y_1|y_2\right) f_{y_2} \left( y_2\right) = f_{y_2|y_1} \left( y_2|y_1\right) f_{y_1} \left( y_1\right).
\label{eq:joint2}
\end{equation}
Successful units---NF+ groups with a higher financial performance---will be those with KPIs delimited by the isolines of the threshold $\tau$,
\[
1 - \int_{\tau_1}^{\infty} \int_{\tau_2}^{\infty} f_{y_1,y_2} \left( y_1, y_2\right) d y_1 d y_2,
\]
for a probabilistic benchmark $\tau = \left(\tau_1, \tau_2\right) \in \mathbb{R}^2$.
Following Proposition 2, the joint density of the KPIs (equation \ref{eq:joint2}) is approximated with a bivariate Archimedean copula:
\begin{equation}
\mathcal{C}_g (u_1, u_2) =
\begin{cases}
g^{-1}(g(u_1) + g(u_2)) \text{ if } g(u_1) + g(u_2) \leq g(0) \\
0 \text{ otherwise. }
\end{cases}
\end{equation}
Clayton's Archimedean copula is particularly suitable to model the dynamics of nanofinance+. Clayton's copula has greater dependence in the lower tail compared to the upper tail. In the case of NF+, greater lower tail dependence is expected because groups with low equity will have zero or negative returns, while in contrast there is more dispersion in the indicators of groups with higher performance---i.e. some groups show higher equity but low levels of returns due to lower repayment rates, while groups with low equity may have higher returns due to the higher interest rates charged for their loans.
A bivariate Clayton's Archimedean copula for the uniform marginal distributions of returns on savings ($u_1$) and equity per member ($u_2$) will be:
\begin{align}
\mathcal{C}_{\theta} (u_1, u_2) & = g^{-1}(g(u_1) + g(u_2)) \\
& = \left(1 + (u_1^{-\theta} - 1) + (u_2^{-\theta} - 1)\right)^{-1/\theta} \\
& = (u_1^{-\theta} + u_2^{-\theta} -1)^{-1/\theta}
\end{align}
with a probability density function,
\begin{equation}
c_{\theta} (u_1, u_2) = \frac{\partial^2}{\partial u_1 \partial u_2} \mathcal{C}_{\theta} = \frac{1 + \theta}{(u_1 u_2) ^{\theta + 1} } (u_1^{-\theta} + u_2^{-\theta} -1)^{-2 - \frac{1}{\theta}}
\end{equation}
and a co-dependence parameter $\theta \in [0,+\infty)$,
\begin{equation}
\theta = \left( -2 \int_0^1 \frac{u^ {-\theta} - 1 }{-\theta u^ {-(\theta + 1)}} \, du
\right)^{-1} - 2.
\end{equation}
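The link between $\theta$ and the generator function can be made explicit through Kendall's $\tau_K$, which for an Archimedean copula with generator $g$ satisfies $\tau_K = 1 + 4 \int_0^1 g(u)/g'(u) \, du$; a short derivation for Clayton's generator $g_{\theta}(u) = u^{-\theta} - 1$, consistent with the equation above:

```latex
\begin{align*}
\frac{g_{\theta}(u)}{g'_{\theta}(u)}
  &= \frac{u^{-\theta} - 1}{-\theta u^{-(\theta + 1)}}
   = \frac{u^{\theta + 1} - u}{\theta}, \\
\int_0^1 \frac{g_{\theta}(u)}{g'_{\theta}(u)} \, du
  &= \frac{1}{\theta}\left(\frac{1}{\theta + 2} - \frac{1}{2}\right)
   = -\frac{1}{2(\theta + 2)}, \\
\tau_K = \frac{\theta}{\theta + 2}
\;\;&\Longrightarrow\;\;
\theta = \left(-2\int_0^1 \frac{g_{\theta}(u)}{g'_{\theta}(u)} \, du\right)^{-1} - 2.
\end{align*}
```

The inversion in the last line recovers $\theta$ from the integral of $g_{\theta}/g'_{\theta}$, since $-2\int_0^1 g_{\theta}/g'_{\theta} \, du = 1/(\theta + 2)$.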
The parameter $\theta$ controls the amount of dependence in $\mathcal{C}_{\theta} (u_1, u_2)$. When $\theta \to +\infty$ the dependency between $u_1$ and $u_2$ approaches comonotonicity,
\begin{equation}
\lim_{\theta \to +\infty} \mathcal{C}_{\theta} (u_1, u_2) = \min (u_1, u_2),
\end{equation}
while in turn when $\theta \to 0$, $u_1$ and $u_2$ become independent:
\begin{equation}
\lim_{\theta \to 0} \mathcal{C}_{\theta} (u_1, u_2) = u_1 u_2.
\end{equation}
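These limiting cases, and the lower-tail dependence that motivates the choice of Clayton's copula, can be checked numerically. The following is a minimal sketch (not part of the paper's MATLAB code); the values of $u_1$, $u_2$ and $\theta$ are illustrative:

```python
def clayton_cdf(u1, u2, theta):
    """Bivariate Clayton copula C_theta(u1, u2) = (u1^-theta + u2^-theta - 1)^(-1/theta)."""
    if theta == 0.0:  # independence limit
        return u1 * u2
    return max(u1 ** -theta + u2 ** -theta - 1.0, 0.0) ** (-1.0 / theta)

u1, u2 = 0.3, 0.7
# theta -> 0 recovers independence: C -> u1 * u2
assert abs(clayton_cdf(u1, u2, 1e-8) - u1 * u2) < 1e-6
# theta -> +infinity approaches comonotonicity: C -> min(u1, u2)
assert abs(clayton_cdf(u1, u2, 200.0) - min(u1, u2)) < 1e-2
# lower-tail dependence: C(q, q) / q -> 2^(-1/theta) as q -> 0,
# so larger theta means stronger clustering of jointly low KPIs
theta = 3.97
q = 1e-6
assert abs(clayton_cdf(q, q, theta) / q - 2.0 ** (-1.0 / theta)) < 1e-3
```

The last assertion illustrates why Clayton's copula fits NF+: its lower-tail dependence coefficient $2^{-1/\theta}$ is strictly positive for any $\theta > 0$, while its upper tail is asymptotically independent.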
In the case of returns on savings and equity per member, $\theta$ is expected to be large, as both financial indicators should show lower-tail co-dependence in NF+.
Figure \ref{fig:swarm} indeed shows that the swarm optimization of $\theta$---using the data of returns on savings and equity per member---leads to a value of $\hat\theta = 3.97$. The estimates of the parameters of the hyperbolas for $\hat\theta$ are equal to,
\begin{align*}
\hat{\bm\psi} & := \left\{ \hat{\psi}_1, \hat{\psi}_2, \hat{\psi}_3, \hat{\psi}_4 \right\} = \left\{ 77.42, 0.87, -10.38, -46.51 \right\}, \\
\hat{\bm\psi}^\perp & := \left\{ \hat{\psi}_1^\perp, \hat{\psi}_2^\perp, \hat{\psi}_3^\perp, \hat{\psi}_4^\perp \right\} = \left\{ 55.92, 0.67, 2.26, -15.43 \right\}.
\end{align*}
Figure \ref{fig:filtering} shows the optimal denoising of the KPIs of NF+ with double-hyperbolic undersampling. The first step discards the values of ROS and EPM outside the lobes of the hyperbola estimated with $\bm\psi$ and inside the lobes of the hyperbola estimated with $\bm{\psi^\perp}$ (Figures \ref{fig:filtering}b and \ref{fig:filtering}d). The co-dependence between the KPIs before denoising is contaminated with a high number of outliers (Figure \ref{fig:filtering}e). After denoising, the co-dependence in the lower and upper tails of the KPIs is kept but noisy elements are discarded (Figure \ref{fig:filtering}f).
Table \ref{tab:results} and Figure \ref{fig:RVM_surfs} show the results of estimating the relevance vector machine with the denoised KPIs (step 2). In terms of continuous factors influencing the benchmarks, the main covariates affecting the financial benchmarks of NF+ are those related to the macroeconomic environment, mainly GDP growth, poverty, inequality and the percentage of rural population in the country where a NF+ group operates (Table \ref{tab:results}). Savings accumulation and loan provision are the main group-level characteristics influencing the financial benchmarks of NF+; this result is expected---because in NF+ the lending channel is the main source of profit generation---and shows the ability of the relevance vector machine to properly detect variables related to financial benchmarks in denoised datasets.
In relation to categorical factors influencing the benchmarks, Figure \ref{fig:RVM_surfs} shows that the probabilistic benchmarks of NF+ are different in rural groups (Figure \ref{fig:RVM_surfs} left) compared to urban groups (Figure \ref{fig:RVM_surfs} right). While both rural and urban groups have a concentration of financial performance in the lower tail of the joint distribution of the KPIs, higher dispersion in the upper tail is observed in rural groups, and hence the isolines of the probabilistic benchmarks are wider for rural groups compared to urban groups.
In the case of urban and peri-urban nano-finance, groups can be classified as successful with a probability higher than 90\% (red contour isoline in Figure \ref{fig:RVM_surfs}b) when the groups have returns higher than 55\% and equity higher than 80 USD per member (Figures \ref{fig:RVM_surfs}f). In rural NF+, however, groups that do not show negative returns and have an equity per member higher than 10 USD are classified as successful with a probability higher than 80\% (Figures \ref{fig:RVM_surfs}c and \ref{fig:RVM_surfs}e).
\renewcommand{\arraystretch}{1.15}
\begin{table}[htbp]
\centering
\caption{Descriptive statistics of the SAVIX. Nanofinance groups in the SAVIX have on average 21 members and 82\% of the members are women. The members show a high commitment to the group meetings: member's attendance is 92\%, and the members that end up leaving the group are only 1.2\% of the total of participants. In macro-economic terms, the GDP growth in the countries where the nanofinance groups operate is on average 4.88\%, and the GDP per capita is on average 1353 USD. The countries where the groups are located also have low levels of literacy (the literacy rate is 56\%), low levels of financial inclusion (the indicator of financial deepening is 33\%), and a high percentage of population living in poverty (40\%) and in rural areas (60\%).}
\begin{tabular}{lrrrr}
\textbf{Variables} & \multicolumn{1}{c}{\textbf{Mean}} & \multicolumn{1}{c}{\textbf{Std. Dev.}} & \multicolumn{1}{c}{\textbf{Min}} & \multicolumn{1}{c}{\textbf{Max}} \\
\midrule
\multicolumn{5}{l}{\textbf{Group-level characteristics of nanofinance+}} \\ [.05in]
Returns on savings$^a$ & 48.63 & 47.14 & 0 & 199.47 \\
Equity per member$^b$ & 40.41 & 40.25 & 0.10 & 269.90 \\
Savings per member$^b$ & 29.15 & 28.71 & 0.06 & 235.79 \\
Fund utilisation rate$^b$ & 57.73 & 34.88 & 0 & 100.00 \\
Number of loans per member & 0.51 & 0.33 & 0 & 1.00 \\
Average loans per member$^b$ & 22.40 & 29.51 & 0 & 186.14 \\
Welfare fund per member$^b$ & 1.32 & 1.67 & 0 & 12.59 \\
Member's attendance$^a$ & 92.32 & 11.16 & 39.29 & 100.00 \\
Drop-out rate$^a$ & 1.17 & 4.32 & 0 & 45.00 \\
Number of members & 21.11 & 6.55 & 5 & 33.50 \\
Women members$^a$ & 81.99 & 23.19 & 0 & 100.00 \\
Accumulated loans per member & 0.51 & 0.33 & 0.00 & 1.75 \\ \midrule
\multicolumn{5}{l}{\textbf{Macro-economic variables}}\\ [.05in]
Uncertainty (inflation deviation)$^a$ & 2.87 & 1.39 & 0.66 & 11.54 \\
Inflation rate$^a$ & 6.68 & 6.88 & -1.01 & 21.87 \\
Age-dependency ratio$^a$ & 87.19 & 12.91 & 51.23 & 111.67 \\
Gini coefficient$^a$ & 45.40 & 8.10 & 32.90 & 63.20 \\
Financial deepening$^a$ & 33.31 & 31.75 & 12.55 & 179.78 \\
Literacy rate$^a$ & 56.24 & 24.18 & 15.46 & 94.37 \\
GDP per capita$^b$ & 1353.04 & 1410.32 & 386.73 & 7575.18 \\
Population density$^a$ & 82.61 & 54.79 & 15.12 & 334.33 \\
Rural population$^a$ & 60.16 & 13.23 & 34.15 & 84.03 \\
Poverty headcount ratio$^a$ & 39.18 & 11.54 & 17.70 & 56.90 \\
GDP growth$^a$ & 4.88 & 1.61 & -1.93 & 10.25 \\
\bottomrule
\multicolumn{5}{l}{\scriptsize{$^a$ Percentage (\%)}}\\[-.05in]
\multicolumn{5}{l}{\scriptsize{$^b$ US dollars (USD)}}
\end{tabular}%
\label{tab:desmicmac}%
\end{table}%
\begin{table}[htbp]
\small
\centering
\caption{Results of estimating the relevance vector machine for $\mathbf{y}$ with the set of covariates $\mathbf{x}$}
\begin{tabular}{crrrrrr}
\textbf{Type} & \multicolumn{1}{c}{\textbf{Covariates ($\mathbf{x}$)}} & \multicolumn{1}{c}{\textbf{AUC$^a$}} & \multicolumn{1}{c}{\textbf{Gini}} & \multicolumn{1}{c}{\textbf{Bacc$^b$}} & \multicolumn{1}{c}{\textbf{Prec$^c$}} & \multicolumn{1}{c}{\textbf{FDR$^d$}} \\
\midrule
\multicolumn{1}{c}{\multirow{9}[2]{*}{
\shortstack[c]{
Micro-level \\ characteristics
}
}} & Savings per member* & 1.0000 & 1.0000 & 0.9908 & 1.0000 & 0.0000 \\
& Fund utilization rate & 0.7096 & 0.4193 & 0.6326 & 0.3723 & 0.6277 \\
& Number of loans per member* & 0.8261 & 0.6522 & 0.7514 & 0.6355 & 0.3645 \\
& Average loans per member* & 0.7852 & 0.5703 & 0.8716 & 0.9955 & 0.0045 \\
& Welfare fund per member* & 0.8035 & 0.6071 & 0.7675 & 0.8110 & 0.1890 \\
& Member's attendance & 0.6067 & 0.2134 & 0.5000 & 0.0000 & 1.0000 \\
& Drop-out rate & 0.5511 & 0.1021 & 0.5149 & 0.0731 & 0.9269 \\
& Women members & 0.6374 & 0.2748 & 0.5155 & 0.0619 & 0.9381 \\
& Accumulated loans per member* & 0.8261 & 0.6522 & 0.7514 & 0.6355 & 0.3645 \\
& Rural location* & 0.7946 & 0.5893 & 0.7946 & 0.8031 & 0.1969 \\
\midrule
\multicolumn{1}{c}{\multirow{11}[1]{*}{
\shortstack[c]{
Macro-economic \\ variables
}
}} & Uncertainty (inflation deviation)* & 0.7620 & 0.5241 & 0.7128 & 0.5073 & 0.4927 \\
& Inflation rate & 0.5860 & 0.1721 & 0.5000 & 0.0000 & 1.0000 \\
& Age-dependency ratio* & 0.7606 & 0.5212 & 0.7836 & 0.8268 & 0.1732 \\
& Inequality (Gini index)* & 0.8393 & 0.6785 & 0.7447 & 0.5534 & 0.4466 \\
& Financial deepening & 0.7369 & 0.4739 & 0.7378 & 0.5816 & 0.4184 \\
& Literacy rate & 0.6774 & 0.3548 & 0.5000 & 0.0000 & 1.0000 \\
& GDP per capita* & 0.7873 & 0.5745 & 0.5011 & 0.1271 & 0.8729 \\
& Population density* & 0.7939 & 0.5878 & 0.7276 & 0.5748 & 0.4252 \\
& Rural population in a country* & 0.8485 & 0.6970 & 0.7532 & 0.5591 & 0.4409 \\
& Poverty headcount ratio* & 0.8487 & 0.6973 & 0.7961 & 0.7030 & 0.2970 \\
& GDP growth* & 0.8516 & 0.7031 & 0.7374 & 0.6614 & 0.3386 \\
\midrule
\multicolumn{1}{c}{\multirow{8}[1]{*}{
\shortstack[c]{
Facilitation \\ mechanisms \\ of development \\ agencies
}
}} & No facilitating agency & 0.5025 & 0.0051 & 0.5000 & 0.0000 & 1.0000 \\
& No donors & 0.5479 & 0.0957 & 0.5000 & 0.0000 & 1.0000 \\
& Group formed by paid agent & 0.6803 & 0.3605 & 0.6803 & 0.6828 & 0.3172 \\
& Group formed by field officer & 0.5147 & 0.0295 & 0.5000 & 0.0000 & 1.0000 \\
& Group formed by unpaid agent & 0.5264 & 0.0529 & 0.5264 & 0.0754 & 0.9246 \\
& Group formed by project-paid agent & 0.5264 & 0.0528 & 0.5264 & 0.0877 & 0.9123 \\
& Graduated groups & 0.5882 & 0.1764 & 0.5000 & 0.0000 & 1.0000 \\
\bottomrule
\multicolumn{7}{l}{\scriptsize{(*) Variables with the best machine-learning indicators}} \\[-.05in]
\multicolumn{7}{l}{\scriptsize{$^a$ AUC: Area under the ROC curve}} \\[-.05in]
\multicolumn{7}{l}{\scriptsize{$^b$ Bacc: Balanced accuracy}} \\[-.05in]
\multicolumn{7}{l}{\scriptsize{$^c$ Prec: Precision}} \\[-.05in]
\multicolumn{7}{l}{\scriptsize{$^d$ FDR: False discovery rate}} \\
\end{tabular}%
\label{tab:results}%
\end{table}%
\begin{figure}[ht]
\centering
\caption{Swarm optimization of $\theta$ in Clayton's Archimedean copula. The copula was estimated with the data of returns on savings and equity per member of nanofinance groups. In the graph, the swarm shows greater dispersion at the start of the iterations, but the cohesion and separation of the flock converge after iteration 15, when the value of the estimate of $\theta$ tends to stabilize.}
\includegraphics[width=\textwidth]{swarm_opt.png}
\label{fig:swarm}
\end{figure}
\begin{figure}[ht]
\centering
\caption{Denoising with double-hyperbolic undersampling. For an optimal filtering of noise, the points outside the lobes of the first hyperbola are discarded in graph (b), and the points inside the lobes of the second hyperbola are discarded in graph (d). Figure (e) shows the relationship between the KPIs before denoising, and figure (f) shows the relation after denoising.}
\includegraphics[width=\textwidth]{hyper_filter.png}
\label{fig:filtering}
\end{figure}
\begin{figure}[ht]
\centering
\caption{Probabilistic benchmarks estimated with the relevance vector machine}
\includegraphics[width=\textwidth]{RVM_surfs.png}
\label{fig:RVM_surfs}
\end{figure}
\FloatBarrier
\section{Conclusion}\label{sec:conclusion}
This study suggested a 2-step approach for calculating probabilistic benchmarks with noisy KPIs. An empirical application to a noisy database of nanofinance+ shows that the methods are able to denoise KPIs, estimate probabilistic benchmarks, and properly identify the continuous and discrete factors influencing the benchmarks.
In the case of NF+ groups with business training, the results indicate that macroeconomic factors and the region where a group is located influence their financial benchmarks. Governments, international donors and development agencies can use the estimated benchmarks for monitoring the performance of NF+ and gain an independent perspective about how well a group/project is performing when compared to other similar groups/projects. In the presence of performance gaps, the benchmarks will be useful to identify opportunities for change and improvement among the groups\footnote{It is estimated that over 100 million people in 10.5 million households participate in nanofinance groups worldwide \citep{greaney2016,burlando-2017}.
Due to the importance of NF+ for financial inclusion and multidimensional poverty reduction, all major international donors and development agencies work with NF+, but these organizations lack benchmarks to evaluate the financial performance of NF+ groups.}.
Future studies can extend the denoising methods to the quadratic surface defined by hyperbolic cylinders. The higher-dimensional hierarchical Archimedean copula proposed by \citet{savu2010} can be applied to approximate the multivariate probability distribution of KPIs denoised with hyperbolic cylinders. The recent developments in orthogonal machine learning---see \textit{inter alia} \citet{oprescu2018orthogonal}, \citet{knaus2018double}, \citet{semenova2018essays} or \citet{kreif2019machine}---can be used to estimate quasi-causal factors influencing the benchmarks, complementing the non-parametric correlational approach of relevance vector machines.
\printbibliography
\end{document}
\section{Introduction}
Benchmarking is the process of analyzing key performance indicators with the aim of creating standards for comparing competing units \citep{bogetoft2010benchmarking}. Probabilistic benchmarks measure the probability of a unit falling into an interval along with the cumulative probability of exceeding a predetermined threshold \citep{wolfe2019diagnosis}. As a management tool, benchmarks make it possible to identify and apply better-documented practices \citep{bogetoft2013performance}.
Benchmarks are widely used in diverse scientific disciplines. Pharmaceutics compare the prices of prescription drugs with benchmarks \citep{gencarelli2005one}. In environmental science, benchmarks set water quality standards \citep{van2019specific} or define thresholds for radiation risk \citep{bates2011environmental}. In finance, interest-rate benchmarks mitigate search frictions by lowering informational asymmetries in the markets \citep{duffie2017benchmarks}.
This study develops a 2-step process for calculating probabilistic benchmarks in noisy datasets. In step 1, double-hyperbolic undersampling filters the noise of key performance indicators (KPIs); in step 2, a relevance vector machine estimates probabilistic benchmarks with the filtered KPIs. Archimedean copulas approximate the joint density of KPIs during the denoising step. Besides estimating probabilistic benchmarks, the methods of step 2 identify the continuous and categorical factors influencing benchmarks.
The 2-step methodology is illustrated with an application to a database of nanofinance+ groups receiving business interventions. In nanofinance, low-income individuals without access to formal financial services get together and start to accumulate their savings into a fund, which they later use to provide themselves with loans and insurance. In nanofinance+ (NF+), development agencies, donors and governments help communities to create NF+ groups for financial inclusion and then the groups become a platform for additional `plus' sustainable development programs---see \citet{gonzales2019b} for details.
The methods proposed in this study complement the state-of-the-art in probabilistic benchmarking of \citet{chakarov2014expectation}, \citet{chiribella2014quantum} or \citet{yang2014certifying}. Along with this methodological contribution, the empirical findings of this document fill the research gap left by economic studies that have focused only on calculating benchmarks for microfinance institutions---see for example \citet{tucker2001} or \citet{reille2002}. In microfinance, benchmarks are used to compare institutions; in nanofinance, benchmarks aim to compare groups. Benchmarks for nanofinance groups make it possible to set performance standards for monitoring and evaluating intervention programs implemented in communities worldwide.
The definition of multivariate probabilistic benchmarks used in the study is described in Section \ref{sec:def}. Section \ref{sec:methods} discusses the methods for estimating multivariate probabilistic benchmarks in noisy datasets. Section \ref{sec:empap} shows the empirical application to the NF+ database. Section \ref{sec:conclusion} concludes. The data and the MATLAB codes for replicating the results of the study are freely available at MathWorks file-exchange (https://nl.mathworks.com/matlabcentral/fileexchange/74398-double-hyperbolic-undersampling-probabilistic-benchmarks).
\section{Multivariate probabilistic benchmarks}\label{sec:def}
Classical benchmarking makes use of fixed inputs to calculate point estimates for classification standards. Probabilistic benchmarking, in contrast, takes into account elements of uncertainty in the inputs and thus generates interval estimates as an output \citep{liedtke1998irreproducible}. For example, probabilistic benchmarks are calculated for quantum information protocols---teleportation and approximate cloning---in \citet{yang2014certifying}; more recently, \citet{lipsky2019accuracy} calculate probabilistic benchmarks for noisy anthropometric measures, and \citet{wolfe2019diagnosis} use probabilistic benchmarks to quantify the uncertainty in fibromyalgia diagnosis.
Proposition 1 below shows the definition of multivariate probabilistic benchmarks used in this study.
\begin{flushleft}\rule{\textwidth}{0.4pt}\end{flushleft}
\noindent\textit{\textbf{Proposition 1: Multivariate probabilistic benchmarks.} Let $\mathbf{y} \in \mathbb{R}^{N \times j}$ be a $N \times j$ matrix of $j$ KPIs ($y_1,y_2,...,y_j$ key performance indicators) for a set $\mathcal{H} = \left\{ \eta_1, \eta_2,..., \eta_N \right\}$ of $N$ comparable units. Given the joint density,
\[
f_{\mathbf{y}} \left(\mathbf{y}\right) := f\left(y_1,y_2,...,y_j\right)
= \frac{\partial^j F\left(y_1,y_2,...,y_j\right)}{\partial y_1 \partial y_2 \cdots \partial y_j},
\]
where $F\left(y_1,y_2,...,y_j\right)$ is a CDF and $f\left(y_1,y_2,...,y_j\right) \geq 0$, the differentiated units $h_{\tau} \subset \mathcal{H}$ will be those for which:
\begin{equation}
1 - \int_{\tau_1}^{\infty} \int_{\tau_2}^{\infty} \cdots \int_{\tau_j}^{\infty} f\left(y_1,y_2,...,y_j\right) d y_1 d y_2 \cdots d y_j,
\label{eq:succ}
\end{equation}
given a threshold $\tau \in \mathbb{R}^j$.} \newline
\rule{\textwidth}{0.4pt}\bigskip
In Proposition 1, the discrimination of the $\eta_1, \eta_2,..., \eta_N$ units in a comparable set $\mathcal{H}$ is based on interval estimates of a multi-dimensional threshold (the benchmark) $\tau$. Proposition 1 sets a probabilistic standard based on the joint multivariate distribution function of the KPIs $y_1,y_2,...,y_j$ in $\mathbf{y}$ used for calculating $\tau$. The isolines---the contour intervals---defined by the benchmarks $\tau$ make it possible to identify the units $h_{\tau} \subset \mathcal{H}$ with a different performance in the unit hypercube.
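The probability in Proposition 1 is straightforward to estimate by Monte Carlo once draws from the joint density are available. The sketch below uses a toy bivariate Gaussian as a stand-in for $f_{\mathbf{y}}$; the means, covariance and thresholds are illustrative assumptions, not the paper's estimates:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the joint density f_y: two correlated KPIs.
n = 200_000
y = rng.multivariate_normal(mean=[40.0, 40.0],
                            cov=[[900.0, 450.0], [450.0, 900.0]], size=n)

tau = np.array([55.0, 60.0])            # hypothetical benchmark tau
exceeds = (y > tau).all(axis=1)         # y_1 > tau_1 and y_2 > tau_2
prob = 1.0 - exceeds.mean()             # Monte Carlo version of the benchmark probability
assert 0.0 <= prob <= 1.0

# Raising the threshold can only shrink the set of differentiated units h_tau.
tau_hi = np.array([80.0, 90.0])
assert (y > tau_hi).all(axis=1).mean() <= exceeds.mean()
```

The second assertion reflects the monotonicity of the survival probability in $\tau$: higher thresholds single out fewer units.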
Proposition 2 below states that the thresholds $\tau$ can be calculated without the need to know the exact form of the joint density $f_{\mathbf{y}} \left(\mathbf{y}\right)$ in Equation \ref{eq:succ}:
\begin{flushleft}\rule{\textwidth}{0.4pt}\end{flushleft}
\noindent\textit{\textbf{Proposition 2: Unit-hypercube approximation.}
Let $\mathcal{C}_{\Theta}:[0,1]^d \mapsto [0,1]$ be a $d$-dimensional multivariate cumulative distribution function with uniform marginal distributions $\mathbf{u} = \left(u_1, u_2, ... , u_d\right)$ and a dependence structure defined by $\Theta$. If $\mathbf{u} \equiv F_{\mathbf{y}} \left( \mathbf{y}\right)$, the joint density of $\mathbf{y}$ needed to calculate $\tau$ can be approximated with the simulation of $\mathcal{C}_{\Theta} \left( \mathbf{u} \right)$ in the unit hypercube:
\[
\mathcal{C}_{\Theta} \left( \mathbf{u} \right)
:=
\mathcal{C}_{\Theta} \left( u_1, u_2,\dots,u_d\right) = \int_{0}^{u_1} \int_{0}^{u_2} \cdots \int_{0}^{u_d} c \left( u_1, u_2, \dots, u_d\right) d u_1 d u_2 \cdots d u_d,
\]
for $\mathcal{C}_{\Theta} \left( \mathbf{u} \right) = 0$ if any $u_v = 0$, $\mathcal{C}_{\Theta} \left( \mathbf{u} \right) = u_v$ if all the other coordinates equal $1$, and $\mathcal{C}_{\Theta}\left(\cdot\right)$ satisfies the non-negativity condition on the volume, i.e. $\mathcal{C}_{\Theta}\left(\cdot\right)$ is $d$-increasing (quasi-monotone) in $[0,1]^d$.
}\newline
\rule{\textwidth}{0.4pt}\bigskip
Proposition 2 is based on Sklar's theorem (\cite{sklar1959fonctions}; \cite{sklar1996random}), which indicates that any multivariate joint distribution can be written in terms of univariate marginal distribution functions and a copula $\mathcal{C}_{\Theta}$ that captures the co-dependence between the variables \citep{DURANTE2013945}.
Archimedean copulas are a family of copul{\ae} that approximate the joint multivariate distribution of KPIs that are not elliptically distributed \citep{naifar2011modelling}. In an Archimedean copula $\mathcal{C}_g$, an additive generator function $g(u)$ models the strength of dependence in arbitrarily high dimensions with only one scalar parameter: $\theta$ \citep{smith2003modelling}. Formally:
\begin{equation}
\mathcal{C}_g (u_1, u_2, ... , u_d) =
\begin{cases}
g^{-1}(g(u_1) + g(u_2) + \cdots + g(u_d)) \text{ if } \sum_{v= 1}^d g(u_v) \leq g(0) \\
0 \text{ otherwise, }
\end{cases}
\end{equation}
with $g(u)$ a generator function that satisfies $g(1) = 0$, $g'(u) < 0$ and $g''(u) > 0$ for all $0 \leq u \leq 1$; hence $\mathcal{C}_{\theta} \equiv \mathcal{C}_g$. In Clayton's Archimedean copula, for example, the generator function is equal to $g_{\theta} (u) = u^{-\theta} -1$ for $\theta > 0$ (\cite{mcneil}; \cite{cherubini2011dynamic}):
\begin{equation}
\mathcal{C}_{\theta} (u_1, u_2, ... , u_d) =
\left(
1 - d + \sum_{v= 1}^d u_v ^{-\theta}
\right)^{-1/\theta}.
\end{equation}
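A short numerical check of the $d$-dimensional Clayton formula against the copula axioms stated in Proposition 2 (a sketch with illustrative values of $\theta$ and $u$, independent of the paper's MATLAB code):

```python
def clayton_d(u, theta):
    """d-dimensional Clayton copula: (1 - d + sum_v u_v^-theta)^(-1/theta)."""
    if any(v == 0.0 for v in u):        # grounded: C = 0 if any margin is 0
        return 0.0
    s = 1.0 - len(u) + sum(v ** -theta for v in u)
    return max(s, 0.0) ** (-1.0 / theta)

theta = 2.0
# Boundary conditions from Proposition 2:
assert clayton_d([0.0, 0.5, 0.9], theta) == 0.0
assert abs(clayton_d([1.0, 1.0, 0.42], theta) - 0.42) < 1e-12  # uniform margins
# d = 2 reduces to the bivariate Clayton formula
u1, u2 = 0.3, 0.7
assert abs(clayton_d([u1, u2], theta)
           - (u1 ** -theta + u2 ** -theta - 1.0) ** (-1.0 / theta)) < 1e-12
```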
\section{Estimation of multivariate probabilistic benchmarks in noisy datasets}\label{sec:methods}
Based on Propositions 1 and 2 above, a 2-step process is suggested to calculate multivariate probabilistic benchmarks in noisy data sets:
\begin{enumerate}
\item In the first step, a swarm algorithm estimates the vector of parameters of a double-hyperbolic noise filter. The optimal estimates of the vector maximize the dependence structure $\theta$ in an Archimedean copula calculated with noisy KPIs. The optimal double-hyperbolic filter that maximizes $\theta$ is used to denoise the KPIs.
\item In the second step, a relevance vector machine is applied to the denoised KPIs in order to calculate multivariate probabilistic benchmarks. Besides estimating isolines of benchmarks, the relevance vector machine makes it possible to identify factors that influence the benchmarks.
\end{enumerate}
\subsection{Step 1: Double-hyperbolic undersampling and swarm optimization}
Let $f_h \left( \bm{\psi}, \mathbf{y} \right)$ be the real $\mathcal{R}$ part---the imaginary part is discarded---of a translated generalized hyperbola of the form \citep{hamilton1998}:
\begin{equation}
f_h \left( \bm{\psi}, \mathbf{y} \right) :=
\mathcal{R}
\left\{
\psi_1
\sqrt{\psi_2 + \frac{\psi_3}{\psi_4 + \mathbf{y}}}
\right\},
\quad \bm{\psi} = \left\{ \psi_1, \psi_2, \psi_3, \psi_4 \right\}.
\label{eq:hyperbola}
\end{equation}
If $f_h^\perp \left( \bm{\psi}^\perp, \mathbf{y} \right)$ is an orthogonal (or quasi-orthogonal) rotation of the translated generalized hyperbola defined by equation \ref{eq:hyperbola}, with rotation parameters $\bm{\psi}^\perp = \left\{ \psi_1^\perp, \psi_2^\perp, \psi_3^\perp, \psi_4^\perp \right\}$, then the region delimited by the lobes of $f_h \left( \bm{\psi}, \mathbf{y} \right)$ and $f_h^\perp \left( \bm{\psi}^\perp, \mathbf{y} \right)$ can be used to filter the noise of the joint distribution of $\mathbf{y}$: elements of $\mathbf{y}$ outside the lobes of $f_h \left( \bm{\psi}, \mathbf{y} \right)$ or inside the lobes of the rotated hyperbola $f_h^\perp \left( \bm{\psi}^\perp, \mathbf{y} \right)$ are discarded.
Let $\mathbf{y}_h \subset \mathbf{y}$ be a vector with the non-discarded elements of $\mathbf{y}$ inside the lobes of $f_h \left( \bm{\psi}, \mathbf{y} \right)$ and outside the lobes of $f_h^\perp \left( \bm{\psi}^\perp, \mathbf{y} \right)$. The vector $\mathbf{y}_h$ is an optimal noise reduction of the original data $\mathbf{y}$ if the values of $\bm{\psi}$ and $\bm{\psi}^\perp$ maximize the dependence structure ($\theta$) of an Archimedean copula estimated with \textit{samples} of $\mathbf{y}$,
\begin{equation}
\max_{
\substack{
\bm{\psi},\, \bm{\psi}^\perp \in \mathbb{R}^4 \\ \mathbf{y}_h \subset \mathbf{y}
}
}
\left( -2 \int_0^1 \frac{u^ {-\theta} - 1 }{-\theta u^ {-(\theta + 1)}} \, du
\right)^{-1} - 2
\label{eq:maxtheta}
\end{equation}
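The filtering step itself can be sketched in a few lines. The code below is a simplified stand-in, not the paper's MATLAB implementation: the orthogonal rotation is replaced by a second, independently parameterized hyperbola, the "inside the lobes" test is a simple comparison of $|y_2|$ against the branch height, and all parameter values are illustrative:

```python
import numpy as np

def hyperbola(y, psi):
    """Real part of psi1 * sqrt(psi2 + psi3 / (psi4 + y)); complex values
    (negative radicand) are discarded by returning NaN."""
    p1, p2, p3, p4 = psi
    val = p2 + p3 / (p4 + y)
    out = np.full_like(y, np.nan, dtype=float)
    ok = val >= 0
    out[ok] = p1 * np.sqrt(val[ok])
    return out

def inside_lobes(y1, y2, psi):
    """A point is 'inside the lobes' when |y2| lies below the branch at y1."""
    f = hyperbola(y1, psi)
    return ~np.isnan(f) & (np.abs(y2) <= np.abs(f))

rng = np.random.default_rng(1)
y1 = rng.uniform(0.1, 10.0, 1000)
y2 = rng.uniform(-10.0, 10.0, 1000)

psi = (2.0, 1.0, 5.0, 0.5)        # illustrative parameters, not the estimates
psi_perp = (1.0, 0.5, 1.0, 0.1)   # stand-in for the rotated hyperbola

# Keep points inside the outer lobes and outside the inner (rotated) lobes.
keep = inside_lobes(y1, y2, psi) & ~inside_lobes(y1, y2, psi_perp)
y1_h, y2_h = y1[keep], y2[keep]   # the denoised sample y_h
assert keep.any() and (~keep).any()
```

In the full method the retained sample $\mathbf{y}_h$ would then be scored by the copula dependence $\theta$, and the swarm algorithm below searches over $\bm{\psi}$, $\bm{\psi}^\perp$.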
Box 1 below shows a swarm algorithm proposed to estimate the optimal values of $\bm{\psi}$ and $\bm{\psi}^\perp$ that maximize $\theta$. The algorithm maximizes the co-dependence in the Archimedean copula by taking samples of the KPIs contained in $\mathbf{y}$. The structure of the swarm algorithm---separation, alignment, cohesion---is inspired by the BOIDS algorithm of artificial life described in \citet{reynolds1987flocks}.
\smallskip
\begin{tcolorbox}[arc=0pt,boxrule=0pt,bottomrule=1pt,toprule=1pt]
\textbf{Box 1. Pseudo-code of the swarm algorithm}
\tcblower
\begin{algorithm}[H]
\KwData{$\mathbf{y} = \left\{y_1,y_2,...,y_j\right\}$}
\KwResult{$\bm{\psi}$, $\bm{\psi}^\perp$}
initialization\;
$\delta, M, \theta_0, p_0, p^\perp_0$, $\zeta$, $\zeta^*$ \;
\While{$m \in \mathbb{Z}_+$}{
$w_\delta = \delta \frac{|p|}{\norm{p} }$, $\quad w_\delta^\perp = \delta \frac{|p^\perp|}{\norm{p^\perp} } $ \;
\For{ $m \leftarrow 1$}
{
random exploration of hyperbola parameters\;
$p_m = p_{m - 1} + w_\delta \epsilon$, $\quad p_m^\perp = p^\perp_{m - 1} + w_\delta^\perp\epsilon$, $\quad\epsilon \sim (0,1)$ \;
hyperbolic undersampling\;
$ y_h = f_h ( p_m, \mathbf{y}) $, $\quad y_h^\perp = f_h^\perp (p_m^\perp , \mathbf{y})$, $\quad \mathbf{y}_h = \left\{ y_h, y_h^\perp \right\}$\;
copula dependence estimated with filtered $m$-samples\;
$ \hat\theta_m = \mathcal{C}_\theta (\mathbf{y}_h) $ \;
}
$\hat\theta^* = \max \left\{ \hat\theta_i \right\}_{i= 1}^m$ (optimal dependence)\;
$p^* = p (\hat\theta^*)$, $\quad p^{\perp *} = p^\perp (\hat\theta^*)$ (optimal hyperbola parameters)\;
cohesion = $\frac{1}{2} \left( \norm{ p_m - p_m^* } + \norm{ p_m^\perp - p_m^{\perp *} } \right)$\;
separation = $\frac{1}{2} \left( \norm{ p_m - \overline{p}_m^* } + \norm{ p_m^\perp - \overline{p}_m^{\perp *} } \right)$\;
\eIf{$\hat\theta^* > \hat\theta_{m-1}$}{
$\hat\theta_m = \hat\theta^*$ \;
$p_m = p^*$, $\quad p_m^\perp = p^{\perp *}$ \;
alignment\;
$\delta_m = \delta_{m-1} \left( \zeta^* \right)$ \;
$m = M - 1$ \;
}{
$\delta_m = \delta_{m-1} \left( \zeta \right)$ \;
}
}
\end{algorithm}
\end{tcolorbox}
In the swarm algorithm, $\delta, M, \theta_0, p_0, p^\perp_0$, $\zeta$, $\zeta^*$ are initialization parameters. The parameter $\delta \in \mathbb{R}_+$ controls the initial dispersion of the particles, $M$ is the initial number of particles used to explore possible values of $\theta$; $\theta_0 = 0$ is the starting value of $\theta_{m}$; $p_0, p^\perp_0$ are the starting values of $p_m, p^\perp_m$; $\zeta$, $\zeta^*$ are parameters that control the degree of exploration in the swarm algorithm. Exploitation ($\delta$) and exploration parameters ($\zeta$, $\zeta^*$) are typical of metaheuristic algorithms in general and swarm intelligence in particular---see for example \citet{tilahun2019balancing}.
The algorithm described in Box 1 explores optimal values of the hyperbola parameters $p_m, p^\perp_m$ during $m$-iterations, based on two behavioral rules: cohesion and separation. Swarm cohesion depends on the Euclidean norm between $p_m, p^\perp_m$ and the optimal values $p_m^*, p^{\perp*}_m$ calculated with $\theta^*$. Swarm separation is a function of the norm between $p_m, p^\perp_m$ and the centroids $\overline{p}_m^*, \overline{p}_m^{\perp *}$. Cohesion keeps the swarm from including extreme outliers---and thus avoids a biased estimation of $\theta$---while separation guarantees that the swarm properly explores all the potential values that can maximize $\theta$ for an optimal noise filtering. Alignment is achieved by gradually reducing exploration and exploitation with $\zeta^*$ ($0 < \zeta^* < \zeta \leq 1$).
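The search loop of Box 1 can be condensed into a short sketch. This simplified version keeps only the core mechanics (a flock of jittered candidates around the incumbent, with the dispersion $\delta$ shrinking faster after improvements); the cohesion/separation bookkeeping and the copula scoring are abstracted into a generic `score` function, so everything here is an illustrative assumption rather than the authors' MATLAB code:

```python
import numpy as np

def swarm_maximize(score, p0, n_particles=20, n_iter=30, delta=1.0,
                   zeta=0.95, zeta_star=0.80, seed=0):
    """Simplified sketch of Box 1: candidates jitter around the incumbent;
    delta shrinks by zeta_star after an improvement (alignment) and by zeta
    otherwise (continued, slightly tighter exploration)."""
    rng = np.random.default_rng(seed)
    best_p = np.asarray(p0, dtype=float)
    best_s = score(best_p)
    for _ in range(n_iter):
        # random exploration of parameters, centered on the incumbent (cohesion)
        cand = best_p + delta * rng.standard_normal((n_particles, len(best_p)))
        scores = np.array([score(c) for c in cand])
        m = scores.argmax()
        if scores[m] > best_s:
            best_p, best_s = cand[m], scores[m]
            delta *= zeta_star
        else:
            delta *= zeta
    return best_p, best_s

# Sanity check on a known objective; in the paper, `score` would be the copula
# dependence theta of the hyperbolically filtered sample y_h.
p_hat, s_hat = swarm_maximize(lambda p: -np.sum((p - 3.0) ** 2), [0.0, 0.0])
assert s_hat <= 0.0 and s_hat > -10.0   # improved well beyond score(p0) = -18
```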
\subsection{Step 2: Relevance vector machines}
Traditional methods of supervised learning---such as support vector machines---produce point estimates of benchmarks as an output. Relevance vector machines, in contrast, estimate the conditional distribution of multivariate benchmarks in a fully probabilistic framework. Compared to support vector machines, relevance vector machines capture uncertainty and make use of a small number of kernel functions to produce posterior probabilities of membership classification.
Let $\left\{ \mathbf{x}_i \right\}_{i=1}^k$ be a $k$-set of covariates influencing the KPIs contained in $\mathbf{y}$. The importance of each covariate is defined by a weight vector $\mathbf{w} = (w_0,\dots,w_k)$. In a linear approach, $\mathbf{y} = \mathbf{w}^\top\mathbf{x}$. In the presence of a non-linear relationship between $\mathbf{y}$ and $\mathbf{x}$, a nonlinear mapping $\mathbf{x} \to \phi(\mathbf{x})$ provides the basis functions for $\mathbf{y} = \mathbf{w}^\top \phi(\mathbf{x})$.
Given an additive noise $\epsilon_k$, the benchmark targets $\mathbf{t}$ will be,
\begin{equation}
\mathbf{t} = \mathbf{w}^\top \phi(\mathbf{x}) + \epsilon_k,
\label{eq:targets}
\end{equation}
where $\epsilon_k$ are independent samples from a mean-zero Gaussian noise process with variance $\sigma^2$. \citet{tipping2000sparse} and
\citet{tipping2001sparse} offer a sparse Bayesian learning approach to estimate $\mathbf{w}$ in Equation \ref{eq:targets} based on the likelihood of the complete data set,
\[
p(\mathbf{t} | \mathbf{w}, \sigma^2 ) = (2\pi \sigma^2)^{-k/2} \exp \left\{ -\frac{1}{2\sigma^2} \| \mathbf{t} - \Phi \mathbf{w} \|^2
\right\},
\]
where $\Phi$ is a $k \times (k + 1)$ design matrix $\Phi = \left[ \phi(\mathbf{x}_1),\phi(\mathbf{x}_2),\dots, \phi(\mathbf{x}_k) \right]^\top$ and $\phi(\mathbf{x}_i) = \left[1, \mathcal{K}(\mathbf{x}_i,\mathbf{x}_1),\mathcal{K}(\mathbf{x}_i,\mathbf{x}_2),\dots, \mathcal{K}(\mathbf{x}_i,\mathbf{x}_k) \right]^\top$ for a kernel function $\mathcal{K}(\cdot,\cdot)$. With a zero-mean Gaussian prior for $\mathbf{w}$,
\[
p(\mathbf{w} | \bm{\alpha}) = \prod_{i = 0}^k \mathcal{G} ( w_i | 0,\alpha_i^{-1}),
\]
$\bm{\alpha}$ is a vector of $k+1$ hyperparameters, and the posterior distribution over the weights is:
\begin{align*}
p(\mathbf{w} | \mathbf{t}, \bm{\alpha}, \sigma^2) & = \frac{
p(\mathbf{t} | \mathbf{w}, \sigma^2) p(\mathbf{w} | \bm{\alpha})
}{p(\mathbf{t} | \bm{\alpha}, \sigma^2)}, \\
& = (2\pi)^{-(k+1)/2} |\mathbf{\Sigma}|^{-1/2} \exp \left\{ -\frac{1}{2}
(\mathbf{w} - \bm{\mu})^\top
\mathbf{\Sigma}^{-1}
(\mathbf{w} - \bm{\mu})
\right\},
\end{align*}
where $\mathbf{\Sigma} = (\sigma^{-2} \mathbf{\Phi}^\top \mathbf{\Phi} + \mathbf{A})^{-1}$, $\bm{\mu} = \sigma^{-2} \mathbf{\Sigma} \mathbf{\Phi}^\top \mathbf{t}$ and
$\mathbf{A} = \text{diag}(\alpha_0,\alpha_1,\dots,\alpha_k)$. Updating methods for $\alpha_i$ are described in \citet{barber2012}. The complete specification of the hierarchical priors---based on the automatic relevance determination of \citet{mackay1996bayesian} and \citet{neal2012bayesian}---can be found in \citet{tipping2001sparse}.
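The posterior formulas above reduce to two matrix expressions. The following is a minimal numerical sketch (the Gaussian kernel and the toy data are illustrative choices of ours, not the paper's specification):

```python
import numpy as np

def design_matrix(X, gamma=10.0):
    """Design matrix Phi: a bias column plus Gaussian (RBF) kernels
    centred on the data points (an illustrative kernel choice)."""
    K = np.exp(-gamma * (X[:, None] - X[None, :]) ** 2)
    return np.hstack([np.ones((len(X), 1)), K])

def rvm_posterior(Phi, t, alpha, sigma2):
    """Posterior over the weights:
    Sigma = (Phi^T Phi / sigma^2 + A)^{-1},  mu = Sigma Phi^T t / sigma^2,
    with A = diag(alpha_0, ..., alpha_k)."""
    Sigma = np.linalg.inv(Phi.T @ Phi / sigma2 + np.diag(alpha))
    mu = Sigma @ Phi.T @ t / sigma2
    return mu, Sigma
```

Driving one hyperparameter $\alpha_i$ to a huge value pins the posterior of the corresponding weight at zero, which is the pruning mechanism behind the sparsity discussed next.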
The assignment of an individual hyperparameter $\alpha_i$ to each weight $w_i$ makes it possible to achieve sparsity in the relevance vector machine. As the posterior distributions of many of the weights are peaked around zero, non-zero weights are associated only with `relevant' vectors, i.e. with the most relevant influencing factors of the probabilistic benchmarks estimated with the denoised KPIs.
\section{Empirical application: probabilistic benchmarks in nanofinance+}\label{sec:empap}
This section illustrates the methods described in Section \ref{sec:methods} with an application to a database of 7830 nanofinance+ groups receiving entrepreneurship and business training in 14 countries: Benin, Burkina Faso, Ethiopia, Ghana, Malawi, Mozambique, Niger, Sierra Leone, South Africa, Sri Lanka, Tanzania, Togo, Uganda and Zambia. Almost all of the groups in the database work with a development agency (94\%), and 43\% of the groups are located in rural regions.
Table \ref{tab:desmicmac} shows descriptive statistics of group-level characteristics and the macro-economic environment of the countries where the groups operate. On average, each member of NF+ contributes around 29 USD of savings to the common fund and receives on average a loan of 22 USD. Despite the low values of savings and loans, returns on savings in the groups are on average 47\%, whereas the equity per member is on average equal to 40 USD (Table \ref{tab:desmicmac}).
Returns on savings ($y_1$) and equity per member ($y_2$) are the KPIs used for calculating the benchmarks of NF+ in the empirical application. Hence, $j =2$, $\mathbf{y} = [y_1 \;\; y_2]$, and the joint distribution in Proposition 1 simplifies to,
\begin{equation}
f_{y_1,y_2} \left( y_1, y_2\right) = f_{y_1|y_2} \left( y_1|y_2\right) f_{y_2} \left( y_2\right) = f_{y_2|y_1} \left( y_2|y_1\right) f_{y_1} \left( y_1\right).
\label{eq:joint2}
\end{equation}
Successful units---NF+ groups with a higher financial performance---will be those with KPIs delimited by the isolines of the threshold $\tau$,
\[
1 - \int_{\tau_1}^{\infty} \int_{\tau_2}^{\infty} f_{y_1,y_2} \left( y_1, y_2\right) d y_1 d y_2,
\]
for a probabilistic benchmark $\tau = \left(\tau_1, \tau_2\right) \in \mathbb{R}^2$.
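The survival mass above a benchmark pair $(\tau_1, \tau_2)$ can be approximated with a simple grid quadrature. In the sketch below the bivariate density is a stand-in (independent standard normals), not the copula density of the paper; the truncation bound and grid size are arbitrary:

```python
import math

def success_probability(f_joint, tau1, tau2, hi=10.0, n=400):
    """Approximate P(y1 > tau1, y2 > tau2), i.e. the mass above the
    benchmark pair, by a midpoint rule on an n-by-n grid truncated at hi."""
    h1 = (hi - tau1) / n
    h2 = (hi - tau2) / n
    total = 0.0
    for i in range(n):
        y1 = tau1 + (i + 0.5) * h1
        for j in range(n):
            y2 = tau2 + (j + 0.5) * h2
            total += f_joint(y1, y2) * h1 * h2
    return total

def std_normal_pdf2(y1, y2):
    """Independent standard normals, used only as a stand-in density."""
    return math.exp(-0.5 * (y1 * y1 + y2 * y2)) / (2.0 * math.pi)
```

For the stand-in density the mass above the origin is exactly $1/4$, which gives a quick sanity check of the quadrature.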
Following Proposition 2, the joint density of the KPIs (equation \ref{eq:joint2}) is approximated with a bivariate Archimedean copula:
\begin{equation}
\mathcal{C}_g (u_1, u_2) =
\begin{cases}
g^{-1}(g(u_1) + g(u_2)) \text{ if } g(u_1) + g(u_2) \leq g(0) \\
0 \text{ otherwise. }
\end{cases}
\end{equation}
Clayton's Archimedean copula is particularly suitable to model the dynamics of nanofinance+. Clayton's copula has greater dependence in the lower tail compared to the upper tail. In the case of NF+, greater lower tail dependence is expected because groups with low equity will have zero or negative returns, while in contrast there is more dispersion in the indicators of groups with higher performance---i.e. some groups show higher equity but low levels of returns due to lower repayment rates, while groups with low equity may have higher returns due to the higher interest rates charged for their loans.
A bivariate Clayton's Archimedean copula for the uniform marginal distributions of returns on savings ($u_1$) and equity per member ($u_2$) will be:
\begin{align}
\mathcal{C}_{\theta} (u_1, u_2) & = g^{-1}(g(u_1) + g(u_2)) \\
& = (1 + u_1^{-\theta} - 1 + u_2^{-\theta} -1)^{-1/\theta} \\
& = (u_1^{-\theta} + u_2^{-\theta} -1)^{-1/\theta}
\end{align}
with a probability density function,
\begin{equation}
c_{\theta} (u_1, u_2) = \frac{\partial^2}{\partial u_1 \partial u_2} \mathcal{C}_{\theta} = \frac{1 + \theta}{(u_1 u_2) ^{\theta + 1} } (u_1^{-\theta} + u_2^{-\theta} -1)^{-2 - \frac{1}{\theta}}
\end{equation}
and a co-dependence parameter $\theta \in [0,+\infty)$,
\begin{equation}
\theta = \left( -2 \int_0^1 \frac{u^{-\theta} - 1}{-\theta u^{-(\theta + 1)}}\, du
\right)^{-1} - 2.
\end{equation}
The parameter $\theta$ controls the amount of dependence in $\mathcal{C}_{\theta} (u_1, u_2)$. When $\theta \to +\infty$ the dependence between $u_1$ and $u_2$ approaches comonotonicity,
\begin{equation}
\lim_{\theta \to +\infty} \mathcal{C}_{\theta} (u_1, u_2) = \min (u_1, u_2),
\end{equation}
while in turn when $\theta \to 0$, $u_1$ and $u_2$ become independent:
\begin{equation}
\lim_{\theta \to 0} \mathcal{C}_{\theta} (u_1, u_2) = u_1 u_2.
\end{equation}
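The copula, its density, and the two limiting regimes above can be checked numerically by transcribing the formulas directly:

```python
def clayton_cdf(u1, u2, theta):
    """Clayton copula C_theta(u1, u2) = (u1^-theta + u2^-theta - 1)^(-1/theta)."""
    return (u1 ** (-theta) + u2 ** (-theta) - 1.0) ** (-1.0 / theta)

def clayton_pdf(u1, u2, theta):
    """Density c_theta = (1 + theta) (u1 u2)^-(theta + 1)
    * (u1^-theta + u2^-theta - 1)^(-2 - 1/theta)."""
    s = u1 ** (-theta) + u2 ** (-theta) - 1.0
    return (1.0 + theta) * (u1 * u2) ** (-(theta + 1.0)) * s ** (-2.0 - 1.0 / theta)
```

A finite-difference mixed partial of `clayton_cdf` reproduces `clayton_pdf`, and evaluating the copula at large and near-zero $\theta$ recovers the comonotonic and independence limits.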
In the case of returns on savings and equity per member, it is expected that $\theta$ takes a large positive value, as both financial indicators should show lower tail co-dependence in NF+.
Figure \ref{fig:swarm} indeed shows that the swarm optimization of $\theta$---using the data of returns on savings and equity per member---leads to a value of $\hat\theta = 3.97$. The estimates of the parameters of the hyperbolas for $\hat\theta$ are equal to,
\begin{align*}
\hat{\bm\psi} & := \left\{ \hat{\psi}_1, \hat{\psi}_2, \hat{\psi}_3, \hat{\psi}_4 \right\} = \left\{ 77.42, 0.87, -10.38, -46.51 \right\}, \\
\hat{\bm\psi}^\perp & := \left\{ \hat{\psi}_1^\perp, \hat{\psi}_2^\perp, \hat{\psi}_3^\perp, \hat{\psi}_4^\perp \right\} = \left\{ 55.92, 0.67, 2.26, -15.43 \right\}.
\end{align*}
Figure \ref{fig:filtering} shows the optimal denoising of the KPIs of NF+ with double-hyperbolic undersampling. The first step discards the values of ROS and EPM outside the lobes of the hyperbola estimated with $\bm\psi$ and inside the lobes of the hyperbola estimated with $\bm{\psi^\perp}$ (Figures \ref{fig:filtering}b and \ref{fig:filtering}d). The co-dependence between the KPIs before denoising is contaminated with a high number of outliers (Figure \ref{fig:filtering}e). After denoising, the co-dependence in the lower and upper tails of the KPIs is kept but noisy elements are discarded (Figure \ref{fig:filtering}f).
Table \ref{tab:results} and Figure \ref{fig:RVM_surfs} show the results of estimating the relevance vector machine with the denoised KPIs (step 2). In terms of continuous factors influencing the benchmarks, the main covariates affecting the financial benchmarks of NF+ are those related to the macroeconomic environment, mainly GDP growth, poverty, inequality and the percentage of rural population in the country where a NF+ group operates (Table \ref{tab:results}). Savings accumulation and loan provision are the main group-level characteristics influencing the financial benchmarks of NF+; this result is expected---because in NF+ the lending channel is the main source of profit generation---and shows the ability of the relevance vector machine to properly detect variables related to financial benchmarks in denoised datasets.
In relation to categorical factors influencing the benchmarks, Figure \ref{fig:RVM_surfs} shows that the probabilistic benchmarks of NF+ are different in rural groups (Figure \ref{fig:RVM_surfs} left) compared to urban groups (Figure \ref{fig:RVM_surfs} right). While both rural and urban groups have a concentration of financial performance in the lower tail of the joint distribution of the KPIs, higher dispersion in the upper tail is observed in rural groups, and hence the isolines of the probabilistic benchmarks are wider for rural groups compared to urban groups.
In the case of urban and peri-urban nano-finance, groups can be classified as successful with a probability higher than 90\% (red contour isoline in Figure \ref{fig:RVM_surfs}b) when the groups have returns higher than 55\% and equity higher than 80 USD per member (Figures \ref{fig:RVM_surfs}f). In rural NF+, however, groups that do not show negative returns and have an equity per member higher than 10 USD are classified as successful with a probability higher than 80\% (Figures \ref{fig:RVM_surfs}c and \ref{fig:RVM_surfs}e).
\section{Conclusion}\label{sec:conclusion}
This study suggested a 2-step approach for calculating probabilistic benchmarks with noisy KPIs. An empirical application to a noisy database of nanofinance+ shows that the methods are able to denoise KPIs, estimate probabilistic benchmarks, and properly identify the continuous and discrete factors influencing the benchmarks.
In the case of NF+ groups with business training, the results indicate that macroeconomic factors and the region where a group is located influence their financial benchmarks. Governments, international donors and development agencies can use the estimated benchmarks for monitoring the performance of NF+ and gain an independent perspective about how well a group/project is performing when compared to other similar groups/projects. In the presence of performance gaps, the benchmarks will be useful to identify opportunities for change and improvement among the groups\footnote{It is estimated that over 100 million people in 10.5 million households participate in nanofinance groups worldwide \citep{greaney2016,burlando-2017}.
Due to the importance of NF+ for financial inclusion and multidimensional poverty reduction, all major international donors and development agencies work with NF+, but these organizations lack benchmarks to evaluate the financial performance of NF+ groups.}.
Future studies can extend the denoising methods to the quadratic surface defined by hyperbolic cylinders. The higher-dimensional hierarchical Archimedean copula proposed by \citet{savu2010} can be applied to approximate the multivariate probability distribution of KPIs denoised with hyperbolic cylinders. The recent developments in orthogonal machine learning---see \textit{inter alia} \citet{oprescu2018orthogonal}, \citet{knaus2018double}, \citet{semenova2018essays} or \citet{kreif2019machine}---can be used to estimate quasi-causal factors influencing the benchmarks, complementing the non-parametric correlational approach of relevance vector machines.
\end{document}
\section{Introduction}\label{Introduction}
From recent observations and theoretical studies, it is believed that the first stars known as population~III~(Pop~III) stars played essential roles in the history of the cosmological structure formation.
As the first luminous objects in the Universe, they formed around a few hundred million years after the big bang~(the redshift $z \sim 10$--$30$)~\cite{2002ApJ...564...23B}.
After their birth, Pop III stars contributed to the ionization and heating of the surrounding intergalactic medium~(IGM) gas~\cite{2006ApJ...639..621A,2007ApJ...665...85J} and had a significant impact on the first galaxy formation~\cite{2012ApJ...745...50W,2013RvMP...85..809K}.
They could also trigger the formation of supermassive black holes~\cite{Venemans_2013,Wu_2015,2014AJ....148...14B}.
However, despite their importance, the detailed nature of Pop III stars is still unknown.
Various observational approaches are demanded to obtain further information about Pop III stars.
Although Pop III stars are luminous and massive, $m \gtrsim 10 ~M_\odot$~\cite{Hirano_2015,Susa_Hasegawa_2014}, compared with typical present-day stars, it is difficult to observe them directly.
However, recent studies pointed out that Pop III stars with a mass between $130M_{\odot}$ and $270M_{\odot}$ end with pair-instability supernovae~(PISNe), which are
roughly $100$ times more powerful than typical Type Ia or Type II SNe~\cite{Herger_Woosley_star_nucleosynthetic,Umeda_2002}.
Furthermore, cosmological simulations~\cite{Hirano_2015,Susa_Hasegawa_2014} also predict such relatively massive Pop III stars, and therefore their PISNe would not be rare.
Hence, it may be possible to probe PISNe through cosmological and astrophysical measurements.
One way to obtain such a probe is next-generation near-infrared observations, such as the James Webb Space Telescope\footnote{https://www.jwst.nasa.gov/} and the Nancy Grace Roman Space Telescope\footnote{https://roman.gsfc.nasa.gov/}; the redshifted ultraviolet emission from PISNe at high redshifts is a good target for them.
So far, many theoretical works have examined the detectability of such PISNe with these observations~(e.g.~Refs.~\cite{2005ApJ...633.1031S,2011ApJ...734..102K,Whalen_2012}).
Besides near-infrared observations,
it is also suggested that the sampling of metal-poor stars in the Milky Way
can provide the limit on the PISNe rate~\cite{2017MNRAS.465..926D}.
Additionally, Ref.~\cite{Peng_Oh_2003} studied the effect of
PISNe in high redshifts on the temperature anisotropy
of the cosmic microwave background~(CMB).
Since the gas inside an SN remnant~(SNR) is a hot ionized plasma,
CMB photons passing through the SNR suffer the inverse-Compton scattering.
This is the thermal Sunyaev-Zel'dovich~(tSZ) effect of PISNe, creating CMB temperature anisotropy on small scales.
Although the anisotropy amplitude depends on the model of Pop III stars and PISNe,
they showed that the tSZ temperature anisotropy due to PISNe could be subdominant to the one from galaxy clusters.
This work investigates the effect of high-redshift PISNe on the global ionization fraction with Planck polarization data.
The gas inside the SNRs of PISNe is compressed and fully ionized.
If many PISNe occur, the CMB photons suffer more scattering, and the E-mode angular power spectrum of the CMB traces it.
Using the Markov chain Monte Carlo~(MCMC) method with the Planck 2018 polarization data, we constrain the number of PISN events.
We then show that the constraints lead us to further astrophysical information about Pop III stars.
The rest of this paper is organized as follows.
In Sec.~II, we describe the time evolution of the SNR shock shell.
Accordingly,
we show the relevant time scale for this work.
In Sec.~III, introducing the effect of PISNe on the global ionization fraction,
we explain our reionization model.
After that, we present the equation for computing the number density of PISNe with the model parameters.
In Sec.~IV, we explain the MCMC methods used in this work and show the resulting constraint. Subsequently, we discuss the constraint in comparison with cosmological simulations of Pop III stars in Sec.~V.
Finally, we summarize in Sec.~VI.
Throughout our paper,
we take the flat $\Lambda$CDM model with the Planck best fit parameters~\cite{Planck2018_cospara}:
$(\Omega_{\rm m},\Omega_{\rm b},h,n_{\rm{s}},\sigma_{8})$=$(0.32,0.049,0.67,0.97,0.81)$.
\section{The properties of Supernova remnants of Pop III stars}
\label{section2}
Since Pop III stars are massive, $m \gtrsim 10 ~M_\odot$~\cite{Hirano_2015,Susa_Hasegawa_2014}, it is theoretically predicted that
Pop III stars cause SNe at the final stage of their lives, about 1~$\rm{Myr}$ after their birth.
In addition, recent studies show that Pop III stars with mass between $130M_{\odot}$ and $270M_{\odot}$ end with super energetic SNe, called PISNe, which are
roughly $100$ times more powerful than typical Type Ia or Type II SNe~\cite{Herger_Woosley_star_nucleosynthetic,Umeda_2002}.
Once supernovae occur, the supernova remnants~(SNRs) expand with a shock wave. In this section, we describe the time evolution of a general SNR with an analytical model.
After the SN explosion occurs,
a certain mass is ejected into a surrounding gas with supersonic velocity.
The ejecta sweeps up the surrounding gas, creating the expanding shock waves.
This is a trigger to form the SNR.
The SNR expands outwards nearly spherically.
The evolution of the SNR has mainly three phases~\cite{Reynolds_2017}. The first phase is called the free-expansion phase.
In this initial phase,
the swept-up mass by the SNR is negligible compared with the ejected mass. Therefore, the evolution of the SNR in this phase is determined by only the initial energy and the ejected mass.
The SNR evolution enters the second phase, the adiabatic phase, when
the mass of the swept-up surrounding gas is comparable with
the initial ejected mass.
The swept-up surrounding gas
is compressed and heated by the shock
and
forms a shell structure.
The evolution in this phase
is well described by
the Sedov-Taylor self-similar solution.
As the SNR evolves, the velocity of the SNR decreases, and the resultant expansion time scale of the SNR becomes long.
Finally,
since the expansion time scale will be longer than the cooling time scale,
the radiative cooling is not negligible in the evolution of the SNR.
This third phase is called
the momentum conserving phase.
The thermal energy of the SNR is lost by the radiative cooling, and the expansion of the SNR simply follows momentum conservation.
To evaluate the impact of the SNR as the cosmological ionization photon source,
we are interested in the second phase, the adiabatic phase.
This is because
the first phase has a very short duration and,
in the third phase,
most of the energy is transferred
to the CMB through the inverse-Compton scattering.
As mentioned above, the evolution of the SNR shocked shell in the adiabatic phase is well described by the Sedov-Taylor self-similar solution.
In this solution, the radius evolution of the shocked shell can be written as the function of the SN explosion energy $E_{\rm{SN}}$:
\begin{equation}\label{rsn}
R_{\rm{SN}}(t)=2.0~[\mathrm{kpc}] \left[
\left(\frac{t}{10^{7} \mathrm{yr}}\right)^{2 }
\left(\frac{E_{\mathrm{SN}}}{10^{46} \mathrm{J}}\right)\left(\frac{10^{3} \mathrm{m}^{-3}}{n_{\rm{g}}}\right)
\right]^{\frac{1}{5}},
\end{equation}
where $t$ represents the time after the SN explosion, and $n_{\rm{g}}$ is the
number density of the hydrogen atom in the outer gas of the shocked shell.
The SNR first propagates in the denser gas of the host dark matter halo and subsequently outward in the IGM.
In this work, we neglect the effect of the overdensity in a halo and set $n_{\mathrm{g}}\approx n_{\mathrm{b,IGM}}$, where $n_{\mathrm{b,IGM}}$ is the number density of baryons in the IGM.
Although the SNR can expand beyond the virial radius,
the high density gas in a halo reduces the energy of the SNR propagating in the IGM and decreases the radius given in Eq.~\eqref{rsn}.
In order to evaluate such an overdensity effect, one needs to perform a numerical calculation including the density profile in which the SNR propagates.
In the limit of a strong shock,
the number density in the shell,~$n_{\rm SN}$, is
related to the surrounding one~$n_{\mathrm{g}}$ with
the adiabatic index $\gamma$, $n_{\rm{SN}}=(\gamma+1)/(\gamma-1)n_{\mathrm{g}}$.
Furthermore, the thickness of the shocked shell,~$\Delta R_{\rm SN}$, is
obtained from the mass conservation law as
$\Delta R_{\rm{SN}}(t)=(\gamma-1)/(3(\gamma+1))R_{\rm{SN}}(t)$.
Here, we neglect the density profile in the shock shell.
For $\gamma$, we adopt a monoatomic gas case, $\gamma=5/3$~\cite{Barkana_2001}.
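The Sedov-Taylor radius of Eq.~\eqref{rsn} and the strong-shock shell relations above can be transcribed directly; the function names and unit conventions below are our own:

```python
def sedov_radius_kpc(t_yr, E_SN_J=1e46, n_g_m3=1e3):
    """Sedov-Taylor shell radius in kpc:
    R = 2.0 kpc * [ (t / 1e7 yr)^2 * (E_SN / 1e46 J) * (1e3 m^-3 / n_g) ]^(1/5)."""
    return 2.0 * ((t_yr / 1e7) ** 2 * (E_SN_J / 1e46) * (1e3 / n_g_m3)) ** 0.2

def shell_properties(R_SN, n_g, gamma=5.0 / 3.0):
    """Strong-shock shell: density jump (gamma+1)/(gamma-1) over the
    surrounding gas, and thickness (gamma-1)/(3(gamma+1)) R_SN from
    mass conservation."""
    n_shell = (gamma + 1.0) / (gamma - 1.0) * n_g
    delta_R = (gamma - 1.0) / (3.0 * (gamma + 1.0)) * R_SN
    return n_shell, delta_R
```

For the fiducial values the radius is 2 kpc at $t = 10^7\,$yr, and for $\gamma = 5/3$ the shell is 4 times denser than the ambient gas with thickness $R_{\rm SN}/12$.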
The adiabatic phase terminates
when the cooling becomes effective.
Since the gas in SNRs is fully ionized by the shock heating,
the major cooling mechanism is
Compton cooling.
The time scale of Compton cooling is given by
\begin{equation}
t_{\mathrm{C}}=\frac{3m_{\mathrm{e}}c}{4\sigma_{\mathrm{T}}aT^4_{\mathrm{\gamma}}}=1.4\times 10^7\lr{\frac{1+z}{20}}^{-4}\mathrm{yr}.
\end{equation}
The SNR evolves following the equation~\eqref{rsn} until $t=t_{\rm C}$.
After that, the thermal energy, which drives the shell expansion, is quickly lost by Compton cooling.
In this paper, we simply
evaluate the effect of SNRs discussed in the following section at $t=t_{\rm C}$.
The radial profile of the electron density in an SNR is given by
\begin{equation}
n_{\rm e}(r) =
\frac{\gamma+1}{\gamma-1}n_{\mathrm{g}}, \quad
\lr{\left(1-\frac{\gamma-1}{3(\gamma+1)}\right)r_{\mathrm{SN}}<r<r_{\mathrm{SN}}}
\end{equation}
where $r$ is the comoving radial distance from the center of the SNR and, therefore,
$r_{\rm SN}$ is $r_{\rm SN} = (1+z)R_{\rm SN}$.
As the SNRs cool,
electrons in them recombine again.
Accordingly, the contribution of the SNRs to the global ionization fraction is suppressed.
The time scale of recombination in the SNR can
be written as
\begin{equation}
t_{\rm rec} = \frac{\gamma-1}{\alpha_{\mathrm{B}}(t_{\mathrm{C}})(\gamma+1)n_{\mathrm{g}}}=3.3\times 10^7\lr{\frac{1+z}{20}}^{-0.12}\mathrm{yr},
\end{equation}
where $\alpha_{\rm{B}}$ is the case B recombination rate given in Ref.~\cite{Fukugita&Kawasaki_cooling}.
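The two fitted time scales above, together with the survival probability $f_{\mathrm{ion}} = \min(1, t_{\mathrm{rec}}/t_{\mathrm{cos}})$ used in the next section, can be evaluated directly. The matter-dominated approximation of the cosmic age in `f_ion` is our simplification, not the paper's exact treatment:

```python
def t_compton_yr(z):
    """Compton cooling time of the hot SNR gas (scaling from the text)."""
    return 1.4e7 * ((1.0 + z) / 20.0) ** (-4)

def t_recombination_yr(z):
    """Case-B recombination time inside the shocked shell (scaling from the text)."""
    return 3.3e7 * ((1.0 + z) / 20.0) ** (-0.12)

def f_ion(z, Om=0.32, h=0.67):
    """Survival probability f_ion = min(1, t_rec / t_cos), with the cosmic
    time approximated by the matter-dominated value
    t_cos = (2/3) / (H0 sqrt(Om) (1+z)^{3/2})."""
    H0_inv_yr = 9.78e9 / h  # Hubble time 1/H0 in years
    t_cos = (2.0 / 3.0) * H0_inv_yr / (Om ** 0.5 * (1.0 + z) ** 1.5)
    return min(1.0, t_recombination_yr(z) / t_cos)
```

Compton cooling steepens rapidly toward high redshift, so at $z \gtrsim 30$ the adiabatic phase ends well before the shell recombines.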
This time scale is not negligible compared with the cosmological time scale,~$t_{\mathrm{cos}}$.
We take this suppression into account in the abundance of PISNe in the next section.
\section{THE REIONIZATION MODEL}
In the standard analyses of the reionization history adopted by Planck CMB measurements, only the overall optical depth of electrons is considered assuming a $\mathrm{tanh}$-model reionization history.
The polarization data, however, should contain additional information for the full reionization history.
Here we investigate the effect of Pop III star supernovae, especially PISNe, on the global ionization history with these data.
\subsection{Reionization model}
In the reionization models considered here, we add the effects from PISNe of Pop III stars to the fiducial ionization history adopted by Planck CMB measurements.
We assume that the Pop III stars are only hosted by the massive halos
with the virial temperature $T_{\mathrm{vir}}>10^4~\mathrm{K}$.
The condition of $T_{\mathrm{vir}}>10^4~\mathrm{K}$ comes from the efficiency of the atomic cooling in the halo.
It is a fact that Pop III stars can also form in halos that do not satisfy this condition.
However, if the virial temperature $T_{\mathrm{vir}}$ is lower than $10^4~\mathrm{K}$, the star formation rate is suppressed, and even a star-hosting halo may end up with ``one star per halo'' because internal UV photodissociation of $\mathrm{H_2}$ by the Pop III stars ceases further gas cooling and star formation~\cite{1999ApJ...518...64O}.
Moreover, in the case of the more massive halos with $T_{\mathrm{vir}}>10^4\mathrm{K}$, there is a conceivable scenario that many stars form together in such a halo, where atomic cooling allows gas to collapse and reach much higher densities~\cite{2002ApJ...569..558O}.
The effect of PISNe on the cosmic reionization could be subdominant
and the main reionization photon sources are Pop II stars and first galaxies.
Therefore, taking into account the PISNe reionization effect,
we assume that the evolution of the global ionization fraction can be decomposed into three terms,
\eq{\label{eq: ion_model}
x_e(z)=x_e^{\mathrm{rec}}(z)+x_e^{\mathrm{reio}}(z)+x_e^{\mathrm{SN}}(z),
}
where $x_e^{\mathrm{rec}}$ is the global ionization fraction in the recombination epoch and $x_e^{\mathrm{reio}}$ represents
the contribution from the main reionization source including Pop II stars and galaxies.
For obtaining $x_e^{\mathrm{rec}}$, we employ the recombination code~{\tt RECFAST}~\cite{1999ApJ...523L...1S,2000ApJS..128..407S,2008MNRAS.386.1023W,2009MNRAS.397..445S}.
Then, we adopt the widely used "tanh" model for $x_e^{\mathrm{reio}}$~\cite{2008PhRvD..78b3002L},
\begin{eqnarray}
x_e^{\mathrm{reio}}(z)&=&x_e^{\mathrm{before}}+\frac{1}{2}\left(x_e^{\mathrm{after}}-x_e^{\mathrm{before}}\right)
\nonumber \\
&&\quad \
\times \left[1+\tanh \left(\frac{y^{\mathrm{reio}}-y(z)}{\Delta y}\right)\right],
\label{eq:tanh-shape}
\\
y(z)&=&(1+z)^{3 / 2},
\end{eqnarray}
where
$y^{\mathrm{reio}}=y(z^{\mathrm{reio}})$, $\Delta y=1.5 \sqrt{1+z^{\mathrm{reio}}} \Delta z$ with the duration of reionization, $\Delta z=0.5$.
In Eq.~\eqref{eq:tanh-shape},
$x_{\mathrm{e}}^{\mathrm{after}}$ is the ionization fraction after reionization is complete, $x_{\mathrm{e}}^{\mathrm{after}}=1$, and $x_{\mathrm{e}}^{\mathrm{before}}$ is the left-over ionization fraction well after the recombination epoch, adopted as $x_{\mathrm{e}}^{\mathrm{before}}=10^{-4}$.
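The tanh model of Eq.~\eqref{eq:tanh-shape} is straightforward to transcribe; parameter defaults follow the values quoted in the text:

```python
import math

def x_e_reio(z, z_reio, dz=0.5, x_before=1e-4, x_after=1.0):
    """tanh-shaped reionization history in the variable y = (1+z)^{3/2}:
    x_e = x_before + (x_after - x_before)/2 * [1 + tanh((y_reio - y)/Delta_y)],
    with Delta_y = 1.5 sqrt(1 + z_reio) * dz."""
    y = (1.0 + z) ** 1.5
    y_reio = (1.0 + z_reio) ** 1.5
    dy = 1.5 * math.sqrt(1.0 + z_reio) * dz
    return x_before + 0.5 * (x_after - x_before) * (1.0 + math.tanh((y_reio - y) / dy))
```

At $z = z^{\mathrm{reio}}$ the curve passes through the midpoint between the two plateaus, and it saturates to $x_e^{\mathrm{before}}$ and $x_e^{\mathrm{after}}$ a few $\Delta z$ away on either side.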
The impact of PISNe on the reionization process is provided by the additional term, $x_e^{\mathrm{SN}}$.
Since the gas inside SNRs is fully ionized, the volume occupation of the SNRs represents the global ionization fraction.
Thus we estimate the SN term by
\eq{\label{eq: ion_add}
x_e^{\mathrm{SN}}(z)= f_{\mathrm{ion}}(z)n_{\mathrm{SN}}(z)V_{\mathrm{ion}}(z),
}
where $f_{\rm ion}(z)$, $n_{\mathrm{SN}}$ and $V_{\mathrm{ion}}$ represent the survival probability of
ionized SNRs, the number density of PISNe, and the volume of each ionized SNR, respectively.
In this form of the additional ionization fraction in Eq.~\eqref{eq: ion_add}, we assume that each SNR covers a different region.
Although the inside of an SNR is totally ionized soon after its creation, it gradually becomes
neutral on the recombination time scale,
$t_{\mathrm{rec}}$.
In order to account for this effect, we introduce the probability
$f_{\mathrm{ion}}(z)=t_{\mathrm{rec}}(z)/t_{\mathrm{cos}}(z)$
with the upper bound, $f_{\mathrm{ion}}\leq 1$.
The volume $V_{\mathrm{ion}}$ is given by $V_{\mathrm{ion}}(z)=4\pi/3 R_{\mathrm{SN}}^3(t_{\mathrm{C}},z)$, using the radius of the SNR in Eq.~\eqref{rsn} with $E_{\mathrm{SN}}=10^{46}~\mathrm{J}$.
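A sketch of assembling $x_e^{\mathrm{SN}}$ from Eq.~\eqref{eq: ion_add}; the unit conventions are ours, and $n_{\mathrm{SN}}$ is taken as a comoving number density per $\mathrm{m}^3$ matching the comoving remnant radius $r_{\mathrm{SN}} = (1+z) R_{\mathrm{SN}}$:

```python
import math

def x_e_SN(f_ion, n_SN_m3, R_SN_kpc, z):
    """Additional ionization fraction from non-overlapping SNRs:
    x_e^SN = f_ion * n_SN * V_ion, with V_ion the comoving volume of one
    ionized remnant of physical radius R_SN."""
    kpc_m = 3.086e19                        # 1 kpc in metres
    r_SN_m = (1.0 + z) * R_SN_kpc * kpc_m   # comoving radius
    V_ion = 4.0 * math.pi / 3.0 * r_SN_m ** 3
    return f_ion * n_SN_m3 * V_ion
```

The product is linear in each factor, so when the remnants fill space ($n_{\mathrm{SN}} V_{\mathrm{ion}} = 1$) and all survive ($f_{\mathrm{ion}} = 1$), the contribution saturates at unity.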
In the next subsection, we discuss the number density of PISNe, $n_{\rm SN}$.
In our model, we assume that each PISN occurs in isolation and that an ionized SNR expands in the neutral IGM, increasing the ionization fraction.
This assumption could lead us to overestimate the contribution of SNRs to the ionization fraction.
In Sec.~\ref{subsec: limit_assumption}
we will discuss the limitation of our assumptions and
the cases where our assumption is not applicable.
\subsection{The abundance of PISNe }
Since the abundance of PISNe is not yet well determined because of many theoretical uncertainties (e.g. the mass function of the Pop III stars), we assume here that it is proportional to the collapsed baryon mass in dark matter halos.
We model the number density of PISNe at given $z$ as
\eq{\label{eq: numsn}
n_{\rm SN}(z) = \zeta \frac{1}{m_*} f_{\mathrm{coll}}(M_{\mathrm{min}})\bar{\rho}_{\mathrm{b}}(z),
}
where $\zeta$ is the model parameter
whose combination $\zeta f_{\mathrm{coll}}\bar{\rho}_{b}$ gives the total mass of the Pop III stars that cause PISNe in one halo, $M_{\mathrm{min}}$ is the mass corresponding to $T_{\mathrm{vir}}$, $\bar{\rho}_{\mathrm{b}}$ is the background baryon density, and $m_*$ is the typical mass of a Pop III star that causes a PISN. Although it is known that Pop III stars cause PISNe in the mass range $[130\mathrm{M}_{\odot},270\mathrm{M}_{\odot}]$~\cite{Herger_Woosley_star_nucleosynthetic}, we simply assume $m_*=130\mathrm{M_{\odot}}$ in our model.
We set the geometry of the gravitational collapse to be spherical (i.e. the halo mass function is of the Press-Schechter type). Then, $f_{\mathrm{coll}}(M)$, the fraction of mass collapsed in halos with mass $M_{\mathrm{halo}}>M$, is calculated by
\eq{
f_{\mathrm{coll}}(M)=\frac{2}{\sqrt{2 \pi} \sigma(M)} \int_{\delta_{c}}^{\infty} d \delta
\exp \left(-\frac{\delta^{2}}{2 \sigma^{2}(M)}\right)
= \operatorname{erfc}\left(\frac{\nu}{\sqrt{2}}\right),
}
where $\nu\equiv\delta_c/\sigma(M)$ and $\delta_c=1.67$.
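The collapse fraction and the PISN number density of Eq.~\eqref{eq: numsn} reduce to two one-line functions; the SI unit choices are ours:

```python
import math

def f_coll(nu):
    """Press-Schechter collapse fraction f_coll = erfc(nu / sqrt(2)),
    with nu = delta_c / sigma(M_min)."""
    return math.erfc(nu / math.sqrt(2.0))

def n_SN_m3(zeta, f_collapse, rho_b_kg_m3, m_star_kg=130.0 * 1.989e30):
    """Number density of PISNe, Eq. (numsn):
    n_SN = zeta * f_coll * rho_b / m_star, with m_star = 130 M_sun."""
    return zeta * f_collapse * rho_b_kg_m3 / m_star_kg
```

The collapse fraction approaches unity when the minimum mass lies well below the typical collapsing scale ($\nu \to 0$) and is exponentially suppressed for rare, massive halos ($\nu \gg 1$).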
The variance of the matter density fluctuation~$\sigma$ is written by
\eq{\label{eq: dens_variance}
\sigma^2(M) = \int \mathrm{dlog}k~ W^2(kR_{\mathrm{vir}})\mathcal{P}(k),
}
where $R_{\mathrm{vir}}$ is the virial radius for $M$. Here $W(kR)$ is the 3D window function. In this work, we employ the top-hat window function
\eq{
W\left(k, R\right)=\frac{3}{\left(k R\right)^{3}}\left(\sin \left(k R\right)-kR\cos \left(k R\right)\right).
}
The nondimensional matter power spectrum $\mathcal{P}(k)$ can be calculated as
\eq{
\mathcal{P}(k)=\frac{4}{25}\lr{\frac{(1+z)k}{H(z)}}^4T_{\mathrm{q}}^2~\mathcal{P}_{\mathcal{R}}(k),
}
using the transfer function $T_{\mathrm{q}}$ formulated by Bardeen et al.~\cite{1986ApJ...304...15B},
\eq{
T_{q}=&\frac{\ln (1+2.34 q)}{2.34 q}\\
&\ \times\left[1+3.89 q+(16.1 q)^{2}+(5.46 q)^{3}+(6.71 q)^{4}\right]^{-1 / 4},
}
where $q \equiv k/(\Gamma~h \mathrm{Mpc}^{-1})$, and $\Gamma$ is the apparent shape parameter including the baryonic effect~\cite{1995ApJS..100..281S}, $\Gamma\equiv \Omega_{\mathrm{m}}h\exp(-\Omega_{\mathrm{b}}-\sqrt{2h}\Omega_{\mathrm{b}}/\Omega_{\mathrm{m}})$.
The nondimensional primordial power spectrum $\mathcal{P}_{\mathcal{R}}$ is
\eq{
\mathcal{P}_{\mathcal{R}}=\mathcal{A}_s\lr{\frac{k}{k_{\mathrm{pivot}}}}^{n_s-1},
}
where $\mathcal{A}_s$, $k_{\mathrm{pivot}}$ and $n_s$ are the amplitude of the primordial scalar power spectrum, the pivot scale, and the scalar spectral index respectively.
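The ingredients of $\sigma^2(M)$, namely the BBKS transfer function, the top-hat window, and the log-$k$ quadrature of Eq.~\eqref{eq: dens_variance}, can be sketched as follows; the integration bounds and the toy power spectrum used for the check are our illustrative choices:

```python
import math

def bbks_transfer(q):
    """BBKS transfer function T(q) of Bardeen et al. (1986)."""
    if q < 1e-8:
        return 1.0
    return (math.log(1.0 + 2.34 * q) / (2.34 * q)
            * (1.0 + 3.89 * q + (16.1 * q) ** 2
               + (5.46 * q) ** 3 + (6.71 * q) ** 4) ** -0.25)

def tophat_window(x):
    """Fourier-space top-hat window W(kR) = 3 (sin x - x cos x) / x^3."""
    if x < 1e-6:
        return 1.0
    return 3.0 / x ** 3 * (math.sin(x) - x * math.cos(x))

def sigma2(R, P_dimless, kmin=1e-4, kmax=1e3, n=2000):
    """sigma^2(R) = integral over dlog k of W^2(kR) P(k), midpoint rule in log k."""
    dlk = math.log(kmax / kmin) / n
    total = 0.0
    for i in range(n):
        k = kmin * math.exp((i + 0.5) * dlk)
        total += tophat_window(k * R) ** 2 * P_dimless(k) * dlk
    return total
```

For a toy spectrum $\mathcal{P}(k) \propto k^2$ a change of variables gives $\sigma^2(R) \propto R^{-2}$, which the quadrature reproduces and which serves as a quick consistency check.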
As Pop III star formation proceeds, the primordial IGM is
contaminated by metals through SNe of Pop III stars.
When the metallicity reaches the critical threshold value
at $z_{\rm end}$,
the formation of Pop III stars terminates.
Although new Pop III PISNe no longer happen after that,
SNRs created until $z_{\rm end}$ still survive for a while because the recombination time scale in SNRs is
comparable to the cosmological time scale at that redshift.
Therefore, SNRs of Pop III stars can contribute to the
global ionization fraction even after $z_{\rm end}$.
In order to include this contribution, we provide $x_e^{\mathrm{SN}}(z)$ as
\eq{
x_e^{\mathrm{SN}}(z)= f_{\rm ion }(z)n_{\mathrm{SN}}(z_{\mathrm{end}})V_{\mathrm{ion}}(z_{\mathrm{end}})~\quad (z<z_{\mathrm{end}}),
}
where we assume that the SNRs created at $z_{\rm end}$
fade away in the time scale $t_{\mathrm{rec}}(z)$.
For simplicity, we set $z_{\mathrm{end}}=12$ in this work.
We will discuss the impact of $z_{\rm end}$ on our analysis later.
Figure~\ref{fig: xe_zeta} shows the global ionization history with Pop III PISNe models.
In models I and II, we set the model parameters to
$(z_{\mathrm{reio}},\mathrm{log}_{10}\zeta)=(6.90,-2.17)$ and $(6.60,-1.86)$, respectively.
For comparison, we also plot the standard reionization model without the PISNe effects.
A good indicator of the cosmological reionization history is the optical depth of the Thomson scattering for CMB photons,
\eq{\label{eq: optical_depth}
\tau = \int \frac{c\,d z}{(1+z)H(z)} \sigma_{\rm T}\, x_e(z)\, n_{\rm H}(z),
}
where $n_{\rm H}(z)$ is the proper number density of hydrogen nuclei.
All of the three models
have the same optical depth of the Thomson scattering,
$\tau\simeq0.054$,
which is consistent with the Planck result.
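The optical depth of Eq.~\eqref{eq: optical_depth} can be integrated numerically. The hydrogen-only treatment (helium neglected), the sharp ionization history, and the physical constants below are our simplifications for illustration:

```python
import math

def thomson_tau(x_e, z_max=50.0, n=5000, Om=0.32, Ob=0.049, h=0.67):
    """tau = integral of c dz / ((1+z) H(z)) * sigma_T * x_e(z) * n_H(z),
    with n_H(z) = n_H0 (1+z)^3 the proper hydrogen density."""
    sigma_T = 6.652e-29          # Thomson cross-section, m^2
    c = 2.998e8                  # speed of light, m/s
    H0 = h * 3.241e-18           # s^-1 (100 km/s/Mpc = 3.241e-18 s^-1)
    rho_crit = 3.0 * H0 ** 2 / (8.0 * math.pi * 6.674e-11)    # kg/m^3
    n_H0 = 0.76 * Ob * rho_crit / 1.673e-27  # hydrogen atoms per m^3 today
    dz = z_max / n
    tau = 0.0
    for i in range(n):
        z = (i + 0.5) * dz
        Hz = H0 * math.sqrt(Om * (1.0 + z) ** 3 + (1.0 - Om))
        tau += (c * sigma_T * x_e(z) * n_H0 * (1.0 + z) ** 3
                / ((1.0 + z) * Hz) * dz)
    return tau
```

With instantaneous full ionization at $z \simeq 7$ and a small residual fraction at higher redshift, this integral lands in the few-percent range, the same ballpark as the $\tau \simeq 0.054$ quoted above.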
We can see that the Pop III PISNe can enhance the ionization fraction in the early universe, $z_{\mathrm{reio}}<z\leq 15$.
In our model, we neglect reionization due to the Pop III stars that are not massive enough to undergo PISNe, although they also contribute to the early stage of cosmic reionization.
The fraction of such low-mass stars depends on the initial mass function of Pop III stars, which is still under debate.
In this paper, we ignore the contribution of Pop III stars to cosmic reionization for simplicity; however, we will come back to this issue in Sec.~\ref{subsec: limit_assumption}.
\begin{figure}
\centering
\includegraphics[width=8cm,clip]{ionization_history.pdf}
\caption{Global ionization history with several values of $\zeta$.}
\label{fig: xe_zeta}
\end{figure}
\section{MCMC analysis with Planck 2018}
In order to constrain the effect of PISNe based on our model in Eq.~\eqref{eq: ion_model}, we perform an MCMC analysis with the Planck 2018 data.
Chains of MCMC samples are generated by the publicly available code {\tt MontePython}~\cite{Audren:2012wb}, which adopts the code {\tt CLASS}~\cite{Blas_2011} for calculating the theoretical CMB angular power spectrum.
We have modified the {\tt CLASS} code to include the PISN effect on the global ionization fraction represented in Eq.~\eqref{eq: ion_model}.
The optical depth is mainly constrained by the reionization bump that appears on large angular scales in the CMB polarization.
Since we are interested in
$\zeta$ and $z_{\mathrm{reio}}$,
which mainly control the ionization history
and the optical depth $\tau$ through Eq.~\eqref{eq:tanh-shape},
we fix the other cosmological parameters to the Planck best-fit values of the TT, TE, EE, low-$\ell$ + lensing
measurement,
$\Omega_{\mathrm{b}}h^2=0.02237$, $\Omega_{\mathrm{cdm}}h^2=0.1200$, $100\theta_{\mathrm{s}}=1.04092$, $\ln(10^{10}A_{\mathrm{s}})=3.044$, and $n_s=0.9649$.
These parameters do not affect the reionization bump much.
To obtain accurate results from MCMC methods, it is essential to check if the MCMC chains contain enough samples which are independent of each other and cover a sufficient volume of parameter space such that the density of the samples converges to the actual posterior probability distribution.
Therefore, here,
we run the MCMC chains until the Gelman-Rubin convergence statistic $R$, which represents the ratio of the variance of parameters between chains to the variance within each chain, satisfies $R-1<0.05$~\cite{1992StaSc...7..457G,doi:10.1080/10618600.1998.10474787}.
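The convergence criterion above can be sketched in a few lines; the chains below are synthetic Gaussian draws for illustration (in the actual analysis they come from {\tt MontePython}), and this is only one common variant of the statistic:

```python
import numpy as np

# Minimal sketch of the Gelman-Rubin convergence statistic R used as the
# stopping criterion (R - 1 < 0.05). Toy chains, one parameter.
def gelman_rubin(chains):
    """chains: array of shape (m_chains, n_samples) for one parameter."""
    m, n = chains.shape
    means = chains.mean(axis=1)
    W = chains.var(axis=1, ddof=1).mean()     # mean within-chain variance
    B = n * means.var(ddof=1)                 # between-chain variance
    V = (n - 1) / n * W + B / n               # pooled variance estimate
    return np.sqrt(V / W)

rng = np.random.default_rng(0)
chains = rng.normal(0.0, 1.0, size=(4, 5000))  # well-mixed toy chains
R = gelman_rubin(chains)
print(f"R - 1 = {R - 1:.4f}")                  # small for converged chains
```

For well-mixed chains sampling the same distribution, $R-1$ is close to zero and the stopping criterion is easily met.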
\section{Results and Discussion}\label{sec:results}
Our resulting constraint is shown in Fig.~\ref{fig: mcmc}, in which
$\zeta$ and $z_{\rm reio}$ are our model free parameters and the optical depth $\tau$ is derived from Eq.~\eqref{eq: optical_depth} with the sampling data of
$\zeta$ and $z_{\rm reio}$.
The dark green region shows the $1\sigma$ region and the light green region represents the $2\sigma$ region.
Since the CMB anisotropy is sensitive to the total optical depth $\tau$ during and after the cosmic reionization, the Planck measurement basically provides the constraint on $\tau$.
In our model, the main contribution to $\tau$ comes from the ``tanh'' term, while the PISN effect is subdominant.
Therefore, $z_{\mathrm{reio}}$ for the ``tanh'' term is strongly constrained.
When $\zeta$ exceeds $10^{-3}$, SNRs can induce early reionization and make a non-negligible contribution to $\tau$.
To compensate for this effect, a smaller $z_{\mathrm{reio}}$ is preferred as $\zeta$ becomes large, as shown in Fig.~\ref{fig: mcmc}.
However, when
$\zeta$ is larger than $10^{-2}$, PISNe alone
can fully ionize the Universe.
Therefore, $\zeta > 10^{-2}$ can be ruled out.
\begin{figure}
\centering
\includegraphics[width=8cm,clip]{MCMCresult.pdf}
\caption{Result of our MCMC analysis for the model parameters $\zeta$ and $z_{\rm reio}$; the dark and light green regions show the $1\sigma$ and $2\sigma$ regions, respectively.}
\label{fig: mcmc}
\end{figure}
The Planck measurement gives the constraint
on our model parameter, $\zeta\leq10^{-2}$.
Now let us discuss what implications for the physics of Pop III stars we can draw from our constraint.
Our model parameter $\zeta$ is introduced to
connect the number density of PISNe to the collapse fraction, as shown in
Eq.~\eqref{eq: numsn}.
On the other hand,
conventionally, one can relate the
PISN density to the dark matter mass function
\begin{align}\label{eq:number_density_SN}
&n_{\rm SN}(z) = \int_{m_{\mathrm{min}}} ^{m_{\mathrm{max}}} dm ~\frac{dn_{\ast} (m,z) }{dm},
\\
&\frac{dn_{\ast}}{dm}= \frac{g_\ast(m)}{m} \int_{M_{\mathrm{min}}}\hspace{-3mm} dM ~ f_{\rm host}(M) f_{\rm \ast}(M) M\frac{dn (M,z)}{dM},
\label{eq:starfunc}
\end{align}
where $dn_{\ast} (m,z) /dm$ is the mass function of Pop III stars with a mass $m$ at a redshift $z$, and $m_{\mathrm{min}}$ and $m_{\mathrm{max}}$ are the lower and upper mass limits of the Pop III stars which undergo PISNe.
In Eq.~\eqref{eq:starfunc},
$dn(M,z)/dM$ is the mass function of dark matter halos with a halo mass $M$ at $z$,
$M_{\rm min}$ is a minimum mass of dark matter halos for hosting Pop III stars,
$f_{\rm host}(M)$ is
the fraction of dark matter halos with mass $M$
which can host stars, $f_{\rm \ast}$ is the fraction of the total stellar mass to the dark matter halo mass~$M$, and $g_\ast(m)$ is the initial mass function~(IMF) of Pop III stars which is normalized as $\int dm~g_{\ast}(m) =1$~($g_\ast$ has a dimension of $(\rm mass)^{-1}$).
In general, $f_{\mathrm{host}}$, $f_{\rm \ast}$ and $g_\ast (m)$ also depend on redshift $z$.
For the mass function of dark matter halos, $dn(M)/dM$, we adopt the Press-Schechter theory here.
Therefore, we can relate the halo mass function to the collapse fraction $f_{\mathrm{coll}}$ as
\eq{
f_{\mathrm{coll}}\bar{\rho}_{\mathrm{m}}(z) =\int_{M_{\mathrm{min}}}dM~M\frac{dn (M,z) }{dM},
\label{eq:fcoll_def}
}
where $\bar{\rho}_{\mathrm{m}}(z)$ is the background matter density at $z$.
It is useful to define the weighted average value for
$f_{\rm host}(M)$ and $f_{\rm \ast}(M)$
\eq{\label{def: mathcalB}
\bar{f}_{\mathrm{X}}=\frac{\int_{M_{\mathrm{min}}}\hspace{-3mm} dM ~ f_{\mathrm{X}}(M) M\frac{dn (M,z)}{dM}}{\int_{M_{\mathrm{min}}}\hspace{-3mm} dM ~M\frac{dn (M,z)}{dM}},
}
where the subscript~$\mathrm X$ stands for ${\ast}$ or $\mathrm{host}$.
We also introduce the number fraction of PISN progenitors to total Pop III stars as
\begin{equation}\label{def: fmf}
f_{\mathrm{mf}} \equiv \int_{m_{\rm min}} ^{m_{\rm max}} dm ~g_\ast(m).
\end{equation}
If the IMF is a delta-function-type mass function, $g_\ast(m)=\delta_{\mathrm{D}}(m-m_*)$ with ${m_{\rm min}}< m_* <{m_{\rm max}}$, $f_{\mathrm{mf}}$ equals unity, whereas for the IMF obtained from the Pop III star formation simulation in Ref.~\cite{Susa_Hasegawa_2014}, it is about $f_{\mathrm{mf}}\sim 0.3$.
Here we set $(m_{\mathrm{min}}, m_{\mathrm{max}}) = (130\mathrm{M}_{\odot},270\mathrm{M}_{\odot})$ as before.
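To illustrate how sensitively $f_{\mathrm{mf}}$ depends on the IMF, the sketch below evaluates Eq.~\eqref{def: fmf} analytically for an assumed power-law IMF $g_\ast(m)\propto m^{-\alpha}$; the slope values and the full mass range $[10,500]\,\mathrm{M}_\odot$ are hypothetical choices, only $(m_{\mathrm{min}},m_{\mathrm{max}})=(130,270)\,\mathrm{M}_\odot$ is taken from the text:

```python
import numpy as np

# Illustrative evaluation of f_mf for an assumed power-law IMF
# g*(m) ~ m^(-alpha) on [m_lo, m_hi]; slope and full mass range are
# hypothetical, only (m_min, m_max) = (130, 270) Msun is from the text.
def f_mf_powerlaw(alpha, m_lo=10.0, m_hi=500.0, m_min=130.0, m_max=270.0):
    if np.isclose(alpha, 1.0):            # flat-in-log IMF
        return np.log(m_max / m_min) / np.log(m_hi / m_lo)
    p = 1.0 - alpha
    return (m_max**p - m_min**p) / (m_hi**p - m_lo**p)

print(f"flat-in-log IMF : f_mf = {f_mf_powerlaw(1.0):.2f}")    # ~0.19
print(f"Salpeter-like   : f_mf = {f_mf_powerlaw(2.35):.3f}")   # ~0.02
```

A top-heavy (flat-in-log) IMF gives $f_{\mathrm{mf}}\sim 0.2$, comparable to the simulation-based value quoted above, while a Salpeter-like slope suppresses it by an order of magnitude.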
Using Eqs.~\eqref{eq:fcoll_def}, \eqref{def: mathcalB}, and \eqref{def: fmf}, we can approximately estimate
the number density of PISNe from Eq.~\eqref{eq:number_density_SN} as
\eq{\label{eq: num_sn_john}
n_{\rm SN}(z)
\approx \frac{1}{m_*}f_{\mathrm{mf}}
f_{\rm star}
f_{\mathrm{coll}}(M_{\mathrm{min}})\bar \rho_{\mathrm{m}}(z),
}
where $f_{\rm star}$ is defined as $f_{\rm star} \equiv \bar{f}_{\mathrm{host}}\bar{f}_{*}$ and
represents the fraction of the total stellar mass to the total dark matter halo mass in the universe.
Comparing both Eqs.~\eqref{eq: numsn} and \eqref{eq: num_sn_john}, we obtain the relation as
\eq{\label{relation:zeta}
\zeta\approx f_{\mathrm{mf}}f_{\rm star}\frac{\Omega_{\mathrm{m}}}{\Omega_{\mathrm{b}}}.
}
Therefore, the constraint, $\zeta \lesssim 10^{-2}$, can be converted into
\eq{\label{eq: main_result}
f_{\mathrm{mf}}f_{\rm star} \lesssim 1.4\times10^{-3}.
}
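The conversion above is a one-line arithmetic step; the back-of-the-envelope check below uses assumed Planck-like density parameters, which is why its output differs slightly from the quoted $1.4\times10^{-3}$:

```python
# Back-of-the-envelope check of the conversion zeta ~ f_mf * f_star * Om/Ob.
# The density parameters are assumed Planck-like values, so the result agrees
# with the quoted bound only at the ~10% level.
Omega_m, Omega_b = 0.315, 0.0493
zeta_max = 1e-2
bound = zeta_max * Omega_b / Omega_m     # upper bound on f_mf * f_star
print(f"f_mf * f_star < {bound:.2e}")
```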
Cosmological numerical simulations suggest {$f_{\rm star}\lesssim 10^{-3}$} around the epoch of Pop III star formation~\cite{2014MNRAS.442.2560W},
although there are still some uncertainties in both our theoretical model and the redshift evolution of $f_{\rm star}$~($\bar{f}_{\mathrm{host}}$ and $\bar{f}_{*}$).
Therefore, it is difficult to provide a constraint on $f_{\mathrm{mf}}$
from our MCMC result, $\zeta \lesssim 10^{-2}$.
However, it is worth mentioning that, if further observations provide more information on the evolution of the ionization fraction during reionization, the constraint on PISNe allows us to access the Pop III star IMF through $f_{\mathrm{mf}}$. For example, the recent high-redshift quasi-stellar object~(QSO) observation suggests that the volume-averaged neutral fraction is
$\langle x_{\rm HI} \rangle ={0.60} $ at $z = 7.54$. When considering this result, our constraint could be improved to $\zeta \lesssim 10^{-3}$~\cite{2018ApJ...864..142D}.
In this case, our constraint tells us $f_{\rm mf} < 0.1$ and
prefers Pop III star IMFs in which the progenitors of PISNe are subdominant in terms of the total Pop III star abundance.
In our model, one of the most important uncertainties is
$z_{\mathrm{end}}$, which is the redshift for the termination of PISNe.
In general, $z_{\mathrm{end}}$ is significantly related to the metal pollution of the Universe, that is, the cumulative number density of PISNe. However, in this paper, we introduce $z_{\mathrm{end}}$ by hand. In order to investigate the impact of $z_{\mathrm{end}}$ on the constraint on $\zeta$, we perform the MCMC analysis with different values of $z_{\mathrm{end}}$ in the range $10 < z_{\mathrm{end}}<14$.
As a result, our constraint changes by about $\pm25\%$, and
we find the fitting formula
\eq{
\mathrm{log}_{10}\zeta \leq -2.0\lr{\frac{z_{\mathrm{end}}}{12}}^{1.22}.
}
The second uncertainty is the energy injected into SNRs of PISNe, $E_{\mathrm{sn}}$.
In this paper, although we adopt a constant injected energy, $E_{\mathrm{sn}}=10^{46}\mathrm{J}$,
it depends on the progenitor mass and the metallicity.
In our model, $E_{\mathrm{sn}}$ affects our constraint through the SNR volume in Eq.~\eqref{eq: ion_add}, where
one can see that $\zeta$ and $E_{\mathrm{sn}}$ are degenerate with each other.
Therefore, our constraint on $\zeta$ has the following dependence on $E_{\mathrm{sn}}$,
\eq{
\zeta \leq 10^{-2}\lr{\frac{E_{\mathrm{sn}}}{10^{46}\mathrm{J}}}^{-3/5} .
}
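The two fitted scalings can be combined into a single bound evaluator; treating the $z_{\mathrm{end}}$ and $E_{\mathrm{sn}}$ dependencies as independent multiplicative factors is our assumption here, not a result derived in the text:

```python
import math

# Combined sensitivity of the zeta bound to z_end and E_sn, using the two
# fitted scalings quoted in the text. Treating them as independent is an
# assumption made only for this illustration.
def log10_zeta_bound(z_end=12.0, E_sn=1e46):
    return -2.0 * (z_end / 12.0)**1.22 - 0.6 * math.log10(E_sn / 1e46)

print(log10_zeta_bound())                # -2.0 at the fiducial point
print(log10_zeta_bound(z_end=14.0))      # tighter for later termination
print(log10_zeta_bound(E_sn=1e47))       # tighter for larger injected energy
```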
In this paper, we neglect the direct contribution of Pop III stars to the reionization process through the ionizing photons they emit during their main sequence.
The authors of Ref.~\cite{Miranda2017} have investigated this effect on the early stage of the reionization history.
They parametrized the abundance of Pop III stars in terms of the collapse fraction, as we have done for the PISN abundance in this paper, and provided a constraint by using MCMC methods with the Planck 2015 data.
Following steps similar to those leading to Eq.~\eqref{eq: main_result},
their result suggests $\bar f_{\mathrm{esc}} f_{\rm star}\leq 10^{-2}$, where $\bar f_{\mathrm{esc}}$ is the weighted average escape fraction of ionizing photons for dark matter halos.
Therefore, the constraints on PISNe and Pop III stars are complementary:
the constraint on PISNe is sensitive to the IMF of Pop III stars through $f_{\rm mf}$ while the one on Pop III stars provides useful information on $\bar f_{\mathrm{esc}}$.
\subsection{Limitations of our isolated SNR assumption}\label{subsec: limit_assumption}
In our model, we assume that isolated PISNe create SNRs expanding in the neutral IGM and increase the ionization fraction.
Regarding the validity of this assumption, there are two main concerns: one is the ionized bubble created by a massive Pop III star before a PISN, and the other is the overlapping (or clustering) of SNRs.
Before the PISN explosion, a massive Pop III star emits ionizing photons and creates an ionized bubble.
When the ionized bubble is larger than the SNR of the PISN, the PISN
cannot increase the ionization fraction, and most of the PISN energy is consumed to heat up the SNR.
The size of the ionized bubbles is roughly estimated by the Str\"{o}mgren radius,
$r_{\mathrm{s}}$ which is given by the equilibrium between the number of ionizing photons and the neutral hydrogen.
With the ionizing photons emitting from a massive Pop III star, $N_\gamma$,
the Str\"{o}mgren radius in the IGM density is
\eq{\label{eq: rion_with_z}
r_{\rm{s}} = 2.8\left[\left(\frac{f_{\rm{esc}}}{0.1}\right) \left(\frac{N_{\gamma}}{10^5}\right)\right]^{1/3}\hspace{-1mm}\left( \frac{13}{1+z}
\right)~\rm{kpc},
}
where $f_{\rm esc}$ is the escape fraction of ionizing photons.
Although there is still a large theoretical uncertainty in the escape fraction $f_{\rm esc}$, some theoretical works predict an escape fraction smaller than unity. For example, Ref.~\cite{2000ApJ...545...86W} reported that $f_{\rm{esc}}\lesssim \mathcal{O}(10^{-2})$ is preferred at high redshifts from their simulations, and Ref.~\cite{2020MNRAS.498.2001M} suggests $0.05<f_{\rm{esc}}<0.3$ at a redshift $z\sim 10$.
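The Str\"{o}mgren radius in Eq.~\eqref{eq: rion_with_z} is straightforward to evaluate; the sketch below keeps $N_\gamma$ at its normalization value $10^5$ and scans the two escape fractions quoted above (the Sedov-Taylor radius $R_{\rm SN}$ of Eq.~\eqref{rsn} is defined elsewhere in the paper and is not reproduced here):

```python
# Direct evaluation of the Stromgren radius formula quoted in the text;
# N_gamma is kept at its normalization value 10^5 for illustration.
def r_stromgren_kpc(z, f_esc, N_gamma=1e5):
    return 2.8 * ((f_esc / 0.1) * (N_gamma / 1e5))**(1.0 / 3.0) * 13.0 / (1.0 + z)

for f_esc in (0.1, 0.3):
    print(f"z=12, f_esc={f_esc}: r_s = {r_stromgren_kpc(12.0, f_esc):.1f} kpc")
```

At $z=12$ the bubble radius is a few kpc, which sets the scale that $R_{\rm SN}$ must exceed for the argument below.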
Figure~\ref{fig: rion_vs_rsn2} shows the comparison between
$R_{\rm SN}$ and $r_{\rm s}$ for two different values of $f_{\rm esc}$.
The blue solid line shows the redshift evolution of $R_{\rm{SN}}$ in Eq.~\eqref{rsn}, and the orange dotted-dashed and green dotted lines represent that of the radius of the ionized bubble with $f_{\rm{esc}}=0.1$ and $0.3$, respectively.
When $f_{\rm esc} < 0.3$, the figure tells us that
$R_{\rm{SN}}$ is larger than $r_{\rm s}$, in
particular at redshifts $z<15$.
Therefore we can conclude that, for $f_{\rm esc} < 0.3$,
the bubble created by a Pop III star before the SN explosion
can be destroyed by an SNR, and
SNRs can increase the ionization fraction substantially.
Note that in the above estimation, we assume that the SNR energy is not significantly dissipated inside a dark matter halo. However,
just as the ionizing photons are absorbed in dark matter halos
and reduced by $f_{\rm esc}$, some fraction of the SNR energy is consumed
inside a dark matter halo, and then the SNR radius might be smaller than in Eq.~\eqref{rsn}.
The dependence on the gas density, $r_{\rm s} \propto n_{\rm g}^{-1/3}$ and $R_{\rm SN} \propto n_{\rm g}^{-1/5}$, suggests that, even at high density, $n_{\rm g} \sim 200\, n_{\rm g, IGM}$,
the SNR can escape a dark matter halo more easily than the Str\"{o}mgren sphere can.
Nevertheless, considering the propagation of the SNR in a dark matter halo
requires a smaller $f_{\rm esc}$ to satisfy the condition $r_{\rm s} < R_{\rm SN}$.
The overlapping of SNRs also leads to an overestimate of the SNR contribution to the ionization fraction.
One can see in Fig.~\ref{fig: xe_zeta} that the additional ionization fraction in the early reionization stage due to PISNe is $x_{\rm{e}}^{\rm{SN}}\lesssim \mathcal{O}(0.1)$.
For such a small ionization fraction, the probability of overlapping would be small.
However, in massive halos,
there is a possibility that
star formation is very efficient and many stars form
almost at the same time.
When such a starburst mode happens,
PISNe also happen simultaneously in a massive halo and, as a result, one large SNR is created with the total energy of all the PISNe in the halo.
If this starburst mode dominates star formation,
our constraint would be overestimated.
Although we neglect it, the contribution of low-mass Pop III stars
may also cause an overestimation of the SNR contribution.
The clustering of low-mass Pop III stars near a massive Pop III progenitor of a PISN could create a large bubble before the PISN.
The SNR then cannot contribute to increasing the ionization fraction when the bubble size is much larger than the SNR.
In order to evaluate this effect,
it is required to follow the ionized bubble evolution by consistently modeling the IMF, the escape fraction, and the clustering of Pop III stars.
Such a computation can be performed in cosmological numerical simulations, which is beyond our scope.
\begin{figure}
\centering
\includegraphics[width=8cm,clip]{rion_vs_rsn2.pdf}
\caption{The comparison between the Str\"omgren radius in Eq.~\eqref{eq: rion_with_z} and the Sedov-Taylor self-similar solution in Eq.~\eqref{rsn}.}
\label{fig: rion_vs_rsn2}
\end{figure}
\section{Conclusion}
It is theoretically predicted that massive Pop III stars can cause energetic PISNe at the final stage of their lives.
The generated SNRs expand to several kpc and their interiors remain fully ionized.
In this paper, we investigate the impact of PISNe of Pop III stars on the reionization history.
The abundance of PISNe is unknown both theoretically and observationally.
Therefore, to model the PISN contribution to cosmic reionization, we have introduced a parameter~$\zeta$, which relates the abundance of PISNe to the collapse fraction of the universe.
We have shown that,
although PISNe cannot fully ionize the universe,
they induce early reionization whose efficiency highly depends on the abundance of PISNe.
Since early reionization can affect the CMB anisotropies, the CMB anisotropy measurement allows us to obtain a constraint on the abundance of PISNe.
In order to investigate the constraint,
we have performed the MCMC analysis with the latest Planck data incorporating our model of the PISN early reionization.
On top of the PISN contribution,
our reionization model includes the conventional ``tanh''
type, which represents the contribution of first galaxies
and Pop II stars as the main sources of ionizing photons.
We have found that when $\zeta < 10^{-3}$, the PISN contribution is totally subdominant, and the constraint on
the ``tanh'' type is similar to the constraint without PISNe.
However, when $\zeta > 10^{-3}$, PISNe strongly affect the Thomson optical depth of the CMB, and the reionization by the ``tanh'' type is delayed to compensate for the early reionization due to PISNe.
Our constraint on the PISN abundance is $\zeta <10^{-2}$ from the latest Planck measurement.
In general, the abundance of PISNe depends on the nature of Pop III stars including
their mass fraction to the dark matter halo mass and
the IMF.
We have shown that our parameter $\zeta$ is related to
the mass fraction of Pop III stars to dark matter halos of the universe, $f_{\rm star}$, and
the number fraction of PISN progenitors in the total Pop III stars, $f_{\rm mf}$.
Our constraint on $\zeta$ can be converted to $f_{\mathrm{mf}}f_{\rm star} \lesssim 1.4\times10^{-3}$.
Cosmological simulations suggest $f_{\rm star} \lesssim 10 ^{-3}$ for Pop III star formation~\cite{2014MNRAS.442.2560W}.
It is difficult to obtain the constraint on the Pop III star IMF,~$f_{\mathrm{mf}}$, from our current analysis.
However, we have also shown that our constraint can be improved and provide useful information on the Pop III star IMF.
The high redshift QSO observation suggests $x_e \sim 0.5$ at $z \sim 7.5$. When we take into account this result, our constraint can be improved to $\zeta < 10^{-3}$ and $f_{\mathrm{mf}} \lesssim 0.1$.
Therefore, further measurements of the ionized fraction at high redshifts would allow us to rule out
top-heavy IMFs,
in which massive Pop III stars causing PISNe dominate over lower-mass Pop III stars in abundance.
The most significant theoretical uncertainty in our model
is the termination redshift of PISNe, $z_{\mathrm{end}}$.
Although it strongly depends on the abundance of PISNe, we set $z_{\mathrm{end}}$ by hand. In order to investigate the impact of this redshift on our constraint, we have redone the MCMC analysis for different values of $z_{\mathrm{end}}$.
We found that
the dependence of our constraint on $z_{\mathrm{end}}$
is approximated by
$\mathrm{log}_{10}\zeta \leq -2.0\lr{z_{\mathrm{end}}/12}^{1.22}$
in the range of $10<z_{\mathrm{end}}<15$.
Our constraint is obtained under the isolated SNR assumption, neglecting the reionization due to Pop III stars.
These assumptions are valid only in limited cases, as discussed in Sec.~\ref{subsec: limit_assumption}, and, otherwise,
could lead our result to be overestimated.
Besides, in order to constrain the PISN contribution to reionization further,
we need to take into account the Pop III star contribution as well.
To address these concerns consistently,
cosmological numerical simulations may be required.
We leave a detailed study to future work.
Although our work is based on an optimistic case,
our result illuminates that
the CMB measurement
has the potential to explore
observational
signatures of PISNe.
Further investigation of the PISN contribution
can provide
access to the nature of Pop III stars.
\acknowledgments
This work is supported by JSPS KAKENHI Grants
No.~JP20J22260 (K.T.A.)
and No.~JP21K03533 (H.T.).
| 13,573 |
\section{Introduction}
Breakage, also known as fragmentation, is a basic process that describes the dissociation of particles and occurs in a variety of scientific and technical fields, including chemical process engineering, astrophysics, atmospheric science, and cellular biology. Depending on the particle breakage behaviour, the breakage process may be categorised into two kinds: the first is \emph{linear breakage}, which can happen spontaneously or as a result of external forces, and the second is \emph{collision-induced nonlinear breakage}, which takes place when two particles collide. One of the most effective approaches to characterising the kinetics of such phenomena is a rate equation which captures the evolution of the distribution of interacting clusters with respect to their sizes (or masses). In this article, we are interested in studying a mathematical model that governs collision-induced breakage, which is often exploited to depict raindrop breakup, cloud formation, and planet formation; see, for instance, \cite{LG 1976, SV 1972, SRC 1978}. The model under consideration here is known as the \emph{collision-induced breakage equation}, sometimes also termed the \emph{nonlinear fragmentation equation}. The continuous collision-induced breakage equation has recently been studied in detail in \cite{AKG 2021I}. In the continuous case, the size (or mass) of each particle is denoted by a positive real number, whereas in the discrete case, the ratio of the mass of a typical cluster to the mass of the basic building block (monomer) is a positive integer, and the size of a cluster is a finite multiple of the monomer's mass, i.e., a positive integer.
The collision-induced breakage equation, also known as the non-linear fragmentation equation, describes the dynamics of a large number of particles breaking apart as a result of binary collisions and is used to model cloud drop production and planet formation (see \cite{LG 1976, SV 1972, SRC 1978}). One of the most effective approaches to characterising the kinetics of such phenomena is a rate equation which captures the evolution of the distribution of interacting clusters with respect to their size/mass. These types of equations were first derived in the work of Smoluchowski to describe pure coagulation in the discrete case; that is, the ratio of the mass of a typical cluster to the mass of the basic building block (monomer) is a positive integer, and the size of a cluster is a finite multiple of the monomer's mass.
Over a sufficiently short period of time, the coagulation process is binary in nature, whereas the breakage process may be linear (spontaneous) or non-linear. The linear breakage process is regulated only by cluster attributes (and, if applicable, external forces), whereas the non-linear breakage process happens when two or more clusters collide and matter is transferred between them. As a result, the emergent cluster's mass in a non-linear breakage process may be bigger than expected.
Denoting by $w_i(t)$, $i \in \mathbb{N}$, the number of clusters made of $i$ monomers ($i$-particles) per unit volume at time $t \geq 0$, the discrete collision-induced breakage equations read
\begin{align}
\frac{dw_i}{dt} =&\frac{1}{2} \sum_{j=i+1}^{\infty} \sum_{k=1}^{j-1} B_{j-k,k}^i a_{j-k,k} w_{j-k} w_k -\sum_{j=1}^{\infty} a_{i,j} w_i w_j \label{NLDCBE}\\
w_i(0) &= w_i^0, \hspace{.5cm} i \in \mathbb{N}.\label{NLDCBEIC}
\end{align}
Here $a_{i,j}$ denotes the rate of collisions of $i$-clusters with $j$-clusters satisfying
\begin{align}
a_{i,j}=a_{j,i}, \label{ASYMM}
\end{align}
while $\{B_{i,j}^s, s=1,2,...,i+j-1\}$ is the distribution function of the resulting fragments and satisfies
\begin{align}
B_{i,j}^s = B_{j,i}^s \geq 0 \hspace{.7cm} \text{and} \hspace{.7cm} \sum_{s=1}^{i+j-1} s B_{i,j}^s = i+j, \label{LMC}
\end{align}
where the second condition in \eqref{LMC} ensures that mass is conserved during each collisional breakage event. The first term in \eqref{NLDCBE} takes into account collisions in which a $(j-k)$-mer and a $k$-mer collide and form $i$-mers at a rate determined by the breakup kernel $B_{j-k,k}^i$, whereas the second term accounts for the depletion of $i$-mers due to collisions with the other clusters in the system, which occur at a rate determined by the collision kernel $a_{i,j}$.
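A minimal concrete instance of the constraint \eqref{LMC} is the "uniform" breakup model $B_{i,j}^s = 2/(i+j-1)$, chosen here purely for illustration; the mass condition fixes the constant:

```python
# A minimal concrete distribution satisfying the mass constraint
# sum_{s=1}^{i+j-1} s B_{i,j}^s = i+j: the uniform breakup model
# B_{i,j}^s = 2/(i+j-1), chosen purely for illustration.
def B_uniform(i, j, s):
    return 2.0 / (i + j - 1)

i, j = 5, 7
total_mass = sum(s * B_uniform(i, j, s) for s in range(1, i + j))
print(total_mass)   # equals i + j = 12, so mass is conserved
```

Symmetry $B_{i,j}^s = B_{j,i}^s$ holds trivially, and $\sum_s s B_{i,j}^s = \frac{2}{i+j-1}\cdot\frac{(i+j-1)(i+j)}{2} = i+j$ as required.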
The linear (spontaneous) fragmentation equation with coagulation, first explored by Filippov \cite{FAF 61}, Kapur \cite{KAPUR 72}, and McGrady and Ziff \cite{MCG 87, ZRM 85}, has gained a lot of attention over the recent decades. Banasiak and Lamb used the semigroup technique to investigate the existence and uniqueness of classical solutions to linear fragmentation equations with coagulation under proper assumptions on the coagulation and fragmentation kernels (see \cite{BAN 2019, BLL 2019} and references therein). We will adopt the weak compactness technique to deal with our problem: it consists of considering a family of truncated problems, establishing the weak compactness of their solutions, and passing to the limit, establishing in this way the existence of weak solutions to \eqref{SNLBE}--\eqref{SNLBEIC}. Uniqueness, however, requires additional assumptions and other techniques. The existence and uniqueness of weak solutions to the classical coagulation-fragmentation equation have been studied using this approach in \cite{BALL 90, CARR 94, COSTA 2015, Laurencot 1999, Laurencot 2001, Laurencot 2002}.
On the other hand, the non-linear breakage equation has not been thoroughly investigated. Cheng and Redner studied the dynamics of continuous, linear, and collision-induced non-linear fragmentation events in \cite{CHNG 90}. They studied the asymptotic behaviour of a class of models in which a two-particle collision causes both particles to split into two equal halves, only the larger particle to split in two, or only the smaller particle to split in two, as well as a linear fragmentation process. Later, Krapivsky and Ben-Naim \cite{Krapivsky 2003} looked into the dynamics of non-linear collision-induced fragmentation, calculating the fragment mass distribution analytically using the traveling-wave behaviour of the nonlinear collision equation. The analytical solutions to the non-linear breakage problem and their asymptotic behaviour were also investigated by Kostoglou and Karabelas \cite{Kostoglou 2000}. To discuss self-similar solutions, they exploited various simple homogeneous collision and break-up kernels to convert the nonlinear breakage equation into a linear one.
In recent years, continuous versions of the coagulation equation with non-linear collisional breakage have been discussed using the weak $L^1$ compactness technique in \cite{PKB 2020, PKB 2020I, PKB 2021, AKG 2021}, where the existence and uniqueness of weak solutions for different classes of collision kernels have been established. Existence, uniqueness, and mass-conserving solutions were discussed in \cite{AKG 2021I} for kernels of the form $a(x,y)= x^{\alpha} y^{\beta} + x^{\beta} y^{\alpha}$, where $ \lambda := \alpha + \beta \in [1,2] $, and the finite-time existence of solutions was also discussed for $\lambda \in [0,1]$ under the condition of no mass transfer during collisions. Using the same technique, a discrete version of the coagulation equation with collisional breakage was explored by Lauren\c{c}ot and Wrzosek \cite{Laurencot 2001}, who studied the existence, uniqueness, mass conservation, and large-time behavior of weak solutions to \eqref{NLDCBE}--\eqref{NLDCBEIC} under reasonable restrictions on the collision kernel and the probability function. Cheng and Redner also looked at a specific situation of \eqref{NLDCBE}: when two clusters collide in this model, they fragment into smaller pieces, and no matter is exchanged between them. They looked at the continuous class of these models, in which clusters are described by a continuous variable. For clusters described by a discrete variable, the model reads
\begin{align}
\frac{dw_i}{dt} =& \sum_{j=i+1}^{\infty} \sum_{k=1}^{\infty} a_{j,k} b_{i,j;k} w_j w_k -\sum_{j=1}^{\infty} a_{i,j} w_i w_j,\label{SNLBE}\\
w_i(0) &=w_{0i}, \label{SNLBEIC}
\end{align}
for $i\geq 1$, where $\{b_{i,j;k}, 1\leq i \leq j-1\}$ denotes the distribution function of the fragments of a $j$-cluster after a collision with a $k$-cluster, and satisfies
\begin{align}
\sum_{i=1}^{j-1} i b_{i,j;k} = j, \hspace{.5cm} j\geq 2,~~~ k\geq 1. \label{LMC1}
\end{align}
To obtain \eqref{SNLBE} from \eqref{NLDCBE}, we put
\begin{align}
B_{i,j}^s = \textbf{1}_{[s, +\infty)} (i) b_{s,i;j} + \textbf{1}_{[s, +\infty)} (j) b_{s,j;i} \label{NMT}
\end{align}
for $i,j\geq 1$ and $s\in \{1,2,\cdots,i+j-1\}$, where $\textbf{1}_{[s, +\infty)}$ denotes the characteristic function of the interval $[s,+\infty)$. As each cluster splits into smaller pieces after a collision, it is expected that, in the long run, only 1-clusters remain.\\
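The reduction \eqref{NMT} can be checked on a toy example: take the uniform fragment distribution $b_{s,j;k}=2/(j-1)$ (an illustrative choice that satisfies \eqref{LMC1}), build $B_{i,j}^s$ from it, and verify that the mass constraint \eqref{LMC} still holds:

```python
# Sanity check of the reduction B_{i,j}^s = 1(i>=s) b_{s,i;j} + 1(j>=s) b_{s,j;i}
# using the toy fragment distribution b_{s,j;k} = 2/(j-1) (uniform in
# fragment size, illustrative only).
def b(s, j, k):
    # satisfies sum_{s=1}^{j-1} s b_{s,j;k} = j; zero outside 1 <= s <= j-1
    return 2.0 / (j - 1) if 1 <= s <= j - 1 else 0.0

def B(i, j, s):
    # the indicator 1_{[s,inf)}(i) is simply (i >= s)
    return (i >= s) * b(s, i, j) + (j >= s) * b(s, j, i)

i, j = 4, 6
total = sum(s * B(i, j, s) for s in range(1, i + j))
print(total)   # equals i + j = 10
```

The two indicator terms contribute the masses $i$ and $j$ separately, so the total fragment mass is $i+j$, as required by \eqref{LMC}.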
In this article, we look for the existence of solutions to \eqref{NLDCBE}--\eqref{NLDCBEIC} for the class of collision kernels
having quadratic growth, i.e.,
\begin{align}
a_{i,j} \leq A_0 ij \hspace{.2cm} \text{for some} \hspace{.2cm} A_0>0 \hspace{.2cm}\text{and}\hspace{.2cm} i,j\geq 1.\label{QUADGROWTH}
\end{align}
In addition to \eqref{NMT}, we assume that there is a constant $\beta$ such that
\begin{align}
B_{i,j}^s \leq \beta, \hspace{.2cm} 1\leq s\leq i+j-1, \hspace{.2cm} i,j\geq 1. \label{FNP}
\end{align}
We expect the density $\rho =\sum_{i=1}^{\infty} i w_i(t)$ to be conserved because mass is neither created nor destroyed in the interactions represented by \eqref{NLDCBE}--\eqref{NLDCBEIC}. This is mathematically equivalent to
\begin{align}
\sum_{i=1}^{\infty} iw_i(t) = \sum_{i=1}^{\infty} iw_i^0.\label{MCC}
\end{align}
In other words, the density of the solution $w$ remains constant over time.
The paper is organized as follows: The next section is devoted to a precise statement of our results, including definitions, the existence of solutions to \eqref{NLDCBE}--\eqref{NLDCBEIC}, and the mass conservation property of solutions. In Section \ref{PMCDID}, propagation of moments, uniqueness, and continuous dependence of solutions on initial data have been explored, whereas, in Section \ref{IPOS}, some invariance properties of solutions are shown. Finally, in Section \ref{LTBOS}, the large-time behaviour of solutions is discussed.
\section{Existence of Solutions} \label{EOS}
For $\gamma \geq 0$, let $Y_{\gamma}$ be the Banach space defined by
\begin{align*}
Y_{\gamma} = \Big\{ y =(y_i)_{i\in\mathbb{N}}: y_i \in \mathbb{R}, \sum_{i=1}^{\infty} i^{\gamma} |y_i| < \infty \Big\}
\end{align*}
with the norm
\begin{align*}
\|y\|_{\gamma} =\sum_{i=1}^{\infty} i^{\gamma} |y_i|.
\end{align*}
We will use the positive cone $Y_{\gamma}^+$ of $Y_{\gamma}$, that is,
\begin{align*}
Y_{\gamma}^+ =\{y\in Y_{\gamma}: ~~y_i \geq 0~~\text{for each}~ i\geq 1\}.
\end{align*}
It is worth noting that the norm $\|w\|_0$ of a cluster distribution $w$ represents the total number of clusters present, while the norm $\|w\|_1$ gives the overall density or mass of the cluster distribution $w$, as in the classical coagulation or coagulation--fragmentation equations.
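For a finitely supported distribution these two norms can be read off directly. The following minimal Python sketch (with a made-up distribution, purely for illustration) evaluates $\|w\|_0$ and $\|w\|_1$.

```python
# Illustration only: ||w||_0 counts clusters, ||w||_1 measures total mass.
# The distribution below is hypothetical.
w = {1: 3.0, 2: 1.5, 5: 0.5}   # w[i] = number density of i-clusters

norm0 = sum(w.values())                      # ||w||_0: total number of clusters
norm1 = sum(i * wi for i, wi in w.items())   # ||w||_1: total mass, sum of i*w_i

print(norm0)   # 5.0
print(norm1)   # 8.5
```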
As in previous works on similar equations, the existence of solutions to \eqref{NLDCBE}--\eqref{NLDCBEIC} follows by taking a limit of solutions to finite-dimensional systems of ordinary differential equations
obtained by truncation of these equations. More precisely, given $l \geq 3$, we consider the following
system of $l$ ordinary differential equations
\begin{align}
\frac{dw_i^l}{dt}&= \frac{1}{2} \sum_{j=i+1}^{l} \sum_{k=1}^{j-1} B_{j-k,k}^i a_{j-k,k} w_{j-k}^l w_k^l -\sum_{j=1}^{l-i} a_{i,j} w_i^l w_j^l,\hspace{.2cm} \label{FDNLBE} \\
w_i^l(0) &= w_{0i}, \label{FDNLBEIC}
\end{align}
for $i \in\{1,2, \cdots, l\}$, where the first term on the right-hand side is zero when $i=l$.
Let us now define what we mean by a solution to \eqref{NLDCBE}--\eqref{NLDCBEIC}.
\begin{definition} \label{DEF1}
Let $T\in(0,+\infty]$ and let $w^0= (w_{0i})_{i \geq 1}\in Y_1^+$ be a sequence of non-negative real numbers. A solution $w=(w_i)_{i \geq 1}$ to \eqref{NLDCBE}--\eqref{NLDCBEIC} on $[0,T)$ is a sequence of non-negative continuous functions satisfying, for each $i\geq 1$ and $t\in(0,T)$,
\begin{enumerate}
\item $w_i\in \mathcal{C}([0,T))$, $\sum_{j=1}^{\infty} a_{i,j} w_j \in L^1(0,t)$, $ \sum_{j=i+1}^{\infty}\sum_{k=1}^{j-1}B_{j-k,k}^i a_{j-k,k} w_{j-k} w_k \in L^1(0,t)$,
\item and there holds
\begin{align}
w_i(t) = w_{0i} + \int_0^t \Big( \frac{1}{2} \sum_{j=i+1}^{\infty} \sum_{k=1}^{j-1} B_{j-k,k}^i a_{j-k,k} w_{j-k}(\tau) w_k(\tau) -\sum_{j=1}^{\infty} a_{i,j} w_i(\tau) w_j(\tau) \Big) d\tau. \label{IVOE}
\end{align}
\end{enumerate}
\end{definition}
In the following lemmas, we collect basic results about solutions to this finite-dimensional system, which were proved in \cite{BALL 90} for the discrete coagulation--fragmentation equations; we follow those proofs closely.
\begin{lemma} \label{LEMMAREG}
Let $w^l$ be a solution of \eqref{FDNLBE}--\eqref{FDNLBEIC} and let $(\mu_i)$ be a sequence of real numbers. Then for $1\leq r\leq l$,
\begin{align}
\sum_{i=r}^l \mu_i \dot{w}_i^l =& \frac{1}{2}\sum_{R_1}\Big( \sum_{i=r}^{j+k-1} \mu_i B_{j,k}^i -\mu_j-\mu_k\Big)a_{j,k} w_j^l w_k^l +\frac{1}{2}\sum_{R_2}\Big(\sum_{i=r}^{j+k-1} \mu_iB_{j,k}^i \Big) a_{j,k} w_j^l w_k^l \nonumber\\
&+ \sum_{R_3} \Big(\sum_{i=r}^{j+k-1} \mu_i B_{j,k}^i -\mu_k \Big) a_{j,k} w_j^l w_k^l \label{GME}
\end{align}
where
\begin{align*}
R_1 &= \{(j,k):~~~j,k\geq r,~~~j+k\leq l\}\\
R_2 &= \{(j,k):~~~j,k< r,~~~r\leq j+k \leq l\}\\
R_3 &= \{ (j,k):~~~1 \leq j \leq r-1,k\geq r, j+k\leq l\}
\end{align*}
with the sums equal to zero if the associated region is empty.
\end{lemma}
\begin{proof}
Using the symmetry of collision kernel and distribution function and \eqref{FDNLBE}, we have
\begin{align*}
\sum_{i=r}^l\mu_i \dot{w}_i^l =\frac{1}{2} \sum_{i=r}^l \sum_{i+1\leq j+k \leq l} \mu_i B_{j,k}^i a_{j,k}w_j^l w_k^l -\frac{1}{2}\sum_{j=r}^{l-1} \mu_j \sum_{k=r}^{l-j} a_{j,k}w_j^l w_k^l-\frac{1}{2}\sum_{k=r}^{l-1} \mu_k \sum_{j=r}^{l-k} a_{j,k}w_j^l w_k^l.
\end{align*}
The result is obtained by grouping the above terms into common regions in $j-k$ space.
\end{proof}
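The rearrangement identity \eqref{GME} can also be verified numerically on a small truncation. The sketch below is illustrative only: the weights $\mu_i$, the kernel $a_{j,k}$ and the fragmentation coefficients $B_{j,k}^i$ are random, subject only to the symmetry in $(j,k)$ assumed throughout, and the two sides of \eqref{GME} are compared for $l=5$, $r=2$.

```python
import random
random.seed(0)

l, r = 5, 2
mu = {i: random.uniform(-1, 1) for i in range(1, l + 1)}
w  = {i: random.uniform(0, 1) for i in range(1, l + 1)}
a, B = {}, {}
for j in range(1, l + 1):
    for k in range(1, l + 1):
        a[j, k] = a.get((k, j), random.uniform(0, 1))            # symmetric kernel
        for i in range(1, j + k):
            B[j, k, i] = B.get((k, j, i), random.uniform(0, 1))  # symmetric in j,k

def wdot(i):
    """Right-hand side of the truncated system for component i."""
    gain = sum(B[j - k, k, i] * a[j - k, k] * w[j - k] * w[k]
               for j in range(i + 1, l + 1) for k in range(1, j))
    loss = sum(a[i, j] * w[i] * w[j] for j in range(1, l - i + 1))
    return 0.5 * gain - loss

lhs = sum(mu[i] * wdot(i) for i in range(r, l + 1))

def S(j, k):  # sum_{i=r}^{j+k-1} mu_i B_{j,k}^i
    return sum(mu[i] * B[j, k, i] for i in range(r, j + k))

rhs = 0.0
for j in range(1, l + 1):
    for k in range(1, l + 1):
        if j + k > l:
            continue
        if j >= r and k >= r:                  # region R1 (factor 1/2)
            rhs += 0.5 * (S(j, k) - mu[j] - mu[k]) * a[j, k] * w[j] * w[k]
        elif j < r and k < r and j + k >= r:   # region R2 (factor 1/2)
            rhs += 0.5 * S(j, k) * a[j, k] * w[j] * w[k]
        elif j < r <= k:                       # region R3 (factor 1, by symmetry)
            rhs += (S(j, k) - mu[k]) * a[j, k] * w[j] * w[k]

print(abs(lhs - rhs) < 1e-12)   # True
```

The region $R_3$ carries no factor $1/2$ precisely because the two mixed orderings $(j<r\leq k)$ and $(k<r\leq j)$ are merged using the symmetry of $a$, $B$ and $w_jw_k$.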
\begin{lemma}\label{LER}
The system \eqref{FDNLBE}--\eqref{FDNLBEIC} has a unique solution for $t \geq 0$ with $w_i^l(t)\geq 0$, $1\leq i \leq l$, and
$$ \sum_{i=1}^l i w_i^l(t) = \sum_{i=1}^l i w_i^l(0).$$
\end{lemma}
\begin{proof}
Since \eqref{FDNLBE} is a finite-dimensional system with a polynomial right-hand side, the existence of local solutions follows from the Cauchy–Lipschitz theorem, while the non-negativity of each $w_i^l(t)$ may be proved as for the corresponding result in \cite{BALL 86}. The fact that $\sum_{i=1}^l i w_i^l(t)$ is constant is obtained by putting $\mu_i= i$ and $r=1$ in Lemma \ref{LEMMAREG}, and global existence follows from the bounds $0\leq w_i^l(t) \leq \sum_{i=1}^l iw_i^l(0)$.
\end{proof}
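Lemma \ref{LER} lends itself to a quick numerical sanity check on a small instance of the truncated system \eqref{FDNLBE}. The sketch below is illustrative only and not part of the analysis: the kernel $a_{i,j}=ij$ and, in particular, the fragmentation rule (every cluster of size $p\geq 2$ sheds a monomer on collision) are toy assumptions of ours, chosen so that \eqref{QUADGROWTH} holds with $A_0=1$, the local mass conservation $\sum_i i\,b(i,p)=p$ holds, and $B_{j,k}^i = b(i,j)+b(i,k)\leq 4$, giving $\beta=4$ in \eqref{FNP}.

```python
# Minimal numerical sketch of the truncated system (l = 6) with a toy
# fragmentation rule (assumed for illustration only): on collision a cluster
# of size p >= 2 sheds a monomer, p -> (p-1) + 1, and monomers stay monomers.

l = 6

def b(i, p):
    """Number of size-i fragments produced by a breaking p-cluster (toy rule)."""
    if p == 1:
        return 1 if i == 1 else 0
    if p == 2:
        return 2 if i == 1 else 0        # 2 -> 1 + 1
    return 1 if i in (1, p - 1) else 0   # p -> (p-1) + 1

def B(p, q, i):
    return b(i, p) + b(i, q)

def a(p, q):
    return p * q                         # quadratic-growth kernel

def rhs(w):
    dw = []
    for i in range(1, l + 1):
        gain = sum(B(j - k, k, i) * a(j - k, k) * w[j - k - 1] * w[k - 1]
                   for j in range(i + 1, l + 1) for k in range(1, j))
        loss = sum(a(i, j) * w[i - 1] * w[j - 1] for j in range(1, l - i + 1))
        dw.append(0.5 * gain - loss)
    return dw

def rk4(w, dt, steps):
    for _ in range(steps):
        k1 = rhs(w)
        k2 = rhs([x + dt / 2 * k for x, k in zip(w, k1)])
        k3 = rhs([x + dt / 2 * k for x, k in zip(w, k2)])
        k4 = rhs([x + dt * k for x, k in zip(w, k3)])
        w = [x + dt / 6 * (p + 2 * q + 2 * r + s)
             for x, p, q, r, s in zip(w, k1, k2, k3, k4)]
    return w

w0 = [1.0, 0.0, 1.0, 0.0, 0.0, 0.0]   # monomers and 3-clusters initially
w = rk4(w0[:], 1e-3, 2000)            # integrate up to t = 2

mass0 = sum(i * w0[i - 1] for i in range(1, l + 1))
mass = sum(i * w[i - 1] for i in range(1, l + 1))
print(mass0, mass)   # the truncated mass is conserved, as in the lemma
```

Besides conserving $\sum_{i=1}^l i w_i^l$, the run illustrates the expectation stated in the introduction: mass drains towards monomers, so `w[0]` approaches the total mass while the larger components decay.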
\begin{lemma}\label{LEMMALEQ}
Assume that $ a_{i,j} \leq Aij$ for all $i,j \geq 1$, where $A$ is a positive constant. Let $w^l$ be a solution of \eqref{FDNLBE}--\eqref{FDNLBEIC} and let $ \varrho^l(0) =\sum_{i=1}^l i w_i^l(0).$ Then, for $1\leq r\leq l$,
\begin{align*}
\frac{d}{dt}\Bigg\{e^{-t} \Big[\sum_{i=r}^l i w_i^l(t) + 2r A\varrho^l(0)^2\Big]\Bigg\}\leq 0.
\end{align*}
\end{lemma}
\begin{proof}
From Lemma \ref{LEMMAREG},
\begin{align*}
\frac{d}{dt}\sum_{i=r}^l i w_i^l(t) \leq \frac{1}{2}\sum_{R_2} \Big(\sum_{i=r}^{j+k-1} iB_{j,k}^i \Big) a_{j,k} w_j^l w_k^l + \sum_{R_3} \Big(\sum_{i=r}^{j+k-1} i B_{j,k}^i -k \Big) a_{j,k} w_j^l w_k^l.
\end{align*}
Hence
\begin{align*}
\frac{d}{dt}\Bigg\{e^{-t} \Big[\sum_{i=r}^l i w_i^l(t) &+ 2r A\varrho^l(0)^2\Big]\Bigg\}\leq e^{-t} \Bigg[ \frac{1}{2}\sum_{R_2} \Big(\sum_{i=r}^{j+k-1} iB_{j,k}^i \Big) a_{j,k} w_j^l w_k^l \\
&+ \sum_{R_3} \Big(\sum_{i=r}^{j+k-1} i B_{j,k}^i -k \Big) a_{j,k} w_j^l w_k^l-\sum_{i=r}^l i w_i^l(t) - 2r A\varrho^l(0)^2\Bigg]\\
&\leq A e^{-t}\Bigg[\frac{1}{2}\sum_{R_2} (j+k) jk w_j^l w_k^l +\sum_{R_3} j^2k w_j^l w_k^l - 2r \varrho^l(0)^2\Bigg]\\
& \leq 0.
\end{align*}
The result follows.
\end{proof}
Let $T \in (0,\infty)$ be given, assume that \eqref{NMT} holds, let $w$ be a solution of \eqref{NLDCBE}--\eqref{NLDCBEIC} on $[0,T)$, and let $(\psi_i)$ be a sequence of real numbers. Then for $1 \leq l<\infty$ and $0\leq t_1<t_2 <T$, the following moment equation holds
\begin{align}
\sum_{i=1}^l \psi_i (w_i(t_2)- w_i(t_1))=& \frac{1}{2} \int_{t_1}^{t_2}\sum_{j=1}^l \sum_{k=1}^l \Big( \sum_{i=1}^{l} \psi_iB_{j,k}^i - j -k \Big) a_{j,k} w_j(s) w_k(s) ds\nonumber \\
&+\int_{t_1}^{t_2}\sum_{j=1}^l \sum_{k=l+1}^{\infty} \Big( \sum_{i=1}^{l} \psi_iB_{j,k}^i - j \Big) a_{j,k} w_j(s) w_k(s) ds\nonumber \\
&+\frac{1}{2}\int_{t_1}^{t_2} \sum_{j=l+1}^{\infty} \sum_{k=l+1}^{\infty} \sum_{i=1}^l \psi_i B_{j,k}^ia_{j,k} w_j(s) w_k(s) ds. \label{MCE}
\end{align}
Next, we shall prove the existence of solutions to \eqref{NLDCBE}--\eqref{NLDCBEIC} in $Y_1^+$ under some mild conditions on the collision kernel. Our existence result is as follows:
\begin{theorem}\label{MAINTHEOREM}
Assume $a_{i,j}\leq Aij$ for some positive constant $A$ and all positive integers $i$ and $j$. Let $w_0\in Y_1^+$ and let the distribution function satisfy \eqref{NMT} and \eqref{FNP}. Then there is at least one solution of \eqref{NLDCBE}--\eqref{NLDCBEIC} with initial condition $w(0) = w_0$, defined on $[0, T)$ for some $T \in (0,+\infty]$. If, in addition, for $0\leq t_1 <t_2 \leq T$ the condition
\begin{align}
\int_{t_1}^{t_2} \sum_{j=1}^{\infty} \sum_{k=1}^{\infty} (j+k) a_{j,k} w_j(s) w_k(s) ds <+\infty\label{IP}
\end{align}
holds, then $w$ is mass-conserving, i.e.,
\begin{align}
\|w(t)\|_{1} = \|w_0\|_{1}, \hspace{.3cm} t \in [0,T). \label{GMC}
\end{align}
\end{theorem}
\begin{proof}
Under the condition \eqref{NMT}, Lemma \ref{LEMMAREG} gives
\begin{align*}
\frac{d}{dt}\sum_{i=r}^l \mu_i w_i^l(t) =& \sum_{j=r}^l \sum_{k=1}^{l-j}\Big( \sum_{i=r}^{j-1} \mu_i b_{i,j;k} -\mu_j\Big)a_{j,k} w_j^l w_k^l + \sum_{j=1}^{r-1}\sum_{k=r}^{l-j} \Big(\sum_{i=r}^{k-1} \mu_i b_{i,k;j} -\mu_k \Big) a_{j,k} w_j^l w_k^l.
\end{align*}
By putting $\mu_i = i$, and using \eqref{LMC1}, we immediately conclude that,
\begin{align*}
\frac{d}{dt} \sum_{i=r}^l i w_i^l \leq 0.
\end{align*}
Thus, we have
\begin{align}\label{PMOMNT}
\sum_{i=r}^{l} i w_i^l \leq \sum_{i=r}^l i w_{0i} \leq \sum_{i=r}^{\infty} i w_{0i} \leq \sum_{i=1}^{\infty} iw_{0i}=\|w_0\|_1.
\end{align}
Fix $T \in(0,\infty)$. Consider now $i\geq 1$ and $l\geq i$. It follows from Lemma \ref{LER},
\eqref{ASYMM}, \eqref{QUADGROWTH} and \eqref{PMOMNT} that the $i$-th component $w_i^l$ of the solution to \eqref{FDNLBE}--\eqref{FDNLBEIC} satisfies
\begin{align}\label{DERVBOUND}
\Big|\frac{dw_i^l}{dt}\Big|\leq & \frac{A\beta}{2} \sum_{j=i+1}^{l} \sum_{k=1}^{j-1} (j-k)k w_{j-k}^l w_k^l +A \sum_{j=1}^{l-i} i jw_i^l w_j^l \nonumber \\
&\leq A(\beta+1) \|w_0\|_1^2.
\end{align}
As a result of \eqref{PMOMNT} and \eqref{DERVBOUND}, the sequence $(w_i^l)_{l\geq i}$ is bounded in $\mathcal{C}^1([0,T])$ and thus relatively compact in $\mathcal{C}([0,T])$. Therefore, according to Helly's selection theorem, for each fixed $i$ there exists a subsequence of $(w_i^l)$ (not relabeled) that converges pointwise on $[0,T]$ to a function $w_i$ of bounded variation,
\begin{align}
w_i^l(t) \longrightarrow w_i(t), \hspace{.3cm} \text{as} \hspace{.3cm} l \to \infty ,~~~\forall t\in [0,T],~~~ \forall i \in \mathbb{N}. \label{LIMITw}
\end{align}
But then, for each $q \in \mathbb{N}$, and for each $t \in [0, T]$,
\begin{align*}
\sum_{i=1}^q iw_i^l(t) \longrightarrow \sum_{i=1}^q i w_i(t), ~~~~\text{as}~~l \to \infty,
\end{align*}
and therefore, by \eqref{PMOMNT}, for any such $q$,
\begin{align}
\sum_{i=1}^q iw_i(t) \leq \|w_0\|_1. \label{QBOUND}
\end{align}
By letting $q \to \infty$, we obtain
\begin{align}\label{LIMITBOUND}
\sum_{i=1}^{\infty} iw_i(t) \leq \|w_0\|_1.
\end{align}
Since Lemma \ref{LER} implies $w_i^l(t) \geq 0$, and hence $w_i(t)\geq 0$ in the limit, this proves not only that $w(t) \in Y_1^+$ for each $ t\in [0,T]$, but also that the first condition of Definition \ref{DEF1} is satisfied.
We shall show that the limit function $w_i$ solves the system \eqref{NLDCBE}--\eqref{NLDCBEIC}. To achieve this result, we shall pass to the limit as $l\to \infty$ in the equation for $w^l_i$
\begin{align*}
w_i^l(t) = w_{0i} + \frac{1}{2}\int_0^t \sum_{j=i+1}^{l} \sum_{k=1}^{j-1} a_{j-k,k} B_{j-k,k}^i w_{j-k}^l(s) w_k^l(s) ds-\int_0^t \sum_{j=1}^{l-i}a_{i,j}w_i^l(s) w_j^l(s) ds.
\end{align*}
Hence, we need to prove that for all $t\in [0,T]$,
\begin{align}\label{LIMIT1}
\int_0^t \sum_{j=i+1}^{l} \sum_{k=1}^{j-1} a_{j-k,k} B_{j-k,k}^i w_{j-k}^l(s) w_k^l(s) ds \xrightarrow[]{l \to \infty}\int_0^t \sum_{j=i+1}^{\infty} \sum_{k=1}^{j-1} a_{j-k,k} B_{j-k,k}^i w_{j-k}(s) w_k(s) ds,
\end{align}
and
\begin{align}\label{LIMIT2}
\int_0^t \sum_{j=1}^{l-i}a_{i,j}w_i^l(s) w_j^l(s) ds \xrightarrow[]{l \to \infty} \int_0^t \sum_{j=1}^{\infty}a_{i,j}w_i(s) w_j(s) ds.
\end{align}
To begin, we prove that the right-hand side of \eqref{LIMIT1} is well defined. Let $p\geq i+1$ be a fixed positive integer. Recalling the definition of $(w_i)_{i\in \mathbb{N}}$, we know that
\begin{align*}
\sum_{j=i+1}^p \sum_{k=1}^{j-1} a_{j-k,k} B_{j-k,k}^i w_{j-k}^l w_k^l \xrightarrow[]{l \to \infty} \sum_{j=i+1}^p \sum_{k=1}^{j-1} a_{j-k,k} B_{j-k,k}^i w_{j-k} w_k
\end{align*}
and from \eqref{GME}, for all positive integers $l$ and $p$, we get
\begin{align*}
\sum_{j=i+1}^p \sum_{k=1}^{j-1} a_{j-k,k} B_{j-k,k}^i w_{j-k}^l w_k^l =& \sum_{i+1\leq j+k \leq p} a_{j,k} B_{j,k}^i w_j^l w_k^l \\
& \leq A \beta \sum_{j=1}^p \sum_{k=1}^{p+1-j} jk w_j^l w_k^l \leq A\beta \|w_0\|_1^2
\end{align*}
and thus also
\begin{align*}
\sum_{j=i+1}^p \sum_{k=1}^{j-1} a_{j-k,k} B_{j-k,k}^i w_{j-k} w_k \leq A \beta \|w_0\|_1^2.
\end{align*}
Owing to the fact that the right-hand side is independent of $p$ and all the terms are non-negative, we have
\begin{align*}
\sum_{j=i+1}^{\infty} \sum_{k=1}^{j-1} a_{j-k,k} B_{j-k,k}^i w_{j-k} w_k \leq A \beta \|w_0\|_1^2.
\end{align*}
Invoking the dominated convergence theorem, we can easily deduce that the right-hand side of \eqref{LIMIT1} is well defined for all $t \in (0, T)$, with $T <\infty$. The next step is to show that the limit in \eqref{LIMIT1} holds. Let $r$ be a fixed positive integer such that $i+1 \leq r < l < \infty$, then
\begin{align}
\Bigg| \int_0^t& \sum_{j=i+1}^{l} \sum_{k=1}^{j-1} a_{j-k,k} B_{j-k,k}^i w_{j-k}^l(s) w_k^l(s) ds -\int_0^t \sum_{j=i+1}^{\infty} \sum_{k=1}^{j-1} a_{j-k,k} B_{j-k,k}^i w_{j-k}(s) w_k(s) ds \Bigg| \nonumber\\
&\leq \int_0^t \sum_{j=i+1}^{r} \sum_{k=1}^{j-1} a_{j-k,k} B_{j-k,k}^i \big| w_{j-k}^l(s) w_k^l(s)-w_{j-k}(s) w_k(s)\big| ds \label{FT} \\
&+\int_0^t \sum_{j=r+1}^{l} \sum_{k=1}^{j-1} a_{j-k,k} B_{j-k,k}^i w_{j-k}^l(s) w_k^l(s) ds+ \int_0^t \sum_{j=r+1}^{\infty}\sum_{k=1}^{j-1} a_{j-k,k} B_{j-k,k}^i w_{j-k}(s) w_k(s) ds. \label{ST}
\end{align}
Our goal now is to demonstrate that the right-hand side of this inequality can be arbitrarily small when $l\to \infty$ by choosing a sufficiently large $r$.
Since the sum in \eqref{FT} has a fixed finite number of terms, each of which converges pointwise to zero, and its absolute value is bounded above by $2A\beta \|w_0\|_1^2$, it follows from the dominated convergence theorem that \eqref{FT} converges to zero as $l\to\infty$.
Define $\kappa_r= \|w_0\|_1 \sum_{i=r}^{\infty} i w_{0i}$. Clearly $\kappa_r \to 0$ as $ r\to \infty$. Now let us look at the integrals in \eqref{ST}. From \eqref{PMOMNT}, we infer that
\begin{align*}
\int_0^t \sum_{j=r+1}^{l} \sum_{k=1}^{j-1} a_{j-k,k}& B_{j-k,k}^i w_{j-k}^l(s) w_k^l(s) ds= \int_0^t \sum_{r+1\leq j+k\leq l} a_{j,k} B_{j,k}^i w_j^l(s) w_k^l(s) ds \\
& =\int_0^t \sum_{k=1}^{l-1} \sum_{j=r+1-k}^{l-k} a_{j,k} B_{j,k}^i w_j^l(s) w_k^l(s) ds +\int_0^t \sum_{j=r}^{l-1} \sum_{k=1}^{l-j} a_{j,k} B_{j,k}^i w_j^l(s) w_k^l(s) ds \\
&\leq 2A \beta \int_0^t \Big(\sum_{k=1}^{l-1} kw_k^l(s) \Big) \Big(\sum_{j=r}^{l-1} j w_{j}^l(s)\Big) ds \\
& \leq 2A \beta \int_0^t \kappa_r ds.
\end{align*}
Therefore, the first integral in \eqref{ST} can be made arbitrarily small by choosing $r$ large enough. Analogously, we prove the result for the second integral. For all $i+1 \leq r <q$ we have
\begin{align*}
\sum_{j=r+1}^{q} \sum_{k=1}^{j-1} a_{j-k,k} B_{j-k,k}^i w_{j-k}^l w_k^l \xrightarrow[]{l\to \infty}\sum_{j=r+1}^{q} \sum_{k=1}^{j-1} a_{j-k,k} B_{j-k,k}^i w_{j-k} w_k.
\end{align*}
By the computation above, the sum on the left-hand side is bounded by $2A\beta \kappa_r$, and so we also get
\begin{align*}
\sum_{j=r+1}^{q} \sum_{k=1}^{j-1} a_{j-k,k} B_{j-k,k}^i w_{j-k} w_k \leq 2A \beta \kappa_r
\end{align*}
for all $q$. Since this bound is uniform in $q$, we have
\begin{align*}
\sum_{j=r+1}^{q} \sum_{k=1}^{j-1} a_{j-k,k} B_{j-k,k}^i w_{j-k} w_k \xrightarrow[]{q \to \infty} \sum_{j=r+1}^{\infty} \sum_{k=1}^{j-1} a_{j-k,k} B_{j-k,k}^i w_{j-k} w_k \leq 2A \beta\kappa_r.
\end{align*}
As a consequence of the dominated convergence theorem, the second integral in \eqref{ST} can be made arbitrarily small by choosing sufficiently large $r$ and $l$. This completes the proof of \eqref{LIMIT1}.
Our next step is to show that \eqref{LIMIT2} holds. Let $r$ be a fixed positive integer such that $1 \leq r+i+1< l < \infty$. Then
\begin{align}
\Bigg|\int_0^t &\sum_{j=1}^{l-i} a_{i,j} w_i^l(s) w_j^l(s)ds - \int_0^t \sum_{j=1}^{\infty} a_{i,j} w_i(s) w_j(s) ds\Bigg| \leq \nonumber\\
&\leq \int_0^t \sum_{j=1}^r a_{i,j} \big|w_i^l(s) w_j^l(s)- w_i(s) w_j(s)\big| ds+\label{FFT}\\
&+ \int_0^t \sum_{j=r+1}^{l-i} a_{i,j} w_i^l(s) w_j^l(s) ds +\int_0^t \sum_{j=r+1}^{\infty} a_{i,j} w_i(s) w_j(s) ds \label{SST}
\end{align}
and our goal is to show that, given a sufficiently large value of $r$, the right-hand side of the above inequality can be made arbitrarily small when $l\to \infty$.
We next infer from \eqref{PMOMNT} that
\begin{align*}
\int_0^t \sum_{j=r+1}^{l-i} a_{i,j} w_i^l(s) w_j^l(s) ds \leq& A \int_0^t \sum_{j=r+1}^l i w_i^l(s)\, j w_j^l(s) ds \\
\leq & A \int_0^t \|w_0\|_1 \sum_{j=r+1}^l j w_j^l(s) ds \leq A\int_0^t \kappa_r ds \leq AT\kappa_r,
\end{align*}
and so the first integral in \eqref{SST} can be made arbitrarily small by choosing $r$ sufficiently large. For the second integral, the result is proved in an analogous way. For all $1 \leq r < q$, we have
\begin{align*}
\sum_{j=r+1}^q a_{i,j} w_i^l w_j^l \xrightarrow[]{l \to \infty} \sum_{j=r+1}^{\infty} a_{i,j} w_i w_j.
\end{align*}
Using \eqref{PMOMNT}, we notice that the sum on the left-hand side is bounded by $A \kappa_r$, which implies that
\begin{align*}
\sum_{j=r+1}^q a_{i,j} w_i w_j \leq A \kappa_r
\end{align*}
for all $q$. Since this bound is uniform in $q$, we have
\begin{align*}
\sum_{j=r+1}^q a_{i,j} w_i w_j \xrightarrow[]{q \to \infty} \sum_{j=r+1}^{\infty} a_{i,j} w_i w_j.
\end{align*}
Therefore, using the dominated convergence theorem, the second integral in \eqref{SST} can be made arbitrarily small by choosing $r$ and $l$ large enough. We have thus shown that $w=(w_i)$ is a solution to \eqref{NLDCBE}--\eqref{NLDCBEIC}.
\par
To complete the proof of Theorem \ref{MAINTHEOREM}, it remains to prove that $w$ is mass conserving. Therefore, considering $\psi_i=i$ in \eqref{MCE}, we obtain
\begin{align}
\sum_{i=1}^l i (w_i(t_2)- w_i(t_1))=& \frac{1}{2} \int_{t_1}^{t_2}\sum_{j=1}^l \sum_{k=1}^l \Big( \sum_{i=1}^{l} iB_{j,k}^i - j -k \Big) a_{j,k} w_j(s) w_k(s) ds\nonumber \\
&+\int_{t_1}^{t_2}\sum_{j=1}^l \sum_{k=l+1}^{\infty} \Big( \sum_{i=1}^{l} iB_{j,k}^i - j \Big) a_{j,k} w_j(s) w_k(s) ds\nonumber \\
&+\frac{1}{2}\int_{t_1}^{t_2} \sum_{j=l+1}^{\infty} \sum_{k=l+1}^{\infty} \sum_{i=1}^l i B_{j,k}^ia_{j,k} w_j(s) w_k(s) ds. \label{FSFE}
\end{align}
On the one hand \eqref{LMC1}--\eqref{NMT} entail that
\begin{align}
\sum_{i=1}^l i B_{j,k}^i = \sum_{i=1}^j ib_{i,j;k} + \sum_{i=1}^k ib_{i,k;j}=j+k \label{NMT1}
\end{align}
for $j,k \in\{1,2,\cdots,l\}$, and the first term of the right hand side of \eqref{FSFE} is equal to zero. On the other hand, using again \eqref{LMC1}--\eqref{NMT}, we obtain
\begin{align}
\sum_{i=1}^l i B_{j,k}^i = \sum_{i=1}^j ib_{i,j;k} + \sum_{i=1}^l ib_{i,k;j}\label{NMT2}
\end{align}
for $j \in\{1,2,\cdots,l\}$ and $k\geq l+1$, which yields a non-negative lower bound for the second term of the right-hand side of \eqref{FSFE}. Therefore \eqref{FSFE} yields
\begin{align*}
\sum_{i=1}^l i (w_i(t_2)- w_i(t_1))\geq& \frac{1}{2}\int_{t_1}^{t_2} \sum_{j=1}^{l} \sum_{k=l+1}^{\infty} \sum_{i=1}^l i b_{i,k;j}a_{j,k} w_j(s) w_k(s) ds \\
&+ \frac{1}{2}\int_{t_1}^{t_2} \sum_{k=1}^{l} \sum_{j=l+1}^{\infty} \sum_{i=1}^l i b_{i,j;k}a_{j,k} w_j(s) w_k(s) ds.
\end{align*}
which implies
\begin{align*}
\sum_{i=1}^l i w_i(t_2)\geq \sum_{i=1}^l i w_i(t_1)+ \frac{1}{2}\int_{t_1}^{t_2} \sum_{j=1}^{l} \sum_{k=l+1}^{\infty} \sum_{i=1}^l i b_{i,k;j}a_{j,k} w_j(s) w_k(s) ds.
\end{align*}
Taking $t_1=0$ and $t_2=t$, we have
\begin{align*}
\sum_{i=1}^l i w_i(t) \geq \sum_{i=1}^l i w_{0i}+ \frac{1}{2}\int_{0}^{t} \sum_{j=1}^{l} \sum_{k=l+1}^{\infty} \sum_{i=1}^l i b_{i,k;j}a_{j,k} w_j(s) w_k(s) ds.
\end{align*}
The second term on the right-hand side of the above inequality tends to zero as $l\to\infty$ as a consequence of \eqref{IP}. Hence, letting $l \to \infty$, we have
\begin{align}
\sum_{i=1}^{\infty} i w_i(t) \geq \sum_{i=1}^{\infty} i w_{0i}. \label{UBMC}
\end{align}
Combining \eqref{LIMITBOUND} and \eqref{UBMC}, we get the mass conservation property of the solution.
\end{proof}
Next, we prove that the sequence $(w^l)$ of solutions to the truncated system, which converges to the solution $w$ of \eqref{NLDCBE}--\eqref{NLDCBEIC}, indeed does so in the strong topology of $Y_1$, uniformly for $t$ in compact subsets of $[0,\infty)$.
\begin{corollary}\label{COR1}
Let $w^l$ be the pointwise convergent subsequence of solutions to \eqref{FDNLBE}--\eqref{FDNLBEIC}. Then $w^l \longrightarrow w$ in $Y_1$ uniformly on compact subsets of $[0,\infty)$.
\end{corollary}
\begin{proof}
First we prove that, for each $i$, $w_i^l(t) \to w_i(t)$ uniformly on compact subsets of $[0,+\infty)$. For this it is clearly sufficient to show that for each $r > 1$,
\begin{align*}
\Delta_r^l(t):= e^{-t} \Big[\varrho^l(0) -\sum_{i=1}^{r-1} iw_i^l(t) + 4r A\varrho^l(0)^2\Big]
\end{align*}
converges to
\begin{align*}
\Delta_r(t):= e^{-t} \Big[\varrho(0) -\sum_{i=1}^{r-1} iw_i(t) + 4r A\varrho(0)^2\Big]
\end{align*}
uniformly on compact subsets of $[0, \infty)$, where $\varrho(0)=\sum_{i=1}^{\infty} i w_i(0)$. But this follows from the pointwise convergence of $\Delta_r^l(t)$ to the continuous function $\Delta_r(t)$ and the fact that, by Lemmas \ref{LER} and \ref{LEMMALEQ},
\begin{align*}
\frac{d}{dt}\Delta_r^l(t) \leq 0, \hspace{.3cm}t\in[0,T),~~~l\geq r.
\end{align*}
Let $I\subset [0,\infty)$ be compact and let $t_l\to t$ in $I$; then
\begin{align*}
\lim_{l\to \infty} \|w^l(t_l) \|_1= \lim_{l\to \infty} \|w(t_l) \|_1= \|w(t) \|_1,
\end{align*}
which implies that $w^l \to w$ in $C(I,Y_1)$, as required.
\end{proof}
In the next section, the issue we consider is whether, given $w^0 \in Y_1^+$ such that $ \sum_{i=1}^{\infty} i^{\alpha}w_i^0 <\infty $ for some $\alpha >1 $, the solution $w$ to \eqref{NLDCBE}--\eqref{NLDCBEIC} constructed in Theorem \ref{MAINTHEOREM} enjoys the same property throughout the time evolution, that is, whether $ \sum_{i=1}^{\infty} i^{\alpha}w_i(t) <\infty $ for $t\in (0,\infty)$.
\section{Propagation of moments, Uniqueness and Continuous Dependence on Initial Data}\label{PMCDID}
\begin{prop}\label{PMPROP}
Let $T\in (0, \infty)$ and assume that the assumptions \eqref{ASYMM}--\eqref{QUADGROWTH} and \eqref{NMT} are fulfilled. Further, assume that $ w^0 \in Y_1^+$ is such that
\begin{align}
\sum_{i=1}^{\infty} i^{\alpha} w_i^0 < \infty \label{ALPHAMOMNTIN}
\end{align}
for some $\alpha >1 $. Then the solution $w$ to \eqref{NLDCBE}--\eqref{NLDCBEIC} constructed in Theorem \ref{MAINTHEOREM} on $[0,T)$ satisfies
\begin{align}
\sup_{t\in[0,T]} \sum_{i=1}^{\infty} i^{\alpha} w_i(t) <\infty. \label{ALPHAMOMNT}
\end{align}
\end{prop}
\begin{proof}
We know from \eqref{LIMITw} that
\begin{align}
\lim_{l\to \infty } w_i^l(t) = w_i(t)
\end{align}
for each $t \in [0, + \infty)$ and $i \geq 1$, where $w^l$ denotes the solution to \eqref{FDNLBE}--\eqref{FDNLBEIC} given by Lemma \ref{LER}. On taking $\mu_i= i^{\alpha}$ and $r=1$ in \eqref{GME}, we get
\begin{align*}
\frac{d}{dt} \sum_{i=1}^l i^{\alpha} w_i^l = \frac{1}{2} \sum_{i=1}^l \sum_{j=1}^{l-i} \Big( \sum_{s=1}^{i+j-1} s^{\alpha} B_{i,j}^s - i^{\alpha} - j^{\alpha} \Big) a_{i,j} w_i^l w_j^l.
\end{align*}
Using \eqref{NMT}, above equation reduces to
\begin{align*}
\frac{d}{dt} \sum_{i=1}^l i^{\alpha} w_i^l = \frac{1}{2} \sum_{i=1}^l \sum_{j=1}^{l-i} \Big( \sum_{s=1}^{i-1} s^{\alpha} b_{s;i,j} +\sum_{s=1}^{j-1} s^{\alpha} b_{s;j,i} - i^{\alpha} - j^{\alpha} \Big) a_{i,j} w_i^l w_j^l.
\end{align*}
Since the function $s\longmapsto s^{\alpha}$ is convex for $\alpha>1$, so that $s^{\alpha}\leq s\, i^{\alpha-1}$ whenever $1\leq s\leq i$, \eqref{LMC} yields
$$ \sum_{s=1}^{i-1} s^{\alpha} b_{s;i,j} \leq i^{\alpha},~~~ i\geq 2,j\geq 1 \hspace{.5cm}\text{and} \hspace{.5cm}\sum_{s=1}^{j-1} s^{\alpha} b_{s;j,i} \leq j^{\alpha},~~~ j\geq 2,i\geq 1.$$
Hence
\begin{align*}
\frac{d}{dt} \sum_{i=1}^l i^{\alpha} w_i^l \leq 0,
\end{align*}
which implies
\begin{align*}
\sum_{i=1}^l i^{\alpha} w_i^l \leq \sum_{i=1}^l i^{\alpha} w_{0i} \leq \sum_{i=1}^{\infty} i^{\alpha} w_{0i}.
\end{align*}
With the help of \eqref{ALPHAMOMNTIN} we may pass to the limit as $l \to \infty$ in the above inequality and obtain
\begin{align*}
\sum_{i=1}^{\infty} i^{\alpha} w_i(t) \leq \sum_{i=1}^{\infty} i^{\alpha} w_i^0.
\end{align*}
This concludes the proof of Proposition \ref{PMPROP}.
\end{proof}
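The key inequality in the proof, $\sum_{s=1}^{i-1} s^{\alpha} b_{s;i,j} \leq i^{\alpha}$, can be sanity-checked on a concrete toy fragmentation rule. The rule below is an assumption made purely for illustration and is partner-independent (a cluster of size $p\geq 3$ splits into a monomer and a $(p-1)$-cluster, a $2$-cluster into two monomers); it satisfies the local mass conservation $\sum_s s\, b(s,p)=p$.

```python
# Toy check of sum_s s^alpha * b(s, p) <= p^alpha for alpha >= 1.
# The fragmentation rule b is an assumption made for this illustration only.
def b(s, p):
    if p == 1:
        return 1 if s == 1 else 0
    if p == 2:
        return 2 if s == 1 else 0        # 2 -> 1 + 1
    return 1 if s in (1, p - 1) else 0   # p -> (p-1) + 1

ok = True
for alpha in (1.0, 1.5, 2.0, 3.0):
    for p in range(1, 200):
        lhs = sum(s ** alpha * b(s, p) for s in range(1, p + 1))
        ok = ok and lhs <= p ** alpha + 1e-9
print(ok)   # True
```

With $\alpha=1$ the bound is attained with equality, which is exactly the local mass conservation underlying \eqref{NMT}; for $\alpha>1$ the inequality is strict for $p\geq 3$ by superadditivity of $s\mapsto s^{\alpha}$.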
Next, we impose a stronger assumption on the collision kernel, namely
\begin{align}
a_{i,j} \leq A_{\gamma} (ij)^{\gamma}, \hspace{.2cm} \gamma\in [0,1]. \label{AGAMMA}
\end{align}
Now we establish the uniqueness result for \eqref{NLDCBE}--\eqref{NLDCBEIC}. This is achieved by assuming that there are two solutions to the initial value problem and demonstrating that they are equal, which is accomplished, as for the usual coagulation--fragmentation equations, with the help of Gronwall’s inequality. The proof involves slightly more restrictive constraints on the collision kernel and the initial condition than those used in the existence result.
\begin{prop}\label{UNIQPROP}
Assume that the assumptions \eqref{ASYMM}, \eqref{NMT} and \eqref{AGAMMA} are fulfilled.
Consider $w^0 \in Y_1^+$ such that
\begin{align}
\sum_{i=1}^{\infty} i^{1+\gamma} w_i^0 <\infty. \label{GAMINIT}
\end{align}
Then there is a unique solution $w$ to \eqref{NLDCBE}--\eqref{NLDCBEIC} on $[0,+\infty)$ satisfying
\begin{align}
\sup_{t\in [0,T]} \sum_{i=1}^{\infty} i^{1+\gamma} w_i(t) <\infty \label{GAMMMNT}
\end{align}
for each $T\in(0,+\infty)$.
\end{prop}
\begin{proof}
Since $\gamma \in [0, 1]$, it follows from \eqref{AGAMMA} that $a_{i,j}$ satisfies \eqref{QUADGROWTH},
and the existence of a solution to \eqref{NLDCBE}--\eqref{NLDCBEIC} on $[0,+\infty)$ with the properties
stated in Proposition \ref{UNIQPROP} is a consequence of Theorem \ref{MAINTHEOREM} and Proposition \ref{PMPROP}.
Suppose the initial value problem for \eqref{NLDCBE}--\eqref{NLDCBEIC} with the initial condition $w(0) = w_0 \in Y_1^+$ satisfying \eqref{GAMINIT} has two solutions, $w$ and $\hat{w}$, on $[0,+\infty)$ satisfying the property \eqref{GAMMMNT}. We shall prove that $w \equiv \hat{w} $ by showing that the sum $\sum_{i=1}^{\infty} i |\eta_i| $ is identically zero, where, for $i\geq 1$,
\begin{align*}
\eta(t) = w(t)- \hat{w}(t) \hspace{.4cm} \text{and} \hspace{.4cm}\theta_i = \sgn(\eta_i),
\end{align*}
with $\sgn(h) = h /|h| $ if $ h\in \mathbb{R} \setminus \{0\}$ and $\sgn(0) =0$.
Now, we infer from \eqref{MCE} that
\begin{align}
\sum_{i=1}^l i |\eta_i(t)| = \int_0^t \sum_{m=1}^3 \Delta_m^l(s) ds, \label{DEL}
\end{align}
where
\begin{align}
\Delta_1^l = \frac{1}{2} \sum_{i=1}^l \sum_{j=1}^{l} \Big( \sum_{s=1}^{l} s\theta_s B_{i,j}^s - i\theta_i - j\theta_j\Big) a_{i,j} (w_i w_j -\hat{w}_i\hat{w}_j),
\end{align}
\begin{align}
\Delta_2^l = \sum_{i=1}^l \sum_{j= l+1}^{\infty}\Big(\sum_{s=1}^l s\theta_s B_{i,j}^s -i\theta_i\Big) a_{i,j} (w_iw_j - \hat{w}_i\hat{w}_j),
\end{align}
\begin{align}
\Delta_3^l = \frac{1}{2} \sum_{i=l+1}^{\infty}\sum_{j=l+1}^{\infty} \sum_{s=1}^l s\theta_s B_{i,j}^s a_{i,j} (w_{i} w_j - \hat{w}_{i} \hat{w}_j).
\end{align}
From \eqref{NMT1}, it follows that
\begin{align*}
\Bigg( \sum_{s=1}^{i+j-1} s \theta_s B_{i,j}^s - i\theta_i - j\theta_j\Bigg) \eta_i&= \Bigg( \sum_{s=1}^{i} s\theta_s \theta_i b_{s,i;j} +\sum_{s=1}^{j} s\theta_s \theta_i b_{s,j;i} - i -j \theta_i \theta_j\Bigg) |\eta_i| \\
& \leq 2j |\eta_i|.
\end{align*}
The first term $\Delta_1^l$ can be estimated as follows:
\begin{align*}
\Delta_1^l& \leq \sum_{i=1}^{l} \sum_{j=1}^{l} a_{i,j} \big(jw_j|\eta_i| + i \hat{w}_i|\eta_j|\big),
\end{align*}
hence by \eqref{ASYMM} and \eqref{AGAMMA}, we have
\begin{align}
\Delta_1^l \leq A_{\gamma} \Big( \sum_{i=1}^l i^{1+\gamma} (w_i + \hat{w}_i)\Big) \sum_{j=1}^l j |\eta_j|. \label{DEL1}
\end{align}
Next, we deduce from \eqref{NMT2} and \eqref{AGAMMA} that
\begin{align*}
\int_0^t \Bigg| \sum_{i=1}^l \sum_{j=l+1}^{\infty} (i+j) a_{i,j} w_i w_j \Bigg| ds \leq A_{\gamma} \int_0^t \sum_{i=1}^l \sum_{j=l+1}^{\infty} (i^{1+\gamma} j^{\gamma}+j^{1+\gamma} i^{\gamma}) w_i w_j ds,
\end{align*}
and using \eqref{GAMMMNT}, we obtain
\begin{align*}
\lim_{l \to +\infty} \int_0^t \Bigg | \sum_{i=1}^l \sum_{j=l+1}^{\infty} (i+j) a_{i,j} w_i w_j \Bigg| ds =0,
\end{align*}
from which we conclude that
\begin{align}
\lim_{l\to \infty} \Delta_2^l =0. \label{DEL2}
\end{align}
In a similar vein, we can show that
\begin{align}
\lim_{l\to \infty} \Delta_3^l =0. \label{DEL3}
\end{align}
On substituting \eqref{DEL1}, \eqref{DEL2} and \eqref{DEL3} into \eqref{DEL}, we arrive at
\begin{align*}
\sum_{i=1}^{l} i |\eta_i(t)| \leq & A_{\gamma} \int_0^t \Big( \sum_{i=1}^{l} i |\eta_i(s)|\Big)\Big(\sum_{j=1}^{l} j^{1+\gamma} \big(w_j(s)+\hat{w}_j(s)\big)\Big) ds.
\end{align*}
Finally, we use Gronwall's lemma to complete the proof of Proposition \ref{UNIQPROP}.
\end{proof}
Next, we prove the following result in terms of continuous dependence with respect to the initial conditions:
\begin{prop}
Assume that the assumptions of Proposition \ref{UNIQPROP} hold, and let $w$ and $\hat{w}$ be solutions of \eqref{NLDCBE}--\eqref{NLDCBEIC} with initial conditions $w(0)= w_0$ and $\hat{w}(0) = \hat{w}_0$ satisfying \eqref{GAMINIT}. Then, for each $t\geq 0$, there is a positive constant $\kappa(t,\|w_0\|_{1+\gamma},\|\hat{w}_0\|_{1+\gamma})$ such that
\begin{align}
\|w(t)- \hat{w}(t) \|_1 \leq \kappa(t,\|w_0\|_{1+\gamma},\|\hat{w}_0\|_{1+\gamma} )\|w_0- \hat{w}_0 \|_1.\label{CD1}
\end{align}
\end{prop}
\begin{proof}
Since $w$ and $\hat{w}$ are solutions of \eqref{NLDCBE}--\eqref{NLDCBEIC} having initial conditions $w_0$ and $\hat{w}_0$ respectively, we can write
\begin{align*}
w_i(t) = w_{0i} + \int_0^t \Big[ \frac{1}{2}\sum_{j=i+1}^{\infty} \sum_{k=1}^{j-1} B_{j-k,k}^i a_{j-k,k} w_{j-k}(s) w_k(s) -\sum_{j=1}^{\infty} a_{i,j} w_i(s)w_j(s) \Big] ds,
\end{align*}
\begin{align*}
\hat{w}_i(t) = \hat{w}_{0i} + \int_0^t \Big[\frac{1}{2} \sum_{j=i+1}^{\infty} \sum_{k=1}^{j-1} B_{j-k,k}^i a_{j-k,k} \hat{w}_{j-k}(s) \hat{w}_k(s) -\sum_{j=1}^{\infty} a_{i,j} \hat{w}_i(s)\hat{w}_j(s) \Big] ds,
\end{align*}
and defining $\zeta(t) =w(t)-\hat{w}(t) $ and $\psi_i= i\sgn(\zeta_i)$, we perform the same estimates as in the proof of Proposition \ref{UNIQPROP} to obtain,
\begin{align*}
\sum_{i=1}^{l} i |\zeta_i(t)| \leq & \sum_{i=1}^{l} i |\zeta_i(0)| + A_{\gamma} \int_0^t\Big( \sum_{i=1}^{l} i |\zeta_i(s)|\Big) \Big(\sum_{j=1}^{l} j^{1+\gamma} \big(w_j(s)+\hat{w}_j(s)\big)\Big) ds.
\end{align*}
By using Gronwall's lemma, we obtain the estimate \eqref{CD1}.
\end{proof}
In the following section, we will demonstrate that the solution to the non-linear breakage model is differentiable in the classical sense.
\section{Differentiability of the solutions}\label{DOS}
The next theorem is the main result needed to prove the differentiability of the solution.
\begin{theorem}
Let \eqref{IP} and \eqref{AGAMMA} hold and $w$ be a solution of \eqref{NLDCBE}--\eqref{NLDCBEIC} on $[0,T)$ where $0<T\leq \infty$. Then, for every $r\in \mathbb{N}$
\begin{align}
\sum_{i=r}^{\infty} i w_i(t_2)-\sum_{i=r}^{\infty} i w_i(t_1)=&\frac{1}{2}\int_{t_1}^{t_2} \sum_{j=r}^{\infty} \sum_{k=r}^{\infty} \Bigg( \sum_{i=r}^{j+k-1} iB_{j,k}^i - j -k \Bigg)a_{j,k} w_j(s) w_k(s) ds \nonumber \\
&+ \frac{1}{2}\int_{t_1}^{t_2}\sum_{j=1}^{r-1}\sum_{k=1}^{r-1} \sum_{i=r}^{j+k-1} iB_{j,k}^i a_{j,k} w_j(s) w_k(s) ds\nonumber \\
&+\int_{t_1}^{t_2}\sum_{j=1}^{r-1}\sum_{k=r}^{\infty} \Big(\sum_{i=r}^{j+k-1} iB_{j,k}^i -k\Big) a_{j,k} w_j(s) w_k(s) ds. \label{TAILEQ}
\end{align}
\end{theorem}
\begin{proof}
Let $1 \leq r \leq l$. On multiplying each equation in \eqref{IVOE} by $\psi_i$ and taking summation over $i$ from $r$ to $l$, we obtain
\begin{align}
\sum_{i=r}^l \psi_i &(w_i(t_2)- w_i(t_1))= \int_{t_1}^{t_2}\Bigg[ \frac{1}{2}\sum_{S_1} \Bigg( \sum_{i=r}^{j+k-1} \psi_iB_{j,k}^i - \psi_j -\psi_k \Bigg)+\frac{1}{2}\sum_{S_2} \sum_{i=r}^{j+k-1} \psi_iB_{j,k}^i \nonumber \\
&+ \sum_{S_3} \Bigg(\sum_{i=r}^{j+k-1} \psi_i B_{j,k}^i - \psi_k\Bigg)+ \sum_{S_4}\Bigg(\frac{1}{2}\sum_{i=r}^l \psi_i B_{j,k}^i-\psi_j\Bigg) + \frac{1}{2}\sum_{S_5}\sum_{i=r}^l \psi_i B_{j,k}^i \nonumber\\
&+\frac{1}{2}\sum_{S_6}\sum_{i=r}^l \psi_i B_{j,k}^i\Bigg] a_{j,k} w_j(s) w_k(s) ds\label{GFSFE}
\end{align}
where
\begin{align*}
S_1 &= \{(j,k):~~~j,k\geq r,~~~j+k\leq l\}\\
S_2 &= \{(j,k):~~~j,k< r,~~~r\leq j+k \leq l\}\\
S_3 &= \{ (j,k):~~~1 \leq j \leq r-1,k\geq r, j+k\leq l\} \\
S_4 &= \{ (j,k):~~~r \leq j \leq l, j+k> l\}\\
S_5 &= \{ (j,k):~~~1\leq j\leq r-1, k \geq l-j+1\}\\
S_6 &= \{ (j,k):~~~j\geq l+1, k \geq 1\}
\end{align*}
with the sums equal to zero if the associated region is empty. (Note that $S_2$, $S_3$ and $S_5$ are empty if $r = 1$.)
On taking $\psi_i=i$ in \eqref{GFSFE}, we obtain
\begin{align*}
\sum_{i=r}^l i &(w_i(t_2)- w_i(t_1))= \int_{t_1}^{t_2}\Bigg[ \frac{1}{2}\sum_{S_1} \Bigg( \sum_{i=r}^{j+k-1} iB_{j,k}^i - j -k \Bigg)+\frac{1}{2}\sum_{S_2} \sum_{i=r}^{j+k-1} iB_{j,k}^i \nonumber \\
&+ \sum_{S_3} \Bigg(\sum_{i=r}^{j+k-1} i B_{j,k}^i - k\Bigg)+ \sum_{S_4}\Bigg(\frac{1}{2}\sum_{i=r}^l i B_{j,k}^i-j\Bigg) + \frac{1}{2}\sum_{S_5}\sum_{i=r}^l i B_{j,k}^i \nonumber\\
&+\frac{1}{2}\sum_{S_6}\sum_{i=r}^l i B_{j,k}^i\Bigg] a_{j,k} w_j(s) w_k(s) ds.
\end{align*}
Under the condition \eqref{IP}, the integrals whose summation regions are $S_4$, $S_5$ and $S_6$ converge to zero as $l\to \infty$, which yields \eqref{TAILEQ}.
\end{proof}
In the following proposition, we will address the issue of the differentiability of solutions.
\begin{prop}\label{DIFFPROP}
Let $a_{i,j}$ satisfy \eqref{ASYMM} and \eqref{AGAMMA}, and let condition \eqref{IP} hold. Let $w=(w_i)$ be a solution of \eqref{NLDCBE} on some interval $[0,T]$, $0<T\leq \infty$, with an initial condition having finite $(1+\gamma)$-th moment. Then the sums of the series $\frac{1}{2} \sum_{j=i+1}^{\infty} \sum_{k=1}^{j-1} B_{j-k,k}^i a_{j-k,k} w_{j-k}(t) w_k(t)$ and $\sum_{j=1}^{\infty} a_{i,j} w_i(t) w_j(t)$ are absolutely continuous on compact sub-intervals of $[0, T]$.
\end{prop}
\begin{proof}
It is enough to show the boundedness of $\sum_{j=1}^{\infty} \sum_{k=1}^{\infty} (j+k) a_{j,k} w_j w_k $ for \eqref{IP} to hold. Since
\begin{align*}
\sum_{j=1}^{\infty} \sum_{k=1}^{\infty} (j+k) a_{j,k} w_j w_k &\leq 2A_{\gamma}\sum_{j=1}^{\infty} j^{1+\gamma} w_j \sum_{k=1}^{\infty} k w_k\\
&\leq 2A_{\gamma} \|w_0\|_{1+\gamma} \|w_0\|_1,
\end{align*}
it follows that \eqref{IP} holds for any $t_1, t_2 \in [0, T)$. Therefore, taking $r=1$ for $t\in [0,T]$, equation \eqref{TAILEQ} implies the uniform convergence of the series $\sum_{i=1}^{\infty} i w_i(t)$. Since the series $ \sum_{j=i+1}^{\infty}\sum_{k=1}^{j-1} B_{j-k,k}^i a_{j-k,k} w_{j-k} w_k $ is dominated by this series, we conclude its uniform convergence as well. Now the boundedness of $w_i(t)$ ensures the absolute continuity of $\sum_{j=i+1}^{\infty}\sum_{k=1}^{j-1} B_{j-k,k}^i a_{j-k,k} w_{j-k} w_k$. Also, the series $\sum_{j=1}^{\infty} a_{i,j} w_j(t)$ is dominated by $\sum_{i=1}^{\infty} i w_i(t)$, resulting in its uniform convergence. Finally, we obtain the desired result using the boundedness of $w_i(t)$.
\end{proof}
Definition \ref{DEF1}(1), \eqref{IP} and Proposition \ref{DIFFPROP} ensure that the solution $w$ is differentiable in the classical sense on $[0, T)$.
\section{Some Invariance properties of solutions}\label{IPOS}
It is natural to expect that, under the no-mass-transfer condition \eqref{NMT}, if there are no clusters larger than $m$ at the beginning of the physical process, then none will be generated afterwards. This is established in the next proposition.
\begin{prop}
Assume that \eqref{NMT} holds and that the Cauchy problem \eqref{NLDCBE}--\eqref{NLDCBEIC} has unique solutions. Then, for every $m\in \mathbb{N}$, the sets
\begin{align*}
Y^{\sharp m} := \{w \in Y_1^+ | w_i=0,~~~ \forall i>m \}
\end{align*}
are positively invariant for \eqref{NLDCBE}--\eqref{NLDCBEIC}.
\end{prop}
\begin{proof}
Let $w$ be a solution to \eqref{NLDCBE} such that $w(\tau) = w_0 \in Y^{\sharp m}$, for some $\tau \geq 0$. We know that \eqref{NLDCBE}--\eqref{NLDCBEIC} reduces to \eqref{SNLBE}--\eqref{SNLBEIC} when condition \eqref{NMT} holds. Hence, let $w^m(\cdot)$ be the unique solution of the $m$-dimensional Cauchy problem
\begin{align*}
\dot{w}_i^m =& \sum_{j=i+1}^{m} \sum_{k=1}^{m} b_{i,j,k} a_{j,k} w_j w_k -\sum_{k=1}^{m} a_{i,k} w_i w_k,\\
w_i^m(\tau) &= w_{0i},
\end{align*}
for $i=1, \cdots, m$ (with the first sum defined to be zero if $i=m$). Then the function $(w_1^m, w_2^m, \cdots, w_m^m, 0,0, \cdots)$ is a solution of the infinite-dimensional system \eqref{SNLBE}--\eqref{SNLBEIC} and, by uniqueness, it must be the solution $w$. As a result, for all $t\geq \tau$, we have $w_i(t)=0$ for $i=m+1, m+2, \cdots$, that is, $w(t) \in Y^{\sharp m}$ for all $t\geq \tau$, proving the result.
\end{proof}
This invariance condition also appears in linear fragmentation equations: if the original cluster distribution contains no clusters larger than $m$, then they cannot be formed by fragmentation of the (smaller) ones that are already there.
\par
In the upcoming section, we discuss the large-time behaviour of solutions; our result follows the proof of \cite[Proposition 4.1]{Laurencot 2001}, where the corresponding statement was proved for collision kernels with linear growth.
\section{On the large-time behaviour of solutions} \label{LTBOS}
The large-time behaviour of solutions is investigated in this section. In this model, as previously stated, a cluster only forms smaller fragments after colliding. As a result, we anticipate that only $1$-clusters will be left in the long-time limit.
\begin{prop}
Let $a_{i,j} \leq A ij$ and let \eqref{LMC}, \eqref{NMT}, and \eqref{GMC} be satisfied. For $w^0 \in Y^+$, there is a solution $w$ to \eqref{NLDCBE}--\eqref{NLDCBEIC} on $[0,\infty)$, and there is $w^{\infty} = (w_i^{\infty})\in Y^+$ such that
\begin{align}
\lim_{t \to \infty} \|w(t) - w^{\infty}\|_{Y} = 0 . \label{WINFTYLIM}
\end{align}
Moreover, if $i \geq 2$ is such that $a_{i,i}\neq 0$, we have
\begin{align}
w_i^{\infty} = 0. \label{WINFZERO}
\end{align}
\end{prop}
\begin{remark}
In particular, if $a_{i,i} >0$ for each $i \geq 2$, then $w_i^{\infty} = 0$ for every $i\geq 2$; mass conservation and \eqref{WINFTYLIM} then entail that $w_1^{\infty} = \|w^0 \|_Y$.
\end{remark}
\begin{proof}
Considering the identity \eqref{MCE} with $\psi_i=i$, we obtain
\begin{align}
\sum_{i=1}^{l} i w_i(t_2) - \sum_{i=1}^l i w_i(t_1) =& \int_{t_1}^{t_2} \sum_{j=l+1}^{\infty} \sum_{k=1}^{\infty} \sum_{i=1}^l i b_{i,j,k} a_{j,k} w_j(s) w_k(s) ds \geq 0. \label{LT1}
\end{align}
The first consequence of \eqref{LT1} is that the function
\begin{align}
S_l: t \mapsto \sum_{i=1}^l i w_i(t) \hspace{.3cm} \text{is a non-decreasing function on}\hspace{.2cm} [0,+\infty). \label{LT2}
\end{align}
Owing to \eqref{QBOUND}, the function $S_l$ is also bounded from above, so it must converge to some constant $q_l^{\infty}\geq 0$. Since $\sum_{i=1}^{l-1} i w_i(t) \leq \sum_{i=1}^{l} i w_i(t)$, we have $q_{l}^{\infty}\geq q_{l-1}^{\infty}$. Then for all $l \in \mathbb{N}$ we have
\begin{align}
w_l(t) = \frac{1}{l}(S_l(t) -S_{l-1}(t)) \xrightarrow[]{t \to \infty} \frac{1}{l}\left(q_{l}^{\infty}- q_{l-1}^{\infty}\right):=w_l^{\infty}. \label{LT3}
\end{align}
Furthermore, as $w(t)\in Y^+$ for each $t \geq 0$, the convergence \eqref{LT3} ensures that $w^{\infty}:=(w_l^{\infty})$ belongs to $Y^+$.
Also, \eqref{LT2} and \eqref{GMC} entail that
\begin{align*}
\sum_{i=l}^{\infty} i w_i(t) \leq \sum_{i=l}^{\infty} i w_i^0, \hspace{.2cm} l\geq 1, \hspace{.2cm} t\geq 0.
\end{align*}
This fact and \eqref{LT3} yield \eqref{WINFTYLIM}.\\
Finally, another consequence of \eqref{WINFTYLIM} and \eqref{LT1} is that
\begin{align*}
\int_0^{\infty} \sum_{j=l+1}^{\infty} \sum_{k=1}^{\infty} \sum_{i=1}^l i b_{i,j,k} a_{j,k} w_j(s) w_k(s) ds < \infty.
\end{align*}
Let $i\geq 2$ be such that $a_{i,i}>0$. Then the above estimate with $l=i-1$ and $j=k=i$ asserts that
\begin{align*}
\sum_{s=1}^{i-1} s b_{s,i,i} a_{i,i} w_i^2 = ia_{i,i} w_i^2 \in L^1(0,+\infty).
\end{align*}
Using \eqref{LT3}, we can deduce that $a_{i,i} (w_i^{\infty})^2=0$, resulting in \eqref{WINFZERO}.
\end{proof}
| 23,561 |
\section{Introduction}
Hamiltonian matrices arise in many applications related to
linear control theory for continuous-time systems \cite{Benner1} and quadratic eigenvalue problems \cite{Mermann1,Tiseur}. Deciding whether a certain Hamiltonian matrix $H$ has purely imaginary
eigenvalues is the most critical step in algorithms for computing the stability radius of a
matrix or the $H_\infty$ norm of a linear time-invariant system (see \cite{Boyd,Byers1}). QR-like algorithms that
achieve this goal have been developed in \cite{Benner2,Byers2,Van}, while Krylov subspace methods tailored
to Hamiltonian matrices can be found in \cite{Benner3,Benner4, Fern, Mermann2,Watkin}. An efficient strongly stable method for computing invariant subspaces of $H$ has been proposed in \cite{Chu}.
\begin{definition}
A $2n\times 2n$ real matrix $A$ is said to be a Hamiltonian matrix if it satisfies $A^TJ+JA=0$, where $J=\left(\begin{array}{cc}
0 & I_n \\
-I_n & 0
\end{array}\right)$.
\end{definition}
One of the main mathematical tools we shall use in this paper is a rank-$r$ perturbation result, due to Rado and introduced by Perfect in \cite{Perfect1}, which shows how to modify $r$ eigenvalues of an $n\times n$ matrix, $r<n$, via a rank-$r$ perturbation, without changing any of the remaining $n-r$ eigenvalues. This result has given rise to a number of sufficient conditions for the existence and construction of nonnegative matrices with prescribed real or complex spectrum, and also for the universal realizability of spectra, that is, spectra $\Lambda=\{\lambda_1,\ldots,\lambda_n\}$ which are realizable by an $n\times n$ nonnegative matrix for each Jordan canonical form associated with $\Lambda$ (see \cite{manza, Perfect1, Soto1,Soto2,Soto3, Soto4, SotoM} and the references therein).
\begin{theorem}[Rado, \cite{Perfect1}] \label{Rado} Let $A$ be an $n\times n $ arbitrary matrix with spectrum $\Lambda=\{\lambda_1,\lambda_2,\ldots, \lambda_n\}$. Let $X=[x_1|x_2|\cdots |x_r]$ be such that $rank(X)=r$ and $Ax_i=\lambda_i x_i$, $i=1,\ldots,r$, $r < n$. Let $C$ be an $r\times n$ arbitrary matrix. Then $A+XC$ has eigenvalues $\{\mu_1,\mu_2,\ldots,\mu_r,\lambda_{r+1},\ldots,\lambda_n\}$, where $\{\mu_1,\mu_2,\ldots,\mu_r\}$ are eigenvalues of the matrix $\Omega+CX$ with $\Omega=diag\{\lambda_1,\ldots, \lambda_r\}$.
\end{theorem}
The case $r=1$ in Theorem \ref{Rado} constitutes a well known result due to Brauer \cite[Theorem 27]{Brauer}, also employed with success in connection with the \textit{Nonnegative Inverse Eigenvalue Problem} (NIEP) and the \textit{Nonnegative Inverse Elementary Divisors Problem} (NIEDP), or nonnegative inverse universal realizability problem.
A number of different versions of Rado's Theorem have been obtained in \cite{manza,Soto2, Soto4}. In particular, in \cite{Soto2} the authors introduce a symmetric version of Theorem \ref{Rado}.
In this paper, we develop a Hamiltonian version of Rado's Theorem, which allows us to modify $r$ eigenvalues, $r<2n$, of a Hamiltonian matrix by preserving its Hamiltonian structure.
We shall say that $\Lambda=\{\lambda_1,\lambda_2,\ldots, \lambda_{2n}\}$ is $\mathcal{H}$-realizable if there exists a $2n\times 2n$ real Hamiltonian matrix with spectrum $\Lambda$. It is customary to use the notation $\sigma(A)$ to represent the spectrum of a matrix $A$.
The following properties of Hamiltonian matrices are well known \cite{Benner4}.
\begin{proposition}\cite{Benner4}\label{intro1}
The following are equivalent:
\begin{description}
\item[a)] $A$ is a Hamiltonian matrix.
\item[b)] $A=JS$, where $S=S^T$.
\item[c)] $(JA)^T=JA$.
\item[d)] $A=\left(\begin{array}{cc}
C & G \\
F & -C^T
\end{array}\right)$, where $G=G^T$ and $F=F^T$.
\end{description}
\end{proposition}
Let $\mathcal{H}^n=\{A\, :\, A^TJ+JA=0\}$. It is clear that if $A,B\in\mathcal{H}^n$ and $\alpha\in \mathbb{R}$, then $A+\alpha B\in \mathcal{H}^n$.
\begin{proposition}\cite{Benner4}\label{intro2}
Let $A\in \mathcal{H}^n$ and let $p_A(x)$ be the characteristic polynomial of $A$. Then:
\begin{description}
\item[a)] $p_A(x)=p_A(-x)$.
\item[b)] If $p_A(c)=0$, then $p_A(-c)=p_A(\overline{c})=p_A(-\overline{c})=0$.
\end{description}
\end{proposition}
The paper is organized as follows: In Section 2, we show how to construct Hamiltonian matrices with prescribed spectrum. In Section 3, we introduce a Hamiltonian version of Rado's Theorem and, based on it, we modify $r$ eigenvalues, $r<2n$, of a Hamiltonian matrix while preserving its Hamiltonian structure. Finally, in Section 4, we discuss an application to a linear continuous-time system. Throughout the paper some illustrative examples are presented.
\section{Hamiltonian matrices with prescribed spectrum}
We start this section with some results and criteria related to the Hamiltonian inverse eigenvalue problem.
\begin{theorem}\label{blockteo}
Let $M=\left(\begin{array}{cc}
A_{11} & A_{12} \\
A_{12} & A_{11}
\end{array}\right)$, where $A_{ij}$ are $n\times n$ matrices. Then $\sigma(M)=\sigma(A_{11}+A_{12})\cup\sigma(A_{11}-A_{12})$.
\end{theorem}
\begin{proof}
Let $P=\left(\begin{array}{cc}
I_n & -I_n \\
I_n & I_n
\end{array}\right)$, then $P^{-1}=\frac{1}{2}\left(\begin{array}{cc}
I_n & I_n \\
-I_n & I_n
\end{array}\right)$. It is easy to see that
$$
P^{-1}MP=\left(\begin{array}{cc}
A_{11}+A_{12}& 0\\
0& A_{11}-A_{12}
\end{array}\right)
$$
Hence, $\sigma(M)=\sigma(A_{11}+A_{12})\cup\sigma(A_{11}-A_{12})$.
\end{proof}
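A quick numerical illustration of Theorem \ref{blockteo} (a Python/NumPy sketch, not part of the proof; the random blocks are arbitrary):

```python
import numpy as np

# Check sigma(M) = sigma(A11 + A12) U sigma(A11 - A12) on random blocks.
rng = np.random.default_rng(1)
n = 4
A11 = rng.standard_normal((n, n))
A12 = rng.standard_normal((n, n))
M = np.block([[A11, A12], [A12, A11]])

eigs_M = np.linalg.eigvals(M)
expected = np.concatenate([np.linalg.eigvals(A11 + A12),
                           np.linalg.eigvals(A11 - A12)])
# compare as multisets: every expected eigenvalue appears among eigs_M
for z in expected:
    assert np.min(np.abs(eigs_M - z)) < 1e-8
assert len(eigs_M) == len(expected)
```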
\vspace{0.3cm}
A matrix $A$ is called anti-symmetric if $A^T=-A$.
\begin{corollary}\label{cororeal}
Let $\Lambda=\{\lambda_1,\ldots,\lambda_n\}$ be the spectrum of an $n\times n$ real matrix $A$. Then $\Lambda\cup -\Lambda$ is $\mathcal{H}$-realizable.
\end{corollary}
\begin{proof}
We write $A$ as $A=\frac{1}{2}(A+ A^T)+\frac{1}{2}(A-A^T)$. Then for $A_{11}=\frac{1}{2}(A-A^T)$ and $A_{12}=\frac{1}{2}(A+ A^T)$, we have from Theorem \ref{blockteo} that
$$
H=\left(\begin{array}{cc}
A_{11} & A_{12} \\
A_{12} & A_{11}
\end{array}\right)
$$
is a Hamiltonian matrix with spectrum $\Lambda\cup -\Lambda$.
\end{proof}
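The construction of Corollary \ref{cororeal} can be tested numerically; a hedged Python/NumPy sketch (random $A$, illustrative only):

```python
import numpy as np

# Split a random A into anti-symmetric (A11) and symmetric (A12) parts and
# form H = [[A11, A12], [A12, A11]]; then sigma(H) = sigma(A) U sigma(-A).
rng = np.random.default_rng(2)
n = 3
A = rng.standard_normal((n, n))
A11 = (A - A.T) / 2
A12 = (A + A.T) / 2
H = np.block([[A11, A12], [A12, A11]])

J = np.block([[np.zeros((n, n)), np.eye(n)], [-np.eye(n), np.zeros((n, n))]])
assert np.allclose(H.T @ J + J @ H, 0)   # H is Hamiltonian

eigs_H = np.linalg.eigvals(H)
lam = np.linalg.eigvals(A)
for z in np.concatenate([lam, -lam]):    # Lambda U (-Lambda)
    assert np.min(np.abs(eigs_H - z)) < 1e-8
```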
\begin{remark}
Let $H$ be a Hamiltonian matrix with $\sigma(H)\subset \mathbb{R}$. Then it follows from Proposition \ref{intro2} that $\sigma(H)=\Lambda\cup -\Lambda$, where $\Lambda\subset\mathbb{R}$. Conversely, from Corollary \ref{cororeal} we have that every list of the form $\Lambda\cup -\Lambda$, where $\Lambda\subset\mathbb{R}$, is the spectrum of a Hamiltonian matrix.
\end{remark}
\begin{theorem}\label{blocks}
Let $\{\Gamma_k\}_{k=1}^n$ be a family of $\mathcal{H}$-realizable lists. Then $\Gamma=\cup_{k=1}^{n}\Gamma_k$ is also $\mathcal{H}$-realizable.
\end{theorem}
\begin{proof}
Let $H_k$ be a Hamiltonian matrix with spectrum $\Gamma_k$, $k=1,2,\ldots,n$. Then, from Proposition \ref{intro1},
$$
H_k=\left(\begin{array}{cc}
A_k & E_k \\
F_k & -A_k^T\\
\end{array}\right),
$$
where $E_k=E_k^T$ and $F_k=F_k^T$. Thus, for $M=diag\{H_1, H_2, \ldots,H_n\}$, $\sigma(M)=\cup_{k=1}^{n}\Gamma_k$, and by suitable permutations of rows and columns we have
$$
P\left(\begin{array}{ccccccc}
A_1 & E_1 & & & & & \\
F_1 & -A_1^T & & & & & \\
& & A_2 & E_2 & & & \\
& & F_2 & -A_2^T& & & \\
& & & &\ddots & & \\
& & & & & A_n& E_n \\
& & & & & F_n&-A_n^T\\
\end{array}\right)P= \left(\begin{array}{cc}
A & E \\
F & -A^T\\
\end{array}\right),
$$
where $A=diag\{A_1,A_2,\ldots,A_n\}$, $E=diag\{E_1,E_2,\ldots,E_n\}$ and\newline $F=diag\{F_1,F_2,\ldots,F_n\}$. So it is clear that $E=E^T$ and $F=F^T$.
Hence, $H=\left(\begin{array}{cc}
A & E \\
F & -A^T\\
\end{array}\right)$ is a Hamiltonian matrix such that $\sigma(H)=\sigma(M)=\cup_{k=1}^{n}\Gamma_k$.
\end{proof}
\begin{corollary}
Let $\Gamma=\{\pm ib_1, \pm ib_2,\ldots, \pm ib_n\}$, $b_i>0$, be a list of complex numbers. Then $\Gamma$ is $\mathcal{H}$-realizable.
\end{corollary}
\begin{proof}
The Hamiltonian matrices $B_k=\left(\begin{array}{cc}
0 & b_k \\
-b_k & 0\\
\end{array}\right)$ have spectrum $\{ib_k,-ib_k\}$ and satisfy $B_k^TJ+JB_k=0$. Hence, from Theorem \ref{blocks}, there is a $2n\times 2n$ Hamiltonian matrix with spectrum $\Gamma$.
\end{proof}
\vspace{0.3cm}
In the case of general lists of complex numbers, the smallest $\mathcal{H}$-realizable list must be of the form
$$
\Lambda(a,b)=\{a+ib,a-ib,-a-ib,-a+ib\},
$$
which is the spectrum of the Hamiltonian matrix
$$
\left(\begin{array}{cccc}
a & b & 0 & 0\\
-b & a & 0 & 0\\
0 & 0 & -a & b\\
0 & 0 & -b & -a\\
\end{array}\right)
$$
Then, from Theorem \ref{blocks}, the list of complex numbers $\cup_{k=1}^n\Lambda(a_k,b_k)$ is $\mathcal{H}$-realizable. Conversely, every $\mathcal{H}$-realizable list $\Lambda$ of non-real complex numbers is of the form $\cup_{k=1}^n\Lambda(a_k,b_k)$.
Then, a list of the form
$$
(\Lambda\cup-\Lambda)\cup\bigcup_{k=1}^n\Lambda(a_k,b_k),
$$
where $\Lambda\subset \mathbb{R}$ has $2n$ elements, is $\mathcal{H}$-realizable.
\begin{example}
We consider $\Lambda=\{1\pm i,-1\pm i,1\pm 2i,-1\pm 2i\}$, that is, $\Lambda=\Lambda(1,1)\cup\Lambda(1,2)$. Then we can construct the following Hamiltonian matrix with the desired spectrum.
$$
A=\left(\begin{array}{cccccccc}
1 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\
- 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 1 & 2 & 0 & 0 & 0 & 0\\
0 & 0 & -2 & 1 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & -1& 1 & 0 & 0\\
0 & 0 & 0 & 0 & -1& -1& 0 & 0\\
0 & 0 & 0 & 0 & 0& 0& -1& 2\\
0 & 0 & 0 & 0 & 0& 0& -2& -1\\
\end{array}\right)
$$
Furthermore, it's clear that:
$$
\left(
\begin{array}{cccccccccccccccc}
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -\frac{1}{2} & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & -1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & -2 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & -\frac{1}{2} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & -1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 2 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -2 & -1
\end{array}
\right)
$$
has eigenvalues $\{-\frac{1}{2},2,\frac{1}{2},-2,-1+i,-1-i,1+i,1-i,-1+2i,-1-2i,1+2i,1-2i,1,-1,1,-1\}$.
\end{example}
\begin{remark}
Observe that the above construction requires the same number of real elements as the number of complex elements.
\end{remark}
For the general case we have the following result:
\begin{theorem}
$\Lambda\subset \mathbb{C}$ is $\mathcal{H}$-realizable if and only if
$$
\Lambda=\bigcup_{k=1}^{p}(\Lambda_k\cup -\Lambda_k)\cup\bigcup_{t=p+1}^{n}(\Lambda_t\cup -\Lambda_t),
$$
where $\Lambda_k\subset \mathbb{R}$ and the $\Lambda_t$ are complex lists such that $\overline{\Lambda_t}=\Lambda_t$.
\end{theorem}
\begin{proof}
The first implication follows immediately from Proposition \ref{intro2}. Conversely, let $\Lambda=\bigcup_{k=1}^{p}(\Lambda_k\cup -\Lambda_k)\cup\bigcup_{t=p+1}^{n}(\Lambda_t\cup -\Lambda_t)$. Each $\Lambda_k$ is realizable by some real matrix $A_k$ and, without loss of generality, we may assume that each $\Lambda_t$ is realizable by a real matrix $B_t$ (we may take $\Lambda_t=\{a_t+ib_t,a_t-ib_t\}$ if necessary). We define the matrix
$$
H=\left(\begin{array}{cccc}
A & 0 & & \\
0 & B & & \\
& & -A^T & 0 \\
& & 0& -B^T \\
\end{array}\right)
$$
where $A=diag\{A_1\ldots,A_p\}$ and $B=diag\{B_{p+1},\ldots,B_n\}$, which is a Hamiltonian matrix with spectrum $\Lambda$.
\end{proof}
\section{Perturbations results}
In this section we prove a Hamiltonian version of Theorem \ref{Rado}. Just as the superscript $T$ in $A^T$ denotes the transpose of $A$, we define the superscript $\mathcal{H}$ in $A^\mathcal{H}$ in the following way:
$$
A^\mathcal{H}=JA^TJ,
$$
and $A^\mathcal{H}$ will be called the Hamiltonian transpose or $\mathcal{H}$-transpose.
\begin{remark}
Since, $J=\left(\begin{array}{cc}
0 & I_n \\
-I_{n} & 0
\end{array}\right)$, the above definition implies that $A$ is a Hamiltonian matrix if and only if $A^\mathcal{H}=A$. However, if the matrix $A$ is of order $2m\times 2n$, then
$$
A^\mathcal{H}=J_n A^T J_m,
$$
where $J_n=\left(\begin{array}{cc}
0 & I_n \\
-I_{n} & 0
\end{array}\right)$ and $J_m=\left(\begin{array}{cc}
0 & I_m \\
-I_{m} & 0
\end{array}\right)$.
\end{remark}
The following properties are straightforward:
$$
J^T=-J,\,\,\,\, J^\mathcal{H}=J,\,\,\,\, J^2=-I
$$
Let $A$ and $B$ be matrices of order $2n\times 2n$. Then it is easy to verify the following properties:
\begin{description}
\item[1)] $(AB)^\mathcal{H}=-B^\mathcal{H}A^\mathcal{H}.$
\item[2)] $(A+B)^\mathcal{H}=A^\mathcal{H}+B^\mathcal{H}.$
\item[3)] $(\alpha A)^\mathcal{H}=\alpha A^\mathcal{H}.$
\item[4)] $(A^T)^\mathcal{H}=(A^\mathcal{H})^T.$
\item[5)] $(A^\mathcal{H})^\mathcal{H}=A.$
\end{description}
\begin{lemma}\label{propied}
If $A$ and $B$ are Hamiltonian matrices of the same order, then $i)$ $\alpha A+\beta B$, with $\alpha,\beta \in \mathbb{R}$, $ii)$ $A^{-1}$ if $A^{-1}$ exists, $iii)$ $A^T$, $iv)$ $A^\mathcal{H}$ and $v)$ $JAJ$, are Hamiltonian matrices.
\end{lemma}
\begin{proof} Since $A$ and $B$ are Hamiltonian matrices, then $A^\mathcal{H}=A$ and $B=B^\mathcal{H}$. Therefore
\begin{description}
\item[i)] $(\alpha A+\beta B)^\mathcal{H}=J(\alpha A+\beta B)^T J=\alpha A^\mathcal{H}+\beta B^\mathcal{H}=\alpha A+\beta B$.
\item[ii)] $(A^{-1})^\mathcal{H}=J(A^{-1})^TJ=(J^{-1}A^TJ^{-1})^{-1}=(JA^TJ)^{-1}=(A^\mathcal{H})^{-1}=A^{-1}$.
\item[iii)] $(A^T)^\mathcal{H}=(A^\mathcal{H})^T=(A)^T$.
\item[iv)] $(A^\mathcal{H})^\mathcal{H}=A=A^\mathcal{H}$.
\item[v)] $(JAJ)^\mathcal{H}=J^\mathcal{H}A^\mathcal{H}J^\mathcal{H}=JAJ$.
\end{description}
\end{proof}
\begin{lemma}\label{lemma}
Let $C$ be a Hamiltonian matrix of order $r\times r$ and let $X$ be a matrix of order $n\times r$, with $r$ and $n$ even. Then the matrix $XCX^\mathcal{H}$ is a Hamiltonian matrix.
\end{lemma}
\begin{proof} Since $X^\mathcal{H}=J_r X^T J_n$, then
\begin{eqnarray*}
(XCX^\mathcal{H})^T&=&(XCJ_rX^TJ_n)^T=J_n^TXJ_r^TC^TX^T \\
&=& -J_nX(-J_r)C^TX^T=-J_nXJ_rC^TJ_rJ_rX^T \\
&=& -J_nXCJ_rX^T
\end{eqnarray*}
and
\begin{eqnarray*}
(XCX^\mathcal{H})^\mathcal{H}&=&J_n(XCX^\mathcal{H})^TJ_n=J_n(-J_nXCJ_rX^T)J_n \\
&=& XCJ_rX^TJ_n=XCX^\mathcal{H}
\end{eqnarray*}
and therefore $XCX^\mathcal{H}$ is a Hamiltonian matrix.
\end{proof}
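Lemma \ref{lemma} holds for an arbitrary $X$; the following Python/NumPy sketch (illustrative, with an arbitrary $6\times 2$ matrix $X$ and a $2\times 2$ Hamiltonian $C$ chosen for the example) checks it:

```python
import numpy as np

def Jmat(m):
    """The 2m x 2m matrix J = [[0, I], [-I, 0]]."""
    return np.block([[np.zeros((m, m)), np.eye(m)],
                     [-np.eye(m), np.zeros((m, m))]])

rng = np.random.default_rng(3)
X = rng.standard_normal((6, 2))           # arbitrary 6x2 matrix
C = np.array([[1.0, 2.0], [3.0, -1.0]])   # 2x2 Hamiltonian: [[a, b], [c, -a]]

XH = Jmat(1) @ X.T @ Jmat(3)              # Hamiltonian transpose J_r X^T J_n
P = X @ C @ XH
assert np.allclose(P.T @ Jmat(3) + Jmat(3) @ P, 0)   # P is Hamiltonian
```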
\vspace{0.3cm}
The following result gives a Hamiltonian version of Rado's result.
\begin{theorem}\label{RadoH}
Let $A$ be an $n\times n$ Hamiltonian matrix with spectrum $\Lambda=\{\lambda_1,\lambda_2,\ldots,\lambda_n\}$, and for some $r < n$, let $\{x_1,x_2,\ldots,x_r\}$ be a set of eigenvectors of $A$ corresponding to $\lambda_1,\ldots,\lambda_r$, respectively. Let $X$ be the $n\times r$ matrix with $i$-th column $x_i$ and $rank(X)=r$. Let $\Omega=diag\{\lambda_1,\ldots,\lambda_r\}$, and let $C$ be an $r\times r$ Hamiltonian matrix. Then, the matrix $A+XCX^\mathcal{H}$ is Hamiltonian with eigenvalues $\{\mu_1,\mu_2,\ldots,\mu_r,\lambda_{r+1},\ldots,\lambda_n\}$, where $\mu_1,\mu_2,\ldots,\mu_r$ are eigenvalues of $B=\Omega+CX^\mathcal{H}X$.
\end{theorem}
\begin{proof}
Let $S=[X|Y]$ be a nonsingular matrix with $S^{-1}=[\frac{U}{V}]$. Then $UX=I_r$, $VY=I_{n-r}$, $VX=0$, $UY=0$. Moreover, since $AX=X\Omega$, we have
\begin{eqnarray*}
S^{-1}AS&=&\left[\frac{U}{V}\right]A[X|Y]=
\left(\begin{array}{cc}
\Omega & UAY \\
0 & VAY
\end{array}\right) \\
S^{-1}XCX^\mathcal{H}S&=& \left(\begin{array}{cc}
CX^\mathcal{H}X & CX^\mathcal{H}Y \\
0 & 0
\end{array}\right)
\end{eqnarray*}
Therefore,
$$
S^{-1}(A+XCX^\mathcal{H})S=
\left(\begin{array}{cc}
\Omega+CX^\mathcal{H}X & UAY+ CX^\mathcal{H}Y \\
0 & VAY
\end{array}\right).
$$
Hence, the spectrum of $A+XCX^\mathcal{H}$ is the union of the spectra of $\,\Omega+CX^\mathcal{H}X$ and $VAY$. That is,
$\{\mu_1,\mu_2,\ldots,\mu_r,\lambda_{r+1},\ldots,\lambda_n\}$. Finally, from Lemma \ref{lemma}, $A+XCX^\mathcal{H}$ is a Hamiltonian matrix.
\end{proof}
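The statement of Theorem \ref{RadoH} can be illustrated numerically. The sketch below (Python/NumPy, not part of the proof) uses a diagonal Hamiltonian matrix, whose eigenvectors are canonical vectors, perturbs the eigenvalue pair $\pm 1$, and checks the resulting spectrum; all concrete matrices are assumptions made for the illustration:

```python
import numpy as np

def Jmat(m):
    return np.block([[np.zeros((m, m)), np.eye(m)],
                     [-np.eye(m), np.zeros((m, m))]])

# Diagonal Hamiltonian matrix with eigenvalues {±1, ±2, ±3}.
d = np.array([1.0, 2.0, 3.0])
A = np.diag(np.concatenate([d, -d]))

X = np.eye(6)[:, [0, 3]]                  # eigenvectors for lambda = 1 and -1
Omega = np.diag([1.0, -1.0])
C = np.array([[0.0, 2.0], [-2.0, 0.0]])   # 2x2 Hamiltonian perturbation

XH = Jmat(1) @ X.T @ Jmat(3)
B = Omega + C @ XH @ X
perturbed = np.linalg.eigvals(A + X @ C @ XH)

# ±1 are replaced by sigma(B); the eigenvalues ±2, ±3 are untouched.
expected = np.concatenate([np.linalg.eigvals(B), [2.0, 3.0, -2.0, -3.0]])
for z in expected:
    assert np.min(np.abs(perturbed - z)) < 1e-8
```

Here $B=\Omega+CX^{\mathcal{H}}X$ turns out to have the purely imaginary eigenvalues $\pm i\sqrt{3}$, so the perturbation moves $\pm1$ onto the imaginary axis while preserving the Hamiltonian structure.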
\begin{example} $A=\left(
\begin{array}{cccc}
1 & 2 & 0 & 1 \\
0 & 2 & 1 & 0 \\
1 & 2 & -1 & 0 \\
2 & 0 & -2 & -2
\end{array}
\right)$ is a Hamiltonian matrix with eigenvalues $\{2\sqrt{2},1,-1,-2\sqrt{2}\}$. From Theorem \ref{RadoH} we may change the eigenvalues $\pm2\sqrt{2}$ of $A$. Then, for
$$
X=\left(
\begin{array}{cc}
4-3\sqrt{2} & 3\sqrt{2}+4 \\
\frac{7}{2}-\frac{5}{2}\sqrt{2} & \frac{5}{2}\sqrt{2}+\frac{7}{2} \\
3-2\sqrt{2} & 2\sqrt{2}+3 \\
1 & 1
\end{array}
\right), \,\,\,\,\, C=
\left(
\begin{array}{cc}
1& 2\\
2 & -1\\
\end{array}
\right),
$$
the eigenvalues of the Hamiltonian matrix $A+XCX^\mathcal{H}$ are $\{\sqrt{442}, 1, -1,-\sqrt{442}\}$.
\end{example}
\section{Applications}
Many Hamiltonian eigenvalue problems arise in applications, particularly in systems and control theory. The properties of Hamiltonian systems, such as conservation of energy or of volume in phase space, lead to specific dynamical features. The state equations most frequently used to describe the behaviour of a system form a \emph{linear continuous-time system} with constant coefficients, described by the set of matrix differential and algebraic equations
\begin{equation}\label{EQ}
\begin{split}
\dot{x}(t) &= Ax(t)+Bu(t), \,\,\,\, x(0)=x_0 \\
y(t) &= Cx(t)+Du(t), \\
\end{split}
\end{equation}
where $x(t)\in \mathbb{R}^n$ is the \emph{state} vector, $u(t)\in \mathbb{R}^m$ is the vector of \emph{inputs}, and $y(t)\in \mathbb{R}^r$ is the vector of \emph{outputs} at time $t\in [0,\infty)$; $A$, $B$, $C$ and $D$ are real matrices of appropriate size.
The system above is called \emph{stable} if all eigenvalues of $A$ lie in the open left half-plane.
A bisection method for measuring the \emph{stability radius} of the system in (\ref{EQ}),
$$
\gamma(A)=\min\{||E||_2\,\, : \,\, \sigma(A+E)\cap i\mathbb{R}\neq \emptyset\},
$$
where $E$ ranges over real matrices of the same order as $A$, can be based on the following theorem:
\begin{theorem}[\cite{Byers1}]
Let $A$ be an $n\times n$ real matrix. If $\alpha\geq 0$, then the Hamiltonian matrix
$$
H(\alpha)=\left(\begin{array}{cc}
A & -\alpha I_n \\
\alpha I_n & -A^T\\
\end{array}\right)
$$
has an eigenvalue on the imaginary axis if and only if $\alpha\geq \gamma(A)$.
\end{theorem}
To decide whether $H(\alpha)$ has at least one eigenvalue on the imaginary axis is crucial for the success of the bisection method.
In the following results, we shall consider real matrices $A$ such that $A^TA = AA^T$, that is, normal (unitarily diagonalizable) matrices, such as circulant matrices, symmetric matrices, etc.
\begin{theorem}\label{aply1}
Let $A$ be an $n\times n$ real matrix with $A^TA=AA^T$ and eigenvalues $\{\lambda_1,\ldots,\lambda_n\}$. Then
$$
H(\alpha)=\left(\begin{array}{cc}
A & -\alpha I_n \\
\alpha I_n & -A^T\\
\end{array}\right)
$$
has eigenvalues $\{\pm\sqrt{\lambda_1^2-\alpha^2},\ldots, \pm\sqrt{\lambda_n^2-\alpha^2}\}$. Therefore, $H(\alpha)$ has all its eigenvalues on the imaginary axis if and only if $|\lambda_k|<\alpha$, for all $k=1,\ldots,n$.
\end{theorem}
\begin{proof}
Let $U$ be a unitary matrix such that $U^\ast AU=D=diag\{\lambda_1,\ldots,\lambda_n\}$, where $U^\ast$ denotes the conjugate transpose of $U$. Then $\overline{U}^\ast(- A^T)\overline{U}=-D$. If we define $P=\left(\begin{array}{cc}
U & 0 \\
0 & \overline{U}\\
\end{array}\right)$, it is easy to verify that
$$
P^\ast H(\alpha)P=\left(\begin{array}{cc}
D & -\alpha I_n \\
\alpha I_n & -D\\
\end{array}\right):=B.
$$
Now we find a basis of eigenvectors of $B$. Define $\beta_k=\frac{1}{\alpha}(\lambda_k\pm\sqrt{\lambda_k^2-\alpha^2})$ and $v_k=\left[\frac{\beta_ke_k}{e_k}\right]$, where $e_k$ is the $k$-th canonical vector.
\begin{eqnarray*}
Bv_k&=&\left(\begin{array}{cc}
D & -\alpha I_n \\
\alpha I_n & -D\\
\end{array}\right)\left(\begin{array}{c}
\beta_ke_k \\
e_k\\
\end{array}\right)\\
&=& \left(\begin{array}{c}
\beta_kDe_k-\alpha e_k \\
\beta_k\alpha e_k-De_k\\
\end{array}\right)=
\left(\begin{array}{c}
\beta_k \lambda_ke_k-\alpha e_k \\
\beta_k\alpha e_k-\lambda_ke_k\\
\end{array}\right)\\
&=& \left(\begin{array}{c}
(\beta_k \lambda_k-\alpha )e_k \\
(\beta_k\alpha -\lambda_k)e_k\\
\end{array}\right)=\left(\begin{array}{c}
\pm\sqrt{\lambda_k^2-\alpha^2}\,\beta_k\, e_k \\
\pm\sqrt{\lambda_k^2-\alpha^2}\,e_k\\
\end{array}\right)\\
&=& \pm\sqrt{\lambda_k^2-\alpha^2}\,v_k
\end{eqnarray*}
Hence, $\{\pm\sqrt{\lambda_1^2-\alpha^2},\ldots, \pm\sqrt{\lambda_n^2-\alpha^2}\}$ are the eigenvalues of $H(\alpha)$. Moreover, the algebraic multiplicity of $\lambda_k$ equals the algebraic multiplicities of $-\sqrt{\lambda_k^2-\alpha^2}$ and $\sqrt{\lambda_k^2-\alpha^2}$; besides, since $A$ and $-A^T$ are diagonalizable, so is $H(\alpha)$. Finally, it is clear that $H(\alpha)$ has all its eigenvalues on the imaginary axis if and only if $|\lambda_k|<\alpha$, for all $k=1,\ldots,n$.
\end{proof}
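Theorem \ref{aply1} can be checked numerically for a symmetric (hence normal) matrix; an illustrative Python/NumPy sketch with an arbitrary $A$ and $\alpha$:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 3
A = rng.standard_normal((n, n)); A = (A + A.T) / 2   # symmetric: A^T A = A A^T
alpha = 0.7
H = np.block([[A, -alpha * np.eye(n)], [alpha * np.eye(n), -A.T]])

lam = np.linalg.eigvalsh(A)
roots = np.sqrt((lam**2 - alpha**2).astype(complex))
expected = np.concatenate([roots, -roots])   # ±sqrt(lambda_k^2 - alpha^2)

eigs_H = np.linalg.eigvals(H)
for z in expected:
    assert np.min(np.abs(eigs_H - z)) < 1e-8
```

Eigenvalues $\lambda_k$ with $|\lambda_k|<\alpha$ produce purely imaginary entries in `expected`, matching the criterion of the theorem.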
\vspace{0.3cm}
As an immediate consequence of the previous result we have:
\begin{corollary}
Let $A$ be a matrix with eigenvalues $\{\lambda_1,\ldots,\lambda_n\}$. Then
$\gamma(A)\leq \alpha$ if and only if $|\lambda_k|<\alpha$ for some $k=1,2,\ldots,n$.
\end{corollary}
\begin{example}
Let $A=circ(0,1,0,1)$ be a circulant matrix with eigenvalues $\{2,0,0,-2\}$. Then
$$
H(\alpha)=\left(\begin{array}{cccccccc}
0 & 1 & 0 & 1 & -\alpha & 0 & 0 & 0 \\
1 & 0 & 1 & 0 & 0 & -\alpha & 0 & 0 \\
0 & 1 & 0 & 1 & 0 & 0 & -\alpha & 0 \\
1 & 0 & 1 & 0 & 0 & 0 & 0 & -\alpha \\
\alpha & 0 & 0 & 0 & 0 & -1 & 0 & -1 \\
0 & \alpha & 0 & 0 & -1 & 0 & -1 & 0 \\
0 & 0 & \alpha & 0 & 0 & -1 & 0 & -1 \\
0 & 0 & 0 & \alpha & -1 & 0 & -1 &
\end{array}\right)
$$
has eigenvalues $\{\pm i\alpha ,\pm i\alpha,\pm \sqrt{4-\alpha^2},\pm\sqrt{4-\alpha^2}\}$. Thus, $H(\alpha)$ has all its eigenvalues on the imaginary axis if and only if $\alpha\geq 2$. Note that $H(\alpha)$ has the eigenvalues $\pm i\alpha$ on the imaginary axis for every $\alpha>0$, so that $\gamma(A)\leq \alpha$ for every $\alpha>0$; this is consistent with $0\in\sigma(A)$.
\end{example}
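The circulant example can be verified numerically for a concrete value, say $\alpha=1$ (an illustrative sketch; the value of $\alpha$ is an assumption made here):

```python
import numpy as np

c = np.array([0.0, 1.0, 0.0, 1.0])
A = np.array([np.roll(c, k) for k in range(4)])   # A = circ(0, 1, 0, 1)
alpha = 1.0
H = np.block([[A, -alpha * np.eye(4)], [alpha * np.eye(4), -A.T]])

s = np.sqrt(4 - alpha**2)
# ±i·alpha (twice) and ±sqrt(4 - alpha^2) (twice)
expected = [1j, -1j, 1j, -1j, s, -s, s, -s]
eigs_H = np.linalg.eigvals(H)
for z in expected:
    assert np.min(np.abs(eigs_H - z)) < 1e-8
```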
\begin{theorem}
Let $A$ be an $n\times n$ real matrix with spectrum $\{\lambda_1,\ldots,\lambda_n\}$, and let
$D=diag\{d_1,d_2,\ldots,d_n\}$, $d_i\neq 0$ $i=1,2,\ldots,n$. Then, the matrix
$$
H=\left(\begin{array}{cc}
A & -D \\
D & -A^T\\
\end{array}\right)
$$
has eigenvalues $\{\pm\sqrt{\lambda_1^2-d_1^2},\ldots, \pm\sqrt{\lambda_n^2-d_n^2}\}$, with corresponding eigenvectors
$$
X_k=\left[\frac{\beta_ke_k}{e_k}\right], \quad \text{where} \quad \beta_k=\frac{1}{d_k}\left(\lambda_k\pm\sqrt{\lambda_k^2-d_k^2}\right),
$$
$k=1,\ldots,n$.
\end{theorem}
\begin{proof}
Following the same argument as in the proof of Theorem \ref{aply1}, the result follows.
\end{proof}
\vspace{0.3cm}
Now, we consider the case in which $H(\alpha)$ has only real eigenvalues, and we shall find bounds for $\gamma(A)$. To do this, we shall use Theorem \ref{RadoH} and the following result:
\begin{theorem}[\cite{Byers1}]
Let $\alpha\geq 0$, and let $E$ be a Hamiltonian matrix. Let $K(\alpha)=H(\alpha)+E$. If $K(\alpha)$ has an eigenvalue with zero real part, then $\gamma (A)\leq \alpha +2||E||$.
\end{theorem}
\begin{theorem}\label{Hcond1}
Let $A$ be an $n\times n$ real matrix with $A^TA=AA^T$ and let $\alpha\geq 0$. If $H(\alpha)$ has only real eigenvalues, then there is a Hamiltonian matrix $K(\alpha)=H(\alpha)+E$ such that $K(\alpha)$ has an eigenvalue on the imaginary axis.
\end{theorem}
\begin{proof}
Since all the eigenvalues of $H(\alpha)$ are real, $\beta^+=\frac{1}{\alpha}(\lambda_1+\sqrt{\lambda_1^2-\alpha^2})$ and $\beta^-=\frac{1}{\alpha}(\lambda_1-\sqrt{\lambda_1^2-\alpha^2})$ are real numbers. Let
$$
X=\left(
\begin{array}{cc}
\beta^+e_1 & \beta^-e_1 \\
e_1 & e_1 \\
\end{array}
\right)
$$
be the matrix whose columns are eigenvectors of $H(\alpha)$ corresponding to the eigenvalues $\sqrt{\lambda_1^2-\alpha^2}$ and $-\sqrt{\lambda_1^2-\alpha^2}$, respectively (see Theorem \ref{aply1}). We consider the perturbed matrix $H(\alpha)+XCX^{\mathcal{H}}$, where $C$ is a $2\times2$ Hamiltonian matrix.
\vspace{0.3cm}
From the Theorem \ref{RadoH}, $H(\alpha)+XCX^{\mathcal{H}}$ has eigenvalues
$$
\{\mu_1,\mu_2, \pm\sqrt{\lambda_2^2-\alpha^2},\ldots,\pm\sqrt{\lambda_n^2-\alpha^2}\},
$$
where $\{\mu_1,\mu_2\}=\sigma(\Omega+CX^{\mathcal{H}}X)$, with $\Omega=diag\{\sqrt{\lambda_1^2-\alpha^2},-\sqrt{\lambda_1^2-\alpha^2}\}$. It is easy to verify that, if $C=\left(\begin{array}{cc}
a & b \\
c & -a \\
\end{array}
\right)$, then
\begin{eqnarray*}
CX^{\mathcal{H}}X&=&\left(
\begin{array}{cc}
a & b \\
c & -a \\
\end{array}
\right)
\left(
\begin{array}{cc}
(\beta^--\beta^+) & 0 \\
0 & (\beta^--\beta^+) \\
\end{array}
\right)\\
&=&-\frac{2}{\alpha}\sqrt{\lambda_1^2-\alpha^2}
\left(
\begin{array}{cc}
a & b \\
c & -a \\
\end{array}
\right).
\end{eqnarray*}
Therefore, writing $s=\sqrt{\lambda_1^2-\alpha^2}$ and $t=-\frac{2}{\alpha}s$, we obtain
$$
\Omega+CX^{\mathcal{H}}X=\left(\begin{array}{cc}
s+ta & tb \\
tc & -(s+ta) \\
\end{array}\right),
$$
whose eigenvalues are $\mu_{1,2}=\pm\sqrt{(s+ta)^2+t^2bc}$. So it is enough to take real numbers $a,b,c$ such that $(s+ta)^2+t^2bc<0$; for instance, $a=0$ and $bc<-\frac{\alpha^2}{4}$ when $s\neq 0$.
\end{proof}
\begin{example}
Let
$$A=\left(
\begin{array}{cccccc}
-1 & 0 & 0 & -\frac{1}{3} & 0 & 0 \\
0 & -1 & 0 & 0 & -\frac{1}{3} & 0 \\
0 & 0 & 2 & 0 & 0 & -\frac{1}{3} \\
\frac{1}{3} & 0 & 0 & 1 & 0 & 0 \\
0 & \frac{1}{3} & 0 & 0 & 1 & 0 \\
0 & 0 & \frac{1}{3} & 0 & 0 & -2
\end{array}
\right),
$$
be a Hamiltonian matrix with eigenvalues $\{\frac{2}{3}\sqrt{2},-\frac{2}{3}\sqrt{2},\frac{2}{3}\sqrt{2},-\frac{2}{3}\sqrt{2}, \frac{1}{3}\sqrt{35},-\frac{1}{3}\sqrt{35}\}$. Let
$$X=\left(
\begin{array}{cc}
-2\sqrt{2}-3 & 2\sqrt{2}-3 \\
0 & 0 \\
0 & 0 \\
1 & 1 \\
0 & 0 \\
0 & 0
\end{array}
\right)
$$
be such that $AX=X\Omega$, where $\Omega=\operatorname{diag}\{-\frac{2}{3}\sqrt{2},\frac{2}{3}\sqrt{2}\}$.
Then, from Theorem \ref{RadoH}, $A+XCX^{\mathcal{H}}$ is a Hamiltonian matrix with eigenvalues $\{\frac{2}{3}\sqrt{2},-\frac{2}{3}\sqrt{2},\frac{1}{3}\sqrt{35},-\frac{1}{3}\sqrt{35},\frac{2}{3}i\sqrt{2}\sqrt{23},-\frac{2}{3}i\sqrt{2}\sqrt{23}\}$, where
$
C=\left(\begin{array}{cc}
2 & 2 \\
-2 & -2\\
\end{array}\right)
$.
\end{example}
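The matrix of the example can be verified numerically. The sketch below assumes the block form $H(\alpha)=\left(\begin{smallmatrix} A & -\alpha I \\ \alpha I & -A^T \end{smallmatrix}\right)$ with $A=\operatorname{diag}(-1,-1,2)$ and $\alpha=1/3$, which reproduces the $6\times 6$ matrix above; the spectrum and the eigenvector relation $AX=X\Omega$ follow:

```python
import numpy as np

# Assumed block structure consistent with the 6x6 matrix of the example:
# H(alpha) = [[A, -alpha*I], [alpha*I, -A^T]] with A = diag(-1, -1, 2).
alpha = 1.0/3.0
A3 = np.diag([-1.0, -1.0, 2.0])
H = np.block([[A3, -alpha*np.eye(3)], [alpha*np.eye(3), -A3.T]])

eig = np.sort(np.linalg.eigvals(H).real)
expected = np.sort([ 2*np.sqrt(2)/3, -2*np.sqrt(2)/3,
                     2*np.sqrt(2)/3, -2*np.sqrt(2)/3,
                     np.sqrt(35)/3,  -np.sqrt(35)/3])
print(np.allclose(eig, expected))        # spectrum of the example matrix

# The two columns of X are eigenvectors for the eigenvalues -/+ (2/3)*sqrt(2).
X = np.zeros((6, 2))
X[0] = [-2*np.sqrt(2) - 3, 2*np.sqrt(2) - 3]
X[3] = [1.0, 1.0]
Omega = np.diag([-2*np.sqrt(2)/3, 2*np.sqrt(2)/3])
print(np.allclose(H @ X, X @ Omega))
```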
\begin{lemma}\label{Hlemma}
Let $C$ and $X$ be matrices, which satisfy the hypotheses of Theorem \ref{RadoH}. If $X^TX=I_r$, then $||XCX^\mathcal{H}||_F=||C||_F$.
\end{lemma}
\begin{proof}
Since $X^TX=I_r$, it is easy to see that $X^\mathcal{H}(X^\mathcal{H})^T=I_r$. Then
\begin{eqnarray*}
||XCX^\mathcal{H}||_F&=&\sqrt{\mathrm{tr}[(XCX^\mathcal{H})^T(XCX^\mathcal{H})]}\\
&=&\sqrt{\mathrm{tr}[(X^\mathcal{H})^TC^T(X^TX)CX^\mathcal{H}]}\\
&=&\sqrt{\mathrm{tr}(CX^\mathcal{H}(X^\mathcal{H})^TC^T)}\\
&=&\sqrt{\mathrm{tr}(CC^T)}=\sqrt{\mathrm{tr}(C^TC)}\\
&=&||C||_F.
\end{eqnarray*}
\end{proof}
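A numerical illustration of the lemma in the special case $X^{\mathcal{H}}=X^T$ for a real $X$ with orthonormal columns; this particular choice of the adjoint is an assumption made only for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
# X with orthonormal columns, so X^T X = I_r (here we take X^H = X^T).
X, _ = np.linalg.qr(rng.standard_normal((6, 2)))
C = np.array([[2.0, 2.0], [-2.0, -2.0]])
lhs = np.linalg.norm(X @ C @ X.T, 'fro')   # ||X C X^T||_F
rhs = np.linalg.norm(C, 'fro')             # ||C||_F
print(np.isclose(lhs, rhs))
```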
\begin{corollary}
Let $A$ be an $n\times n$ real matrix and let $\alpha\geq 0$. Then $\gamma(A)\leq \alpha+2||C||$, where $C$ is a Hamiltonian matrix.
\end{corollary}
\begin{proof}
The result is immediate from Theorem \ref{Hcond1} and Lemma \ref{Hlemma}.
\end{proof}
In the case where $A$ is non-diagonalizable with real eigenvalues, it is still possible to perturb the matrix so as to obtain a Hamiltonian matrix of the form $A+E$ having at least one eigenvalue on the imaginary axis. In this direction we have the following result, which is easy to verify.
\begin{theorem}
Let $\lambda$ be an eigenvalue of $H(\alpha)$. Then an associated eigenvector has the form
$$
z=[\alpha x,\,\,\,\,\, (A-\lambda I)x]^T,
$$
where $x$ is a solution of the system
$$
[\alpha^2 I-(A^T+\lambda I)(A-\lambda I)]x=0.
$$
\end{theorem}
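The statement can be exercised numerically. The sketch assumes the hypothetical block form $H(\alpha)=\left(\begin{smallmatrix} A & -\alpha I \\ \alpha I & -A^T \end{smallmatrix}\right)$, which is consistent with the eigenvector structure stated in the theorem; $x$ is recovered as a null vector of the quadratic pencil:

```python
import numpy as np

# Hypothetical block form consistent with the statement:
# H(alpha) = [[A, -alpha*I], [alpha*I, -A^T]].
A = np.diag([2.0, 3.0])
alpha, n = 0.5, 2
H = np.block([[A, -alpha*np.eye(n)], [alpha*np.eye(n), -A.T]])

lam = np.max(np.linalg.eigvals(H).real)   # a real eigenvalue of H(alpha)

# x spans the null space of  alpha^2 I - (A^T + lam I)(A - lam I).
M = alpha**2*np.eye(n) - (A.T + lam*np.eye(n)) @ (A - lam*np.eye(n))
x = np.linalg.svd(M)[2][-1]               # right singular vector of sigma ~ 0

z = np.concatenate([alpha*x, (A - lam*np.eye(n)) @ x])
print(np.allclose(H @ z, lam*z))          # z is an eigenvector for lam
```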
\section{Conclusion}
In the study of systems as in (\ref{EQ}), it would be important to characterize the perturbations under which eigenvalues on the imaginary axis remain on the imaginary axis; this would allow a good estimation of the stability radius. In other words, we propose the following problem:
\begin{problem}
Given a matrix $A$ with purely imaginary eigenvalues, determine the smallest perturbation matrix $E$ such that the matrix $A + E$ has eigenvalues outside the imaginary axis. In this way, we want to determine the set of matrices $A$ for which the small perturbations that move the imaginary eigenvalues away from the imaginary axis lie in a subset of measure zero within the set of real matrices.
\end{problem}
\section{Introduction}
\label{sc:Intro}
Parametric studies involving flows of industrial interest require robust computational fluid dynamics (CFD) solvers and efficient strategies to simulate multiple queries of the same problem.
Finite volume (FV) methods represent the most common approach in industry to perform flow simulations~\cite{LeVeque:02,Toro:09,Sonar-MS:07,Ohlberger-BHO:17,Eymard-EGH:00,RS-SGH:18,RS-VGSH:20,MG-RS-20} and different strategies have been proposed to simulate flows in the turbulent regime~\cite{Deardorff-70,Hughes-HMJ-00,Spalart-TSSS-00}.
A widespread approach is represented by the Reynolds-averaged Navier-Stokes (RANS) equations~\cite{Reynolds-1895} coupled with the one-equation Spalart-Allmaras (SA) turbulence model~\cite{SpalartAllmaras-92}.
This work focuses on such strategy and relies on its cell-centred FV implementation available in OpenFOAM~\cite{OpenFOAM} and validated by the industry.
When the simulation requires testing a large number of different configurations - e.g. for shape optimisation, uncertainty quantification, inverse and control problems - numerical strategies to reduce the cost of the overall computation are critical. Reduced order models (ROM)~\cite{AH-CHRW:17,Gunzburger-PWG-18} construct an approximation of the solution in a lower dimensional space, for which an appropriate basis needs to be devised.
It is known that numerical difficulties arise when reduced basis (RB) and proper orthogonal decomposition (POD) techniques are applied to convection-dominated problems~\cite{Iliescu-IW-13,Pacciarini-PR-14,Iliescu-GIJW-15}.
This is especially critical in the context of flow simulations when the Reynolds number is increased and turbulent phenomena need to be accounted for.
More precisely, the most relevant POD modes are associated with the highest energy scales of the problem under analysis, whereas small scales, which play a critical role in the dissipation of turbulent kinetic energy, are poorly represented by POD-ROM~\cite{Aubry-AHLS-88}.
To remedy this issue, closure models stemming from the traditional description of turbulence have been extended to ROMs, leading to Galerkin projection-based POD with dynamic subgrid-scale~\cite{Iliescu-WABI-12}, variational multiscale~\cite{Iliescu-IW-14,Rozza-SBZR-19} and $k$-$\omega$ SST~\cite{Rozza-LCLR-16} models, and to a certified Smagorinsky RB strategy~\cite{Ballarin-RAMBR-17}.
Moreover, strategies to improve efficiency and accuracy of POD-ROM in the context of realistic and turbulent flows have been proposed by coupling the projection-based framework with residual minimisation~\cite{Farhat-CFCA-13}, nonlinear least-squares optimisation~\cite{Zimmermann-ZG-12}, interpolation based on radial basis functions~\cite{Rozza-GSSRB} and a constrained greedy approach~\cite{Taddei-FMPT-18}.
In the context of machine learning-based reduced order models~\cite{Iliescu-XMRI-18,Hesthaven-GH-19,Hesthaven-WHR-19}, a strategy coupling a traditional projection-based POD for velocity and pressure with a data-driven technique for the eddy viscosity has been recently proposed in~\cite{Rozza-HSMR-20}.
All above contributions involve the development of \emph{a posteriori} ROMs, namely RB and POD, in which the basis of the low-dimensional approximation space is computed starting from a set of snapshots. On the contrary, PGD~\cite{Chinesta-Keunings-Leygue,Chinesta-CLBACGAAH:13} constructs a reduced basis of separable functions explicitly depending on space and on user-defined parameters, with no \emph{a priori} knowledge of the solution of the problem.
The resulting PGD \emph{computational vademecum} provides a generalised solution which can thus be efficiently evaluated in the online phase via interpolation in the parametric space, that is, no extra problem needs to be solved in the low-dimensional reduced space as in POD.
In the context of flow problems, PGD was originally utilised to develop efficient solvers for the incompressible Navier-Stokes equations by separating spatial directions~\cite{Allery-DAA-10} and space and time~\cite{Allery-DAA-11,Allery-LA-14}.
In addition, problems involving parametrised geometries have been solved using PGD~\cite{AH-AHCCL:14,SZ-ZDMH:15}, with special emphasis on incompressible flows in geometrically parametrised domains~\cite{PD-DZH:17,RS-SZH:20}.
To foster the application of \emph{a priori} model order reduction techniques to problems of industrial interest, a non-intrusive PGD implementation in the CFD software OpenFOAM has been recently proposed in~\cite{Tsiolakis-TGSOH-20} to solve parametrised incompressible Navier-Stokes equations in the laminar regime.
Following the work on PGD for convection phenomena~\cite{2010-IJMF-GDCCDH,Gonzalez-GCCDH-13} and for viscous incompressible Navier-Stokes flows~\cite{Tsiolakis-TGSOH-20}, the present contribution proposes the first \emph{a priori} ROM for turbulent incompressible flows.
The proposed PGD \emph{computational vademecum} relies on a separated approximation of the RANS equations coupled with the SA turbulence model. The PGD-ROM methodology mimics the structure of the \texttt{simpleFoam} algorithm with the SA turbulence model, resulting in a minimally intrusive approach within OpenFOAM.
The resulting strategy provides a generalised solution of incompressible flows, explicitly depending on user-defined parameters, in convection-dominated regimes.
The remainder of this paper is organised as follows. Section~\ref{sc:RANS-SA} recalls the full order RANS-SA equations and the corresponding cell-centred FV approximation utilised by OpenFOAM. The rationale of the PGD-ROM for the turbulent incompressible Navier-Stokes equations is introduced in section~\ref{sc:Par-RANS-SA}, where the details of the algorithms to devise the separated velocity-pressure approximation of the flow equations (\texttt{PGD-NS}) and the separated form of the eddy (\texttt{PGD-SA}) and turbulent (\texttt{PGD-$\nu_t$}) viscosities via the SA equation are presented. Numerical experiments involving flow control in external aerodynamics, in two and three dimensions, with Reynolds number ranging up to $1,000,000$ are reported in section~\ref{sc:simulations}, whereas section~\ref{sc:Conclusion} summarises the contributions of this work. Three appendices provide the technical details of the derivation of the PGD formulation of the Navier-Stokes and SA equations, as well as the expressions of the coefficients for the corresponding spatial and parametric iterations of the alternating direction scheme.
\section{The Reynolds-averaged Navier-Stokes equations and the Spalart-Allmaras turbulence model}
\label{sc:RANS-SA}
To simulate turbulent incompressible flows using the RANS equations, the velocity-pressure pair $(\bm{u},p)$ is decomposed into a mean flow component $(\bm{U},P)$ and a perturbation $(\bm{u}',p')$, that is $\bm{u} {=} \bm{U} {+} \bm{u}'$ and $p {=} P {+} p'$.
Given an open bounded computational domain $\Omega \subset \mathbb{R}^{d}$ in $d$ spatial dimensions, the boundary $\partial\Omega$ is partitioned such that $\partial\Omega {=} \Gamma_{\!\! \text{w}} \cup \Gamma_{\!\! \text{in}} \cup \Gamma_{\!\! \text{out}}$, where the three disjoint portions $\Gamma_{\!\! \text{w}}$, $\Gamma_{\!\! \text{in}}$ and $\Gamma_{\!\! \text{out}}$ denote material walls, inlet and outlet surfaces, respectively.
The steady-state RANS equations for the mean flow variables $(\bm{U},P)$ are given by
\begin{equation} \label{eq:RANS}
\left\{\begin{aligned}
\bm{\nabla} {\cdot} (\bm{U} {\otimes} \bm{U}) - \bm{\nabla} {\cdot} ((\nu {+} \nu_t) \bm{\nabla} \bm{U}) + \bm{\nabla} P &= \bm{0} &&\text{in $\Omega$,}\\
\bm{\nabla} {\cdot} \bm{U} &= 0 &&\text{in $\Omega$,}\\
\bm{U} &= \bm{0} &&\text{on $\Gamma_{\!\! \text{w}}$,}\\
\bm{U} &= \bm{U}_{\!\! \text{in}} &&\text{on $\Gamma_{\!\! \text{in}}$,}\\
(\nu \bm{\nabla} \bm{U} {-} P \mat{I}_{d} ) \bm{n} &= \bm{0} &&\text{on $\Gamma_{\!\! \text{out}}$,}
\end{aligned}\right.
\end{equation}
where the velocity profile $\bm{U}_{\!\! \text{in}}$ is imposed on the inlet surface $\Gamma_{\!\! \text{in}}$, whereas it is customary to use homogeneous Dirichlet and Neumann boundary data to model no-slip and free-traction conditions on fixed material walls $\Gamma_{\!\! \text{w}}$ and outlet surfaces $\Gamma_{\!\! \text{out}}$, respectively.
In Equation~\eqref{eq:RANS}, $\mat{I}_{d}$ denotes the $d {\times} d$ identity matrix, $\nu$ represents the physical viscosity of the fluid, whereas the turbulent viscosity $\nu_t$ has been introduced in the momentum equation to model the perturbations to the mean flow due to turbulence.
The one-equation SA turbulence model, derived by means of dimensional analysis and empirical observations~\cite{SpalartAllmaras-92}, introduces the relationship
\begin{equation}\label{eq:nuT}
\nu_t = \widetilde{\nu} f_{v1}
\end{equation}
between the turbulent viscosity $\nu_t$ and the eddy viscosity $\widetilde{\nu}$.
Under the assumption of fully turbulent flows, the trip term in the SA turbulence model is neglected and $\widetilde{\nu}$ is solution of
\begin{equation}\label{eq:SA}
\left\{\begin{aligned}
\bm{\nabla} {\cdot} (\bm{U} \widetilde{\nu}) - \frac{1}{\sigma} \bm{\nabla} {\cdot} ( (\nu {+} \widetilde{\nu}) \bm{\nabla} \widetilde{\nu} ) - \frac{c_{b2}}{\sigma} \bm{\nabla} \widetilde{\nu} {\cdot} \bm{\nabla} \widetilde{\nu} &= s &&\text{in $\Omega$,}\\
\widetilde{\nu} &= \widetilde{\nu}_D &&\text{on $\Gamma_{\!\! \text{in}} \cup \Gamma_{\!\! \text{w}}$,}\\
\bm{\nabla} \widetilde{\nu} {\cdot} \bm{n} &= 0 &&\text{on $\Gamma_{\!\! \text{out}}$,}
\end{aligned}\right.
\end{equation}
where $\widetilde{\nu}_D$ is the eddy viscosity datum and the source term is given by
\begin{equation}\label{eq:sourceSA}
s := c_{b1} \widetilde{S} \widetilde{\nu} - c_{w1} \frac{f_w}{\widetilde{d}^2} \widetilde{\nu}^2 .
\end{equation}
The terms on the left-hand side of equation~\eqref{eq:SA} account for the convection and diffusion of the eddy viscosity, whereas the right-hand side, see equation~\eqref{eq:sourceSA}, features the production and destruction contributions, respectively. The SA turbulence model is thus closed by introducing the following definitions
\begin{equation}\label{eq:functionsSA}
\hspace{-7pt}
\begin{alignedat}{5}
\bm{\omega} &:= \frac{\bm{\nabla} \bm{U} - \bm{\nabla} \bm{U}^T}{2} , \quad && \widetilde{S} &&:= \left[ 2\bm{\omega} : \bm{\omega} \right]^{1/2} + \frac{\widetilde{\nu}}{\kappa^2 \widetilde{d}^2} f_{v2} , \quad && \chi &&:= \frac{\widetilde{\nu}}{\nu} , \\
f_w &:= g \left[ \frac{1 + c_{w3}^6}{g^6 + c_{w3}^6} \right]^{1/6} , \quad && f_{v2} &&:= 1 - \frac{\chi}{1+ \chi f_{v1}} , \quad && f_{v1} &&:= \frac{\chi^3}{\chi^3 + c_{v1}^3} , \\
c_{w1} &:= \frac{c_{b1}}{\kappa^2} + \frac{1 + c_{b2}}{\sigma} , \quad && g &&:= r + c_{w2}(r^6 -r) , \quad && r &&:= \frac{\widetilde{\nu}}{\widetilde{S} \kappa^2 \widetilde{d}^2} ,
\end{alignedat}
\end{equation}
where $\widetilde{d}$ is the minimum distance to the closest wall, $\sigma {=} 2/3$, $\kappa {=} 0.41$, $c_{b1} {=} 0.1355$, $c_{b2} {=} 0.622$, $c_{v1} {=} 7.1$, $c_{w2} {=} 0.3$ and $c_{w3} {=} 2$.
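The closure relations above translate directly into code. The sketch below evaluates the SA source term~\eqref{eq:sourceSA} for scalar cell values; the variable names are ours, and practical implementations (including OpenFOAM's) typically also clip $\widetilde{S}$ and $r$ for robustness, which is omitted here:

```python
import numpy as np

# SA model constants (as in the closure definitions)
sigma, kappa = 2.0/3.0, 0.41
c_b1, c_b2, c_v1 = 0.1355, 0.622, 7.1
c_w2, c_w3 = 0.3, 2.0
c_w1 = c_b1/kappa**2 + (1.0 + c_b2)/sigma

def sa_source(nu_tilde, nu, vort_mag, d):
    """Source s = c_b1*S_tilde*nu_tilde - c_w1*(f_w/d^2)*nu_tilde^2,
    with vort_mag = sqrt(2 * omega:omega) and d the wall distance."""
    chi = nu_tilde/nu
    f_v1 = chi**3/(chi**3 + c_v1**3)
    f_v2 = 1.0 - chi/(1.0 + chi*f_v1)
    S_tilde = vort_mag + nu_tilde/(kappa**2*d**2)*f_v2
    r = nu_tilde/(S_tilde*kappa**2*d**2)
    g = r + c_w2*(r**6 - r)
    f_w = g*((1.0 + c_w3**6)/(g**6 + c_w3**6))**(1.0/6.0)
    s = c_b1*S_tilde*nu_tilde - c_w1*(f_w/d**2)*nu_tilde**2
    nu_t = nu_tilde*f_v1          # turbulent viscosity, relation nu_t = nu_tilde*f_v1
    return s, nu_t
```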
\subsection{A finite volume formulation of the RANS-SA equations}
\label{sc:CCFV-NS-SA}
In order to discretise the turbulent Navier-Stokes equations, the OpenFOAM cell-centred finite volume rationale is considered~\cite{OpenFOAM}. The computational domain is subdivided into $N$ cells $V_i, \, i{=}1,\ldots,N$ such that $V_i {\cap} V_j {=} \emptyset$ for $i {\neq} j$ and $\Omega {=} \bigcup_{i=1}^{N} V_i$.
In each cell $V_i$, the integral form of equation~\eqref{eq:RANS} is defined as
\begin{equation} \label{eq:weak-RANS}
\!\left\{\begin{aligned}
\int_{V_i}{\!\! \bm{\nabla} {\cdot} (\bm{U} {\otimes} \bm{U}) \, dV}
- \int_{V_i}{\!\! \bm{\nabla} {\cdot} ((\nu {+} \nu_t) \bm{\nabla} \bm{U}) \, dV}
+ \int_{V_i}{\!\! \bm{\nabla} P \, dV}
&= \bm{0} , \\
\int_{V_i}{\!\! \bm{\nabla} {\cdot} \bm{U} \, dV} &= 0 ,
\end{aligned}\right.
\end{equation}
where $(\bm{U},P)$ are cell-by-cell constant approximations of the velocity and pressure fields, respectively, and $\bm{U} {=} \bm{U}_{\!\! \text{in}} \, \text{on} \, \Gamma_{\!\! \text{in}}$ and $\bm{U} {=} \bm{0} \, \text{on} \, \Gamma_{\!\! \text{w}}$.
In a similar fashion, the cell-centred finite volume approximation of the SA equation~\eqref{eq:SA} is: compute $\widetilde{\nu}$ constant in each cell such that $\widetilde{\nu} = \widetilde{\nu}_D \, \text{on} \, \Gamma_{\!\! \text{in}} \cup \Gamma_{\!\! \text{w}}$ and it holds
\begin{equation} \label{eq:weak-SA}
\begin{aligned}
\int_{V_i}{\!\! \bm{\nabla} {\cdot} (\bm{U} \widetilde{\nu}) \, dV}
- \frac{1}{\sigma} \int_{V_i}{\!\! \bm{\nabla} {\cdot} \left( (\nu {+} \widetilde{\nu}) \bm{\nabla} \widetilde{\nu} \right) \, dV}
- \frac{c_{b2}}{\sigma} & \int_{V_i}{\!\! \bm{\nabla} \widetilde{\nu} {\cdot} \bm{\nabla} \widetilde{\nu} \, dV} \\
=
c_{b1} \int_{V_i}{\!\! \widetilde{S} \widetilde{\nu} \, dV}
& - c_{w1} \int_{V_i}{\!\! \frac{f_w}{\widetilde{d}^2} \widetilde{\nu}^2 \, dV} .
\end{aligned}
\end{equation}
\subsection{A turbulent Navier-Stokes solver in OpenFOAM}
\label{sc:OpenFOAM-NS-SA}
The OpenFOAM strategy to solve the RANS equations with the SA turbulence model relies on a staggered approach.
First, the flow equations~\eqref{eq:weak-RANS} are solved using a seed value of $\nu_t$. More precisely, the integrals over each cell in~\eqref{eq:weak-RANS} are approximated by means of the corresponding fluxes across the boundaries of the cell~\cite{Ohlberger-BHO:17,Eymard-EGH:00}. In addition, the semi-implicit method for pressure linked equations (SIMPLE) algorithm~\cite{Patankar-PS:72}, that is, a fractional-step Chorin-Temam projection method~\cite[Sect. 6.7]{Donea-Huerta}, is utilised to handle incompressibility and a relaxation approach is employed for the nonlinear convection term.
Second, the velocity field $\bm{U}$ obtained using \texttt{simpleFoam} is employed to compute the quantities in~\eqref{eq:functionsSA} and to solve the SA equation~\eqref{eq:weak-SA}. It is worth noting that equation~\eqref{eq:weak-SA} is highly nonlinear and a relaxation strategy is implemented in OpenFOAM to improve the convergence of the numerical algorithm~\cite{OpenFOAM}.
Finally, the updated value of the turbulent viscosity $\nu_t$ is determined according to equation~\eqref{eq:nuT} and the \texttt{simpleFoam} routine is utilised to recompute the turbulent velocity and pressure fields.
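The staggered strategy just described can be summarised schematically. In the sketch below, the callables are hypothetical stand-ins for the \texttt{simpleFoam} solve and the SA solve, and the under-relaxation of the turbulent viscosity mirrors the relaxation mentioned above:

```python
def rans_sa_staggered(solve_simple, solve_sa, f_v1, nu_t0, n_outer=50, relax=0.7):
    """Schematic staggered RANS-SA driver. solve_simple and solve_sa are
    hypothetical callables: SIMPLE iterations with frozen nu_t, and the
    nonlinear SA solve with frozen velocity, respectively."""
    nu_t = nu_t0
    for _ in range(n_outer):
        U, P = solve_simple(nu_t)             # flow solve with seed/frozen nu_t
        nu_tilde = solve_sa(U)                # SA solve using the new velocity
        nu_t_new = nu_tilde*f_v1(nu_tilde)    # update via nu_t = nu_tilde*f_v1
        nu_t = relax*nu_t_new + (1.0 - relax)*nu_t   # under-relaxation
    return U, P, nu_t
```

With toy contractive callables the loop converges to the coupled fixed point, which is the behaviour the staggered iterations rely on.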
\section{Proper generalised decomposition for parametric turbulent flow problems}
\label{sc:Par-RANS-SA}
In the context of parametric studies, the viscosity coefficient, the reference velocity or the boundary conditions of the problem may depend on a set of $M$ user-defined parameters $\bm{\mu} {=} (\mu_1,\ldots,\mu_M)^T$.
The solution of the RANS-SA equations is thus denoted by the velocity-pressure pair $(\bm{U}(\bm{x},\bm{\mu}),P(\bm{x},\bm{\mu}))$ and the eddy viscosity $\widetilde{\nu}(\bm{x},\bm{\mu})$, which are now functions of the spatial, $\bm{x} \in \Omega \subset \mathbb{R}^d$, and parametric, $\bm{\mu}\in\bm{\mathcal{I}}\subset\mathbb{R}^{M}$, variables.
More precisely, $\bm{U}(\bm{x},\bm{\mu}),P(\bm{x},\bm{\mu})$ and $\widetilde{\nu}(\bm{x},\bm{\mu})$ fulfil the high-dimensional RANS-SA equations obtained by integrating~\eqref{eq:weak-RANS}-\eqref{eq:weak-SA} in the parametric space $\bm{\mathcal{I}}$.
\subsection{Preliminary notions on the proper generalised decomposition}
\label{sc:PGD}
The PGD separated approximation of the high-dimensional RANS-SA equations, denoted by $(\upgd^n, \ppgd^n)$ for the flow field and $\nuTpgd^m$ for the eddy viscosity, is constructed as a sum of $n$ (respectively, $m$) separable modes, each being the product of functions which depend on either the spatial or one of the parametric variables $\mu_j, \ j=1,\ldots,M$. Henceforth, only space, $\bm{x}$, and parameters, $\bm{\mu}$, are separated for the sake of simplicity.
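Before detailing the construction, it is worth visualising how such a separated approximation is exploited online: once the modes are available, evaluating the generalised solution at a new parameter only requires interpolating the parametric factors and summing the terms. A minimal NumPy sketch with illustrative (non-physical) modes:

```python
import numpy as np

# Online evaluation of a separated field u(x, mu) = sum_n sigma_n f_n(x) phi_n(mu).
# Toy modes for illustration: f_n sampled on a 1D mesh, phi_n on a parameter grid.
x = np.linspace(0.0, 1.0, 101)
mu_grid = np.linspace(1.0, 2.0, 21)
sigmas = [1.0, 0.1]
f_modes = [np.sin(np.pi*x), np.sin(2*np.pi*x)]
phi_modes = [mu_grid, mu_grid**2]

def evaluate_pgd(mu):
    """Interpolate each parametric mode at mu and sum the separated terms."""
    u = np.zeros_like(x)
    for s, f, phi in zip(sigmas, f_modes, phi_modes):
        u += s * f * np.interp(mu, mu_grid, phi)
    return u

u = evaluate_pgd(1.5)   # no new problem is solved in the online phase
```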
Following~\cite{Tsiolakis-TGSOH-20}, a \emph{predictor-corrector} approximation is utilised, namely
\begin{subequations}\label{eq:sep-increment}
\begin{equation}\label{eq:sep-increment-up}
\left\{\begin{aligned}
\upgd^n(\bm{x},\bm{\mu}) &=
\upgd^{n-1}(\bm{x},\bm{\mu}) + \sigmaU^n\bm{f}_{\!\! U}^n(\bm{x})\phi^n(\bm{\mu}) + \sigmaU^n\delta\upgd^n(\bm{x},\bm{\mu}) \\
\ppgd^n(\bm{x},\bm{\mu}) &=
\ppgd^{n-1}(\bm{x},\bm{\mu}) +\sigmaP^nf_{\!\! P}^n(\bm{x})\phi^n(\bm{\mu}) + \sigmaP^n\delta\ppgd^n(\bm{x},\bm{\mu})
\end{aligned}\right.
\end{equation}
for the flow variables and
\begin{equation}\label{eq:sep-increment-nu}
\nuTpgd^m(\bm{x},\bm{\mu}) =
\nuTpgd^{m-1}(\bm{x},\bm{\mu}) +\sigmaNU^mf_{\! \nu}^m(\bm{x})\psi^m(\bm{\mu}) + \sigmaNU^m\delta\nuTpgd^m(\bm{x},\bm{\mu})
\end{equation}
\end{subequations}
for the eddy viscosity.
It is worth noticing that the same scalar parametric function $\phi(\bm{\mu})$ is selected for both velocity and pressure~\cite{PD-DZH:17}, whereas a different scalar function, $\psi(\bm{\mu})$, is considered for the eddy viscosity.
In~\eqref{eq:sep-increment}, $(\upgd^{n-1},\ppgd^{n-1})$ and $\nuTpgd^{m-1}$ feature the contributions of the previous $n{-}1$ and $m{-}1$ PGD modes, respectively, and $\sigmaU^n\bm{f}_{\!\! U}^n\phi^n$,$\sigmaP^nf_{\!\! P}^n\phi^n$ and $\sigmaNU^mf_{\! \nu}^m\psi^m$ represent the predictions of the current term in the PGD expansion of velocity, pressure and eddy viscosity.
The terms $\sigmaU^n\delta\upgd^n,\sigmaP^n\delta\ppgd^n$ and $\sigmaNU^m\delta\nuTpgd^m$ account for the corrections utilised in the computation of the current mode and feature variations $\varDelta$ in the spatial and parametric functions, namely
\begin{equation}\label{eq:PGD-corr}
\left\{\begin{aligned}
\delta\upgd^n(\bm{x},\bm{\mu}) &:= \varDelta\bm{f}_{\!\! U}(\bm{x})\phi^n(\bm{\mu})+\bm{f}_{\!\! U}^n(\bm{x})\varDelta\phi(\bm{\mu})+\varDelta\bm{f}_{\!\! U}(\bm{x})\varDelta\phi(\bm{\mu}) , \\
\delta\ppgd^n(\bm{x},\bm{\mu}) &:= \Def_{\!\! P}(\bm{x})\phi^n(\bm{\mu})+f_{\!\! P}^n(\bm{x})\varDelta\phi(\bm{\mu})+\Def_{\!\! P}(\bm{x})\varDelta\phi(\bm{\mu}) , \\
\delta\nuTpgd^m(\bm{x},\bm{\mu}) &:= \Def_{\! \nu}(\bm{x})\psi^m(\bm{\mu})+f_{\! \nu}^m(\bm{x})\varDelta\psi(\bm{\mu})+\Def_{\! \nu}(\bm{x})\varDelta\psi(\bm{\mu}) ,
\end{aligned}\right.
\end{equation}
where the high-order variation introduced by the last term is henceforth neglected.
\begin{remark}\label{rmrk:amplitude}
The coefficients $\sigmaU^n$, $\sigmaP^n$ and $\sigmaNU^m$ denote the amplitudes of the velocity, pressure and eddy viscosity modes, respectively. Given $\| \phi^n {+} \varDelta\phi \| {=} 1$ and $\| \psi^m {+} \varDelta\psi \| {=} 1$, they are defined as
\begin{equation*}
\sigmaU^n := \| \bm{f}_{\!\! U}^n + \varDelta\bm{f}_{\!\! U} \| , \,\, \sigmaP^n := \| f_{\!\! P}^n + \Def_{\!\! P} \| \,\, \text{and} \,\, \sigmaNU^m := \| f_{\! \nu}^m + \Def_{\! \nu} \| ,
\end{equation*}
where the dependence of the modes on $\bm{x}$ and $\bm{\mu}$ is omitted for the sake of readability. For the simulations in section~\ref{sc:simulations}, $\ensuremath{\mathcal{L}_2}$ norms over the spatial and parametric domains have been considered for the normalisation step.
\end{remark}
It is worth noticing that the final number of modes needed for the PGD approximations, denoted by the superscripts $n$ and $m$ in~\eqref{eq:sep-increment}, is not known \emph{a priori} and is in general different for the flow variables and the eddy viscosity.
More precisely, the number of terms in the PGD expansion is automatically determined by the algorithm, which ends the enrichment procedure when a user-defined stopping criterion is fulfilled. Classical definitions of the stopping criterion include the relative amplitude of the last computed mode with respect to the first one or to the sum of all previously computed terms~\cite{Tsiolakis-TGSOH-20}.
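The greedy enrichment with an amplitude-based stopping criterion can be sketched as follows; \texttt{compute\_mode} is a hypothetical callable encapsulating one full alternating-direction solve for the next term:

```python
def pgd_enrich(compute_mode, tol=1e-4, max_modes=50):
    """Greedy PGD enrichment (schematic). compute_mode(modes) is a
    hypothetical callable returning (sigma, f_x, phi_mu) for the next term;
    enrichment stops when sigma/sigma_1 drops below tol."""
    modes, sigma_1 = [], None
    for _ in range(max_modes):
        sigma, f_x, phi_mu = compute_mode(modes)
        modes.append((sigma, f_x, phi_mu))
        if sigma_1 is None:
            sigma_1 = sigma                 # amplitude of the first mode
        if sigma/sigma_1 < tol:             # relative-amplitude criterion
            break
    return modes
```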
\subsection{Proper generalised decomposition of the flow equations}
\label{sc:PGD-RANS}
In this section, the spatial and parametric steps of the non-intrusive PGD algorithm applied to the Navier-Stokes equations~\eqref{eq:weak-RANS} are recalled. First, the laminar case is considered, that is the turbulent viscosity $\nu_t$ is set to zero. The derivation of such formulation is briefly reported in appendix~\ref{sc:appPGD-NS}, whereas the interested reader is referred to~\cite{Tsiolakis-TGSOH-20} for a detailed presentation of the method.
Recall that in order to construct a PGD approximation, a separated representation of the user-defined data is required~\cite{SZ-ZDMH:15}, e.g. $\nu(\bm{x},\bm{\mu}) {=} D(\bm{x})\zeta(\bm{\mu})$ for the physical viscosity.
In addition, the first mode $(\upgd^0, \ppgd^0)$ is selected to satisfy the boundary condition on the inlet surface and, more generally, all inhomogeneous Dirichlet-type boundary conditions. The following terms of the PGD expansion of velocity and pressure are computed via an alternating direction scheme~\cite{PGD-CCH:14, Chinesta-Keunings-Leygue}, henceforth named \texttt{PGD-NS}.
At each iteration of the alternating direction approach, a spatial mode $(\sigmaU^n\bm{f}_{\!\! U}^n,\sigmaP^nf_{\!\! P}^n)$ is computed using \texttt{simpleFoam} and an algebraic problem is solved to determine the corresponding parametric function $\phi^n$, as detailed in the following subsections.
The alternating direction algorithm stops when the relevance of the increments $\varDelta\bm{f}_{\!\! U}$ and $\Def_{\!\! P}$ is negligible with respect to the amplitudes $\sigmaU^n$ and $\sigmaP^n$ of the corresponding modes.
Similarly, the global procedure stops when the contribution of the current mode in the PGD enrichment, measured by means of its relative amplitude, is negligible.
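The alternating direction computation of a single mode can be sketched as follows; both callables are hypothetical stand-ins (the \texttt{simpleFoam}-like spatial solve and the algebraic parametric solve), and the normalisation of the parametric function reflects the convention of Remark~\ref{rmrk:amplitude}:

```python
import numpy as np

def alternating_directions(spatial_step, parametric_step, f0, phi0,
                           tol=1e-3, max_iter=20):
    """One PGD mode by alternating directions (schematic). spatial_step
    returns the spatial increment, parametric_step the parametric one."""
    f, phi = np.asarray(f0, float), np.asarray(phi0, float)
    for _ in range(max_iter):
        df = spatial_step(f, phi)              # spatial iteration
        f = f + df
        phi = phi + parametric_step(f, phi)    # parametric iteration
        phi = phi/np.linalg.norm(phi)          # normalise the parametric mode
        if np.linalg.norm(df) < tol*np.linalg.norm(f):
            break                              # increment negligible vs amplitude
    return f, phi
```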
\subsubsection{\texttt{PGD-NS}: the spatial iteration}
First, the high-dimensional problem, obtained by integrating~\eqref{eq:weak-RANS} in $\bm{\mathcal{I}}$, is restricted to the spatial direction fixing the parametric function $\phi^n$. The spatial increments $(\sigmaU^n \varDelta\bm{f}_{\!\! U}, \sigmaP^n \Def_{\!\! P})$ are thus computed as the FV solution of the flow equations
\begin{equation} \label{eq:PGD-NS-spatial}
\left\{\begin{aligned}
\alpha_0 \!\! \int_{V_i}\!\! \bm{\nabla} {\cdot} (\sigmaU^n \varDelta\bm{f}_{\!\! U} {\otimes} \sigmaU^n & \varDelta\bm{f}_{\!\! U}) dV \\
- \alpha_1 \!\! \int_{V_i} \bm{\nabla} {\cdot} (D \bm{\nabla} (\sigmaU^n & \varDelta\bm{f}_{\!\! U})) dV
+ \alpha_2 \!\! \int_{V_i}{\bm{\nabla} (\sigmaP^n \Def_{\!\! P}) dV} \\
&= R_U^n - \!\! \int_{V_i}\!\! \bm{\nabla} {\cdot} \Big( \sum_{j=1}^n \! \alpha_3^j \sigmaU^j \bm{f}_{\!\! U}^j {\otimes} \sigmaU^{k-1} \varDelta\bm{f}_{\!\! U}^{k-1} \Big) dV \\
& \hspace{4em} - \!\! \int_{V_i}\!\! \bm{\nabla} {\cdot} \Big( \sigmaU^{k-1} \varDelta\bm{f}_{\!\! U}^{k-1} {\otimes}\!\sum_{j=1}^n \!\alpha_3^j \sigmaU^j \bm{f}_{\!\! U}^j \Big) dV , \\
\alpha_2 \!\! \int_{V_i}{\bm{\nabla} {\cdot} (\sigmaU^n \varDelta\bm{f}_{\!\! U}) dV} &= R_P^n ,
\end{aligned}\right.
\end{equation}
where the coefficients $\alpha_j, \, j{=}0,\ldots,3$, reported in appendix~\ref{sc:appCoeff}, only depend on user-defined data and parametric functions and can thus be efficiently precomputed.
On the right-hand sides of the momentum and continuity equations, the residuals $R_U^n$ and $R_P^n$ are determined using the previously computed terms $(\upgd^{n-1},\ppgd^{n-1})$ and the predictions $(\sigmaU^n\bm{f}_{\!\! U}^n\phi^n,\sigmaP^nf_{\!\! P}^n\phi^n)$ of the current mode in the PGD expansions of velocity and pressure, namely
\begin{subequations}\label{eq:NSspatialRes-general}
\begin{align}
&
\begin{aligned}
R_U^n :=
&- \int_{\bm{\mathcal{I}}} \phi^n \int_{V_i}{\!\! \bm{\nabla} {\cdot} \left( [\upgd^{n-1} + \sigmaU^n\bm{f}_{\!\! U}^n \phi^n ] {\otimes} [\upgd^{n-1} + \sigmaU^n\bm{f}_{\!\! U}^n \phi^n ] \right) dV \, d\bm{\mathcal{I}}} \\
&+ \int_{\bm{\mathcal{I}}} \phi^n \int_{V_i}{\!\! \bm{\nabla} {\cdot} \left( \nu \bm{\nabla} (\upgd^{n-1} + \sigmaU^n\bm{f}_{\!\! U}^n \phi^n) \right) dV \, d\bm{\mathcal{I}}} \\
&- \int_{\bm{\mathcal{I}}} \phi^n \int_{V_i}{\!\! \bm{\nabla} (\ppgd^{n-1} + \sigmaP^nf_{\!\! P}^n \phi^n) \, dV \, d\bm{\mathcal{I}}} ,
\end{aligned}
\label{eq:NSspatialResU-general} \\
&
R_P^n :=
- \int_{\bm{\mathcal{I}}} \phi^n \int_{V_i}{\!\! \bm{\nabla} {\cdot} \left(\upgd^{n-1} + \sigmaU^n\bm{f}_{\!\! U}^n \phi^n \right) dV \, d\bm{\mathcal{I}}} .
\label{eq:NSspatialResP-general}
\end{align}
\end{subequations}
It is worth recalling that the factor $\phi^n$ in the expressions of $R_U^n$ and $R_P^n$, see equation~\eqref{eq:NSspatialRes-general}, follows from the restriction of the residuals defined in the high-dimensional space $\Omega {\times} \bm{\mathcal{I}}$ to the tangent manifold associated with the spatial direction~\cite{Tsiolakis-TGSOH-20}.
In order to perform an efficient computation of such residuals, the separated expressions of $(\upgd^{n-1},\ppgd^{n-1})$ as a product of spatial and parametric functions are exploited, leading to
\begin{subequations}\label{eq:NSspatialRes}
\begin{align}
&
\begin{aligned}
R_U^n
= -& \sum_{j=1}^n \sum_{\ell=1}^n \alpha_4^{j\ell} \!\! \int_{V_i}{\bm{\nabla} {\cdot} (\sigmaU^j\bm{f}_{\!\! U}^j {\otimes} \sigmaU^\ell\bm{f}_{\!\! U}^\ell) \, dV} \\
&+ \sum_{\ell=1}^n \alpha_5^\ell \int_{V_i}{\!\! \bm{\nabla} {\cdot} ( D \bm{\nabla} (\sigmaU^\ell\bm{f}_{\!\! U}^\ell ) ) \, dV}
- \sum_{\ell=1}^n \alpha_6^\ell \int_{V_i}{\!\! \bm{\nabla} (\sigmaP^\ellf_{\!\! P}^\ell ) \, dV} ,
\end{aligned}
\label{eq:NSspatialResU} \\
&
R_P^n
= - \sum_{\ell=1}^n \alpha_6^\ell \int_{V_i}{\!\! \bm{\nabla} {\cdot} (\sigmaU^\ell \bm{f}_{\!\! U}^\ell ) \, dV} ,
\label{eq:NSspatialResP}
\end{align}
\end{subequations}
where the coefficients $\alpha_j, \, j{=}4,\ldots,6$ encapsulate the information of the previously computed parametric modes and are defined in appendix~\ref{sc:appCoeff}.
In order to devise a strategy for the spatial iteration of the \texttt{PGD-NS} step which is non-intrusive with respect to the OpenFOAM SIMPLE algorithm, two linear contributions arising from the relaxation of the convection term appear on the right-hand side of the momentum equation in~\eqref{eq:PGD-NS-spatial}. More precisely, the last two integrals are evaluated using the last computed increment $\sigmaU^{k-1}\varDelta\bm{f}_{\!\! U}^{k-1}$ in the SIMPLE iterations.
It is straightforward to observe that the resulting structure of the left-hand side of equations~\eqref{eq:PGD-NS-spatial} mimics the traditional Navier-Stokes equations~\eqref{eq:weak-RANS}, with $\nu_t {=} 0$.
Hence, the PGD spatial iteration is solved using the \texttt{simpleFoam} algorithm, natively implemented in OpenFOAM, and a non-intrusive model reduction strategy for parametric problems is obtained~\cite{Tsiolakis-TGSOH-20}.
\subsubsection{\texttt{PGD-NS}: the parametric iteration}
To compute the parametric increment $\varDelta\phi$, the value of the spatial functions \\ $(\sigmaU^n\bm{f}_{\!\! U}^n,\sigmaP^nf_{\!\! P}^n) {\gets} (\sigmaU^n [\bm{f}_{\!\! U}^n {+} \varDelta\bm{f}_{\!\! U}], \sigmaP^n [f_{\!\! P}^n {+} \Def_{\!\! P}])$ is updated and fixed.
The high-dimensional problem is thus restricted to the parametric direction $\bm{\mathcal{I}}$ and the following algebraic equation is obtained
%
\begin{subequations}\label{eq:PGD-NS-param}
\begin{equation}\label{eq:NSparamMatrix}
a_0 (\varDelta\phi)^2 + \left( - a_1 \zeta + a_2 + \sum_{j=1}^n a_3^j \phi^j \right) \varDelta\phi
= r_U^n + r_P^n ,
\end{equation}
where $r_U^n$ and $ r_P^n$ are given by
\begin{equation}\label{eq:NSparamRes}
r_U^n := \sum_{\ell=1}^n \left( - \sum_{j=1}^n a_4^{j\ell} \phi^j + a_5^\ell \zeta - a_6^\ell \right) \phi^\ell , \quad
r_P := - \sum_{\ell=1}^n a_7^\ell \phi^\ell
\end{equation}
\end{subequations}
and represent the residuals of the discretised momentum and continuity equations, respectively, in the parametric space.
\begin{remark}\label{rmrk:diffLaminar}
Contrary to the parametric problem in~\cite{Tsiolakis-TGSOH-20}, the second-order variation is maintained in~\eqref{eq:NSparamMatrix}. Although this term was negligible in laminar simulations, it has been verified numerically that its presence improves the stability of the solution of the parametric step of the Navier-Stokes equations in the turbulent regime.
\end{remark}
The algebraic problem~\eqref{eq:PGD-NS-param} is fully determined by the coefficients $a_i, \ i=0,\ldots,7$, which are detailed in appendix~\ref{sc:appCoeff}. Similarly to what was observed in the previous section, these coefficients only depend on user-defined data and on previously computed spatial modes and can be efficiently precomputed. The resulting parametric problem is solved through a collocation approach.
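At each collocation point in $\bm{\mathcal{I}}$, the parametric iteration thus amounts to solving a scalar quadratic of the form $a_0(\varDelta\phi)^2 + b\,\varDelta\phi = r$. A minimal vectorised sketch; the selection of the smallest-magnitude root (the small-increment branch) and the assumption of a real discriminant are ours:

```python
import numpy as np

def parametric_increment(a0, b, r):
    """Solve a0*x**2 + b*x - r = 0 at every collocation point (a0, b, r are
    arrays of the same shape), keeping the root of smallest magnitude."""
    disc = np.sqrt(b**2 + 4.0*a0*r)            # assumes real roots
    roots = np.stack([(-b + disc)/(2.0*a0), (-b - disc)/(2.0*a0)])
    pick = np.argmin(np.abs(roots), axis=0)    # small-increment branch
    return np.take_along_axis(roots, pick[None, ...], axis=0)[0]
```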
\subsection{Proper generalised decomposition of the turbulence model}
\label{sc:PGD-SA}
In the derivation of the PGD formulation of the flow equations in the previous section, the turbulent viscosity $\nu_t$ was neglected.
In order to devise a PGD strategy for parametric turbulent flow problems, a key aspect is represented by the construction of a separated approximation of $\nu_t$. In this section, such a task is performed via the formulation of spatial and parametric PGD iterations to compute the eddy viscosity $\widetilde{\nu}$ according to the SA turbulence model.
Similarly to the procedure developed for the flow equations, the first mode $\nuTpgd^0$ is arbitrarily selected to match the imposed value of the turbulent viscosity on $\Gamma_{\!\! \text{in}} \cup \Gamma_{\!\! \text{w}}$. More precisely, Dirichlet data for $\nuTpgd$ are selected as full order solutions of the SA equation computed using the boundary condition modes of velocity.
Assuming $\nuTpgd^{m-1}$ known, the $m$-th term in the PGD expansion is computed by means of a greedy approach via an alternating direction method, named \texttt{PGD-SA}.
More precisely, spatial, $\sigmaNU^mf_{\! \nu}^m$, and parametric, $\psi^m$, modes of the eddy viscosity are determined by alternately solving a PDE in $\Omega$ and an algebraic problem in $\bm{\mathcal{I}}$, until the corrections $\Def_{\! \nu}$ and $\varDelta\psi$ are negligible with respect to the amplitude $\sigmaNU^m$, as detailed in the following subsections.
Finally, the separated form of $\nu_t$ is retrieved according to relationship~\eqref{eq:nuT}, via the \texttt{PGD-$\nu_t$} routine.
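The greedy enrichment with alternating spatial and parametric solves can be illustrated on a small analogue: building a separated approximation of a sampled function of $(\bm{x},\bm{\mu})$ stored as a matrix. The sketch below is not the \texttt{PGD-SA} solver (which alternates a PDE solve in $\Omega$ with an algebraic solve in $\bm{\mathcal{I}}$); it only reproduces the shared alternating-direction skeleton, with least-squares updates standing in for the two directional problems.

```python
import numpy as np

def greedy_rank_one(A, n_modes=3, tol=1e-8, max_ad=100):
    """Greedy separated approximation of a sampled function A[x, mu]:
    each enrichment computes one spatial mode f and one parametric mode
    psi by alternating directions, then deflates the residual.
    Toy analogue of the PGD alternating scheme, not the flow solver."""
    R = A.astype(float).copy()
    F, Psi = [], []
    for _ in range(n_modes):
        psi = np.ones(A.shape[1])            # initial parametric guess
        for _ in range(max_ad):
            f = R @ psi / (psi @ psi)        # spatial direction (least squares)
            psi_new = R.T @ f / (f @ f)      # parametric direction (least squares)
            if np.linalg.norm(psi_new - psi) <= tol * np.linalg.norm(psi_new):
                psi = psi_new
                break
            psi = psi_new
        F.append(f)
        Psi.append(psi)
        R = R - np.outer(f, psi)             # greedy deflation of the residual
    return F, Psi, R
```

Each outer iteration plays the role of one enrichment step; the inner loop stops when the parametric correction becomes negligible, mirroring the stopping criterion on $\Def_{\! \nu}$ and $\varDelta\psi$.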
\subsubsection{\texttt{PGD-SA}: the spatial iteration}
\label{sc:PGD-SA-spatial}
Following the rationale presented for the flow equations, the parametric function $\psi^m$ is fixed and the high-dimensional SA equation is restricted to the spatial direction. The increment $\sigmaNU^m\Def_{\! \nu}$ acts as the unknown of the spatial iteration of the PGD procedure for the parametric SA equation. More precisely, a cell-by-cell constant approximation of $\sigmaNU^m\Def_{\! \nu}$ is computed by solving
\begin{equation} \label{eq:PGD-SA-spatial}
\hspace{-10pt}
\begin{aligned}
\int_{V_i}\!\! \bm{\nabla} {\cdot} &\Big(\!\!\Big(\sum_{j=1}^n \beta_1^j \sigmaU^j \bm{f}_{\!\! U}^j \Big) \sigmaNU^m \Def_{\! \nu} \Big) dV \\
- \frac{1}{\sigma} & \int_{V_i}{\!\! \bm{\nabla} {\cdot} \Big( \Big[ \Big( \beta_2 D {+} \sum_{j=1}^m \beta_3^j \sigmaNU^j f_{\! \nu}^j \Big) {+} \beta_4 \sigmaNU^m \Def_{\! \nu} \Big] \bm{\nabla} (\sigmaNU^m \Def_{\! \nu}) \! \Big) dV} \\
-& \frac{\beta_4 c_{b2}}{\sigma} \int_{V_i}{\!\! \bm{\nabla} (\sigmaNU^m \Def_{\! \nu}) {\cdot} \bm{\nabla} (\sigmaNU^m \Def_{\! \nu}) \, dV} \\
&- \beta_5 c_{b1} \int_{V_i}{\!\! \Sx{\widetilde{S}^m} \sigmaNU^m \Def_{\! \nu} \, dV}
+ \beta_6 c_{w1} \int_{V_i}{\!\! \frac{\Sx{f_w^m}}{\widetilde{d}^2} \left(\sigmaNU^m \Def_{\! \nu} \right)^2 \, dV} \\
&\hspace{85pt} = R_{\nu}^m + \frac{1}{\sigma} \int_{V_i}{\!\! \bm{\nabla} {\cdot} \Big( \sigmaNU^{k-1} \Def_{\! \nu}^{k-1} \bm{\nabla} \Big(\sum_{j=1}^m \beta_3^j \sigmaNU^j f_{\! \nu}^j \Big) \! \Big) dV} \\
&\hspace{95pt} + \frac{2 c_{b2}}{\sigma} \int_{V_i}{\!\! \bm{\nabla} \Big(\sum_{j=1}^m \beta_3^j \sigmaNU^j f_{\! \nu}^j \Big) {\cdot} \bm{\nabla} (\sigmaNU^{k-1} \Def_{\! \nu}^{k-1}) \, dV} \\
&\hspace{98pt} - 2 c_{w1} \int_{V_i}{\!\! \frac{\Sx{f_w^m}}{\widetilde{d}^2} \Big( \sum_{j=1}^m \beta_7^j \sigmaNU^j f_{\! \nu}^j \Big) \sigmaNU^{k-1} \Def_{\! \nu}^{k-1} \, dV} ,
\end{aligned}
\end{equation}
where $\widetilde{S}^m$ and $f_w^m$ are evaluated using the $m$ previously computed modes of $\nuTpgd^m$ and $\Sx{\circledcirc}$ denotes the spatial modes of the function $\circledcirc$.
\begin{remark}\label{rmrk:nonSeparable}
It is worth emphasising that neither $\widetilde{S}(\bm{x},\bm{\mu})$ nor $f_w(\bm{x},\bm{\mu})$ is exactly separable via an analytical procedure. In this context, in order to efficiently compute the coefficients and the integral terms in~\eqref{eq:PGD-SA-spatial}, either a high-dimensional reconstruction of these functions in the space $\Omega \times \bm{\mathcal{I}}$, to be later interpolated in $\Omega$, or a numerical procedure for PGD separation~\cite{PD-DZGH-18,Diez-DZGH-19} needs to be performed. The former strategy is employed in the simulations in section~\ref{sc:simulations}.
\end{remark}
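A minimal illustration of such a numerical separation, assuming a plain truncated SVD on a tensor grid of samples in place of the dedicated PGD separation procedures cited above:

```python
import numpy as np

def separate(func, xs, mus, rank):
    """Numerically separate a non-separable g(x, mu) into
    sum_k s_k * X_k(x) * M_k(mu): sample on a tensor grid and truncate
    the SVD.  A simple stand-in for dedicated PGD separation methods."""
    G = np.array([[func(x, mu) for mu in mus] for x in xs])
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    return s[:rank], U[:, :rank], Vt[:rank, :]
```

For smooth kernels such as those appearing in the SA closure functions, the singular values decay rapidly and a handful of terms suffices.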
As previously observed for the spatial iteration of the flow equations, the right-hand side of equation~\eqref{eq:PGD-SA-spatial} features the residual $R_{\nu}^m$ obtained using the values of the previous terms $\nuTpgd^{m-1}$ in the PGD expansion of the eddy viscosity and the prediction $\sigmaNU^mf_{\! \nu}^m\psi^m$ of the current mode, namely
\begin{equation}\label{eq:SAspatialRes-general}
\begin{aligned}
\mathcal{R}_\nu :=
&- \int_{\bm{\mathcal{I}}} \psi^m \int_{V_i}{\bm{\nabla} {\cdot} \left(\upgd^n (\nuTpgd^{m-1} + \sigmaNU^mf_{\! \nu}^m\psi^m) \right) \, dV \, d\bm{\mathcal{I}}} \\
&+ \frac{1}{\sigma} \int_{\bm{\mathcal{I}}} \psi^m \int_{V_i}{\!\! \bm{\nabla} {\cdot} \Big( (\nu + \nuTpgd^{m-1} + \sigmaNU^mf_{\! \nu}^m\psi^m ) \bm{\nabla} (\nuTpgd^{m-1} + \sigmaNU^mf_{\! \nu}^m\psi^m) \! \Big) dV \, d\bm{\mathcal{I}}} \\
&+ \frac{c_{b2}}{\sigma} \int_{\bm{\mathcal{I}}} \psi^m \int_{V_i}{\!\! \bm{\nabla} (\nuTpgd^{m-1} + \sigmaNU^mf_{\! \nu}^m\psi^m) {\cdot} \bm{\nabla} (\nuTpgd^{m-1} + \sigmaNU^mf_{\! \nu}^m\psi^m) \, dV \, d\bm{\mathcal{I}}} \\
&+ c_{b1} \int_{\bm{\mathcal{I}}} \psi^m \int_{V_i}{\!\! \widetilde{S} (\nuTpgd^{m-1} + \sigmaNU^mf_{\! \nu}^m\psi^m) \, dV \, d\bm{\mathcal{I}}} \\
&- c_{w1} \int_{\bm{\mathcal{I}}} \psi^m \int_{V_i}{\!\! \frac{f_w}{\widetilde{d}^2} (\nuTpgd^{m-1} + \sigmaNU^mf_{\! \nu}^m\psi^m)^2 \, dV \, d\bm{\mathcal{I}}} ,
\end{aligned}
\end{equation}
where the parametric function $\psi^m$ in the integrals above stems from the restriction of the high-dimensional residuals to the tangent manifold in the spatial direction, see appendix~\ref{sc:appPGD-SA}. By exploiting the separated expression of $\nuTpgd^{m-1}$, the residual can be rewritten as
\begin{equation}\label{eq:SAspatialRes}
\begin{aligned}
R_{\nu}^m
= -& \sum_{j=1}^n \sum_{\ell=1}^m \beta_8^{j\ell} \!\! \int_{V_i}{\bm{\nabla} {\cdot} (\sigmaU^j\bm{f}_{\!\! U}^j \sigmaNU^\ellf_{\! \nu}^\ell) \, dV} \\
& + \frac{1}{\sigma} \sum_{\ell=1}^m \int_{V_i}{\!\! \bm{\nabla} {\cdot} \Big(\!\!\Big( \beta_9^\ell D {+} \sum_{j=1}^m \beta_{10}^{j\ell} \sigmaNU^j f_{\! \nu}^j \Big) \bm{\nabla} (\sigmaNU^\ell f_{\! \nu}^\ell) \! \Big) dV} \\
& + \frac{c_{b2}}{\sigma} \sum_{j=1}^m \sum_{\ell=1}^m \beta_{10}^{j\ell} \int_{V_i}{\!\! \bm{\nabla} (\sigmaNU^j f_{\! \nu}^j) {\cdot} \bm{\nabla} (\sigmaNU^\ell f_{\! \nu}^\ell) \, dV} \\
& + c_{b1} \sum_{\ell=1}^m \beta_{11}^\ell \int_{V_i}{\!\! \Sx{\widetilde{S}^m} \sigmaNU^\ell f_{\! \nu}^\ell \, dV} \\
& - c_{w1} \sum_{j=1}^m \sum_{\ell=1}^m \beta_{12}^{j\ell} \int_{V_i}{\!\! \frac{\Sx{f_w^m}}{\widetilde{d}^2} \sigmaNU^j f_{\! \nu}^j \sigmaNU^\ell f_{\! \nu}^\ell \, dV} .
\end{aligned}
\end{equation}
In order for the discussed PGD-ROM implementation to be non-intrusive with respect to the SA solver natively implemented in OpenFOAM, three terms arising from the restriction of the high-dimensional SA equation to the spatial direction, see appendix~\ref{sc:appPGD-SA}, are treated explicitly via a relaxation approach. Hence, in the last three terms on the right-hand side of equation~\eqref{eq:PGD-SA-spatial}, $\sigmaNU^{k-1} \Def_{\! \nu}^{k-1}$ denotes the last computed increment in the iterative procedure to solve the SA equation.
The left-hand side of equation~\eqref{eq:PGD-SA-spatial} thus presents the same structure as the original SA equation~\eqref{eq:weak-SA}, where the spatial integrals are now weighted by means of appropriately computed parametric coefficients, and the OpenFOAM strategy for the solution of the turbulence model equation is exploited.
It is worth noticing that since the functions $\widetilde{S}(\bm{x},\bm{\mu})$ and $f_w(\bm{x},\bm{\mu})$ are not separable exactly using an analytical procedure, see remark~\ref{rmrk:nonSeparable}, equation~\eqref{eq:PGD-SA-spatial} cannot be solved in a completely non-intrusive way using OpenFOAM. The resulting implementation of the \texttt{PGD-SA} algorithm in OpenFOAM is thus minimally intrusive: the structure of equation~\eqref{eq:PGD-SA-spatial} is the same as equation~\eqref{eq:weak-SA}, but a numerical separation of $\widetilde{S}(\bm{x},\bm{\mu})$ and $f_w(\bm{x},\bm{\mu})$ needs to be integrated in the existing OpenFOAM routine for the SA solver.
The spatial problem~\eqref{eq:PGD-SA-spatial}-\eqref{eq:SAspatialRes} is fully closed by the definitions of the coefficients $\beta_j$, $j{=}1,\ldots,12$ in appendix~\ref{sc:appCoeff}. These quantities depend solely on user-defined data and parametric functions and can be precomputed for an efficient implementation of the PGD spatial iteration.
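Since these coefficients are integrals of products of parametric modes over $\bm{\mathcal{I}}$, their precomputation reduces to numerical quadrature on the parametric nodes. A sketch follows, with a generic integrand $\psi_j\psi_\ell$ standing in for the actual definitions given in appendix~\ref{sc:appCoeff} (the quadrature weights, e.g. trapezoidal, are assumed given):

```python
import numpy as np

def precompute_coeffs(param_modes, weights):
    """Precompute coefficients of the generic form
    beta[j, l] = int_I psi_j psi_l dI by quadrature on the parametric
    nodes.  The actual beta integrands are those of the appendix;
    psi_j * psi_l is only a representative example."""
    n = len(param_modes)
    beta = np.empty((n, n))
    for j in range(n):
        for l in range(n):
            beta[j, l] = np.sum(weights * param_modes[j] * param_modes[l])
    return beta
```

Because the parametric modes are one-dimensional arrays, the whole coefficient table is assembled once per enrichment at negligible cost.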
\subsubsection{\texttt{PGD-SA}: the parametric iteration}
\label{sc:PGD-SA-param}
The parametric iteration for the turbulence model is devised by fixing the newly computed eddy viscosity spatial mode $\sigmaNU^m f_{\! \nu}^m {\gets} \sigmaNU^m [f_{\! \nu}^m {+} \Def_{\! \nu}]$ and restricting the high-dimensional SA equation to the parametric direction $\bm{\mathcal{I}}$.
This yields the algebraic equation
\begin{equation}\label{eq:PGD-SA-param}
\begin{aligned}
(& -b_4 + b_6 \Smu{f_w^m} )(\varDelta\psi)^2 \\
&+ \left( - b_2 \zeta - b_5 \Smu{\widetilde{S}^m} + \sum_{j=1}^n b_1^j \phi^j - \sum_{j=1}^m (b_3^j + b_7^j \Smu{f_w^m}) \psi^j \right) \varDelta\psi
= r_{\nu}^m ,
\end{aligned}
\end{equation}
in which the unknown is the parametric increment $\varDelta\psi$, discretised in the nodes of the interval $\bm{\mathcal{I}}$ according to a collocation method.
The right-hand side of equation~\eqref{eq:PGD-SA-param} features the residual of the SA equation in the parametric space, namely
\begin{equation}\label{eq:SAparamRes}
r_{\nu}^m := \sum_{\ell=1}^m \left( b_9^\ell \zeta + b_{11}^\ell \Smu{\widetilde{S}^m} + \sum_{j=1}^n b_8^{j\ell} \phi^j + \sum_{j=1}^m (b_{10}^{j\ell} - b_{12}^{j\ell} \Smu{f_w^m} ) \psi^j \right) \psi^\ell .
\end{equation}
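At each collocation node of $\bm{\mathcal{I}}$, equation~\eqref{eq:PGD-SA-param} is a scalar quadratic in $\varDelta\psi$. A minimal node-wise solver is sketched below; the arrays \texttt{k2}, \texttt{k1} and \texttt{r} are placeholders for the $b$-coefficient combinations multiplying $(\varDelta\psi)^2$ and $\varDelta\psi$ and for the residual $r_\nu^m$, and, as one possible criterion, the root of smaller magnitude is retained.

```python
import numpy as np

def solve_parametric_step(k2, k1, r):
    """Node-wise solution of k2*dpsi**2 + k1*dpsi = r at each
    collocation node; k2, k1, r are placeholder arrays for the
    coefficient combinations and the residual.  The root of smaller
    magnitude is retained; the linear solution is used where the
    quadratic term vanishes."""
    dpsi = np.empty_like(r)
    for i in range(len(r)):
        if abs(k2[i]) < 1e-14:               # degenerate: linear equation
            dpsi[i] = r[i] / k1[i]
        else:
            disc = np.sqrt(k1[i]**2 + 4.0 * k2[i] * r[i])
            roots = np.array([(-k1[i] + disc) / (2.0 * k2[i]),
                              (-k1[i] - disc) / (2.0 * k2[i])])
            dpsi[i] = roots[np.argmin(np.abs(roots))]
    return dpsi
```

Selecting the small-magnitude root keeps the correction consistent with the assumption that $\varDelta\psi$ is an increment on the current mode.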
As previously observed, $\widetilde{S}^m$ and $f_w^m$ are evaluated using the $m$ previously computed modes of $\nuTpgd^m$. Moreover, the functions $\widetilde{S}(\bm{x},\bm{\mu})$ and $f_w(\bm{x},\bm{\mu})$ are not separable analytically. Following remark~\ref{rmrk:nonSeparable}, $\Smu{\circledcirc}$ denotes the parametric mode of the non-separable function $\circledcirc$, obtained either via a high-order reconstruction in $\Omega \times \bm{\mathcal{I}}$ and consequent interpolation in the parametric space or via the PGD separation described in~\cite{PD-DZGH-18,Diez-DZGH-19}.
The expression of the coefficients $b_j, \ j=1,\ldots,12$ is detailed in appendix~\ref{sc:appCoeff}. As for the parametric step of the flow equations, they only depend on data provided by the user and on spatial functions and they can thus be efficiently precomputed.
\subsubsection{\texttt{PGD-$\nu_t$}: devising a separated turbulent viscosity}
The PGD approximation $\turbNUpgd$ of the turbulent viscosity is obtained by introducing the separated form of the eddy viscosity $\nuTpgd$ in~\eqref{eq:nuT}.
It is worth noticing that the function $f_{v1}$ is not separable analytically. Hence, as detailed in remark~\ref{rmrk:nonSeparable}, a high-dimensional reconstruction in the space $\Omega \times \bm{\mathcal{I}}$ to perform interpolation in $\Omega$ and $\bm{\mathcal{I}}$ separately, or a numerical strategy for PGD separation~\cite{PD-DZGH-18,Diez-DZGH-19} is required. The former strategy is employed in the simulations in section~\ref{sc:simulations}.
For the sake of readability, consider $f_{v1}^m$ obtained using the $m$ previously computed modes of $\nuTpgd^m$ and introduce the approximation
\begin{equation}\label{eq:sepFw1}
f_{v1}^m \simeq \Sx{f_{v1}^m} \Smu{f_{v1}^m} ,
\end{equation}
where $\Sx{f_{v1}^m}$ and $\Smu{f_{v1}^m}$ denote the spatial and parametric modes of the function $f_{v1}^m$, respectively, obtained by means of either the interpolation of its high-dimensional reconstruction or the PGD numerical separation.
The resulting PGD approximation of the turbulent viscosity is
\begin{equation}\label{eq:sep-turbNu}
\turbNUpgd^q(\bm{x},\bm{\mu}) = \turbNUpgd^{q-1}(\bm{x},\bm{\mu}) +\sigmaT^qf_{\! t}^q(\bm{x})\xi^q(\bm{\mu}) ,
\end{equation}
where $f_{\! t}^q$ and $\xi^q$ denote the spatial and parametric modes, respectively.
\begin{remark}
A computationally efficient implementation of~\eqref{eq:nuT} exploiting the separated nature of all the involved variables can be devised in terms of elemental arithmetic operations of separated functions~\cite{Diez-DZGH-19}. More precisely, the spatial modes $f_{\! t}^q$ are obtained from the product of the spatial functions $\Sx{f_{v1}^m}$ and $f_{\! \nu}^m$, whereas the parametric terms $\xi^q$ stem from the product of the parametric modes $\Smu{f_{v1}^m}$ and $\psi^m$.
It is worth noting that the product of separated functions leads to a number of terms in the PGD expansion larger than the number of modes of its factors. Nonetheless, such an operation can be efficiently performed via a separated multiplication~\cite{Diez-DZGH-19} and the result can be compressed to eliminate redundant information~\cite{Modesto-MZH:15,PD-DZGH-18,Diez-DZGH-19}.
\end{remark}
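The separated multiplication and the subsequent compression mentioned in the remark can be sketched as follows. For compactness, the compression here rebuilds the full tensor and truncates its SVD, which is affordable only for this small illustration; the cited references avoid the full reconstruction.

```python
import numpy as np

def separated_product(F1, P1, F2, P2):
    """Pointwise product of two separated expansions: every pair of
    terms multiplies, so p terms times q terms yields p*q terms."""
    F = [f1 * f2 for f1 in F1 for f2 in F2]
    P = [p1 * p2 for p1 in P1 for p2 in P2]
    return F, P

def compress(F, P, rank):
    """Compression of a separated expansion (sketch): rebuild the full
    tensor and truncate its SVD to drop redundant terms."""
    G = sum(np.outer(f, p) for f, p in zip(F, P))
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    return [s[k] * U[:, k] for k in range(rank)], [Vt[k] for k in range(rank)]
```

This is exactly the pattern used to assemble $f_{\! t}^q$ and $\xi^q$ from the factors $\Sx{f_{v1}^m} f_{\! \nu}^m$ and $\Smu{f_{v1}^m} \psi^m$.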
\subsection{A minimally intrusive PGD implementation of a parametric solver for turbulent Navier-Stokes flows in OpenFOAM}
\label{sc:PGD-RANS-SA}
In sections~\ref{sc:PGD-RANS} and~\ref{sc:PGD-SA}, the PGD formulations of the Navier-Stokes equations and the SA turbulence model have been separately presented, introducing the so-called \texttt{PGD-NS}, \texttt{PGD-SA} and \texttt{PGD-$\nu_t$} algorithms.
In order to devise a minimally intrusive parametric solver for turbulent incompressible Navier-Stokes flows in OpenFOAM, these routines are integrated in a unique framework. More precisely, the PGD algorithms for the flow equations and the turbulence model are coupled by mimicking the segregated structure of \texttt{simpleFoam} with SA turbulence model implemented in OpenFOAM, leading to an overall PGD-ROM strategy able to exploit the capabilities of the validated CFD library.
First, recall that the strategy of the full order solver involves the following steps~\cite{OpenFOAM}:
\begin{enumerate}[label=(\Alph*)]
\item Compute velocity and pressure using a fractional step projection approach to solve~\eqref{eq:weak-RANS} with a user-prescribed value of the turbulent viscosity (RANS solver via \texttt{simpleFoam}).
\item Use the value of the computed velocity to solve~\eqref{eq:weak-SA} and determine the eddy viscosity (SA solver).
\item Update the turbulent viscosity according to~\eqref{eq:nuT}.
\item Recompute velocity and pressure solving~\eqref{eq:weak-RANS} with the newly determined turbulent viscosity (RANS solver via \texttt{simpleFoam}).
\end{enumerate}
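Under hypothetical callable interfaces for the three PGD routines, steps (A)-(D) amount to the following coupling sketch:

```python
def turbulent_pgd_solver(pgd_ns, pgd_sa, pgd_nut, nu_t_init):
    """Coupling of the PGD routines mimicking steps (A)-(D) of the
    segregated full order strategy.  The three callables are
    hypothetical stand-ins for PGD-NS, PGD-SA and PGD-nu_t."""
    u, p = pgd_ns(nu_t_init)    # (A) flow with initial turbulent viscosity
    nu_tilde = pgd_sa(u)        # (B) eddy viscosity from the SA model
    nu_t = pgd_nut(nu_tilde)    # (C) turbulent viscosity update
    u, p = pgd_ns(nu_t)         # (D) recompute flow with updated viscosity
    return u, p, nu_t
```

The actual algorithm repeats this cycle, with the viscosity update triggered adaptively by the amplitude of the computed modes.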
Following this rationale, the corresponding parametric solver is described in algorithm~\ref{alg:PGD-RANS-SA-OF}.
For step (A), the PGD algorithm solves the parametrised flow equations (Algorithm 1 - Step 5) via the non-intrusive \texttt{PGD-NS} strategy. The initial value of the turbulent viscosity utilised in this computation is selected starting from the boundary condition modes of the velocity.
\begin{remark}\label{rmrk:RANS-modification}
It is worth emphasising that the contribution of the turbulent viscosity $\nu_t$ in the Navier-Stokes momentum equation needs to be accounted for in steps (A) and (D), that is, an additional term is introduced in the PGD formulation for the laminar Navier-Stokes equations presented in section~\ref{sc:PGD-RANS}. More precisely, given the PGD approximation $\turbNUpgd^q$ of the turbulent viscosity obtained from the \texttt{PGD-$\nu_t$} routine, the term
\begin{equation}\label{eq:PGD-NS-turb-viscous-term}
-\int_{\bm{\mathcal{I}}} \int_{V_i}{\!\! \bm{\nabla} {\cdot} ((\nu + \turbNUpgd^q) \bm{\nabla} \upgd^n ) dV \, d\bm{\mathcal{I}}}
\end{equation}
leads to the contribution
\begin{equation}\label{eq:PGD-NS-turb-LHS-spatial}
- \int_{V_i}{\!\! \bm{\nabla} {\cdot} \Big(\!\! \Big(\alpha_1 D + \sum_{j=1}^q\alpha_7^j \sigmaT^jf_{\! t}^j \Big)\bm{\nabla} (\sigmaU^n \varDelta\bm{f}_{\!\! U}) \! \Big) dV}
\end{equation}
on the left-hand side of the spatial equation~\eqref{eq:PGD-NS-spatial}, whereas the corresponding term in the spatial residual~\eqref{eq:NSspatialResU} is given by
\begin{equation}\label{eq:PGD-NS-turb-RHS-spatial}
\begin{aligned}
\int_{\bm{\mathcal{I}}} \phi^n & \int_{V_i}{\!\! \bm{\nabla} {\cdot} \left( (\nu + \turbNUpgd^q) \bm{\nabla} (\upgd^{n-1} + \sigmaU^n\bm{f}_{\!\! U}^n \phi^n) \right) dV \, d\bm{\mathcal{I}}} \\
&= \sum_{\ell=1}^n \int_{V_i}{\!\! \bm{\nabla} {\cdot} \Big(\!\! \Big(\alpha_5^\ell D + \sum_{j=1}^q\alpha_8^{j\ell} \sigmaT^jf_{\! t}^j\Big) \bm{\nabla} (\sigmaU^\ell\bm{f}_{\!\! U}^\ell ) \! \Big) \, dV} ,
\end{aligned}
\end{equation}
where the newly introduced coefficients $\alpha_j, \ j{=}7,8$, depending only on parametric functions, are detailed in appendix~\ref{sc:appCoeff}.
The corresponding terms in the parametric problem are
\begin{equation}\label{eq:PGD-NS-turb-LHS-param}
- \left( a_1 \zeta + \sum_{j=1}^q a_8^j \xi^j \right) \varDelta\phi
\end{equation}
on the left-hand side of equation~\eqref{eq:NSparamMatrix} and
\begin{equation}\label{eq:PGD-NS-turb-RHS-param}
\sum_{\ell=1}^n \left( a_5^\ell \zeta + \sum_{j=1}^q a_9^{j\ell} \xi^j \right) \phi^\ell
\end{equation}
in the definition of $r_U^n$ in~\eqref{eq:NSparamRes}, $a_j, \ j{=}8,9$ being two coefficients depending solely on spatial functions, see appendix~\ref{sc:appCoeff}.
\end{remark}
The PGD enrichment procedure for the RANS equations continues by alternately solving the spatial and parametric problems with the above modifications, until a user-prescribed threshold is achieved by the amplitude of the computed velocity and pressure modes.
Once a \emph{sufficiently accurate} PGD approximation of velocity and pressure is obtained, steps (B) and (C) compute separated representations of the eddy and turbulent viscosities, respectively, by means of the minimally intrusive \texttt{PGD-SA} (Algorithm 1 - Step 9) and \texttt{PGD-$\nu_t$} (Algorithm 1 - Step 12) routines.
Finally, the PGD approximation of the flow equations is reset and a new PGD enrichment is performed using the newly computed PGD approximation of the turbulent viscosity (Step (D)) via the non-intrusive \texttt{PGD-NS} algorithm.
\begin{algorithm}
\caption{An OpenFOAM implementation of turbulent \texttt{pgdFoam}}\label{alg:PGD-RANS-SA-OF}
\begin{algorithmic}[1]
\REQUIRE{Stopping criteria $\eta_\diamond^\star$ and $\eta_\nu^\star$ for the PGD enrichment of the flow equations and the turbulence model. $\diamond=U,P$}
\STATE{Compute boundary condition modes: the spatial mode is solution of~\eqref{eq:weak-RANS} using \texttt{simpleFoam} with SA turbulence model and the parametric modes are set equal to $1$.}
\STATE{Set $n \gets 1$, $i \gets 0$ and initialise the amplitudes $\sigma_\diamond^1 \gets 1$.}
\WHILE{$\sigma_\diamond^n > \eta_\diamond^\star\,\sigma_\diamond^1$}
\STATE{Set the enrichment threshold for turbulent viscosity $\eta_t^i$.}
\STATE{Compute the spatial, $(\sigmaU^n\bm{f}_{\!\! U}^n,\sigmaP^nf_{\!\! P}^n)$, and parametric, $\phi^n$, modes of velocity and pressure using \texttt{PGD-NS}.}
\IF{$\sigma_\diamond^n < \eta_t^i$}
\STATE{Set $m \gets 1$ and initialise the amplitude $\sigma_\nu^1 \gets 1$.}
\WHILE{$\sigma_\nu^m > \eta_\nu^\star\,\sigma_\nu^1$}
\STATE{Compute the spatial, $\sigmaNU^mf_{\! \nu}^m$, and parametric, $\psi^m$, modes of the eddy viscosity using \texttt{PGD-SA}.}
\STATE{Update the mode counter: $m \gets m+1$.}
\ENDWHILE
\STATE{Compute the spatial, $\sigmaT^qf_{\! t}^q$, and parametric, $\xi^q$, modes of the turbulent viscosity using \texttt{PGD-$\nu_t$}.}
\STATE{Increment viscosity update counter: $i \gets i+1$.}
\STATE{Reinitialise the mode counter: $n \gets 0$.}
\ENDIF
\STATE{Update the mode counter: $n \gets n+1$.}
\ENDWHILE
\end{algorithmic}
\end{algorithm}
\begin{remark}\label{rmrk:turbTolerance}
The overall cost of the parametric solver for the turbulent Navier-Stokes equations depends on the number of updates of the turbulent viscosity.
Effective numerical strategies are devised by tuning the accuracy $\eta_\diamond^\star$ required to stop the enrichment of the velocity and pressure approximations and its relationship with the threshold value $\eta_t^i$ determining the number of updates of the turbulent viscosity.
More precisely, the threshold value $\eta_t^i$ is decreased after each update of the turbulent viscosity to improve the accuracy of the velocity and pressure modes computed by means of \texttt{PGD-NS}.
In the simulations in section~\ref{sc:simulations}, the threshold for the $i$-th iteration is
\begin{equation}\label{eq:criterionTurbUpdate}
\eta_t^i = 10^{-(i+\gamma)} ,
\end{equation}
that is, starting from an initial accuracy of $10^{-\gamma}$, an exponentially decreasing tolerance is defined for each new update of the turbulent viscosity.
An alternative approach to control the accuracy of the separated representation of eddy and turbulent viscosities may be devised by modifying step 6 of algorithm~\ref{alg:PGD-RANS-SA-OF} and fixing \emph{a priori} the number of modes in the PGD approximation of the velocity field required to run the \texttt{PGD-SA} and \texttt{PGD-$\nu_t$} routines.
\end{remark}
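The update criterion~\eqref{eq:criterionTurbUpdate} translates directly into code:

```python
def eta_t(i, gamma=1):
    """Threshold for the i-th turbulent viscosity update,
    eta_t^i = 10**-(i + gamma), following the update criterion."""
    return 10.0 ** -(i + gamma)
```

With $\gamma = 1$, as in the simulations below, the first update is triggered at a relative amplitude of $10^{-1}$ and each subsequent update at one order of magnitude less.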
\section{Numerical experiments}
\label{sc:simulations}
In this section, numerical simulations of the NASA wall-mounted hump are presented to demonstrate the potential of the proposed methodology. This problem is a quasi-two-dimensional NASA benchmark devised to validate turbulence modelling, starting from an experimental set-up. The domain consists of a Glauert-Goldschmied type body mounted on a splitter plate between two endplates, see figure~\ref{fig:hump_model}. Following~\cite{Seifert-Pack:Hump-FlowControl,Moin-06}, the characteristic length of the problem is set equal to the chord length of the hump $c {=} \SI{0.42}{m}$, whereas its maximum height is $\SI{0.0537}{m}$ and its span is $\SI{0.5842}{m}$. Flow separation along the hump is controlled via a suction jet acting through an opening located at $65\%$ of the chord $c$, as detailed in figure~\ref{fig:hump_slot}. In the experimental set-up the opening is connected to a plenum, on the bottom of which suction pumps are installed; for the numerical simulations in the following sections, the plenum is removed and the suction effect is imposed as a boundary condition on the opening via a mass flow rate of $1.518\times 10^{-2}\SI{}{kg/s}$ for the jet.
\begin{figure}[ht]
\centering
\subfigure[Experimental set-up]{\includegraphics[width=0.49\textwidth]{3dBody}\label{fig:hump_model}}
\subfigure[Jet location]{\includegraphics[width=0.49\textwidth]{slotLocation}\label{fig:hump_slot}}
\caption{NASA wall-mounted hump: (a) representation of the experimental set-up and (b) 2D section of the hump, location of the jet (blue rectangle) and detail of the jet opening. Source: {\color{blue}\texttt{https://cfdval2004.larc.nasa.gov/case3.html}}}
\label{fig:hump_model-hump_slot}
\end{figure}
In the analysis of this problem, the quantity of interest is represented by the effect of the suction jet on the flow separation and on the position of the reattachment point. Experimental and numerical studies~\cite{Greenblatt-PYHSW:Hump,Rumsey:Hump-RANS} verified the quasi-two-dimensional nature of the phenomena, identifying minor three-dimensional effects located near the endplates.
Henceforth, the PGD results will be compared to the full order OpenFOAM approximation, considered as \emph{reference solution}.
The NASA wall-mounted hump problem being quasi-two-dimensional, in the following sections both the 2D and the 3D cases are studied. A parametric flow control problem is devised by varying the maximum amplitude of the suction jet between $10\%$ and $100\%$ of the magnitude of a peak velocity $\hat{U}$. In two dimensions, a sinusoidal velocity profile is defined as
\begin{equation}\label{eq:Hump-jetProfile}
\bm{U}^{\text{jet}}_{\!\! \hat{y}} = \mu \frac{\hat{U}}{2}(1-\cos(2\pi \hat{x})) ,
\end{equation}
where $\hat{x}$ is the normalised curvilinear abscissa of the jet patch, that is $\hat{x} \in \left[0 , 1\right]$, and the resulting jet is pointing in the direction $\hat{y}$ orthogonal to the boundary.
Similarly, in the 3D case the jet defined on the plane $(\hat{x},\hat{z})$ and pointing in the orthogonal direction $\hat{y}$ is
\begin{equation}\label{eq:Hump-jetProfile-3D}
\bm{U}^{\text{jet}}_{\!\! \hat{y}} = \mu \frac{\hat{U}}{4}(1-\cos(2\pi \hat{x}))(1-\cos(2\pi \hat{z})),
\end{equation}
where the normalised coordinate $\hat{z}$ is
\begin{equation}\label{eq:Hump-hatZ}
\hat{z}=
\begin{cases}
0 & \text{ for } z < 0.4 c \\
\displaystyle\frac{5z-2c}{c} & \text{ for } 0.4 c \leq z \leq 0.6 c \\
0 & \text{ for } z > 0.6 c .
\end{cases}
\end{equation}
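The two jet profiles and the normalised spanwise coordinate can be written compactly as follows; the symbols mirror~\eqref{eq:Hump-jetProfile}-\eqref{eq:Hump-hatZ}.

```python
import numpy as np

def jet_2d(mu, x_hat, U_hat):
    """2D sinusoidal suction-jet profile, normal to the boundary;
    x_hat in [0, 1] is the normalised abscissa of the jet patch."""
    return mu * U_hat / 2.0 * (1.0 - np.cos(2.0 * np.pi * x_hat))

def z_hat(z, c):
    """Normalised spanwise coordinate of the 3D jet profile:
    0 outside [0.4c, 0.6c], linear ramp from 0 to 1 inside."""
    if z < 0.4 * c or z > 0.6 * c:
        return 0.0
    return (5.0 * z - 2.0 * c) / c

def jet_3d(mu, x_hat, z, c, U_hat):
    """3D jet profile: the 2D shape modulated in the spanwise direction."""
    return (mu * U_hat / 4.0 * (1.0 - np.cos(2.0 * np.pi * x_hat))
            * (1.0 - np.cos(2.0 * np.pi * z_hat(z, c))))
```

Both profiles peak at $\mu\hat{U}$ in the centre of the patch and vanish smoothly on its edges.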
It is worth noting that the magnitude of the peak velocity $\hat{U}$ is selected such that the ratio between the mass flow rate of the jet and of the inlet is $1.5 \times 10^{-3}$, reproducing the value in the experimental set-up of the NASA wall-mounted hump with a plenum~\cite{Seifert-Pack:Hump-FlowControl}. In addition, both in~\eqref{eq:Hump-jetProfile} and~\eqref{eq:Hump-jetProfile-3D}, the interval of the parametric variable is defined as $\mathcal{I} {=} [0.1,\,1]$.
\subsection{Two-dimensional NASA wall-mounted hump with parametrised jet}
\label{sc:simulation-2D}
The computational domain for the two-dimensional NASA wall-mounted hump is a channel of height $c$, extending $6.39 c$ upstream and $5 c$ downstream as displayed in figure~\ref{fig:hump2D}. The resulting mesh consists of $114,000$ cells.
Homogeneous Dirichlet boundary conditions are imposed on both the velocity and the eddy viscosity on the bottom wall and on the hump. A symmetry condition is imposed on the top wall, whereas on the outlet free traction is enforced. At the inlet, a parabolic profile is imposed for both velocity and eddy viscosity. More precisely, the variables range between a null value on the bottom wall and a maximum value at $y {=} \SI{0.02}{m}$. For the velocity, the peak value is $\SI{34.6}{m/s}$, whereas the free-stream eddy viscosity $\widetilde{\nu} {=} 3\nu$ is selected. The kinematic viscosity is $\nu {=} 1.55274\times 10^{-5}\SI{}{m^2/s}$, thus the resulting Reynolds number is approximately $\text{Re} {=} 936,000$. On the jet patch, the velocity profile~\eqref{eq:Hump-jetProfile} with $\hat{U} {=} \SI{23.4}{m/s}$ is imposed and a homogeneous Neumann condition is considered for the eddy viscosity.
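As a quick sanity check on the quoted figures, $\text{Re} = U c / \nu$ with the stated peak velocity, chord length and kinematic viscosity indeed gives approximately $936{,}000$:

```python
# Sanity check on the quoted 2D set-up: Re = U c / nu with the stated
# peak inlet velocity, chord length and kinematic viscosity.
U_peak = 34.6          # m/s
c = 0.42               # m
nu = 1.55274e-5        # m^2/s
Re = U_peak * c / nu   # approximately 936,000
```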
\begin{figure
\centering
\includegraphics[width=0.8\textwidth]{domain2D}
\caption{Computational domain for the two-dimensional NASA wall-mounted hump.}
\label{fig:hump2D}
\end{figure}
It is worth noting that on the hump the mesh is such that $y^+<1$, whence no wall treatment is required for the turbulent viscosity.
The boundary conditions are imposed using two modes computed via \texttt{simpleFoam} with SA turbulence model. The first mode is a full order solution with the jet acting at $100\%$ of the mass flow rate ($\mu {=} 1$) and the second one is associated with the jet acting at $10\%$, that is $\mu {=} 0.1$. The corresponding parametric modes are
\begin{equation}\label{eq:parModesHump}
\phi^1(\mu)=\frac{10}{9}(1-\mu) \ \text{ and } \ \phi^2(\mu)= 1 - \phi^1(\mu),
\end{equation}
respectively.
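The two parametric modes in~\eqref{eq:parModesHump} form a partition of unity on $\mathcal{I}$, so the boundary data are linearly blended between the two full order snapshots:

```python
def phi1(mu):
    """First parametric boundary-condition mode, phi1 = 10/9 (1 - mu)."""
    return 10.0 / 9.0 * (1.0 - mu)

def phi2(mu):
    """Second parametric boundary-condition mode, phi2 = 1 - phi1."""
    return 1.0 - phi1(mu)
```

At the endpoints $\mu = 0.1$ and $\mu = 1$ one mode takes the value $1$ and the other $0$, so each snapshot is recovered exactly.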
The tolerance for the enrichment of the flow variables is set to $\eta_u^\star {=} \eta_p^\star {=} 10^{-4}$, whereas the tolerance for the turbulence model is selected as $\eta_{\nu}^\star {=} 10^{-2}$. The criterion to update the turbulent viscosity is detailed in remark~\ref{rmrk:turbTolerance}, with $\gamma {=} 1$.
The turbulent \texttt{pgdFoam} algorithm achieves convergence with eight velocity-pressure modes computed using \texttt{PGD-NS} and three corrections by means of \texttt{PGD-SA} and \texttt{PGD-$\nu_t$}. Each \texttt{PGD-SA} loop reached the prescribed tolerance within three computed modes. The relative amplitude of the computed modes, as the turbulent viscosity is updated, is reported in figure~\ref{fig:hump-ampl}.
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{amplitudeHumpJets2D}
\caption{Relative amplitude of the computed velocity-pressure modes as the turbulent viscosity is updated.}
\label{fig:hump-ampl}
\end{figure}
It is worth recalling that each time the relative amplitude of the modes drops by one order of magnitude, the separated representation of the turbulent viscosity is updated via the \texttt{PGD-SA} and \texttt{PGD-$\nu_t$} routines (see remark~\ref{rmrk:turbTolerance}) and the PGD approximation for velocity and pressure is recomputed using the updated turbulent viscosity.
The importance of updating the PGD approximation of the turbulent viscosity to correctly compute the turbulent velocity and pressure fields is displayed in figure~\ref{fig:hump-L2-2D-PGD-SA-comparison}. Considering the result of the full order \texttt{simpleFoam} with SA turbulence model for $\mu {=} 0.5$ as a reference solution, figure~\ref{fig:hump-L2-2D-PGD-SA-comparison} compares the relative $\ensuremath{\mathcal{L}_2}(\Omega)$ error of the PGD approximation computed via the \texttt{PGD-NS}, \texttt{PGD-SA} and \texttt{PGD-$\nu_t$} strategy described in algorithm~\ref{alg:PGD-RANS-SA-OF} with the one obtained by omitting the turbulent viscosity update. Without the \texttt{PGD-SA} and \texttt{PGD-$\nu_t$} routines, the error of both velocity and pressure approximations stagnates from the first computed mode and the overall value is one order of magnitude larger than the one achieved by the methodology in algorithm~\ref{alg:PGD-RANS-SA-OF}.
\begin{figure}[ht]
\centering
\subfigure[Influence of turbulent viscosity update]{\includegraphics[width=0.49\textwidth]{errorHumpPGD_SA_vs_INTERP_2D}\label{fig:hump-L2-2D-PGD-SA-comparison}}
\subfigure[Relative $\ensuremath{\mathcal{L}_2}(\Omega)$ error]{\includegraphics[width=0.49\textwidth]{errorHumpJets2D}\label{fig:hump-L2-2D}}
\caption{Accuracy of the PGD approximations of velocity and pressure with respect to the full order solutions as a function of the number of modes. (a) Influence of turbulent viscosity update for the case of $\mu {=} 0.5$. (b) Relative $\ensuremath{\mathcal{L}_2}(\Omega)$ error for different values of $\mu$. The vertical line separates the two boundary condition modes from the computed modes.}
\label{fig:hump-L2}
\end{figure}
Figure~\ref{fig:hump-L2-2D} reports the relative $\ensuremath{\mathcal{L}_2}(\Omega)$ error of the PGD approximation with turbulent viscosity update for three configurations, that is $\mu {=} 0.25$, $\mu {=} 0.5$ and $\mu {=} 0.75$. The results clearly display that the PGD approximation achieves comparable accuracy throughout the parametric interval $\mathcal{I}$ using two boundary condition modes and three computed modes. The following modes only introduce minor corrections to the solution as identified by their corresponding amplitudes, see figure~\ref{fig:hump-ampl}.
As mentioned in the problem statement, the quantities of interest in this study are the position of the reattachment point and the effect of the suction jet on the recirculation bubble.
Figure~\ref{fig:hump-velocity-2D} displays the velocity field after the hump and the recirculation bubble for three values of the parameter $\mu$. The influence of the jet in reducing the flow separation and moving the reattachment point towards the hump is well captured by the PGD approximation which is in excellent agreement with the full order solution.
\begin{figure}[ht]
\centering
\begin{tabular}[c]{@{}c@{}c@{ }c@{ }c@{ }}
$\upgd$ &
\parbox[c]{0.3\textwidth}{\includegraphics[width=0.3\textwidth]{U_attach_stream_PGD_0_25}} &
\parbox[c]{0.3\textwidth}{\includegraphics[width=0.3\textwidth]{U_attach_stream_PGD_0_5}} &
\parbox[c]{0.3\textwidth}{\includegraphics[width=0.3\textwidth]{U_attach_stream_PGD_0_75}} \\[2em]
$\uref$ &
\parbox[c]{0.3\textwidth}{\includegraphics[width=0.3\textwidth]{U_attach_stream_bench_0_25}} &
\parbox[c]{0.3\textwidth}{\includegraphics[width=0.3\textwidth]{U_attach_stream_bench_0_5}} &
\parbox[c]{0.3\textwidth}{\includegraphics[width=0.3\textwidth]{U_attach_stream_bench_0_75}} \\
& $\mu{=}0.25$ & $\mu{=}0.5$ & $\mu{=}0.75$ \\
\end{tabular}
%
\includegraphics[width=0.6\textwidth]{U_legend}
\caption{Comparison of the PGD approximation (top) and the full order solution (bottom) of the recirculation bubble for $\mu{=}0.25$, $\mu{=}0.5$ and $\mu{=}0.75$. The vertical line denotes the position of the reattachment point.}
\label{fig:hump-velocity-2D}
\end{figure}
Qualitative comparisons of the pressure field and the turbulent viscosity for different values of the parameter $\mu$ are presented in figures~\ref{fig:hump-pressure-2D} and~\ref{fig:hump-viscosity-2D}, respectively. Using eight computed modes, the PGD approximation is able to accurately capture localised variations in the flow pattern throughout the interval $\mathcal{I}$.
\begin{figure}[ht]
\centering
\begin{tabular}[c]{@{}c@{}c@{ }c@{ }c@{ }}
$\ppgd$ &
\parbox[c]{0.3\textwidth}{\includegraphics[width=0.3\textwidth]{pISOS_PGD_25}} &
\parbox[c]{0.3\textwidth}{\includegraphics[width=0.3\textwidth]{pISOS_PGD_50}} &
\parbox[c]{0.3\textwidth}{\includegraphics[width=0.3\textwidth]{pISOS_PGD_75}} \\[2em]
$\pref$ &
\parbox[c]{0.3\textwidth}{\includegraphics[width=0.3\textwidth]{pISOS_bench_25}} &
\parbox[c]{0.3\textwidth}{\includegraphics[width=0.3\textwidth]{pISOS_bench_50}} &
\parbox[c]{0.3\textwidth}{\includegraphics[width=0.3\textwidth]{pISOS_bench_75}} \\
& $\mu{=}0.25$ & $\mu{=}0.5$ & $\mu{=}0.75$ \\
\end{tabular}
%
\includegraphics[width=0.6\textwidth]{p_legend}
\caption{Comparison of the PGD approximation (top) and the full order solution (bottom) of the pressure field after the hump for $\mu{=}0.25$, $\mu{=}0.5$ and $\mu{=}0.75$.}
\label{fig:hump-pressure-2D}
\end{figure}
\begin{figure}[ht]
\centering
\begin{tabular}[c]{@{}c@{}c@{ }c@{ }c@{ }}
$\nu_{t_{\texttt{PGD}}}$ &
\parbox[c]{0.3\textwidth}{\includegraphics[width=0.3\textwidth]{nut_PGD_0_25}} &
\parbox[c]{0.3\textwidth}{\includegraphics[width=0.3\textwidth]{nut_PGD_0_5}} &
\parbox[c]{0.3\textwidth}{\includegraphics[width=0.3\textwidth]{nut_PGD_0_75}} \\[2em]
$\nu_{t_{\texttt{REF}}}$ &
\parbox[c]{0.3\textwidth}{\includegraphics[width=0.3\textwidth]{nut_bench_0_25}} &
\parbox[c]{0.3\textwidth}{\includegraphics[width=0.3\textwidth]{nut_bench_0_5}} &
\parbox[c]{0.3\textwidth}{\includegraphics[width=0.3\textwidth]{nut_bench_0_75}} \\
& $\mu{=}0.25$ & $\mu{=}0.5$ & $\mu{=}0.75$ \\
\end{tabular}
%
\includegraphics[width=0.6\textwidth]{nut_legend}
\caption{Comparison of the PGD approximation (top) and the full order solution (bottom) of the turbulent viscosity after the hump for $\mu{=}0.25$, $\mu{=}0.5$ and $\mu{=}0.75$.}
\label{fig:hump-viscosity-2D}
\end{figure}
In addition, table~\ref{tab:reattach-2D} confirms the accuracy of the PGD approximation by quantitatively comparing the estimated location of the reattachment point with the position computed using the full order \texttt{simpleFoam} solver with SA turbulence model. The reduced model with eight computed modes provides errors below $0.2\%$.
\begin{table}[ht]
\begin{center}
\begin{tabular}{|c||c|c|c|}
\hline
$ \mu $ & $0.25$ & $0.50$ & $0.75$ \\
\hline
PGD & $1.183 c$ & $1.156 c$ & $1.129 c$ \\
full order & $1.184 c$ & $1.154 c$ & $1.131 c$ \\
\hline
Relative error & $0.84 \times 10^{-3}$ & $0.17 \times 10^{-2}$ & $0.17 \times 10^{-2}$ \\
\hline
\end{tabular}
\caption{2D NASA wall-mounted hump: position of the reattachment point computed using the PGD approximation and the full order solver for different values of $\mu$.}
\label{tab:reattach-2D}
\end{center}
\end{table}
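The relative errors in table~\ref{tab:reattach-2D} follow directly from the tabulated reattachment positions. The following minimal sketch (illustrative only, with the positions in chord lengths $c$ copied from the table) reproduces them:

```python
# Illustrative sketch: reproduce the relative errors of the 2D reattachment
# table from the positions (in chord lengths c) for mu = 0.25, 0.50, 0.75.
pgd = [1.183, 1.156, 1.129]  # PGD approximation
ref = [1.184, 1.154, 1.131]  # full order simpleFoam solution
rel_err = [abs(p - r) / r for p, r in zip(pgd, ref)]
for mu, e in zip([0.25, 0.50, 0.75], rel_err):
    print(f"mu = {mu:.2f}: relative error = {e:.2e}")
```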
Finally, figures~\ref{fig:hump-Cf-2D} and~\ref{fig:hump-Cp-2D} report the skin friction coefficient and the pressure coefficient, respectively. Both figures focus on the region downstream of the jet and compare the PGD approximation using different numbers of modes with the full order solution. More precisely, the PGD approximations based solely on the boundary condition modes, i.e. $n {=} 2$, provide results qualitatively comparable with the full order solution, whereas excellent agreement is achieved using eight computed modes ($n {=} 10$).
\begin{figure}[ht]
\centering
\subfigure[Skin friction coefficient]{\includegraphics[width=0.49\textwidth]{Cf_hump_2D_scaled}\label{fig:hump-Cf-2D}}
\subfigure[Pressure coefficient]{\includegraphics[width=0.49\textwidth]{Cp_hump_2D_scaled}\label{fig:hump-Cp-2D}}
\caption{Comparison of (a) the skin friction coefficient and (b) the pressure coefficient of the full order solution and the PGD approximation, for different number of PGD modes and for the three values of the parameter $\mu$.}
\label{fig:hump-Cf-Cp}
\end{figure}
\subsection{Three-dimensional NASA wall-mounted hump with parametrised jet}
\label{sc:simulation-3D}
The computational domain for the three-dimensional problem, see figure~\ref{fig:hump-domain-3D}, is obtained by extruding the 2D domain described in the previous section in the $z$ direction by $0.8$ chord lengths.
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{domain3Dtransparent}
\caption{Computational domain for the three-dimensional NASA wall-mounted hump.}
\label{fig:hump-domain-3D}
\end{figure}
The problem inherits the set of boundary conditions utilised in the 2D case. On the additional external surfaces, slip boundary conditions are imposed. The peak value of the inlet velocity is set to $\SI{3.46}{m/s}$ and the profile of the jet suction is defined as in~\eqref{eq:Hump-jetProfile-3D} with $\hat{U} {=} \SI{2.34}{m/s}$. Given the kinematic viscosity $\nu {=} \SI{1.55274e-5}{m^2/s}$, the Reynolds number for the 3D case is $\text{Re} {=} 93,600$. The computational mesh consists of $2.34$ million cells.
Similarly to the two-dimensional case, the boundary conditions are enforced using the two parametric modes in~\eqref{eq:parModesHump} and two spatial modes corresponding to the \texttt{simpleFoam} solutions with SA turbulence model for $\mu {=} 0.1$ and $\mu {=} 1$.
The values $\eta_u^\star {=} \eta_p^\star {=} 0.5 \times 10^{-3}$ and $\eta_{\nu}^\star {=} 10^{-2}$ are considered for the tolerance of the enrichment loops of the flow variables and the turbulent viscosity, respectively. To reduce the overall cost of the \texttt{PGD-NS}, \texttt{PGD-SA} and \texttt{PGD-$\nu_t$} procedures, the number of turbulent viscosity updates is reduced by considering a lower initial tolerance in criterion~\eqref{eq:criterionTurbUpdate}, namely $\gamma {=} 2$.
Algorithm~\ref{alg:PGD-RANS-SA-OF} achieves convergence with four modes computed by the \texttt{PGD-NS} routine and two \texttt{PGD-SA} and \texttt{PGD-$\nu_t$} corrections. Each \texttt{PGD-SA} loop reached the prescribed tolerance within two computed modes. The PGD approximation is then compared with the corresponding full order solution provided by \texttt{simpleFoam} with the SA turbulence model: the relative $\ensuremath{\mathcal{L}_2}(\Omega)$ error for $\mu {=} 0.25$, $\mu {=} 0.5$ and $\mu {=} 0.75$ is displayed in figure~\ref{fig:hump-L2-3D}, showing that the reduced order model provides errors in velocity and pressure below $0.1 \%$ and $0.5 \%$, respectively.
\begin{figure}[ht]
\centering
\includegraphics[width=0.6\textwidth]{errorHumpJets3D}
\caption{Relative $\ensuremath{\mathcal{L}_2}(\Omega)$ error of the PGD approximations of velocity and pressure with respect to the full order solutions as a function of the number of modes for different values of $\mu$. The vertical line separates the two boundary condition modes and the computed modes.}
\label{fig:hump-L2-3D}
\end{figure}
The effect of the suction jet on flow recirculation is displayed in figure~\ref{fig:hump-Cf-3D}, where a top view of the wall shear stress on the bottom wall is reported starting from the jet patch up to $1.6 c$ downstream. A qualitative comparison between the reduced order and the full order solution confirms the ability of the PGD to accurately reproduce the turbulent flow in the entire range of values $\mathcal{I}$ of the parameter.
\begin{figure}[ht]
\centering
\begin{tabular}[c]{@{}c@{}c@{ }c@{ }c@{ }}
$\bm{\tau}_{\!\! w_{\texttt{PGD}}}$ &
\parbox[c]{0.3\textwidth}{\includegraphics[width=0.3\textwidth]{TOPVIEW_wallShearStress_PGD_25}} &
\parbox[c]{0.3\textwidth}{\includegraphics[width=0.3\textwidth]{TOPVIEW_wallShearStress_PGD_50}} &
\parbox[c]{0.3\textwidth}{\includegraphics[width=0.3\textwidth]{TOPVIEW_wallShearStress_PGD_75}} \\[2em]
$\bm{\tau}_{\!\! w_{\texttt{REF}}}$ &
\parbox[c]{0.3\textwidth}{\includegraphics[width=0.3\textwidth]{TOPVIEW_wallShearStress_bench_25}} &
\parbox[c]{0.3\textwidth}{\includegraphics[width=0.3\textwidth]{TOPVIEW_wallShearStress_bench_50}} &
\parbox[c]{0.3\textwidth}{\includegraphics[width=0.3\textwidth]{TOPVIEW_wallShearStress_bench_75}} \\
& $\mu{=}0.25$ & $\mu{=}0.5$ & $\mu{=}0.75$ \\
\end{tabular}
%
\includegraphics[width=0.6\textwidth]{wallShearStressScale}
\caption{Comparison of the PGD approximation (top) and the full order solution (bottom) of the wall shear stress on the bottom wall for $\mu{=}0.25$, $\mu{=}0.5$ and $\mu{=}0.75$. Detail of the region starting from the jet patch up to $1.6 c$ downstream.}
\label{fig:hump-Cf-3D}
\end{figure}
In addition, figure~\ref{fig:hump-velocity-3D} displays the velocity profile on the hump, computed using the PGD, for different values of the parameter $\mu$, whereas figure~\ref{fig:hump-streamlines-3D} reports the corresponding streamlines. The results show that recirculation is reduced as the suction jet intensity increases and that the PGD captures the vortex structure with accuracy comparable to the full order solution.
\begin{figure}[ht]
\centering
\subfigure[$\mu{=}0.25$]{\includegraphics[width=0.49\textwidth]{velocityPGD_25Z}}
\subfigure[$\mu{=}0.75$]{\includegraphics[width=0.49\textwidth]{velocityPGD_75Z}}
\caption{Comparison of the PGD approximation of the velocity profile on the hump for $\mu{=}0.25$ and $\mu{=}0.75$.}
\label{fig:hump-velocity-3D}
\end{figure}
\begin{figure}[ht]
\centering
\begin{tabular}[c]{@{}c@{}c@{ }c@{ }}
$\upgd$ &
\parbox[c]{0.45\textwidth}{\includegraphics[width=0.45\textwidth]{Umin3D_streamsZ}} &
\parbox[c]{0.45\textwidth}{\includegraphics[width=0.45\textwidth]{Umax3D_streamsZ}} \\[2.5em]
$\uref$ &
\parbox[c]{0.45\textwidth}{\includegraphics[width=0.45\textwidth]{UbenchMin3D_streamsZ}} &
\parbox[c]{0.45\textwidth}{\includegraphics[width=0.45\textwidth]{UbenchMax3D_streamsZ}} \\
& $\mu{=}0.25$ & $\mu{=}0.75$ \\
\end{tabular}
\caption{Detail of the vortex structure in the recirculation region computed using the PGD approximation (top) and the full order solution (bottom) for $\mu{=}0.25$ and $\mu{=}0.75$.}
\label{fig:hump-streamlines-3D}
\end{figure}
Finally, the position of the reattachment point at the location of the peak of the jet profile is reported in table~\ref{tab:reattach-3D} for different values of the parameter $\mu$. The PGD approximation with four computed modes is in excellent agreement with the full order solver, with relative errors below $0.5\%$.
\begin{table}[ht]
\begin{center}
\begin{tabular}{|c||c|c|c|}
\hline
$ \mu $ & $0.25$ & $0.50$ & $0.75$ \\
\hline
PGD & $1.102 c$ & $1.062 c$ & $1.024 c$ \\
full order & $1.103 c$ & $1.059 c$ & $1.019 c$ \\
\hline
Relative error & $0.91 \times 10^{-3}$ & $0.28 \times 10^{-2}$ & $0.49 \times 10^{-2}$ \\
\hline
\end{tabular}
\caption{3D NASA wall-mounted hump: position of the reattachment point computed using the PGD approximation and the full order solver for different values of $\mu$.}
\label{tab:reattach-3D}
\end{center}
\end{table}
\section{Conclusion}
\label{sc:Conclusion}
A PGD strategy to compute parametric solutions of turbulent incompressible flow problems in OpenFOAM has been proposed. The methodology is based on the incompressible Reynolds-averaged Navier-Stokes equations with Spalart-Allmaras turbulence model and mimics the segregated approach implemented in the industrially-validated solver OpenFOAM to devise a minimally intrusive PGD-ROM for convection-dominated flow problems of industrial interest.
First, the velocity and pressure modes are computed using the non-intrusive PGD strategy \texttt{PGD-NS} developed in~\cite{Tsiolakis-TGSOH-20} using a seed value for the turbulent viscosity. The PGD approximation of the velocity is then used to improve the turbulent viscosity representation via the minimally intrusive \texttt{PGD-SA} and \texttt{PGD-$\nu_t$} routines. Finally, the resulting separated turbulent viscosity is utilised to recompute the PGD expansions of velocity and pressure.
The importance of an accurate approximation of the turbulent viscosity has been verified by comparing the solution of the above algorithm with the one computed without solving the SA equation: the latter solution quickly stagnates, providing errors one order of magnitude larger than those of the proposed methodology.
The developed strategy has been validated in two and three spatial dimensions using a benchmark problem of turbulent external flow, the NASA wall-mounted hump, with $\text{Re} {=} 93,600$ and $\text{Re} {=} 936,000$. A flow control problem of industrial interest has been devised by introducing a suction jet on the hump to reduce the recirculation effects.
The proposed PGD-based reduced order model has proved able to compute a reduced basis with no \emph{a priori} knowledge of the solution for convection-dominated viscous incompressible flows, achieving both qualitative and quantitative agreement with the full order solution computed via \texttt{simpleFoam} with SA turbulence model, throughout the interval of the parametric variable.
More precisely, the reduced model provided accurate approximations of the velocity and pressure fields, with relative $\ensuremath{\mathcal{L}_2}$ errors below $0.1 \%$ and $1 \%$, respectively. In addition, it proved able to capture localised flow features and estimate quantities of engineering interest such as the position of the reattachment point, the skin friction coefficient and the pressure coefficient with errors below $0.5 \%$.
\section*{Acknowledgements}
This work was partially supported by the European Union's Horizon 2020 research and innovation programme under the Marie Sk\l odowska-Curie Actions (Grant agreement No. 675919) that financed the Ph.D. fellowship of the first author.
The second, third and last authors were supported by the Spanish Ministry of Economy and Competitiveness (Grant agreement No. DPI2017-85139-C2-2-R).
The second and last authors are also grateful for the financial support provided by the Spanish Ministry of Economy and Competitiveness through the Severo Ochoa programme for centres of excellence in RTD (Grant agreement No. CEX2018-000797-S) and the Generalitat de Catalunya (Grant agreement No. 2017-SGR-1278).
\bibliographystyle{elsarticle-num}
| 27,393 |
\section{Introduction}
Latent space models are popular tools for sampling from high-dimensional distributions. Often, only a small number of latent factors is sufficient to describe data variations. These models exploit the underlying structure of the data and learn explicit representations that are faithful to the data generating factors. Popular latent space models are Variational Auto-Encoders (VAEs)~\cite{kingma_auto_encoding_2013}, Restricted Boltzmann Machines (RBMs)~\cite{salakhutdinov_deep}, Normalizing Flows~\cite{pmlr-v37-rezende15}, and their many variants.
\paragraph{Disentanglement.} In latent variable modelling, one is often interested in modelling the data in terms of \emph{uncorrelated} or \emph{independent} components, yielding a so-called `disentangled' representation~\cite{bengio2013representation}, which is often studied in the context of VAEs. In representation learning, disentanglement corresponds to a decoupling of generating factors. Components corresponding to orthogonal directions in latent space may be interpreted as generating distinct factors in input space, e.g., lighting conditions, style, or color. An illustration of a latent traversal is shown in Figure~\ref{fig:traversal_rkm}, where one observes that only one specific feature of the image changes as one moves along a component in latent space. For instance, in the first row of the right-hand side of Figure~\ref{fig:traversal_rkm}, moving along the first principal component generates images where only the floor color varies while all other features, such as shape, scale, wall color and object color, remain constant. In the second row, only the object scale changes.
An advantage of such a representation is that the different latent units impart more interpretability to the model. Disentangled models are useful for the generation of plausible pseudo-data with certain desirable properties, e.g. generating new car designs with a predefined color or height. One popular variant achieving disentanglement but at the expense of generation quality is $\beta$-VAE~\cite{higgins2017beta}.
\begin{figure}[t]
\centering
\def0.8\textwidth{0.8\textwidth}
\input{drawing.pdf_tex}
\caption{Schematic illustration of $\St$-RKM training problem. The length of the dashed line represents the reconstruction error (see Auto-Encoder term in \eqref{eq:ReducedObjective}) and the length of the vector projecting on hyperplane represents the PCA reconstruction error. After training, the projected points tend to be distributed normally on the hyperplane.}
\label{fig:schematic_diagram}
\end{figure}
\paragraph{Motivation.}
Let $p(\bm{x})$ be the distribution of the data $\bm{x}\in\mathbb{R}^d$ and consider latent vectors $\bm{z}\in \mathbb{R}^\ell$ with the prior distribution $p(\bm{z})$, chosen to be a standard normal. Then, one defines an encoder $q(\bm{z}|\bm{x})$, which can be deterministic or random, e.g.\ given by $ \mathcal{N}(\bm{z}|\bm{\phi_{\theta}}(\bm{x}),\gamma^2\mathbb{I})$, where the mean\footnote{The common definition of a VAE includes another neural network for parametrizing the covariance matrix. To simplify this introductory discussion, this matrix is here chosen as a constant diagonal $\gamma^2\mathbb{I}$.} is given by the neural network $\bm{\phi_{\theta}}$ parametrized by reals $\bm{\theta}$.
A random decoder $p(\bm{x}|\bm{z}) = \mathcal{N}(\bm{x}|\bm{\psi_{\xi}}(\bm{z}),\sigma^2_0\mathbb{I})$ is associated with the decoder neural network $\bm{\psi_{\xi}}$, parametrized by reals $\bm{\xi}$, which maps latent codes to data points.
A VAE is estimated by maximizing the lower bound
\begin{equation}
\mathbb{E}_{\bm{z}\sim q(\bm{z}|\bm{x})}[\log (p(\bm{x}|\bm{z}))]- \beta \KL(q(\bm{z}|\bm{x}),p(\bm{z})) \leq \log p(\bm{x}), \label{eq:ELBO_VAE}
\end{equation}
which is called the Evidence Lower Bound (ELBO) when $\beta = 1$. It has been argued in~\cite{higgins2017beta} that %
larger values of $\beta>1$ promote more disentanglement. In this paper, we attempt to reconcile generation quality with disentanglement.
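To make the bound~\eqref{eq:ELBO_VAE} concrete, the negative $\beta$-ELBO can be sketched numerically for the Gaussian encoder and decoder above. The following is an illustrative sketch, not the implementation used in this paper; \texttt{phi} and \texttt{psi} are placeholders for the encoder and decoder networks.

```python
import numpy as np

# Illustrative sketch of the negative beta-ELBO for a Gaussian encoder
# q(z|x) = N(phi(x), gamma^2 I) and decoder p(x|z) = N(psi(z), sigma0^2 I).
# phi and psi are placeholder callables, not the paper's networks.
def kl_to_std_normal(mean, gamma):
    """KL( N(mean, gamma^2 I) || N(0, I) ) in closed form."""
    return 0.5 * np.sum(gamma**2 + mean**2 - 1.0 - 2.0 * np.log(gamma))

def neg_beta_elbo(x, phi, psi, gamma, sigma0, beta, n_mc=64, seed=0):
    rng = np.random.default_rng(seed)
    mean = phi(x)
    # Monte Carlo estimate of -E_q[log p(x|z)], up to additive constants
    z = mean + gamma * rng.standard_normal((n_mc, mean.size))
    rec = np.mean(np.sum((x - np.array([psi(zi) for zi in z]))**2, axis=1))
    return rec / (2.0 * sigma0**2) + beta * kl_to_std_normal(mean, gamma)
```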
To introduce the model, we first make explicit the connection between $\beta$-VAEs and standard Auto-Encoders (AEs).
Let the dataset be $\{ \bm{x}_{i}\}_{i=1}^{n} \text{~with~}\bm{x}_{i} \in \mathbb{R}^d $.
Let $q(\bm{z}|\bm{x}) = \mathcal{N}(\bm{z}|\bm{\phi_{\theta}}(\bm{x}),\gamma^2\mathbb{I})$ be an encoder, where $\bm{z}\in \mathbb{R}^\ell$. For a fixed $\gamma>0$, the maximization problem \eqref{eq:ELBO_VAE} is then equivalent to the minimization of the `regularized' AE
\begin{equation}
\min_{\bm{\theta}, \bm{\xi}}\frac{1}{n}
\sum_{i=1}^{n}\Big\{ \mathbb{E}_{\bm{\epsilon}}\|\bm{x}_i - \bm{\psi_{\xi}}(\bm{\phi_{\theta}}(\bm{x}_i)+\bm{\epsilon})\|_2^2 +\alpha\|\bm{\phi_{\theta}}(\bm{x}_i)\|_2^2\Big\} ,\label{eq:VAE_AE}
\end{equation}
where $\alpha = \beta\sigma^2_0$, $\bm{\epsilon}\sim\mathcal{N}(0,\gamma^2\mathbb{I})$ and where additive constants depending on $\gamma$ have been omitted. The first term in~\eqref{eq:VAE_AE} might be interpreted as an AE loss whereas the second term can be viewed as a regularization. This `regularized' AE interpretation motivates our method as introduced below.
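The regularized AE objective~\eqref{eq:VAE_AE} can be sketched in a few lines, with the expectation over $\bm{\epsilon}$ estimated by Monte Carlo; \texttt{phi} and \texttt{psi} are placeholder maps, and this is an illustration of the objective, not the training code.

```python
import numpy as np

# Illustrative sketch of the regularized-AE objective: an AE reconstruction
# term with Gaussian noise injected in latent space plus an l2 penalty on
# the code, with alpha = beta * sigma0^2. phi and psi are placeholders.
def regularized_ae_loss(X, phi, psi, alpha, gamma, n_mc=32, seed=0):
    rng = np.random.default_rng(seed)
    total = 0.0
    for x in X:
        code = phi(x)
        eps = gamma * rng.standard_normal((n_mc, code.size))
        rec = np.mean(np.sum((x - np.array([psi(code + e) for e in eps]))**2,
                             axis=1))
        total += rec + alpha * np.sum(code**2)
    return total / len(X)
```

With $\gamma = 0$ the noise vanishes and the loss reduces to a plain AE loss plus the $\ell_2$ code penalty.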
\begin{figure}[t]
\centering
\def0.8\textwidth{0.8\textwidth}
\input{latent.pdf_tex}
\caption{Image by the decoder of the latent space traversal, i.e., $\bm{\psi}_{\bm{\xi}}\left(t \bm{u}_i\right)$ for $t\in [a,b]$ with $a<b$ and for some $i\in\{1,\dots, m\}$. Green and black dashed lines represent the walks along $\bm{u}_1$ and $\bm{u}_2$, respectively. At every step of the walk, the image of the decoder is computed to generate the data in the input space. The images were generated by $\St$-RKM with $\sigma = 10^{-3}$.}
\label{fig:traversal_rkm}
\end{figure}
\section{Proposed model\label{sec:Proposed}}
The optimization problem proposed here is best understood by considering the analogy between Variational Auto-Encoders and classical Auto-Encoders.
\subsection{Training jointly an auto-encoder and Principal Component Analysis in latent space}
The idea consists of learning an auto-encoder along with finding an `optimal' linear subspace of the latent space so that the variance of the training set in latent space is maximized within this space. See Figure \ref{fig:schematic_diagram} to follow the discussion below. The encoder $\bm{\phi_{\theta}}:\mathbb{R}^d \to \mathbb{R}^\ell$ typically sends input data to a latent space while the decoder $\bm{\psi_{\xi}}: \mathbb{R}^\ell \to \mathbb{R}^d$ goes in the reverse direction, and constitutes an approximate inverse. Both the encoder and decoder are neural networks parametrized by vectors of reals $\bm{\theta}$ and $\bm{\xi}$. However, it is unclear how to define a parametrization or an architecture of these neural networks so that the learned representation is disentangled. Therefore, in addition to these trained parameters, we also jointly find an $m$-dimensional linear subspace $\range(U)$ of the latent space $\mathbb{R}^\ell$, so that the encoded training points mostly lie within this subspace. This linear subspace is given by the span of the orthonormal columns of the $\ell\times m$ matrix
\begin{equation*}
U=\begin{bmatrix}
| & & |\\
\bm{u}_1 & \ldots & \bm{u}_m\\
| & & |
\end{bmatrix}.
\end{equation*}
The set of $\ell\times m$ matrices with orthonormal columns with $\ell\geq m$ defines the Stiefel manifold $\St(\ell,m)$. For a reference on the Stiefel manifold, see~\cite{AbsilBook}.
The idea is then to encode input data into a subspace of the latent space by
\begin{equation*}
\bm{x}\mapsto \mathbb{P}_{U}\bm{\phi_{\theta}}(x)= \bm{u}_1^\top \bm{\phi_{\theta}}(x)\times\begin{bmatrix}
| \\
\bm{u}_1\\
|
\end{bmatrix}+ \dots + \bm{u}_m^\top \bm{\phi_{\theta}}(x) \times\begin{bmatrix}
| \\
\bm{u}_m\\
|
\end{bmatrix} ,
\end{equation*}
where the projector onto $\range(U)$ is simply $\mathbb{P}_{U} = U U^\top$.
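This projection can be sketched numerically. In the following illustrative snippet, \texttt{U} is a random point on $\St(\ell,m)$ obtained by a QR factorization (a stand-in for the trained matrix, chosen only for illustration):

```python
import numpy as np

# Illustrative sketch: a matrix U in St(l, m) from a QR factorization, the
# projector P_U = U U^T, and the split of a latent vector into its
# components in range(U) and the orthogonal complement.
rng = np.random.default_rng(0)
l, m = 8, 3
U, _ = np.linalg.qr(rng.standard_normal((l, m)))  # orthonormal columns
P = U @ U.T                                       # projector onto range(U)
z = rng.standard_normal(l)
z_par = P @ z        # P_U z, component in range(U)
z_perp = z - z_par   # P_{U^perp} z, component in range(U)^perp
```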
The orthonormal vectors $\{\bm{u}_1, \dots, \bm{u}_m\}$ provide directions associated with different generative factors of our generative model; in other words, the principal directions in latent space match orthogonal directions of variation in the data space (see Figure \ref{fig:schematic_diagram}), in the spirit of Principal Component Analysis (PCA). This argument is further motivated in Section~\ref{sec:Disentanglement}. We propose to minimize an objective function which is a \emph{trade-off} between
\begin{itemize}
\item an AE loss which promotes parameters such that $\bm{\psi_{\xi}}(\mathbb{P}_{U}\bm{\phi_{\theta}}(\bm{x}_i))\approx \bm{x}_i$,
\item and, a PCA loss which aims to yield $\mathbb{P}_{U^\perp}\bm{\phi}_{\bm{\theta}}(\bm{x}_i)\approx 0$,
\end{itemize}
for all $i\in [n]$. The parameters trained are the real entries of $\bm{\theta}, \bm{\xi}$ as well as the matrix $U\in \St(\ell,m)$. The model proposed in this paper is
\begin{equation}
\min_{\substack{ U\in \St(\ell,m)\\\bm{\theta}, \bm{\xi}}}\lambda\underbrace{\frac{1}{n}\sum_{i=1}^{n} L_{\bm{\xi},U}\left(\bm{x}_i,\bm{\phi}_{\bm{\theta}}(\bm{x}_i)\right)}_{\text{Auto-Encoder objective}} + \underbrace{\frac{1}{n}\sum_{i=1}^{n}\|\mathbb{P}_{U^\perp} \bm{\phi}_{\bm{\theta}}(\bm{x}_i)\|_2^2}_{\text{PCA objective}},\tag{St-RKM}\label{eq:ReducedObjective}
\end{equation}
where $\lambda>0$ is a trade-off parameter. It is named Stiefel-Restricted Kernel Machines~\eqref{eq:ReducedObjective} in view of Section~\ref{sec:RKM}, where more details about Restricted Kernel Machines are given. However, detailed knowledge of RKMs is not needed to understand the content of this paper. Hereabove, the AE loss can be chosen as
\begin{equation*}
L^{(\sigma)}_{\bm{\xi},U}(\bm{x},\bm{z}) =\mathbb{E}_{\bm{\epsilon}\sim\mathcal{N}(0,\mathbb{I})} \left\|\bm{x} - \bm{\psi}_{\bm{\xi}}\big(\mathbb{P}_U\bm{z}+\sigma U\bm{\epsilon}\big)\right\|_2^2, \text{ with } \sigma>0,%
\end{equation*}
in analogy with the VAE objective~\eqref{eq:VAE_AE}. In Section~\ref{sec:AE_losses}, other AE losses are discussed. The basic idea is to combine in~\eqref{eq:ReducedObjective} different AE losses with a regularization term which penalizes the feature map in the orthogonal subspace $U^\perp$.
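A minimal numerical sketch of the objective~\eqref{eq:ReducedObjective}, with \texttt{phi} and \texttt{psi} standing in for the encoder and decoder networks (illustration only, not the training implementation):

```python
import numpy as np

# Illustrative sketch of the St-RKM objective: the stochastic AE loss
# L^(sigma) plus the PCA term ||P_{U^perp} phi(x)||^2, weighted by lambda.
# phi and psi are placeholder maps.
def st_rkm_objective(X, phi, psi, U, lam, sigma, n_mc=16, seed=0):
    rng = np.random.default_rng(seed)
    P = U @ U.T
    ae = pca = 0.0
    for x in X:
        z = phi(x)
        eps = rng.standard_normal((n_mc, U.shape[1]))
        ae += np.mean(np.sum(
            (x - np.array([psi(P @ z + sigma * U @ e) for e in eps]))**2,
            axis=1))
        pca += np.sum((z - P @ z)**2)      # ||P_{U^perp} phi(x)||^2
    return lam * ae / len(X) + pca / len(X)
```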
The PCA interpretation becomes clear if we introduce the covariance matrix
$$
C_{\bm{\theta}} = \frac{1}{n}\sum_{i=1}^n \bm{\phi}_{\bm{\theta}}(\bm{x}_i)\bm{\phi}_{\bm{\theta}}^\top(\bm{x}_i),$$
where the feature map is assumed to be centered, i.e. $\mathbb{E}_{\bm{x}\sim p(\bm{x})} [\bm{\phi}_{\bm{\theta}}(\bm{x})] = \bm{0}$.
Then, for a given positive integer $m\leq \ell$ the classical PCA problem reads
$$
\min_{U\in\St(\ell,m)} \Tr\left(C_{\bm{\theta}} -\mathbb{P}_U C_{\bm{\theta}} \mathbb{P}_U \right),
$$
as it is explained, for instance, in Section~4.1 of~\cite{Woodruff}.
Clearly, if $\mathbb{P}_U$ is the projector on the $m$ principal components, then the columns of $U$ are the eigenvectors of the covariance matrix\footnote{This follows from Proposition~\ref{Prop:U} given hereafter and the fact $\Tr\left(\mathbb{P}_U C_{\bm{\theta}} \mathbb{P}_U\right) = \Tr\left(U^\top C_{\bm{\theta}} U\right) $. The latter identity uses the cyclicity of the trace and $U^\top U = \mathbb{I}_m$.} and we have the following diagonalization $$U^\top C_{\bm{\theta}} U = \diag(\bm{\lambda}),$$ where $\bm{\lambda}$ is a vector containing the principal values.
The PCA objective function can be written as a sum of squared errors as follows
$$
\Tr\left(C_{\bm{\theta}} -\mathbb{P}_U C_{\bm{\theta}} \mathbb{P}_U \right) = \frac{1}{n}\sum_{i=1}^n \| \mathbb{P}_{U^\perp} \bm{\phi}_{\bm{\theta}}(\bm{x}_i)\|_2^2,
$$
where $\mathbb{P}_{U^\perp} = \mathbb{I}-\mathbb{P}_{U}$.
The above PCA objective
corresponds to the reconstruction error of Kernel PCA, for the kernel $k_{\bm{\theta}}(\bm{x},\bm{y}) = \bm{\phi}^\top_{\bm{\theta}}(\bm{x}) \bm{\phi}_{\bm{\theta}}(\bm{y}) $.
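The identity between the trace form and the sum of squared errors above holds for any $U$ with orthonormal columns and can be checked numerically; in this illustrative sketch, random centered features stand in for $\bm{\phi}_{\bm{\theta}}$:

```python
import numpy as np

# Numerical check (illustration only, random features in place of phi) of
#   Tr(C - P_U C P_U) = (1/n) sum_i || P_{U^perp} phi_i ||^2
# for any U with orthonormal columns.
rng = np.random.default_rng(0)
n, l, m = 50, 6, 2
Phi = rng.standard_normal((n, l))
Phi -= Phi.mean(axis=0)            # center the feature map
C = Phi.T @ Phi / n                # covariance matrix C_theta
U, _ = np.linalg.qr(rng.standard_normal((l, m)))
P = U @ U.T
lhs = np.trace(C - P @ C @ P)
rhs = np.mean(np.sum((Phi - Phi @ P)**2, axis=1))
```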
\subsection{Proposition of two AE losses \label{sec:AE_losses}}
Here, we consider two stochastic AE losses.
The first loss reads
\begin{equation*}
L^{(\sigma)}_{\bm{\xi},U}(\bm{x},\bm{z}) =\mathbb{E}_{\bm{\epsilon}} \left\|\bm{x} - \bm{\psi}_{\bm{\xi}}\big(\mathbb{P}_U\bm{z}+\sigma U\bm{\epsilon}\big)\right\|_2^2, %
\end{equation*}
where $\bm{\epsilon}\sim\mathcal{N}(0,\mathbb{I}_m)$.
As motivated by Lemma~\ref{Lemma:Smoothness} below, the noise term $\sigma U\bm{\epsilon}$ above promotes a \emph{smoother} decoder network. To further improve disentanglement, we propose a split AE loss
\begin{equation}
L^{(\sigma),sl}_{\bm{\xi},U}(\bm{x},\bm{z}) =L^{(0)}_{\bm{\xi},U}(\bm{x},\bm{z}) +\mathbb{E}_{\bm{\epsilon}} \left\|\bm{\psi}_{\bm{\xi}}\big(\mathbb{P}_U\bm{z}\big) - \bm{\psi}_{\bm{\xi}}\big(\mathbb{P}_U\bm{z}+\sigma U\bm{\epsilon}\big)\right\|_2^2,\label{eq:slLoss}
\end{equation}
with $\bm{\epsilon}\sim\mathcal{N}(0,\mathbb{I}_m)$.
The first term in~\eqref{eq:slLoss} is the classical AE loss ($\sigma =0$) while the second term promotes orthogonal directions of variations, as is discussed further in Section~\ref{sec:decoder_smoothness}.
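The split loss~\eqref{eq:slLoss} can be sketched as follows, with \texttt{psi} a placeholder decoder and the expectation estimated by Monte Carlo (illustration only):

```python
import numpy as np

# Illustrative sketch of the split AE loss: a deterministic reconstruction
# term (sigma = 0) plus a term comparing decodings with and without the
# latent noise sigma * U eps. psi is a placeholder decoder.
def split_ae_loss(x, z, psi, U, sigma, n_mc=16, seed=0):
    rng = np.random.default_rng(seed)
    P = U @ U.T
    base = psi(P @ z)
    det = np.sum((x - base)**2)                   # classical AE term L^(0)
    eps = rng.standard_normal((n_mc, U.shape[1]))
    smooth = np.mean([np.sum((base - psi(P @ z + sigma * U @ e))**2)
                      for e in eps])
    return det + smooth
```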
In the proposed model, reconstruction of an out-of-sample point $\bm{x}$ is given by $ \bm{\psi}_{\bm{\xi}}\big(\mathbb{P}_U\bm{\phi}_{\bm{\theta}}(\bm{x}) \big)$.
Note that we do not simply propose another encoder-decoder architecture, given by $U^\top\bm{\phi}_{\bm{\theta}}(\cdot)$ and $\bm{\psi}_{\bm{\xi}}(U \cdot)$.
Instead, our objective assumes that the neural network defining the encoder provides a better embedding if we impose that it maps training points on a linear subspace of dimension $m<\ell$ in the $\ell$-dimensional latent space.
In other words, the optimization of the parameters in the last layer of the encoder does not play a redundant role%
, since the second term in \eqref{eq:ReducedObjective} clearly also depends on $\mathbb{P}_{U^\perp}\bm{\phi}_{\bm{\theta}}(\cdot)$.
The problem~\eqref{eq:ReducedObjective} was inspired by Restricted Kernel Machines (RKM) by~\cite{suykens_deep_2017}, which are briefly described in the next section. %
\subsection{Restricted Kernel Machines\label{sec:RKM}}
RKMs yield a representation of kernel methods with visible and hidden units, thereby establishing links between Kernel Principal Component Analysis (KPCA) and RBMs. This framework has an energy form similar to that of RBMs~\cite{lecun_learning_2004} and admits a training procedure in a non-probabilistic setting.
The optimization problem~\eqref{eq:ReducedObjective} can be expressed as the sum of a regularized AE loss and RKM as follows
\begin{equation*}
\min_{\substack{ U\in \St(\ell,m)\\\bm{\theta}, \bm{\xi}}}\min_{\bm{h}_i\in \mathbb{R}^m} \sum_{i=1}^{n}\Big\{\frac{\lambda}{2} L_{\bm{\xi},U}(\bm{x}_i,\bm{\phi}_{\bm{\theta}}(\bm{x}_i)) + \underbrace{f(\bm{h}_i) - \bm{\phi}_{\bm{\theta}}^\top(\bm{x}_i) U \bm{h}_i}_{\text{RKM}} + \frac{1}{2}\|\bm{\phi}_{\bm{\theta}}(\bm{x}_i)\|_2^2 \Big\},
\end{equation*}
where $\bm{\phi_{\theta}}(\bm{x}_{i})\in \mathbb{R}^\ell$, $\bm{h}_{i} \in \mathbb{R}^m$ with $m \leq \ell$ and $U$ is an interconnection matrix\footnote{In~\cite{suykens_deep_2017}, the constraint on $U$ is implemented as a soft penalty term.}. The function $f:\mathbb{R}^m\to \left]-\infty,+\infty\right]$ is used for regularization and for instance can be chosen as closed and strongly convex, or as the characteristic function of a closed set.
The analogy with RBMs goes as follows: $\bm{\phi_{\theta}}(\bm{x}_{i})$ is interpreted as visible `units', whereas $U$ plays the role of an interconnection matrix with hidden features $\bm{h}_i$, which, contrary to RBMs, are not binary-valued. The first minimization in the above problem gives $\bm{h}_i^\star = U^\top \bm{\phi}_{\bm{\theta}}(\bm{x}_i)$, which yields the first term of the optimal objective $-f^\star(U^\top\bm{\phi_{\theta}}(\bm{x}_{i}))$ in terms of $f^\star$, i.e., the Fenchel conjugate of $f$.
In this paper, we focus mainly on the squared norm regularizer $f(\bm{h}) = \frac{1}{2}\|\bm{h}\|_2^2$, which yields our main objective~\eqref{eq:ReducedObjective}.
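For $f(\bm{h}) = \frac{1}{2}\|\bm{h}\|_2^2$ the inner minimization can be verified in a few lines; this is a sanity check, with random $U$ and $\bm{\phi}$ for illustration:

```python
import numpy as np

# Sanity check: with f(h) = ||h||^2/2, min_h f(h) - phi^T U h is solved by
# h* = U^T phi, with optimal value -||U^T phi||^2 / 2 = -f*(U^T phi),
# f being its own Fenchel conjugate. U and phi are random placeholders.
rng = np.random.default_rng(0)
l, m = 5, 2
U, _ = np.linalg.qr(rng.standard_normal((l, m)))
phi = rng.standard_normal(l)
obj = lambda h: 0.5 * h @ h - phi @ U @ h
h_star = U.T @ phi
val = obj(h_star)      # equals -0.5 * ||U^T phi||^2
```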
\section{PCA and disentanglement}
\begin{table*}[]
\centering
\caption{FID Scores~\cite{Heusel2017} for 8000 randomly generated samples (smaller is better). $\St$-RKM variants are shaded and outperform competitors in all datasets but one.}
\label{Table:fid}
\resizebox{\textwidth}{!}{
\begin{tabular}{lcccccc}
\toprule
\textbf{Models}& \multicolumn{1}{c}{\textbf{MNIST}} & \multicolumn{1}{c}{\textbf{fMNIST}} & \multicolumn{1}{c}{\textbf{SVHN}} & \multicolumn{1}{c}{\textbf{Dsprites}} & \multicolumn{1}{c}{\textbf{3Dshapes}} & \multicolumn{1}{c}{\textbf{Cars3D}} \\ \midrule
\cellcolor{gray!30}
{$\St$-RKM} ($\sigma=0$) & \cellcolor{gray!30}$\bm{28.71}$~($\pm$0.33) & \cellcolor{gray!30} 67.70~($\pm$0.50) & \cellcolor{gray!30}62.97~($\pm$0.34) & \cellcolor{gray!30} 88.82~($\pm$1.32) & \cellcolor{gray!30}25.76~($\pm$1.74) & \cellcolor{gray!30} 174.42 ($\pm$0.32) \\
\cellcolor{gray!30}
{$\St$-RKM} ($\sigma = 10^{-3}$) & \cellcolor{gray!30} $\bm{28.83}$ ($\pm$0.23) & \cellcolor{gray!30} $\bm{66.84}$~($\pm$0.28) & \cellcolor{gray!30} $\bm{60.42}$~($\pm$0.32) & \cellcolor{gray!30} 84.91~($\pm$1.81) & \cellcolor{gray!30}$\bm{21.87}$~($\pm$0.18) & \cellcolor{gray!30} $\bm{169.86}$~($\pm$0.44) \\
\cellcolor{gray!30}
{$\St$-RKM-sl} ($\sigma = 10^{-3}$) & \cellcolor{gray!30}$\bm{28.58}$~($\pm$0.21) & \cellcolor{gray!30} 73.85~($\pm$0.36) & \cellcolor{gray!30}$\bm{60.40}$~($\pm$0.34) & \cellcolor{gray!30}75.94~($\pm$0.82) &\cellcolor{gray!30} 23.14~($\pm$0.38) & \cellcolor{gray!30}174.76 ($\pm$ 0.52) \\
{VAE} ($\beta=1$) & 39.38~($\pm$0.31) & 101.26~($\pm$0.54) & 71.13~($\pm$0.36) & 119.55~($\pm$1.46) & 37.62~($\pm$1.63) & 213.09 ($\pm$0.30) \\
$\beta$-{VAE} ($\beta=3$) & 30.14~($\pm$0.19) & 86.12~($\pm$0.62) & 72.93~($\pm$0.47) & $83.25$~($\pm$1.87) & 30.39~($\pm$1.01) & 172.39 ($\pm$0.41) \\
FactorVAE & 35.12~($\pm$1.32) & 91.43~($\pm$2.16) & 87.45~($\pm$1.4) & \textbf{61.83}~($\pm$1.23) & 41.45~($\pm$1.66) & 175.21~($\pm$0.22) \\
{Info-GAN} & 77.75~($\pm$2.05) & 78.77~($\pm$12.51) & 98.10~($\pm$1.21) & 121.46~($\pm$2.84) & 55.11~($\pm$3.18) & 177.14 ($\pm$ 0.21) \\ \bottomrule
\end{tabular}
}
\end{table*}
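For reference, the FID between two sample sets reduces to a closed-form Fr\'echet distance between Gaussians fitted to their Inception features. Below is a minimal NumPy sketch operating on precomputed feature arrays (the feature extractor itself is omitted; this is not the evaluation code used for the table):

```python
import numpy as np

def fid(feats_a, feats_b):
    """Frechet distance between Gaussians fitted to two feature sets."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    c_a = np.cov(feats_a, rowvar=False)
    c_b = np.cov(feats_b, rowvar=False)
    # Tr((c_a c_b)^{1/2}) via the eigenvalues of c_a @ c_b,
    # which are real and nonnegative when both factors are PSD.
    ev = np.linalg.eigvals(c_a @ c_b)
    tr_sqrt = np.sqrt(np.clip(ev.real, 0.0, None)).sum()
    return float(np.sum((mu_a - mu_b) ** 2)
                 + np.trace(c_a) + np.trace(c_b) - 2.0 * tr_sqrt)
```

The score vanishes for identical sample sets and grows with the squared distance between the fitted means.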
\subsection{Decoder smoothness and disentanglement}
\label{sec:decoder_smoothness}
In the case of the stochastic loss, the smoothness of the decoder is motivated by the following lemma, which extends the result of~\cite{Robinek}. Here we adapt it to the context of optimization on the Stiefel manifold.
\begin{lemma}\label{Lemma:Smoothness}
Let $\bm{\epsilon}\sim\mathcal{N}(\bm{0},\mathbb{I}_m)$ be a random vector and $U\in\St(\ell,m)$. Let $\bm{\psi}_a(\cdot)\in \mathcal{C}^2(\mathbb{R}^\ell)$ with $a\in [d]$. If the function $[\bm{\psi}(\cdot)-\bm{x}]^2_a$ has an $L$-Lipschitz continuous Hessian for all $a\in [d]$, then we have
\begin{align}
\label{eq:Smoothness}
\mathbb{E}_{\bm{\epsilon}} [\bm{x} - \bm{\psi}(\bm{y}+\sigma U\bm{\epsilon})]_a^2
&= [\bm{x} - \bm{\psi}(\bm{y})]_a^2 +\sigma^2 \Tr\big(U^\top\nabla\bm{\psi}_a(\bm{y})\nabla\bm{\psi}_a(\bm{y})^\top U\big)\\
&-\sigma^2 [ \bm{x}- \bm{\psi}(\bm{y})]_a \Tr\big(U^\top\Hess_{\bm{y}} [\bm{\psi}_a] U \big)+R_a(\sigma),\nonumber
\end{align}
with $|R_a(\sigma)|\leq \frac{1}{6}\sigma^3 L\frac{\sqrt{2}(m+1) \Gamma((m+1)/2)}{\Gamma(m/2)}$ where $\Gamma$ is Euler's Gamma function.
\end{lemma}
In Lemma \ref{Lemma:Smoothness}, the additional terms proportional to $\sigma^2$ can be interpreted as biases. Specifically, the second term on the right-hand side of the equation in Lemma~\ref{Lemma:Smoothness} indicates that the stochastic AE loss promotes a smooth decoder by penalizing its derivative.
\begin{remark}
For a classical neural network architecture, it is unclear in practice whether the function $[\bm{\psi}(\cdot)-\bm{x}]^2_a$ has an $L$-Lipschitz continuous Hessian for all $a\in [d]$. This assumption in Lemma~\ref{Lemma:Smoothness} is used to provide a meaningful bound on the remainder in~\eqref{eq:Smoothness}.
\end{remark}
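As a sanity check, the expansion in Lemma~\ref{Lemma:Smoothness} can be verified numerically on a toy quadratic decoder component, for which the gradient and Hessian are available in closed form (a minimal NumPy sketch with hypothetical dimensions, not the architectures used in the experiments):

```python
import numpy as np

rng = np.random.default_rng(0)
l, m, sigma = 4, 2, 1e-2

# Random point on the Stiefel manifold St(l, m) via QR.
U, _ = np.linalg.qr(rng.standard_normal((l, m)))

# Toy scalar decoder component: psi(z) = g.z + 0.5 z^T H z.
g = rng.standard_normal(l)
H = rng.standard_normal((l, l))
H = 0.5 * (H + H.T)
x, y = 1.3, rng.standard_normal(l)

c = x - (g @ y + 0.5 * y @ H @ y)   # residual [x - psi(y)]
grad = g + H @ y                     # gradient of psi at y

# Monte Carlo estimate of E_eps [x - psi(y + sigma U eps)]^2.
eps = rng.standard_normal((500_000, m))
z = y + sigma * eps @ U.T
psi_z = z @ g + 0.5 * np.einsum('ni,ij,nj->n', z, H, z)
mc = np.mean((x - psi_z) ** 2)

# Second-order expansion from the lemma; the remainder is O(sigma^3).
expansion = (c**2
             + sigma**2 * np.trace(U.T @ np.outer(grad, grad) @ U)
             - sigma**2 * c * np.trace(U.T @ H @ U))
```

With $\sigma=10^{-2}$, the Monte Carlo estimate and the expansion agree to well within the Monte Carlo error.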
\subsection{Disentanglement} \label{sec:Disentanglement}
Here we argue that \emph{the principal directions in latent space match orthogonal directions of variation in the data space}. Therefore, the disentanglement of our representation is due to the optimization over $U\in \St(\ell,m)$ and is promoted by the stochastic AE loss.
Let $\bm{\Delta}_k = \nabla \bm{\psi}(\bm{y})^\top\bm{u}_k$ for $1\leq k\leq m$, and denote by $\bm{\Delta}$ the matrix with $\bm{\Delta}_k$ as columns. Then, for $t\in\mathbb{R}$, as one moves from $\bm y$ in the latent space in the direction of $\bm{u}_k$, the generated data changes by \[\bm{\psi}(\bm{y}+ t \bm{u}_k) - \bm{\psi}(\bm{y}) = t \bm{\Delta}_k + \mathcal{O}(t^2).\] Consider now a different direction, i.e., $k\neq k'$, and recall that $\bm{u}_k$ and $\bm{u}_{k'}$ are orthogonal. A disentangled representation would satisfy $\bm{\Delta}_k^\top\bm{\Delta}_{k'} = 0$: as the latent point moves along $\bm{u}_k$ or along $\bm{u}_{k'}$, the decoder output varies along orthogonal directions in data space. \emph{Hence, for all $\bm y$ in the latent space, we expect the Gram matrix $\bm{\Delta}^\top\bm{\Delta}$ to be diagonal}.
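The Gram matrix $\bm{\Delta}^\top\bm{\Delta}$ can be estimated directly by finite differences. The sketch below uses a hypothetical linear decoder (so the Jacobian is exact) to illustrate the computation; its off-diagonal entries quantify the residual entanglement:

```python
import numpy as np

rng = np.random.default_rng(1)
l, m, d = 3, 2, 6

U, _ = np.linalg.qr(rng.standard_normal((l, m)))  # orthonormal directions u_k
A = rng.standard_normal((d, l))
psi = lambda z: A @ z                              # toy linear decoder

y, t = rng.standard_normal(l), 1e-6
# Delta_k: directional derivative of psi along u_k, by finite differences.
Delta = np.stack([(psi(y + t * U[:, k]) - psi(y)) / t for k in range(m)],
                 axis=1)
gram = Delta.T @ Delta  # disentanglement asks for this matrix to be diagonal
```

For this linear decoder, `gram` equals $U^\top A^\top A U$ up to floating-point rounding.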
To isolate the connection between the equation in Lemma~\ref{Lemma:Smoothness} and the disentanglement of the representation, we assume that the encoder is already mapping points in the subspace defined by $U$, i.e., $\bm{y}_i = UU^\top\bm{\phi}_{\bm{\theta}}(\bm{x}_i)\approx \bm{\phi}_{\bm{\theta}}(\bm{x}_i)$ for all $1\leq i\leq n$.
From Lemma~\ref{Lemma:Smoothness}, we observe that the stochastic AE objective includes diagonalization terms involving the trace of a symmetric matrix. We then rely on Proposition \ref{Prop:U}, whose proof is straightforward.
\begin{proposition}\label{Prop:U}
Let $M$ be an $\ell \times \ell$ symmetric matrix with distinct eigenvalues. Let $\nu_1, \dots, \nu_m$ be its $m$ smallest eigenvalues, with associated eigenvectors $\bm{v}_1, \dots, \bm{v}_m$, and let $V$ be the matrix whose columns are these eigenvectors. Then, the optimization problem
$\min_{U\in\St(\ell,m)} \tr(U^\top M U)$ is solved by $U^\star = V$ and we have
${U^\star}^\top M U^\star = \diag(\bm{\nu}),$
with $\bm{\nu} = (\nu_1, \dots, \nu_m)^\top$.
\end{proposition}
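Proposition~\ref{Prop:U} can be checked numerically in a few lines (a minimal sketch with a random symmetric matrix):

```python
import numpy as np

rng = np.random.default_rng(2)
l, m = 6, 2

M = rng.standard_normal((l, l))
M = 0.5 * (M + M.T)                      # symmetric l x l matrix

w, V = np.linalg.eigh(M)                 # eigenvalues in ascending order
U_star = V[:, :m]                        # eigenvectors of the m smallest ones

best = np.trace(U_star.T @ M @ U_star)   # equals nu_1 + ... + nu_m
U_rand, _ = np.linalg.qr(rng.standard_normal((l, m)))  # another Stiefel point
other = np.trace(U_rand.T @ M @ U_rand)
```

Any other Stiefel point yields a trace at least as large as `best`, and $U^{\star\top} M U^\star$ is diagonal with the $m$ smallest eigenvalues on its diagonal.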
If we consider only the second term in the equation in Lemma \ref{Lemma:Smoothness} and take $M_i = \nabla\bm{\psi}(\bm{y}_i)\nabla\bm{\psi}(\bm{y}_i)^\top$, we see, thanks to Proposition \ref{Prop:U}, that the optimization over $U\in\St(\ell,m)$ promotes a diagonal Gram matrix $\bm{\Delta}^\top\bm{\Delta} = {U}^\top M_i U$.
By construction, the loss \eqref{eq:slLoss} does not include the third term in the equation in Lemma \ref{Lemma:Smoothness}.
This motivated the introduction of the split loss. We now discuss the connections with probabilistic models and the independence of latent factors.
\section{Connections with the Evidence Lower Bound \label{sec:elbo}}%
In order to formulate an ELBO for our proposed model, consider the following random encoders:
\begin{equation*}
q(\bm{z}|\bm{x})=\mathcal{N}(\bm{z}|\bm{\phi}_{\bm{\theta}}(\bm{x}),\gamma^2\mathbb{I}_\ell) \text{ and } q_U(\bm{z}|\bm{x})=\mathcal{N}(\bm{z}|\mathbb{P}_U\bm{\phi}_{\bm{\theta}}(\bm{x}),\sigma^2\mathbb{P}_U+\delta^2 \mathbb{P}_{U^\perp}),
\end{equation*}
where $\bm{\phi}_{\bm{\theta}}$ has zero mean on the data distribution. Here, $\sigma^2$ plays the role of a trade-off parameter, while the regularization parameter $\delta$ is introduced for technical reasons and is set to a numerically small value (see supplementary material for details). Let the decoder be $p(\bm{x}|\bm{z}) = \mathcal{N}(\bm{x}|\bm{\psi_{\xi}}(\bm{z}),\sigma^2_0\mathbb{I})$, and let the latent space distribution be parametrized by $p(\bm{z}) = \mathcal{N}(0,\Sigma)$, where $\Sigma\in \mathbb{R}^{\ell\times\ell}$ is a covariance matrix.
We treat $\Sigma$ as a parameter of the optimization problem that is determined at the last stage of the training.
Then the minimization problem \eqref{eq:ReducedObjective} with stochastic AE loss is equivalent to the maximization of
\begin{equation}
\frac{1}{n}\sum_{i=1}^{n}\Big\{\underbrace{\mathbb{E}_{ q_U(\bm{z}|\bm{x}_i)}[\log (p(\bm{x}_i|\bm{z}))]}_{\text{(I)}} - \underbrace{\KL(q_U(\bm{z}|\bm{x}_i),q(\bm{z}|\bm{x}_i))}_{\text{(II)}} - \underbrace{\KL(q_{U}(\bm{z}|\bm{x}_i),p(\bm{z}))}_{\text{(III)}}\Big\},\label{eq:ProbaRKM0}
\end{equation}
which is a lower bound to the ELBO, since the KL divergence in (II) is nonnegative. The hyper-parameters $\gamma,\sigma,\sigma_0$ are kept at fixed values.
Up to additive constants, the terms (I) and (II) of \eqref{eq:ProbaRKM0} match the objective \eqref{eq:ReducedObjective}. The third term (III) in \eqref{eq:ProbaRKM0} is optimized after the training of the first two terms. It can be written as follows
\begin{equation*}
\frac{1}{n}\sum_{i=1}^{n}\KL( q_{U}(\bm{z}|\bm{x}_i),p(\bm{z}))=\frac{1}{2}\Tr [\Sigma_0\Sigma^{-1}] +\frac{1}{2} \log(\det \Sigma) + \text{constants}\label{eq:KLlatent}
\end{equation*}
with $\Sigma_0 =\mathbb{P}_{U}C_{\bm{\theta}}\mathbb{P}_{U}+\sigma^2\mathbb{P}_U+\delta^2 \mathbb{P}_{U^\perp}$.
Hence, in that case, the optimal covariance matrix is diagonalized
$
\Sigma = U(\diag(\bm{\lambda})+\sigma^2\mathbb{I}_{m})U^\top + \delta^2\mathbb{P}_{U_{\perp}},
$
with $\bm{\lambda}$ denoting the principal values of the PCA.
Now we briefly discuss the factorization of the encoder.
Let $\bm{h}(\bm{x}) = U^\top \bm{\phi}_{\bm{\theta}}(\bm{x})$ and let the `effective' latent variable be $ \bm{z}^{(U)} = U^\top\bm{z}\in \mathbb{R}^m$.
Then the probability density function of $q_U(\bm{z}|\bm{x})$ is
\[
f_{q_U(\bm{z}|\bm{x})}(\bm{z})= \frac{e^{-\frac{\| U_{\perp}^\top\bm{z}\|_2^2}{2\delta^2}}}{(\sqrt{2\pi\delta^2})^{\ell-m}} \prod_{j=1}^m\frac{e^{-\frac{(\bm{z}^{(U)}_j- \bm{h}_j(\bm{x}))^2}{2\sigma^2}} }{\sqrt{2\pi\sigma^2}},
\]
where the first factor approaches a Dirac delta as $\delta\to 0$. Hence, the factorized form of $q_U$ indicates the independence of the latent variables $\bm{z}^{(U)}$. This independence has been argued to promote disentanglement. In particular, the term (II) in \eqref{eq:ProbaRKM0} is analogous to a `Total Correlation' loss~\cite{MIG_VAE}, although not formally equal.
\begin{figure*}[h!]
\centering
\setlength{\tabcolsep}{2pt}
\resizebox{\textwidth}{!}{
\begin{tabular}{r c c}
& {\normalsize RKM-sl} ($\sigma=10^{-3}$) & FactorVAE ($\gamma=12$) \\
\rotatebox[origin=c]{90}{3Dshapes} & \tabfigure{width=6.5cm, height=4cm}{traversal_rkm_3ds.pdf} & \tabfigure{width=6.5cm, height=4cm}{traversal_factorvae_3ds.pdf}\\
\rotatebox[origin=c]{90}{Dsprites} & \tabfigure{width=6.5cm, height=4cm}{traveresal_rkm_dsp.pdf} & \tabfigure{width=6.5cm, height=4cm}{traversal_factorvar_dsp.pdf}\\
\rotatebox[origin=c]{90}{Cars3D} & \tabfigure{width=6.5cm, height=3.5cm}{traversal_rkm_cars.pdf} & \tabfigure{width=6.5cm, height=3.5cm}{traversal_factorvae_cars.pdf}
\end{tabular}
}
\caption{Traversals along principal components. The first two rows show the ground-truth and reconstructed images. Each subsequent row shows the images generated by traversing along a principal component in the latent space. The last column in each sub-image indicates the dominant factor of variation.}
\label{fig:traversal_imgs}
\end{figure*}
\section{Experiments}
In this section, we investigate if $\St$-RKM\footnote{The source code is available at \href{https://github.com/EigenPandey/Stiefel_Restricted_Kernel_Machine}{https://github.com/EigenPandey/Stiefel\_Restricted\_Kernel\_Machine}} can simultaneously achieve (i) accurate reconstructions on training data, (ii) good random generations, and (iii) good disentanglement performance. We use the standard datasets: MNIST~\cite{lecun-mnisthandwrittendigit-2010}, Fashion-MNIST~\cite{xiao2017/online} (fMNIST), and SVHN~\cite{netzerReadingDigitsNatural}. To evaluate disentanglement, we use datasets with known ground-truth generating factors such as Dsprites~\cite{dsprites17}, 3Dshapes~\cite{3dshapes18}, and Cars3D~\cite{reed2015deep}. Further, all tables report average errors with one standard deviation over 10 experiments.%
\noindent \textbf{Algorithm}: We use an alternating-minimization scheme (further described in the supplementary material). First, the Adam optimizer with learning rate $2\times 10^{-4}$ is used to update the encoder-decoder parameters, and then the Cayley Adam optimizer~\cite{Li2020Efficient} with learning rate $10^{-4}$ is used to update $U$. Finally, at the end of training, we recompute $U$ from the Singular Value Decomposition (SVD) of the covariance matrix as a final correction step for the Kernel PCA term in our objective (step 10 of Algorithm \ref{algo}). Since the $\ell \times \ell$ covariance matrix is typically small, this decomposition is fast (see Table \ref{tab:arch}). In practice, our training procedure only marginally increases the computational cost, as can be seen from the training times in Table~\ref{tab:train_times}.
\begin{algorithm}[h]
\caption{Manifold optimization of $\St$-RKM}\label{algo}
\textbf{Input:} $ \{\bm{x}_{i}\}_{i=1}^n$, $ \bm{\phi}_{\bm{\theta}}, \bm{\psi}_{\bm{\zeta}}, \mathcal{J}\coloneqq \text{Eq.~} \ref{eq:ReducedObjective} $\\
\textbf{Output:} Learned $\bm{\theta, \zeta} , U$
\begin{algorithmic}[1]
\Procedure{Train}{}
\While{not converged}
\State $\{\bm{x}\} \gets \text{\{Get mini-batch\}} $
\State Get embeddings $ \bm{\phi}_{\bm{\theta}}(\bm{x})\gets \bm{x} $
\State Compute centered $C_{\bm{\theta}}$\Comment{Covariance matrix}
\State $ \text{Update}~\{ \bm{\theta},\bm{\zeta}\} \gets \text{Adam}(\mathcal{J})$ \Comment{Optimization step}
\State $ \text{Update}~\{ U\} \gets \text{Cayley\_Adam}(\mathcal{J})$\Comment{Optimization step}
\EndWhile
\State Do steps 4--5 over the whole dataset
\State $ U \gets \text{SVD}(C_{\bm{\theta}})$
\EndProcedure
\end{algorithmic}
\end{algorithm}
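The correction step (step 10) amounts to recomputing $U$ from the leading singular vectors of the small $\ell\times\ell$ covariance matrix. A minimal NumPy sketch, with stand-in embeddings and hypothetical dimensions:

```python
import numpy as np

rng = np.random.default_rng(3)
n, l, m = 500, 8, 3

phi = rng.standard_normal((n, l))   # stand-in for encoder embeddings
phi = phi - phi.mean(axis=0)        # center, as in step 5
C = (phi.T @ phi) / n               # l x l covariance: small, so SVD is cheap

# U <- leading m singular vectors of C (step 10).
Uc, s, _ = np.linalg.svd(C)
U = Uc[:, :m]
```

The resulting $U$ has orthonormal columns by construction, i.e., it lies on $\St(\ell,m)$.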
\begin{table}[htp]
\centering
\caption{Training time in minutes (for 1000 epochs) and number of parameters of the generative models on the MNIST dataset.}
\label{tab:train_times}
\begin{tabular}{lcccc}
\textbf{Model} &$\St$-\textbf{RKM} & $\beta$-\textbf{VAE} & \textbf{FactorVAE} & \textbf{Info-GAN} \\ \midrule
\textbf{Nb parameters} &$4164519$ & $4165589$ & $8182591$ & $4713478$ \\
\textbf{Training time} & 21.93 ($\pm$1.3) & $\bm{19.83}$~($\pm$0.8) & 33.31 ($\pm$2.7) & 45.96 ($\pm$1.6)
\end{tabular}
\end{table}
\noindent \textbf{Experimental setup}: We consider four baselines for comparison: (i) VAE, (ii) $\beta$-VAE, (iii) FactorVAE, and (iv) Info-GAN. To be consistent in evaluation, we keep the same encoder (discriminator) and decoder (generator) architecture, and the same latent dimension, across the models. In the case of Info-GAN, batch normalization is added for training stability. To determine the hyperparameters of the other methods, we start from values in the range suggested in the authors' reference implementations. After trying various values, we noticed that $\beta = 3$ and $\gamma = 12$ seem to work well across the considered datasets for $\beta$-VAE and FactorVAE, respectively. Furthermore, in all experiments on $\St$-RKM, we keep the reconstruction weight $\lambda=1$. All models are trained on the entire dataset. Further technical details are given in the supplementary material. Note that, for the same encoder-decoder network, the $\St$-RKM model has the fewest parameters compared to the VAE variants and Info-GAN (see Table~\ref{tab:train_times}).
To evaluate the quality of generated samples, we report the Fr\'echet Inception Distance~\cite{Heusel2017} (FID) scores in Table~\ref{Table:fid} and the Sliced Wasserstein Distance (SWD)~\cite{karras2017progressive} scores in Table~\ref{tab:swd}.
Note that FID scores are not necessarily appropriate for Dsprites since this dataset is significantly different from ImageNet on which the Inception network was originally trained. Randomly generated samples are shown in Figure~\ref{fig:fid_sample_imgs}.
To generate samples from deterministic $\St$-RKM ($\sigma=0$), we sample from a fitted normal distribution on the latent embeddings of the dataset (similar to~\cite{ghosh2019variational}). %
$\St$-RKM variants (shaded background) perform better on most datasets and, among them, the stochastic variant with $\sigma= 10^{-3}$ performs best. This can be attributed to a better generalization of the decoder network due to the noise term added to the latent variables (see Lemma~\ref{Lemma:Smoothness}).%
The training times for $\St$-RKM variants are shorter compared to FactorVAE and Info-GAN due to a significantly smaller number of parameters.
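The generation step for the deterministic variant ($\sigma=0$) can be sketched as fitting a normal distribution to the latent codes of the dataset and decoding fresh samples. The latent codes and decoder below are stand-ins (hypothetical), not the trained networks:

```python
import numpy as np

rng = np.random.default_rng(4)
n_train, m, n_gen = 1000, 3, 8

# Stand-in latent codes of the training set (anisotropic on purpose).
latents = rng.standard_normal((n_train, m)) * np.array([2.0, 1.0, 0.5])
mu, cov = latents.mean(axis=0), np.cov(latents, rowvar=False)

# Sample new codes from the fitted normal distribution, then decode them.
codes = rng.multivariate_normal(mu, cov, size=n_gen)
decode = lambda h: np.tanh(h @ rng.standard_normal((m, 16)))  # stand-in decoder
samples = decode(codes)
```

Fitting the Gaussian in latent space avoids sampling codes far from where the decoder was trained.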
\begin{table*}[]
\centering
\caption{Sliced Wasserstein Distance (SWD) scores for 8000 randomly generated samples over 10 iterations (smaller is better). The SWD evaluates the multi-scale statistical similarity between distributions of local image patches drawn from the Laplacian pyramid. A small Wasserstein distance indicates that the patch distributions are similar, so real and generated images appear similar in both appearance and variation at that spatial resolution. We report the average SWD. Scores are multiplied by $10^{2}$ for better readability.}
\label{tab:swd}
\resizebox{\textwidth}{!}{
\begin{tabular}{lcccccc}
\toprule
\textbf{Models} & \textbf{MNIST} & \textbf{fMNIST} & \textbf{SVHN} & \textbf{3Dshapes} & \textbf{DSprites} & \textbf{Cars3D} \\\midrule
\cellcolor{gray!30} $\St$-RKM ($\sigma =0$) & \cellcolor{gray!30}4.80~($\pm$0.13) &\cellcolor{gray!30} \textbf{4.71}~($\pm$0.14) & \cellcolor{gray!30}4.36~($\pm$0.32) & \cellcolor{gray!30} 2.52~($\pm$0.18) & \cellcolor{gray!30}4.54~($\pm$0.64) & \cellcolor{gray!30}3.69~($\pm$1.4) \\
\cellcolor{gray!30} $\St$-RKM ($\sigma =10^{-3}$) &\cellcolor{gray!30}4.77~($\pm$0.12) &\cellcolor{gray!30} 6.46~($\pm$0.17) & \cellcolor{gray!30}\textbf{3.26}~($\pm$0.16) &\cellcolor{gray!30} \textbf{1.04}~($\pm$0.14) & \cellcolor{gray!30}3.72~($\pm$0.58) &\cellcolor{gray!30} {3.62}~($\pm$1.29) \\
\cellcolor{gray!30} $\St$-RKM-sl ($\sigma =10^{-3}$) & \cellcolor{gray!30}\textbf{3.11}~($\pm$0.10) & \cellcolor{gray!30}5.17~($\pm$0.10) & \cellcolor{gray!30} 4.16~($\pm$0.23) & \cellcolor{gray!30} 1.20~($\pm$0.19) & \cellcolor{gray!30} \textbf{3.13}~($\pm$0.54) & \cellcolor{gray!30}4.02~($\pm$1.40) \\
VAE ($\beta =1$) & 4.85~($\pm$0.48) & 5.60~($\pm$0.09) & 4.50~($\pm$0.34) & 2.06~($\pm$0.13) & 5.04~($\pm$0.92) & 4.01~($\pm$1.90) \\
$\beta$-VAE ($\beta =3$) & 3.75~($\pm$0.08) & 7.16~($\pm$0.28) & 4.71~($\pm$0.27) & 3.25~($\pm$0.27) & 4.85~($\pm$0.68) & 4.83~($\pm$0.21) \\
FactorVAE ($\gamma =12$) & 3.52~($\pm$0.27) & 5.12~($\pm$0.01) & 3.46~($\pm$0.64) & 1.32~($\pm$0.01) & 3.24~($\pm$0.02) & \textbf{3.47}~($\pm$0.07) \\
InfoGAN & 4.08~($\pm$0.27) & 5.21~($\pm$1.33) & 4.84~($\pm$0.72) & 2.33~($\pm$0.36) & 5.17~($\pm$0.31) & 4.92~($\pm$0.33)
\end{tabular}
}
\end{table*}
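At its core, the SWD averages one-dimensional Wasserstein distances over random projections. The sketch below computes a basic SWD between two point sets of equal size; the full evaluation protocol (Laplacian-pyramid patches at multiple scales) is described in~\cite{karras2017progressive}:

```python
import numpy as np

def sliced_wasserstein(x, y, n_proj=128, seed=0):
    """Average 1-D Wasserstein-1 distance over random unit projections.
    Assumes x and y contain the same number of samples."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_proj):
        v = rng.standard_normal(x.shape[1])
        v /= np.linalg.norm(v)
        # 1-D W1 between empirical distributions = mean gap of sorted samples.
        total += np.mean(np.abs(np.sort(x @ v) - np.sort(y @ v)))
    return total / n_proj
```

The distance is zero for identical sample sets and grows as the two distributions separate.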
\begin{table*}[]
\centering
\caption{Eastwood framework's~\cite{eastwood2018a} disentanglement metrics with Lasso and Random Forest regressors. For disentanglement and completeness, a higher score is better; for informativeness, lower is better. `Info.' indicates the (average) root-mean-square error in predicting $\bm z$. The best scores are in bold. $\St$-RKM variants are shaded and outperform the other models except in two instances.\label{tab:eastwood}}
\resizebox{\textwidth}{!}{%
\begin{tabular}{llccc c ccc}
\multirow{2}{*}{{\bfseries{Dataset}}} & \multirow{2}{*}{\textbf{Model}} & \multicolumn{3}{c}{\textbf{Lasso}} & & \multicolumn{3}{c}{\textbf{Random Forest}} \\ \cmidrule{3-5} \cmidrule{7-9}
& & \textbf{Dise.} & \textbf{Comp.} & \textbf{Info.} & & \textbf{Dise.} & \textbf{Comp.} & \textbf{Info.} \\ \midrule
\multirow{6}{*}{{Dsprites}} & \cellcolor{gray!30} {$\St$-RKM ($\sigma = 0$)} & \cellcolor{gray!30} 0.41~($\pm$0.02) & \cellcolor{gray!30} 0.45~($\pm$0.01) &\cellcolor{gray!30} 1.05~($\pm$0.03) & \cellcolor{gray!30}& \cellcolor{gray!30} 0.27~($\pm$0.01) & \cellcolor{gray!30} 0.62~($\pm$0.01) & \cellcolor{gray!30} {0.97}~($\pm$0.03) \\
& \cellcolor{gray!30} {$\St$-RKM ($\sigma = 10^{-3}$)} & \cellcolor{gray!30} \textbf{0.45}~($\pm$0.01) & \cellcolor{gray!30} \textbf{0.47}~($\pm$0.02) & \cellcolor{gray!30} 1.05~($\pm$0.01) & \cellcolor{gray!30} & \cellcolor{gray!30} 0.28~($\pm$0.01) & \cellcolor{gray!30} \textbf{0.63}~($\pm$0.02) & \cellcolor{gray!30} 1.02~($\pm$0.01) \\
& \cellcolor{gray!30} {$\St$-RKM-sl} ($\sigma = 10^{-3}$) & \cellcolor{gray!30} 0.37~($\pm$0.03) & \cellcolor{gray!30} 0.32~($\pm$0.01) & \cellcolor{gray!30} 1.07~($\pm$0.02) & \cellcolor{gray!30} & \cellcolor{gray!30} \textbf{0.35}~($\pm$0.02) & \cellcolor{gray!30} 0.58~($\pm$0.01) & \cellcolor{gray!30} \textbf{0.96}~($\pm$0.02) \\
& {VAE} ($\beta=1$) & 0.26~($\pm$0.06) & 0.22~($\pm$0.00) & 0.97~($\pm$0.01) & & 0.24~($\pm$0.03) & 0.55~($\pm$0.04) & 1.00~($\pm$0.01) \\
& {$\beta$-VAE} ($\beta=3$) & 0.36~($\pm$0.02) & 0.31~($\pm$0.02) & {0.96}~($\pm$0.21) && 0.33~($\pm$0.01) & 0.53~($\pm$0.04) & {0.99}~($\pm$0.11) \\
& {FactorVAE} ($\gamma=12$) & 0.40~($\pm$0.01) & 0.34~($\pm$0.01) & {0.98}~($\pm$0.01) & & 0.34~($\pm$0.02) & 0.58~($\pm$0.01) & 1.05~($\pm$0.01) \\
& {Info-GAN} & 0.31~($\pm$0.21) & 0.27~($\pm$0.03) & \textbf{0.95}~($\pm$0.02) & & 0.31~($\pm$0.01) & 0.47~($\pm$0.20) & 1.00~($\pm$0.02) \\\midrule
\multirow{6}{*}{{3Dshapes}} & \cellcolor{gray!30} {$\St$-RKM ($\sigma = 0$)} & \cellcolor{gray!30} \textbf{0.76}~($\pm$0.02) &\cellcolor{gray!30} \textbf{0.71}~($\pm$0.02) & \cellcolor{gray!30} {1.06}~($\pm$0.03) &\cellcolor{gray!30} &\cellcolor{gray!30} 0.55~($\pm$0.03) &\cellcolor{gray!30} \textbf{0.69}~($\pm$0.02) &\cellcolor{gray!30} \textbf{0.51}~($\pm$0.21) \\
& \cellcolor{gray!30} {$\St$-RKM ($\sigma = 10^{-3}$)} & \cellcolor{gray!30} 0.74~($\pm$0.02) & \cellcolor{gray!30} 0.66~($\pm$0.01) & \cellcolor{gray!30} 1.24~($\pm$0.02) &\cellcolor{gray!30}& \cellcolor{gray!30}{0.61}~($\pm$0.01) & \cellcolor{gray!30} 0.67~($\pm$0.01) & \cellcolor{gray!30} 0.86~($\pm$0.10) \\
& \cellcolor{gray!30} {$\St$-RKM-sl} ($\sigma = 10^{-3}$) &\cellcolor{gray!30} 0.72~($\pm$0.01) & \cellcolor{gray!30}0.65~($\pm$0.01) & \cellcolor{gray!30}\textbf{1.03}~($\pm$0.02) & \cellcolor{gray!30}& \cellcolor{gray!30}\textbf{0.63}~($\pm$0.02) & \cellcolor{gray!30} 0.66~($\pm$0.02) & \cellcolor{gray!30} 0.95~($\pm$0.01) \\
& {VAE}~($\beta=1$) & 0.44~($\pm$0.21) & 0.33~($\pm$0.22) & 1.26~($\pm$0.20) & & 0.33~($\pm$0.20) & 0.36~($\pm$0.21) & 0.94~($\pm$0.01) \\
& {$\beta$-VAE} ($\beta=3$) & 0.55~($\pm$0.01) & 0.54~($\pm$0.01) & 1.07~($\pm$0.01) & & 0.56~($\pm$0.01) & 0.57~($\pm$0.03) & 0.54~($\pm$0.22) \\
& {FactorVAE} ($\gamma=12$) & 0.62~($\pm$0.01) & 0.41~($\pm$0.03) & 1.05~($\pm$0.01) & & 0.57~($\pm$0.02) & 0.58~($\pm$0.01) & 0.93~($\pm$0.20) \\
& {Info-GAN} & 0.41~($\pm$0.22) & 0.39~($\pm$0.01) & 1.17~($\pm$0.02) & & 0.53~($\pm$0.01) & 0.51~($\pm$0.10) & 0.61~($\pm$0.12)\\ \midrule
\multirow{6}{*}{{Cars3D}} & \cellcolor{gray!30} {$\St$-RKM ($\sigma = 0$)} &\cellcolor{gray!30} 0.45~($\pm$0.01) & \cellcolor{gray!30}0.27~($\pm$0.13) & \cellcolor{gray!30}1.33~($\pm$0.08) & \cellcolor{gray!30}& \cellcolor{gray!30}0.49~($\pm$0.01) & \cellcolor{gray!30}\textbf{0.38}~($\pm$0.01) &\cellcolor{gray!30} \textbf{1.16}~($\pm$0.03) \\
& \cellcolor{gray!30} {$\St$-RKM ($\sigma = 10^{-3}$)} &\cellcolor{gray!30} 0.42~($\pm$0.09) & \cellcolor{gray!30} 0.40~($\pm$0.02) &\cellcolor{gray!30} 1.34~($\pm$0.03) & \cellcolor{gray!30}& \cellcolor{gray!30} 0.54~($\pm$0.01) & \cellcolor{gray!30} 0.32~($\pm$0.02) & \cellcolor{gray!30} 1.20~($\pm$0.11) \\
& \cellcolor{gray!30} {$\St$-RKM-sl} ($\sigma = 10^{-3}$) & \cellcolor{gray!30} \textbf{0.65}~($\pm$0.02) & \cellcolor{gray!30} \textbf{0.48}~($\pm$0.01) & \cellcolor{gray!30} {1.30}~($\pm$0.05) &\cellcolor{gray!30}& \cellcolor{gray!30}\textbf{0.55}~($\pm$0.02) & \cellcolor{gray!30} 0.33~($\pm$0.02) & \cellcolor{gray!30} 1.20~($\pm$0.03) \\
& {VAE} ($\beta=1$) & 0.47~($\pm$0.01) & 0.18~($\pm$0.04) & 1.34~($\pm$0.02) & & 0.23~($\pm$0.21) & 0.35~($\pm$0.01) & 1.21~($\pm$0.02) \\
& {$\beta$-VAE} ($\beta=3$) & 0.51~($\pm$0.06) & 0.27~($\pm$0.08) & 1.35~($\pm$0.01) & & 0.47~($\pm$0.07) & {0.37}~($\pm$0.02) & {1.19}~($\pm$0.07) \\
& {FactorVAE} ($\gamma=12$) & 0.54~($\pm$0.02) & 0.38~($\pm$0.23) & 1.33~($\pm$0.02) & & 0.44~($\pm$0.01) & 0.33~($\pm$0.01) & 1.24~($\pm$0.01) \\
& {Info-GAN} & 0.56~($\pm$0.01) & 0.23~($\pm$0.13) & \textbf{1.29}~($\pm$0.04) & & 0.27~($\pm$0.22) & 0.32~($\pm$0.05) & 1.41~($\pm$0.21)\\
\end{tabular}%
}
\end{table*}
To evaluate the disentanglement performance, various metrics have been proposed. A comprehensive review by Locatello et al.~\cite{locatelloChallengingassumptions} shows that the various disentanglement metrics are correlated, albeit to different degrees across datasets. In this paper, we use the three measures of Eastwood's framework~\cite{eastwood2018a}: \emph{disentanglement}: the degree to which a representation factorizes the underlying factors of variation, with each variable capturing at most one generative factor; \emph{completeness}: the degree to which each underlying factor is captured by a single code variable; and \emph{informativeness}: the amount of information that a representation captures about the underlying factors of variation.
Table~\ref{tab:eastwood} shows that $\St$-RKM variants (shaded background) have better disentanglement and completeness scores.
However, informativeness scores are higher for $\St$-RKM when using a Lasso regressor, in contrast to the mixed scores obtained with the Random Forest regressor. This can be seen clearly in Figure~\ref{fig:traversal_imgs}, which shows the images generated by traversing along the principal components in the latent space. On the 3Dshapes dataset, the $\St$-RKM model captures floor-hue, wall-hue, and orientation perfectly but shows slight entanglement in the other factors. This is worse for $\beta$-VAE, which exhibits entanglement in all dimensions except floor-hue, along with noise in some generated images. %
Similar trends can be observed on the Dsprites and Cars3D datasets. %
\begin{figure*}[h]
\centering
\setlength{\tabcolsep}{1pt}
\resizebox{0.95\textwidth}{!}{
\begin{tabular}{r c c c c}
& $\St$-RKM ($\sigma=0$) & & $\beta$-VAE ($\beta=3$) & Info-GAN \\
\rotatebox[origin=c]{90}{MNIST} & \tabfigure{width=5.5cm}{mnist_rkm_rand_gen.pdf} && \tabfigure{width=5.5cm}{mnist_3betavae_randgen.pdf} &\tabfigure{width=5.5cm}{rand_gen_mnist_gan.pdf} \\
&&&&\\
\rotatebox[origin=c]{90}{fMNIST} & \tabfigure{width=5.5cm}{randgen_imgs_fmnist.pdf} && \tabfigure{width=5.5cm}{randgen_3vae_imgs_fmnist.pdf} &\tabfigure{width=5.5cm}{rand_gen_fmnist_gan.pdf} \\
&&&&\\
\rotatebox[origin=c]{90}{SVHN} & \tabfigure{width=5.5cm}{svhn_rkm_randgen.pdf} && \tabfigure{width=5.5cm}{svhn_3betavae_randgen.pdf} &\tabfigure{width=5.5cm}{rand_gen_svhn_gan.pdf} \\
&&&&\\
\rotatebox[origin=c]{90}{3Dshapes} & \tabfigure{width=5.5cm}{3ds_rkm_rand_gen.pdf} && \tabfigure{width=5.5cm}{3ds_3betavae_rand_gen.pdf} &\tabfigure{width=5.5cm}{rand_gen_3dshapes_gan.pdf} \\
&&&&\\
\rotatebox[origin=c]{90}{Dsprites} & \tabfigure{width=5.5cm}{dsp_rkm_rand_gen_imgs.pdf} && \tabfigure{width=5.5cm}{dsp_3betavae_rand_gen_imgs.pdf} &\tabfigure{width=5.5cm}{rand_gen_dsp_gan.pdf}
\end{tabular}
}
\caption{Samples from the randomly generated batches of images used to compute the FID scores (see Table 1 in the main part) and SWD scores (see Table~\ref{tab:swd}).}
\label{fig:fid_sample_imgs}
\end{figure*}
\section{Related work}
\noindent \textbf{VAE}: It has been shown in~\cite{MIG_VAE} that the KL term includes the Mutual Information Gap, which encourages disentanglement. In~\cite{burgess2018understanding}, the effect of the $\beta$ term is analyzed in more depth. It was suggested that the stronger emphasis on matching the posterior to the factorized unit Gaussian prior puts further constraints on the implicit capacity of the latent bottleneck~\cite{higgins2017beta}.
Recently, several variants of VAEs promoting disentanglement have been proposed by adding extra terms to the ELBO. For instance, FactorVAE~\cite{FactorVAE} augments the ELBO by a new term enforcing factorization of the marginal posterior (or aggregate posterior).
The work of~\cite{Locatello2020Disentangling} considers adding an extra term that accounts for partial label information to improve disentanglement. In~\cite{Robinek}, the reason for the alignment of the latent space with the coordinate axes is analyzed, as the design of the VAE does not suggest any such mechanism. The authors argue that the diagonal approximation in the encoder, together with the inherent stochasticity, forces local orthogonality of the decoder.
In contrast to~\cite{Robinek} where the implicit orthogonality of VAE is studied, our proposed model has orthogonality by design due to the introduction of the Stiefel manifold. The use of deterministic AEs was studied in~\cite{ghosh2019variational}, %
where another quadratic regularization on the latent vectors was proposed. \\%: $\frac{1}{2} \|\bm z \|_2^2$ as was proposed in the original RKM formulation~\cite{suykens_deep_2017}
\noindent \textbf{RKM}: In~\cite{GENRKM,robust2020}, for instance, a multi-view generative model called Generative-RKM (Gen-RKM) was introduced, which uses explicit feature maps in a novel training procedure for joint feature selection and subspace learning.
\FloatBarrier
\section{Conclusion} We proposed three main changes with respect to related works: (\emph{i}) In contrast with~\cite{GENRKM}, the interconnection matrix $U$ is restricted to be a rectangular matrix with orthonormal columns, i.e., valued on a Stiefel manifold. For training, we then use the Cayley Adam algorithm of~\cite{Li2020Efficient} for stochastic optimization on the Stiefel manifold. Computationally, $\St$-RKM only increases the training time by a reasonably small amount compared to, for instance, $\beta$-VAE.
(\emph{ii}) We propose several Auto-Encoder objectives %
and discuss that the combination of a stochastic AE loss with an explicit optimization on the Stiefel manifold promotes disentanglement.
(\emph{iii}) Additionally, we establish connections with probabilistic models, formulate an Evidence Lower Bound (ELBO), and discuss the independence of latent factors. Whereas the considered baselines trade off generation quality against disentanglement, we improve on both aspects, as illustrated through various experiments.
\section*{Acknowledgments}
\footnotesize{
EU: The research leading to these results has received funding from
the European Research Council under the European Union's Horizon
2020 research and innovation program / ERC Advanced Grant E-DUALITY
(787960). This paper reflects only the authors' views and the Union
is not liable for any use that may be made of the contained information.
Research Council KUL: Optimization frameworks for deep kernel machines C14/18/068
Flemish Government:
FWO: projects: GOA4917N (Deep Restricted Kernel Machines:
Methods and Foundations), PhD/Postdoc grant
Impulsfonds AI: VR 2019 2203 DOC.0318/1QUATER Kenniscentrum Data
en Maatschappij
This research received funding from the Flemish Government (AI Research Program). The authors are affiliated with Leuven.AI - KU Leuven institute for AI, B-3000, Leuven, Belgium.
Ford KU Leuven Research Alliance Project KUL0076 (Stability analysis
and performance improvement of deep reinforcement learning algorithms), ICT 48 TAILOR, Leuven.AI Institute. The computational resources and services used in this work were provided by the VSC (Flemish Supercomputer Center), funded by the Research Foundation - Flanders (FWO) and the Flemish Government – department EWI.}
\section{Introduction}
\label{sec:introduction}
Ground-based gravitational-wave observatories such as Advanced LIGO~\cite{AdvancedLIGO:2015} and Advanced Virgo~\cite{AdvancedVirgo:2015} are searching for persistent, periodic gravitational wave (GW) signals.
One of the key targets for these continuous-wave (CW) searches are rotating neutron stars.
At the present time, GWs have been observed from transient events comprising binary black hole~\cite{GW150914, GW151226, GW170104, GW170814, GWTC-1:2018} and binary neutron star~\cite{GW170817,GW170817multi} coalescences, but no CW signal has been detected.
However, the GW spectrum has only just been opened.
Future CW detections will shed light on a number of fundamental physics questions, including the properties of bulk nuclear matter~\cite{Riles:2013,AnderssonEtAl:2011}.
Low mass X-ray binaries (LMXBs) are of particular interest for CW searches.
LMXBs are composed of a compact object (such as a neutron star or stellar mass black hole) with a low mass stellar companion (typically $\lesssim 1M_{\odot}$)~\cite{xraybinaries:1997,LewinVanDerKlis:2006,CasaresEtAl:2017}.
Within these binaries, matter is transferred from the low mass star to the neutron star, spinning it up.
Electromagnetic (EM) observations show that these accreting neutron stars rotate below the centrifugal break-up frequency.
Torque balance between spin-up via accretion and spin-down via GW emission is one way to explain the observed frequency distribution~\cite{BildstenTB:1998}.
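The torque-balance argument can be made quantitative at the level of scalings. Equating the accretion torque $N_{\mathrm{acc}}\simeq \dot{M}\sqrt{GMR}$ with the gravitational-wave spin-down torque, and using the X-ray flux $F_{\mathrm{X}}\propto GM\dot{M}/(R d^{2})$ as a proxy for the accretion rate at distance $d$, one finds that the equilibrium strain grows with flux and falls with spin frequency $\nu_{\mathrm{s}}$ (an order-of-magnitude sketch of the standard argument, not a result derived in this paper):
\begin{equation*}
N_{\mathrm{gw}} \propto \epsilon^{2}\nu_{\mathrm{s}}^{5}, \qquad
h_{0} \propto \frac{\epsilon \, \nu_{\mathrm{s}}^{2}}{d}
\quad\Longrightarrow\quad
h_{0}^{\mathrm{eq}} \propto \sqrt{\frac{F_{\mathrm{X}}}{\nu_{\mathrm{s}}}},
\end{equation*}
where $\epsilon$ is the stellar ellipticity; hence the brightest, most slowly rotating accretors are the most promising targets.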
Scorpius X-1 is a prime LMXB target for CW searches by Advanced LIGO and Advanced Virgo.
Scorpius X-1 shines brightly in X-rays, indicating a high accretion rate and potentially strong CW emission.
Extensive searches have been made for Scorpius X-1 with multiple search pipelines and data sets from Initial LIGO and Initial Virgo~\cite{SearchFStatS2:2007, SearchCStatS5:2015, SearchTwoSpecS6VSR2VSR3:2014, SearchRadiometerS5:2011, MeadorsEtAlS6LMXBSearch:2017}\footnote{A search for LMXB XTE J1751-305 was also made with Initial LIGO~\cite{MeadorsEtAlS6LMXBSearch:2017}.}.
Scorpius X-1 was also targeted in the first Advanced LIGO Observing Run (O1) from September 2015 to January 2016~\cite{ScoX1ViterbiO1:2017, SearchRadiometerO1:2017, SearchCrossCorrO1:2017}, and the second Advanced LIGO Observing Run (O2)~\cite{ScoX1ViterbiO2,RadiometerO1O2:2019}.
O2 started in November 2016 with the LIGO instruments and was joined by Virgo for the final month before the run ended in August 2017.
To date, no CW detection has been made.
However, upper limits have been placed on the GW strain from Scorpius X-1, the most stringent coming from the cross-correlation search, which gives a $95$\% confidence upper limit on the strain of $h_0^{95\%} \lesssim 2.3 \times 10^{-25}$ at $175\,\mathrm{Hz}$~\cite{SearchCrossCorrO1:2017,MeadorsEtAl:2018}.
Searches for CWs from Scorpius X-1 and other LMXBs face a number of challenges.
First, the rotation frequency of the compact object wanders significantly during the observation.
A hidden Markov model (HMM) has the ability to track the wandering efficiently and accurately.
Following Refs.~\cite{ScoX1ViterbiO2,SuvorovaEtAl:2017,SuvorovaEtAl:2016}, this is the approach we apply in this paper.
A second challenge is that the rotation frequency is sometimes unknown from EM observations.
For Scorpius X-1, no EM pulsations have been observed from the system~\cite{WangEtAlScoX1Ephem:2018}.
Wide band searches need to be carried out, e.g. the recent O2 Scorpius X-1 search spanned $60$--$650\,\mathrm{Hz}$~\cite{ScoX1ViterbiO2}.
Many LMXBs do have EM observations of pulsations, whose frequencies are measured with an accuracy of $\sim 10^{-8}\,\mathrm{Hz}$.
It is these targets we focus on for this search.
EM observations of pulsations greatly reduce the computational cost of the search, making these targets appealing for CW searches despite their lower X-ray brightness in comparison to Scorpius X-1.
In this work we search data from O2, focusing only on data from the two LIGO observatories (due to the shorter duration of the Virgo data in O2).
We present a search for five LMXB targets with well known rotation frequencies.
The search method is identical to that used in Ref.~\cite{ScoX1ViterbiO2}.
We briefly review it in Sec.~\ref{sec:SearchOverview}.
The target list is described in Sec.~\ref{sec:targets} and the searched parameter ranges in Sec.~\ref{sec:targetParameterRanges}.
We describe the application of this search to LMXB targets in Sec.~\ref{sec:o2Search} and Sec.~\ref{sec:vetos}.
The results are presented in Sec.~\ref{sec:results}, followed by a brief discussion of the expected strain from LMXBs in Sec.~\ref{sec:expectedStrain}.
The conclusions are summarized in Sec.~\ref{sec:conclusions}.
\section{Search Algorithm}
\label{sec:SearchOverview}
The LMXB search follows the same procedure as the O2 search for Scorpius X-1~\cite{ScoX1ViterbiO2}.
Here we briefly review the search method, which is described in full in Refs.~\cite{ScoX1ViterbiO2,ScoX1ViterbiO1:2017,SuvorovaEtAl:2016,SuvorovaEtAl:2017}.
\subsection{HMM}
\label{sec:HMM}
In a Markov process, the probability of occupying the current state depends only on the previous state.
In a hidden Markov process, the state is unobservable.
The LMXB targets of this search have observed spin frequencies.
However, we note that a drift may exist between the rotation of the crust (where EM pulsations originate) and the core (which may or may not dominate the GW-emitting mass or current quadrupole)~\cite{CrabSearch:2008,SuvorovaEtAl:2016}.
The CW frequency is therefore hidden.
Following the notation in Ref.~\cite{ScoX1ViterbiO2}, we label the hidden state variable as $q(t)$.
This state transitions between some set of allowed values $\{q_1,\dots,q_{N_Q}\}$ at times $\{t_0,\dots,t_{N_T}\}$.
The probability of jumping from $q_i$ at time $t_n$ to $q_j$ at $t_{n+1}$ is given by a transition matrix $A_{q_jq_i}$ which depends only on $q(t_n)$.
Measurements are made of some observable with allowed values $\{o_1,\dots,o_{N_o}\}$, and an emission matrix $L_{o_jq_i}$ gives the likelihood that an observation $o_j$ corresponds to the hidden state $q_i$.
In a CW search, the observable is the interferometer data or some data product generated from it, e.g. a Fourier transform, or detection statistic.
The total duration of the observation is $T_{\mathrm{obs}}$.
When searching for LMXBs, the observation is divided into $N_T$ equal parts, each of length $T_{\mathrm{drift}} = T_{\mathrm{obs}} / N_T$.
Identically to Ref.~\cite{ScoX1ViterbiO2}, we take $T_{\mathrm{drift}}=10$ days (other HMM searches use a shorter $T_{\mathrm{drift}}$ depending on the type of target~\cite{MillhouseStrangMelatos:2020,PostMergerRemnantSearch:2019,SunEtAlSNR:2018}).
For each segment, $L_{o_jq_i}$ is calculated from some frequency domain estimator, such as the $\mathcal{F}$-statistic or $\mathcal{J}$-statistic as discussed in Sec.~\ref{sec:jstat}.
Given an estimator, the probability that an observation sequence $O=\{ o(t_0),\dots,o(t_{N_T}) \}$ is associated with a particular hidden path $Q=\{q(t_0),\dots,q(t_{N_T}) \}$ is
\begin{align}
P(Q|O) =& L_{o(t_{N_T}) q(t_{N_T})} A_{q(t_{N_T}) q(t_{N_T-1})} \dots L_{o(t_1) q(t_1)} \nonumber \\
& \times A_{q(t_1)q(t_0)} \Pi_{q(t_0)},
\end{align}
where $\Pi_{q(t_0)}$, the prior (i.e. the probability that the system starts in $q_i$ at $t=t_0$), is taken to be uniform.
Our objective is to find the optimal hidden path $Q^{\star}$ maximising $P(Q|O)$.
The Viterbi algorithm~\cite{Viterbi:1967} achieves this in a computationally efficient way by avoiding an exhaustive search of all paths.
It is applied in Refs.~\cite{ScoX1ViterbiO2,SuvorovaEtAl:2016,SuvorovaEtAl:2017,ScoX1ViterbiO1:2017} to CW searches.
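The recursion the Viterbi algorithm implements can be sketched in a few lines. The following log-space implementation is illustrative only (function name, array layout, and toy matrices are our choices, not the search pipeline's):

```python
import numpy as np

def viterbi(log_pi, log_A, log_L):
    """Most probable hidden path for the HMM described above.

    log_pi : (N_Q,) log prior over initial states.
    log_A  : (N_Q, N_Q) log transition matrix, log_A[j, i] = ln A_{q_j q_i}.
    log_L  : (N_T + 1, N_Q) log emission likelihoods per time step.
    Returns the optimal path and ln delta_{q_i}(t_{N_T}) for every end state.
    """
    n_steps, n_q = log_L.shape
    log_delta = log_pi + log_L[0]                 # ln delta_{q_i}(t_0)
    backptr = np.zeros((n_steps, n_q), dtype=int)
    for n in range(1, n_steps):
        # For each destination state j, maximise over the previous state i.
        cand = log_A + log_delta[np.newaxis, :]   # cand[j, i]
        backptr[n] = np.argmax(cand, axis=1)
        log_delta = log_L[n] + np.max(cand, axis=1)
    # Backtrack from the best end state q*.
    path = [int(np.argmax(log_delta))]
    for n in range(n_steps - 1, 0, -1):
        path.append(int(backptr[n, path[-1]]))
    return path[::-1], log_delta
```

Because only the per-state maximum is carried forward at each step, the cost scales as $N_T N_Q^2$ rather than $N_Q^{N_T}$.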
The Viterbi detection score~\cite{ScoX1ViterbiO1:2017} for a given path is defined as the number of standard deviations the path's log likelihood exceeds the mean log likelihood of all paths in a user-selected frequency sub-band containing the path's end state.
The quantity $\delta_{q_i}(t_{N_T})$ is defined as the likelihood of the most likely path ending in state $q_i$ at step $N_T$.
The mean and standard deviation of $\ln \delta_{q_i}(t_{N_T})$, marginalized over a sub-band, are given by
\begin{eqnarray}
\mu_{\ln \delta}(t_{N_T}) &=& \frac{1}{N_Q} \sum^{N_Q}_{i=1} \ln \delta_{q_i} (t_{N_T}), \\
\sigma^2_{\ln \delta(t_{N_T})} &=& \frac{1}{N_Q} \sum^{N_Q}_{i=1} \left[ \ln \delta_{q_i} (t_{N_T}) - \mu_{\ln \delta}(t_{N_T})\right]^2.
\end{eqnarray}
The Viterbi score for the path with the highest likelihood at step $N_T$, i.e. $\delta_{q^{\star}}$ for $q^{\star} = \mathrm{arg~max}_i \delta_{q_i}(t_{N_T})$, is then
\begin{equation}
S = \frac{\ln \delta_{q^{\star}} - \mu_{\ln \delta}(t_{N_T}) }{ \sigma_{\ln \delta}(t_{N_T}) }.
\end{equation}
As in Refs.~\cite{ScoX1ViterbiO1:2017,ScoX1ViterbiO2}, we use the Viterbi score as our detection statistic throughout this paper.
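Given the array of end-state log likelihoods $\ln \delta_{q_i}(t_{N_T})$ over a sub-band, the score reduces to a standardisation; a minimal sketch (our own helper, not pipeline code):

```python
import numpy as np

def viterbi_score(log_delta_end):
    """Viterbi score S of the best path, given ln delta_{q_i}(t_{N_T})
    for every end state q_i in a sub-band."""
    mu = np.mean(log_delta_end)
    sigma = np.std(log_delta_end)  # 1/N_Q normalisation, as in the equations above
    return (np.max(log_delta_end) - mu) / sigma
```

Note that `np.std` uses the $1/N_Q$ (population) normalisation by default, matching the variance definition above.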
\subsection{$\mathcal{J}$-statistic}
\label{sec:jstat}
A frequency domain estimator is used to convert the detector data into the probability that a signal is present at a frequency $f$.
CW searches are carried out over many months, so the estimator must account for the motion of the Earth around the Solar System barycenter.
The $\mathcal{F}$-statistic~\cite{JKS:1998} is an example of an estimator used as a matched filter in CW searches for isolated neutron stars.
In a binary the signal is Doppler modulated by the binary motion.
The orbital component of the phase varies with time $t$ as
\begin{equation}
\Phi_\mathrm{s}(t) = -2 \pi f_{\star} a_{\mathrm{0}} \sin \Omega \left( t - t_a \right),
\label{eqn:jstatdoppler}
\end{equation}
where $f_{\star}$ is the rotation frequency of the star, $a_{\mathrm{0}}$ is the projected semi-major axis, $\Omega = 2\pi/P$ is the orbital angular velocity with orbital period $P$, and $t_a$ is some reference time usually chosen to be the time of passage of the ascending node $T_{\mathrm{asc}}$.
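As an illustration, Eq.~(\ref{eqn:jstatdoppler}) can be evaluated directly; the sketch below is ours, with $a_{\mathrm{0}}$ in light-seconds and times in seconds:

```python
import numpy as np

def orbital_phase(t, f_star, a0, P, t_asc):
    """Phi_s(t) = -2 pi f_star a0 sin[Omega (t - t_asc)], with Omega = 2 pi / P.

    f_star : rotation frequency (Hz); a0 : projected semi-major axis (lt-s);
    P : orbital period (s); t_asc : time of the ascending node (s).
    """
    omega = 2.0 * np.pi / P
    return -2.0 * np.pi * f_star * a0 * np.sin(omega * (t - t_asc))
```

The phase vanishes at the ascending node and reaches its extrema, $\mp 2\pi f_{\star} a_{\mathrm{0}}$, a quarter-period later.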
The $\mathcal{J}$-statistic introduced in Ref.~\cite{SuvorovaEtAl:2017} extends the $\mathcal{F}$-statistic matched filter to include binary orbital modulation.
The orbital motion spreads the $\mathcal{F}$-statistic power into orbital sidebands spaced by $1/P$ and centred on $f_{\star}$.
The $\mathcal{J}$-statistic weights and sums these sidebands given a set of three binary parameters: $P$, $a_{\mathrm{0}}$ and $T_{\mathrm{asc}}$.
The sum is performed coherently with respect to orbital phase.
We assume circular orbits.
As in Ref.~\cite{ScoX1ViterbiO2}, we use the $\mathcal{J}$-statistic as our estimator for this search.
\section{Targets}
\label{sec:targets}
The targets of this search are LMXBs.
In LMXBs, $f_{\star}$ is measured from X-ray observations of pulsations or burst oscillations~\cite{GWLMXBsWattsEtAl:2008}.
Accreting millisecond pulsars (AMSPs) are a subclass of LMXBs that can exhibit intermittent X-ray pulsations.
These sources make interesting targets for CW searches because, in many cases, $f_{\star}$ is measured to better than $10^{-8}\,\mathrm{Hz}$.
(Again, we emphasise that the signal frequency is not necessarily equal to $f_{\star}$; see Sec.~\ref{sec:HMM}).
Generally, AMSPs are transient; they have `active' (outburst) and `quiescent' phases.
We denote the spin frequency of the star in its active and quiescent phases as $f_\mathrm{act}$ and $f_\mathrm{qu}$ respectively and use $f_{\star}$ as a general label for the spin frequency in either phase, whenever there is no need to distinguish between $f_\mathrm{act}$ and $f_\mathrm{qu}$.
As discussed in Sec.~\ref{sec:expectedStrain}, the frequency derivative $\dot{f}_{\star}$ has implications for the anticipated signal strength.
The traditional picture is that the active phase is associated with accretion onto the neutron star.
Pulsations are often observed during outburst, whereupon $f_\mathrm{act}$ and $\dot{f}_{\mathrm{act}}$ can be measured directly.
Active phases can last from weeks to years.
Some sources pulsate persistently throughout the active phase, whilst others pulsate intermittently~\cite{WattsEtAl:2009}.
In the active phase, $\dot{f}_{\mathrm{act}}$ is set by the accretion torque~\cite{GhoshLambPethick:1977}.
During quiescence, pulsations are not observed.
However, $\dot{f}_{\mathrm{qu}}$ can be inferred from the difference in $f_\mathrm{act}$ measured during the neighbouring active epochs.
In quiescence, $\dot{f}_{\mathrm{qu}}$ is usually set by magnetic dipole braking, although there may still be accretion taking place during these periods~\cite{MelatosMastrano:2016}.
CW emission is possible during both active and quiescent phases.
Torque balance is possible when the gravitational radiation reaction torque cancels the accretion torque.
It is also possible for the star to spin up or down under the action of positive or negative hydromagnetic accretion torques~\cite{GhoshLambPethick:1977,BildstenEtAl:1997}.
Radiation reaction may contribute to a negative $\dot{f}_{\mathrm{act}}$ or $\dot{f}_{\mathrm{qu}}$.
Equally, if a positive $\dot{f}_{\mathrm{act}}$ or $\dot{f}_{\mathrm{qu}}$ is observed, say due to accretion, it may outweigh and therefore mask a negative torque due to gravitational radiation reaction.
Deformations in the neutron star which are not aligned with the rotation axis produce CW emission.
Neutron star mountains can be formed by magnetic stresses misaligned with the spin axis~\cite{BonazzolaGourgoulhonMagMountains:1996, MelatosPayneMagMountains:2005, HaskellEtAlMagMountains:2008, MelatosMastranoMagMountains:2012} and thermo-elastic deformations of a relativistic star~\cite{JohnsonMcDanielOwensElasticDeformation:2013}.
Another emission mechanism involves r-modes (Rossby waves due to the Coriolis effect).
The r-mode oscillations are predicted to be excited by radiation-reaction instabilities and can persist into quiescence~\cite{rmodesOwenEtAl:1998,rmodesAnderson:1998}.
An overview of the known accreting neutron stars, as well as their prospects as GW targets, can be found in Ref.~\cite{GWLMXBsWattsEtAl:2008}.
Below we briefly introduce the five sources we analyse in this paper.
The LMXB targets in this search have distances in the range $3.4$--$8.0\,\mathrm{kpc}$; see also Secs.~\ref{sec:targetSumHETEa}--\ref{sec:targetSumXTEa}.
All targets are further away than Scorpius X-1 ($2.8 \pm 0.3\,\mathrm{kpc}$~\cite{WangEtAlScoX1Ephem:2018}).
However, they are less than three times further away, so the wave strain should be comparable to Scorpius X-1, if the quadrupole moment and $f_{\star}$ are comparable too.
The binary properties are collated in Table~\ref{tab:targetDetails}.
\begin{table*}
\caption{\label{tab:targetDetails}
Target list: position (right ascension and declination), orbital period ($P$), projected semi-major axis ($a_{\mathrm{0}}$ in light-seconds), time of ascension ($T_{\mathrm{asc}}$), and frequency of observed pulsations ($f_{\star}$).
Quoted errors (in parentheses) are the $1\sigma$ uncertainties apart from \XTEaName, where they indicate the $90$\% confidence interval from Ref.~\cite{XTEJ1814PapittoEtAl:2007}.
}
\begin{tabularx}{\textwidth}{llllllll}
\hline
\hline
\\
Target & RA & Dec & $P$($\mathrm{s}$) & $a_{\mathrm{0}}$ ($\mathop{\mbox{lt-s}}$) & $T_{\mathrm{asc}}$ (GPS time) & $f_{\star}$ ($\mathrm{Hz}$) & Refs. \\
\\
\hline
\\
\HETEaName & \HETEaRA & \HETEaDec & $\HETEaPorbE$ & $\HETEaAsiniE$ & $\HETEaObsTascE$ & $\HETEaNSFreqE$ & \cite{HETEJ1900KaaretEtAl:2006}\\
\IGRaName & \IGRaRA & \IGRaDec & $\IGRaPorbE$ & $\IGRaAsiniE$ & $\IGRaObsTascE$ & $\IGRaNSFreqE$ & \cite{IGRJ00291SannaEtAl:2017,IGRJ00291Discovery:2004}\\
\SAXaName & \SAXaRA & \SAXaDec & $\SAXaPorbE$ & $\SAXaAsiniE$ & $\SAXaObsTascE$ & $\SAXaNSFreqXMME$ & \cite{SAXJ1808HartmanEtAl:2008,SAXJ1808SannaEtAl:2017}\\
\XTEbName & \XTEbRA & \XTEbDec & $\XTEbPorbE$ & $\XTEbAsiniE$ & $\XTEbObsTascE$ & $\XTEbNSFreqE$ & \cite{XTEJ0929Discovery:2002} \\
\XTEaName & \XTEaRA & \XTEaDec & $\XTEaPorbE$ & $\XTEaAsiniE$ & $\XTEaObsTascE$ & $\XTEaNSFreqE$ & \cite{XTEJ1814PapittoEtAl:2007} \\
\\
\hline
\hline
\end{tabularx}
\end{table*}
\subsection{\HETEaName}
\label{sec:targetSumHETEa}
\HETEaName~was first observed in outburst in 2005 by HETE-II (High Energy Transient Explorer-2)~\cite{HETEJ1900VanderspekEtAl:2005}.
It has distance estimates of $\sim 4.3\,\mathrm{kpc}$~\cite{SuzukiEtAlHETEDist:2007} and $4.7\pm 0.6\,\mathrm{kpc}$~\cite{GallowayEtAlHETEDist:2008}.
Early observations by RXTE (Rossi X-ray Timing Explorer) revealed X-ray pulsations which were detected continuously for $22\,\mathrm{d}$ but became intermittent and then undetectable~\cite{HETEJ1900KaaretEtAl:2006,HETEJ1900GallowayEtAl:2008}.
A spin-orbit model~\cite{ManchesterTaylorPulsars:1977} was used to compute a fit for the orbital parameters and pulsation frequency, yielding $\HETEaNSFreqE\,\mathrm{Hz}$ for the latter quantity.
On MJD $53559$ (8 July 2005) a brightening in the source flux was observed as well as a shift in frequency to $377.291596(16)\,\mathrm{Hz}$, after which pulsations became suppressed~\cite{HETEJ1900KaaretEtAl:2006}.
The source remained in outburst without observed pulsations for $\sim 10\,\mathrm{years}$ until its return to quiescence in 2015~\cite{HETEJ1900DegenaarEtAl:2017}.
In this paper we use the timing solution in Ref.~\cite{HETEJ1900KaaretEtAl:2006} computed from the period before the spin jump, when pulsations were observed.
There is no frequency derivative measured.
\subsection{\IGRaName}
\label{sec:targetSumIGRa}
\IGRaName~is the fastest known AMSP at $\IGRaNSFreqE\,\mathrm{Hz}$.
Distance estimates yield a lower limit of $4\,\mathrm{kpc}$ and an upper limit of $6\,\mathrm{kpc}$ from Refs.~\cite{GallowayEtAl:2005} and~\cite{Galloway:2006} respectively.
It was discovered in a $14\,\mathrm{d}$ outburst in 2004~\cite{IGRJ00291Discovery:2004,IGRJ00291TorresEtAl:2008}.
Searches in the RXTE All Sky Monitor data indicate marginal evidence for two prior outbursts during 1998 and 2001~\cite{IGRJ00291PriorBurstsATel:2004,IGRJ00291DeFalcoEtAl:2017}.
A double outburst was observed in 2008, lasting $9\,\mathrm{d}$ in August and $15\,\mathrm{d}$ in September~\cite{IGRJ00921DoubleOutburst:2011,IGRJ00291PapittoEtAl:2011}.
The most recent outburst in 2015 lasted $25\,\mathrm{d}$.
A timing solution for the spin and orbital parameters was computed from the 2015 outburst in Refs.~\cite{IGRJ00291SannaEtAl:2017,IGRJ00291DeFalcoEtAl:2017}.
Several estimates of $\dot{f}_{\star}$ exist from active and quiescent periods (see Table~\ref{tab:expectedStrain}).
In this paper we use the timing solution from Ref.~\cite{IGRJ00291SannaEtAl:2017}.
\subsection{\SAXaName}
\label{sec:targetSumSAXa}
\SAXaName~is a regular outburster discovered in 1996 by the \emph{BeppoSAX} satellite~\cite{BeppoSAX:1997} with an estimated distance in the range $3.4$--$3.6\,\mathrm{kpc}$~\cite{SAXJ1808GallowayCumming:2006}.
Eight outbursts have been observed, the most recent of which was in 2019~\cite{BultEtAll:2019}.
As the 2019 outburst occurred after O2, we use the most recent outburst prior to O2, which began in April 2015~\cite{SAXJ1808SannaEtAl:2017}.
In Ref.~\cite{SAXJ1808SannaEtAl:2017} the spin and orbit parameters are computed from observations of the 2015 outburst by XMM-Newton and NuSTAR (Nuclear Spectroscopic Telescope Array).
XMM-Newton and NuSTAR yield frequencies of $\SAXaNSFreqXMME\,\mathrm{Hz}$ and $\SAXaNSFreqNuSTARE\,\mathrm{Hz}$ respectively.
Several observations of $\dot{f}_{\star}$ have been made in both active and quiescent phases (see Table~\ref{tab:expectedStrain}).
In this paper, we use the timing solution from Ref.~\cite{SAXJ1808SannaEtAl:2017}.
\subsection{\XTEbName}
\label{sec:targetSumXTEb}
\XTEbName~was discovered by RXTE during a two-month outburst in April--June 2002~\cite{XTEJ0929Discovery:2002}, the only outburst observed to date~\cite{XTEJ0929Distance:2017}.
It has an estimated distance $>7.4\,\mathrm{kpc}$~\cite{XTEJ0929Distance:2017}.
The spin and orbital parameters were computed from RXTE timing data.
There is also an estimate of $\dot{f}_{\star}$ during the active phase~\cite{XTEJ0929Discovery:2002}.
The pulsation frequency is $\XTEbNSFreqE\,\mathrm{Hz}$ with $\dot{f}_{\mathrm{act}} = -9.2(4)\times 10^{-14}\,\mathrm{Hz}~\mathrm{s}^{-1}$ (spin down).
In this paper we use the timing solution in Ref.~\cite{XTEJ0929Discovery:2002}.
\subsection{\XTEaName}
\label{sec:targetSumXTEa}
\XTEaName~was discovered in outburst in 2003~\cite{XTEJ1814Discovery:2003} by RXTE.
The outburst lasted $53\,\mathrm{d}$ and is the only one observed.
Distance estimates range from $3.8\,\mathrm{kpc}$~\cite{XTEJ1814KraussEtAl:2005} to $\sim 8\,\mathrm{kpc}$~\cite{XTEJ1814:StrohmayerEtAl:2003}.
The spin and orbital parameters were computed via timing analysis.
Pulsations at $\XTEaNSFreqE\,\mathrm{Hz}$ were observed with $\dot{f}_{\mathrm{act}} = -6.7(7)\times 10^{-14}\,\mathrm{Hz}~\mathrm{s}^{-1}$.
In this paper we use the timing solution from Ref.~\cite{XTEJ1814PapittoEtAl:2007}.
\section{Spin, orbital, and astrometric parameters}
\label{sec:targetParameterRanges}
A targeted CW search requires the sky position [right ascension (RA) and declination (Dec)] of the source, needed for the $\mathcal{J}$-statistic to account for the motion of the Earth with respect to the target.
To apply the $\mathcal{J}$-statistic, three binary orbital parameters are also necessary: the orbital period $P$, the projected semi-major axis $a_{\mathrm{0}}$, and the orbital phase $\phi_a$.
The phase of the orbit from X-ray observations is often quoted as the time of the ascending node $T_{\mathrm{asc}}$ where the Doppler-shifted frequency of the neutron star is lowest (the phase is also sometimes quoted as the time of inferior conjunction of the companion star, $T_{90} = T_{\mathrm{asc}} + P/4$~\cite{GWLMXBsWattsEtAl:2008,XTEJ0929Discovery:2002}).
In this search we use $T_{\mathrm{asc}}$ for all targets.
EM observations of pulsations constrain the neutron star spin frequency $f_{\star}$.
The electromagnetically determined search parameters are summarized in Table~\ref{tab:targetDetails}.
Observations of X-ray pulsations during active phases are able to directly constrain $f_{\star}$ to high precision.
The uncertainties in $f_{\star}$ are typically small, as Table~\ref{tab:targetDetails} shows.
However, signal frequency is not necessarily identical to $f_{\star}$ (see also Sec.~\ref{sec:HMM} and Sec.~\ref{sec:searchDescriptionFreqBinning}).
Timing solutions inferred from the Doppler-shifted pulsations allow the orbital parameters to be constrained (see $P$, $a_{\mathrm{0}}$, and $T_{\mathrm{asc}}$ in Table~\ref{tab:targetDetails}).
There are several mechanisms which can lead to CW emission from a rotating neutron star as described in Sec.~\ref{sec:targets}.
Thermal or magnetic `mountains' emit at $2f_{\star}$ and possibly $f_{\star}$~\cite{BildstenTB:1998}.
R-mode oscillations emit at $\sim 4f_{\star}/3$~\cite{rmodesOwenEtAl:1998,rmodesAnderson:1998,Lee:2010}.
Pinned superfluids emit at $f_{\star}$~\cite{MelatosDouglassSimula:2015} and $2 f_{\star}$~\cite{Jones:2010}.
We also search for signals at $f_{\star}/2$, where harmonics may exist.
In summary, we search bands containing $\{1/2, 1, 4/3, 2\}\,f_{\star}$ for each target as discussed in Sec.~\ref{sec:searchDescriptionFreqBinning}.
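The harmonic ratios above translate into candidate signal frequencies as follows (an illustrative helper, not part of the pipeline):

```python
def harmonic_frequencies(f_star):
    """Candidate CW frequencies {1/2, 1, 4/3, 2} x f_star
    for a star spinning at f_star (Hz)."""
    return [f_star * r for r in (0.5, 1.0, 4.0 / 3.0, 2.0)]
```

For a hypothetical $f_{\star} = 300\,\mathrm{Hz}$ this yields $150$, $300$, $400$, and $600\,\mathrm{Hz}$, each of which falls in one of the searched sub-bands.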
Identically to Ref.~\cite{ScoX1ViterbiO2}, we choose a sub-band size of $\sim 0.61\,\mathrm{Hz}$ (see Sec.~\ref{sec:o2Search} for details).
Previous CW searches have used sub-bands in the range $0.01$--$1\,\mathrm{Hz}$ depending on the target, algorithm, and search type (e.g. all sky, targeted)~\cite{BetzwieserEtAlCrabSearch:2009,knownPulsarO2:2019,SearchCrossCorrO1:2017,EinsteinATHomeAllSky:2017,EinsteinATHomeTargetet:2019}.
Recent developmental work on r-mode searches recommends scanning a relatively wide frequency range around the $4f_{\star}/3$ value~\cite{CarideEtAlrmodeSearch:2019}.
For the targets considered here, we calculate the recommended search band using Eq. (17) in Ref.~\cite{CarideEtAlrmodeSearch:2019}.
\XTEaName~has the narrowest band ($253$--$291\,\mathrm{Hz}$), and \IGRaName~has the widest band ($669$--$940\,\mathrm{Hz}$).
The $\sim 0.61\,\mathrm{Hz}$ sub-bands searched in this paper are deliberately chosen to be narrower than these ranges.
An exhaustive, broadband, r-mode search across hectohertz frequencies lies outside the scope of this paper, whose central objective is to conduct fast, narrowband searches at a selection of sensible harmonics of $f_{\star}$, taking advantage of well-measured EM observables in LMXBs for the first time.
We postpone a broadband r-mode search to future work (see also Ref.~\cite{FesikPapa:2020} for a recent r-mode search).
\begin{table*}
\caption{\label{tab:TascRange}
Propagated time of ascension just before the start of O2, along with the error (in parentheses) and the search interval.
The error in the second column is the $1\sigma$ uncertainty of the propagated $T_{\mathrm{asc}}$ except for \XTEaName, where it is the $90$\% interval.
The search intervals in the third column are the $\pm 3\sigma$ range apart from \XTEbName.
For \XTEbName, the search interval is equal to the orbital period, covering $T_{\mathrm{asc}} \pm P/2$.
}
\begin{tabular}{p{3.7cm}p{4cm}p{4cm}}
\hline
\hline
\\
Target & $T_{\mathrm{asc,O2}}$ (GPS time) & Search range (GPS time)\\
\\
\hline
\\
\HETEaName&$\HETEaTascGPSPropE$&$\HETEaTascLow$--$\HETEaTascHigh$ \\
\IGRaName &$\IGRaTascGPSPropE$ &$\IGRaTascLow$--$\IGRaTascHigh$ \\
\SAXaName &$\SAXaTascGPSPropE$ &$\SAXaTascLow$--$\SAXaTascHigh$ \\
\XTEbName &$\XTEbTascGPSPropE$ &$\XTEbTascLow$--$\XTEbTascHigh$ \\
\XTEaName &$\XTEaTascGPSPropE$ &$\XTEaTascLow$--$\XTEaTascHigh$ \\
\\
\hline
\hline
\end{tabular}
\end{table*}
The search ranges for the orbital parameters are based on the uncertainty in the EM measurement.
In many cases, the orbital parameters are known to high accuracy, reducing the computational cost.
The error in $T_{\mathrm{asc}}$ is typically $<1\,\mathrm{s}$.
However, the uncertainty in both $T_{\mathrm{asc}}$ and $P$ means that the extrapolation becomes more unreliable the further it extends.
If $T_{\mathrm{asc}}$ is measured several years before O2, we can use $P$ to calculate a time when the binary returns to the same position in its orbit close to the O2 start time (at $T_{\mathrm{O2,start}}=1\,164\,562\,334$).
To propagate the combined error, we compute the number of orbits $N_{\mathrm{orb}}$ between the observed $T_{\mathrm{asc}}$ and the time of ascension just before the start of O2 from
\begin{equation}
T_{\mathrm{asc,O2}} = T_{\mathrm{asc}} + N_{\mathrm{orb}} P,
\end{equation}
and the error for the propagated $T_{\mathrm{asc}}$ is
\begin{equation}
\sigma_{T_{\mathrm{asc,O2}}}=\left[\sigma_{T_{\mathrm{asc}}}^2+(N_{\mathrm{orb}} \sigma_{P})^2\right]^{1/2},
\end{equation}
where $\sigma_{P}$ and $\sigma_{T_{\mathrm{asc}}}$ are the errors on $P$ and $T_{\mathrm{asc}}$ respectively.
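The propagation above amounts to a few lines of arithmetic. The sketch below is ours, taking $N_{\mathrm{orb}}$ to be the integer number of complete orbits before the chosen start time; the numbers in the usage example are hypothetical, not a real ephemeris:

```python
import math

def propagate_tasc(t_asc, sigma_tasc, P, sigma_P, t_start):
    """Propagate T_asc and its uncertainty to the last ascension before t_start."""
    n_orb = math.floor((t_start - t_asc) / P)        # complete orbits elapsed
    t_asc_prop = t_asc + n_orb * P                   # T_asc,O2 = T_asc + N_orb P
    sigma = math.hypot(sigma_tasc, n_orb * sigma_P)  # quadrature sum of the two errors
    return t_asc_prop, sigma, n_orb
```

For example, with `t_asc=1000`, `P=100`, `sigma_tasc=1`, `sigma_P=0.01`, and `t_start=100950` (all in seconds), $999$ orbits elapse and the period uncertainty, amplified by $N_{\mathrm{orb}}$, dominates the propagated error.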
For all targets we choose to use $3\sigma$ uncertainties, except for \XTEbName~where we search a $T_{\mathrm{asc}}$ range equal to its orbital period ($T_{\mathrm{asc}} \pm P/2$).
This search range achieves good coverage of the parameter space whilst keeping the computational cost manageable.
The $T_{\mathrm{asc,O2}}$ values used in the search are given in Table~\ref{tab:TascRange}.
\section{Searching O2 data}
\label{sec:o2Search}
Most CW searches of LIGO data, including this one, begin with short Fourier Transforms (SFTs) of time segments of the data.
Each SFT has duration $T_{\mathrm{SFT}} = 1800\,\mathrm{s}$ (see Appendix~\ref{app:sftLengths}).
For each target, the first step is to compute the $\mathcal{F}$-statistic `atoms', defined in Refs.~\cite{PrixAtoms:2011,SuvorovaEtAl:2017}.
The data are split into $N_{T} = 23$ segments of duration $T_{\mathrm{drift}} = 10\,\mathrm{d}$.
The atoms are computed using the fixed values of RA and Dec in Table~\ref{tab:targetDetails}, which are typically known for LMXBs.
\begin{table*}
\caption{\label{tab:thresholds}
Sub-band frequencies, number of templates and associated Gaussian and off-target Viterbi score thresholds.
The second column shows the sub-band frequencies for $\{f_{\star}/2, f_{\star}, 4f_{\star}/3, 2f_{\star}\}$ where the value displayed is the start of the sub-band, which is $\sim 0.61\,\mathrm{Hz}$ wide.
For each sub-band, the third and fourth columns show the number of $P$ and $T_{\mathrm{asc}}$ templates searched respectively ($N_{\Porb}$ and $N_{\Tasc}$) (note: $N_{\asini} = 1$).
The fifth column shows the total number of templates searched ($N_{\mathrm{tot}}$).
The final columns show the Viterbi score thresholds.
The sixth column shows the Viterbi score threshold from identical searches on $100$ synthetic Gaussian noise realisations $S_\mathrm{th}^{\mathrm{G}}$.
The seventh column shows the Viterbi score threshold from identical searches on $100$ randomly selected sky positions in real data $S_\mathrm{th}^{\mathrm{OT}}$.
The thresholds are for a $0.30$ false alarm probability per sub-band.
}
\begin{center}
\begin{tabular}{p{3.7cm}p{2.9cm}p{1.1cm}p{1.1cm}p{1.7cm}p{2.2cm}p{2.2cm}}
\hline
\hline
\\
Target & Sub-band start & \multicolumn{3}{l}{Number of templates} & \multicolumn{2}{l}{Threshold Viterbi score} \\
& frequency ($\mathrm{Hz}$) & $N_{\Porb}$ & $N_{\Tasc}$ & $N_{\mathrm{tot}}$ & $S_\mathrm{th}^{\mathrm{G}}$ & $S_\mathrm{th}^{\mathrm{OT}}$ \\
\\
\hline
\\
\HETEaName & $\HETEaFBa$ & $\HETEaNPa$ & $\HETEaNTa$ & $\HETEaNTota$ & \HETEaSthGpsa & \HETEaSthOTpsa \\
& $\HETEaFBb$ & $\HETEaNPb$ & $\HETEaNTb$ & $\HETEaNTotb$ & \HETEaSthGpsb & \HETEaSthOTpsb \\
& $\HETEaFBc$ & $\HETEaNPc$ & $\HETEaNTc$ & $\HETEaNTotc$ & \HETEaSthGpsc & \HETEaSthOTpsc \\
& $\HETEaFBd$ & $\HETEaNPd$ & $\HETEaNTd$ & $\HETEaNTotd$ & \HETEaSthGpsd & \HETEaSthOTpsd \\
\\
\IGRaName & $\IGRaFBa$ & $\IGRaNPa$ & $\IGRaNTa$ & $\IGRaNTota$ & \IGRaSthGpsa & \IGRaSthOTpsa \\
& $\IGRaFBb$ & $\IGRaNPb$ & $\IGRaNTb$ & $\IGRaNTotb$ & \IGRaSthGpsb & \IGRaSthOTpsb \\
& $\IGRaFBc$ & $\IGRaNPc$ & $\IGRaNTc$ & $\IGRaNTotc$ & \IGRaSthGpsc & \IGRaSthOTpsc \\
& $\IGRaFBd$ & $\IGRaNPd$ & $\IGRaNTd$ & $\IGRaNTotd$ & \IGRaSthGpsd & \IGRaSthOTpsd \\
\\
\SAXaName & $\SAXaFBa$ & $\SAXaNPa$ & $\SAXaNTa$ & $\SAXaNTota$ & \SAXaSthGpsa & \SAXaSthOTpsa \\
& $\SAXaFBb$ & $\SAXaNPb$ & $\SAXaNTb$ & $\SAXaNTotb$ & \SAXaSthGpsb & \SAXaSthOTpsb \\
& $\SAXaFBc$ & $\SAXaNPc$ & $\SAXaNTc$ & $\SAXaNTotc$ & \SAXaSthGpsc & \SAXaSthOTpsc \\
& $\SAXaFBd$ & $\SAXaNPd$ & $\SAXaNTd$ & $\SAXaNTotd$ & \SAXaSthGpsd & \SAXaSthOTpsd \\
\\
\XTEbName & $\XTEbFBa$ & $\XTEbNPa$ & $\XTEbNTa$ & $\XTEbNTota$ & \XTEbSthGpsa & \XTEbSthOTpsa \\
& $\XTEbFBb$ & $\XTEbNPb$ & $\XTEbNTb$ & $\XTEbNTotb$ & \XTEbSthGpsb & \XTEbSthOTpsb \\
& $\XTEbFBc$ & $\XTEbNPc$ & $\XTEbNTc$ & $\XTEbNTotc$ & \XTEbSthGpsc & \XTEbSthOTpsc \\
& $\XTEbFBd$ & $\XTEbNPd$ & $\XTEbNTd$ & $\XTEbNTotd$ & \XTEbSthGpsd & \XTEbSthOTpsd \\
\\
\XTEaName & $\XTEaFBa$ & $\XTEaNPa$ & $\XTEaNTa$ & $\XTEaNTota$ & \XTEaSthGpsa & \XTEaSthOTpsa\\
& $\XTEaFBb$ & $\XTEaNPb$ & $\XTEaNTb$ & $\XTEaNTotb$ & \XTEaSthGpsb & \XTEaSthOTpsb \\
& $\XTEaFBc$ & $\XTEaNPc$ & $\XTEaNTc$ & $\XTEaNTotc$ & \XTEaSthGpsc & \XTEaSthOTpsc \\
& $\XTEaFBd$ & $\XTEaNPd$ & $\XTEaNTd$ & $\XTEaNTotd$ & \XTEaSthGpsd & \XTEaSthOTpsd \\
\\
\hline
\hline
\end{tabular}
\end{center}
\end{table*}
\subsection{Number of orbital templates}
The next step is to define the search grid for each target in $P$, $a_{\mathrm{0}}$, and $T_{\mathrm{asc}}$ to compute the $\mathcal{J}$-statistic.
It is assumed that $P$, $a_{\mathrm{0}}$, and $T_{\mathrm{asc}}$ remain within the same bin throughout the search.
When performing a gridded search, it is unlikely that the true signal parameters fall exactly on a grid point or template; there is some mismatch between the signal parameter and the template parameter.
The grid is marked out so as to keep the mismatch to an acceptable level whilst keeping the number of templates low enough to be computationally feasible.
We follow the same procedure as in Ref.~\cite{ScoX1ViterbiO2}.
Using Eq. (71) of Ref.~\cite{LeaciPrix:2015}, the number of $P$, $a_{\mathrm{0}}$, and $T_{\mathrm{asc}}$ templates are
\begin{eqnarray}
N_{\Porb} &=& \frac{\pi\sqrt{2}}{2} \mu_{\mathrm{max}}^{-1/2} f a_{\mathrm{0}} \frac{\gamma T_{\mathrm{drift}}}{\sqrt{12}} \frac{2\pi}{P^2} \Delta P, \label{eqn:NPorb} \\
N_{\asini} &=& \frac{\pi\sqrt{2}}{2} \mu_{\mathrm{max}}^{-1/2} f \Delta a_{\mathrm{0}}, \label{eqn:Nasini}\\
N_{\Tasc} &=& \frac{\pi\sqrt{2}}{2} \mu_{\mathrm{max}}^{-1/2} f a_{\mathrm{0}} \frac{2\pi}{P} \Delta T_{\mathrm{asc}}, \label{eqn:NTasc}
\end{eqnarray}
where $\mu_{\mathrm{max}}$ is the maximum allowed mismatch, which we choose to be $10\%$ ($\mu_{\mathrm{max}}=0.1$), and $\gamma$ is defined in general in Eq. (67) of Ref.~\cite{LeaciPrix:2015}.
The factor $\gamma$ is a refinement factor introduced because the data are processed in $23$ separate segments; in the special case of the O2 data considered here where the segments are contiguous in time, we have $\gamma = N_T = 23$~\cite{LeaciPrix:2015}.
The values $\Delta P$, $\Delta a_{\mathrm{0}}$, and $\Delta T_{\mathrm{asc}}$ are the $3\sigma$ error bars on the EM measurements of $P$, $a_{\mathrm{0}}$, and $T_{\mathrm{asc}}$ respectively.
We make a conservative estimate of $N_{\Porb}$, $N_{\asini}$, and $N_{\Tasc}$ by setting $f$ equal to the largest frequency value in each sub-band.
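A direct transcription of Eqs.~(\ref{eqn:NPorb})--(\ref{eqn:NTasc}), with $\gamma = N_T$ for contiguous segments, might look like the sketch below; the parameter values in the usage example are purely illustrative:

```python
import math

def template_counts(f, a0, P, T_drift, N_T, dP, da0, dTasc, mu_max=0.1):
    """Numbers of P, a0, and T_asc templates.

    f in Hz, a0 in lt-s, times in s; dP, da0, dTasc are the search ranges
    (3-sigma EM error bars) on P, a0, and T_asc.
    """
    pref = (math.pi * math.sqrt(2.0) / 2.0) / math.sqrt(mu_max)
    gamma = N_T  # refinement factor for contiguous segments
    n_P = pref * f * a0 * (gamma * T_drift / math.sqrt(12.0)) \
        * (2.0 * math.pi / P**2) * dP
    n_a = pref * f * da0
    n_Tasc = pref * f * a0 * (2.0 * math.pi / P) * dTasc
    return n_P, n_a, n_Tasc
```

A formal count below one means a single template covers the corresponding parameter, which is the situation found for $a_{\mathrm{0}}$ here.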
The grid is uniformly spaced in each sub-band.
For the five targets in this paper, we find $N_{\asini} < 1$ formally.
Hence we search over $P$ and $T_{\mathrm{asc}}$ with $a_{\mathrm{0}}$ held fixed.
In contrast, for the O2 search for Scorpius X-1, the frequency dependent number of templates ranges from $N_{\asini} = 768$ and $N_{\Tasc} = 78$ for $60\,\mathrm{Hz}$ to $N_{\asini} = 8227$ and $N_{\Tasc} = 824$ for $650\,\mathrm{Hz}$~\cite{ScoX1ViterbiO2}.
For seven sub-bands, we find $N_{\Porb} <1$.
The third and fourth columns of Table~\ref{tab:thresholds} show $N_{\Porb}$ and $N_{\Tasc}$ respectively for each target and sub-band.
Where Eq.~(\ref{eqn:NPorb}) predicts an even number for $N_{\Porb}$, we round up by one to ensure that the central value from EM observations is searched (e.g. where $N_{\Porb}=2$, we search $N_{\Porb}=3$).
The fifth column shows the total number of templates $N_{\mathrm{tot}} = N_{\Porb} N_{\asini} N_{\Tasc} = N_{\Porb} N_{\Tasc} $ for each target and sub-band.
\subsection{Frequency binning}
\label{sec:searchDescriptionFreqBinning}
In Ref.~\cite{ScoX1ViterbiO2}, the search band is divided into equal sub-bands of width $\Delta f_{\mathrm{band}} = 2^{20} \Delta f_{\mathrm{drift}}=0.6068148\,\mathrm{Hz}$.
The choice of a power of two for $\Delta f_{\mathrm{band}} / \Delta f_{\mathrm{drift}}$ speeds up the computation of the Fourier transform~\cite{DunnEtAl:2019}.
We adopt the same binning strategy here\footnote{
The choice of sub-band varies between CW searches.
An initial search for the Crab pulsar searched a range of $10^{-2}\,\mathrm{Hz}$ around $2f_{\star}$~\cite{BetzwieserEtAlCrabSearch:2009}, a search of O2 data for known pulsars used sub-bands in the range $0.06$--$0.81\,\mathrm{Hz}$ depending on the target~\cite{knownPulsarO2:2019}, the cross-correlation O1 search for Scorpius X-1 used $0.05\,\mathrm{Hz}$ sub-bands~\cite{SearchCrossCorrO1:2017}, and the Einstein@Home search used $0.05\,\mathrm{Hz}$ and $1\,\mathrm{Hz}$ sub-bands for recent all-sky and targeted searches respectively~\cite{EinsteinATHomeAllSky:2017,EinsteinATHomeTargetet:2019}.
}.
Every ten days (i.e. $T_{\mathrm{drift}}$), the frequency of the signal can increase or decrease by $\Delta f_{\mathrm{drift}} = 5.787037 \times 10^{-7}\,\mathrm{Hz}$, or remain the same.
For each target we search the $\sim 0.61\,\mathrm{Hz}$ sub-bands which contain $f_{\star}/2$, $f_{\star}$, $4f_{\star}/3$ and $2f_{\star}$ (see Sec.~\ref{sec:targetParameterRanges}).
One advantage of the Viterbi algorithm is its speed, which allows us to search $\sim 0.61\,\mathrm{Hz}$ cheaply.
The sub-band boundaries are identical to those used in Ref.~\cite{ScoX1ViterbiO2}; therefore the EM frequency may not be at the centre of the sub-band.
There is no guarantee that the GW-emitting quadrupole and the X-ray emitting crust are exactly locked together; theoretical estimates of the balance between the nuclear pinning and Magnus forces in a neutron star predict a crust-core lag, for example~\cite{LinkEpsteinCrustCoreLag:1991,MelatosCrustCoreLag:2012}.
The starting frequencies of the sub-bands for each target are shown in the second column of Table~\ref{tab:thresholds}.
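The binning numbers above are mutually consistent under the HMM bin-width convention $\Delta f_{\mathrm{drift}} = 1/(2 T_{\mathrm{drift}})$ of the cited method papers; a quick numerical check:

```python
T_drift = 10 * 86400              # coherence time: 10 days, in seconds
df_drift = 1.0 / (2.0 * T_drift)  # bin width = 5.787037e-7 Hz
df_band = 2**20 * df_drift        # sub-band width = 0.6068148 Hz
```

Both derived values reproduce the quoted $\Delta f_{\mathrm{drift}}$ and $\Delta f_{\mathrm{band}}$ exactly.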
\subsection{Thresholds}
\label{sec:thresholds}
As described in Sec.~\ref{sec:jstat}, the $\mathcal{J}$-statistic is applied to the SFTs to account for the Doppler modulation of the binary signal (Eq.~\ref{eqn:jstatdoppler}) using the template defined by the orbital parameters $P$, $a_{\mathrm{0}}$, and $T_{\mathrm{asc}}$.
The Viterbi algorithm is then applied to find the best path through the $2^{20}$ frequency bins over $N_T=23$ segments for each template.
The result of the search is a Viterbi score $S$, as described in Sec.~\ref{sec:HMM}, corresponding to the most likely path for each orbital template ($P$, $a_{\mathrm{0}}$, $T_{\mathrm{asc}}$) and sub-band.
A path is a detection candidate if its Viterbi score exceeds a threshold $S_{\mathrm{th}}$ corresponding to a desired false alarm threshold.
As the distribution of $S$ in noise-only data is unknown analytically, Monte Carlo simulations are used to establish $S_{\mathrm{th}}$.
For our purposes, each sub-band is searched $N_{\mathrm{tot}}$ times as listed in Table~\ref{tab:thresholds}.
The more templates are searched, the more likely it is that a single template results in an above-threshold score due solely to statistical fluctuation (i.e. a false alarm).
The probability of experiencing a false alarm during a search is called the false alarm probability (FAP).
The FAP can be defined as the probability of experiencing a false alarm when searching a single template ($\alpha$), or the probability of a false alarm when searching all $N_{\mathrm{tot}}$ templates that constitute a search of a whole sub-band ($\alpha_{\NTot}$).
For example, if we set $\alpha = 10^{-3}$ then the FAP of a sub-band amounts to $\approx 10^{-3} N_{\mathrm{tot}}$ for $N_{\mathrm{tot}} < 100$.
The two probabilities $\alpha$ and $\alpha_{\NTot}$ are related by
\begin{equation}
\alpha_{\NTot} = 1 - \left( 1 - \alpha \right)^{N_{\mathrm{tot}}}. \label{eqn:FAP}
\end{equation}
We can therefore set $\alpha_{\NTot}$ and compute $\alpha$.
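Eq.~\ref{eqn:FAP} inverts in closed form, $\alpha = 1 - (1-\alpha_{\NTot})^{1/N_{\mathrm{tot}}}$; a minimal sketch:

```python
def per_template_fap(alpha_tot, n_tot):
    """Per-template false alarm probability alpha that yields a
    whole-sub-band false alarm probability alpha_tot over n_tot
    statistically independent templates (inverse of Eq. FAP)."""
    return 1.0 - (1.0 - alpha_tot) ** (1.0 / n_tot)
```

For small $N_{\mathrm{tot}}\alpha$ this reduces to the linear approximation $\alpha_{\NTot} \approx N_{\mathrm{tot}}\alpha$ quoted above.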
Previous, comparable, CW searches for Scorpius X-1 have set the FAP per sub-band between $0.01$ and $0.10$, which yields an expected $\sim 10$ candidates across the full band spanning $\sim 0.5\,{\rm kHz}$ and containing $\sim 10^2$ sub-bands~\cite{SearchCStatS5:2015,ScoX1ViterbiO1:2017,ScoX1ViterbiO2}.
In this paper, where we search only four sub-bands per independent target, it is arguably too conservative to set the FAP in the above range, especially when one expects that any astrophysical signal lies near the detection limit of the experiment based on spin-down and torque-balance arguments (see Refs.~\cite{Riles:2013,BildstenTB:1998,PapaloizouPringleTB:1978,WagonerTB:1984} and Sec.~\ref{sec:expectedStrain}).
We therefore set $\alpha_{\NTot} = 0.30$ in the analysis which follows.
Looking forward to the results in Sec.~\ref{sec:results}, it turns out that $\alpha_{\NTot}=0.30$ yields ten above-threshold candidates in four of the targets, consistent with expectations, which are then screened by the vetoes in Sec.~\ref{sec:vetos}.
Ten candidates is a modest number which keeps the veto workload manageable, while enhancing the chance of a detection.
Naturally the reader is invited to set the FAP lower if desired.
For example, if one sets the FAP to $0.10$ per sub-band, the search yields zero above-threshold candidates across all five targets (see Sec.~\ref{sec:results}), again consistent with expectations.
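For orientation, a back-of-envelope estimate of the expected number of sub-bands containing at least one false alarm, assuming the $5\times 4 = 20$ sub-bands are independent (an idealisation):

```python
n_subbands = 5 * 4   # five targets, four sub-bands each
# expected number of sub-bands containing at least one false alarm
expected = {fap: fap * n_subbands for fap in (0.30, 0.10)}
```

This gives an expectation of six contaminated sub-bands at $\alpha_{\NTot}=0.30$ and two at $0.10$.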
It now remains to determine $S_{\mathrm{th}}$, which we do in two ways: in synthetic Gaussian noise, and in real data to represent the detector noise more faithfully.
We generate Gaussian noise realisations for each target and sub-band to calculate the Gaussian noise threshold $S_\mathrm{th}^{\mathrm{G}}$ directly.
The realisations are generated using the tool \texttt{lalapps\_Makefakedata\_v4} in the LIGO Scientific Collaboration Algorithm Library (LAL)~\cite{lalsuite}\footnote{In the O2 Scorpius X-1 search~\cite{ScoX1ViterbiO2}, realisations were computed in seven sub-bands covering $60$--$650\,\mathrm{Hz}$, and $S_{\mathrm{th}}$ was extrapolated for computational efficiency.}.
We use $100$ realisations for each target and sub-band and the search is performed for each realisation.
The $\alpha_{\NTot} = 0.30$ threshold from Gaussian noise realisations is shown in the sixth column of Table~\ref{tab:thresholds} for each target and sub-band.
Typically $S_\mathrm{th}^{\mathrm{G}}$ increases with $N_{\mathrm{tot}}$, after allowing for statistical variations between sub-bands.
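One way to read a threshold off such realisations is as an empirical quantile of the per-realisation maximum scores; a sketch (our helper, not the pipeline's actual code):

```python
import math

def threshold_from_noise(max_scores, fap=0.30):
    """Empirical (1 - fap) quantile of the maximum Viterbi score found
    in each noise-only realisation of a sub-band search: the returned
    threshold is exceeded in a fraction ~fap of the realisations."""
    s = sorted(max_scores)
    k = math.ceil((1.0 - fap) * len(s)) - 1
    return s[k]
```

With $100$ realisations and $\alpha_{\NTot}=0.30$, this picks the $70$th smallest maximum score, so $30$ of the $100$ realisations exceed it.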
In reality, the LIGO noise is not Gaussian; it contains persistent harmonic features (lines).
Some bands are particularly corrupted.
In order to correct for this, we also perform the search at each $P$, $a_{\mathrm{0}}$, and $T_{\mathrm{asc}}$ template and sub-band for $100$ random off-target sky positions (varying RA and Dec) using the real O2 data.
The off-target thresholds per sub-band $S_\mathrm{th}^{\mathrm{OT}}$ are higher than the Gaussian thresholds if the sub-band is noisy.
The results for $\alpha_{\NTot} = 0.30$ are shown in the seventh column of Table~\ref{tab:thresholds}.
The scores in columns six and seven are rounded down to one decimal place to avoid rejecting marginal templates due to rounding errors.
We see that there is little difference between $S_\mathrm{th}^{\mathrm{G}}$ and $S_\mathrm{th}^{\mathrm{OT}}$ except in the $\HETEaFBc\,\mathrm{Hz}$ and $\IGRaFBa\,\mathrm{Hz}$ sub-bands which contain known instrumental artefacts.
\section{Vetoes}
\label{sec:vetos}
Templates may produce Viterbi scores above the thresholds defined in Sec.~\ref{sec:thresholds}.
We examine whether there are reasonable grounds to systematically veto these candidates as non-astrophysical sources.
In Sec.~\ref{sec:vetoCriteria} we lay out the veto criteria following the method and notation of Ref.~\cite{ScoX1ViterbiO2}.
Four vetoes are copied from Refs.~\cite{ScoX1ViterbiO1:2017,ScoX1ViterbiO2}.
The off-target veto is new for this paper.
In Sec.~\ref{sec:vetoScenarios} we explain how to classify the results of vetoes 2 and 3.
\subsection{Veto descriptions}
\label{sec:vetoCriteria}
\subsubsection*{1. Known lines}
\label{sec:vetoLines}
The detector output contains many harmonic features (instrumental lines), which have been identified as noise as part of the detector characterisation process~\cite{CovasEtAl:2018}.
The physical sources of these noise lines are varied.
Sometimes the effect can be mitigated, but not always.
A candidate in proximity to a known noise line~\cite{GWOSC} at frequency $f_{\mathrm{line}}$ is vetoed, if the optimal HMM path $f_{\mathrm{\cup}}(t)$ satisfies $|f_{\mathrm{\cup}}(t) - f_{\mathrm{line}}| < 2 \pi a_{\mathrm{0}} f_{\mathrm{\cup}}(t) / P$ for any $t$ in the range $0 \leq t \leq T_{\mathrm{obs}}$.
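A sketch of this criterion (the function and the units are our assumptions: $a_{\mathrm{0}}$ in light-seconds, $P$ in seconds, frequencies in $\mathrm{Hz}$; the example numbers in the test are hypothetical):

```python
import math

def vetoed_by_known_line(f_path, f_line, a0, P):
    """Veto if any point of the optimal HMM frequency path lies within
    the orbital Doppler half-width 2*pi*a0*f/P of a known noise line."""
    return any(abs(f - f_line) < 2.0 * math.pi * a0 * f / P
               for f in f_path)
```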
\subsubsection*{2. Single interferometer}
The second veto is applied by searching the data from each interferometer separately.
If a signal is astrophysical in origin, and if it is relatively strong, it should be found in the analysis of both detectors individually.
If an astrophysical signal is relatively weak, it may be found in neither detector individually, even though it is found in the combined data.
The interpretation of the Viterbi scores for single interferometer vetoes is described in Sec.~\ref{sec:vetoScenarios}.
\subsubsection*{3. $T_{\mathrm{obs}}/2$}
The third veto is applied by splitting the data in two segments and searching the two intervals separately.
Again, if the signal is astrophysical and strong, it should be present in both halves of the data.
If the signal is weak it may be below the threshold in both halves.
The O2 data are split so that the first segment covers $140$ days and the second $90$ days.
This division is copied from Ref.~\cite{ScoX1ViterbiO2} and is chosen so that the effective observing time (accounting for the duty cycle of the interferometers) is approximately equal in the two segments.
As with the single interferometer veto, the interpretation of Viterbi scores from the $T_{\mathrm{obs}}/2$ veto is described in Sec.~\ref{sec:vetoScenarios}.
\subsubsection*{4. Off target search}
The fourth veto, which is new (cf. Refs.~\cite{ScoX1ViterbiO1:2017} and~\cite{ScoX1ViterbiO2}), is applied by searching in an off-target sky position.
If the off-target search returns a score above threshold, the origin of the signal is likely to be instrumental.
In this paper, off-target means the position of the target offset by $10\,\mathrm{m}$ in RA and by $10'$ in Dec.
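In angular units these offsets correspond to (assuming $10\,\mathrm{m}$ denotes ten minutes of right ascension):

```python
# 10 minutes of right ascension: 24 h of RA spans 360 deg
ra_offset_deg = 10.0 * (360.0 / 24.0) / 60.0   # = 2.5 deg
# 10 arcminutes of declination
dec_offset_deg = 10.0 / 60.0                   # ~ 0.167 deg
```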
\subsubsection*{5. $T_{\mathrm{drift}}$}
The last veto is applied by analysing the frequency wandering of the Viterbi path~\cite{ScoX1ViterbiO1:2017}.
A signal whose wandering timescale exceeds $T_{\mathrm{drift}}$ should return a higher $S$ when $T_{\mathrm{drift}}$ is increased to the observed wandering timescale.
This veto cannot be applied if the wandering timescale is already close to $T_{\mathrm{drift}}$, which is the case in Ref.~\cite{ScoX1ViterbiO2} and also for this search (see Sec.~\ref{sec:results} and Appendix~\ref{app:completeSearchResults}).
\subsection{Veto scenarios}
\label{sec:vetoScenarios}
The interpretation of the Viterbi scores under the single interferometer and $T_{\mathrm{obs}}/2$ vetoes divides into four scenarios as in Ref.~\cite{ScoX1ViterbiO2}.
We label the original score by $S_{\mathrm{\cup}}$, the threshold score (see Sec.~\ref{sec:thresholds}) by $S_{\mathrm{th}}$, and the two veto runs by $S_{\mathrm{a}}$ and $S_{\mathrm{b}}$, i.e. the scores from two individual detectors or from the two halves of the data.
\emph{Category A.} One veto search returns a sub-threshold score whilst the other is higher than the original search:
\begin{equation}
\left( S_{\mathrm{a}} < S_{\mathrm{th}} \right) \land \left( S_{\mathrm{b}} > S_{\mathrm{\cup}} \right),
\end{equation}
where $\land$ denotes Boolean AND.
If the frequency $f_{\mathrm{b}}$ associated with $S_{\mathrm{b}}$ is close to that of the original candidate $f_{\mathrm{\cup}}$,
\begin{equation}
|f_{\mathrm{\cup}} - f_{\mathrm{b}} | < 2 \pi a_{\mathrm{0}} f_{\mathrm{\cup}} / P,
\end{equation}
then we conclude that the signal is likely to be a noise artefact in one detector or one half of the data.
Category A candidates are vetoed.
\emph{Category B.} The situation is identical to Category A, except that the paths are not close, with $|f_{\mathrm{\cup}} - f_{\mathrm{b}}| > 2 \pi a_{\mathrm{0}} f_{\mathrm{\cup}} / P$.
The two veto searches may not have found the same signal in both detectors or halves of the data.
This could be because the search instead finds a noise artefact in one detector or in one half of the data, or because the signal is too weak to be detected in one interferometer or one half of the data alone.
Category B candidates cannot be vetoed.
\emph{Category C.} The candidate exceeds the threshold in both veto searches:
\begin{equation}
\left( S_{\mathrm{a}} > S_{\mathrm{th}} \right) \land \left( S_{\mathrm{b}} > S_{\mathrm{th}} \right).
\end{equation}
This could represent a strong astrophysical signal.
Equally it could represent a noise source which is common to both detectors or present in the full observing run.
Category C candidates cannot be vetoed.
\emph{Category D.} The candidate falls below the threshold in both veto searches:
\begin{equation}
\left( S_{\mathrm{a}} < S_{\mathrm{th}} \right) \land \left( S_{\mathrm{b}} < S_{\mathrm{th}} \right).
\end{equation}
The origin of the combined detection is unclear.
One possibility is that it is a weak astrophysical signal which requires the full data set to be detectable.
Category D candidates cannot be vetoed.
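The four scenarios can be collected into a small classifier (a sketch; `paths_close` stands for the frequency-proximity test of Categories A and B, Category D is taken as both veto scores falling below threshold per the prose description, and configurations outside the four categories return `None`):

```python
def veto_category(S_a, S_b, S_cup, S_th, paths_close):
    """Classify a candidate under the single-interferometer or
    T_obs/2 veto.  Returns 'A' (vetoed), 'B', 'C', 'D' (not vetoed),
    or None for configurations outside the four categories."""
    lo, hi = min(S_a, S_b), max(S_a, S_b)
    if lo > S_th and hi > S_th:
        return "C"                       # above threshold in both searches
    if lo < S_th and hi > S_cup:
        return "A" if paths_close else "B"
    if lo < S_th and hi < S_th:
        return "D"                       # below threshold in both searches
    return None
```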
\section{O2 Search results}
\label{sec:results}
In this section we present search results for the five LMXB targets listed in Table~\ref{tab:targetDetails}.
Ten templates have scores exceeding the $\alpha_{\NTot} = 0.30$ thresholds set in Sec.~\ref{sec:thresholds}.
Of these: four have scores above both thresholds (i.e. $S_{\mathrm{\cup}}>S_\mathrm{th}^{\mathrm{G}}$ and $S_{\mathrm{\cup}}>S_\mathrm{th}^{\mathrm{OT}}$), three have $S_{\mathrm{\cup}}>S_\mathrm{th}^{\mathrm{G}}$ only, and three have $S_{\mathrm{\cup}}>S_\mathrm{th}^{\mathrm{OT}}$ only.
We apply the veto procedure outlined in Sec.~\ref{sec:vetos} to the ten candidates.
There are two further templates which have scores within $0.4\%$ of $S_\mathrm{th}^{\mathrm{OT}}$ (see Figs.~\ref{fig:IGRaSearch} and~\ref{fig:SAXaSearch} in Appendix~\ref{app:completeSearchResults}).
No other templates are within $1.8\%$ of $S_\mathrm{th}^{\mathrm{OT}}$.
For completeness we add these nearly above-threshold templates as candidates to be considered by the veto procedure, although we note that they do not meet our formal $\alpha_{\NTot} = 0.30$ FAP target.
There are similarly close templates to $S_\mathrm{th}^{\mathrm{G}}$ in two other sub-bands ($503.0\,{\rm Hz}$ and $299.0\,{\rm Hz}$ for \HETEaName~and \IGRaName~respectively).
We do not include these in the veto procedure due to the presence of broad instrumental lines in these sub-bands (see Fig.~\ref{fig:HETEaSearch} and Fig.~\ref{fig:IGRaSearch} in Appendix~\ref{app:completeSearchResults}).
We follow up 12 templates in total (ten above-threshold and two nearly above-threshold).
The Viterbi scores and frequencies of the candidates are summarized in Fig.~\ref{fig:candidateSummary}.
Each marker shows the terminating frequency of a candidate's Viterbi path (i.e. $q^{\star}$ as defined in Sec.~\ref{sec:HMM}) and the associated Viterbi score.
Candidates for each target are shown by different marker shapes.
The ten above-threshold candidates and two nearly above-threshold templates are shown in orange and blue respectively.
Candidates which are removed through the veto procedure are indicated by the black-square and black-circle outlines for elimination by veto 1 and veto 3 respectively.
\begin{figure}
\begin{center}
\includegraphics[width=0.49\textwidth]{allCandidatesSummary.pdf}
\end{center}
\caption{\label{fig:candidateSummary}
Summarized search results for all LMXB targets.
The horizontal axis shows the frequency at the end of the best path for the template and the vertical axis shows the associated Viterbi score.
Each orange marker indicates a template which has resulted in a path with a Viterbi score higher than either of the Gaussian or off-target thresholds or both.
Results for \HETEaName, \IGRaName, \SAXaName, and \XTEaName~are shown by the circle, triangle, square, and diamond markers respectively.
There are no above-threshold candidates for \XTEbName.
The additional blue markers for \IGRaName~and \SAXaName~(at $\sim 400\,\mathrm{Hz}$ and $\sim 600\,\mathrm{Hz}$ respectively) indicate templates with $S_{\mathrm{\cup}} \approx S_\mathrm{th}^{\mathrm{OT}}$ which are included for safety in the face of rounding errors.
Candidates outlined with a black square or circle indicate templates vetoed by veto 1 (known lines) and veto 3 ($T_{\mathrm{obs}}/2$) respectively.
We note that there are two almost identical \IGRaName~markers at $S_{\mathrm{\cup}}\approx 8.3$, $f\approx 300\,\mathrm{Hz}$ (indistinguishable in the figure).
Both are eliminated by veto 1.
}
\end{figure}
The FAP per sub-band in this paper is deliberately set higher than in previous, comparable, CW searches for Scorpius X-1, because the total number of sub-bands is lower, as noted in Sec.~\ref{sec:thresholds}.
If we set the FAP per sub-band to $0.20$ instead of $0.30$, all candidates except three fall below the Gaussian and off-target thresholds and do not graduate to the veto stage.
One of these candidates is the highest scoring template in the \IGRaName~$f_{\star}/2$ sub-band ($S_{\mathrm{\cup}}\approx 8.5$), which is eliminated by veto 1.
The other two candidates are the two highest scoring templates in the \HETEaName~$2f_{\star}$ sub-band ($S_{\mathrm{\cup}}\approx 8.5$ and $S_{\mathrm{\cup}}\approx 8.8$) which survive the veto procedure.
If we set the FAP per sub-band to $0.10$ instead of $0.30$, zero candidates exceed the Gaussian and off-target thresholds.
The reader is invited to experiment with various choices of FAP when reproducing the results.
The search uses a combination of central processing unit (CPU) and graphical processing unit (GPU) computation.
We work with a GPU implementation of the $\mathcal{J}$-statistic identical to the one in Ref.~\cite{ScoX1ViterbiO2}.
We use the computing facilities of the OzSTAR supercomputer~\cite{OzSTAR}.
OzSTAR has compute nodes with Xeon Gold 6140 CPUs running at $2.3\,{\rm GHz}$~\cite{CPU:Intel} and NVIDIA P100 12GB PCIe GPU cards benchmarked at $\approx 9.3$ TeraFLOPS (floating point operations per second) single-precision performance~\cite{GPU:NVIDIA}.
On OzSTAR the search takes $\approx 5$ CPU-hours and $\approx 23$ GPU-hours (every GPU-hour also requires one CPU-hour).
Determining the false alarm thresholds takes $\approx 1000$ CPU-hours and $\approx 4600$ GPU-hours to perform the $100$ searches of Gaussian noise realisations and the $100$ off-target searches of real data.
In sections~\ref{sec:HETEaResults}--\ref{sec:XTEaResults} we summarize the search results for each of the five targets.
The results are laid out in full in Fig.~\ref{fig:HETEaSearch} for one target only, namely \HETEaName, to guide the reader without cluttering the main body of the paper.
The complete search results, including veto outcomes and optimal Viterbi paths are collated for reference and reproducibility in Appendix~\ref{app:completeSearchResults}.
The O2 search returns five veto survivors.
A search of LIGO O1 data narrowly targeted at the templates in the sub-bands containing the five veto survivors shows no support for an astrophysical signal.
\subsection{\HETEaName}
\label{sec:HETEaResults}
\begin{figure*}
\begin{center}
\includegraphics[width=.49\textwidth]{HETEJ1900-188p6-freq-vscore.pdf}
\includegraphics[width=.49\textwidth]{HETEJ1900-377-freq-vscore.pdf}\\
\includegraphics[width=.49\textwidth]{HETEJ1900-503-freq-vscore.pdf}
\includegraphics[width=.49\textwidth]{HETEJ1900-754p4-freq-vscore.pdf}
\end{center}
\caption{\label{fig:HETEaSearch}
Search results for \HETEaName~at the four sub-bands corresponding to $f_{\star}/2$, $f_{\star}$, $4f_{\star}/3$, and $2f_{\star}$ in panels (a), (b), (c), and (d) respectively.
Each point marks the terminating frequency of the best path for a template (vertical axis) and the associated Viterbi score (horizontal axis).
The vertical lines indicate $S_{\mathrm{th}}$ for a false alarm probability of $0.30$ per sub-band determined from $100$ Gaussian noise realisations $S_\mathrm{th}^{\mathrm{G}}$ (red-dot-dash) and $100$ off-target searches $S_\mathrm{th}^{\mathrm{OT}}$ (green-dash).
The orange and blue horizontal stripes indicate known instrumental lines in the Hanford and Livingston observatories respectively; the solid-line indicates where the instrumental line peaks and the shading indicates its width.
In the case of the sub-band starting at $503.0~\mathrm{Hz}$, broad instrumental lines cover the whole frequency range of the plot.
Where there is a loud instrumental line in a sub-band, the search is likely to select paths very close to the instrumental line.
This means that the paths for a contaminated sub-band can lie entirely within the instrumental line.
The instrumental lines in sub-band $503.0~\mathrm{Hz}$ are due to violin mode resonances in the detector mirror suspensions~\cite{GWOSC}.
There are two candidates above both $S_\mathrm{th}^{\mathrm{G}}$ and $S_\mathrm{th}^{\mathrm{OT}}$ in the $2f_{\star}$ sub-band (bottom-right).
Similar plots for the other four targets can be found in Appendix~\ref{app:completeSearchResults}.
}
\end{figure*}
The search for \HETEaName~returns two candidates as shown in Fig.~\ref{fig:HETEaSearch}.
Each marker in Fig.~\ref{fig:HETEaSearch} shows the terminating frequency of the best path for a template and associated Viterbi score.
The vertical red-dot-dash and green-dash lines show the $S_\mathrm{th}^{\mathrm{G}}$ and $S_\mathrm{th}^{\mathrm{OT}}$ thresholds respectively (for $\alpha_{\NTot} = 0.30$).
A marker with a higher Viterbi score than the threshold lines indicates an above-threshold candidate.
The four panels in Fig.~\ref{fig:HETEaSearch} show the search results in each sub-band: $f_{\star}/2$ (top-left), $f_{\star}$ (top-right), $4f_{\star}/3$ (bottom-left), and $2f_{\star}$ (bottom-right).
There are two candidates in the $2f_{\star}$ sub-band and zero candidates in the other three sub-bands.
The $4f_{\star}/3$ sub-band is noisy.
The horizontal lines and shaded bands in the bottom-left panel show where there are instrumental lines.
The solid line indicates the peak of the instrumental line and the transparent shading shows its extent (see Refs.~\cite{GWOSC,CovasEtAl:2018}).
Instrumental lines in the Hanford and Livingston data are shown in orange and blue respectively.
The instrumental lines in the $4f_{\star}/3$ sub-band are due to violin modes of the Hanford and Livingston detectors.
The Hanford line peaks at $503.11\,\mathrm{Hz}$ with a range $503.10$--$503.13\,\mathrm{Hz}$ and the Livingston line peaks at $503.1825\,\mathrm{Hz}$ with range $502.9500$--$503.3000\,\mathrm{Hz}$ (covering the entire plotted region)~\cite{GWOSC}.
The candidates in the $2f_{\star}$ sub-band are above both $S_\mathrm{th}^{\mathrm{G}}$ and $S_\mathrm{th}^{\mathrm{OT}}$.
Both candidates survive vetoes 1, 2, 3, and 4.
Veto 5 is not applicable as the frequency wandering timescales of the candidate paths are $\approx T_{\mathrm{drift}}$ (see Fig.~\ref{fig:HETEaPaths} in Appendix~\ref{app:completeSearchResults}).
The frequency paths, Viterbi scores, and veto outcomes for these candidates are further detailed in Appendix~\ref{app:completeSearchResults}.
\subsection{\IGRaName}
\label{sec:IGRaResults}
The search for \IGRaName~returns four candidates.
Three are above $S_\mathrm{th}^{\mathrm{G}}$ and one is above $S_\mathrm{th}^{\mathrm{OT}}$ (none are above both thresholds).
We include a fifth nearly above-threshold template in the veto procedure due to its proximity to $S_\mathrm{th}^{\mathrm{OT}}$ in the face of rounding errors.
The search results, frequency paths, and veto outcomes are shown in Appendix~\ref{app:completeSearchResults} (Fig.~\ref{fig:IGRaSearch}, Fig.~\ref{fig:IGRaPaths}, and Table~\ref{tab:vetoOutcomes} respectively).
Three of the candidates are in the $f_{\star}/2$ sub-band.
They have scores which are above $S_\mathrm{th}^{\mathrm{G}}$ and below $S_\mathrm{th}^{\mathrm{OT}}$.
All three candidates are eliminated by veto 1 due to a broad instrumental line in the Hanford data (see Fig.~\ref{fig:IGRaSearch} in Appendix~\ref{app:completeSearchResults}).
The other above-threshold candidate is in the $2f_{\star}$ sub-band with $S_{\mathrm{\cup}} > S_\mathrm{th}^{\mathrm{OT}}$ and $S_{\mathrm{\cup}} < S_\mathrm{th}^{\mathrm{G}}$.
It survives vetoes 1--4.
Veto 5 is not applicable.
The nearly above-threshold template is in the $f_{\star}$ sub-band.
It survives vetoes 1--4 and veto 5 is not applicable.
\subsection{\SAXaName}
\label{sec:SAXaResults}
The search for \SAXaName~finds two candidates.
A third template almost equals the threshold, and we include it in the veto procedure.
See Appendix~\ref{app:completeSearchResults} for full search results, candidate frequency paths, and veto outcomes.
One candidate is in the $f_{\star}/2$ sub-band with $S_{\mathrm{\cup}}<S_\mathrm{th}^{\mathrm{G}}$ and $S_{\mathrm{\cup}}>S_\mathrm{th}^{\mathrm{OT}}$.
It is eliminated by veto 1 due to proximity to an instrumental line in the Livingston data (see Fig.~\ref{fig:SAXaSearch} in Appendix~\ref{app:completeSearchResults}).
There is one candidate in the $4f_{\star}/3$ sub-band.
It is close to, but just above, both thresholds ($S_{\mathrm{\cup}}\gtrsim S_\mathrm{th}^{\mathrm{G}}$ and $S_{\mathrm{\cup}}\gtrsim S_\mathrm{th}^{\mathrm{OT}}$).
The candidate survives vetoes 1--4 and veto 5 is not applicable.
The nearly above-threshold template is in the $f_{\star}$ sub-band with $S_{\mathrm{\cup}}\approx S_\mathrm{th}^{\mathrm{OT}}$.
It survives vetoes 1--4 and veto 5 is not applicable.
\subsection{\XTEbName}
\label{sec:XTEbResults}
The search for \XTEbName~returns zero candidates with $S_{\mathrm{\cup}}>S_\mathrm{th}^{\mathrm{G}}$ or $S_{\mathrm{\cup}}>S_\mathrm{th}^{\mathrm{OT}}$.
Full search results for \XTEbName~are shown in Fig.~\ref{fig:XTEbSearch} in Appendix~\ref{app:completeSearchResults}.
\subsection{\XTEaName}
\label{sec:XTEaResults}
The \XTEaName~search returns two candidates.
One candidate is in the $f_{\star}$ sub-band with $S_{\mathrm{\cup}}<S_\mathrm{th}^{\mathrm{G}}$ and $S_{\mathrm{\cup}}>S_\mathrm{th}^{\mathrm{OT}}$.
It survives vetoes 1 and 2.
It is eliminated by veto 3 as the signal is found to be stronger in the later part of O2.
The other candidate, in the $2f_{\star}$ sub-band, scores above both thresholds.
It survives vetoes 1--4.
Veto 5 is not applicable.
See Appendix~\ref{app:completeSearchResults} for full search results.
\section{Expected GW strain from LMXBs}
\label{sec:expectedStrain}
\begin{table*}[t]
\caption{\label{tab:expectedStrain}
Spin-down limits on the maximum GW strain inferred from EM observations for \IGRaName, \SAXaName, \XTEbName~and \XTEaName~(\HETEaName~is excluded as there is no frequency derivative measurement).
The first column summarizes the target name and estimated distance $D$.
The second column indicates whether the $\dot{f}_{\star}$ observation is from an active or quiescent phase.
The third and fourth columns show the observed frequency derivative $\dot{f}_{\star}$ and the associated spin-down limit $h_{\mathrm{0,sd}}$ respectively using the minimum $D$ from the first column.
The final two columns reference the data used for the estimate.
The $h_{\mathrm{0,sd}}$ limits marked with * indicate $\dot{f}_{\mathrm{act}}>0$ where the assumption $\dot{f}_{\mathrm{gw}} \approx -\dot{f}_{\mathrm{act}}$ is made.
}
\begin{tabular}{lrrlll}
\hline
\hline
\\
Target Details & Active or & ~Freq. derivative~ & ~Spin-down~ & ~Notes & ~Ref. \\
& quiescent & ~$\dot{f}$ $(\mathrm{Hz}~\mathrm{s}^{-1})$~ & ~limit $h_{\mathrm{0,sd}}$~ & \\
\\
\hline
\\
\IGRaName & Quiescent & $-4.1 (1.4)\times 10^{-15}$& ~$ 5.2 \times 10^{-28}$ & Before 2008 outburst & \cite{IGRJ00291PapittoEtAl:2011} \\
$4<D/\mathrm{kpc}<6$ ~\cite{GallowayEtAl:2005,Galloway:2006} & Active & $+3(5)\times 10^{-12}$ & ~$ 1.4 \times 10^{-26}$* & 2015 outburst & \cite{IGRJ00291SannaEtAl:2017} \\
& Active & $+5.1 (4)\times 10^{-15}$ & ~$ 5.8 \times 10^{-28}$* & 2008 outburst & \cite{IGRJ00291PapittoEtAl:2011} \\
\\
\hline
\\
\SAXaName & Quiescent & $-5.5(1.2)\times 10^{-16}$ & ~$ 2.7 \times 10^{-28}$ & Five outbursts up to 2008 & \cite{SAXJ1808HartmanEtAl:2009}\\
$D\approx 3.4$--$3.6~\mathrm{kpc}$~\cite{SAXJ1808GallowayCumming:2006} & Quiescent & $-1.65(20)\times 10^{-15}$ & ~$ 4.7 \times 10^{-28}$ & Six outbursts up to 2011 & \cite{SAXJ1808PatrunoEtAl:2012}\\
& Active & $+2.6(3)\times 10^{-11}$ & ~$ 5.9 \times 10^{-26}$* & 2015 outburst (XMM-Newton data) & \cite{SAXJ1808SannaEtAl:2017} \\
& Active & $+1.1(3)\times 10^{-10}$ & ~$ 1.2 \times 10^{-25}$* & 2015 outburst (NuSTAR data) & \cite{SAXJ1808SannaEtAl:2017} \\
& Quiescent & $-1.5(2)\times 10^{-15}$ & ~$ 4.5 \times 10^{-28}$ & Long-term spin-down & \cite{SAXJ1808SannaEtAl:2017} \\
\\
\hline
\\
\XTEbName & Active & $-9.2(4) \times 10^{-14}$ & ~$ 2.4\times 10^{-27}$ & 2002 outburst & \cite{XTEJ0929Discovery:2002} \\
$D > 7.4\,\mathrm{kpc}$~\cite{XTEJ0929Distance:2017} & \\
\\
\hline
\\
\XTEaName & Active & $-6.7 (7)\times 10^{-14}$ & ~$ 3.0 \times 10^{-27}$ & 2003 outburst & \cite{XTEJ1814PapittoEtAl:2007} \\
$D \approx 3.8$--$8~\mathrm{kpc}$~\cite{XTEJ1814KraussEtAl:2005,XTEJ1814:StrohmayerEtAl:2003} & \\
\\
\hline
\hline
\end{tabular}
\end{table*}
In view of the results in Sec.~\ref{sec:results}, it is useful to ask how strong the signal from a particular source is expected to be, given the EM information available.
From Eq. (52) in Ref.~\cite{Riles:2013}, the indirect spin-down limit on the maximum GW strain $h_{\mathrm{0,sd}}$ is
\begin{align}
h_{\mathrm{0,sd}} =& ~2.5\times 10^{-25} \left( \frac{1\,{\rm kpc}}{D} \right) \nonumber \\
& \times \left[ \left( \frac{1\,{\rm kHz}}{f_{\mathrm{gw}}} \right) \left( \frac{- \dot{f}_{\mathrm{gw}}}{10^{-10}\,\mathrm{Hz}~\mathrm{s}^{-1}} \right) \left( \frac{I_{zz}}{I_0}\right) \right]^{1/2},
\label{eqn:hspindownlim}
\end{align}
where $f_{\mathrm{gw}}$ and $\dot{f}_{\mathrm{gw}}$ are the GW frequency and frequency derivative respectively, $D$ is the distance to the source, $I_{zz}$ is the $zz$ component of the moment-of-inertia tensor, and $I_0$ is the moment of inertia of the un-deformed star.
For our purposes, we make the assumptions $I_{zz}/I_0 \approx 1$, $f_{\mathrm{gw}} \propto f_{\star}$, and $\dot{f}_{\mathrm{gw}} \propto \dot{f}_{\star}$.
As described in Sec.~\ref{sec:targets}, EM observations constrain $f_{\star}$ and $\dot{f}_{\star}$ during both active and quiescent phases.
Many of our targets have a range of estimates from observations of different phases.
During active periods, the neutron star typically spins up, although this is not always the case; the hydromagnetic accretion torque can be negative~\cite{GhoshLambPethick:1977}.
In quiescence, the neutron star typically spins down.
In Table~\ref{tab:expectedStrain}, we collate estimates of $\dot{f}_{\star}$ for each of the targets.
\IGRaName~and \SAXaName~have several values of $\dot{f}_{\star}$ estimated from observations of different active and quiescent phases.
These two targets both follow the typical picture of active spin up ($\dot{f}_{\mathrm{act}} > 0$) and quiescent spin down ($\dot{f}_{\mathrm{qu}} < 0$).
\XTEbName~and \XTEaName~have only a single observed active phase and therefore each have only one $\dot{f}_{\star}$ estimate.
Both exhibit active spin down, unlike \IGRaName~and \SAXaName~which show active spin up.
\HETEaName~is excluded from this calculation, as there is no $\dot{f}_{\star}$ measurement from the short time it exhibited pulsations.
Many of the targets have a range of $D$ estimates, which we summarize in the first column of Table~\ref{tab:expectedStrain}.
To calculate the maximum $h_{\mathrm{0,sd}}$, we use the minimum $D$ for each target.
For $\dot{f}_{\star} < 0$, we compute $h_{\mathrm{0,sd}}$ assuming $\dot{f}_{\mathrm{gw}} \approx \dot{f}_{\star}$.
The $h_{\mathrm{0,sd}}$ estimates are collected in the fourth column of Table~\ref{tab:expectedStrain}.
We find that the maximum $h_{\mathrm{0,sd}}$ for $\dot{f}_{\star} < 0$ comes from the active phases of \XTEbName~and \XTEaName~with $h_{\mathrm{0,sd}} \lesssim 2.4\times 10^{-27}$ and $\lesssim 3.0\times 10^{-27}$ respectively.
For comparison, the Scorpius X-1 O2 search set an upper limit on the detectable wave strain of $3.47 \times 10^{-25}$ at $194.6\,\mathrm{Hz}$ with $95\%$ confidence~\cite{ScoX1ViterbiO2}.
Equation~\ref{eqn:hspindownlim} requires $\dot{f}_{\star} < 0$.
For $\dot{f}_{\star} > 0$, we can make a different order of magnitude estimate for $h_{\mathrm{0,sd}}$.
When the star is observed to spin up, the positive net torque (dominated by accretion) may mask a negative gravitational radiation reaction torque of a similar order of magnitude.
In principle, $\dot{f}_{\mathrm{act}}>0$ allows for an arbitrarily large frequency derivative due to accretion, i.e. $\dot{f}_{\mathrm{acc}} > 0$ can be as large as one wishes, as long as we also have $|\dot{f}_{\mathrm{gw}}| = \dot{f}_{\mathrm{acc}} - \dot{f}_{\mathrm{act}}$.
One arguably plausible scenario, without excessive fine tuning, is $\dot{f}_{\mathrm{act}} \sim \dot{f}_{\mathrm{acc}} \sim |\dot{f}_{\mathrm{gw}}|$.
On the other hand, for $\dot{f}_{\mathrm{act}} = \dot{f}_{\mathrm{acc}} - |\dot{f}_{\mathrm{gw}}| < 0$, one must have $|\dot{f}_{\mathrm{gw}}| \geq |\dot{f}_{\mathrm{act}}|$, and setting $|\dot{f}_{\mathrm{gw}}| = |\dot{f}_{\mathrm{act}}|$ yields a conservative bound.
For the observations with $\dot{f}_{\star} \geq 0$, we therefore estimate $h_{\mathrm{0,sd}}$ assuming $\dot{f}_{\mathrm{gw}} = -\dot{f}_{\star}$.
We find $h_{\mathrm{0,sd}}$ in the range $10^{-28}$ to $10^{-25}$ for the active phases of \IGRaName~and \SAXaName.
None of the targets were active during O2.
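The numbers in this section can be reproduced with a short script. The sketch below assumes the standard spin-down-limit expression $h_{\mathrm{0,sd}} = D^{-1}\sqrt{5\,G\,I_{zz}\,|\dot{f}_{\mathrm{gw}}|/(2 c^{3} f_{\mathrm{gw}})}$, which is consistent with the bracketed scalings in Eq.~\ref{eqn:hspindownlim}; the function name and fiducial inputs are illustrative rather than taken from the analysis pipeline.

```python
import math

G = 6.674e-11    # Newton's constant [m^3 kg^-1 s^-2]
C = 2.998e8      # speed of light [m/s]
KPC = 3.086e19   # one kiloparsec [m]

def h0_spindown(f_gw_hz, fdot_gw, distance_kpc, i_zz=1e38):
    """Spin-down strain limit: all rotational energy loss radiated as GWs.

    h = sqrt(5 G I_zz |fdot_gw| / (2 c^3 f_gw)) / D, valid for fdot_gw < 0.
    """
    if fdot_gw >= 0:
        raise ValueError("spin-down limit requires fdot_gw < 0")
    d_m = distance_kpc * KPC
    return math.sqrt(5.0 * G * i_zz * abs(fdot_gw) / (2.0 * C**3 * f_gw_hz)) / d_m

# Fiducial values matching Eq. (hspindownlim): 1 kHz, -1e-10 Hz/s, I_zz = I_0.
h_fid = h0_spindown(1e3, -1e-10, 1.0)  # ~2.5e-25 at D = 1 kpc
```

Since $h_{\mathrm{0,sd}} \propto 1/D$, using the minimum distance estimate for each target, as done above, maximizes the limit.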
\begin{table}
\caption{\label{tab:torqueBalance}
Torque-balance limit $h_{\mathrm{torque}}$ on the GW strain based on EM observations.
The second column is the long-term X-ray flux from Table 1 of Ref.~\cite{GWLMXBsWattsEtAl:2008}; see also references therein.
The third column is the maximum flux from the LMXB catalogue in Ref.~\cite{LiuEtAlLMXBCatalogue:2007}.
The final column shows the estimated range of $h_{\mathrm{torque}}$ for the listed fluxes calculated using $f_{\mathrm{gw}}=2f_{\star}$ in Eq.~\ref{eqn:htorque}.
}
\begin{center}
\begin{tabular}{p{3.2cm}p{1.9cm}p{1.8cm}p{1.3cm}}
\hline
\hline
\\
Target & \multicolumn{2}{l}{Flux ($\times 10^{-8}\mathrm{erg}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$)} & $h_{\mathrm{torque}}$ \\
& Long-term & Maximum & ($\times 10^{-27}$) \\
\\
\hline
\\
\HETEaName & $0.18$ & $0.238$ & $1.9$--$2.2$ \\
\IGRaName & $0.018$ & $0.281$ & $0.5$--$1.9$ \\
\SAXaName & $0.086$ & $0.211$ & $1.3$--$2.0$ \\
\XTEbName & $0.027$ & $0.069$ & $1.0$--$1.7$ \\
\XTEaName & $0.013$ & $0.025$ & $0.6$--$0.8$ \\
\\
\hline
\hline
\end{tabular}
\end{center}
\end{table}
We can also calculate the maximum signal strength based on the observed X-ray flux assuming that the accretion and gravitational radiation reaction torques balance each other.
The torque-balance limit is~\cite{Riles:2013, BildstenTB:1998, PapaloizouPringleTB:1978, WagonerTB:1984},
\begin{align}
\label{eqn:htorque}
h_{\mathrm{torque}} = & ~5 \times 10^{-27} \nonumber \\
& \times \sqrt{ \left(\frac{600\,\mathrm{Hz}}{f_{\mathrm{gw}}}\right) \left( \frac{F_{\mathrm{X}}}{10^{-8} \mathrm{erg}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}} \right) },
\end{align}
where $F_{\mathrm{X}}$ is the X-ray flux.
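Eq.~\ref{eqn:htorque} depends only on the GW frequency and the X-ray flux, so the entries in the final column of Table~\ref{tab:torqueBalance} follow from a one-line evaluation. A minimal sketch (the function name is illustrative):

```python
import math

def h_torque(f_gw_hz, flux_cgs):
    """Torque-balance strain limit:
    h = 5e-27 * sqrt((600 Hz / f_gw) * (F_X / 1e-8 erg cm^-2 s^-1))."""
    return 5e-27 * math.sqrt((600.0 / f_gw_hz) * (flux_cgs / 1e-8))

# At the fiducial values the prefactor is recovered directly:
h_fid = h_torque(600.0, 1e-8)  # 5e-27
```

Because $h_{\mathrm{torque}} \propto \sqrt{F_{\mathrm{X}}}$, the long-term and maximum fluxes bound the range quoted for each target.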
In the second and third column of Table~\ref{tab:torqueBalance} we list the long-term flux from Table 1 of Ref.~\cite{GWLMXBsWattsEtAl:2008} and the maximum flux from the LMXB catalogue in Ref.~\cite{LiuEtAlLMXBCatalogue:2007} respectively.
The final column shows the range of $h_{\mathrm{torque}}$ given these values.
For some of the targets, the $h_{\mathrm{torque}}$ limits are lower than the $h_{\mathrm{0,sd}}$ limits in Table~\ref{tab:expectedStrain} during active phases.
However the numbers are comparable to $h_{\mathrm{0,sd}}$ in quiescence.
\section{Conclusions}
\label{sec:conclusions}
In this paper we present results of a search for continuous gravitational waves from five LMXBs in the LIGO O2 dataset.
The search uses a hidden Markov model to track spin wandering and the $\mathcal{J}$-statistic matched filter to track orbital phase.
The LMXBs have electromagnetically measured pulsation frequencies, thereby restricting the parameter space relative to searches for other objects like Scorpius X-1~\cite{ScoX1ViterbiO2,ScoX1ViterbiO1:2017,SearchFStatS2:2007,SearchCrossCorrO1:2017}.
A Gaussian threshold is set using searches on $100$ realisations of Gaussian noise.
An off-target threshold is set by searching the O2 dataset in $100$ random off-target sky positions.
We find no candidates above a threshold corresponding to a $0.10$ FAP per sub-band.
We find ten candidates above a threshold corresponding to a $0.30$ FAP per sub-band.
After applying vetoes we are left with five candidates (two for \HETEaName~and one each for \IGRaName, \SAXaName, and \XTEaName).
The survivors are marginally above the $\alpha_{\NTot} = 0.30$ threshold, exceeding it by less than $\approx 0.4$ in Viterbi score.
The number of survivors is statistically consistent with the number of false alarms expected from a FAP of $0.30$ per sub-band for $20$ sub-bands (i.e. $0.30 \times 20 = 6$).
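As a rough illustration of this consistency check (not part of the search procedure), the false-alarm count across independent sub-bands can be approximated as Poisson with mean $6$, under which observing five or fewer candidates is unremarkable:

```python
import math

def poisson_cdf(k, lam):
    """P(X <= k) for X ~ Poisson(lam)."""
    return math.exp(-lam) * sum(lam**i / math.factorial(i) for i in range(k + 1))

expected = 0.30 * 20               # 6.0 expected false alarms
p_le_5 = poisson_cdf(5, expected)  # ~0.45: entirely consistent with noise
```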
It is premature to speculate about the nature of the surviving candidates.
We recommend that they be followed up in future observations, including LIGO-Virgo Observing Run 3.
\section*{Acknowledgements}
\label{sec:acknowledgements}
The authors are grateful to
Sofia Suvorova, William Moran, and Robin Evans for their past developmental work on HMMs for continuous wave searches and also to them and Margaret Millhouse, Patrick Meyers, and Julian Carlin for helpful discussions including advice on the off-target threshold and veto procedure;
Shanika Galaudage and Duncan Galloway for advice on selecting LMXB targets and locating the most accurate EM measurements of the targets' parameters in the literature;
and Ling Sun for helpful comments on the manuscript.
We also thank the Continuous Wave Working Group of the LIGO Scientific Collaboration and Virgo Collaboration for their useful discussion.
This research is supported by the Australian Research Council Centre of Excellence for Gravitational Wave Discovery (OzGrav) (project number CE170100004).
This work used computational resources of the OzSTAR national facility at Swinburne University of Technology and also at the California Institute of Technology.
OzSTAR is funded by Swinburne University of Technology and the National Collaborative Research Infrastructure Strategy (NCRIS).
This research has made use of data, software and/or web tools obtained from the Gravitational Wave Open Science Center (https://www.gw-openscience.org), a service of LIGO Laboratory, the LIGO Scientific Collaboration and the Virgo Collaboration.
LIGO is funded by the U.S. National Science Foundation.
Virgo is funded by the French Centre National de Recherche Scientifique (CNRS), the Italian Istituto Nazionale della Fisica Nucleare (INFN) and the Dutch Nikhef, with contributions by Polish and Hungarian institutes.
This work has been assigned LIGO document number P1900273.
\bibliographystyle{myunsrt}
| 26,490 |
\section{Introduction}
\subsection{Motivation and history.}
The Hecke orbit conjecture predicts that Hecke symmetries characterize the central foliation on Shimura varieties over an algebraically closed field $k$ of characteristic $p$: on the mod $p$ reduction of a Shimura variety, any prime-to-$p$ Hecke orbit should be dense in the central leaf containing it. In the 1990s, Oort introduced the idea of studying the locus obtained by fixing the geometric isomorphism type of the associated Barsotti-Tate group. Such a locus is called a central leaf, in the sense of Oort. By definition, a central leaf is stable under prime-to-$p$ Hecke correspondences and naturally contains the prime-to-$p$ Hecke orbit of any point in it. An isogeny leaf, on the other hand, is an orbit under geometric $p$-isogenies. Oort showed that central leaves and isogeny leaves almost give a product structure on an irreducible component of a Newton polygon stratum. Moreover, central leaves define a partition of their ambient Newton stratum into (possibly infinitely many) locally-closed, smooth subvarieties. Within any given Newton stratum, all central leaves have the same dimension and are related to each other via finite isogeny correspondences. In this sense, central leaves form a ``foliation'' of a given Newton stratum, and so do isogeny leaves. Isogeny leaves and central leaves therefore lie transversely to each other and characterize the geometry of the Newton stratum they are in.
Given any closed geometric point on a Shimura variety, its orbit under prime-to-$p$ Hecke correspondences naturally sits inside the central leaf passing through that point. The Hecke orbit conjecture draws a parallel between central leaves and isogeny leaves: just as each isogeny leaf coincides with a fixed geometric $p$-isogeny class, central leaves should also almost coincide with a fixed prime-to-$p$ isogeny class. In \cite[Problem 18]{MR1812812}, Chai and Oort predicted that prime-to-$p$ Hecke orbits on Siegel modular varieties are as large as possible. That is to say, every prime-to-$p$ Hecke orbit is Zariski dense in the central leaf containing it. This also means that central leaves are minimal subvarieties stable under all prime-to-$p$ Hecke correspondences. Here we state the more general version of the conjecture for PEL type Shimura varieties.
\begin{conj}\cite[Conjecture 3.2]{chai2006hecke}\label{conj}
Every prime-to-$p$ Hecke orbit on a PEL type Shimura variety over $k$ is dense in the central leaf containing it.
\end{conj}
In 1995, Chai proved the conjecture for ordinary points (\cite[Theorem 2]{chai1995every}) on Siegel modular varieties. In 2019, following a strategy similar to that in \cite{chai1995every}, Rong Zhou (\cite[Theorem 3.1.3]{zhou2019motivic}) proved the Hecke orbit conjecture for the $\mu$-ordinary locus on quaternionic Shimura surfaces and their associated unitary Shimura varieties.
The first known case of the full conjecture was proven by C.-F. Yu for Hilbert modular varieties \cite{yu2004discrete}. Using the statement for Hilbert modular varieties, Chai and Oort proved in 2005 that the conjecture holds for Siegel modular varieties (\cite[Theorem 13.1]{chai2005hecke}).
The present paper concerns the case of Shimura varieties of PEL type. They are moduli spaces of abelian varieties in characteristic $p$ with prescribed additional structures: a polarization, an action by a finite dimensional $\mathbb{Q}$-algebra, and a level structure. We generalize the method of \cite{chai2005hecke} to PEL type Shimura varieties in the applicable situations.
\subsection{Overview of results.}
We fix a rational prime $p$ and let $k$ be an algebraically closed field of characteristic $p.$ Let $\mathcal{D}=(B,\mathcal{O}_B,*,V,(\cdot,\cdot), h)$ be a Shimura datum of PEL type (see \ref{2.1} and \cite{kottwitz1992points} for details) for which $p$ is an unramified prime of good reduction. Write $F$ for the center of $B$ and $F_0$ for the maximal subfield of $F$ fixed under $*$.
The main result of this paper is the following theorem. We refer the readers to Theorem \ref{mainthm} for the precise statement.
\begin{thm}
Let $\mathscr{S}$ be the reduction modulo $p$ of a Shimura variety of PEL type A or C over $k$ for which $p$ is an unramified prime of good reduction. Let $\mathcal{N}$ be a Newton stratum. Assume \begin{enumerate}
\item $\mathcal{N}$ contains a $B$-hypersymmetric point $x_0$;
\item either $p$ is totally split in $F/F_0$ and the Newton polygon of $\mathcal{N}$ satisfies condition (*), or $p$ is inert in $F/F_0$.
\end{enumerate}
Write $\mathcal{N}^0$ for the irreducible component of $\mathcal{N}$ containing $x_0.$ Then $H^p(x)$ is dense in $C(x)\cap\mathcal{N}^0$ for every $x\in\mathcal{N}^0(k).$ Moreover, if $\mathcal{N}$ is not the basic stratum, then $C(x)\cap \mathcal{N}^0$ is irreducible.
\end{thm}
Assumption (2) only occurs in the case of PEL type A. Condition (*) (see Definition \ref{conditionstar}) amounts to a mild condition on the slope data of the Newton polygon attached to $\mathcal{N}.$ Condition (*) is only necessary for proving the main theorem for points that are not $B$-hypersymmetric when $B$ is not a totally real field. Theorem \ref{irred}, Theorem \ref{cts}, and Proposition \ref{B=F} are independent of this assumption.
The condition that $\mathcal{N}$ is not the basic stratum has to do with a monodromy result (Theorem \ref{thmk}) used in proving $C(x)\cap \mathcal{N}^0$ is geometrically irreducible (or equivalently, geometrically connected). A straightforward consequence of Theorem \ref{thmk} is that, if we further assume $\mathcal{N}$ is smooth, then the assumption that $\mathcal{N}$ is geometrically irreducible is equivalent to the condition that $H^{\Sigma}$ (see section \ref{section 4}) acts transitively on the set of geometrically irreducible components of $\mathcal{N}$. We obtain the following corollary.
\begin{cor}
Assumptions as in Theorem 1.2. Further assume that $\mathcal{N}$ is smooth and that the prime-to-$\Sigma$ Hecke correspondences act transitively on $\pi_0(\mathcal{N})$. Then the Hecke orbit conjecture holds for $\mathcal{N}$.
\end{cor}
A key input in the proof strategy is $B$-hypersymmetric points. Hypersymmetric abelian varieties are mod-$p$ analogues of CM abelian varieties in characteristic $0.$ Originally developed by Chai and Oort (see \cite{chai2006hypersymmetric}), the notion refers to abelian varieties that admit as many endomorphisms as allowed by their associated Barsotti-Tate groups. In Section \ref{section 3}, we discuss the details regarding the existence of $B$-hypersymmetric points in relation to the shape of Newton polygons. In Section \ref{mu}, we restrict our attention to the $\mu$-ordinary locus in Shimura varieties of PEL type A. We deduce two sufficient conditions for a Newton stratum to contain a $B$-hypersymmetric point. These conditions combined with the main theorem imply the following (notations as in Section \ref{2.1}).
\begin{cor}[Corollary \ref{cormu}]\begin{enumerate}
\item Suppose $p$ is inert in $F$. If every slope of the Newton polygon attached to $\mathcal{N}$ has the same multiplicity, then the Hecke orbit conjecture holds for any irreducible component of $\mathcal{N}$ containing a $B$-hypersymmetric point.
\item Suppose the center of $B$ is a CM field. Assume that the signature of $\mathscr{S}$ has no definite place, and that $p$ is a prime of constant degree in the extension $F/\mathbb{Q}$. Further assume that assumption (2) of the main theorem is satisfied. Then the Hecke orbit conjecture holds for every irreducible component of the $\mu$-ordinary stratum.
\end{enumerate}
\end{cor}
The statement of our main theorem is restricted to irreducible components of Newton strata which contain $B$-hypersymmetric points. In general, it is not known whether Newton strata in Shimura varieties of PEL type are irreducible. On the other hand, it is well known that if the basic locus coincides with the supersingular locus, then it is discrete; otherwise, the basic locus may be of positive dimension.
Oort and Chai proved in \cite[Theorem A]{chai2011monodromy} that every non-supersingular Newton stratum in a Siegel modular variety is irreducible. Siegel modular varieties classify abelian varieties equipped with a principal polarization. However, the statement is not correct if the polarization is not principal (see \cite{chai2011monodromy}).
For Shimura varieties of PEL type, the only irreducibility result that the author is aware of is \cite[Theorem 1.1]{MR3240772}, where Achter proved for a special class of PEL type Shimura varieties that all Newton strata are irreducible. His result allows us to deduce the following consequence.
\begin{cor}[Corollary \ref{corachter}]
Let $L$ be an imaginary quadratic field inert at the rational prime $p$. The Hecke orbit conjecture holds for the moduli space of principally polarized abelian varieties of dimension $n\ge 3$ over $k$ equipped with an action by $\mathcal{O}_L$ of signature $(1, n-1).$
\end{cor}
\subsection{Overview of strategy.}
A general strategy for attacking Conjecture \ref{conj} is to break it down into the following two parts as in \cite[Conjecture 4.1]{chai2005hecke}. It is clear that the conjecture is equivalent to their conjunction.
\begin{itemize}
\item The discrete part: the prime-to-$p$ Hecke orbit of $x$ meets every irreducible component of the central leaf $C(x)$ passing through $x$;
\item The continuous part: the Zariski closure of the prime-to-$p$ Hecke orbit of $x$ in $C(x)$ has the same dimension as $C(x)$.
\end{itemize}
In this section, we first give a brief description of Chai and Oort's strategy for proving the Hecke orbit conjecture for Siegel modular varieties in \cite{chai2005hecke}. Then we highlight the differences as well as the new ideas in our approach.
For the discrete part, Chai and Oort proved a stronger result: every central leaf not contained in the supersingular stratum is irreducible. This is a consequence of the fact that on a Siegel modular variety, every non-supersingular Newton stratum is irreducible (\cite[Theorem A]{chai2011monodromy}) and every Newton stratum contains a hypersymmetric point (\cite[Theorem 5.4]{chai2006hypersymmetric}).
For the continuous part, one analyzes the formal completion of $C(x)$ at a split hypersymmetric point. A point is called split if the corresponding abelian variety is isogenous to a product of abelian varieties with at most two slopes. It turns out that the formal completion of the Zariski closure of the prime-to-$p$ Hecke orbit of a split hypersymmetric point $y$ in $C(x)$ is a formal Barsotti-Tate subgroup of the formal completion of $C(x)$. Furthermore, the action of the local stabilizer group of $y$ on $C(x)^{/x}\cong C(x)^{/y}$ underlies an absolutely irreducible representation. Thus the conjecture is true for any split hypersymmetric point. To deduce the statement for arbitrary geometric points, one uses the Hecke orbit conjecture for Hilbert modular varieties to find a split hypersymmetric point in the interior of the closure of any prime-to-$p$ Hecke orbit in its central leaf. This completes the proof in the Siegel case.
Shimura varieties of PEL type are subvarieties of Siegel modular varieties cut out by the condition of having an action by the ring of integers in a prescribed finite dimensional semisimple $\mathbb{Q}$-algebra $B$. Although many of the ingredients used in \cite{chai2005hecke} are known for PEL type Shimura varieties, there are two major things that do not work the same way.
First, $B$-hypersymmetric points on PEL type Shimura varieties are not as abundant as hypersymmetric points in the Siegel case: not every Newton stratum contains one (see \cite[Example 5.3]{zong2008hypersymmetric} and Example \ref{EmptyExample}). Rephrased, Zong's main result \cite[Theorem 5.1.1]{zong2008hypersymmetric} says that a Newton stratum contains a $B$-hypersymmetric point if and only if the associated Newton polygon is $B$-symmetric (see Theorem \ref{hs}).
Secondly, Chai and Oort's approach depends on the Hilbert trick, which refers to the property that every point on a Siegel modular variety comes from a Hilbert modular variety (see \cite[Section 9]{chai2005hecke}). The application of this fact is two-fold: (1) to find a hypersymmetric point inside the closure of every Hecke orbit inside its central leaf and (2) to show that one can take such a hypersymmetric point to be split, thereby reducing the continuous part to the two-slope case. The Hilbert trick is true for PEL type C only, where the Hecke correspondences also come from a symplectic group. For a general simple $\mathbb{Q}$-algebra $B$, we deduce the conjecture under mild conditions from the conjecture for the Shimura variety attached to $F_0,$ the maximal subfield of $B$ fixed under the positive involution $*$ of $B.$ We bypass the second usage of the Hilbert trick by leveraging the fact that the formal completion of a central leaf on a PEL type Shimura variety admits a cascade structure built up from Barsotti-Tate formal groups. The formal completion of the Zariski closure of a prime-to-$p$ Hecke orbit, as a subscheme of the formal completion of the central leaf, is then determined by its image in the two-slope components of the cascade. We thereby reduce to an analogue of a step in the proof of the Siegel case, establishing the desired equality at the level of two-slope components, from which the continuous part follows by an inductive argument.
\begin{rmk}
We exclude PEL type D in this paper, in which case the algebraic group $G$ is disconnected. We expect to extend the method to cover case D with extra work in the future.
\end{rmk}
\begin{rmk}
Sug Woo Shin \cite{shin} announced a proof of the irreducibility of central leaves on Shimura varieties of Hodge type. His proof relies on the theory of automorphic forms and, unlike Chai and Oort's approach, is independent of the irreducibility of Newton strata. Since our proof of the continuous part also does not depend on the irreducibility of Newton strata, Shin's result combined with Theorem 1.2 will yield the following.
\end{rmk}
\begin{thm}
Let $\mathscr{S}$ be the reduction modulo $p$ of a Shimura variety of PEL type A or C over $k$ for which $p$ is an unramified prime of good reduction. Let $\mathcal{N}^0$ be an irreducible component of a Newton stratum. Assume
\begin{enumerate}
\item $\mathcal{N}^0$ contains a $B$-hypersymmetric point $x_0$;
\item either $p$ is totally split in $F/F_0$ and the Newton polygon of $\mathcal{N}^0$ satisfies condition (*), or $p$ is inert in $F/F_0$.
\end{enumerate}
Then $H^p(x)$ is dense in $C(x)$ for every $x\in\mathcal{N}^0(k).$
\end{thm}
\subsection{Paper outline.}
We briefly discuss the organization of this paper. Section 2 recalls definitions and various known facts relevant to our context. Section \ref{section 3} develops new terminology to describe $B$-hypersymmetric points on PEL type Shimura varieties. We rephrase Zong's main result in \cite{zong2008hypersymmetric} in simpler language and derive conditions on the existence of hypersymmetric points in special cases relevant to our applications. Section \ref{section 4} contains the proof of a monodromy result that serves as a key input in proving the irreducibility of $C\cap\mathcal{N}^0.$ This result generalizes \cite[Theorem 5.1]{chai2005hecke} and the main result of \cite{kasprowitz2012monodromy}. Section 5 contains the proof of the discrete part of the main theorem. In Section 6, we prove the continuous part for $B$-hypersymmetric points, which then culminates in Section 7 with a reduction argument proving the main theorem at general points. We restrict our attention to Cases A and C in Sections 5 and 7.
\subsection{Future directions.}
Some of the tools used to prove the conjecture in the PEL case are known in more general settings. For example, the almost product structure of Newton strata and Serre-Tate theory have both been generalized to Shimura varieties of Hodge type (see \cite{hamacher2019product}; \cite{hong2019hodge} and \cite{shankar2016serre}). However, the right notion of hypersymmetric points in the Hodge type case remains open.
\hfill
\noindent\textbf{Acknowledgements.} I sincerely thank my advisor Elena Mantovan for her continuous patience and unwavering support. I'm grateful to Ana Caraiani, Serin Hong, Marc-Hubert Nicole, Eric Rains, and Sug Woo Shin for helpful discussions and correspondences.
\section{Preliminaries}
\subsection{Shimura varieties of PEL type.}\label{2.1}
Fix a prime number $p$ throughout the rest of this paper. We are interested in moduli problems of PEL type as given in Kottwitz \cite[\textsection 5]{kottwitz1992points}.
Let $\mathcal{D}=(B,\mathcal{O}_B,*,V,(\cdot,\cdot), h)$ be a Shimura datum of PEL type consisting of the following information: \begin{itemize}
\item $B$, a finite dimensional simple $\mathbb{Q}$-algebra;
\item $\mathcal{O}_B$, a maximal $\mathbb{Z}_{(p)}$-order of $B$;
\item $*$, a positive involution on $B$ preserving $\mathcal{O}_B$;
\item $V$, a nonzero finitely generated left $B$-module such that $V_{\mathbb{Q}_p}$ contains a self dual lattice preserved by $\mathcal{O}_B$;
\item $(\cdot,\cdot)$, a $\mathbb{Q}$-valued nondegenerate hermitian form on $V$ such that $(bv,w)=(v,b^*w)$ for all $v,w\in V$ and all $b\in B;$
\item a homomorphism $h:\mathbb{C}\rightarrow \End_{B\otimes_{\mathbb{Q}}\mathbb{R}}(V\otimes_{\mathbb{Q}}\mathbb{R})$ such that $(v,w)\mapsto (v,h(\sqrt[]{-1})w)$ is a positive definite symmetric form on $V_{\mathbb{R}}$.
\end{itemize}
Let $F$ denote the center of $B$ and let $F_0$ be the maximal totally real subfield of $F$ fixed under $*.$
We assume in addition that $p$ is an unramified prime of good reduction for the Shimura datum $\mathcal{D}$. Equivalently, $B_{\mathbb{Q}_p}$ is a product of matrix algebras over unramified extensions of $\mathbb{Q}_p$. In particular, $F$ is unramified at $p$.
One associates to the Shimura datum $\mathcal{D}$ a linear algebraic group $G$ such that $G(R)=\{x\in \End_B(V)\otimes_{\mathbb{Q}}R|xx^*\in R^{\times}\}$ for any $\mathbb{Q}$-algebra $R$. The homomorphism $h$ gives a decomposition of the $B_{\mathbb{C}}$-module $V_{\mathbb{C}}$ as $V_{\mathbb{C}}=V_1\oplus V_2$, where $V_1$ (resp. $V_2$) is the subspace on which $h(z)$ acts by $z$ (resp. by $\overline{z}$). $V_1$ and $V_2$ are $B_{\mathbb{C}}$-submodules of $V_{\mathbb{C}}.$ The field of definition of the isomorphism class of the complex representation $V_1$ of $B$, denoted by $E,$ is called the reflex field of $\mathcal{D}.$
Now we describe the following moduli problem associated to the Shimura datum $\mathcal{D}$ as given in \cite[Section 5]{kottwitz1992points}. Let $\mathbb{A}_f^p$ denote the ring of finite adeles attached to $\mathbb{Q}$ with trivial $p$-component. Let $K=K_pK^p\subseteq G(\mathbb{A}_f)$ be a subgroup, with $K_p$ being a fixed hyperspecial maximal compact subgroup of $G(\mathbb{Q}_p),$ and $K^p$ being a compact open subgroup of $G(\mathbb{A}_f^p)$.
Consider the contravariant functor from the category of locally-Noetherian schemes $S$ over $\mathcal{O}_{E,(p)}:=\mathcal{O}_E\otimes_{\mathbb{Z}}\mathbb{Z}_{(p)}$ that associates to $S$ the set of isomorphism classes of quadruples $(A,\lambda, i, \overline{\eta})$, where \begin{itemize}
\item $A$ is an abelian scheme over $S$;
\item $\lambda$ is a prime-to-$p$ polarization of $A;$
\item $i:\mathcal{O}_B \hookrightarrow \End(A)\otimes_{\mathbb{Z}}\mathbb{Z}_{(p)}$ is a morphism of $\mathbb{Z}_{(p)}$-algebras such that $\lambda\circ i(\alpha^*)=i(\alpha)^{\vee}\circ\lambda$ and $\det(\alpha,\text{Lie}(A))=\det(\alpha,V_1)$ for all $\alpha\in\mathcal{O}_B;$
\item $\overline{\eta}$ is a prime-to-$p$ level $K^p$ structure in the following sense. Let $s$ be a geometric point of $S.$ Denote by $A_s$ the fiber of $A$ over $s$ and let $H_1(A_s,\mathbb{A}_f^p)$ denote its Tate $\mathbb{A}_f^p$-module. A level structure of type $K^p$ on $A$ is a $K^p$-orbit $\overline{\eta}$ of isomorphisms $\eta:V_{\mathbb{A}_f^p}\rightarrow H_1(A_s,\mathbb{A}_f^p)$ as skew-Hermitian $B$-modules such that $\overline{\eta}$ is fixed by $\pi_1(S,s).$
\end{itemize}
Two quadruples $(A,\lambda,i,\overline{\eta})$ and $(A',\lambda',i',\overline{\eta}')$ are said to be isomorphic if there exists a prime-to-$p$ isogeny $f:A\rightarrow A'$ such that $\lambda=rf^{\vee}\circ\lambda'\circ f$ for some positive integer $r\in \mathbb{Z}_{(p)}^{\times}$, $f\circ i=i'\circ f$ and $\overline{\eta'}=f\circ \overline{\eta}.$
When $K^p$ is sufficiently small, this functor is representable by a smooth quasi-projective scheme $\mathcal{S}_{K^p}$ defined over $\mathcal{O}_{E,(p)}$ (see \cite[Section 5]{kottwitz1992points}).
We recall that in the terminologies of Kottwitz, such a moduli problem is said to be of type A if $G$ is unitary, type C if symplectic, and type D if orthogonal. Write $F_0$ for the subfield of $F$ fixed by the positive involution $*$. In the case of type A, $F\neq F_0$. Otherwise $F$ is a totally real field.
As the level $K^p$ varies, the varieties $\mathcal{S}_{K^p}$ form an inverse system that carries a natural action by $G(\mathbb{A}_f^p).$
From now on we fix a prime $v$ of $E$ over $p$ with residue field $\kappa$ and denote by $$\mathscr{S}_{K^p}:=\mathcal{S}_{K^p}\otimes\overline{\kappa}$$ the special fiber of $\mathcal{S}_{K^p}$ over $v.$
\subsection{Newton stratification and Oort's foliation.} Let $k$ be an algebraically closed field of characteristic $p.$ Write $W=W(k)$ for the Witt ring of $k$ and $L$ the fraction field of $W.$ By an abuse of notation, we use $\sigma$ to denote both the Frobenius on $k$ and that of $L$ over $\mathbb{Q}_p.$ We define the set $B(G)$ of $\sigma$-conjugacy classes of $G(L)$ by $$B(G)=\{[b]|b\in G(L)\},$$ where $$[b]=\{g^{-1}b\sigma(g)|g\in G(L)\}.$$
An $F$-isocrystal is a finite dimensional vector space $V$ over $L$ together with a $\sigma$-semilinear bijection $F:V\rightarrow V.$ An $F$-isocrystal with $G$-structure is an exact faithful tensor functor $$\text{Rep}_{\mathbb{Q}_p}(G)\rightarrow \text{F-Isoc}(k).$$
For each $b\in G(L)$, there is an associated $F$-isocrystal with $G$-structure $N_b$ given by $N_b(W,\rho)=(W_L,\rho(b)(id_W\otimes\sigma))$ (see \cite[\textsection 3.4]{rapoport1996classification}). The isomorphism class of $N_b$ is fixed under $\sigma$-conjugation in $G(L).$ Hence the set $B(G)$ is identified with the set of isomorphism classes of $F$-isocrystals with $G$-structures.
According to the Dieudonn\'e-Manin classification, the category of $F$-isocrystals is semi-simple, where the simple objects are parametrized by rational numbers called slopes.
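As a standard illustration of the slope decomposition (a well-known example, not drawn from this paper): writing $E_{\lambda}$ for the simple isocrystal of slope $\lambda=s/r$ in lowest terms, $E_{\lambda}=L\langle F\rangle/(F^r-p^s)$ with $Fa=\sigma(a)F$, the isocrystal $N$ of the Barsotti-Tate group of an elliptic curve over $k$ admits exactly two possibilities,

```latex
N \;\cong\;
\begin{cases}
  E_{0} \oplus E_{1} & \text{ordinary case (slopes $0$ and $1$)},\\[2pt]
  E_{1/2}            & \text{supersingular case (slope $\tfrac{1}{2}$)}.
\end{cases}
```

The Newton stratification recalled below records exactly this kind of slope data, refined by the $G$-structure.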
Kottwitz classified points in $B(G)$ by associating to each $\sigma$-conjugacy class $[b]\in B(G)$ a Newton point and a Kottwitz point (see \cite{kottwitz1985isocrystals}, \cite{kottwitz1997isocrystals}, and \cite{rapoport1996classification}). The set of Newton points admits a partial order $\preceq$.
Let $\mu$ be a conjugacy class of cocharacters of $G.$ To $\mu$ we associate the Newton point $$\overline{\mu}=\frac{1}{r}\sum_{i=0}^{r-1}\sigma^i(\mu),$$where $r$ is an integer such that $\sigma^r$ fixes $\mu.$ An element $[b]\in B(G)$ is said to be $\mu$-admissible if the Newton point corresponding to $[b]$ is less than or equal to $\overline{\mu}$ with respect to $\preceq$. We write $B(G,\mu)$ for the set of $\mu$-admissible elements of $B(G).$ $B(G,\mu)$ is finite with a unique minimum called the basic point, and a unique maximum called the $\mu$-ordinary point (see \cite{kottwitz1997isocrystals}).
From now on, we write $b$ for the conjugacy class $[b]\in B(G).$ For any geometric point $x\in\mathscr{S},$ let $X_x$ be the fiber of the universal Barsotti-Tate group at $x.$ Write $N_x$ for the $F$-isocrystal associated to $X_x,$ then $N_x$ uniquely determines an element $b_x\in B(G_{\mathbb{Q}_p},\mu_{\overline{\mathbb{Q}}_p}).$ The following result is due to Rapoport and Richartz (see \cite[Theorem 3.6]{rapoport1996classification}).
For $b\in B(G_{\mathbb{Q}_p})$, the set $$\mathscr{S}(\preceq b)=\{x\in\mathscr{S}|b_x\preceq b\}$$is a closed subscheme of $\mathscr{S}$ called the closed Newton stratum attached to $b$. The sets $\mathscr{S}(\preceq b)$ form the Newton stratification of $\mathscr{S}$ by closed subschemes of $\mathscr{S}$ indexed by $b\in B(G_{\mathbb{Q}_p}).$
The open Newton stratum attached to $b\in B(G_{\mathbb{Q}_p})$ is defined as $$\mathcal{N}_b=\mathscr{S}(\preceq b)-\cup_{b'\preceq b,\, b'\neq b}\mathscr{S}(\preceq b').$$
It is a locally-closed reduced subscheme of $\mathscr{S}.$ The stratum $\mathcal{N}_b$ is non-empty if and only if $b\in B(G_{\mathbb{Q}_p},\mu_{\overline{\mathbb{Q}}_p})$ (see \cite[Theorem 1.6]{viehmann2013ekedahl}). Moreover, in the situations of interest to us, this stratification coincides with the classical Newton stratification determined by the isogeny class of the geometric fibers of the universal Barsotti-Tate group (see \cite[Theorem 3.8]{rapoport1996classification}).
Now we fix a conjugacy class $b$ in $B(G_{\mathbb{Q}_p},\mu_{\overline{\mathbb{Q}}_p})$ and write $\mathbb{X}$ for a corresponding Barsotti-Tate group with $G_{\mathbb{Q}_p}$-structure defined over $\overline{\kappa}$. A Barsotti-Tate group $\mathbb{X}'$ over a field $k'\supset \overline{\kappa}$ is geometrically isomorphic to $\mathbb{X}$ (denoted $\mathbb{X}'\cong_g\mathbb{X}$) if $\mathbb{X}'$ and $\mathbb{X}$ become isomorphic over an extension of $k'.$ The central leaf associated to $\mathbb{X}$ is defined as $$C_{\mathbb{X}}=\{x\in \mathscr{S}|X_x\cong_g \mathbb{X}\},$$ where $X$ stands for the universal Barsotti-Tate group. By definition, we have $C_{\mathbb{X}}\subseteq \mathcal{N}_b.$ Moreover, $C_{\mathbb{X}}$ is a locally-closed smooth subscheme of the Newton stratum $\mathcal{N}_b$ (see \cite{oort2004foliations} and \cite[Proposition 1]{mantovan2005cohomology}).
For a geometric point $x\in \mathscr{S}(k),$ we say $C(x):=C_{X_x}$ is the central leaf passing through $x.$
On any fixed Newton stratum $\mathcal{N}_b$, central leaves give a foliation structure, called the central foliation. Any closed point $x\in \mathcal{N}_b$ is contained in exactly one central leaf. If $x,x'$ are two points in $\mathcal{N}_b$, there exists a scheme $T$ and finite surjective morphisms $C(x)\twoheadleftarrow T\twoheadrightarrow C(x')$. In particular, $\dim C(x)=\dim C(x')$ (see \cite[\textsection 2 and \textsection 3]{oort2004foliations}).
\subsection{Hecke symmetries and Hecke orbits.}
As $K^p$ varies over all sufficiently small compact open subgroups of $G(\mathbb{A}_f^p),$ the Shimura varieties $\mathscr{S}_{K^p}$ form an inverse system $\varprojlim_{K^p}\mathscr{S}_{K^p}$. If $K_1^p\subseteq K_2^p$ are compact open subgroups of $G(\mathbb{A}_f^p),$ there is an \'etale covering $\mathscr{S}_{K_1^p}\rightarrow \mathscr{S}_{K_2^p}$ given by $(A,\lambda,i,(\overline{\eta})_1)\mapsto (A,\lambda,i,(\overline{\eta})_2),$ where $(\overline{\eta})_i$ denotes the $K_i^p$-orbit of $\eta$; that is, one extends $(\overline{\eta})_1$ to its $K_2^p$-orbit.
The inverse system $\varprojlim_{K^p}\mathscr{S}_{K^p}$ admits a natural right action by $G(\mathbb{A}_f^p).$ For $g\in G(\mathbb{A}_f^p),$ the corresponding map $$\mathscr{S}_{K^p}\rightarrow \mathscr{S}_{g^{-1}K^pg}$$ is given by $$(A,\lambda,i,\overline{\eta})\mapsto (A,\lambda,i,\overline{\eta g}).$$
For a fixed $K^p$, the action of $G(\mathbb{A}_f^p)$ induces a family of finite \'etale algebraic correspondences on $\mathscr{S}_{K^p}$, called the prime-to-$p$ Hecke correspondences. Namely, for $g\in G(\mathbb{A}_f^p)$, we have $$\mathscr{S}_{K^p}\xleftarrow{a}\mathscr{S}_{K^p\cap gK^pg^{-1}}\xrightarrow{b}\mathscr{S}_{K^p},$$ where $b$ is the covering map induced by the inclusion $K^p\cap gK^pg^{-1}\subseteq K^p$, and $a$ is the composition of the isomorphism $\mathscr{S}_{K^p\cap gK^pg^{-1}}\cong\mathscr{S}_{g^{-1}(K^p\cap gK^pg^{-1})g}$ with the covering map induced by the inclusion $g^{-1}(K^p\cap gK^pg^{-1})g\subseteq K^p.$
Let $x\in \mathscr{S}_{K^p}(k)$ be a closed geometric point, and let $\tilde{x}\in \mathscr{S}(k)$ be a geometric point of the tower $\mathscr{S}(k)$ above $x.$ The prime-to-$p$ Hecke orbit of $x$ in $\mathscr{S}_{K^p}(k)$, denoted by $H^p(x),$ is the countable set consisting of all points that belong to the image of $G(\mathbb{A}_f^p)\cdot\tilde{x}$ under the projection $\mathscr{S}(k)\rightarrow \mathscr{S}_{K^p}(k).$ For a prime $l\neq p,$ the $l$-adic Hecke orbit, denoted by $H_l(x)$, is defined to be the projection of $G(\mathbb{Q}_l)\cdot\tilde{x}$ to $\mathscr{S}_{K^p}(k)$, where the action is given via the canonical injection $G(\mathbb{Q}_l)\hookrightarrow G(\mathbb{A}_f^p).$ It is clear from the definition that both $H^p(x)$ and $H_l(x)$ are independent of the choice of $\tilde{x}.$ By definition, $H^p(x)$ sits inside the central leaf $C(x)$ passing through $x.$
\section{Hypersymmetric Abelian varieties}\label{section 3}
Hypersymmetric abelian varieties were first studied by Chai and Oort in \cite{chai2006hypersymmetric} as a tool for proving the Hecke Orbit conjecture for Siegel modular varieties. Y. Zong studied the more general version for PEL type Shimura varieties in his dissertation \cite{zong2008hypersymmetric} and gave necessary and sufficient conditions on the Newton polygon for the existence of simple hypersymmetric points. While the existence of such points has applications to proving the irreducibility of central leaves and Igusa towers \cite[Proposition 3.3.2]{eischen2017p}, hypersymmetric points are also of independent interest as mod-$p$ analogues of CM abelian varieties in characteristic $0$.
Recall that an abelian variety $A$ of dimension $g$ over a field $K$ is said to be of CM-type if $\End(A)\otimes_{\mathbb{Z}}\mathbb{Q}$ contains a semi-simple commutative sub-algebra of rank $2g$ over $\mathbb{Q}.$ In a moduli space of abelian varieties over a field of characteristic zero, points of CM-type are special. However, Tate proved that if the base field is of positive characteristic, every abelian variety is of CM-type \cite{tate1966endomorphisms}. In this sense, CM-type abelian varieties are no longer special.
Notation as in Section \ref{2.1}. Fixing a level structure $K^p$, we may consider geometric points in the moduli space $\mathscr{S}:=\mathscr{S}_{K^p}$ which correspond to abelian varieties that have as many endomorphisms as allowed by their Barsotti-Tate groups. As it turns out, such points are indeed special in the positive characteristic setting. Not every point satisfies this condition (see \cite[Remark 2.4]{chai2006hypersymmetric}). Moreover, in Shimura varieties of PEL type, not every Newton stratum contains such a point.
\begin{defn}\begin{enumerate}
\item \cite[Definition 6.4]{chai2006hypersymmetric}
A $B$-linear polarized abelian variety $A$ over $k$ is $B$-hypersymmetric if $$\End_B(A)\otimes_{\mathbb{Z}}\mathbb{Q}_p\cong \End_B(A[p^{\infty}])\otimes_{\mathbb{Z}_p}\mathbb{Q}_p.$$
\item We say a point $x\in \mathscr{S}(k)$ is $B$-hypersymmetric if the corresponding abelian variety $A_x$ is $B$-hypersymmetric.
\end{enumerate}
\end{defn}
\subsection{New formulation of the characterization of $B$-hypersymmetric abelian varieties.}
Given a Shimura variety $\mathscr{S}:=\mathscr{S}_{K^p}$ over $k,$ we are interested in the existence of $B$-hypersymmetric points in any central leaf or prime-to-$p$ Hecke orbit. On Siegel modular varieties, a Newton stratum contains a hypersymmetric point if and only if its Newton polygon is symmetric \cite[Corollary 4.8]{chai2006hypersymmetric}; due to the presence of the polarization, every Newton polygon appearing on a Siegel modular variety is symmetric, so every Newton stratum contains such a point. For PEL type Shimura varieties, Zong showed in \cite{zong2008hypersymmetric} that the existence of $B$-hypersymmetric points depends only on the shape of the corresponding Newton polygon, i.e.\ a Newton stratum contains $B$-hypersymmetric points precisely when the corresponding Newton polygon satisfies the ``supersingular restriction'' (see \cite[Theorem 5.1]{zong2008hypersymmetric}). We remark that, for the purposes of this paper, the condition that the Newton polygon of a non-empty Newton stratum admits a ``CM-type partition'' (see \cite[Definition 4.1.8]{zong2008hypersymmetric}) is redundant, as it follows from the non-emptiness. Hence, for convenience, in the present paper we develop a simpler language to describe the shape of Newton polygons whose strata contain $B$-hypersymmetric points, although it is not strictly necessary to do so.
For Shimura varieties of PEL type, we introduce an analogue of the notion of a symmetric Newton polygon, in order to draw an explicit analogy with the terminology of the Siegel case.
Recall that for a point $x\in\mathscr{S},$ the $F$-isocrystal $M$ associated to $A_x$ admits a unique decomposition $M=\bigoplus_{\lambda\in\mathbb{Q}}M(\lambda),$ where $M(\lambda)$ denotes the largest sub-isocrystal of slope $\lambda$. If $M$ is further equipped with an action by a finite dimensional $\mathbb{Q}$-algebra $B$ with center $F,$ then $M=\bigoplus_{v\mid p}M_v,$ where $v$ runs over the primes of $F$ above $p$ and each $M_v$ decomposes into slope components $M_v(\lambda_v).$ The slopes $\lambda_v$ of $M_v$ are called the slopes of $M$ at $v.$ The multiplicity of the slope $\lambda_v$ is given by $$m_v(\lambda_v)=\frac{\dim_{L(k)}M_v(\lambda_v)}{[F_v:\mathbb{Q}_p][B:F]^{1/2}}.$$
Let $\nu$ denote a Newton polygon that comes from a $B$-linear polarized abelian variety. Then $\nu$ can be written as $\nu=\sum_{v\mid p}\nu_v,$ where the sum runs over the primes $v$ of $F$ above $p$ and $$\nu_v=\sum_{i=1}^{n_v}m_{v,i}\lambda_{v,i}$$ for positive integers $n_v,m_{v,i}$. Here $\lambda_{v,1},\dots,\lambda_{v,n_v}$ denote the distinct slopes of $\nu$ at the place $v$ of $F$ above $p$, and $m_{v,i}$ is the multiplicity of $\lambda_{v,i}.$
\begin{defn}\begin{enumerate}
\item A Newton polygon $\nu$ that comes from a $B$-linear polarized abelian variety is \emph{$B$-balanced} if there exist positive integers $n,m$ such that $n_v=n$ for all primes $v$ of $F$ above $p$ and $m_{v,i}=m$ for all $v$ and $i.$
\item Two Newton polygons are \emph{disjoint} if they have no common slope at any prime $v\mid p$ of $F.$
\item A Newton polygon $\mu$ is \emph{$B$-symmetric} if it is the amalgamation of disjoint $B$-balanced Newton polygons.
\end{enumerate}
\end{defn}
In other words, for any $B$-symmetric Newton polygon $\mu$, there exist a positive integer $n$ and a multi-set $S$ such that, for all $v\mid p$ of $F$, $n_v=n$ and $\{m_{v,i}\}_{i=1}^{n_v}=S$ as multi-sets.
It is clear from the definition that every $B$-balanced polygon is $B$-symmetric. In the other direction, a $B$-symmetric polygon is an amalgamation of uniquely determined disjoint $B$-balanced polygons.
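To make the bookkeeping concrete, both conditions can be checked mechanically from the slope data $(\lambda_{v,i},m_{v,i})$. The following Python sketch is our own illustration (the data encoding is an assumption, not from the source); it implements the characterization above: a single slope count $n$ and a single multiplicity $m$ for balanced, and a common multiset of multiplicities for symmetric.

```python
from collections import Counter

def is_balanced(polygon):
    """polygon maps each prime v | p of F to a list of (slope, multiplicity)
    pairs with distinct slopes.  B-balanced: there exist n, m with n_v = n
    slopes at every v and m_{v,i} = m for all v and i."""
    slope_counts = {len(data) for data in polygon.values()}
    multiplicities = {m for data in polygon.values() for _, m in data}
    return len(slope_counts) == 1 and len(multiplicities) == 1

def is_symmetric(polygon):
    """B-symmetric, via the characterization above: the multiset of
    multiplicities {m_{v,i}} (hence also the slope count n_v) is the
    same at every v."""
    profiles = [Counter(m for _, m in data) for data in polygon.values()]
    return all(profile == profiles[0] for profile in profiles)
```

For instance, slope data $\{(0,2),(1,1)\}$ at $v_1$ and $\{(1/2,1),(1/4,2)\}$ at $v_2$ is symmetric but not balanced. (The full definition of $B$-symmetric also requires the balanced pieces to be disjoint; the sketch only checks the numerical profile.)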
We rephrase Zong's main theorem \cite[Theorem 5.1]{zong2008hypersymmetric} into a simplified version as follows:
\begin{thm}\label{hs}
A Newton stratum $\mathcal{N}$ contains a simple $B$-hypersymmetric point if and only if its Newton polygon is $B$-balanced.
\end{thm}
We remark that Theorem \ref{hs} follows from the proof but not the statement of \cite[Theorem 5.1.1]{zong2008hypersymmetric}. To make the present paper self-contained, we reproduce Zong's argument in the Appendix. The following corollary is an immediate consequence of Theorem \ref{hs}.
\begin{cor}
$\mathcal{N}$ contains a $B$-hypersymmetric point if and only if its Newton polygon is $B$-symmetric.
\end{cor}
When $B=\mathbb{Q},$ the condition of being $B$-symmetric is vacuous, so Zong's result recovers \cite[Corollary 4.8]{chai2006hypersymmetric}.
\subsection{Hypersymmetricity over a subfield.}
Recall from the notation of Section \ref{2.1} that $F_0$ is the maximal totally real subfield of the center $F$ of the simple $\mathbb{Q}$-algebra $B.$ For the proof of the main theorem, we need to understand when an $F$-hypersymmetric abelian variety is also $F_0$-hypersymmetric.
\begin{defn}\label{conditionstar}
Let $b\in B(G,\mu)$ and let $\zeta$ be the Newton polygon corresponding to $b.$ Write $\zeta=\bigoplus_{u\mid p}\zeta_u,$ where $u$ runs over the primes of $F$ above $p.$ We say $\zeta$ \emph{satisfies condition (*)} if for any primes $u\neq u'$ of $F$ lying above the same prime $v$ of $F_0$ above $p,$ the Newton polygons $\zeta_u$ and $\zeta_{u'}$ have no common slope.
\end{defn}
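Condition (*) is a pure disjointness check on slope sets. The following Python sketch makes it explicit (our own illustration with an assumed encoding: `zeta` maps each prime $u\mid p$ of $F$ to its set of slopes, and `below` records the prime of $F_0$ under each $u$):

```python
from itertools import combinations

def satisfies_star(zeta, below):
    """Condition (*): for primes u != u' of F above the same prime v
    of F_0, the polygons zeta_u and zeta_{u'} share no slope."""
    for u, u_prime in combinations(zeta, 2):
        if below[u] == below[u_prime] and zeta[u] & zeta[u_prime]:
            return False
    return True
```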
We prove the following consequence of Zong's criterion of hypersymmetry (\cite[Proposition 3.3.1]{zong2008hypersymmetric}).
\begin{prop}\label{inert} \begin{enumerate}
\item Let $L/K$ be a finite extension of number fields such that every prime of $K$ above $p$ is inert in $L/K,$ and let $A$ be an $L$-hypersymmetric abelian variety defined over $\overline{\mathbb{F}_p}.$ Then $A$ is $K$-hypersymmetric.
\item Let $K$ be a totally real field. Let $L/K$ be an imaginary quadratic extension where $p$ splits completely. Let $A$ be an $L$-hypersymmetric abelian variety defined over $\overline{\mathbb{F}_p}.$ Suppose the Newton polygon of $A$ satisfies the condition (*). Then $A$ is $K$-hypersymmetric.
\end{enumerate}
\end{prop}
\begin{proof} Let $A'$ be an abelian variety over some $\mathbb{F}_{p^a}$ such that $A'\otimes\overline{\mathbb{F}_p}\cong A.$ Let $\pi$ denote the $\mathbb{F}_{p^a}$-Frobenius endomorphism of $A'$ and let $\zeta$ denote the Newton polygon of $A.$ Then $\End_L(A)\otimes_{\mathbb{Z}}\mathbb{Q}=\End_L(A')\otimes_{\mathbb{Z}}\mathbb{Q},$ the center of which is identified with $L(\pi).$ Since $A$ is $L$-hypersymmetric, there exists a positive integer $n$ such that $\zeta$ has $n$ slopes at each prime $w\mid p$ of $L.$ By \cite[Proposition 3.3.1]{zong2008hypersymmetric}, $$L(\pi)\otimes_L L_w\cong \prod L_w,$$ with the number of factors equal to $n.$
(1) Suppose $L/K$ is inert. Let $u$ be the prime of $K$ below $w.$ Then $[L_w:K_u]=1,$ so $$\dim_KK(\pi)=\dim_LL(\pi)=n=n_u[L_w:K_u]=n_u,$$ where $n_u$ stands for the number of slopes of $\zeta$ at $u$.
Therefore we have $K(\pi)\otimes_K K_u\cong \prod K_u$ with the number of factors equal to $n_u.$ Since this holds for any prime $u\mid p$ of $K$, we conclude by \cite[Proposition 3.3.1]{zong2008hypersymmetric} that $A$ is $K$-hypersymmetric.
(2) Suppose $K$ is totally real and $L/K$ is imaginary quadratic. Condition (*) implies $n_u=2n$ for any $u|p$ of $K.$ Since every prime above $p$ splits in $L/K$, we have $\dim_K K(\pi)=2\dim_L L(\pi).$ Again, this allows us to conclude by \cite[Proposition 3.3.1]{zong2008hypersymmetric} that $A$ is $K$-hypersymmetric.
\end{proof}
\begin{rmk}
In the case of $F/F_0,$ where $F_0$ is totally real and $F$ is an imaginary quadratic extension of $F_0,$ if the conditions in Proposition \ref{inert} are not satisfied, an $F$-balanced Newton polygon need not be $F_0$-balanced. Consider the following two examples:\begin{enumerate}
\item Suppose $[F_0:\mathbb{Q}]=2$, $p=v_1v_2$ in $F_0$ and $v_1=u_1u_1', v_2=u_2u_2'$ in $F.$ Then the slope data
$$\begin{cases}
\lambda, 1-\lambda &\text{at }u_1\\
\lambda, 1-\lambda &\text{at }u_1'\\
\mu, \nu &\text{at }u_2\\
1-\mu, 1-\nu &\text{at }u_2'\\
\end{cases}$$ where $\mu\neq \nu$, gives an $F$-balanced Newton polygon, but its restriction to $F_0$, given by
$$\begin{cases}
2(\lambda),2(1-\lambda) &\text{at }v_1\\
\mu, \nu, 1-\mu, 1-\nu &\text{at }v_2\\
\end{cases}$$ is not $F_0$-balanced.
\item Suppose $[F_0:\mathbb{Q}]=4$, $p=v_1v_2$ in $F_0,$ $v_1=u_1u_1'$ and $v_2$ is inert in $F/F_0.$ Then the slope data
$$\begin{cases}
0 &\text{at }u_1\\
1 &\text{at }u_1'\\
1/2 &\text{at }v_2
\end{cases}$$ gives an $F$-balanced Newton polygon, but its restriction to $F_0$, given by
$$\begin{cases}
0, 1 &\text{at }v_1\\
2(1/2) &\text{at }v_2\\
\end{cases}$$ is not $F_0$-balanced.
\end{enumerate}
\end{rmk}
\subsection{Hypersymmetric points in the $\mu$-ordinary stratum in PEL type A}\label{mu}
For this section, we restrict our attention to Shimura varieties of PEL type A. Namely, we assume $F$ is a CM field. Write $d=[F:\mathbb{Q}].$ We study conditions on the $\mu$-ordinary Newton polygon that guarantee the existence of a $B$-hypersymmetric point.
Moonen explicitly computes the $\mu$-ordinary polygon in \cite{moonen2004serre} in terms of the multiplication type. Let $\mathcal{T}$ denote the set of complex embeddings of $F$ and $\mathfrak{O}$ the set of $\sigma$-orbits of elements of $\mathcal{T}.$ There is a bijection between $\mathfrak{O}$ and the set of primes of $F$ above $p.$ Let $\mathfrak{f}$ denote the multiplication type as defined in \cite[Section 0.4]{moonen2004serre}. For each $\sigma$-orbit $\mathfrak{o}$ of complex embeddings of $F,$ the corresponding $\mu$-ordinary Newton polygon has slopes (see \cite[Section 1.2.5]{moonen2004serre} and \cite[Definition 2.6.2]{eischen2017p}): $$a_j^{\mathfrak{o}}=\frac{1}{\#\mathfrak{o}}\#\{\tau\in\mathfrak{o}\mid\mathfrak{f}(\tau)>d-j\}\text{ for }j=1,\cdots,d.$$
For any $\lambda$ that occurs as a slope, the multiplicity of $\lambda$ is given by $$m_{\lambda}=\#\{j\in\{1,\cdots,d\}\mid a_j^{\mathfrak{o}}=\lambda\}.$$ Moonen's result makes it convenient to check the existence of $B$-hypersymmetric points. In particular, Example \ref{EmptyExample} demonstrates that not every $\mu$-ordinary stratum contains a $B$-hypersymmetric point.
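Moonen's formula is directly computable from the multiplication type. As an illustration (our own encoding, not from the source: `f_values` lists the values $\mathfrak{f}(\tau)$ for $\tau$ in a single $\sigma$-orbit), a minimal Python sketch:

```python
from fractions import Fraction
from collections import Counter

def mu_ordinary_slopes(f_values, d):
    """Slopes a_j, j = 1, ..., d, of the mu-ordinary polygon on one
    sigma-orbit o: a_j = (1/#o) * #{tau in o : f(tau) > d - j}."""
    size = len(f_values)
    return [Fraction(sum(1 for f in f_values if f > d - j), size)
            for j in range(1, d + 1)]

def slope_multiplicities(slopes):
    """m_lambda = #{j : a_j = lambda}."""
    return Counter(slopes)
```

For example, with $d=2$ and $\mathfrak{f}$-values $(1,0)$ on an orbit of size two, the formula returns the slopes $(0,1/2)$; note that the slopes are automatically non-decreasing in $j$.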
\begin{exmp}Suppose $[F:\mathbb{Q}]=4$. If $\mathfrak{o}_1=\{\tau_1,\tau_2\},\mathfrak{o}_2=\{\tau_1^*,\tau_2^*\}$ with signatures $(3,0),(1,4)$ respectively, then $p$ splits into $vv^*$ in $F$, and $\mu(v)=\mu(\mathfrak{o}_1)= (0)^1+(1/2)^3$ and $\mu(v^*)=\mu(\mathfrak{o}_2)=(1/2)^3+(1)^1.$ This $\mu$-ordinary Newton polygon is $F$-symmetric, and the stratum contains a hypersymmetric point.\end{exmp}
\begin{exmp}\label{EmptyExample}Suppose $[F:\mathbb{Q}]=4$. If $\mathfrak{o}_1=\{\tau_1,\tau_1^*\},\mathfrak{o}_2=\{\tau_2,\tau_2^*\}$ with signatures $(3,1),(0,4)$ respectively, then $p$ splits into two places $v,v'$ in $F$, and $\mu(v)=\mu(\mathfrak{o}_1)=(0)^1+(1/2)^2+(1)^1$ and $\mu(v')=\mu(\mathfrak{o}_2)=(0)^2+(1)^2.$ This $\mu$-ordinary stratum contains no $B$-hypersymmetric point, because the numbers of isotypic components above $v$ and $v'$ differ. \end{exmp}
Below we give a sufficient condition for the existence of $B$-hypersymmetric points in the $\mu$-ordinary stratum.
\begin{cor}\label{3.3} \begin{enumerate}
\item Assume $p$ is inert in $F$. If every slope of the Newton polygon attached to $\mathcal{N}$ has the same multiplicity, then $\mathcal{N}$ contains a $B$-hypersymmetric point.
\item Assume that the signature of $\mathscr{S}_{K^p}$ has no definite place, and that $p$ is a prime of constant degree in the extension $F/\mathbb{Q}$. Then the $\mu$-ordinary stratum contains a $B$-hypersymmetric point.
\end{enumerate}
\end{cor}
\begin{proof} (1) If $p$ is inert, there is only one $\sigma$-orbit. Thus the condition of being $B$-balanced reduces to the condition that every slope has the same multiplicity, which holds by assumption.
(2) If $\mathfrak{f}$ has no definite place, then $\mathfrak{f}$ only takes values in $[1,d-1]$. In this case, for any $\sigma$-orbit $\mathfrak{o},$ the number of values that $\mathfrak{f}$ takes in $[1,d-1]$ is precisely one less than the number of slopes of the $\mu$-ordinary polygon at the corresponding prime of $F$ above $p.$ Hence, when the degree of $p$ is constant, the $\mu$-ordinary polygon has the same number of slopes at each prime of $F$ above $p$ and is therefore $B$-balanced.
\end{proof}
\section{Monodromy}\label{section 4}
An important ingredient in \cite{chai2005hecke} for proving the discrete part of the Hecke orbit conjecture is a monodromy result for Hecke-invariant subvarieties (\cite[Propositions 4.1 and 4.4]{chai2005monodromy}; cf.\ \cite[Theorem 5.1]{chai2005hecke}). In this section, we present a generalization to all Shimura varieties of PEL type. Proposition \ref{thmk} is a more general version of the main theorem of \cite{kasprowitz2012monodromy}, which only holds if no simple factor of $\mathcal{D}$ is of type $D$. The key difference is that in the cases of PEL type A and C, the derived group of $G$ is simply connected, while in the case of PEL type D we need to work with the simply connected cover of $G_{\text{der}}$ instead. The proofs we present in this section are closely related to those of loc.\ cit.
We first fix some notation. Let $\mathscr{S}_{K^p}$ and $G$ be as given in Section 2.1. Let $G':=G_{\text{der}}^{\text{sc}}$ denote the simply connected cover of the derived group of $G.$
Let $\Sigma$ be the finite set consisting of $p$ and the primes $p'$ such that some simple component of $G'$ is anisotropic over $\mathbb{Q}_{p'}.$ For a prime $l\neq p,$ let $Z\subseteq\mathscr{S}_{K^p}$ be a smooth locally-closed subvariety stable under all $l$-adic Hecke correspondences coming from $G'$. Let $Z^0$ be a connected component of $Z$ with generic point $z.$ We use $\overline{z}$ to denote a geometric point of $Z$ above $z.$ Let $A_{Z^0}$ denote the restriction to $Z^0$ of the universal abelian scheme over $\mathscr{S}_{K^p}$. Let $K_l$ be the image of $K^p$ under $K^p\hookrightarrow G(\mathbb{A}_f^p)\twoheadrightarrow G(\mathbb{Q}_l)$ and let $\rho_l:\pi_1(Z^0,\overline{z})\rightarrow K_l$ denote the $l$-adic monodromy representation attached to $A_{Z^0}$. Let $M=\rho_l(\pi_1(Z^0,\overline{z}))$ be the image of $\rho_l.$
As $N$ varies over all subgroups of $K^p$ for which $K^p/N$ is trivial away from $l$, we obtain the following pro-\'etale coverings:
$$(\mathscr{S}_N)_{N}\rightarrow \mathscr{S}_{K^p},$$ $$Y:=(\mathscr{S}_N\times_{ \mathscr{S}_{K^p}}Z^0)_N\rightarrow Z^0,$$ and $$\widetilde{Z}:=(\mathscr{S}_N\times_{\mathscr{S}_{K^p}}Z)_N\rightarrow Z,$$ where the first two admit a $K_l$-action via $l$-adic Hecke correspondences, and the third admits a natural $G'(\mathbb{Q}_l)$-action. Observe that $\mathrm{Aut}_{\mathscr{S}_{K^p}}((\mathscr{S}_N)_N)=K^p$ by construction. Hence $\pi_1(Z^0,\overline{z})$ acts on $Y$ via the composition of morphisms $\pi_1(Z^0,\overline{z})\rightarrow \pi_1(\mathscr{S}^0_{K^p},\overline{z})\rightarrow \mathrm{Aut}_{\mathscr{S}_{K^p}}((\mathscr{S}_N)_N)=K^p$, where $\mathscr{S}^0_{K^p}$ stands for the connected component of $\mathscr{S}_{K^p}$ containing $Z^0.$ Let $\widetilde{z}\in Y$ be a geometric point above $z$ and write $Y^0$ for the connected component of $Y$ passing through $\widetilde{z}.$
\begin{lem}\label{homeo}\begin{enumerate}
\item There is a homeomorphism $\pi_0(Y)\cong K_l/M.$
\item There is a homeomorphism $\pi_0(\widetilde{Z})\cong G'(\mathbb{Q}_l)/\Stab_{G'}(Y^0).$
\end{enumerate}
\end{lem}
\begin{proof}
The arguments in \cite[Section 2.6, 2.7 and Lemma 2.8]{chai2005monodromy} also work in the present situation. We remark that this is where the assumption of the transitivity of Hecke correspondences is used.
\end{proof}
\begin{lem}\label{Mss} Suppose $M$ is infinite. Then $M$ contains an open subgroup of $K_l.$
\end{lem}
\begin{proof} Let $H$ denote the neutral component of the Zariski closure of $M$ in $G.$ Let $\mathfrak{m}$ denote the Lie algebra of $M$ as an $l$-adic Lie group. By \cite[Corollary 7.9]{borel2012linear}, $\mathfrak{m}$ coincides with the Lie algebra of the Zariski closure $\overline{M}$, so $\mathfrak{m}$ contains the derived subalgebra of the Lie algebra of $H$. Thus, if we can show $H=G'$, then $M$ contains an open subgroup of $G'(\mathbb{Q}_l)$, and in particular an open subgroup of $K_l.$ We do so by investigating the normalizer of $H$ in $G'$ and showing that $H$ is in fact normal in $G'$.
By the same argument as in \cite[Proposition 4.1]{chai2005monodromy}, $H$ is semisimple, so $H\subseteq G'.$ Let $\mathbf{N}$ be the normalizer of $H$ in $G'$ and $\mathbf{N}^0$ its neutral component. The proof of \cite[Lemma 3.3]{chai2005monodromy} shows that $\mathbf{N}^0$ is reductive.
Now we show that $\mathbf{N}$ contains a nontrivial normal connected subgroup of $G'_{\mathbb{Q}_l}.$ There is a natural inclusion $\Stab_{G'}(Y^0)\subseteq \mathbf{N}(\mathbb{Q}_l),$ which gives rise to a continuous surjection from $G'(\mathbb{Q}_l)/\Stab_{G'}(Y^0)$ to $G'(\mathbb{Q}_l)/\mathbf{N}(\mathbb{Q}_l).$
By Lemma \ref{homeo}, the set on the left is profinite and in particular compact, so the group on the right is also compact.
Thus $G'(\mathbb{Q}_l)/\mathbf{N}^0(\mathbb{Q}_l)$ is compact. By \cite[Proposition 9.3]{borel1965groupes}, $\mathbf{N}^0$ contains a maximal connected solvable subgroup $A$ of $G'_{\mathbb{Q}_l}.$
By the assumption on $l$, $G'_{\mathbb{Q}_l}$ is isotropic, so \cite[Propositions 8.4, 8.5]{borel1965groupes} imply that the set of unipotent elements $A_u$ is the unipotent radical of a minimal parabolic subgroup of $G'_{\mathbb{Q}_l}$. Hence, $A_u$ is nontrivial, connected, and normal.
Therefore, we must have $\mathbf{N}^0=\mathbf{N}=G'_{\mathbb{Q}_l},$ so $H$ is a normal subgroup of $G'$ having infinite intersection with every simple component of $G'$. We conclude that $H=G',$ which completes the proof.
\end{proof}
\begin{prop}\label{kprop3.1}
Notation and conditions as in Proposition \ref{thmk}. Suppose $M$ is infinite. Then $Z$ is connected.
\end{prop}
\begin{proof}
By Lemma \ref{Mss}, $M$ contains an open subgroup of $K_l,$ so $K_l/M$ is finite. By Lemma \ref{homeo}, $\pi_0(Y)$ is then finite. Since $Z$ is quasi-projective, it has finitely many connected components, so $\pi_0(\widetilde{Z})$ is finite as well. Again by Lemma \ref{homeo}, this implies that $G'(\mathbb{Q}_l)/\Stab_{G'}(Y^0)$ is finite.
If $G'(\mathbb{Q}_l)/\Stab_{G'}(Y^0)\neq\{1\},$ then $\Stab_{G'}(Y^0)$ would be a proper subgroup of $G'(\mathbb{Q}_l)$ of finite index. The Kneser-Tits conjecture for simple and simply connected $\mathbb{Q}_l$-isotropic groups implies that none of the simple components of $G'$ has non-trivial non-central normal subgroups (\cite[Theorems 7.1 and 7.6]{platonov1992algebraic}), and hence no proper subgroup of finite index. This is a contradiction. We conclude that $\pi_0(\widetilde{Z})=G'(\mathbb{Q}_l)/\Stab_{G'}(Y^0)=\{1\}.$ In particular, $Z$ is connected.
\end{proof}
The following proposition, generalizing the main theorem of \cite{kasprowitz2012monodromy}, is the upshot of this section.
\begin{prop}\label{thmk}
Suppose that the prime-to-$\Sigma$ Hecke correspondences from $G'$ act transitively on the set of connected components of $Z.$ If $z$ is not in the basic stratum, then $Z$ is connected.
\end{prop}
\begin{proof}
By Proposition \ref{kprop3.1}, it suffices to show that $M=\rho_l(\pi_1(Z^0,\overline{z}))$ is infinite for all $l\notin \Sigma$. Suppose towards a contradiction that $M$ is finite for some $l\notin\Sigma$. By \cite[Theorem 2.1]{oort1974subvarieties}, there exists a finite surjective base change $Z'\rightarrow Z^0$ such that $A_{Z^0}\times_{Z^0}Z'$ is isogenous to an isotrivial abelian scheme $\mathbf{A}$ defined over $\overline{\mathbb{F}_p}.$ By \cite[Proposition I.2.7]{faltings2013degeneration}, the isogeny $A_{Z^0}\times_{Z^0}Z'\rightarrow \mathbf{A}\times_{\overline{\mathbb{F}_p}}Z'$ over $Z'$ extends to an isogeny over $\overline{Z'}.$ Hence, $\overline{Z^0}$ lies in a single Newton stratum. We claim that $\overline{Z^0}$ contains a basic point, which contradicts the assumption that the generic point of $Z^0$ lies outside the basic stratum.
We first show that $\overline{Z^0}$ is a proper scheme over $\overline{\mathbb{F}_p}$. Let $R$ be a discrete valuation ring over $\mathcal{O}_E\otimes_{\mathbb{Z}}\mathbb{Z}_{(p)}$ with fraction field $K$, and let $(A,\lambda,i,\overline{\eta})$ be a $K$-valued point of $\overline{Z^0}.$ Then the N\'{e}ron model of $A$ over $R$ is in fact an abelian scheme. It is straightforward to check that $\lambda,i$ and $\overline{\eta}$ give a PEL structure on the N\'{e}ron model of $A$. Hence, $(A,\lambda,i,\overline{\eta})$ extends to an $R$-valued point of $\overline{Z^0}$ and $\overline{Z^0}$ is proper by the valuative criterion of properness.
Now we follow closely the proof of \cite[Proposition 6]{chai1995every} to show that $\overline{Z^0}$ contains a basic point. By assumption, the generic point $z$ of $Z^0$ is not in the basic stratum, so $\overline{Z^0}$ is positive dimensional. Since each Ekedahl-Oort stratum is quasi-affine, $\overline{Z^0}$ cannot be contained in the generic Ekedahl-Oort stratum. Hence, it must intersect some smaller stratum $S^{\omega_1}.$ By definition, each Ekedahl-Oort stratum is closed under $l$-adic Hecke correspondences, so $\overline{Z^0}\cap S^{\omega_1}$ is closed under $l$-adic Hecke correspondences as well. If $\overline{Z^0}\cap S^{\omega_1}$ is not $0$-dimensional, then its closure meets some smaller stratum $S^{\omega_2}.$ We can repeat this argument, and eventually reach a stratum $S^{\omega}$ such that $\overline{Z^0}\cap S^{\omega}$ is non-empty, $0$-dimensional, and closed under $l$-adic Hecke correspondences. Thus, $\overline{Z^0}$ contains a point whose $l$-adic Hecke orbit is finite. By \cite[Proposition 4.8]{yu2005basic}, this point must be basic. This completes the contradiction and the proof of our proposition.
\end{proof}
We apply Proposition \ref{thmk} to Newton strata and central leaves to obtain the following corollary.
\begin{cor}\label{equiv}\begin{enumerate}
\item Suppose $\mathcal{N}$ is a non-basic Newton stratum. Further assume $\mathcal{N}$ is smooth. Then the prime-to-$\Sigma$ Hecke correspondences from $G'$ act transitively on $\pi_0(\mathcal{N})$ if and only if $\mathcal{N}$ is irreducible.
\item Let $C\subseteq \mathscr{S}_{K^p}$ be a central leaf not contained in the basic stratum. Then the prime-to-$\Sigma$ Hecke correspondences from $G'$ act transitively on $\pi_0(C)$ if and only if $C$ is irreducible.
\end{enumerate}
\end{cor}
\begin{rmk}\label{smooth}
Shen and Zhang \cite[Proposition 6.2.7]{shen2017stratifications} proved that non-basic Newton strata on Shimura varieties of abelian type are smooth if the pair $(G, \mu)$ is fully Hodge-Newton decomposable. G{\"o}rtz, He and Nie classified such pairs in \cite[Theorem 3.5]{goertz2019fully}.
\end{rmk}
\section{The discrete part} In this section, we restrict to cases A and C. We prove the discrete part of the Hecke Orbit conjecture under the assumption that the Newton stratum in question is irreducible and contains a $B$-hypersymmetric point. Namely,
\begin{thm}\label{irred}
Suppose $\mathcal{N}$ is a Newton stratum containing a $B$-hypersymmetric point $x_0$ in some irreducible component $\mathcal{N}^0.$ Then $H^p$ acts transitively on $\pi_0(C(x)\cap \mathcal{N}^0)$ for any $x\in\mathcal{N}^0.$ Moreover, if $\mathcal{N}$ is not the basic stratum, then $C(x)\cap \mathcal{N}^0$ is irreducible.
\end{thm}
\begin{lem}\label{weakapp}
Let $(A,\lambda,i,\overline{\eta})$ be an abelian variety with PEL structure over $k.$ Let $I_B$ be the unitary group attached to $(\End_B^0(A),*),$ where $*$ denotes the Rosati involution; i.e.\ for every commutative $\mathbb{Q}$-algebra $R,$ $$I_B(R)=U(\End_B(A)\otimes R,*)=\{u\mid u\cdot u^*=1=u^*\cdot u\}.$$ Then $I_{B}$ satisfies weak approximation.
\end{lem}
\begin{proof}
Let $I$ denote the unitary group attached to $(\End^0(A),*)$. Then $I$ is connected (see the proof of \cite[Lemma 4.5]{chai2011monodromy}), and by \cite[\textsection 2.11]{humphreys2011conjugacy}, so is $I_B$. By \cite[Lemma 4.6]{chai2011monodromy}, $I$ is $\mathbb{Q}$-rational, and so is $I_B$. Then \cite[Proposition 7.3]{platonov1992algebraic} implies that $I_B$ satisfies weak approximation.
\end{proof}
Now we are ready to prove Theorem \ref{irred}. The key idea of the proof is as follows. Denote by $\mathcal{N}^0$ the irreducible component of $\mathcal{N}$ containing $x_0$. For a central leaf $C$ in $\mathcal{N}$ such that $C\cap \mathcal{N}^0$ is nonempty, write $C^0=C\cap \mathcal{N}^0.$ We first use the almost product structure on Newton strata to produce mutually isogenous $B$-hypersymmetric points, one on each irreducible component of $C^0$. We then show that these points lie in the same prime-to-$p$ Hecke orbit, which proves the first statement. Finally, we check that the conditions of Corollary \ref{equiv} are satisfied and use it to conclude that $C^0$ is irreducible.
\begin{proof} \textbf{Step 1.} Denote by $\{C^0_j\}_{j\in J}$ the set of irreducible components of $C^0.$ By the product structure of Newton polygon strata (see \cite[Theorem 5.3]{oort2004foliations} and \cite[\textsection 4]{mantovan2004certain}), for $N, m, n, d$ large enough, there is a finite surjective morphism $$\pi_N:Ig_{m,\mathbb{X}}\times\overline{\mathcal{M}}_{\mathbb{X}}^{n,d}\rightarrow \mathcal{N}^0$$ such that for some closed geometric point $t$ of $\overline{\mathcal{M}}_{\mathbb{X}}^{n,d},$ $\pi_N$ restricts to a finite surjective morphism $$q_m: Ig_{m,\mathbb{X}}\times\{t\}\rightarrow C^0.$$
For any fixed $j\in J,$ let $(s_j,t_j)\in Ig_{m,\mathbb{X}}\times\overline{\mathcal{M}}_{\mathbb{X}}$ be such that $(s_j,t_j)\in \pi_N^{-1}(x_0)$ and $s_j\in q_m^{-1}(C^0_j).$ Then $y_j=q_m(s_j)$ is a point in $C^0_j$ related to $x_0$ by a quasi-isogeny $\phi.$ Thus we obtain a set of points $\{y_j\in C^0_j\}_j$ that are mutually isogenous. Note that the $y_j$ are $B$-hypersymmetric, because by definition the property of being $B$-hypersymmetric is preserved under isogeny.
\textbf{Step 2.} Now we show that the $y_j$'s in \textbf{Step 1} are related by prime-to-$p$ isogenies. For any $i,j\in J,$ let $A_i$ and $A_j$ denote the abelian varieties with additional structure corresponding to $y_i$ and $y_j$, respectively. By construction, there exists an isogeny $\phi:A_i\rightarrow A_j.$
Since $y_i$ and $y_j$ belong to the same central leaf $C$, there exists an isomorphism $\theta_p:A_i[p^{\infty}]\xrightarrow{\sim} A_j[p^{\infty}].$ Let $\psi_p:A_i[p^{\infty}]\rightarrow A_i[p^{\infty}]$ be the composition $\phi_p^{-1}\circ \theta_p,$ where $\phi_p$ denotes the isogeny of Barsotti-Tate groups induced by $\phi.$ Then $\psi_p\in U(\End_B(A_i[p^{\infty}]),*).$ Since $A_i$ is $B$-hypersymmetric, the latter group is isomorphic to $I_B\otimes \mathbb{Q}_p$. By weak approximation for $I_B$, there exists $\psi\in I_B(\mathbb{A}_f^p)$ that induces the same isogeny as $\psi_p$ on the Barsotti-Tate groups. Hence, $\phi\circ \psi:A_i\rightarrow A_j$ is an isogeny that induces an isomorphism on the Barsotti-Tate groups. Thus, any $y_i$ and $y_j$ are in the same prime-to-$p$ Hecke orbit, and therefore $H^p$ acts transitively on the set of irreducible components of $C^0.$
In particular, since the Hecke symmetries on the connected component of $\mathscr{S}_{K^p}$ come from the adjoint group $G^{\text{ad}}$ of $G,$ and $G^{\text{der}}$ is a covering of $G^{\text{ad}}$ (see \cite[1.6.5]{moonen1998models} and \cite{deligne1971travaux}), we may take the isogenies in Step 1 to come from $G^{\text{der}}$. Moreover, using weak approximation, we may take these isogenies to be prime-to-$\Sigma$, where $\Sigma$ is as defined at the beginning of Section \ref{section 4}. We conclude by Corollary \ref{equiv}(2) that $C^0$ is irreducible.
\end{proof}
\section{The continuous part at $B$-hypersymmetric points}
In this section, we prove the continuous part of the Hecke orbit conjecture at $B$-hypersymmetric points. Here we do not need to work within an irreducible component of a Newton stratum.
\begin{thm}\label{cts}
Let $\mathcal{N}$ be a Newton stratum. Suppose $\mathcal{N}$ contains a $B$-hypersymmetric point $x_0$. Let $C=C(x_0)$ denote the central leaf containing $x_0.$ Let $H$ denote the Zariski closure of $H^p(x_0)$ inside $C.$ Then $\dim H=\dim C.$
\end{thm}
\begin{rmk}
We remark that $H$ is smooth. Indeed, by generic smoothness, each connected component of $H$ has a smooth open dense subscheme, but each connected component of $H$ is by definition the union of prime-to-$p$ Hecke translates of that smooth open dense subscheme and therefore is smooth. Furthermore, by Proposition \ref{thmk}, $H$ is connected. Therefore, $H$ lies inside a single connected component of $C$.
\end{rmk}
In the case of Siegel modular varieties, the proof of the continuous part (see \cite[Theorem 10.6]{chai2005hecke}) uses the ``Hilbert trick'' (\cite[Proposition 9.2]{chai2005hecke}) to find a point in $\mathcal{N}$ isogenous to a product of abelian varieties with at most two slopes (such a point is called ``split''), thereby reducing the proof to the case where $x_0$ has at most two slopes. The Hilbert trick does not hold for general PEL type. We observe, however, that it is not necessary to work with a split point: we work instead with the full cascade structure on the formal completion of a central leaf and reduce to a statement analogous to the one used in the proof for split points.
\subsection{The cascade structure on central leaves.}
In \cite{moonen2004serre}, Moonen generalizes classical Serre-Tate theory to $\mu$-ordinary points on PEL type Shimura varieties. He proves that the local deformation space at a generic point on a PEL type Shimura variety is built up from Barsotti-Tate groups via a system of fibrations over the Witt ring of $k$. For points outside the generic Newton stratum, Chai develops an analogous theory in the case when one restricts to a central leaf (see \cite[Sections 4.3, 4.4]{chai2006hecke}). In this section, we give a brief overview of the theory following loc. cit.
Let $A\rightarrow C$ be the restriction to $C$ of the universal abelian variety, and let $X$ be its Barsotti-Tate group with action by $\mathcal{O}_B\otimes_{\mathbb{Z}}\mathbb{Z}_p$. Then $X$ admits a unique slope filtration $0=X_0\subseteq X_1\subseteq\cdots\subseteq X_r=A[p^{\infty}]$ such that each $Y_i = X_i/X_{i-1}$ is a non-trivial isoclinic Barsotti-Tate group, and that the slopes of the $Y_i$'s appear in descending order (see \cite[Proposition 4.1]{chai2006hecke}). For $1\le i<j\le r,$ we use $\mathfrak{DE}(i,j)$ to denote the deformation space of the filtered Barsotti-Tate group $0\subseteq Y_i\subseteq Y_i\times Y_j$ with action by $\mathcal{O}_B\otimes_{\mathbb{Z}}\mathbb{Z}_p$, and let $\mathfrak{Def}(i,j)$ denote the deformation space of the filtered Barsotti-Tate group $0\subseteq X_i/X_{i-1}\subseteq X_{i+1}/X_{i-1}\subseteq\cdots\subseteq X_j/X_{i-1}$ with action by $\mathcal{O}_B\otimes_{\mathbb{Z}}\mathbb{Z}_p$. By definition, we have $\mathfrak{Def}(i,i+1)=\mathfrak{DE}(i,i+1)$ for any $i.$
The central leaf $C$ is homogeneous in the sense that the formal completions of $C$ at any two points are non-canonically isomorphic. Thus it suffices to study what happens at the point $x_0\in C.$ The formal completion $C^{/x_0}$ is contained in the deformation space of the above-mentioned slope filtration, which admits an $r$-cascade structure in the sense of \cite[Definition 2.2.1]{moonen2004serre}. Denote this $r$-cascade by $\mathfrak{MDE}(X)$.
For $1\le i<j\le r,$ the group constituents of $\mathfrak{MDE}(X)$ are given by $\mathfrak{DE}(i,j)$, and the $(i,j)$-truncations are given by $\mathfrak{Def}(i,j).$ The $r$-cascade structure can be expressed in the following commutative diagram:
\adjustbox{scale=0.76,center}{%
\begin{tikzcd}
& & & \mathfrak{Def}(1,r)\arrow{dl} \arrow{dr} & & & \\
& & \mathfrak{Def}(1,r-1) \arrow{dl}\arrow{dr} & & \mathfrak{Def}(2,r)\arrow{dl}\arrow{dr} & & \\
& \mathfrak{Def}(1,r-2) \arrow{dl}\arrow{dr}& & \mathfrak{Def}(2,r-1)\arrow{dl}\arrow{dr}& & \mathfrak{Def}(3,r)\arrow{dl}\arrow{dr} &\\
\cdots&&\cdots&&\cdots&&\cdots\\
\end{tikzcd}
}
Here each $\mathfrak{Def}(i,j)$ is a biextension of $(\mathfrak{Def}(i,j-1), \mathfrak{Def}(i+1,j))$ by $\mathfrak{DE}(i,j)\times_{\text{Spec}(k)} \mathfrak{Def}(i+1,j-1)$.
For a smooth formal group $G$, we denote by $G_{\text{pdiv}}$ its maximal Barsotti-Tate subgroup. We write $\mathfrak{MDE}(X)_{\text{pdiv}}$ for the sub-cascade of $\mathfrak{MDE}(X)$ whose group constituents are $\mathfrak{DE}(i,j)_{\text{pdiv}}.$ Then $C^{/x_0}\subseteq \mathfrak{MDE}(X)$ is precisely the sub-cascade of $\mathfrak{MDE}(X)_{\text{pdiv}}$ fixed under the involution induced by the polarization of $x_0$ \cite[Section 4.4]{chai2006hecke}. We denote this sub-cascade by $\mathfrak{MDE}(X)_{\text{pdiv}}^{\lambda}$; its group constituents are $\mathfrak{DE}(i,j)_{\text{pdiv}}^{\lambda}$ when $i+j=r+1$ and $\mathfrak{DE}(i,j)_{\text{pdiv}}$ otherwise. This cascade structure on $C^{/x_0}$ allows us to reduce the proof of Theorem \ref{cts} to Proposition \ref{twoslope} below, which is an analogue of \cite[Theorem 7.3]{chai2005hecke} used in the Siegel case.
For the rest of this subsection, we study the group constituents of $\mathfrak{MDE}(X)_{\text{pdiv}}^{\lambda}$. Let $X,Y$ be isoclinic Barsotti-Tate groups over $k$ with $\mathcal{O}_B\otimes_{\mathbb{Z}}\mathbb{Z}_p$-action such that the slope of $X$ is smaller than that of $Y.$ Let $\mathfrak{DE}(X,Y)$ denote the deformation space of the filtered Barsotti-Tate group $0\subseteq X\subseteq X\times Y.$
\begin{notation}
If $G$ is a Barsotti-Tate group, write $M(G)$ for the Cartier module of $G.$ It can be equipped with the structure of a $V$-isocrystal, where $V$ denotes the dual operator to the Frobenius $F.$ For a polarization $\lambda,$ we write $G^{(\lambda)}$ to mean $G^{\lambda}$ when the induced action of $\lambda$ on $G$ is nontrivial and $G$ otherwise. We also recall that $W=W(k)$ denotes the Witt ring of $k$ and $L$ denotes the fraction field of $W.$
\end{notation}
\begin{prop}\label{twoslope}\begin{enumerate}
\item There is a natural isomorphism of $V$-isocrystals $$M(\mathfrak{DE}(X,Y)_{\text{pdiv}})\otimes_{W}L\cong \Hom_{W}(M(X),M(Y))\otimes_{W}L.$$
\item Let $\lambda$ be a principal quasi-polarization on $X\times Y.$ There is a natural isomorphism of $V$-isocrystals $$M(\mathfrak{DE}(X,Y)_{\text{pdiv}}^{\lambda})\otimes_{W}L\cong \Hom_{W}^{\lambda}(M(X),M(Y))\otimes_{W}L,$$ where by abuse of notation we use $\lambda$ to denote the involutions induced by $\lambda$ on the respective spaces.
\end{enumerate}\end{prop}
\begin{proof}
This clearly follows from \cite[Theorem 9.6]{chai2005canonical}, where the same statements are proved without assuming the presence of $\mathcal{O}_B\otimes_{\mathbb{Z}}\mathbb{Z}_p$-action.
\end{proof}
\subsection{Proof of the continuous part at $B$-hypersymmetric points.}
We keep the notation of Theorem \ref{cts}. The key idea in proving Theorem \ref{cts} is to study the action of the local stabilizer group at $x_0$ and show that the formal completion $H^{/x_0}\subseteq C^{/x_0}$ in fact coincides with $C^{/x_0}.$
\begin{defn}{\cite[Section 6.1]{chai2006hecke}} Let $x=[(A_x,\lambda_x,\iota_x,\eta_x)]\in\mathscr{S}(k)$ be a geometric point. Let $\mathcal{U}_x$ be the unitary group attached to the semisimple algebra with involution $(\End_{\mathcal{O}_B}(A_x)\otimes_{\mathbb{Z}}\mathbb{Q},*),$ where $*$ is the Rosati involution attached to $\lambda_x.$ We call $\mathcal{U}_x(\mathbb{Z}_p)$ the \emph{local stabilizer group at $x$}.
\end{defn}
By the assumption of Theorem \ref{cts}, $x_0$ is $B$-hypersymmetric, so $\mathcal{U}_{x_0}(\mathbb{Z}_p)$ coincides with $I_B(\mathbb{Z}_p)$ for the group $I_B$ defined in Lemma 5.2. By deformation theory, there is a natural action of $\mathcal{U}_{x_0}(\mathbb{Z}_p)$ on the formal completion $\mathscr{S}^{/x_0}$ and hence on its closed formal subschemes $\mathfrak{Def}(1,r)_{\text{pdiv}}^{(\lambda)}$ and $H^{/x_0}.$
Recall that the maps in the cascade structure of $\mathfrak{MDE}(X)$ are given by \cite[Proposition 2.1.9]{moonen2004serre} (see also \cite[2.3.6]{moonen2004serre}). These maps give rise to a group action of $\mathcal{U}_{x_0}(\mathbb{Z}_p)$ on each $\mathfrak{Def}(i,j)_{\text{pdiv}}$ in an inductive way. We describe this action in the next paragraph.
Suppose that for the triple $\mathfrak{Def}(i,j-1)_{\text{pdiv}}\xleftarrow{p}\mathfrak{Def}(i,j)_{\text{pdiv}}\xrightarrow{r}\mathfrak{Def}(i+1,j)_{\text{pdiv}}$ we have an action of $\mathcal{U}_{x_0}(\mathbb{Z}_p)$ on $\mathfrak{Def}(i,j)_{\text{pdiv}}.$ Define an action of $\mathcal{U}_{x_0}(\mathbb{Z}_p)$ on $\mathfrak{Def}(i,j-1)_{\text{pdiv}}$ by $u\cdot \mathfrak{X}:=p(u\cdot\tilde{\mathfrak{X}})$ for any $u\in \mathcal{U}_{x_0}(\mathbb{Z}_p)$ and $\mathfrak{X}\in \mathfrak{Def}(i,j-1)_{\text{pdiv}},$ where $\tilde{\mathfrak{X}}$ is any pre-image of $\mathfrak{X}$ in $\mathfrak{Def}(i,j)_{\text{pdiv}}$ under $p.$ For any other choice of pre-image $\tilde{\mathfrak{X}}',$ by the biextension structure on $\mathfrak{Def}(i,j)_{\text{pdiv}}$, $\tilde{\mathfrak{X}}$ and $\tilde{\mathfrak{X}}'$ are in the same $\mathfrak{DE}(i+1,j-1)_{\text{pdiv}}$-orbit in $\mathfrak{Def}(i,j)_{\text{pdiv}}$. Hence, $u\cdot\tilde{\mathfrak{X}}$ and $u\cdot\tilde{\mathfrak{X}}'$ are also in the same $\mathfrak{DE}(i+1,j-1)_{\text{pdiv}}$-orbit, which implies that their images under $p$ coincide. Therefore this action of $\mathcal{U}_{x_0}(\mathbb{Z}_p)$ on $\mathfrak{Def}(i,j-1)_{\text{pdiv}}$ is well-defined. One defines the action on $\mathfrak{Def}(i+1,j)_{\text{pdiv}}$ analogously via $r$.
In the following, we study how $H^{/x_0}$ behaves with respect to the cascade structure of $C^{/x_0}\cong \mathfrak{MDE}(X)_{\text{pdiv}}^{\lambda}.$
For $1\le i \le r-1,$ let $H_{i,i+1}$ denote the image of $H^{/x_0}$ inside $\mathfrak{DE}(i,i+1)_{\text{pdiv}}^{(\lambda)}$ under the maps in the cascade structure of $C^{/x_0}.$
For $1\le i<j-1\le r-1$, we use the biextension structure of $\mathfrak{Def}(i,j)_{\text{pdiv}}^{(\lambda)}$ and define $H_{i,j}$ as the quotient $\mathfrak{Def}(i,j)^{(\lambda)}_{\text{pdiv}}/(H_{i,i+1}\times H_{j-1,j})$. By \cite[Section 2]{mumford1969bi}, there is a non-canonical injection $\mathfrak{DE}(i,j)_{\text{pdiv}}^{(\lambda)}\times_{\text{Spec}(k)} \mathfrak{Def}(i+1,j-1)\hookrightarrow H_{i,j}$.
\begin{lem}\label{lempdiv}
For $1\le i\le r-1,$ $H_{i,i+1}$ is a formal Barsotti-Tate subgroup of $\mathfrak{DE}(i,i+1)_{\text{pdiv}}^{(\lambda)}$.
\end{lem}
\begin{proof}By definition, $H$ is stable under all prime-to-$p$ Hecke correspondences. Then \cite[Proposition 6.1]{chai2006hecke} implies $H^{/x_0}$ is stable under the action of $\mathcal{U}_{x_0}(\mathbb{Z}_p);$ hence, so are the $H_{i,i+1}$'s. As explained in Remark 6.2, $H$ is smooth, so $H^{/x_0}$ is formally smooth and hence irreducible. The action of $\mathcal{U}_{x_0}(\mathbb{Z}_p)$ on $\mathfrak{DE}(i,i+1)_{\text{pdiv}}^{(\lambda)}$ gives a homomorphism from $\mathcal{U}_{x_0}(\mathbb{Z}_p)\otimes_{\mathbb{Z}_p}\mathbb{Q}_p$ to the unitary group attached to $\text{End}(\mathfrak{DE}(i,i+1)_{\text{pdiv}})^{(\lambda)}\otimes_{\mathbb{Z}_p}\mathbb{Q}_p$. Applying \cite[Theorem 4.3]{chai2008rigidity} finishes the proof.
\end{proof}
\begin{lem}\label{lemequal}
For $1\le i \le r-1,$ $H_{i,i+1}=\mathfrak{DE}(i,i+1)_{\text{pdiv}}^{(\lambda)}.$
\end{lem}
\begin{proof}
The natural action of $\mathcal{U}_{x_0}(\mathbb{Z}_p)$ on $H_{i,i+1}$ restricts to an action on $M(Y_i)\otimes_W L$. Since $x_0$ is $B$-hypersymmetric, we have $\mathcal{U}_{x_0}(\mathbb{Z}_p)=I_B(\mathbb{Z}_p).$ Thus the action of $\overline{\mathcal{U}_{x_0}(\mathbb{Z}_p)}$ on $M(Y_i)\otimes_W \overline{L}$ is isomorphic to the standard representation $Std$ of $GL_{\dim Y_i}.$
If the polarization $\lambda:=\lambda_{x_0}$ is nontrivial on $\mathfrak{DE}(i,i+1)_{\text{pdiv}}$, we obtain a duality pairing between $M(Y_i)$ and $M(Y_{i+1}).$ In this case, the action of $\overline{\mathcal{U}_{x_0}(\mathbb{Z}_p)}$ on $M(Y_{i+1})\otimes_W \overline{L}$ is isomorphic to the dual of $Std.$ Thus, the action of $\overline{\mathcal{U}_{x_0}(\mathbb{Z}_p)}$ on $\Hom_{W}^{\lambda}(M(Y_i),M(Y_{i+1}))\otimes_{W}\overline{L}$ is isomorphic to $\text{Sym}^2Std,$ which is irreducible. By Proposition \ref{twoslope}(2), the action of $\overline{\mathcal{U}_{x_0}(\mathbb{Z}_p)}$ on $M(\mathfrak{DE}(i,i+1)_{\text{pdiv}}^{\lambda})\otimes_{W}\overline{L}$ is isomorphic to an irreducible representation.
On the other hand, if the polarization is trivial on $\mathfrak{DE}(i,i+1)_{\text{pdiv}}$, the action of $\overline{\mathcal{U}_{x_0}(\mathbb{Z}_p)}$ on $\Hom_{W}(M(Y_i),M(Y_{i+1}))\otimes_{W}\overline{L}$ is the tensor product representation of the standard representations of $GL_{\dim Y_i}$ and $GL_{\dim Y_{i+1}}$, which is irreducible. By Proposition \ref{twoslope}(1), the action of $\overline{\mathcal{U}_{x_0}(\mathbb{Z}_p)}$ on $M(\mathfrak{DE}(i,{i+1})_{\text{pdiv}})\otimes_{W}\overline{L}$ is again isomorphic to an irreducible representation.
By Lemma \ref{lempdiv}, it makes sense to consider the Cartier module $M(H_{i,i+1})$. Then $M(H_{i,i+1})\otimes_W \overline{L}$ is a non-trivial subrepresentation of $M(\mathfrak{DE}(i,i+1)_{\text{pdiv}}^{(\lambda)})\otimes_W\overline{L}$, which is irreducible, so we obtain the desired equalities.
\end{proof}
\begin{proof}[Proof of Theorem \ref{cts}]
We show inductively that Lemma \ref{lemequal} implies $H^{/x_0}=C^{/x_0}.$
When $r=2,$ $C^{/x_0}=\mathfrak{DE}(1,2)_{\text{pdiv}}^{\lambda}$ and there is nothing to prove.
When $r=3,$ $C^{/x_0}=\mathfrak{Def}(1,3)_{\text{pdiv}}^{\lambda}$ is a biextension of $(\mathfrak{DE}(1,2)_{\text{pdiv}},\mathfrak{DE}(2,3)_{\text{pdiv}})$ by $\mathfrak{DE}(1,3)_{\text{pdiv}}^{\lambda}.$ The equalities in Lemma \ref{lemequal} induce an isomorphism $H_{1,3}=\mathfrak{Def}(1,3)_{\text{pdiv}}^{\lambda}/(H_{1,2}\times H_{2,3})=\mathfrak{Def}(1,3)_{\text{pdiv}}^{\lambda}/(\mathfrak{DE}(1,2)_{\text{pdiv}}\times \mathfrak{DE}(2,3)_{\text{pdiv}})\cong \mathfrak{DE}(1,3)_{\text{pdiv}}^{\lambda}.$ Thus, the only candidate for the subscheme $H^{/x_0}$ of $\mathfrak{Def}(1,3)_{\text{pdiv}}^{\lambda}$ is $\mathfrak{Def}(1,3)_{\text{pdiv}}^{\lambda}$ itself.
When $r>3,$ let $H^{i,j}$ denote the image of $H^{/x_0}$ in $\mathfrak{Def}(i,j)_{\text{pdiv}}^{(\lambda)}.$ By induction we have the equalities $H^{i,j}=\mathfrak{Def}(i,j)_{\text{pdiv}}^{(\lambda)}$ for all $(i,j)\neq (1,r).$ In particular, $H^{1,r-1}=\mathfrak{Def}(1,r-1)_{\text{pdiv}}$ and $H^{2,r}=\mathfrak{Def}(2,r)_{\text{pdiv}}$ together imply $$H_{1,r}=\mathfrak{Def}(1,r)_{\text{pdiv}}^{\lambda}/(\mathfrak{DE}(1,r-1)_{\text{pdiv}}\times \mathfrak{DE}(2,r)_{\text{pdiv}}),$$ so we obtain $H^{/x_0}=\mathfrak{Def}(1,r)_{\text{pdiv}}^{\lambda}=C^{/x_0}.$
\end{proof}
\section{Proof of the main theorem}\label{section7}
The main theorem of our paper is the following.
\begin{thm}\label{mainthm}
Let $\mathcal{D}=(B,\mathcal{O}_B,*,V,\langle\cdot,\cdot\rangle, h)$ be a Shimura datum of PEL type A or C, for which $p$ is an unramified prime of good reduction. Let $F$ be the center of $B$ and $F_0$ its maximal totally real subfield. Let $\mathscr{S}$ denote the reduction modulo $p$ of the Shimura variety associated to $\mathcal{D}$ of level $K^p\subseteq G(\mathbb{A}_f^p)$. Let $\mathcal{N}$ be a Newton stratum on $\mathscr{S}$. Assume \begin{enumerate}
\item $\mathcal{N}$ contains a hypersymmetric point $x_0$,
\item either (i) $p$ is totally split in $F/F_0$ and the Newton polygon of $\mathcal{N}$ satisfies the condition (*) in Definition \ref{conditionstar}; or (ii) $p$ is inert in $F/F_0.$
\end{enumerate}
Write $\mathcal{N}^0$ for the irreducible component of $\mathcal{N}$ containing $x_0$. Then $H^p(x)$ is dense in $C(x)\cap \mathcal{N}^0$ for every $x\in\mathcal{N}^0(k).$ Moreover, if $\mathcal{N}$ is not the basic stratum, then $C(x)\cap \mathcal{N}^0$ is irreducible.
\end{thm}
We observe that Theorem \ref{irred} implies that for any $x\in\mathcal{N}^0$, $C(x)\cap \mathcal{N}^0$ is irreducible and hence coincides with the irreducible component of $C(x)$ containing $x$; we denote this component by $C^0(x)$. Moreover, Theorems \ref{irred} and \ref{cts} together imply that the Hecke orbit conjecture holds for any $B$-hypersymmetric point. The key step in deriving Theorem \ref{mainthm} from these two theorems is thus to show the existence of a $B$-hypersymmetric point in $\overline{H^p(x)}\cap C^0(x)$ for every $x\in\mathcal{N}^0$.
\subsection{The case of $B=F$.}
We consider first the situation where $B=F$ is a totally real field. In this case, our PEL datum is of type C, the algebraic group $G$ is symplectic, and the Hilbert trick still applies. We use the Hilbert trick to map to $\mathscr{S}$ a Hilbert modular variety in which every central leaf in the Newton stratum corresponding to $\mathcal{N}$ contains a $B$-hypersymmetric point.
For the remainder of this section, we fix a point $x=[(A_x,\lambda_x,\iota_x,\eta_x)]\in\mathcal{N}(\overline{\mathbb{F}_p}).$ There is a decomposition $A_x\sim_{F\text{-isog}} A_1^{e_1}\times\cdots\times A_n^{e_n}$ into $F$-simple abelian varieties $A_i$ such that $A_i$ and $A_j$ are not $F$-isogenous whenever $i\neq j.$ For each $i,$ let $E_i\subseteq \End_{F}(A_i)\otimes_{\mathbb{Z}}\mathbb{Q}$ be the maximal totally real subalgebra fixed under the involution given by the polarization of $A_i$ induced by $\lambda_x.$ Then $E_i/F$ is a totally real field extension of degree $\dim(A_i)/[F:\mathbb{Q}].$ Define $E:=\prod_{i=1}^nE_i.$ Then $E$ is a subalgebra of $\End_F(A_x)\otimes_{\mathbb{Z}}\mathbb{Q}$ of dimension $\dim(A_x).$
This construction relies on the assumption that $x$ is defined over $\overline{\mathbb{F}_p},$ so the method in this section only applies when $k=\overline{\mathbb{F}_p}.$ However, by \cite[Theorem 2.2.3]{poonen2017rational}, to prove the Hecke orbit conjecture over any algebraically closed field $k$ of characteristic $p,$ it suffices to prove it over $\overline{\mathbb{F}_p}.$
\begin{lem}\label{fields}
Let $F$ be a totally real number field and $d$ a positive integer. Let $u_1,\cdots,u_n$ be the places of $F$ above $p.$ Suppose that for each $i$ there is a finite separable extension $K_i/F_{u_i}$ of degree $d.$ Then there exists a totally real field extension $L/F$ of degree $d$ such that each $u_i$ is inert in $L/F$ and $L_{w_i}\cong K_i$ over $F_{u_i},$ where $w_i$ denotes the place of $L$ above $u_i.$
\end{lem}
\begin{proof}
For each place $u_i$ of $F$ above $p,$ we have $K_i\cong F_{u_i}(\alpha_i)$, where $\alpha_i$ is a root of some irreducible separable monic polynomial $f_i'\in F_{u_i}[X]$ of degree $d.$ By Krasner's lemma, we can approximate $f_i'$ by some irreducible separable monic polynomial $f_i\in F[X]$ such that $F_{u_i}[X]/(f_i)\cong F_{u_i}[X]/(f_i')\cong K_i.$ Let $v_1,\cdots,v_m$ denote the archimedean places of $F$ and let $g_i=\prod_{j=1}^d (X-\beta_{ij})$ for distinct $\beta_{ij}\in F,$ so that $F_{v_i}[X]/(g_i)\cong \mathbb{R}[X]/(g_i)\cong \mathbb{R}^d$ since $F$ is totally real. By weak approximation, there exists some monic $f\in F[X]$ of degree $d$ that is $u_i$-close to $f_i$ for each $i$ and $v_i$-close to $g_i$ for each $i.$ In particular, by Krasner's lemma again, $F_{u_i}[X]/(f)\cong F_{u_i}[X]/(f_i)\cong K_i$ for each $i,$ so $f$ is irreducible in $F_{u_i}[X]$ and hence irreducible over $F.$
Let $L=F[X]/(f).$ Then $L/F$ is separable of degree $d$. Moreover, for each $u_i$, we have $$\prod_{w|u_i}L_{w}\cong F_{u_i}\otimes_F L\cong F_{u_i}[X]/(f)\cong K_i.$$ This implies that there is a unique place $w_i$ of $L$ above $u_i$ and $L_{w_i}\cong K_i.$ Similarly, for each archimedean place $v_i$, we have $\prod_{w|v_i}L_w\cong F_{v_i}\otimes_F L\cong \mathbb{R}^d.$ Thus, $L$ is totally real.
\end{proof}
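To make the construction concrete, here is a minimal illustrative instance of Lemma \ref{fields}; the specific fields below are our own toy choice, not taken from the text.

```latex
% Toy instance of Lemma \ref{fields}: F = \mathbb{Q}, d = 2, p = 5,
% so u_1 = (5) is the only place of F above p. Take
% K_1 = \mathbb{Q}_5(\sqrt{2}), the unramified quadratic extension of
% \mathbb{Q}_5 (2 is a non-square modulo 5). The construction yields
\[
  L \;=\; \mathbb{Q}[X]/(X^{2}-2)\;\cong\;\mathbb{Q}(\sqrt{2}),
  \qquad
  \prod_{w\mid 5} L_{w}\;\cong\;\mathbb{Q}_5\otimes_{\mathbb{Q}}L
  \;\cong\;\mathbb{Q}_5(\sqrt{2})\;=\;K_1 .
\]
% Indeed, L is totally real, 5 is inert in L/\mathbb{Q} because
% X^2 - 2 is irreducible modulo 5, and the unique place w_1 of L
% above 5 satisfies L_{w_1} \cong K_1.
```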
\begin{lem}\label{Eprime}
The closure of the prime-to-$p$ Hecke orbit of $x$ in $\mathscr{S}$ contains a supersingular point $z=[(A_z,\lambda_z,\iota_z,\eta_z)].$ Moreover, there exists a product of totally real fields $L=\prod_{i,j}L_{i,j}$ and an injective ring homomorphism $\alpha:L\rightarrow \End_F(A_z)\otimes_{\mathbb{Z}}\mathbb{Q}$ such that \begin{itemize}
\item for every $i,j$, $F\subseteq L_{i,j}$ and $L_{i,j}/F$ is inert at every prime of $F$ above $p$;
\item $\dim_{\mathbb{Q}}L=\dim A_z;$ and
\item the Rosati involution induced by $\lambda_z$ acts trivially on the image $\alpha(L)$.
\end{itemize}
\end{lem}
\begin{proof}
First of all, the same argument as in the last paragraph of the proof of Proposition \ref{thmk} shows that $\overline{H^p(x)}$ contains a basic point $z=[(A_z,\lambda_z,\iota_z,\eta_z)].$ Since $B=F$ is a totally real number field, $z$ is supersingular. Indeed, let $d=[F:\mathbb{Q}],$ so that $F=\mathbb{Q}(\alpha)$ where $\alpha$ is a root of some monic irreducible polynomial $f\in\mathbb{Q}[X]$ of degree $d.$ The endomorphism algebra of a $d$-dimensional supersingular abelian variety is isomorphic to $\text{Mat}_d(D_{p,\infty}),$ where $D_{p,\infty}$ denotes the quaternion algebra over $\mathbb{Q}$ ramified exactly at $p$ and $\infty,$ and $\text{Mat}_d(D_{p,\infty})$ clearly contains the companion matrix of $f$.
By assumption, $\mathcal{N}$ admits an $F$-hypersymmetric point, so its Newton polygon $\zeta$ is $F$-symmetric, i.e. $\zeta$ is the amalgamation of disjoint $F$-balanced Newton polygons $\zeta_1,\cdots,\zeta_a$ for some $a.$ By \cite[Definition 4.4.1]{zong2008hypersymmetric}, for any $j,$ $\zeta_j$ either has no slope $1/2$, or only has slope $1/2.$ Hence, there exist positive integers $m_j, h_j$ such that at every prime $u$ of $F$ above $p,$ $\zeta_j$ has exactly $h_j$ symmetric components with at most $2$ slopes, and each component has multiplicity $m_j.$
Write $\mathbb{X}=A_x[p^{\infty}].$ Then the above decomposition of $\zeta$ gives a decomposition $\mathbb{X}=\bigoplus_{j=1}^a\mathbb{X}_j,$ where $\mathbb{X}_j$ is the Barsotti-Tate group corresponding to $\zeta_j.$ Further, for each fixed $j,$ the numerical properties of $\zeta_j$ mentioned above give a decomposition $$\mathbb{X}_j=\bigoplus_{u|p}\bigoplus_{i=1}^{h_j}\mathbb{X}^u_{j,i}$$ into Barsotti-Tate groups with at most $2$ slopes.
Hence, $$\End_F(\mathbb{X})\otimes_{\mathbb{Z}_p}\mathbb{Q}_p\cong \prod_{u|p}\prod_{j=1}^a\prod_{i=1}^{h_j}\End_{F_u}(\mathbb{X}_{j,i}^u).$$ Its subalgebra $E\otimes_{\mathbb{Q}} \mathbb{Q}_p$ similarly admits a decomposition $$E\otimes_{\mathbb{Q}}\mathbb{Q}_p=\prod_{u|p}\prod_{j=1}^a\prod_{i=1}^{h_j}E_{j,i}^u$$ into local fields. Note that this has to coincide with the decomposition $E\otimes_{\mathbb{Q}}\mathbb{Q}_p\cong \prod_{i=1}^nE_i\otimes_{\mathbb{Q}}\mathbb{Q}_p=\prod_{i=1}^n\prod_{w|p}(E_i)_w.$
Now we regroup the local data $\{E_{j,i}^u/F_u\}_{j,i,u}$ to construct totally real extensions of $F.$ Notice that for a fixed $j,$ the numerical conditions on $\zeta_j$ imply that for any fixed $i,$\begin{itemize}
\item $\#\{E_{j,i}^u\}_{u|p}=h_j,$ and
\item $[E_{j,i}^u:F_u]=[E_{j,i}^{u'}:F_{u'}]=\dim(\mathbb{X}_{j,i}^u)/[F:\mathbb{Q}]$ for any primes $u,u'$ of $F$ above $p.$
\end{itemize}
By Lemma \ref{fields}, for a fixed pair $(j,i)$, the data $\{E_{j,i}^u\}_{u|p}$ gives rise to a totally real field extension $L_{i,j}/F$ with $[L_{i,j}:\mathbb{Q}]=\dim(\mathbb{X}_{j,i}^u)$ such that all primes of $F$ above $p$ are inert in $L_{i,j}$ and $\{(L_{i,j})_{w}\}_{w|p}=\{E_{j,i}^u\}_{u|p}$ as multi-sets.
Taking $L=\prod_{i,j}L_{i,j},$ it is easy to see that $\dim_{\mathbb{Q}}L=\dim A_z$.
In general, $\mathcal{O}_E\cap \End_{F}(A_z)\subseteq\mathcal{O}_E$ is of finite index. Following the argument in \cite[Theorem 11.3]{chai2005hecke}, up to an isogeny correspondence, we may assume that $\End_F(A_z)$ contains $\mathcal{O}_E.$ Similarly, we may and do assume $\mathcal{O}_L\subseteq\End_F(A_z).$ By construction, we then have $\mathcal{O}_E\otimes_{\mathbb{Z}} \mathbb{Z}_p\cong \mathcal{O}_L\otimes_{\mathbb{Z}}\mathbb{Z}_p$ as maximal orders of $\End_F(A_z)\otimes_{\mathbb{Z}}\mathbb{Z}_p,$ so the Skolem-Noether theorem implies that $E=\gamma L\gamma^{-1}$ for some $\gamma$ in the local stabilizer group of $z.$ Then $\alpha:=\text{Ad}(\gamma)$ satisfies the properties in the statement of this lemma.
\end{proof}
\begin{prop}\label{B=F}
Theorem \ref{mainthm} holds when $B=F$ is a totally real field.
\end{prop}
\begin{proof}
If $x$ is (isogenous to) an $F$-hypersymmetric point, the desired statement follows immediately from Theorems \ref{irred} and \ref{cts}, so we assume $x$ is not $F$-hypersymmetric.
We use an idea analogous to that of \cite[Section 10]{chai2005hecke} and show that $\overline{H^p(x)}\cap C^0$ contains an $F$-hypersymmetric point $t$. Then we have $\overline{H^p(t)}\cap C^0\subseteq \overline{H^p(x)}\cap C^0$, but $\overline{H^p(t)}\cap C^0=C^0$ since $t$ is $F$-hypersymmetric, and we are done.
Let $E$ be as defined at the beginning of this subsection, and let $\mathcal{M}_E$ denote the Hilbert modular variety attached to $E.$ In general, $E\cap \End_{\overline{\mathbb{F}_p}}(A_x)$ is an order in $\mathcal{O}_E.$ However, up to an isogeny correspondence, we may assume $\mathcal{O}_E\subseteq \End_{\overline{\mathbb{F}_p}}(A_x).$ Then there exists a finite morphism $f:\mathcal{M}_E\rightarrow\mathscr{S}$ passing through $x$, compatible with the prime-to-$p$ Hecke correspondences on $\mathcal{M}_E$ and $\mathscr{S},$ and such that for each geometric point of $\mathcal{M}_E,$ the map induced by $f$ on the strict henselizations is a closed embedding (for details, see \cite[Proposition 9.2]{chai2005hecke} and the proof of \cite[Theorem 11.3]{chai2005hecke}).
Let $L=\prod_i{L_i}$ and $\gamma$ be as given by Lemma \ref{Eprime}. Again, up to an isogeny correspondence, we may assume $\mathcal{O}_{L}\subseteq \End_{\overline{\mathbb{F}_p}}(A_z)$. Similarly, there is a natural finite morphism $g:\mathcal{M}_{L}\rightarrow\mathscr{S}$ passing through $z,$ such that $g$ is compatible with the Hecke correspondences on either side and at every geometric point of $\mathcal{M}_{L}$ induces a closed embedding on the strict henselizations. Moreover, writing $H^p_E(x)$ for the prime-to-$p$ Hecke orbit of $x$ in $\mathcal{M}_E,$ we have $\gamma^{/z}(\overline{H^p_E(x)}^{/z})\subseteq g^{/z}(\mathcal{M}_{L}^{/z})$ and $\gamma^{/z}(\overline{H^p_E(x)}^{/z})\subseteq \gamma^{/z}(\overline{H^p(x)}^{/z}).$ By \cite[Proposition 6.1]{chai2006hecke}, $\overline{H^p(x)}^{/z}$ is stable under the action of the local stabilizer group of $z,$ so $\gamma^{/z}(\overline{H^p(x)}^{/z})=\overline{H^p(x)}^{/z}$ and we obtain $\gamma^{/z}(\overline{H^p_E(x)}^{/z})\subseteq g^{/z}(\mathcal{M}_{L}^{/z})\cap \overline{H^p(x)}^{/z}.$ Therefore the fiber product $\mathcal{M}_{L}\times_{\mathscr{S}_{F}}(\overline{H^p(x)}\cap C^0(x))$ is nonempty.
Now let $\tilde{y}$ be an $\overline{\mathbb{F}_p}$-point of $\mathcal{M}_{L}\times_{\mathscr{S}_F}(\overline{H^p(x)}\cap C^0)$ with image $\overline{y}\in\mathcal{M}_{L}(\overline{\mathbb{F}_p})$ and image $y\in (\overline{H^p(x)}\cap C^0)(\overline{\mathbb{F}_p}).$ By definition, we have $g(\overline{y})=y.$ Moreover, $\overline{H^p(x)}\cap C^0$ contains the image under $g$ of the smallest Hecke-invariant subvariety of $\mathcal{M}_{L}$ passing through $\overline{y},$ which by the Hecke orbit conjecture for Hilbert modular varieties \cite[Theorem 4.5]{chai2005hecke} is precisely the central leaf $C_{L}(\overline{y})$.
Therefore, if we can show that $C_{L}(\overline{y})$ contains a point isogenous to a product of $L_i$-hypersymmetric points, then Proposition \ref{inert} implies that $C_{L}(\overline{y})$ contains an $F$-hypersymmetric point, and so does $\overline{H^p(x)}\cap C^0.$
To see that $C_L(\overline{y})$ contains a point isogenous to a product of $L_i$-hypersymmetric points, consider the canonical decomposition $\mathcal{M}_L\cong\prod\mathcal{M}_{L_i}$ and the corresponding Newton stratum decomposition $\mathcal{N}_L\cong\prod \mathcal{N}_{L_i}$. By the construction of the $L_i,$ the Newton polygon of $\mathcal{N}_{L_i}$ either has exactly two slopes at every prime of $L_i$ above $p,$ or only has slope $1/2.$ In either case, $\mathcal{N}_{L_i}$ admits an $L_i$-hypersymmetric point. By the existence of finite correspondences between central leaves in $\mathcal{N}_{L_i}$ (see \cite[Section 1.3]{yustratifying}), each central leaf of $\mathcal{N}_{L_i}$ contains an $L_i$-hypersymmetric point. This completes the proof.
\end{proof}
\subsection{Proof of the general case.} In this section, we complete the proof of Theorem \ref{mainthm}.
\begin{proof}[Proof of Theorem \ref{mainthm}] Again, we only need to prove the statement when $x$ is not $B$-hypersymmetric.
\textbf{Case 1.} When $B$ is totally real, see Proposition \ref{B=F}.
\textbf{Case 2.} Suppose $B=F$ is a CM field.
Let $\mathcal{D}'$ be the Shimura datum obtained by replacing $B$ with its maximal totally real subfield $F_0$ in the definition of $\mathcal{D},$ and let $\mathscr{S}'$ denote the Shimura variety arising from $\mathcal{D}'$. Then there is an embedding $\mathscr{S}\rightarrow \mathscr{S}'$ given by sending any $[(A,\lambda,\iota,\eta)]$ to $[(A,\lambda,\iota|_{F_0},\eta)].$ Let $x=[(A,\lambda,\iota,\eta)]\in \mathcal{N}^0$ be any point; we denote its image in $\mathscr{S}'$ also by $x$. By assumption, $\mathcal{N}^0$ contains an $F$-hypersymmetric point $x_0$, and Proposition \ref{inert} implies that $x_0$ is also $F_0$-hypersymmetric. Write $H^{p\prime}(x)$ for the prime-to-$p$ Hecke orbit of $x$ in $\mathscr{S}'$ and $C^{0\prime}$ for the irreducible component of the central leaf in $\mathscr{S}'$ passing through $x$. Then by Proposition \ref{B=F}, $\overline{H^{p\prime}(x)}\cap C^{0\prime}=C^{0\prime}.$
Now we show that $H^{p\prime}(x)\cap C^0=H^p(x).$ Let $x'=[(A',\lambda',\iota',\eta')]\in H^{p\prime}(x)$. Then there is a prime-to-$p$ $F_0$-isogeny $f$ between $x$ and $x'.$ By definition, $f\circ \iota|_{F_0}=\iota'|_{F_0}\circ f$. Since $F/F_0$ is a quadratic imaginary extension, $f$ extends to an $F$-isogeny between $x$ and $x'.$
Finally, we have $\overline{H^p(x)}\cap C^0=\overline{H^{p\prime}(x)\cap C^0}\cap C^0=\overline{H^{p\prime}(x)}\cap C^0=\overline{H^{p\prime}(x)} \cap C^{0\prime}\cap C^0=C^{0\prime}\cap C^0=C^0.$
\textbf{Case 3.} Now we are ready to show the statement for a general $B$.
Let $\mathcal{D}'$ be the Shimura datum given by replacing $B$ by its center $F$ in the definition of $\mathcal{D}.$ Let $\mathscr{S}'$ denote the Shimura variety arising from $\mathcal{D}'$. Then there is an embedding $\mathscr{S}\rightarrow \mathscr{S}'$ given by sending $[(A,\lambda,\iota,\eta)]$ to $[(A,\lambda,\iota|_{F},\eta)].$ Let $x\in \mathcal{N}^0$ be any point, and denote its image in $\mathscr{S}'$ also by $x$. By assumption, $\mathcal{N}^0$ contains a $B$-hypersymmetric point $x_0$. By \cite[Proposition 3.3.1]{zong2008hypersymmetric}, $x_0$ is also $F$-hypersymmetric. Write $H^{p\prime}(x)$ for the prime-to-$p$ Hecke orbit of $x$ in $\mathscr{S}'$ and $C^{0\prime}$ for the irreducible component of the central leaf in $\mathscr{S}'$ passing through $x$. Then by the previous two cases, $\overline{H^{p\prime}(x)}\cap C^{0\prime}=C^{0\prime}.$
Now we show that $H^{p\prime}(x)\cap C^0=H^p(x).$ Let $x'=[(A',\lambda',\iota',\eta')]$ be a closed geometric point of $\mathscr{S}$ such that there is a prime-to-$p$ $F$-isogeny $f$ from $x=[(A,\lambda,\iota,\eta)]$ to $x'$. Then $f$ induces a morphism $f:\End(A)\rightarrow \End(A')$ such that $f\circ \iota|_{F}=\iota'|_{F}\circ f$ on $F.$ By the Skolem-Noether Theorem, the relation $f\circ \iota|_{F}=\iota'|_{F}\circ f$ extends to an inner automorphism $\varphi:B\rightarrow B.$ Hence, $f$ extends to a $B$-isogeny between $x$ and $x'.$
By an argument analogous to the last part of Case 2, we conclude $\overline{H^p(x)}\cap C^0=C^0.$
\end{proof}
\subsection{Special cases of the main theorem.}
Based on the discussion in Section \ref{section 3}, we prove the following corollaries of the main theorem.
\begin{cor}\label{cormu}\begin{enumerate}
\item Suppose $p$ is inert in $F$. If every slope of the Newton polygon attached to $\mathcal{N}$ has the same multiplicity, then the Hecke orbit conjecture holds for any irreducible component of $\mathcal{N}$ containing a $B$-hypersymmetric point.
\item Suppose the center of $B$ is a CM field. Assume that the signature of $\mathscr{S}$ has no definite place, and that $p$ is a prime of constant degree in the extension $F/\mathbb{Q}$. Further assume that assumption 2 in Theorem \ref{mainthm} is satisfied. Then the Hecke orbit conjecture holds for every irreducible component of the $\mu$-ordinary stratum.
\end{enumerate}
\end{cor}
\begin{proof}
By Corollary \ref{3.3}, in either of the two cases, the Newton stratum $\mathcal{N}$ contains a $B$-hypersymmetric point. In the second case, $G(\mathbb{A}_f^p)$ acts transitively on $\Pi_0(\mathscr{S})$, so if the $\mu$-ordinary stratum contains a $B$-hypersymmetric point, then it contains a $B$-hypersymmetric point in each of its irreducible components. Moreover, the assumptions satisfy the conditions of Proposition \ref{inert}. Hence, we may apply Theorem \ref{mainthm} to derive the desired results.
\end{proof}
\begin{cor}\label{corachter}
Let $L$ be a quadratic imaginary field inert at the rational prime $p$. The Hecke orbit conjecture holds for the moduli space of principally polarized abelian varieties of dimension $n\ge 3$ equipped with an action by $\mathcal{O}_L$ of signature $(1, n-1).$
\end{cor}
\begin{proof}
Since $p$ is inert, a Newton stratum contains an $L$-hypersymmetric point if its Newton polygon is symmetric. By \cite[Section 3.1]{bultel2006congruence}, any admissible Newton polygon in this case is given by $N(r)+(1/2)^{n-2r}$ for some integer $0\le r\le n/2$, where \begin{equation*}
N(r)=\begin{cases}\emptyset,&\text{if }r=0,\\
(\frac{1}{2}-\frac{1}{2r})+(\frac{1}{2}+\frac{1}{2r}), &\text{if }r>0\text{ is even},\\
(\frac{1}{2}-\frac{1}{2r})^2+(\frac{1}{2}+\frac{1}{2r})^2, &\text{if }r\text{ is odd}.
\end{cases}
\end{equation*}
From this description, a straightforward computation shows that for each pair $(n,r)$ with $0\le r\le n/2$, the admissible Newton polygon uniquely determined by $(n,r)$ is always $L$-hypersymmetric. Indeed, when $n=3r$ and $r$ is even, or when $n=4r$ and $r$ is odd, the Newton polygon consists of $3$ slopes of the same multiplicity and is hence $L$-balanced. Otherwise, the Newton polygon is an amalgamation of one polygon of slope $1/2$ and one polygon of slopes $\frac{1}{2}-\frac{1}{2r}, \frac{1}{2}+\frac{1}{2r}$. Clearly, each of these two polygons is $L$-balanced.
It is well-known that in this case, each isogeny class of Barsotti-Tate groups consists of a unique isomorphism class. Thus, every central leaf coincides with the Newton stratum containing it and therefore admits an $L$-hypersymmetric point. Moreover, by \cite[Theorem 1.1]{MR3240772}, every Newton stratum is irreducible, which in turn implies that every central leaf is irreducible. Following the same argument as in the proof of Theorem \ref{mainthm}, the irreducibility of central leaves combined with Theorem \ref{cts} yields the desired result.
\end{proof}
\section{Introduction}
Data involving interactions between multiple entities can often be represented by multidimensional arrays, {\textit{i.e.,}}\xspace tensors, and are ubiquitous in practical applications. For example, a four-mode tensor \textit{(user, advertisement, web-page, device)} can be extracted from logs of an online advertising system, and a three-mode \textit{(patient, doctor, drug)} tensor from medical databases. As a powerful approach for tensor data analysis, tensor decomposition estimates a set of latent factors to represent the entities in each mode, and uses these factors to reconstruct the observed entry values and to predict missing values. These factors can be further used to explore hidden structures from the data, {\textit{e.g.,}}\xspace via clustering analysis, and provide useful features for important applications, such as personalized recommendation, click-through-rate prediction, and disease diagnosis and treatment.
In practice, tensor data is often accompanied by valuable temporal information, namely the time points at which each interaction took place to generate the entry value. These time points suggest that rich, complex temporal variation patterns can underlie the data. To leverage the temporal information, existing tensor decomposition methods usually introduce a time mode~\citep{xiong2010temporal,rogers2013multilinear,zhe2016dintucker,zhe2015scalable,du2018probabilistic} and arrange the entries into different time steps, {\textit{e.g.,}}\xspace hours or days. They estimate latent factors for the time steps, and may further model the dynamics between the time factors to better capture the temporal dependencies~\citep{xiong2010temporal}. Recently, \citet{zhang2021dynamic} introduced time-varying coefficients in the CANDECOMP/PARAFAC (CP) framework~\citep{Harshman70parafac} to conduct continuous-time decomposition. While successful, current methods always assume that the factors of the entities are static and invariant. However, along with the time, these factors, which reflect the entities' hidden properties, can evolve as well, such as user preferences, commodity popularity and patient health status. Existing approaches are not able to capture such variations and can therefore miss important temporal patterns.
To overcome this limitation, we propose {\textsc{DiTucker}}\xspace, a novel nonparametric dynamic tensor decomposition model that estimates time-varying factors. Our model is robust and flexible enough to learn various complex trajectories from sparse, noisy data, and to capture nonlinear temporal relationships between the entities when predicting the entry values. Specifically, we use Gaussian processes (GPs) to sample frequency functions in the frequency domain, and then generate the factor trajectories via the inverse Fourier transform. Due to the nice properties of the Fourier bases, we can robustly estimate the factor trajectories across long-term time horizons, even under sparse and noisy data. We use Gauss-Laguerre quadrature to efficiently compute the inverse Fourier transform. Next, we use a second-level GP to sample the entry values at different time points as a function of the corresponding factor values. In this way, we can estimate the complex temporal relationships between the entities. For efficient and scalable inference, we use the sparse variational GP framework~\citep{GPSVI13} and introduce pseudo inputs and outputs for both levels of GPs. We observe a matrix Gaussian structure in the prior, based on which we can avoid introducing pseudo frequencies, reduce the dimension of the pseudo inputs, and hence improve the inference quality and efficiency. We then employ matrix Gaussian posteriors to obtain a tractable variational evidence lower bound. Finally, we use a nested reparameterization procedure to implement a stochastic mini-batch variational learning algorithm.
We evaluated our method in three real-world applications. We compared it with state-of-the-art tensor decomposition methods that incorporate either continuous or discretized time information. In most cases, {\textsc{DiTucker}}\xspace outperforms the competing methods, often by a large margin. {\textsc{DiTucker}}\xspace also achieves much better test log-likelihood, showing superior posterior inference results. We showcase the learned factor trajectories, which exhibit interesting temporal patterns and extrapolate well to the non-training region. The entry value prediction by {\textsc{DiTucker}}\xspace also shows a more reasonable uncertainty estimate in both interpolation and extrapolation.
\section{Preliminaries}\label{sect:bk}
\subsection{Tensor Decomposition}
In general, we denote a $K$-mode tensor or multidimensional array by ${\mathcal{M}} \in \mathbb{R}^{d_1 \times \ldots \times d_K}$. Each mode $k$ consists of $d_k$ entities, indexed by $1, \ldots, d_k$. We then denote each tensor entry by ${\boldsymbol \ell} = (\ell_1, \ldots, \ell_K)$, where each element is the entity index in the corresponding mode. The value of the tensor entry, denoted by $m_{\boldsymbol \ell}$, is the result of the interaction between the corresponding entities. For example, given a three-mode tensor \textit{(customer, product, online-store)}, the entry values might be the purchase amount or payment. Given a set of observed entries, tensor decomposition aims to estimate a set of latent factors to represent the entities in each mode. Denote by $\u^k_j$ the factors of entity $j$ in mode $k$. These factors can reflect hidden properties of the entities, such as customer interest and preference. We denote the collection of the factors in mode $k$ by ${\bf U}^k = [\u^k_1, \ldots, \u^k_{d_k}]^\top$, and by ${\mathcal{U}} = \{{\bf U}^1, \ldots, {\bf U}^K\}$ all the factors for the tensor.
To learn these factors, a tensor decomposition model is used to fit the observed data. For example, Tucker decomposition~\citep{Tucker66} assumes that ${\mathcal{M}} = {\mathcal{W}} \times_1 {\bf U}^1 \times_2 \ldots \times_K {\bf U}^K$, where ${\mathcal{W}} \in \mathbb{R}^{r_1 \times \ldots \times r_K}$ is called tensor core, and $\times_k$ is the tensor-matrix product at mode $k$~\citep{kolda2006multilinear}. If we restrict ${\mathcal{W}}$ to be diagonal, it becomes the popular CANDECOMP/PARAFAC (CP) decomposition~\citep{Harshman70parafac}, where each entry value is decomposed as
\begin{align}
m_{\boldsymbol \ell} = (\u^1_{\ell_1} \circ \ldots \circ \u^K_{\ell_K})^\top \boldsymbol{\lambda}, \label{eq:cp}
\end{align}
where $\circ$ is the element-wise (Hadamard) product and $\boldsymbol{\lambda}$ corresponds to the diagonal elements of ${\mathcal{W}}$. While CP and Tucker are popular and elegant, they assume a multilinear interaction between the entities. To flexibly estimate various interactive relationships ({\textit{e.g.,}}\xspace from simple linear to highly nonlinear), \citet{xu2012infinite,zhe2015scalable,zhe2016dintucker} view the entry value $m_{\boldsymbol \ell}$ as an unknown function of the latent factors and assign a Gaussian process (GP) prior to jointly learn the function and factors from data,
\begin{align}
m_{\boldsymbol \ell} = f(\u^1_{\ell_1}, \ldots, \u^K_{\ell_K}), \;\;\; f \sim \gp\left(0, \kappa(\v_{\boldsymbol \ell}, \v_{{\boldsymbol \ell}'})\right), \label{eq:gptf}
\end{align}
where $\v_{\boldsymbol \ell} = [\u^1_{\ell_1}; \ldots; \u^K_{\ell_K}]$ and $\v_{{\boldsymbol \ell}'} = [\u^1_{\ell'_1}; \ldots; \u^K_{\ell'_K}]$ are the factors associated with entries ${\boldsymbol \ell}$ and ${\boldsymbol \ell}'$, respectively, {\textit{i.e.,}}\xspace inputs to the function $f$, and $\kappa(\cdot, \cdot)$ is the covariance (kernel) function that characterizes the correlation between function values. For example, a commonly used one is the squared exponential (SE) kernel, $\kappa({\bf x}, {\bf x}') = \exp(-\frac{\| {\bf x} -{\bf x}' \|^2}{\eta})$, where $\eta$ is the kernel parameter. Thus, any finite set of entry values ${\bf m} = [m_{{\boldsymbol \ell}_1}, \ldots, m_{{\boldsymbol \ell}_N}]$, which is a finite projection of the GP, follows a multivariate Gaussian prior distribution, $p({\bf m}) = {\bf N}({\bf m} | {\bf 0}, {\bf K})$, where ${\bf K}$ is an $N\times N$ kernel matrix with $[{\bf K}]_{n, n'} = \kappa(\v_{{\boldsymbol \ell}_n}, \v_{{\boldsymbol \ell}_{n'}})$. To fit the observations ${\bf y}$, we can use a noise model $p({\bf y}|{\bf m})$, {\textit{e.g.,}}\xspace a Gaussian noise model for continuous observations. Given the learned ${\bf m}$, to predict the value of a new entry ${\boldsymbol \ell}^*$, we can use the conditional Gaussian,
\begin{align}
p(m_{{\boldsymbol \ell}^*}|{\bf m}) = {\bf N}(m_{{\boldsymbol \ell}^*}| \mu^*, \nu^*) \label{eq:gp-pred}
\end{align}
where $\mu^* = \kappa(\v_{{\boldsymbol \ell}^*}, {\bf V}) \kappa({\bf V}, {\bf V})^{-1} {\bf m}$, $\nu^*=\kappa(\v_{{\boldsymbol \ell}^*}, \v_{{\boldsymbol \ell}^*}) - \kappa(\v_{{\boldsymbol \ell}^*}, {\bf V})\kappa({\bf V}, {\bf V})^{-1} \kappa({\bf V}, \v_{{\boldsymbol \ell}^*})$, and ${\bf V} = [\v_{{\boldsymbol \ell}_1}, \ldots, \v_{{\boldsymbol \ell}_N}]^\top$, because $[{\bf m}; m_{{\boldsymbol \ell}^*}]$ also follows a multivariate Gaussian distribution.
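For concreteness, the conditional Gaussian prediction above can be sketched in a few lines of NumPy. All sizes and values below are hypothetical ($N=20$ observed entries, $KR=6$ concatenated factor dimensions); the jitter term is a standard numerical-stability device, not part of the model.

```python
import numpy as np

def se_kernel(X1, X2, eta=1.0):
    # squared exponential kernel: k(x, x') = exp(-||x - x'||^2 / eta)
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / eta)

rng = np.random.default_rng(0)
V = rng.normal(size=(20, 6))       # inputs of N = 20 observed entries (K*R = 6)
m = rng.normal(size=20)            # learned latent entry values
v_star = rng.normal(size=(1, 6))   # input of a new entry

K = se_kernel(V, V) + 1e-8 * np.eye(20)   # jitter for numerical stability
k_star = se_kernel(v_star, V)
mu_star = (k_star @ np.linalg.solve(K, m))[0]                 # predictive mean
nu_star = (se_kernel(v_star, v_star)
           - k_star @ np.linalg.solve(K, k_star.T))[0, 0]     # predictive variance
```

Note that the predictive variance is the prior variance minus an explained term, so it never exceeds the prior variance of $1$ under the SE kernel.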
In practice, tensor data is often accompanied by time information, {\textit{i.e.,}}\xspace the time point at which each interaction occurred to generate the observed entry value. To exploit this information, current methods partition the time domain into steps $1, 2, \ldots, T$ according to a given interval, {\textit{e.g.,}}\xspace one week or month. The observed entries are then binned into the $T$ steps. In this way, a time mode is appended to the original tensor~\citep{xiong2010temporal,rogers2013multilinear,zhe2016dintucker,zhe2015scalable,du2018probabilistic}, ${\widehat{\Mcal}} \in \mathbb{R}^{d_1 \times \ldots \times d_K \times T}$. Any tensor decomposition method can then be applied to estimate the latent factors for both the entities and the time steps. To better capture temporal dependencies, a dynamic model can be used to model the transition between the time factors. For example, \citet{xiong2010temporal} placed a conditional prior over successive steps, $p(\t_{j+1}|\t_j) = N(\t_{j+1}|\t_j, \sigma^2{\bf I})$, where $\t_j$ are the latent factors for time step $j$.
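The binning step described above is mechanical; a minimal sketch with made-up entry indices, timestamps in days, and a one-week interval:

```python
import numpy as np

# hypothetical observations: entry indices (K = 3 modes) and timestamps in days
entries = np.array([[0, 4, 2], [1, 0, 2], [0, 4, 1], [3, 2, 0], [1, 1, 1]])
timestamps = np.array([0.3, 2.9, 7.1, 13.5, 20.0])

interval = 7.0                                   # one week per time step
steps = (timestamps // interval).astype(int)     # time-step index of each entry
augmented = np.column_stack([entries, steps])    # (l1, l2, l3, t): a time mode
```

Here the five observations fall into time steps $0, 0, 1, 1, 2$, so the augmented tensor has $T=3$ steps appended as a fourth mode.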
To leverage continuous time information, the most recent work~\citep{zhang2021dynamic} uses polynomial splines to model the coefficients $\boldsymbol{\lambda}$ in CP decomposition (see \eqref{eq:cp}) as a time-varying (trend) function.
\subsection{Fourier Transform}
Fourier transform (FT) is a mathematical transform that reveals the connection between functions in the time and frequency domains. In general, for any (complex) integrable function $f(t)$ in the time domain, we can find a corresponding function ${\widehat{f}}(\omega)$ in the frequency domain such that
\begin{align}
f(t) = \frac{1}{2\pi} \int_{-\infty}^\infty \widehat{f}(\omega) e^{i \omega t} \d \omega, \label{eq:ift}
\end{align}
where $e^{i \omega t} = \cos(\omega t) + i \sin(\omega t)$ and $i$ is the imaginary unit. The frequency function can be obtained by integrating $f$ against $e^{-i\omega t}$ over the time domain,
\begin{align}
{\widehat{f}}(\omega) = \int_{-\infty}^\infty f(t) e^{-i \omega t} \d t. \label{eq:fft}
\end{align}
The two functions $(f(t), {\widehat{f}}(\omega))$ are called a Fourier pair.
Using the time function $f(t)$ to compute the frequency function ${\widehat{f}}(\omega)$, {\textit{i.e.,}}\xspace \eqref{eq:fft}, is called forward transform, while using ${\widehat{f}}(\omega)$ to recover $f(t)$, {\textit{i.e.,}}\xspace \eqref{eq:ift}, is called inverse transform.
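As a quick numerical sanity check of this pair relation, consider the Gaussian $f(t)=e^{-t^2/2}$, whose transform is ${\widehat{f}}(\omega)=\sqrt{2\pi}\,e^{-\omega^2/2}$ (a standard closed-form pair). The inverse transform can be evaluated on a truncated frequency grid:

```python
import numpy as np

w = np.linspace(-20, 20, 4001)                 # truncated frequency grid
dw = w[1] - w[0]
f_hat = np.sqrt(2 * np.pi) * np.exp(-w ** 2 / 2)

def inverse_ft(t):
    # f(t) = (1 / 2 pi) * int f_hat(w) e^{i w t} dw, evaluated by a Riemann sum
    return (f_hat * np.exp(1j * w * t)).sum().real * dw / (2 * np.pi)

print(abs(inverse_ft(0.0) - 1.0))                      # ~ 0
print(abs(inverse_ft(1.5) - np.exp(-1.5 ** 2 / 2)))    # ~ 0
```

Because the integrand decays rapidly, the truncation and discretization errors are negligible here.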
\begin{figure*}[h]
\centering
\setlength\tabcolsep{0pt}
\captionsetup[subfigure]{aboveskip=0pt,belowskip=0pt}
\begin{tabular}[c]{c}
\begin{subfigure}[t]{\textwidth}
\centering
\includegraphics[width=\textwidth]{./figs/nonfat.pdf}
\end{subfigure}
\end{tabular}
\caption{\small A graphical representation of our non-parametric factor trajectory learning model for dynamic tensor decomposition.}
\label{fig:graphical_model}
\end{figure*}
\section{Model}
Although the existing tensor decomposition methods are useful and successful, they always assume the factors of the entities are static and invariant, even when the time information is incorporated into the decomposition. However, due to the complexity and diversity of real-world applications, those factors, which reflect the underlying properties of the entities, can evolve over time as well, {\textit{e.g.,}}\xspace customer interest and preference, product quality, and song popularity. Hence, assuming fixed factors can miss important temporal variation patterns and hinder knowledge discovery and/or downstream predictive tasks. To overcome this issue, we propose {\textsc{DiTucker}}\xspace, a novel Bayesian nonparametric factor trajectory model for dynamic tensor decomposition.
Specifically, given the observed tensor entries and their timestamps, $\mathcal{D} = \{({\boldsymbol \ell}_1, y_1, t_1), \ldots, ({\boldsymbol \ell}_N, y_N, t_N)\}$, we want to learn $R$ factor trajectories ({\textit{i.e.,}}\xspace time functions) for each entity $j$ in mode $k$,
\[
\u^k_j(t) = [u^k_{j,1}(t), \ldots, u^k_{j,R}(t)] \in \mathbb{R}^R.
\]
To flexibly estimate these trajectories, an intuitive idea is to use GPs to model each $u^k_{j,r}(t)$, similar to how \citet{xu2012infinite,zhe2016dintucker} used GPs to estimate the decomposition function (see \eqref{eq:gptf}). However, this modeling can be restrictive. When the time point is distant from the training time points, the corresponding covariance (kernel) decays very fast (consider the SE kernel, $\kappa(t, t') = \exp(-\frac{1}{\eta}(t-t')^2)$, for example). As a result, the GP estimate of the trajectory value --- which is essentially an interpolation based on the training points (see \eqref{eq:gp-pred}) --- tends to be close to zero (the prior mean), and the predictive variance becomes large. That is, the GP estimate is neither accurate nor reliable. However, real-world tensor data are often very sparse and noisy --- most entries only have observations at a few time points. Therefore, learning a GP trajectory directly in the time domain might not be robust and reliable enough, especially at the many places far from the scarce training timestamps.
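This failure mode is easy to reproduce. In the toy sketch below (synthetic one-dimensional data, SE kernel with $\eta=1$; all values are hypothetical), the GP prediction at a query time far outside the training range reverts to the zero prior mean with near-prior variance:

```python
import numpy as np

def se(a, b, eta=1.0):
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / eta)

t_train = np.array([0.0, 0.5, 1.0, 1.5])     # scarce training timestamps
y_train = np.cos(2 * np.pi * t_train)        # synthetic trajectory values

K = se(t_train, t_train) + 1e-8 * np.eye(len(t_train))

def predict(t_star):
    k = se(np.array([t_star]), t_train)
    mean = (k @ np.linalg.solve(K, y_train))[0]
    var = 1.0 - (k @ np.linalg.solve(K, k.T))[0, 0]
    return mean, var

near_mean, near_var = predict(0.75)   # interpolation: small variance
far_mean, far_var = predict(10.0)     # extrapolation: mean ~ 0, var ~ prior (1)
```

At $t=10$ the cross-covariances are essentially zero, so the prediction carries no information from the data.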
To address this issue, we observe that, from the Fourier-transform view, any time function can be represented (or decomposed) by a series of Fourier bases $\{e^{i \omega t} \mid \omega \in \mathbb{R}\}$; see \eqref{eq:ift} and \eqref{eq:fft}. These bases are trigonometric and have nice properties: we never need to worry that their values will fade to zero (or some other constant) when $t$ is large or moves away from the training timestamps. If we can obtain a reasonable estimate of their coefficients, {\textit{i.e.,}}\xspace ${\widehat{f}}(\omega)$, we can use these bases to recover the time function in a much more reliable way.
Therefore, we turn to learning the factor trajectories from the frequency domain. Specifically, for each entity $j$ of mode $k$, we learn a frequency function $\wh{u}^k_{j,r}(\omega)$ ($1 \le r \le R$), so that we can obtain the time trajectory via inverse Fourier transform (IFT),
\begin{align}
u^k_{j,r}(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} \wh{u}^k_{j,r}(\omega) e^{i \omega t} \d \omega.
\end{align}
To get rid of the imaginary part, we require that $\wh{u}^k_{j,r}(\omega)$ is symmetric, {\textit{i.e.,}}\xspace $\wh{u}^k_{j,r}(\omega) = \wh{u}^k_{j,r}(-\omega)$, and therefore we have
\begin{align}
u^k_{j,r}(t) = \frac{1}{\pi} \int_{0}^{\infty} \wh{u}^k_{j,r}(\omega) \cos(\omega t)\d \omega. \label{eq:int}
\end{align}
However, even if the frequency function is given, the integral in \eqref{eq:int} is in general analytically intractable. To overcome this issue, we use Gauss-Laguerre (GL) quadrature, which solves integrals of the form $\int_0^\infty e^{-x}g(x)\d x$ quite accurately. We rewrite \eqref{eq:int} as
\begin{align}
u^k_{j,r}(t) = \frac{1}{\pi} \int_{0}^{\infty} e^{-\omega}\left[\wh{u}^k_{j,r}(\omega)e^{\omega}\right] \cos(\omega t)\d \omega,
\end{align}
and then use GPs to learn $\alpha^k_{j,r}(\omega)=\wh{u}^k_{j,r}(\omega)e^{\omega}$. In doing so, not only do we still enjoy the flexibility of nonparametric estimation, but we can also conveniently apply GL quadrature, without the need for any additional integral transform,
\begin{align}
u^k_{j,r}(t) \approx \frac{1}{\pi} \sum_{c=1}^C \alpha^k_{j,r}(\wh{\omega}_c) \cos(\wh{\omega}_c t) \cdot \gamma_c \label{eq:gl}
\end{align}
where $\{\wh{\omega}_c\}$ and $\{\gamma_c\}$ are the $C$ quadrature nodes and weights, respectively.
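A small sketch of this quadrature step, using the hypothetical frequency function $\wh{u}(\omega)=e^{-\omega}$ (not one learned by the model), for which the cosine integral has the closed form $\frac{1}{\pi}\frac{1}{1+t^2}$:

```python
import numpy as np

C = 10
nodes, weights = np.polynomial.laguerre.laggauss(C)   # quadrature nodes, weights

def alpha(w):
    # hypothetical frequency function u_hat(w) = e^{-w}, so alpha(w) = u_hat(w) e^w
    return np.exp(-w) * np.exp(w)

def u(t):
    # u(t) ~ (1/pi) * sum_c alpha(w_c) cos(w_c t) gamma_c
    return (alpha(nodes) * np.cos(nodes * t) * weights).sum() / np.pi

exact = lambda t: 1.0 / (np.pi * (1.0 + t ** 2))      # closed-form inverse transform
print(abs(u(0.5) - exact(0.5)))                       # small quadrature error
```

With only $C=10$ nodes the quadrature already matches the closed form to high precision, consistent with the remark later that a small $C$ suffices.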
Next, we introduce a frequency embedding ${\bf e}^k_j \in \mathbb{R}^s$ for each entity $j$ in mode $k$, and model $\alpha^k_{j,r}(\cdot)$ as a function of both the embedding and frequency,
\begin{align}
\alpha^k_{j,r}(\omega) = f^k_r({\bf e}^k_j, \omega).
\end{align}
The advantage of doing so is that we only need to estimate one function $f^k_r(\cdot, \cdot)$ to obtain the $r$-th frequency functions for all the entities in mode $k$. The frequency embeddings can also encode structural information within these entities ({\textit{e.g.,}}\xspace groups and outliers). Otherwise, we have to estimate $d_k R$ functions in mode $k$, which can be quite costly and challenging, especially when $d_k$ is large. We then apply a GP prior over $f^k_r$,
\begin{align}
f^k_r({\bf e}, \omega) \sim\gp\left(0, \kappa_r([{\bf e}; \omega], [{\bf e}'; \omega']) \right). \label{eq:gp-level1}
\end{align}
Given the factor trajectories, to obtain the value of each entry $m_{\boldsymbol \ell}$ at any time $t$, we use a second-level GP,
\begin{align}
&m_{{\boldsymbol \ell}}(t) = g(\u^1_{\ell_1}(t), \ldots, \u^K_{\ell_K}(t)) \notag\\
& \sim \mathcal{GP}\left(0, \kappa_g(\v_{\boldsymbol \ell}(t), \v_{{\boldsymbol \ell}'}(t))\right), \label{eq:gp-level2}
\end{align}
where $\v_{\boldsymbol \ell}(t) = [\u^1_{\ell_1}(t); \ldots; \u^K_{\ell_K}(t)]$ and $\v_{{\boldsymbol \ell}'}(t) = [\u^1_{\ell'_1}(t); \ldots; \u^K_{\ell'_K}(t)]$. This is similar to \eqref{eq:gptf}. However, since the inputs consist of the values of the (time-varying) trajectories, our second-level GP can flexibly estimate various temporal relationships between the entities. Finally, we sample the observed entry values from a Gaussian noise model,
\begin{align}
p({\bf y}|{\bf m}) = {\bf N}({\bf y} | {\bf m}, \sigma^2 {\bf I}), \label{eq:ll}
\end{align}
where $\sigma^2$ is the noise variance, ${\bf y} = [y_1, \ldots, y_N]^\top$ and ${\bf m} = [m_{{\boldsymbol \ell}_1}(t_1), \ldots, m_{{\boldsymbol \ell}_N}(t_N)]^\top$.
In this paper, we focus on continuous observations. However, our method can be easily adjusted to other types of observations. A graphical illustration of our model is given in Fig. \ref{fig:graphical_model}.
\section{Algorithm}
The inference of our model is challenging. The GP prior over each $f^k_r(\cdot)$ (see \eqref{eq:gp-level1}) demands that we compute a multivariate Gaussian distribution over the function values at all combinations of frequency embeddings and quadrature nodes, $\{f^k_r({\bf e}^k_j, \wh{\omega}_c) \mid 1 \le j \le d_k, 1 \le c \le C\}$, and the GP over $g$ (see \eqref{eq:gp-level2} and \eqref{eq:ll}) demands a multivariate Gaussian distribution over $\{m_{{\boldsymbol \ell}_n}(t_n) \mid 1\le n\le N\}$. Hence, when the mode dimensions $d_k$ and/or the number of observations $N$ are large, the computation is very costly or even infeasible, not to mention that the two-level GPs are tightly coupled (see \eqref{eq:gp-level2}). To overcome these computational challenges, based on the variational sparse GP framework~\citep{hensman2013gaussian}, we leverage our model structure to develop a nested stochastic mini-batch variational learning algorithm, presented as follows.
\subsection{Sparse Variational ELBO Based on Matrix Gaussian Prior and Posterior}
Specifically, given $\mathcal{D} = \{({\boldsymbol \ell}_1, y_1, t_1), \ldots, ({\boldsymbol \ell}_N, y_N, t_N)\}$, the joint probability of our model is
\begin{align}
p(\text{Joint}) &= \prod_{k=1}^K \prod_{r=1}^R {\bf N}\left({\bf f}^k_r|{\bf 0}, \kappa_r({\bf X}^k_f, {\bf X}^k_f)\right) \notag \\
&\cdot {\bf N}\left({\bf m}|{\bf 0}, \kappa_g({\bf X}_g, {\bf X}_g)\right) {\bf N}({\bf y}|{\bf m}, \sigma^2{\bf I}) \label{eq:joint}
\end{align}
where ${\bf f}^k_r$ is the concatenation of $\{f^k_r({\bf e}^k_j, \wh{\omega}_c) \mid 1 \le j \le d_k, 1\le c \le C\}$, ${\bf m} = [m_{{\boldsymbol \ell}_1}(t_1), \ldots, m_{{\boldsymbol \ell}_N}(t_N)]$, $\kappa_r$ and $\kappa_g$ are kernel functions, ${\bf X}^k_f$ is the $d_k C \times (s+1)$ input matrix for ${\bf f}^k_r$, each row of which is an $({\bf e}^k_j, \wh{\omega}_c)$ pair, and ${\bf X}_g = [\v_{{\boldsymbol \ell}_1}(t_1), \ldots, \v_{{\boldsymbol \ell}_N}(t_N)]^\top$ is the $N \times KR$ input matrix for ${\bf m}$. In our work, both $\kappa_r$ and $\kappa_g$ are chosen as SE kernels. Note that $\{\wh{\omega}_c\}$ are the quadrature nodes.
First, we observe that for the first-level GP, the input matrix ${\bf X}^k_f$ is the cross combination of the $d_k$ frequency embeddings ${\bf E}^k = [{\bf e}^k_1, \ldots, {\bf e}^k_{d_k}]^\top$ and the $C$ quadrature nodes $\wh{\boldsymbol{\omega}} = [\wh{\omega}_1; \ldots; \wh{\omega}_C]$. Hence, we can rearrange ${\bf f}^k_r$ into a $d_k \times C$ matrix ${\bf F}^k_r$. Due to the multiplicative property of the kernel, {\textit{i.e.,}}\xspace $\kappa_r([{\bf e}; \omega], [{\bf e}'; \omega']) = \kappa_r({\bf e}, {\bf e}') \cdot \kappa_r(\omega, \omega')$, we observe that ${\bf F}^k_r$ follows a matrix Gaussian prior distribution,
\begin{align}
&p({\bf F}^k_r) ={\bf N}({\bf f}^k_r | {\bf 0}, \kappa_r({\bf E}^k, {\bf E}^k) \otimes \kappa_r(\wh{\boldsymbol{\omega}}, \wh{\boldsymbol{\omega}}) )\notag \\
&= \MN({\bf F}^k_r|{\bf 0}, \kappa_r({\bf E}^k, {\bf E}^k), \kappa_r(\wh{\boldsymbol{\omega}}, \wh{\boldsymbol{\omega}})) \label{eq:mg-prior}\\
&=\frac{\exp\left(-\frac{1}{2}\text{tr}\left( \kappa_r(\wh{\boldsymbol{\omega}}, \wh{\boldsymbol{\omega}})^{-1} \left({\bf F}^k_r\right)^\top \kappa_r({\bf E}^k, {\bf E}^k)^{-1} {\bf F}^k_r\right)\right)}{(2\pi)^{d_k C/2} |\kappa_r({\bf E}^k, {\bf E}^k)|^{C/2} |\kappa_r(\wh{\boldsymbol{\omega}}, \wh{\boldsymbol{\omega}})|^{d_k/2}}. \notag
\end{align}
Therefore, we can compute the $d_k \times d_k$ row covariance matrix $\kappa_r({\bf E}^k, {\bf E}^k)$ and the $C \times C$ column covariance matrix $\kappa_r(\wh{\boldsymbol{\omega}}, \wh{\boldsymbol{\omega}})$ separately, rather than the giant full covariance matrix $\kappa_r({\bf X}_f^k, {\bf X}_f^k)$ in \eqref{eq:joint}, which is $d_k C \times d_k C$. This reduces the cost of the required covariance inversion from $\mathcal{O}\left((d_k C)^3\right)$ to $\mathcal{O}(C^3 + d_k^3)$. Nonetheless, when $d_k$ is large, the row covariance matrix can still be infeasible to compute. Note that we do not need to worry about the column covariance, because the number of quadrature nodes $C$ is small; {\textit{e.g.,}}\xspace $C=10$ is usually sufficient for high accuracy in the numerical integration.
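The saving comes from the standard Kronecker identity $({\bf A}\otimes{\bf B})^{-1} = {\bf A}^{-1}\otimes{\bf B}^{-1}$. A small numerical check with hypothetical sizes ($d_k=5$ embeddings in $\mathbb{R}^2$, $C=4$ quadrature nodes; the jitter is only for numerical stability):

```python
import numpy as np

def se(X):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2) + 1e-6 * np.eye(len(X))     # SE Gram matrix + jitter

E = np.arange(10.0).reshape(5, 2)                  # 5 frequency embeddings, s = 2
W = np.array([[0.1], [0.5], [1.2], [2.3]])         # 4 quadrature nodes

K_E = se(E)   # d_k x d_k row covariance
K_w = se(W)   # C x C column covariance

full_inv = np.linalg.inv(np.kron(K_E, K_w))                  # O((d_k C)^3)
fact_inv = np.kron(np.linalg.inv(K_E), np.linalg.inv(K_w))   # O(d_k^3 + C^3)
print(np.abs(full_inv - fact_inv).max())                     # ~ 0
```

The two inverses agree to numerical precision, so only the small blocks ever need to be inverted.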
To address the challenge of computing the row covariance matrix, we introduce $a_k$ pseudo inputs ${\bf Z}^k \in \mathbb{R}^{a_k \times s}$, where $a_k \ll d_k$. We consider the values of $f^k_r(\cdot)$ at the cross combination of ${\bf Z}^k$ and $\wh{\boldsymbol{\omega}}$, which we collect into the pseudo output matrix ${\bf G}^k_r$. Then $\{{\bf G}^k_r, {\bf F}^k_r\}$ follow another matrix Gaussian prior, which we decompose via
\begin{align}
p({\bf G}^k_r, {\bf F}^k_r) = p({\bf G}^k_r) p({\bf F}^k_r|{\bf G}^k_r), \label{eq:joint-prior-level1}
\end{align}
where
\begin{align}
&p({\bf G}^k_r) = \MN\left({\bf G}^k_r|{\bf 0}, \kappa_r({\bf Z}^k, {\bf Z}^k), \kappa_r(\wh{\boldsymbol{\omega}}, \wh{\boldsymbol{\omega}})\right), \notag \\
&p({\bf F}^k_r|{\bf G}^k_r) = \MN\left({\bf F}^k_r|\mathbf{\Gamma}^k_r, \mathbf{\Omega}^k_r, \kappa_r(\wh{\boldsymbol{\omega}}, \wh{\boldsymbol{\omega}})\right), \label{eq:cmg}
\end{align}
in which $\mathbf{\Gamma}^k_r = \kappa_r({\bf E}^k,{\bf Z}^k)\kappa_r({\bf Z}^k, {\bf Z}^k)^{-1} {\bf G}^k_r$, and $\mathbf{\Omega}^k_r = \kappa_r({\bf E}^k, {\bf E}^k) - \kappa_r({\bf E}^k, {\bf Z}^k)\kappa_r({\bf Z}^k, {\bf Z}^k)^{-1} \kappa_r({\bf Z}^k, {\bf E}^k)$.
Similarly, to address the computational challenge for the second-level GP, namely $\kappa_g({\bf X}_g, {\bf X}_g)$ in \eqref{eq:joint}, we introduce another set of $a_g$ pseudo inputs ${\bf Z}_g \in \mathbb{R}^{a_g \times KR}$, where $a_g \ll N$. Denote by ${\bf h}$ the corresponding pseudo outputs --- the outputs of the function $g(\cdot)$ at ${\bf Z}_g$. Then $\{{\bf m}, {\bf h}\}$ follow a joint Gaussian prior, which we decompose as well,
\begin{align}
p({\bf h}, {\bf m}) = p({\bf h})p({\bf m}|{\bf h}), \label{eq:joint-prior-level2}
\end{align}
where $p({\bf h}) = {\bf N}\left({\bf h}|{\bf 0}, \kappa_g({\bf Z}_g, {\bf Z}_g)\right)$ and
$p({\bf m}|{\bf h}) = {\bf N}\big({\bf m}|\kappa_g({\bf X}_g,{\bf Z}_g)\kappa_g({\bf Z}_g, {\bf Z}_g)^{-1}{\bf h}, \;\;\kappa_g({\bf X}_g, {\bf X}_g) - \kappa_g({\bf X}_g, {\bf Z}_g)\kappa_g({\bf Z}_g, {\bf Z}_g)^{-1}\kappa_g({\bf Z}_g, {\bf X}_g)\big)$.
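A sketch of this conditional with synthetic inputs (all sizes hypothetical; for ease of checking, the pseudo inputs are taken to be a subset of the data inputs, which is a demo choice, not the algorithm's):

```python
import numpy as np

rng = np.random.default_rng(1)

def se(A, B):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2)

X = rng.normal(size=(50, 4))     # N = 50 second-level inputs v_l(t), K*R = 4
Z = X[:5].copy()                 # a_g = 5 pseudo inputs (a subset, for the demo)
h = rng.normal(size=5)           # pseudo outputs at Z

Kzz = se(Z, Z) + 1e-8 * np.eye(5)
Kxz = se(X, Z)
cond_mean = Kxz @ np.linalg.solve(Kzz, h)                 # E[m | h]
cond_cov = se(X, X) - Kxz @ np.linalg.solve(Kzz, Kxz.T)   # Cov[m | h]
```

Because the first five data inputs coincide with the pseudo inputs here, the conditional mean passes through ${\bf h}$ at those points and the conditional variance there is essentially zero; everything is expressed through the small $a_g \times a_g$ matrix $\kappa_g({\bf Z}_g,{\bf Z}_g)$.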
Now, we augment our model with the pseudo outputs $\{{\bf G}^k_r\}$ and ${\bf h}$. According to \eqref{eq:joint-prior-level1} and \eqref{eq:joint-prior-level2}, the joint probability becomes
\begin{align}
&p(\{{\bf G}^k_r, {\bf F}^k_r\}, {\bf h}, {\bf m}, {\bf y}) = \prod_{k=1}^K \prod_{r=1}^R p({\bf G}^k_r) p({\bf F}^k_r|{\bf G}^k_r) \notag \\
&\cdot p({\bf h})p({\bf m}|{\bf h}) {\bf N}({\bf y}|{\bf m}, \sigma^2{\bf I}). \label{eq:joint-2}
\end{align}
Note that if we marginalize out all the pseudo outputs, we recover the original distribution \eqref{eq:joint}.
To conduct tractable and scalable inference, we follow~\citep{hensman2013gaussian} to introduce a variational posterior of the following form,
\begin{align}
&q(\{{\bf G}^k_r, {\bf F}^k_r\}, {\bf h}, {\bf m}) \notag \\
&=\prod\nolimits_{k=1}^K\prod\nolimits_{r=1}^R q({\bf G}^k_r) p({\bf F}^k_r |{\bf G}^k_r) q({\bf h}) p({\bf m}|{\bf h}). \label{eq:var-post}
\end{align}
We then construct a variational evidence lower bound (ELBO)~\citep{wainwright2008graphical}, $\mathcal{L} = \mathbb{E}_q\left[\log \frac{p(\{{\bf G}^k_r, {\bf F}^k_r\}, {\bf h}, {\bf m}, {\bf y})}{q(\{{\bf G}^k_r, {\bf F}^k_r\}, {\bf h}, {\bf m})}\right]$. Contrasting \eqref{eq:joint-2} and \eqref{eq:var-post}, we can see that the conditional priors $p({\bf F}^k_r |{\bf G}^k_r)$ and $p({\bf m}|{\bf h})$, which are all high-dimensional Gaussians, cancel out. We thus obtain a tractable ELBO that is additive over the observed entry values,
\begin{align}
\mathcal{L} =& -\sum\nolimits_{k=1}^K\sum\nolimits_{r=1}^R{\mathrm{KL} }\left(q({\bf G}^k_r) \| p({\bf G}^k_r)\right) \label{eq:elbo} \\
&- {\mathrm{KL} }\left(q({\bf h}) \| p({\bf h})\right)+ \sum\nolimits_{n=1}^N \mathbb{E}_q\left[\log p(y_n |m_{{\boldsymbol \ell}_n}(t_n))\right], \notag
\end{align}
where ${\mathrm{KL} }(\cdot\| \cdot)$ is the Kullback-Leibler (KL) divergence. We then introduce Gaussian posteriors for $q({\bf h})$ and all $q({\bf G}^k_r)$ so that the KL terms have closed forms. Since the numbers of pseudo inputs $a_g$ and $a_k$ are small ({\textit{e.g.,}}\xspace $100$), the cost is low. To further improve the efficiency, we use a matrix Gaussian posterior for each ${\bf G}^k_r$, namely
\begin{align}
q({\bf G}^k_r) = \MN({\bf G}^k_r | {\bf A}^k_r, \L^k_r\left(\L^k_r\right)^\top, {\bf R}^k_r \left({\bf R}^k_r\right)^\top), \label{eq:mgpost}
\end{align}
where $\L^k_r$ and ${\bf R}^k_r$ are lower triangular matrices, which ensures that the row and column covariance matrices are positive semi-definite. As a result, we never need to form the $a_k C \times a_k C$ full posterior covariance matrix.
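To make the matrix Gaussian parameterization concrete, a sample from \eqref{eq:mgpost} can be drawn as ${\bf A}^k_r + \L^k_r \S ({\bf R}^k_r)^\top$ with $\S$ an i.i.d.\ standard normal matrix. A minimal NumPy sketch, where the dimensions are illustrative assumptions:

```python
import numpy as np

def sample_matrix_gaussian(A, L, R, rng):
    """Draw G ~ MN(A, L L^T, R R^T) via G = A + L S R^T with S ~ N(0, I)."""
    S = rng.standard_normal(A.shape)
    return A + L @ S @ R.T

rng = np.random.default_rng(1)
a_k, C = 5, 3                                  # pseudo inputs x quadrature nodes
A = rng.normal(size=(a_k, C))                  # posterior mean
L = np.tril(rng.normal(size=(a_k, a_k)))       # row-covariance factor (lower tri.)
R = np.tril(rng.normal(size=(C, C)))           # column-covariance factor (lower tri.)
G = sample_matrix_gaussian(A, L, R, rng)
```

Since $\mathrm{vec}({\bf G}) \sim {\bf N}(\mathrm{vec}({\bf A}), {\bf R}{\bf R}^\top \otimes \L\L^\top)$, this is equivalent to a reparameterized draw from the full $a_k C$-dimensional Gaussian, without ever forming the Kronecker covariance.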
Note that the matrix Gaussian (MG) view \eqref{eq:mg-prior} not only reduces the computational expense but also improves the approximation. A standard sparse GP approximation would place pseudo inputs in the entire input space, {\textit{i.e.,}}\xspace over both embeddings and frequencies. Our MG view lets us introduce pseudo inputs in the embedding space alone (no pseudo frequencies are needed), which lowers the dimension of the approximation and improves inference quality.
\subsection{Nested Stochastic Mini-Batch Optimization}
We maximize the ELBO \eqref{eq:elbo} to estimate the variational posterior $q(\cdot)$, the frequency embeddings, the kernel parameters, the pseudo inputs ${\bf Z}^k$ and ${\bf Z}_g$, and the other parameters. To scale to a large number of observations, at each step we use a random mini-batch of data points to compute a stochastic gradient, which is based on an unbiased estimate of the ELBO,
\begin{align}
\widehat{\mathcal{L}} = \text{KL-terms} + \frac{N}{B} \sum_{n \in \mathcal{B}}\mathbb{E}_q\left[\log p(y_n |m_{{\boldsymbol \ell}_n}(t_n))\right],
\end{align}
where $\mathcal{B}$ is the mini-batch, of size $B$. However, we cannot directly compute $\nabla \wh{\mathcal{L}}$ because each expectation $\mathbb{E}_q\left[\log p(y_n |m_{{\boldsymbol \ell}_n}(t_n))\right]$ is analytically intractable. To address this problem, we use a nested reparameterization procedure to compute an unbiased estimate of $\nabla \wh{\mathcal{L}}$, which is in turn an unbiased estimate of $\nabla \mathcal{L}$, to update the model.
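The unbiasedness of $\wh{\mathcal{L}}$ itself follows from the $N/B$ scaling: averaging the scaled batch sum over all size-$B$ mini-batches recovers the full-data sum. A toy Python check, with arbitrary per-datum values:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
terms = rng.normal(size=6)          # per-datum log-likelihood terms, N = 6
N, B = len(terms), 2

full_sum = terms.sum()
# Average the scaled batch estimate over every size-B mini-batch.
batches = list(combinations(range(N), B))
estimates = [N / B * terms[list(b)].sum() for b in batches]
avg_estimate = np.mean(estimates)   # equals full_sum: the estimator is unbiased
```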
Specifically, we aim to obtain a parameterized sample of each $m_{{\boldsymbol \ell}_n}(t_n)$, plug it into the corresponding log likelihood, and thereby remove the expectation before calculating the gradient; the resulting gradient is guaranteed to be an unbiased estimate of $\nabla \wh{\mathcal{L}}$. To do so, following \eqref{eq:joint-prior-level2}, we first draw a parameterized sample of the pseudo output ${\bf h}$ via its Gaussian posterior $q({\bf h})$~\citep{kingma2013auto}, and then apply the conditional Gaussian $p(m_{{\boldsymbol \ell}_n}(t_n)|{\bf h})$. We denote by $\wh{{\bf h}}$ the sample of ${\bf h}$. However, the inputs to $m_{{\boldsymbol \ell}_n}(t_n)$ are values of the factor trajectories (see \eqref{eq:gp-level2}), which are modeled by the GPs in the first level. Hence, we need to use the reparameterization trick again to generate a parameterized sample of the input $\v_{{\boldsymbol \ell}_n}(t_n) = [\u^1_{{\ell_n}_1}(t_n); \ldots; \u^K_{{\ell_n}_K}(t_n)]$. To do so, we generate a posterior sample of the pseudo outputs $\{{\bf G}^k_r\}$ at the first level. According to \eqref{eq:mgpost}, we can draw a matrix Gaussian noise $\S^k_r \sim \MN({\bf 0}, {\bf I}, {\bf I})$ and obtain the sample by $$\wh{{\bf G}}^k_r = {\bf A}^k_r + \L^k_r \S^k_r ({\bf R}^k_r)^\top.$$ Then we use the conditional matrix Gaussian in \eqref{eq:cmg} to sample $\boldsymbol{\alpha}^k_r \overset{\Delta}{=} [f^k_r({\bf e}^k_{\ell_{n_k}}, \wh{\omega}_1), \ldots, f^k_r({\bf e}^k_{\ell_{n_k}}, \wh{\omega}_C)]$ ($1 \le k \le K$). Since $\boldsymbol{\alpha}^k_r$ is a row of ${\bf F}^k_r$, the conditional matrix Gaussian degenerates to an ordinary multivariate Gaussian, and we generate the parameterized sample $\wh{\boldsymbol{\alpha}}^k_r$ accordingly given ${\bf G}^k_r = \wh{{\bf G}}^k_r$.
Then we apply the GL quadrature \eqref{eq:gl} to obtain the sample of each trajectory value $\wh{u}^k_{\ell_{n_k},r}(t_n)$ and hence of the input to the second-level GP, $\wh{\v}_{{\boldsymbol \ell}_n}(t_n) = [\wh{\u}^1_{{\ell_n}_1}(t_n); \ldots; \wh{\u}^K_{{\ell_n}_K}(t_n)]$. Now, with the sample of the pseudo output $\wh{{\bf h}}$, we can apply $p(m_{{\boldsymbol \ell}_n}(t_n)|{\bf h}=\wh{{\bf h}})$ to generate the sample of $m_{{\boldsymbol \ell}_n}(t_n)$. We can use any automatic differentiation library to track the computation of these samples, build the computational graph, and calculate the stochastic gradient.
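Each sampling step above relies on the standard Gaussian reparameterization: a draw from ${\bf N}(\boldsymbol{\mu}, \boldsymbol{\Sigma})$ is written as $\boldsymbol{\mu} + \L\boldsymbol{\epsilon}$ with $\L\L^\top = \boldsymbol{\Sigma}$ and $\boldsymbol{\epsilon}\sim{\bf N}({\bf 0},{\bf I})$, so the sample is differentiable with respect to $\boldsymbol{\mu}$ and $\L$. A minimal NumPy sketch with illustrative values:

```python
import numpy as np

def reparam_sample(mu, Sigma, eps):
    """Reparameterized draw from N(mu, Sigma): mu + chol(Sigma) @ eps."""
    L = np.linalg.cholesky(Sigma)
    return mu + L @ eps

rng = np.random.default_rng(3)
mu = np.array([1.0, -2.0])                 # conditional mean
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])             # conditional covariance
eps = rng.standard_normal(2)               # external noise; gradients flow into mu, L
x = reparam_sample(mu, Sigma, eps)
```

In an autodiff framework the same construction lets the stochastic gradient propagate through $\boldsymbol{\mu}$ and $\boldsymbol{\Sigma}$ while the noise stays fixed.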
\subsection{Algorithm Complexity}
The time complexity of our inference algorithm is $\mathcal{O}\left(RC^3+KRa_k^3 + a_g^3 + B(KR(a_k^2 + C^2) + a_g^2) \right)$, where $B$ is the mini-batch size; the cubic terms arise from computing the KL divergences and inverting the covariance (kernel) matrices on the pseudo inputs and quadrature nodes (across $K$ modes and $R$ trajectories). Hence, the cost is proportional to the mini-batch size. The space complexity is $\mathcal{O}(\sum_{k=1}^K\sum_{r=1}^R (a_k^2 + C^2 + a_k C) + a_g^2 + a_g + \sum_{k=1}^K d_k s)$, which includes the storage of the prior and variational posterior of the pseudo outputs and of the frequency embeddings.
\section{Related Work}
To exploit time information, existing tensor decomposition methods usually expand the tensor with an additional time mode, {\textit{e.g.,}}\xspace~\citep{xiong2010temporal,rogers2013multilinear,zhe2016distributed,ahn2021time,zhe2015scalable,du2018probabilistic}. This mode consists of a series of time steps, say by hours or weeks. A dynamic model is often used to capture the transition between the time-step factors. For example, \citet{xiong2010temporal} used a conditional Gaussian prior, \citet{wu2019neural} used recurrent neural networks to model the time-factor transition, and \citet{ahn2021time} proposed a kernel smoothing and regularization term. To deal with continuous time information, the most recent work \citep{zhang2021dynamic} uses polynomial splines to model the CP coefficients ($\boldsymbol{\lambda}$ in \eqref{eq:cp}) as a function of time. Another line of research focuses on the events of interactions between the entities, {\textit{e.g.,}}\xspace Poisson tensor factorization~\citep{schein2015bayesian,Schein:2016:BPT:3045390.3045686} and methods based on more expressive point processes~\citep{zhe2018stochastic,pan2020scalable,wang2020self}. While successful, these methods only model event counts or sequences; they do not consider the actual interaction results, like payment or purchase quantities in online shopping, or clicks/non-clicks in online advertising. Hence their problem setting is different. Despite the success of current tensor decomposition methods~\citep{Chu09ptucker,choi2014dfacto,zhe2016dintucker,zhe2015scalable,zhe2016distributed,liu2018neuralcp,pan2020streaming,tillinghast2020probabilistic,tillinghast2021nonparametric,fang2021streaming,tillinghast2021nonparametricHGP}, they all assume the factors of the entities are static and fixed, even when time information is incorporated. To our knowledge, our method is the first to learn these factors as trajectory functions.
Our bi-level GP decomposition model can be viewed as an instance of deep Gaussian processes (DGPs)~\citep{damianou2013deep}. However, different from the standard DGP formulation, we apply an inverse Fourier transform to the output of the first-level GPs before feeding it to the next-level GP. In so doing, we expect to better learn the factor trajectory functions in our problem. Our nested sparse variational inference is similar to~\citep{salimbeni2017doubly}, where in each GP level we also introduce a set of pseudo inputs and outputs to create sparse approximations. However, we further take advantage of our model structure to convert the (finite) GP prior in the first level into a matrix Gaussian. The benefit is that we do not need to introduce pseudo inputs in the frequency domain and can therefore reduce the dimensionality of the approximation. We use a matrix Gaussian posterior to further accelerate the computation.
\section{Experiment}
\subsection{Predictive Performance}
We evaluated {\textsc{DiTucker}}\xspace on three real-world benchmark datasets. (1) \textit{Beijing Air Quality}\footnote{\url{https://archive.ics.uci.edu/ml/datasets/Beijing+Multi-Site+Air-Quality+Data}}, hourly concentration measurements of $6$ pollutants ({\textit{e.g.,}}\xspace PM2.5, PM10 and SO2) at $12$ air-quality monitoring sites across Beijing from 2013 to 2017. We extracted a 2-mode (pollutant, site) tensor, including 10K measurements and the time points of the different tensor entries. (2) \textit{Mobile Ads}\footnote{\url{https://www.kaggle.com/c/avazu-ctr-prediction}}, a 10-day click-through-rate dataset for mobile advertisements. We extracted a three-mode tensor \textit{(banner-position, site domain, mobile app)} of size $7 \times 2842 \times 4127$. The tensor includes $50$K observed entry values (click numbers) at different time points. (3) \textit{DBLP}\footnote{\url{https://dblp.uni-trier.de/xml/}}, bibliographic records in the domain of computer science. We downloaded the XML database, filtered the records from 2011 to 2021, and extracted a three-mode tensor (author, conference, keyword) of size $3731\times 1935 \times 169$ from the most prolific authors and the most popular conferences and keywords. The observed entry values are the numbers of papers published in different years. We collected $50$K entry values and their time points.
\begin{table*}[t]
\centering
\small
\begin{tabular}[c]{ccccc}
\toprule
\textit{Beijing Air Quality} & $R=2$ & $R=3$ & $R=5$ & $R=7$ \\
\hline
NONFAT & $\bf{0.340 \pm 0.006}$ & $ \bf{0.315 \pm 0.001}$ & $ \bf{0.314\pm 0.001}$ & $0.326\pm 0.006$ \\
CPCT & $0.997 \pm 0.001$ & $0.997 \pm 0.001$ & $1.002\pm 0.002 $& $1.002\pm 0.002$ \\
GPCT & $0.372 \pm 0.001$ & $0.366\pm 0.001$ & $0.363\pm 0.001$ & $0.364\pm 0.001$ \\
NNCT & $0.986 \pm 0.002$ & $0.988 \pm 0.002$ &$ 0.977\pm 0.012$ & $0.987\pm 0.003$\\
GPDTL & $0.884\pm 0.001$ & $0.884 \pm 0.001$ & $0.885\pm 0.001$ & $0.884 \pm 0.001$ \\
NNDTL & $ 0.356 \pm 0.003$ & $0.358 \pm 0.005$ & $0.333\pm 0.003$ & $\bf{0.315\pm 0.002}$\\
GPDTN & $ 0.884 \pm 0.001 $& $0.884 \pm 0.001$ & $0.884 \pm 0.001$ & $ 0.884 \pm 0.001$ \\
NNDTN & $0.365 \pm 0.005$ & $0.337 \pm 0.006$ & $0.336 \pm 0.003$ & ${0.319 \pm 0.005}$\\
\midrule
\textit{Mobile Ads} & \\
\midrule
NONFAT & $0.652 \pm 0.002$ & $\bf{0.635 \pm 0.003}$ &$\bf{0.638 \pm 0.006}$ & $\bf{0.637 \pm 0.005}$ \\
CPCT & $1.001 \pm 0.004$ & $0.986 \pm 0.018$ &$ 1.009 \pm 0.009$ & $0.971 \pm 0.010$ \\
GPCT &$ 0.660 \pm 0.003$ & $0.661 \pm 0.003$ & $0.662 \pm 0.001$ &$ 0.659 \pm 0.003$ \\
NNCT & $0.822 \pm 0.001$ & $0.822 \pm 0.001$ & $0.822 \pm 0.001$ & $0.822\pm 0.001$ \\
GPDTL & $0.714 \pm 0.006$ & $0.695 \pm 0.004$ & $0.695 \pm 0.004$ & $0.695 \pm 0.003$ \\
NNDTL & $\bf{0.646 \pm 0.003}$ & $0.646 \pm 0.002$ &$ 0.642 \pm 0.003 $& $0.640 \pm 0.003$\\
GPDTN & $ 0.667 \pm 0.003$ &$ 0.661 \pm 0.003$ &$ 0.668 \pm 0.003$ & $0.669 \pm 0.003$ \\
NNDTN & $0.646 \pm 0.004$ & $0.645 \pm 0.002$ &$ 0.640 \pm 0.003$ & $0.638 \pm 0.003$\\
\midrule
\textit{DBLP} & \\
\midrule
NONFAT & $\bf{0.188\pm 0.003} $& $ \bf{0.188\pm 0.003}$ & $0.189 \pm 0.003$& $\bf{0.189\pm 0.003}$ \\
CPCT & $1.004\pm 0.003$ & $1.004 \pm 0.002$ & $ 1.005 \pm 0.002$& $ 1.001 \pm 0.004$ \\
GPCT & $ 0.189 \pm 0.003 $& $ 0.191 \pm 0.003 $& $ 0.192 \pm 0.003 $& $ 0.196 \pm 0.003 $ \\
NNCT & $ 0.188 \pm 0.003$& $ {0.188 \pm 0.003}$& $\bf{ 0.188\pm 0.003}$& $ 0.189 \pm 0.003$ \\
GPDTL & $ 0.208 \pm 0.004$& $ 0.223 \pm 0.003$& $ 0.221\pm 0.003$& $ 0.224\pm 0.003$ \\
NNDTL & $ 0.188 \pm 0.003$& $ 0.188\pm 0.003$& $ 0.189\pm 0.003$& $ 0.189\pm 0.003$ \\
GPDTN & $0.206 \pm 0.002 $& $0.218 \pm 0.003$& $ 0.224 \pm 0.003$& $ 0.225 \pm 0.002$ \\
NNDTN & $0.188\pm 0.003 $& $0.188 \pm 0.003$& $ 0.188\pm 0.003$& $ 0.189\pm 0.003$ \\
\bottomrule
\end{tabular}
\caption{\small Root Mean-Square Error (RMSE). The results were averaged over five runs. }
\label{tb:rmse}
\end{table*}
\begin{table*}[t]
\centering
\small
\begin{tabular}[c]{ccccc}
\toprule
\textit{Beijing Air Quality} & $R=2$ & $R=3$ & $R=5$ & $R=7$ \\
\hline
NONFAT & $\bf{-0.343 \pm 0.018} $&$ \bf{-0.264 \pm 0.003}$&$ \bf{-0.260\pm 0.004}$&$ \bf{-0.297\pm 0.017}$ \\
GPCT & $ -0.420 \pm 0.001$&$ -0.406 \pm 0.001$&$ -0.401 \pm 0.001$&$ -0.401 \pm 0.001$ \\
GPDTL & $-1.299 \pm 0.001$&$ -1.299 \pm 0.001$&$ -1.299 \pm 0.001$&$ -1.299\pm 0.001$ \\
GPDTN & $ -1.299\pm 0.001$&$ -1.299 \pm 0.001$&$ -1.299\pm 0.001$&$ -1.299\pm 0.001$ \\
\midrule
\textit{Mobile Ads} & \\
\midrule
NONFAT & $ \bf{-0.726 \pm 0.004} $&$ \bf{-0.705 \pm 0.003}$&$\bf{ -0.709\pm 0.007}$&$ \bf{-0.706 \pm 0.008} $ \\
GPCT & $ -0.733 \pm 0.002$&$ -0.737 \pm 0.005$&$ -0.734\pm 0.004$&$ -0.735 \pm 0.004$ \\
GPDTL &$ -1.843 \pm 0.009$&$ -1.807 \pm 0.006$&$ -1.822\pm 0.008$&$ -1.830\pm 0.003$ \\
GPDTN & $-0.774 \pm 0.003$&$ -0.762\pm 0.004$&$ -0.804\pm 0.006$&$ -0.806\pm 0.003$ \\
\midrule
\textit{DBLP} & \\
\midrule
NONFAT & $ \bf{0.201 \pm 0.019} $&$ \bf{0.201\pm 0.019}$&$ \bf{0.199 \pm 0.017}$&$ \bf{0.199 \pm 0.017}$ \\
GPCT & $ {0.129 \pm 0.009} $&$ {0.105 \pm 0.009} $&${ 0.104 \pm 0.011}$&$ {0.087 \pm 0.013}$ \\
GPDTL & $ 0.102 \pm 0.023 $&$0.004 \pm 0.025$&$ 0.035\pm 0.019$&$ 0.022\pm 0.019$ \\
GPDTN & $0.114 \pm 0.012$&$ 0.041 \pm 0.019$&$ 0.019 \pm 0.020$&$ 0.013 \pm 0.015$ \\
\bottomrule
\end{tabular}
\caption{Test log-likelihood. The results were averaged from five runs.}
\label{tb:ll}
\vspace{-0.2in}
\end{table*}
We compared with the following popular and/or state-of-the-art methods for dynamic tensor decomposition. (1) CPCT~\citep{zhang2021dynamic}, the most recent continuous-time decomposition algorithm, which uses polynomial splines to estimate the coefficients in the CP framework as a temporal function (see $\boldsymbol{\lambda}$ in \eqref{eq:cp}). (2) GPCT, continuous-time GP decomposition, which extends~\citep{xu2012infinite,zhe2016distributed} by placing the time $t$ in the GP kernel to learn the entry value as a function of both the latent factors and time, {\textit{i.e.,}}\xspace $m_{\boldsymbol \ell} = f(\u^1_{\ell_1}, \ldots, \u^K_{\ell_K},t)$. (3) NNCT, continuous-time neural network decomposition, which is similar to~\citep{costco} but uses time $t$ as an additional input to the NN decomposition model. (4) GPDTL, discrete-time GP decomposition with linear dynamics, which expands the tensor with a time mode and jointly estimates the time factors and the other factors with GP decomposition~\citep{zhe2016distributed}. In addition, we follow~\citep{xiong2010temporal} to introduce a conditional prior over consecutive steps, $p(\t_{j+1}|\t_j) = {\bf N}(\t_{j+1}|{\bf C} \t_j + \b, v{\bf I})$. Note that this linear dynamic model is more general than the one in~\citep{xiong2010temporal}, since the latter corresponds to ${\bf C}={\bf I}$ and $\b = {\bf 0}$. (5) GPDTN, discrete-time GP decomposition with nonlinear dynamics, which is similar to GPDTL except that the prior over the time factors becomes $p(\t_{j+1}|\t_j) = {\bf N}(\t_{j+1}|\sigma({\bf C} \t_j) + \b, v{\bf I})$, where $\sigma(\cdot)$ is a nonlinear activation function; this can be viewed as an RNN-type transition. (6) NNDTL, discrete-time NN decomposition with linear dynamics, similar to GPDTL but with an NN decomposition model. (7) NNDTN, discrete-time NN decomposition with nonlinear dynamics, which, like GPDTN, employs RNN-type dynamics over the time steps.
{\bf Experiment Setting.} We implemented all the methods with PyTorch~\citep{paszke2019pytorch}. For all the GP baselines, we used the SE kernel and followed~\citep{zhe2016dintucker} in using the sparse variational GP framework~\citep{hensman2013gaussian} for scalable posterior inference, with $100$ pseudo inputs. For {\textsc{DiTucker}}\xspace, the number of pseudo inputs for both levels of GPs was set to $100$. For the NN baselines, we used three-layer neural networks with $50$ neurons per layer and \texttt{tanh} activation. For the nonlinear dynamic methods, GPDTN and NNDTN, we used \texttt{tanh} as the activation. For CPCT, we used $100$ knots for the polynomial splines. For the discrete-time methods, we used 50 steps; we did not find improvement with more steps. All the models were trained with stochastic mini-batch optimization. We used ADAM~\citep{adam} for all the methods, and the mini-batch size was $100$. The learning rate was chosen from $\{10^{-4}, 5 \times 10^{-4}, 10^{-3}, 5 \times 10^{-3}, 10^{-2}\}$. To ensure convergence, we ran each method for 10K epochs. To remove fluctuations in the error caused by the stochastic model updates, we computed the test error after each epoch and used the smallest one as the result. We varied the number of latent factors $R$ over \{2, 3, 5, 7\}; for {\textsc{DiTucker}}\xspace, $R$ is the number of factor trajectories. We followed the standard testing procedure as in~\citep{xu2012infinite,kang2012gigatensor,zhe2016distributed}: we randomly sampled $80\%$ of the observed tensor entries (and their timestamps) for training and tested the prediction accuracy on the remaining entries. We repeated the experiments five times and report the average root mean-square error (RMSE) and its standard deviation.
\textbf{Results.} As we can see from Table \ref{tb:rmse}, in most cases our method {\textsc{DiTucker}}\xspace outperforms all the competing approaches, often by a large margin. In addition, {\textsc{DiTucker}}\xspace always achieves better prediction accuracy than the GP methods, including GPCT, GPDTL and GPDTN; in most cases, the improvement is significant ($p<0.05$). This shows that our bi-level GP decomposition model can indeed improve upon single-level GP models. Only in a few cases is the prediction error of {\textsc{DiTucker}}\xspace slightly worse than that of an NN approach. Note that {\textsc{DiTucker}}\xspace learns a trajectory for each factor, which is much more challenging than learning fixed-value factors, as done by the competing approaches.
Furthermore, we examined our method from a probabilistic perspective. We report the test log-likelihood of {\textsc{DiTucker}}\xspace and the other probabilistic approaches, including GPCT, GPDTL and GPDTN, in Table \ref{tb:ll}. We can see that {\textsc{DiTucker}}\xspace largely improves upon the competing methods in all the cases, showing that {\textsc{DiTucker}}\xspace is advantageous not only in prediction accuracy but also in uncertainty quantification, which can be important in many decision tasks, especially with sparse and noisy data.
\begin{figure}
\centering
\setlength{\tabcolsep}{0pt}
\begin{tabular}[c]{ccc}
\begin{subfigure}[b]{0.17\textwidth}
\centering
\includegraphics[width=\linewidth]{./figs/pred/K_0_n_1_r_1-trim.pdf}
\caption{$u^1_{1,1}(t)$}
\end{subfigure} &
\begin{subfigure}[b]{0.17\textwidth}
\centering
\includegraphics[width=\linewidth]{./figs/pred/K_0_n_1_r_2-trim.pdf}
\caption{$u^1_{1,2}(t)$}
\end{subfigure} &
\begin{subfigure}[b]{0.17\textwidth}
\centering
\includegraphics[width=\linewidth]{./figs/pred/K_0_n_1_r_3-trim.pdf}
\caption{$u^1_{1,3}(t)$}
\end{subfigure}\\
\begin{subfigure}[b]{0.17\textwidth}
\centering
\includegraphics[width=\linewidth]{./figs/pred/K_1_n_2_r_1-trim.pdf}
\caption{$u^2_{2,1}(t)$}
\end{subfigure} &
\begin{subfigure}[b]{0.17\textwidth}
\centering
\includegraphics[width=\linewidth]{./figs/pred/K_1_n_2_r_2-trim.pdf}
\caption{$u^2_{2,2}(t)$}
\end{subfigure} &
\begin{subfigure}[b]{0.17\textwidth}
\centering
\includegraphics[width=\linewidth]{./figs/pred/K_1_n_2_r_3-trim.pdf}
\caption{$u^2_{2,3}(t)$}
\end{subfigure}
\end{tabular}
\caption{\small The learned factor trajectories. }
\label{fig:trajectory}
\end{figure}
\begin{figure}
\centering
\setlength{\tabcolsep}{0pt}
\begin{tabular}[c]{cc}
\begin{subfigure}[b]{0.25\textwidth}
\centering
\includegraphics[width=\linewidth]{./figs/pred/i_5_j_5_trim.pdf}
\caption{(6, 6)}
\end{subfigure} &
\begin{subfigure}[b]{0.25\textwidth}
\centering
\includegraphics[width=\linewidth]{./figs/pred/i_6_j_3_trim.pdf}
\caption{(7,4)}
\end{subfigure}
\end{tabular}
\caption{\small Entry value prediction.}
\label{fig:pred-curve}
\end{figure}
\subsection{Investigation of Learning Result }
Next, we investigated whether the learned factor trajectories exhibit patterns and how they influence the prediction. To this end, we set $R=3$ and ran {\textsc{DiTucker}}\xspace on the \textit{Beijing Air Quality} dataset. We show the learned factor trajectories for the first monitoring site in mode 1 in Fig. \ref{fig:trajectory} a-c, and for the second pollutant (SO2) in mode 2 in Fig. \ref{fig:trajectory} d-f. As we can see, they show different patterns. First, it is interesting that all the trajectories for the site exhibit periodicity, but with different perturbations, amplitudes, periods, {\textit{etc.}}\xspace This might relate to the working cycles of the sensors at the site. The trajectories for the pollutant are much less periodic and vary quite differently. For example, $u^2_{2, 1}(t)$ decreases first and then increases, $u^2_{2, 3}(t)$ increases first and then decreases, and $u^2_{2, 2}(t)$ decreases throughout. They represent different time-varying components of the pollutant concentration. Second, the vertical dashed line marks the boundary of the training region. We can see that all the trajectories extrapolate well: their posterior mean and standard deviation are as stable outside the training region as inside it. This demonstrates that learning in the frequency domain can yield robust trajectory estimates.
Finally, we showcase the prediction curves of two entries in Fig. \ref{fig:pred-curve}. The predictions made with our factor trajectories fit the test points better than GPCT. More importantly, outside the training region (right of the dashed vertical line), our predictive uncertainty is much smaller than GPCT's, while inside the training region, our predictive uncertainty is not excessively small like GPCT's, which is close to zero. This shows that {\textsc{DiTucker}}\xspace gives more reasonable uncertainty quantification in both interpolation and extrapolation.
\section{Conclusion}
We have presented NONFAT, a novel nonparametric Bayesian method to learn factor trajectories for dynamic tensor decomposition. The predictive accuracy of {\textsc{DiTucker}}\xspace in real-world applications is encouraging, and the learned trajectories show interesting temporal patterns. In the future, we will investigate the meaning of these patterns in more depth and apply our approach to more applications.
\section*{Acknowledgments}
This work has been supported by NSF IIS-1910983 and NSF CAREER Award IIS-2046295.
\bibliographystyle{apalike}
will not be printed unless \texttt{accepted} is passed as an argument to the
style file.
Submissions that include the author information will not
be reviewed.
\subsubsection{Self-Citations}
If you are citing published papers for which you are an author, refer
to yourself in the third person. In particular, do not use phrases
that reveal your identity (e.g., ``in previous work \cite{langley00}, we
have shown \ldots'').
Do not anonymize citations in the reference section. The only exception are manuscripts that are
not yet published (e.g., under submission). If you choose to refer to
such unpublished manuscripts \cite{anonymous}, anonymized copies have
to be submitted
as Supplementary Material via CMT\@. However, keep in mind that an ICML
paper should be self contained and should contain sufficient detail
for the reviewers to evaluate the work. In particular, reviewers are
not required to look at the Supplementary Material when writing their
review (they are not required to look at more than the first $8$ pages of the submitted document).
\subsubsection{Camera-Ready Author Information}
\label{final author}
If a paper is accepted, a final camera-ready copy must be prepared.
For camera-ready papers, author information should start 0.3~inches below the
bottom rule surrounding the title. The authors' names should appear in 10~point
bold type, in a row, separated by white space, and centered. Author names should
not be broken across lines. Unbolded superscripted numbers, starting 1, should
be used to refer to affiliations.
Affiliations should be numbered in the order of appearance. A single footnote
block of text should be used to list all the affiliations. (Academic
affiliations should list Department, University, City, State/Region, Country.
Similarly for industrial affiliations.)
Each distinct affiliation should be listed once. If an author has multiple
affiliations, multiple superscripts should be placed after the name, separated
by thin spaces. If the authors would like to highlight equal contribution by
multiple first authors, those authors should have an asterisk placed after their
name in superscript, and the term ``\textsuperscript{*}Equal contribution''
should be placed in the footnote block ahead of the list of affiliations. A
list of corresponding authors and their emails (in the format Full Name
\textless{}email@domain.com\textgreater{}) can follow the list of affiliations.
Ideally only one or two names should be listed.
A sample file with author names is included in the ICML2022 style file
package. Turn on the \texttt{[accepted]} option to the stylefile to
see the names rendered. All of the guidelines above are implemented
by the \LaTeX\ style file.
\subsection{Abstract}
The paper abstract should begin in the left column, 0.4~inches below the final
address. The heading `Abstract' should be centered, bold, and in 11~point type.
The abstract body should use 10~point type, with a vertical spacing of
11~points, and should be indented 0.25~inches more than normal on left-hand and
right-hand margins. Insert 0.4~inches of blank space after the body. Keep your
abstract brief and self-contained, limiting it to one paragraph and roughly 4--6
sentences. Gross violations will require correction at the camera-ready phase.
\subsection{Partitioning the Text}
You should organize your paper into sections and paragraphs to help
readers place a structure on the material and understand its
contributions.
\subsubsection{Sections and Subsections}
Section headings should be numbered, flush left, and set in 11~pt bold
type with the content words capitalized. Leave 0.25~inches of space
before the heading and 0.15~inches after the heading.
Similarly, subsection headings should be numbered, flush left, and set
in 10~pt bold type with the content words capitalized. Leave
0.2~inches of space before the heading and 0.13~inches afterward.
Finally, subsubsection headings should be numbered, flush left, and
set in 10~pt small caps with the content words capitalized. Leave
0.18~inches of space before the heading and 0.1~inches after the
heading.
Please use no more than three levels of headings.
\subsubsection{Paragraphs and Footnotes}
Within each section or subsection, you should further partition the
paper into paragraphs. Do not indent the first line of a given
paragraph, but insert a blank line between succeeding ones.
You can use footnotes\footnote{Footnotes
should be complete sentences.} to provide readers with additional
information about a topic without interrupting the flow of the paper.
Indicate footnotes with a number in the text where the point is most
relevant. Place the footnote in 9~point type at the bottom of the
column in which it appears. Precede the first footnote in a column
with a horizontal rule of 0.8~inches.\footnote{Multiple footnotes can
appear in each column, in the same order as they appear in the text,
but spread them across columns and pages if possible.}
\begin{figure}[ht]
\vskip 0.2in
\begin{center}
\centerline{\includegraphics[width=\columnwidth]{icml_numpapers}}
\caption{Historical locations and number of accepted papers for International
Machine Learning Conferences (ICML 1993 -- ICML 2008) and International
Workshops on Machine Learning (ML 1988 -- ML 1992). At the time this figure was
produced, the number of accepted papers for ICML 2008 was unknown and instead
estimated.}
\label{icml-historical}
\end{center}
\vskip -0.2in
\end{figure}
\subsection{Figures}
You may want to include figures in the paper to illustrate
your approach and results. Such artwork should be centered,
legible, and separated from the text. Lines should be dark and at
least 0.5~points thick for purposes of reproduction, and text should
not appear on a gray background.
Label all distinct components of each figure. If the figure takes the
form of a graph, then give a name for each axis and include a legend
that briefly describes each curve. Do not include a title inside the
figure; instead, the caption should serve this function.
Number figures sequentially, placing the figure number and caption
\emph{after} the graphics, with at least 0.1~inches of space before
the caption and 0.1~inches after it, as in
\cref{icml-historical}. The figure caption should be set in
9~point type and centered unless it runs two or more lines, in which
case it should be flush left. You may float figures to the top or
bottom of a column, and you may set wide figures across both columns
(use the environment \texttt{figure*} in \LaTeX). Always place
two-column figures at the top or bottom of the page.
\subsection{Algorithms}
If you are using \LaTeX, please use the ``algorithm'' and ``algorithmic''
environments to format pseudocode. These require
the corresponding stylefiles, algorithm.sty and
algorithmic.sty, which are supplied with this package.
\cref{alg:example} shows an example.
\begin{algorithm}[tb]
\caption{Bubble Sort}
\label{alg:example}
\begin{algorithmic}
\STATE {\bfseries Input:} data $x_i$, size $m$
\REPEAT
\STATE Initialize $noChange = true$.
\FOR{$i=1$ {\bfseries to} $m-1$}
\IF{$x_i > x_{i+1}$}
\STATE Swap $x_i$ and $x_{i+1}$
\STATE $noChange = false$
\ENDIF
\ENDFOR
\UNTIL{$noChange$ is $true$}
\end{algorithmic}
\end{algorithm}
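The pseudocode in \cref{alg:example} translates directly into a runnable sketch; Python is chosen here for illustration, and the variable names mirror the pseudocode.

```python
def bubble_sort(x):
    """Sort the list x in place, mirroring the Bubble Sort pseudocode."""
    m = len(x)
    no_change = False
    while not no_change:              # REPEAT ... UNTIL noChange is true
        no_change = True              # Initialize noChange = true
        for i in range(m - 1):        # FOR i = 1 to m - 1
            if x[i] > x[i + 1]:
                x[i], x[i + 1] = x[i + 1], x[i]   # Swap x_i and x_{i+1}
                no_change = False
    return x
```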
\subsection{Tables}
You may also want to include tables that summarize material. Like
figures, these should be centered, legible, and numbered consecutively.
However, place the title \emph{above} the table with at least
0.1~inches of space before the title and the same after it, as in
\cref{sample-table}. The table title should be set in 9~point
type and centered unless it runs two or more lines, in which case it
should be flush left.
\begin{table}[t]
\caption{Classification accuracies for naive Bayes and flexible
Bayes on various data sets.}
\label{sample-table}
\vskip 0.15in
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{lccr}
\toprule
Data set & Naive & Flexible & Better? \\
\midrule
Breast & 95.9$\pm$ 0.2& 96.7$\pm$ 0.2& $\surd$ \\
Cleveland & 83.3$\pm$ 0.6& 80.0$\pm$ 0.6& $\times$\\
Glass2 & 61.9$\pm$ 1.4& 83.8$\pm$ 0.7& $\surd$ \\
Credit & 74.8$\pm$ 0.5& 78.3$\pm$ 0.6& \\
Horse & 73.3$\pm$ 0.9& 69.7$\pm$ 1.0& $\times$\\
Meta & 67.1$\pm$ 0.6& 76.5$\pm$ 0.5& $\surd$ \\
Pima & 75.1$\pm$ 0.6& 73.9$\pm$ 0.5& \\
Vehicle & 44.9$\pm$ 0.6& 61.5$\pm$ 0.4& $\surd$ \\
\bottomrule
\end{tabular}
\end{sc}
\end{small}
\end{center}
\vskip -0.1in
\end{table}
Tables contain textual material, whereas figures contain graphical material.
Specify the contents of each row and column in the table's topmost
row. Again, you may float tables to a column's top or bottom, and set
wide tables across both columns. Place two-column tables at the
top or bottom of the page.
\subsection{Theorems and such}
The preferred way is to number definitions, propositions, lemmas, etc. consecutively, within sections, as shown below.
\begin{definition}
\label{def:inj}
A function $f:X \to Y$ is injective if for any $x,y\in X$ different, $f(x)\ne f(y)$.
\end{definition}
Using \cref{def:inj} we immediately get the following result:
\begin{proposition}
If $f$ is an injective function mapping a set $X$ to another set $Y$,
the cardinality of $Y$ is at least as large as that of $X$.
\end{proposition}
\begin{proof}
Left as an exercise to the reader.
\end{proof}
\cref{lem:usefullemma} stated next will prove to be useful.
\begin{lemma}
\label{lem:usefullemma}
For any injective functions $f:X \to Y$ and $g:Y\to Z$, the composition $g \circ f$ is injective.
\end{lemma}
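For completeness, a one-line proof of \cref{lem:usefullemma}, writing the composition as $x \mapsto g(f(x))$, could read as follows:

```latex
\begin{proof}
If $g(f(x)) = g(f(y))$ for some $x,y\in X$, then $f(x)=f(y)$ since $g$ is
injective, and hence $x=y$ since $f$ is injective.
\end{proof}
```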
\begin{theorem}
\label{thm:bigtheorem}
If $f:X\to Y$ is bijective, the cardinality of $X$ and $Y$ are the same.
\end{theorem}
An easy corollary of \cref{thm:bigtheorem} is the following:
\begin{corollary}
If $f:X\to Y$ is bijective,
the cardinality of $X$ is at least as large as that of $Y$.
\end{corollary}
\begin{assumption}
The set $X$ is finite.
\label{ass:xfinite}
\end{assumption}
\begin{remark}
According to some, it is only the finite case (cf. \cref{ass:xfinite}) that is interesting.
\end{remark}
\subsection{Citations and References}
Please use APA reference format regardless of your formatter
or word processor. If you rely on the \LaTeX\/ bibliographic
facility, use \texttt{natbib.sty} and \texttt{icml2022.bst}
included in the style-file package to obtain this format.
Citations within the text should include the authors' last names and
year. If the authors' names are included in the sentence, place only
the year in parentheses, for example when referencing Arthur Samuel's
pioneering work \yrcite{Samuel59}. Otherwise place the entire
reference in parentheses with the authors and year separated by a
comma \cite{Samuel59}. List multiple references separated by
semicolons \cite{kearns89,Samuel59,mitchell80}. Use the `et~al.'
construct only for citations with three or more authors or after
listing all authors to a publication in an earlier reference \cite{MachineLearningI}.
Authors should cite their own work in the third person
in the initial version of their paper submitted for blind review.
Please refer to \cref{author info} for detailed instructions on how to
cite your own papers.
Use an unnumbered first-level section heading for the references, and use a
hanging indent style, with the first line of the reference flush against the
left margin and subsequent lines indented by 10 points. The references at the
end of this document give examples for journal articles \cite{Samuel59},
conference publications \cite{langley00}, book chapters \cite{Newell81}, books
\cite{DudaHart2nd}, edited volumes \cite{MachineLearningI}, technical reports
\cite{mitchell80}, and dissertations \cite{kearns89}.
Alphabetize references by the surnames of the first authors, with
single author entries preceding multiple author entries. Order
references for the same authors by year of publication, with the
earliest first. Make sure that each reference includes all relevant
information (e.g., page numbers).
Please put some effort into making references complete, presentable, and
consistent, e.g., use the actual current names of authors.
If using bibtex, please protect capital letters of names and
abbreviations in titles, for example, use \{B\}ayesian or \{L\}ipschitz
in your .bib file.
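For instance, a \texttt{.bib} entry protecting such capitals might look like the following (the entry data are invented for illustration):

```latex
@article{sample22,
  title   = {A {B}ayesian Approach with {L}ipschitz Guarantees},
  author  = {Doe, Jane},
  journal = {Journal of Machine Learning Research},
  year    = {2022}
}
```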
\section*{Accessibility}
Authors are kindly asked to make their submissions as accessible as possible for everyone including people with disabilities and sensory or neurological differences.
Tips on how to achieve this and what to pay attention to will be provided on the conference website \url{http://icml.cc/}.
\section*{Software and Data}
If a paper is accepted, we strongly encourage the publication of software and data with the
camera-ready version of the paper whenever appropriate. This can be
done by including a URL in the camera-ready copy. However, \textbf{do not}
include URLs that reveal your institution or identity in your
submission for review. Instead, provide an anonymous URL or upload
the material as ``Supplementary Material'' into the CMT reviewing
system. Note that reviewers are not required to look at this material
when writing their review.
\section*{Acknowledgements}
\textbf{Do not} include acknowledgements in the initial version of
the paper submitted for blind review.
If a paper is accepted, the final camera-ready version can (and
probably should) include acknowledgements. In this case, please
place such acknowledgements in an unnumbered section at the
end of the paper. Typically, this will include thanks to reviewers
who gave useful comments, to colleagues who contributed to the ideas,
and to funding agencies and corporate sponsors that provided financial
support.
\nocite{langley00}
\section*{Introduction}
The modeling of translational and rotational defects in solids, typically referred to as \emph{dislocations} and \emph{disclinations}, respectively, dates back to the pioneering work of Vito Volterra on the investigation of the equilibrium configurations of multiply connected bodies \cite{V07}.
Dislocations, possibly the most common lattice defect, are regarded as the main mechanism of ductility and plasticity
of metals and elastic crystals \cite{Orowan1934,Polanyi1934,Taylor1934}. Disclinations appear at the lattice level in metal alloys \cite{ET67,TE68},
minerals \cite{Cordier14}, graphene \cite{Banhart11,Yang18}, and
virus shells \cite{harris77,NABARRO71}.
Despite both being line defects, dislocations and disclinations behave differently, both geometrically and energetically. Moreover, mathematical models are mostly available under the mechanical assumption of cylindrical geometry, in which the curves on which the defects are concentrated are line segments parallel to the cylinder axis.
Dislocations entail a violation of translational symmetry and are characterized by the so-called \emph{Burgers vector}. Here we consider only \emph{edge dislocations}, namely those whose Burgers vector is perpendicular to the dislocation line.
Disclinations arise as a violation of rotational symmetry and are characterized by the so-called \emph{Frank angle}.
Disclinations are defined (see
\cite{AEST68,N67}) as the ``closure failure of rotation ... for a closed circuit round the disclination centre''.
Conceptually, a planar wedge disclination can be realized in the following way, see \cite{V07}.
In an infinite cylinder, remove a triangular wedge of material and restore continuity by gluing together the two surfaces of the cut: this results in a positive wedge disclination. Conversely, cut the infinite cylinder along a half-plane originating at its axis, insert an additional wedge of material through the opening, and restore the continuity of the material: this results in a negative wedge disclination \cite{romanov92}.
Because of the cylindrical geometry, we will work in the cross-section of the material, where the disclination lines are identified by points in the two-dimensional sections.
In this setting,
the energy of an edge dislocation scales, far away from its center, as the logarithm of the size of the domain,
while the energy of a single disclination is non-singular and scales quadratically with the size of the domain \cite{ROMANOV09}.
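Schematically, denoting by $R$ the size of the domain and omitting constants and core-radius regularizations, the two scalings can be contrasted as

```latex
\begin{equation*}
E_{\mathrm{dislocation}}(R) \sim \log R\,,
\qquad
E_{\mathrm{disclination}}(R) \sim R^2\,.
\end{equation*}
```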
In many observations disclinations appear in the form of dipoles \cite{hirth20,KK91,romanov92},
which are pairs of wedge disclinations of opposite Frank angle placed at a close (but finite) distance.
This configuration has the effect of screening the mutual elastic strains, resulting in a significantly lower energy than that of single, isolated disclinations.
A continuum theory for disclinations in the framework of linearized elasticity has been developed and systematized, among a number of authors, by de Wit in \cite{W68} and subsequently in \cite{dW1,dW2,dW3}.
A non-linear theory of disclinations and dislocations has been developed in \cite{Z97}, to which we refer the interested reader for a historical excursus
and a list of references to classical linearized theories, as well as to other early contributions on the foundation of non-linear theories.
For more recent modeling approaches,
in \cite{acharya15}
disclinations are subsumed as a special case of \textit{g.disclinations}, a general concept designed
to model phase transformations, grain boundaries, and other plastification mechanisms.
Qualitative and quantitative comparisons between the classical linearized elasticity approach and the g.disclination theory are discussed in detail in \cite{ZHANG18}.
The contributions \cite{FRESSENGEAS11}
and \cite{Taupin15}
propose a mesoscale theory for crystal plasticity designed for modeling the dynamic interplay of disclinations and dislocations
based on linearized kinematics and written in terms of elastic and plastic curvature tensors.
Variational analysis of a discrete model for planar disclinations is performed in \cite{CVM21}.
Finally, we point out that the papers
\cite{Acharya19} and also
\cite{yavari12,yavari13}
consider a differential geometry approach for large non-linear deformations.
While the body of work on dislocations is vast both in the mathematics \cite{OrtizRepetto99,ArizaOrtiz05, GromaGyorgyiIspanovity10, GarroniMueller05, GarroniMueller06, ContiGarroniMueller11, ContiGarroniMueller22} as well as in the physics and chemistry literature \cite{HirthLothe82, FleckHutchinson93, Groma97, HullBacon01, GurtinAnand05, LimkumnerdVan-der-Giessen08} due to their relevance in metallurgy and crystal plasticity, the interest on disclinations has been much lower.
This disproportion owes to the fact that disclinations are thought to be less predominant in the formation of plastic microstructure.
However, a large body of experimental evidence, some of it gathered in recent years, has shown that disclinations, both in single isolated and in multi-dipole configurations, are in fact very relevant plastification mechanisms, so that understanding their energetics and kinematics is
crucial to understanding crystal micro-plasticity.
Interesting examples of disclinations can be observed in martensitic microstructures. This is a complex micrometer-scale pattern emerging in classes of elastic crystals undergoing the austenite-to-martensite phase transformation \cite{BJ87,B}.
While in an ideal scenario such a transformation entails a purely elastic, fully reversible change of symmetry of the underlying crystal lattice, in many practical realizations non-idealities such as dislocations and in particular disclinations emerge, resulting in a possible degradation of the reversibility of the shape-memory effect.
We refer to \cite{KK91} for the classification of over forty types of disclinations that can be constructed in MgCd alloys undergoing the hexagonal-to-orthorhombic transformation.
Among them, we recall examples of beautiful, self-similar, martensitic microstructures containing a dipole of wedge disclinations
(see in particular \cite{MA80} and \cite{Manolikas80-2}), for which models and computations are produced in \cite{PL13,CPL14} and mathematical theories are derived in \cite{CDRZZ19}.
Examples of complex self-similar microstructures incorporating disclinations
emerging from the nucleation and evolution of needle-shaped regions occupied by martensitic phase
are described in \cite{IHM13, ILTH17}
(see also \cite{BCH15,CH20} for computations and stochastic models).
For more examples of experimental observations and numerical simulations of partial disclinations, see
\cite[Section~12.3.3]{GUTKIN2011329}
and also \cite{BALANDRAUD07,BALANDRAUD10}.
In crystal plasticity, disclinations (with their various configurations, such as isolated wedge disclinations, disclination dipoles, and even quadrupoles)
have been recognized to play an important role in the kinematic accommodation of special morphologies of plastic kink bands caused by rotational stretches of the lattice \cite{LN15,I19,IS20, hasebe20}.
Modeling and analysis of kinking has recently captured the interest of metallurgists in relation to a novel strengthening mechanism observed in certain classes of Mg-based alloys exhibiting Long-Period Stacking Order (LPSO) phase \cite{Kawamura01,ABE02}.
Although yet to date in large part not understood, the kink-strengthening mechanism seems to originate from an intricate
interplay of elastic and materials instabilities observed in the columnar ``mille-feuille'' structures of LPSO materials under high compressions in planar geometries \cite{HAGIHARA10,HMHYIOONK16,HAGIHARA16}.
While exact scale-free constructions \cite{I19} shed light on the kinematics of the disclination-kinking mechanism, a model based on energy first principles to describe
the energetics of systems of disclinations, dislocations and kinks, together with the length scales of their associated plastification patterns and their collective effects on the strengthening of the LPSO phase, is still unavailable.
With this paper we intend to move a first step in this direction
and lay the foundation of a general and comprehensive variational theory suitable to treat systems of rotational and translational defects on a lattice.
We focus on three different aspects: we propose a variational model for finite systems of planar wedge disclinations; we study dipoles of disclinations and we identify relevant energy scalings dictated by geometry and loading parameters; finally, we prove the
asymptotic energetic equivalence of a dipole of wedge disclinations with an edge dislocation.
\subsubsection*{Modeling assumptions}
We operate under the assumption of plane strain elastic displacements
and under the approximation of linearized kinematics so that contributions of individual defects can be added up
via superposition.
As we are mainly concerned with the modeling of experimental configurations of metals and hard crystals, we restrict our analysis to the case of two-dimensional plane strain geometries, leaving to future work the analysis in the configuration of plane mechanical stresses as in buckled membranes.
We model disclinations and dislocations as point sources of kinematic incompatibility following an approach analogous to \cite{SN88} and \cite{CermelliLeoni06}. Alternative approaches according to the stress-couple theory in linearized kinematics are pursued in \cite{W68,FRESSENGEAS11,Taupin15}.
Despite their intrinsic limitations, linearized theories have proven useful to describe properties of systems of dislocations both in continuous and discrete models \cite{CermelliGurtin99,CermelliLeoni06, GarroniLeoniPonsiglione10, DeLucaGarroniPonsiglione12, AlicandroDeLucaGarroniPonsiglione14, ContiGarroniOrtiz15, DeLuca16, AlicandroDeLucaGarroniPonsiglione16, AlicandroDeLucaGarroniPonsiglione17, BlassMorandotti17, Ginster19_1, AlicandroDeLucaLazzaroniPalombaroPonsiglione22}
(see also \cite{ScardiaZeppieri12, MuellerScardiaZeppieri14, Ginster19_2, GarroniMarzianiScala21} for related nonlinear models for (edge) dislocations).
In \cite{PL13, CPL14, CDRZZ19} systems of disclinations have been investigated in linear and finite elasticity models,
and qualitative as well as quantitative comparisons have been discussed.
By working in plane strain linearized kinematics, it is convenient to formulate the mechanical equilibrium problem in terms of a scalar potential, the Airy stress function of the system, see, \emph{e.g.}, \cite{Meleshko03}.
This is a classical method in two-dimensional elasticity based on the introduction of a potential scalar function whose second-order derivatives correspond to the components of the stress tensor (see \cite[Section~5.7]{ciarlet97} and \cite{Schwartz66}).
From the formal point of view, by denoting with $\sigma_{ij}$ the components of the $2\times 2$ mechanical stress tensor, we write
\begin{equation*}
\sigma_{11}=\frac{\partial^2 v}{\partial y^2}\,,\qquad
\sigma_{12}=\sigma_{21}=-\frac{\partial^2 v}{\partial y\partial x}\,,\qquad
\sigma_{22}=\frac{\partial^2 v}{\partial x^2}\,,
\end{equation*}
where $v\colon \mathbb{R}^2\supset\Omega\to \mathbb{R}$ is the Airy stress function. Upon introduction of the Airy potential $v$, the equation of mechanical equilibrium $\mathrm{Div}\,\sigma=0$ is identically satisfied and the information on kinematic (in-)compatibility is translated into a loading source problem for a fourth-order elliptic partial differential equation with boundary conditions for the scalar field $v$\,.
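Indeed, assuming $v$ is regular enough, equality of mixed partial derivatives makes both components of $\mathrm{Div}\,\sigma$ vanish identically:

```latex
\begin{align*}
\frac{\partial \sigma_{11}}{\partial x}+\frac{\partial \sigma_{12}}{\partial y}
&=\frac{\partial^3 v}{\partial x\,\partial y^2}-\frac{\partial^3 v}{\partial y^2\,\partial x}=0\,,\\
\frac{\partial \sigma_{21}}{\partial x}+\frac{\partial \sigma_{22}}{\partial y}
&=-\frac{\partial^3 v}{\partial x\,\partial y\,\partial x}+\frac{\partial^3 v}{\partial y\,\partial x^2}=0\,.
\end{align*}
```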
Existence of the Airy stress function and the variational equivalence of the equilibrium problems formulated in terms of strains and stresses, which we refer to as the \emph{laboratory variables},
with the single-equation problem for the Airy potential are proved in \cite{ciarlet97} in simply connected domains for perfectly compatible (that is, defect-free) elasticity.
Although the Airy stress function method has been adopted by a number of authors to model lattice defects,
the equivalence of the equilibrium problem formulated in terms of strains and stresses
with the problem formulated
in terms of the Airy function for simply connected domains in incompatible elasticity is, to the best of our knowledge, overlooked.
In the present contribution, we attack and solve this question,
providing a rigorous, analytical structure to the equilibrium problems for systems of disclinations formulated for the Airy stress function.
Our systematization of the Airy stress function method is useful also for the general case of compatible elasticity. We investigate a number of analytical questions, such as the equivalence of boundary datum in terms of the laboratory variables and the Airy potential, fine Poincar\'{e} and trace inequalities in perforated domains, and density of Airy potentials under non-standard constraints.
To make our presentation clear, we gather most of our original results on the analytical aspects of the Airy stress function method in a series of appendices which can be read and referred to separately from the rest of the paper.
\subsubsection*{Main contributions and novelties of this work}
We construct a rigorous variational setting so that the equilibrium problem formulated in terms of the Airy potential is well posed in terms of existence, uniqueness, and regularity of solutions.
Although we focus on finite systems of isolated disclinations, our formulation is general and can be applied to treat configurations
in linearized planar elasticity
in different geometries and regimes.
An immediate application of our analysis is in providing
a rigorous framework for numerical calculations
of lattice defects with the Airy potential method (see, \emph{e.g.}, \cite{SN88, ZHANG14}).
From the point of view of the applications in Materials Science,
we prove rigorously energy scalings for systems of isolated wedge disclinations, disclination dipoles, and edge dislocations.
Starting from the modeling work of Eshelby \cite{Eshelby66} (see also \cite{dW3}) aimed at showing
kinematic equivalence of a dipole of wedge disclinations with an edge dislocation, we compute energy estimates and
characterize the relation between a dipole of disclinations and an edge dislocation in a precise variational sense.
Our result is significant because disclination dipoles are fundamental building blocks for
kinks as well as
grain boundaries
\cite{LI72, Gertsman89, Nazarov13}, which are important configurations in crystals and metals.
The results contained in this paper serve as a bridge across length scales: the smallest one of disclinations, the intermediate one of disclination dipoles, and the larger one of edge dislocations. Starting from the smallest length scale, we progressively zoom out and unveil the different energy scalings that are proper of the three phenomena.
These scaling laws suggest the correct energy renormalizations.
Eshelby's kinematic equivalence of disclination dipoles with edge dislocations is established here at the energetic level in a precise quantitative way.
Within the formalism of the Airy stress function, we show that the energy of a system of disclination dipoles coincides, in the limit as the dipole distance vanishes and upon renormalization, with the energy of a system of edge dislocations as described in \cite{CermelliLeoni06} via the core-radius approach.
Our analysis complements the work of \cite{CermelliLeoni06}:
we compute the expansion of the renormalized energy for edge dislocations
as well as disclination dipoles in the Airy stress function rather than in the laboratory variables.
\subsubsection*{Outline of the paper and methods}
The outline of the paper is as follows.
Section \ref{sc:model} is devoted to the presentation of the mechanical equilibrium equations, in terms both of the laboratory variables and of the Airy stress function of the system.
Our main results of this section (Propositions \ref{prop:airyepsilonA} and \ref{prop:airyepsilon}) contain the proof of the equivalence of the mechanical equilibrium problem formulated in terms of the laboratory variables and of the Airy stress function.
Our result is based on a crucial characterization of traction-free boundary displacements for the problem formulated in terms of the Airy potential. Such a characterization involves a non-standard tangential boundary condition for the Hessian of the Airy stress function which we are able to characterize in terms of classical Dirichlet-type boundary conditions for the bilaplacian equation (Proposition \ref{20220421}).
In Section \ref{sc:isolated_disclinations}, we
focus on the analysis of systems of isolated disclinations performed for the mechanical problem formulated for the Airy potential.
In a domain $\Omega$ we consider a finite number~$K$ of disclinations and we operate under the assumption that their centers $\{y^k\}_{k=1}^K\subset\Omega$ are fixed (hence the term \emph{isolated}).
We show that the
mechanical equilibrium formulated in terms of the Airy potential is the solution to a non-homogeneous
fourth-order elliptic equation
where the source term is a finite sum of Dirac deltas,
each of which is
placed at a disclination site~$y^k$ and is modulated by the corresponding Frank angle~$s^k$.
Therefore, the existence of non-trivial solutions to the equilibrium problem follows from the presence of a point-source loading term measuring the \emph{charge} of a disclination which is the signature of a rotational mismatch in the lattice.
(Here, the term ``charge'' can be misleading. We intend an analogy with electric charges: same-sign charges repel each other, whereas opposite-sign charges attract each other. Incidentally, the same behavior is observed for screw dislocations, see, \emph{e.g.}, \cite{CermelliGurtin99,BlassMorandotti17,BlassFonsecaLeoniMorandotti15}. The term ``charge'' should not be confused with the notion of \emph{topological charge}: dislocations carry one, disclinations do not.)
Although the variational problem for isolated disclinations entails regular functionals, the mechanical strains and stresses of wedge disclinations are in fact singular (showing a logarithmic behavior at the disclination sites), thus violating the requirements of linearized kinematics (see \cite{dW3, L03}).
The Airy potentials corresponding to the singular strains and stresses are the classical solutions for planar wedge disclinations computed in \cite{V07} -- and correctly recovered by our model -- corresponding to the Green's function for the bilaplacian operator.
A possible remedy to this unphysical behavior and inconsistency with experimental observations is the smoothing of the mechanical strains and stresses by introducing an additional length scale proportional to the disclination core. As the analysis contained in this paper focuses mainly on singular limits for disclination dipoles and dislocations, we do not pursue such a regularization for isolated disclinations, leaving these issues to future work.
With Section \ref{sub:dipole}
we begin our investigation of systems of disclination dipoles which we then conclude in Section \ref{sc:four}.
As length scales and mutual distances between disclinations are regarded as model parameters, and as we are interested in the asymptotics with respect to such parameters, we refer to the setting of Sections \ref{sub:dipole} and \ref{sc:four} as that of \textit{interacting} disclinations.
The
dependence of both the minimizers
and the energy scaling
regimes on these length scales
will be dictated by loading terms
for the problem formulated in the Airy variable,
and will follow from global minimization of the total energy of the system
and not from \emph{a priori} assumptions.
We operate by directly computing the limits of energy minima and minimizers; a more general approach
via $\Gamma$-convergence \cite{Braides02,DalMaso93}
is not explored in this paper.
We follow Eshelby's \cite{Eshelby66} derivation aimed at
showing that the Burgers vector $b\in \mathbb{R}^2$ of an edge dislocation
can be produced by the lattice mismatch caused by a disclination dipole of charge $\pm |b|/h$ at a small dipole distance $h>0$.
Motivated by his proof of their \textit{kinematic} equivalence, we analyze and clarify
the relation between a disclination dipole and an edge dislocation from the point of view of their \textit{energies}.
Since edge dislocations and wedge disclination dipoles are both characterized by singular mechanical strain and stress as well as singular energies,
we make use of the core-radius approach for planar edge dislocations, see \cite{CermelliLeoni06}.
Consequently,
we consider a finite collection of
disclination dipoles in a domain $\Omega$
and we denote
by $\ep>0$ their core radius, with the geometric requirement that $0<h<\ep$.
The limits as~$h$ and~$\ep$ vanish are taken one at a time, first as $h\to0$ and then as $\ep\to0$.
As a consequence of the first limit $h\to0$, the length scale~$\ep$ emerges as the core radius of the dislocations.
The material is well described by continuum theories of elasticity at scales larger than~$\ep$\,, whereas discrete descriptions are better suited at scales smaller than~$\ep$\,, thus establishing the \emph{semi-discrete} nature of our model.
In Section~\ref{sub:dipole}, we consider one dipole of disclinations with charges $\pm s$ and we keep~$\ep$ fixed while taking the limit as $h\to0$.
At this stage, the energy of the dipole behaves asymptotically as $h^2|\log h|$ (see Proposition~\ref{p:3.3}).
By rescaling the energy by $h^2$, we prove convergence to a functional that features
a surface load and a bulk elastic term (see \eqref{defJ0}).
The latter is the elastic energy of an edge dislocation with core radius of size $\ep$\,.
Then we provide an expansion of the $\ep$-regularized dislocation energy as $\ep\to0$.
This is tackled in Section~\ref{sc:four} where,
relying on an additive decomposition between plastic (\emph{i.e.}, determined by the disclinations) and elastic parts of the Airy stress function, in an analogous fashion to \cite{CermelliLeoni06}, we study the limit as $\ep\to0$ (Theorem~\ref{2201181928}), we compute the renormalized energy of the system (Theorem~\ref{CLequiv}), and we finally obtain the energetic equivalence, which is the sought-after counterpart of Eshelby's kinematic equivalence.
We show that the minimizer of the $\ep$-regularized energy converges to a limit function which is the distributional solution to a PDE
and is not characterized via a variational principle.
From the technical point of view,
our results rely on a density theorem for traction-free $H^2(\Omega)$ Airy stress functions, which can be locally approximated, close to each singularity,
with a sequence of smooth, traction-free functions (Proposition \ref{prop:approx}).
Our asymptotic expansion of the $\ep$-regularized energy obtained via the Airy stress function formulation (see \eqref{20220222_8}) is in agreement with \cite[Theorem~5.1 and formula (5.2)]{CermelliLeoni06} at all orders.
We stress that the results in Section~\ref{sc:four} are written for finite systems of disclination dipoles and dislocations. In particular, Theorem~\ref{CLequiv} fully characterizes the energy of a finite system of dislocations: the renormalized energy $F$ in \eqref{ren_en_dislo} contains information on the mutual interaction of the dislocations.
In conclusion, we combine in a cascade the convergence result of Section \ref{sub:dipole} for disclination dipoles for vanishing~$h$ with the asymptotic expansion of Section \ref{sc:four} of the renormalized energy of edge dislocations for vanishing~$\ep$.
We compute, via a diagonal argument, the asymptotic expansion of the $\ep(h)$-regularized energy of finite systems of disclination dipoles for vanishing dipole distance~$h$, thus extending the asymptotic analysis of \cite{CermelliLeoni06} to finite systems of dipoles of wedge disclinations.
\medskip
\textsc{Acknowledgments:} The authors are members of the Gruppo Nazionale per l'Analisi Matematica, la Probabilit\`a e le loro Applicazioni (GNAMPA) of the Istituto Nazionale di Alta Matema\-tica (INdAM).
PC holds an honorary appointment at La Trobe University and is supported by JSPS Innovative Area Grant JP21H00102 and partially JP19H05131.
MM gratefully acknowledges support from the \emph{Japan meets Italian Scientists} scheme of the Embassy of Italy in Tokyo and from the MIUR grant Dipartimenti di Eccellenza 2018-2022 (CUP: E11G18000350001).
MM acknowledges the Institute of Mathematics for Industry, an International Joint Usage and Research Center located in Kyushu University, where part of the work contained in this paper was carried out.
\medskip
\textsc{Data availability:} There are no supplementary materials or data associated with this manuscript.
\vskip10pt
\textsc{Notation.}
For $d\in\{2,3\}$\,, $m\in\mathbb{N}$\,, and for every $k\in\mathbb{Z}$\,, let $\mathscr{R}^k(A;\mathbb{R}^m)$ denote the space of $k$-regular $\mathbb{R}^m$-valued functions defined on an open set $A\subset\mathbb{R}^d$ (we will consider Sobolev spaces like $H^{k}(A;\mathbb{R}^m)$ or spaces of $k$-differentiable functions like $C^{k}(A;\mathbb{R}^m)$, for $k\ge 0$)\,.
Now we introduce different curl operators and show relationships among them.
For $d=3$ and $m=3$ we define $\textsc{curl}\,\colon\mathscr{R}^k(A;\mathbb{R}^{3})\to \mathscr{R}^{k-1}(A;\mathbb{R}^3)$ as
\begin{equation*}
\begin{aligned}
\textsc{curl}\, V\coloneqq&(\partial_{x_2}V^3-\partial_{x_3}V^2;\partial_{x_3}V^1-\partial_{x_1}V^3; \partial_{x_1}V^2-\partial_{x_2}V^1)
\end{aligned}
\end{equation*}
for any $V=(V^1;V^2;V^3)\in\mathscr{R}^k(A;\mathbb{R}^{3})$\,,
or, equivalently, $(\textsc{curl}\, V)^i=\varepsilon_{ijk}\partial_{x_j}V^k$\,, where $\varepsilon_{ijk}$ is the Levi-Civita symbol.
For $d=3$ and $m=3\times 3$ we define $\mathrm{CURL}\,\colon\mathscr{R}^k(A;\mathbb{R}^{3\times 3})\to \mathscr{R}^{k-1}(A;\mathbb{R}^{3\times 3})$ by $(\mathrm{CURL}\, M)_{ij}\coloneqq\varepsilon_{ipk}\partial_{x_p}M_{jk}$ for every $M\in \mathscr{R}^k(A;\mathbb{R}^{3\times 3})$ and
we notice that $(\mathrm{CURL}\, M)_{ij}=(\textsc{curl}\, M_j)^i$\,, where $M_j$ denotes the $j$-th row of $M$\,.
Moreover, we denote by $\INC\colon\mathscr{R}^k(A;\mathbb{R}^{3\times 3})\to \mathscr{R}^{k-2}(A;\mathbb{R}^{3\times 3})$ the operator defined by $\INC\coloneqq\mathrm{CURL}\,\ccurl\equiv\mathrm{CURL}\,\circ\mathrm{CURL}\,$\,.
For $d=2$ and $m\in\{2,2\times 2\}$\,,
we define the following curl operators: $\mathrm{curl}\,\colon\mathscr{R}^k(A;\mathbb{R}^2)\to \mathscr{R}^{k-1}(A;\mathbb{R})$ as $\mathrm{curl}\, V\coloneqq\partial_{x_1}V^2-\partial_{x_2}V^1$ for any $V=(V^1;V^2)\in\mathscr{R}^k(A;\mathbb{R}^2)$\,, and $\mathrm{Curl}\,\colon\mathscr{R}^k(A;\mathbb{R}^{2\times 2})\to \mathscr{R}^{k-1}(A;\mathbb{R}^2)$ as $\mathrm{Curl}\, M\coloneqq(\mathrm{curl}\, M_1;\mathrm{curl}\, M_2)$ for any $M\in \mathscr{R}^k(A;\mathbb{R}^{2\times 2})$\,.
Let now $A\subset\mathbb{R}^2$ be open. For every $V=(V^1,V^2)\in\mathscr{R}^{k}(A;\mathbb{R}^2)$\,, we can define $\underline{V}\in\mathscr{R}^k(A;\mathbb{R}^3)$ as $\underline{V}\coloneqq(V^1;V^2;0)$ and we have that
$$
\textsc{curl}\,\underline{V}=(0;0;\mathrm{curl}\, V)\,.
$$
Analogously, if $M\in\mathscr{R}^k(A;\mathbb{R}^{2\times 2})$\,, then, defining $\underline{M}\colon A\to \mathbb{R}^{3\times 3}$ by $\underline{M}_{ij}=M_{ij}$ if $i,j\in\{1,2\}$ and $\underline{M}_{ij}=0$ otherwise, we have that $\underline{M}\in \mathscr{R}^k(A;\mathbb{R}^{3\times 3})$\,,
$$
\mathrm{CURL}\,\underline{M}=\left[\begin{array}{ccc}
0&0&0\\
0&0&0\\
\mathrm{curl}\, M_1&\mathrm{curl}\, M_2&0
\end{array}\right]\,,\qquad \mathrm{CURL}\,\ccurl\underline{M}=\left[\begin{array}{ccc}
0&0&0\\
0&0&0\\
0&0&\mathrm{curl}\,\mathrm{Curl}\, M
\end{array}\right]\,.
$$
In what follows, $\mathbb{R}^{2\times 2}_{\mathrm{sym}}$ is the set of the matrices $M\in\mathbb{R}^{2\times 2}$ with $M_{ij}=M_{ji}$ for every $i,j=1,2$\,.
Finally, for every $M\in\mathbb{R}^{2\times 2}$ we denote by $M^{\top}$ the matrix with entries $(M^{\top})_{ij}=M_{ji}$ for every $i,j=1,2$\,.
\section{The mechanical model}\label{sc:model}
\subsection{Plane strain elasticity}
Let $\Omega$ be an open bounded simply connected subset of $\mathbb{R}^2$ with~$C^2$ boundary. For any displacement $u\in H^1(\Omega;\mathbb{R}^2)$ the associated elastic strain $\epsilon\in L^2(\Omega;\mathbb{R}^{2\times 2}_{\mathrm{sym}})$ is given by $\epsilon\coloneqq\nabla^{\mathrm{sym}} u\coloneqq\frac{1}{2}(\nabla u+\nabla^{\top} u)$, whereas the corresponding stress $\sigma\in L^2(\Omega;\mathbb{R}^{2\times 2}_{\mathrm{sym}})$ is defined by
\begin{equation}\label{stressstrain}
\sigma\coloneqq\mathbb{C}\epsilon\coloneqq\lambda\mathrm{tr}(\epsilon)\mathbb{I}_{2\times 2}+2\mu\epsilon\,;
\end{equation}
here $\mathbb{C}$ is the {\it isotropic elasticity tensor} with {\it Lam\'e constants} $\lambda$ and $\mu$\,.
Notice that
\begin{subequations}\label{lamepos}
\begin{equation}
\mathbb{C}\textrm{ is positive definite}
\end{equation}
if and only if
\begin{equation}\label{lame}
\mu>0\qquad\textrm{ and }\qquad\lambda+\mu>0\,,
\end{equation}
or, equivalently,
\begin{equation}\label{lame3}
E>0\qquad\textrm{ and }\qquad-1<\nu<\frac{1}{2}\,.
\end{equation}
Here and below, $E$ is the {\it Young modulus} and $\nu$ is the {\it Poisson ratio}, in terms of which the Lam\'e constants $\lambda$ and $\mu$ are expressed by
\end{subequations}
\begin{equation}\label{lame2}
\mu=\frac{E}{2(1+\nu)}\qquad\textrm{ and }\qquad \lambda=\frac{E\nu}{(1+\nu)(1-2\nu)}\,.
\end{equation}
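For the reader's convenience, we also record the inverse relations (not needed in what follows), which are the standard three-dimensional expressions and follow from \eqref{lame2} by direct algebraic manipulation:
\begin{equation*}
\nu=\frac{\lambda}{2(\lambda+\mu)}\qquad\textrm{ and }\qquad E=\frac{\mu(3\lambda+2\mu)}{\lambda+\mu}\,.
\end{equation*}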
We will assume \eqref{lamepos} throughout the paper.
In plane strain elasticity the isotropic elastic energy associated with the displacement $u$ in the body $\Omega$ is defined by
\begin{equation}\label{def:energy}
\mathcal{E}(u;\Omega)\coloneqq\frac 1 2 \int_{\Omega}\sigma:\epsilon\,\mathrm{d} x=\frac{1}{2}\int_{\Omega}\big(\lambda (\mathrm{tr}(\epsilon))^2+2\mu|\epsilon|^2\big)\,\mathrm{d} x\,;
\end{equation}
we notice that in formula \eqref{def:energy} the energy $\mathcal{E}(\cdot;\Omega)$ depends only on $\epsilon$ so that in the following, with a little abuse of notation, we will denote by $\mathcal{E}(\cdot;\Omega)\colon L^2(\Omega;\mathbb{R}^{2\times 2}_\mathrm{sym})\to [0,+\infty)$ the energy functional defined in \eqref{def:energy}, considered as a functional of $\epsilon$ (and not of $u$).
Notice that we can write the elastic energy also as a function of the stress $\sigma$ as
\begin{equation}\label{energysigma}
\mathcal{F}(\sigma;\Omega):=\frac{1}{2}\frac{1+\nu}{E}\int_{\Omega} \big(|\sigma|^2-\nu(\mathrm{tr}(\sigma))^2\big)\,\mathrm{d} x=\mathcal{E}(\epsilon;\Omega)\,,
\end{equation}
where we have used \eqref{stressstrain} and \eqref{lame2} to deduce that
\begin{equation}\label{strain_stress}
\epsilon_{11}=\frac{1+\nu}{E}\Big((1-\nu)\sigma_{11}-\nu\sigma_{22}\Big)\,,\quad \epsilon_{12}=\frac{1+\nu}{E}\sigma_{12}\,,\quad \epsilon_{22}=\frac{1+\nu}{E}\Big((1-\nu)\sigma_{22}-\nu\sigma_{11}\Big)\,,
\end{equation}
and
\begin{equation}\label{intE}
\lambda \big(\mathrm{tr}(\epsilon)\big)^2+2\mu|\epsilon|^2=\frac{1+\nu}{E}\big(|\sigma|^2-\nu(\mathrm{tr}(\sigma))^2\big)\,.
\end{equation}
Finally, we reformulate the energy \eqref{energysigma} using the Airy stress function method. This assumes the existence of a function $v\in H^2(\Omega)$ such that
\begin{equation}\label{airy}
\sigma_{11}=\partial^2_{x_2^2}v\,,\quad\sigma_{12}=-\partial^{2}_{x_1x_2}v\,,\quad\ \sigma_{22}=\partial^2_{x_1^2}v\,;
\end{equation}
more precisely, we consider the operator $\mathsf{A}\colon\mathscr{R}^k(\Omega)\to\mathscr{R}^{k-2}(\Omega;\mathbb{R}^{2\times 2}_\mathrm{sym})$ such that $\sigma=\sigma[v]=\mathsf{A}(v)$ is defined by \eqref{airy}\,.
It is immediate to see that the operator $\mathsf{A}$ is not injective, since $\mathsf{A}(v)=\mathsf{A}(w)$ whenever $v$ and $w$ differ by an affine function; its invertibility under suitable boundary conditions will be discussed in Subsection \ref{incairy} (see Proposition \ref{prop:airyepsilon}).
Assuming that there exists $v$ such that $\sigma=\sigma[v]=\mathsf{A}(v)$\,, from \eqref{airy}, we can rewrite \eqref{energysigma} as
\begin{equation}\label{energyairy}
\mathcal{F}(\sigma[v];\Omega)=\frac 1 2\frac{1+\nu}{E}\int_{\Omega}\Big(|\nabla^2 v|^2-\nu|\Delta v|^2\Big)\,\mathrm{d} x\eqqcolon \mathcal{G}(v;\Omega)\,.
\end{equation}
We notice that if the stress $\sigma$ admits an Airy potential $v$\,, i.e., $\sigma=\sigma[v]=\mathsf{A}(v)$\,, then
\begin{equation}\label{divenulla}
\mathrm{Div}\,\sigma[v]\equiv 0\,,
\end{equation}
that is, the equilibrium equation $\mathrm{Div}\,\sigma= 0$ is automatically satisfied.
In fact, this is the main advantage in using the Airy stress function method.
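Indeed, \eqref{divenulla} can be checked directly from \eqref{airy}: assuming enough smoothness so that mixed partial derivatives commute,
\begin{equation*}
(\mathrm{Div}\,\sigma[v])_1=\partial_{x_1}\sigma_{11}+\partial_{x_2}\sigma_{12}=\partial_{x_1}\partial^2_{x_2^2}v-\partial_{x_2}\partial^2_{x_1x_2}v=0\,,
\end{equation*}
and, analogously, $(\mathrm{Div}\,\sigma[v])_2=\partial_{x_1}\sigma_{12}+\partial_{x_2}\sigma_{22}=-\partial_{x_1}\partial^2_{x_1x_2}v+\partial_{x_2}\partial^2_{x_1^2}v=0$\,.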
\subsection{Kinematic incompatibility: dislocations and disclinations}\label{sc:inclab}
Let $u\in C^3(\Omega;\mathbb{R}^2)$ and set $\beta:=\nabla u$\,. Clearly,
\begin{subequations}\label{compa_tutte}
\begin{equation}\label{compa}
\mathrm{Curl}\,\beta=0\qquad\textrm{ in }\Omega\,.
\end{equation}
We can decompose $\beta$ as
$\beta=\epsilon+\beta^{\mathrm{skew}}$\,, where $\epsilon\coloneqq\frac{1}{2}(\beta+\beta^{\top})$ and $\beta^{\mathrm{skew}}\coloneqq\frac{1}{2}(\beta-\beta^{\top})$\,. By construction,
\begin{equation*}
\beta^{\mathrm{skew}}=\left(\begin{array}{ll} 0&f\\
-f&0
\end{array}\right)\,,
\end{equation*}
for some function $f\in C^2(\Omega)$\,, and hence $\mathrm{Curl}\,\beta^{\mathrm{skew}}=\nabla f$\,.
Therefore, the compatibility condition \eqref{compa} can be rewritten as
\begin{equation}\label{compa2}
\mathrm{Curl}\,\epsilon=-\nabla f\qquad\textrm{in }\Omega\,,
\end{equation}
which, applying again the $\mathrm{curl}\,$ operator, yields the {\it Saint-Venant compatibility condition}
\begin{equation}\label{compa3}
\mathrm{curl}\,\mathrm{Curl}\,\epsilon=0\qquad\textrm{in }\Omega\,.
\end{equation}
\end{subequations}
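For the reader's convenience, we record the componentwise form of the Saint-Venant operator, which follows directly from the definitions of $\mathrm{curl}\,$ and $\mathrm{Curl}\,$ given in the Notation and from the symmetry of $\epsilon$:
\begin{equation*}
\mathrm{curl}\,\mathrm{Curl}\,\epsilon=\partial^2_{x_2^2}\epsilon_{11}-2\partial^2_{x_1x_2}\epsilon_{12}+\partial^2_{x_1^2}\epsilon_{22}\,.
\end{equation*}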
Vice versa, given $\epsilon\in C^2(\Omega;\mathbb{R}^{2\times2}_{\mathrm{sym}})$, the {\it Saint-Venant principle} \cite{SV1855} states that if \eqref{compa3} holds, then there exists $u\in C^3(\Omega;\mathbb{R}^2)$ such that $\epsilon=\nabla^{\mathrm{sym}}u$\,.
In order to apply the direct method of the Calculus of Variations for the minimization of the elastic energy \eqref{def:energy}, the natural functional setting for the displacement $u$ is the Sobolev space $H^1(\Omega;\mathbb{R}^2)$\,. Therefore, a natural question that arises is whether identities \eqref{compa_tutte} make sense also when $\beta$ is just in $L^2(\Omega;\mathbb{R}^{2\times 2})$.
The answer to this question is affirmative as shown by the following result proved in \cite{ciarlet05} (see also \cite{Geymonat09}).
\begin{proposition}\label{sv}
Let $\Omega\subset\mathbb{R}^2$ be an open, bounded, and simply connected set and let $\epsilon \in L^2(\Omega;\mathbb{R}^{2\times 2}_{\mathrm{sym}})$\,. Then,
\begin{equation}\label{compa4}
\mathrm{curl}\,\mathrm{Curl}\,\epsilon=0\qquad\textrm{in }H^{-2}(\Omega)
\end{equation}
if and only if there exists $u\in H^1(\Omega;\mathbb{R}^2)$ such that $\epsilon=\nabla^{\mathrm{sym}} u$\,. Moreover, $u$ is unique up to rigid motions.
\end{proposition}
Notice that, by the Closed Graph Theorem, we have that \eqref{compa4} holds true in $H^{-2}(\Omega)$ if and only if it holds in the sense of distributions. Therefore, the generalizations of identities \eqref{compa_tutte} when $u\in H^1(\Omega;\mathbb{R}^2)$ are given by
\begin{subequations}\label{compadebole}
\begin{eqnarray}\label{compa10}
\mathrm{Curl}\,\beta& \!\!\!\! =&\!\!\!\! 0\qquad\textrm{ in }\mathcal{D}'(\Omega;\mathbb{R}^2)\,,\\ \label{compa20}
\mathrm{Curl}\,\epsilon& \!\!\!\!=&\!\!\!\! -\nabla f\qquad\textrm{ in }\mathcal{D}'(\Omega;\mathbb{R}^2)\,, \\
\label{compa30}
\mathrm{curl}\,\mathrm{Curl}\,\epsilon& \!\!\!\!=&\!\!\!\! 0\qquad\textrm{in }\mathcal{D}'(\Omega)\,,
\end{eqnarray}
\end{subequations}
where $f$ is a function in $L^2(\Omega)$ and the operator $\nabla$ should be understood in the sense of distributions. (Here and below, $\mathcal{D}'(\Omega;\mathbb{R}^2)$ and $\mathcal{D}'(\Omega)$ denote the families of $\mathbb{R}^2$-valued and $\mathbb{R}$-valued, respectively, distributions on $\Omega$\,.)
Clearly, if $\beta$ is not a gradient, then equations \eqref{compadebole}
are not satisfied anymore.
In particular, if the right-hand side of \eqref{compa10} is equal to some $\alpha\in H^{-1}(\Omega;\mathbb{R}^2)$, then \eqref{compa20} becomes
\begin{equation}\label{compa200}
\mathrm{Curl}\,\epsilon=\alpha-\nabla f\qquad\textrm{ in }\mathcal{D}'(\Omega;\mathbb{R}^2)\,.
\end{equation}
Moreover, if the right-hand side of \eqref{compa20} is equal to $-\kappa$ where $\kappa\in H^{-1}(\Omega;\mathbb{R}^2)$ is not a gradient,
then
\eqref{compa30} becomes
\begin{equation}\label{compa300}
\mathrm{curl}\,\mathrm{Curl}\,\epsilon=-\theta\qquad\textrm{in }\mathcal{D}'(\Omega)\,,
\end{equation}
where we have set $\theta\coloneqq\mathrm{curl}\,\kappa$\,.
Finally, when both incompatibilities are present, we have that
\begin{equation}\label{incfinal}
\mathrm{curl}\,\mathrm{Curl}\,\epsilon=\mathrm{curl}\,\alpha-\theta\qquad\textrm{in }\mathcal{D}'(\Omega)\,.
\end{equation}
We will focus on the case when $\alpha$ and $\theta$ are finite sums of Dirac deltas.
More precisely, we will consider $\alpha\in\mathscr{ED}(\Omega)$ and $\theta\in\mathscr{WD}(\Omega)$\,, where
\begin{equation*}
\begin{aligned}
\mathscr{ED}(\Omega)\coloneqq&\bigg\{\alpha=\sum_{j=1}^{J}b^j\delta_{x^j}\,:\,J\in\mathbb{N}\,,\,b^j\in\mathbb{R}^2\setminus\{0\}\,,\,x^j\in\Omega\,,\,x^{j_1}\neq x^{j_2}\textrm{ for }j_1\neq j_2\bigg\}\,,\\
\mathscr{WD}(\Omega)\coloneqq&\bigg\{\theta=\sum_{k=1}^{K}s^k\delta_{y^k}\,:\,K\in\mathbb{N}\,,\,s^k\in\mathbb{R}\setminus\{0\}\,,\,y^k\in\Omega\,,\,y^{k_1}\neq y^{k_2}\textrm{ for }k_1\neq k_2\bigg\}\,.
\end{aligned}
\end{equation*}
In this case \eqref{incfinal} reads
\begin{equation}\label{incfinal2}
\mathrm{curl}\,\mathrm{Curl}\,\epsilon=-\sum_{j=1}^{J}|b^j|\partial_{\frac{(b^j)^\perp}{|b^j|}}\delta_{x^j}-\sum_{k=1}^{K}s^k\delta_{y^k}\qquad\textrm{in }\mathcal{D}'(\Omega)\,,
\end{equation}
where we recall that $b^\perp=(-b_2;b_1)$ for every $b=(b_1;b_2)\in\mathbb{R}^2$\,.
The measure $\alpha$ identifies a system of $J$ edge dislocations with Burgers vectors $b^j$\,; the measure $\theta$ identifies a system of $K$ wedge disclinations with Frank angles $s^k$\,.
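The first sum in \eqref{incfinal2} follows from a direct distributional computation: for a single term $\alpha=b\,\delta_{x}$ with $b\in\mathbb{R}^2\setminus\{0\}$ and $x\in\Omega$\,,
\begin{equation*}
\mathrm{curl}\,(b\,\delta_{x})=\partial_{x_1}(b_2\delta_{x})-\partial_{x_2}(b_1\delta_{x})=-b^\perp\cdot\nabla\delta_{x}=-|b|\,\partial_{\frac{b^\perp}{|b|}}\delta_{x}\,.
\end{equation*}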
\begin{remark}\label{discreteweigths}
\rm{For the sake of simplicity, we will assume that the weights $b^j$ and $s^k$ of the singularities of $\alpha$ and $\theta$ lie in $\mathbb{R}^2\setminus\{0\}$ and $\mathbb{R}\setminus\{0\}$\,, respectively.
Actually, in the theory of perfect edge dislocations, we have that $b^j\in\mathcal{B}\subset\mathbb{R}^2$\,, where $\mathcal{B}$ is the {\it slip system}, i.e., the (discrete) set of the vectors of the crystallographic lattice. Analogously, in the theory of perfect disclinations, $s^k\in\mathcal{S}$\,, where, in a regular Bravais lattice, $\mathcal{S}$ is given by the integer multiples of the minimal angle $s$ between two adjacent nearest-neighbor bonds of a given point (namely, $s=\pm\frac{\pi}{2}$ in the square lattice and $s=\pm\frac{\pi}{3}$ in the regular triangular lattice).
Whenever $b^j$ are not vectors in $\mathcal{B}$ or $s^k$ are not angles in $\mathcal{S}$, the corresponding dislocations and disclinations are referred to as \textit{partial}, see \cite{Wit1972,N67}.
Since we will focus only on the regime of finite number of edge dislocations and wedge disclinations, the classes $\mathcal{B}$ and $\mathcal{S}$ do not play any role in our analysis.
}
\end{remark}
Let $\alpha\in\mathscr{ED}(\Omega)$ and $\theta\in\mathscr{WD}(\Omega)$\,.
Following \cite{W68,dW1}, for every open set $A\subset\Omega$ with $\partial A\cap(\mathrm{spt}\,\alpha\cup\mathrm{spt}\,\theta)=\emptyset$ we define the Frank angle $\omega\res A$\,, the Burgers vector ${\bf b}\res A$\,, and the {\it total Burgers vector} ${\bf B}\res A$ restricted to $A$ as
$$
\omega\res A\coloneqq\theta(A)\,,\qquad
{\bf b}\res A \coloneqq \alpha(A)\,,\qquad
{\bf B}\res A\coloneqq{\bf b}\res A-\int_{A}(-x_2;x_1)\,\mathrm{d}\theta\,.
$$
We notice that in \cite{W68,dW1}\,, the Frank angle is indeed a rotation vector $\bm\Omega\res A$, which in our plane elasticity setting is the vector perpendicular to the cross section given by $\bm\Omega\res A=(0;0;\omega\res A)$\,.
For the purpose of illustration, we notice that if $\mathrm{spt}\,\theta\subset \Omega\setminus A$\,, then $\omega\res A=0$ and ${\bf B}\res A={\bf b}\res A=\alpha(A)$\,. Now, if $\mathrm{spt}\,\alpha\subset \Omega\setminus A$ and
$\theta=s\delta_{y}$ for some $y\in A$\,, then
$\omega\res A=\theta(A)=s$\,, ${\bf b}\res A=0$\,, and ${\bf B}\res A=-s(-y_2;y_1)$\,. This illustrates the different contributions of dislocations and disclinations to the quantities $\omega$\,, $\bf b$\,, and $\bf B$ just introduced: dislocations only contribute to the Burgers vector but never to the Frank angle, whereas disclinations contribute both to the Frank angle and to the total Burgers vector.
Finally, supposing for convenience that $\mathrm{spt}\,\alpha\subset\Omega\setminus A$\,, if $\theta=s\big(\delta_{y+\frac{h}{2}}-\delta_{y-\frac{h}{2}}\big)$ for some $y,h\in\mathbb{R}^2$ with $y\pm\frac{h}{2}\in A$\,, we have that
$$
\omega\res A=0\qquad\textrm{and}\qquad\mathbf{B}\res A=-s(-h_2;h_1)\,,
$$
which shows that a dipole of opposite disclinations does not contribute to the Frank angle but contributes to the total Burgers vector independently of its center $y$ (see Section~\ref{sub:dipole}).
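Indeed, since $\mathrm{spt}\,\alpha\subset\Omega\setminus A$ gives ${\bf b}\res A=0$ and $\theta(A)=s-s=0$\,, a direct computation yields
\begin{equation*}
{\bf B}\res A=-\int_{A}(-x_2;x_1)\,\mathrm{d}\theta=-s\Big[\Big({-}y_2-\tfrac{h_2}{2}\,;\,y_1+\tfrac{h_1}{2}\Big)-\Big({-}y_2+\tfrac{h_2}{2}\,;\,y_1-\tfrac{h_1}{2}\Big)\Big]=-s(-h_2;h_1)\,,
\end{equation*}
so that the dependence on the center~$y$ cancels out.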
\subsection{Disclinations in terms of the Airy stress function}\label{incairy}
In this subsection, we rewrite the incompatibility condition in \eqref{incfinal} in terms of the Airy stress function $v$ introduced in \eqref{airy}.
To this purpose, assume that $\alpha\equiv 0$\,, so that \eqref{incfinal} coincides with \eqref{compa300}.
Here and henceforth we use the symbols $n$ and $t$ to denote the external unit normal and tangent vectors, respectively, such that $t=n^{\perp}=(-n_2;n_1)$\,; in this way, the ordered pair $\{n,t\}$ is a right-handed orthonormal basis of~$\mathbb{R}^2$\,.
Consider $v\colon\Omega\to\mathbb{R}$ and let $\sigma=\sigma[v]=\mathsf{A}(v)$ (see \eqref{airy}) and $\epsilon[v]=\mathbb{C}^{-1}\sigma[v]$ (see \eqref{strain_stress}).
Then, formally,
\begin{subequations}\label{conversions}
\begin{eqnarray}
\mathrm{curl}\,\mathrm{Curl}\, \epsilon[v]& \!\!\!\!\equiv& \!\!\!\!\frac{1-\nu^2}{E}\Delta^2 v\,,
\label{airy2} \\
\mathbb{C}\epsilon[v]\,n& \!\!\!\!\equiv& \!\!\!\!\sigma[v]\, n\equiv(\partial^2_{x_2^2}vn_1-\partial_{x_1x_2}^2 v n_2;-\partial^2_{x_1x_2}vn_1+\partial^2_{x_1^2}vn_2)\equiv\nabla^2 v\, t\,. \label{airybdry}
\end{eqnarray}
\end{subequations}
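The identity \eqref{airy2} can be verified by a formal computation: substituting \eqref{airy} into \eqref{strain_stress} gives
\begin{equation*}
\epsilon_{11}[v]=\frac{1+\nu}{E}\big((1-\nu)\partial^2_{x_2^2}v-\nu\partial^2_{x_1^2}v\big)\,,\quad
\epsilon_{12}[v]=-\frac{1+\nu}{E}\partial^2_{x_1x_2}v\,,\quad
\epsilon_{22}[v]=\frac{1+\nu}{E}\big((1-\nu)\partial^2_{x_1^2}v-\nu\partial^2_{x_2^2}v\big)\,,
\end{equation*}
and, since $\mathrm{curl}\,\mathrm{Curl}\,\epsilon=\partial^2_{x_2^2}\epsilon_{11}-2\partial^2_{x_1x_2}\epsilon_{12}+\partial^2_{x_1^2}\epsilon_{22}$ by the definitions in the Notation,
\begin{equation*}
\mathrm{curl}\,\mathrm{Curl}\,\epsilon[v]=\frac{1+\nu}{E}\Big((1-\nu)\big(\partial^4_{x_1^4}v+\partial^4_{x_2^4}v\big)+2(1-\nu)\partial^4_{x_1^2x_2^2}v\Big)=\frac{1-\nu^2}{E}\Delta^2v\,.
\end{equation*}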
As customary in mechanics, we refer to
the zero-stress boundary condition
$\mathbb{C}\epsilon[v]\,n=0$
on $\partial\Omega$ as \textit{traction-free}.
With some abuse of notation, we also call \textit{traction-free} the same boundary condition expressed in terms of the tangential component of the Hessian of the Airy potential, that is, $\nabla^2 v\, t=0$ on $\partial\Omega$\,.
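We point out a heuristic interpretation of this condition: if $\gamma$ is an arclength parametrization of (a connected component of) $\partial\Omega$\,, then $\frac{\mathrm{d}}{\mathrm{d} s}\nabla v(\gamma(s))=\nabla^2v(\gamma(s))\,t(s)$\,, so that, at least for smooth~$v$\,, the traction-free condition $\nabla^2v\,t=0$ amounts to requiring that $\nabla v$ be constant along each connected component of $\partial\Omega$\,.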
If $\epsilon$ satisfies the equilibrium equations subject to the incompatibility constraint \eqref{compa300} for some $\theta\in\mathscr{WD}(\Omega)$, namely
\begin{equation}\label{cauchyepA}
\begin{cases}
\mathrm{curl}\,\mathrm{Curl}\,\epsilon=-\theta&\text{in $\Omega$}\\
\mathrm{Div}\,\mathbb{C}\epsilon=0&\text{in $\Omega$}\\
\mathbb{C}\epsilon\,n=0&\textrm{on $\partial\Omega$}\,,
\end{cases}
\end{equation}
then, by \eqref{divenulla} and \eqref{conversions}, the Airy stress function $v$ satisfies the system
\begin{equation}\label{cauchyvA}
\begin{cases}
\displaystyle \frac{1-\nu^2}{E}\Delta^2v=-\theta&\textrm{in $\Omega$}\\[2mm]
\nabla^2v\,t=0&\textrm{on $\partial\Omega$\,.}
\end{cases}
\end{equation}
Recalling that \eqref{compa300} holds in the sense of distributions, the study of the regularity of the fields~$\epsilon$ and~$\sigma$ in the laboratory setting and of the Airy stress function~$v$ must be carried out carefully. The reason is the following: the measure of the elastic incompatibility $\theta\in\mathscr{WD}(\Omega)$ is an element of the space $H^{-2}(\Omega)$, so that it is natural to expect that $\epsilon,\sigma\in L^2(\Omega;\mathbb{R}^{2\times2}_{\mathrm{sym}})$ and that $v\in H^2(\Omega)$. At this level, $\mathbb{C}\epsilon\,n|_{\partial\Omega}$ and $\nabla^2v\,t|_{\partial\Omega}$ make sense only as elements of $H^{-\frac{1}{2}}(\partial\Omega;\mathbb{R}^{2})$, so that the boundary conditions in \eqref{cauchyepA} and \eqref{cauchyvA} cannot be understood in a pointwise sense, even when the tangent and normal vectors are defined pointwise.
In Propositions \ref{prop:airyepsilonA} and \ref{prop:airyepsilon}, we establish the equivalence of problems \eqref{cauchyepA} and \eqref{cauchyvA} and we show that, under suitable assumptions on the regularity of $\partial\Omega$\,, the boundary conditions hold in the sense of $H^{\frac 1 2}(\partial\Omega;\mathbb{R}^2)$\,.
To this purpose, we introduce the function $\bar v\in H^2_\mathrm{loc}(\mathbb{R}^2)$ defined by
\begin{equation}\label{fundamdiscl}
\bar v(x)\coloneqq\begin{cases}
\displaystyle \frac{E}{1-\nu^2}\frac{|x|^2}{16\pi}\log|x|^2 & \text{if $x\neq0$}\\[2mm]
0 & \text{if $x=0$}
\end{cases}
\end{equation}
as the fundamental solution to the equation
\begin{equation}\label{fundbd}
\frac{1-\nu^2}E\Delta^2v=\delta_0\quad\text{in $\mathbb{R}^2$\,.}
\end{equation}
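That $\bar v$ is indeed a solution to \eqref{fundbd} follows from a classical computation: writing $r=|x|$ and noting that $|x|^2\log|x|^2=2r^2\log r$\,, one has $\Delta(r^2\log r)=4\log r+4$ and $\Delta\log r=2\pi\delta_0$ in $\mathcal{D}'(\mathbb{R}^2)$\,, whence
\begin{equation*}
\frac{1-\nu^2}{E}\Delta^2\bar v=\Delta^2\Big(\frac{r^2\log r}{8\pi}\Big)=\frac{1}{8\pi}\Delta\big(4\log r+4\big)=\delta_0\,.
\end{equation*}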
Given $\theta=\sum_{k=1}^{K}s^k\delta_{y^k}\in\mathscr{WD}(\Omega)$,
for every $k=1,\ldots,K$, we let $v^{k}(\cdot)\coloneqq -s^k\bar v(\cdot -y^{k})\res\Omega$ and define
\begin{equation}\label{plastic_parts}
v^p\coloneqq \sum_{k=1}^{K} v^k\,,\quad
\sigma^p\coloneqq\sigma^p[v^p]=\mathsf{A}(v^p)=\sum_{k=1}^K\mathsf{A}(v^k)\,,\qquad
\epsilon^p\coloneqq\epsilon^p[v^p]=\mathbb{C}^{-1}\sigma^p[v^p]=\mathbb{C}^{-1}\sigma^p\,,
\end{equation}
which we are going to refer to as the \emph{plastic contributions}.
Notice that, by construction, $v^p$ is smooth in $\mathbb{R}^2\setminus\mathrm{spt}\,\theta$ and hence on $\partial\Omega$ and so are $\sigma^p$ and $\epsilon^p$\,.
Recalling \eqref{fundamdiscl} and \eqref{fundbd}, we see that
\begin{equation}\label{bilaplacian_vp}
\frac{1-\nu^2}E\Delta^2v^p=-\theta\qquad \text{in $\Omega$\,,}
\end{equation}
so that, if $v$ solves the equation in \eqref{cauchyvA} and we define the function $v^e$ through the additive decomposition
\begin{equation}\label{add_dec_v}
v\coloneqq v^p+v^e\,,
\end{equation}
then, $v^e$ satisfies
\begin{equation}\label{bastaquesta}
\begin{cases}
\displaystyle \frac{1-\nu^2}E\Delta^2v^e=0&\text{in $\Omega$}\\[2mm]
\displaystyle \nabla^2v^e\,t=-\nabla^2v^p\,t&\text{on $\partial\Omega$\,.}
\end{cases}
\end{equation}
Therefore, by \eqref{bilaplacian_vp}, we can find a solution $v$ to problem \eqref{cauchyvA} if and only if we find a solution to problem \eqref{bastaquesta}.
Similarly, by \eqref{airy2},
\begin{equation}\label{inc_ep}
\mathrm{curl}\,\mathrm{Curl}\,\epsilon^p=-\theta\qquad \text{in $\Omega$\,,}
\end{equation}
so that if $\epsilon$ solves the equation in \eqref{cauchyepA} and we define the field $\epsilon^e$ through the additive decomposition
\begin{equation}\label{add_dec_epsilon}
\epsilon\coloneqq \epsilon^p+\epsilon^e\,,
\end{equation}
then we have $\mathrm{curl}\,\mathrm{Curl}\,\epsilon^e=0$ in $\Omega$ and $\mathbb{C}\epsilon^e\,n=-\mathbb{C}\epsilon^p\,n$ on $\partial\Omega$.
Therefore, by \eqref{inc_ep}, we find a solution $\epsilon$ to problem \eqref{cauchyepA} if and only if we find a solution to problem
\begin{equation}\label{cauchy_ee}
\begin{cases}
\mathrm{curl}\,\mathrm{Curl}\,\epsilon^e=0&\text{in $\Omega$}\\
\mathrm{Div}\,\mathbb{C}\epsilon^e=0&\text{in $\Omega$}\\
\mathbb{C}\epsilon^e\,n=-\mathbb{C}\epsilon^p\,n&\textrm{on $\partial\Omega$\,,}
\end{cases}
\end{equation}
where we notice that the second equation above is automatically satisfied by \eqref{divenulla}.
We refer to~$v^e$ and~$\epsilon ^e$ as the \emph{elastic contributions} and we notice that they are compatible fields.
Upon noticing that the function~$\bar v$ is smooth in $\mathbb{R}^2\setminus\{0\}$ and by requiring that the boundary~$\partial\Omega$ be smooth enough, we will see that problems \eqref{bastaquesta} and \eqref{cauchy_ee} admit solutions which are regular enough for the boundary conditions to make sense in $H^{\frac 1 2}(\partial\Omega;\mathbb{R}^2)$.
We start by proving the following result, which is one implication in the equivalence of problems \eqref{cauchyepA} and \eqref{cauchyvA}.
\begin{proposition}\label{prop:airyepsilonA}
Let $\Omega\subset\mathbb{R}^2$ be a bounded, simply connected, open set, let $\theta\in\mathscr{WD}(\Omega)$, and let~$\epsilon$ be a distributional solution to the first two equations in \eqref{cauchyepA}.
If $\partial\Omega$ is of class~$C^2$, then $\epsilon\in L^2(\Omega;\mathbb{R}^{2\times 2}_{\mathrm{sym}})$
and $\mathbb{C}\epsilon\,n\in H^{\frac 1 2}(\partial\Omega;\mathbb{R}^2)$.
As a consequence, a solution~$\epsilon$ to \eqref{cauchyepA} is uniquely determined up to a rigid motion.\\
Moreover, there exists a function $v\in H^2(\Omega)$ such that
$\epsilon=\mathbb{C}^{-1}\mathsf{A}(v)$. Such $v$ is a distributional solution to the first equation in \eqref{cauchyvA}
and $\nabla^2v\,t\in H^{\frac 1 2}(\partial\Omega;\mathbb{R}^2)$.
As a consequence, a solution~$v$ to \eqref{cauchyvA} is uniquely determined up to an affine function.
\end{proposition}
\begin{proof}
Observe that, since $\theta\in H^{-2}(\Omega)$, the distributional solution~$\epsilon$ to the first two equations in \eqref{cauchyepA} is indeed in $L^2(\Omega;\mathbb{R}^{2\times2}_{\mathrm{sym}})$ and the incompatibility equation in \eqref{cauchyepA} holds in the $H^{-2}$-sense.
Moreover, since the function~$\bar v$ defined in \eqref{fundamdiscl} is of class $H^2_{\mathrm{loc}}(\mathbb{R}^2)$, it follows that
the matrix-valued functions~$\epsilon^p$ and~$\sigma^p$ defined in \eqref{plastic_parts} are elements of $L^2(\Omega;\mathbb{R}^{2\times2}_{\mathrm{sym}})$ and that equation \eqref{inc_ep} holds in $H^{-2}(\Omega)$.
Therefore, the elastic strain~$\epsilon^e$ defined in \eqref{add_dec_epsilon} by difference is itself in $L^2(\Omega;\mathbb{R}^{2\times2}_{\mathrm{sym}})$ and satisfies \eqref{cauchy_ee}.
We define $G\colon H^1(\Omega;\mathbb{R}^2)\to\mathbb{R}$ as
\begin{equation*}
G(u)\coloneqq\int_{\Omega}\mathbb{C}\nabla^{\mathrm{sym}}u:\nabla^{\mathrm{sym}}u\,\mathrm{d} x+2\int_{\Omega} \sigma^p:\nabla^{\mathrm{sym}}u\,\mathrm{d} x\,,
\end{equation*}
and we observe that it is bounded below in $H^1(\Omega;\mathbb{R}^2)$, so that, by applying the direct method of the Calculus of Variations, in view of Korn's inequality, it admits a minimizer $u^e\in H^1(\Omega;\mathbb{R}^2)$, which is unique up to rigid motions.
Setting $\epsilon^e\coloneqq\nabla^\mathrm{sym} u^e$\,, we have that $\epsilon^e$ satisfies \eqref{cauchy_ee}, which is the Euler--Lagrange equation for~$G$\,.
Notice that, since $\partial\Omega$ is of class $C^2$ and since $\epsilon^p$ is smooth on $\partial\Omega$\,, by standard regularity results $u^e\in H^2(\Omega;\mathbb{R}^2)$ and hence $\epsilon^e\in H^1(\Omega;\mathbb{R}^{2\times 2}_{\mathrm{sym}})$\,. It follows that
$\mathbb{C}\epsilon^e\,n\in H^{\frac12}(\partial\Omega;\mathbb{R}^2)$, so that, in view of \eqref{add_dec_epsilon},
$\mathbb{C}\epsilon\,n\in H^{\frac12}(\partial\Omega;\mathbb{R}^2)$.
Now we can apply \cite[Theorem 5.6-1(a)]{ciarlet97}, and in particular the argument in \cite[page~397]{ciarlet97}, which guarantees that a strain field $\epsilon^e\in H^m(\Omega;\mathbb{R}^{2\times2}_{\mathrm{sym}})$ admits an Airy stress function $v^e=\mathsf{A}^{-1}(\epsilon^e)\in H^{m+2}(\Omega)$, for every $m\geq0$. By applying this result with $m=1$, we obtain that $v^e\in H^3(\Omega)$ and hence $\nabla^2v^e\,t\in H^{\frac12}(\partial\Omega;\mathbb{R}^2)$\,; this, together with \eqref{cauchy_ee} implies that $v^e$ solves \eqref{bastaquesta}.
By taking~$v^p$ as in \eqref{plastic_parts} and by defining~$v$ according to \eqref{add_dec_v}, we have that $\nabla^2v\,t\in H^{\frac12}(\partial\Omega;\mathbb{R}^2)$ because~$v^p$ is smooth in a neighborhood of~$\partial\Omega$\,.
Thanks to \eqref{conversions} and \eqref{bilaplacian_vp}, $v$ solves \eqref{cauchyvA}.
Since affine functions are in the kernel of the Hessian operator~$\nabla^2$, the last statement of the proposition follows.
\end{proof}
\begin{remark}
{\rm It is easy to check that rigid motions in the laboratory variables correspond to affine functions in the Airy variable.}
\end{remark}
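The equivalence between the incompatibility equation in \eqref{cauchyepA} and the bilaplacian equation in \eqref{cauchyvA} rests on the identity $\mathrm{curl}\,\mathrm{Curl}\,\mathsf{A}(v)=\Delta^2 v$. The following symbolic sanity check assumes the classical cofactor-of-Hessian convention for the Airy operator $\mathsf{A}$, which may differ from the normalization fixed earlier in the paper:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
v = sp.Function('v')(x1, x2)

# Airy operator, cofactor-of-Hessian convention (an assumption here):
# A(v) = [[ d_22 v, -d_12 v ], [ -d_12 v, d_11 v ]]
s11 = sp.diff(v, x2, 2)
s22 = sp.diff(v, x1, 2)
s12 = -sp.diff(v, x1, 1, x2, 1)

# curl Curl of a symmetric 2x2 field sigma:
# d_22 sigma_11 - 2 d_12 sigma_12 + d_11 sigma_22
curl_curl = sp.diff(s11, x2, 2) - 2 * sp.diff(s12, x1, 1, x2, 1) + sp.diff(s22, x1, 2)

# bilaplacian: v_1111 + 2 v_1122 + v_2222
bilap = sp.diff(v, x1, 4) + 2 * sp.diff(v, x1, 2, x2, 2) + sp.diff(v, x2, 4)
assert sp.simplify(curl_curl - bilap) == 0  # curl Curl A(v) == Delta^2 v
```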
In order to prove the converse implication of Proposition~\ref{prop:airyepsilonA}, we state the following result, which is an immediate consequence of \cite[Theorem~2.20]{Gazzola09} (applied with $k=4$, $m=n=p=2$, and with $f\equiv 0$ and $h_j\in C^{\infty}$)\,.
\begin{lemma}\label{20220422}
Let $A\subset\mathbb{R}^2$ be a bounded open set with boundary of class $C^4$ and let $f\in C^\infty(\partial A;\mathbb{R}^2)$\,.
Then, there exists a solution $w\in H^2(A)$ to
\begin{equation}\label{20220422_1}
\begin{cases}
\displaystyle \frac{1-\nu^2}{E}\Delta^2w=0&\text{in }A\,,\\[2mm]
\displaystyle \nabla^2w\,t=f&\textrm{on }\partial A\,,
\end{cases}
\end{equation}
where the first equation holds in $H^{-2}(A)$ and the second one is meant in $H^{-\frac 1 2}(\partial A;\mathbb{R}^2)$\,.
Moreover, $w\in H^4(A)$ and hence $\nabla^2w\,t\in H^{\frac 3 2}(\partial A;\mathbb{R}^2)$\,.
\end{lemma}
\begin{proposition}\label{prop:airyepsilon}
Let $\Omega\subset\mathbb{R}^2$ be a bounded open set with boundary of class $C^4$ and let $\theta\in\mathscr{WD}(\Omega)$\,.
Then there exists a weak solution $v\in H^2(\Omega)$ to \eqref{cauchyvA} and the condition $\nabla^2v\,t=0$ on $\partial\Omega$ holds in $H^{\frac 3 2}(\partial\Omega;\mathbb{R}^2)$\,. Furthermore, the function $\epsilon=\epsilon[v]\coloneqq\mathbb{C}^{-1}\mathsf{A}(v)$ is a distributional solution to the first two equations in \eqref{cauchyepA} and satisfies the boundary condition $\mathbb{C}\epsilon\,n=0$ in $H^{\frac 3 2}(\partial\Omega;\mathbb{R}^2)$\,.
\end{proposition}
\begin{proof}
Recalling the definition of $v^p$ in \eqref{plastic_parts}, formula \eqref{bilaplacian_vp}, and the decomposition in \eqref{add_dec_v}, it is enough to show that there exists a solution $v^e\in H^4(\Omega)$ to \eqref{bastaquesta}.
Indeed, this follows from Lemma \ref{20220422} applied with $A=\Omega$ and $f=-\nabla^2v^p\,t$\,, since $v^p\in C^\infty_{\mathrm{loc}}(\Omega\setminus\mathrm{spt}\,\theta)$\,.
By taking $\epsilon=\epsilon[v]\coloneqq\mathbb{C}^{-1}\mathsf{A}(v)$\,, with $v=v^e+v^p$\,, we have that $\epsilon\in L^2(\Omega;\mathbb{R}^{2\times 2})$ is a weak solution to \eqref{cauchyepA} and that the boundary condition $\mathbb{C}\epsilon\,n=0$ on $\partial\Omega$ holds in $H^{\frac 3 2}(\partial\Omega;\mathbb{R}^2)$\,; this last statement follows from \eqref{add_dec_epsilon} since
$\epsilon^p=\mathbb{C}^{-1}\mathsf{A}(v^p)$ is smooth near the boundary of $\Omega$ and $\epsilon^e=\mathbb{C}^{-1}\mathsf{A}(v^e)$ satisfies the condition $\mathbb{C}\epsilon^e\,n=-\mathbb{C}\epsilon^p\,n$ on $\partial\Omega$ in $H^{\frac 3 2}(\partial\Omega;\mathbb{R}^2)$, by Lemma \ref{20220422} and \eqref{airybdry}.
\end{proof}
Now we show how the boundary condition $\nabla^2v\,t=0$ on $\partial\Omega$ in \eqref{cauchyvA} can be
formulated in terms of classical Dirichlet-type boundary conditions.
\begin{proposition}\label{20220421}
Let $A\subset\mathbb{R}^2$ be a bounded open set with boundary of class $C^4$\,. Let $\theta\in\mathscr{WD}(A)$ and let $v\in H^2(A)$ be such that
\begin{equation}\label{2204061255}
\frac{1-\nu^2}{E}\Delta^2v=-\theta\qquad\text{in $A$.}
\end{equation}
Then, denoting by $\Gamma^0,\Gamma^1,\ldots,\Gamma^L$ the connected components of $\partial A$\,, we have that
\begin{equation}\label{20220421_1}
\nabla^2v\,t=0\quad\textrm{on }\partial A\quad\Leftrightarrow\quad v=a^l\,, \quad\partial_n v=\partial_n a^l\quad\textrm{on }\Gamma^l\,,\quad\textrm{ for every }l=0,1,\ldots,L\,,
\end{equation}
where $a^0,a^1,\ldots,a^L$ are affine functions.
\end{proposition}
\begin{proof}
We start by proving the implication ``$\Rightarrow$''. Recalling the additive decomposition \eqref{add_dec_v}, by Lemma \ref{20220422} and since $v^p\in C^\infty_{\mathrm{loc}}(A\setminus\mathrm{spt}\,\theta)$, we have that if $v=v^e+v^p$ satisfies \eqref{2204061255} and
\begin{equation}\label{20220425}
\nabla^2v\,t=0\qquad\textrm{on }\partial A\,,
\end{equation}
then $v^e\in H^4(A)$.
Therefore, by the Rellich--Kondrakov Theorem we also have that $v^e\in C^2(\overline{A})$, so that~$v=v^e+v^p$ is of class $C^2$ in a neighborhood of~$\partial A$\,.
By Proposition \ref{2101141730}, we deduce that the function~$v$ has an affine trace on each connected component of $\partial A$\,.
Conversely, assume that $v$ is a solution to \eqref{2204061255} and satisfies
\begin{equation}\label{20220425_1}
v=a^l\,,\quad \partial_n v=\partial_n a^l\quad\textrm{on }\Gamma^l\,, \quad\textrm{ for every }l=0,1,\ldots,L\,,
\end{equation}
for some affine functions $a^0,a^1,\ldots,a^L$\,.
Then, adopting again the additive decomposition \eqref{add_dec_v}
and recalling \eqref{bilaplacian_vp},
we have that $v^e$ satisfies
\begin{equation} \label{20220425_3}
\left\{\begin{array}{ll}
\displaystyle \frac{1-\nu^2}{E}\Delta^2v^e=0&\qquad\text{in }A\\[2mm]
\displaystyle v^e=a^l-v^p&\qquad\text{on $\Gamma^l$, for every $l=0,1,\ldots,L$}\\[2mm]
\displaystyle \partial_n v^e=\partial_na^l-\partial_nv^p&\qquad\text{on $\Gamma^l$, for every $l=0,1,\ldots,L$\,.}
\end{array}
\right.
\end{equation}
Therefore, by standard regularity results for higher order problems (see, for instance, \cite{Gazzola09}), we have that $v^e\in H^4(A)$ and, again by the Rellich--Kondrakov Theorem, $v^e\in C^2(\overline{A})$\,. It follows that $v=v^e+v^p$ is of class $C^2$ in a neighborhood of~$\partial A$\,, and we can apply again Proposition \ref{2101141730} to deduce that \eqref{20220425} holds true.
\end{proof}
\section{Finite systems of isolated disclinations}\label{sc:isolated_disclinations}
We now study the equilibrium problem for a finite family of isolated disclinations in a body $\Omega$.
The natural idea would be to consider the minimum problem for the elastic energy $\mathcal{G}$ defined in \eqref{energyairy} under the incompatibility constraint \eqref{cauchyvA}\,, associated with a measure $\theta\in\mathscr{WD}(\Omega)$\,;
however, this is inconsistent, since one can easily verify that the Euler--Lagrange equation for $\mathcal{G}$ is $\Delta^2v=0$\,.
To overcome this inconsistency, we define a suitable functional which embeds the presence of the disclinations and whose Euler--Lagrange equation is given by \eqref{cauchyvA}.
To this purpose, let $\Omega\subset\mathbb{R}^2$ be a bounded, open, and simply connected set with boundary of class $C^4$\,;
for every $\theta\in\mathscr{WD}(\Omega)$ let $\mathcal{I}^\theta\colon H^2(\Omega)\to \mathbb{R}$ be the functional defined by
\begin{equation}\label{defI}
\mathcal{I}^{\theta}(v;\Omega)\coloneqq\mathcal{G}(v;\Omega)+\langle \theta,v\rangle\,,
\end{equation}
and consider the minimum problem
\begin{equation}\label{minI}
\min \big\{\mathcal{I}^\theta(v;\Omega) : \text{$v\in H^2(\Omega)$\,, $\nabla^2v\,t=0$ on $\partial\Omega$}\big\}\,.
\end{equation}
A simple calculation shows that the Euler--Lagrange equation for the functional \eqref{defI}, with respect to variations in $H^2_0(\Omega)$\,, is given by \eqref{cauchyvA}.
By Proposition \ref{20220421}, we deduce that the minimum problem in \eqref{minI} is equivalent, up to an affine function, to the minimum problem
\begin{equation}\label{minIaff}
\min \{\mathcal{I}^\theta(v;\Omega)\,:\,v\in H_0^2(\Omega)\}\,.
\end{equation}
\begin{lemma}\label{propItheta}
For every $\theta\in\mathscr{WD}(\Omega)$, the functional $\mathcal{I}^\theta(\cdot;\Omega)$ is strictly convex in $H^2(\Omega)$ and it is bounded below and coercive in $H^2_0(\Omega)$\,.
As a consequence, the minimum problem \eqref{minIaff} has a unique solution.
\end{lemma}
\begin{proof}
We start by proving that $\mathcal{I}^\theta(\cdot;\Omega)$ is bounded below and coercive in $H^2_0(\Omega)$\,. To this purpose,
we first notice that there exists a constant $C_1=C_1(\nu,E,\Omega)>0$ such that for every $v\in H^2_0(\Omega)$
\begin{equation}\label{quadr}
\mathcal{G}(v;\Omega)\ge\frac 1 2\frac{1-\nu^2}{E}\min\{1-2\nu,1\}\|\nabla^2 v\|^2_{L^2(\Omega;\mathbb{R}^{2\times 2})}\ge C_1\|v\|^2_{H^2(\Omega)}\,,
\end{equation}
where in the last passage we have used Friedrichs's inequality in $H^2_0(\Omega)$\,. Notice that the positivity of $C_1$ is a consequence of \eqref{lame3}.
Now, using that $H^2_0(\Omega)$ embeds into $C^0(\overline{\Omega})$\,, we have that there exists a constant $C_2=C_2(\theta,\Omega)>0$ such that for every $v\in H^2_0(\Omega)$
\begin{equation}\label{linear}
\langle \theta,v\rangle=\sum_{k=1}^{K}s^kv(x^k)\ge -C_2\|v\|_{H^2(\Omega)}\,.
\end{equation}
By \eqref{quadr} and \eqref{linear}, we get that for every $v\in H^2_0(\Omega)$
\begin{equation}\label{pincapalla}
\mathcal{I}^\theta(v;\Omega)\ge C_1\|v\|^2_{H^2(\Omega)}-C_2\|v\|_{H^2(\Omega)}\ge
-\frac{C_2^2}{4C_1}\,,
\end{equation}
which implies boundedness below and coercivity of $\mathcal{I}^\theta(\cdot;\Omega)$ in $H^2_0(\Omega)$\,.
Now we show that $\mathcal{G}(\cdot;\Omega)$ is strictly convex in $H^2(\Omega)$\,, which, together with the linearity of the map $v\mapsto\langle\theta,v\rangle$\,, implies the strict convexity of $\mathcal{I}^\theta(\cdot;\Omega)$ in $H^2(\Omega)$\,. To this purpose, let $v,w\in H^2(\Omega)$ with $v\neq w$ and let $\lambda\in (0,1)$\,; then a simple computation shows that
\begin{equation}\label{strictconv}
\begin{split}
\mathcal{G}(\lambda v+(1-\lambda)w;\Omega)=&\,\lambda\mathcal{G}(v;\Omega)+(1-\lambda)\mathcal{G}(w;\Omega)-\lambda(1-\lambda)\mathcal{G}(v-w;\Omega)\\
<&\, \lambda\mathcal{G}(v;\Omega)+(1-\lambda)\mathcal{G}(w;\Omega)\,,
\end{split}
\end{equation}
which is the strict convexity condition.
Hence, by the direct method of the Calculus of Variations together with the strict convexity established above, problem \eqref{minIaff} has a unique solution.
\end{proof}
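Two algebraic steps in the proof can be checked symbolically: the quadratic-form identity behind \eqref{strictconv} (writing $\mathcal{G}(v)=\langle v,v\rangle$ for a generic symmetric bilinear form) and the pointwise inequality underlying the constant $\min\{1-2\nu,1\}$ in \eqref{quadr} for $\nu\in[0,\frac12]$. A sketch with sympy:

```python
import sympy as sp

# (1) Identity (strictconv): for G(v) = <v,v> with <.,.> bilinear symmetric,
#     G(l v + (1-l) w) = l G(v) + (1-l) G(w) - l(1-l) G(v-w).
l, Gv, Gw, p = sp.symbols('l Gv Gw p')   # p stands for <v,w>
lhs = l**2 * Gv + 2 * l * (1 - l) * p + (1 - l)**2 * Gw
rhs = l * Gv + (1 - l) * Gw - l * (1 - l) * (Gv + Gw - 2 * p)
assert sp.expand(lhs - rhs) == 0

# (2) Pointwise bound behind (quadr): with Hessian entries a = v_11, b = v_22,
#     c = v_12 and nu in [0, 1/2], one has
#     a^2 + b^2 + 2c^2 - nu (a+b)^2 >= (1-2nu)(a^2 + b^2 + 2c^2),
#     since the difference equals the non-negative quantity nu((a-b)^2 + 4c^2).
a, b, c, nu = sp.symbols('a b c nu', real=True)
gap = (a**2 + b**2 + 2*c**2 - nu*(a + b)**2) - (1 - 2*nu)*(a**2 + b**2 + 2*c**2)
assert sp.expand(gap - nu*((a - b)**2 + 4*c**2)) == 0
```

Identity (1) also makes the strict convexity transparent: the deficit $\lambda(1-\lambda)\mathcal{G}(v-w)$ is strictly positive whenever $v\neq w$ modulo affine functions.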
\begin{remark}
\rm{
We highlight that inequality \eqref{pincapalla} is compatible with $\mathcal{I}^\theta(\cdot;\Omega)$ taking negative values.
In particular, since $\mathcal{G}$ is non-negative, the sign of $\mathcal{I}^\theta$ is determined by the linear contribution $\langle \theta,v\rangle$\,. It follows that the minimum problems \eqref{minI} and \eqref{minIaff} are non-trivial and, as we will see later (see, e.g., \eqref{minenball}), the minimum of $\mathcal{I}^\theta(\cdot;\Omega)$ is indeed negative.
}
\end{remark}
\begin{remark}\label{equinorm}
\rm{
Notice that the functional $\mathcal{G}^{\frac 1 2}(\cdot;\Omega)$ defines a seminorm on $H^2(\Omega)$ and a norm in $H^2_0(\Omega)$\,, since $\mathcal{G}(v;\Omega)\equiv\langle v,v\rangle_{\mathcal{G}_\Omega} $ where the product $\langle \cdot,\cdot\rangle_{\mathcal{G}_\Omega}$\,, defined by
\begin{equation}\label{semiprod}
\langle v,w\rangle_{\mathcal{G}_\Omega}\coloneqq \frac 1 2 \frac{1+\nu}{E}\int_{\Omega} \big(\nabla^2v:\nabla^2 w-\nu \Delta v\Delta w\big)\,\mathrm{d} x\,,
\end{equation}
is a bilinear, symmetric, and positive semidefinite form in $H^2(\Omega)$ and positive definite in $H^2_0(\Omega)$\,.
We remark that in $H^2_0(\Omega)$ the norm $\mathcal{G}^{\frac 1 2}(\cdot;\Omega)$ is equivalent to the standard norm $\|\cdot\|_{H^2(\Omega)}$\,.
}
\end{remark}
In the following lemma, for any given $\xi\in\mathbb{R}^2$ and $R>0$, we compute the minimal value of $\mathcal{I}^\theta(\cdot;B_R(\xi))$ associated with a single disclination located at $\xi$, corresponding to $\theta=s\textrm{def}_\xi$ for some $s\in\mathbb{R}\setminus\{0\}$\,.
The explicit computation is straightforward and is omitted.
\begin{lemma}\label{sub:single}
Let $s\in\mathbb{R}\setminus\{0\}$\,, $\xi\in\mathbb{R}^2$\,, and $R>0$.
The function $v_R\colon \overline{B}_R(\xi)\to\mathbb{R}$
defined by
\begin{equation*}
\begin{aligned}
v_R(x)\coloneqq & -s\bar v (x-\xi)-s\frac{E}{1-\nu^2}\frac{R^2-|x-\xi|^2(1+\log R^2)}{16\pi}\\
=&-sR^2\bigg(\bar v \Big(\frac {x-\xi} R\Big)+\frac{E}{1-\nu^2}\frac{1}{16\pi}\Big(1-\Big|\frac{x-\xi}{R}\Big|^2\Big)\bigg)\,,
\end{aligned}
\end{equation*}
with $\bar v$ as in \eqref{fundamdiscl}, belongs to $H^2(B_R(\xi))\cap C^\infty_{\mathrm{loc}}(B_R(\xi)\setminus\{\xi\})$ and solves
\begin{equation}\label{cauchyvR}
\begin{cases}
\displaystyle \frac{1-\nu^2}{E}\Delta^2v=-s\textrm{def}_\xi&\text{in $B_R(\xi)$}\\[2mm]
\displaystyle v=\partial_nv=0&\text{on $\partial B_R(\xi)$\,.}
\end{cases}
\end{equation}
Hence $v_R$ is the only minimizer of problem \eqref{minIaff} for $\Omega=B_R(\xi)$ and $\theta=s\textrm{def}_\xi$\,.
Moreover,
\begin{equation}\label{energyfinite}
\mathcal{G}(v_R;B_R(\xi))=\frac{E}{1-\nu^2}\frac{s^2R^2}{32\pi}\qquad\textrm{and}\qquad \langle s\textrm{def}_\xi,v_R\rangle=-\frac{E}{1-\nu^2}\frac{s^2R^2}{16\pi}\,,
\end{equation}
so that
\begin{equation}\label{minenball}
\min_{v\in H^2_0(B_R(\xi))}\mathcal{I}^{s\textrm{def}_\xi}(v; B_R(\xi))=\mathcal{I}^{s\textrm{def}_\xi}(v_R; B_R(\xi))=-\frac{E}{1-\nu^2}\frac{s^2R^2}{32\pi}\,.
\end{equation}
\end{lemma}
In view of \eqref{energysigma} and \eqref{energyairy}, the first equality in \eqref{energyfinite} is the stored elastic energy of a single disclination located at the center of the ball $B_R(\xi)$. Observe that, according to the formulation of the mechanical equilibrium problem in the Airy variable \eqref{cauchyvR}, the charge contribution displayed in the second equality in \eqref{energyfinite} adds to the total energy functional of the system, but does not correspond to an energy of elastic nature.
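The computation omitted in Lemma \ref{sub:single} can be reproduced with a computer algebra system. The radial closed form of $\bar v$ used below, $\bar v(x)=\frac{E}{1-\nu^2}\frac{|x|^2\log|x|^2}{16\pi}$, is an assumption (consistent with the derivative of $\bar v_h$ computed in Section \ref{sub:dipole}), and $\mathcal{G}$ is evaluated through the radial form of the bilinear form \eqref{semiprod}:

```python
import sympy as sp

r, R, s, E, nu = sp.symbols('r R s E nu', positive=True)

# Assumed radial form of the fundamental solution:
# vbar(r) = E/(1-nu^2) * r^2 log(r^2) / (16 pi)
K = E / (1 - nu**2) / (16 * sp.pi)
vR = -s * K * r**2 * sp.log(r**2) - s * K * (R**2 - r**2 * (1 + sp.log(R**2)))

# radial Laplacian in 2D: Delta f = f'' + f'/r
lap = lambda f: sp.diff(f, r, 2) + sp.diff(f, r) / r
assert sp.simplify(lap(lap(vR))) == 0                 # biharmonic away from xi
assert sp.simplify(vR.subs(r, R)) == 0                # v_R = 0 on the boundary
assert sp.simplify(sp.diff(vR, r).subs(r, R)) == 0    # d_n v_R = 0 on the boundary

# elastic energy via the radial form of (semiprod):
# |D^2 v|^2 = (v'')^2 + (v'/r)^2,  (Delta v)^2 = (v'' + v'/r)^2
vp, vpp = sp.diff(vR, r), sp.diff(vR, r, 2)
dens = sp.expand(sp.expand_log(vpp**2 + (vp / r)**2 - nu * (vpp + vp / r)**2))
G = sp.Rational(1, 2) * (1 + nu) / E * sp.integrate(2 * sp.pi * r * dens, (r, 0, R))
assert sp.simplify(G - E / (1 - nu**2) * s**2 * R**2 / (32 * sp.pi)) == 0

# charge term <s def_xi, v_R> = s v_R(xi)
charge = s * sp.limit(vR, r, 0)
assert sp.simplify(charge + E / (1 - nu**2) * s**2 * R**2 / (16 * sp.pi)) == 0
```

Summing the last two values reproduces the minimal energy \eqref{minenball}.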
\section{Dipole of disclinations}\label{sub:dipole}
In \eqref{energyfinite} we have seen that an isolated disclination in the center of a ball of radius $R$ carries an elastic energy of the order $R^2$\,.
Here we show that the situation dramatically changes when considering a dipole of disclinations with opposite signs; indeed, when the distance between the disclinations vanishes, a dipole of disclinations behaves like an edge dislocation and its elastic energy is actually of the order $\log R$\,.
\subsection{Dipole of disclinations in a ball}\label{sub:dipole1}
For every $h>0$ let
\begin{equation}\label{poledipoles}
y^{h,\pm}\coloneqq\pm\frac h 2(1;0)
\end{equation}
and
let $\bar v_h\colon\mathbb{R}^2\to \mathbb{R}$ be the function defined by
\begin{equation}\label{defvbarh}
\bar v_h(x)\coloneqq -s(\bar v(x-y^{h,+})-\bar v(x-y^{h,-}))\,,
\end{equation}
where $\bar v$ is given in \eqref{fundamdiscl}.
By construction, $\bar v_h\res{B_R(0)}\in H^2(B_R(0))$ and
\begin{equation}\label{bilaplvbarh}
\frac{1-\nu^2}{E}\Delta^2\bar v_h=-\theta_h\qquad\textrm{in }\mathbb{R}^2\,,
\end{equation}
where we have set
\begin{equation}\label{thetah}
\theta_h\coloneqq s\big(\textrm{def}_{y^{h,+}}-\textrm{def}_{y^{h,-}}\big)\,.
\end{equation}
We start by proving that the $H^2$ norm of $\bar v_h$ in an annulus $A_{r,R}(0)\coloneqq B_R(0)\setminus \overline B_r(0)$ with fixed radii $0<r<R$ vanishes as $h\to 0$.
\begin{lemma}\label{lemma:edgetrue}
For every $0<r<R$ there exists a constant $C(r,R)$ such that
\begin{equation}\label{f:edgetrue}
\lim_{h\to 0}\frac{1}{h^2}\|\bar v_h\|^2_{H^2(A_{r,R}(0))}=C(r,R)s^2\,.
\end{equation}
\end{lemma}
\begin{proof}
Since $\bar v_0\equiv 0$ in $A_{r,R}(0)$\,, for every $x\in A_{r,R}(0)$\,, we have that
\begin{equation}\label{vbarprime}
\lim_{h\to 0}\frac{\bar v_h(x)}{h}=\frac{\mathrm{d}}{\mathrm{d} h}{\Big|_{h=0}}\bar v_h(x)=\frac{E}{1-\nu^2}\frac{s}{8\pi} ( x_1\log|x|^2+x_1)\eqqcolon \bar v'(x)\,.
\end{equation}
Therefore, by the Dominated Convergence Theorem, in order to prove \eqref{f:edgetrue}, it is enough to show that
\begin{equation}\label{newf:edgetrue}
\|\bar v'\|_{H^2(A_{r,R}(0))}=C(r,R)\,,
\end{equation}
for some $C(r,R)>0$\,.
By straightforward computations, we have that
\begin{equation}\label{perdislo}
\begin{split}
\partial_{x_1}\bar v'(x)=&\, \frac{E}{1-\nu^2}\frac{s}{8\pi} \bigg(\log|x|^2+1+\frac{2x_1^2}{|x|^2}\bigg)\,,\\
\partial_{x_2}\bar v'(x)=&\, \frac{E}{1-\nu^2}\frac{s}{8\pi} \frac{2x_1x_2}{|x|^2}\,,\\
\partial^2_{x_1^2}\bar v'(x)=&\, \frac{E}{1-\nu^2}\frac{s}{4\pi} \bigg(\frac{x_1}{|x|^2}+2\frac{x_1x_2^2}{|x|^4}\bigg)=\frac{E}{1-\nu^2}\frac{s}{4\pi}\frac{1}{|x|^4}(x_1^3+3x_1x_2^2)\,,\\
\partial^2_{x_2^2}\bar v'(x)=&\, \frac{E}{1-\nu^2}\frac{s}{4\pi}\frac{1}{|x|^4}(x_1^3-x_1x_2^2)\,,\\
\partial^2_{x_1\,x_2}\bar v'(x)=&\, \frac{E}{1-\nu^2}\frac{s}{4\pi}\frac{1}{|x|^4}(x_2^3-x_1^2x_2)\,.
\end{split}
\end{equation}
Therefore, by \eqref{vbarprime} and \eqref{perdislo} we deduce that
\begin{equation*}
\begin{aligned}
|\bar v'(x)|^2=&\frac{E^2}{(1-\nu^2)^2}\frac{s^2}{64\pi^2}x_1^2(\log|x|^2+1)^2\,,\\
|\nabla \bar v'(x)|^2=&\frac{E^2}{(1-\nu^2)^2}\frac{s^2}{64\pi^2}\bigg(\Big(\log|x|^2+1\Big)^2+4\log|x|^2\frac{x_1^2}{|x|^2}+8\frac{x_1^2}{|x|^2}\bigg)\,,\\
|\nabla^2\bar v'(x)|^2=&\frac{E^2}{(1-\nu^2)^2}\frac{s^2}{8\pi^2}\frac{1}{|x|^2}\,,
\end{aligned}
\end{equation*}
which, upon integration over $A_{r,R}(0)$, yields \eqref{newf:edgetrue} and, in turn, \eqref{f:edgetrue}.
\end{proof}
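The pointwise formulas \eqref{vbarprime} and the Hessian norm displayed after \eqref{perdislo} can also be checked symbolically; the closed form of $\bar v$ used below is an assumption, chosen so as to match \eqref{vbarprime}:

```python
import sympy as sp

x1, x2, h, s, E, nu = sp.symbols('x1 x2 h s E nu', positive=True)
K = E / (1 - nu**2) / (16 * sp.pi)

def vbar(a, b):  # assumed closed form: E/(1-nu^2) |x|^2 log|x|^2 / (16 pi)
    r2 = a**2 + b**2
    return K * r2 * sp.log(r2)

# derivative of \bar v_h at h = 0, cf. (vbarprime)
vbarh = -s * (vbar(x1 - h/2, x2) - vbar(x1 + h/2, x2))
vprime = sp.diff(vbarh, h).subs(h, 0)
c = E / (1 - nu**2) * s / (8 * sp.pi)
r2 = x1**2 + x2**2
assert sp.simplify(vprime - c * (x1 * sp.log(r2) + x1)) == 0

# Hessian of \bar v' and its squared norm: E^2/(1-nu^2)^2 * s^2/(8 pi^2 |x|^2)
vpr = c * (x1 * sp.log(r2) + x1)
H11, H22 = sp.diff(vpr, x1, 2), sp.diff(vpr, x2, 2)
H12 = sp.diff(vpr, x1, 1, x2, 1)
hess2 = H11**2 + H22**2 + 2 * H12**2
assert sp.simplify(hess2 - E**2 / (1 - nu**2)**2 * s**2 / (8 * sp.pi**2) / r2) == 0
```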
The next lemma is devoted to the asymptotic behavior of the elastic energy of $\bar{v}_h$ as $h\to 0$\,.
Its proof is contained in Appendix \ref{appendixprooflemma}.
\begin{lemma}\label{lemma:energyvbarh}
For every $R>0$
\begin{equation}\label{energyvbarh}
\lim_{h\to 0}\frac{1}{h^2\log \frac{R}{h}}\mathcal{G}(\bar v_h;B_R(0))=\frac{E}{1-\nu^2}\frac{s^2}{8\pi}\,.
\end{equation}
\end{lemma}
The next proposition shows that the same behavior in \eqref{energyvbarh} persists when replacing $\bar v_h$ with the minimizer $v_h$ of $\mathcal{I}^{\theta_h}(\cdot;B_R(0))$ in $H^2_0(B_R(0))$\,, for $\theta_h$ given by \eqref{thetah}.
\begin{proposition}\label{p:3.3}
For every $0<h<R$\,, let
$v_h$ be the minimizer of $\mathcal{I}^{\theta_h}(\cdot;B_R(0))$ in $H^2_0(B_R(0))$\,.
Then,
\begin{equation}\label{plim}
\lim_{h\to 0}\frac{1}{h^2|\log h|}\mathcal{G}(v_h;B_R(0))=\frac{E}{1-\nu^2}\frac{s^2}{8\pi}
\end{equation}
and
\begin{equation}\label{slim}
\lim_{h\to 0}\frac{1}{h^2|\log h|}\mathcal{I}^{\theta_h}(v_h;B_R(0))=-\frac{E}{1-\nu^2}\frac{s^2}{8\pi}\,.
\end{equation}
\end{proposition}
\begin{proof}
We start by noticing that, for every $0<h<R$\,, the minimizer $v_h$ of $\mathcal{I}^{\theta_h}(\cdot;B_R(0))$ in $H^2_0(B_R(0))$ is unique by Lemma~\ref{propItheta}.
Let $w_h\in H^2(B_R(0))$ be defined by the formula $w_h\coloneqq v_h- \bar v_h\res{B_R(0)}$\,,
where $\bar v_h$ is defined in \eqref{defvbarh}.
Then, by \eqref{bilaplvbarh}, we have that $w_h$ is the unique solution to
\begin{equation}\label{cauchyw}
\begin{cases}
\Delta^2 w=0&\text{in $B_R(0)$}\\
w=-\bar v_h&\text{on $\partial B_R(0)$}\\
\partial_n w=-\partial_n \bar v_h&\text{on $\partial B_R(0)$\,.}
\end{cases}
\end{equation}
By \cite[Theorem 2.16]{Gazzola09}, we have that there exists a constant $C=C(R)>0$ such that
\begin{equation}\label{estiesti}
\|w_h\|_{H^2(B_R(0))}\le C\|\bar v_h\|_{C^2(\partial{B_R}(0))}\le C\|\bar v_h\|_{H^2(A_{r,R}(0))}\,,
\end{equation}
where $0<r<R$ is fixed.
By \eqref{estiesti} and Lemma \ref{lemma:edgetrue} for $h$ small enough we get
\begin{equation}\label{estiwh}
\|w_h\|^2_{H^2(B_R(0))}\le C\|\bar v_h\|^2_{H^2(A_{r,R}(0))}\le C(r,R) s^2h^2\,,
\end{equation}
which, together with Lemma \ref{lemma:energyvbarh}, recalling the definition of $\langle \cdot,\cdot\rangle_{\mathcal{G}_{B_R(0)}}$ in \eqref{semiprod}, yields
\begin{equation*}
\begin{aligned}
\lim_{h\to 0}\frac{1}{h^2\log \frac{R}{h}}\mathcal{G}(v_h; B_R(0))=&\lim_{h\to 0}\frac{1}{h^2\log \frac{R}{h}}\mathcal{G}(\bar v_h;B_R(0))+\lim_{h\to 0}\frac{1}{h^2\log \frac{R}{h}}\mathcal{G}(w_h; B_R(0))\\
&\qquad+2\lim_{h\to 0}\frac{1}{h^2\log \frac{R}{h}}\langle \bar v_h,w_h\rangle_{\mathcal{G}_{B_R(0)}}
=\frac{E}{1-\nu^2}\frac{s^2}{8\pi}\,,
\end{aligned}
\end{equation*}
i.e., \eqref{plim}. Finally, since
\begin{equation*}
\langle \theta_h,v_h\rangle=\langle \theta_h,\bar v_h\rangle+\langle \theta_h,w_h\rangle=-\frac{E}{1-\nu^2}\frac{s^2}{4\pi}h^2|\log h|+w_h(y^{h,+})-w_h(y^{h,-})\,,
\end{equation*}
using that $w_h\in C^\infty(B_R(0))$ and \eqref{estiwh}, we get
\begin{equation*}
\lim_{h\to 0}\frac{1}{h^2\log\frac{R}{h}}\langle \theta_h,v_h\rangle=-\frac{E}{1-\nu^2}\frac{s^2}{4\pi}+s\lim_{h\to 0}\frac{1}{h|\log h|}\partial_{x_1}w_h(0)=-\frac{E}{1-\nu^2}\frac{s^2}{4\pi}\,,
\end{equation*}
which, added to \eqref{plim}, yields \eqref{slim}.
\end{proof}
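The value of $\langle\theta_h,\bar v_h\rangle$ used in the last display of the proof can be recovered from the assumed closed form $\bar v(x)=\frac{E}{1-\nu^2}\frac{|x|^2\log|x|^2}{16\pi}$ of the fundamental solution (note $|\log h|=-\log h$ for $h<1$):

```python
import sympy as sp

h, s, E, nu = sp.symbols('h s E nu', positive=True)
K = E / (1 - nu**2) / (16 * sp.pi)

def vbar(r):        # radial form of the assumed fundamental solution
    return K * r**2 * sp.log(r**2)

# \bar v_h at the two poles; \bar v(0) = 0 is inserted by hand (r^2 log r^2 -> 0)
vbarh_plus  = -s * (0 - vbar(h))   # \bar v_h(y^{h,+}) = -s (vbar(0) - vbar(|y^+ - y^-|))
vbarh_minus = -s * (vbar(h) - 0)   # \bar v_h(y^{h,-})
pairing = s * (vbarh_plus - vbarh_minus)   # <theta_h, \bar v_h>

# expected: -E/(1-nu^2) s^2/(4 pi) h^2 |log h|, with |log h| = -log h for h < 1
target = -E / (1 - nu**2) * s**2 / (4 * sp.pi) * h**2 * (-sp.log(h))
assert sp.simplify(sp.expand_log(pairing - target)) == 0
```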
\subsection{Core-radius approach for a dipole of disclinations}\label{sec:hBV}
We discuss the convergence of a wedge disclination dipole to a planar edge dislocation.
We recall that the kinematic equivalence of a dipole of wedge disclinations with an edge dislocation was first pointed out in \cite{Eshelby66} with a geometric construction in a continuum (see
\cite{YL2010} for a construction on the hexagonal lattice).
Let $s>0$\,, $R>0$\,, $h\in(0,R)$, and let $\theta_h\coloneqq s\textrm{def}_{(\frac h 2;0)}-s\textrm{def}_{(-\frac h 2;0)}$\,.
Moreover, let $v_h\in H^2(B_R(0))$ satisfy
\begin{equation}\label{risca}
\begin{cases}
\frac{1-\nu^2}{E}\Delta^2 v_h=-\theta_h&\textrm{in }B_R(0)\\
v_h=\partial_nv_h=0&\textrm{on }\partial B_R(0)\,.
\end{cases}
\end{equation}
Then, since $\frac{\theta_h}h\to -s\partial_{x_1}\textrm{def}_{0}$ as $h\to 0$\,, we expect that, formally, $\frac{v_h}{h}\to v$, where $v$ satisfies
\begin{equation}\label{riscaaa}
\begin{cases}
\frac{1-\nu^2}{E}\Delta^2 v= s\partial_{x_1}\textrm{def}_{0}&\textrm{in }B_R(0)\\
v=\partial_nv=0&\textrm{on }\partial B_R(0)\,,
\end{cases}
\end{equation}
namely, $v$ is the Airy function associated with the elastic stress field of an edge dislocation centered at the origin and with Burgers vector $b=se_2$\,, see \eqref{incfinal2}.
Notice that the resulting Burgers vector is orthogonal to the direction of the disclination dipole $d$ (directed from the negative to the positive charge); more precisely, we can write $\frac{b}{|b|}=\frac{d^\perp}{|d|}$ (see \cite{Eshelby66} and also \cite[formula (7.17)]{dW3} and \cite[formula (7)]{ZA2018}).
The convergence of the right-hand side of \eqref{risca} to the right-hand side of \eqref{riscaaa} represents the kinematic equivalence between an edge dislocation and a wedge disclination dipole, obtained in the limit as the dipole distance~$h$ tends to zero.
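The formal convergence $\frac{\theta_h}{h}\to -s\,\partial_{x_1}\textrm{def}_0$ can be sanity-checked on a concrete test function (here $\varphi(t)=\sin t\,e^{t}$, an arbitrary smooth choice restricted to the $x_1$-axis): the pairing $\frac1h\langle\theta_h,\varphi\rangle=\frac sh\big(\varphi(\frac h2)-\varphi(-\frac h2)\big)$ tends to $s\,\varphi'(0)=\langle -s\,\partial_{x_1}\textrm{def}_0,\varphi\rangle$.

```python
import sympy as sp

t, h, s = sp.symbols('t h s', positive=True)
phi = sp.sin(t) * sp.exp(t)   # sample smooth test function on the x1-axis

# <theta_h, phi>/h = s (phi(h/2) - phi(-h/2))/h
lhs = sp.limit(s * (phi.subs(t, h/2) - phi.subs(t, -h/2)) / h, h, 0)
rhs = s * sp.diff(phi, t).subs(t, 0)   # pairing of -s d_{x1} def_0 with phi
assert sp.simplify(lhs - rhs) == 0
```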
We now investigate the energetic equivalence of these defects by rigorously analyzing the convergence of the solutions of \eqref{risca} to those of \eqref{riscaaa}.
Since this analysis entails singular energies, we introduce regularized functionals parameterized by the core radius $0<\varepsilon<R$.
To this purpose, we define
\begin{equation}\label{nuovoA1}
\begin{aligned}
\mathscr{B}_{\ep,R}\coloneqq\big\{w\in H^2_0(B_R(0)): \text{$w=a$ in $B_\ep(0)$ for some affine function~$a$}\big\}
\end{aligned}
\end{equation}
and, recalling \eqref{defI}, we introduce, for $h<\ep$\,, the functional
$\widetilde{\mathcal{J}}^s_{h,\ep}\colon \mathscr{B}_{\ep,R}\to\mathbb{R}$ defined by
\begin{equation*}
\widetilde{\mathcal{J}}^s_{h,\ep}(w^h)\coloneqq\mathcal{G}(w^h;B_R(0))+\frac{s}{2\pi(\ep-h)}\int_{\partial B_{\ep-h}(0)}\bigg[w^h\Big(x+\frac h 2e_1\Big)-w^h\Big(x-\frac h 2 e_1\Big)\bigg]\,\mathrm{d}\mathcal H^1(x)\,,
\end{equation*}
associated with a pair of disclinations of opposite charges $\pm s$
placed at $\pm(\frac h 2,0)$, respectively.
We identify the relevant rescaling for the Airy stress function $w^h$\,, parametrized by the dipole distance~$h$\,,
and corresponding to the energy regime of interest.
We stress that the energy scalings are dictated by the scaling of $w^h$ and not by \emph{a priori} assumptions.
Consequently, we assume $w^h=hw$ and write
\begin{equation}\label{2206191431}
\widetilde{\mathcal{J}}^s_{h,\ep}(hw)=\mathcal{G}(hw;B_R(0))+\frac{s}{2\pi(\ep-h)}\int_{\partial B_{\ep-h}(0)}\bigg[hw\Big(x+\frac h 2e_1\Big)-hw\Big(x-\frac h 2 e_1\Big)\bigg]\,\mathrm{d}\mathcal H^1(x)\,.
\end{equation}
It follows that the regularized energy of a disclination dipole of finite charge $s$ is of order $h^2$\,.
In order to isolate the first non-zero contribution
in the limit as $h\to 0$, we divide \eqref{2206191431} by~$h^2$
and we define $\mathcal{J}^s_{h,\ep}\colon \mathscr{B}_{\ep,R}\to\mathbb{R}$ by
\begin{equation}\label{defJ}
\begin{aligned}
\mathcal{J}^s_{h,\ep}(w)\coloneqq&\, \frac{1}{h^2}\widetilde{\mathcal{J}}^s_{h,\ep}(hw)\\
=&\, \mathcal{G}(w;B_R(0))+\frac{s}{2\pi(\ep-h)}\int_{\partial B_{\ep-h}(0)}\frac{w(x+\frac h 2e_1)-w(x-\frac h 2 e_1)}{h}\,\mathrm{d}\mathcal H^1(x)\,.
\end{aligned}
\end{equation}
We show that the minimizers of $\mathcal{J}^s_{h,\ep}$ in $\mathscr{B}_{\ep,R}$ converge, as $h\to 0$\,, to the minimizers in $\mathscr{B}_{\ep,R}$ of the functional $\mathcal{J}^s_{0,\ep}\colon\mathscr{B}_{\ep,R}\to\mathbb{R}$ defined by
\begin{equation}\label{defJ0}
\mathcal{J}^s_{0,\ep}(w) \coloneqq\mathcal{G}(w;B_{R}(0))+\frac{s}{2\pi\ep}\int_{\partial B_\ep(0)}\partial_{x_1}w\,\mathrm{d}\mathcal H^1\,.
\end{equation}
Notice that, by the very definition of $\mathscr{B}_{\ep,R}$ in \eqref{nuovoA1},
\begin{equation}\label{defJ0eff}
\mathcal{J}^s_{0,\ep}(w) = \mathcal{G}(w;A_{\ep,R}(0))+\frac{s}{2\pi\ep}\int_{\partial B_\ep(0)}\partial_{x_1}w\,\mathrm{d}\mathcal H^1\,.
\end{equation}
We start by showing existence and uniqueness of the minimizers of $\mathcal{J}^s_{h,\ep}$ and $\mathcal{J}^s_{0,\ep}$ in $\mathscr{B}_{\ep,R}$\,.
\begin{lemma}\label{existmin}
Let $s\in\mathbb{R}\setminus\{0\}$\,.
For every $0\le h <\ep<R$ there exists a unique minimizer of $\mathcal{J}^s_{h,\ep}$ in $\mathscr{B}_{\ep,R}$\,.
\end{lemma}
\begin{proof}
The proof relies on the direct method in the Calculus of Variations. We preliminarily notice that the uniqueness of the minimizers follows from the strict convexity of $\mathcal{J}^s_{h,\ep}$ for $h\ge 0$ (see \eqref{strictconv})\,.
Let $\{W_{h,\ep,j}\}_{j\in\mathbb{N}}$ be a minimizing sequence for $\mathcal{J}^s_{h,\ep}$ in $\mathscr{B}_{\ep,R}$\,;
for $h>0$,
since $W_{h,\ep,j}$ is affine in $B_\ep(0)$ for any $j\in\mathbb{N}$\,, for any $x\in\partial B_{\ep-h}(0)$ we have that
\begin{equation*}
\begin{aligned}
\bigg|\frac{W_{h,\ep,j}\big(x+\frac{h}{2}e_1\big)-W_{h,\ep,j}\big(x-\frac{h}{2}e_1\big)}{h}\bigg|=&\,|\partial_{x_1}W_{h,\ep,j}(x)|\le \|\partial_{x_1}W_{h,\ep,j}\|_{L^\infty(B_\ep(0))}\\
\le&\, \frac1{\sqrt{\pi}\ep}\|W_{h,\ep,j}\|_{H^2(B_R(0))}\,,
\end{aligned}
\end{equation*}
and we notice that the last inequality also holds true for $h=0$\,.
Hence, since the zero function $w= 0$ belongs to $\mathscr{B}_{\ep,R}$\,, by using Friedrichs's inequality in $H^2_0(B_R(0))$\,, we get, for $j$ large enough,
\begin{equation}\label{proofi}
\begin{aligned}
0= \mathcal{J}^s_{h,\ep}(0)\ge&\, \mathcal{J}^s_{h,\ep}(W_{h,\ep,j}) \\
\ge&\, \frac{1}{2}\frac{1-\nu^2}{E}\min\{1-2\nu,1\}\|\nabla^2W_{h,\ep,j} \|^2_{L^2(B_R(0);\mathbb{R}^{2\times 2})}-\frac{|s|}{\sqrt{\pi}\ep}\|W_{h,\ep,j}\|_{H^2(B_R(0))}\\
\ge&\, C\|W_{h,\ep,j}\|^2_{H^2(B_R(0))}-\frac{|s|}{\sqrt{\pi}\ep}\|W_{h,\ep,j}\|_{H^2(B_R(0))}\,,
\end{aligned}
\end{equation}
for some constant $C>0$ depending only on $R$ (as well as on $E$ and $\nu$)\,.
By \eqref{proofi}, we deduce that $\|W_{h,\ep,j}\|^2_{H^2(B_R(0))}$ is uniformly bounded.
It follows that, up to a subsequence, $W_{h,\ep,j}\weakly W_{h,\ep}$ (as $j\to \infty$) in $H^2(B_R(0))$ for some function $W_{h,\ep}\in H^2_0(B_R(0))$ that is affine in $B_{\ep}(0)$\,.
By the lower semicontinuity of $\mathcal{J}^s_{h,\ep}$ with respect to the weak $H^2$-convergence,
we get that $W_{h,\ep}$ is a minimizer of $\mathcal{J}^s_{h,\ep}$ in $\mathscr{B}_{\ep,R}$\,.
\end{proof}
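The key observation in the proof, namely that for a function affine in $B_\ep(0)$ the finite-difference term of \eqref{defJ} collapses to the coefficient of $x_1$, can be sketched as:

```python
import sympy as sp

x1, x2, h, c0, c1, c2 = sp.symbols('x1 x2 h c0 c1 c2')
a = c0 + c1 * x1 + c2 * x2   # affine function, as in the definition of B_{ep,R}

# difference quotient of (defJ): exactly the x1-slope, independently of x and h
dq = (a.subs(x1, x1 + h/2) - a.subs(x1, x1 - h/2)) / h
assert sp.simplify(dq - c1) == 0
```

On $\partial B_{\ep-h}(0)$ the integrand therefore equals $\partial_{x_1}W_{h,\ep,j}\equiv c_1$, which is then controlled through $\pi\ep^2 c_1^2=\|\partial_{x_1}W_{h,\ep,j}\|^2_{L^2(B_\ep(0))}\le\|W_{h,\ep,j}\|^2_{H^2(B_R(0))}$.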
We are now in a position to prove the convergence of the minimizers and of the minimal values of $\mathcal{J}^s_{h,\ep}$ to $\mathcal{J}^s_{0,\ep}$ as $h\to 0$\,.
\begin{proposition}\label{convhtozero}
Let $s\in\mathbb{R}\setminus\{0\}$\,.
Let $0<\ep<R$ and, for every $0<h<\ep$\,, let $W^s_{h,\ep}$ be the minimizer of $\mathcal{J}^s_{h,\ep}$ in $\mathscr{B}_{\ep,R}$\,. Then, as $h\to 0$\,, $W^s_{h,\ep}\to W^s_{0,\ep}$ strongly in $H^2(B_R(0))$\,, where $W^s_{0,\ep}$ is the minimizer of $\mathcal{J}^s_{0,\ep}$ in $\mathscr{B}_{\ep,R}$\,.
Moreover, $\mathcal{J}^s_{h,\ep}(W^s_{h,\ep})\to \mathcal{J}^s_{0,\ep}(W^s_{0,\ep})$ as $h\to 0$\,.
\end{proposition}
\begin{proof}
For every $0<h<\ep$\,, let $a^s_{h,\ep}(x)\coloneqq c^s_{h,\ep,0}+c_{h,\ep,1}^sx_1+c_{h,\ep,2}^sx_2$\,, with $c_{h,\ep,0}^s,c_{h,\ep,1}^s, c_{h,\ep,2}^s\in\mathbb{R}$\,, be the affine function such that $W^s_{h,\ep}=a^s_{h,\ep}$ in $B_{\ep}(0)$\,.
Then, arguing as in \eqref{proofi}, we get
\begin{equation*}
0\ge \mathcal{J}^s_{h,\ep}(W^s_{h,\ep})\ge C\|W^s_{h,\ep}\|^2_{H^2(B_R(0))}-\frac{s}{\sqrt{\pi}\ep}\|W^s_{h,\ep}\|_{H^2(B_R(0))}\,.
\end{equation*}
Therefore, up to a (not relabeled) subsequence, $W^s_{h,\ep}\weakly \bar W^s_{0,\ep}$ in $H^2(B_R(0))$ for some $\bar W^s_{0,\ep}\in H^2_0(B_R(0))$\,.
Moreover, since the functions $W^s_{h,\ep}$ are affine in $\overline{B}_\ep(0)$\,, also $\bar W^s_{0,\ep}$ is, and hence there exist $c^s_{0,\ep,0}, c^s_{0,\ep,1}, c^s_{0,\ep,2}\in\mathbb{R}$ such that $\bar W^s_{0,\ep}(x)=c_{0,\ep,0}^s+ c_{0,\ep,1}^sx_1+ c_{0,\ep,2}^sx_2$ for every $x\in\overline{B}_\ep(0)$\,. It follows that $\bar W^s_{0,\ep}\in\mathscr{B}_{\ep,R}$\,.
Now, since $W^s_{h,\ep}\to \bar W^s_{0,\ep}$ in $H^1(B_R(0))$\,, we get that
$c_{h,\ep,j}^s\to c_{0,\ep,j}^s$ as $h\to 0$\,, for every $j=0,1,2$\,,
which implies, in particular, that
\begin{equation}\label{termlin}
\begin{aligned}
&\, \lim_{h\to 0}\frac{1}{2\pi(\ep-h)}\int_{\partial B_{\ep-h}(0)} \frac{W^s_{h,\ep}(x+\frac h 2 e_1)-W^s_{h,\ep}(x-\frac h 2 e_1)}{h}\,\mathrm{d}\mathcal H^1(x)
=\lim_{h\to 0}c_{h,\ep,1}^s =c_{0,\ep,1}^s\\
= &\, \frac{1}{2\pi\ep}\int_{\partial B_\ep(0)} \partial_{x_1}\bar W^s_{0,\ep}\,\mathrm{d}\mathcal H^1\\
=&\, \lim_{h\to 0}\frac{1}{2\pi(\ep-h)}\int_{\partial B_{\ep-h}(0)} \!\!\!\! \frac{\bar W^s_{0,\ep}(x+\frac h 2 e_1)-\bar W^s_{0,\ep}(x-\frac h 2 e_1)}{h}\,\mathrm{d}\mathcal H^1(x)\,.
\end{aligned}
\end{equation}
Analogously,
\begin{equation}\label{termlin2}
\lim_{h\to 0}\frac{1}{2\pi(\ep-h)}\int_{\partial B_{\ep-h}(0)} \!\!\!\! \!\! \frac{W^s_{0,\ep}(x+\frac h 2 e_1)-W^s_{0,\ep}(x-\frac h 2 e_1)}{h}\,\mathrm{d}\mathcal H^1(x)=\frac{1}{2\pi\ep}\int_{\partial B_\ep(0)} \!\! \partial_{x_1}W^s_{0,\ep}\,\mathrm{d}\mathcal H^1\,.
\end{equation}
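Identities of the type \eqref{termlin2} can also be checked numerically: for a smooth function, the average over $\partial B_{\ep-h}(0)$ of the centered difference quotient in the $e_1$ direction approaches, as $h\to 0$\,, the average of $\partial_{x_1}$ over $\partial B_\ep(0)$. A minimal sketch (the test function, radius, and tolerance are illustrative choices, not fixed by the text):

```python
import numpy as np

def avg_quotient(W, eps, h, n=4000):
    # average over the circle of radius eps - h of the centered difference
    # quotient of W in the e1 direction, as in the displayed limit
    t = np.linspace(0.0, 2.0*np.pi, n, endpoint=False)
    x, y = (eps - h)*np.cos(t), (eps - h)*np.sin(t)
    return np.mean((W(x + h/2.0, y) - W(x - h/2.0, y)) / h)

W  = lambda x, y: np.sin(x)*np.exp(y)     # a smooth test function (illustrative)
dW = lambda x, y: np.cos(x)*np.exp(y)     # its partial derivative in x1

eps = 0.3
t = np.linspace(0.0, 2.0*np.pi, 4000, endpoint=False)
target = np.mean(dW(eps*np.cos(t), eps*np.sin(t)))
err = abs(avg_quotient(W, eps, h=1e-4) - target)
```

Here the error is of order $h$: the quotient itself deviates from $\partial_{x_1}W$ by $\mathrm{O}(h^2)$, and the circle of integration deviates from $\partial B_\ep(0)$ by $h$.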
By \eqref{termlin} and \eqref{termlin2}, using the lower semicontinuity of $\mathcal{G}$\,, and taking $W^s_{0,\ep}$ as a competitor for $\mathcal{J}^s_{h,\ep}$ in $\mathscr{B}_{\ep,R}$\,, we get
\begin{equation*}
\mathcal{J}^s_{0,\ep}(W^s_{0,\ep})\le \mathcal{J}^s_{0,\ep}(\bar W^s_{0,\ep})\le\liminf_{h\to 0}\mathcal{J}^s_{h,\ep}(W^s_{h,\ep})\le\lim_{h\to 0}\mathcal{J}^s_{h,\ep}(W^s_{0,\ep})=\mathcal{J}^s_{0,\ep}(W^s_{0,\ep})\,,
\end{equation*}
so that all the inequalities above are in fact equalities. In particular,
\begin{equation}\label{20220427}
\mathcal{J}^s_{0,\ep}(W^s_{0,\ep})=\lim_{h\to 0}\mathcal{J}^s_{h,\ep}(W^s_{h,\ep})
\end{equation}
and consequently
$\bar W^s_{0,\ep}$ is a minimizer of $\mathcal{J}^s_{0,\ep}$ in $\mathscr{B}_{\ep,R}$\,.
In view of Lemma \ref{existmin}, we deduce that $\bar W^s_{0,\ep}=W^s_{0,\ep}$\,, which, together with \eqref{termlin} and \eqref{20220427}, implies that $\mathcal{G}(W^s_{h,\ep};B_R(0))\to \mathcal{G}(W^s_{0,\ep};B_R(0))$ as $h\to 0$\,. In view of Remark \ref{equinorm}, this implies that $W^s_{h,\ep}\to W^s_{0,\ep}$ strongly in $H^2(B_R(0))$ as $h\to 0$.
Finally, by the Urysohn property, we get that the whole family $\{W^s_{h,\ep}\}_h$ converges to $W^s_{0,\ep}$ as $h\to 0$\,.
\end{proof}
We conclude this section by determining the minimizer $W^s_{0,\ep}$ of $\mathcal{J}^s_{0,\ep}$ in $\mathscr{B}_{\ep,R}$\,.
\begin{lemma}\label{lm:mindis}
Let $s\in\mathbb{R}\setminus\{0\}$\,. For every $0<\ep<R$ the function $W^s_{0,\ep}:B_R(0)\to\mathbb{R}$ defined by
\begin{equation}\label{2201181926}
W_{0,\ep}^s(x)\coloneqq \begin{cases}
\displaystyle \frac{s}{16\pi}\frac{E}{1-\nu^2}\Big(\alpha_\ep+\beta_\ep\frac{1}{|x|^2}+\gamma_\ep|x|^2+2\log|x|^2\Big)x_1&\text{if $x\in A_{\ep,R}(0)$}\\[2mm]
\displaystyle \frac{s}{16\pi}\frac{E}{1-\nu^2}\Big(\alpha_\ep+\frac{\beta_\ep}{\ep^2}+\ep^2\gamma_\ep+4\log\ep\Big)x_1&\text{if $x\in B_\ep(0)$\,,}
\end{cases}
\end{equation}
with
\begin{equation}\label{abc}
\alpha_\ep\coloneqq2\frac{R^2-\ep^2}{R^2+\ep^2}-2\log R^2\,,\quad \beta_\ep\coloneqq 2\ep^2\frac{R^2}{R^2+\ep^2}\,,\quad \gamma_\ep\coloneqq-\frac{2}{R^2+\ep^2}\,,
\end{equation}
is the unique minimizer in $\mathscr{B}_{\ep,R}$ of the functional $\mathcal{J}^s_{0,\ep}$ defined in \eqref{defJ0}.
Moreover,
\begin{equation}\label{valmin}
\mathcal{J}_{0,\ep}^s(W^s_{0,\ep})=-\frac{s^2}{8\pi}\frac{E}{1-\nu^2}\Big(\log \frac{R}{\ep}-\frac{R^2-\ep^2}{R^2+\ep^2}\Big)\,.
\end{equation}
\end{lemma}
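As an independent sanity check (not part of the proof), one can verify symbolically that the annulus expression in \eqref{2201181926} is biharmonic and that its radial profile satisfies the clamped conditions at $r=R$ (which give $W=\partial_n W=0$ on $\partial B_R(0)$). A sympy sketch, with the constant prefactor $\frac{s}{16\pi}\frac{E}{1-\nu^2}$ dropped since it is irrelevant for both checks:

```python
import sympy as sp

x, y, r, eps, R = sp.symbols('x y r epsilon R', positive=True)

# coefficients alpha_eps, beta_eps, gamma_eps from (abc)
alpha = 2*(R**2 - eps**2)/(R**2 + eps**2) - 2*sp.log(R**2)
beta  = 2*eps**2*R**2/(R**2 + eps**2)
gamma = -2/(R**2 + eps**2)

# annulus expression of W^s_{0,eps}, prefactor dropped
r2 = x**2 + y**2
W = (alpha + beta/r2 + gamma*r2 + 2*sp.log(r2))*x

lap = lambda u: sp.diff(u, x, 2) + sp.diff(u, y, 2)
bilap = sp.simplify(lap(lap(W)))            # should vanish in the annulus

# radial profile f(r); f(R) = f'(R) = 0 encodes the clamped outer boundary
f = alpha + beta/r**2 + gamma*r**2 + 2*sp.log(r**2)
fR  = sp.simplify(f.subs(r, R))
dfR = sp.simplify(sp.diff(f, r).subs(r, R))
```

Each of the four radial building blocks $x_1$\,, $x_1/|x|^2$\,, $|x|^2x_1$\,, $x_1\log|x|^2$ is a classical biharmonic solution, so `bilap` reduces to zero term by term.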
The proof of Lemma \ref{lm:mindis} is postponed to Appendix \ref{prooflm:mindis}, where we also state Corollary~\ref{insec4}, which will be used in Section~\ref{sc:four}.
\begin{remark}\label{2202211829}
\rm{
Let $b\in\mathbb{R}^2\setminus\{0\}$\,. For any $0<h<\ep<R$ let $\mathcal{J}_{h,\ep}^b\colon \Bnew_{\ep,R}\to \mathbb{R}$ be the functional defined as
\begin{equation*}
\mathcal{J}_{h,\ep}^b(w)\coloneqq\mathcal{G}(w;B_R(0))+\frac{|b|}{2\pi(\ep-h)}\int_{\partial B_{\ep-h}(0)}\frac{w\big(x+\frac h 2\frac{\Pi(b)}{|b|}\big)-w\big(x-\frac h 2 \frac{\Pi(b)}{|b|}\big)}{h}\,\mathrm{d}\mathcal H^1(x)\,,
\end{equation*}
where $\Pi(b)$ denotes the $\frac\pi 2$ clockwise rotation of the vector $b$\,, \emph{i.e.},
\begin{equation}\label{pigrecobugualemenobortogonale}
\Pi(b)=-b^\perp\,.
\end{equation}
By arguing verbatim as in the proof of Proposition~\ref{convhtozero}, we have that, as $h\to 0$\,, the unique minimizer of $\mathcal{J}_{h,\ep}^b$ in $\Bnew_{\ep,R}$ converges strongly in $H^2(B_R(0))$ to the unique minimizer in $\Bnew_{\ep,R}$ of the functional $\mathcal{J}_{0,\ep}^b$ defined by
\begin{equation*}
\mathcal{J}_{0,\ep}^b(w)\coloneqq\mathcal{G}(w;B_R(0))+\frac{1}{2\pi\ep}\int_{\partial B_{\ep}(0)}
\langle \nabla w, \Pi(b)\rangle\,\mathrm{d}\mathcal H^1\,.
\end{equation*}
Notice that the minimizer of $\mathcal{J}_{0,\ep}^b$ is given by
\begin{equation}\label{vudoppiob}
W_{0,\ep}^{b}(x)\coloneqq |b|W_{0,\ep}^{|b|}\bigg(\Big\langle\frac{\Pi(b)}{|b|},x\Big\rangle,\Big\langle \frac{b}{|b|},x\Big\rangle\bigg)\,,
\end{equation}
where the function $W_{0,\ep}^s$ is defined in Lemma \ref{lm:mindis}.
Furthermore, one can easily check that the same proof of Proposition~\ref{convhtozero} applies also to general domains $\Omega$ as well as to a general distribution of dipoles of wedge disclinations
\begin{equation}\label{thetahJ}
\theta_h\coloneqq\sum_{j=1}^J|b^j|\Big(\textrm{def}_{x^j+\frac{h}{2}\frac{\Pi(b^j)}{|b^j|}}-\textrm{def}_{x^j-\frac{h}{2}\frac{\Pi(b^j)}{|b^j|}}\Big)\in\mathscr{WD}(\Omega)\,,
\end{equation}
(with $b^j\in\mathbb{R}^2\setminus\{0\}$ and $\min_{\genfrac{}{}{0pt}{1}{j_1,j_2=1,\ldots,J}{j_1\neq j_2}}|x^{j_1}-x^{j_2}|,\min_{j=1,\ldots,J}\mathrm{dist}(x^j,\partial\Omega)>2\ep$) approximating the family of edge dislocations $\alpha\coloneqq\sum_{j=1}^Jb^j\textrm{def}_{x^j}\in\mathscr{ED}(\Omega)$\,.
In such a case, one can show that, as $h\to 0$\,, the unique minimizer $w_{h,\ep}^{\theta_h}$ of the functional
\begin{equation}\label{2202231647}
\mathcal{I}^{\theta_h}_{h,\ep}(w)\coloneqq \mathcal{G}(w;\Omega)+\sum_{j=1}^J\frac{|b^j|}{2\pi(\ep-h)}\int_{\partial B_{\ep-h}(x^j)}
\frac{w(x+\frac h 2\frac{\Pi(b^j)}{|b^j|})-w(x-\frac h 2 \frac{\Pi(b^j)}{|b^j|})}{h}\,\mathrm{d}\mathcal H^1(x)
\end{equation}
in the set
\begin{equation}\label{2202211820}
\begin{aligned}
\mathscr{B}^{\alpha}_{\ep,\Omega}\coloneqq\{w\in H^2_0(\Omega): \text{$w=a^j$ in $B_\ep(x^j)$ for some affine functions $a^j$\,, $j=1,\dots, J$}\}\,,
\end{aligned}
\end{equation}
converges strongly in $H^2(\Omega)$ to the unique minimizer $w_{0,\ep}^\alpha$ in $\mathscr{B}^{\alpha}_{\ep,\Omega}$ of the functional
\begin{equation}\label{energyI}
\begin{aligned}
\mathcal{I}^\alpha_{0,\ep}(w)\coloneqq&\, \mathcal{G}(w;{\Omega})+\sum_{j=1}^J\frac{1}{2\pi\ep}\int_{\partial B_{\ep}(x^j)}
\langle \nabla w, \Pi(b^j)\rangle\,\mathrm{d}\mathcal H^1\\
=&\,\mathcal{G}(w;\Omega_\ep(\alpha))+\sum_{j=1}^J\frac{1}{2\pi\ep}\int_{\partial B_{\ep}(x^j)}
\langle \nabla w, \Pi(b^j)\rangle\,\mathrm{d}\mathcal H^1\,,
\end{aligned}
\end{equation}
where $\Omega_\ep(\alpha)\coloneqq\Omega\setminus\bigcup_{j=1}^{J} \overline{B}_\ep(x^j)$\,.
}
\end{remark}
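The dipole construction in \eqref{thetahJ} places, for each $x^j$\,, two wedge disclinations of strength $\pm|b^j|$ at $x^j\pm\frac h2\frac{\Pi(b^j)}{|b^j|}$\,. A minimal numerical sketch of these positions, assuming the usual convention $b^\perp=(-b_2,b_1)$\,, so that \eqref{pigrecobugualemenobortogonale} reads $\Pi(b)=(b_2,-b_1)$:

```python
import numpy as np

def Pi(b):
    # clockwise pi/2 rotation, Pi(b) = -b_perp, assuming b_perp = (-b2, b1)
    return np.array([b[1], -b[0]])

def dipole(xj, bj, h):
    # positions of the +|b^j| and -|b^j| wedge disclinations in theta_h
    bj = np.asarray(bj, dtype=float)
    d = Pi(bj) / np.linalg.norm(bj)
    return np.asarray(xj) + 0.5*h*d, np.asarray(xj) - 0.5*h*d

# dipole approximating an edge dislocation at the origin with Burgers vector e1
p_plus, p_minus = dipole([0.0, 0.0], [1.0, 0.0], h=0.01)
```

Note that the dipole axis $\Pi(b^j)/|b^j|$ is orthogonal to the Burgers vector $b^j$\,, consistent with the directional derivative $\partial_{(b^j)^\perp/|b^j|}$ appearing in the limit problem.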
\section{Limits for dislocations}\label{sc:four}
In this section, we obtain the full asymptotic expansion in $\ep$ of the singular limit functional
$\mathcal{I}^\alpha_{0,\ep}$ introduced in \eqref{energyI}.
We first prove the convergence of the minimizers of $\mathcal{I}^\alpha_{0,\ep}$ in a suitable functional setting (see Theorem~\ref{2201181928}) and then, by showing that all terms of the expansion coincide with the corresponding terms of the renormalized energy of edge dislocations of \cite{CermelliLeoni06},
we finally deduce the asymptotic energetic equivalence of systems of disclination dipoles with the corresponding systems of edge dislocations.
Let $\alpha=\sum_{j=1}^{J}b^j\textrm{def}_{x^j}\in\mathscr{ED}(\Omega)$\,.
We consider the following minimum problem
\begin{equation}\label{2112192000}
\min_{w\in\mathscr{B}^{\alpha}_{\ep,\Omega}}\mathcal{I}_\ep^\alpha(w)\,,
\end{equation}
where $\mathcal{I}_\ep^\alpha(w)\coloneqq \mathcal{I}^\alpha_{0,\ep}(w)$ is the functional defined in \eqref{energyI} and $\mathscr{B}^{\alpha}_{\ep,\Omega}$ is defined in \eqref{2202211820}.
In order to study the asymptotic behavior of the minimizers and minima of $\mathcal{I}_\ep^\alpha$ as $\ep\to 0$\,, we first introduce some notation.
Fix $R>0$ such that $\overline{\Omega}\subset B_R(x^j)$ for every $j=1,\ldots,J$\,, and let $\ep>0$ be such that the (closed) balls $\overline{B}_\ep(x^j)$ are pairwise disjoint and contained in $\Omega$\,, i.e.,
\begin{equation}\label{distaminima}
\ep< D\coloneqq\min\bigg\{\frac12\min_{i\neq j}|x^{i}-x^{j}|\,,\,\min_{j=1,\ldots,J}\mathrm{dist}(x^{j},\partial\Omega)\bigg\}\,.
\end{equation}
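For a concrete configuration, the admissibility threshold $D$ of \eqref{distaminima} can be computed directly. A small sketch, assuming the distances of the $x^j$ to $\partial\Omega$ are supplied by hand (computing them in general would require a description of $\Omega$):

```python
import numpy as np

def threshold_D(points, dists_to_boundary):
    # D from (distaminima): half the minimal pairwise distance between the x^j,
    # compared with the minimal distance to the boundary; eps must satisfy eps < D
    pts = np.asarray(points, dtype=float)
    pair = min(np.linalg.norm(pts[i] - pts[k])
               for i in range(len(pts)) for k in range(i + 1, len(pts)))
    return min(0.5*pair, min(dists_to_boundary))

# two dislocation points in the unit square, boundary distances given by hand
D = threshold_D([[0.2, 0.5], [0.8, 0.5]], [0.2, 0.2])
```

For this configuration the pairwise term gives $0.3$ and the boundary term $0.2$\,, so $D=0.2$\,.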
We define the function $W_{\ep}^\alpha\colon\Omega_{\ep}(\alpha)\to\mathbb{R}$ by
\begin{equation}\label{20220223_1}
W_{\ep}^\alpha(x)\coloneqq\sum_{j=1}^J W_{\ep}^{j}(x)\,,\qquad\text{with}\qquad W_{\ep}^{j}(\cdot)\coloneqq W_{0,\ep}^{b^j}(\cdot-x^j)
\end{equation}
(see \eqref{vudoppiob})\,.
We highlight that the function $W^\alpha_\ep$ depends also on $R$ through the constants defined in \eqref{abc}.
Notice that any function $w\in \mathscr{B}^{\alpha}_{\ep,\Omega}$ can be decomposed as
\begin{equation}\label{2201181904}
w=W^{\alpha}_\ep+\widetilde w
\end{equation}
where $\widetilde w\in \widetilde{\mathscr{B}}^{\alpha}_{\ep,\Omega}$\,, with
\begin{equation}\label{tildeBdelta}
\begin{aligned}
\widetilde{\mathscr{B}}^{\alpha}_{\ep,\Omega}\coloneqq &\, \{\widetilde{w}\in H^2_0(\Omega)-W^{\alpha}_\ep: \text{$\widetilde{w}+W^\alpha_{\ep}=a^j$ in $B_\ep(x^j)$} \\
&\, \phantom{\{\widetilde{w}\in H^2_0(\Omega)-W^{\alpha}_\ep:\,} \text{for some affine functions $a^j$\,, $j=1,\dots, J$}\}\\
\equiv&\, {\mathscr{B}}^{\alpha}_{\ep,\Omega}-W^\alpha_\ep\,.
\end{aligned}
\end{equation}
Therefore, in view of the decomposition \eqref{2201181904}, for every $w\in\mathscr{B}_{\ep,\Omega}^\alpha$ we have
\begin{equation}\label{20220223_2}
\mathcal{I}_{\ep}^\alpha(w)=\mathcal{G}(W_{\ep}^\alpha;\Omega_\ep(\alpha))+\sum_{j=1}^J\frac{1}{2\pi\ep}\int_{\partial B_\ep(x^j)}\langle\nabla W_\ep^\alpha,\Pi(b^j)\rangle\,\mathrm{d}\mathcal H^1+\widetilde{\mathcal{I}}_{\ep}^\alpha(\widetilde{w})\,,
\end{equation}
where
\begin{equation}\label{defItilde}
\begin{split}
\widetilde{\mathcal I}^{\alpha}_\ep(\widetilde w)\coloneqq
\mathcal{G}(\widetilde{w};\Omega_{\ep}(\alpha))
+&\,\frac{1+\nu}{E} \sum_{j=1}^J \int_{\Omega_{\ep}(\alpha)}
\Big(\nabla^2W^j_{\ep}:\nabla^2\widetilde{w}-\nu \Delta W^j_{\ep}\Delta \widetilde{w}\Big)\,\mathrm{d} x\\
+&\,\sum_{j=1}^J\frac{1}{2\pi\ep}\int_{\partial B_{\ep}(x^j)}
\langle \nabla \widetilde{w}, \Pi(b^j)\rangle\,\mathrm{d}\mathcal H^1\,.
\end{split}
\end{equation}
Notice that the integration for the bulk term $\mathcal{G}$ above is performed on $\Omega_{\ep}(\alpha)$ and not on~$\Omega$\,, as the function $\widetilde{w}$ is not, in general, affine in $\bigcup_{j=1}^J B_{\ep}(x^j)$\,.
In view of \eqref{20220223_2}, as in \cite[Theorem 4.1]{CermelliLeoni06}, the minimum problem \eqref{2112192000} (for $w$) is equivalent to the following minimum problem (for $\widetilde w$)
\begin{equation}\label{2112192003}
\begin{aligned}
\min_{\widetilde w\in \widetilde{\mathscr{B}}^{\alpha}_{\ep,\Omega}}\widetilde{\mathcal I}^{\alpha}_\ep(\widetilde{w})\,.
\end{aligned}
\end{equation}
\begin{lemma}\label{20220222_1}
For every $\widetilde w\in\widetilde{\mathscr{B}}^{\alpha}_{\ep,\Omega}$ we have
\begin{equation}\label{20220222_2}
\begin{aligned}
\widetilde{\mathcal I}^{\alpha}_\ep(\widetilde w)=&\,
\mathcal{G}(\widetilde{w};\Omega_{\ep}(\alpha))
+\frac{1+\nu}{E}\sum_{j=1}^J\bigg(-(1-\nu)\int_{\partial\Omega}(\partial_{n}\Delta W_{\ep}^j)\widetilde{w} \,\mathrm{d}\mathcal H^1\\
&\,+\int_{\partial\Omega}\langle \nabla^2 W_{\ep}^j n,\nabla \widetilde{w} \rangle \,\mathrm{d}\mathcal H^1- \nu\int_{\partial\Omega}\Delta W_{\ep}^j \partial_{n}\widetilde{w} \,\mathrm{d}\mathcal H^1\bigg)\\
&\,+\sum_{j=1}^{J}\Bigg(\frac{1+\nu}{E}\sum_{i=1}^J\bigg((1-\nu)\int_{\partial B_\ep(x^i)}(\partial_{n}\Delta W_{\ep}^j)\widetilde{w} \,\mathrm{d}\mathcal H^1
- \int_{\partial B_{\ep}(x^i)}\langle \nabla^2 W_{\ep}^j n,\nabla \widetilde{w} \rangle \,\mathrm{d}\mathcal H^1\\
&\,\phantom{- \sum_{i=1}^J}\quad+\nu\int_{\partial B_{\ep}(x^i)}\Delta W_{\ep}^j \partial_{n} \widetilde{w} \,\mathrm{d}\mathcal H^1\bigg)+\frac{1}{2\pi\ep}\int_{\partial B_{\ep}(x^j)}
\langle \nabla \widetilde{w}, \Pi(b^j)\rangle\,\mathrm{d}\mathcal H^1\Bigg)\,.
\end{aligned}
\end{equation}
\end{lemma}
\begin{proof}
Let $\widetilde w\in \widetilde{\mathscr{B}}^{\alpha}_{\ep,\Omega}$ be fixed.
By the Gauss--Green Theorem, for every $j=1,\dots, J$ and for every $0<\ep<D$\,, we have
\begin{equation}\label{22011101622}
\begin{aligned}
\int_{\Omega_{\ep}(\alpha)}
\nabla^2W_{\ep}^j:\nabla^2\widetilde{w}\, \mathrm{d} x=&\,
-\int_{\partial\Omega}(\partial_{n}\Delta W_{\ep}^j)\widetilde{w} \,\mathrm{d}\mathcal H^1
+ \sum_{i=1}^J\int_{\partial B_{\ep}(x^i)}(\partial_{n}\Delta W_{\ep}^j)\widetilde{w} \,\mathrm{d}\mathcal H^1\\
&+ \int_{\partial\Omega}\langle \nabla^2 W_{\ep}^j n,\nabla \widetilde{w} \rangle \,\mathrm{d}\mathcal H^1
- \sum_{i=1}^J\int_{\partial B_{\ep}(x^i)}\langle \nabla^2 W_{\ep}^j n,\nabla \widetilde{w} \rangle \,\mathrm{d}\mathcal H^1\,,
\end{aligned}
\end{equation}
and
\begin{equation}\label{22011101624}
\begin{aligned}
\int_{\Omega_{\ep}(\alpha)}
\Delta W_{\ep}^j\Delta \widetilde{w}\, \mathrm{d} x=&\,
- \int_{\partial\Omega}(\partial_{n}\Delta W_{\ep}^j) \widetilde{w} \,\mathrm{d}\mathcal H^1
+ \sum_{i=1}^J\int_{\partial B_{\ep}(x^i) }(\partial_{n}\Delta W_{\ep}^j) \widetilde{w}\,\mathrm{d}\mathcal H^1\\
&+ \int_{\partial\Omega}\Delta W_{\ep}^j \partial_{n}\widetilde{w} \,\mathrm{d}\mathcal H^1
- \sum_{i=1}^J\int_{\partial B_{\ep}(x^i)}\Delta W_{\ep}^j \partial_{n} \widetilde{w} \,\mathrm{d}\mathcal H^1\,,
\end{aligned}
\end{equation}
where we have used that
$ \Delta^2W_{\ep}^j\equiv 0$ in $\Omega_{\ep}(\alpha)$ for every $j=1,\ldots,J$\,.
By \eqref{22011101622} and \eqref{22011101624} it follows that
\begin{equation*}
\begin{aligned}
&\,\int_{\Omega_{\ep}(\alpha)}
\Big( \nabla^2W^j_{\ep}:\nabla^2\widetilde{w}-\nu \Delta W^j_{\ep}\Delta \widetilde{w}\Big)\,\mathrm{d} x\\
=&\,-(1-\nu)\int_{\partial\Omega}(\partial_{n}\Delta W_{\ep}^j)\widetilde{w} \,\mathrm{d}\mathcal H^1+\int_{\partial\Omega}\langle \nabla^2 W_{\ep}^j n,\nabla \widetilde{w} \rangle \,\mathrm{d}\mathcal H^1-\nu \int_{\partial\Omega}\Delta W_{\ep}^j \partial_{n}\widetilde{w} \,\mathrm{d}\mathcal H^1\\
&\,+(1-\nu)\sum_{i=1}^J\int_{\partial B_\ep(x^i)}(\partial_{n}\Delta W_{\ep}^j)\widetilde{w} \,\mathrm{d}\mathcal H^1\\
&\,- \sum_{i=1}^J\int_{\partial B_{\ep}(x^i)}\langle \nabla^2 W_{\ep}^j n,\nabla \widetilde{w} \rangle \,\mathrm{d}\mathcal H^1+\nu\sum_{i=1}^J\int_{\partial B_{\ep}(x^i)}\Delta W_{\ep}^j \partial_{n} \widetilde{w} \,\mathrm{d}\mathcal H^1\,,
\end{aligned}
\end{equation*}
which, in view of the very definition of $\widetilde{\mathcal I}^\alpha_{\ep}$ in \eqref{defItilde}, implies \eqref{20220222_2}.
\end{proof}
\begin{remark}\label{maybeuseful}
\rm{Let $\alpha=\sum_{j=1}^J b^j\textrm{def}_{x^j}\in\mathscr{ED}(\Omega)$\,. For every $0<r<R$ and for every $j=1,\ldots,J$ we have that the plastic functions $W^j_{\ep}$ converge in $C^\infty(A_{r,R}(x^j))$, as $\ep\to0$\,, to the function~$W^j_0$ defined by
\begin{equation}\label{20220311}
\begin{aligned}
W^j_0(x)\coloneqq&\, \frac{|b^j|}{8\pi}\frac{E}{1-\nu^2}\Big((1-\log R^2)-\frac{|x|^2}{R^2}+\log|x|^2\Big)\Big\langle\frac{\Pi(b^j)}{|b^j|},x-x^j\Big\rangle\,.
\end{aligned}
\end{equation}
It follows that $W_\ep^\alpha\to \sum_{j=1}^JW^j_0\eqqcolon W_0^\alpha$ in $C^\infty(\Omega_{r}(\alpha))$ and hence in $H^2_{\mathrm{loc}}\big(\Omega\setminus\bigcup_{j=1}^J\{x^j\}\big)$\,.
Therefore, in the spirit of \eqref{tildeBdelta}
we set
\begin{equation}\label{tildeBzero}
\widetilde{\mathscr{B}}^{\alpha}_{0,\Omega}\coloneqq\{w\in H^2(\Omega)\,:\,w=-{W}^{\alpha}_0\,,\, \partial_n w=-\partial_n{W}^{\alpha}_0\textrm{ on }\partial\Omega\}\,.
\end{equation}
}
\end{remark}
Now we prove the following theorem, which is the equivalent of \cite[Theorem 4.1]{CermelliLeoni06} in terms of the Airy stress function.
\begin{theorem}\label{2201181928}
Let $\alpha=\sum_{j=1}^Jb^j\textrm{def}_{x^j}\in\mathscr{ED}(\Omega)$ and let
$\mathcal{I}_\ep^\alpha$ be the functional in \eqref{20220223_2}
for every $\ep>0$\,.
For $\ep>0$ small enough,
the minimum problem \eqref{2112192000} admits a unique solution $w_\ep^\alpha$\,.
Moreover, $w_\ep^\alpha\to w^\alpha_0$, as $\ep\to 0$\,, strongly
in $H^2_{\mathrm{loc}}(\Omega\setminus\bigcup_{j=1}^J\{x^j\})$\,, where $w^\alpha_0\in H^2_{\mathrm{loc}}(\Omega\setminus\bigcup_{j=1}^J\{x^j\})$ is the unique distributional solution to
\begin{equation}\label{limsol}
\begin{cases}
\displaystyle \frac{1-\nu^2}{E}\Delta^2 w=-\sum_{j=1}^{J}|b^j|\partial_{\frac{(b^j)^\perp}{|b^j|}}\textrm{def}_{x^j} &\text{in $\Omega$}\\[2mm]
w=\partial_n w=0&\text{on $\partial\Omega$\,.}
\end{cases}
\end{equation}
\end{theorem}
Theorem \ref{2201181928} is a consequence of Propositions~\ref{2201181930} and~\ref{2201252355} below, which are the analogue of \cite[Lemma 4.2]{CermelliLeoni06} and \cite[Lemma 4.3]{CermelliLeoni06}, respectively.
\begin{proposition}\label{2201181930}
Let $\alpha\in\mathscr{ED}(\Omega)$ and let $\ep>0$ be small enough. For every $\widetilde w\in \widetilde{\mathscr{B}}^{\alpha}_{\ep,\Omega}$ we have
\begin{equation}\label{2201182011}
C_1\big(\|\widetilde w\|_{H^2(\Omega_{\ep}(\alpha))}^2-\|\widetilde w\|_{H^2(\Omega_{\ep}(\alpha))}-1\big) \le \widetilde{\mathcal{I}}^\alpha_{\ep}(\widetilde w)\le
C_2\big(\|\widetilde w\|_{H^2(\Omega_{\ep}(\alpha))}^2+\|\widetilde w\|_{H^2(\Omega_{\ep}(\alpha))}+1\big)\,,
\end{equation}
for some constants $0<C_1<C_2$ independent of $\ep$\,.
Moreover,
problem \eqref{2112192003} admits a unique solution $\widetilde{w}^\alpha_{\ep}\in \widetilde{\mathscr{B}}^{\alpha}_{\ep,\Omega}$ and
$\|\widetilde{w}_{\ep}^\alpha\|_{H^2(\Omega_{\ep}(\alpha))}$ is uniformly bounded with respect to $\ep$\,.
Furthermore, there exists $\widetilde{w}^\alpha_0\in \widetilde{\mathscr{B}}^{\alpha}_{0,\Omega}$ such that as $\ep\to0$ and up to a (not relabeled) subsequence,
\begin{equation}\label{2201141838}
\widetilde{w}_\ep^{\alpha}
\weakly \widetilde{w}_0^\alpha\quad\text{weakly in $H^2(\Omega)$.}
\end{equation}
\end{proposition}
\begin{proposition}
\label{2201252355}
Let $\alpha=\sum_{j=1}^Jb^j\textrm{def}_{x^j}\in\mathscr{ED}(\Omega)$ and let $\ep>0$ be small enough.
Let $\widetilde w^\alpha_{\ep}$ and $\widetilde w^\alpha_{0}$ be as in Proposition~\ref{2201181930}\,.
Then, as $\ep\to 0$\,, the whole sequence $\widetilde{w}^\alpha_\ep$ converges to $\widetilde{w}^\alpha_0$\,, strongly in $H^2_\mathrm{loc}\big(\Omega\setminus\bigcup_{j=1}^J\{x^j\}\big)$ and
$\widetilde w^\alpha_{0}$ is
the unique minimizer in $ \widetilde{\mathscr{B}}^{\alpha}_{0,\Omega}$ of the functional $\widetilde{\mathcal{I}}^\alpha_{0}$ defined by
\begin{equation*}
\begin{aligned}
\widetilde{\mathcal{I}}^\alpha_{0}(\widetilde{w})\coloneqq&\,\mathcal{G}(\widetilde{w};\Omega)
+\frac{1+\nu}{E} \sum_{j=1}^J \Big( -(1-\nu)
\int_{\partial\Omega}(\partial_{n}\Delta W_{0}^j)\widetilde{w} \,\mathrm{d} \mathcal H^1\\
&\phantom{\mathcal{G}(\widetilde{w};\Omega)
+\frac{1+\nu}{E} \sum_{j=1}^J}+ \int_{\partial\Omega}\langle \nabla^2 W_{0}^j n,\nabla \widetilde{w} \rangle \,\mathrm{d} \mathcal H^1
-\nu \int_{\partial\Omega}\Delta W_{0}^j \partial_{n}\widetilde{w} \,\mathrm{d}\mathcal H^1 \Big)\,.
\end{aligned}
\end{equation*}
Moreover,
\begin{equation}\label{20220419}
\Delta^2\widetilde{w}^\alpha_0=0 \qquad\textrm{in }\Omega
\end{equation}
and
\begin{equation}\label{20220222_7}
\widetilde{\mathcal{I}}^\alpha_{\ep}(\widetilde w^\alpha_{\ep})\to \widetilde{\mathcal{I}}^\alpha_{0}(\widetilde w^\alpha_{0})\qquad\textrm{as }\ep\to 0\,.
\end{equation}
\end{proposition}
\begin{proof}[Proof of Theorem \ref{2201181928}]
By the additive decomposition in \eqref{2201181904} and by Proposition \ref{2201181930}, we have that, for $\ep>0$ small enough, $w^\alpha_\ep=W^\alpha_\ep+\widetilde w^\alpha_\ep$\,, where $W^\alpha_\ep$ is defined in \eqref{20220223_1} and $\widetilde{w}_\ep^\alpha$ is the unique solution to the minimum problem in \eqref{2112192003}.
Therefore, by Remark \ref{maybeuseful} and by Proposition \ref{2201252355}, we have that $w_\ep^\alpha\to W_0^\alpha+\widetilde{w}_0^\alpha\eqqcolon w^\alpha_0$ in $H^2_\mathrm{loc}\big(\Omega\setminus\bigcup_{j=1}^J\{x^j\}\big)$ as $\ep\to 0$\,.
Notice that, by \eqref{20220419} and by the very definition of $w_0^\alpha$ (see \eqref{20220311}),
\begin{equation}\label{20220311_1}
\frac{1-\nu^2}{E}\Delta^2w_0^\alpha=\frac{1-\nu^2}{E}\Delta^2W_0^\alpha=-\sum_{j=1}^{J}|b^j|\partial_{\frac{(b^j)^\perp}{|b^j|}}\textrm{def}_{x^j}\qquad\textrm{in }\Omega\,,
\end{equation}
\emph{i.e.}, the first equation in \eqref{limsol}.
Finally, the boundary conditions are satisfied since $\widetilde{w}^\alpha_0\in\widetilde{\mathscr{B}}_{0,\Omega}^\alpha$ (see \eqref{tildeBzero}).
\end{proof}
Now we prove Proposition \ref{2201181930}.
\begin{proof}[Proof of Proposition \ref{2201181930}]
Let $\alpha=\sum_{j=1}^Jb^j\textrm{def}_{x^j}\in\mathscr{ED}(\Omega)$ and let $\widetilde{w}\in\widetilde{\mathscr{B}}_{\ep,\Omega}^\alpha$\,.
We first prove that for every $j=1,\ldots, J$
\begin{equation}\label{20220222_3}
\begin{aligned}
\frac{1+\nu}{E}\sum_{i=1}^J\bigg((1-\nu)\int_{\partial B_\ep(x^i)}(\partial_{n}\Delta W_{\ep}^j)\widetilde{w} \,\mathrm{d}\mathcal H^1
- \int_{\partial B_{\ep}(x^i)}\langle \nabla^2 W_{\ep}^j n,\nabla \widetilde{w} \rangle \,\mathrm{d}\mathcal H^1&\\
+\nu\int_{\partial B_{\ep}(x^i)}\Delta W_{\ep}^j \partial_{n} \widetilde{w} \,\mathrm{d}\mathcal H^1\bigg)+\frac{1}{2\pi\ep}\int_{\partial B_{\ep}(x^j)}
\langle \nabla \widetilde{w}, \Pi(b^j)\rangle\,\mathrm{d}\mathcal H^1&=\mathrm{O}(\ep)\,.
\end{aligned}
\end{equation}
To this purpose, we recall that, for every $i=1,\ldots,J$, there exists an affine function $a^i_\ep$ such that
\begin{equation}\label{2201101612}
\widetilde w=a^i_\ep-W^{i}_{\ep}-\sum_{k\neq i}W^k_\ep\qquad\textrm{on }\partial B_\ep(x^i)\,.
\end{equation}
Moreover, as in \eqref{natural}, for every function $a$ which is affine in $B_\ep(x^j)$ we have
\begin{equation}\label{20220216_1}
\begin{aligned}
& \frac{1-\nu^2}{E}\int_{\partial B_\ep(x^j)}(\partial_{n}\Delta W_{\ep}^j)a \,\mathrm{d}\mathcal H^1+
\frac{1+\nu}{E}\nu\int_{\partial B_{\ep}(x^j)} \Delta W_{\ep}^j \partial_na \,\mathrm{d}\mathcal H^1\\
&-\frac{1+\nu}{E}\int_{\partial B_{\ep}(x^j)}\langle \nabla^2 W_{\ep}^j n,\nabla a \rangle \,\mathrm{d}\mathcal H^1\\
=&-\frac{|b^j|}{2\pi\ep}\int_{\partial B_\ep(x^j)}\partial_{\frac{(b^j)^\perp}{|b^j|}}a\,\mathrm{d}\mathcal H^1
=-\frac{1}{2\pi\ep}\int_{\partial B_\ep(x^j)}\langle\nabla a,\Pi(b^j)\rangle\,\mathrm{d}\mathcal H^1\,.
\end{aligned}
\end{equation}
Let $j=1,\ldots,J$ be fixed.
We first focus on the case $i=j$ in \eqref{20220222_3}\,.
Recalling that $W_\ep^j$ is affine in $B_\ep(x^j)$ and that it is the only minimizer of the total energy in $B_R(x^j)\supset\Omega$\,, by
\eqref{20220216_1} we get
\begin{equation}\label{2201182343}
\begin{aligned}
\frac{1-\nu^2}{E}\int_{\partial B_\ep(x^j)}(\partial_{n}\Delta W_{\ep}^j)(a^j_\ep-W^j_\ep) \,\mathrm{d}\mathcal H^1+
\frac{1+\nu}{E}\nu\int_{\partial B_{\ep}(x^j)} \Delta W_{\ep}^j \partial_n(a^j_\ep-W^j_\ep) \,\mathrm{d}\mathcal H^1&\\
-\frac{1+\nu}{E}\int_{\partial B_{\ep}(x^j)}\langle \nabla^2 W_{\ep}^j n,\nabla(a^j_\ep-W^j_\ep) \rangle \,\mathrm{d}\mathcal H^1
+\frac{1}{2\pi\ep}\int_{\partial B_\ep(x^j)}\langle\nabla(a^j_\ep-W_\ep^j),\Pi(b^j)\rangle\,\mathrm{d}\mathcal H^1&=0\,.
\end{aligned}
\end{equation}
Furthermore, recalling that $W^k_\ep$ is smooth in $B_\ep(x^j)$ for every $k\neq j$, by Taylor expansion
we have that
\begin{equation*}
W^k_\ep(x)=W^k_\ep(x^j)+\langle\nabla W^k_\ep(x^j),x-x^j\rangle+\mathrm{O}(\ep^2)\qquad\textrm{for every }x\in B_\ep(x^j)\,,
\end{equation*}
whence, using \eqref{20220216_1} with $a(\cdot)\coloneqq W^k_\ep(x^j)+\langle\nabla W^k_\ep(x^j),\cdot-x^j\rangle$, we deduce that
\begin{equation}\label{dopo}
\begin{aligned}
\frac{1-\nu^2}{E}\int_{\partial B_\ep(x^j)} \!\!\!\! (\partial_{n}\Delta W_{\ep}^j)\Big(-\sum_{k\neq j}W^k_\ep\Big) \,\mathrm{d}\mathcal H^1+
\frac{1+\nu}{E}\nu\int_{\partial B_{\ep}(x^j)} \!\!\!\! \Delta W_{\ep}^j \partial_n\Big(-\sum_{k\neq j}W^k_\ep\Big) \,\mathrm{d}\mathcal H^1&\\
-\frac{1+\nu}{E}\int_{\partial B_{\ep}(x^j)}\Big\langle \nabla^2 W_{\ep}^j n,\nabla\Big(-\sum_{k\neq j}W^k_\ep\Big) \Big\rangle \,\mathrm{d}\mathcal H^1&\\
+ \frac{1}{2\pi\ep}\int_{\partial B_\ep(x^j)}\Big\langle\nabla\Big(-\sum_{k\neq j}W^k_\ep\Big),\Pi(b^j)\Big\rangle\,\mathrm{d}\mathcal H^1&=\mathrm{O}(\ep)\,.
\end{aligned}
\end{equation}
By adding \eqref{2201182343} and \eqref{dopo}, in view of \eqref{2201101612}, we get
\begin{equation}\label{20220214_5}
\begin{aligned}
\frac{1-\nu^2}{E}\int_{\partial B_\ep(x^j)}(\partial_{n}\Delta W_{\ep}^j)\widetilde{w} \,\mathrm{d}\mathcal H^1+
\frac{1+\nu}{E}\nu\int_{\partial B_{\ep}(x^j)} \Delta W_{\ep}^j \partial_n\widetilde{w} \,\mathrm{d}\mathcal H^1&\\
-\frac{1+\nu}{E}\int_{\partial B_{\ep}(x^j)}\langle \nabla^2 W_{\ep}^j n,\nabla\widetilde{w}\rangle \,\mathrm{d}\mathcal H^1
+\frac{1}{2\pi\ep}\int_{\partial B_\ep(x^j)}\big\langle \nabla \widetilde{w},\Pi(b^j)\big\rangle\,\mathrm{d}\mathcal H^1 &= \mathrm{O}(\ep)\,.
\end{aligned}
\end{equation}
Now we focus on the case $i\neq j$ in \eqref{20220222_3}\,.
We first notice that, by the Gauss--Green Theorem, for any affine function $a$ there holds
\begin{equation}\label{220210_1}
\begin{aligned}
0=& \int_{B_{\ep}(x^i)}
\Delta W_{\ep}^j\Delta(-W_{\ep}^i+a)\,\mathrm{d} x=
\int_{B_{\ep}(x^i)}{\Delta^2W_{\ep}^j} (-W_{\ep}^i+a)\, \mathrm{d} x\\
&- \int_{\partial B_{\ep}(x^i)}(\partial_{(-n)}\Delta W_{\ep}^j) (-W_{\ep}^i+a) \,\mathrm{d}\mathcal H^1
+ \int_{\partial B_{\ep}(x^i)}\Delta W_{\ep}^j \partial_{(-n)}(-W_{\ep}^i+a)\,\mathrm{d}\mathcal H^1\\
=&\int_{\partial B_{\ep}(x^i)}(\partial_{n}\Delta W_{\ep}^j) (-W_{\ep}^i+a) \,\mathrm{d}\mathcal H^1- \int_{\partial B_{\ep}(x^i)}\Delta W_{\ep}^j \partial_{n}(-W_{\ep}^i+a)\,\mathrm{d}\mathcal H^1\,,
\end{aligned}
\end{equation}
where the first equality follows from the fact that $W_\ep^i$ is affine in $B_\ep(x^i)$ whereas the last one is a consequence of $\Delta^2W^j_\ep=0$ in $A_{\ep,R}(x^j)$\,.
Similarly, we have
\begin{equation}\label{220210_2}
\begin{aligned}
0=&
\int_{B_{\ep}(x^i)}
\nabla^2W_{\ep}^j:\nabla^2 (-W_{\ep}^i+a) \,\mathrm{d} x=
\int_{B_{\ep}(x^i)}\Delta^2W_{\ep}^j(-W_{\ep}^i+a)\, \mathrm{d} x\\
&- \int_{\partial B_{\ep}(x^i)}(\partial_{(-n)}\Delta W_{\ep}^j) (-W_{\ep}^i+a) \,\mathrm{d}\mathcal H^1
+ \int_{\partial B_{\ep}(x^i)}\langle \nabla^2 W_{\ep}^j (-n),\nabla (-W_{\ep}^i+a) \rangle\,\mathrm{d}\mathcal H^1\\
=&\int_{\partial B_{\ep}(x^i)}(\partial_{n}\Delta W_{\ep}^j) (-W_{\ep}^i+a) \,\mathrm{d}\mathcal H^1
- \int_{\partial B_{\ep}(x^i)}\langle \nabla^2 W_{\ep}^j n,\nabla (-W_{\ep}^i+a) \rangle\,\mathrm{d}\mathcal H^1\,.
\end{aligned}
\end{equation}
Furthermore, as $\ep\to0$\,,
\begin{equation}\label{220210_3}
\begin{aligned}
&\int_{\partial B_{\ep}(x^i)}(\partial_{n}\Delta W_{\ep}^j) \Big( -\sum_{k\neq i} W^k_{\ep} \Big)\,\mathrm{d}\mathcal H^1\to 0\\
&\int_{\partial B_{\ep}(x^i)}\Delta W_{\ep}^j \partial_{n} \Big( -\sum_{k\neq i} W^k_{\ep} \Big)\,\mathrm{d}\mathcal H^1\to 0\\
&\int_{\partial B_{\ep}(x^i)}\Big\langle \nabla^2 W_{\ep}^j n,\nabla \Big( -\sum_{k\neq i} W^k_{\ep} \Big) \Big\rangle\,\mathrm{d}\mathcal H^1\to 0\,,
\end{aligned}
\end{equation}
since all the integrands are uniformly bounded in $\ep$ and the domain of integration is vanishing.
Therefore, in view of \eqref{2201101612}, by \eqref{220210_1}, \eqref{220210_2}, \eqref{220210_3},
for any function $\widetilde{w}\in\widetilde{\mathscr{B}}_{\ep,\Omega}^\alpha$ we have that
\begin{equation*}
\begin{aligned}
-\nu \sum_{i\neq j}\int_{\partial B_{\ep}(x^i) }(\partial_{n}\Delta W_{\ep}^j) \widetilde{w}\,\mathrm{d}\mathcal H^1
+\nu \sum_{i\neq j}\int_{\partial B_{\ep}(x^i)}\Delta W_{\ep}^j \partial_{n} \widetilde{w}\,\mathrm{d}\mathcal H^1& \\
+ \sum_{i\neq j}\int_{\partial B_{\ep}(x^i)}(\partial_{n}\Delta W_{\ep}^j)\widetilde{w}\,\mathrm{d}\mathcal H^1
- \sum_{i\neq j}\int_{\partial B_{\ep}(x^i)}\langle \nabla^2 W_{\ep}^j n,\nabla \widetilde{w} \rangle\,\mathrm{d}\mathcal H^1&=\mathrm{O}(\ep)\,,
\end{aligned}
\end{equation*}
which, together with \eqref{20220214_5}, implies \eqref{20220222_3}.
Since the functions $W^j_\ep$ (for every $j=1,\ldots,J$) are uniformly bounded with respect to $\ep$ on $\partial\Omega$, by the standard trace theorem we get
\begin{equation}\label{20220211_3}
\begin{aligned}
&\,\Bigg|-(1-\nu)\int_{\partial\Omega} (\partial_n\Delta W_\ep^j)\widetilde{w}\,\mathrm{d}\mathcal H^1+\int_{\partial\Omega}\langle\nabla^2W_\ep^j n,\nabla\widetilde w\rangle\,\mathrm{d}\mathcal H^1
-\nu\int_{\partial\Omega}\Delta W_\ep^j\partial_n\widetilde{w}\,\mathrm{d}\mathcal H^1\Bigg|\\
\le&\, C\|\widetilde{w}\|_{H^1(\partial\Omega)}\le C\|\widetilde{w}\|_{H^2(\Omega_\ep(\alpha))}\,,
\end{aligned}
\end{equation}
where $C>0$ is a constant that does not depend on $\ep$\,.
In view of Lemma \ref{20220222_1}, by \eqref{20220222_3} and \eqref{20220211_3} (summing over $j=1,\ldots,J$), for $\ep$ small enough, we get
\begin{equation}\label{20220211_4}
\begin{aligned}
&\,\bigg|\frac{1+\nu}{E} \sum_{j=1}^J \int_{\Omega_{\ep}(\alpha)}
\big( \nabla^2W^j_{\ep}:\nabla^2\widetilde{w}-\nu \Delta W^j_{\ep}\Delta \widetilde{w}\big)\,\mathrm{d} x
+\sum_{j=1}^J\frac{1}{2\pi\ep}\int_{\partial B_{\ep}(x^j)}
\langle \nabla \widetilde{w}, \Pi(b^j)\rangle\,\mathrm{d}\mathcal H^1\bigg|\\
\le&\, C\big(\|\widetilde{w}\|_{H^2(\Omega_\ep(\alpha))}+1\big)\,,
\end{aligned}
\end{equation}
for some constant $C>0$ that does not depend on $\ep$\,.
Now, by applying Proposition \ref{202112230052} with $f=W_\ep^\alpha$ and by the very definition of $\mathcal{G}$ in \eqref{energyairy}, we deduce the existence of two constants $0<C_1<C_2$ independent of $\ep$ (but depending on $\alpha$ and $\Omega$) such that
\begin{equation}\label{20220214_1}
C_1\big(\|\widetilde{w}\|^2_{H^2(\Omega_\ep(\alpha))}-\|W_\ep^\alpha\|^2_{C^\infty(\partial\Omega)}\big)\le \mathcal{G}(\widetilde{w};\Omega_\ep(\alpha))\le C_2\|\widetilde{w}\|^2_{H^2(\Omega_\ep(\alpha))}\,
\end{equation}
for every $\widetilde w\in \widetilde{\mathscr{B}}_{\ep,\Omega}^\alpha$\,. Therefore, by \eqref{20220211_4} and
\eqref{20220214_1}, we deduce \eqref{2201182011}.
By \eqref{2201182011}, existence and uniqueness of the solution $\widetilde{w}_\ep^\alpha$ to the minimization problem \eqref{2112192003} for $\ep>0$ small enough follows by the direct method in the Calculus of Variations.
Furthermore, by \eqref{2201182011} and by Proposition \ref{2201202252} applied with $f=W_\ep^\alpha$ and $f^j=\sum_{i\neq j}W_\ep^i$\,, we have that
\begin{equation}\label{20220214_2}
C'\|\widetilde{w}_{\ep}^\alpha\|^2_{H^2(\Omega)}\le\widetilde{\mathcal{I}}_\ep^\alpha(\widetilde{w}_{\ep}^\alpha)+C''\,,
\end{equation}
for some constants $C',C''>0$ independent of $\ep$ (but depending on $\alpha$ and $\Omega$).
Hence, in order to conclude the proof it is enough to construct (for $\ep$ small enough) a competitor function $\widehat{w}^\alpha_\ep\in\widetilde{\mathscr{B}}_{\ep,\Omega}^\alpha$ such that
\begin{equation}\label{20220214_3}
\widetilde{\mathcal{I}}_\ep^\alpha(\widehat{w}_\ep^\alpha)\le C
\end{equation}
for some constant $C>0$ independent of $\ep$\,.
We construct $\widehat{w}_\varepsilon^\alpha$ as follows.
For every $j=1,\dots,J$\,, let $\varphi^j\in C^{\infty}(\Omega)$ be such that
$\varphi^j\equiv 0$ on $\overline{B}_{\frac D 4}(x^j)$\,,
$\varphi^j\equiv 1$ on $\Omega_{\frac D 2}(\alpha)$\,,
and $|\nabla\varphi^j(x)|\le \frac{C}{|x-x^j|}$ for every $x\in A_{\frac{D}{4},\frac{D}{2}}(x^j)$\,;
for every $\ep$ small enough\,, we define $\widehat{w}^\alpha_\ep\colon \Omega\to\mathbb{R}$ as
$$
\widehat{w}^\alpha_\ep\coloneqq-\sum_{j=1}^J \varphi^j W^j_{\ep}.
$$
By construction,
$$
\widehat{w}^\alpha_\ep+W^\alpha_\ep=\sum_{j=1}^J(1-\varphi^j)W^j_{\ep}\in\widetilde{\mathscr{B}}_{\ep,\Omega}^\alpha
$$
and
\begin{equation}\label{20220214_4}
\|\widehat{w}^\alpha_\ep\|_{H^2(\Omega_{\ep}(\alpha))}\le
\|\widehat{w}^\alpha_\ep\|_{H^2(\Omega)}\le\sum_{j=1}^J\|\varphi^jW_\ep^j\|_{H^2\big({A}_{\frac D 4,R}(x^j)\big)}\le C\,,
\end{equation}
for some constant $C>0$ independent of $\ep$ (but possibly depending on $\alpha$ and on $R$). By \eqref{2201182011} and \eqref{20220214_4} we obtain \eqref{20220214_3} and this concludes the proof.
\end{proof}
\begin{proof}[Proof of Proposition \ref{2201252355}]
We preliminarily notice that, since $\mathcal{G}$ is lower semicontinuous with respect to the weak $H^2$ convergence, \eqref{2201141838} yields
\begin{equation}\label{20220221_1}
\mathcal{G}(\widetilde{w}_0^\alpha;\Omega)\le\liminf_{\ep\to 0}\mathcal{G}(\widetilde{w}_\ep^\alpha;\Omega_\ep(\alpha))\,,
\end{equation}
and hence
\begin{equation}\label{2201192250}
\widetilde{\mathcal{I}}_{0}^\alpha(\widetilde{w}_0^\alpha)\le\liminf_{\ep\to 0}\widetilde{\mathcal{I}}_{\ep}^\alpha(\widetilde{w}_\ep^\alpha)\,.
\end{equation}
Here we have used that the boundary integrals on $\partial B_\ep(x^j)$ vanish as $\ep\to 0$ in view of \eqref{20220222_3}, and that,
by compactness of the trace operator \cite[Theorem~6.2, page~103]{Necas11} (see also Remark~\ref{maybeuseful}), as $\ep\to 0$\,,
\begin{equation}\label{numerata1606}
\begin{aligned}
\int_{\partial\Omega}(\partial_{n}\Delta W_{\ep}^j)\widetilde{w}_\ep^\alpha \,\mathrm{d}\mathcal H^1\to& \int_{\partial\Omega}(\partial_{n}\Delta W_{0}^j)\widetilde{w}_0^\alpha \,\mathrm{d}\mathcal H^1\\
\int_{\partial\Omega}\langle \nabla^2 W_{\ep}^j n,\nabla \widetilde{w}_\ep^\alpha \rangle \,\mathrm{d}\mathcal H^1\to& \int_{\partial\Omega}\langle \nabla^2 W_{0}^j n,\nabla \widetilde{w}_0^\alpha \rangle \,\mathrm{d}\mathcal H^1\\
\int_{\partial\Omega}\Delta W_{\ep}^j \partial_{n}\widetilde{w}_\ep^\alpha \,\mathrm{d}\mathcal H^1\to& \int_{\partial\Omega}\Delta W_{0}^j \partial_{n}\widetilde{w}_0^\alpha \,\mathrm{d}\mathcal H^1\,.
\end{aligned}
\end{equation}
Moreover, by Proposition \ref{prop:approx}
for every $\widehat{w}_0\in\widetilde{\mathscr{B}}_{0,\Omega}^\alpha$
there exists a sequence $\{\widehat{w}_\ep\}_\ep\subset H^2(\Omega)$ with $\widehat{w}_\ep\in\widetilde{\mathscr{B}}_{\ep,\Omega}^\alpha$ (for every $\ep>0$) such that $\widehat{w}_\ep\to \widehat{w}_0$ strongly in $H^2(\Omega)$\,. It follows that
\begin{equation}\label{20220222_6}
\widetilde{\mathcal{I}}_{0}^\alpha(\widehat w_0)=\lim_{\ep\to 0}\widetilde{\mathcal{I}}_{\ep}^\alpha(\widehat w_\ep)\,,
\end{equation}
which, by the minimality of $\widetilde{w}_{\ep}^\alpha$ and in view of \eqref{2201192250}, gives
\begin{equation*}
\widetilde{\mathcal{I}}_{0}^\alpha(\widehat{w}_0)=\lim_{\ep\to 0}\widetilde{\mathcal{I}}_{\ep}^\alpha(\widehat w_\ep)\ge\limsup_{\ep\to 0}\widetilde{\mathcal{I}}_{\ep}^\alpha(\widetilde{w}_\ep^\alpha)\ge \widetilde{\mathcal{I}}_{0}^\alpha(\widetilde{w}_0^\alpha)\,.
\end{equation*}
It follows that $\widetilde{w}_0^\alpha$ is a minimizer of $\widetilde{\mathcal{I}}_0^\alpha$ in $\widetilde{\mathscr{B}}_{0,\Omega}^\alpha$\,.
By convexity (see \eqref{strictconv}), such a minimizer is unique and, by computing the first variation of $\widetilde{\mathcal{I}}_{0}^\alpha$ in $\widetilde{w}_0^\alpha$\,, we have that it satisfies \eqref{20220419}.
Furthermore, by applying \eqref{20220222_6} with $\widehat{w}_0=\widetilde{w}_0^\alpha$ we get \eqref{20220222_7}.
Finally, we discuss the strong convergence of $\widetilde{w}_{\ep}^\alpha$ on compact subsets of $\Omega\setminus\bigcup_{j=1}^J\{x^j\}$\,.
To this purpose, we preliminarily notice that, from \eqref{20220222_7}, \eqref{20220222_3}, and \eqref{numerata1606}, we have that
\begin{equation*}
\lim_{\ep\to 0}\mathcal{G}(\widetilde{w}^\alpha_\ep;\Omega_\ep(\alpha))=\mathcal{G}(\widetilde{w}^\alpha_0;\Omega)\,.
\end{equation*}
We now want to show that for every (fixed) $r>0$
\begin{equation}\label{440}
\int_{\Omega_{r}(\alpha)}
|\nabla^2\widetilde{w}^\alpha_{\ep}- \nabla^2 \widetilde{w}^\alpha_0 |^2
\,\mathrm{d} x\to 0\qquad\text{as $\ep\to 0$\,.}
\end{equation}
To this purpose, we will use the weak convergence \eqref{2201141838} and Remark~\ref{equinorm}; we start by observing that
\begin{equation*}
\begin{aligned}
&\int_{\Omega_{r}(\alpha)}
|\nabla^2\widetilde{w}^\alpha_{\ep}- \nabla^2 \widetilde{w}_0^\alpha |^2\,\mathrm{d} x
-\nu\int_{\Omega_{r}(\alpha)}
|\Delta\widetilde{w}_{\ep}^\alpha- \Delta \widetilde{w}^\alpha_0 |^2
\,\mathrm{d} x
\\
=&\int_{\Omega_{r}(\alpha)} \!\!\! \big(
|\nabla^2\widetilde{w}_{\ep}^\alpha |^2+| \nabla^2 \widetilde{w}_0^\alpha |^2-2
\nabla^2 \widetilde{w}_0^\alpha:\nabla^2\widetilde{w}^\alpha_{\ep}\big)\,\mathrm{d} x
-\nu\int_{\Omega_{r}(\alpha)} \!\!\! \big(
|\Delta\widetilde{w}_{\ep}^\alpha |^2+| \Delta \widetilde{w}^\alpha_0 |^2-2
\Delta \widetilde{w}_0^\alpha \Delta\widetilde{w}_{\ep}^\alpha\big)\,\mathrm{d} x\,,
\end{aligned}
\end{equation*}
whence, thanks to the convergence
\eqref{2201141838},
we deduce
\begin{equation}\label{20220225_1}
\int_{\Omega_{r}(\alpha)}
|\nabla^2\widetilde{w}_{\ep}^\alpha- \nabla^2 \widetilde{w}^\alpha_0 |^2
\,\mathrm{d} x
-\nu\int_{\Omega_{r}(\alpha)}
|\Delta\widetilde{w}^\alpha_{\ep}- \Delta \widetilde{w}_0^\alpha |^2
\,\mathrm{d} x \to 0\,.
\end{equation}
Since (see the first inequality in \eqref{quadr})
\begin{equation*}
c(\nu)
\int_{\Omega_{r}(\alpha)}
|\nabla^2\widetilde{w}^\alpha_{\ep}- \nabla^2 \widetilde{w}_0^\alpha |^2
\,\mathrm{d} x
\le
\int_{\Omega_{r}(\alpha)}
|\nabla^2\widetilde{w}^\alpha_{\ep}- \nabla^2 \widetilde{w}_0^\alpha |^2
\,\mathrm{d} x
-\nu\int_{\Omega_{r}(\alpha)}
|\Delta\widetilde{w}^\alpha_{\ep}- \Delta\widetilde{w}_0^\alpha |^2
\,\mathrm{d} x
\end{equation*}
for some constant $c(\nu)>0$ depending only on $\nu$\,, by \eqref{20220225_1},
we get \eqref{440}.
Finally, by \eqref{2201141838}, we get that $\widetilde{w}_\ep^\alpha$ converges strongly in $H^1(\Omega)$, as $\ep\to0$, to $\widetilde{w}_0^\alpha$\,,
which together with
\eqref{440}, implies that
\begin{equation}\label{20220225_3}
\widetilde{w}_\ep^\alpha\to \widetilde{w}_0^\alpha\qquad\textrm{ strongly in }H^{2}(\Omega_r(\alpha))\,.
\end{equation}
In conclusion, for any compact set $K\subset\Omega\setminus\bigcup_{j=1}^J\{x^j\}$\,, there exists $r>0$ such that $K\subset\Omega_{r}(\alpha)$\,, which, in view of \eqref{20220225_3}, implies the claim and concludes the proof of the proposition.
\end{proof}
We are in a position to discuss the asymptotic expansion of energies and to classify each term of the expansion.
\begin{theorem}\label{CLequiv}
For every $\ep>0$ small enough, let $w_\ep^\alpha$ be the minimizer of $\mathcal{I}_\ep^\alpha$ in $\mathscr{B}_{\ep,\Omega}^\alpha$\,.
Then we have
\begin{equation}\label{20220222_8}
\mathcal{I}_\ep^\alpha(w_\ep^\alpha)=-\frac{E}{1-\nu^2} \sum_{j=1}^J
\frac{|b^j|^2}{8\pi}|\log\ep|
+F(\alpha)+f(D,R;\alpha)+\omega_\ep\,,
\end{equation}
where $\omega_\ep\to 0$ as $\ep\to 0$\,,
\begin{equation}\label{20220224_1}
\begin{aligned}
f(D,R;\alpha)=\sum_{j=1}^J\frac{|b^j|^2}{8\pi}\frac{E}{1-\nu^2}\bigg(&\,2+\frac{D^2}{R^2}\Big(\frac{D^2}{R^2}-2\Big)-2\log R\\
&\,+\frac{1}{4(1-\nu)}\frac{D^2}{R^2}\Big(\frac{R^2}{D^2}-1\Big)\Big(\frac{D^2}{R^2}\Big(\frac{R^2}{D^2}+1\Big)-2\Big)\bigg)\,,
\end{aligned}
\end{equation}
(recall \eqref{distaminima} for the definition of $D$) and
\begin{equation}\label{ren_en_dislo}
F(\alpha)=
F^{\mathrm{self}}(\alpha)+
F^{\mathrm{int}}(\alpha)+
F^{\mathrm{elastic}}(\alpha)
\end{equation}
is the renormalized energy defined by
\begin{equation}\label{2201201755}
F^{\mathrm{self}}(\alpha)\coloneqq\sum_{j=1}^J \mathcal{G}( W^j_{0};\Omega_{D}(\alpha))+
\frac{E}{1-\nu^2} \sum_{j=1}^J
\frac{|b^j|^2}{8\pi}\log D\,,
\end{equation}
\begin{equation}\label{2201260016}
\begin{aligned}
F^{\mathrm{int}}(\alpha)\coloneqq&\frac{1+\nu}{E}\sum_{j=1}^J\sum_{k\neq j}
\Big( -(1-\nu)\int_{\partial\Omega}(\partial_{n}\Delta{ W_{0}^j}){W^k_0} \,\mathrm{d}\mathcal H^1
\\
&\phantom{\frac{1+\nu}{E}\sum_{j=1}^J\sum_{k\neq j} }+ \int_{\partial\Omega}\langle \nabla^2 {W_{0}^j } n,\nabla {W^k_0}\rangle\,\mathrm{d}\mathcal H^1
-\nu \int_{\partial\Omega}\Delta {W_{0}^j} \partial_{n}{W^k_0} \,\mathrm{d}\mathcal H^1\Big)\,,
\end{aligned}
\end{equation}
\begin{equation}\label{2201221615}
F^{\mathrm{elastic}}(\alpha)\coloneqq\widetilde{\mathcal{I}}_0^\alpha(\widetilde{w}_0^\alpha)\,.
\end{equation}
\end{theorem}
\begin{remark}
\rm{Notice that $F^{\mathrm{self}}(\alpha)$ is independent of $D$\,,
as can be verified by a simple computation.
}
\end{remark}
\begin{proof}
By \eqref{2201181904} and \eqref{20220223_2}, we have that
$w_\ep^\alpha=W_\ep^\alpha+\widetilde{w}_\ep^\alpha$\,, where $W_\ep^\alpha$ is defined in \eqref{20220223_1} and $\widetilde{w}_\ep^\alpha$ is the unique minimizer of $\widetilde{\mathcal{I}}_\ep^\alpha$ in $\widetilde{\mathscr{B}}_{\ep,\Omega}^\alpha$ provided by Proposition~\ref{2201252355}.
Notice that
\begin{equation}\label{20220223_3}
\begin{aligned}
&\,\mathcal{G}(W_\ep^\alpha;\Omega_\ep(\alpha))+\sum_{j=1}^J\frac{1}{2\pi\ep}\int_{\partial B_\ep(x^j)}\langle\nabla W_\ep^\alpha,\Pi(b^j)\rangle\,\mathrm{d}\mathcal H^1\\
=&\,\sum_{j=1}^J\Big(\mathcal{G}(W_\ep^j;\Omega_\ep(\alpha))+\frac1{2\pi\ep}\int_{\partial B_\ep(x^j)}\langle\nabla W_\ep^j,\Pi(b^j)\rangle\,\mathrm{d}\mathcal H^1\Big)\\
&\,+\sum_{j=1}^J\sum_{k\neq j}\bigg(\frac{1+\nu}{E} \!\! \int_{\Omega_\ep(\alpha)} \!\!\! \Big(\nabla^2 W_\ep^j:\nabla^2W_\ep^k-\nu\Delta W_\ep^j\Delta W_\ep^k\Big)\,\mathrm{d} x\\
&\,+\frac1{2\pi\ep}\int_{\partial B_\ep(x^j)} \!\!\!\! \langle\nabla W_\ep^k,\Pi(b^j)\rangle\,\mathrm{d}\mathcal H^1\bigg)\\
\eqqcolon&\, F^{\mathrm{self}}_\ep(\alpha)+F^{\mathrm{int}}_\ep(\alpha)\,.
\end{aligned}
\end{equation}
We notice that, for every $j=1,\ldots,J$ and for every $0<\ep<r\le D$ with $\ep<1$
\begin{equation}\label{20220223_5}
\begin{aligned}
&\,\mathcal{G}(W^j_\ep;\Omega_\ep(\alpha))+\frac{1}{2\pi\ep}\int_{\partial B_\ep(x^j)}\langle\nabla W_\ep^j,\Pi(b^j)\rangle\,\mathrm{d}\mathcal H^1\\
=&\,\mathcal{G}(W^j_\ep;\Omega_r(\alpha))+\mathcal{G}(W^j_\ep;A_{\ep,r}(x^j))+\frac{1}{2\pi\ep}\int_{\partial B_\ep(x^j)}\langle\nabla W_\ep^j,\Pi(b^j)\rangle\,\mathrm{d}\mathcal H^1\,.
\end{aligned}
\end{equation}
Furthermore, by Corollary~\ref{insec4}, we have that
\begin{multline}\label{20220224_2}
\mathcal{G}(W_\ep^j; A_{\ep,r}(x^j))+\frac{1}{2\pi\ep}\int_{\partial B_\ep(x^j)}\langle\nabla W_\ep^j,\Pi(b^j)\rangle\,\mathrm{d}\mathcal H^1
=\,-\frac{|b^j|^2}{8\pi}\frac{E}{1-\nu^2}\log\frac{1}{\ep}\\
+\frac{|b^j|^2}{8\pi}\frac{E}{1-\nu^2}\log r+f_\ep(r,R;|b^j|)\,,
\end{multline}
where $f_\ep(r,R;|b^j|)$ is defined in \eqref{vanerr}.
Notice moreover that $f_{\ep}(r,R;|b^j|)\to f(r,R;|b^j|)$ (as $\ep\to 0$) with $f(r,R;|b^j|)$ defined by
\begin{equation}\label{20220224_3}
\begin{aligned}
f(r,R;|b^j|)\coloneqq&\frac{|b^j|^2}{8\pi}\frac{E}{1-\nu^2}\Big(2+\frac{r^2}{R^2}\Big(\frac{r^2}{R^2}-2\Big)-2\log R\Big)\\
&+\frac{|b^j|^2}{32\pi}\frac{E}{(1-\nu)^2(1+\nu)}\frac{r^2}{R^2}\Big(\frac{R^2}{r^2}-1\Big)\Big(\frac{r^2}{R^2}\Big(\frac{R^2}{r^2}+1\Big)-2\Big)\,.
\end{aligned}
\end{equation}
By Remark \ref{maybeuseful}, summing formulas \eqref{20220223_5}, \eqref{20220224_2}, and \eqref{20220224_3} over $j=1,\ldots,J$ and taking $r= D$, we obtain
\begin{equation}\label{20220224_4}
F^{\mathrm{self}}_\ep(\alpha)=-\sum_{j=1}^J\frac{|b^j|^2}{8\pi}\frac{E}{1-\nu^2}|\log\ep|+F^{\mathrm{self}}(\alpha)+f(D,R;\alpha)+\omega_\ep\,,
\end{equation}
where $\omega_\ep\to 0$ as $\ep\to 0$ and $f(D,R;\alpha)\coloneqq\sum_{j=1}^Jf(D,R;|b^j|)$\,.
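As a quick numerical sanity check (ours, not part of the proof; all function names below are ours), one can verify that the factored expression \eqref{20220224_1} coincides with the two-term form \eqref{20220224_3} evaluated at $r=D$, using the prefactor identity $\frac{1}{8\pi}\frac{E}{1-\nu^2}\cdot\frac{1}{4(1-\nu)}=\frac{E}{32\pi(1-\nu)^2(1+\nu)}$.

```python
import math

def f_two_terms(r, R, b, E, nu):
    # f(r, R; |b|) written as the sum of two terms, as in the definition above
    t = r**2 / R**2
    term1 = (b**2 / (8 * math.pi)) * (E / (1 - nu**2)) * (2 + t * (t - 2) - 2 * math.log(R))
    term2 = (b**2 / (32 * math.pi)) * (E / ((1 - nu)**2 * (1 + nu))) \
        * t * (1 / t - 1) * (t * (1 / t + 1) - 2)
    return term1 + term2

def f_factored(r, R, b, E, nu):
    # the factored form appearing in the statement of the theorem
    t = r**2 / R**2
    return (b**2 / (8 * math.pi)) * (E / (1 - nu**2)) * (
        2 + t * (t - 2) - 2 * math.log(R)
        + (1 / (4 * (1 - nu))) * t * (1 / t - 1) * (t * (1 / t + 1) - 2)
    )
```

The two forms agree to machine precision for any $0<r<R$ and $\nu\in(-1,1)$.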
We now focus on $F_\ep^{\mathrm{int}}(\alpha)$\,.
By arguing as in the proof of Lemma \ref{20220222_1}, for every $j,k=1,\ldots,J$ with $k\neq j$\,, we have that
\begin{equation*}
\begin{aligned}
&\int_{\Omega_\ep(\alpha)}\Big(\nabla^2 W_\ep^j:\nabla^2W_\ep^k-\nu\Delta W_\ep^j\Delta W_\ep^k\Big)\,\mathrm{d} x\\
=&-(1-\nu)\int_{\partial\Omega}(\partial_{n}\Delta W_{\ep}^j){W}_\ep^k \,\mathrm{d}\mathcal H^1+\int_{\partial\Omega}\langle \nabla^2 W_{\ep}^j n,\nabla W_\ep^k \rangle \,\mathrm{d}\mathcal H^1-\nu \int_{\partial\Omega}\Delta W_{\ep}^j \partial_{n}W_\ep^k \,\mathrm{d}\mathcal H^1\\
&+(1-\nu)\sum_{i=1}^J\int_{\partial B_\ep(x^i)}(\partial_{n}\Delta W_{\ep}^j)W_\ep^k \,\mathrm{d}\mathcal H^1\\
&- \sum_{i=1}^J\int_{\partial B_{\ep}(x^i)}\langle \nabla^2 W_{\ep}^j n,\nabla W_\ep^k \rangle \,\mathrm{d}\mathcal H^1+\nu\sum_{i=1}^J\int_{\partial B_{\ep}(x^i)}\Delta W_{\ep}^j \partial_{n} W_\ep^k \,\mathrm{d}\mathcal H^1\,,
\end{aligned}
\end{equation*}
which, in view of \eqref{20220216_1}, \eqref{220210_1}, and \eqref{220210_3}, and using Remark~\ref{maybeuseful}, implies
\begin{equation}\label{20220223_4}
F^{\mathrm{int}}_\ep(\alpha)=F^{\mathrm{int}}(\alpha)+\omega_\ep\,,
\end{equation}
where $\omega_\ep\to 0$ as $\ep\to 0$\,.
Finally, by \eqref{20220223_3}, \eqref{20220223_5}, \eqref{20220224_4}, and \eqref{20220223_4}, we get
\begin{equation*}
\begin{aligned}
&\mathcal{G}(W_\ep^\alpha;\Omega_\ep(\alpha))+\sum_{j=1}^J\frac{1}{2\pi\ep}\int_{\partial B_\ep(x^j)}\langle\nabla W_\ep^\alpha,\Pi(b^j)\rangle\,\mathrm{d}\mathcal H^1\\
=&-\sum_{j=1}^J\frac{|b^j|^2}{8\pi}\frac{E}{1-\nu^2}|\log\ep|+F^{\mathrm{self}}(\alpha)+f(D,R;\alpha)+F^{\mathrm{int}}(\alpha)+\omega_\ep\,,
\end{aligned}
\end{equation*}
which, by \eqref{20220223_2} together with Propositions \ref{2201181930} and \ref{2201252355}, allows us to conclude the proof.
\end{proof}
We conclude by showing, via a diagonal argument, that the asymptotic behavior in Theorem~\ref{CLequiv} also remains valid for systems of disclination dipoles,
that is, when the finite system
$\alpha\in\mathscr{ED}(\Omega)$ of edge dislocations is replaced with the approximating system of disclination dipoles.
\begin{theorem}\label{diago}
Let $J\in\mathbb{N}$, let $b^1,\ldots,b^J\in\mathbb{R}^2\setminus\{0\}$, and let $x^1,\ldots,x^J$ be distinct points in~$\Omega$\,.
For every $h>0$\,, let $\theta_h\in\mathscr{WD}(\Omega)$ be the measure defined in \eqref{thetahJ}.
Then,
\begin{equation}\label{eovvia}
\theta_h\weakstar\alpha\coloneqq\sum_{j=1}^Jb^j\delta_{x^j}\in\mathscr{ED}(\Omega)\qquad \text{as $h\to 0$\,.}
\end{equation}
Let $D>0$ be as in \eqref{distaminima}; for every $0<h<\varepsilon<D$ let $w^{\theta_h}_{h,\varepsilon}$ be the unique minimizer in $\mathscr{B}_{\varepsilon,\Omega}^\alpha$ of the functional $\mathcal{I}_{h,\varepsilon}^{\theta_h}$ defined in \eqref{2202231647}.
Then there exists a function $\varepsilon\colon\mathbb{R}^+\to\mathbb{R}^+$ with $\varepsilon(h)>h$ and $\varepsilon(h)\to 0$ as $h\to 0$ such that $w_{h,\varepsilon(h)}^{\theta_h}\to w_0^\alpha$ in $H^2_\mathrm{loc}\big(\Omega\setminus\bigcup_{j=1}^J\{x^j\}\big)$ as $h\to 0$\,, where $w_0^\alpha$ is the function provided by Theorem \ref{2201181928}.
Moreover,
\begin{equation}\label{enerinormafine}
\mathcal{I}_{h,\varepsilon(h)}^{\theta_h}\big(w_{h,\varepsilon(h)}^{\theta_h}\big)=-\frac{E}{1-\nu^2} \sum_{j=1}^J
\frac{|b^j|^2}{8\pi}|\log\varepsilon(h)|
+F(\alpha)+f(D,R;\alpha)+\omega_{h}\,,
\end{equation}
where $F(\alpha)$ and $f(D,R;\alpha)$ are defined in \eqref{ren_en_dislo} and \eqref{20220224_1}, respectively, and $\omega_h\to 0$ as $h\to 0$\,.
\end{theorem}
\begin{proof}
Convergence \eqref{eovvia} is obvious.
Let now $0<\ep<D$ be fixed.
By Remark~\ref{2202211829}, there exists $\bar h<\ep$ such that, for every $h<\bar h$,
\begin{equation}\label{fattaprima}
\lVert w^{\theta_h}_{h,\ep}-w^\alpha_{0,\ep}\rVert_{H^2(\Omega)}<\ep\,,
\end{equation}
where $w_{0,\ep}^\alpha$
is the unique minimizer of \eqref{energyI} in $\mathscr{B}_{\varepsilon,\Omega}^\alpha$\,.
Choose such an $h$, call it $h(\ep)$, and notice that this choice can be made in a strictly monotone fashion\,.
Let now $0<r<D$\,;
by \eqref{fattaprima} and Theorem \ref{2201181928}\,, we get
$$
\big\lVert w^{\theta_{h(\ep)}}_{h(\ep),\ep}-w^\alpha_{0}\big\rVert_{H^2(\Omega_r(\alpha))}\leq \big\lVert w^{\theta_{h(\ep)}}_{h(\ep),\ep}-w^\alpha_{0,\ep}\big\rVert_{H^2(\Omega_r(\alpha))}+\lVert w^\alpha_{0,\ep}-w^\alpha_{0}\rVert_{H^2(\Omega_r(\alpha))}<\ep+\mathrm{o}_\ep\,,
$$
where $\mathrm{o}_\ep\to 0$ as $\varepsilon\to 0$\,.
By the arbitrariness of $r$ we get that $w^{\theta_{h(\ep)}}_{h(\ep),\ep}\to w^\alpha_{0}$ in $H^2_\mathrm{loc}(\Omega\setminus\bigcup_{j=1}^{J}\{x^j\})$\,, and hence, by the strict monotonicity of the map $\ep\mapsto h(\ep)$\,, the first part of the claim follows.
Finally, \eqref{enerinormafine} is an immediate consequence of Theorem \ref{CLequiv}.
\end{proof}
\section{Introduction}
Two of the most fundamental primitives in information theory are privacy amplification and data compression with side information, both of which involve manipulating the correlations between two random variables $Z$ and $Y$. Privacy amplification is the art of extracting that part of $Z$ which is uncorrelated with $Y$. In particular, the goal is to extract uniform randomness, in the form of a random variable $U$, from an input $Z$ in such a way that $U$ is completely uncorrelated with $Y$. In a cryptographic setting $Z$ might refer to a partially-random and partially-private key string, while $Y$ refers to knowledge held by an adversary. Meanwhile, the goal of data compression with side information is essentially the opposite, to determine that part of $Z$ which is not correlated with $Y$ and to make this available as the compressed version of $Z$. More specifically, an encoder would like to compress $Z$ into a smaller random variable $C$ such that a decoder can reconstruct $Z$ given both $C$ and the side information $Y$.
These two tasks have direct, purely quantum analogs in quantum information theory. Data compression with side information translates into distillation of entanglement between two quantum systems $A$ and $B$ using only classical communication (the analog of $C$). The quantum version of privacy amplification is the removal of all correlations (both quantum and classical) between $A$ and $B$ by actions taken only on $A$, such that the density matrix for system $A$ is also transformed into a completely mixed state (the analog of $U$).
Moreover, in the purely quantum realm the two quantum tasks are dual to one another, a feature which has been fruitfully exploited to construct a whole family of quantum information processing protocols~\cite{abeyesinghe_mother_2009}. The duality holds for complementary quantum systems, in the sense that if it is possible to maximally entangle two systems $A$ and $B$ such that $A$ itself is maximally mixed, then it is possible to completely decouple a maximally-mixed $A$ from the complementary system $R$ of $B$, and vice versa~\cite{schumacher_approximate_2002}. Two systems $B$ and $R$ are complementary, relative to system $A$, when the joint quantum state of $ABR$ is a pure state, a state which always exists in principle. That is, two systems $B$ and $R$ are complementary relative to $A$ when each one is the purifying system for the other and $A$~\footnote{In the communication scenario complementary systems become complementary channels. An arbitrary channel $\mathcal{E}$ taking $A$ to $B$ can be thought of as an isometry taking $A$ to $BR$ by the Stinespring dilation, and considering only the $R$ portion of the output defines the complementary channel $\mathcal{E}^\#$ to $\mathcal{E}$.}.
In this paper we show that this duality also extends to the hybrid classical-quantum tasks of classical privacy amplification against quantum adversaries and classical data compression with quantum side information: The ability to perform one of the tasks implies the ability to perform the other. Here we are interested in manipulating the correlations between a classical random variable $Z$ and a quantum random variable, i.e.\ a quantum system $B$. Despite the hybrid nature of the resources, the analysis is still within the realm of quantum information theory, as we can and do imagine that $Z$ is produced by measurement of the quantum system $A$.
Complementary quantum systems still constitute an important part of the duality, and compression of $Z$ given side information $B$ implies privacy amplification against $R$ and vice versa, just as in the purely quantum case. However, the duality takes on an additional complementary character, as the compression task is not dual to privacy amplification of $Z$ against $R$, but rather it is dual to privacy amplification of a complementary random variable, which we will call $X$, against $R$. Complementary random variables correspond to outcomes of measuring complementary observables, observables like position and momentum for which complete certainty in the outcome of one observable implies complete uncertainty in the outcome of the other. In the present context, if the random variable $Z$ is the result of measuring an observable $Z^A$ on system $A$, then $X$ is the result of measuring a complementary observable $X^A$ on $A$. In what follows we ignore the difference between an observable and a random variable and simply call both $Z^A$ (or ${X}^A$).
Of course, one of the pillars of quantum mechanics is that both measurements cannot be performed simultaneously. Because analysis of such complementary measurements can quickly become a maze of counterfactuals, let us describe the duality more precisely. We start with a pure quantum state $\psi^{ABR}$ describing the three quantum systems $A$, $B$, and $R$. Then we imagine a \emph{hypothetical} $Z^A$ measurement, say, and then design a protocol for data compression of the resulting classical random variable $Z^A$ given side information $B$. The protocol itself is real enough, and the duality then states that if we instead perform the $X^A$ measurement, it is possible to repurpose the compression protocol to perform privacy amplification of the classical random variable $X^A$ against system $R$. The same is true in the reverse direction (modulo the caveats discussed below). We stress that only one of the two conjugate measurements $Z^A$ or $X^A$ is ever performed on $\psi^{A}$; we merely contemplate what would be possible had the other measurement been performed.
There are two caveats regarding the duality that should be emphasized. First, we can only establish a duality between protocols in which the privacy amplification function or data compression function is linear. This requirement stems from the need to interpret functions applied to $X^A$ as operations on $Z^A$ and vice versa. In general this is problematic, as $X^A$ and $Z^A$ are complementary observables and therefore actions taken on one have unavoidable back-actions on the other, but linear functions will offer a way around this problem.
Secondly, the duality does not hold in both directions for arbitrary states of $ABR$. As we shall see, the ability to perform data compression with side information (CSI) implies the ability to perform privacy amplification (PA). However, we can only show the converse when $\psi^{ABR}$ has one of two simple forms, either $R$ is completely correlated with (a hypothetical measurement of) $Z^A$ or $B$ is completely correlated with (a hypothetical measurement of) $X^A$. These restrictions and the asymmetry of the duality can be traced back to a recently proven form of the uncertainty principle~\cite{renes_conjectured_2009,berta_entropic_2009} and the fact that it only sets a \emph{lower} limit on knowledge of complementary observables. Going from privacy amplification to data compression implicitly requires an \emph{upper} limit, which we deal with by considering the equality conditions of the uncertainty principle, and these are shown to be exactly the two conditions named above.
The remainder of the paper is devoted to elucidating the duality. In the next section we provide background on the two tasks, how protocols can be constructed using universal hashing, as well as the details of one-shot protocols handling arbitrary inputs and rates that can be achieved in the case of asymptotically-many identical inputs. Then in Sec.~\ref{sec:perfect} we examine the ``perfect'' cases, that is, when $Z^A$ is perfectly recoverable from $B$ or $R$ is completely uncorrelated with a uniformly random $X^A$, and show that the duality immediately follows from a recently discovered form of the uncertainty principle. Use of the uncertainty principle helps explain the duality in a simplified setting and understand the reason for the second caveat above.
As perfect correlation or uncorrelation is difficult to achieve in practice, we are ultimately more interested in the approximate case. In Sec.~\ref{sec:approx} we investigate the duality in the approximate case and show that $R$ is approximately uncorrelated with $X^A$ if $Z^A$ is approximately recoverable from $B$, and vice versa. This serves as a stepping stone to studying full-fledged CSI and PA protocols, as taken up in Sec.~\ref{sec:iidprotocols}. Therein we show how CSI protocols utilizing linear hashing can be used to construct, and can be constructed from, linear-hashing based PA protocols.
In the case of protocols designed for inputs consisting of asymptotically-many copies of a fixed resource state, the uncertainty principle of~\cite{renes_conjectured_2009,berta_entropic_2009} implies that the duality preserves optimality of protocols in that optimal CSI protocols can be transformed into optimal PA protocols, and vice versa.
Combining this with recent results on one-shot CSI, this construction implies a new uncertainty principle formulated in terms of smooth min- and max-entropies, which we discuss in Sec.~\ref{sec:applications} along with additional applications and relations to other work.
\section{Background}
\subsection{Classical-Quantum States}
In order to describe protocols involving hybrid classical-quantum systems, it is convenient to work within the formalism of quantum mechanics. In this language, a classical random variable $Z^A$ and quantum side information $S$ can be described by the classical-quantum (cq) state
\begin{align}
\label{eq:cqstate}
{\psi}^{AS}_Z=\sum_{z=0}^{d-1} p_z\ket{z}\!\bra{z}^A\otimes\varphi_z^S,
\end{align}
where $z$ are the possible values the random variable $Z^A$ can take, with alphabet size $d$, $p_z$ is the probability that $Z^A=z$, and $\varphi_z^S$ is the quantum state of $S$ conditioned on the value of $z$. The $A$ system is effectively classical in the sense that an unconditional measurement in the $\ket{z}$ basis has no effect on the state; essentially it has already been measured. The measurement defines the $Z^A$ observable, up to specifying the values of the possible outcomes, e.g.\ the positions for the position observable. In the present context these values are irrelevant, as we are content with simply enumerating them. The subscript, here $Z$, indicates this is a cq state and which observable defines the classical basis.
The entropy of the classical random variable $Z^A$ given the quantum side information $S$ is defined by
\begin{align}
\label{eq:condent}
H(Z^A|S)_{{\psi}^{AS}_Z}\equiv H(AS)_{{\psi}^{AS}_Z}-H(S)_{{\psi}^{S}_Z},
\end{align}
where $H(A)_{\psi^A}=-{\rm Tr}\left[\psi^A\log_2 \psi^A\right]$ is the von Neumann entropy (measured in bits) and $\psi^S_Z={\rm Tr}_A[\psi^{AS}_Z]$ is the partial trace over the $A$ system.
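To make definition (\ref{eq:condent}) concrete, here is a minimal Python sketch (ours, not from the paper) that assembles the block-diagonal matrix representing the cq state (\ref{eq:cqstate}) and evaluates $H(Z^A|S)=H(AS)-H(S)$ numerically. With perfectly correlated side information (orthogonal $\varphi_z^S$) it returns $0$; with uninformative side information ($\varphi_z^S$ all equal) it returns the entropy of $Z^A$ itself.

```python
import numpy as np

def von_neumann_entropy(rho):
    """Von Neumann entropy in bits, ignoring numerically zero eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

def conditional_entropy_cq(p, phis):
    """H(Z^A|S) = H(AS) - H(S) for the cq state sum_z p_z |z><z| (x) phi_z."""
    d, ds = len(p), phis[0].shape[0]
    rho_as = np.zeros((d * ds, d * ds), dtype=complex)  # block diagonal in z
    for z in range(d):
        rho_as[z * ds:(z + 1) * ds, z * ds:(z + 1) * ds] = p[z] * phis[z]
    rho_s = sum(p[z] * phis[z] for z in range(d))
    return von_neumann_entropy(rho_as) - von_neumann_entropy(rho_s)
```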
In general, $\psi^{AS}$ can be thought of as the marginal of the pure state $\ket{\psi}^{AST}$,
where
\begin{align}
\ket{\psi}^{AST}=\sum_{z=0}^{d-1}\sqrt{p_z}\ket{z}^A\ket{z}^{T_1}\ket{\varphi_z}^{ST_2}.
\end{align}
System $T$ consists of two parts, $T_2$ which purifies $S$ for each value of $z$ and $T_1$ which purifies $AST_2$. Here we name the systems $S$ and $T$ instead of $B$ and $R$ as in the introduction because in the subsequent sections $B$ and $R$ will take on both roles in different contexts.
From the pure state we can still define the entropy of the classical random variable $Z^A$ given $S$ by first converting back to a cq state. We will often make use of the following definition: $H(Z^A|S)_{\psi^{AST}}\equiv H(Z^A|S)_{\psi^{AS}_Z}$.
\subsection{Privacy Amplification Against Quantum Adversaries (PA)}
Privacy amplification is the art of extracting a truly secret random variable, a secret key, from one partially known to an eavesdropper holding some side information $S$. Functions performing privacy amplification are usually called \emph{extractors}, the goal being to produce as much secret key as possible.
Privacy amplification against adversaries holding classical information was introduced in~\cite{bennett_to_1986,bennett_privacy_1988}, and was extended to the case of quantum side information in~\cite{devetak_distillation_2005,knig_power_2005,renner_universally_2005}.
Using~(\ref{eq:cqstate}), the ideal output of privacy amplification would be a state for which $p_z=\frac{1}{d}$ and the $\varphi_z^S$ are all independent of $z$ and equal to one another. This last property implies that $\varphi_z^S=\psi^S$ for all $z$.
In~\cite{renner_universally_2005} Renner and K\"onig introduced an approximate notion of security and uniformity of $Z^A$ which is universally composable, meaning that $Z^A$ can safely be input into any other composable protocol, and the overall protocol is certain to be secure by virtue of its constituent parts being secure. This definition says that $Z^A$ is approximately secure if the trace distance to the ideal case $\tfrac{1}{d}\mathbbm{1}\otimes {\psi}^S$ is small, where ${\psi}^S={\rm Tr}_A\left[{\psi}^{AS}_Z\right]$. We will say that $Z^A$ is $\epsilon$-secure if
\begin{align}
\label{eq:epssecure}
p_{\rm secure}(Z^A|S)_\psi\equiv\tfrac{1}{2}\left\|\psi_Z^{AS}-\tfrac{1}{d}\mathbbm{1}\otimes {\psi}^S\right\|_1\leq \epsilon.
\end{align}
Use of the trace distance $\left\|M\right\|_1\equiv{\rm Tr}[\sqrt{M^\dagger M}]$ means that the actual $\epsilon$-secure $Z^A$ can only be distinguished from the ideal, a perfect key, with probability at most $\epsilon$.
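A minimal numerical illustration (ours, not from the paper) of the security criterion: for a cq state with $\varphi_z^S$ independent of $z$, the distance (\ref{eq:epssecure}) reduces to the classical variational distance $\frac{1}{2}\sum_z|p_z-\frac{1}{d}|$, which the sketch below reproduces.

```python
import numpy as np

def trace_distance(rho, sigma):
    # (1/2)||rho - sigma||_1 via the singular values of the difference
    return 0.5 * float(np.sum(np.linalg.svd(rho - sigma, compute_uv=False)))

def p_secure(p, phis):
    """(1/2)||psi_Z^{AS} - (1/d) I (x) psi^S||_1 for a cq state."""
    d, ds = len(p), phis[0].shape[0]
    psi_s = sum(p[z] * phis[z] for z in range(d))
    rho = np.zeros((d * ds, d * ds), dtype=complex)
    ideal = np.zeros_like(rho)
    for z in range(d):
        rho[z * ds:(z + 1) * ds, z * ds:(z + 1) * ds] = p[z] * phis[z]
        ideal[z * ds:(z + 1) * ds, z * ds:(z + 1) * ds] = psi_s / d
    return trace_distance(rho, ideal)
```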
Renner and K\"onig show that privacy amplification can produce an $\epsilon$-secure key of length (number of bits) $\ell_{\rm PA}^\epsilon(Z^A|S)_\psi$ given in terms of the smooth min-entropy~\cite{renner_universally_2005, renner_security_2005}:
\begin{align}
\label{eq:pa}
\hmin{\epsilon_1}(Z^A|S)_\psi-2\log\tfrac{1}{\epsilon_2}+2\,\leq\, \ell_{\rm PA}^\epsilon(Z^A|S)_\psi\,\leq\, \hmin{\sqrt{2\epsilon}}(Z^A|S)_\psi,
\end{align}
where $\epsilon=\epsilon_1+\epsilon_2$. For a precise definition of the smooth min-entropy, see Appendix~\ref{app:smooth}.
The lower bound is established by constructing an extractor based on \emph{universal hashing}.
In this scheme the approximate key is created by applying a randomly chosen hash function $f$ to $Z^A$. The function is chosen from a universal family $F$ of hash functions, each mapping a size $d$ input alphabet to a size $m$ output alphabet, such that for any pair of inputs $z_1$ and $z_2$ the probability of a collision of the outputs is no greater than if the function were chosen at random from all functions:
\begin{align*}
{\rm Pr}_F[f(z_1)=f(z_2)]\leq \tfrac{1}{m}\qquad\forall\,\, z_1\neq z_2.
\end{align*}
More properly, such a family is called a 2-universal family, since the outputs exhibit a weaker form of pairwise independence. Hash families whose outputs are truly pairwise independent are called strongly 2-universal, a notion which can be easily extended to $k$-wise independence. In the present context we shall focus on using linear hash functions, and since the family of all linear hash functions is universal, we can immediately apply the results of~\cite{renner_universally_2005}.
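The 2-universality of linear hashing can be made concrete numerically. The following sketch is a hypothetical toy instance with bit-string alphabets: each family member is a uniformly random binary matrix $F$ acting as $f(z)=Fz \bmod 2$, and for any fixed pair $z_1\neq z_2$ the exact collision probability is $2^{-3}=1/m$, which the sampled estimate should approach.

```python
import numpy as np

rng = np.random.default_rng(0)
n, out_bits = 6, 3          # hypothetical sizes: 6-bit inputs, 3-bit outputs, so m = 8

z1 = np.array([1, 0, 0, 0, 0, 0])
z2 = np.array([0, 1, 0, 0, 0, 0])

trials, collisions = 20000, 0
for _ in range(trials):
    # one member of the linear family: f(z) = F z mod 2 for a random binary matrix F
    F = rng.integers(0, 2, size=(out_bits, n))
    if np.array_equal(F @ z1 % 2, F @ z2 % 2):
        collisions += 1
print(collisions / trials)   # statistically close to 1/m = 0.125
```

A collision occurs exactly when $F(z_1\oplus z_2)=0$, and for a nonzero difference vector each of the three output bits vanishes independently with probability $\tfrac12$, giving $2^{-3}$ exactly.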
Meanwhile, the upper bound applies to any conceivable privacy amplification protocol, and stems from properties of the min-entropy itself. In the asymptotic i.i.d.\ case of $n\rightarrow \infty$ copies of $\bar{\psi}^{AS}_Z$, the min-entropy tends to the better-known von Neumann entropy, $\hmin{\epsilon}(Z^{A^{\otimes n}}|S^{\otimes n})_{\psi^{\otimes n}}\rightarrow nH(Z^A|S)_\psi+O(\sqrt{n\log\frac{1}{\epsilon}})$~\cite{tomamichel_fully_2009}, which implies that in this case universal hashing can produce approximate keys at the rate
\begin{align}
\label{eq:parate}
r_{\rm PA}(\psi)\equiv\lim_{\epsilon\rightarrow 0}\lim_{n\rightarrow \infty}\tfrac{1}{n}\ell_{\rm PA}^\epsilon(Z^{A^{\otimes n}}|S^{\otimes n})_{\psi^{\otimes n}}=H(Z^A|S)_{\psi},
\end{align}
and furthermore this rate is optimal.
These results nicely conform with the intuitive understanding of $\hmin{\epsilon}(Z^A|S)$ and $H(Z^A|S)$ as uncertainties of $Z^A$ given the side information $S$; the part of $Z^A$ unknown to $S$ should be roughly of this size, so it is sensible that this amount can in fact be extracted by privacy amplification.
\subsection{Data Compression with Quantum Side Information (CSI)}
The problem of data compression with side information, also known as information reconciliation, is to compress a random variable $Z^A$ in such a way that it can later be recovered from the compressed version $Z'$ together with the side information $S$. Unlike privacy amplification, this protocol has two components, a compressor and a decompressor, the goal of course being to compress the input to as few bits as possible. The case of classical side information was first solved in the asymptotic i.i.d.\ scenario by Slepian and Wolf~\cite{slepian_noiseless_1973}, and a one-shot version was given by Renner and Wolf~\cite{renner_simple_2005,renner_smooth_2004}. The quantum i.i.d.\ version was studied by Winter~\cite{winter_coding_1999} and Devetak and Winter~\cite{devetak_classical_2003}, and recently extended to the one-shot scenario by the present author~\cite{renes_one-shot_2010}.
The ideal output of such a scheme would be a cq state in which the $\varphi_z^S$ were perfectly distinguishable from one another, so that a corresponding measurement of $S$ would perfectly reconstruct $z$. A suitable approximate notion is that there should exist some measurement $\Lambda^S_z$ for which the probability $p_{\rm guess}(Z^A|S)$ of successfully identifying $z$ is large. When such a measurement exists, we say $z$ is $\epsilon$-recoverable from $S$ in the sense that
\begin{align}
\label{eq:epsgood}
p_{\rm guess}(Z^A|S)=\sum_{z=0}^{d-1} p_z{\rm Tr}\left[\Lambda_z^S\varphi_z^S\right]\geq 1-\epsilon.
\end{align}
The one-shot result can be formulated in terms of the dual quantity to the min-entropy, the max-entropy, defined in Appendix~\ref{app:smooth}. The minimum number of bits $\ell_{\rm CSI}^{\epsilon}(Z^A|S)_\psi$ needed to compress $Z^A$ such that it is $\epsilon$-recoverable from $S$ and the compressed version is bounded by
\begin{align}
\label{eq:csi}
\hmax{\sqrt{2\epsilon}}(Z^A|S)_\psi\,\leq\, \ell_{\rm CSI}^{\epsilon}(Z^A|S)_\psi\,\leq\, \hmax{{\epsilon_1}}(Z^A|S)_\psi+2\log\tfrac{1}{\epsilon_2}+4,
\end{align}
again for $\epsilon=\epsilon_1+\epsilon_2$. The upper bound is found by constructing a compressor using universal hashing and a decompressor using the so-called ``pretty good measurement'', while the lower bound follows from properties of the max-entropy. Like the min-entropy, the max-entropy also tends to the von Neumann entropy in the limit of $n\rightarrow\infty$ i.i.d.\ inputs. Defining the rate as in privacy amplification, we obtain
\begin{align}
\label{eq:csirate}
r_{\rm CSI}\equiv\lim_{\epsilon\rightarrow 0}\lim_{n\rightarrow \infty}\tfrac{1}{n}\ell_{\rm CSI}^\epsilon(Z^{A^{\otimes n}}|S^{\otimes n})_{\psi^{\otimes n}}=H(Z^A|S)_\psi.
\end{align}
This reproduces the results of Devetak and Winter for the asymptotic i.i.d.\ case.
Again this result conforms to the intuitive understanding of the conditional entropies. Since $\hmax{\epsilon}(Z^A|S)$ and $H(Z^A|S)_\psi$ are in some sense $S$'s uncertainty of $Z^A$, it is sensible that the compressor would have to augment the decompressor's information by this amount.
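The ``pretty good measurement'' mentioned above has a simple closed form, $\Lambda_z=\rho^{-1/2}\,p_z\varphi_z\,\rho^{-1/2}$ with $\rho=\sum_z p_z\varphi_z$. The sketch below evaluates it for a hypothetical two-state qubit ensemble (the ensemble and the angle $\theta$ are illustrative choices, not taken from the text):

```python
import numpy as np

def mat_pow(M, p):
    # M^p for Hermitian positive semidefinite M (zero eigenvalues stay zero)
    w, V = np.linalg.eigh(M)
    w = np.clip(w, 0, None)
    wp = np.zeros_like(w)
    mask = w > 1e-12
    wp[mask] = w[mask] ** p
    return (V * wp) @ V.conj().T

# Hypothetical ensemble: two equiprobable nonorthogonal qubit states held by S
theta = 0.3
p = [0.5, 0.5]
kets = [np.array([1.0, 0.0]), np.array([np.cos(theta), np.sin(theta)])]
states = [np.outer(k, k) for k in kets]

rho = sum(pz * s for pz, s in zip(p, states))
r = mat_pow(rho, -0.5)

# Pretty good measurement: Lambda_z = rho^{-1/2} p_z varphi_z rho^{-1/2}
povm = [r @ (pz * s) @ r for pz, s in zip(p, states)]
p_guess = sum(pz * np.trace(L @ s).real for pz, L, s in zip(p, povm, states))
print(p_guess)   # equals (1 + sin(theta))/2, about 0.648
```

For two equiprobable pure states this measurement is known to be optimal, attaining the Helstrom guessing probability $\tfrac12(1+\sin\theta)$; the POVM elements also sum to the identity by construction.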
\section{Duality from the Uncertainty Principle in the Perfect Case}
\label{sec:perfect}
We now show that the ideal cases that either $B$ is already perfectly correlated with $Z^A$ or $R$ is perfectly uncorrelated with $X^A$ are already dual by using a recently derived version of the uncertainty principle. Although using the uncertainty principle in this way will ultimately prove insufficient in the approximate case and when attempting to construct one protocol from the other, the analysis here serves to introduce the nature of the duality in a simplified setting, as well as to understand the reasons behind the second caveat.
As remarked in the introduction, the duality between PA and CSI exists for complementary observables $Z^A$ and $X^A$. Let us be more specific and define these observables to be the Weyl-Heisenberg operators $Z\equiv\sum_{k=0}^{d-1} \omega^{k}\ket{k}\bra{k}$ and $X\equiv\sum_{k=0}^{d-1}\ket{k{+}1}\bra{k}$. These operators are not observables in the usual sense, since they are not Hermitian and the values they take on are not real numbers. However, they each specify a basis of system $A$, enough to specify two measurements, which is all we need here. The two are related by the Fourier transform, since the eigenstates of $X$ are simply $\cket{x}=\frac{1}{\sqrt{d}}\sum_{z=0}^{d-1}\omega^{-xz}\ket{z}$. From this relation it is clear that the observables are complementary: the result of a $Z^A$ measurement on an $X^A$ eigenstate is completely random, and vice versa.
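These relations are easy to verify numerically. The sketch below is an illustrative check (the dimension $d=5$ is chosen arbitrarily): it builds $Z$ and $X$, confirms that the Fourier vectors diagonalize $X$, and checks complementarity in the form $|\bracket{z}{\widetilde{x}}|^2=1/d$ together with the Weyl relation $ZX=\omega XZ$.

```python
import numpy as np

d = 5
omega = np.exp(2j * np.pi / d)
Z = np.diag(omega ** np.arange(d))
X = np.roll(np.eye(d), 1, axis=0)                  # X|k> = |k+1 mod d>

# Fourier vectors |x~> = d^{-1/2} sum_z omega^{-xz}|z> are the eigenstates of X
Ftilde = np.array([[omega ** (-x * z) for z in range(d)]
                   for x in range(d)]) / np.sqrt(d)
for x in range(d):
    assert np.allclose(X @ Ftilde[x], omega ** x * Ftilde[x])

assert np.allclose(Z @ X, omega * (X @ Z))         # Weyl relation ZX = omega XZ
assert np.allclose(np.abs(Ftilde) ** 2, 1 / d)     # complementarity |<z|x~>|^2 = 1/d
print("Z and X eigenbases are mutually unbiased Fourier-dual bases")
```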
Now consider a recently-discovered form of the uncertainty principle~\cite{renes_conjectured_2009,berta_entropic_2009}, which quantifies uncertainty by entropy and includes the case of quantum side information,
\begin{align}
\label{eq:cit}
H(X^A|R)_\psi+H(Z^A|B)_\psi\geq \log_2 d.
\end{align}
This holds for arbitrary states $\psi^{ABR}$, pure or mixed.
Loosely speaking, it states that the entropy $R$ has about the result of measuring $X^A$, plus the entropy $B$ has about the result of measuring $Z^A$, cannot be less than $\log_2 d$. Note that it is not possible to perform both of these measurements simultaneously, since the associated observables do not commute. Nevertheless, the uncertainty principle constrains what systems $B$ and $R$ can simultaneously ``know'' about the results of either measurement.
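The inequality can also be tested numerically on random states. The sketch below is illustrative only: it uses qubit systems (so the bound is $\log_2 2=1$), samples random pure states of $ABR$, forms the cq states obtained by measuring $X^A$ or $Z^A$, and checks that the two conditional von Neumann entropies always sum to at least 1.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 2                                    # one qubit each for A, B, R

def entropy(eigs):
    w = np.clip(np.real(np.asarray(eigs)), 0, None)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log2(w)))

def cond_ent(psi, basis, keep_B):
    # H(outcome of measuring A | B) if keep_B, else H(outcome | R),
    # for |psi> on A (x) B (x) R with tensor index order (A, B, R)
    T = psi.reshape(d, d, d)
    blocks, cq_eigs = [], []
    for vec in basis:
        M = np.tensordot(vec.conj(), T, axes=(0, 0))        # amplitudes on (B, R)
        rho = M @ M.conj().T if keep_B else M.T @ M.conj()  # unnormalized conditional state
        blocks.append(rho)
        cq_eigs.extend(np.linalg.eigvalsh(rho))
    # H(outcome, side) - H(side): the cq state is block diagonal in the outcome
    return entropy(cq_eigs) - entropy(np.linalg.eigvalsh(sum(blocks)))

Zb = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
Xb = [np.array([1.0, 1.0]) / np.sqrt(2), np.array([1.0, -1.0]) / np.sqrt(2)]

for _ in range(100):
    psi = rng.normal(size=8) + 1j * rng.normal(size=8)
    psi /= np.linalg.norm(psi)
    total = cond_ent(psi, Xb, keep_B=False) + cond_ent(psi, Zb, keep_B=True)
    assert total >= 1 - 1e-7             # H(X^A|R) + H(Z^A|B) >= log2(2) = 1
print("uncertainty relation verified on 100 random pure states")
```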
Let us see how this can be used to show that perfect $Z^A$ recovery from $B$ implies perfect $X^A$ security from $R$. Consider an arbitrary pure state $\ket{\psi}^{ABR}$, which we can write
\begin{align*}
\ket{\psi}^{ABR}&=\sum_z \sqrt{p_z}\ket{z}^A\ket{\varphi_z}^{BR}\\\nonumber
&=\sum_x \sqrt{q_x}\cket{x}^A\ket{\vartheta_x}^{BR},
\end{align*}
using the $Z^A$ basis $\ket{z}^A$ or the $X^A$ basis $\cket{x}^A$. In the ideal case the states $\varphi_z^B$ are perfectly distinguishable, and therefore $H(Z^A|B)_\psi=0$. By the above this implies $H(X^A|R)_\psi=\log_2 d$, which can only occur if all the marginal states $\vartheta_x^R$ are identical. Hence $R$ is completely uncorrelated with $X^A$. Furthermore, since the uncertainty principle holds for any state, we can also apply it to $\psi^{AB}$; together with $H(Z^A|B)_\psi=0$ this yields $H(X^A)_\psi=\log_2 d$, meaning $X^A$ is uniformly distributed. Thus, $X^A$ is an ideal key, uniformly distributed and completely uncorrelated with $R$.
We cannot directly make use of the uncertainty principle for the converse, that $X^A$ security from $R$ implies $Z^A$ recoverability from $B$. Assuming the former, we have $H(X^A|R)=\log_2 d$. But this does not imply $H(Z^A|B)=0$ unless the uncertainty principle is tight. As an example, consider the $d=2$ state $\ket{\psi}^{ABR}=\frac{1}{\sqrt{2}}\left(\ket{0}+i\ket{1}\right)^A\ket{\varphi}^{BR}$, for which $H(Z^A|B)=H(X^A|R)=1$. On the other hand, if the uncertainty principle is tight, then it is immediate that $H(X^A|R)=\log_2 d$ implies $H(Z^A|B)=0$ and therefore the desired implication holds.
Thus we are interested in the equality conditions for Eq.~(\ref{eq:cit}). The only currently known conditions are that (at least) one of $H(X^A|R)_\psi$, $H(X^A|B)_\psi$, $H(Z^A|R)_\psi$, or $H(Z^A|B)_\psi$ is zero~\cite{renes_conjectured_2009}, so that equality is met when the conditional entropies take on their extreme values. Put differently, the global state $\psi^{ABR}$ must in some way be a cq state, be it between $A$ and $R$, as in $\psi^{AR}=\psi^{AR}_X$ or $\psi^{AR}_Z$, or between $A$ and $B$, as in $\psi^{AB}=\psi^{AB}_Z$ or $\psi^{AB}_X$. Moreover, there must be perfect correlation between the two systems in the sense that the conditional marginal states in either $B$ or $R$ (which depend on the value of $X^A$ or $Z^A$) must be perfectly distinguishable.
For completeness, we briefly recapitulate the argument here. Consider the case that $H(Z^A|B)_\psi=0$,
which immediately implies $H(X^A|R)_\psi\geq \log_2 d$. Since $\log_2 d$ is also an upper bound on the conditional entropy, it must be that $H(X^A|R)_\psi=\log_2 d$ and the equality conditions are met. The same argument can be made starting from $H(X^A|R)_\psi=0$. The remaining two quantities $H(X^A|B)_\psi$ and $H(Z^A|R)_\psi$ are related by the complementary form of the uncertainty principle, obtained by interchanging either the complementary observables $X^A$ and $Z^A$ or the complementary systems $B$ and $R$. The derivation in~\cite{renes_conjectured_2009} simultaneously produces both forms of the uncertainty principle, meaning that satisfying the equality conditions for one implies the same for the other. Thus, the conditions $H(X^A|B)_\psi=0$ and $H(Z^A|R)_\psi=0$ also lead to equality in~(\ref{eq:cit}).
Observe that in the former case of $H(X^A|B)=0$ and $H(X^A|R)=\log_2 d$ we end up with $H(X^A|B)=H(Z^A|B)=0$, which is a sufficient condition for maximal entanglement between $A$ and $B$, as discussed in~\cite{renes_physical_2008}. In the other case we end up with $H(Z^A|B)=H(Z^A|R)=0$ and $H(X^A|B)=H(X^A|R)=\log_2 d$, a situation similar to that of the $d=2$ GHZ state $\frac{1}{\sqrt{2}}\left(\ket{000}+\ket{111}\right)^{ABR}$.
\section{Duality in the Approximate Case}
\label{sec:approx}
In this section we examine the duality when $Z^A$ is approximately recoverable from $B$ or $R$ is approximately independent of a nearly uniform $X^A$. Unfortunately, the arguments using the uncertainty principle in the previous section cannot easily be modified to work in the approximate case, so here we present a more algebraic treatment. We start with $Z^A$ recovery implies $X^A$ security.
\begin{theorem}
\label{thm:recoverimpliessecure}
For an arbitrary pure state $\ket{\psi}^{ABR}$, suppose
$p_{\rm guess}(Z^A|B)_\psi\geq 1-\epsilon$. Then $p_{\rm secure}(X^A|R)_\psi\leq \sqrt{2\epsilon}$.
\end{theorem}
\begin{proof}
Start by performing the measurement coherently with a partial isometry $U^{B\rightarrow BM}$ and an ancillary system $M$. This transforms the state according to
\begin{align*}
\ket{\psi'}^{ABMR}&\equiv U^{B\rightarrow BM}\ket{\psi}^{ABR}\\
&\equiv\sum_{z,z'}\sqrt{p_z}\ket{z}^A\ket{z'}^M\sqrt{\Lambda_{z'}^B}\ket{\varphi_z}^{BR}.
\end{align*}
The ideal output would be
\begin{align*}
\ket{\xi}^{ABMR}=\sum_z\sqrt{p_z}\ket{z}^A\ket{z}^M\ket{\varphi_z}^{BR},
\end{align*}
and computing the fidelity $F(\psi',\xi)\equiv|{\bracket{\psi'}{\xi}}|$ between the two we find
\begin{align*}
F(\psi',\xi)&=\bra{\xi}U^{B\rightarrow BM}\ket{\psi}^{ABR}\\
&=\sum_z p_z\bra{\varphi_z}\sqrt{\Lambda_{z}^B}\ket{\varphi_z}^{BR}
\\&\geq \sum_z p_z\bra{\varphi_z}{\Lambda_{z}^B}\ket{\varphi_z}^{BR}\\
&=p_{\rm guess}.
\end{align*}
Here we have used the fact that $\sqrt{\Lambda}\geq \Lambda$ for $0\leq \Lambda\leq\mathbbm{1}$.
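This operator inequality, $\sqrt{\Lambda}\geq\Lambda$ whenever $0\leq\Lambda\leq\mathbbm{1}$, follows from $\sqrt{\lambda}\geq\lambda$ for eigenvalues $\lambda\in[0,1]$. A quick numerical spot-check (the random matrix below is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
w, V = np.linalg.eigh(A + A.conj().T)            # random Hermitian matrix
lam = (w - w.min()) / (w.max() - w.min())        # eigenvalues squeezed into [0, 1]
Lam = (V * lam) @ V.conj().T                     # a generic operator 0 <= Lam <= 1
sqrt_Lam = (V * np.sqrt(lam)) @ V.conj().T
assert np.all(np.linalg.eigvalsh(sqrt_Lam - Lam) >= -1e-12)
print("sqrt(Lambda) - Lambda is positive semidefinite")
```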
Now rewrite $\xi$ using the complementary basis $\cket{x}^A$ in anticipation of measuring $X^A$. The result is
\begin{align*}
\ket{\xi}^{ABMR}&=\tfrac{1}{\sqrt{d}}\sum_{x}\cket{x}^A\sum_z\omega^{xz}\ket{z}^{M}\ket{\varphi_z}^{BR}\\
&=\tfrac{1}{\sqrt{d}}\sum_{x}\cket{x}^A\left(Z^x\right)^M\sum_z\ket{z}^{M}\ket{\varphi_z}^{BR}\\
&=\tfrac{1}{\sqrt{d}}\sum_{x}\cket{x}^A\left(Z^x\right)^M\ket{\psi}^{MBR}.
\end{align*}
In the last line we have implicitly defined the state $\ket{\psi}^{MBR}$, which is just $\ket{\psi}^{ABR}$ with $A$ replaced by $M$. It is easy to work out that the result of measuring $X^A$ and marginalizing over $BM$ is the ideal output of privacy amplification of $X^A$ against $R$, namely
\begin{align*}
\bar{\xi}_X^{AR}=\tfrac{1}{d}\sum_x \cket{x}\bra{\widetilde{x}}^A\otimes \psi^R=\tfrac{1}{d}\mathbbm{1}^A\otimes \psi^R.
\end{align*}
Since $\ket{\psi}^{ABR}$ and $\ket{\psi'}^{ABMR}$ are related by the isometry $U^{B\rightarrow BM}$, measuring $X^A$ and tracing out $BM$ results in the same output for both input states. And because the fidelity cannot decrease under such a quantum operation (see Appendix~\ref{app:tdf}), this implies
\begin{align*}
F(\bar{\psi}_X^{AR},\tfrac{1}{d}\mathbbm{1}^A\otimes \psi^R)&=F(\bar{\psi'}_X^{AR},\tfrac{1}{d}\mathbbm{1}^A\otimes \psi^R)\\
&\geq F(\psi',\xi)\\
&\geq p_{\rm guess}(Z^A|B)_\psi\\
&\geq 1-\epsilon.
\end{align*}
Finally, from~(\ref{eq:tdfbounds}) we have $p_{\rm secure}(X^A|R)\leq \sqrt{1-F(\bar{\psi}_X^{AR},\tfrac{1}{d}\mathbbm{1}^A\otimes \psi^R)^2}\leq\sqrt{2\epsilon}$.
\end{proof}
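The conversion from fidelity to trace distance used in the last step is an instance of the Fuchs--van de Graaf inequalities, $1-F(\rho,\sigma)\leq D(\rho,\sigma)\leq\sqrt{1-F(\rho,\sigma)^2}$, which we take to be the content of the bounds in Appendix~\ref{app:tdf}. The sketch below is an illustrative numerical check on random qutrit states:

```python
import numpy as np

rng = np.random.default_rng(3)

def sqrtm_psd(M):
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(np.clip(w, 0, None))) @ V.conj().T

def fidelity(rho, sigma):
    # F(rho, sigma) = Tr sqrt( sqrt(rho) sigma sqrt(rho) )
    s = sqrtm_psd(rho)
    return float(np.sum(np.sqrt(np.clip(np.linalg.eigvalsh(s @ sigma @ s), 0, None))))

def rand_state(dim):
    A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    rho = A @ A.conj().T
    return rho / np.trace(rho).real

for _ in range(50):
    rho, sigma = rand_state(3), rand_state(3)
    F = fidelity(rho, sigma)
    D = 0.5 * np.sum(np.abs(np.linalg.eigvalsh(rho - sigma)))
    assert 1 - F <= D + 1e-9
    assert D <= np.sqrt(max(1.0 - F ** 2, 0.0)) + 1e-9
print("Fuchs-van de Graaf bounds hold on 50 random state pairs")
```

In particular, $F\geq 1-\epsilon$ gives $D\leq\sqrt{1-(1-\epsilon)^2}\leq\sqrt{2\epsilon}$, which is exactly the step taken in the proof.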
As discussed in the previous section, there are two routes from $\epsilon$-security of $X^A$ against $R$ to $\epsilon$-recovery of $Z^A$ from $B$. The first case, when $H(X^A|B)_\psi=0$, was implicitly used by Devetak and Winter in their construction of an entanglement distillation protocol achieving the so-called hashing bound~\cite{devetak_distillation_2005}. The second case, $H(Z^A|R)_\psi=0$, has not, to our knowledge, been previously studied, but is more natural in the data compression scenario as it enforces the cq nature of the $AB$ state.
\begin{theorem}
\label{thm:secureimpliesrecover}
If $\ket{\psi}^{ABR}$ is such that $p_{\rm secure}(X^A|R)_\psi\leq \epsilon$ and either \emph{(a)} $H(X^A|B)_\psi=0$ or \emph{(b)} $H(Z^A|R)_\psi=0$, then
$p_{\rm guess}(Z^A|B)_\psi\geq 1-\sqrt{2\epsilon}$.
\end{theorem}
\begin{proof}
Start with case (a), whose condition implies that $\ket{\psi}^{ABR}$ takes the form
\begin{align*}
\ket{\psi}^{ABR}=\sum_x \sqrt{q_x}\cket{x}^A\cket{x}^{B_1}\ket{\vartheta_x}^{B_2R},
\end{align*}
where $B=B_1B_2$. Tracing out $B$ gives the cq state $\bar{\psi}_X^{AR}=\sum_x q_x\cket{x}\bra{\widetilde{x}}^A\otimes \vartheta_x^R$, and the condition $p_{\rm secure}(X^A|R)_\psi\leq \epsilon$ implies that the fidelity of $\bar{\psi}_X^{AR}$ with the ideal output is at least $1-\epsilon$:
\begin{align*}
F(\bar{\psi}_X^{AR},\tfrac{1}{d}\mathbbm{1}^A\otimes\psi^R)&=\sum_x\sqrt{\tfrac{q_x}{d}}F(\vartheta_x^R,\psi^R)\\
&=\sum_x\sqrt{\tfrac{q_x}{d}}F(\ket{\vartheta_x}^{B_2R},U_x^{MB'\rightarrow B_2}\ket{\psi}^{MB'R})\\
&\geq 1-\epsilon.
\end{align*}
To get to the second line we have used Uhlmann's theorem, with corresponding isometries $U_x^{MB'\rightarrow B_2}$ for each state $\vartheta_x^R$, and the state $\ket{\psi}^{MB'R}$ is the same as $\ket{\psi}^{ABR}$ with $A$ replaced by $M$ and $B$ by $B'$. Now define the state
\begin{align*}
\ket{\xi}^{ABR}=\tfrac{1}{\sqrt{d}}\sum_{x=0}^{d-1}\cket{x}^A\cket{x}^{B_1}U_x^{MB'\rightarrow B_2}\ket{\psi}^{MB'R},
\end{align*}
and observe that $F(\ket{\xi}^{ABR},\ket{\psi}^{ABR})=F(\bar{\psi}_X^{AR},\tfrac{1}{d}\mathbbm{1}^A\otimes\psi^R)$. Hence $F(\ket{\xi}^{ABR},\ket{\psi}^{ABR})\geq 1-\epsilon$, and converting to trace distance, we find $D(\psi^{ABR},\xi^{ABR})\leq\sqrt{2\epsilon}$.
Applying the conditional isometry
\begin{align*}
V^{B_1B_2\rightarrow B_1MB'}=\sum_x\cket{x}\bra{\widetilde{x}}^{B_1}\otimes U_x^{\dagger MB'\rightarrow B_2}
\end{align*}
to $\ket{\xi}^{ABR}$ yields
$\frac{1}{\sqrt{d}}\sum_x\cket{x}^A\cket{x}^{B_1}\ket{\psi}^{MB'R}$, and converting the result back to the $\ket{z}$ basis gives
$\frac{1}{\sqrt{d}}\sum_z\ket{z}^A\ket{-z}^{B_1}\ket{\psi}^{MB'R}$, where arithmetic inside the state vector is modulo $d$. Thus, the measurement
\begin{align*}
\Lambda_z^B=V^{\dagger B_1B_2\rightarrow B_1MB'}\ket{-z}\bra{-z}^{B_1}V^{B_1B_2\rightarrow B_1MB'}
\end{align*}
enables perfect recovery of $z$ from $B$ for the state $\xi^{ABR}$. But the measurement is a quantum operation, which cannot increase the trace distance, and the trace distance after a measurement is simply the variational distance of the resulting probability distributions. Therefore we can infer that
\begin{align*}
\tfrac{1}{2}\sum_{z,z'}\left|p_z\delta_{z,z'}-p_z{\rm Tr}\left[\Lambda^B_{z'}\varphi_z^B\right]\right|\leq \sqrt{2\epsilon}.
\end{align*}
Working out the left-hand side of this equation, we find that $p_{\rm guess}(Z|B)_\psi\geq 1-\sqrt{2\epsilon}$.
Now consider case (b), whose condition implies $\ket{\psi}^{ABR}$ can be written
\begin{align*}
\ket{\psi}^{ABR}=\sum_z\sqrt{p_z}\ket{z}^A\ket{z}^{R_1}\ket{\varphi_z}^{BR_2}.
\end{align*}
Using the complementary basis for $A$ gives the alternate form
\begin{align}
\ket{\psi}^{ABR}&=\tfrac{1}{\sqrt{d}}\sum_{xz}\sqrt{p_z}\cket{x}^A\omega^{xz}\ket{z}^{R_1}\ket{\varphi_z}^{BR_2}\nonumber\\
&=\tfrac{1}{\sqrt{d}}\sum_{x}\cket{x}^A(Z^x)^{R_1}\sum_z\sqrt{p_z}\ket{z}^{R_1}\ket{\varphi_z}^{BR_2}\nonumber\\
&=\tfrac{1}{\sqrt{d}}\sum_{x}\cket{x}^A(Z^x)^{R_1}\ket{\theta}^{BR},
\label{eq:alternate}
\end{align}
where in the last line we have implicitly defined the state $\ket{\theta}^{BR}$. Observe that $\psi^R$ is invariant under the action of $(Z^x)^{R_1}$, since $\psi^R=\frac{1}{d}\sum_x (Z^x)^{R_1}\theta^R(Z^x)^{\dagger R_1}$.
Next, compute the fidelity of $\psi_X^{AR}$ with $\tfrac{1}{d}\mathbbm{1}^X\otimes\psi^R$, using the definition $\theta_x^R=(Z^x)^{R_1}\theta^R(Z^x)^{\dagger R_1}$:
\begin{align*}
F(\psi_X^{AR},\tfrac{1}{d}\mathbbm{1}^X\otimes\psi^R)
&=\tfrac{1}{{d}}\sum_xF(\theta_x^R,\psi^R)\\
&=\tfrac{1}{{d}}\sum_x F((Z^x)^{R_1}\theta^R(Z^x)^{\dagger R_1},\psi^R)\\
&=F(\theta^R,\psi^R).
\end{align*}
Again $p_{\rm secure}(X|R)\leq \epsilon$ implies $F(\psi_X^{AR},\tfrac{1}{d}\mathbbm{1}^X\otimes\psi^R) \geq 1-\epsilon$. Since we now have $F(\theta^R,\psi^R)\geq 1-\epsilon$, it follows by Uhlmann's theorem that there exists an isometry $U^{MB\rightarrow B}$ such that $|\bra{\theta}U^{MB\rightarrow B}\ket{\psi}^{MBR}|\geq 1-\epsilon$. Now consider the
state
\begin{align*}
\ket{\xi}^{ABMR}&\equiv\tfrac{1}{\sqrt{d}}\sum_x\cket{x}^A(Z^x)^{R_1}\ket{\psi}^{MBR}\\
&=\tfrac{1}{{d}}\sum_{x,z,z'}\ket{z'}^A\omega^{x(z-z')}\sqrt{p_z}\ket{z,z}^{MR_1}\ket{\varphi_z}^{BR_2}\\
&=\sum_z\sqrt{p_z}\ket{z}^A\ket{z}^M\ket{z}^{R_1}\ket{\varphi_z}^{BR_2},
\end{align*}
from which $z$ can obviously be recovered by measuring $M$.
The overlap of this state with $U^{\dagger MB\rightarrow B}\ket{\psi}^{ABR}$ is just $F(\theta^R,\psi^R)$, so we should expect $p_{\rm guess}(Z|B)_\psi$ to be large when using the measurement
\begin{align*}
\Lambda^B_z=U^{MB\rightarrow B}\ket{z}\bra{z}^MU^{\dagger MB\rightarrow B}.
\end{align*}
Indeed, converting the fidelity to trace distance and working out the variational distance just as before yields $p_{\rm guess}(Z|B)_\psi\geq 1-\sqrt{2\epsilon}$.
\end{proof}
\section{Duality for protocols}
\label{sec:iidprotocols}
Having worked out the duality for approximate recoverability or secrecy, we can now begin investigating how the duality works for protocols designed to transform arbitrary inputs to the approximate case. Since the duality concerns transforming operations on $X^A$ into operations on $Z^A$ and vice versa, we first face the problem that operations on one necessarily affect the other in some way. By confining our analysis to PA and CSI protocols in which the outputs are linear functions of the inputs, we may avail ourselves of the stabilizer formalism, and this will enable us to ensure the back action from $X^A$ operations is consistent with the $Z^A$ transformation we wish to implement, and vice versa. A short description of those aspects of the stabilizer formalism needed here is given in Appendix~\ref{app:stabilizer}. We begin with the case of repurposing data compression into privacy amplification, as it is more straightforward.
\begin{theorem}
Let $\mathcal{P}^\epsilon_{\rm CSI}$ be a protocol for compressing $Z^A$ to a string $C$ of $\ell_{\rm CSI}^\epsilon$ bits via a linear compression encoding map $f:Z\rightarrow C=\{0,1\}^{\ell_{\rm CSI}^\epsilon}$. If $Z^A$ is $\epsilon$-recoverable by the decoding map $\mathcal{D}:(C,B)\rightarrow Z'$, then the encoder can be repurposed to extract $\lceil\log_2{\rm dim}(A)\rceil{-}\ell_{\rm CSI}^\epsilon$ $\sqrt{2\epsilon}$-secure bits from $X^A$ which are uncorrelated with $R$.
\end{theorem}
\begin{proof}
First we embed system $A$ into an integer number $\lceil \log_2 {\rm dim}(A)\rceil$ of qubits. Then, using the encoding map $f$ we can define a subsystem decomposition $A=\bar{A}\hat{A}$ using ${\bar{\bf{z}}}=f({\bf z})$ as detailed in Appendix~\ref{app:stabilizer}. This enables us to write the input state $\ket{\Psi}^{ABR}$ as
\begin{align*}
\ket{\Psi}^{ABR}&=\sum_{\bf z}\sqrt{p_{\bf z}}\ket{\bf z}^A\ket{\varphi_{\bf z}}^{BR}\\
&=\sum_{{\bar{\bf{z}}},{\hat{\bf{z}}}}\sqrt{p_{{\bf z}({\bar{\bf{z}}},{\hat{\bf{z}}})}}\ket{{\bar{\bf{z}}}}^{\bar{A}}\ket{{\hat{\bf{z}}}}^{\hat{A}}\ket{\varphi_{{\bf z}({\bar{\bf{z}}},{\hat{\bf{z}}})}}^{BR}.
\end{align*}
Since ${\bf z}$ is $\epsilon$-recoverable from the combined system $\bar{A}B$ by definition of the protocol,
Theorem~\ref{thm:recoverimpliessecure} applies. Therefore ${\hat{\bf{x}}}$, the result of measuring encoded $\hat{X}$ operators on $\hat{A}$, is $\sqrt{2\epsilon}$-secure against $R$. But ${\hat{\bf{x}}}=g_\perp({\bf x})$, for $g_\perp$ related to $f$ as in Appendix~\ref{app:stabilizer}, so $g_\perp$ defines a key extraction function.
As $f$ outputs an $\ell_{\rm CSI}^\epsilon$-bit string, the output of $g_\perp$ must be a string of $\lceil\log_2{\rm dim}(A)\rceil-\ell_{\rm CSI}^\epsilon$ bits.
\end{proof}
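The bit-counting at the end of the proof reflects simple $\mathbb{F}_2$ linear algebra: a full-rank $\ell^\epsilon_{\rm CSI}\times n$ compression matrix can always be completed to an invertible $n\times n$ matrix, and the completing rows supply an $(n-\ell^\epsilon_{\rm CSI})$-bit complementary output. The sketch below illustrates only this counting, for hypothetical sizes; it does not construct the actual dual function $g_\perp$ of Appendix~\ref{app:stabilizer}, which additionally respects the symplectic structure of the stabilizer formalism.

```python
import numpy as np

rng = np.random.default_rng(4)
n, ell = 6, 2                          # hypothetical: n-bit input, ell-bit compressed output

def rank_gf2(M):
    # Gaussian elimination over GF(2)
    M = M.copy() % 2
    r = 0
    for c in range(M.shape[1]):
        pivots = np.nonzero(M[r:, c])[0]
        if pivots.size == 0:
            continue
        M[[r, r + pivots[0]]] = M[[r + pivots[0], r]]
        for i in range(M.shape[0]):
            if i != r and M[i, c]:
                M[i] = (M[i] + M[r]) % 2
        r += 1
        if r == M.shape[0]:
            break
    return r

# Draw a full-rank ell x n binary matrix F (playing the role of the compressor f)
while True:
    F = rng.integers(0, 2, size=(ell, n))
    if rank_gf2(F) == ell:
        break

# Any G whose rows complete F to an invertible n x n matrix is a complementary
# map: together (F z, G z) determines z, just as the pair of outputs does above.
while True:
    G = rng.integers(0, 2, size=(n - ell, n))
    if rank_gf2(np.vstack([F, G])) == n:
        break
print(G.shape[0])   # n - ell = 4 complementary output bits
```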
Now we take up the converse. Again case (a) is similar to results found by Devetak and Winter in~\cite{devetak_distillation_2005}, though, because they do not use linear functions, they cannot directly interpret their use of privacy amplification as data compression of an independently-defined complementary observable. We shall return to this issue at the end of this section. Reiterating the statement made prior to Theorem~\ref{thm:secureimpliesrecover}, case (b) is more naturally suited to the data compression with side information scenario, whose input is a cq state by assumption.
\begin{theorem}
Let $\mathcal{P}^\epsilon_{\rm PA}$ be a protocol for privacy amplification of $X^A$ against $R$ consisting of a linear extraction map $g:X^A\rightarrow K=\{0,1\}^{\ell_{\rm PA}^\epsilon}$ and let the input be a pure state $\psi^{ABR}$ such that either \emph{(a)} $H(X^A|B)_\psi=0$ or \emph{(b)} $H(Z^A|R)_\psi=0$. If $\mathcal{P}^\epsilon_{\rm PA}$ produces $\ell_{\rm PA}^\epsilon$ $\epsilon$-secure bits, then the extraction map can then be used to define a compressor and corresponding decoding map which together can be used to compress $Z^A$ to $\lceil\log_2{\rm dim}(A)\rceil-\ell_{\rm PA}^\epsilon$ bits such that $Z^A$ is $\sqrt{2\epsilon}$-recoverable from the side information $B$ and compressed version $C$.
\end{theorem}
\begin{proof}
Start with case (a). Again we embed system $A$ into $d^A\equiv\lceil \log_2 {\rm dim}(A)\rceil$ qubits for simplicity. The input state has the form
\begin{align}
\ket{\Psi}^{ABR}&=\sum_{\bf x}\sqrt{q_{\bf x}}\cket{\bf x}^A \cket{\bf x}^{B_1}\ket{\vartheta_{\bf x}}^{B_2R}\nonumber\\
&=\sum_{{\bar{\bf{x}}},{\hat{\bf{x}}}}\sqrt{q_{{\bf x}}}\ket{{\bar{\bf{x}}}}^{\bar{A}}\ket{{\hat{\bf{x}}}}^{\hat{A}}\ket{{\bar{\bf{x}}}}^{\bar{B_1}}\ket{{\hat{\bf{x}}}}^{\hat{B_1}}\ket{\vartheta_{{\bf x}}}^{B_2R},
\label{eq:secureiid}
\end{align}
where we have used the subsystem decomposition $A=\bar{A}\hat{A}$ from ${\bar{\bf{x}}}=g({\bf x})$ and suppressed the dependence of ${\bf x}$ on $({\bar{\bf{x}}},{\hat{\bf{x}}})$.
By assumption ${\bar{\bf{x}}}$ is $\epsilon$-secure against $R$. Thus, Theorem~\ref{thm:secureimpliesrecover} applies to the division $\bar{A}|\hat{A}B|R$, and there exists a measurement $\Lambda^{\hat{A}B}_{\bar{\bf{z}}}$ such that ${\bar{\bf{z}}}$ is $\sqrt{2\epsilon}$-recoverable from $\hat{A}B$.
Since $\hat{A}$ is not directly available to the decoder, we must break the measurement down into a compressor with classical output and subsequent measurement of $B$ alone, conditional on this output. To do this, suppose $\hat{A}$ is measured in the $Z$ basis, producing ${\hat{\bf{z}}}$, which results in the state
\begin{align*}
\ket{\Psi_{\hat{\bf{z}}}}&=\sum_{{\bar{\bf{x}}},{\hat{\bf{x}}}}\sqrt{q_{\bf x}}\ket{{\bar{\bf{x}}}}^{\bar{A}}\ket{{\bar{\bf{x}}}}^{\bar{B}_1}\omega^{{\hat{\bf{x}}}\cdot{\hat{\bf{z}}}}\ket{{\hat{\bf{x}}}}^{\hat{B}_1}\ket{\vartheta_{{\bf x}}}^{B_2R}\\
&=\sum_{{\bar{\bf{x}}},{\hat{\bf{x}}}}\sqrt{q_{\bf x}}\ket{{\bar{\bf{x}}}}^{\bar{A}}\ket{{\bar{\bf{x}}}}^{\bar{B}_1}\left(X^{{\hat{\bf{z}}}}\right)^{\hat{B}_1}\ket{{\hat{\bf{x}}}}^{\hat{B}_1}\ket{\vartheta_{{\bf x}}}^{B_2R}
\end{align*}
with probability $2^{-(d^A-\ell_{\rm PA}^\epsilon)}$. All ${\hat{\bf{z}}}$ dependence drops out when tracing out the $B$ systems, so the marginal states of $R$ conditional on ${\bar{\bf{x}}}$ are the same as in~(\ref{eq:secureiid}). Therefore, Theorem~\ref{thm:secureimpliesrecover} implies ${\bar{\bf{z}}}$ is $\sqrt{2\epsilon}$-recoverable from $B$ alone for each value of ${\hat{\bf{z}}}$. Since the pair $({\bar{\bf{z}}},{\hat{\bf{z}}})$ fixes the value of ${\bf z}$, ${\hat{\bf{z}}}\equiv f_\perp({\bf z})$ is a suitable compression map enabling $\sqrt{2\epsilon}$-recovery of ${\bf z}$ from $B$ and $C=f_\perp(Z^A)$.
Now consider case (b), whose input state is of the form
\begin{align}
\ket{\Psi}&=\sum_{\bf z}\sqrt{p_{\bf z}}\ket{\bf z}^A\ket{\bf z}^{R_1}\ket{\varphi_{\bf z}}^{BR_2}\label{eq:caseb1}\\
&=\tfrac{1}{\sqrt{d^n}}\sum_{{\bf x}}\cket{\bf x}^A (Z^{\bf x})^{R_1}\ket{\Theta}^{BR_2}\nonumber\\
&=\tfrac{1}{\sqrt{d^n}}\sum_{{\bar{\bf{x}}},{\hat{\bf{x}}}}\ket{{\bar{\bf{x}}}}^{\bar A}\ket{{\hat{\bf{x}}}}^{\hat{A}} (Z^{{\bar{\bf{x}}}})^{\bar{R}_1}(Z^{{\hat{\bf{x}}}})^{\hat{R}_1}\ket{\Theta}^{BR_2}.\nonumber
\end{align}
Here we have converted to the alternate form in the second equation, following~(\ref{eq:alternate}), with $\ket{\Theta}^{BR}=\sum_{\bf z}\sqrt{p_{\bf z}}\ket{\bf z}^{R_1}\ket{\varphi_{\bf z}}^{BR_2}$. For the third equation we again use the subsystem decomposition for $A=\bar{A}\hat{A}$ as well as $R_1=\bar{R}_1\hat{R}_1$.
By assumption, ${\bar{\bf{x}}}$ is $\epsilon$-secure against $R$, so just as for case (a) Theorem~\ref{thm:secureimpliesrecover} applies to the division $\bar{A}|\hat{A}B|R$ and implies there exists a measurement $\Lambda^{\hat{A}B}_{\bar{\bf{z}}}$ such that ${\bar{\bf{z}}}$ is $\sqrt{2\epsilon}$-recoverable from $\hat{A}B$.
This measurement can be broken down into a compression map with classical output followed by measurement of $B$ alone following the technique used in the previous case. This time, we model the measurement quantum-mechanically, as the transformation $\ket{{\hat{\bf{z}}}}^{\hat{A}}\rightarrow \ket{{\hat{\bf{z}}}}^{C}\ket{{\hat{\bf{z}}}}^{R_3}$. However, from~(\ref{eq:caseb1}) it is clear that the same effect can be achieved by the transformation $\ket{{\hat{\bf{z}}}}^{\hat{A}}\rightarrow \ket{{\hat{\bf{z}}}}^{C}$ followed by $\ket{{\hat{\bf{z}}}}^{\hat{R}_1}\rightarrow \ket{{\hat{\bf{z}}}}^{\hat{R}_1}\ket{{\hat{\bf{z}}}}^{{R}_3}$. In other words, there is no need to distribute ${\hat{\bf{z}}}$ to $R$, since $R$ already has a copy. Thus, the effect of the measurement is simply to transfer $\hat{A}$ to $C$. Using the function ${\hat{\bf{z}}}\equiv f_\perp({\bf z})$ as the compressor therefore ensures that ${\bar{\bf{z}}}$, and hence ${\bf z}$, is $\sqrt{2\epsilon}$-recoverable from $(B,C)$.
In both cases $f_\perp$ outputs $d^A-\ell^\epsilon_{\rm PA}$ bits when $g$ outputs $\ell^\epsilon_{\rm PA}$, completing the proof.\end{proof}
\section{Discussion \& Applications}
\label{sec:applications}
The reasons for restricting attention to linear hashing techniques in this analysis should now be more understandable. Since the duality between PA and CSI is meant to hold for complementary observables, it is not a priori clear that, e.g.\ a given privacy amplification function applied to $X^A$ has a well-defined action on $Z^A$, let alone the desired one. However, the use of linear hashing to deal with this problem is only shown here to be sufficient, not necessary, and it would be nice to understand more precisely under what circumstances this duality holds.
This issue is somewhat subtle, and deserves further comment. By the results in Sec.~\ref{sec:approx}, once, say, privacy amplification of $X^A$ against $R$ has been performed, it is certainly possible to define an appropriate complementary observable $Z^A$ so that it is recoverable from $B$. However, this observable generally has nothing whatsoever to do with a complementary observable that we might have defined for the input to the privacy amplification procedure, and in particular, the two need not commute so as to be simultaneously well-defined. For instance, in~\cite{devetak_distillation_2005}, privacy amplification is used as the second step of an entanglement distillation protocol. Since the output is entangled, both $X^A$ and $Z^A$ complementary observables are recoverable from $B$. But these observables have nothing to do with complementary observables one would have defined for the \emph{input} to the protocol, so one cannot say the PA procedure performs CSI. Thus, while $\epsilon$-security of $X^A$ and $\epsilon$-recovery of $Z^A$ always go hand in hand, it does not follow from that alone that PA and CSI protocols necessarily do, too.
On the other hand, in many situations in quantum information processing, such as in~\cite{devetak_distillation_2005}, this distinction is not important.
Perhaps the most direct application of our results is a general entropic uncertainty relation formulated in terms of the smooth conditional min- and max-entropies~\footnote{That such a consequence ought to hold was pointed out by Matthias Christandl.}. Using the upper bound of~(\ref{eq:csi}), Theorem~\ref{thm:recoverimpliessecure}, and the lower bound of~(\ref{eq:pa}) for an input system whose dimension $d$ is a power of two, we immediately obtain
\begin{align}
\log_2 d\leq \hmin{\sqrt{2\sqrt{2\epsilon}}}(X^A|R)_\psi+\hmax{\epsilon_1}(Z^A|B)_\psi+2\log\tfrac{1}{\epsilon_2}+4
\end{align}
for $\epsilon=\epsilon_1+\epsilon_2$. From the definition of the smoothed conditional max-entropy it follows that $\hmax{\epsilon'}(Z^A|B)\leq \hmax{\epsilon}(Z^A|B)$ for $\epsilon'\geq \epsilon$, so if we choose $\epsilon_1=\epsilon_2=\frac{\epsilon}{2}$ and $\epsilon=\frac{\delta^4}{8}$, the above expression can be transformed into the more appealing form
\begin{align}
\hmin{\delta}(X^A|R)_\psi+\hmax{\delta}(Z^A|B)_\psi\geq \log_2 d-8\log\tfrac{1}{\delta}-12.
\end{align}
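For completeness, the parameter bookkeeping behind this simplification can be checked directly. With $\epsilon=\delta^4/8$ and $\epsilon_1=\epsilon_2=\epsilon/2=\delta^4/16$, one finds

```latex
\begin{align*}
\sqrt{2\sqrt{2\epsilon}} &= \sqrt{2\sqrt{\delta^4/4}} = \sqrt{\delta^2} = \delta,\\
2\log\tfrac{1}{\epsilon_2}+4 &= 2\log\tfrac{16}{\delta^4}+4 = 8\log\tfrac{1}{\delta}+12,
\end{align*}
```

so the smoothing parameter of the min-entropy term becomes exactly $\delta$ and the additive constants collect into $8\log\tfrac{1}{\delta}+12$.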
This extends the recent work on uncertainty principles valid in the presence of quantum memory~\cite{renes_conjectured_2009,berta_entropic_2009} to the smooth min- and max-entropy. Due to the operational interpretations of these quantities~\cite{konig_operational_2009}, this relation should be useful in the analysis of quantum information processing protocols.
Another application of this work is to a new approximate quantum error-correcting condition. This will be explored more fully in a future publication, but we can already give a brief overview here. Essentially, the quantum decoupling condition of~\cite{schumacher_approximate_2002} mentioned in the introduction can be broken down into two classical pieces. That condition states that $AB$ is maximally entangled when $A$ is completely uncorrelated with the purification system $R$ and $A$ itself is in a maximally mixed state. Approximate quantum error-correcting procedures can then be constructed by approximately decoupling $R$. The entanglement distillation procedure of Devetak and Winter~\cite{devetak_distillation_2005} implicitly gives a different characterization, saying that $AB$ is maximally entangled if $Z^A$ is recoverable from $B$ and secure from $R$. Using the duality of these recoverability and security notions, there are in principle two other equivalent characterizations of approximate entanglement, from which approximate quantum error-correcting procedures can likewise be constructed. The first states that $AB$ is maximally entangled if both $X^A$ and $Z^A$ are recoverable from $B$, a condition which was implicitly explored in~\cite{renes_physical_2008}. The second is the classical decomposition of the quantum decoupling condition: $AB$ is maximally entangled if both $X^A$ and $Z^A$ are secure from $R$, with the additional proviso that one of them, say $X^A$, is secure not just from $R$ but from $R$ together with a copy of $Z^A$.
\begin{acknowledgements}
JMR acknowledges useful discussions with and careful reading of the manuscript by Mark M.\ Wilde and Matthias Christandl. Financial support was provided by the Center for Advanced Security Research Darmstadt (www.cased.de).
\end{acknowledgements}
\section{Introduction}
In this paper we study local $\wp$-adic problems which have their origin in the famous work of Gross and Zagier~\cite{gross-zagier}. There, Gross and Zagier give a relation between a Heegner point of discriminant $D$ on $X_0(N)$ and the $L$-function attached to a modular newform $f$ of weight $2$ and level $N$ and a character $\chi$ of the class group of $K=\mathbb Q(\sqrt D)$, in case $D$ is squarefree and prime to $N$:
\begin{equation}\label{einleitung_gross-zagier}
L'(f,\chi,s=\frac{1}{2})=\mathrm{const}\cdot\hat h(c_{\chi,f}).
\end{equation}
Here, $\hat h$ is the height on $\operatorname{Jac}(X_0(N))$ and $c_{\chi,f}$ is a component of the Heegner class depending on $\chi$ and $f$.
The proof involves a detailed study of the local heights as well as Rankin's method and holomorphic projection. The Fourier coefficients of the Rankin kernel happen to equal certain coefficients of the cycle under the action of Hecke correspondences.
Applying this formula to elliptic curves attached to such $f$ sa\-tis\-fying $\operatorname{ord}_{s=1}L(E,s)=1$, Gross and Zagier find points of infinite order on $E(\mathbb Q)$.
\medskip
Zhang~\cite{zhang}, \cite{zhang2} improves these results in several respects.
He gives a kernel function for the Rankin-Selberg $L$-function which satisfies a functional equation similar to that for $L$. This enables him to compute the central values and the holomorphic projection without the technical difficulty in \cite{gross-zagier} of taking the trace down to the level of the newform. Moreover, he can compare this kernel directly with the height pairing.
Zhang switches the point of view to a completely local one. Thus, modular forms are now viewed as automorphic representations of $\operatorname{GL}_2$ and $\operatorname{CM}$-points are studied on the Shimura variety. (See Chapter~\ref{section_definitionen} for the concrete definitions.) The height pairing of $\operatorname{CM}$-cycles is replaced by a geometric pairing of Schwartz functions $\phi,\psi\in\mathcal S(\chi,\mathbf G(\mathbb A))$,
\begin{equation*}
<\phi,\psi>=\sum_\gamma m_\gamma <\phi,\psi>_\gamma.
\end{equation*}
While the geometric input is encoded in the multiplicities $m_\gamma$, the coefficients \mbox{$<\phi,\psi>_\gamma$} are now purely ad\`elic integrals.
In this way, finding a $\operatorname{CM}$-point fitting into (\ref{einleitung_gross-zagier}) is replaced by exhibiting local Schwartz functions for which the local components (the so-called local linking numbers) of $<\phi,\psi>_\gamma$ correspond to the local Fourier coefficients of the kernel. These identities are called local Gross-Zagier formulae. (See for example Theorem~\ref{zhangs_local_Gross-Zagier} or the original \cite{zhang}, Lemma~4.3.1.)
In the meantime, Zhang et al.~\cite{zhang3} have proved these results with no level constraint at all.
\medskip
In this paper, we study the local correspondences between the local linking numbers and the local Fourier coefficients qualitatively at finite places not dividing $2$. We look at the general local linking numbers as well as the Fourier coefficients of the general Mellin transforms of which the $L$-function is the common divisor.
We characterize the local linking numbers as functions on the projective line satisfying certain properties (Proposition~\ref{eigenschaften_nonsplit}, \ref{eigenschaften_split} and \ref{charakterisierung_der_LLN}).
The Fourier coefficients are products of Whittaker functions of the automorphic representations occurring in the Rankin-Selberg convolution, that is, the ``theta series'' $\Pi(\chi)$ and the ``Eisenstein series'' $\Pi(\lvert\cdot\rvert^{s-\frac{1}{2}},\lvert\cdot\rvert^{\frac{1}{2}-s}\omega)$. We find that the spaces of these two kinds of functions are essentially the same (Theorem~\ref{satz_matching}).
We construct an operator on local linking numbers that reflects the Hecke operator on the analytic side (Theorem~\ref{satz_matching_operator}).
This ``geometric Hecke'' operator is then tested quantitatively in the setting of Zhang's local Gross-Zagier formula \cite{zhang}. It produces concrete results equivalent to the local Gross-Zagier formulae (Theorems~\ref{neuformulierung_local_Gross-Zagier} and \ref{zhangs_local_Gross-Zagier}).
\medskip
Our techniques are computational and constructively explicit throughout.
While the background is highly technical and requires substantial theoretical input, we gain insight into the local identities by extensive but ultimately elementary computations ($\wp$-adic integration).
In adapting Zhang's setting we give evidence that the local Gross-Zagier formulae are not exceptional or isolated but instances of a more general identity between spaces of functions.
In other words, we provide the geometric (height) pairing with a vast, explicitly described class of local test vectors.
This makes evident the need for a ``geometric Hecke'' operator which is defined universally on all local linking numbers, not only on the special one considered by Zhang (Section~\ref{section_zhang_referiert} resp. \cite{zhang}, Chap.~4.1).
By giving such a universal operator,
which in addition reproduces Zhang's local Gross-Zagier formulae, we arrive at a quite general picture of what underlies these formulae.
It is natural to ask whether analogous results are possible in higher genus.
At the moment we see no way to obtain them: the brute-force computations pursued here would produce a computational overflow that is no longer manageable.
\medskip
This article is a condensed account of a doctoral thesis~\cite{diss}.
In Section~\ref{section_geom_hintergrund} we review the geometric background needed to define the local linking numbers (Definition~\ref{def_lln}). This is mainly due to Zhang (\cite{zhang}, Chapter~4.1). Note that we have no level constraint because we make no use of the newform at all.
The only restriction on the data is that the conductor of the id\`ele character $\chi$ is coprime to the discriminant of the imaginary quadratic field $\mathbb K$ over the totally real field $\mathbb F$.
For computations we prefer a parametrization (in the variable $x\in F^\times$) of the local linking numbers which differs from Zhang's parametrization in $\xi$ but is related to it by $x=\frac{\xi}{\xi-1}$.
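As a quick illustration of ours (not part of the text's argument), the change of variables $x=\frac{\xi}{\xi-1}$ is an involution: applying the same map to $x$ recovers $\xi$, so passing between the two parametrizations costs nothing. A minimal sketch over the rationals:

```python
from fractions import Fraction

def to_x(xi):
    # the substitution x = xi / (xi - 1) relating the two parametrizations
    return xi / (xi - 1)

# away from xi = 1 the map is its own inverse
for xi in (Fraction(2), Fraction(3, 5), Fraction(-7, 4)):
    assert to_x(to_x(xi)) == xi
```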
Roughly speaking, when (locally at a finite place) $G$ is an inner form of the projective group $\operatorname{PGL}_2(F)$ having a maximal torus $T$ isomorphic to $F^\times\backslash K^\times$ and when $\mathcal S(\chi, G)$ is the Schwartz space of functions transforming like $\chi$ under $T$, then the local linking number of $\phi,\psi\in\mathcal S(\chi,G)$ is given by
\begin{equation}\label{gl_einleitung_lln}
<\phi,\psi>_x=\int_{T\backslash G}\int_T\phi(t^{-1}\gamma(x)ty)~dt~\bar\psi(y)~dy,
\end{equation}
where $\gamma(x)$ is a carefully chosen representative of a double coset in $T\backslash G/T$.
These local linking numbers are studied in Chapter~\ref{section_charakterisierung}. They are functions on $F^\times$ characterized by four properties. If $K/F$ is split, then they are exactly the locally constant functions vanishing around $1$ and exhibiting certain characteristic behavior as $\lvert x\rvert\to 0$ resp. $\lvert x\rvert\to\infty$ (Proposition~\ref{eigenschaften_split}). Similar conditions hold for a field extension $K/F$ (Proposition~\ref{eigenschaften_nonsplit}).
The data used from the theory of automorphic forms are summarized in Section~\ref{section_autom_formen_hintergrund}.
The Rankin-Selberg $L$-series is the common divisor of all the associated Mellin transforms. Their Fourier coefficients are given by products of Whittaker functions both from the theta series $\Pi(\chi)$ and the Eisenstein series. While for the $L$-function essentially the newforms are used (\cite{zhang}, Chapter~3.3), here all Whittaker functions are taken into account. More precisely, only the Kirillov functions are needed which are described by classical results. In this way the results on the analytic side are direct conclusions from well-known facts on automorphic representations of $\operatorname{GL}_2(F)$.
The matching of local linking numbers and Whittaker products is shown in Chapter~\ref{section_matching}.
The rest of the paper is devoted to the exploration of a geometric Hecke operator. This operator is roughly speaking a weighted average of translations of local linking numbers, namely of
\begin{equation*}
<\phi,\begin{pmatrix}b&0\\0&1\end{pmatrix}.\psi>_x,
\end{equation*}
where $b\in F^\times$. This translation is a first natural candidate for a possible operator, since the Hecke operator likewise acts on Whittaker products essentially via translation by $b\in F^\times$ (Proposition~\ref{prop_analytischer_Hecke_allgemein}).
The translated local linking numbers are studied in Chapter~\ref{section_translation}.
There is a crucial difference in their properties as well as in the proofs according to whether the torus $T=F^\times\backslash K^\times$ is or is not compact.
In the first case, the inner integral of the local linking numbers (\ref{gl_einleitung_lln}) has compact support which allows a quick insight into the behavior of the translation (Section~\ref{section_compact}): Fixing $x$, the translation is a compactly supported locally constant function in $b\in F^\times$ (Proposition~\ref{prop_translatiert_kompakt}). This case is completed by an explicit Example~\ref{bsp_lln_translatiert_kompakt} which is proved in Appendix~A.
In case of a noncompact torus $T$ (i.e., $K/F$ split) the inner integral is no longer compactly supported, which complicates both the study and the results enormously: terms in the absolute value $\lvert b\rvert^{\pm 1}$ and in the valuation $v(b)$ occur (Theorem~\ref{satz_translatiert_nicht_kompakt}). The proof in~\cite{diss} takes one hundred pages of extensive $\wp$-adic integration which cannot be reproduced here. We sketch the outline of this proof and refer to \cite{diss}, Chapter~8, for the computations. What is more, we include an Example~\ref{bsp_lln_nichtkompakt}, proved in Appendix~B, to give a flavour of what is going on. Moreover, the functions considered in Examples~\ref{bsp_lln_translatiert_kompakt} and \ref{bsp_lln_nichtkompakt} are those used in \cite{zhang} for local Gross-Zagier.
The geometric Hecke operator is studied in Chapter~\ref{section_geom_Hecke}.
The translations themselves do not realize the leading terms of the asymptotic behavior of the translated Whittaker products, that is, the behavior of the analytic Hecke operator.
The operator is constructed so that it satisfies this requirement (Theorem~\ref{satz_matching_operator}). Such an operator is not uniquely determined; we choose it so that the subsequent results take a clean form.
Finally, this operator is tested by rewriting the local Gross-Zagier formula in terms of it.
For this, we first report the results of \cite{zhang}, Chapter~4, as far as we need them (Section~\ref{section_zhang_referiert}). Thus, Zhang's results can be compared directly to ours by the reader. We also give shorter proofs than those in \cite{zhang}.
The action of the Hecke operator constructed in Chapter~\ref{section_geom_Hecke} on the local linking numbers used in \cite{zhang} (resp. Examples~\ref{bsp_lln_translatiert_kompakt} and \ref{bsp_lln_nichtkompakt}) is given in Section~\ref{section_meine_Gross-Zagier_formel}. It produces the leading term of the local Gross-Zagier formula (Theorem~\ref{neuformulierung_local_Gross-Zagier}). Moreover, in case of a compact torus $T$ the result equals exactly that of \cite{zhang}.
\section{Terminology and preparation}\label{section_definitionen}
\subsection{Embedding in the geometric background}\label{section_geom_hintergrund}
The local linking numbers were defined by Zhang~\cite{zhang} and the concrete geometric setting here goes back to that there (Chapter~4).
\subsubsection{Global data}
We start with a swoop from the global framework to the local data we are after.
Let $\mathbb F$ be a totally real algebraic number field and let $\mathbb K$ be an imaginary quadratic extension of $\mathbb F$. Further, let $\mathbb D$ be a division quaternion algebra over $\mathbb F$ which contains $\mathbb K$ and splits at the archimedean places. Let $\mathbf G$ denote the inner form of the projective group $\operatorname{PGL}_2$ over $\mathbb F$ which is given by the multiplicative group $\mathbb D^\times$,
\begin{equation*}
\mathbf G(\mathbb F)=\mathbb F^\times\backslash \mathbb D^\times.
\end{equation*}
Let $\mathbf T$ be the maximal torus of $\mathbf G$ given by $\mathbb K^\times$, i.e. $\mathbf T(\mathbb F)=\mathbb F^\times\backslash \mathbb K^\times$. Let $\mathbb A_{\mathbb F}$ (resp.~$\mathbb A_{\mathbb K}$) be the ad\`eles of $\mathbb F$ (resp.~$\mathbb K$) and let $\mathbb A_{\mathbb F,f}=\prod_{v\mid\infty}1\prod_{v\nmid\infty}'\mathbb F_v$ be the subset of finite ad\`eles.
On $\mathbf T(\mathbb F)\backslash\mathbf G(\mathbb A_{\mathbb F,f})$ there is an action of $\mathbf T(\mathbb A_{\mathbb F,f})$ from the left and an action of $\mathbf G(\mathbb A_{\mathbb F,f})$ from the right. The factor space $\mathbf T(\mathbb F)\backslash\mathbf G(\mathbb A_{\mathbb F,f})$ can be viewed as the set of $\operatorname{CM}$-points of the Shimura variety defined by the inverse system of
\begin{equation*}
\operatorname{Sh}_K := \mathbf G(\mathbb F)^+\backslash \mathcal H_1^n \times \mathbf G(\mathbb A_{\mathbb F,f})/K.
\end{equation*}
Here $K$ runs through the sufficiently small compact open subgroups of $\mathbf G(\mathbb A_{\mathbb F,f})$, $\mathcal H_1$ is the upper halfplane, and $n$ is the number of infinite places of $\mathbb F$. The $\operatorname{CM}$-points are embedded in $\operatorname{Sh}_K$ by mapping the coset of $g\in\mathbf G(\mathbb A_{\mathbb F,f})$ to the coset of $(z,g)$, where $z\in\mathcal H_1^n$ is fixed by $\mathbf T$.
Let $\mathcal S(\mathbf T(\mathbb F)\backslash\mathbf G(\mathbb A_{\mathbb F,f}))$ be the Schwartz space, i.e. the space of complex valued functions on $\mathbf T(\mathbb F)\backslash\mathbf G(\mathbb A_{\mathbb F,f})$ which are locally constant and of compact support. A character of $\mathbf T$ shall be a character $\chi$ of $\mathbf T(\mathbb F)\backslash\mathbf T(\mathbb A_{\mathbb F,f})$, that is, a character of $\mathbb A_{\mathbb K,f}^\times/\mathbb K^\times$ trivial on $\mathbb A_{\mathbb F,f}^\times/\mathbb F^\times$. In particular, $\chi=\prod\chi_v$ is the product of its local unitary components. One has
\begin{equation*}
\mathcal S(\mathbf T(\mathbb F)\backslash\mathbf G(\mathbb A_{\mathbb F,f}))=\oplus_\chi \mathcal S(\chi,\mathbf T(\mathbb F)\backslash\mathbf G(\mathbb A_{\mathbb F,f})),
\end{equation*}
where $\mathcal S(\chi,\mathbf T(\mathbb F)\backslash\mathbf G(\mathbb A_{\mathbb F,f}))$ is the subspace of those functions $\phi$ transforming under $\mathbf T(\mathbb A_{\mathbb F,f})$ by $\chi$, i.e. for $t\in \mathbf T(\mathbb A_{\mathbb F,f})$ and $g\in \mathbf G(\mathbb A_{\mathbb F,f})$: $\phi(tg)=\chi(t)\phi(g)$. Any such summand is made up by its local components,
\begin{equation*}
\mathcal S(\chi,\mathbf T(\mathbb F)\backslash\mathbf G(\mathbb A_{\mathbb F,f}))=\otimes_v \mathcal S(\chi_v,\mathbf G(\mathbb A_{\mathbb F_v})).
\end{equation*}
A pairing on $\mathcal S(\chi,\mathbf T(\mathbb F)\backslash\mathbf G(\mathbb A_{\mathbb F,f}))$ can be defined as follows.
For functions $\phi,\psi$ in $\mathcal S(\chi,\mathbf T(\mathbb F)\backslash\mathbf G(\mathbb A_{\mathbb F,f}))$ and a double coset $[\gamma]\in \mathbf T(\mathbb F)\backslash \mathbf G(\mathbb F)/\mathbf T(\mathbb F)$ define the {\bf linking number}
\begin{equation}\label{def_globallinkingnumber0}
<\phi,\psi>_\gamma:=\int_{\mathbf T_\gamma(\mathbb F)\backslash \mathbf G(\mathbb A_{\mathbb F,f})}\phi(\gamma y)\bar\psi(y)~dy,
\end{equation}
where $\mathbf T_\gamma=\gamma^{-1}\mathbf T\gamma\cap\mathbf T$. For $\gamma$ which normalize $\mathbf T$ one has $\mathbf T_\gamma=\mathbf T$; otherwise $\mathbf T_\gamma$ is trivial. Here $dy$ denotes the quotient measure of nontrivial Haar measures on $\mathbf G$ and $\mathbf T$, to be normalized appropriately later on. Further, let
\begin{equation*}
m:\mathbf T(\mathbb F)\backslash \mathbf G(\mathbb F)/\mathbf T(\mathbb F)\rightarrow \mathbb C
\end{equation*}
be a multiplicity function. Then
\begin{equation*}
<\phi,\psi>:=\sum_{[\gamma]}m([\gamma])<\phi,\psi>_\gamma
\end{equation*}
defines a sesquilinear pairing on $\mathcal S(\chi,\mathbf T(\mathbb F)\backslash\mathbf G(\mathbb A_{\mathbb F,f}))$. While determining the multiplicity function is an essentially global problem, the coefficients $<\phi,\psi>_\gamma$ are the data linking global height pairings on curves with local approaches. Their local components are the subject of this paper.
\subsubsection{Local data}
In studying the local components of the linking numbers~(\ref{def_globallinkingnumber0}), we restrict to the nondegenerate case, i.e. the case that $\gamma$ does not normalize the torus $\mathbf T$. First notice that
\begin{equation}\label{def_globallinkingnumber}
<\phi,\psi>_\gamma=\int_{\mathbf T(\mathbb A_{\mathbb F,f})\backslash \mathbf G(\mathbb A_{\mathbb F,f})}\int_{\mathbf T(\mathbb A_{\mathbb F,f})}\phi(t^{-1}\gamma ty)~dt~\bar\psi(y)~dy.
\end{equation}
Assume $\phi=\prod_v\phi_v$ and $\psi=\prod_v\psi_v$. Then
\begin{equation*}
\int_{\mathbf T(\mathbb A_{\mathbb F,f})}\phi(t^{-1}\gamma ty)~dt=\prod_v\int_{\mathbf T(F_v)}\phi_v(t_v^{-1}\gamma_v t_vy_v)~dt_v
\end{equation*}
as well as $<\phi,\psi>_\gamma=\prod_v<\phi,\psi>_{\gamma,v}$, where
\begin{equation}
<\phi,\psi>_{\gamma,v}:=\int_{\mathbf T(\mathbb F_v)\backslash \mathbf G(\mathbb F_v)}\int_{\mathbf T(\mathbb F_v)}\phi_v(t_v^{-1}\gamma_v t_vy_v)~dt_v~\bar\psi_v(y_v)~dy_v.
\end{equation}
Here one has to observe that the local components $<\phi,\psi>_{\gamma,v}$ depend on the choice of the representative $\gamma$, while $<\phi,\psi>_\gamma$ does not. Thus, one has to work a little to obtain a clean definition.
As all of the following is local, we simplify notation:
Let $F$ denote a localization of $\mathbb F$ at a finite place which does not divide $2$. Then $K$ is the quadratic extension of $F$ coming from $\mathbb K$. $K$ can be a field, $K=F(\sqrt A)$, or a splitting algebra $K=F\oplus F$. For $t\in K$, let $\bar t$ denote the Galois conjugate of $t$ (resp. $\overline{(x,y)}=(y,x)$ in the split case). The ring of integers of $F$ (resp.~$K$) is $\mathbf o_F$ (resp.~$\mathbf o_K$). It contains the maximal ideal $\wp_F$ (resp.~$\wp_K$, where in the split case $\wp_K:=\wp_F\oplus\wp_F$). Let $\pi_F$ be a prime element of $\mathbf o_F$. If no confusion can arise, one writes $\wp$ (resp.~$\pi$) for $\wp_F$ (resp.~$\pi_F$). The residue class field of $F$ has characteristic $p$ and $q$ elements. Further, let $\omega$ be the quadratic character of $F^\times$ given by the extension $K/F$, that is, $\omega(x)=-1$ if and only if $x$ is not in the image of the norm of $K/F$.
Let $D:=\mathbb D(F)$, $T:=\mathbf T(F)$ and $G:=\mathbf G(F)$.
By Wedderburn-Artin there are two possibilities: Either the quaternion algebra $D$ is split, i.e. $D\cong M_2(F)$ and $G\cong \operatorname{PGL}_2(F)$. Or $D$ is not split, i.e. a division ring over $F$. Then $G=F^\times\backslash D^\times$ is a nonsplit inner form of $\operatorname{PGL}_2(F)$. One defines
\begin{equation}\label{def_delta(D)}
\delta(D):=\left\{\begin{matrix}0,&\textrm{if $D$ is split}\\1,&\textrm{if $D$ is not split}\end{matrix}\right..
\end{equation}
In general, there exists $\epsilon\in D^\times$ such that $\epsilon t=\bar t\epsilon$ for all $t\in K$ and such that
\begin{equation*}
D=K+\epsilon K.
\end{equation*}
Then $c:=\epsilon^2\in F^\times$. Let $\operatorname N$ denote the reduced norm on $D$. Restricted to $K$ this is the norm of the extension $K/F$. One has for $\gamma_1+\epsilon\gamma_2\in D$
\begin{equation*}
\operatorname N(\gamma_1+\epsilon\gamma_2)=\operatorname N(\gamma_1)-c\operatorname N(\gamma_2),
\end{equation*}
as $\operatorname N(\epsilon)=-\epsilon^2=-c$. Thus, $D$ splits exactly in the case $c\in\operatorname N(K^\times)$.
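In the split case $D\cong M_2(F)$ this norm identity can be checked concretely: embed $K=F\oplus F$ as diagonal matrices, take $\epsilon=\left(\begin{smallmatrix}0&1\\1&0\end{smallmatrix}\right)$ (so $c=\epsilon^2=1\in\operatorname N(K^\times)$), and let $\operatorname N$ be the determinant. The following sketch (our illustration, not part of the text) verifies $\operatorname N(\gamma_1+\epsilon\gamma_2)=\operatorname N(\gamma_1)-c\operatorname N(\gamma_2)$ on random rational samples.

```python
from fractions import Fraction
import random

def det2(m):
    # determinant of a 2x2 matrix [[a, b], [c, d]] -- the reduced norm on M_2(F)
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

random.seed(0)
c = 1  # epsilon^2 for epsilon = [[0, 1], [1, 0]]
for _ in range(100):
    a, b, u, v = (Fraction(random.randint(-9, 9)) for _ in range(4))
    # gamma1 = diag(a, b), gamma2 = diag(u, v), epsilon * gamma2 = [[0, v], [u, 0]]
    g = [[a, v], [u, b]]                   # gamma1 + epsilon * gamma2
    assert det2(g) == a * b - c * (u * v)  # N(gamma1) - c N(gamma2)
```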
With these notations, one can parametrize the double cosets $[\gamma]\in T\backslash G/T$ by the projective line:
\begin{defn}\label{def_P_funktion}
Let $P:T\backslash G/T\rightarrow \mathbb P^1(F)$ be defined by
\begin{equation*}
P(\gamma_1+\epsilon\gamma_2):=\frac{c\operatorname N(\gamma_2)}{\operatorname N(\gamma_1)}
\end{equation*}
for $\gamma_1+\epsilon\gamma_2\in D^\times$.
\end{defn}
We check that this is in fact well-defined: $P(t(\gamma_1+\epsilon\gamma_2)t')=P(\gamma_1+\epsilon\gamma_2)$ for all $t,t'\in K^\times$. The non-empty fibres of $P$ not lying over $0$ or $\infty$ are exactly the nondegenerate double cosets.
In case that $K/F$ is a field extension, $P$ is injective with range $c\operatorname N(K^\times)\cup\{0,\infty\}$.
In case $K/F$ is split, the range of $P$ is $(F^\times\setminus\{1\})\cup\{0,\infty\}$ and the fibres over $F^\times\setminus\{1\}$ consist of single double cosets (\cite{jacquet}).
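The well-definedness of $P$ can likewise be observed in the split matrix model (a sketch of ours, under the same assumptions as above): with $K$ diagonal and $\epsilon=\left(\begin{smallmatrix}0&1\\1&0\end{smallmatrix}\right)$, a matrix $g=\left(\begin{smallmatrix}a&v\\u&b\end{smallmatrix}\right)$ decomposes as $\gamma_1+\epsilon\gamma_2$ with $\gamma_1=\operatorname{diag}(a,b)$ and $\gamma_2=\operatorname{diag}(u,v)$, so $P(g)=\frac{uv}{ab}$, which is visibly unchanged under the two-sided action of the diagonal torus.

```python
from fractions import Fraction
import random

def P(g):
    # P(gamma1 + eps*gamma2) = c N(gamma2) / N(gamma1) with c = 1 in this model
    (a, v), (u, b) = g
    return (u * v) / (a * b)

def mul(m, n):
    # 2x2 matrix product
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

random.seed(1)
for _ in range(50):
    a, b, u, v = (Fraction(random.randint(1, 9)) for _ in range(4))
    t1, t2, s1, s2 = (Fraction(random.randint(1, 9)) for _ in range(4))
    g = [[a, v], [u, b]]
    t = [[t1, 0], [0, t2]]  # elements of the diagonal torus T
    s = [[s1, 0], [0, s2]]
    assert P(mul(t, mul(g, s))) == P(g)  # P is constant on double cosets T g T
```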
Of course this is just one possibility of parametrization. Zhang~\cite{zhang} (Chapter~4) for example uses $\xi:=\frac{P}{P-1}$ to which we will come back eventually.
\begin{lem}
(\cite{zhang} Chapter~4) Let $\gamma\in D^\times$. In each double coset $T\gamma T$ of $G$ there exists exactly one $T$-conjugacy class of trace zero.
\end{lem}
Now the local components $<\phi,\psi>_\gamma$ of the linking numbers can be declared precisely:
\begin{defn}\label{def_lln}
Let $\phi, \psi\in\mathcal S(\chi,G)$. For $x\in F^\times$ the {\bf local linking number} is defined by
\begin{equation*}
<\phi,\psi>_x:=<\phi,\psi>_{\gamma(x)}
\end{equation*}
if there is a trace-zero preimage $\gamma(x)\in D^\times$ of $x$ under $P$. If no such preimage exists, then $<\phi,\psi>_x:=0$. Thus, for $x\in c\operatorname N:=c\operatorname N(K^\times)$
\begin{equation*}
<\phi,\psi>_x=\int_{T\backslash G}\int_T\phi(t^{-1}\gamma(x)ty)~dt~\bar\psi(y)~dy.
\end{equation*}
\end{defn}
Notice that this definition is independent of the choice of the element $\gamma(x)$ of trace zero by unimodularity of the Haar measure on $T$.
There is one general assumption on the character $\chi$ which will be assumed in all the following.
The conductor of $\chi$ and the discriminant of $K/F$ shall be coprime:
\begin{assumption}\label{voraussetzung_an_chi}
The character $\chi$ of $T$ may only be ramified if $\omega$ is not.
\end{assumption}
The conductor $f(\chi)\subset\mathbf o_K$ of $\chi$ can be viewed as an ideal of $\mathbf o_F$: If $K=F\oplus F$, then $\chi=(\chi_1,\chi_1^{-1})$ for a character $\chi_1$ of $F^\times$, and $f(\chi)=f(\chi_1)$. If $K/F$ is a ramified field extension, then $\chi$ is unramified, thus $f(\chi)\cap\mathbf o_F=\mathbf o_F$. Lastly, if $K/F$ is an unramified field extension, then $f(\chi)=\pi^{c(\chi)}\mathbf o_K$, where $\pi$ is a uniformizing element for $K$ as well as for $F$. That is, $f(\chi)\cap\mathbf o_F=\pi^{c(\chi)}\mathbf o_F$.
There are some simple properties for $\chi$ following from the hypothesis.
\begin{lem}\label{lemma_chi_quadratisch}
Let $\chi$ be as in \ref{voraussetzung_an_chi}. The following are equivalent:
(a) $\chi$ is quadratic.
(b) $\chi$ factors through the norm.
\end{lem}
\begin{cor}\label{cor_chi}
Assume \ref{voraussetzung_an_chi}.
If $K/F$ is a ramified field extension, then the character $\chi$ is quadratic.
If for an unramified field extension $K/F$ the character $\chi$ is unramified, then $\chi=1$.
\end{cor}
One concluding remark on Haar measures is in order. Let $da$ be a nontrivial additive Haar measure on $F$. Associated volumes are denoted by $\operatorname{vol}$. The measure $d^\times a$ of the multiplicative group $F^\times$ shall be compatible with $da$, i.e. $d^\times a=\frac{da}{\lvert a\rvert}$. Associated volumes are denoted by $\operatorname{vol}^\times$. Thus,
\begin{equation*}
\operatorname{vol}^\times(\mathbf o_F^\times)=(1-q^{-1})\operatorname{vol}(\mathbf o_F).
\end{equation*}
The measure on $T\backslash G$ shall be the quotient measure induced of those on $G$ and $T$.
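This volume relation mirrors elementary counting in finite quotients: normalizing $\operatorname{vol}(\mathbf o_F)=1$, the units make up a fraction $1-q^{-1}$ of $\mathbf o_F$, just as the units of $\mathbb Z/q^n$ (for $q$ prime, an illustrative stand-in for $\mathbf o_F/\wp^n$) number $(1-q^{-1})q^n$. A small sketch of ours:

```python
from math import gcd

def unit_count(q, n):
    # number of units in Z/q^n for q prime, i.e. residues coprime to q
    return sum(1 for a in range(q**n) if gcd(a, q) == 1)

for q in (3, 5, 7):
    for n in (1, 2, 3):
        # analogue of vol^x(o^x) = (1 - q^{-1}) vol(o)
        assert unit_count(q, n) == (q - 1) * q**(n - 1)
```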
\subsection{Automorphic forms}\label{section_autom_formen_hintergrund}
The central object on the automorphic side is the Rankin-Selberg convolution of two automorphic representations.
The Gross-Zagier formula is interested in the central order of its $L$-function.
Let $\Pi_1$ be a cuspidal representation of $\operatorname{GL}_2(\mathbb A_{\mathbb F})$ with trivial central character (i.e. an irreducible component of the discrete spectrum of the right translation on $L^2(\operatorname{GL}_2(\mathbb F)\backslash \operatorname{GL}_2(\mathbb A_{\mathbb F}),1)$) and conductor $N$.
Further, let $\Pi(\chi)$ be the irreducible component belonging to $\chi$ of the Weil representation of $\operatorname{GL}_2(\mathbb A_{\mathbb F})$ for the norm form of $\mathbb K/\mathbb F$ (e.g.~\cite{gelbart}~\S 7). It has conductor $f(\chi)^2f(\omega)$ and central character $\omega$.
The Rankin-Selberg convolution of $\Pi_1$ and $\Pi(\chi)$ produces (see~\cite{jacquet2}) the (local) Mellin transform
\begin{equation*}
\Psi(s,W_1,W_2,\Phi)=\int_{Z(F)N(F)\backslash\operatorname{GL}_2(F)}W_1(g)W_2(eg)f_\Phi(s,\omega,g)~dg
\end{equation*}
for Whittaker functions $W_1$ of $\Pi_1$ (resp.~$W_2$ of $\Pi(\chi)$) for an arbitrary nontrivial additive character of $F$. One defines $e:=\begin{pmatrix}-1&0\\0&1\end{pmatrix}$. Here the Eisenstein series
\begin{equation*}
f_\Phi(s,\omega,g)=\lvert\det g\rvert^s\int_{F^\times}\Phi\left((0,t)g\right)\lvert t\rvert^{2s}\omega(t)~d^\times t
\end{equation*}
for a function $\Phi\in\mathcal S(F^2)$ occurs. $f_\Phi$ is an element of the principal series $\Pi(\lvert\cdot\rvert^{s-\frac{1}{2}},\omega\lvert\cdot\rvert^{\frac{1}{2}-s})$. Of course, there is an ad\`elic analogue of this.
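For unramified data the defining integral of $f_\Phi$ reduces to a geometric series: taking $\Phi=\mathbf 1_{\mathbf o_F^2}$, $g=1$, and $\omega$ unramified, the integral becomes $\sum_{n\geq 0}\omega(\pi)^nq^{-2sn}\operatorname{vol}^\times(\mathbf o_F^\times)=\frac{\operatorname{vol}^\times(\mathbf o_F^\times)}{1-\omega(\pi)q^{-2s}}$. The following sketch (our illustration, not a computation from the text; $\operatorname{vol}^\times(\mathbf o_F^\times)$ normalized to $1$) compares a long partial sum with the closed form.

```python
from fractions import Fraction

def eisenstein_unramified(q, two_s, omega_pi, terms):
    # partial sum of sum_{n>=0} omega(pi)^n q^{-2s n},
    # the value of f_Phi(s, omega, 1) for Phi = 1_{o^2} and unramified omega
    r = Fraction(omega_pi, q**two_s)
    return sum(r**n for n in range(terms))

q, two_s, omega_pi = 3, 2, -1  # sample unramified data with omega(pi) = -1
r = Fraction(omega_pi, q**two_s)
partial = eisenstein_unramified(q, two_s, omega_pi, 30)
closed = 1 / (1 - r)           # the geometric-series value 1/(1 - omega(pi) q^{-2s})
assert abs(partial - closed) < Fraction(1, 10**10)
```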
Analytic continuation of $\Psi$ leads to the $L$-function, the greatest common divisor of all the $\Psi$. It is defined by means of newforms $\phi$ for $\Pi_1$ and $\theta_\chi$ for $\Pi(\chi)$ as well as a special form $E$ of $\Pi(\lvert\cdot\rvert^{s-\frac{1}{2}},\omega\lvert\cdot\rvert^{\frac{1}{2}-s})$:
\begin{eqnarray*}
L(s,\Pi_1\times\Pi(\chi))
&=& \int_{Z(\mathbb A_{\mathbb F})\operatorname{GL}_2(\mathbb F)\backslash\operatorname{GL}_2(\mathbb A_{\mathbb F})}\phi(g)\theta_\chi(g)E(s,g)~dg\\
&=& \int_{Z(\mathbb A_{\mathbb F})\operatorname{GL}_2(\mathbb F)\backslash\operatorname{GL}_2(\mathbb A_{\mathbb F})}W_\phi(g)W_{\theta_\chi}(g)f_E(s,\omega,g)~dg,
\end{eqnarray*}
where $W_\phi$ etc. denotes the associated Whittaker function. This $L$-function satisfies the functional equation
\begin{equation*}
L(s,\Pi_1\times\Pi(\chi))=\epsilon(s,\Pi_1\times\Pi(\chi))L(1-s,\Pi_1\times\Pi(\chi)),
\end{equation*}
as $\Pi_1$ and $\Pi(\chi)$ are selfdual. For places where $2c(\chi)+c(\omega)\leq v(N)$, the form $E$ (resp. $W_E$) is the newform of the Eisenstein series. In \cite{zhang} (Chapter~1.4) an integral kernel $\Xi(s,g)$ is constructed which satisfies a functional equation analogous to that of $L$ and for which
\begin{equation*}
L(s,\Pi_1\times\Pi(\chi))=\int_{Z(\mathbb A_{\mathbb F})\operatorname{GL}_2(\mathbb F)\backslash\operatorname{GL}_2(\mathbb A_{\mathbb F})}\phi(g)\Xi(s,g)~dg.
\end{equation*}
We remark that such a kernel depends on the newforms of the theta series $\Pi(\chi)$ and of the Eisenstein series, but not on the special choice of $\Pi_1$. While the construction of the kernel shall not be reported here, its local nonconstant Fourier coefficients are defined by
\begin{equation}\label{def_fourierkoeff}
W(s,\xi,\eta,g):=W_\theta(\begin{pmatrix}\eta&0\\0&1\end{pmatrix}g) W_E(s,\begin{pmatrix}\xi&0\\0&1\end{pmatrix}g).
\end{equation}
Here $\eta:=1-\xi$. These Fourier coefficients are exactly those analytic functions which are compared to special local linking numbers in the local Gross-Zagier formula (\cite{zhang} Lemma~4.3.1).
In this paper, the restriction to newforms in (\ref{def_fourierkoeff}) will be avoided. To this end, one looks at the Kirillov models of the representations: starting from the Whittaker model $\mathcal W(\Pi,\psi)$ of an irreducible admissible representation $\Pi$ for an additive character $\psi$, the Kirillov space $\mathcal K(\Pi)$ is given by
\begin{eqnarray*}
\mathcal W(\Pi,\psi) &\to& \mathcal K(\Pi),\\
W&\mapsto& k:(a\mapsto W\begin{pmatrix}a&0\\0&1\end{pmatrix}).
\end{eqnarray*}
\begin{prop}\label{prop_kirillov}
(\cite{godement}, I.36)
Let $\Pi$ be an infinite dimensional irreducible admissible representation of $\operatorname{GL}_2(F)$. The Kirillov space $\mathcal K(\Pi)$ is generated by the Schwartz space $\mathcal S(F^\times)$ along with the following stalks around zero:
(a) If $\Pi$ is supercuspidal, this stalk is zero.
(b) If $\Pi=\Pi(\mu_1,\mu_2)$ is a principal series representation, then it is given by representatives of the form
\begin{itemize}
\item[$\bullet$] $\left(\lvert a\rvert^{\frac{1}{2}}c_1\mu_1(a)+\lvert a\rvert^{\frac{1}{2}}c_2\mu_2(a)\right)\mathbf 1_{\wp^n}(a)$, if $\mu_1\not=\mu_2$,
\item[$\bullet$]
$\lvert a\rvert^{\frac{1}{2}}\mu_1(a)\left(c_1+c_2v(a)\right)\mathbf 1_{\wp^n}(a)$, if $\mu_1=\mu_2$.
\end{itemize}
Here $c_1,c_2\in\mathbb C$.
(c) If $\Pi=\Pi(\mu_1,\mu_2)$ is special, it is given by representatives
\begin{itemize}
\item[$\bullet$] $\lvert a\rvert^{\frac{1}{2}}\mu_1(a)\mathbf 1_{\wp^n}(a)$, if $\mu_1\mu_2^{-1}=\lvert \cdot\rvert$,
\item[$\bullet$]
$\lvert a\rvert^{\frac{1}{2}}\mu_2(a)\mathbf 1_{\wp^n}(a)$, if $\mu_1\mu_2^{-1}=\lvert\cdot\rvert^{-1}$.
\end{itemize}
\end{prop}
Now one defines the so-called Whittaker products, which are actually products of Kirillov functions. The name recalls the origin of these functions as Fourier coefficients.
\begin{defn}\label{def_whittaker_prod}
Let (locally) $\Pi(\chi)$ be the theta series and $\Pi(1,\omega)$ the Eisenstein series at the central point $s=\frac{1}{2}$. Then the products
\begin{equation*}
W(\xi,\eta)=W_\theta(\eta)W_E(\xi)
\end{equation*}
of Kirillov functions $W_\theta\in\mathcal K(\Pi(\chi))$ and $W_E\in\mathcal K(\Pi(1,\omega))$
are called {\bf Whittaker products}.
\end{defn}
Being a component of a Weil representation, the theta series $\Pi(\chi)$ is completely described (\cite{jacquet-langlands} \S 1, \cite{gelbart} \S 7). Ad\`elically, it is a Hilbert modular form of conductor $f(\chi)^2f(\omega)$ and of weight $(1,\dots,1)$ at the infinite places. If $K=F\oplus F$ is split, then $\chi=(\chi_1,\chi_1^{-1})$ and, as $\omega$ is trivial, $\Pi(\chi)=\Pi(\chi_1,\omega\chi_1^{-1})=\Pi(\chi_1,\chi_1^{-1})$ is a principal series representation. If $K/F$ is a field extension and $\chi$ does not factor through the norm, then $\Pi(\chi)$ is supercuspidal, while if $\chi=\chi_1\circ\operatorname N$ it is the principal series representation $\Pi(\chi_1,\chi_1^{-1}\omega)=\Pi(\chi_1,\chi_1\omega)$, as $\chi_1^2=1$ by Lemma~\ref{lemma_chi_quadratisch}.
Thus, by Proposition~\ref{prop_kirillov}:
\begin{prop}\label{prop_characterisierung_theta_functionen}
Let $\Pi(\chi)$ be the theta series.
(a) If $K/F$ is a quadratic field extension and $\chi$ is not quadratic, then the Kirillov space $\mathcal K(\Pi(\chi))$ is given by $\mathcal S(F^\times)\cup\{0\}$.
(b) If $K/F$ is a quadratic field extension and $\chi^2=1$, then the Kirillov space $\mathcal K(\Pi(\chi))$ as a function space in one variable $\eta$ is generated by $\mathcal S(F^\times)$ along with functions around zero of the form
\begin{equation*}
\lvert\eta\rvert^{\frac{1}{2}}\chi_1(\eta)\left(a_1+a_2\omega(\eta)\right).
\end{equation*}
(c) If $K/F$ is split, then the Kirillov space $\mathcal K(\Pi(\chi))$ as a function space in one variable $\eta$ is generated by $\mathcal S(F^\times)$ along with functions around zero of the form
\begin{itemize}
\item [$\bullet$] $\lvert\eta\rvert^{\frac{1}{2}}\left(a_1\chi_1(\eta)+a_2\chi_1^{-1}(\eta)\right)$, if $\chi_1^2\not=1$,
\item[$\bullet$] $\lvert\eta\rvert^{\frac{1}{2}}\chi_1(\eta)\left(a_1+a_2v(\eta)\right)$, if $\chi_1^2=1$.
\end{itemize}
\end{prop}
For later use we collect some properties of principal series. For an automorphic form $f\in \Pi(\mu_1\lvert\cdot\rvert^{s-\frac{1}{2}},\mu_2\lvert\cdot\rvert^{\frac{1}{2}-s})$ there is $\phi\in\mathcal S(F^2)$ such that
\begin{equation}\label{funktion_principal_series}
f(s,g)=\mu_1(\det g)\lvert \det g\rvert^s\int_{F^\times}\phi\left((0,t)g\right)(\mu_1\mu_2^{-1})(t)\lvert t\rvert^{2s}~d^\times t.
\end{equation}
Conversely, any $\phi\in\mathcal S(F^2)$ defines a form $f_\phi\in \Pi(\lvert\cdot\rvert^{s-\frac{1}{2}},\omega\lvert\cdot\rvert^{\frac{1}{2}-s})$ in that way (e.g.~\cite{bump} chap.~3.7).
The Whittaker function belonging to $f$ (in a Whittaker model with unramified character $\psi$) is given by the first Fourier coefficient,
\begin{equation*}
W_f(s,g,\psi)=\int_Ff(s,\begin{pmatrix}0&-1\\1&0\end{pmatrix}\begin{pmatrix}1&x\\0&1\end{pmatrix}g)\bar\psi(x)~dx.
\end{equation*}
Read in the Kirillov model, the form for $s=\frac{1}{2}$ is given by evaluation at $g=\begin{pmatrix}a&0\\0&1\end{pmatrix}$, thus
\begin{equation*}
W_f(a):=W_f(\frac{1}{2},\begin{pmatrix}a&0\\0&1\end{pmatrix},\psi).
\end{equation*}
For $\mu_i$ unramified, the newform is obtained by the concrete choice in (\ref{funktion_principal_series}) of
\begin{equation*}
\phi(x,y)=\mathbf 1_{\mathbf o_F}(x)\mathbf 1_{\mathbf o_F}(y).
\end{equation*}
Thus,
\begin{align}
W_{\operatorname{new}}(a) &=
\mu_1(a)\lvert a\rvert^{\frac{1}{2}} \int_F\int_{F^\times}\mathbf 1_{\mathbf o_F}(at)\mathbf 1_{\mathbf o_F}(xt)\mu_1\mu_2^{-1}(t)\lvert t\rvert~d^\times t~\bar\psi(x)~dx\nonumber\\
&=
\mu_1(a)\lvert a\rvert^{\frac{1}{2}}\mathbf 1_{\mathbf o_F}(a)\operatorname{vol}(\mathbf o_F)\operatorname{vol}^\times(\mathbf o_F^\times)\sum_{j=-v(a)}^0\mu_1\mu_2^{-1}(\pi^j)\nonumber\\
&=
\lvert a\rvert^{\frac{1}{2}}\mathbf 1_{\mathbf o_F}(a)\operatorname{vol}(\mathbf o_F)\operatorname{vol}^\times(\mathbf o_F^\times)
\left\{\begin{matrix}\frac{\mu_1(a\pi)-\mu_2(a\pi)}{\mu_1(\pi)-\mu_2(\pi)},\textrm{ if }\mu_1\not=\mu_2\\\mu_1(a)(v(a)+1),\textrm{ if }\mu_1=\mu_2\end{matrix}\right..\label{gleichung_whittaker_neuform}
\end{align}
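As a sanity check (not part of the original derivation), the evaluation of the geometric sum in (\ref{gleichung_whittaker_neuform}) can be verified numerically: an unramified quasi-character is determined by its value at $\pi$, so the identity reduces to one about complex numbers $m_i=\mu_i(\pi)$. The following sketch checks both cases.

```python
import cmath

def lhs(m1, m2, n):
    # mu_1(a) * sum_{j=-v(a)}^{0} (mu_1 mu_2^{-1})(pi^j), with v(a) = n >= 0
    # and m_i = mu_i(pi) for unramified mu_i
    return m1**n * sum((m1 / m2)**j for j in range((-n), 1))

def rhs_distinct(m1, m2, n):
    # closed form (mu_1(a pi) - mu_2(a pi)) / (mu_1(pi) - mu_2(pi))
    return (m1**(n + 1) - m2**(n + 1)) / (m1 - m2)

m1, m2 = cmath.exp(0.3j), cmath.exp(-1.1j)   # arbitrary unitary test values
assert all(abs(lhs(m1, m2, n) - rhs_distinct(m1, m2, n)) < 1e-9 for n in range(8))
# the case mu_1 = mu_2: the sum collapses to mu_1(a) * (v(a) + 1)
assert all(abs(lhs(m1, m1, n) - (n + 1) * m1**n) < 1e-9 for n in range(8))
```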
By Proposition~\ref{prop_kirillov}, we have
\begin{prop}\label{prop_characterisierung_eisenstein_functionen}
At the central point $s=\frac{1}{2}$ the Eisenstein series is the principal series representation $\Pi(1,\omega)$. Its Kirillov space as a function space in the variable $\xi$ is generated by $\mathcal S(F^\times)$ along with the functions around zero of the form
\begin{itemize}
\item [$\bullet$] $\lvert\xi\rvert^{\frac{1}{2}}\left(a_1+a_2\omega(\xi)\right)$, if $K/F$ is a field extension,
\item[$\bullet$] $\lvert\xi\rvert^{\frac{1}{2}}\left(a_1+a_2v(\xi)\right)$, if $K/F$ is split.
\end{itemize}
\end{prop}
Let us recall a property of the Hecke operator. For a finite set $S$ of places of $\mathbb F$, let $\mathbb A^S:=\prod_{v\notin S}\mathbf o_{\mathbb F_v}$ and $\mathbb A_S:=\prod_{v\in S}\mathbb F_v\cdot\mathbb A^S$.
\begin{prop}\label{prop_analytischer_Hecke_allgemein}
(\cite{zhang} Chapter~2.4)
Let $\mu$ be a character of $\mathbb A^\times/\mathbb F^\times$. Let $\phi\in L^2(\operatorname{GL}_2(\mathbb F)\backslash \operatorname{GL}_2(\mathbb A),\mu)$, and let $W_\phi$ be the Whittaker function of $\phi$ in some Whittaker model. Let $S$ be the finite set of infinite places and of those finite places $v$ for which $\phi_v$ is not invariant under the maximal compact subgroup $\operatorname{GL}_2(\mathbf o_{\mathbb F_v})$. For $b\in \mathbb A^S\cap\mathbb A^\times$ define
\begin{equation*}
H(b):=\left\{g\in M_2(\mathbb A^S)\mid\det(g)\mathbb A^S=b\mathbb A^S\right\}.
\end{equation*}
Then the Hecke operator $\mathbf T_b$ is well defined for $g\in\operatorname{GL}_2(\mathbb A_S)$:
\begin{equation*}
\mathbf T_bW_\phi(g):=\int_{H(b)}W_\phi(gh)~dh.
\end{equation*}
If $y\in \mathbb A^S$ and $(b,y_f)=1$, then
\begin{equation*}
\mathbf T_bW_\phi(g\begin{pmatrix} y&0\\0&1\end{pmatrix})=\lvert b\rvert^{-1}W_\phi(g\begin{pmatrix} yb&0\\0&1\end{pmatrix}).
\end{equation*}
\end{prop}
That is, the operation of the (local) Hecke operator $\mathbf T_b$ on some Whittaker product is essentially translation by $b$:
\begin{equation}\label{hecke_auf_whittaker_prod}
\mathbf T_bW(\xi,\eta)=\lvert b\rvert^{-2}W(b\xi,b\eta).
\end{equation}
\section{Characterisation of the local linking numbers}\label{section_charakterisierung}
Here, the local linking numbers are characterized as functions on $F^\times$. The characterizing properties are very close to those satisfied by the orbital integrals of \cite{jacquet}. Thus, both the statements and the proofs of Proposition~\ref{eigenschaften_nonsplit} resp.~\ref{eigenschaften_split} are influenced by the methods there.
Before stating these properties, two useful lemmas:
\begin{lem}\label{lemma1}
Let $\phi\in\mathcal S(\chi,G)$.
(a) For each $y\in G$ there is an open set $V\ni y$ such that for all $g\in\operatorname{supp}(\phi)y^{-1}$ and all $\tilde y\in V$
\begin{equation*}
\phi(g\tilde y)=\phi(gy).
\end{equation*}
(b) Let $C\subset G$ be compact. For each $g\in G$ there is an open set $U\ni g$ such that for all $\tilde g\in U$ and all $y\in TC$
\begin{equation}\label{lemma1_int_gleichung}
\int_T\phi(t^{-1}\tilde gty)~dt=\int_T\phi(t^{-1}gty)~dt.
\end{equation}
\end{lem}
\begin{proof}[Proof of Lemma \ref{lemma1}]
(a) It is enough to prove the statement for $y=\operatorname{id}$. As $\phi$ is locally constant, for every $g\in G$ there is an open set $U_g\ni\operatorname{id}$ with $\phi(gU_g)=\phi(g)$. Let $C\subset G$ be compact such that $\operatorname{supp}\phi=TC$. Then one can cover $C\subset \cup gU_g$ by finitely many $gU_g$. Define $U$ to be the intersection of those $U_g$ to get $\phi(gU)=\phi(g)$ for all $g\in TC$.\\
(b) It is enough to prove the statement for $y\in C$ rather than $y\in TC$, as a factor $s\in T$ just changes the integral by a factor $\chi(s)$. By (a) there is an open set $V_y\ni y$ such that $\phi(t^{-1}gt\tilde y)=\phi(t^{-1}gty)$ for $\tilde y\in V_y$ and $t^{-1}gt\in\operatorname{supp}(\phi)y^{-1}$. Take finitely many $y\in C$ such that the $V_y$ cover $C$. It is enough to find open sets $U_y\ni g$ for these $y$ so that eqn.~(\ref{lemma1_int_gleichung}) is fulfilled. Then $\cap U_y$ is an open set such that eqn.~(\ref{lemma1_int_gleichung}) is satisfied for all $y\in TC$.
Write $g=g_1+\epsilon g_2$ and describe a neighborhood $U_y$ of $g$ by constants $k_1,k_2>0$ depending on $y$ and the conditions $\lvert \tilde g_i-g_i\rvert<k_i$, $i=1,2$, for $\tilde g$ to lie in $U_y$. Write $t^{-1}\tilde gt=g_1+\epsilon g_2t\bar t^{-1}+(\tilde g_1-g_1)+\epsilon(\tilde g_2-g_2)t\bar t^{-1}$. As $\phi$ is locally constant, one can choose $k_1,k_2$ depending on $y$ such that
\begin{equation*}
\phi(t^{-1}\tilde gt)=\phi((g_1+\epsilon g_2t\bar t^{-1})y)=\phi(t^{-1}gty).
\end{equation*}
These constants are independent of $t$, as $\lvert (\tilde g_2-g_2)t\bar t^{-1}\rvert=\lvert\tilde g_2-g_2\rvert$.
\end{proof}
\begin{lem}\label{lemma2}
Let $\phi\in\mathcal S(F\oplus F)$.
(a) There are $A_1,A_2\in\mathcal S(F)$ such that
\begin{equation*}
\int_{F^\times}\phi(a^{-1}y,a)~d^\times a =A_1(y) + A_2(y)v(y).
\end{equation*}
(b) Let $\eta$ be a nontrivial (finite) character of $F^\times$. Then there are $B_1,B_2\in\mathcal S(F)$ and $m\in \mathbb Z$ such that for $0\not= y\in\wp^m$
\begin{equation*}
\int_{F^\times}\phi(a^{-1}y,a)\eta(a)~d^\times a = B_1(y)+B_2(y)\eta(y).
\end{equation*}
\end{lem}
\begin{proof}[Proof of Lemma \ref{lemma2}]
(a) Any $\phi\in\mathcal S(F\oplus F)$ is a finite linear combination of the following elementary functions: $\mathbf 1_{\wp^n}(a)\mathbf 1_{\wp^n}(b)$, $\mathbf 1_{x+\wp^n}(a)\mathbf 1_{\wp^n}(b)$, $\mathbf 1_{\wp^n}(a)\mathbf 1_{z+\wp^n}(b)$, $\mathbf 1_{x+\wp^n}(a)\mathbf 1_{z+\wp^n}(b)$ for suitable $n\in \mathbb Z$ and $v(x),v(z)>n$. It is enough to prove the statement for these elementary functions. One gets
\begin{equation*}
\int_{F^\times}\mathbf 1_{\wp^n}(a^{-1}y)\mathbf 1_{\wp^n}(a)~d^\times a =\mathbf 1_{\wp^{2n}}(y)v(y\pi^{-2n+1})\operatorname{vol}^\times(\mathbf o_F^\times).
\end{equation*}
Thus, if $0\in\operatorname{supp}\phi$, then the integral has a pole at $y=0$; otherwise it does not:
\begin{equation*}
\int_{F^\times}\mathbf 1_{x+\wp^n}(a^{-1}y)\mathbf 1_{\wp^n}(a)~d^\times a =\mathbf 1_{\wp^{v(x)+n}}(y)\operatorname{vol}^\times(1+\wp^{n-v(x)}),
\end{equation*}
\begin{equation*}
\int_{F^\times}\mathbf 1_{\wp^n}(a^{-1}y)\mathbf 1_{z+\wp^n}(a)~d^\times a =\mathbf 1_{\wp^{v(z)+n}}(y)\operatorname{vol}^\times(1+\wp^{n-v(z)})
\end{equation*}
and
\begin{equation*}
\int_{F^\times}\mathbf 1_{x+\wp^n}(a^{-1}y)\mathbf 1_{z+\wp^n}(a)~d^\times a =\mathbf 1_{xz(1+\wp^{m})}(y)\operatorname{vol}^\times(1+\wp^{m}),
\end{equation*}
where $m:=n-\operatorname{min}\{v(x),v(z)\}$.\\
(b) Similar computations to those of (a).
\end{proof}
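The first elementary integral in the proof of part (a) above is a pure valuation count and can be checked mechanically. The following sketch normalizes $\operatorname{vol}^\times(\mathbf o_F^\times)=1$ (an assumption made only for this illustration) and sums over the valuation shells $\{v(a)=k\}$, on which the integrand is constant.

```python
def shell_integral(vy, n, kmax=60):
    # integral over F^x of 1_{p^n}(a^{-1} y) 1_{p^n}(a) d^x a, with v(y) = vy;
    # the integrand depends only on k = v(a), and each shell {v(a) = k}
    # has multiplicative volume vol^x(o_F^x) = 1 in this normalization
    return sum(1 for k in range(-kmax, kmax + 1) if k >= n and vy - k >= n)

def closed_form(vy, n):
    # 1_{p^{2n}}(y) * v(y pi^{-2n+1}): nonzero iff v(y) >= 2n
    return vy - 2 * n + 1 if vy >= 2 * n else 0

assert all(shell_integral(vy, n) == closed_form(vy, n)
           for n in range(-4, 5) for vy in range(-8, 15))
```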
In describing the properties of the local linking numbers, one has to distinguish between the case of a compact torus $T=F^\times \backslash K^\times$, i.e. $K/F$ is a field extension, and the case of a noncompact torus $T$, i.e. $K/F$ is split.
\begin{prop}\label{eigenschaften_nonsplit}
Let $K=F(\sqrt A)$ be a field extension and let $\omega$ be the associated quadratic character. Let $\phi,\psi\in \mathcal S(\chi,G)$. The local linking number $<\phi,\psi>_x$ is a function of $x\in F^\times$ having the following properties:
(a) $<\phi,\psi>_x$ is zero on the complement of $c\operatorname N$.
(b) $<\phi,\psi>_x$ is zero on a neighborhood of $1\in F^\times$.
(c) There is a locally constant function $A$ on a neighborhood $U$ of $0$ such that for all $0\not= x\in U$: $<\phi,\psi>_x=A(x)(1+\omega(cx))$.
(d) The behavior around infinity is described as follows: There is an open set $V\ni 0$ such that for all $x^{-1}\in V\cap c\operatorname N$
\begin{equation*}
<\phi,\psi>_x=\delta(\chi^2=1)\chi_1(\frac{A}{c})\chi_1(x)\int_{T\backslash G}\phi(\epsilon y)\bar\psi(y)~dy.
\end{equation*}
Here the character $\chi_1$ of $F^\times$ is given by $\chi=\chi_1\circ\operatorname N$ if $\chi^2=1$. In particular, the local linking number vanishes in a neighborhood of infinity if $\chi^2\not= 1$.
\end{prop}
\begin{prop}\label{eigenschaften_split}
Let $K=F\oplus F$ be a split algebra. Let $\chi=(\chi_1,\chi_1^{-1})$ and let $\phi,\psi\in\mathcal S(\chi,G)$. The local linking number $<\phi,\psi>_x$ is a function of $x\in F^\times$ having the following properties:
(a) $<\phi,\psi>_x$ is zero on a neighborhood of $1\in F^\times$.
(b) $<\phi,\psi>_x$ is locally constant on $F^\times$.
(c) There is an open set $U\ni 0$ and locally constant functions $A_1,A_2$ on $U$ such that for $0\not= x\in U$: $<\phi,\psi>_x=A_1(x)+A_2(x)v(x)$.
(d) There is an open set $V\ni 0$ and locally constant functions $B_1,B_2$ on $V$ such that for $x^{-1}\in V$:
\begin{equation*}
<\phi,\psi>_x=\left\{\begin{matrix}
\chi_1(x)(B_1(x^{-1})+B_2(x^{-1})v(x)), &\textrm{if } \chi_1^2=1\\
\chi_1(x)B_1(x^{-1})+\chi_1^{-1}(x)B_2(x^{-1}), & \textrm{if } \chi_1^2\not=1
\end{matrix}.\right.
\end{equation*}
For $\chi_1^2=1$ the term $B_2(x^{-1})v(x)$ occurs only if $\operatorname{id}\in \operatorname{supp}\phi(\operatorname{supp}\psi)^{-1}$.
\end{prop}
\begin{proof}[Proof of Proposition~\ref{eigenschaften_nonsplit}]
(b) Assume $1\in c\operatorname N$, otherwise this property is trivial. One has to show that for all $\gamma$ with $P(\gamma)\in U$, where $U$ is a sufficiently small neighborhood of $1$,
\begin{equation*}
\int_{T\backslash G}\int_t\phi(t^{-1}\gamma ty)~dt\bar\psi(y)~dy = 0.
\end{equation*}
This is done by showing that even the inner integral is zero. Let $C\subset G$ be compact such that $\operatorname{supp}\phi\subset TC$. Then $\phi$ obviously vanishes outside of $TCT$. It is enough to show that there is $k>0$ such that $\lvert P(\gamma)-1\rvert>k$ holds for all $\gamma\in TCT$. Assume there is no such $k$. Let $(\gamma_i)_i$ be a sequence such that $P(\gamma_i)$ tends to $1$. Multiplying by elements of $T$ and enlarging $C$ if necessary (this is possible as $T$ is compact!), one can assume $\gamma_i=1+\epsilon t_i=z_ic_i$,
where $t_i\in T$, $c_i\in C$, $z_i\in Z$. Then $P(\gamma_i)=ct_i\bar t_i =1+a_i$, where $a_i\rightarrow 0$. We have $\det \gamma_i=1-ct_i\bar t_i=-a_i$ as well as $\det \gamma_i =z_i^2\det c_i$. As $C$ is compact, $(z_i)$ is forced to tend to zero. This implies $\gamma_i\rightarrow 0$ contradicting $\gamma_i=1+\epsilon t_i$.
(c) A $\gamma \in F^\times \backslash D^\times$ of trace zero has a representative of the form $\gamma=\sqrt A +\epsilon \gamma_2$ (by abuse of notation). Thus,
\begin{equation*}
<\phi,\psi>_x=\int_{T\backslash G}\int_T\phi((\sqrt A +\epsilon\gamma_2 t\bar t^{-1})y)~dt~\bar\psi(y)~dy.
\end{equation*}
As $\phi\in\mathcal S(\chi,G)$, there exists an ideal $\wp_K^m$ of $K$ such that for all $y\in G$ and all $l\in \wp_K^m$ one has $\phi((\sqrt A+\epsilon l)y)=\phi(\sqrt A y)$. Let $x=P(\gamma)$ be near zero, i.e. $x$ belongs to an ideal $U$ of $F$ which is given by the condition that $\frac{cl\bar l}{-A}\in U$ implies $l\in\wp_K^m$. For such $x$ one has
\begin{equation*}
<\phi,\psi>_x=\operatorname{vol}_T(T)\chi(\sqrt A)\int_{T\backslash G}\phi(y)\bar\psi(y)~dy.
\end{equation*}
Taking into account that $x$ need not belong to the image of $P$ (see (a)), one gets
\begin{equation*}
<\phi,\psi>_x=\frac{1}{2}\operatorname{vol}_T(T)\chi(\sqrt A)\bigl(\phi(y),\psi(y)\bigr)(1+\omega(cx)),
\end{equation*}
where $\bigl(\cdot,\cdot\bigr)$ is the $L^2$-scalar product.
(d) Again, let $\gamma=\sqrt A+\epsilon\gamma_2$ denote a preimage of $x$ under $P$. Then
\begin{equation*}
\int_T\phi(t^{-1}\gamma ty)~dt=\chi(\gamma_2)\int_T\phi((\sqrt A\gamma_2^{-1} +t^{-1}\bar t\epsilon)y)~dt.
\end{equation*}
As $\phi$ is locally constant, by Lemma~\ref{lemma1} there exists $k>0$ such that for $\lvert\gamma_2\rvert>k$ and for $y\in \operatorname{supp} \psi$ one has $\phi((\sqrt A\gamma_2^{-1} +t^{-1}\bar t\epsilon)y)=\phi(t^{-1}\bar t\epsilon y)$. Thus, for $\lvert x\rvert >\lvert cA^{-1}\rvert k^2$,
\begin{equation*}
<\phi,\psi>_x=\chi(\gamma_2)\int_T\chi(t^{-1}\bar t)~dt\int_{T\backslash G}\phi(\epsilon y)\bar\psi(y)~dy.
\end{equation*}
As $\chi(t^{-1}\bar t)$ defines the trivial character of $T$ if and only if $\chi^2=1$, the statement follows by noticing that in this case $\chi(\gamma_2)=\chi_1(\frac{Ax}{c})$.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{eigenschaften_split}]
Here $D^\times$ is isomorphic to $\operatorname{GL}_2(F)$, an obvious isomorphism is given by embedding $K^\times$ diagonally and sending $\epsilon$ to $\begin{pmatrix}0&1\\1&0\end{pmatrix}$.
Then $P$ is given by
\begin{equation*}
P\begin{pmatrix}a&b\\c&d\end{pmatrix}=\frac{bc}{ad}.
\end{equation*}
The only value not contained in the image of $P$ is $1$. A preimage of $x\not=1$ of trace zero is given by
\begin{equation*}
\gamma=\begin{pmatrix}-1&x\\-1&1\end{pmatrix}.
\end{equation*}
(a) First, one shows that for $\phi\in\mathcal S(\chi,G)$ there is a constant $k>0$ such that for all $\gamma\in\operatorname{supp}\phi$: $\lvert P(\gamma)-1\rvert >k$. Using Bruhat-Tits decomposition for $\operatorname{SL}_2(F)$, $G=\operatorname{PGL}_2(F)=TNN'\cup TNwN$, where $N$ is the group of unipotent upper triangular matrices, $N'$ its transpose and $w=\begin{pmatrix}0&-1\\1&0\end{pmatrix}$. Thus, there is $c>0$ such that
\begin{eqnarray*}
\operatorname{supp}\phi &\subset&T\left\{\begin{pmatrix}1&u\\0&1\end{pmatrix}\begin{pmatrix}1&0\\v&1\end{pmatrix}\mid \lvert u\rvert <c, \lvert v\rvert <c\right\}\\
&&\bigcup T\left\{\begin{pmatrix}1&u\\0&1\end{pmatrix}w\begin{pmatrix}1&v\\0&1\end{pmatrix}\mid \lvert u\rvert <c, \lvert v\rvert <c\right\}.
\end{eqnarray*}
On the first set $P$ has the shape $P=\frac{uv}{1+uv}$. On the second one its shape is $P=\frac{uv-1}{uv}$. Thus, for all $\gamma\in\operatorname{supp}\phi$ one has $\lvert P(\gamma)-1\rvert \geq \operatorname{min}\{1,c^{-2}\}$.
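The shape of $P$ on the two Bruhat cells, and the fact that $\gamma(x)$ is a trace-zero preimage of $x$, can be confirmed by a direct matrix computation. The following sketch works with exact rational test values (chosen arbitrarily, only for this illustration).

```python
from fractions import Fraction as Fr
from itertools import product

def mul(*ms):
    # product of 2x2 matrices
    r = [[Fr(1), Fr(0)], [Fr(0), Fr(1)]]
    for m in ms:
        r = [[r[i][0] * m[0][j] + r[i][1] * m[1][j] for j in range(2)]
             for i in range(2)]
    return r

def P(g):
    # P((a, b; c, d)) = bc / (ad)
    return g[0][1] * g[1][0] / (g[0][0] * g[1][1])

w = [[Fr(0), Fr(-1)], [Fr(1), Fr(0)]]
# the trace-zero preimage gamma(x) = (-1, x; -1, 1) indeed satisfies P = x
assert P([[Fr(-1), Fr(7)], [Fr(-1), Fr(1)]]) == 7
for a, u, v in product([Fr(2), Fr(-5)], [Fr(1, 2), Fr(3)], [Fr(1, 3), Fr(4)]):
    t = [[a, Fr(0)], [Fr(0), Fr(1)]]
    n_u = [[Fr(1), u], [Fr(0), Fr(1)]]
    nl_v = [[Fr(1), Fr(0)], [v, Fr(1)]]
    nu_v = [[Fr(1), v], [Fr(0), Fr(1)]]
    assert P(mul(t, n_u, nl_v)) == u * v / (1 + u * v)       # cell T N N'
    assert P(mul(t, n_u, w, nu_v)) == (u * v - 1) / (u * v)  # cell T N w N
```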
Now, one shows that there even is a constant $k>0$ such that $\lvert P(\gamma)-1\rvert >k$ for all $\gamma y\in \operatorname{supp}\phi$ for all $y\in \operatorname{supp}\psi$. This implies that $<\phi,\psi>_x=0$ in the neighborhood $\lvert x-1\rvert <k$ of $1$. One knows there is such a constant $k_y$ for any $y\in \operatorname{supp}\psi$. By Lemma~\ref{lemma1}(a) this constant is valid for all $\tilde y$ in a neighborhood $V_y$. Modulo $T$ the support of $\psi$ can be covered by finitely many such $V_y$. The minimum of the associated $k_y$ then is the global constant one was looking for.\\
(b) By Lemma~\ref{lemma1}(b), there is for every $x\in F^\times\backslash\{1\}$ a neighborhood $U_x$ such that for all $y\in \operatorname{supp}\psi$ the inner integral
\begin{equation*}
\int_T\phi(t^{-1}\gamma(\tilde x)ty)~dt
\end{equation*}
is constant in $\tilde x\in U_x$. What is more, the whole local linking number is locally constant on $F^\times\backslash\{1\}$. That it is constant around $1$ as well was part~(a).\\
For (c) and (d) one regards the inner integral separately first. One has for representatives
\begin{eqnarray*}
t^{-1}\gamma(x)t &=&\begin{pmatrix}a^{-1}&0\\0&1\end{pmatrix}\begin{pmatrix}-1&x\\-1&1\end{pmatrix}\begin{pmatrix}a&0\\0&1\end{pmatrix}\\
&=&\begin{pmatrix}(x-1)&0\\0&1\end{pmatrix}\begin{pmatrix} 1&\frac{x}{a(x-1)}\\0&1\end{pmatrix}\begin{pmatrix}1&0\\-a&1\end{pmatrix}\in K^\times NN'\\
&=&\begin{pmatrix}\frac{1-x}{a}&0\\0&-a\end{pmatrix}\begin{pmatrix}1&\frac{a}{x-1}\\0&1\end{pmatrix}w\begin{pmatrix}1&-a^{-1}\\0&1\end{pmatrix}\in K^\times NwN.
\end{eqnarray*}
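The two factorizations of $t^{-1}\gamma(x)t$ above can also be verified by exact matrix arithmetic; a minimal sketch with arbitrary rational test values $a$, $x$ (all three expressions agree already in $\operatorname{GL}_2$):

```python
from fractions import Fraction as Fr
from itertools import product

def mul(*ms):
    # product of 2x2 matrices
    r = [[Fr(1), Fr(0)], [Fr(0), Fr(1)]]
    for m in ms:
        r = [[r[i][0] * m[0][j] + r[i][1] * m[1][j] for j in range(2)]
             for i in range(2)]
    return r

w = [[Fr(0), Fr(-1)], [Fr(1), Fr(0)]]
for a, x in product([Fr(2), Fr(1, 3), Fr(-5)], [Fr(3), Fr(1, 2), Fr(-4)]):
    g = mul([[1 / a, Fr(0)], [Fr(0), Fr(1)]],           # t^{-1}
            [[Fr(-1), x], [Fr(-1), Fr(1)]],             # gamma(x)
            [[a, Fr(0)], [Fr(0), Fr(1)]])               # t
    nn = mul([[x - 1, Fr(0)], [Fr(0), Fr(1)]],
             [[Fr(1), x / (a * (x - 1))], [Fr(0), Fr(1)]],
             [[Fr(1), Fr(0)], [-a, Fr(1)]])             # K^x N N' factorization
    nwn = mul([[(1 - x) / a, Fr(0)], [Fr(0), -a]],
              [[Fr(1), a / (x - 1)], [Fr(0), Fr(1)]], w,
              [[Fr(1), -1 / a], [Fr(0), Fr(1)]])        # K^x N w N factorization
    assert g == nn == nwn
```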
As $\operatorname{supp}\phi$ is compact modulo $T$, the intersections $\operatorname{supp}\phi\cap NN'$ and $\operatorname{supp}\phi\cap NwN$ are compact. Write $\phi^y:=\rho(y)\phi$ as a sum $\phi^y=\phi_1^y+\phi_2^y$, $\phi_i^y\in\mathcal S(\chi,G)$, with $\operatorname{supp}\phi_1^y\subset TNN'$ and $\operatorname{supp}\phi_2^y\subset TNwN$. Using the transformation under $T$ by $\chi$, one can actually regard $\phi_i^y$, $i=1,2$, as functions on $F\oplus F$ identifying $N$ with $F$. Thus, $\phi_i^y\in \mathcal S(F\oplus F)$. For the inner integral one gets the formula
\begin{eqnarray}
\int_T\phi(t^{-1}\gamma(x)ty)~dt &=&\chi_1(x-1)\int_{F^\times}\phi_1^y(\frac{x}{a(x-1)},-a)~d^\times a\label{inners_integral_umformung}\\
&+&\chi_1(1-x)\int_{F^\times}\chi_1(a^{-2})\phi_2^y(\frac{a}{x-1},-a^{-1})~d^\times a.\nonumber
\end{eqnarray}
(c)
One has $\chi_1(x-1)=\chi_1(-1)$ if $x\in\wp^{c(\chi_1)}$, where $c(\chi_1)$ is the conductor of $\chi_1$. By lemma~\ref{lemma2}, the first integral of (\ref{inners_integral_umformung}) for small $x$ equals
\begin{equation*}
A_1(\frac{x}{x-1})+A_2(\frac{x}{x-1})v(\frac{x}{x-1}),
\end{equation*}
where $A_1,A_2$ are locally constant functions on a neighborhood of zero depending on $y$. The functions $\tilde A_i(x):=A_i(\frac{x}{x-1})$ are locally constant on a neighborhood $U_1$ of zero as well.
The second integral of (\ref{inners_integral_umformung}) is constant on a neighborhood $U_2$ of $x=0$ depending on $y$, as $\phi_2^y$ is locally constant for $(x-1)^{-1}\rightarrow -1$. Thus, the complete inner integral can be expressed on $U_y:=\wp^{c(\chi_1)}\cap U_1\cap U_2$ as
\begin{equation*}
A_y(x):= \tilde A_1(x)+\tilde A_2(x)v(x) +B.
\end{equation*}
By lemma~\ref{lemma1}(a), there is a neighborhood $V_y$ of $y$ where the inner integral has the same value. Take $V_y$ so small that $\psi$ is constant there, too, and cover $\operatorname{supp}\psi$ modulo $T$ by finitely many such $V_y$, i.e. $y\in I$, $\lvert I\rvert$ finite. The local linking number for $x\in U=\cap_{y\in I}U_y$ now is computed as
\begin{equation*}
<\phi,\psi>_x=\sum_{y\in I}\operatorname{vol}_{T\backslash G}(TV_y)\bar\psi(y)A_y(x).
\end{equation*}
That is, there are locally constant functions $B_1,B_2$ on $U$ such that for $x\in U$
\begin{equation*}
<\phi,\psi>_x=B_1(x)+B_2(x)v(x).
\end{equation*}
(d) Let $x^{-1}\in\wp^{c(\chi_1)}$; then $\chi_1(x-1)=\chi_1(x)$. As $\phi_1^y$ is locally constant, the first integral of (\ref{inners_integral_umformung}) equals a locally constant function $A_1(x^{-1})$ for $x^{-1}\in U_1$, a neighborhood of zero depending on $y$. For the second integral, one has to distinguish whether $\chi_1^2=1$ or not. To start with, let $\chi_1^2\not=1$. Applying lemma~\ref{lemma2}(b) for $\eta=\chi_1^2$, one gets locally constant functions $A_2,A_3$ on a neighborhood $U_2$ of zero depending on $y$ such that the second integral equals $A_2(x^{-1})+\chi_1^2(x^{-1})A_3(x^{-1})$. Thus, for fixed $y$ the inner integral for $x^{-1}\in U_y=U_1\cap U_2\cap\wp^{c(\chi_1)}$ is given by
\begin{equation*}
A_y(x):=\int_T\phi^y(t^{-1}\gamma(x)t)~dt=\chi_1(x)\bigl(A_1(x^{-1})+A_2(x^{-1})\bigr)+\chi_1^{-1}(x)A_3(x^{-1}).
\end{equation*}
Proceeding as in (c), one gets the assertion. \\
Now, let $\chi_1^2=1$. By lemma~\ref{lemma2}(a), one has locally constant functions $A_2,A_3$ on a neighborhood $U_2$ of zero such that for $x^{-1}\in U_2$ the second integral of (\ref{inners_integral_umformung}) is given by $A_2(x^{-1})+A_3(x^{-1})v(x)$. Thus, for $x^{-1}\in U_y:=U_1\cap U_2\cap\wp^{c(\chi_1)}$ the inner integral is given by
\begin{equation*}
A_y(x):=\chi_1(x)\bigl( A_1(x^{-1})+A_2(x^{-1})+A_3(x^{-1})v(x)\bigr).
\end{equation*}
By lemma~\ref{lemma2}(a), the term $A_3(x^{-1})v(x)$ is obtained from functions $\phi_2^y(a,b)$ having the shape $\mathbf 1_{\wp^n}(a)\mathbf 1_{\wp^n}(b)$ around zero. Such functions can occur only if $y$ is contained in $\operatorname{supp}\phi$.
Again proceeding as in part (c), the local linking number for $x^{-1}$ in a sufficiently small neighborhood $U$ of zero is
\begin{equation*}
<\phi,\psi>_x=\chi_1(x)\bigl(B_1(x^{-1})+B_2(x^{-1})v(x)\bigr),
\end{equation*}
where $B_1,B_2$ are locally constant on $U$ and $B_2$ can be nonzero only if $\operatorname{id}\in (\operatorname{supp}\phi)(\operatorname{supp}\psi)^{-1}$.
\end{proof}
\begin{prop}\label{charakterisierung_der_LLN}
The properties (a) to (d) of proposition~\ref{eigenschaften_nonsplit} resp. \ref{eigenschaften_split} characterize the local linking numbers. That is, given a function on $F^\times$ satisfying these properties, one can realize it as a local linking number.
\end{prop}
The proof of proposition~\ref{charakterisierung_der_LLN} is completely constructive. Let us first describe the approach in general before going into detail in the case of a field extension $K/F$. The case of a split algebra $K=F\oplus F$ is omitted; we refer to \cite{diss} chap.~2, as the computations there are quite similar to those presented here and straightforward after them.
Firstly, choose a description of the function $H$ satisfying the properties (a) to (d) in the following manner:
\begin{equation*}
H(x)=\mathbf 1_{c\operatorname{N}}(x)\bigl(A_0(x)\mathbf 1_{V_0}(x)+A_1(x)\mathbf 1_{V_1}(x) +\sum_{j=2}^{M} H(x_j)\mathbf 1_{V_j}(x)\bigr),
\end{equation*}
where for $j=2,\dots, M$
\begin{equation*}
V_j=x_j(1+\wp_F^{n_j})
\end{equation*}
are open sets in $F^\times$ on which $H$ is constant. Further,
\begin{equation*}
V_0=\wp_F^{n_0} \quad \textrm{ resp. } \quad V_1=F\backslash \wp^{-n_1}
\end{equation*}
are neighborhoods of $0$ (resp. $\infty$) where $H$ is characterized by $A_0$ (resp. $A_1$) according to property (c) (resp. (d)). One can assume without loss of generality that $n_j>0$ for $j=0, \dots , M$ and $V_i\cap V_j=\emptyset$ for $i\not= j$.\\
Secondly, construct functions $\phi_j$, $j=0,\dots, M$, and one function $\psi$ in $\mathcal S(\chi,G)$ such that $\operatorname{supp}\phi_i\cap\operatorname{supp}\phi_j=\emptyset$ if $i\not=j$ and such that
\begin{equation*}
<\phi_j,\psi>_x=H(x_j)\mathbf 1_{V_j}(x)\quad\textrm{ resp. } <\phi_j,\psi>_x=A_j(x)\mathbf 1_{V_j}(x).
\end{equation*}
There is essentially one possibility to construct such functions in $\mathcal S(\chi,G)$: Take a compact open subset $C$ of $G$ which is {\bf fundamental} for $\chi$, i.e. if $t\in T$ and $c\in C$ as well as $tc\in C$, then $\chi(t)=1$. Then the function $\phi =\chi\cdot\mathbf 1_C$ given by $\phi(tg)=\chi(t)\mathbf 1_C(g)$ is well defined in $\mathcal S(\chi,G)$ with support $TC$.\\
The function $\psi$ is now chosen as $\psi=\chi\cdot \mathbf 1_U$, where $U$ is a compact open subgroup of $G$ so small that for $j=0,\dots,M$
\begin{equation*}
P(P^{-1}(V_j)U)= V_j\cap c\operatorname N.
\end{equation*}
For $j\geq 2$ now take $C_j\subset P^{-1}(V_j)$ compact such that $C_jU$ is fundamental and $P(C_jU)=V_j$ and define $\phi_j:=H(x_j)\cdot\chi\cdot\mathbf 1_{C_jU}$.
The stalks of zero and infinity are constructed in a similar manner.\\
Thirdly, as the local linking numbers are linear in the first component and as the supports of the $\phi_j$ are disjoint by construction, one gets
\begin{equation*}
H(x)=<\sum_{j=0}^{M}\phi_j,\psi>_x.
\end{equation*}
\begin{proof}[Proof of Proposition~\ref{charakterisierung_der_LLN} in the case $K$ a field]
Let $K=F(\sqrt A)$ be a quadratic field extension. Let the function $H$ satisfying (a) to (d) of Prop.~\ref{eigenschaften_nonsplit} be given as
\begin{equation*}
H(x)=\mathbf 1_{c\operatorname N}(x)\bigl(A_0(x)\mathbf 1_{V_0}(x)+A_1(x)\mathbf 1_{V_1}(x) +\sum_{j=2}^{M} H(x_j)\mathbf 1_{V_j}(x)\bigr),
\end{equation*}
where
\begin{eqnarray*}
&& V_0=\wp^{n_0} \textrm{ and } A_0(x)=a_0,\\
&& V_1=F\backslash \wp^{-n_1} \textrm{ and } A_1(x)=\left\{\begin{matrix}\chi_1(x)a_1, \textrm{ if } \chi^2=1\\
0, \textrm{ if } \chi^2\not=1\\
\end{matrix},\right.\\
&& V_j=x_j(1+\wp^{n_j}) \textrm{ for } j=2,\dots, M,
\end{eqnarray*}
with $a_0, a_1, H(x_j)\in\mathbb C$, and $n_j>0$ for $j=0,\dots,M$. One can further assume
\begin{eqnarray*}
n_0-v(\frac{c}{A})>0, \quad n_1+v(\frac{c}{A})>0 \textrm{ and both even,}
\end{eqnarray*}
as well as $V_i\cap V_j=\emptyset$ for $i\not= j$. One defines
\begin{eqnarray*}
\tilde n_0 &=& \left\{\begin{matrix}\frac{1}{2}(n_0-v(\frac{c}{A})), & \textrm{if } K/F \textrm{ unramified}\\
n_0-v(\frac{c}{A}), &\textrm{if } K/F \textrm{ ramified}
\end{matrix},\right.
\end{eqnarray*}
\begin{eqnarray*}
\tilde n_1 &=& \left\{\begin{matrix}\frac{1}{2}(n_1+v(\frac{c}{A})), & \textrm{if } K/F \textrm{ unramified}\\
n_1+v(\frac{c}{A}), &\textrm{if } K/F \textrm{ ramified}
\end{matrix},\right.
\end{eqnarray*}
as well as for $j=2,\dots,M$
\begin{equation*}
\tilde n_j = \left\{\begin{matrix}n_j, & \textrm{if } K/F \textrm{ unramified}\\
2n_j, &\textrm{if } K/F \textrm{ ramified}
\end{matrix}.\right.
\end{equation*}
Then $\operatorname N(1+\wp_K^{\tilde n_j})=1+\wp_F^{n_j}$ for $j\geq 2$, where $\wp_K$ denotes the prime ideal of $K$. Define
\begin{equation*}
U:= 1+\wp_K^k+\epsilon\wp_K^m,
\end{equation*}
where $k>0$ and $m>0$ are chosen such that
\begin{eqnarray}
&& k\geq c(\chi),\quad m\geq c(\chi)\nonumber\\
&& k\geq \tilde n_j,\quad m\geq \tilde n_j+1, \textrm{ for } j=0,\dots,M,\nonumber\\
&&m\geq c(\chi)+1-\frac{1}{2}v(x_j), \textrm{ for } j=2,\dots,M,\nonumber\\
&&m\geq \tilde n_j+1+\frac{1}{2}\lvert v(x_j)\rvert, \textrm{ for } j=2,\dots,M.\label{k-m-bedingungen}
\end{eqnarray}
As $k,m>0$ and $k,m\geq c(\chi)$, $U$ is fundamental. Define
\begin{equation*}
\psi:=\chi\cdot\mathbf 1_U.
\end{equation*}
Now realize the stalks for $x_j$, $j\geq 2$, as local linking numbers. To begin with, let $\sqrt A +\epsilon\gamma_j$ be a preimage of $x_j$, i.e.
\begin{equation*}
P(\sqrt A+\epsilon\gamma_j)=\frac{c\operatorname N(\gamma_j)}{-A}=x_j.
\end{equation*}
Then the preimage of $V_j$ is given by
\begin{equation*}
P^{-1}(V_j)=T\bigl(\sqrt A+\epsilon\gamma_j(1+\wp_K^{\tilde n_j})\bigr)T=T\bigl(\sqrt A+\epsilon\gamma_j(1+\wp_K^{\tilde n_j})\operatorname N_K^1\bigr).
\end{equation*}
Let $C_j:=\sqrt A +\epsilon\gamma_j(1+\wp_K^{\tilde n_j})\operatorname N_K^1$ and look at the compact open set $C_jU$,
\begin{equation*}
C_jU=\sqrt A(1+\wp_K^k)+c\bar \gamma_j\wp_K^m+\epsilon\bigl(\gamma_j(1+\wp_K^k+\wp_K^{\tilde n_j})\operatorname N_K^1+\sqrt A\wp_K^m\bigr).
\end{equation*}
Due to the choices (\ref{k-m-bedingungen}), $C_jU$ is fundamental. To prove this, one has to check that if $t\in T$, $c\in C_j$ and $tc\in C_jU$, then $\chi(t)=1$ (observe that $U$ is a group). So, let
\begin{equation*}
tc=t\sqrt A +\epsilon\bar t\gamma_j(1+\pi_K^{\tilde n_j}c_1)l\in C_jU.
\end{equation*}
The first component forces $t\in 1+\wp_K^k+\frac{c}{A}\bar\gamma_j\wp_K^m$. For those $t$, the choices (\ref{k-m-bedingungen}) imply $\chi(t)=1$. For the image $P(C_jU)$ one finds again by (\ref{k-m-bedingungen})
\begin{equation*}
P(C_jU)=\frac{c\operatorname N(\gamma_j)\operatorname N(1+\wp_K^k+\wp_K^{\tilde n_j}+\wp_K^m\frac{\sqrt A}{\gamma_j})}{-A\operatorname N(1+\wp_K^k+\frac{c}{\sqrt A}\bar\gamma_j\wp_K^m)}=V_j.
\end{equation*}
The functions $\phi_j:= \chi\cdot\mathbf 1_{C_jU}\in\mathcal S(\chi,G)$ are now well defined. Let us compute the local linking number
\begin{equation*}
<\phi_j,\psi>_x=\int_{T\backslash G}\int_T\phi_j(t^{-1}\gamma(x)ty)~dt~\bar\psi(y)~dy.
\end{equation*}
The integrand is nonzero only if there is $s\in K^\times$ such that
\begin{equation*}
st^{-1}\gamma(x)t=s\sqrt A+\epsilon\bar s\gamma_2(x)t\bar t^{-1}\in C_jU.
\end{equation*}
The first component implies $s\in 1+\wp_K^{\tilde n_j}$. The second component implies $\gamma_2(x)\in\gamma_j(1+\wp_K^{\tilde n_j})\operatorname N_K^1$, which is equivalent to $\gamma(x)\in C_jU$ or $x\in V_j$. In this case one can take $s=1$ and gets
\begin{equation*}
<\phi_j,\psi>_x=\mathbf 1_{V_j}(x)\int_{T\backslash G}\int_T 1~dt~\bar\psi(y)~dy= \mathbf 1_{V_j}(x)\operatorname{vol}_T(T)\operatorname{vol}_G(U).
\end{equation*}
Normalizing $\tilde \phi_j:=\frac{H(x_j)}{\operatorname{vol}_T(T)\operatorname{vol}_G(U)}\phi_j$, one finally gets
$H\vert_{V_j}(x)=<\tilde\phi_j,\psi>_x$.
\\
Now regard the stalk of zero. One finds $P(\sqrt A+\epsilon\wp_K^{\tilde n_0})=\wp_F^{n_0}\cap c\operatorname N$.
Define $C_0:=\sqrt A+\epsilon\wp_K^{\tilde n_0}$. The preimage $P^{-1}(V_0)$ equals $TC_0T=TC_0$. The compact open set $C_0U$ is easily seen to be fundamental and to satisfy $P(C_0U)=V_0\cap c\operatorname N$.
Define $\phi_0:=\chi\cdot\mathbf 1_{C_0U}$ and compute the local linking number $<\phi_0,\psi>_x$. Again, this is nonzero only if there is $s\in K^\times$ such that
\begin{equation*}
st^{-1}\gamma(x)t=s\sqrt A+\epsilon \bar s\gamma_2(x)t\bar t^{-1}\in C_0U.
\end{equation*}
This forces $\gamma_2(x)\in\wp_K^{\tilde n_0}$. Assuming this, one can take $s=1$ and gets
\begin{equation*}
<\phi_0,\psi>_x=\mathbf 1_{V_0\cap c\operatorname N}(x)\operatorname{vol}_T(T)\operatorname{vol}_G(U).
\end{equation*}
That is, $H\vert_{V_0}(x)=a_0=<\frac{a_0}{\operatorname{vol}_T(T)\operatorname{vol}_G(U)}\phi_0,\psi>_x$.\\
It remains to construct the stalk of infinity. One can assume that $\chi^2=1$, as otherwise the function vanishes for large $x$. Thus, $\chi=\chi_1\circ \operatorname N$. The preimage of $V_1=F\backslash\wp_F^{-n_1}$ is given by
\begin{equation*}
P^{-1}(V_1)=T\bigl(\sqrt A+\epsilon(\wp_K^{\tilde n_1})^{-1}\bigr)T=T\bigl(\sqrt A\wp_K^{\tilde n_1}+\epsilon\operatorname N_K^1\bigr).
\end{equation*}
Take $C_1=\sqrt A\wp_K^{\tilde n_1}+\epsilon\operatorname N_K^1$ to get
a fundamental compact open set
\begin{equation*}
C_1U=\sqrt A\wp_K^{\tilde n_1}+c\wp_K^m+\epsilon\bigl(\operatorname N_K^1(1+\wp_K^k)+\sqrt A\wp_K^{m+\tilde n_1}\bigr).
\end{equation*}
By the choices (\ref{k-m-bedingungen}) one gets $P(C_1U)=V_1\cap c\operatorname N$.
Taking $\phi_1:=\chi\cdot\mathbf 1_{C_1U}$ this time, we get
$ H\vert_{V_1}(x)=\frac{a_1}{\operatorname{vol}_T(T)\operatorname{vol}_G(U)}<\phi_1,\psi>_x$.
\end{proof}
\section{A Matching}\label{section_matching}
Having characterized the local linking numbers, we can easily compare them to the Whittaker products of Definition~\ref{def_whittaker_prod}. For this, we use the parametrization $\xi=\frac{x}{x-1}$ rather than $x$ itself. The properties of the local linking numbers (Propositions~\ref{eigenschaften_nonsplit} and \ref{eigenschaften_split}) transform accordingly. For example, the property of vanishing around $x=1$ means that the local linking numbers as functions in $\xi$ have compact support. The behavior around infinity is replaced by the behavior around $\xi=1$.
The reason why the parametrization $\xi$ is not used throughout this paper is simply that it was more convenient to do the calculations in the coordinate $x$. In view of Section~\ref{section_translation}, this is reasonable.
\begin{thm}\label{satz_matching}
The local linking numbers and the Whittaker products match, that is: If $\eta=1-\xi$ and if $\omega(-\xi\eta)=(-1)^{\delta(D)}$, then
\begin{eqnarray*}
&&\Bigl\{\lvert\xi\eta\rvert^{\frac{1}{2}}<\phi,\psi>_{x=\frac{\xi}{\xi-1}}\Big\vert \phi,\psi\in\mathcal S(\chi,G)\Bigr\} =\\
&&\hspace*{3cm} \Bigl\{W_\theta(\eta)W_E(\xi)\Big\vert W_\theta\in\mathcal K(\Pi(\chi)),W_E\in\mathcal K(\Pi(1,\omega))\Bigr\}.
\end{eqnarray*}
\end{thm}
Recall the definition~(\ref{def_delta(D)}) of $\delta(D)$. Notice that the term $\omega(-\xi\eta)$ is just $\omega(x)$.
\begin{proof}[Proof of Theorem~\ref{satz_matching}]
The Whittaker products are products of Kirillov functions characterized in Propositions~\ref{prop_characterisierung_theta_functionen} and \ref{prop_characterisierung_eisenstein_functionen}. Comparing their properties to those of the local linking numbers (Propositions~\ref{eigenschaften_nonsplit} resp. \ref{eigenschaften_split}) yields the theorem. For example, by Prop.~\ref{prop_characterisierung_theta_functionen} for $K/F$ split and $\chi_1^2\not=1$, the Whittaker products for $\xi\to 1$ ($\eta\to 0$) are given by
\begin{equation*}
\lvert\xi\eta\rvert^{\frac{1}{2}}\left(a_1\chi_1(\eta)+a_2\chi_1^{-1}(\eta)\right),
\end{equation*}
which corresponds to Prop.~\ref{eigenschaften_split}~(d). For $\xi\to 0$ ($\eta\to 1$) we apply Prop.~\ref{prop_characterisierung_eisenstein_functionen}: The Whittaker products have the shape
$ \lvert\xi\eta\rvert^{\frac{1}{2}}\left(a_1+a_2v(\xi)\right)$.
This is property~(c) of Prop.~\ref{eigenschaften_split}. Away from $\xi\to 1$ and $\xi\to 0$, the Whittaker products are locally constant with compact support. This is equivalent to (a) and (b) of Prop.~\ref{eigenschaften_split}.
\end{proof}
\section{Translated linking numbers}\label{section_translation}
In the remainder, the quaternion algebra $D$ is assumed to be split, that is, $G=F^\times\backslash D^\times$ is isomorphic to the projective group $\operatorname{PGL}_2(F)$.
The aim is to give an operator on the local linking numbers realizing the Hecke operator on the analytic side.
As the analytic Hecke operator essentially is given by translation by $b\in F^\times$ (Proposition~\ref{prop_analytischer_Hecke_allgemein}), the first candidate for this study surely is the translation by $b$, i.e.
\begin{equation*}
<\phi,\begin{pmatrix}b&0\\0&1\end{pmatrix}.\psi>_x=\int_{T\backslash G}\int_T\phi(t^{-1}\gamma(x)ty)~dt~\bar\psi(y\begin{pmatrix}b&0\\0&1\end{pmatrix})~dy.
\end{equation*}
Let
\begin{equation}\label{gl_inneres_int_def}
I_\phi(y) =\int_T\phi(t^{-1}\gamma(x)ty)~dt
\end{equation}
be the inner integral of this translated local linking number.
It will be seen eventually that this translation does not realize the Hecke operator completely but that there are operators made up from it which do.
In studying this translation, the difference between the case of a compact torus and that of a noncompact one becomes crucial. While the compact case can be described in a few lines (at least for fixed $x$, viewing the translated linking number as a function of $b$ alone), a full treatment of the noncompact case would exceed the scope of any article because of the sheer volume of computations. This case will be sketched here and illustrated by examples. The complete computations are done in~\cite{diss}.
What is more, we have to restrict ourselves to the case in which the first variable $x$ is fixed. Again, the reason for this is the manageability of the computations. But at least the examples give some hints of what is going on in two variables.
\subsection{The compact case}\label{section_compact}
Let $K=F(\sqrt A)$ be a quadratic field extension of $F$. That is, the torus $F^\times\backslash K^\times$ is compact.
As functions $\phi\in\mathcal S(\chi,G)$ have compact support modulo the compact torus $T$, they have compact support absolutely. As $\phi$ is locally constant, so is the inner integral $I_\phi$ (\ref{gl_inneres_int_def}). Further, $T\gamma(x)T\operatorname{supp}\phi$ is compact, and left translation by $t'\in T$ yields $I_\phi(t'y)=\chi(t')I_\phi(y)$. Thus, $I_\phi$ itself is a function belonging to $\mathcal S(\chi,G)$.
Choose the following isomorphism of $D^\times=(K+\epsilon K)^\times$ with $\operatorname{GL}_2(F)$:
\begin{eqnarray*}
\epsilon&\mapsto&\begin{pmatrix}0&-A\\1&0\end{pmatrix}\\
K^\times\ni t=a+b\sqrt A&\mapsto& \begin{pmatrix}a&bA\\b&a\end{pmatrix}
\end{eqnarray*}
\begin{lem}\label{lemma_mirobolische_zerlegung}
Let $M=\left\{\begin{pmatrix}y_1&y_2\\0&1\end{pmatrix}\mid y_1\in F^\times,y_2\in F\right\}$ be the mirabolic subgroup of the standard Borel group. The mapping $K^\times\times M\rightarrow \operatorname{GL}_2(F)$, $(t,m)\mapsto t\cdot m$, is a homeomorphism.
\end{lem}
\begin{proof}[Proof of Lemma~\ref{lemma_mirobolische_zerlegung}]
One has to show that
\begin{equation*}
\operatorname{GL}_2(F)\ni \begin{pmatrix}a&b\\c&d\end{pmatrix}=\begin{pmatrix}\alpha&\beta A\\\beta&\alpha\end{pmatrix}\begin{pmatrix}z&y\\0&1\end{pmatrix}
\end{equation*}
has exactly one solution $(\alpha,\beta,z,y)$ satisfying $(\alpha,\beta)\not=(0,0)$ and $z\not=0$. The first column yields $\alpha=az^{-1}$ and $\beta=cz^{-1}$. The second column now reads
\begin{equation*}
\begin{pmatrix}b\\d\end{pmatrix}=\begin{pmatrix}z^{-1}cA+yz^{-1}a\\z^{-1}a+yz^{-1}c\end{pmatrix},
\end{equation*}
which is a system of linear equations in the variables $w_1=yz^{-1}$ and $w_2=z^{-1}$ with determinant $a^2-c^2A\not=0$. Thus, there is exactly one solution $(w_1,w_2)$. As $w_2=z^{-1}=0$ would imply that the columns $\begin{pmatrix}a\\c\end{pmatrix}$ and $\begin{pmatrix}b\\d\end{pmatrix}$ are linearly dependent, $z$ and $y$ are uniquely determined and so are $\alpha$ and $\beta$. The resulting continuous mapping $K^\times\times M\rightarrow \operatorname{GL}_2(F)$ is bijective. Being given by polynomial equations, its inverse is continuous, too.
\end{proof}
The group $M$ is not unimodular, but it carries a right invariant Haar measure $d^\times y_1~dy_2$, where $d^\times y_1$ resp. $dy_2$ are nontrivial compatible Haar measures on $F^\times$ resp. $F$. We normalize the quotient measure $dy$ on $T\backslash G$ so that
$ dy=d^\times y_1~dy_2$.
By Lemma~\ref{lemma_mirobolische_zerlegung}, any $\phi\in\mathcal S(\chi,G)$ can be identified with a function in $\mathcal S(F^\times\times F)$,
\begin{equation*}
\phi(y_1,y_2):=\phi\begin{pmatrix}y_1&y_2\\0&1\end{pmatrix}.
\end{equation*}
$\phi$ being locally constant with compact support, there are finitely many points $(z_1,z_2)\in F^\times\times F$ and $m>0$ such that
\begin{equation*}
\phi(y_1,y_2)=\sum_{(z_1,z_2)}\phi(z_1,z_2)\mathbf 1_{z_1(1+\wp^m)}(y_1)\mathbf 1_{z_2+\wp^m}(y_2).
\end{equation*}
Applying this for $I_\phi$ and $\psi$,
\begin{equation*}
I_\phi(y_1,y_2)=\sum_{(z_1,z_2)}I_\phi(z_1,z_2)\mathbf 1_{z_1(1+\wp^m)}(y_1)\mathbf 1_{z_2+\wp^m}(y_2),
\end{equation*}
\begin{equation*}
\psi(y_1,y_2)=\sum_{(w_1,w_2)}\psi(w_1,w_2)\mathbf 1_{w_1(1+\wp^m)}(y_1)\mathbf 1_{w_2+\wp^m}(y_2),
\end{equation*}
we compute the translated local linking number
\begin{align*}
<\phi,\begin{pmatrix}b&0\\0&1\end{pmatrix}.\psi>_x&=\int_{T\backslash G}I_\phi(y)\bar\psi(y\begin{pmatrix}b&0\\0&1\end{pmatrix})~dy\\
&=\sum_{(z_1,z_2),(w_1,w_2)}I_\phi(z_1,z_2)\bar\psi(w_1,w_2)\mathbf 1_{z_2+\wp^m}(w_2)\mathbf 1_{\frac{w_1}{z_1}(1+\wp^m)}(b)\\
&\quad\quad\cdot\operatorname{vol}^\times(1+\wp^m)\operatorname{vol}(\wp^m).
\end{align*}
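In the last step we used that, in the coordinates of Lemma~\ref{lemma_mirobolische_zerlegung}, right translation by $\begin{pmatrix}b&0\\0&1\end{pmatrix}$ rescales only the first coordinate,
\begin{equation*}
\begin{pmatrix}y_1&y_2\\0&1\end{pmatrix}\begin{pmatrix}b&0\\0&1\end{pmatrix}=\begin{pmatrix}y_1b&y_2\\0&1\end{pmatrix},
\end{equation*}
so that the product $\mathbf 1_{z_1(1+\wp^m)}(y_1)\mathbf 1_{w_1(1+\wp^m)}(y_1b)$ is nonzero exactly if the cosets $z_1(1+\wp^m)$ and $b^{-1}w_1(1+\wp^m)$ coincide, that is, exactly if $b\in\frac{w_1}{z_1}(1+\wp^m)$.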
We have proved:
\begin{prop}\label{prop_translatiert_kompakt}
Let $T$ be compact.
For fixed $x$, the translated local linking number $<\phi,\begin{pmatrix}b&0\\0&1\end{pmatrix}\psi>_x$ is a locally constant function of $b\in F^\times$ with compact support.
\end{prop}
As the behavior of the translated local linking numbers as functions in $b$ as well as in $x$ is not studied here completely, an example is in order. This example will be of further use later. Its calculation is deferred to Appendix~A.
\begin{eg}\label{bsp_lln_translatiert_kompakt}
Let $K/F$ be an unramified field extension and let $\chi=1$. Then $\phi=\chi\cdot\mathbf 1_{\operatorname{GL}_2(\mathbf o_F)}$ is well defined in $\mathcal S(\chi,G)$ and
\begin{align*}
&<\phi,\begin{pmatrix}b&0\\0&1\end{pmatrix}\phi>_x\cdot\operatorname{vol}^{-1}=\\
&\quad \quad\mathbf 1_{\operatorname N\backslash(1+\wp)}(x)\mathbf 1_{\mathbf o_F^\times}(b)+\mathbf 1_{1+\wp}(x)\bigl(\mathbf 1_{(1-x)\mathbf o_F^\times}(b)+\mathbf 1_{(1-x)^{-1}\mathbf o_F^\times}(b)\bigr)q^{-v(1-x)},
\end{align*}
where $\operatorname{vol}:=\operatorname{vol}_T(T)\operatorname{vol}^\times(\mathbf o_F^\times)\operatorname{vol}(\mathbf o_F)$.
\end{eg}
\subsection{The noncompact case}\label{section_nonkompakt}
Let $K=F\oplus F$ be a split algebra. The character $\chi$ is of the form $\chi=(\chi_1,\chi_1^{-1})$ for a character $\chi_1$ of $F^\times$.
As in the proof of Proposition~\ref{eigenschaften_split}, $G=TNN'\cup TNwN$. Both of these open subsets are invariant under right translation by $\begin{pmatrix}b&0\\0&1\end{pmatrix}$.
Choose coset representatives for $T\backslash TNN'$ of the form
\begin{equation*}
y=\begin{pmatrix}1&y_2\\0&1\end{pmatrix}\begin{pmatrix}1&0\\y_3&1\end{pmatrix}
\end{equation*}
as well as coset representatives for $T\backslash TNwN$ of the form
\begin{equation*}
y=\begin{pmatrix}1&y_1\\0&1\end{pmatrix}w\begin{pmatrix}1&0\\y_4&1\end{pmatrix}.
\end{equation*}
Any function $\psi\in\mathcal S(\chi,G)$ can be split into a sum $\psi=\psi_1+\psi_2$, $\psi_i\in\mathcal S(\chi,G)$, with $\operatorname{supp} \psi_1\subset TNN'$ (resp. $\operatorname{supp} \psi_2\subset TNwN$).
The function $\psi_1$ can be viewed as an element of $\mathcal S(F^2)$ in the variable $(y_2,y_3)$. Choose the quotient measure $dy$ on $T\backslash TNN'$ such that $dy=dy_2~dy_3$ for fixed Haar measures $dy_i$ on $F$. Proceed analogously for $\psi_2$.
For fixed $x$, the inner integral $I_\phi$ (\ref{gl_inneres_int_def}) of the local linking number is a locally constant function in $y$. Its support is no longer compact, but $I_\phi$ is a locally constant limit of Schwartz functions.
This is the reason why this case is so much more difficult than that of a compact torus. The shape of the translated linking numbers is given in the next theorem.
\begin{thm}\label{satz_translatiert_nicht_kompakt}
Let $T$ be a noncompact torus.
For fixed $x$, the local linking number
$ <\phi,\begin{pmatrix}b&0\\0&1\end{pmatrix}.\psi>_x$
is a function in $b\in F^\times$ of the form
\begin{align*}
& \chi_1^{-1}(b)\Bigl(\mathbf 1_{\wp^n}(b)\lvert b\rvert (a_{+,1}v(b)+a_{+,2}) + A(b) + \mathbf 1_{\wp^n}(b^{-1})\lvert b\rvert^{-1} (a_{-,1}v(b)+a_{-,2})\Bigr)\\
& +
\chi_1(b)\Bigl(\mathbf 1_{\wp^n}(b)\lvert b\rvert (c_{+,1}v(b)+c_{+,2}) + C(b) + \mathbf 1_{\wp^n}(b^{-1})\lvert b\rvert^{-1} (c_{-,1}v(b)+c_{-,2})\Bigr),
\end{align*}
with suitable constants $a_{\pm,i},c_{\pm,i}\in\mathbb C$, an integer $n>0$, and functions $A,C\in\mathcal S(F^\times)$.
\end{thm}
{\it On the proof of Theorem~\ref{satz_translatiert_nicht_kompakt}.}
This is done by brute-force computations, which take about 100 pages. Instead of giving these, we will outline the reduction to realizable $\wp$-adic integration here and refer to \cite{diss}, Ch.~8, for the computations. What is more, in Example~\ref{bsp_lln_nichtkompakt} we compute one special translated local linking number in detail, which gives a good insight into the general calculations and which is used eventually.
We choose the functions $\phi,\psi$ locally as simple as possible, that is: If $z\in\operatorname{supp}\phi$, then $z$ belongs to $TNN'$ or $TNwN$. We restrict ourselves to $z\in TNN'$, as the other case is done similarly. Actually, it is seen (\cite{diss}, Ch.~8) that all the calculations for one case can be reduced to those for the other.
There is a representative
\begin{equation*}
\tilde z= \begin{pmatrix}1+z_2z_3&z_2\\z_3&1\end{pmatrix}
\end{equation*}
of $z$ modulo $T$ and an open set
\begin{equation*}
U_z=\tilde z +\begin{pmatrix}\wp^m&\wp^m\\\wp^m&\wp^m\end{pmatrix}
\end{equation*}
such that $\phi\vert_{U_z}=\phi(\tilde z)$. Choosing $m>0$ so large that $U_z$ is fundamental, $\phi$ locally has the shape $\phi_z:=\chi\cdot\mathbf 1_{U_z}$ up to some multiplicative constant.
For the exterior function $\psi$ proceed similarly. Evidently, it is enough to determine the behavior of the translated local linking numbers for functions of this type.
Thus, we are reduced to compute
\begin{equation*}
\int_{T\backslash G}\int_T\phi_z(t^{-1}\gamma(x)ty)~dt~\bar\psi_{\tilde z}(y\begin{pmatrix}b&0\\0&1\end{pmatrix})~dy.
\end{equation*}
According to whether $z_2$ or $z_3$ is zero or not, and $\operatorname{supp}\psi\subset TNN'$ or $\operatorname{supp}\psi\subset TNwN$, there are eight types of integrals to be done (\cite{diss}, Ch.~5.2 and 8).
\begin{eg}\label{bsp_lln_nichtkompakt}
Let $T$ be noncompact.
Let $\chi=(\chi_1,\chi_1)$, where $\chi_1$ is unramified and quadratic.
Then $\phi=\chi\cdot\mathbf 1_{\operatorname{GL}_2(\mathbf o_F)}$ is well-defined in $\mathcal S(\chi, G)$.
The translated local linking number
$<\phi,\begin{pmatrix} b&0\\0&1\end{pmatrix}.\phi>_x$
is given by
\begin{eqnarray*}
&&
\chi_1(1-x)\chi_1(b)\operatorname{vol}^\times (\mathbf o_F^\times)\operatorname{vol}(\mathbf o_F)^2\cdot\\
&&\Biggl[
\mathbf 1_{F^\times\backslash (1+\wp)}(x)\Biggl(\mathbf 1_{\mathbf o_F^\times}(b)\bigl(\lvert v(x)\rvert +1\bigr)(1+q^{-1}) +\mathbf 1_{\wp}(b)\lvert b\rvert \bigl(4v(b)+2\lvert v(x)\rvert\bigr)\Biggr.
\Biggr.\\
&& \quad\quad\quad\quad\quad\quad\quad\Biggl. +\mathbf 1_{\wp}(b^{-1})\lvert b^{-1}\rvert \bigl(-4v(b)+2\lvert v(x)\rvert\bigr)\Biggr)\\
&&
\quad+\quad \mathbf 1_{1+\wp}(x)\Biggl(\mathbf 1_{\wp^{v(1-x)+1}}(b)\lvert b\rvert \bigl(4v(b)-4v(1-x)\bigr) \Biggr.\\
&&
\quad\quad\quad\quad\quad\quad\quad\Biggl.\Biggl. +\mathbf 1_{(1-x)\mathbf o_F^\times}(b)\lvert b\rvert +
\mathbf 1_{(1-x)\mathbf o_F^\times}(b^{-1})\lvert b^{-1}\rvert \Biggr.\Biggr.\\
&&
\quad\quad\quad\quad\quad\quad\quad\Biggl.\Biggl. + \mathbf 1_{\wp^{v(1-x)+1}}(b^{-1})\lvert b^{-1}\rvert \bigl(-4v(b)-4v(1-x)\bigr)\Biggr)\Biggr].
\end{eqnarray*}
\end{eg}
The calculation of Example~\ref{bsp_lln_nichtkompakt} is given in Appendix~B.
\section{A geometric Hecke operator}\label{section_geom_Hecke}
Here, an adequate operator on the local linking numbers is constructed that realizes the asymptotics ($b\to0$) of the Hecke operator on Whittaker products. The asymptotics of the latter is described by the following.
\begin{prop}\label{prop_hecke_auf_whittaker_prod}
The Whittaker products $W(b\xi,b\eta)$ have the following behavior for $b\to 0$ and fixed $\xi=\frac{x}{x-1}$, $\eta=1-\xi$.
(a) In case of a compact torus $T$ and $\chi$ not factorizing via the norm,
\begin{equation*}
W(b\xi,b\eta)=0.
\end{equation*}
In case of a compact torus $T$ and $\chi=\chi_1\circ\operatorname N$,
\begin{equation*}
W(b\xi,b\eta)=\lvert b\rvert\lvert\xi\eta\rvert^{\frac{1}{2}}\chi_1(b\eta)\left(c_1+c_2\omega(b\xi)\right)\left(c_3\mathbf 1_{\wp^m\cap(1-x)\operatorname N}(b)+c_4\mathbf 1_{\wp^m\cap(1-x)z\operatorname N}(b)\right),
\end{equation*}
where $z\in F^\times\backslash \operatorname N$.
(b) In case of a noncompact torus $T$,
\begin{equation*}
W(b\xi,b\eta)=\left\{\begin{matrix}
\lvert b\rvert\lvert\xi\eta\rvert^{\frac{1}{2}}\left(c_1\chi_1(b\eta)+c_2\chi_1^{-1}(b\eta)\right)\left(c_3v(b\xi)+c_4\right),&\textrm{if } \chi_1^2\not=1\\
\lvert b\rvert\lvert\xi\eta\rvert^{\frac{1}{2}}\chi_1(b\eta)\left(c_1v(b\eta)+c_2\right)\left(c_3v(b\xi)+c_4\right),&\textrm{if }\chi_1^2=1\end{matrix}\right..
\end{equation*}
Here, $c_i\in\mathbb C$, $i=1,\dots,4$.
\end{prop}
\begin{proof}[Proof of Proposition~\ref{prop_hecke_auf_whittaker_prod}]
For $b\to 0$ both arguments $b\xi\to 0$ and $b\eta\to 0$. The stated behavior is directly collected from Propositions~\ref{prop_characterisierung_theta_functionen} and \ref{prop_characterisierung_eisenstein_functionen}.
\end{proof}
We notice that the translation of the local linking numbers by $b$ studied above stays within these asymptotics (cf. Proposition~\ref{prop_translatiert_kompakt} and Theorem~\ref{satz_translatiert_nicht_kompakt}), but that it does not realize the leading terms in case $\chi$ is quadratic. In case of a noncompact torus $T$, the leading term is $v(b)^2$, while translation only produces $v(b)$. In case of a compact torus, the translated linking numbers have compact support, while the Hecke operator on Whittaker products does not.
In the following, we make the additional ``completely unramified'' assumption, which is satisfied at all but finitely many places of a division quaternion algebra over a number field. For applications to the Gross-Zagier formula, the constructed operator is needed only under this hypothesis.
\begin{assumption}\label{annahme_fuer_geom_hecke}
$D$ is a split algebra, i.e. $G$ is isomorphic to $\operatorname{GL}_2(F)$. Moreover, $K/F$ is an unramified extension (split or nonsplit) and $\chi$ is an unramified character.
\end{assumption}
For a noncompact torus $T$, the translated local linking numbers (Theorem~\ref{satz_translatiert_nicht_kompakt}) split into sums of the form
\begin{equation*}
<\phi,\begin{pmatrix}\beta&0\\0&1\end{pmatrix}.\psi>_x = <\phi,\begin{pmatrix}\beta&0\\0&1\end{pmatrix}.\psi>_x^++ <\phi,\begin{pmatrix}\beta&0\\0&1\end{pmatrix}.\psi>_x^-,
\end{equation*}
where
\begin{align}
& <\phi,\begin{pmatrix}\beta&0\\0&1\end{pmatrix}.\psi>_x^\pm:=\chi_1^{\pm1}(\beta)\cdot\label{schranken_def_translatiert_noncompakt}\\
&
\Bigl(\mathbf 1_{\wp^n}(\beta)\lvert\beta\rvert(c_{\pm,1}v(\beta)+c_{\pm,2})+C_{\pm}(\beta)+\mathbf 1_{\wp^n}(\beta^{-1})\lvert\beta\rvert^{-1}(d_{\pm,1}v(\beta)+d_{\pm,2})\Bigr)\nonumber
\end{align}
are the summands belonging to $\chi_1^{\pm1}$ respectively. Here, the constants $c_{\pm,i}, d_{\pm,i}$, and $C_\pm\in\mathcal S(F^\times)$ as well as $n>0$ depend on $\phi,\psi$ and $x$.
If $\chi_1$ is a quadratic character, these two summands coincide.
To give an operator fitting all cases, define in case of a compact torus
\begin{equation*}
<\phi,\begin{pmatrix}\beta&0\\0&1\end{pmatrix}.\psi>_x^\pm:= <\phi,\begin{pmatrix}\beta&0\\0&1\end{pmatrix}.\psi>_x.
\end{equation*}
For $v(b)\geq 0$ define the operator $\mathbf S_b$ as
\begin{equation}
\mathbf S_b:=\frac{1}{4}\left(\mathbf S_b^++\mathbf S_b^-\right),
\end{equation}
where
\begin{eqnarray*}
\mathbf S_b^\pm<\phi,\psi>_x &:=&
\sum_{s=0,1}\sum_{i=0}^{v(b)}\frac{\chi_1^{\mp 1}(\pi)^{i(-1)^s}\omega(b(1-x))^{i+s}}{\lvert\pi^{v(b)-i}\rvert}\\
&& \quad\quad\quad\quad\cdot<\phi,\begin{pmatrix}\pi^{(-1)^s(v(b)-i)}&0\\0&1\end{pmatrix}.\psi>_x^\pm.
\end{eqnarray*}
It is worth remarking that this ``Hecke operator'' is not unique. For example, the summand for $s=0$ is an operator, call it $\mathbf T_b$, with the same properties as $\mathbf S_b$ itself. The crucial point seems to be that an averaging sum occurs. The operator $\mathbf S_b$ is chosen such that this sum includes the negative exponents $-v(b)+i$ as well. This kind of symmetry will make the results on the local Gross-Zagier formula look quite smooth (cf. Section~\ref{section_meine_Gross-Zagier_formel}). But these results could be obtained by $\mathbf T_b$ as well.
\begin{prop}\label{prop_geom_hecke_kompakt}
Let $T$ be a compact torus. Then the operator $\mathbf S_b$ reduces to
\begin{equation*}
\mathbf S_b<\phi,\psi>_x=\frac{1}{2}\sum_{s=0,1}\sum_{i=0}^{v(b)}\frac{\omega(b(1-x))^{i+s}}{\lvert \pi^{v(b)-i}\rvert}<\phi,\begin{pmatrix}\pi^{(-1)^s(v(b)-i)}&0\\0&1\end{pmatrix}.\psi>_x.
\end{equation*}
Let $x\in c\operatorname N$ be fixed. For $\phi,\psi\in\mathcal S(\chi, G)$ there are constants $c_1,c_2\in\mathbb C$ and $n\in\mathbb N$ such that for $v(b)\geq n$
\begin{equation*}
\mathbf S_b<\phi,\psi>_x=c_1\mathbf 1_{\wp^n\cap(1-x)\operatorname N}(b)+c_2\mathbf 1_{\wp^n\cap(1-x)z\operatorname N}(b).
\end{equation*}
\end{prop}
\begin{prop}\label{prop_geom_hecke_nonkompakt}
Let $T$ be a noncompact torus. The operators $\mathbf S_b^\pm$ reduce to
\begin{equation*}
\mathbf S_b^\pm<\phi,\psi>_x=
\sum_{s=0,1}\sum_{i=0}^{v(b)}\frac{\chi_1^{\mp 1}(\pi)^{i(-1)^s}}{\lvert\pi^{v(b)-i}\rvert}
<\phi,\begin{pmatrix}\pi^{(-1)^s(v(b)-i)}&0\\0&1\end{pmatrix}.\psi>_x^\pm.
\end{equation*}
Let $x\in F^\times$ be fixed.
For $\phi,\psi\in\mathcal S(\chi, G)$ there are constants $c_0,\dots,c_3\in\mathbb C$ and $n\in\mathbb N$ such that for $v(b)\geq n$
\begin{displaymath}
\mathbf S_b <\phi,\psi>_x = \chi_1(b)\bigl( c_1 v(b) +c_0\bigr) + \chi_1^{-1}(b)\bigl( c_3 v(b) +c_2\bigr),
\end{displaymath}
if $\chi_1^2\not=1$, and
\begin{displaymath}
\mathbf S_b <\phi,\psi>_x = \chi_1(b)\bigl( c_2v(b)^2+c_1v(b)+c_0\bigr),
\end{displaymath}
if $\chi_1^2=1$.
\end{prop}
\begin{thm}\label{satz_matching_operator}
For fixed $x$, the local linking numbers $\mathbf S_b<\phi,\psi>_x$ realize the asymptotics of the translated Whittaker products $ W(b\xi,b\eta)$ up to a factor $\lvert b\rvert\lvert \xi\eta\rvert^{\frac{1}{2}}$.
\end{thm}
\begin{proof}[Proof of Theorem~\ref{satz_matching_operator}]
In case $T$ compact,
combining Proposition~\ref{prop_geom_hecke_kompakt} with Pro\-po\-si\-tion~\ref{prop_hecke_auf_whittaker_prod}~(a) for $\chi=1$ yields the Theorem.
In case $T$ noncompact, combine Proposition~\ref{prop_geom_hecke_nonkompakt} with Proposition~\ref{prop_hecke_auf_whittaker_prod}~(b).
\end{proof}
\begin{proof}[Proof of Proposition~\ref{prop_geom_hecke_kompakt}]
Notice that for $T$ compact, Assumption~\ref{annahme_fuer_geom_hecke} forces $\chi=1$ by Corollary~\ref{cor_chi}.
By Proposition~\ref{prop_translatiert_kompakt}, the translated linking number can be written as
\begin{equation*}
<\phi,\begin{pmatrix}\beta&0\\0&1\end{pmatrix}.\psi>_x=\sum_i d_{a_i}\lvert a_i\rvert^{\operatorname{sign} v(a_i)}\mathbf 1_{a_i(1+\wp^m)}(\beta),
\end{equation*}
for finitely many $a_i\in F^\times$, $d_{a_i}\in\mathbb C$, and some $m>0$, where the sets $a_i(1+\wp^m)$ are pairwise disjoint. One can assume that in this sum all $\pi^l$, $-\operatorname{max}_i \lvert v(a_i)\rvert\leq l\leq \operatorname{max}_i \lvert v(a_i)\rvert$, occur. Let $n:=\operatorname{max}_i \lvert v(a_i)\rvert+1$. Then, for $v(b)\geq n$,
\begin{align*}
\mathbf S_b<\phi,\psi>_x &=
\frac{1}{2}\sum_{i=0}^{v(b)}\left(\omega(b(1-x))^i\sum_{l=-n+1}^{n-1}\frac{d_{\pi^l}\lvert\pi^l\rvert^{\operatorname{sign}(l)}} {\lvert\pi^{v(b)-i}\rvert}\mathbf 1_{\pi^l(1+\wp^m)}(\pi^{v(b)-i})\right.\\
& \:+\left.\omega(b(1-x))^{i+1}\sum_{l=-n+1}^{n-1}\frac{d_{\pi^l}\lvert\pi^l\rvert^{\operatorname{sign}(l)}}{\lvert\pi^{v(b)-i}\rvert}\mathbf 1_{\pi^l(1+\wp^m)}(\pi^{i-v(b)})\right)\\
&=\frac{1}{2}\sum_{l=0}^{n-1}\omega(b(1-x))^{v(b)+l}d_{\pi^l}+\frac{1}{2}\sum_{l=-n+1}^{0}\omega(b(1-x))^{v(b)+l+1}d_{\pi^l}\\
&= c_1\mathbf 1_{\wp^n\cap(1-x)\operatorname N}(b)+c_2\mathbf 1_{\wp^n\cap(1-x)z\operatorname N}(b),
\end{align*}
where $c_1:=\frac{1}{2}\sum_{l=0}^{n-1}(d_{\pi^l}+d_{\pi^{-l}})$ and
$c_2:=\frac{1}{2}\sum_{l=0}^{n-1}(-1)^l(d_{\pi^l}-d_{\pi^{-l}})$.
Notice that for $b(1-x)\in z\operatorname N$ one has $\omega(b(1-x))^{v(b)}=(-1)^{v(b)}=-\omega(1-x)$.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{prop_geom_hecke_nonkompakt}]
Recall that $T$ noncompact forces $\omega=1$.
First, one proves these asymptotics for the part $\mathbf T_b^-$ of $\mathbf S_b$ belonging to $\mathbf S_b^-$ and $s=0$,
\begin{equation*}
\mathbf T_b^-<\phi,\psi>_x :=\sum_{i=0}^{v(b)}\frac{\chi_1(\pi)^i}{\lvert \pi^{v(b)-i}\rvert}<\phi,\begin{pmatrix}\pi^{v(b)-i}&0\\0&1\end{pmatrix}.\psi>_x^-.
\end{equation*}
Let $n>0$ be the integer as in (\ref{schranken_def_translatiert_noncompakt}). Let $v(b)\geq n$. In the formula for $\mathbf T_b^-$, one distinguishes the summands according to whether $v(b)-i<n$ or not.
If $v(b)-i<n$,
then
\begin{equation*}
<\phi,\begin{pmatrix}\pi^{v(b)-i}&0\\0&1\end{pmatrix}.\psi>_x^-=\chi_1^{-1}(\pi^{v(b)-i})C_-(\pi^{v(b)-i}).
\end{equation*}
The function $\tilde C_-$ defined by
\begin{equation*}
\tilde C_-(\beta):=\frac{\chi_1^{-2}(\beta)}{\lvert\beta\rvert}C_-(\beta)
\end{equation*}
again belongs to $\mathcal S(F^\times)$. The part of $\mathbf T_b^-$ made up of summands satisfying $v(b)-i<n$ now simplifies to
\begin{align*}
&\sum_{i=v(b)-n+1}^{v(b)}\frac{\chi_1(\pi)^i}{\lvert\pi^{v(b)-i}\rvert}<\phi,\begin{pmatrix}\pi^{v(b)-i}&0\\0&1\end{pmatrix}.\psi>_x^-\\
&=\sum_{i=v(b)-n+1}^{v(b)}\chi_1(b)\tilde C_-(\pi^{v(b)-i})=\chi_1(b)\sum_{l=0}^{n-1}\tilde C_-(\pi^l).
\end{align*}
Here, the last sum is independent of $b$. Thus, this part of $\mathbf T_b^-$ satisfies the claim.
In the remaining part
\begin{equation*}
T(i\leq v(b)-n):=\sum_{i=0}^{v(b)-n}\frac{\chi_1(\pi)^i}{\lvert \pi^{v(b)-i}\rvert}<\phi,\begin{pmatrix}\pi^{v(b)-i}&0\\0&1\end{pmatrix}.\psi>_x^-
\end{equation*}
all the translated local linking numbers occurring can be written as
\begin{equation*}
<\phi,\begin{pmatrix}\pi^{v(b)-i}&0\\0&1\end{pmatrix}.\psi>_x^-=\chi_1^{-1}(\pi^{v(b)-i})\lvert\pi^{v(b)-i}\rvert\left(c_{-,1}(v(b)-i)+c_{-,2}\right).
\end{equation*}
Using this, the remaining part simplifies to
\begin{equation*}
T(i\leq v(b)-n)=\chi_1^{-1}(b)\sum_{i=0}^{v(b)-n}\chi_1(\pi)^{2i}\left(c_{-,1}(v(b)-i)+c_{-,2}\right).
\end{equation*}
In the following, one has to distinguish whether $\chi_1$ is quadratic or not. First, let $\chi_1^2=1$. Then
\begin{equation*}
T(i\leq v(b)-n)=\chi_1(b)(v(b)-n+1)\left( c_{-,2}+\frac{1}{2}c_{-,1}(v(b)+n) \right),
\end{equation*}
which has the claimed asymptotics.
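Explicitly, this closed form follows from the substitution $j=v(b)-i$, using $\chi_1(\pi)^{2i}=1$ and $\chi_1^{-1}(b)=\chi_1(b)$:
\begin{align*}
T(i\leq v(b)-n)&=\chi_1(b)\sum_{j=n}^{v(b)}\left(c_{-,1}j+c_{-,2}\right)\\
&=\chi_1(b)(v(b)-n+1)\left(c_{-,2}+\frac{1}{2}c_{-,1}(v(b)+n)\right).
\end{align*}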
Let $\chi_1^2\not=1$. By enlarging $n$ one can assume that the order of $\chi_1$ divides $n$, i.e. $\chi_1^n=1$. The remaining part of $\mathbf T_b^-$ in this case is
\begin{align*}
T(i\leq v(b)-n)&=(c_{-,1}v(b)+c_{-,2})\frac{\chi_1(b\pi)-\chi_1^{-1}(b\pi)}{\chi_1(\pi)-\chi_1^{-1}(\pi)}\\
&\quad-c_{-,1}\frac{\chi_1(b\pi)(v(b)-n+1)}{\chi_1(\pi)-\chi_1^{-1}(\pi)}
+ c_{-,1}\frac{\chi_1(b\pi^2)-1}{(\chi_1(\pi)-\chi_1^{-1}(\pi))^2}.
\end{align*}
Thus, the claim is satisfied in case $\chi_1^2\not=1$, too.
The other parts of $\mathbf S_b$ satisfy the claimed asymptotics as well, as is easily deduced from the statement for $\mathbf T_b^-$. First, if $\mathbf T_b^+$ denotes the part of $\mathbf S_b$ belonging to $\mathbf S_b^+$ and $s=0$, then the statement for $\mathbf T_b^+$ follows from the proof for $\mathbf T_b^-$ replacing there $\chi_1^{-1}$ by $\chi_1$, $C_-$ by $C_+$, and $c_{-,i}$ by $c_{+,i}$, where the constants are given by (\ref{schranken_def_translatiert_noncompakt}).
Second, for $s=1$ notice that
\begin{equation*}
\chi_1(\pi)^{i(-1)^s}\chi_1^{-1}(\pi^{(-1)^s(v(b)-i)})=\chi_1(b)\chi_1(\pi)^{-2i}.
\end{equation*}
In this case the claim follows if one rewrites the proofs for $s=0$ substituting $\chi_1$ by $\chi_1^{-1}$ as well as $c_{\pm,i}$ by $d_{\pm,i}$ of (\ref{schranken_def_translatiert_noncompakt}).
\end{proof}
\section{Local Gross-Zagier formula rewritten}\label{section_local_Gross-Zagier}
We report Zhang's local Gross-Zagier formulae \cite{zhang} in the notation used throughout this paper in order to compare them directly with the results given by the operator $\mathbf S_b$ defined above.
For the sake of readability, we prefer to give short proofs of Zhang's results.
We assume Hypothesis~\ref{annahme_fuer_geom_hecke}.
\subsection{Local Gross-Zagier formula by Zhang}\label{section_zhang_referiert}
The local Gross-Zagier formula compares the Whittaker products of local newforms with a local linking number belonging to a very special function $\phi$ (\cite{zhang} Ch.~4.1). This function is given by
\begin{equation*}
\phi=\chi\cdot\mathbf 1_{R^\times},
\end{equation*}
where $R^\times$ in general is the unit group of a carefully chosen order $R$ in $ D$.
In case of Hypothesis~\ref{annahme_fuer_geom_hecke}, $R^\times=\operatorname{GL}_2(\mathbf o_F)$ and the function $\phi$ is well-defined. The special local linking number is then defined by
\begin{equation*}
<\mathbb T_b\phi,\phi>_x,
\end{equation*}
where the ``Hecke operator'' $\mathbb T_b$ is defined as follows (\cite{zhang} 4.1.22 et sqq.). Let
\begin{equation*}
H(b):=\{g\in M_2(\mathbf o_F)\mid v(\det g)=v(b)\}.
\end{equation*}
Then
\begin{equation*}
\mathbb T_b\phi(g):=\int_{H(b)}\phi(hg)~dh.
\end{equation*}
Notice that this operator is well-defined on $\phi$ because $\phi$ essentially is the characteristic function of $\operatorname{GL}_2(\mathbf o_F)$, but it is not well-defined on most other functions. In our construction of the operator $\mathbf S_b$, we took up the idea that $\mathbb T_b$ has the flavor of a summation over translates by coset representatives, as
\begin{equation*}
H(b)=\bigcup\begin{pmatrix}y_1&0\\0&y_3\end{pmatrix}\begin{pmatrix}1&y_2\\0&1\end{pmatrix}\operatorname{GL}_2(\mathbf o_F),
\end{equation*}
where the union is over representatives $(y_1,y_3)\in\mathbf o_F\times\mathbf o_F$ with $v(y_1y_3)=v(b)$ and $y_2\in \wp^{-v(y_1)}\backslash \mathbf o_F$.
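For orientation, the number of cosets in this decomposition is
\begin{equation*}
\sum_{j=0}^{v(b)}q^j:
\end{equation*}
for $v(y_1)=j$ there are $q^j$ classes of $y_2$ in $\wp^{-j}\backslash\mathbf o_F$, while $y_3$ is determined up to units by $v(y_3)=v(b)-j$.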
\begin{lem}\label{zhangs_lln_kompakt}
(\cite{zhang} Lemma~4.2.2)
Let $K/F$ be a field extension and assume Hypothesis~\ref{annahme_fuer_geom_hecke}. Let $\phi=\chi\cdot\mathbf 1_{\operatorname{GL}_2(\mathbf o_F)}$. Then
\begin{equation*}
<\mathbb T_b\phi,\phi>_x=\operatorname{vol}(\operatorname{GL}_2(\mathbf o_F))^2\operatorname{vol}_T(T)\mathbf 1_{\operatorname N}(x)\mathbf 1_{\frac{1-x}{x}(\mathbf o_F\cap\operatorname N)}(b)\mathbf 1_{(1-x)(\mathbf o_F\cap\operatorname N)}(b).
\end{equation*}
\end{lem}
\begin{lem}\label{zhangs_lln_nichtkompakt}
(\cite{zhang} Lemma~4.2.3) Let $K/F$ be split, let $\chi=(\chi_1,\chi_1^{-1})$ be an unramified character, and let $\phi=\chi\cdot\mathbf 1_{\operatorname{GL}_2(\mathbf o_F)}$. In case $\chi_1^2\not=1$,
\begin{align*}
<\mathbb T_b\phi,\phi>_x&=\frac{\chi_1(b(1-x)^{-1}\pi)-\chi_1^{-1}(b(1-x)^{-1}\pi)}{\chi_1(\pi)-\chi_1^{-1}(\pi)}\operatorname{vol}(\operatorname{GL}_2(\mathbf o_F))^2\operatorname{vol}^\times(\mathbf o_F^\times)\\
&\quad\cdot\mathbf 1_{\frac{1-x}{x}\mathbf o_F\cap(1-x)\mathbf o_F}(b)\mathbf 1_{F^\times}(x)\left(v(b)+v(\frac{x}{1-x})+1\right).
\end{align*}
In case $\chi_1^2=1$,
\begin{align*}
<\mathbb T_b\phi,\phi>_x&=\chi_1(b(1-x))\operatorname{vol}(\operatorname{GL}_2(\mathbf o_F))^2\operatorname{vol}^\times(\mathbf o_F^\times)
\mathbf 1_{\frac{1-x}{x}\mathbf o_F\cap(1-x)\mathbf o_F}(b)\\
&\quad\cdot\mathbf 1_{F^\times}(x)\bigl(v(b)-v(1-x)+1\bigr)\bigl(v(b)+v(\frac{x}{1-x})+1\bigr).
\end{align*}
\end{lem}
For the proofs of Lemma~\ref{zhangs_lln_kompakt} and \ref{zhangs_lln_nichtkompakt} we follow a hint given orally by Uwe Weselmann. Write
\begin{equation*}
\phi(x)=\sum_{\tau\in T(F)/T(\mathbf o_F)}\chi(\tau)\mathbf 1_{\tau\operatorname{GL}_2(\mathbf o_F)}(x).
\end{equation*}
For the Hecke operator we find
\begin{equation*}
\mathbb T_b\mathbf 1_{\tau\operatorname{GL}_2(\mathbf o_F)}(x) =\operatorname{vol}(\operatorname{GL}_2(\mathbf o_F))\mathbf 1_{\tau b^{-1}H(b)}(x),
\end{equation*}
as $b^{-1}H(b)=\{h\in \operatorname{GL}_2(F)\mid h^{-1}\in H(b)\}$. Noticing that the Hecke operator is right invariant under multiplication by $y\in \operatorname{GL}_2(\mathbf o_F)$, i.e. $\mathbb T_b\phi(xy)=\mathbb T_b\phi(x)$, we get
\begin{equation}\label{formel_beweis_zhangs_lln}
<\mathbb T_b\phi,\phi>_x=\operatorname{vol}(\operatorname{GL}_2(\mathbf o_F))^2\sum_\tau\chi(\tau)\int_T\mathbf 1_{\tau b^{-1}H(b)}(t^{-1}\gamma(x)t)~dt.
\end{equation}
This formula is evaluated in the different cases.
\begin{proof}[Proof of Lemma~\ref{zhangs_lln_kompakt}]
Let $K=F(\sqrt A)$, where $v(A)=0$. Choose a tracefree $\gamma(x)=\sqrt A +\epsilon(\gamma_1+\gamma_2\sqrt A)$, where $\operatorname N(\gamma_1+\gamma_2\sqrt A)=x$. The conditions for the integrand of (\ref{formel_beweis_zhangs_lln}) not to vanish are
\begin{align*}
\tau^{-1}b\sqrt A&\in\mathbf o_K\\
\tau^{-1}b\bar t^{-1}t(\gamma_1+\gamma_2\sqrt A)&\in\mathbf o_K\\
\det(t^{-1}\gamma(x)t)=A(x-1)&\in b^{-1}\operatorname N(\tau)\mathbf o_F^\times.
\end{align*}
They are equivalent to $\lvert \operatorname N(\tau)\rvert=\lvert b(1-x)\rvert$ and $\lvert b\rvert\leq \min\{\lvert\frac{1-x}{x}\rvert,\lvert 1-x\rvert\}$.
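In fact, extending the absolute value of $F$ to $K$, one has $\lvert\operatorname N(\tau)\rvert=\lvert\tau\rvert^2$ and $\lvert\gamma_1+\gamma_2\sqrt A\rvert^2=\lvert x\rvert$. The determinant condition then gives $\lvert\operatorname N(\tau)\rvert=\lvert b(1-x)\rvert$; the first condition gives $\lvert b\rvert\leq\lvert\tau\rvert$, hence $\lvert b\rvert^2\leq\lvert b(1-x)\rvert$, i.e. $\lvert b\rvert\leq\lvert 1-x\rvert$; and the second condition gives $\lvert b\rvert\lvert x\rvert^{\frac{1}{2}}\leq\lvert\tau\rvert$, i.e. $\lvert b\rvert\leq\lvert\frac{1-x}{x}\rvert$.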
There is only one coset $\tau\in T(F)/T(\mathbf o_F)$ satisfying this, and this coset only exists if $b\in(1-x)\operatorname N$. Thus,
\begin{eqnarray*}
<\mathbb T_b\phi,\phi>_x&=&\operatorname{vol}(\operatorname{GL}_2(\mathbf o_F))^2\operatorname{vol}_T(T)\\
&&\cdot\left(\mathbf 1_{\operatorname N\backslash (1+\wp)}(x)\mathbf 1_{\mathbf o_F\cap\operatorname N}(b)+\mathbf 1_{1+\wp}(x)\mathbf 1_{(1-x)(\mathbf o_F\cap\operatorname N)}(b)\right),
\end{eqnarray*}
which equals the claimed result.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{zhangs_lln_nichtkompakt}]
First evaluate the integral
\begin{equation*}
I_\tau(b,x):=\int_T\mathbf 1_{\tau b^{-1}H(b)}(t^{-1}\gamma(x)t)~dt.
\end{equation*}
Choose $\gamma(x)=\begin{pmatrix}-1&x\\-1&1\end{pmatrix}$ tracefree, and set $\tau=(\tau_1,\tau_2)\in K^\times/\mathbf o_K^\times$ as well as $t=(a,1)\in T$.
The conditions for the integrand of (\ref{formel_beweis_zhangs_lln}) not to vanish are
\begin{align*}
(-\tau_1^{-1}b,\tau_2^{-1}b)&\in\mathbf o_K,\\
(-\tau_1^{-1}a^{-1}bx,\tau_2^{-1}ab)&\in\mathbf o_K,\\
\det(t^{-1}\gamma(x)t)=x-1&\in\operatorname N(\tau)b^{-1}\mathbf o_K^\times.
\end{align*}
That is, the integral does not vanish only if $v(\tau_2)=-v(\tau_1)+v(b)+v(1-x)$ satisfies $v(1-x)\leq v(\tau_2)\leq v(b)$. In that case the scope of integration is given by $-v(b)+v(\tau_2)\leq v(a)\leq v(\tau_2)+v(x)-v(1-x)$ and the integral equals
\begin{equation*}
I_\tau(b,x)=\operatorname{vol}^\times(\mathbf o_F^\times)\left(v(b)+v(x)-v(1-x)+1\right)\mathbf 1_{\mathbf o_F\cap\wp^{v(1-x)-v(x)}}(b).
\end{equation*}
Evaluating $\chi(\tau)$ we get $\chi(\tau)=\chi_1(b(1-x))\chi_1^{-2}(\tau_2)$, as $\chi$ is unramified. Summing up the terms of (\ref{formel_beweis_zhangs_lln}) yields the claim.
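To carry out this summation explicitly: as $I_\tau(b,x)$ does not depend on $\tau$, writing $n=v(b)-v(1-x)$ and summing over the admissible cosets with $v(1-x)\leq v(\tau_2)\leq v(b)$, one finds in case $\chi_1^2\not=1$
\begin{equation*}
\sum_\tau\chi(\tau)=\chi_1(b(1-x)^{-1})\sum_{k=0}^{n}\chi_1^{-2k}(\pi)=\frac{\chi_1(b(1-x)^{-1}\pi)-\chi_1^{-1}(b(1-x)^{-1}\pi)}{\chi_1(\pi)-\chi_1^{-1}(\pi)},
\end{equation*}
while in case $\chi_1^2=1$ the same sum collapses to $\chi_1(b(1-x))(n+1)$.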
\end{proof}
The other constituents of the local Gross-Zagier formulae are the Whittaker products of newforms for both the Theta series $\Pi(\chi)$ and the Eisenstein series $\Pi(1,\omega)$ at $s=\frac{1}{2}$. By Hypothesis~\ref{annahme_fuer_geom_hecke}, the Theta series equals $\Pi(\chi_1,\chi_1^{-1})$ if $K/F$ splits, and it equals $\Pi(1,\omega)$ if $K/F$ is a field extension. Thus, all occurring representations are principal series and the newforms read in the Kirillov model are given by (\ref{gleichung_whittaker_neuform}).
In case of a field extension we get
\begin{equation*}
W_{\theta, new}(a)=W_{E, new}(a)=\lvert a\rvert^{\frac{1}{2}}\mathbf 1_{\mathbf o_F\cap\operatorname N}(a)\operatorname{vol}(\mathbf o_F)\operatorname{vol}^\times(\mathbf o_F^\times).
\end{equation*}
In case $K/F$ splits one gets
\begin{equation*}
W_{\theta,new}(a)=\lvert a\rvert^{\frac{1}{2}}\mathbf 1_{\mathbf o_F}(a)\operatorname{vol}(\mathbf o_F)\operatorname{vol}^\times(\mathbf o_F^\times)\left\{\begin{matrix}\frac{\chi_1(a\pi)-\chi_1^{-1}(a\pi)}{\chi_1(\pi)-\chi_1^{-1}(\pi)},&\textrm{if } \chi_1^2\not=1\\\chi_1(a)(v(a)+1),&\textrm{if }\chi_1^2=1\end{matrix}\right.,
\end{equation*}
while
\begin{equation*}
W_{E,new}(a)=\lvert a\rvert^{\frac{1}{2}}\mathbf 1_{\mathbf o_F}(a)(v(a)+1)\operatorname{vol}(\mathbf o_F)\operatorname{vol}^\times(\mathbf o_F^\times).
\end{equation*}
Summing up, we get the following lemma on the shape of Whittaker products for newforms. We give two expressions for them, the first one using the variables $\xi=\frac{x}{x-1}$ and $\eta=1-\xi$, the second one using the variable $x$.
\begin{lem}\label{lemma_whittaker_neuform_produkte}
(\cite{zhang} Lemma~3.4.1)
Assume Hypothesis~\ref{annahme_fuer_geom_hecke}. Then the Whittaker products for the newforms of Theta series and Eisenstein series have the following form up to the factor $\operatorname{vol}(\mathbf o_F)^2\operatorname{vol}^\times(\mathbf o_F^\times)^2$.
If $K/F$ is a field extension, then
\begin{eqnarray*}
W_{\theta, new}(b\eta)W_{E, new}(b\xi) &=&
\lvert \xi\eta\rvert^{\frac{1}{2}}\lvert b\rvert \mathbf 1_{\mathbf o_F}(b\xi)\mathbf 1_{\mathbf o_F}(b\eta)\\
&=& \lvert \xi\eta\rvert^{\frac{1}{2}}\lvert b\rvert\mathbf 1_{\frac{1-x}{x}(\mathbf o_F\cap\operatorname N)}(b)\mathbf 1_{(1-x)(\mathbf o_F\cap\operatorname N)}(b).
\end{eqnarray*}
If $K/F$ splits and $\chi$ is quadratic, then
\begin{eqnarray*}
&& W_{\theta, new}(b\eta)W_{E, new}(b\xi)\\
&&\quad\quad=\lvert \xi\eta\rvert^{\frac{1}{2}}\lvert b\rvert \mathbf 1_{\mathbf o_F}(b\xi)\mathbf 1_{\mathbf o_F}(b\eta)\chi_1(b\eta)\left(v(b\xi)+1\right)\left(v(b\eta)+1\right)\\
&&\quad\quad=\lvert \xi\eta\rvert^{\frac{1}{2}}\lvert b\rvert\mathbf 1_{\frac{1-x}{x}\mathbf o_F\cap(1-x)\mathbf o_F}(b)\chi_1(b(1-x))\bigl(v(b)+v(\frac{x}{1-x})+1\bigr)\cdot\\
&&\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \cdot\bigl(v(b)-v(1-x)+1\bigr).
\end{eqnarray*}
If $K/F$ splits and $\chi$ is not quadratic, then
\begin{eqnarray*}
&& W_{\theta, new}(b\eta)W_{E, new}(b\xi)\\
&&\quad\quad=\lvert \xi\eta\rvert^{\frac{1}{2}}\lvert b\rvert \mathbf 1_{\mathbf o_F}(b\xi)\mathbf 1_{\mathbf o_F}(b\eta)\left(v(b\xi)+1\right)\frac{\chi_1(b\eta\pi)-\chi_1^{-1}(b\eta\pi)}{\chi_1(\pi)-\chi_1^{-1}(\pi)}\\
&&\quad\quad=\lvert \xi\eta\rvert^{\frac{1}{2}}\lvert b\rvert\mathbf 1_{\frac{1-x}{x}\mathbf o_F\cap(1-x)\mathbf o_F}(b)\bigl(v(b)+v(\frac{x}{1-x})+1\bigr)\cdot\\
&&\quad\quad\quad\quad\quad\quad\quad\quad\quad \cdot\frac{\chi_1(b(1-x)^{-1}\pi)-\chi_1^{-1}(b(1-x)^{-1}\pi)}{\chi_1(\pi)-\chi_1^{-1}(\pi)}.
\end{eqnarray*}
\end{lem}
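For the equality of the two given expressions, note the elementary relations
\begin{equation*}
\xi=\frac{x}{x-1},\qquad \eta=1-\xi=\frac{1}{1-x},\qquad v(b\xi)=v(b)+v\left(\frac{x}{1-x}\right),\qquad v(b\eta)=v(b)-v(1-x).
\end{equation*}
In particular, $b\eta\in\mathbf o_F$ is equivalent to $b\in(1-x)\mathbf o_F$, and $b\xi\in\mathbf o_F$ is equivalent to $b\in\frac{1-x}{x}\mathbf o_F$.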
Comparing Lemma~\ref{zhangs_lln_kompakt} resp.~\ref{zhangs_lln_nichtkompakt} with Lemma~\ref{lemma_whittaker_neuform_produkte}, one now obtains the local Gross-Zagier formula by Zhang:
\begin{thm}\label{zhangs_local_Gross-Zagier}
(\cite{zhang} Lemma~4.3.1)
Assume Hypothesis~\ref{annahme_fuer_geom_hecke}. Let $W_{\theta, new}$ resp. $W_{E,new}$ be the newform for the Theta series resp. Eisenstein series. Let $\phi=\chi\cdot\mathbf 1_{\operatorname{GL}_2(\mathbf o_F)}$. Then up to a factor of volumes,
\begin{equation*}
W_{\theta, new}(b\eta)W_{E, new}(b\xi)=\lvert \xi\eta\rvert^{\frac{1}{2}}\lvert b\rvert <\mathbb T_b\phi,\phi>_{x=\frac{\xi}{\xi-1}}.
\end{equation*}
\end{thm}
\subsection{Rephrasing local Gross-Zagier}\label{section_meine_Gross-Zagier_formel}
As a test of the effectiveness of the operator $\mathbf S_b$ constructed in Section~\ref{section_geom_Hecke}, we rewrite Zhang's local Gross-Zagier formula in terms of $\mathbf S_b$.
\begin{thm}\label{neuformulierung_local_Gross-Zagier}
Assume Hypothesis~\ref{annahme_fuer_geom_hecke}. Assume further that $\chi_1^2=1$ in case $K/F$ splits. Let $W_{\theta, new}$ resp. $W_{E,new}$ be the newform for the Theta series resp. Eisenstein series. Let $\phi=\chi\cdot\mathbf 1_{\operatorname{GL}_2(\mathbf o_F)}$. Then up to a factor of volumes,
\begin{equation*}
W_{\theta, new}(b\eta)W_{E, new}(b\xi)=\lvert \xi\eta\rvert^{\frac{1}{2}}\lvert b\rvert\mathbf S_b<\phi,\phi>_x+O(v(b)),
\end{equation*}
where in case $K/F$ is a field extension the $O(v(b))$ term is actually zero, while in case $K/F$ is split the $O(v(b))$ term can be given precisely by collecting terms in the proof of Example~\ref{bsp_lln_nichtkompakt}.
\end{thm}
\begin{proof}[Proof of Theorem~\ref{neuformulierung_local_Gross-Zagier}]
One has to compare the Whittaker products for newforms given in Lemma~\ref{lemma_whittaker_neuform_produkte} with
the action of the operator $\mathbf S_b$ on the special local linking number belonging to $\phi$. This action is calculated in Lemma~\ref{mein_operator_spezielle_lln_kompakt} resp. \ref{mein_operator_spezielle_lln_nichtkompakt} below.
\end{proof}
\begin{lem}\label{mein_operator_spezielle_lln_kompakt}
Let $K/F$ be a field extension. Assume Hypothesis~\ref{annahme_fuer_geom_hecke}. Let $\phi=\chi\cdot\mathbf 1_{\operatorname{GL}_2(\mathbf o_F)}$. Then up to the factor $\operatorname{vol}_T(T)\operatorname{vol}^\times(\mathbf o_F^\times)\operatorname{vol}(\mathbf o_F)$,
\begin{equation*}
\mathbf S_b<\phi,\phi>_x=
\mathbf 1_{\operatorname N}(x)\mathbf 1_{\frac{1-x}{x}(\mathbf o_F\cap\operatorname N)}(b)\mathbf 1_{(1-x)(\mathbf o_F\cap\operatorname N)}(b).
\end{equation*}
\end{lem}
\begin{proof}[Proof of Lemma~\ref{mein_operator_spezielle_lln_kompakt}]
The translated local linking number $<\phi,\begin{pmatrix}\beta&0\\0&1\end{pmatrix}.\phi>_x$ was computed in Example~\ref{bsp_lln_translatiert_kompakt}. One has to compute the sum for the operator $\mathbf S_b$ given in Proposition~\ref{prop_geom_hecke_kompakt}. If $x\in\operatorname N\backslash (1+\wp)$, then up to the factor $\operatorname{vol}_T(T)\operatorname{vol}^\times(\mathbf o_F^\times)\operatorname{vol}(\mathbf o_F)$,
\begin{equation*}
\mathbf S_b<\phi,\phi>_x=\frac{1}{2}\left(\omega(b(1-x))^{v(b)}+\omega(b(1-x))^{v(b)+1}\right)=\mathbf 1_{\operatorname N}(b).
\end{equation*}
If $x\in1+\wp$, then again up to the factor of volumes
\begin{align*}
\mathbf S_b<\phi,\phi>_x&=
\frac{1}{2}\mathbf 1_{\wp^{v(1-x)}}(b)\omega(b(1-x))^{v(b)-v(1-x)}\left(1+\omega(b(1-x))\right)\\
&= \mathbf 1_{\wp^{v(1-x)}\cap(1-x)\operatorname N}(b).\qedhere
\end{align*}
\end{proof}
In case $K/F$ is split we limit ourselves to the case $\chi_1^2=1$.
\begin{lem}\label{mein_operator_spezielle_lln_nichtkompakt}
Let $K/F$ be split and assume Hypothesis~\ref{annahme_fuer_geom_hecke} as well as $\chi_1^2=1$. Let $\phi=\chi\cdot\mathbf 1_{\operatorname{GL}_2(\mathbf o_F)}$. Then up to the factor $\operatorname{vol}^\times(\mathbf o_F^\times)\operatorname{vol}(\mathbf o_F)^2$,
\begin{align*}
&\mathbf S_b<\phi,\phi>_x=\chi_1(b(1-x))\cdot\\
&\quad\left[\mathbf 1_{F^\times\backslash(1+\wp)}(x)\Bigl(2v(b)^2+2(\lvert v(x)\rvert+1)v(b)+(1+q^{-1})(\lvert v(x)\rvert+1)\Bigr)\right.\\
&\quad\left.+\mathbf 1_{1+\wp}(x)\mathbf 1_{\wp^{v(1-x)}}(b)\Bigl(2\bigl(v(b)-v(1-x)+1\bigr)\bigl(v(b)-v(1-x)\bigr)+1\Bigr)\right].
\end{align*}
\end{lem}
\begin{proof}[Proof of Lemma~\ref{mein_operator_spezielle_lln_nichtkompakt}]
In order to evaluate the action of $\mathbf S_b$, one has to know the translated local linking numbers
$<\phi,\begin{pmatrix}\beta&0\\0&1\end{pmatrix}.\phi>_x$. These were computed in Example~\ref{bsp_lln_nichtkompakt}. The operator $\mathbf S_b$ is given in Proposition~\ref{prop_geom_hecke_nonkompakt}. As $\chi_1$ is quadratic, $\mathbf S_b=\frac{1}{2}\mathbf S_b^+$.
For $x\in 1+\wp$ we compute
\begin{align*}
&\mathbf S_b<\phi,\phi>_x\\
&=
\chi_1(b(1-x))\mathbf 1_{\wp^{v(1-x)}}(b)\left(1+\sum_{i=0}^{v(b)-v(1-x)-1}4(v(b)-i-v(1-x))\right)\\
&=
\chi_1(b(1-x))\mathbf 1_{\wp^{v(1-x)}}(b)\Bigl(2(v(b)-v(1-x)+1)(v(b)-v(1-x))+1\Bigr),
\end{align*}
while for $x\in F^\times\backslash(1+\wp)$,
\begin{align*}
&\mathbf S_b<\phi,\phi>_x\\
&=
\chi_1(b(1-x))\left[(\lvert v(x)\rvert+1)(1+q^{-1})+\sum_{i=0}^{v(b)-1}\bigl(4(v(b)-i)+2\lvert v(x)\rvert\bigr)\right]\\
&=
\chi_1(b(1-x))\Bigl(2v(b)^2+(1+\lvert v(x)\rvert)(2v(b)+1+q^{-1})\Bigr).\qedhere
\end{align*}
\end{proof}
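For completeness, we note the two elementary summation identities used in this proof: with $n=v(b)-v(1-x)\geq 0$,
\begin{equation*}
1+\sum_{i=0}^{n-1}4(n-i)=1+4\sum_{k=1}^{n}k=2n(n+1)+1,
\end{equation*}
and with $m=\lvert v(x)\rvert$,
\begin{equation*}
(m+1)(1+q^{-1})+\sum_{i=0}^{v(b)-1}\bigl(4(v(b)-i)+2m\bigr)=2v(b)^2+(m+1)\bigl(2v(b)+1+q^{-1}\bigr).
\end{equation*}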
\section*{Appendix A\\ Proof of Example~\ref{bsp_lln_translatiert_kompakt}}
For $\phi=\chi\cdot\mathbf 1_{\operatorname{GL}_2(\mathbf o_F)}$ one has to compute
\begin{equation*}
<\phi,\begin{pmatrix}b&0\\0&1\end{pmatrix}\phi>_x=\int_{T\backslash G}\int_T\phi(t^{-1}\gamma(x)ty)~dt~\bar\phi(y\begin{pmatrix}b&0\\0&1\end{pmatrix})~dy,
\end{equation*}
where by Lemma~\ref{lemma_mirobolische_zerlegung}, one can assume $y$ to be of the form $y=\begin{pmatrix}y_1&y_2\\0&1\end{pmatrix}$ as well as $dy=d^\times y_1~dy_2$. Accordingly one factorizes $t^{-1}\gamma(x)t$, where $\gamma(x)=\sqrt A+\epsilon(\gamma_1+\gamma_2\sqrt A)$ is a tracefree preimage of $x$ under $P$ and $t=\alpha+\beta\sqrt A\in K^\times$:
\begin{equation*}
t^{-1}\gamma(x)t=\tilde t\begin{pmatrix}g_1&g_2\\0&1\end{pmatrix},
\end{equation*}
where $\tilde t\in K^\times$ and
\begin{equation*}
g_1=\frac{1+x}{1-x}+\frac{2((\alpha^2+\beta^2A)\gamma_1+2\alpha\beta A\gamma_2)}{(1-x)(\alpha^2-\beta^2A)},
\end{equation*}
\begin{equation*}
g_2=\frac{2A((\alpha^2+\beta^2A)\gamma_2+2\alpha\beta\gamma_1)}{(1-x)(\alpha^2-\beta^2A)}.
\end{equation*}
The inner integrand is not zero if and only if $g_1y_1\in\mathbf o_F^\times$ as well as $g_1y_2+y_1\in\mathbf o_F$, while the outer integrand does not vanish unless $y_1\in b^{-1}\mathbf o_F^\times$ and $y_2\in\mathbf o_F$. Forcing this, one gets the following two conditions for the inner integrand:
\begin{equation}\label{bed_1_kompakt}
g_1=\frac{1+x}{1-x}+\frac{2((\alpha^2+\beta^2A)\gamma_1+2\alpha\beta A\gamma_2)}{(1-x)(\alpha^2-\beta^2A)}\in b\mathbf o_F^\times,
\end{equation}
\begin{equation}\label{bed_2_kompakt}
g_1y_2+\frac{2A((\alpha^2+\beta^2A)\gamma_2+2\alpha\beta\gamma_1)}{(1-x)(\alpha^2-\beta^2A)}\in\mathbf o_F.
\end{equation}
In the following, one distinguishes whether $v(\beta)-v(\alpha)\geq 0$ or not as well as whether $x\in\operatorname N$ is a square or not. The contributions and conditions for the scope of integration are determined and marked by ``$\bullet$'' for final collection.
{\bf 1) Let $x\in F^{\times 2}$ and $v(\frac{\beta}{\alpha})\geq 0$.} Then $\gamma(x)$ can be chosen such that $x=\gamma_1^2$ and one can reduce (\ref{bed_1_kompakt}) and (\ref{bed_2_kompakt}) by $\alpha^2$ assuming $\beta\in\mathbf o_F$ and $\alpha=1$ instead, getting
\begin{equation}\label{bed_11_kompakt}
g_1=\frac{1+x}{1-x}+\frac{2(1+\beta^2A)\gamma_1}{(1-x)(1-\beta^2A)}\in b\mathbf o_F^\times,
\end{equation}
\begin{equation}\label{bed_21_kompakt}
g_1y_2+\frac{4A\beta\gamma_1}{(1-x)(1-\beta^2A)}\in\mathbf o_F.
\end{equation}
Assume first that $v(x)\not=0$. Then $v(\frac{1+x}{1-x})=0$ and
\begin{equation*}
v\left(\frac{2(1+\beta^2A)\gamma_1}{(1-x)(1-\beta^2A)}\right)\geq\frac{1}{2}\lvert v(x)\rvert>0,
\end{equation*}
i.e. condition~(\ref{bed_11_kompakt}) is satisfied only for $b\in\mathbf o_F^\times$. In this case, condition~(\ref{bed_21_kompakt}) is satisfied as well and the contribution for $v(x)\not=0$ is given by
\begin{itemize}
\item[$\bullet$] $v(x)\not=0$: $b\in\mathbf o_F^\times$ and $\beta\in\mathbf o_F$.
\end{itemize}
Now assume $v(x)=0$. Then (\ref{bed_11_kompakt}) is equivalent to
\begin{equation*}
v\left((1+x)(1-\beta^2A)+2(1+\beta^2A)\gamma_1\right)=v(b)+v(1-x),
\end{equation*}
respectively
\begin{equation}\label{bed_111_kompakt}
v(b)+v(1-x)=2\operatorname{min}\{v(1+\gamma_1),v(\beta)\}\geq 0.
\end{equation}
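This rephrasing rests on the factorization
\begin{equation*}
(1+x)(1-\beta^2A)+2(1+\beta^2A)\gamma_1=(1+\gamma_1)^2-\beta^2A(1-\gamma_1)^2:
\end{equation*}
as $A$ is a non-square unit, no cancellation between the two squares can occur, so the valuation of the left hand side is $2\operatorname{min}\{v(1+\gamma_1),v(\beta)+v(1-\gamma_1)\}$; since at most one of $1\pm\gamma_1$ lies in $\wp$, this equals $2\operatorname{min}\{v(1+\gamma_1),v(\beta)\}$.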
Here, $v(b)+v(1-x)>0$ is possible if and only if $\gamma_1\in -1+\wp$ and $\beta\in\wp$.
One can omit this case by choosing the preimage $\gamma(x)=\sqrt A+\epsilon\gamma_1$ such that $\gamma_1\in 1+\wp$ if $x\in 1+\wp$. This does not influence the result, as the local linking numbers are independent of the choice of the tracefree preimage $\gamma(x)$.
Thus, let $v(b)+v(1-x)=0$.
Then the case $v(b)\geq 0$ even forces $0=v(b)=v(1-x)$. That is, condition~(\ref{bed_21_kompakt}) is
\begin{equation*}
\frac{4\beta A\gamma_1}{(1-x)(1-\beta^2A)}\in\mathbf o_F,
\end{equation*}
which here is equivalent to $\beta\in\mathbf o_F$. This yields the contribution
\begin{itemize}
\item[$\bullet$]
$x\in\mathbf o_F^\times\backslash (1+\wp)$: $b\in\mathbf o_F^\times$ and $\beta\in\mathbf o_F$.
\end{itemize}
In case $v(b)<0$, one has $-v(b)=v(1-x)>0$. Assume again that the square root $\gamma_1$ of $x$ lies in $1+\wp$. Then condition~(\ref{bed_21_kompakt}) is equivalent to
\begin{equation*}
\bigl(\beta^2A(1-\gamma_1)^2-(1+\gamma_1)^2\bigr)y_2\in 4\beta A\gamma_1+\wp^{v(1-x)}.
\end{equation*}
As $(1-\gamma_1)^2\in\wp^{2v(1-x)}$, this amounts to $\beta\in\frac{-(1+\gamma_1)^2y_2}{4A\gamma_1}+\wp^{v(1-x)}$. Thus, the contribution in this case is
\begin{itemize}
\item[$\bullet$]
$x\in 1+\wp$: $v(b)=-v(1-x)$ and $\beta\in\frac{-(1+\sqrt x)^2y_2}{4A\sqrt x}+\wp^{v(1-x)}$.
\end{itemize}
{\bf 2) Let $x\in F^{\times 2}$ and $v(\frac{\beta}{\alpha})< 0$.}
Again, take $x=\gamma_1^2$. By reducing conditions~(\ref{bed_1_kompakt}) and (\ref{bed_2_kompakt}) by $\beta^2$, one can assume $\beta=1$ and $\alpha\in\wp$. The conditions now have the shape
\begin{equation}\label{bed_12_kompakt}
\frac{1+x}{1-x}+\frac{2(\alpha^2+A)\gamma_1}{(1-x)(\alpha^2-A)}\in b\mathbf o_F^\times,
\end{equation}
\begin{equation}\label{bed_22_kompakt}
g_1y_2+\frac{4\alpha\gamma_1}{(1-x)(\alpha^2 -A)}\in\mathbf o_F.
\end{equation}
Substituting $\beta\mapsto \alpha A^{-1}\in\wp$ and $\gamma_1\mapsto -\gamma_1$ in conditions~(\ref{bed_11_kompakt}) resp. (\ref{bed_21_kompakt}) yields exactly (\ref{bed_12_kompakt}) resp. (\ref{bed_22_kompakt}). Thus, taking $\gamma_1\in -1+\wp$ if $x\in 1+\wp$ this time, one can read off the contributions here from those of the first case:
\begin{itemize}
\item [$\bullet$]
$v(x)\not=0$ or $x\in\mathbf o_F^\times\backslash(1+\wp)$: $b\in\mathbf o_F^\times$ and $\alpha\in\wp$,
\item[$\bullet$]
$x\in 1+\wp$: $v(b)=-v(1-x)$, $\alpha\in\frac{(1-\sqrt x)^2y_2}{4A\sqrt x}+\wp^{v(1-x)}$ and $y_2\in\wp$.
\end{itemize}
{\bf 3) Let $x\in\operatorname{N}\backslash F^{\times2}$ and $-1\in F^{\times 2}$ and $v(\frac{\beta}{\alpha})\geq 0$.}
In this case, choose the tracefree preimage $\gamma(x)=\sqrt A+\epsilon \gamma_2\sqrt A$ to get $x=-\gamma_2^2A$. Reducing the conditions~(\ref{bed_1_kompakt}) and (\ref{bed_2_kompakt}), one can assume $\alpha=1$ and $\beta\in\mathbf o_F$. The conditions now are
\begin{equation}\label{bed_13_kompakt}
g_1=\frac{1+x}{1-x}+\frac{4\beta A\gamma_2}{(1-x)(1-\beta^2A)}\in b\mathbf o_F^\times,
\end{equation}
\begin{equation}\label{bed_23_kompakt}
g_1y_2+\frac{2A(1+\beta^2A)\gamma_2}{(1-x)(1-\beta^2A)}\in\mathbf o_F.
\end{equation}
If $v(x)\not=0$, then $v(\frac{1+x}{1-x})=0$ and
\begin{equation*}
v\left(\frac{4\beta A\gamma_2}{(1-x)(1-\beta^2A)}\right)\geq \frac{1}{2}\lvert v(x)\rvert>0.
\end{equation*}
Thus, (\ref{bed_13_kompakt}) is equivalent to $v(b)=0$. Then (\ref{bed_23_kompakt}) is satisfied and the contribution is
\begin{itemize}
\item [$\bullet$] $v(x)\not=0$: $b\in\mathbf o_F^\times$ and $\beta\in\mathbf o_F$.
\end{itemize}
If $v(x)=0$, then $v(1-x)=0$ as $x$ is not a square. Thus,
\begin{equation*}
v\left(\frac{1+x}{1-x}+\frac{4\beta A\gamma_2}{(1-x)(1-\beta^2A)}\right)=v\bigl((1+x)(1-\beta^2A)+4\beta A\gamma_2\bigr)\geq 0
\end{equation*}
and condition~(\ref{bed_13_kompakt}) implies $v(b)\geq 0$. Then again, condition~(\ref{bed_23_kompakt}) is satisfied. But one has to look more closely at (\ref{bed_13_kompakt}) to get a sharp condition for $b$. As
\begin{equation*}
(1+x)(1-\beta^2A)+4\beta A\gamma_2=(1+\beta\gamma_2A)^2-(\beta-\gamma_2)^2A,
\end{equation*}
(\ref{bed_13_kompakt}) can be rewritten as
\begin{equation*}
v(b)=2\operatorname{min}\{v(1+\beta\gamma_2A),v(\beta-\gamma_2)\}.
\end{equation*}
For $v(b)>0$, both $v(1+\beta\gamma_2A)>0$ and $v(\beta-\gamma_2)>0$ would have to be satisfied, that is
\begin{equation*}
\beta\in(\gamma_2+\wp)\cap(-(\gamma_2A)^{-1}+\wp)=\emptyset,
\end{equation*}
the intersection being empty since $\beta\equiv\gamma_2\equiv-(\gamma_2A)^{-1}\bmod\wp$ would give $x=-\gamma_2^2A\in 1+\wp\subset F^{\times 2}$, contradicting that $x$ is not a square.
Thus, $v(b)=0$ and the contribution is
\begin{itemize}
\item [$\bullet$]
$v(x)=0$: $b\in\mathbf o_F^\times$ and $\beta\in\mathbf o_F$.
\end{itemize}
{\bf 4) Let $x\in\operatorname{N}\backslash F^{\times2}$ and $-1\in F^{\times 2}$ and $v(\frac{\beta}{\alpha})< 0$.}
By reducing conditions~(\ref{bed_1_kompakt}) and (\ref{bed_2_kompakt}), one can assume $\beta=1$ and $\alpha\in\wp$. This case is now handled like the previous one by substituting $\beta\mapsto\alpha A^{-1}$ and $\gamma_2\mapsto -\gamma_2$ there. The contribution is
\begin{itemize}
\item[$\bullet$]
$x\in \operatorname N\backslash F^{\times 2}$: $b\in\mathbf o_F^\times$ and $\alpha\in \wp$.
\end{itemize}
{\bf 5) Let $x\in\operatorname{N}\backslash F^{\times2}$ and $-1\notin F^{\times 2}$ and $v(\frac{\beta}{\alpha})\geq 0$.} One again assumes $\alpha=1$ and $\beta\in\mathbf o_F$. As $-1$ is not a square, one has without loss of generality $A=-1$. The norm is surjective on $\mathbf o_F^\times$, thus there is $\gamma_3\in\mathbf o_F^\times$ such that $1+\gamma_3^2$ is not a square. Choose the tracefree preimage $\gamma(x)=\sqrt{-1}+\epsilon\gamma_1(1+\gamma_3\sqrt{-1})$ to get $x=\gamma_1^2(1+\gamma_3^2)$. Conditions~(\ref{bed_1_kompakt}) and (\ref{bed_2_kompakt}) now read
\begin{equation}\label{bed_15_kompakt}
\frac{1+x}{1-x}+\frac{2\gamma_1(1-\beta^2+2\beta\gamma_3)}{(1-x)(1+\beta^2)}\in b\mathbf o_F^\times,
\end{equation}
\begin{equation}\label{bed_25_kompakt}
g_1y_2-\frac{2\gamma_1\left((1-\beta^2)\gamma_3+2\beta\right)}{(1-x)(1+\beta^2)}\in\mathbf o_F.
\end{equation}
As seen earlier, $1-\beta^2+2\beta\gamma_3\in\mathbf o_F^\times$ and $(1-\beta^2)\gamma_3+2\beta\in\mathbf o_F^\times$. As neither $-1$ nor $x$ are squares in $F$, one has $v(\frac{1+x}{1-x})\geq 0$ as well as
\begin{equation}\label{bed_151_kompakt}
v\left(\frac{2\gamma_1(1-\beta^2+2\beta\gamma_3)}{(1-x)(1+\beta^2)}\right)\geq 0.
\end{equation}
That is, (\ref{bed_15_kompakt}) implies $b\in\mathbf o_F$. Then (\ref{bed_25_kompakt}) is equivalent to
\begin{equation*}
\frac{2\gamma_1\left((1-\beta^2)\gamma_3+2\beta\right)}{(1-x)(1+\beta^2)}\in\mathbf o_F,
\end{equation*}
which is true. One studies (\ref{bed_15_kompakt}) further: If $v(x)\not=0$, then the inequality in (\ref{bed_151_kompakt}) is even strict. Thus, (\ref{bed_15_kompakt}) means $v(b)=0$. If $v(x)=0$, then (\ref{bed_15_kompakt}) is equivalent to
\begin{equation}\label{bed_152_kompakt}
v\left((1+x)(1+\beta^2)+2\gamma_1(1-\beta^2+2\beta\gamma_3)\right)=v(b).
\end{equation}
Assuming $v(b)>0$ first, one gets from (\ref{bed_152_kompakt}) the condition
\begin{equation*}
(1+x-2\gamma_1)\beta^2+4\gamma_1\gamma_3\beta+(1+x+2\gamma_1)\in\wp.
\end{equation*}
This is a quadratic equation modulo $\wp$, which has roots modulo $\wp$ only if its discriminant is a square. (Notice $v(x)=0$, thus $v(1+x+2\gamma_1)=0$, thus $v(\beta)=0$.) This discriminant is
\begin{equation*}
4\left(4\gamma_1^2\gamma_3^2-(1+x-2\gamma_1)(1+x+2\gamma_1)\right)=-4(1-x)^2,
\end{equation*}
which is not a square. By (\ref{bed_152_kompakt}), one always has $v(b)=0$. The contribution of this case again is
\begin{itemize}
\item[$\bullet$]$x\in\operatorname N\backslash F^{\times 2}$: $b\in\mathbf o_F^\times$ and $\beta\in\mathbf o_F$.
\end{itemize}
{\bf 6) Let $x\in\operatorname{N}\backslash F^{\times2}$ and $-1\notin F^{\times 2}$ and $v(\frac{\beta}{\alpha})< 0$.}
Again, one can assume $\beta=1$ and $\alpha\in\wp$. As before, this case follows from the previous one by substituting $\beta\mapsto \alpha A^{-1}$ and $\gamma_1\mapsto-\gamma_1$ there. This yields the contribution
\begin{itemize}
\item[$\bullet$] $x\in\operatorname N\backslash F^{\times 2}$: $b\in\mathbf o_F^\times$ and $\alpha\in\wp$.
\end{itemize}
{\bf Computation of the local linking number.}
Now one collects all the contributions of the cases 1 to 6 marked by $\bullet$ and computes the integral on them. If no range is given for $y_1$ resp. $y_2$, it is arbitrary within the support of $\phi(by_1,y_2)$, i.e. $y_1\in b^{-1}\mathbf o_F^\times$ resp. $y_2\in\mathbf o_F$.
Notice first that the contributions of the cases $-1$ a square resp. $-1$ not a square are the same.
One gets
\begin{eqnarray*}
&&<\phi,\begin{pmatrix}b&0\\0&1\end{pmatrix}.\phi>_x\\
&=&
\mathbf 1_{\operatorname N\backslash (1+\wp)}(x)\mathbf 1_{\mathbf o_F^\times}(b)\int_{\mathbf o_F^\times}\int_{\mathbf o_F}\int_{T}~dt~dy_2~dy_1\\
&&
+\mathbf 1_{1+\wp}(x)\mathbf 1_{(1-x)\mathbf o_F^\times}(b)\operatorname{vol}_T(T_1)2q^{-v(1-x)}\operatorname{vol}^\times(\mathbf o_F^\times)\operatorname{vol}(\mathbf o_F)\\
&&
+\mathbf 1_{1+\wp}(x)\mathbf 1_{(1-x)^{-1}\mathbf o_F^\times}(b)\operatorname{vol}_T(T_1)2q^{-v(1-x)}\operatorname{vol}^\times(\mathbf o_F^\times)\operatorname{vol}(\mathbf o_F)\\
&=&
\left(\mathbf 1_{\operatorname N\backslash (1+\wp)}(x)\mathbf 1_{\mathbf o_F^\times}(b)
+ \mathbf 1_{1+\wp}(x)\left(\mathbf 1_{(1-x)\mathbf o_F^\times}(b)+\mathbf 1_{(1-x)^{-1}\mathbf o_F^\times}(b)\right)q^{-v(1-x)}\right)\operatorname{vol},
\end{eqnarray*}
where $T_1:=\left\{\alpha+\beta\sqrt A\in T\mid v(\beta)\geq v(\alpha)\right\}$ and
\begin{equation*}
\operatorname{vol}:=\operatorname{vol}_T(T)\operatorname{vol}^\times(\mathbf o_F^\times)\operatorname{vol}(\mathbf o_F).
\end{equation*}
This finishes the proof of Example~\ref{bsp_lln_translatiert_kompakt}.
\section*{Appendix B\\Proof of Example~\ref{bsp_lln_nichtkompakt}}
For the proof of Example~\ref{bsp_lln_nichtkompakt}, we need the following Lemma. Its short proof is left to the reader.
\begin{lem}\label{G_maximalkompakt_zerlegung}
Let $K/F$ be split. Then
\begin{displaymath}
\operatorname{GL}_2(\mathbf o_F) = \mathbf o_K^\times N(\mathbf o_F)N'(\mathbf o_F) \quad{\bigcup^\bullet} \quad \mathbf o_K^\times N(\mathbf o_F)w N(\wp),
\end{displaymath}
where $N(X)$ is the group of unipotent upper triangular matrices having nontrivial entries in $X$.
\end{lem}
By Lemma~\ref{G_maximalkompakt_zerlegung},
\begin{equation*}
\phi =\chi\cdot\mathbf 1_{\operatorname{GL}_2(\mathbf o_F)}=\chi\cdot\mathbf 1_{N(\mathbf o_F)N'(\mathbf o_F)}+\chi\cdot\mathbf 1_{N(\mathbf o_F)wN(\wp)}.
\end{equation*}
For $y\in TNN'$ take the following representative modulo $T$
\begin{displaymath}
y=\begin{pmatrix}1&y_2\\0&1\end{pmatrix}\begin{pmatrix}1&0\\y_3&1\end{pmatrix}.
\end{displaymath}
For such $y$ there is
\begin{displaymath}
\phi(y)=\phi_1(y_2,y_3):=\mathbf 1_{\mathbf o_F}(y_2)\mathbf 1_{\mathbf o_F}(y_3).
\end{displaymath}
Analogously, for $y\in TNwN$ take the following representative modulo $T$
\begin{displaymath}
y=\begin{pmatrix}1&y_1\\0&1\end{pmatrix}w\begin{pmatrix}1&y_4\\ 0&1\end{pmatrix}
\end{displaymath}
to get
\begin{displaymath}
\phi(y)=\phi_2(y_1,y_4):=\mathbf 1_{\mathbf o_F}(y_1)\mathbf 1_{\wp}(y_4).
\end{displaymath}
We will use this splitting of $\phi$ for the exterior function $\psi=\phi$:
\begin{align*}
<\phi,\begin{pmatrix}b&0\\0&1\end{pmatrix}.\phi>_x &=
\int_F\int_F\int_T\phi(t^{-1}\gamma(x)ty)~dt~\bar\chi_1(b)\bar\phi_1(b^{-1}y_2,by_3)~dy_2dy_3 \\
&+
\int_F\int_F\int_T\phi(t^{-1}\gamma(x)ty)~dt~\chi_1(b)\bar\phi_2(by_1,b^{-1}y_4)~dy_1dy_4.
\end{align*}
Choosing Haar measures on $TNN'$ and $TNwN$ as in Section~\ref{section_nonkompakt}, we get
\begin{displaymath}
\operatorname{vol}_{T\backslash G}(T\cdot\operatorname{GL}_2(\mathbf o_F)) =\operatorname{vol}(\mathbf o_F)^2(1+q^{-1}).
\end{displaymath}
The inner integrand $\phi(t^{-1}\gamma(x)ty)$ does not vanish only if there is $(r\mathbf o_F^\times,s\mathbf o_F^\times)\subset T$ such that
\begin{equation}\label{bsp_nonkp_bed_inner_integrand}
\begin{pmatrix}r&0\\0&s\end{pmatrix}t^{-1}\gamma(x)ty \in\operatorname{GL}_2(\mathbf o_F).
\end{equation}
As $\operatorname{GL}_2(\mathbf o_F)$ is fundamental for $\chi$ unramified, there is at most one class $(r\mathbf o_F^\times,s\mathbf o_F^\times)$ satisfying (\ref{bsp_nonkp_bed_inner_integrand}).
If there is, then the value of the inner integrand is $\phi(t^{-1}\gamma(x)ty)=\chi_1(r^{-1}s)$.
We choose once and for all representatives
\begin{equation*}
t=\begin{pmatrix}a&0\\0&1\end{pmatrix}\quad\textrm{ and } \quad\gamma(x)=\begin{pmatrix}-1&x\\-1&1\end{pmatrix}.
\end{equation*}
For $y\in NN'$ condition (\ref{bsp_nonkp_bed_inner_integrand}) is
\begin{equation}\label{bspganzheitsbedingungen}
\begin{pmatrix}
-r(1+y_2y_3-a^{-1}xy_3)&r(-y_2+a^{-1}x)\\
-s(a(1+y_2y_3)-y_3)&s(1-ay_2)
\end{pmatrix}\in \operatorname{GL}_2(\mathbf o_F).
\end{equation}
That is, all components are integral and the determinant satisfies
\begin{equation*}
rs\det (t^{-1}\gamma(x)ty) =rs(x-1)\in\mathbf o_F^\times.
\end{equation*}
So we can choose
\begin{equation*}
r=s^{-1}(1-x)^{-1}.
\end{equation*}
The value of the inner integrand now is $\chi_1(r^{-1}s)=\chi_1(1-x)$.
For $y\in NwN$ condition (\ref{bsp_nonkp_bed_inner_integrand}) is
\begin{equation}\label{bspganzheitsbedingungen'}
\begin{pmatrix}
r(-y_1+a^{-1}x)&-r(1-y_1y_4+a^{-1}xy_4)\\
s(1-ay_1)&-s(a(1-y_1y_4)+y_4)
\end{pmatrix}\in \operatorname{GL}_2(\mathbf o_F).
\end{equation}
Replacing $(y_2,y_3)$ by $(y_1,-y_4)$ in (\ref{bspganzheitsbedingungen}) yields (\ref{bspganzheitsbedingungen'}).
That is, the inner integral for $y\in NwN$ can be deduced from that for $y\in NN'$.
For the exterior function we observe: If $y\in NwN$, then
\begin{equation*}
\begin{pmatrix}b&0\\0&1\end{pmatrix} .\bar\phi_2(y)=\bar\chi\begin{pmatrix}1&0\\0&b\end{pmatrix}\bar\phi_2\begin{pmatrix}by_1&-1+y_1y_4\\1&b^{-1}y_4\end{pmatrix}=\chi_1(b)\phi_2(by_1,b^{-1}y_4).
\end{equation*}
While if $y\in NN'$, then
\begin{equation*}
\begin{pmatrix}b&0\\0&1\end{pmatrix} .\bar\phi_1(y)=\bar\chi\begin{pmatrix}b&0\\0&1\end{pmatrix}\bar\phi_1\begin{pmatrix}1+y_2y_3&b^{-1}y_2\\by_3&1\end{pmatrix}=\chi_1^{-1}(b)\phi_1(b^{-1}y_2,by_3).
\end{equation*}
Thus, for deducing the case $y\in NwN$ from the case $y\in NN'$,
one has to substitute $(y_1,y_4)\mapsto (y_2,-y_3)$, and additionally one has to replace $b$ by $b^{-1}$.
Further, there is an a-priori condition on $(y_2,y_3)$ given by the exterior function:
\begin{equation}\label{bsp_nonkompakt_apriori_bed}
v(y_2)\geq -v(y_3).
\end{equation}
For assuming $v(y_2)<-v(y_3)$ together with $\phi_1(b^{-1}y_2,by_3)\not=0$ would imply the contradiction
$v(b)\leq v(y_2)<-v(y_3)\leq v(b)$.
\subsection*{Conditions for the inner integrand}
From now on we assume (\ref{bsp_nonkompakt_apriori_bed}).
{\it Claim.}
In the case $y\in NN'$ the conditions (\ref{bspganzheitsbedingungen}) for the inner integrand not to vanish imply exactly the following possible scopes.
For $x\in F^\times\backslash (1+\wp)$:
\begin{itemize}
\item[$\bullet$] $-v(y_3)\leq v(y_2)<0$:
\begin{itemize}
\item[$\ast$] $a\in\frac{1}{y_2}(1+\wp^{-v(y_2)})$ (for $v(s)=v(y_2)$)
\item[$\ast$] $a\in\frac{x}{y_2}(1+\wp^{-v(y_2)})$ (for $v(s)=-v(1-x)$)
\end{itemize}
\item[$\bullet$] $v(1+y_2y_3)<-v(y_3)\leq v(y_2)$:
\begin{itemize}
\item[$\ast$] $a\in\frac{y_3}{1+y_2y_3}(1+\wp^{-v(y_3)-v(1+y_2y_3)})$ (for $v(s)=v(1+y_2y_3)$)
\item[$\ast$] $a\in\frac{xy_3}{1+y_2y_3}(1+\wp^{-v(y_3)-v(1+y_2y_3)})$ (for $v(s)=-v(y_3)-v(1-x)$)
\end{itemize}
\item[$\bullet$] $v(y_2)\geq 0$ and $v(y_3)\geq 0$:
For $v(x)\geq 0$: $0\leq v(a)\leq v(x)$ (for $v(s)=0$); for $v(x)<0$: $0\leq v(s)\leq -v(x)$ (for $v(s)=-v(a)$).
\item[$\bullet$] $v(y_3)<0$ and $v(y_2)=-v(y_3)\leq v(1+y_2y_3)$:
For $v(x)\geq 0$: $2v(y_3)\leq v(a)\leq v(x)+2v(y_3)$ (for $v(s)=0$); for $v(x)<0$: $-v(y_3)\leq v(s)\leq -v(y_3)-v(x)$ (for $v(s)=-v(a)+v(y_3)$).
\end{itemize}
For $x\in 1+\wp$:
\begin{itemize}
\item[$\bullet$] $-v(y_3)\leq v(y_2)< -v(1-x)$:
$a\in\frac{x}{y_2}(1+\wp^{-v(y_2)})$ (for $v(s)=-v(1-x)$)
\item[$\bullet$] $-v(y_3)\leq v(y_2)\leq -v(1-x)$:
$a\in\frac{1}{y_2}(1+\wp^{-v(y_2)})$ (for $v(s)=v(y_2)$)
\item[$\bullet$] $0\leq v(1+y_2y_3)<-v(y_3)-v(1-x)$: $a\in\frac{xy_3}{1+y_2y_3}(1+\wp^{-v(y_3)-v(1+y_2y_3)})$ (for $v(s)=-v(y_3)-v(1-x)$)
\item[$\bullet$] $0\leq v(1+y_2y_3)\leq-v(y_3)-v(1-x)$: $a\in\frac{y_3}{1+y_2y_3}(1+\wp^{-v(y_3)-v(1+y_2y_3)})$ (for $v(s)=v(1+y_2y_3)$)
\end{itemize}
There is no difficulty in checking that all these scopes satisfy condition (\ref{bspganzheitsbedingungen}). That these are the only possible ones is shown by a wearying case distinction.
As such case distinctions are characteristic of the proof of Theorem~\ref{satz_translatiert_nicht_kompakt}, which we skipped, we include them here to give an insight into what is going on.
\begin{proof}[Proof of Claim]
Condition (\ref{bspganzheitsbedingungen}) is equivalent to the following four conditions
\begin{equation}\label{bsp1}
a^{-1}\in \frac{1+y_2y_3}{xy_3}(1+\frac{s(1-x)}{1+y_2y_3}\mathbf o_F),
\end{equation}
\begin{equation}\label{bsp2}
a^{-1}\in \frac{y_2}{x}(1+\frac{s(1-x)}{y_2}\mathbf o_F),
\end{equation}
\begin{equation}\label{bsp3}
a\in\frac{y_3}{1+y_2y_3}(1+\frac{1}{sy_3}\mathbf o_F),
\end{equation}
\begin{equation}\label{bsp4}
a\in\frac{1}{y_2}(1+ s^{-1}\mathbf o_F).
\end{equation}
We now go through these conditions, distinguishing different cases for $s$ and $x$.
{\bf 1)} Assume $v(s)<0$.
Then $a=\frac{1}{y_2}(1+a')$ where $a'\in s^{-1}\mathbf o_F\subset \wp$ by (\ref{bsp4}). Inserting this in (\ref{bsp3}) we get
\begin{equation*}
\frac{1}{y_2}(1+a')+y_3 a' \in s^{-1}\mathbf o_F,
\end{equation*}
which by (\ref{bsp_nonkompakt_apriori_bed}) is equivalent to
\begin{equation}\label{bsp3v(s)<0}
v(y_2)\leq v(s)<0.
\end{equation}
In particular, $v(y_3)>0$. Assuming this, the conditions (\ref{bsp1}) and (\ref{bsp2}) are equivalent to
\begin{equation}\label{bsp1v(s)<0}
1\in s(1-x)\mathbf o_F
\end{equation}
and
\begin{equation}\label{bsp2v(s)<0}
a^{-1}\in\frac{y_2}{x}(1+\frac{s(1-x)}{y_2}\mathbf o_F).
\end{equation}
{\bf 1.1)} If $v(\frac{s(1-x)}{y_2})>0$, then combining (\ref{bsp4}) and (\ref{bsp2v(s)<0}) we get
\begin{equation}\label{bspv(s)<0,2gut,24kombination}
a\in \frac{x}{y_2}(1+\frac{s(1-x)}{y_2}\mathbf o_F) \cap \frac{1}{y_2}(1+ s^{-1}\mathbf o_F).
\end{equation}
For this intersection to be nonempty, one has to assume $x\in 1+\wp$.
Collecting conditions (\ref{bsp3v(s)<0}) and (\ref{bsp1v(s)<0}) as well as $v(\frac{s(1-x)}{y_2})>0$ we have
\begin{equation}\label{bspv(s)<0,2gut,s-bed}
v(y_2)\leq v(s)\leq -v(1-x).
\end{equation}
If $v(s)=v(y_2)+j$ for some $j\geq 0$, then (\ref{bspv(s)<0,2gut,24kombination}) is
\begin{equation*}
a\in \frac{x}{y_2}(1+\wp^{v(1-x)+j}) \cap \frac{1}{y_2}(1+ \wp^{-v(y_2)-j}).
\end{equation*}
For $j=0$ this is $ a\in \frac{1}{y_2}(1+ \wp^{-v(y_2)})$,
because in this case $x\in 1+\wp^{v(1-x)}$.
For $j>0$ we have $x\notin 1+\wp^{v(1-x)+j}$. Then the scope for $a$ is nonempty only if $x\in1+ \wp^{-v(y_2)-j}$ is satisfied, that is $v(1-x)\geq -v(y_2)-j$. Together with (\ref{bspv(s)<0,2gut,s-bed}) we now get $v(y_2)+j=v(s)=-v(1-x)$.
Summing up: In the case $v(\frac{s(1-x)}{y_2})>0$ we have $x\in 1+\wp$ and the scopes are:
\begin{itemize}
\item[$\bullet$] $v(s)=v(y_2)\leq -v(1-x)$: $a\in \frac{1}{y_2}(1+ \wp^{-v(y_2)})$,
\item[$\bullet$] $v(y_2)<v(s)=-v(1-x)$: $a\in \frac{x}{y_2}(1+ \wp^{-v(y_2)})$.
\end{itemize}
{\bf 1.2)} If $v(\frac{s(1-x)}{y_2})\leq0$, then we find by (\ref{bsp3v(s)<0}), (\ref{bsp1v(s)<0}) and (\ref{bsp2v(s)<0}):
\begin{equation*}
v(s)\geq v(y_2)\left\{\begin{array}{l}=-v(a)\geq v(s)+v(1-x)-v(x)\\\geq v(s)+v(1-x)\end{array} \right.
\end{equation*}
As we always have $\operatorname{max}\{v(1-x)-v(x),v(1-x)\}\geq 0$, this means $v(s)=v(y_2)$
and $\operatorname{max}\{v(1-x)-v(x),v(1-x)\}=0$.
Thus, the scope in this case is nonempty only if $x\in F^\times\backslash (1+\wp)$. Then it is given by
\begin{itemize}
\item $v(s)=v(y_2)<0$: $a\in \frac{1}{y_2}(1+ \wp^{-v(y_2)})$.
\end{itemize}
Now the case $v(s)<0$ is exhausted.
{\bf 2)} Assume $v(s)\geq 0$.
This case is much more complicated than the previous one.
Condition (\ref{bsp4}) now is
\begin{equation}\label{bsp4v(s)>0}
v(a)\geq -v(y_2)-v(s).
\end{equation}
For condition (\ref{bsp3}) we distinguish further:
{\bf 2.1)} If $-v(s)-v(y_3)>0$: Then $\frac{1}{sy_3}\mathbf o_F\subset\wp$ and $a=\frac{y_3}{1+y_2y_3}(1+a')^{-1}$ where $a'\in\frac{1}{sy_3}\mathbf o_F$.
Inserting $a^{-1}$ in (\ref{bsp1}) we get the condition
\begin{equation}\label{bsp1v(s)>0,3gut}
(1+y_2y_3)(1-x-xa')\in s(1-x)\mathbf o_F.
\end{equation}
{\bf 2.1.1)} If additionally $x\in F^\times\backslash (1+\wp)$, then collecting all conditions for $v(s)$ we get:
\begin{equation}\label{bspv(s)>0,xwegvon1,3gut,v(s)-bed}
\left.\begin{array}{r}0\leq\\v(1+y_2y_3)-v(y_2y_3)\leq\end{array}\right\}
v(s)\left\{\begin{array}{l}<-v(y_3) \textrm{ by distinction of cases}\\\leq v(1+y_2y_3)\textrm{ by (\ref{bsp1v(s)>0,3gut}) and (\ref{bsp4v(s)>0})}\end{array}\right..
\end{equation}
It is easily seen that (\ref{bspv(s)>0,xwegvon1,3gut,v(s)-bed}) is satisfied only for $v(s)=v(1+y_2y_3)$. Thus, (\ref{bsp2}) means
\begin{displaymath}
a^{-1}\in \frac{(1+y_2y_3)(1-x)}{x}\mathbf o_F,
\end{displaymath}
as $v(y_2)\geq -v(y_3)>v(s)\geq v(s(1-x))$.
Because of $v(a)=v(\frac{y_3}{1+y_2y_3})$, this condition is equivalent to $-v(y_3)\geq v(1-x)-v(x)$. As we are studying the case $x\notin 1+\wp$, this is always true.
Thus, the scope for $x\in F^\times\backslash (1+\wp)$ is
\begin{itemize}
\item[$\bullet$] $v(s)=v(1+y_2y_3)$, $0\leq v(1+y_2y_3)<-v(y_3)$:
$a\in \frac{y_3}{1+y_2y_3}(1+\wp^{-v(y_3)-v(1+y_2y_3)})$.
\end{itemize}
{\bf 2.1.2)} But if $x\in 1+\wp$,
then we first show that the assumption $v(a^{-1}x)<v(s)+v(1-x)$ implies a contradiction:
For then (\ref{bsp2}) and (\ref{bsp3}) would imply
\begin{displaymath}
a\in \frac{x}{y_2}(1+\wp)\cap\frac{y_3}{1+y_2y_3}(1+\wp),
\end{displaymath}
which is satisfied only for $\frac{x}{y_2}\in\frac{y_3}{1+y_2y_3}(1+\wp)$, or equivalently $1+y_2y_3\in y_2y_3(1+\wp)$. This is a contradiction, as it would force $1\in\wp$.
Thus, we have $v(a^{-1}x)\geq v(s)+v(1-x)$, and (\ref{bsp2}) together with (\ref{bsp3}) gives: $v(1+y_2y_3)-v(y_3)=-v(a)\geq v(s)+v(1-x)$. Collecting all the conditions for $v(s)$ found so far:
\begin{equation}\label{bspv(s)>0,xnahe1,3gut,v(s)-bed}
\left.\begin{array}{r}0\leq\\v(\frac{1+y_2y_3}{y_2y_3})\leq\\\end{array}\right\}
v(s)
\left\{\begin{array}{l}<-v(y_3) \textrm{ by distinction of cases}\\ \leq v(y_2)-v(1-x)\textrm{ by (\ref{bsp4}) and (\ref{bsp2})}\\\leq -v(y_3)-v(1-x)+v(1+y_2y_3)\textrm{ by (\ref{bsp2})}\end{array}\right..
\end{equation}
It is easily seen that these conditions shrink to
\begin{displaymath}
v(1+y_2y_3)\leq v(s)\leq -v(y_3)-v(1-x)
\end{displaymath}
because of $v(1+y_2y_3)<v(y_2)$.
Thus, $v(s)+v(1-x)-v(1+y_2y_3)>0$. Combining (\ref{bsp1}) and (\ref{bsp3}) we get
\begin{equation}\label{bspv(s)>0,xnahe1,3gut,1gut,13kombi}
a\in \frac{xy_3}{1+y_2y_3}(1+\frac{s(1-x)}{1+y_2y_3}\mathbf o_F)\cap\frac{y_3}{1+y_2y_3}(1+\frac{1}{sy_3}\mathbf o_F).
\end{equation}
For $v(s)=v(1+y_2y_3)$ this is
\begin{displaymath}
a\in \frac{y_3}{1+y_2y_3}(1+\wp^{-v(y_3)-v(1+y_2y_3)}),
\end{displaymath}
as in that case $x\in 1+\frac{s(1-x)}{1+y_2y_3}\mathbf o_F$. Let $v(s)=v(1+y_2y_3)+j$ where $j>0$. For the intersection in (\ref{bspv(s)>0,xnahe1,3gut,1gut,13kombi}) to be nonempty, we must have $v(1-x)\geq -v(1+y_2y_3)-v(y_3)-j$. Combined with the remaining conditions for $v(s)$, this gives $v(s)=-v(y_3)-v(1-x)$. In particular, $v(1+y_2y_3)<-v(y_3)-v(1-x)$.
Summing up the conditions of this case, we get for $x\in 1+\wp$:
\begin{itemize}
\item[$\bullet$] $v(y_2)>-v(y_3)$ and $v(y_3)\leq -v(1-x)$: $v(s)=0$ and $a\in\frac{y_3}{1+y_2y_3}(1+\wp^{-v(y_3)})$,
\item[$\bullet$] $v(y_2)>-v(y_3)$ and $v(y_3)<-v(1-x)$: $v(s)=-v(y_3)-v(1-x)$ and $a\in\frac{xy_3}{1+y_2y_3}(1+\wp^{-v(y_3)})$,
\item[$\bullet$] $v(y_2)=-v(y_3)$ and $v(1+y_2y_3)\leq -v(y_3)-v(1-x)$: $v(s)=v(1+y_2y_3)$ and $a\in\frac{y_3}{1+y_2y_3}(1+\wp^{-v(y_3)-v(1+y_2y_3)})$,
\item[$\bullet$] $v(y_2)=-v(y_3)$ and $0\leq v(1+y_2y_3)<-v(y_3)-v(1-x)$: $v(s)=-v(y_3)-v(1-x)$ and $a\in\frac{xy_3}{1+y_2y_3}(1+\wp^{-v(y_3)-v(1+y_2y_3)})$.
\end{itemize}
This finishes the case $-v(s)-v(y_3)>0$.
\medskip
{\bf 2.2)} If $-v(s)-v(y_3)\leq0$:
{\bf 2.2.1)} If additionally $v(s)+v(1-x)>v(y_2)$, then $\frac{s(1-x)}{y_2}\mathbf o_F\subset\wp$. Thus, by (\ref{bsp2}) we have $v(a)=v(x)-v(y_2)$. All conditions for $v(s)$, given by the distinction of cases and by (\ref{bsp3}) and (\ref{bsp4}), are:
\begin{equation}\label{bspv(s)>0,3schlecht,2gut,v(s)-bed}
\left.\begin{array}{lr}
\textrm{Distinction of cases: }&0\leq\\\textrm{Distinction of cases: }&-v(y_3)\leq\\\textrm{by (\ref{bsp4}): }&-v(x)\leq\\\textrm{by (\ref{bsp3}): }&-v(1+y_2y_3)+v(y_2)-v(x)\leq\\\textrm{Distinction of cases: }&v(y_2)-v(1-x)<\end{array}\right\} v(s).
\end{equation}
We show that the assumption $v(s)+v(1-x)-v(1+y_2y_3)>0$ implies a contradiction:
Then (\ref{bsp1}) and (\ref{bsp2}) would imply
\begin{displaymath}
a\in\frac{xy_3}{1+y_2y_3}(1+\wp)\cap\frac{x}{y_2}(1+\wp).
\end{displaymath}
For this intersection to be nonempty, we would need $\frac{y_2y_3}{1+y_2y_3}\in 1+\wp$, or equivalently $1\in\wp$.
Thus, $v(s)+v(1-x)-v(1+y_2y_3)\leq 0$. Then by (\ref{bsp1}) we have
\begin{equation*}
v(s)\leq\left\{\begin{array}{l}v(1+y_2y_3)-v(1-x)\\v(y_2)+v(y_3)-v(1-x)\end{array},\right.
\end{equation*}
or equivalently $v(s)\leq -v(1-x)$. Conditions (\ref{bspv(s)>0,3schlecht,2gut,v(s)-bed}) imply in particular $0\leq -v(1-x)$, that is $x\in F^\times\backslash(1+\wp)$. Moreover, $v(y_2)<0$.
The conditions for $v(s)$ are now simplified to
\begin{equation*}
\left.\begin{array}{r}0\\-v(x)\end{array}\right\}\leq v(s)\leq -v(1-x),
\end{equation*}
which is equivalent to $v(s)=-v(1-x)$, for $x\notin 1+\wp$.
In this case the scope is given by
\begin{itemize}
\item[$\bullet$] $v(y_2)<0$: $v(s)=-v(1-x)$ and $a\in \frac{x}{y_2}(1+\wp^{-v(y_2)})$.
\end{itemize}
This finishes the case $v(s)+v(1-x)>v(y_2)$.
\medskip
{\bf 2.2.2)} But if $v(s)+v(1-x)\leq v(y_2)$, then (\ref{bsp2}) is equivalent to
\begin{equation}\label{bspv(s)>0,3schlecht,2schlecht,2}
-v(a)\geq v(s)+v(1-x)-v(x).
\end{equation}
We distinguish further:
{\bf 2.2.2.1)} If additionally $v(s)+v(1-x)-v(1+y_2y_3)>0$, then we have by (\ref{bsp1})
\begin{displaymath}
a\in \frac{xy_3}{1+y_2y_3}(1+\frac{s(1-x)}{1+y_2y_3}\mathbf o_F)\subset\frac{xy_3}{1+y_2y_3}(1+\wp).
\end{displaymath}
In particular, $v(a)=v(xy_3)-v(1+y_2y_3)$.
We collect the conditions for $v(s)$:
\begin{equation}\label{bspv(s)>0,3schlecht,1schlecht,v(s)-bed}
\left.\begin{array}{r}0\leq\\-v(y_3)\leq\\-v(x)-v(y_3)\leq\\-v(x)+v(\frac{1+y_2y_3}{y_2y_3})\leq\\v(\frac{1+y_2y_3}{1-x})<\end{array}\right\}v(s)\leq
\left\{\begin{array}{ll}v(y_2)-v(1-x)&\textrm{by distinction of cases}\\&\textrm{by distinction of cases}\\v(\frac{1+y_2y_3}{y_3(1-x)}) & \textrm{by (\ref{bsp3}) and (\ref{bsp2})}\\ & \textrm{by (\ref{bsp4})}\\ & \textrm{by distinction of cases}\end{array}.\right.
\end{equation}
The two conditions on the right combined are equivalent to
\begin{equation*}
v(s)\leq -v(y_3)-v(1-x),
\end{equation*}
by the general assumption $v(y_2)\geq -v(y_3)$ (\ref{bsp_nonkompakt_apriori_bed}).
This implies $v(1+y_2y_3)<-v(y_3)$ as well as
$0\leq -v(1-x)$, that is $x\in F^\times\backslash (1+\wp)$. Inserting this in (\ref{bspv(s)>0,3schlecht,1schlecht,v(s)-bed}) we finally get $v(s)=-v(y_3)-v(1-x)$.
Thus, the scope of this case exists only for $x\in F^\times\backslash (1+\wp)$. It is then given by
\begin{itemize}
\item[$\bullet$] $0\leq v(1+y_2y_3)<-v(y_3)$: $v(s)=-v(y_3)-v(1-x)$ and $a\in \frac{xy_3}{1+y_2y_3}(1+\wp^{-v(y_3)-v(1+y_2y_3)})$.
\end{itemize}
{\bf 2.2.2.2)} If additionally $v(s)+v(1-x)-v(1+y_2y_3)\leq 0$,
then (\ref{bsp1}) is equivalent to
\begin{equation*}
-v(a)\geq v(s)+v(1-x)-v(x)-v(y_3).
\end{equation*}
From the distinction of cases and the conditions above we now get
\begin{equation}\label{bspv(s)>0,allesschlecht,v(s)-bed}
\left.\begin{array}{r}0\\-v(y_3)\end{array}\right\}\leq v(s)\leq \left\{\begin{array}{l}v(y_2)-v(1-x)\\v(1+y_2y_3)-v(1-x)\end{array}\right..
\end{equation}
By (\ref{bsp1}) to (\ref{bsp4}) we see that $v(a)$ must satisfy:
\begin{equation}\label{bspv(s)>0,allesschlecht,a-bed}
\left.\begin{array}{r}-v(1+y_2y_3)-v(s)\\-v(y_2)-v(s)\end{array}\right\}\leq v(a) \leq \left\{\begin{array}{l}-v(s)+v(x)-v(1-x)\\-v(s)+v(x)-v(1-x)+v(y_3)\end{array}\right. .
\end{equation}
We distinguish further for $v(y_3)$:
{\bf 2.2.2.2 a)} If $v(y_3)\geq 0$:
We show that $v(y_2)<0$ is not possible. For then (\ref{bspv(s)>0,allesschlecht,v(s)-bed}) would imply $v(1-x)\leq v(y_2)<0$. But (\ref{bspv(s)>0,allesschlecht,a-bed}) would imply $v(1-x)-v(x)\leq v(y_2)<0$, which is a contradiction for $v(x)<0$.
Thus, $v(y_2)\geq 0$ and the conditions (\ref{bspv(s)>0,allesschlecht,v(s)-bed}) shrink to $0\leq v(s)\leq -v(1-x)$. This implies $x\notin 1+\wp$.
Condition (\ref{bspv(s)>0,allesschlecht,a-bed}) is reduced to $-v(s)\leq v(a)\leq -v(s)+v(x)-v(1-x)$.
We can now write down the scope of this case. The scope is nonempty only if
$x\in F^\times\backslash(1+\wp)$ and is given by
\begin{itemize}
\item[$\bullet$] $v(y_2)\geq 0$ and $v(y_3)\geq 0$:
\begin{itemize}
\item[$\ast$] For $v(x)\geq 0$: $v(s)=0$ and $0\leq v(a)\leq v(x)$,
\item[$\ast$] For $v(x)<0$: $v(a)=-v(s)$ and $0\leq v(s)\leq -v(x)$.
\end{itemize}
\end{itemize}
{\bf 2.2.2.2 b)} If $v(y_3)<0$: We show that $v(y_2)>-v(y_3)$ is not possible. For then (\ref{bspv(s)>0,allesschlecht,v(s)-bed}) would imply $v(1-x)\leq v(y_3)<0$, that is $v(x)<0$. By (\ref{bspv(s)>0,allesschlecht,a-bed}) we would get $0\leq v(x)-v(1-x)+v(y_3)$, which could be satisfied only for $v(x)\geq 0$, a contradiction.
Thus, $v(y_2)=-v(y_3)$. We show that $v(1+y_2y_3)<-v(y_3)$ is not possible, for (\ref{bspv(s)>0,allesschlecht,v(s)-bed}) would again imply $v(1-x)<0$. Inserting this in (\ref{bspv(s)>0,allesschlecht,a-bed}), we get $-v(y_3)\leq v(1+y_2y_3)$, a contradiction.
Thus, $v(1+y_2y_3)\geq -v(y_3)$. Then (\ref{bspv(s)>0,allesschlecht,v(s)-bed}) means $-v(y_3)\leq v(s)\leq -v(y_3)-v(1-x)$, which is satisfied only if $x\notin 1+\wp$. Condition (\ref{bspv(s)>0,allesschlecht,a-bed}) gives $v(y_3)-v(s)\leq v(a)\leq -v(s)+v(x)-v(1-x)+v(y_3)$.
The scope of this case is nonempty only if
$x\in F^\times\backslash(1+\wp)$. Then it is given by
\begin{itemize}
\item[$\bullet$] $0<-v(y_3)\leq v(1+y_2y_3)$:
\begin{itemize}
\item[$\ast$] For $v(x)\geq 0$: $v(s)=-v(y_3)$ and $2v(y_3)\leq v(a)\leq v(x)+2v(y_3)$,
\item[$\ast$] For $v(x)<0$: $v(a)=v(y_3)-v(s)$ and $-v(y_3)\leq v(s)\leq-v(y_3) -v(x)$.
\end{itemize}
\end{itemize}
Finally, all the scopes under the conditions
(\ref{bsp1}) to (\ref{bsp4}) are treated. This proves the Claim.
\end{proof}
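The first assertion of the Claim, namely that each listed scope satisfies conditions (\ref{bsp1})--(\ref{bsp4}), can be spot-checked numerically. The following Python sketch is our own illustration and not part of the original argument: the residue characteristic $P=3$, the sample points, and all helper names are arbitrary choices. It rewrites the four memberships as valuation inequalities, using that $a\in c(1+t\,\mathbf o_F)$ is equivalent to $v(a/c-1)\geq v(t)$, and tests the first two scopes listed for $x\in F^\times\backslash(1+\wp)$.

```python
from fractions import Fraction
from math import inf

P = 3  # sample residue characteristic; any prime would do

def v(r):
    """Normalized P-adic valuation of a rational number, with v(0) = +infinity."""
    r = Fraction(r)
    if r == 0:
        return inf
    n, d, val = r.numerator, r.denominator, 0
    while n % P == 0:
        n //= P
        val += 1
    while d % P == 0:
        d //= P
        val -= 1
    return val

def satisfies(a, s, x, y2, y3):
    """Conditions (bsp1)-(bsp4), rewritten as valuation inequalities:
    a lies in c*(1 + t*o_F) if and only if v(a/c - 1) >= v(t)."""
    a, s, x, y2, y3 = map(Fraction, (a, s, x, y2, y3))
    u = 1 + y2 * y3
    return (v(x * y3 / (u * a) - 1) >= v(s) + v(1 - x) - v(u)   # (bsp1)
            and v(x / (y2 * a) - 1) >= v(s) + v(1 - x) - v(y2)  # (bsp2)
            and v(a * u / y3 - 1) >= -v(s) - v(y3)              # (bsp3)
            and v(a * y2 - 1) >= -v(s))                         # (bsp4)

# First scope for x outside 1+p: -v(y3) <= v(y2) < 0, v(s) = v(y2),
# a in (1/y2)(1 + p^{-v(y2)}); sample point with v(y2) = -1, v(y3) = 1:
y2, y3, x = Fraction(1, 3), Fraction(3), Fraction(2)
assert satisfies(a=3 * (1 + 3), s=Fraction(1, 3), x=x, y2=y2, y3=y3)
# Second scope: a in (x/y2)(1 + p^{-v(y2)}) for v(s) = -v(1-x) = 0:
assert satisfies(a=6 * (1 + 3), s=1, x=x, y2=y2, y3=y3)
```

Such a brute-force check of course only samples individual points; the completeness of the list of scopes still requires the case distinctions carried out in the proof above.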
\subsection*{Computation of the inner integral}
We compute the inner integral in the case $y\in NN'$. The scopes of integration are summed up in the Claim above. The variable of integration is $a\in F^\times$.
In case $x\in F^\times \backslash (1+\wp)$ we get:
\begin{align*}
& \chi_1(1-x)\operatorname{vol}^\times(\mathbf o_F^\times)(1-q^{-1})^{-1}\cdot\\
&\Biggl(
(\lvert v(x)\rvert +1)(1-q^{-1})\Bigl(\mathbf 1_{\mathbf o_F}(y_2)\mathbf 1_{\mathbf o_F}(y_3)+\mathbf 1_{F\backslash \mathbf o_F}(y_3)\mathbf 1_{-y_3^{-1}(1+\wp^{-v(y_3)})}(y_2)\Bigr)\Biggr.\\
& \quad\quad+
2q^{v(y_2)}\mathbf 1_{\wp^{-v(y_2)}}(y_3)\mathbf 1_{F\backslash \mathbf o_F}(y_2) +2q^{v(y_3)}\mathbf 1_{F\backslash \mathbf o_F}(y_3)\mathbf 1_{\wp^{-v(y_3)+1}}(y_2)\\
&\quad\quad\Biggl.
+2q^{v(y_3)+v(1+y_2y_3)}\mathbf 1_{F\backslash \mathbf o_F}(y_3)\mathbf 1_{-y_3^{-1}(\mathbf o_F^\times\backslash(1+\wp^{-v(y_3)}))}(y_2)\Biggr).
\end{align*}
In case $x\in 1+\wp$ we get:
\begin{align*}
&\chi_1(1-x)\operatorname{vol}^\times(\mathbf o_F^\times)(1-q^{-1})^{-1}\cdot\\
&\Biggl( \mathbf 1_{F\backslash \wp^{-v(1-x)+1}}(y_2)q^{v(y_2)}\mathbf 1_{\wp^{-v(y_2)}}(y_3)
+\mathbf 1_{F\backslash \wp^{-v(1-x)}}(y_2)q^{v(y_2)}\mathbf 1_{\wp^{-v(y_2)}}(y_3)\Biggr.\\
&\Biggl.+ \mathbf 1_{F\backslash \wp^{-v(1-x)+1}}(y_3)q^{v(y_3)}\mathbf 1_{\wp^{-v(y_3)+1}}(y_2)
+\mathbf 1_{F\backslash \wp^{-v(1-x)}}(y_3)q^{v(y_3)}\mathbf 1_{\wp^{-v(y_3)+1}}(y_2)\Biggr.\\
&\Biggl.\quad\quad+q^{v(y_3)+v(1+y_2y_3)}\mathbf 1_{F\backslash \wp^{-v(1-x)+1}}(y_3)\mathbf 1_{-y_3^{-1}(\mathbf o_F^\times\backslash(1+\wp^{-v(y_3)-v(1-x)+1}))}(y_2)\Biggr.\\
&\Biggl.\quad\quad +q^{v(y_3)+v(1+y_2y_3)}\mathbf 1_{F\backslash \wp^{-v(1-x)}}(y_3)\mathbf 1_{-y_3^{-1}(\mathbf o_F^\times\backslash(1+\wp^{-v(y_3)-v(1-x)}))}(y_2)
\Biggr).
\end{align*}
For the inner integral in case $y\in NwN$ we only have to consider those $(y_2,y_3)$ which satisfy $-v(y_3)<v(y_2)$, by the shape of the exterior function $\phi_2$.
Here, we get the following integrals:
In case $x\in F^\times\backslash (1+\wp)$:
\begin{align*}
& \chi_1(1-x)\operatorname{vol}^\times(\mathbf o_F^\times)(1-q^{-1})^{-1}\cdot\\
&\Biggl(
(\lvert v(x)\rvert +1)(1-q^{-1})\mathbf 1_{\mathbf o_F\cap\wp^{-v(y_3)+1}}(y_2)\mathbf 1_{\mathbf o_F}(y_3)\Biggr.\\
& \quad\quad\Biggl. +
2q^{v(y_2)}\mathbf 1_{\wp^{-v(y_2)+1}}(y_3)\mathbf 1_{F\backslash \mathbf o_F}(y_2) +2q^{v(y_3)}\mathbf 1_{F\backslash \mathbf o_F}(y_3)\mathbf 1_{\wp^{-v(y_3)+1}}(y_2)\Biggr).
\end{align*}
In case $x\in 1+\wp$:
\begin{align*}
& \chi_1(1-x)\operatorname{vol}^\times(\mathbf o_F^\times)(1-q^{-1})^{-1}\cdot\\
&\Biggl( \mathbf 1_{F\backslash \wp^{-v(1-x)+1}}(y_2)q^{v(y_2)}\mathbf 1_{\wp^{-v(y_2)+1}}(y_3)
+\mathbf 1_{F\backslash \wp^{-v(1-x)}}(y_2)q^{v(y_2)}\mathbf 1_{\wp^{-v(y_2)+1}}(y_3)\Biggr.\\
&\Biggl. +
\mathbf 1_{F\backslash \wp^{-v(1-x)+1}}(y_3)q^{v(y_3)}\mathbf 1_{\wp^{-v(y_3)+1}}(y_2)
+
\mathbf 1_{F\backslash \wp^{-v(1-x)}}(y_3)q^{v(y_3)}\mathbf 1_{\wp^{-v(y_3)+1}}(y_2)\Biggr).
\end{align*}
\subsection*{Computation of the exterior integral}
Now we integrate these inner integrals against the exterior function
$\phi_i$. For $x\in 1+\wp$, the following summands occur.
In case $x\in 1+\wp$ and $y\in NN'$, the exterior function is $\phi_1$.
We treat the inner integral term by term and obtain, up to the factor
$\chi_1(b(1-x))\operatorname{vol}^\times(\mathbf o_F^\times)$:
\begin{align*}
&(1-q^{-1})^{-1}\int\int\mathbf 1_{b\mathbf o_F\backslash\wp^{-v(1-x)+1}}(y_2)q^{v(y_2)}\mathbf 1_{b^{-1}\mathbf o_F\cap\wp^{-v(y_2)}}(y_3)~dy_2~dy_3\\
&\quad\quad=\mathbf 1_{\wp^{v(1-x)}}(b^{-1})\lvert b^{-1}\rvert \bigl(-v(b)-v(1-x)+1\bigr)\operatorname{vol}(\mathbf o_F)^2;
\end{align*}
\begin{align*}
&(1-q^{-1})^{-1}\int\int\mathbf 1_{b\mathbf o_F\backslash\wp^{-v(1-x)}}(y_2)q^{v(y_2)}\mathbf 1_{b^{-1}\mathbf o_F\cap\wp^{-v(y_2)}}(y_3)~dy_2~dy_3\\
&\quad\quad=\mathbf 1_{\wp^{v(1-x)+1}}(b^{-1})\lvert b^{-1}\rvert \bigl(-v(b)-v(1-x)\bigr)\operatorname{vol}(\mathbf o_F)^2;
\end{align*}
\begin{align*}
&(1-q^{-1})^{-1}\int\int\mathbf 1_{b\mathbf o_F\cap\wp^{-v(y_3)+1}}(y_2)q^{v(y_3)}\mathbf 1_{b^{-1}\mathbf o_F\backslash\wp^{-v(1-x)+1}}(y_3)~dy_2~dy_3\\
&\quad\quad=\mathbf 1_{\wp^{v(1-x)}}(b)\lvert b\rvert \bigl(q^{-1}+v(b)-v(1-x)\bigr)\operatorname{vol}(\mathbf o_F)^2;
\end{align*}
\begin{align*}
&(1-q^{-1})^{-1}\int\int\mathbf 1_{b\mathbf o_F\cap\wp^{-v(y_3)+1}}(y_2)q^{v(y_3)}\mathbf 1_{b^{-1}\mathbf o_F\backslash\wp^{-v(1-x)}}(y_3)~dy_2~dy_3\\
&\quad\quad=\mathbf 1_{\wp^{v(1-x)+1}}(b)\lvert b\rvert \bigl(q^{-1}+v(b)-v(1-x)-1\bigr)\operatorname{vol}(\mathbf o_F)^2;
\end{align*}
\begin{align*}
& (1-q^{-1})^{-1}\int\int\Bigl( \mathbf 1_{b\mathbf o_F\cap-y_3^{-1}(\mathbf o_F^\times\backslash(1+\wp^{-v(y_3)-v(1-x)+1}))}(y_2)q^{v(y_3)+v(1+y_2y_3)}\cdot\Bigr.\\
&\Bigl.\quad\quad\quad\quad \mathbf 1_{b^{-1}\mathbf o_F\backslash\wp^{-v(1-x)+1}}(y_3)~dy_2~dy_3\Bigr)\\
&\quad\quad=\mathbf 1_{\wp^{v(1-x)}}(b)\lvert b\rvert \bigl(1-2q^{-1}+(1-q^{-1})(v(b)-v(1-x))\bigr)\operatorname{vol}(\mathbf o_F)^2;
\end{align*}
\begin{align*}
&(1-q^{-1})^{-1}\int\int\mathbf 1_{b\mathbf o_F\cap-y_3^{-1}(\mathbf o_F^\times\backslash(1+\wp^{-v(y_3)-v(1-x)}))}(y_2)q^{v(y_3)+v(1+y_2y_3)}\\
&\quad\quad\quad\quad\quad\quad\quad\cdot\mathbf 1_{b^{-1}\mathbf o_F\backslash\wp^{-v(1-x)}}(y_3)~dy_2~dy_3\\
&=\mathbf 1_{\wp^{v(1-x)+1}}(b)\lvert b\rvert \bigl(1-2q^{-1}+(1-q^{-1})(v(b)-v(1-x)-1)\bigr)\operatorname{vol}(\mathbf o_F)^2.
\end{align*}
In case $x\in 1+\wp$ and $y\in NwN$ the exterior function is $\phi_2$. We get, up to the factor
$\chi_1(b(1-x))\operatorname{vol}^\times(\mathbf o_F^\times)$:
\begin{align*}
&(1-q^{-1})^{-1}\int\int\mathbf 1_{b^{-1}\mathbf o_F\backslash\wp^{-v(1-x)+1}}(y_2)q^{v(y_2)}\mathbf 1_{b\wp\cap\wp^{-v(y_2)}}(y_3)~dy_2\:dy_3\\
&\quad\quad
=\mathbf 1_{\wp^{v(1-x)}}(b)\lvert b\rvert \bigl(v(b)-v(1-x)+1\bigr)q^{-1}\operatorname{vol}(\mathbf o_F)^2;
\end{align*}
\begin{align*}
&(1-q^{-1})^{-1}\int\int\mathbf 1_{b^{-1}\mathbf o_F\backslash\wp^{-v(1-x)}}(y_2)q^{v(y_2)}\mathbf 1_{b\wp\cap\wp^{-v(y_2)+1}}(y_3)~dy_2~dy_3\\
&\quad\quad=\mathbf 1_{\wp^{v(1-x)+1}}(b)\lvert b\rvert \bigl(v(b)-v(1-x)\bigr)q^{-1}\operatorname{vol}(\mathbf o_F)^2;
\end{align*}
\begin{align*}
&(1-q^{-1})^{-1}\int\int\mathbf 1_{b^{-1}\mathbf o_F\cap\wp^{-v(y_3)+1}}(y_2)q^{v(y_3)}\mathbf 1_{b\wp\backslash\wp^{-v(1-x)+1}}(y_3)~dy_2~dy_3\\
&\quad\quad=\mathbf 1_{\wp^{v(1-x)}}(b^{-1})\lvert b^{-1}\rvert \bigl(-v(b)-v(1-x)\bigr)\operatorname{vol}(\mathbf o_F)^2;
\end{align*}
\begin{align*}
&(1-q^{-1})^{-1}\int\int\mathbf 1_{b^{-1}\mathbf o_F\cap\wp^{-v(y_3)+1}}(y_2)q^{v(y_3)}\mathbf 1_{b\wp\backslash\wp^{-v(1-x)}}(y_3)~dy_2~dy_3\\
&\quad\quad=\mathbf 1_{\wp^{v(1-x)+1}}(b^{-1})\lvert b^{-1}\rvert \bigl(-v(b)-v(1-x)-1\bigr)\operatorname{vol}(\mathbf o_F)^2.
\end{align*}
Summing up all terms and reinserting the omitted factor, we get the translated local linking number for $x\in 1+\wp$
and $\phi=\chi\cdot\mathbf 1_{\operatorname{GL}_2(\mathbf o_F)}$:
\begin{align*}
& <\phi,\begin{pmatrix} b&0\\0&1\end{pmatrix}.\phi>_{x\in 1+\wp} =
\chi_1(1-x)\chi_1(b)\operatorname{vol}^\times (\mathbf o_F^\times)\operatorname{vol}(\mathbf o_F)^2\cdot\\
&\quad\quad\quad\Biggl(\mathbf 1_{\wp^{v(1-x)+1}}(b)\lvert b\rvert \bigl(2v(b)-2(v(1-x)+1)+1\bigr)\Biggr.\\
&\quad\quad\quad\quad\quad+\mathbf 1_{\wp^{v(1-x)+1}}(b^{-1})\lvert b^{-1}\rvert \bigl(-2v(b)-2(v(1-x)+1)+1\bigr)\\
&\quad\quad\quad\quad\quad
+\mathbf 1_{\wp^{v(1-x)}}(b)\lvert b\rvert \bigl(2v(b)-2v(1-x)+1\bigr)\\
&\quad\quad\quad\quad\quad\Biggl.+\mathbf 1_{\wp^{v(1-x)}}(b^{-1})\lvert b^{-1}\rvert \bigl(-2v(b)-2v(1-x)+1\bigr)\Biggr).
\end{align*}
This is the term claimed in Example~\ref{bsp_lln_nichtkompakt}.
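As a partial consistency check of this summation, one can verify symbolically that for $v(b)-v(1-x)=k\geq 1$ (so that both $b$-indicators, respectively both $b^{-1}$-indicators, in the final formula equal one) the coefficients listed above add up to the claimed total $4k$. The following SymPy sketch is our own illustration; the symbol \texttt{q\_inv} stands for $q^{-1}$, and the term labels refer to the order of the integrals displayed above.

```python
import sympy as sp

qi, k = sp.symbols('q_inv k')  # q_inv = q^{-1}; k = v(b) - v(1-x) >= 1

# Coefficients of the |b|-supported terms, read off from the four b-supported
# NN' integrals and the two b-supported NwN integrals:
b_side = [qi + k,                       # third NN' summand
          qi + k - 1,                   # fourth NN' summand
          1 - 2*qi + (1 - qi)*k,        # fifth NN' summand
          1 - 2*qi + (1 - qi)*(k - 1),  # sixth NN' summand
          (k + 1)*qi,                   # first NwN summand
          k*qi]                         # second NwN summand
# Claimed total: (2v(b)-2(v(1-x)+1)+1) + (2v(b)-2v(1-x)+1) = 4k
assert sp.simplify(sum(b_side) - 4*k) == 0

# Coefficients of the |b^{-1}|-supported terms, with k = v(b^{-1}) - v(1-x):
binv_side = [k + 1, k, k, k - 1]  # first/second NN' and third/fourth NwN
assert sp.simplify(sum(binv_side) - 4*k) == 0
```

Note that all $q^{-1}$-contributions cancel on the $b$-side, which is why no $q^{-1}$ appears in the final coefficients; the boundary case $k=0$, where only the $\wp^{v(1-x)}$-indicators survive, can be checked in the same way.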
In case $x\in F^\times\backslash(1+\wp)$, the occurring exterior integrals can mostly be read off from those for $x\in 1+\wp$ by substituting $v(1-x)=0$ there.
The exceptional terms for $y\in NN'$ are:
\begin{align*}
&\int\int\mathbf 1_{\mathbf o_F\cap b\mathbf o_F}(y_2)\mathbf 1_{\mathbf o_F\cap b^{-1}\mathbf o_F}(y_3)~dy_2~dy_3
=
\mathbf 1_{\mathbf o_F}(b)\lvert b\rvert +\mathbf 1_{\wp}(b^{-1})\lvert b^{-1}\rvert
\end{align*}
and
\begin{align*}
& \int\int\mathbf 1_{ b\mathbf o_F\cap -y_3^{-1}(1+\wp^{-v(y_3)})}(y_2)\mathbf 1_{ b^{-1}\mathbf o_F\backslash \mathbf o_F}(y_3)~dy_2~dy_3 =
\mathbf 1_{\wp}(b)\lvert b\rvert (1-q^{-1}).
\end{align*}
The exceptional term for $y\in NwN$ is:
\begin{align*}
& \int\int\mathbf 1_{\mathbf o_F\cap b^{-1}\mathbf o_F}(y_2)\mathbf 1_{\mathbf o_F\cap b\wp}(y_3)~dy_2~dy_3=
\mathbf 1_{\mathbf o_F}(b)\lvert b\rvert q^{-1} +\mathbf 1_{\wp}(b^{-1})\lvert b^{-1}\rvert.
\end{align*}
In these formulae we have omitted the factor $\operatorname{vol}(\mathbf o_F)^2$ on the right-hand side, as well as the factor
$\chi_1(1-x)\operatorname{vol}^\times(\mathbf o_F^\times)$.
Summing up all the terms, we get the translated local linking number for $x\in F^\times\backslash(1+\wp)$
and $\phi=\chi\cdot\mathbf 1_{\operatorname{GL}_2(\mathbf o_F)}$:
\begin{align*}
& <\phi,\begin{pmatrix} b&0\\0&1\end{pmatrix}.\phi>_x =
\chi_1(1-x)\chi_1(b)\operatorname{vol}^\times (\mathbf o_F^\times)\operatorname{vol}(\mathbf o_F)^2\cdot\\
&\quad\quad\quad\Biggl(\mathbf 1_{\mathbf o_F^\times}(b)\bigl(\lvert v(x)\rvert +1\bigr)(1+q^{-1}) +\mathbf 1_{\wp}(b)\lvert b\rvert \bigl(4v(b)+2\lvert v(x)\rvert\bigr)\Biggr.
\Biggr.\\
& \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\Biggl. +\mathbf 1_{\wp}(b^{-1})\lvert b^{-1}\rvert \bigl(-4v(b)+2\lvert v(x)\rvert\bigr)\Biggr).
\end{align*}
This is the result claimed in Example~\ref{bsp_lln_nichtkompakt}.
| 68,419 |
\section{Introduction}
Since its discovery in 1959, the Weibel instability \citep{wei59:wei,fri59:wei} has been, and continues to be, a subject of intense research. Part of this focus is due to the fact that the Weibel instability excites aperiodic fluctuations, i.\,e., purely growing waves that do not propagate out of the system. Therefore, the Weibel instability has been considered to be responsible for the creation of seed magnetic fields in the early universe \citep{sch03:cos,sak04:mag,sch05:ori}, which act as a progenitor to the large-scale magnetic field that we observe today in all (spiral) galaxies \citep{bec96:mag}. Likewise, Weibel fluctuations can provide the necessary dissipation of bulk velocity that leads to the formation of shock waves \citep{tau06:grb}. In highly relativistic jets that occur during events such as GRBs (Gamma-Ray Bursts) or in extreme objects such as AGNs (Active Galactic Nuclei), plasma instabilities \citep{sch02:agn} and particularly the Weibel instability are, thus, ultimately responsible for the radiation observed at Earth.
Weibel modes and their associated non-linear structures \citep{byc03:wei} also play a role in the radiative cooling of relativistic particles in blazar and gamma-ray burst sources \citep{sch07:coo}. Furthermore, Weibel modes can also be excited in quantum plasmas \citep{haa08:qua,haa08:mac}, thus generalizing the classical Weibel instability equation; here, a connection has been made to (dark) soliton waves \citep{shu06:las}. It is also worth noticing that there exist experimental verifications of Weibel \citep{wei01:exp} and soliton \citep{bor02:exp} modes in laser plasmas, thus emphasizing the broad applicability of the underlying mechanism, which converts the free energy from anisotropic distributions into magnetic field energy. In analytical studies, such soliton modes have been used to create magnetic turbulence \citep{kin73:tur,wei77:pla}.
The underlying analytical investigation of the non-linear aspects of the classic Weibel instability made use of the fact that the classic Weibel instability excites only transverse, electromagnetic fluctuations \citep{usr06:nli}. For asymmetric distributions, it was shown that the range of unstable wavenumbers is reduced to one single unstable wave mode, which reminds one of solitary structures that are based on single wavenumbers, too. For the case of transverse electromagnetic modes, it was shown that, depending on the exact form of the distribution function, spatially limited structures are produced (solitons).
From the radiation pattern of particles scattered in soliton modes \citep{usr09:rad}, it is known that there are many similarities to synchrotron radiation. Motivated by the fact that the energy output of the particles is mostly attributed to synchrotron radiation and inverse Compton scattering, it is necessary to take into account other radiation processes as well, also because synchrotron radiation requires the presence of a background magnetic field. Until now, the origin and the mechanism for such a field have been discussed intensely \citep{med99:grb,med07:wei,med07:jit}. The train of thought is the following: (i) in reality, particle velocity distributions should almost always be somewhat asymmetric; (ii) in this case, unstable plasma modes are monochromatic, as has been demonstrated analytically \citep{tau06:is1}; (iii) such isolated structures can lead to soliton modes, as has been shown for purely electromagnetic Weibel modes \citep{usr06:nli}; (iv) in relativistic outflows such as for GRBs and AGN jets, all kinds of plasma instabilities are expected to arise. Hence, it is most important to discuss the radiation pattern for such scenarios that is generated by particles being scattered in such magnetic structures. In principle, the method of obtaining the differential frequency spectrum is similar to that for synchrotron radiation \citep{usr09:rad}. The basic difference is that for synchrotron radiation the magnetic field is considered to be constant and the electron moves in circles perpendicular to the magnetic field, while in the case of the soliton the electrons move mostly linearly and are deflected via the Lorentz force. Thus, the radiation is not produced by acceleration through a constant background field but, instead, is caused by an interaction of the electrons with highly varying magnetic and electric fields.
Inspired by the procedure shown in \citet{usr06:nli}, a number of subsequent, more detailed, investigations revealed the exotic properties of the (linear) Weibel instability that are unfolded in the case of totally asymmetric distribution functions \citep{tau06:is1,tau06:is2,tau07:evi,tau07:tun}. For such distributions, the electrostatic and electromagnetic wave modes are coupled, and it was shown that any unstable Weibel mode must be isolated, i.\,e., restricted to a single discrete wavenumber. Specific examples for the distribution function illustrated that isolated Weibel modes are excited. Even if one allows for a small real part of the frequency, the isolated Weibel modes persist \citep{tau07:wea}. Such weakly propagating unstable modes are important for oblique wave propagation, because for such cases the growth rate of unstable waves is maximal \citep{bre04:bpi,bre05:bpi}. It is then appropriate to ask how non-linear soliton modes are influenced when one includes the coupling between electrostatic and electromagnetic potentials. The purpose of this paper is to explore that question.
The paper is organized as follows. In Sec.~\ref{tech}, the Vlasov equation is transformed and the non-linear wave equations are derived. In Sec.~\ref{nonrel}, the non-relativistic soliton behavior is derived and examples are given for two basic types of distribution functions. In Sec.~\ref{rel}, the weakly and the fully relativistic behaviors are derived, which requires approximations as regards the transformation of the volume element in momentum space. Furthermore, in Secs.~\ref{nonrel} and \ref{rel}, solutions are derived for two limiting cases regarding the relative values of the electrostatic and vector potentials. In Sec.~\ref{summ}, the results are summarized and discussed. Detailed explanations of the integral transformations in the cases of non-relativistic and relativistic plasmas are given in \ref{nr_transf} and \ref{r_transf}, respectively.
\section{The Relativistic Vlasov Equation}\label{tech}
Throughout the derivation of the non-linear wave equation and the instability conditions, the vector potential, $\f A=(A_x,A_y,A_z)$, and the scalar potential, $\ensuremath{\Phi}$, will be used, with
\begin{subequations}\begin{eqnarray}
\f E&=&-\frac{1}{c}\pd[\f A]t-\f\nabla\ensuremath{\Phi}\\
\f B&=&\f\nabla\times\f A.
\end{eqnarray}\end{subequations}
For a soliton wave, observed to be moving at a constant velocity $\beta c$, the easiest way to handle the non-linear equations is to transform to a reference frame moving with the soliton so that, in such a reference frame, the potentials $\f A$ and $\ensuremath{\Phi}$ are functions solely of the spatial coordinate, $x$, and are independent of time. The electric and magnetic fields are then given by
\begin{subequations}
\begin{eqnarray}
\f E&=&-\ensuremath{\Phi}'\ensuremath{\hat{\boldsymbol{e}}_x}\\
\f B&=&(0,-A'_z,A'_y),
\end{eqnarray}
\end{subequations}
where primes denote differentiation with respect to $x$. The Vlasov equation can then be expressed as\footnote{While our notation differs from that used in classical relativistic mechanics, it is traditional to the field of plasma physics.}
\begin{equation}\label{eq:vlas1}
\fl
0=v_x\,\pd[f]x+\frac{e}{c}\left[\pd[f]{p_x}\left(-c\ensuremath{\Phi}'+v_yA'_y+v_zA'_z\right)-\pd[f]{p_y}\,A'_yv_x-\pd[f]{p_z}\,A'_zv_x\right].
\end{equation}
The characteristics of Eq.~\eqref{eq:vlas1} with respect to the $y$ and $z$ coordinates are given through $\ensuremath{\mathrm{d}} p_{(y,z)}/\ensuremath{\mathrm{d}} A_{(y,z)}=-e/c$ \citep[compare with][]{usr06:nli}. For the solution of the characteristic equations, introduce $\varpi_{(y,z)}$ through
\begin{equation}
\varpi_{(y,z)}=p_{(y,z)}+\frac{e}{c}\,A_{(y,z)}.
\end{equation}
Then, Eq.~\eqref{eq:vlas1} simplifies considerably, because $\varpi_{(y,z)}$ are constants and thus the partial differentiations of the distribution function $f$ with respect to $\varpi_y$ and $\varpi_z$ vanish, yielding
\begin{equation}\label{eq:vlas2}
\fl
0=\frac{p_x}{m\gamma}\,\pd[f]x+\frac{e}{c}\,\pd[f]{p_x}\left[-c\ensuremath{\Phi}'+\frac{1}{m\gamma}\left(\varpi_y-\frac{e}{c}\,A_y\right)A'_y+\frac{1}{m\gamma}\left(\varpi_z-\frac{e}{c}\,A_z\right)A'_z\right],
\end{equation}
with $f=f(x,p_x,\varpi_y,\varpi_z)$ and where
\begin{equation}
\gamma^2=1+\frac{1}{(mc)^2}\left[p_x^2+\left(\varpi_y-\frac{e}{c}\,A_y\right)^2+\left(\varpi_z-\frac{e}{c}\,A_z\right)^2\right].
\end{equation}
It is instructive to first consider the non-relativistic limit [set $\gamma=1$ in Eq.~\eqref{eq:vlas2}] because this limit allows exact solution of the Vlasov equation for arbitrary $\ensuremath{\Phi},A_y$, and $A_z$. The non-linear coupling of the electrostatic and electromagnetic potentials in the Maxwell equations can then be investigated simply in the weak coupling limit, where ``weak'' will be defined later.
Thereafter one can consider the relativistic particle behavior where, as will be shown, coupling of the electrostatic and electromagnetic potentials is considerably more involved, although the basic procedure from the non-relativistic situation can be used, \emph{mutatis mutandis}.
\section{The Non-Relativistic Limit}\label{nonrel}
In the case where one deals solely with non-relativistic particles one can set $\gamma=1$ in Eq.~\eqref{eq:vlas2}. Then one has
\begin{equation}\label{eq:vlas3}
\fl
0=\frac{p_x}{m}\,\pd[f]x+e\,\pd[f]{p_x}\left[-\ensuremath{\Phi}'+\frac{1}{mc}\left(\varpi_y-\frac{e}{c}\,A_y\right)A'_y+\frac{1}{mc}\left(\varpi_z-\frac{e}{c}\,A_z\right)A'_z\right].
\end{equation}
Set $p_x^2=u$ in Eq.~\eqref{eq:vlas3} to obtain
\begin{equation}\label{eq:vlas4}
\fl
0=\pd[f]x+2em\,\pd[f]u\left[-\ensuremath{\Phi}'+\frac{1}{mc}\left(\varpi_y-\frac{e}{c}\,A_y\right)A'_y+\frac{1}{mc}\left(\varpi_z-\frac{e}{c}\,A_z\right)A'_z\right].
\end{equation}
With
\begin{equation}
w=u+2em\ensuremath{\Phi}+\left(\varpi_y-\frac{e}{c}\,A_y\right)^2+\left(\varpi_z-\frac{e}{c}\,A_z\right)^2,
\end{equation}
Eq.~\eqref{eq:vlas4} provides the general exact solution $f=f(w,\varpi_y,\varpi_z)$ by direct substitution.\footnote{Consider Eq.~\eqref{eq:vlas4} in the form $\partial f/\partial x-(\ensuremath{\mathrm{d}} g/\ensuremath{\mathrm{d}} x)(\partial f/\partial u)=0$. Following the theory of linear partial differential equations, set $g(x)$ as a new variable. Then $\partial f/\partial g-\partial f/\partial u=0$, which has the general solution $f=f(g+u)$. With $w=g+u$, this is precisely as given.}
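As an independent consistency check (a sketch only, using \texttt{sympy}), one can verify symbolically that any differentiable $f=f(w)$ annihilates the left-hand side of Eq.~\eqref{eq:vlas4}:

```python
import sympy as sp

# u stands for p_x^2; varpi_y, varpi_z are the constant characteristics
x, u, e, m, c, vy, vz = sp.symbols('x u e m c varpi_y varpi_z')
Phi = sp.Function('Phi')(x)
Ay = sp.Function('A_y')(x)
Az = sp.Function('A_z')(x)
F = sp.Function('F')  # arbitrary profile, f = F(w)

w = u + 2*e*m*Phi + (vy - e/c*Ay)**2 + (vz - e/c*Az)**2
f = F(w)

# left-hand side of Eq. (vlas4)
lhs = sp.diff(f, x) + 2*e*m*sp.diff(f, u)*(
    -sp.diff(Phi, x)
    + ((vy - e/c*Ay)*sp.diff(Ay, x)
       + (vz - e/c*Az)*sp.diff(Az, x))/(m*c))

print(sp.simplify(lhs))  # -> 0
```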
Consider then Maxwell's equations in the limit of non-relativistic particles. For a plasma with different constituents---denoted by the index $\alpha$ with, e.\,g., $\alpha=e$ for electrons and $\alpha=p$ for protons---one has
\begin{subequations}
\begin{eqnarray}
\ensuremath{\Phi}''&=&-4\pi\sum_\alpha e_\alpha\int\ensuremath{\mathrm{d}}^3p\;f_\alpha\\
A''_{(y,z)}&=&-4\pi\sum_\alpha\frac{e_\alpha}{m_\alpha c}\int\ensuremath{\mathrm{d}}^3p\;p_{(y,z)}f_\alpha.
\end{eqnarray}
\end{subequations}
In terms of $w,\varpi_y$, and $\varpi_z$ the Maxwell equations take the form (see \ref{nr_transf})
\begin{subequations}
\begin{equation}\label{eq:Ph1}
\fl
\ensuremath{\Phi}''=8\pi\sum_\alpha e_\alpha\int_0^\infty\ensuremath{\mathrm{d}}\xi\ensuremath{\int_{-\infty}^\infty}\ensuremath{\mathrm{d}}\varpi_y\ensuremath{\int_{-\infty}^\infty}\ensuremath{\mathrm{d}}\varpi_z\;\sqrt\xi\,\pd\xi\,f_\alpha\!\left(w_\star+\xi,\varpi_y,\varpi_z\right)
\end{equation}
and
\begin{eqnarray}
\f A''\ensuremath{_\perp}&=&8\pi\sum_\alpha\frac{e_\alpha}{m_\alpha c}\int_0^\infty\ensuremath{\mathrm{d}}\xi\ensuremath{\int_{-\infty}^\infty}\ensuremath{\mathrm{d}}\varpi_y\ensuremath{\int_{-\infty}^\infty}\ensuremath{\mathrm{d}}\varpi_z\;\sqrt\xi\nonumber\\
&\times&\left(\f\varpi\ensuremath{_\perp}-\frac{e_\alpha}{c}\,\f A\ensuremath{_\perp}\right)\pd\xi\,f\!\left(w_\star+\xi,\varpi_y,\varpi_z\right) \label{eq:A1},
\end{eqnarray}
\end{subequations}
with $\f A\ensuremath{_\perp}=(0,A_y,A_z)$, $\f\varpi\ensuremath{_\perp}=(0,\varpi_y,\varpi_z)$ and $w\equiv w_\star+\xi$, where
\begin{equation}
w_\star=2e_\alpha m_\alpha\ensuremath{\Phi}+\left(\f\varpi\ensuremath{_\perp}-\frac{e_\alpha}{c}\,\f A\ensuremath{_\perp}\right)^2.
\end{equation}
Solution of the coupled electrostatic potential, Eq.~\eqref{eq:Ph1}, and the electromagnetic potential, Eq.~\eqref{eq:A1}, depends sensitively on the assignment prescribed for the particle distribution function $f(w,\varpi_y,\varpi_z)$. Note that if $f(w,\varpi_y,\varpi_z)$ is a function solely of $w$ and not of $\varpi_y,\varpi_z$ \emph{explicitly}, then $\f A\ensuremath{_\perp}=0$. Then only $\ensuremath{\Phi}$ is eligible for a soliton structure. To illustrate this point more closely, consider two cases, where $f(w,\varpi_y,\varpi_z)$ takes on the functional forms
\begin{subequations}
\begin{equation}
f_\alpha=f_{a,0}\exp\!\left(-\frac{w}{w_\alpha}\right) \label{eq:fs1}
\end{equation}
or
\begin{equation}
f_\alpha=f_{a,0}\exp\!\left(-\frac{w}{w_\alpha}\right)\exp\!\left(-\frac{\varpi_y^2+\varpi_z^2}{\varpi_\alpha^2}\right), \label{eq:fs2}
\end{equation}
\end{subequations}
with $f_{a,0}$, $w_\alpha$, and $\varpi_\alpha$ constants.
Direct integration of the right-hand sides of Eqs.~\eqref{eq:Ph1} and \eqref{eq:A1} is possible with the expressions from Eqs.~\eqref{eq:fs1} and \eqref{eq:fs2}. In the case of Eq.~\eqref{eq:fs1} the results are
\begin{subequations}
\begin{eqnarray}
\ensuremath{\Phi}''&=&-4\pi^{5/2}\sum_\alpha f_{a,0}e_\alpha w_\alpha^{3/2}\exp\!\left(-\frac{2e_\alpha m_\alpha}{w_\alpha}\,\ensuremath{\Phi}\right) \label{eq:case1a}\\
\f A''\ensuremath{_\perp}&=&0 \label{eq:case1b}
\end{eqnarray}
\end{subequations}
and in the case of Eq.~\eqref{eq:fs2}
\begin{subequations}
\begin{equation}\label{eq:case2a}
\fl
\ensuremath{\Phi}''=-4\pi^{5/2}\sum_\alpha f_{a,0}\frac{e_\alpha w_\alpha^{3/2}\varpi_\alpha^2}{\varpi_\alpha^2+w_\alpha}\,\exp\!\left[-\frac{2e_\alpha m_\alpha}{w_\alpha}\,\ensuremath{\Phi}-\left(\frac{e_\alpha}{c}\right)^2\frac{A_y^2+A_z^2}{w_\alpha+\varpi_\alpha^2}\right]
\end{equation}
together with
\begin{eqnarray}
A''_j&=&4\pi^{5/2}\sum_\alpha f_{a,0}\left(\frac{e_\alpha}{c}\right)^2\frac{w_\alpha^2}{\left(w_\alpha+\varpi_\alpha^2\right)^2}\,A_j\nonumber\\
&\times&\exp\!\left[-\frac{2e_\alpha m_\alpha}{w_\alpha}\ensuremath{\Phi}-\left(\frac{e_\alpha}{c}\right)^2\frac{A_y^2+A_z^2}{w_\alpha+\varpi_\alpha^2}\right] \label{eq:case2b}
\end{eqnarray}
\end{subequations}
with $j\in\{y,z\}$. Note that as $\varpi_\alpha\to\infty$ Eqs.~\eqref{eq:case2a} and \eqref{eq:case2b} reduce to Eqs.~\eqref{eq:case1a} and \eqref{eq:case1b}, respectively, as required. The determination of a soliton structure in both cases can then be readily given for different plasma conditions.
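The quoted reduction can also be confirmed symbolically. The following \texttt{sympy} sketch (one species, with the species index suppressed) takes the $\varpi_\alpha\to\infty$ limit of the right-hand side of Eq.~\eqref{eq:case2a}:

```python
import sympy as sp

w, vp, e, c, m, Phi, Ay, Az, f0 = sp.symbols(
    'w varpi e c m Phi A_y A_z f0', positive=True)

# right-hand sides of Eqs. (case2a) and (case1a) for a single species
rhs2 = (-4*sp.pi**sp.Rational(5, 2)*f0*e*w**sp.Rational(3, 2)
        * vp**2/(vp**2 + w)
        * sp.exp(-2*e*m*Phi/w - (e/c)**2*(Ay**2 + Az**2)/(w + vp**2)))
rhs1 = (-4*sp.pi**sp.Rational(5, 2)*f0*e*w**sp.Rational(3, 2)
        * sp.exp(-2*e*m*Phi/w))

lim = sp.limit(rhs2, vp, sp.oo)
print(sp.simplify(lim - rhs1))  # -> 0
```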
For instance an electron-positron plasma with identical plasma characteristics (i.\,e., $f_{a,0}$ the same for both species) allows one to write Eq.~\eqref{eq:case1a} in the form
\begin{equation}
\ensuremath{\Phi}''=4\pi^{5/2}f_{a,0}e_\alpha w_\alpha^{3/2}\left[\exp\!\left(\frac{2e_\alpha m_\alpha}{w_\alpha}\,\ensuremath{\Phi}\right)-\exp\!\left(-\frac{2e_\alpha m_\alpha}{w_\alpha}\,\ensuremath{\Phi}\right)\right],
\end{equation}
which integrates once directly to yield
\begin{equation}\label{eq:case1a2}
\left(\ensuremath{\Phi}'\right)^2=\frac{8\pi^{5/2}w_\alpha^{5/2}}{m_\alpha}\,f_{a,0}\cosh\!\left(\frac{2m_\alpha e_\alpha}{w_\alpha}\,\ensuremath{\Phi}\right)+\ensuremath{\text{const}}.
\end{equation}
With $\ensuremath{\Psi}=2m_\alpha e_\alpha\ensuremath{\Phi}/w_\alpha$ one has
\begin{equation}\label{eq:Psp1}
\left(\ensuremath{\Psi}'\right)^2=\frac{8\pi^{5/2}w_\alpha^{5/2}}{m_\alpha}\left(\frac{4m_\alpha^2e_\alpha^2}{w_\alpha^2}\right)f_{a,0}\cosh\ensuremath{\Psi}+\ensuremath{\text{const}}.
\end{equation}
Set the constant to a negative value, i.\,e., proportional to $-\cosh\ensuremath{\Psi}_\star$ (a positive constant would automatically keep $\ensuremath{\Psi}'$ growing and so cannot represent a soliton), yielding
\begin{equation}\label{eq:Psp2}
\left(\ensuremath{\Psi}'\right)^2=32\pi^{5/2}w_\alpha^{1/2}f_{a,0}m_\alpha e_\alpha^2\left(\cosh\ensuremath{\Psi}-\cosh\ensuremath{\Psi}_\star\right).
\end{equation}
If \ensuremath{\Psi}\ exceeds $\ensuremath{\Psi}_\star$ then $(\ensuremath{\Psi}')^2$ remains positive, so \ensuremath{\Psi}\ continues to grow for all larger coordinates and cannot represent a soliton. If \ensuremath{\Psi}\ is less than $\ensuremath{\Psi}_\star$ then Eq.~\eqref{eq:Psp2} is not valid because it would yield $(\ensuremath{\Psi}')^2$ negative. (For negative \ensuremath{\Psi}\ the argument can be reversed.) Hence the only solution is a spatially unbounded potential \ensuremath{\Psi}, which does not correspond to a soliton.
Such is in line with the small-amplitude limit, which provides a dispersion relation of the form $\omega^2=\omega_p^2+k^2v_{\text{th}}^2$ and which, for $\omega=0$, indicates $k=\pm i\omega_p/v_{\text{th}}$ and so represents spatially unbounded growing or decaying modes. Such unbounded structures do not represent soliton modes and so are to be discarded. The only solution left for Eq.~\eqref{eq:case1a2} is then $\ensuremath{\Phi}=0$.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{isononl1}
\caption{The behavior of $A'_y$ as a function of the normalized variable, $\zeta$, for positive and negative values of $B_0$ at $A=A_\star$, respectively.}
\label{ab:f1}
\end{figure}
In the case of Eq.~\eqref{eq:fs2} there is a common factor in Eq.~\eqref{eq:case2a} of
\begin{equation*}
\exp\!\left[-\left(\frac{e_\alpha}{c}\right)^2\frac{A_y^2+A_z^2}{w_\alpha+\varpi_\alpha^2}\right]
\end{equation*}
so that for the electron-positron plasma $\ensuremath{\Phi}''$ again has the same sign as \ensuremath{\Phi}\ everywhere and so no electrostatic soliton is available. Here, ``common'' means that the factor is identical for the two species and so can be brought outside the summation over $\alpha$. Thus $\ensuremath{\Phi}=0$ is the only acceptable solution of Eq.~\eqref{eq:case2a}. Equation~\eqref{eq:case2b} takes on the generic form
\begin{equation}
A''_j=\beta\,A_j\exp\!\left[-\alpha\left(A_y^2+A_z^2\right)\right]
\end{equation}
with $\alpha,\beta>0$ and $j\in\{y,z\}$. Then either $A''_y/A_y=A''_z/A_z$ or one of the components $A_y$ or $A_z$ is zero. Consider that $A_z=0$. Then
\begin{equation}
A''_y=\beta\,A_y\exp\!\left[-\alpha A_y^2\right],
\end{equation}
which integrates once to provide
\begin{equation}\label{eq:tmp1}
\left(A'_y\right)^2=\frac{\beta}{\alpha}\left[\ensuremath{\text{const}}-\exp\!\left(-\alpha A_y^2\right)\right].
\end{equation}
Let $A_y$ have an extremum on $A_y=A_\star$. Then write Eq.~\eqref{eq:tmp1} in the form
\begin{equation}\label{eq:Ayp}
\left(A'_y\right)^2=B_0^2\left\{1-\exp\!\left[-\alpha\left(A_y^2-A_\star^2\right)\right]\right\}
\end{equation}
with
\begin{equation*}
\exp\!\left(-\alpha A_\star^2\right)=\alpha B_0^2/\beta
\end{equation*}
so that $A'_y=0$ on $A_y=A_\star$. Eq.~\eqref{eq:Ayp} shows that $A'_y$ has the structure given in Fig.~\ref{ab:f1} with the two solutions only just touching on $A'_y=0$ but not crossing because
\begin{equation*}
A''_y\gtrless0 \text{\;on\;} A_y=A_\star,\qquad \text{for\;} A_\star\gtrless0,
\end{equation*}
representing a background magnetic field $B_0$ with a soliton pulse superposed. The absolute value, $\abs{A'_y}$, is illustrated in Fig.~\ref{ab:f2}. Although Eq.~\eqref{eq:Ayp} is not solvable analytically, the structure of the solution is readily seen by considering its turning points and roots.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{isononl2}
\caption{The behavior of the absolute value of $A'_y$ as a function of the normalized variable, $\zeta$.}
\label{ab:f2}
\end{figure}
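The structure just described is readily reproduced numerically. The sketch below (Python/\texttt{scipy}; the values $\alpha=1$, $B_0=1$, $A_\star=2$ are arbitrary illustrative choices, not fitted to the figures) integrates the positive branch of Eq.~\eqref{eq:Ayp} upward from the turning point $A_y=A_\star$:

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha, B0, Astar = 1.0, 1.0, 2.0  # arbitrary illustrative values

def dAy(zeta, A):
    # positive branch of Eq. (eq:Ayp):
    # (A_y')^2 = B0^2 * (1 - exp(-alpha*(A_y^2 - Astar^2)))
    arg = 1.0 - np.exp(-alpha*(A[0]**2 - Astar**2))
    return [B0*np.sqrt(max(arg, 0.0))]

# launch just above the turning point, where A_y' = 0
sol = solve_ivp(dAy, [0.0, 10.0], [Astar*(1.0 + 1e-4)],
                rtol=1e-8, atol=1e-10)

slope_end = dAy(0.0, [sol.y[0, -1]])[0]  # tends to B0 far from the pulse
```

Far from the turning point the exponential becomes negligible and $A'_y\to B_0$, which is the uniform background-field branch shown in Fig.~\ref{ab:f1}.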
The point, then, is that exact non-linear solutions to the coupled Maxwell-Vlasov equations can be arranged to provide a rich variety of structural behaviors depending on the specific choices made for the distribution functions for the particles. In order that there be an electrostatic soliton component in Eqs.~\eqref{eq:Ph1} and \eqref{eq:A1} one requires that the precise symmetry invoked between electrons and positrons be broken. Such a break can be arranged in one of two ways: either one chooses different distribution functions for the positively and negatively charged components or one chooses different particle masses for the positive ions with respect to the negatively charged particles.
While it was shown that one class of solution is that where one of the components $A_j$ (with $j\in\{y,z\}$) is zero, that does not rule out other classes of solution where both components are non-zero. In addition one cannot provide a general statement on the field direction relative to the background field because each choice of distribution function must be investigated for its effects on the non-linear system. It also by no means follows that a plasma with no background field cannot possess solitons of electrostatic or electromagnetic or coupled character---again, each distribution function chosen for each species provides its own properties---which was part of the aim of the special illustrations chosen.
For example, if one were to treat finite-mass electrons and infinitely massive ions then, with the electron distribution function from Eq.~\eqref{eq:fs2}, one obtains
\begin{subequations}
\begin{equation}\label{eq:massPh}
\fl
\ensuremath{\Phi}''=4\pi^{5/2}f_{e,0}\,\frac{ew_e^{3/2}\varpi_e^2}{w_e+\varpi_e^2}\,\exp\!\left[\frac{2em}{w_e}\,\ensuremath{\Phi}-\left(\frac{e}{c}\right)^2\frac{A_y^2+A_z^2}{w_e+\varpi_e^2}\right]-4\pi en,
\end{equation}
where $n$ is the ion number density and
\begin{equation}\label{eq:massA}
\fl
A''_j=4\pi^{5/2}f_{e,0}\left(\frac{e}{c}\right)^2\frac{w_e^2}{\left(w_e+\varpi_e^2\right)^2}\,A_j\,\exp\!\left[\frac{2em}{w_e}\,\ensuremath{\Phi}-\left(\frac{e}{c}\right)^2\frac{A_y^2+A_z^2}{w_e+\varpi_e^2}\right]
\end{equation}
\end{subequations}
with $j\in\{y,z\}$.
Note that one class of solutions is that with $A_j=0$; in that case Eq.~\eqref{eq:massPh} has the structural form
\begin{equation}\label{eq:massPs}
\ensuremath{\Psi}''=b\,\exp(\ensuremath{\Psi})-gn,\qquad b>0,\;g>0.
\end{equation}
Eq.~\eqref{eq:massPs} integrates once immediately to yield
\begin{equation}\label{eq:yld}
\left(\ensuremath{\Psi}'\right)^2=2b\E\ensuremath{\Psi}-2gn\ensuremath{\Psi}+\ensuremath{\Lambda},
\end{equation}
where \ensuremath{\Lambda}\ is a constant. Consider now in detail the structure of Eq.~\eqref{eq:yld}. Write $\nu=\ensuremath{\Psi}-\ensuremath{\Lambda}/(2gn)$, so that one has
\begin{equation}
\left(\nu'\right)^2=B\E\nu-2gn\nu
\end{equation}
with $B=2b\exp[\ensuremath{\Lambda}/(2gn)]$. Further simplification occurs when one sets $x=(2gn)^{-1/2}\zeta$ and $B/(2gn)=R^2$, so that
\begin{equation}\label{eq:nuzeta}
\left(\dd[\nu]\zeta\right)^2=R^2\E\nu-\nu.
\end{equation}
Note that \ensuremath{\Lambda}, the constant of integration, appears through the relationship between $B$ and $R$. Only for specific ranges of \ensuremath{\Lambda}\ can one expect to obtain a soliton behavior as we now show. On $\nu=0$, Eq.~\eqref{eq:nuzeta} yields
\begin{equation*}
\dd[\nu]\zeta=\pm R
\end{equation*}
so that two branching structures exist. If a soliton is to exist then it is necessary that $\ensuremath{\mathrm{d}}\nu/\ensuremath{\mathrm{d}}\zeta=0$ somewhere, implying a value $\nu_\star$ such that
\begin{equation}\label{eq:nustar}
R^2\E{\nu_\star}=\nu_\star.
\end{equation}
Solutions to Eq.~\eqref{eq:nustar} exist only when $R^2<\E{-1}$ and, under that condition, there are two positive values of $\nu_\star$, namely, $\nu_{\text L}$ and $\nu_{\text U}$, where $\nu_{\text U}>\nu_{\text L}$ without loss of generality. Now Eq.~\eqref{eq:nuzeta} requires $(\ensuremath{\mathrm{d}}\nu/\ensuremath{\mathrm{d}}\zeta)^2\geqslant0$ everywhere. One therefore has to distinguish three cases:
\begin{itemize}
\item In $\nu_{\text L}<\nu<\nu_{\text U}$ one has $R^2\E\nu<\nu$ so that Eq.~\eqref{eq:nuzeta} cannot be satisfied. Hence the potential regimes for solitons are either $\nu>\nu_{\text U}$ or $\nu<\nu_{\text L}$.
\item For $\nu>\nu_{\text U}$ one has $(\ensuremath{\mathrm{d}}\nu/\ensuremath{\mathrm{d}}\zeta)^2>0$ everywhere, so that $\nu$ grows without bound and no bounded structure is possible.
\item Equally for $\nu<\nu_{\text L}$ one has the same argument with the added problem that $(\ensuremath{\mathrm{d}}\nu/\ensuremath{\mathrm{d}}\zeta)^2$ grows indefinitely in $\nu<0$. Thus no electrostatic soliton is possible.
\end{itemize}
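For a concrete illustration, the two roots of Eq.~\eqref{eq:nustar} are easily located numerically (the value $R^2=0.2<\E{-1}\approx0.368$ is an arbitrary choice):

```python
import numpy as np
from scipy.optimize import brentq

R2 = 0.2  # arbitrary illustrative value below 1/e ~ 0.368

def g(nu):
    # right-hand side of Eq. (eq:nuzeta): (dnu/dzeta)^2 = R^2 e^nu - nu
    return R2*np.exp(nu) - nu

# g changes sign on (0, 1) and on (1, 10), bracketing nu_L and nu_U
nu_L = brentq(g, 0.0, 1.0)
nu_U = brentq(g, 1.0, 10.0)

# between the roots (dnu/dzeta)^2 would be negative: the forbidden band
forbidden = g(0.5*(nu_L + nu_U))
```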
The point of this illustration is that the non-linear equation describing potential solitons is completely changed from that for an electron-positron plasma. The electron-positron plasma illustrated earlier has no turning points for the electrostatic potential taken on its own and so there are no soliton structures. In the case of the ion-electron plasma the corresponding field equation admits of two turning points and the analysis of the field equation in each domain has to be undertaken. It is this fact that the example has been used to illustrate. Every choice of distribution function must be investigated anew for the turning points and the corresponding analysis, i.\,e., the above case-by-case investigation, has to be undertaken in each regime.
If one chooses \emph{not} to have one of the components $A_j=0$ (with $j\in\{y,z\}$) then the electrostatic and electromagnetic components mix, representing a hybrid soliton. The point to make is that it is the symmetry breaking (for different distribution functions or the different masses of the charged particle species) that permits different types of soliton patterns, and that has been the purpose of the examples given here.
\section{The Relativistic Behavior}\label{rel}
In the situation where one cannot set $\gamma$ to unity, a considerably more complex behavior arises but, at the same time, one has the transverse $(v_y,v_z)$ particle velocities limited to $\pm c$ unlike in the non-relativistic situation where the corresponding limits are set as $\pm\infty$. This fundamental difference in behavior alters radically the soliton character.
Here one has
\begin{equation}
\pd[f]x+\frac{em\gamma}{p_x}\,\pd[f]{p_x}\left[-\ensuremath{\Phi}'-\frac{1}{2me\gamma}{\left(\f\varpi\ensuremath{_\perp}-\frac{e}{c}\,\f A\ensuremath{_\perp}\right)^2}'\right]=0,
\end{equation}
where
\begin{equation}
\gamma=\sqrt{1+\frac{p_x^2}{(mc)^2}+\frac{1}{(mc)^2}\left(\f\varpi\ensuremath{_\perp}-\frac{e}{c}\,\f A\ensuremath{_\perp}\right)^2}
\end{equation}
so that
\begin{equation}
\pd[f]x+\frac{e}{mc^2}\,\pd[f]\gamma\left[-\ensuremath{\Phi}'+\frac{1}{2me\gamma}{\left(\f\varpi\ensuremath{_\perp}-\frac{e}{c}\,\f A\ensuremath{_\perp}\right)^2}'\right]=0.
\end{equation}
The characteristic is given through
\begin{equation}\label{eq:char}
\dd[\gamma]x=-\frac{e}{mc^2}\left[\ensuremath{\Phi}'-\frac{1}{2me\gamma}{\left(\f\varpi\ensuremath{_\perp}-\frac{e}{c}\,\f A\ensuremath{_\perp}\right)^2}'\right],
\end{equation}
which has the basic structure
\begin{equation}\label{eq:bstruct}
\dd[y]x=-a'(x)-\frac{1}{y}\,b'(x).
\end{equation}
The relativistic form of the characteristic equation is not analytically tractable in general. However, in the case of interest where the influence of an electrostatic (electromagnetic) potential on an electromagnetic (electrostatic) soliton is sought one has either $\abs{a'}\ll\abs{b'}/\gamma$ or $\abs{a'}\gg\abs{b'}/\gamma$. In these two situations it is possible to derive approximate characteristics. Also, in the weakly relativistic limit where one can write $\gamma=1+\epsilon$ with only first order in $\epsilon$ terms being held in Eq.~\eqref{eq:char}, it is possible to perform a complete analytic investigation, thereby illuminating the transition between non-relativistic and relativistic limits. Consider this case first.
\subsection{Weakly Relativistic Behavior}\label{wrel}
Consider Eq.~\eqref{eq:bstruct} with $y=1+\epsilon$ and $\epsilon$ considered small, i.\,e., $\abs\epsilon\ll1$. Then to first order in $\epsilon$ one has
\begin{equation}
\dd[\epsilon]x=-a'(x)-b'(x)+b'(x)\epsilon.
\end{equation}
Then
\begin{equation}
\dd x(\E{-b}\epsilon)=-\E{-b(x)}a'(x)+\dd x(\E{-b}),
\end{equation}
with the solution
\begin{equation}\label{eq:eps_char}
\epsilon=\epsilon_0\E{b(x)}+1-\E{b(x)}\int^x\ensuremath{\mathrm{d}} x'\;a'(x')\E{-b(x')},
\end{equation}
where $\epsilon_0$ is a constant: the \emph{characteristic constant}. Here, for $a(x)$ one has
\begin{subequations}
\begin{eqnarray}
&&a'(x)=\frac{e}{mc^2}\,\ensuremath{\Phi}'(x)\nonumber\\
\Longrightarrow\quad &&a(x)=\frac{e}{mc^2}\,\ensuremath{\Phi}(x)
\end{eqnarray}
and for $b(x)$, likewise,
\begin{eqnarray}
&&b'(x)=-\frac{1}{2(mc)^2}{\left(\f\varpi\ensuremath{_\perp}-\frac{e}{c}\,\f A\ensuremath{_\perp}\right)^2}'\nonumber\\
\Longrightarrow\quad &&b(x)=-\frac{1}{2(mc)^2}\left(\f\varpi\ensuremath{_\perp}-\frac{e}{c}\,\f A\ensuremath{_\perp}\right)^2.
\end{eqnarray}
\end{subequations}
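That Eq.~\eqref{eq:eps_char} indeed solves the linearized characteristic equation $\epsilon'=-a'-b'+b'\epsilon$ can be verified by direct differentiation; a \texttt{sympy} sketch (the lower integration limit is arbitrary and merely shifts $\epsilon_0$):

```python
import sympy as sp

x, xp, eps0 = sp.symbols('x xp epsilon_0')
a, b = sp.Function('a'), sp.Function('b')

# Eq. (eq:eps_char), with lower limit 0 chosen for definiteness
I = sp.Integral(sp.diff(a(xp), xp)*sp.exp(-b(xp)), (xp, 0, x))
eps = eps0*sp.exp(b(x)) + 1 - sp.exp(b(x))*I

# linearized characteristic equation: eps' = -a' - b' + b'*eps
residual = sp.diff(eps, x) - (-sp.diff(a(x), x) - sp.diff(b(x), x)
                              + sp.diff(b(x), x)*eps)
print(sp.simplify(residual))  # -> 0
```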
Then, by re-arranging the equation for $\gamma$, i.\,e.,
\begin{equation}
\gamma\equiv1+\epsilon\equiv1+\frac{1}{2(mc)^2}\left[p_x^2+\left(\f\varpi\ensuremath{_\perp}-\frac{e}{c}\,\f A\ensuremath{_\perp}\right)^2\right]
\end{equation}
an expression is provided for $p_x$ in terms of the characteristics $\epsilon_0$ and $\varpi_j$.
The distribution function $f=f(\epsilon_0,\varpi_y,\varpi_z)$ is an arbitrary function of its arguments. Now if $f(\epsilon_0,\varpi_y,\varpi_z)$ were to be taken as a function of $\epsilon_0$ only and not of $\varpi_j$ separately then, because $\epsilon_0$ depends on the transverse variables only through $(\f\varpi\ensuremath{_\perp}-e\f A\ensuremath{_\perp}/c)^2$, as does $\gamma$, it follows that transverse currents
\begin{equation}
\f J\ensuremath{_\perp}\propto\int\frac{\ensuremath{\mathrm{d}}^3p}{\gamma}\,f(\epsilon_0)\left(\f \varpi\ensuremath{_\perp}-\frac{e}{c}\,\f A\ensuremath{_\perp}\right)
\end{equation}
are identically zero because the integrand is odd in $\f\varpi\ensuremath{_\perp}-e\f A\ensuremath{_\perp}/c$. Hence it is necessary and sufficient that $f(\epsilon_0,\varpi_y,\varpi_z)$ be a function of $\varpi_y$ and/or $\varpi_z$, in addition to $\epsilon_0$, in order to have spontaneous symmetry breaking and so a transverse current.
Clearly, just as for the non-relativistic situation, different choices made for $f(\epsilon_0,\varpi_y,\varpi_z)$ determine the allowable soliton spatial structures. Even with the same choice of functional behavior in distribution functions for the non-relativistic and weakly relativistic situations one has different soliton structures. Consider, for example, the case of infinitely massive ions and mobile electrons with the electron distribution function being taken as
\begin{equation}
\fl
f_e(\epsilon_0,\varpi_y,\varpi_z)=f_0\exp\!\left(-\frac{\epsilon_0}{\epsilon_\star}\right)\exp\!\left(-\frac{\varpi_y^2+\varpi_z^2}{\varpi_0^2}\right),\qquad f_0=\ensuremath{\text{const}}.
\end{equation}
Now in the weakly relativistic limit one can write the characteristic constant, $\epsilon_0$, as
\begin{equation}\label{eq:eps0}
\fl
\epsilon_0=\E{-b(x)}\left\{-1+\E{b(x)}\int^x\ensuremath{\mathrm{d}} x'\;a'(x')\E{-b(x')}+\frac{1}{2(mc)^2}\left[p_x^2+\left(\f\varpi\ensuremath{_\perp}-\frac{e}{c}\,\f A\ensuremath{_\perp}\right)^2\right]\right\},
\end{equation}
thereby expressing $p_x$ in terms of the characteristics, i.\,e., $\epsilon_0$ as defined in Eq.~\eqref{eq:eps_char}.
Now consider the transverse current integral
\begin{subequations}
\begin{eqnarray}
\f J\ensuremath{_\perp}&=&e\int\ensuremath{\mathrm{d}}^3p\;\f v\ensuremath{_\perp} f(\epsilon_0,\varpi_y,\varpi_z)\\
&\equiv&\frac{e}{m}\int\frac{\ensuremath{\mathrm{d}}^3p}{\gamma}\left(\f\varpi\ensuremath{_\perp}-\frac{e}{c}\,\f A\ensuremath{_\perp}\right)f(\epsilon_0,\varpi_y,\varpi_z).
\end{eqnarray}
\end{subequations}
Using $1/\gamma=1/(1+\epsilon)\approx1-\epsilon\approx\exp(-\epsilon)$, one can then write, in the weakly relativistic limit,
\begin{eqnarray}
\fl
\f J\ensuremath{_\perp}&=&\frac{e}{m}\int\ensuremath{\mathrm{d}} p_x\,\ensuremath{\mathrm{d}}\varpi_y\,\ensuremath{\mathrm{d}}\varpi_z\,\left(\f\varpi\ensuremath{_\perp}-\frac{e}{c}\,\f A\ensuremath{_\perp}\right)f_0\nonumber\\
\fl
&\times&\exp\!\left(-\frac{\f\varpi\ensuremath{_\perp}^2}{\varpi_0^2}\right)\exp\!\left\{-\frac{\epsilon_0}{\epsilon_\star}-\frac{1}{2(mc)^2}\left[p_x^2+\left(\f\varpi\ensuremath{_\perp}-\frac{e}{c}\,\f A\ensuremath{_\perp}\right)^2\right]\right\}.
\end{eqnarray}
With $\epsilon_0$ being expressed in terms of $p_x^2$ and $(\f\varpi\ensuremath{_\perp}-e\f A\ensuremath{_\perp}/c)^2$ through Eq.~\eqref{eq:eps0} the integral over $p_x$ can be performed immediately yielding
\begin{eqnarray}
\fl
\f J\ensuremath{_\perp}&=&\frac{e}{c}\sqrt{2\pi}\left(1+\frac{\E{-b(x)}}{\epsilon_\star}\right)^{1/2}\exp\!\left[-\frac{\E{-b(x)}}{\epsilon_\star}\left(-1+\E{b(x)}\int^x\ensuremath{\mathrm{d}} x'\;a'(x')\E{-b(x')}\right)\right]f_0\nonumber\\
\fl
&\times&\int\ensuremath{\mathrm{d}}\varpi_y\,\ensuremath{\mathrm{d}}\varpi_z\,\left(\f\varpi\ensuremath{_\perp}-\frac{e}{c}\,\f A\ensuremath{_\perp}\right)\exp\!\left(-\frac{\f\varpi\ensuremath{_\perp}^2}{\varpi_0^2}\right)\nonumber\\
\fl
&\times&\exp\!\left[-\frac{1}{2(mc)^2}\left(\f\varpi\ensuremath{_\perp}-\frac{e}{c}\,\f A\ensuremath{_\perp}\right)^2\left(1+\frac{\E{-b(x)}}{\epsilon_\star}\right)\right].
\end{eqnarray}
The double integral over $\varpi_y$ and $\varpi_z$ can be done in closed form so that one can write
\begin{eqnarray}
\fl
\f J\ensuremath{_\perp}&=&\sqrt{2\pi^3}\,\frac{\varpi_0^2}{\left(1+\beta\varpi_0^2\right)^2}\,\exp\!\left\{-\frac{\E{-b(x)}}{\epsilon_\star}\left[-1+\E{b(x)}\int^x\ensuremath{\mathrm{d}} x'\;a'(x')\E{-b(x')}\right]\right\}f_0\nonumber\\
\fl
&\times&\exp\!\left[-\frac{e^2}{c^2}\,\frac{\beta}{1+\beta\varpi_0^2}\left(A_y^2+A_z^2\right)\right]\f A\ensuremath{_\perp},
\end{eqnarray}
where $\beta=\left(1+\E{-b(x)}/\epsilon_\star\right)/\left[2(mc)^2\right]$ denotes the coefficient of $(\f\varpi\ensuremath{_\perp}-e\f A\ensuremath{_\perp}/c)^2$ in the exponent of the preceding integrand.
Even in this seemingly simple extension of the non-relativistic results to the weakly relativistic situation one is challenged by an exceedingly non-linear set of equations for the field components. How many solutions the equations admit, how the electrostatic and electromagnetic components are coupled, and how the structure of the solution(s) depends on the various parameters remain analytically intractable questions that are likely best addressed by numerical procedures.
The point being illustrated here is that, despite the characteristics being available in closed analytic form, the non-linear complexities of the current distribution as functions of the electrostatic and electromagnetic potentials are less than inviting. And for each choice of distribution function similarly complex results obtain. It would seem that only numerical procedures can help.
\subsection{Fully relativistic behavior}\label{frel}
Because the characteristic Eq.~\eqref{eq:char} is not solvable analytically in general, two limiting cases will be considered in what follows.
\subsubsection{The case $\abs{a'}\ll\abs{b'}/\gamma$}\label{case1}
Here write the characteristic equation as
\begin{equation}
\gamma\,\dd[\gamma]x+b'=-a'\gamma.
\end{equation}
Then
\begin{equation}
\dd x\left(\frac{1}{2}\,\gamma^2+b\right)=-a'\gamma
\end{equation}
so that
\begin{equation}
\frac{1}{2}\,\gamma^2+b=-\int^x\ensuremath{\mathrm{d}}\xi\;a'\gamma+\lambda=-a\gamma+\int^x\ensuremath{\mathrm{d}}\xi\;a\gamma'+\lambda.
\end{equation}
In the integral on the right-hand side use the lowest-order relation $\gamma^2/2+b\approx\lambda$ to set
\begin{equation}
\gamma'=-\frac{b'}{\sqrt{2\left(\lambda-b\right)}}
\end{equation}
to lowest order. Then
\begin{equation}
\frac{1}{2}\,\gamma^2+b+a\gamma+\int\ensuremath{\mathrm{d}}\xi\;\frac{ab'}{\sqrt{2\left(\lambda-b\right)}}=\lambda
\end{equation}
and $f$ is a function solely of $\lambda$, $\varpi_y$, and $\varpi_z$.
\subsubsection{The case $\abs{a'}\gg\abs{b'}/\gamma$}\label{case2}
Here write
\begin{equation}
\dd[\gamma]x+a'=-\frac{b'}{\gamma}
\end{equation}
so that
\begin{equation}
\dd x\left(\gamma+a\right)=-\frac{b'}{\gamma}
\end{equation}
with, then,
\begin{equation}\label{eq:gpa}
\gamma+a=-\int^x\ensuremath{\mathrm{d}}\xi\;\frac{b'}{\gamma}+\lambda.
\end{equation}
In the integration on the right-hand side set $\gamma=\lambda-a$ (i.\,e., solve for $\gamma$ by neglecting the integral itself) to obtain
\begin{equation}
\gamma+a+\int^x\ensuremath{\mathrm{d}}\xi\;\frac{b'}{\lambda-a}=\lambda
\end{equation}
to lowest order, which defines the characteristic to order $\abs{b'/(a'\gamma)}$ as required. Then $f$ is a function solely of $\lambda$, $\varpi_y$, $\varpi_z$, and---in contrast to Sec.~\ref{case1}---of $a$. Because $\abs{a'}$ is large, a direct dependence on $a$ is the most useful choice in this case.\footnote{Because $f$ is an arbitrary (but positive) function of its arguments one can choose many different functional forms to illuminate particular points, which is what is done throughout the paper.}
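The accuracy of this approximate characteristic is easily tested numerically. In the sketch below (all profiles and parameter values are arbitrary, chosen so that $\abs{a'}\gg\abs{b'}/\gamma$ holds), the combination $\gamma+a+\int^x\ensuremath{\mathrm{d}}\xi\;b'/(\lambda-a)$ drifts far less along the exact orbit than does $\gamma+a$ alone:

```python
import numpy as np
from scipy.integrate import solve_ivp, cumulative_trapezoid

# arbitrary illustrative profiles with |a'| >> |b'|/gamma
a  = lambda x: 5.0*np.tanh(x)
ap = lambda x: 5.0/np.cosh(x)**2   # a'(x)
bp = lambda x: 0.1/np.cosh(x)**2   # b'(x)

g0, x0, x1 = 15.0, -5.0, 5.0       # gamma(x0) = 15

# exact characteristic in the form d(gamma)/dx = -a' - b'/gamma
sol = solve_ivp(lambda x, y: [-ap(x) - bp(x)/y[0]], [x0, x1], [g0],
                dense_output=True, rtol=1e-10, atol=1e-12)

xs = np.linspace(x0, x1, 400)
gam = sol.sol(xs)[0]
lam0 = g0 + a(x0)                  # zeroth-order value of lambda

corr = cumulative_trapezoid(bp(xs)/(lam0 - a(xs)), xs, initial=0.0)
K = gam + a(xs) + corr             # approximate invariant of this subsection

drift_K = np.ptp(K)
drift_ga = np.ptp(gam + a(xs))     # drift without the correction term
```

In this example `drift_K` is smaller than `drift_ga` by well over an order of magnitude, confirming the stated accuracy of the approximation.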
Note that the characteristic Eq.~\eqref{eq:char} is symmetric in $(\f\varpi\ensuremath{_\perp}-e\f A\ensuremath{_\perp}/c)^2$, so that if the distribution function is \emph{not} explicitly dependent on $\varpi_y,\varpi_z$ but only on the constant arising from the characteristic Eq.~\eqref{eq:char}, then $\f A\ensuremath{_\perp}=0$ by symmetry arguments. Thus, just as in the non-relativistic limit, only when the distribution function depends \emph{explicitly} on $\varpi_y$ and/or $\varpi_z$, in addition to the constant arising from the characteristic equation, is there the possibility of a self-consistent electromagnetic soliton, because only then are the current integrals asymmetric with respect to $\f\varpi\ensuremath{_\perp}-e\f A\ensuremath{_\perp}/c$. However, the functional form of such solitons (and also the modifications brought about by coupling of the electrostatic and electromagnetic fields) is considerably different from that of their non-relativistic counterparts owing to the relativistic limitation that particle speeds must be less than $c$.
To illustrate this basic point consider again an electron-positron plasma in which the two distribution functions are identical. To make the comparison as close as possible between the relativistic and non-relativistic situations consider the electron and positron distribution functions [as functions of $\varpi_y,\varpi_z$, and $\lambda$, the characteristic from Eq.~\eqref{eq:gpa}] to be given by
\begin{equation}
f=f_0\,\exp\!\left[-\frac{\lambda}{\lambda_0}-\frac{\varpi_y^2+\varpi_z^2}{\varpi_0^2}\right]
\end{equation}
with $f_0,\lambda_0$, and $\varpi_0$ constants.
Then consider the current integral
\begin{equation}
J_y=\int\frac{\ensuremath{\mathrm{d}}^3p}{\gamma}\left(\varpi_y-\frac{e}{c}\,A_y\right)f.
\end{equation}
While the general relativistic equation is solvable analytically only in the non-relativistic and weakly relativistic situations, it has the property that it depends only on the combination $(\f\varpi\ensuremath{_\perp}-e\f A\ensuremath{_\perp}/c)^2$. Thus the characteristic constant for the equation (say, \ensuremath{\Lambda}) is also a function solely of $(\f\varpi\ensuremath{_\perp}-e\f A\ensuremath{_\perp}/c)^2$.
Hence, for distribution functions of the form $f(\ensuremath{\Lambda},\varpi_y,\varpi_z)$ it follows that if $f$ is chosen to be a function solely of \ensuremath{\Lambda}\ and not of $\varpi_y,\varpi_z$ then all transverse current components are precisely zero. Under such conditions there are no electromagnetic soliton solutions. This aspect has already been seen in the non-relativistic (Sec.~\ref{nonrel}) and weakly relativistic (Sec.~\ref{wrel}) cases, and is now of general validity. Thus $f$ must be a function of $\varpi_y$ and/or $\varpi_z$ as well as of \ensuremath{\Lambda}\ in order to obtain an electromagnetic soliton. Despite the fact that one cannot solve the characteristic equation in closed form for a fully relativistic plasma, as shown in \ref{r_transf}, one can obtain accurate approximate solutions when the electrostatic field is either small or large compared to the Lorentz force per unit charge. In both cases it is then possible to express $\gamma$ in terms of the characteristic constant, as also detailed in \ref{r_transf}.
The existence and structure of any solitons (electromagnetic and/or electrostatic) then depends on the choices made for the particle distribution functions, as also exhibited in detail for the non-relativistic and weakly relativistic solutions.
\section{Summary and Discussion}\label{summ}
While the linear Weibel instability has been thoroughly investigated, regrettably the same cannot be said of the non-linear behavior, including the coupling of electromagnetic and electrostatic effects. The exploration of the non-linear aspects given here has uncovered a variety of effects that are germane to future investigation for both non-relativistic and relativistic plasmas. While in both the non-relativistic and the relativistic cases the classical constants of motion are the total energy and the generalized momentum, the problem is to obtain a \emph{closed form} expression for $\gamma$ (or, in the notation used here, $p_x$) in terms of these constants. Because of the coupled effects of the magnetic and electrostatic fields such appears not to be an easily tractable problem, as shown in the text.
It has been shown previously that Weibel isolated modes (which, subsequently, can develop soliton modes) are retained in analytical calculations even if one allows for ``classic'' extended unstable wavenumber ranges. Because such structures develop only in asymmetric plasmas, they serve as an indicator for the asymmetry of the particle distribution function. Therefore, because precisely symmetric plasmas are difficult to achieve in nature, isolated Weibel modes will be ubiquitous, as shown recently \citep{tau07:wea}. Furthermore, even if the unstable wave modes are allowed to have a ``weak'' propagating component, the isolated Weibel modes are still generated. Hence, soliton structures should \emph{always} be taken into consideration when investigating: (i) instabilities in (relativistic) plasmas in general; (ii) non-linear behavior of the resulting unstable modes; (iii) particle radiation patterns due to scattering in such modes.
Perhaps the most significant theme is that the occurrence of a non-linear Weibel-like soliton requires that the distribution functions be dependent on all three of the characteristic constants. Without such a dependence (and in particular with no explicit dependence on characteristic constants perpendicular to the spatial variation direction of the soliton) then there is no electromagnetic current and so no soliton. This point was demonstrated for non-relativistic, weakly relativistic, and fully relativistic plasma situations.
Even then, the functional behavior of the distribution functions on the three characteristic variables was shown, by explicit examples, to play a fundamental role in determining the structure of the non-linear equations for the coupled electromagnetic and electrostatic fields. Cases were given where no soliton was possible, where solitons existed only for decoupled electromagnetic field with no electrostatic component, and where changes in the distribution functions altered the non-linear field equations so markedly that each situation had to be considered anew. The characteristic constants could be written down in closed form for the non-relativistic and weakly relativistic situations, and the constants could be approximated in the fully relativistic plasma situation for weak electrostatic (electromagnetic) effects on an electromagnetic (electrostatic) field.
Nevertheless the complexity of the resulting non-linear field equations is daunting. Except for simple situations it has so far not proven possible to solve such non-linear equations in either particular cases or the general case for chosen distribution functions. One suspects that only numerical procedures will allow deeper insight into the classes of functional behavior for distribution functions that allow solitons, for the spatial structure of such solitons, and for the relative contributions of the electrostatic and electromagnetic fields to any such solitons. Future work should attempt to investigate the modifications of the radiation pattern due to particle scattering in such soliton structures. In doing so, the question can be explored if and to what extent the radiation spectrum in relativistic outflows deviates from pure synchrotron radiation.
\ack
{\it We thank the anonymous referee for scrutinizing our manuscript ever so thoroughly. This work was partially supported by the Deutsche Forschungsgemeinschaft (DFG) through grant No.\ \mbox{Schl~201/17-1}, Sonderforschungsbereich 591, and by the German Academy of Natural Scientists Leopoldina Fellowship Programme through grant LDPS 2009-14 funded by the Federal Ministry of Education and Research (BMBF).}
\section{Introduction}
\label{Intro}
Observations of paleomagnetic data \cite{generale,core,CK95} have shown that, unlike the solar magnetic field, where the polarity reversals are strictly periodic, geomagnetic measurements of the last $160$ million years present rather sudden and occasional polarity reversals.
The reversal process is normally very rapid with respect to the typical time interval between successive reversals, which may range from $10^4$ up to $10^7$ years \cite{generale,CK95,valet93}.
Recent works on data analysis, experimental dynamos and theoretical modeling have improved the knowledge of the Earth dynamo. However, the main fundamental questions concerning the polarity reversals still remain unanswered \cite{generale,review,dynamo,stefani05}. The nature of the triggers (external or internal to the Earth), the physical mechanisms giving rise to the reversals, and the reason for the long-time variations in the average reversal rate (cf. e.g. \cite{core,yamazaki}) are still open problems.
The sequence of geomagnetic reversals (see the example from the CK95 database \cite{CK95} shown in Fig. \ref{fig1}) seems to result from a stochastic process. The same behaviour is observed for experimental dynamos \cite{bourg} and in numerical simulations \cite{stefani05}. While the experimental dynamo is a recent and remarkable achievement, the numerical approach, namely the direct solution of the Magnetohydrodynamics (MHD) equations (see \cite{review,earth1,earth2}), is still far from being satisfactory for a statistical analysis.
However, reversals are also observed in fields resulting from simplified models, such as few-mode models \cite{rikitake,crossley,turcotte}, models of noise-induced switchings between two metastable states \cite{schmitt,hoyng02,hoyng04}, or mean-field dynamo models with a noise-perturbed $\alpha$ profile \cite{stefani05}.
\begin{figure}
\centerline{\includegraphics[width=10.0 cm]{bars-pdf.eps}}
\caption{Bottom: Polarity of the earth's magnetic field (from today) as in the CK95 record (partial).
The black bars are the normal (present) polarity. Top: the probability density function $P(\Delta t)$ of persistence times $\Delta t$ for CK95 database (statistical errors are shown as vertical bars).}
\label{fig1}
\end{figure}
Recently, it has been shown through a simple statistical analysis that the reversals of the paleomagnetic field are not random \cite{prl,pepi,gafd}, namely the statistics of interevent times ($\Delta t = t_{i+1} - t_i$, where $t_i$ is the time of the $i$-th event of the record) departs from a Poissonian distribution (namely an exponential law $P(\Delta t) = \lambda \exp(- \lambda \Delta t)$, where $\lambda$ represents the reversal occurrence rate \cite{generale,hoyng02,mcfadden}), including a non-stationary Poisson process, in which case a power-law distribution could arise from the superposition of Poisson distributions with time-variable rates $\lambda(t)$, see \cite{constable}. This result shows that geomagnetic reversals are clustered in time, probably because of the presence of memory in the process generating polarity reversals.
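The difference between a stationary Poisson process and a superposition of Poisson processes with time-variable rates is easy to see numerically. The following sketch (with arbitrary, illustrative rate ranges, not the rates of the actual record) shows that the pooled interevent times of the non-stationary process have a heavier tail than a single exponential:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stationary Poisson process: interevent times are exponential with one fixed rate.
pure = rng.exponential(1.0, 100_000)

# Non-stationary process: the rate lambda(t) changes from block to block,
# so the pooled interevent times form a mixture of exponentials.
rates = rng.uniform(0.2, 5.0, 2_000)
mixed = np.concatenate([rng.exponential(1.0 / r, 100) for r in rates])

# The mixture has excess tail weight relative to the pure exponential.
thr = 5.0
print(np.mean(pure > thr), np.mean(mixed > thr))
```

Pooling exponentials over a broad range of rates always produces excess tail weight compared with a single exponential; a power law is the limiting case for suitably broad rate distributions.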
Here we show that experimental dynamo reversals are also characterized by correlations and clustering, suggesting that the reversal process is a universal property of dynamos, which does not need any external triggering.
\section{Local Poisson hypothesis and paleomagnetic data}
\label{LPH}
In this section we will describe the statistical tool used in this work to test, as a zero-th order hypothesis $H_0$, whether the observed sequence is consistent with a \textit{Local Poisson Process}.
The reversal rate profile $\lambda(t)$ being in principle unknown, the test should be independent of it. A method introduced in cosmology \cite{bi} and more recently used for solar flares \cite{boffetta,lepreti}, geomagnetic activity \cite{lepreti04}, random lasers in liquid crystals \cite{sameh}, and stock market analysis \cite{greco} will be used here. Consider the time sequence of reversals as a point-like process, and suppose that each reversal occurs at a discrete time $t_i$. The suitably normalized local time interval $h$ between reversals can be defined by introducing $\delta t_i$ as
\begin{equation}
\delta t_i = \min \{t_{i+1}-t_i;t_i-t_{i-1}\} \; ,
\end{equation}
and $\tau_i$ by
\begin{equation}
\tau_i = \left\{\begin{array}{l}
t_{i-1}-t_{i-2} \;\;\;\;\;\;\;\;\;\;\; \mbox{if } \delta t_i = t_i-t_{i-1} \\
t_{i+2}-t_{i+1} \;\;\;\;\;\;\;\;\;\;\; \mbox{if } \delta t_i = t_{i+1}-t_i
\end{array} \right.
\label{taui}
\end{equation}
$\delta t_i$ and $\tau_i$ are then the two persistence times following or preceding a given reversal at $t_i$. If the local Poisson hypothesis $H_0$ holds, both $\delta t_i$ and $\tau_i$ are independently distributed according to an exponential probability density: $p(\delta t) = 2 \lambda_i \exp(-2 \lambda_i \delta t)$ and $p(\tau)= \lambda_i \exp(-\lambda_i \tau)$ with local rate $\lambda_i$.
The distribution of the variable $h$ defined by
\begin{equation}
h(\delta t_i, \tau_i) = \frac{2 \delta t_i}{2 \delta t_i + \tau_i}
\label{acca}
\end{equation}
will not depend on $\lambda_i$.
For the surviving function of the probability density
\begin{equation}
P(h \geq H) = \int_H^{\infty} P(h) dh = \int_0^{\infty} dx 2\lambda e^{-2\lambda x} \int_0^{g(x,H)} dy \lambda e^{-\lambda y}
\label{cumulativa}
\end{equation}
where $P(h)$ is the probability density function of $h$ and
\[
g(x,H) = 2x \left[\frac{1}{H}-1\right] \; ,
\]
it can be easily shown that, under the hypothesis $H_0$,
\[
P(h \geq H) = 1-H \; ,
\]
that is, $h$ is a stochastic variable uniformly distributed in $h \in [0;1]$.
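The double integral in Eq. (\ref{cumulativa}) can be verified symbolically, e.g. with SymPy (a verification sketch, not part of the original analysis):

```python
import sympy as sp

x, y, lam, H = sp.symbols('x y lambda H', positive=True)
g = 2*x*(1/H - 1)

# Inner integral over tau, then outer integral over delta t.
inner = sp.integrate(lam*sp.exp(-lam*y), (y, 0, g))
surv = sp.integrate(2*lam*sp.exp(-2*lam*x)*inner, (x, 0, sp.oo), conds='none')

print(sp.simplify(surv))  # expected: 1 - H, independent of lambda
```

The local rate $\lambda$ drops out of the result, which is exactly what makes the test usable when $\lambda(t)$ is unknown.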
In a process where $\tau_i$'s are systematically smaller than $2 \delta t_i$'s, clusters are present and the average value of $h$ is greater than $1/2$. On the contrary, when the process is characterized by voids, the average value of $h$ is less than $1/2$. From time series, it is easy to calculate the surviving function $P(h \geq H)$ and the probability density function $P(h)$.
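For illustration, the whole test can be implemented in a few lines. The sketch below (not the original analysis code) applies it to a synthetic stationary Poisson sequence, for which $h$ should be uniform with mean $1/2$:

```python
import numpy as np

def h_values(t):
    """Compute h = 2*dt/(2*dt + tau) for each event with enough neighbours."""
    h = []
    for i in range(2, len(t) - 2):
        d_prev = t[i] - t[i-1]
        d_next = t[i+1] - t[i]
        if d_prev <= d_next:          # delta t_i = t_i - t_{i-1}
            dt, tau = d_prev, t[i-1] - t[i-2]
        else:                         # delta t_i = t_{i+1} - t_i
            dt, tau = d_next, t[i+2] - t[i+1]
        h.append(2.0*dt / (2.0*dt + tau))
    return np.array(h)

# Synthetic stationary Poisson sequence of 20000 events.
rng = np.random.default_rng(0)
t = np.cumsum(rng.exponential(1.0, 20_000))
h = h_values(t)
print(round(h.mean(), 2))  # close to 0.5 for a Poisson process
```

A mean of $h$ significantly above $1/2$ signals clustering; a mean below $1/2$ signals voids.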
The test described above has been recently applied to four different datasets of geomagnetic polarity reversals, including the already mentioned CK95 \cite{prl,pepi,gafd}. The probability density function $P(h)$ is reported in Fig.~\ref{fig2} for the CK95 dataset. A significant deviation from the uniform distribution was observed in all the datasets, due to the presence of clusters.
\begin{figure}
\centerline{\includegraphics[width=10.0 cm]{pcumCK95.eps}}
\caption{Probability densities $P(h)$ of the stochastic variable $h$ and corresponding surviving functions $P(h\geq H)$ for all the empirical datasets. The theoretical probability expected under a Poisson statistics is also shown.}
\label{fig2}
\end{figure}
\section{Experimental dynamo}
\begin{figure}
\centerline{\includegraphics[width=8.5cm]{Fig1_setup.eps}}
\caption{(a) Omega effect: the differential rotation on a von K\'arm\'an flow advects and stretches an externally applied axial field $B_{0z}$ so as to generate a toroidal component $B_\theta$. (b) Positive feed-back: the amplitude of $B_\theta$ is used to drive a power source which generates the current in the external loop. Two Helmholtz coils are set on either end of the cylindrical flow vessel; $B_\theta$ is measured in the mid-plane by a Hall probe connected to a Bell gaussmeter. The measured value is fed into a linear amplifier whose output drives a Kepco current source. In order to explore the role of the turbulent fluctuations, the amplifier has separate channels for the DC and fluctuating parts of the induction.
}
\label{setup}
\end{figure}
The dynamo laboratory model \cite{bourg} mimics an alpha-omega cycle where part of the dynamo cycle is generated by an external feed-back but the flow turbulence is still included and has a leading role. In order to achieve this in a simple laboratory dynamo, we relax the requirement that the current path be fully homogeneous, and we effectively prescribe an alpha mechanism by which a toroidal magnetic field generates an induced poloidal one. However, the omega poloidal to toroidal conversion still results from a fully turbulent process. Our experimental fluid turbulent dynamo is very much inspired by a variation of the solid rotor dynamo proposed by Sir Edward Bullard in the mid 20th century, and described in Fig.~\ref{setup}. Two coaxial disks counter-rotate at a rate $\Omega$. When an axial magnetic field $\mathbf{B}_{0z}$ is externally applied, the flow differential rotation induces a toroidal field $\mathbf{B}_\theta$; this is the omega effect. The value of this field is then used to drive a linear current amplifier in the loop that generates $\mathbf{B}_{0z}$. The poloidal to toroidal conversion is entirely due to the fluid motion, and incorporates all turbulence effects. It has been extensively studied in previous ``open loop'' experiments. When $\mathbf{B}_{0z}$ is externally fixed, one has $B_\theta = k R_m B_{0z}$, where $R_m = R^2\Omega/\lambda$ is the magnetic Reynolds number (with $\lambda$ the magnetic diffusivity of liquid Gallium) and $k$ is a ``geometric'' constant which in our experiment has been measured to be of the order of 0.1. The toroidal to poloidal conversion is then obtained by feeding the axial coils with an electrical current linearly driven by a signal proportional to $B_\theta$, so that $B_{0z} = \alpha G B_\theta$, which reinforces $B_{0z}$, with $G$ an adjustable gain. In such a closed loop setup, one then has $B_z = \alpha G k R_m B_{0z}$, and a self-sustained dynamo is reached when $\Omega > \Omega^c = \lambda / GkR^2$.
Clearly, the adjustable gain of the linear amplifier allows one to adjust the value of $\Omega^c$ to an experimentally accessible range. At this point it should be emphasized that although the feed-back scheme is very similar for the Bullard rotor dynamo and for our fluid experiment, the expected dynamics is much richer because of the strong fluctuations in the turbulent flow, where Reynolds numbers in excess of $10^6$ are reached. Indeed, the von K\'arm\'an flow is known for its complex dynamics, presenting not only small scale turbulent fluctuations but also large scale ones -- for instance, fluctuations up to 114\% of the differential rotation effect have been reported. Compared to the 1963 pioneering experiment of Lowes and Wilkinson with solid rotor motions, the study here fully incorporates fluid turbulence and the associated fluctuations of magnetic induction. The role of these fluctuations, inherent to large Reynolds number flows, remains one of the mysteries of natural dynamos, and of noisy instabilities in a broader framework.
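The threshold behaviour of the feedback loop can be caricatured by a saturating linear iteration (an illustrative sketch with made-up parameter values; here `k_Rm` stands for the product $kR_m$, the saturation value mimics the amplifier limit, and the turbulent fluctuations that actually drive the reversals are deliberately omitted):

```python
def closed_loop(G, k_Rm, B0=1e-3, B_sat=30.0, steps=200):
    """Iterate the feedback loop: omega effect, then saturating amplifier."""
    Bz = B0
    for _ in range(steps):
        B_theta = k_Rm * Bz            # omega effect: axial -> toroidal
        Bz = min(G * B_theta, B_sat)   # amplifier feedback, clipped at B_sat
    return Bz

# Below threshold (G*k_Rm < 1) the seed field decays; above it, the field
# grows until the amplifier saturates.
print(closed_loop(G=0.5, k_Rm=1.0))  # ~0 (decays)
print(closed_loop(G=2.0, k_Rm=1.0))  # 30.0 (saturated)
```

The loop gain $GkR_m$ plays the role of the control parameter, so raising the amplifier gain $G$ lowers the critical rotation rate, as stated above.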
In this experiment the value of the magnetic field at saturation $B_{\rm sat}$ is fixed by the maximum current that can be drawn from the power amplifier driving the coils. We measure $B_{\rm sat} \sim 30$~G, a value such that the Lorentz forces cannot modify the hydrodynamic flow -- since it yields an interaction parameter of the order of $10^{-3}$. The saturation of the instability is therefore driven by the amplifier non-linearities rather than by the back-reaction of Lorentz forces on the dynamical velocity field. As a consequence, the $B_z$ component of the generated magnetic field saturates at the same mean amplitude $B_{\rm sat}$ for all rotation rates ($B_{\rm sat}$ corresponds to the magnetic field generated by the coils when the current source is saturated), while the saturation amplitude of the toroidal field $B_{\theta \rm sat}=k R_m B_{\rm sat}$ increases linearly with $\Omega$.
Another noteworthy observation is that the presence of turbulent fluctuations plays a crucial role in the triggering of the magnetic field reversals. In the experimental results reported here, the current source is driven by an amplifier whose input is $\overline{B}_\theta + g.b'_\theta$, with $\overline{B}_\theta$ the low pass DC component of $B_\theta$ and $b'_\theta$ its AC fluctuating part. This arrangement allows one to study separately the role of slow variations and turbulent fluctuations in the feed-back loop. In the results reported in this article, we have set $g=1.18$. A homopolar dynamo, i.e. without reversals, was obtained for smaller values of $g$ or when the $b'_\theta$ input in the amplifier was replaced by a synthetic Gaussian white noise (even with a high amplitude).
We show here the results of the $h$-test obtained in a realization (serie27) with $\Omega=12$ Hz, and cutoff frequency $f_c=600$ mHz. Similar results were observed with different parameters, and this study is left for a more extended work.
Figure~\ref{fig3} shows the reversals surviving function in the case described here. The behaviour is very similar to the paleomagnetic case, again indicating the presence of clustering and correlations, rather than a random behaviour. This indicates that the mechanism responsible for the clustering is present in both dynamos, suggesting some sort of universality of the process.
\begin{figure}
\centerline{\includegraphics[width=10cm]{pcum27.eps}}
\caption{Probability densities $P(h)$ of the stochastic variable $h$ and corresponding surviving functions $P(h\geq H)$ for the experimental dataset described in the text. The theoretical probability expected under a Poisson statistics is also shown.}
\label{fig3}
\end{figure}
\section{Conclusion}
\label{fine}
In this short paper, the statistical properties of persistence times between geomagnetic reversals have
been investigated. We performed a statistical test which showed that geomagnetic reversals are produced
by an underlying process that is far from being locally Poissonian, even in the non-stationary sense recently conjectured in \cite{constable}.
Thus, the sequence of geomagnetic reversals is characterized by time correlations. As spontaneous reversals of the geodynamo field have been observed in high resolution numerical simulations \cite{earth1,earth2}, the main results contained in this paper seem to indicate that such reversals could be related to the non-linear nature of the turbulent dynamo.
In order to confirm this conjecture, we performed the statistical test mentioned above on recent results from laboratory dynamo.
Our analysis has shown that the departure from Poisson statistics found in the paleomagnetic data, related to the long range
correlations introduced by the chaotic dynamics of the system \cite{pepi,gafd}, is also present in the laboratory dynamo.
Such correlations can be associated with the presence of some degree of memory in the underlying dynamo process \cite{valet,stefani05} which gives rise to clustering of reversals.
\section{Introduction}\indent
The existence of large scale magnetic fields in the universe has led to extensive studies of their behavior in
cosmological models \cite{T1}--\cite{B}. The observation by Marklund, Dunsby and Brodin \cite{M} that gravity
wave perturbations of Friedmann--Lema\^itre--Robertson--Walker cosmological models encountering weak
magnetic test fields can produce electromagnetic waves is of particular significance. This phenomenon has
recently been studied again in cosmology \cite{HOF}. In the present paper we examine it in further detail by
taking it out of the cosmological setting and by replacing the gravitational waves by a single impulsive wave in
the following three illustrative situations: (1) a cylindrical impulsive gravitational wave propagating into a
cylindrically symmetric universe containing an approximately uniform magnetic field (the Bonnor \cite{Bo} universe,
rediscovered by Melvin \cite{Me}), (2) an axially symmetric impulsive gravitational wave propagating into an
axially symmetric universe containing an approximately uniform electric field (the McVittie \cite{McV} universe;
see also \cite{Bon}) and (3) a `spherical' impulsive gravitational wave propagating into a universe with no
gravitational field but with a weak uniform magnetic test field. In each of these three cases the space--time to the
future of the null hypersurface history of the impulsive gravitational wave (the model universe left behind by the
wave) is calculated in a future neighborhood of the null hypersurface, using the Einstein--Maxwell vacuum
field equations. In cases (1) and (3) we find that electromagnetic radiation is generated behind the gravitational
wave. In case (2) no electromagnetic radiation appears after the wave unless a current is established behind the
wave breaking the Maxwell vacuum. In all three cases the presence of the magnetic or electric fields in front
of the gravitational wave modifies the amplitude of the gravitational wave and this modification is explicitly calculated
using the Einstein--Maxwell vacuum field equations. The three cases are described in sections 2, 3 and 4
respectively of this paper followed by a discussion of the main features of our results in section 5. Some useful
calculations pertinent to section 2 are given in appendix A.
\setcounter{equation}{0}
\section{The Cylindrically Symmetric Case}\indent
The cylindrically symmetric line--element can be written in the form \cite{SKMHH}
\begin{equation}\label{2.1}
ds^2=e^{2k-2U}(dt^2-d\rho ^2)-e^{-2U}\rho ^2 d\phi ^2-e^{2U}dz^2\ ,\end{equation}
where, in general, $k$ and $U$ are functions of $\rho$ and $t$. An example of a static model of a
gravitational field having a magnetic field as origin is \cite{Bo}, \cite{Me}
\begin{equation}\label{2.2}
e^{2U}=e^{k}=f^2\ ,\qquad f=1+\frac{1}{4}B^2\rho ^2\ ,\end{equation}
with $B$ a real constant. The corresponding Maxwell field is given by the 2--form
\begin{equation}\label{2.3}
F=B\,f^{-2}\rho\,d\rho\wedge d\phi\ .\end{equation}
Referred to an orthonormal tetrad this is a pure magnetic field with one physical component $Bf^{-2}$
and thus ``is not a uniform field in the classical sense'' \cite{Bo}. For a weak magnetic field, with terms of
order $B^2$ neglected (more correctly, with dimensionless quantities of order $B^2\rho ^2$ neglected),
the magnetic field (2.3) is approximately uniform. We wish to have an impulsive gravitational wave propagating
into this universe. The history of such a wave is a null hypersurface. Respecting the cylindrical symmetry the
simplest such null hypersurfaces in the space--time with line--element (2.1) have equations
\begin{equation}\label{2.4}
u=t-\rho ={\rm constant}\ .\end{equation}
Such a null hypersurface has the potential to be the history of a cylindrical wave. Changing to $u$ as a coordinate in
place of $t$ according to (2.4) the line--element (2.1) reads
\begin{equation}\label{2.5}
ds^2=e^{2k-2U}du\,(du+2\,d\rho )-e^{-2U}\rho ^2 d\phi ^2-e^{2U}dz^2\ ,\end{equation}
with $k, U$ functions now of $\rho$ and $u$ in general but given by (2.2) for the magnetic universe above.
To construct a space--time model of a cylindrical impulsive gravitational wave propagating into the magnetic universe,
with history $u=0$ (say), and leaving behind a cylindrically symmetric Einstein--Maxwell vacuum we proceed as follows:
We use coordinates labelled $x^\mu =(u, \rho , \phi , z)$ for $\mu =1, 2, 3, 4$. The null hypersurface $\Sigma (u=0)$ divides
space--time into two halves $M_+(u>0)$ and $M_-(u<0)$. We take $M_-$ to be to the past of $\Sigma$ with line--element
(2.5) with $U, k$ given by (2.2) and $M_+$ to be to the future of $\Sigma$ with line--element of the form (2.5) and with the
as yet unknown functions $U, k$ denoted now by $U_+$ and $k_+$. We assume that $\Sigma$ is singular,
which means that the metric tensor of $M_-\cup M_+$ is only $C^0$ across $\Sigma$ and thus physically $\Sigma$ is in
general the history of a cylindrically symmetric null shell and/or impulsive gravitational wave (see \cite{BH} for a review
of singular null hypersurfaces in general relativity). We seek to find the conditions on the functions $U, k, U_+, k_+$ so that
$u=0$ is the history of an impulsive gravitational wave and not a null shell. The system of coordinates we are using is
common to the two sides of $\Sigma$. Since the metric tensor is $C^0$ the induced metrics on $\Sigma$ from its embedding
in $M_+$ and in $M_-$ must be identical and thus we shall have
\begin{equation}\label{2.6}
U_+(u=0, \rho )=\log f\ , \end{equation}
with $f$ given by (2.2). The subset of coordinates $\xi ^a=(\rho , \phi , z)$ with $a=2, 3, 4$, will be taken as intrinsic
coordinates on $\Sigma$. Here $\rho$ is a parameter running along the generators of $\Sigma$ while $\theta ^A=(\phi , z)$ with
$A=3, 4$ label the generators. We denote by $e_{(a)}=\partial /\partial\xi ^a$ the tangential basis vectors. Their scalar products
give the induced metric tensor $g_{ab}$ which is singular since $\Sigma$ is null. The line--element (2.5) restricted to $\Sigma$
reads
\begin{equation}\label{2.7}
ds^2|_{\Sigma}=g_{ab}d\xi ^a\,d\xi ^b=e_{(a)}\cdot e_{(b)}d\xi ^a\,d\xi ^b\ .\end{equation}
It is convenient to introduce a pseudo--inverse of $g_{ab}$ (see \cite{BH}) which we denote by $g^{ab}_*$ and which is formed
by the inverse $g^{AB}$ of $g_{AB}$ bordered by zeros. As normal to $\Sigma$ we take $n^\mu\,\partial /\partial x^\mu =\partial
/\partial\rho =e_{(2)}$. This vector field is tangent to $\Sigma$ and in order to describe extrinsic properties of $\Sigma$ we introduce
a transversal vector field $N^\mu$ on $\Sigma$ which for convenience we take to be future--directed, null and orthogonal
to the two space--like vectors $e_{(A)}$ at each point of $\Sigma$. Thus we have
\begin{equation}\label{2.8}
N_{\mu}\,n^\mu =1\ ,\ N_\mu\,N^\mu =0\ ,\ N_\mu\,e^\mu _{(A)}=0\ .\end{equation}
Thus $N_\mu =(\frac{1}{2}, 1, 0, 0)$. Following the algorithm developed in \cite{BH} we define the transverse extrinsic curvature
${\cal K}^{\pm}_{ab}$ on either side of $\Sigma$ by
\begin{equation}\label{2.9}
{\cal K}^{\pm}_{ab}=-N_{\mu}(e^\mu _{(a),\lambda}+{}^{\pm}\Gamma ^{\mu}_{\lambda\sigma}\,e^{\sigma}_{(a)})\,e^{\lambda}_{(b)}\ ,
\end{equation}
with the comma denoting partial differentiation with respect to $x^\mu$ and ${}^{\pm}\Gamma ^{\mu}_{\lambda\sigma}$ the components
of the Riemannian connection calculated on either side of $\Sigma$. The jump in the quantities (2.9) is defined by
\begin{equation}\label{2.10}
\gamma _{ab}=2\,[{\cal K}_{ab}]=2\,({\cal K}^+_{ab}-{\cal K}^-_{ab})\ .\end{equation}
We find in the present case that $\gamma _{ab}=0$ except for
\begin{eqnarray}\label{2.11}
\gamma _{22}&=&-2\,[\Gamma ^2_{22}]\ ,\\
\gamma _{33}&=&-[\Gamma ^1_{33}]-2\,[\Gamma ^2_{33}]\ ,\\
\gamma _{44}&=&-[\Gamma ^1_{44}]-2\,[\Gamma ^2_{44}]\ .\end{eqnarray}
The singular null hypersurface $\Sigma$ can represent the history of a null shell and/or an impulsive gravitational wave. The surface
stress--energy tensor of the shell, if it exists, is calculated from $\gamma _{ab}$ and is given by (see Eq.(2.37) of \cite{BH})
\begin{equation}\label{2.12}
16\pi\,S^{ab}=\mu\,n^a\,n^b+P\,g^{ab}_*\ ,\end{equation}
with the surface energy density $\mu$ and pressure $P$ defined by
\begin{eqnarray}\label{2.13}
16\pi\,\mu &=&-\gamma _{ab}\,g^{ab}_*=-\gamma _{AB}\,g^{AB}\ ,\\
16\pi\,P&=&-\gamma ^{\dagger}\ ,\end{eqnarray}
In (2.14) $n^a=(1, 0, 0)$ and in (2.16) $\gamma ^{\dagger}=\gamma _{ab}\,n^a\,n^b=\gamma _{22}$. Hence the
conditions for \emph{no shell} read
\begin{equation}\label{2.14}
\gamma ^{\dagger}=\gamma _{22}=0\ \ {\rm and}\ \ -g^{ab}_*\gamma _{ab}=f^2\gamma _{33}
+f^{-2}\rho ^2\gamma _{44}=0\ .\end{equation}Using (2.11)--(2.13) and calculating the components of the
Riemannian connection associated with the metric tensor given via the line--element (2.5) we find that the first
of (2.17) requires that $[k_\rho ]=0$ (with the subscript denoting partial differentiation with respect to $\rho$) and the
second of (2.17) requires that $[k]=0$. Hence the boundary condition for \emph{no shell} is
\begin{equation}\label{2.15}
[k]=0\ ,\end{equation}
expressing the continuity of the function $k(\rho , u)$ across $\Sigma (u=0)$. The gravitational wave part of the signal
with history $\Sigma$ is described by a part of $\gamma _{ab}$, denoted $\hat\gamma _{ab}$, defined by Eq.(2.47)
of \cite{BH}:
\begin{equation}\label{2.16}
\hat\gamma _{ab}=\gamma _{ab}-\frac{1}{2}g_{ab}\,g^{cd}_*\gamma _{cd}-2\,\gamma _{(a}N_{b)}+\gamma ^{\dagger}
N_a\,N_b\ ,\end{equation}with $N_a=(1, 0, 0)$. With (2.18) satisfied we find that $\hat\gamma _{ab}=0$ except
\begin{eqnarray}\label{2.17}
\hat\gamma _{33}&=&\gamma _{33}=2\rho ^2f^{-4}[U_u]\ ,\\
\hat\gamma _{44}&=&\gamma _{44}=-2\,[U_u]\ .\end{eqnarray}
The fact that (2.20) and (2.21) are multiples of each other means that the gravitational wave here has only one degree of freedom. Hence we see that \emph{for an impulsive wave} with history
$\Sigma$ we must have
\begin{equation}\label{2.18}
[U_u]\neq 0\ ,\end{equation}with the subscript denoting partial differentiation with respect to $u$.
The Maxwell field in $M_+\cup M_-$ is given in general by the 2--form
\begin{equation}\label{2.19}
F=w_\rho\,dz\wedge d\rho +w_u\,dz\wedge du +s_\rho\,d\rho\wedge d\phi +s_u\,du\wedge d\phi\ ,\end{equation}
with $w, s$ each functions of $\rho , u$ and the subscripts as always denoting partial derivatives. In $M_-(u<0)$ we
have
\begin{equation}\label{2.20}
w=0\ ,\qquad s=-2\,B^{-1}f^{-1}\ ,\end{equation}with $f$ given by (2.2). Substitution of (2.24) into (2.23) yields (2.3).
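As a quick consistency check (a SymPy sketch, not part of the paper's derivation), differentiating the potential $s=-2\,B^{-1}f^{-1}$ of (2.24) indeed reproduces the single field component $B\rho f^{-2}$ appearing in (2.3) as the coefficient of $d\rho\wedge d\phi$ in (2.23):

```python
import sympy as sp

rho, B = sp.symbols('rho B', positive=True)
f = 1 + B**2*rho**2/4

s = -2/(B*f)                  # potential from Eq. (2.24)
s_rho = sp.diff(s, rho)       # coefficient of d rho ^ d phi in Eq. (2.23)

print(sp.simplify(s_rho - B*rho/f**2))  # expected: 0
```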
To obtain the space--time $M_+(u\geq 0)$ and the electromagnetic field for $u\geq 0$ we must satisfy the Einstein--Maxwell
vacuum field equations in $u\geq 0$ with a line--element of the form (2.5) and a Maxwell 2--form of the form (2.23). These
equations are listed in appendix A. For our purposes it is sufficient to solve these equations for small $u>0$ (i.e. in a future
neighborhood of $\Sigma$). The unknown functions of $\rho , u$ are $U, k, w, s$ with $k$ continuous across $u=0$ and $U_u$
jumping across $u=0$. Hence for small $u$ we can write
\begin{eqnarray}\label{2.21}
U&=&\log f+u\,\theta (u)\,U_1+O(u^2)\ ,\\
k&=&2\,\log f+u\,\theta (u)\,k_1+O(u^2)\ ,\end{eqnarray}with $f$ given by (2.2).
Here $\theta (u)$ is the Heaviside step function which is equal to unity if $u>0$ and equal to zero if $u<0$.
For consistency with the expansions (2.25) and (2.26), and in the light of (2.24),
we also assume that
\begin{eqnarray}\label{2.22}
s&=&-2\,B^{-1}f^{-1}+u\,\theta (u)\,s_1+O(u^2)\ ,\\
w&=&u\,\theta (u)\,w_1+O(u^2)\ .\end{eqnarray}
The unknown functions $U_1, k_1, s_1, w_1$ in (2.25)--(2.28) are functions of $\rho$ only. Substituting (2.25)--(2.28) in Einstein's
equations (A-3)--(A-7) results in
\begin{equation}\label{2.23}
w_1=0\ ,\end{equation}
and then
\begin{equation}\label{2.24}
\frac{d}{d\rho}(\rho ^{1/2}U_1)= B\,\rho ^{-1/2}s_1\ ,\end{equation}
\begin{equation}\label{2.25}
\frac{1}{\rho}\frac{dk_1}{d\rho}-2\,f^{-1}\frac{df}{d\rho}\,\frac{dU_1}{d\rho}=2\,\frac{B}{\rho}\,\frac{ds_1}{d\rho}+2\,B^2f^{-2}U_1\ ,\end{equation}
\begin{equation}\label{2.26}
\frac{dU_1}{d\rho}-U_1\,f^{-1}\frac{df}{d\rho}+\frac{1}{2\rho}\,U_1=\frac{dk_1}{d\rho}\ ,\end{equation}
\begin{equation}\label{2.27}
2\,U_1f^{-1}\frac{df}{d\rho}-2U^2_1-\frac{1}{\rho}k_1=\frac{2}{\rho ^2}f^2(s_1-B\,\rho\,f^{-2})s_1\ .\end{equation}
Maxwell's equations (A-1) and (A-2) now provide just one extra relevant equation, namely,
\begin{equation}\label{2.28}
\frac{d}{d\rho}(\rho ^{-1/2}f\,s_1)=-B\,f^{-1}\rho ^{1/2}U_1\ .\end{equation}We observe that
(2.30), (2.31) and (2.34) imply (2.32). Hence the strategy for solving these five equations is to first solve (2.30) and (2.34) for $s_1, U_1$,
then substitute these solutions into (2.33) to obtain $k_1$ algebraically and then to check that (2.31) is automatically satisfied. Proceeding
in this way we obtain the solutions
\begin{eqnarray}\label{2.29}
U_1&=&a_0\,\rho ^{-1/2}f^{-1}\left (1-\frac{1}{4}B^2\rho ^2\right )+b_0\,\rho ^{1/2}B\,f^{-1}\ ,\\
s_1&=&-a_0\,B\rho ^{3/2}f^{-2}+b_0\,\rho ^{1/2}f^{-2}\left (1-\frac{1}{4}B^2\rho ^2\right)\ ,\\
k_1&=&-2\,(a_0^2+b_0^2)-a_0\,B^2\rho ^{3/2}f^{-1}+2\,b_0\,B\rho ^{1/2}f^{-1}\ ,\end{eqnarray}
where $a_0, b_0$ are real constants. A convenient way to interpret these results is to use them to obtain
information about parts of the Weyl tensor and the Maxwell tensor in $M_+\cup M_-$ on a basis of 1--forms $\theta ^\mu$ ($\mu =1, 2, 3, 4$) in terms of which the
line--element (2.5) can be written
\begin{equation}\label{2.30}
ds^2=2\,\theta ^1\,\theta ^2-(\theta ^3)^2-(\theta ^4)^2\ .\end{equation}
Such a basis is given by
\begin{equation}\label{2.31}
\theta ^1=e^{2k-2U}(d\rho +\frac{1}{2}du)\ ,\ \theta ^2=du\ ,\ \theta ^3=e^{-U}\rho\,d\phi\ ,\ \theta ^4=e^Udz\ .\end{equation}
The Weyl tensor of the space--time $M_+\cup M_-$, with components $C_{\mu\nu\lambda\sigma}$ on this basis, is
dominated for small $u$ by the tetrad component
\begin{equation}\label{2.32}
C_{2323}=\{a_0\,\rho ^{-1/2}f^{-1}(1-\frac{1}{4}B^2\rho ^2)+b_0\,\rho ^{1/2}f^{-1}B\}\,\delta (u)\ , \end{equation}
with all other terms and components at most $O(u^0)$. Here $\delta (u)$ is the Dirac delta function. This
dominant part of the Weyl tensor of $M_+\cup M_-$ is type N in the Petrov classification with degenerate principal null direction
$\partial /\partial\rho$. It represents a gravitational wave. When $B=0$ ($f=1$) we see a cylindrical impulsive gravitational wave which is
singular on the axis $\rho =0$ and which has one degree of freedom, manifested by the appearance of the real constant $a_0$. The presence
of the magnetic field $B$ clearly modifies the amplitude of the gravitational wave. We will comment on this modification in section 5 when
we can compare (2.40) with the examples described in the next two sections. The tetrad components of the Maxwell field in $M_{\pm}$
will be denoted $F^{\pm}_{\mu\nu}$. In general they jump across $\Sigma$ with the jump given by
\begin{equation}\label{2.33}
f_{\mu\nu}=[F_{\mu\nu}]=F^+_{\mu\nu}-F^-_{\mu\nu}\ .\end{equation}In the present case (2.27)--(2.29) and (2.36) mean that $f_{\mu\nu}$
vanishes except for
\begin{equation}\label{2.34}
f_{23}=f\,\rho ^{-1}s_1\ .\end{equation}It thus follows that the bivector $f_{\mu\nu}$ is algebraically special
(type N) in the classification of bivectors with $\partial /\partial\rho$ as degenerate principal null direction. Thus (2.42) indicates the
presence of cylindrical electromagnetic waves behind the impulsive gravitational wave as it propagates through the
universe with the magnetic field labelled by $B$.
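The solutions (2.37)--(2.39) can be spot-checked symbolically. The sketch below (Python with sympy) verifies that $U_1$ and $s_1$ of (2.37)--(2.38) satisfy the two equations (2.30) and (2.34) that were integrated; it assumes the Melvin--universe form $f=1+\frac{1}{4}B^2\rho ^2$ for the background function $f$ defined earlier in the paper:

```python
# Symbolic check that U_1 and s_1 of (2.37)-(2.38) satisfy (2.30) and (2.34).
# Assumption: f = 1 + B^2 rho^2 / 4 (Melvin-universe form, defined earlier
# in the paper); a0, b0 are the arbitrary real constants of the solution.
import sympy as sp

rho, B, a0, b0 = sp.symbols('rho B a0 b0', positive=True)
f = 1 + B**2*rho**2/4

U1 = a0*rho**sp.Rational(-1, 2)*(1 - B**2*rho**2/4)/f + b0*B*rho**sp.Rational(1, 2)/f
s1 = -a0*B*rho**sp.Rational(3, 2)/f**2 + b0*rho**sp.Rational(1, 2)*(1 - B**2*rho**2/4)/f**2

# (2.30):  d/drho (rho^{1/2} U1) = B rho^{-1/2} s1
res_230 = sp.simplify(sp.diff(sp.sqrt(rho)*U1, rho) - B*s1/sp.sqrt(rho))
# (2.34):  d/drho (rho^{-1/2} f s1) = -B f^{-1} rho^{1/2} U1
res_234 = sp.simplify(sp.diff(f*s1/sp.sqrt(rho), rho) + B*sp.sqrt(rho)*U1/f)

print(res_230, res_234)  # both residuals vanish identically
```

Both residuals are rational functions of $\rho$, so the cancellation is exact rather than numerical.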
\setcounter{equation}{0}
\section{The Axially Symmetric Case}\indent
We now construct a space--time model of an axially symmetric impulsive gravitational wave propagating into a static axially symmetric
universe containing an electric field. The line--element of a simple such space--time is \cite{McV}, \cite{Bon}
\begin{equation}\label{3.1}
ds^2=-W^{-3}dz^2-W^{-1}(d\rho ^2+\rho ^2d\phi ^2)+W\,dt^2\ ,\end{equation}
with $W=(1+E\,z)^2$, where $E$ is a real constant. This is a solution of the Einstein--Maxwell vacuum field equations with the Maxwell 2--form
given by
\begin{equation}\label{3.2}
F=E\,dz\wedge dt\ .\end{equation}This is clearly a pure electric field. When expressed on an orthonormal tetrad it has the single non--vanishing
physical component $E\,W$ and so is not a uniform electric field in the classical sense. It is, however, approximately uniform when the dimensionless
parameter $E\,z$ is small. A simple family of null hypersurfaces in this space--time is given by $u={\rm constant}$ with $u$ derived from
\begin{equation}\label{3.3}
du=dt-W^{-2}dz\ .\end{equation}Such null hypersurfaces can act as the histories of the wave--fronts of axially symmetric waves. Using $u$
instead of $t$ as a coordinate the line--element (3.1) reads
\begin{equation}\label{3.4}ds^2=W\,du^2+2\,W^{-1}du\,dz-W^{-1}(d\rho ^2+\rho ^2d\phi ^2)\ .\end{equation}Here $z$ is a parameter running
along the generators of the null hypersurfaces $u={\rm constant}$. It is convenient to work instead with an affine parameter $r$ along
these generators which is related to $z$ by $1+E\,z=(1-E\,r)^{-1}$. Replacing $z$ by $r$ in (3.4) means the line--element now reads
\begin{equation}\label{3.5}
ds^2=2\,du\,dr+(1-E\,r)^{-2}du^2-(1-E\,r)^2(d\rho ^2+\rho ^2d\phi ^2)\ .\end{equation} The Maxwell field (3.2) now takes the form
\begin{equation}\label{3.6}
F=E\,(1-E\,r)^{-2}dr\wedge du\ .\end{equation}
We now consider a space--time $M_+\cup M_-$ with $M_-(u\leq 0)$ corresponding to the
space--time with line--element (3.5) having as boundary the null hypersurface $u=0$ and $M_+(u\geq 0)$, with the same boundary $u=0$,
to be determined. To this latter end we
solve the vacuum Einstein-Maxwell field equations for $M_+\cup M_-$ requiring that $u=0$ is the history of an axially symmetric impulsive
gravitational wave (and \emph{not} a null shell). Our objective in doing this is to obtain the space--time $M_+$ with sufficient accuracy to
determine the coefficient of $\delta (u)$ in the Weyl tensor of $M_+\cup M_-$ and the jump, if it exists, in the Maxwell field across $u=0$,
in parallel with (2.40)--(2.42). We find that the line--element of $M_+\cup M_-$, for small $u$, can be written in the form (2.38) but with
\begin{eqnarray}\label{3.7}
\theta ^1&=&dr+\frac{1}{2}\left\{(1-E\,r)^{-2}+u\,\theta (u)\,c_1+O(u^2)\right\}\,du\ ,\\
\theta ^2&=&du\ ,\\
\theta ^3&=&(1-E\,r)\{(1+u\,\theta (u)\,\alpha _1+O(u^2))\,d\rho +(u\,\theta (u)\,\beta _1+O(u^2))\,\rho\,d\phi\}\ ,\nonumber\\
\\
\theta ^4&=&(1-E\,r)\{(u\,\theta (u)\,\beta _1+O(u^2))\,d\rho +(1-u\,\theta (u)\,\alpha _1+O(u^2))\,\rho\,d\phi\}\ .\nonumber\\\end{eqnarray}
The functions $\alpha _1, \beta _1, c_1$ are functions of $r, \rho , \phi$. The field equations restrict the functions $\alpha _1$ and $\beta _1$
(they also determine the function $c_1$ but we will not require it here) according to
\begin{equation}\label{3.8}
\alpha _1=\frac{\hat\alpha _1(\rho , \phi )}{1-E\,r}\ ,\qquad \beta _1=\frac{\hat\beta _1(\rho , \phi )}{1-E\,r}\ ,\end{equation}
and the functions $\hat\alpha _1,\ \hat\beta _1$ must satisfy the equations
\begin{eqnarray}\label{3.9}
\frac{\partial\hat\alpha _1}{\partial\phi}-\rho\,\frac{\partial\hat\beta _1}{\partial\rho}=2\,\hat\beta _1\ ,\\
\frac{\partial\hat\beta _1}{\partial\phi}+\rho\,\frac{\partial\hat\alpha _1}{\partial\rho}=-2\,\hat\alpha _1\ .\end{eqnarray}
Introducing the complex variable $\zeta =\log\rho +i\phi$ we can integrate (3.12) and (3.13) to arrive at
\begin{equation}\label{3.10}
\hat\alpha _1-i\hat\beta _1=e^{-\bar\zeta}H(\zeta )\ ,\end{equation}
where $H$ is an arbitrary analytic function of $\zeta$. In parallel with (2.40) and (2.42) we find in this case that
\begin{equation}\label{3.11}
C_{2323}-iC_{2324}=\frac{e^{-\bar\zeta}H(\zeta )}{(1-E\,r)}\,\delta (u)\ ,\end{equation}
and
\begin{equation}\label{3.12}
f_{\mu\nu}=[F_{\mu\nu}]=F^+_{\mu\nu}-F^-_{\mu\nu}=0\ .\end{equation}In (3.15) we see an axially symmetric impulsive
gravitational wave propagating into the universe with the electric field labelled by the parameter $E$. We also see that the
presence of the electric field modifies the amplitude of the wave by the appearance of $E$ in the coefficient of the delta function.
The coefficient of the delta function in the Weyl tensor is type N in the Petrov classification with $\partial /\partial r$ in this case
as degenerate principal null direction (propagation direction in space--time).
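The integration of (3.12) and (3.13) can be checked directly on a particular member of the solution family. The sketch below (Python with sympy) takes $H(\zeta )=1$, for which $e^{-\bar\zeta}=\rho ^{-1}e^{i\phi}$ and the pair $\hat\alpha _1=\rho ^{-1}\cos\phi$, $\hat\beta _1=-\rho ^{-1}\sin\phi$ solves the system (the relative sign of $\hat\beta _1$ is fixed by the system itself):

```python
# Check that (alpha, beta) = (cos(phi)/rho, -sin(phi)/rho), i.e. the H = 1
# member of the solution family, satisfies the system (3.12)-(3.13).
import sympy as sp

rho, phi = sp.symbols('rho phi', positive=True)
alpha = sp.cos(phi)/rho
beta = -sp.sin(phi)/rho

# (3.12):  d(alpha)/d(phi) - rho d(beta)/d(rho) = 2 beta
res_312 = sp.simplify(sp.diff(alpha, phi) - rho*sp.diff(beta, rho) - 2*beta)
# (3.13):  d(beta)/d(phi) + rho d(alpha)/d(rho) = -2 alpha
res_313 = sp.simplify(sp.diff(beta, phi) + rho*sp.diff(alpha, rho) + 2*alpha)

print(res_312, res_313)  # both residuals vanish
```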
On account of (3.16) there is no electromagnetic radiation immediately behind the gravitational wave in this case. If we were
to relax the Maxwell vacuum conditions in $M_+$ we could obtain a 4--current with tetrad components
\begin{eqnarray}\label{3.13}
J_3&=&\frac{2\,E}{1-E\,r}\,f_{32}+O(u)\ ,\\
J_4&=&\frac{2\,E}{1-E\,r}\,f_{42}+O(u)\ .\end{eqnarray}A bivector $f_{\mu\nu}$ having only $f_{32}$ and $f_{42}$ non--zero
is of radiative type with propagation direction $\partial /\partial r$ and represents electromagnetic radiation.
\setcounter{equation}{0}
\section{The `Spherically' Symmetric Case}\indent
Starting with the line--element of Minkowskian space--time in rectangular Cartesian coordinates and time $X, Y, Z, T$ which
reads
\begin{equation}\label{4.1}
ds^2=-dX^2-dY^2-dZ^2+dT^2\ ,\end{equation}we make the coordinate transformation
\begin{equation}\label{4.2}
X+iY=r\,G^{1/2}\,e^{iy}\ ,\ Z=r\,x\ ,\ T=u+r\ ,\end{equation}with $G=1-x^2$ then (4.1) takes the form
\begin{equation}\label{4.3}
ds^2=2\,du\,dr+du^2-r^2(G^{-1}dx^2+G\,dy^2)\ .\end{equation}Here $u={\rm constant}$ are future null cones with
vertices on the time--like geodesic $r=0$, $r$ is an affine parameter along the generators of the null cones and the generators
are labelled by $x, y$. Using (4.2) again we see that
\begin{equation}\label{4.4}
dX\wedge dY=r\,G\,dr\wedge dy-r^2x\,dx\wedge dy\ .\end{equation}Thus in particular the Maxwell 2--form
\begin{equation}\label{4.5}
F=B\,r\,G\,dr\wedge dy-B\,r^2x\,dx\wedge dy\ ,\end{equation} with $B$ a real constant, is a uniform magnetic field. We shall
restrict considerations to a weak magnetic field for which squares and higher powers of $B$ can be neglected. In this case
Minkowskian space--time with the bivector (4.5) constitutes an approximate solution of the Einstein--Maxwell vacuum field
equations. To construct a model of a `spherical' impulsive gravitational wave propagating into this universe we will take for
$M_-(u\leq 0)$ the space--time with line--element (4.3) and the future null cone $u=0$ for the history of the wave. Since the
future null cone is the history of a 2--sphere expanding with the speed of light we will refer to the gravitational wave with
history $u=0$ as a `spherical' wave. The reason for the inverted commas is that such a wave will be found to have
singular points on its spherical wave front, thus violating strict spherical symmetry (see below). Something like this
is to be expected in general relativity on account of the Birkhoff theorem (see \cite{BH} section 1.2). Now the line--element
of $M_+\cup M_-$, for small $u$, can be written in the form (2.38) but with
\begin{eqnarray}\label{4.6}
\theta ^1&=&dr+\frac{1}{2}\left\{1+u\,\theta (u)\,c_1+O(u^2)\right\}\,du\ ,\\
\theta ^2&=&du\ ,\\
\theta ^3&=&r\,G^{-1/2}(1+u\,\theta (u)\,\alpha _1+O(u^2))\,dx +r\,G^{1/2}(u\,\theta (u)\,\beta _1+O(u^2))\,dy\ ,\nonumber\\
\\
\theta ^4&=&r\,G^{-1/2}(u\,\theta (u)\,\beta _1+O(u^2))\,dx +r\,G^{1/2}(1-u\,\theta (u)\,\alpha _1+O(u^2))\,dy\ .\nonumber\\\end{eqnarray}
Here the functions $\alpha _1, \beta _1,\ c_1$, along with the functions $f_{\mu\nu}$, are functions of $x, y, r$ and can be determined from the vacuum Einstein--Maxwell
field equations (in particular ensuring by the vacuum conditions that no null shell can have $u=0$ as history and also that there is
no surface 4--current on $u=0$). As in the example of section 3 we shall not require the function $c_1$ although it can be determined
using the field equations of course. From Maxwell's equations we find that
\begin{eqnarray}\label{4.7}
\frac{\partial}{\partial r}(r\,f_{32})=-B\,r\,G^{1/2}\beta _1\ ,\\
\frac{\partial}{\partial r}(r\,f_{42})=B\,r\,G^{1/2}\alpha _1\ .\end{eqnarray}
Neglecting $O(B^2r^2)$--terms we conclude that
\begin{equation}\label{4.8}
B\,r\,G^{1/2}f_{32}=K(x, y)\ \qquad {\rm and}\qquad B\,r\,G^{1/2}f_{42}=L(x, y)\ ,\end{equation} with $K$ and $L$ arbitrary functions of $x, y$. Einstein's equations
with the electromagnetic energy--momentum tensor as source yield
\begin{eqnarray}\label{4.9}
\frac{\partial}{\partial r}(r\,\alpha _1)&=&B\,r\,G^{1/2}f_{42}\ ,\\
\frac{\partial}{\partial r}(r\,\beta _1)&=&-B\,r\,G^{1/2}f_{32}\ ,\end{eqnarray}
from which we conclude that, in the light of (4.12),
\begin{equation}\label{4.10}
\alpha _1=L(x,y) +\frac{C(x, y)}{r}\ ,\qquad \beta _1=-K(x,y) +\frac{D(x, y)}{r}\ ,\end{equation} where $C$ and $D$ are arbitrary functions of $x, y$. Now defining
\begin{equation}\label{4.11}
\zeta =\frac{1}{2}\log\left (\frac{1+x}{1-x}\right )+iy\ ,\end{equation}the remaining Einstein field equations give the
single complex equation
\begin{equation}\label{4.12}
\frac{\partial}{\partial\zeta}\{G\,(\alpha _1+i\beta _1)\}=-G\,x\,(L-iK)\ ,\end{equation}from which we conclude, using (4.15), that
\begin{equation}\label{4.13}
G\,(C+iD)=\bar {\cal F}(\bar\zeta)\qquad {\rm and}\qquad L-iK=\bar {\cal G}(\bar\zeta)\ ,\end{equation}where ${\cal F}, {\cal G}$ are
arbitrary analytic functions. Thus (4.12) and (4.15) now read
\begin{equation}\label{4.14}
f_{42}+if_{32}=B^{-1}G^{-1/2}\frac{1}{r}\,{\cal G}(\zeta )\ ,\end{equation}and
\begin{equation}\label{4.15}
\alpha _1-i\beta _1=\frac{1}{r}G^{-1}{\cal F}(\zeta )+{\cal G}(\zeta )\ ,\end{equation}respectively.
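The step from (4.18) to (4.19)--(4.20) rests on the identity $\partial _\zeta\{G\,w\}=-G\,x\,w$ for any anti--analytic $w(\bar\zeta )$, where, with $\zeta$ as in (4.15), $\partial /\partial\zeta =\frac{1}{2}G\,\partial /\partial x-\frac{i}{2}\,\partial /\partial y$ on functions of $x, y$. A minimal symbolic sketch (Python with sympy; the choice $w=\bar\zeta$ is purely illustrative):

```python
# Verify: with d/dzeta = (G/2) d/dx - (i/2) d/dy (zeta as in (4.15)),
# an anti-analytic w(zeta_bar) obeys d/dzeta { G w } = -G x w, which is
# the content of (4.18) once F-bar and G-bar are introduced.
# Illustrative choice (an assumption): w = zeta_bar = atanh(x) - i y.
import sympy as sp

x, y = sp.symbols('x y', real=True)
G = 1 - x**2
w = sp.atanh(x) - sp.I*y

d_zeta = lambda F: G/2*sp.diff(F, x) - sp.I/2*sp.diff(F, y)
residual = sp.simplify(d_zeta(G*w) + G*x*w)
print(residual)  # 0
```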
In this case the delta function part of the Weyl tensor is given by
\begin{equation}\label{4.16}
C_{2323}-iC_{2324}=(\alpha _1-i\beta _1)\,\delta (u)=\left\{\frac{1}{r}G^{-1}{\cal F}(\zeta )+{\cal G}(\zeta )\right\}\,\delta (u)\ .\end{equation}
The first term in the coefficient of the delta function is the amplitude of a `spherical' wave with the expected ``directional''
singularities at $x=\pm 1$ (corresponding to $G(x)=0$, equivalently $X=Y=0$) while the second term is the modification to the amplitude due to the wave
encountering the weak magnetic field. In (4.19) we see the algebraically special jumps in the Maxwell field across $u=0$
which indicate the presence of electromagnetic radiation in the region $M_+$ of space--time to the future of the history of the
impulsive gravitational wave (i.e. behind the wave). This radiation is spherical--fronted ($u={\rm constant}>0$ being the histories
of the wave--fronts), singular at $r=0$ and also has directional singularities at $x=\pm 1$.
\setcounter{equation}{0}
\section{Discussion}\indent
In sections 2, 3 and 4 above we have considered an impulsive gravitational wave propagating into a vacuum universe with a
magnetic field present in the first and last cases and into a vacuum universe with an electric field present in the second case. If the
vacuum is preserved after the wave has passed then in the region behind the wave electromagnetic radiation appears in the
first and last cases but not in the second case. In addition we have found that in all three cases the amplitude of the impulsive
gravitational wave is modified by the existence of the magnetic or electric field that it encounters. There is an interesting
pattern to this modification when the magnetic and electric fields are weak in all three cases. In the cylindrically symmetric case,
combining (2.40) and (2.42) with (2.36), we can write, approximately for small $B$,
\begin{equation}\label{5.1}
C_{2323}=\{a_0\rho ^{-1/2}+\rho\,B\,f_{23}\}\,\delta (u)\ ,\end{equation}
for the delta function part of the Weyl tensor. The coefficient of the delta function here is a sum of a cylindrical wave term and
an interaction between the weak magnetic field and the electromagnetic radiation. For the axially symmetric case with a weak
electric field we have from (3.15)
\begin{equation}\label{5.2}
C_{2323}-iC_{2324}=\{e^{-\bar\zeta}H(\zeta )+r\,E\,e^{-\bar\zeta}H(\zeta ) \}\,\delta (u)\ .\end{equation}
In this case there is no electromagnetic radiation generated behind the gravitational wave but the coefficient of the delta function
is the sum of an axially symmetric wave term and an interaction between the weak electric field and the gravitational radiation. Finally
in the `spherical' case we have (4.21) which with (4.19) can be written
\begin{equation}\label{5.3}
C_{2323}-iC_{2324}=\left\{\frac{1}{r}G^{-1}{\cal F}(\zeta )+r\,B\,G^{1/2}(f_{42}+if_{32})\right\}\,\delta (u)\ .\end{equation}
The coefficient of the delta function here is a sum of a `spherical' wave and an interaction between the weak magnetic
field and the electromagnetic radiation.
When electromagnetic radiation appears above it takes the form of an electromagnetic shock wave accompanying the
impulsive gravitational wave. The history of the electromagnetic shock wave is the null hypersurface $u=0$. Should
we wish to know the field in $u>0$, to the future of the history of the wave, we would require, for example, the
$O(u^2)$--terms in (2.25)--(2.28), (3.7)--(3.10) and (4.6)--(4.9). The examples given in this paper have motivated the development of a general,
relativistically invariant, treatment of the interaction of impulsive gravitational waves with electromagnetic fields which
will be described in a future publication.
\section{Introduction}
In the pursuit of new physics beyond the Standard Model (SM),
the Tevatron $p\bar{p}$ collider at $\sqrt{s}=1.96\,{\rm TeV}$ provides a powerful approach through $b$ hadrons.
At the Tevatron, $b$ quarks are pair-produced with an enormous cross section~\cite{Aaltonen:2009xn}, three orders of magnitude higher than at $e^+e^-$ colliders, and all species of $b$ hadrons are produced.
This provides privileged access to SM-suppressed processes such as flavor-changing neutral-current (FCNC) transitions and
$CP$ violation in $B^0_s$ mixing.
These flavor-sector approaches at the Tevatron are complementary to
direct searches for new particles, such as those predicted by supersymmetry (SUSY),
and to $B$ physics at the $e^+e^-$ experiments.
In this paper we focus on studies of some promising FCNC processes:
$B\to K^{(*)}\mu^+\mu^-$, $B^0_s\to \phi\mu^+\mu^-$,
$B^0_s\to\mu^+\mu^-$,
and
$B^0_s$ mixing,
performed by CDF and D0 collaborations.
\section{Rare decays}
\subsection{$B\to K^{(*)}\mu^+\mu^-$ and $B^0_s\to \phi\mu^+\mu^-$}
The $B\to K^{(*)}\mu^+\mu^-$ and $B^0_s\to \phi\mu^+\mu^-$ decays are dominated
by the FCNC $b \to s \ell \ell$ transition.
In the SM framework, the quark transition is forbidden at tree level;
at lowest order it proceeds through a $Z/\gamma$ penguin diagram or a $W^+ W^-$ box diagram.
A new physics process could enhance the decay amplitude, or reveal itself through interference with the SM amplitude.
We therefore measure observables sensitive to the magnitude and the complex phase of the amplitude, such as branching ratios, polarization, and the forward-backward asymmetry.
CDF selects two oppositely charged muon candidates with a momentum transverse to the beamline,
$p_T$, greater than 1.5 or 2.0$\,{\rm GeV/}c$, depending on the trigger selection.
We then reconstruct $B \to \hmm$ signal candidates, where $B$ stands for $B^+$, $B^0$, or $B^0_s$, and
$h$ stands for $K^+, \kst,$ or $\phi$ respectively.
The $\kst$ is reconstructed in the mode $\kst \to K^+\pi^-$, and
the $\phi$ is reconstructed as $\phi \to K^+K^-$.
To enhance separation of signal from background we employ an artificial neural network (NN) technique.
Fig.~\ref{fig:rarebmass_data} shows invariant mass distribution for each rare decay.
\begin{figure}[b]
\begin{center}
\begin{tabular}{ccc}
\resizebox{.25\textwidth}{!}{\includegraphics[clip]{./fit_bmass_kmm_data_hcp.eps}}
\resizebox{.25\textwidth}{!}{\includegraphics[clip]{./fit_bmass_kstmm_data_hcp.eps}}
\resizebox{.25\textwidth}{!}{\includegraphics[clip]{./fit_bmass_phimm_data_hcp.eps}}
\end{tabular}
\end{center}
\caption{The $B$ invariant mass distributions for $\bp \to \kmm$ (left), $\bz \to \kstmm$ (middle), and $\bs \to \phimm$ (right)
in $\lumi$ of data.
}
\label{fig:rarebmass_data}
\end{figure}
The signal yield is obtained from an unbinned maximum-likelihood fit to the $B$ invariant mass distribution.
From the $B$ mass fit with $4.4\fb$ of data~\cite{ref:cdf_10047},
we obtain $120\pm16$ ($\bp \to \kmm$), $101\pm12$ ($\bz \to \kstmm$), and $27\pm6$ ($\bs \to \phimm$) signal yields,
with $8.5\sigma$, $9.7\sigma$, and $6.3\sigma$ statistical significance, respectively.
This is the first observation of the $\bs \to \phimm$ mode.
The observed yields are consistent with world averages and theoretical expectations.
We measure the branching fractions of the rare decays relative to the corresponding reference channels, $\jpsi h$,
which have the same final states as the rare decays but contain an intermediate $J/{\small \psi}$ resonance.
Using PDG~\cite{Amsler:2008zzb} values for the branching fractions of the reference decays we obtain
${\cal B}(\bp \to \kmm) = [0.38 \pm 0.05({\rm stat}) \pm 0.03({\rm syst})] \times 10^{-6}$,
${\cal B}(\bz \to \kstmm) = [1.06 \pm 0.14({\rm stat}) \pm 0.09({\rm syst})] \times 10^{-6}$,
${\cal B}(\bs \to \phimm) = [1.44 \pm 0.33({\rm stat}) \pm 0.46({\rm syst})] \times 10^{-6}$.
We measure the differential decay rate with respect to the dimuon mass.
The signal region is divided into six $q^2$ bins, where $q^2\equiv M_{\mu\mu}^2c^2$.
To obtain the number of signal events in each $q^2$ bin we use
the same procedure as in the global yield fit.
Fig.~\ref{fig:dbr_kstmm} shows the differential branching fraction for $\bz \to \kstmm$ and $\bp \to \kmm$.
\begin{figure}[t]
\begin{center}
\begin{tabular}{cc}
\resizebox{0.33\textwidth}{!}{\includegraphics[clip]{dbr_kstmm.eps}}
\resizebox{0.33\textwidth}{!}{\includegraphics[clip]{dbr_kmm.eps}}
\end{tabular}
\end{center}
\caption{Differential BR of $\bz \to \kstmm$ (left) and $\bp \to \kmm$ (right).
Hatched regions are charmonium veto regions.
Solid lines are the SM expectation~\cite{Ali:1999mm}, computed with the maximum- and minimum-allowed form factors.
Dashed line is the averaged theoretical curve in each $q^2$ bin.
}
\label{fig:dbr_kstmm}
\end{figure}
The forward-backward asymmetry ($\rm A_{FB}$) and the $\kst$ longitudinal polarization ($\rm F_{L}$) are extracted from the $\cos \theta_\mu$ and $\cos \theta_K$ distributions, respectively, where
$\theta_\mu$ is the helicity angle between the $\mu^+$ ($\mu^-$) direction and the direction opposite to the $B$ ($\overline{B}$) in the dimuon rest frame, and
$\theta_K$ is the angle between the kaon direction and the direction opposite to the $B$ meson in the $\kst$ rest frame.
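In a simple counting picture, $\rm A_{FB}$ in a given $q^2$ bin is $(N_F-N_B)/(N_F+N_B)$, where $N_F$ ($N_B$) counts events with $\cos\theta _\mu >0$ ($<0$). The toy sketch below (Python; an illustration only, not the actual analysis, which fits the full angular distribution with backgrounds) applies the estimator to a synthetic $1+a\cos\theta _\mu$ distribution, whose true asymmetry is $a/2$:

```python
# Toy counting estimator of A_FB = (N_F - N_B)/(N_F + N_B).
# Illustration only: the real measurement fits the full cos(theta_mu)
# distribution with signal and background components.
import random

def afb_counting(cos_thetas):
    n_f = sum(1 for c in cos_thetas if c > 0)   # forward events
    n_b = len(cos_thetas) - n_f                 # backward events
    return (n_f - n_b)/(n_f + n_b)

def sample_cos_theta(n, a, rng):
    # Rejection sampling from the density proportional to 1 + a*cos(theta)
    out = []
    while len(out) < n:
        c = rng.uniform(-1.0, 1.0)
        if rng.random() < (1.0 + a*c)/(1.0 + a):
            out.append(c)
    return out

rng = random.Random(1)
sample = sample_cos_theta(20000, 0.6, rng)  # true A_FB = 0.6/2 = 0.3
print(afb_counting(sample))                 # close to 0.3
```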
We measure $\rm F_{L}$ and $\rm A_{FB}$ for $\bz \to \kstmm$ and also $\rm A_{FB}$ for $\bp \to \kmm$.
Fit results are shown in Fig.~\ref{fig:afb_fl_kstmm}.
Both $\rm F_{L}$ and $\rm A_{FB}$ are consistent with the SM and also with an example SUSY model.
Our results are also consistent and competitive with the B-factory measurements~\cite{Wei:2009zv,Aubert:2008ju}.
\begin{figure}[htbp]
\begin{center}
\begin{tabular}{ccc}
\resizebox{.3\textwidth}{!}{\includegraphics[clip]{summary_fl_6bin.eps}}
\resizebox{.3\textwidth}{!}{\includegraphics[clip]{summary_afb_6bin.eps}}
\resizebox{.3\textwidth}{!}{\includegraphics[clip]{summary_afb_6bin_kll.eps}}
\end{tabular}
\end{center}
\caption{$\rm F_{L}$(left) and $\rm A_{FB}$(middle) fit results as a function of $q^2$ for $\bz \to \kstmm$ and
$\rm A_{FB}$(right) as a function of $q^2$ for $\bp \to \kmm$.
The points show data.
Solid (dotted) curves show the SM (an example SUSY model) expectation~\cite{Ali:1999mm}.
Dashed lines show the averaged expectation in each $q^2$ bin.
Hatched regions indicate the charmonium veto.}
\label{fig:afb_fl_kstmm}
\end{figure}
\subsection{$B^0_s (B^0) \to\mu^+\mu^-$}
The $B^0_s (B^0) \to \mu^+\mu^-$ decays
also proceed through the FCNC process. The decay rates are
further suppressed by the helicity factor, $(m_\mu/m_B)^2$.
The $B^0$ decay is also suppressed with respect to the $B^0_s$ decay
by the ratio of CKM elements, $\left|V_{td}/V_{ts}\right|^2$.
The SM expectations for these branching fractions are
${\cal B}(\Bsmm) = (3.42\pm0.54)\times10^{-9}$ and ${\cal B}(\Bdmm) = (1.00\pm0.14)\times10^{-10}$~\cite{Buras:2003td}.
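The sizes of these suppression factors are easy to illustrate numerically. The sketch below (Python) uses assumed, approximately PDG, input values for the masses and $\left|V_{td}/V_{ts}\right|$:

```python
# Rough numerical illustration of the suppression factors quoted above.
# Input values are assumptions (approximately PDG): masses in GeV/c^2.
m_mu, m_bs = 0.1057, 5.367
vtd_over_vts = 0.21          # approximate |V_td / V_ts|

helicity = (m_mu/m_bs)**2    # helicity suppression, ~ 4e-4
ckm = vtd_over_vts**2        # CKM suppression of B0 relative to B0s, ~ 0.04
print(helicity, ckm)

# Compare with the ratio of the SM branching fractions quoted above
# (the exact ratio also involves decay constants and lifetimes):
print(1.00e-10/3.42e-9)      # ~ 0.03
```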
As many new physics models can enhance the BR significantly,
these decays provide sensitive probes for new physics.
CDF selects two oppositely charged muon candidates within a dimuon invariant mass window of
$4.669 < m_{\mu^+\mu^-} < 5.969 \,{\rm GeV/}c^2$.
The muon candidates are required to have $p_T>2.0\,{\rm GeV/}c$, and $\vec{p_T}^{\mu^+\mu^-}>4\,{\rm GeV/}c$,
where $\vec{p_T}^{\mu^+\mu^-}$ is the transverse component of the sum of the muon momentum vectors.
In the CDF analysis, an NN is employed to select signal events.
The event selection is validated with $\bp \to \psik$ control samples and data sidebands,
confirming that the backgrounds are correctly estimated.
The $\mu^+\mu^-$ invariant mass distributions for the three different NN ranges are shown in Fig.~\ref{fig:bsmm},
using 3.7$\fb$ of data.
In the absence of signal, we set 95\% (90\%) C.L. limits of ${\cal B}(\Bsmm) < 4.3\times 10^{-8}$ $(3.6\times 10^{-8})$~\cite{ref:cdf_9892} and
${\cal B}(\Bdmm) < 7.6\times 10^{-8}$ $(6.0\times 10^{-8})$~\cite{ref:cdf_9892}, which are currently the world's best upper limits
for both processes.
D0 performs a similar analysis but employs a Boosted Decision Tree (BDT) instead of NN.
With 5$\fb$ of data, D0 has studied the sensitivity to the
branching fraction of $\bs \to \mu^+\mu^-$ decays. An expected upper limit on the branching fraction is
${\cal B}(\Bsmm)\ < 5.3(4.3) \times 10^{-8}$ at the 95(90)\% C.L.~\cite{ref:d0_5906}.
\begin{figure}[htbp]
\begin{minipage}{0.5\hsize}
\begin{center}
\resizebox{0.7\textwidth}{!}{\includegraphics[clip]{./bsmumu_p13_mod.eps}}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\begin{tabular}{cc}
\resizebox{.6\textwidth}{!}{\includegraphics[clip]{./B56F8_RunIIb-II.eps}}\\
\resizebox{.6\textwidth}{!}{\includegraphics[clip]{./B56F9_RunIIb-II.eps}}
\end{tabular}
\end{center}
\end{minipage}
\caption{Left: Dimuon invariant mass distribution for CDF events satisfying all selection criteria for the three ranges of NN.
Right top: Dimuon invariant mass distribution for D0 events after the BDT cut. Search box remains blinded.
Right bottom: $J/{\small \psi} K^+$ events used as a control sample, in D0 data after applying the BDT cut.
}
\label{fig:bsmm}
\end{figure}
\section{Measurement of the $B^0_s$ mixing phase}
Analogously to the neutral $B^0$ system, $CP$ violation in the $B^0_s$ system may also occur through
interference of decays with and without $B^0_s$-$\overline{B}{}^0_s$ mixing.
The $B^0_s$-$\overline{B}{}^0_s$ mixing occurs via second-order weak processes. It is described in the SM by $\Delta m_s$ and $\Delta \Gamma_s$, the mass and decay-width differences of the two mass eigenstates, $B_s^H$ and $B_s^L$.
The quantity $\Delta \Gamma_s=2|\Gamma_{12}|\cos(\phi_s)$ is sensitive to new physics effects
that affect the phase $\phi_s={\rm arg}(-M_{12}/\Gamma_{12})$,
where $\Gamma_{12}$ and $M_{12}$ are the off-diagonal elements of the mass and decay matrices.
In the SM, $\phi_s^{\rm SM}$ is predicted to be small, about 0.004~\cite{Lenz:2006hd}.
If new physics carries a phase $\phi_s^{\rm NP}$ different from the SM one,
$\phi_s$ could be dominated by $\phi_s^{\rm NP}$.
In this case we can access the phase by studying the time-evolution of $\bs \to \psiphi$ decays.
The $CP$ violating phase $\beta_s^{\jpsi \phi}$ is defined as the phase between the direct $\bs \to \psiphi$ decay amplitude and mixing followed by decay amplitude.
In the SM, $\beta_s$ is given in terms of CKM matrix elements as ${\rm arg}(-V_{ts}V_{tb}^*/V_{cs}V_{cb}^*)$ and is predicted to be small, about 0.02~\cite{Lenz:2006hd}.
Since $\phi_s^{\rm NP}$ contributes to both $\phi_s$ and $\beta_s$,
a large $\beta_s$ would indicate the existence of a new physics contribution.
To extract $\Delta \Gamma_s$ and $\beta_s$, an unbinned maximum-likelihood fit is performed.
The $\bs \to \psiphi$ final state consists of both $CP$-even and $CP$-odd components.
Although the observed $CP$ asymmetry might be diluted by the admixture of opposite-$CP$ components,
an unbiased measurement can be performed by taking into account the time evolution of the angular distributions of the decay products.
Information about mixing is obtained from flavor tagging of $B^0_s$ meson, which is based on
kaon tracks associated with the $B^0_s$ meson and
the properties and decay tracks of the other $B$ hadron in the event.
Since there is an exact symmetry in the signal probability density function, which contains the strong phases among the three partial waves,
the likelihood function shows two symmetric minima in the $\Delta \Gamma_s$-$\beta_s^{\jpsi \phi}$ plane.
Both CDF and D0 have performed flavour tagged analysis on 2.8$\fb$ of data~\cite{ref:cdf_9458,Abazov:2008fj}.
CDF selected about 3200 signal events with an NN, while D0 selected about 2000 signal events with a cut-based selection.
Fig.~\ref{fig:betas_cdf_d0} (top left) shows the confidence regions for CDF and
Fig.~\ref{fig:betas_cdf_d0} (top right) shows the fit result for D0.
D0 updates its previously published result, which restricted the strong phases $\delta_{||}$ and $\delta_{\perp}$ to the values measured in the $B^0 \to J/{\small \psi} K^{*0}$ system.
D0 now removes these constraints and also includes systematic uncertainties on $\Delta m_s$.
Currently the compatibility with the SM point is 1.8$\sigma$ for CDF and 1.2$\sigma$ for D0.
We then combine the two profile likelihoods. Details of the combination are described in Ref.~\cite{ref:cdf_9787}.
Fig.~\ref{fig:betas_cdf_d0} (bottom) shows the combined results of CDF and D0, which exhibit a $2.1\sigma$ deviation from the SM.
\begin{figure}[htbp]
\begin{center}
\begin{tabular}{cc}
\resizebox{.39\textwidth}{!}{\includegraphics[clip]{./fig04.eps}} &
\resizebox{.39\textwidth}{!}{\includegraphics[trim = 340 0 0 0 , clip]{./fig05.eps}}\\
\resizebox{.39\textwidth}{!}{\includegraphics[clip]{./fig06.eps}} & {} \\
\end{tabular}
\end{center}
\caption{
Two-dimensional profile likelihood as confidence contours of $\beta_s^{J/{\small \psi}\phi}$ and $\Delta\Gamma_{s}$ for CDF's preliminary analysis using $2.8\fb$ of data (top left) and
D0's published analysis using $2.8\fb$ of data,
but allowing strong phases, $\delta_i$ to float and systematic uncertainties are included (top right),
and the combined results (bottom).
The SM expectation and uncertainty ($\beta_s^{\rm SM}$, $\Delta\Gamma_{s}^{\rm SM}$)=
($0.04$, $0.088\pm0.017{\rm ps}^{-1}$)~\cite{Lenz:2006hd} is indicated by the black line.
The region allowed in new physics model given by $\Delta\Gamma_{s}=2|\Gamma_{12}|\cos\phi_s$
is also shown (light green band).
}
\label{fig:betas_cdf_d0}
\end{figure}
\section{Conclusion}
At the Tevatron a rich $B$ physics program is ongoing.
CDF reports the first observation of $\bs \to \phimm$ and a measurement of $\rm A_{FB}(B\to K^{(*)}\mu^+\mu^-)$ in hadron collisions
that is competitive with the $e^+e^-$ B factories.
CDF updates the $B^0_s(B^0)\to \mu^+\mu^-$ analysis using $3.7\fb$ and continues to improve its world-leading upper limit.
D0 continues to improve its $B^0_s\to \mu^+\mu^-$ analysis; with $5\fb$ the expected limit is
${\cal B}(\Bsmm) < 5.3\,(4.3) \times 10^{-8}$ at the 95\,(90)\% C.L.
Both CDF and D0 have updated their $\beta_s^{\jpsi \phi}$ measurements with $\bs \to \psiphi$ using $2.8\fb$ of data.
Combined result of both experiments shows a $2.1\sigma$ deviation from the SM.
The Tevatron is performing well, and the planned running through 2011 will provide double the datasets used for the results presented here.
\input{main.bbl}
\end{document}
\section{Introduction}
Distributions with a power-law tail have been found in various fields of
natural and social science.
Examples of such studies include, for instance, avalanche sizes in a sandpile model \cite{Bak},
fluctuations in the intervals of heartbeats \cite{Peng},
fish school sizes \cite{Bonabeau},
citation numbers of scientific papers \cite{Render},
frequency of jams in Internet traffic \cite{TTS},
city sizes (see the recent review in Ref.~\cite{Saichev}),
land prices \cite{Kaizoji}--\cite{Ishikawa100},
stock market price changes \cite{Mantegna},
and firm sizes \cite{Stanley00}.
Here, variables (denoted by $x$) follow the probability density function (PDF):
\begin{eqnarray}
P(x) \propto x^{-(\mu+1)}~~~~{\rm for }~~~~x > x_{\rm th}~
\label{Pareto}
\end{eqnarray}
over some size threshold $x_{\rm th}$.
This is called Pareto's Law,
which was first observed in the field of personal income \cite{Pareto}.
The index $\mu$ is called the Pareto index.
Refer to Newman \cite{Newman} for a useful description of Pareto's Law.
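For a concrete picture, samples from the density (1) can be generated by inverse-transform sampling, and the Pareto index can be recovered by maximum likelihood (the Hill estimator). A minimal sketch in Python (the parameter values are illustrative):

```python
# Sample from P(x) ~ x^{-(mu+1)}, x > x_th, and recover mu by maximum
# likelihood: mu_hat = n / sum(ln(x_i / x_th)) (the Hill estimator).
import math
import random

def sample_pareto(n, mu, x_th, rng):
    # Inverse transform: u ~ U(0,1)  ->  x = x_th * u^(-1/mu)
    return [x_th*rng.random()**(-1.0/mu) for _ in range(n)]

def hill_estimator(xs, x_th):
    return len(xs)/sum(math.log(x/x_th) for x in xs)

rng = random.Random(42)
xs = sample_pareto(20000, 1.5, 1.0, rng)
print(hill_estimator(xs, 1.0))  # close to the true index mu = 1.5
```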
In statistical physics,
the study of distributions with a power-law tail (\ref{Pareto}) is significant because
the $k$-th moment $\langle x^k \rangle = \int dx P(x) x^k$ diverges in the case of $\mu \le k$.
It is impossible to describe the system by using the variance $\sigma^2 = \langle x^2 \rangle - \langle x \rangle^2$
or the standard deviation $\sigma$ in the case of $\mu \le 2$.
This feature comes from power-law behavior in the tail.
Furthermore,
it is worth noting that
a large portion of the overall data are included in the power-law tail.
For example, approximately $90\%$ of total sales or profits in Japanese firms
are included in the power-law tail.
In economics (especially in macroeconomics), one of the major issues is
the state of the entire economy.
In this sense, it is important to clarify the nature of the power-law tail
not only in physics but also in economics.
In general, the power law breaks down below the size threshold $x_{\rm th}$,
which suppresses the divergence of the PDF \cite{Badger}, \cite{Montroll}.
There are many distributions that have a power-law tail.
These include, for instance, the Classical Pareto Distribution (Pareto Type I Distribution),
the Pareto Type II Distribution,
the Inverse Gamma Distribution,
the Inverse Weibull Distribution,
and the $q$--Distribution, A--Distribution, and B--Distribution \cite{Aoyama book}.
In addition to these distributions, it has been hypothesized that many other distributions with a power-law tail follow the log-normal distribution for mid-sized variables below the size threshold $x_{\rm th}$:
\begin{eqnarray}
P(x) \propto \frac{1}{x} \exp \left[ - \frac{1}{2 \sigma^2} \ln^2 \frac{x}{\bar{x}} \right]~~~~{\rm for }~~~~x_{\rm min} < x < x_{\rm th}~.
\label{Log-normal}
\end{eqnarray}
Here, $\bar{x}$ is the mean value and $\sigma^2$ is the variance.
A lower bound of the mid-scale range $x_{\rm min}$ is often related to the lower bound
of an exhaustive set of data.
A pseudo log-normal distribution is approximately derived from A--Distribution or B--Distribution
in the mid-sized range \cite{Aoyama book}.
The study of distributions in the mid-scale range below the size threshold $x_{\rm th}$
is as important as the study of the power-law tail.
In physics, we are interested not only in the mechanism generating a power-law tail
but also in the reason for the tail breaking.
In economics, we should note that the majority of firms are mid-sized.
For instance, in sales or profits data, more than $90\%$ of the total number of firms
are in the mid-scale range.
In this study, by examining exhaustive business data of Japanese firms
that nearly cover the mid- and large-scale ranges,
the authors investigate the relevant distributions with a power-law tail.
This research is expected to be useful for understanding phenomena not only in economics
but also in physics.
On the one hand,
it has been shown that Pareto's Law and the log-normal distribution can be derived
by assuming some model.
For example, a multiplicative process with boundary
constraints and additive noise can generate Pareto's Law \cite{Levy}.
On the other hand, by using no model,
Fujiwara et al. have recently shown that Pareto's Law (\ref{Pareto}) is derived from
Gibrat's Law and from the detailed balance observed in the large-scale range
of exhaustive business data \cite{Fujiwara}.
The relations among laws observed in exhaustive business data are important
for examining the characteristics of distributions based on firm-size.
For instance, in the study of Fujiwara et al.,
it was found that the Pareto index $\mu$ is related to the difference between
the positive growth-rate distribution and the negative one.
Furthermore, along the lines of their study,
one of the authors (A.~I) has shown that the log-normal distribution (\ref{Log-normal})
can be inferred from
detailed balance and from Non-Gibrat's Property observed in the profits data of the mid-scale range
\cite{Ishikawa}.
The study of the growth-rate distribution is an interesting subject in itself, and an ongoing investigation into this issue has progressed recently \cite{Riccaboni}.
Detailed balance means that the system is thermodynamically in equilibrium,
the state of which is described as
\begin{eqnarray}
P_{J}(x_T, x_{T+1}) = P_{J}(x_{T+1}, x_T)~.
\label{DetailedBalance}
\end{eqnarray}
Here, $x_T$ and $x_{T+1}$ are firm sizes at two successive points in time.
In Eq.~(\ref{DetailedBalance}), the joint PDF $P_{J}(x_T,x_{T+1})$
is symmetric under the time reversal exchange $x_T \leftrightarrow x_{T+1}$.
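A minimal numerical sketch of this symmetry (using synthetic data, not the firm database): if every pair $(x_T, x_{T+1})$ appears together with its reverse, the binned joint distribution is symmetric under the exchange.

```python
# Sketch with synthetic data: a joint sample that is symmetric under
# x_T <-> x_{T+1} yields a two-dimensional histogram that is symmetric
# across the diagonal, which is the property that the statistical tests
# of detailed balance probe.
import math
import random
from collections import Counter

random.seed(0)

pairs = []
for _ in range(10_000):
    a = random.lognormvariate(0.0, 1.0)
    b = random.lognormvariate(0.0, 1.0)
    pairs += [(a, b), (b, a)]  # enforce time-reversal symmetry by hand

def log_bin(value, width=0.5):
    """Index of a logarithmically equal bin, as in the empirical tests."""
    return math.floor(math.log10(value) / width)

counts = Counter((log_bin(a), log_bin(b)) for a, b in pairs)
asymmetry = sum(abs(counts[(i, j)] - counts[(j, i)]) for i, j in counts)
print(asymmetry)  # 0: the binned joint distribution is exactly symmetric
```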
Gibrat's Law and Non-Gibrat's Property are observed in the distributions of
firm-size growth rate $R=x_{T+1}/x_T$.
The conditional PDF of the growth rate $Q(R|x_T)$ is defined
as $Q(R|x_T) = P_{J}(x_T,R)/P(x_T)$ by using the PDF $P(x_T)$ and
the joint PDF $P_{J}(x_T,R)$.
Gibrat's Law, which is observed in the large-scale range,
implies that the conditional PDF $Q(R|x_T)$ is independent of the initial
value $x_T$ \cite{Gibrat}:
\begin{eqnarray}
Q(R|x_T) = Q(R)~.
\label{Gibrat}
\end{eqnarray}
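The conditional PDF above can be estimated directly by binning. The following sketch uses a synthetic multiplicative process in which Gibrat's Law holds by construction (the growth rate is drawn independently of $x_T$); it illustrates the estimation procedure, not the firm data themselves.

```python
# Estimating the conditional PDF Q(R|x_T) from pairs (x_T, x_{T+1}):
# bin the initial value x_T logarithmically and collect the growth
# rate R = x_{T+1}/x_T within each bin.  The synthetic process below
# draws R independently of x_T, so Gibrat's Law holds by construction
# and the conditional distributions should coincide across bins.
import math
import random
from collections import defaultdict

random.seed(1)

data = []
for _ in range(60_000):
    x_t = 10.0 ** random.uniform(1.0, 7.0)    # initial firm size
    growth = random.lognormvariate(0.0, 0.2)  # R, independent of x_t
    data.append((x_t, growth * x_t))

by_decade = defaultdict(list)
for x_t, x_t1 in data:
    by_decade[int(math.log10(x_t))].append(math.log10(x_t1 / x_t))

# Under Gibrat's Law the mean of r = log10(R) is the same in every bin.
bin_means = {n: sum(rs) / len(rs)
             for n, rs in sorted(by_decade.items()) if len(rs) > 100}
for n, m in bin_means.items():
    print(n, round(m, 4))
```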
Sutton \cite{Sutton} provides an instructive resource for obtaining the proper perspective on
Gibrat's Law.
Non-Gibrat's Property reflects the dependence of the growth-rate distribution
on the initial value $x_T$.
The following properties are observed in the mid-scale range of positive profits data of
Japanese firms \cite{Ishikawa}:
\begin{eqnarray}
Q(R|x_T)&=&d(x_T)~R^{- t_{+}(x_T) - 1}~~~~~{\rm for}~~R > 1~,
\label{FirstNon-Gibrat'sLaw1}
\\
Q(R|x_T)&=&d(x_T)~R^{+ t_{-}(x_T) - 1}~~~~~{\rm for}~~R < 1~,
\label{FirstNon-Gibrat'sLaw2}
\\
t_{\pm}(x_T) &=& \pm \alpha~\ln x_T + C_{\pm}~.
\label{FirstNon-Gibrat'sLaw3}
\end{eqnarray}
Here, $\alpha$ and $C_{\pm}$ are positive constants.
In this composite Non-Gibrat's Property
(\ref{FirstNon-Gibrat'sLaw1})--(\ref{FirstNon-Gibrat'sLaw3}),
the probability
of positive growth decreases and the probability of negative growth increases symmetrically
as the initial value $x_T$ increases in the mid-scale range.
It is particularly noteworthy that the shape of the growth-rate distribution
(\ref{FirstNon-Gibrat'sLaw1})--(\ref{FirstNon-Gibrat'sLaw2}) uniquely determines
the change in the growth-rate distribution (\ref{FirstNon-Gibrat'sLaw3})
under detailed balance (\ref{DetailedBalance}).
Moreover, the rate-of-change parameter $\alpha$ appears in the log-normal distribution
(\ref{Log-normal}).
We designate (\ref{FirstNon-Gibrat'sLaw1})--(\ref{FirstNon-Gibrat'sLaw3}) as
Non-Gibrat's First Property
to distinguish it from another Non-Gibrat's Property
that is observed in sales data.
The shape of the growth-rate distribution
(\ref{FirstNon-Gibrat'sLaw1})--(\ref{FirstNon-Gibrat'sLaw2})
is linear in log-log scale.
This type of growth-rate
distribution is observed in profits and income data of firms
(for instance \cite{Okuyama}, \cite{Ishikawa10}, \cite{Economics}).
In contrast, it has been reported in various articles that
the growth-rate distributions of assets, sales, number of employees in firms,
and personal income
have wider tails than those of profits and income in log-log scale
(for instance \cite{Amaral}, \cite{Fujiwara}, \cite{Matia}, \cite{Fu},
\cite{Buldyrev}, \cite{Economics}).
In this case, the shape of the growth-rate distribution is different from
Eqs.~(\ref{FirstNon-Gibrat'sLaw1}) and (\ref{FirstNon-Gibrat'sLaw2}).
There must be, therefore, another Non-Gibrat's Property corresponding to this shape.
In fact, it has been reported in several studies that
a Non-Gibrat's Property different from Non-Gibrat's First Property
exists in the mid-scale range
of assets and sales of firms (for instance \cite{Aoyama}--\cite{Takayasu}).
In this study, we report the following findings by employing the sales data of Japanese firms,
which include not only data in the large-scale range but also those in the mid-scale range.
\begin{enumerate}
\item Detailed balance (\ref{DetailedBalance}) is confirmed in the mid- and large-scale ranges of sales data.
\item In not only the large-scale range but also the mid-scale range of sales data, the growth-rate distributions have wider tails than those of profits in log-log scale.
\item Under detailed balance (\ref{DetailedBalance}), the allowed change of the growth-rate distribution in the mid-scale range is analytically determined by using empirical data. The change is different from that of profits. We call this Non-Gibrat's Second Property.
\item A log-normal distribution is derived from Non-Gibrat's Second Property and from detailed balance. This is verified with empirical data.
\end{enumerate}
From these results, we conclude that
the shape of the growth-rate distribution
determines the type of Non-Gibrat's Property in the mid-scale range.
\section{Non-Gibrat's First Property}
In this section, we review the analytic discussion in Ref.~\cite{Ishikawa}
and confirm it by applying the results to newly obtained data.
In the analytic discussion, detailed balance (\ref{DetailedBalance}) and
the shape of the growth-rate distribution
(\ref{FirstNon-Gibrat'sLaw1})--(\ref{FirstNon-Gibrat'sLaw2}) lead uniquely to
a change in the growth-rate distribution (\ref{FirstNon-Gibrat'sLaw3}).
In addition, Non-Gibrat's First Property
and detailed balance derive a log-normal distribution
(\ref{Log-normal}) in the mid-scale range.
In this study, we employ profits and sales data supplied by the Research Institute
of Economy, Trade and Industry, IAA (RIETI) \cite{RIETI}.
In this section we analyze profits data, and sales data are analyzed
in the next section.
The data set, which was created by TOKYO SHOKO
RESEARCH, LTD. \cite{TSR} in 2005, includes approximately
800,000 Japanese firms over a period of three years: the
current year, the preceding year, and the year before that.
The number of firms is approximately the same as the actual number of active Japanese firms.
This database is considered nearly comprehensive, at least in the mid- and
large-scale ranges.
In this study, we investigate the joint PDF $P_{J}(x_T,x_{T+1})$ and
the distribution of the growth rate $R=x_{T+1}/x_T$.
Therefore,
by using data of each firm in the previous three years,
we analyze
a data set that has two values at two successive points in time as follows:
$(x_T, x_{T+1})$
= (data in preceding year, data in current year) $\cup$
(data in year before last, data in preceding year).
Here, $\cup$ indicates set-theoretic union.
This superposition of data is
employed in order to secure a statistically sufficient sample size.
This procedure is
allowed in cases where
the economy is stable, that is, thermodynamically in equilibrium.
The validity is checked by detailed balance, as described below.
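A minimal sketch of this superposition (the field names below are hypothetical, not from the database):

```python
# Building the superposed data set (x_T, x_{T+1}) from a three-year
# panel: each firm contributes two pairs of values at successive
# points in time.  The field names are illustrative only.
firms = [
    {"year_before_last": 80.0, "preceding_year": 95.0, "current_year": 110.0},
    {"year_before_last": 120.0, "preceding_year": 150.0, "current_year": 140.0},
]

pairs = []
for firm in firms:
    # (data in preceding year, data in current year)
    pairs.append((firm["preceding_year"], firm["current_year"]))
    # (data in year before last, data in preceding year)
    pairs.append((firm["year_before_last"], firm["preceding_year"]))

print(len(pairs))  # two pairs per firm
```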
\begin{figure}[t]
\begin{center}
\includegraphics[height=6cm,width=6.5cm]{ProfitsDB-total-2.eps}
\caption{\label{ProfitsDB-total} Scatter plot of positive profits in the database.
Here, $x_T$ and $x_{T+1}$ are positive profits of individual firms in consecutive years.}
\end{center}
\end{figure}
First, detailed balance (\ref{DetailedBalance}) is observed in profits data.
Note that only positive-profits data are analyzed here,
since we assume that non-negligible negative profits are not listed in the database.
Negative-profits data are thus not regarded as exhaustive.
We employ 622,420 data sets $(x_T, x_{T+1})$ that have two positive profits
at two successive points in time.
Figure~\ref{ProfitsDB-total} shows the joint PDF $P_{J}(x_T,x_{T+1})$
as a scatter plot of individual firms.
Detailed balance (\ref{DetailedBalance}) is confirmed by the Kolmogorov--Smirnov (KS), Wilcoxon--Mann--Whitney (WMW), and Brunner--Munzel (BM) tests.
In the statistical tests, the range of $x_T$ is divided into $N$ bins
as $i_0 \le i_1 \le \cdots \le i_{n-1} \le i_{n} \le \cdots \le i_{N}$
to approximately equalize the number of data
in each bin ``$x_T \in [i_{n-1}, i_{n})$ and $x_T>x_{T+1}$.''
Here, $i_0$ and $i_N$ are the lower and the upper bounds of $x_T$, respectively.
We compare the distribution sample for
``$P_{J}(x_T \in [i_{n-1}, i_{n}), x_{T+1})$ and $x_T>x_{T+1}$''
with another sample for
``$P_{J}(x_T, x_{T+1} \in [i_{n-1}, i_{n}))$ and $x_T<x_{T+1}$''
($n = 1, 2, \cdots, N$)
by making the null hypothesis that these two samples are taken from the same parent
distribution.
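To make the per-bin test concrete, here is a simplified, self-contained sketch of the two-sided WMW (rank-sum) test using the normal approximation. It omits the variance correction for ties, so an actual analysis of the profits data, which contains many tied round-number values, would use a library implementation instead.

```python
# Simplified two-sided Wilcoxon-Mann-Whitney test via the normal
# approximation (the variance correction for ties is omitted).
import math

def wmw_pvalue(sample_a, sample_b):
    # Rank all values jointly, assigning average ranks to ties.
    pooled = sorted(sample_a + sample_b)
    rank_of = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        rank_of[pooled[i]] = (i + 1 + j) / 2.0  # average of ranks i+1..j
        i = j
    n1, n2 = len(sample_a), len(sample_b)
    r1 = sum(rank_of[v] for v in sample_a)
    u = r1 - n1 * (n1 + 1) / 2.0               # U statistic of sample_a
    mean_u = n1 * n2 / 2.0
    sd_u = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (u - mean_u) / sd_u
    return math.erfc(abs(z) / math.sqrt(2.0))  # two-sided p value

# Identical samples: U equals its mean, so p = 1.
same = wmw_pvalue(list(range(1, 21)), list(range(1, 21)))
# Clearly separated samples: the null hypothesis is rejected.
shifted = wmw_pvalue(list(range(1, 21)), list(range(100, 120)))
print(same, shifted)
```

Large $p$ values mean the null hypothesis of a common parent distribution is not rejected, which is the criterion applied bin by bin below.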
Each $p$ value of the WMW test for the case of $N=2000$ is shown in Fig.~\ref{KStestProfits-total}.
Note that the profits data contain a large number of same-value amounts,
which are round numbers: $100$, $200$, $\cdots$, $1000$, $2000$, $\cdots$,
$10000$, $20000$, $\cdots$.
This phenomenon is frequently observed in economic data.
A bin with a round-number amount may contain an exceptionally large number of data in this method of division.
For the case of $N = 2000$, almost all bins typically contain $200$ data;
however, a bin with the round number of $5000$, for instance,
contains an exceptional $4437$ data.
In order to bring the average number of data per bin close to this typical value,
an appropriate number of empty bins is inserted at bins with round-number amounts, as needed (Fig.~\ref{Fig}).
In the case of $N=2000$, there are $759$ empty bins.
The $p$ values for the remaining $1241$ bins are depicted
in Fig.~\ref{KStestProfits-total},
in which $1141$ $p$ values exceed $0.05$.
Regardless of the division number $N$ and the kind of test, $p$ values exceed $0.05$
in approximately $92\%$ of bins.
This means that the null hypothesis is not rejected at the $5\%$ significance level
in approximately $92\%$ of the range.
This result does not change in the case where
the range of $x_T$ is divided into logarithmically equal bins.
Consequently, the detailed balance (\ref{DetailedBalance})
in Fig.~\ref{ProfitsDB-total} is generally confirmed.
\begin{figure}[t]
\begin{center}
\includegraphics[height=6cm]{KStestProfits-total-2.eps}
\caption{\label{KStestProfits-total} Each $p$ value of the
WMW test for the scatter plot of positive-profits data points in Fig.~\ref{ProfitsDB-total}.}
\end{center}
\end{figure}\begin{figure}[h!]
\begin{center}
\includegraphics[height=3cm]{Fig.eps}
\caption{\label{Fig} A bin with a round-number amount contains an exceptionally
large number of data.
In order to generally equalize the average amount of data in bins to the typical value, empty bins are inserted at bins with round-number amounts as needed.}
\end{center}
\end{figure}
Second, we divide the range of the initial value $x_T$ into logarithmically equal bins as
$x_T \in [10^{1+0.4(n-1)},10^{1+0.4n})$ $(n=1,2,\cdots,15)$
in order to identify the shape of the growth-rate distribution and
the change as the initial value $x_T$ increases.
The conditional PDFs $q(r|x_T)$ of the logarithmic growth rate
$r=\log_{10} R$ are shown in
Figs.~\ref{Profits-totalGrowthRate-1}--\ref{Profits-totalGrowthRate-3}.
\begin{figure}
\begin{center}
\includegraphics[height=6cm]{Profits-totalGrowthRate-1.eps}
\caption{\label{Profits-totalGrowthRate-1} Conditional PDFs of positive-profits growth rate
in the small-scale range ($10^{1} \le x_T < 10^{3}$).
Here, $x_T$ and $x_{T+1}$ are positive profits in consecutive years, in thousand yen.}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[height=6cm]{Profits-totalGrowthRate-2.eps}
\caption{\label{Profits-totalGrowthRate-2} Conditional PDFs of positive-profits growth rate
in the mid-scale range ($10^{3} \leq x_T < 10^{5}$).
Here, $x_T$ and $x_{T+1}$ are positive profits in consecutive years, in thousand yen.}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[height=6cm]{Profits-totalGrowthRate-3.eps}
\caption{\label{Profits-totalGrowthRate-3} Conditional PDFs of positive-profits growth rate
in the large-scale range ($10^{5} \le x_T < 10^{7}$).
Here, $x_T$ and $x_{T+1}$ are positive profits in consecutive years, in thousand yen.}
\end{center}
\end{figure}
In Figs.~\ref{Profits-totalGrowthRate-2} and \ref{Profits-totalGrowthRate-3},
the growth-rate distributions in the mid- and large-scale ranges are
approximated by a linear function of $r$:
\begin{eqnarray}
\log_{10}q(r|x_T)&=&c(x_T) - t_{+}(x_T)~r~~~~~{\rm for}~~r > 0~,
\label{approximation1}
\\
\log_{10}q(r|x_T)&=&c(x_T) + t_{-}(x_T)~r~~~~~{\rm for}~~r < 0~.
\label{approximation2}
\end{eqnarray}
The approximation (\ref{approximation1})--(\ref{approximation2}) is equivalent to
Eqs.~(\ref{FirstNon-Gibrat'sLaw1}) and (\ref{FirstNon-Gibrat'sLaw2})
by using relations $\log_{10} q(r|x_T) = \log_{10} Q(R|x_T) + r + \log_{10} (\ln 10)$ and
$d(x_T) = 10^{c(x_T)}/\ln10$.
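The first of these relations follows from the change of variables $r = \log_{10} R$, which preserves the probability measure:
\begin{eqnarray}
q(r|x_T)~dr = Q(R|x_T)~dR~,~~~~~\frac{dR}{dr} = R \ln 10~,
\end{eqnarray}
so that $q(r|x_T) = \ln 10~R~Q(R|x_T)$; taking $\log_{10}$ of both sides gives the stated relation.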
From $\int^{\infty}_{0} dR~Q(R|x_T)=1$, the normalization coefficient $d(x_T)$
(or the intercept $c(x_T)$) is determined as
\begin{eqnarray}
\frac{1}{d(x)} = \frac{1}{t_{+}(x)} + \frac{1}{t_{-}(x)}~.
\label{dandt}
\end{eqnarray}
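This follows by splitting the normalization integral at $R=1$ and using the shape (\ref{FirstNon-Gibrat'sLaw1})--(\ref{FirstNon-Gibrat'sLaw2}):
\begin{eqnarray}
1 = \int^{1}_{0} dR~d(x)~R^{t_{-}(x)-1} + \int^{\infty}_{1} dR~d(x)~R^{-t_{+}(x)-1}
= \frac{d(x)}{t_{-}(x)} + \frac{d(x)}{t_{+}(x)}~,
\end{eqnarray}
where the convergence of both integrals requires $t_{\pm}(x) > 0$.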
Following the discussion in a previous work~\cite{Ishikawa},
we derive the change in the growth-rate distribution (\ref{FirstNon-Gibrat'sLaw3})
from the shape of the growth-rate distribution
(\ref{FirstNon-Gibrat'sLaw1})--(\ref{FirstNon-Gibrat'sLaw2})
under detailed balance (\ref{DetailedBalance}) and then
derive the log-normal distribution in the mid-scale range.
Under the exchange of variables from $(x_T, x_{T+1})$ to $(x_T,R)$,
two joint PDFs $P_{J}(x_T, x_{T+1})$ and $P_{J}(x_T, R)$ are related to each other
as $P_{J}(x_T, R) = x_T P_{J} (x_T, x_{T+1})$.
Substituting the joint PDF $P_{J}(x_T, R)$ for the conditional PDF $Q(R|x_T)$
and using detailed balance (\ref{DetailedBalance}), we obtain
\begin{eqnarray}
\frac{P(x_T)}{P(x_{T+1})} = \frac{1}{R} \frac{Q(R^{-1}|x_{T+1})}{Q(R|x_T)}~.
\label{DetailedBalance2}
\end{eqnarray}
By substituting the conditional PDF for the shape of the growth-rate
distribution
(\ref{FirstNon-Gibrat'sLaw1})--(\ref{FirstNon-Gibrat'sLaw2}),
another expression of detailed balance (\ref{DetailedBalance2}) is reduced to
\begin{eqnarray}
\frac{\tilde{P}(x_T)}{\tilde{P}(x_{T+1})} = R^{+t_{+}(x_T)-t_{-}(x_{T+1})+1}~
\label{DetailedBalance3}
\end{eqnarray}
for the case of $R>1$.
Here, we denote $\tilde{P}(x) = d(x)~P(x)$.
By expanding Eq.~(\ref{DetailedBalance3}) around $R=1$ with $x_T \to x$ and $x_{T+1} \to R~x$,
the following three differential equations are obtained:
\begin{eqnarray}
\Bigl[1+t_{+}(x)-t_{-}(x) \Bigr] \tilde{P}(x)
+ x~ {\tilde{P}}^{'}(x) = 0~,
\label{DE1}
\end{eqnarray}
\begin{eqnarray}
{t_{+}}^{'}(x)+{t_{-}}^{'}(x)=0~,~~~
{t_{+}}^{'}(x)+x~{t_{+}}^{''}(x)=0~.
\label{DE2}
\end{eqnarray}
The same differential equations are obtained for $R < 1$.
Equations~(\ref{DE2}) uniquely fix $t_{\pm}(x_T)$
as Eq.~(\ref{FirstNon-Gibrat'sLaw3}).
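Explicitly, the second equation of (\ref{DE2}) is a total derivative,
\begin{eqnarray}
{t_{+}}^{'}(x)+x~{t_{+}}^{''}(x) = \frac{d}{dx}\left[ x~{t_{+}}^{'}(x) \right] = 0~,
\end{eqnarray}
so $x~{t_{+}}^{'}(x) = \alpha$ for some constant $\alpha$ and $t_{+}(x) = \alpha \ln x + C_{+}$.
The first equation of (\ref{DE2}) then gives ${t_{-}}^{'}(x) = -{t_{+}}^{'}(x) = -\alpha/x$,
hence $t_{-}(x) = -\alpha \ln x + C_{-}$, which is Eq.~(\ref{FirstNon-Gibrat'sLaw3}).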
Now, let us verify this by empirical data.
Figure~\ref{Profits-total-Evaluation} shows $t_{\pm}(x_T)$ and $c(x_T)$
estimated by fitting the approximation (\ref{approximation1})--(\ref{approximation2})
to each growth-rate distribution in
Figs.~\ref{Profits-totalGrowthRate-1}--\ref{Profits-totalGrowthRate-3}.
In Fig.~\ref{Profits-total-Evaluation}, $c(x_T)$ is fixed as the empirical value
and $t_{\pm}(x_T)$ is estimated by using the least-squares method.
\begin{figure}[b]
\begin{center}
\includegraphics[height=6cm]{Profits-total-Evaluation.eps}
\caption{\label{Profits-total-Evaluation} Estimations of $c(x_T)$ and $t_{\pm}(x_T)$.
Here, $x_T$ is the lower bound of each bin, in thousand yen, and
$c(x_T)$ is the original value of the growth-rate distribution.
From left, each point on the graph represents $n=1,2, \cdots, 15$.}
\end{center}
\end{figure}
In Fig.~\ref{Profits-totalGrowthRate-1},
each growth-rate distribution is poorly approximated by the linear function
(\ref{approximation1})--(\ref{approximation2}),
and the values for $n=1,2,\cdots,5$ in Fig.~\ref{Profits-total-Evaluation} are untrustworthy.
In Fig.~\ref{Profits-totalGrowthRate-2}, however, the linear approximation
(\ref{approximation1})--(\ref{approximation2}) is appropriate.
Applying the change in the growth-rate distribution
$t_{\pm}(x_T)$ (\ref{FirstNon-Gibrat'sLaw3})
to $n=6, 7, 8$ $(10^{3} \le x_T < 10^{4.2})$ in Fig.~\ref{Profits-total-Evaluation},
we obtain the rate-of-change parameter $\alpha=0.11 \pm 0.02$ from $t_+(x_T)$
and $\alpha=0.11 \pm 0.03$ from $t_-(x_T)$ by using the least-squares method.
This coincidence of two estimated values guarantees Non-Gibrat's First Property
(\ref{FirstNon-Gibrat'sLaw1})--(\ref{FirstNon-Gibrat'sLaw3}) in the empirical data.
We regard $10^{3} \le x_T < 10^{4.2}$ as the mid-scale range.
In Fig.~\ref{Profits-totalGrowthRate-3}, the growth-rate distribution barely changes
as $n$ increases.
This means that Gibrat's Law (\ref{Gibrat}) is valid in the large-scale range.
In Fig.~\ref{Profits-total-Evaluation}, the values $t_{\pm}(x_T)$ vary
in the large-scale range,
since the number of data in Fig.~\ref{Profits-totalGrowthRate-3}
is statistically insufficient to estimate $t_{\pm}(x_T)$
by the least-squares method.
However, by measuring the positive and negative standard deviations $\sigma_{\pm}$ of
each growth-rate distribution
in Figs.~\ref{Profits-totalGrowthRate-1}--\ref{Profits-totalGrowthRate-3},
we confirmed that the growth-rate distribution only slightly changes in the range
$x_T \ge 10^5$ (Fig.~\ref{vvar}).
From Fig.~\ref{vvar}, we regard $x_T \ge 10^5$ as the large-scale range
and set $\alpha=0$ in this range.
Strictly speaking, the parameter $\alpha$ is a constant and should not take different values
in different ranges.
However, in the database, a large number of firms stay in the same range
for two successive years.
This parameterization is, therefore, generally suitable for describing the PDF.
\begin{figure}[t]
\begin{center}
\includegraphics[height=6cm]{vvar.eps}
\caption{\label{vvar} Estimations of $\sigma_{\pm}(x_T)$.
Here, $x_T$ is the lower bound of each bin, in thousand yen.
From left, each point on the graph represents $n=1,2, \cdots, 15$.}
\end{center}
\end{figure}
In Fig.~\ref{Profits-total-Evaluation},
$c(x_T) = \log_{10} (d(x_T)\ln10)$ hardly changes in the mid- and large-scale ranges $x_T \ge 10^3$.
This is consistent with $C_{\pm} \gg \alpha \ln x_T$
in Eqs.~(\ref{FirstNon-Gibrat'sLaw3}) and (\ref{dandt}).
Consequently, by approximation we determine that the dependence of $d(x_T)$ on $x_T$ is negligible
in the mid- and large-scale ranges.
Using $t_{\pm}(x)$ (\ref{FirstNon-Gibrat'sLaw3}),
Eq.~(\ref{DE1}) uniquely determines the PDF of $x$ as
\begin{eqnarray}
P(x) = C~{x}^{-(\mu+1)}
~\exp \left[ - \alpha \ln^2 x \right]
~~~~~{\rm for}~~x > x_{\rm min}~.
\label{HandM}
\end{eqnarray}
Here, we regard $d(x)$ in $\tilde{P}(x)=d(x)~P(x)$ as a constant and
denote $\mu=C_+ - C_-$.
The solutions (\ref{FirstNon-Gibrat'sLaw3}) and (\ref{HandM})
satisfy Eq.~(\ref{DetailedBalance3}) beyond perturbation around $R = 1$, and thus
these are not only necessary but also sufficient.
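For reference, Eq.~(\ref{DE1}) integrates as follows. Substituting $t_{+}(x)-t_{-}(x) = 2\alpha \ln x + \mu$ with $\mu = C_+ - C_-$ and treating $d(x)$ as constant,
\begin{eqnarray}
\frac{d \ln P(x)}{d \ln x} = -(\mu+1) - 2\alpha \ln x
~~~\Rightarrow~~~
\ln P(x) = -(\mu+1) \ln x - \alpha \ln^2 x + {\rm const}~,
\end{eqnarray}
which is Eq.~(\ref{HandM}).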
Figure~\ref{ProfitsDistribution-total} shows that the resultant PDF (\ref{HandM})
fits correctly with the empirical profits data.
In the large-scale range ($\alpha=0$),
the PDF (\ref{HandM}) behaves as Pareto's Law (\ref{Pareto}).
The Pareto index is estimated as approximately $\mu \sim 1$
in the large-scale range ($x \ge 10^5$) of Fig.~\ref{ProfitsDistribution-total}.
In the mid-scale range, the PDF (\ref{HandM}) behaves as the
log-normal distribution (\ref{Log-normal})
with $\alpha = 1/(2 \sigma^2)$, $\mu = - \ln \bar{x}/(\sigma^2)$.
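This correspondence is obtained by completing the square in the exponent of Eq.~(\ref{HandM}):
\begin{eqnarray}
x^{-(\mu+1)}~\exp\left[-\alpha \ln^2 x\right]
= \frac{1}{x}~\exp\left[-\alpha \left( \ln x + \frac{\mu}{2\alpha} \right)^2 + \frac{\mu^2}{4\alpha}\right]~,
\end{eqnarray}
which matches the log-normal form (\ref{Log-normal}) with $\alpha = 1/(2\sigma^2)$ and $\ln \bar{x} = -\mu/(2\alpha) = -\mu \sigma^2$.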
Applying the PDF (\ref{HandM}) to the mid-scale range ($10^{3} \le x < 10^{4.2}$)
of Fig.~\ref{ProfitsDistribution-total}, we obtain the rate-of-change parameter
$\alpha = 0.082 \pm 0.089$
by using the least-squares method.
The error bar is not small because we have applied the least-squares method
to the quadratic curve in log-log scale.
The estimated value ($\alpha = 0.082 \pm 0.089$) is, however, consistent with
the values estimated by the change in $t_{\pm}(x_T)$ ($\alpha = 0.11 \pm 0.02$ or
$0.11 \pm 0.03$).
From these results, we conclude that Non-Gibrat's First Property is confirmed
by the empirical data.
\begin{figure}[!h]
\begin{center}
\includegraphics[height=6cm]{ProfitsDistribution-total.eps}
\caption{\label{ProfitsDistribution-total}
A PDF of positive profits in the database.
Pareto's Law is observed in the large-scale range ($x \ge 10^5$)
and in the log-normal distribution in the mid-scale range ($10^3 \le x < 10^{4.2}$).}
\end{center}
\end{figure}
\section{Non-Gibrat's Second Property}
In this section, we investigate another Non-Gibrat's Property
observed in the mid-scale
range of sales data. This is the main aim of this study.
First, detailed balance (\ref{DetailedBalance}) is also observed in sales data.
Here, we employ 1,505,108 data sets $(x_T, x_{T+1})$ that have two sales
at two successive points in time.
Figure~\ref{SalesDB-total} shows the joint PDF $P_{J}(x_T,x_{T+1})$
as a scatter plot of individual firms.
\begin{figure}
\begin{center}
\includegraphics[height=6cm]{SalesDB-total-2.eps}
\caption{\label{SalesDB-total} Scatter plot of sales in the database.
Here, $x_T$ and $x_{T+1}$ are sales of individual firms in consecutive years.}
\end{center}
\end{figure}
Detailed balance (\ref{DetailedBalance}) is also confirmed by using the
KS,
WMW, and BM tests
in the same manner as in the previous section.
Figure~\ref{KStestSales-total} shows each $p$ value of the BM test for the
$N=5000$ case.
Regardless of the division number $N$ and the kind of test, $p$ values exceed $0.05$
in approximately $82\%$ of bins.
This means that the null hypothesis is not rejected at the $5\%$ significance level
in approximately $82\%$ of the range.
Note that the sales data also contain a large number of same-value amounts, which are round numbers.
The $p$ values of the statistical test for bins containing a large number of round values
are unusually small.
In this situation, $82\%$ is acceptable.
The percentage is slightly higher in the case where the range of $x_T$ is divided into
logarithmically equal bins.
We assume, therefore, that detailed balance (\ref{DetailedBalance})
in Fig.~\ref{SalesDB-total} is generally verified.
\begin{figure}
\begin{center}
\includegraphics[height=6cm]{KStestSales-total.eps}
\caption{\label{KStestSales-total} Each $p$ value of the BM test for the scatter plot of sales data points in Fig.~\ref{SalesDB-total}.}
\end{center}
\end{figure}
Second, we divide the range of the initial value $x_T$ into logarithmically equal bins as
$x_T \in [10^{3+0.4(n-1)},10^{3+0.4n})$ $(n=1,2,\cdots,15)$.
The conditional growth-rate distributions $q(r|x_T)$ are shown in
Figs.~\ref{Sales-total-GrowthRate-1}--\ref{Sales-total-GrowthRate-3}.
\begin{figure}[h]
\begin{center}
\includegraphics[height=6cm]{Sales-total-GrowthRate-1.eps}
\caption{\label{Sales-total-GrowthRate-1} Conditional PDFs of sales growth rate in the small- and mid-scale ranges
($10^{3} \le x_T < 10^{5}$).
Here, $x_T$ and $x_{T+1}$ are sales in consecutive years, in thousand yen.}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[height=6cm]{Sales-total-GrowthRate-2.eps}
\caption{\label{Sales-total-GrowthRate-2} Conditional PDFs of sales growth rate in the mid- and large-scale ranges ($10^{5} \le x_T < 10^{7}$).
Here, $x_T$ and $x_{T+1}$ are sales in consecutive years, in thousand yen.}
\end{center}
\end{figure}
\begin{figure}[!h]
\begin{center}
\includegraphics[height=6cm]{Sales-total-GrowthRate-3.eps}
\caption{\label{Sales-total-GrowthRate-3} Conditional PDFs of sales growth rate in the large-scale range
($10^{7} \le x_T < 10^{9}$).
Here, $x_T$ and $x_{T+1}$ are sales in consecutive years, in thousand yen.}
\end{center}
\end{figure}
Each growth-rate distribution in
Figs.~\ref{Sales-total-GrowthRate-1}--\ref{Sales-total-GrowthRate-3}
exhibits curvature.
It is difficult to approximate these growth-rate
distributions by the linear form
(\ref{approximation1})--(\ref{approximation2}) used in the profits case.
As the simplest extension,
we add a second-order term in $r$ to express this curvature
as follows:
\begin{eqnarray}
\log_{10}q(r|x_T)&=&c(x_T) - t_{+}(x_T)~r+\ln10~u_{+} (x_T)~r^2
~~~~~{\rm for}~~r_c > r > 0~,
\label{approximation3}
\\
\log_{10}q(r|x_T)&=&c(x_T) + t_{-}(x_T)~r+\ln10~u_- (x_T)~r^2
~~~~~{\rm for}~~- r_c < r < 0~.
\label{approximation4}
\end{eqnarray}
Note that we must introduce a cut $r_c$ in order to normalize the probability integration
as $\int^{10^{r_c}}_{10^{-r_c}} dR~Q(R|x_T) = 1$,
since Eqs.~(\ref{approximation3}) and (\ref{approximation4})
are quadratic with respect to $r$.
From this normalization condition, $c(x_T)$ can be expressed by using
$t_{\pm}(x_T)$, $u_{\pm}(x_T)$, and $r_c$.
The expression is quite complicated, and
it is later observed that $c(x_T)$ only slightly depends on $x_T$ in the empirical data.
Therefore, we do not describe the expression here.
The approximation (\ref{approximation3})--(\ref{approximation4}) is rewritten as
\begin{eqnarray}
Q(R|x_T)&=&d(x_T)~R^{- 1 - t_{+}(x_T) + u_{+}(x_T) \ln R}~~~~~{\rm for}~~R > 1~,
\label{SecondNon-Gibrat'sLaw1}
\\
Q(R|x_T)&=&d(x_T)~R^{- 1 + t_{-}(x_T) + u_{-}(x_T) \ln R}~~~~~{\rm for}~~R < 1~.
\label{SecondNon-Gibrat'sLaw2}
\end{eqnarray}
By using this shape, in the case of $R>1$,
detailed balance (\ref{DetailedBalance2}) is reduced to
\begin{eqnarray}
\frac{\tilde{P}(x_T)}{\tilde{P}(x_{T+1})} = R^{~1+t_{+}(x_T) - t_{-}(x_{T+1})-\left[u_+(x_T)-u_-(x_{T+1})\right]\ln R}~.
\label{start}
\end{eqnarray}
By expanding Eq.~(\ref{start}) around $R=1$ with $x_T \to x$ and $x_{T+1} \to R~x$,
the following five differential equations are obtained:
\begin{eqnarray}
\Bigl[1+t_{+}(x)-t_{-}(x) \Bigr] \tilde{P}(x)
+ x~ {\tilde{P}}^{'}(x) = 0~,
\label{DE3}
\end{eqnarray}
\begin{eqnarray}
x \left[ {t_{+}}^{'}(x)+{t_{-}}^{'}(x) \right] + 2 \left[ u_{+}(x) - u_{-}(x) \right]=0~,
\label{DE4}\\
2~{t_{+}}^{'}(x)+{t_{-}}^{'}(x)+6{u_{+}}^{'}(x)+x \left[2~{t_{+}}^{''}(x)+{t_{-}}^{''}(x) \right]=0~,
\label{DE5}\\
{t_{+}}^{'}(x)+{t_{-}}^{'}(x)+3x \left[{t_{+}}^{''}(x)+{t_{-}}^{''}(x) \right]
+x^2 \left[{t_{+}}^{(3)}(x)+{t_{-}}^{(3)}(x) \right]=0~,
\label{DE6}\\
{t_{+}}^{'}(x)+7x~{t_{+}}^{''}(x)+6x^2~{t_{+}}^{(3)}(x)+x^3~{t_{+}}^{(4)}(x)=0~.
\label{DE7}
\end{eqnarray}
The same differential equations are obtained for $R < 1$.
Equations~(\ref{DE4})--(\ref{DE7}) uniquely fix the change
in the growth-rate distribution $t_{\pm}(x)$, $u_{\pm}(x)$
as follows:
\begin{eqnarray}
t_+(x) &=& \frac{\gamma}{3} \ln^3 x + \frac{\beta}{2} \ln^2 x
+ \alpha \ln x + C_{1}~,
\label{t+}\\
t_-(x) &=& - \frac{\gamma}{3} \ln^3 x + \frac{\delta - \beta }{2} \ln^2 x
+ (\eta - \alpha) \ln x
+C_{2}~,
\label{t-}\\
u_+(x) &=& - \frac{\gamma}{6} \ln^2 x - \frac{\delta + \beta}{6} \ln x + C_3~,
\label{u+}\\
u_-(x) &=& - \frac{\gamma}{6} \ln^2 x + \frac{2 \delta - \beta}{6} \ln x
+ C_3 + \frac{\eta}{2}~.
\label{u-}
\end{eqnarray}
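These solutions can also be checked mechanically. The sketch below is an independent consistency check (not part of the paper's analysis): it represents each function as a polynomial in $\ln x$ divided by a power of $x$, differentiates exactly over the rationals, and confirms that Eqs.~(\ref{DE4})--(\ref{DE7}) vanish identically for an arbitrary choice of the constants.

```python
# Consistency check: the solutions (t+), (t-), (u+), (u-) satisfy the
# differential equations (DE4)-(DE7).  A function f(x) = P(ln x)/x**n
# is stored as (coeffs, n), with coeffs[i] the coefficient of (ln x)**i.
from fractions import Fraction as F

def padd(p, q):
    m = max(len(p), len(q))
    p = p + [F(0)] * (m - len(p))
    q = q + [F(0)] * (m - len(q))
    return [a + b for a, b in zip(p, q)]

def deriv(f):
    p, n = f
    dp = [i * p[i] for i in range(1, len(p))] or [F(0)]  # dP/dL
    return (padd(dp, [-n * c for c in p]), n + 1)        # quotient rule

def xmul(f, k):  # multiply by x**k
    return (f[0], f[1] - k)

def combine(terms):  # sum of s*f over (s, f); all f share the power of x
    out, n = [F(0)], terms[0][1][1]
    for s, (p, m) in terms:
        assert m == n
        out = padd(out, [s * c for c in p])
    return out

def dn(f, k):  # k-th derivative
    for _ in range(k):
        f = deriv(f)
    return f

# Arbitrary nonzero constants: gamma, beta, alpha, delta, eta, C1, C2, C3.
g, b, a, d, e, C1, C2, C3 = map(F, (2, 3, 5, 7, 11, 13, 17, 19))

tp = ([C1, a, b / 2, g / 3], 0)                  # t_+(x)
tm = ([C2, e - a, (d - b) / 2, -g / 3], 0)       # t_-(x)
up = ([C3, -(d + b) / 6, -g / 6], 0)             # u_+(x)
um = ([C3 + e / 2, (2 * d - b) / 6, -g / 6], 0)  # u_-(x)

de4 = combine([(1, xmul(dn(tp, 1), 1)), (1, xmul(dn(tm, 1), 1)),
               (2, up), (-2, um)])
de5 = combine([(2, dn(tp, 1)), (1, dn(tm, 1)), (6, dn(up, 1)),
               (2, xmul(dn(tp, 2), 1)), (1, xmul(dn(tm, 2), 1))])
de6 = combine([(1, dn(tp, 1)), (1, dn(tm, 1)),
               (3, xmul(dn(tp, 2), 1)), (3, xmul(dn(tm, 2), 1)),
               (1, xmul(dn(tp, 3), 2)), (1, xmul(dn(tm, 3), 2))])
de7 = combine([(1, dn(tp, 1)), (7, xmul(dn(tp, 2), 1)),
               (6, xmul(dn(tp, 3), 2)), (1, xmul(dn(tp, 4), 3))])

print(all(all(c == 0 for c in eq) for eq in (de4, de5, de6, de7)))  # True
```

Since each equation is linear in the constants, vanishing at generic rational values is a strong check of the closed-form solutions.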
Now, let us confirm these solutions with the empirical data.
Figure~\ref{Sales-total-Evaluation} shows $t_{\pm}(x_T)$, $u_{\pm}(x_T)$ and $c(x_T)$
estimated by fitting the approximation (\ref{approximation3})--(\ref{approximation4})
to each growth-rate distribution in
Figs.~\ref{Sales-total-GrowthRate-1}--\ref{Sales-total-GrowthRate-3}.
In Fig.~\ref{Sales-total-Evaluation}, $c(x_T)$ is fixed as the empirical value
and $t_{\pm}(x_T)$ and $u_{\pm}(x_T)$ are estimated by using the least-squares method.
For $n=13, 14, 15$ in Fig.~\ref{Sales-total-GrowthRate-3}, there are insufficient data
points to estimate $t_{\pm}(x_T)$ and $u_{\pm}(x_T)$ for $n=14, 15$ or
to estimate the error bars for $n=13$.
Therefore, data points for $n=13, 14, 15$ are not plotted in
Fig.~\ref{Sales-total-Evaluation}.
\begin{figure}[b]
\begin{center}
\includegraphics[height=6cm]{Sales-total-Evaluation.eps}
\caption{\label{Sales-total-Evaluation} Estimations of $c(x_T)$, $t_{\pm}(x_T)$, and $u_{\pm}(x_T)$.
Here, $x_T$ is the lower bound of each bin, in thousand yen, and
$c(x_T)$ is the original value of the growth-rate distribution.
From left, each point on the graph represents $n=1,2, \cdots, 12$.}
\end{center}
\end{figure}
On the one hand, for $n=9, 10, \cdots, 15$ $(x_T \ge 10^{6.2})$
in Figs.~\ref{Sales-total-GrowthRate-2} and \ref{Sales-total-GrowthRate-3},
the growth-rate distribution hardly changes as $n$ increases.
This means that Gibrat's Law (\ref{Gibrat}) is verified by the empirical data.
We regard $x_T \ge 10^{6.2}$ as the large-scale range
and set $\gamma = \beta = \delta = \alpha = \eta = 0$ in this range
because $t_{\pm}(x_T)$ and $u_{\pm}(x_T)$ do not depend on $x_T$.
In Fig.~\ref{Sales-total-Evaluation}, the values of $t_{\pm}(x_T)$ and $u_{\pm}(x_T)$ vary
in this range
because the number of data in Fig.~\ref{Sales-total-GrowthRate-3}
is statistically insufficient to estimate them
by the least-squares method.
However, by measuring positive and negative standard deviations $\sigma_{\pm}$ of
each growth-rate distribution
in Figs.~\ref{Sales-total-GrowthRate-1}--\ref{Sales-total-GrowthRate-3},
we confirmed that the growth-rate distribution hardly changes in the large-scale range
$x_T \ge 10^{6.2}$ (Fig.~\ref{var}).
\begin{figure}[!h]
\begin{center}
\includegraphics[height=6cm]{var.eps}
\caption{\label{var} Estimations of $\sigma_{\pm}(x_T)$.
Here, $x_T$ is the lower bound of each bin, in thousand yen.
From left, each point on the graph represents $n=1,2, \cdots, 15$.}
\end{center}
\end{figure}
On the other hand, in Fig.~\ref{Sales-total-GrowthRate-1},
while the negative growth-rate distribution hardly changes as $n$ increases,
the positive growth-rate distribution gradually decreases.
This is Non-Gibrat's Property in the mid-scale range of sales data.
We should estimate parameters $\gamma, \beta, \delta, \alpha$ and $\eta$
by applying the change in the growth-rate distribution
(\ref{t+})--(\ref{u-}) to Fig.~\ref{Sales-total-Evaluation}.
However, there are too few data points in Fig.~\ref{Sales-total-Evaluation}
to apply the least-squares method with the polynomial functions (\ref{t+})--(\ref{u-}).
Consequently, as a first-order approximation, we assume that
the negative growth-rate distribution does not depend on $x_T$,
even in the mid-scale range.
This approximation is guaranteed by Fig.~\ref{var}
because the negative standard deviation $\sigma_-$ hardly changes
compared with the positive standard deviation $\sigma_+$.
In this approximation, the parameters are simplified as
\begin{eqnarray}
\gamma=\delta=\beta=0~~~~~ {\rm and}~~~~~ \eta=\alpha~.
\label{approximation99}
\end{eqnarray}
Only the change in the positive growth-rate distribution $t_+(x_T)$ depends on $x_T$
as follows:
\begin{eqnarray}
t_+(x)&=&\alpha \ln x + C_{1}~,
\label{t+2}\\
t_-(x)&=&C_2~,~~~
u_+(x)=C_3~,~~~
u_-(x)=C_3 + \frac{\alpha}{2}~.
\label{u-2}
\end{eqnarray}
We call this Non-Gibrat's Second Property.
Applying $t_+(x_T)$ (\ref{t+2})
to $n=3, 4, 5, 6$ $(10^{3.8} \le x_T < 10^{5.4})$
in Fig.~\ref{Sales-total-Evaluation},
we obtain the rate-of-change parameter $\alpha = 0.68 \pm 0.03$ by
the least-squares method.
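The fit just described can be sketched in code. The following minimal example (the data values are synthetic stand-ins for the binned estimates of $t_+(x_T)$ in Fig.~\ref{Sales-total-Evaluation}, and the "true" parameter values are assumptions of the demo, not the paper's) estimates the slope $\alpha$ in $t_+(x_T)=\alpha \ln x_T + C_1$ by ordinary least squares.

```python
import numpy as np

# Illustrative sketch (synthetic numbers, not the actual data set):
# estimate the rate-of-change parameter alpha in
#   t_+(x_T) = alpha * ln(x_T) + C_1
# by the least-squares method, as done for n = 3,...,6 in the text.
rng = np.random.default_rng(0)
alpha_true, c1_true = 0.68, -5.0           # assumed values for the demo
log10_xT = np.arange(3.8, 5.4, 0.2)        # lower bin bounds (mid-scale)
ln_xT = log10_xT * np.log(10.0)
t_plus = alpha_true * ln_xT + c1_true + rng.normal(0.0, 0.01, ln_xT.size)

# np.polyfit with degree 1 performs the linear least-squares fit;
# the slope is the estimate of alpha.
alpha_hat, c1_hat = np.polyfit(ln_xT, t_plus, 1)
print(f"alpha_hat = {alpha_hat:.3f}, C1_hat = {c1_hat:.3f}")
```

With the actual binned values of $t_+(x_T)$ for $n=3,\ldots,6$, the same procedure yields the quoted $\alpha = 0.68 \pm 0.03$.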
We regard $10^{3.8} \le x_T < 10^{5.4}$ as the mid-scale range of sales.
In this range, $t_{-}(x_T)$ and $u_{\pm}(x_T)$ hardly change compared with
$t_+(x_T)$, so the approximation (\ref{u-2}) appears to be valid.
Nevertheless, the value of $\alpha$ estimated from the difference between
$u_{+}(x_T)$ and $u_{-}(x_T)$
disagrees with the value estimated from the change in $t_+(x_T)$.
Most likely, this comes from a limitation of the second-order approximation with respect to $r$
(\ref{approximation3})--(\ref{approximation4}).
To fix this discrepancy,
we may add a third-order term with respect to $r$.
We will consider this point in the conclusion.
In addition, we should note that
the intercept $c(x_T)$ only slightly depends on $x_T$ in the mid- and large-scale ranges
$x_T \ge 10^{3.8}$, as in the profits case.
Using $t_{\pm}(x)$ (\ref{t+})--(\ref{t-}),
Eq.~(\ref{DE3}) uniquely determines the PDF of $x$ as
\begin{eqnarray}
P(x) \propto x^{-(\mu+1)}~
\exp \Bigl[
- \frac{\gamma}{6}\ln^4 x
+ \frac{\delta - 2 \beta}{6}\ln^3 x
-(\alpha-\frac{\eta}{2})\ln^2 x
\Bigr]~.
\label{P}
\end{eqnarray}
Here, we regard $d(x)$ in $\tilde{P}(x)=d(x)~P(x)$ as a constant and
denote $\mu=C_1 - C_2$.
The solutions (\ref{t+})--(\ref{u-}) and (\ref{P})
satisfy Eq.~(\ref{start}) beyond perturbation around $R = 1$, so
these are not only necessary but also sufficient.
In the approximation (\ref{approximation99}), the PDF is reduced to
\begin{eqnarray}
P(x)\propto x^{-(\mu+1)}~\exp \left[-\frac{\alpha}{2} \ln^2 x \right]~.
\label{P2}
\end{eqnarray}
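A quick way to see how the PDF (\ref{P2}) interpolates between Pareto's Law and a log-normal form is to note that its log-density is quadratic in $\ln x$. The sketch below (with illustrative values of $\mu$ and $\alpha$, not fitted values) checks numerically that the local log-log slope is $-(\mu+1)-\alpha\ln x$: for $\alpha=0$ the slope is constant (a pure power law), while $\alpha>0$ bends the distribution into a log-normal shape.

```python
import numpy as np

# Sketch (illustrative parameter values): the log-density of
#   P(x) ∝ x^{-(mu+1)} exp(-(alpha/2) ln^2 x)
# is  ln P(x) = const - (mu+1) ln x - (alpha/2) (ln x)^2,
# so its log-log slope is -(mu+1) - alpha * ln x and its log-log
# curvature is the constant -alpha.
mu, alpha = 1.0, 0.65                    # assumed demo values
lnx = np.linspace(1.0, 10.0, 200)
lnP = -(mu + 1.0) * lnx - 0.5 * alpha * lnx**2

slope = np.gradient(lnP, lnx)            # ≈ -(mu+1) - alpha * lnx
curvature = np.gradient(slope, lnx)      # ≈ -alpha away from the edges
print(slope[0], curvature[100])
```

Setting $\alpha=0$ in this expression reproduces a constant slope $-(\mu+1)$, i.e.\ Pareto's Law.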
\begin{figure}[t]
\begin{center}
\includegraphics[height=6cm]{SalesDistribution-total.eps}
\caption{\label{SalesDistribution-total}
A PDF of sales in the database.
Pareto's Law is observed in the large-scale range ($x > 10^{6.2}$)
and the log-normal distribution in the mid-scale range ($10^{3.8} \le x < 10^{5.4}$).}
\end{center}
\end{figure}
Figure~\ref{SalesDistribution-total} shows that the resulting PDF (\ref{P2})
agrees well with the empirical sales data.
In the large-scale range ($\alpha=0$),
the PDF (\ref{P2}) behaves as Pareto's Law (\ref{Pareto}).
The Pareto index is estimated as $\mu \simeq 1$
in the large-scale range ($x \ge 10^{6.2}$) of Fig.~\ref{SalesDistribution-total}.
In the mid-scale range, the PDF (\ref{P2}) behaves as the
log-normal distribution (\ref{Log-normal}) in the same manner as in the profits case.
Applying the PDF (\ref{P2}) to the mid-scale range ($10^{3.8} \le x < 10^{5.4}$)
of Fig.~\ref{SalesDistribution-total}, we obtain the rate-of-change parameter
$\alpha = 0.65 \pm 0.04$
by using the least-squares method.
This is consistent with
the value estimated by the change in $t_{+}(x_T)$ ($\alpha = 0.68 \pm 0.03$).
From these results, we conclude that Non-Gibrat's Second Property is also confirmed
by the empirical data.
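As an informal sanity check on the quoted numbers, the two independent estimates of $\alpha$ can be compared by combining their error bars in quadrature (this assumes roughly Gaussian, independent errors):

```python
# Informal consistency check of the two estimates quoted in the text:
# alpha = 0.68 +/- 0.03 (from the change in t_+(x_T)) and
# alpha = 0.65 +/- 0.04 (from fitting the PDF (P2) to the sales data).
a1, s1 = 0.68, 0.03
a2, s2 = 0.65, 0.04
combined = (s1**2 + s2**2) ** 0.5      # errors combined in quadrature
print(abs(a1 - a2) / combined)         # ≈ 0.6: agreement within 1 sigma
```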
\section{Conclusion}
In this study, we have employed exhaustive business data on Japanese firms
that nearly cover not only the entire large-scale range but also the entire mid-scale range in terms of firm size.
Using this newly assembled database,
we first reconfirmed the previous analyses for profits data \cite{Ishikawa} as described below.
In the mid-scale range, the log-normal distribution is derived from detailed balance and
from Non-Gibrat's First Property.
In Non-Gibrat's First Property, the probability of positive growth decreases and the probability of negative growth increases symmetrically as the initial value $x_T$ increases.
Under detailed balance, this change is uniquely determined by the shape of the growth-rate distribution, which is linear in log-log scale.
Second, the following findings were reported with respect to sales data.
Detailed balance is also observed in the mid- and large-scale ranges of sales data.
The growth-rate distribution of sales has wider tails than the linear growth-rate distribution of profits in log-log scale.
In the mid-scale range, while the probability of negative growth hardly changes as the initial value $x_T$ increases, the probability of positive growth gradually decreases. This feature is different from Non-Gibrat's First Property observed in the profits data.
We have approximated the growth-rate distribution with curvatures by a quadratic function. In addition, from an empirical observation, we have imposed the condition that the negative growth-rate distribution does not depend on $x_T$, even in the mid-scale range.
Under detailed balance, these approximations and conditions uniquely lead to a decrease in positive growth. We call this Non-Gibrat's Second Property.
In the mid-scale range, the log-normal distribution is also derived from detailed balance and from Non-Gibrat's Second Property. These results are confirmed by the empirical data.
In this study, it was clarified that
the shape of the growth-rate distribution of sales is different from that of profits.
It was also demonstrated that this difference is closely related to
the difference between two kinds of Non-Gibrat's Properties in the mid-scale range.
The growth-rate
distribution of income of firms is approximated by a linear function in
log-log scale as in the profits data.
The growth-rate distributions of assets, the number of employees, and personal income
have wider tails than a linear function in log-log scale, as in the sales data.
If we obtained exhaustive data that include the mid-scale range,
Non-Gibrat's First Property would probably be observed in the income data of firms,
while Non-Gibrat's Second Property would probably be observed in the assets,
the number of employees, and the personal income data.
We have not determined what makes the difference between the shapes of the growth-rate distributions.
However, this difference is probably related to the following factors \cite{Economics}.
Roughly speaking, income and profits of firms are calculated by subtracting
total expenditures from total sales.
Assets and sales of firms, the number of
employees, and personal income are not calculated by any subtraction.
Let us consider the distribution of added values, the sum of which is GDP.
Clearly, added values are calculated by some subtraction.
If we obtained exhaustive data of added values,
Non-Gibrat's First Property would likely be observed.
It has been reported that the growth-rate distribution of GDPs of countries
is linear in log-log scale (for instance \cite{Canning}).
This report reinforces that speculation.
The results in this paper should be carefully considered in cases
where governments and firms discuss strategies of growth.
Finally, we consider a method to fix the inconsistency that the rate-of-change parameter
$\alpha$ cannot be estimated from the difference between $u_{\pm}(x_T)$ in (\ref{u-2}).
Let us add not only the second-order term with respect to $r$ but also a third-order
term as follows:
\begin{eqnarray}
\log_{10}q(r|x_T)&=&c(x_T) - t_{+}(x_T)~r+\ln10~u_{+} (x_T)~r^2 -\ln^2 10~v_{+}(x_T)~r^3
~~~~~{\rm for}~~r > 0~,
\label{approximation5}
\\
\log_{10}q(r|x_T)&=&c(x_T) + t_{-}(x_T)~r+\ln10~u_- (x_T)~r^2 +\ln^2 10~v_{-}(x_T)~r^3
~~~~~{\rm for}~~r < 0~.
\label{approximation6}
\end{eqnarray}
In the same manner as in the previous section, under detailed balance,
coefficients $t_{\pm}(x)$, $u_{\pm}(x)$,
and $v_{\pm}(x)$ are uniquely obtained as follows:
\begin{eqnarray}
t_+(x) &=& \frac{\zeta}{5} \ln^5 x + \frac{\epsilon}{4} \ln^4 x
+ \frac{\gamma}{3} \ln^3 x + \frac{\beta}{2} \ln^2 x
+ \alpha \ln x + C_{1}~,
\label{t+3}\\
t_-(x) &=&-\frac{\zeta}{5} \ln^5 x + \frac{\kappa - \epsilon}{4} \ln^4 x
+ \frac{\theta - \gamma}{3} \ln^3 x + \frac{\delta - \beta}{2} \ln^2 x
+ (\eta - \alpha) \ln x
+C_{2}~,
\label{t-3}\\
u_+(x) &=&-\frac{\zeta}{5} \ln^4 x
- \frac{4\epsilon + 3\kappa}{20} \ln^3 x
- (\lambda + \frac{2\gamma + \theta}{12}) \ln^2 x
+ (\nu - \frac{\delta + \beta}{6}) \ln x + C_3~,
\label{u+3}\\
u_-(x) &=&-\frac{\zeta}{5} \ln^4 x
- \frac{4\epsilon - 7\kappa}{20} \ln^3 x
- (\lambda + \frac{2\gamma - 5\theta}{12}) \ln^2 x
+ (\nu + \frac{2\delta - \beta}{6}) \ln x \nonumber\\
&&+ C_3+ \frac{\eta}{2}~,
\label{u-3}\\
v_+(x) &=& \frac{\zeta}{15} \ln^3 x + \frac{2\kappa+\epsilon}{20} \ln^2 x + \lambda \ln x+C_4~,
\label{v+3}\\
v_-(x) &=&-\frac{\zeta}{15} \ln^3 x + \frac{3\kappa-\epsilon}{20} \ln^2 x
- (\lambda - \frac{\theta}{6})\ln x+C_4+\mu~.
\label{v-3}
\end{eqnarray}
By imposing the condition that the negative growth-rate distribution does not depend on $x_T$
even in the mid-scale range, these are simplified as follows:
\begin{eqnarray}
t_+(x) &=& \frac{\beta}{2} \ln^2 x
+ \alpha \ln x + C_{1}~,
~~~~~
t_-(x) = C_{2}~,
\label{t4}\\
u_+(x) &=&-\frac{\beta}{2} \ln x + C_3~,
~~~~~
u_-(x) = C_3+ \frac{\alpha}{2}~,
\label{u4}\\
v_+(x) &=& C_4~,
~~~~~
v_-(x) = C_4-\frac{\beta}{6}~.
\label{v4}
\end{eqnarray}
The results in the previous sections (\ref{t+2}) and (\ref{u-2}) correspond to
a special case $\beta=0$, $C_4=0$ in Eqs.~(\ref{t4})--(\ref{v4}).
In the previous section, it was difficult to estimate $\alpha$ from the difference in
$u_{\pm}(x)$. In the expressions (\ref{t4})--(\ref{v4}),
this discrepancy can probably be resolved with a negative $\beta$.
Note that Eqs.~(\ref{t+})--(\ref{u-}) cannot be reduced to Eqs.~(\ref{t4}) and (\ref{u4})
in any parameterization.
It is technically difficult to estimate $t_{\pm}(x)$, $u_{\pm}(x)$, and $v_{\pm}(x)$
by approximating the growth-rate distribution
with the cubic functions (\ref{approximation5})--(\ref{approximation6})
and then to estimate $\beta$ and $\alpha$ by fitting Eqs.~(\ref{t4})--(\ref{v4})
with the least-squares method.
At the same time, under the approximation by the cubic function
(\ref{approximation5})--(\ref{approximation6}),
the integration $\int^{\infty}_{0} dR~Q(R|x_T)$ converges without a cut $r_c$,
as in the linear approximation.
Because this work involves difficulties as well as advantages,
we will investigate the above issues in the near future.
\section*{Acknowledgments}
The authors thank the Research Institute of Economy, Trade and Industry, IAA (RIETI) for supplying the data set used in this work.
This study was produced from the research the authors conducted as
members of the Program for Promoting Social Science Research Aimed at
Solutions of Near-Future Problems, ``Design of Interfirm Networks to
Achieve Sustainable Economic Growth.''
This work was
supported in part by a Grant-in-Aid for Scientific Research (C) (No. 20510147) from
the Ministry of Education, Culture, Sports, Science and Technology, Japan.
Takayuki Mizuno was supported by funding from the Kampo Foundation 2009.
| 18,079 |
\section{Introduction}
The motion group $G=SO(d)\ltimes {\mathbb R} ^d$ is the semi-direct product
of $SO(d)$, the group of rotations in the space ${\mathbb R} ^d$, and ${\mathbb R}
^d$. This group plays a special role in the study of random walks on
Lie groups \cite{GKR}. A Central Limit Theorem on motion groups has
been proved by Roynette \cite{ro}, and Baldi, Bougerol, and Crepel
\cite{bbc} gave a Local Limit Theorem on motion groups. Random walks
on homogeneous spaces of the motion group have been studied by
Gallardo and Ries \cite{gr}. The main novelty of this paper lies in
the dynamic model of random walks which we define on the motion
group. The theory of dynamic random walks was developed by
Guillotin-Plantard in a commutative setting \cite{gu1, gu2, gu3}. So
far, dynamic random walks have been considered on Heisenberg groups,
the dual of $SU(2)$ \cite{GPS} and Clifford algebras \cite{sss}.
Needless to say, there is much work to do. This paper is another
(small) step/attempt in extending this theory to non-commutative
algebraic structures. Recently, random walks on the motion group
have been proposed as algorithmic ingredients for searching in
peer-to-peer wireless networks \cite{gms}. The organization is as
follows: Section II contains basic definitions, known limit
theorems, as well as a new convergence theorem for product of random
elements of the motion group. Dynamic random walks are considered in
Section III, where we recall known results, show how to derive some limit
theorems from the classical case, and investigate more deeply the case when
the rotations form an abelian group. It may be noted that these are
the first examples of non-discrete dynamic random walks. Section IV
provides some concluding remarks and further research aspects.
\section{Motion group}
\subsection{Basic definitions and known results}
The composition law of the motion
group $G=SO(d)\ltimes {\mathbb R} ^d $ is given by:
$$(R_1,T_1).(R_2,T_2)=(R_1\circ R_2, T_1+R_1(T_2))$$
For $d=2$, $R_1\circ R_2$ is the rotation of angle $\Theta_1+\Theta_2$
if $\Theta_1$ (resp. $\Theta_2$) is the rotation angle of $R_1$
(resp. $R_2$).
More generally,
$$(R_1,T_1)(R_2,T_2)\cdots(R_n,T_n)=(R_1\circ
R_2\circ\ldots\circ R_n,
T_1+R_1(T_2)+R_1\circ R_2(T_3)+\ldots+R_1\circ R_2\circ\ldots\circ R_{n-1}(T_n))$$
where the $(R_i , T_i)$ are $G$-valued random variables.
Let $$S_n=T_1+R_1(T_2)+R_1\circ R_2(T_3)+\ldots+R_1\circ R_2\circ\ldots\circ R_{n-1}(T_n)~.$$
Then $S_n$ gives the position after $n$ steps; it is the sum of $n$
(not necessarily independent) random variables.\\
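The composition law and the position $S_n$ can be illustrated directly in code. The following minimal sketch (for $d=2$, with hypothetical step values) accumulates the group product in order, so that the translation part of the result is exactly $S_n$.

```python
import numpy as np

# Minimal sketch of the motion group G = SO(d) ⋉ R^d for d = 2:
# an element is a pair (R, T) with R a rotation matrix and T a vector,
# composed as (R1, T1).(R2, T2) = (R1 @ R2, T1 + R1 @ T2).
def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def compose(g1, g2):
    R1, T1 = g1
    R2, T2 = g2
    return (R1 @ R2, T1 + R1 @ T2)

def walk_position(steps):
    """S_n = T1 + R1(T2) + ... + R1∘...∘R_{n-1}(T_n), obtained by
    left-multiplying the group elements (R_i, T_i) in order."""
    g = (np.eye(2), np.zeros(2))
    for step in steps:
        g = compose(g, step)
    return g[1]

# Toy example: three quarter turns, each followed by a unit translation;
# S_3 = T1 + R1 T2 + R1 R2 T3 ≈ (0, 1).
steps = [(rot(np.pi / 2), np.array([1.0, 0.0]))] * 3
print(walk_position(steps))
```

For $d=2$ the rotation part composes by adding angles, which the helper `rot` makes explicit.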
The following Central Limit Theorem has been proven in \cite{ro}:
\begin{theorem}
Assume that the $R_i$ (resp. $T_i$), $i \geq 1$,
are independent random variables with common law $\mu$ (resp.
$\nu$), that the support of $\mu$ generates $SO(d)$, and that $\nu$
has a second
order moment. Then $\frac{S_n}{ \sqrt{n}}$ converges in law to
the Gaussian distribution $N(0,{\theta I_d})$ when $n$ goes to
infinity. $I_d$ stands for the $d\times d$ dimensional identity matrix and
$\theta$ is a positive constant.
\end{theorem}
{\bf Remark:} This theorem tells us, intuitively, that $\frac{S_n}{ \sqrt{n}}$
becomes rotation invariant when $n$ goes to infinity and that $S_n$
behaves asymptotically as a random walk on ${{\mathbb R}}^d$ which is rotation
invariant. In other words:
$$S_n\sim_{n\rightarrow\infty} Y_1+Y_2+\ldots +Y_n$$
where $Y_i$, $i\in\{1,2,\ldots,n\}$ are $n$ independent and
identically distributed random variables.
The following Local Limit Theorem has been proven in \cite{bbc}, we
formulate it below in a simple way:
\begin{theorem}
Let $P_n(O,M)$ be the probability that the random walk on $G$
reaches the point $M$ of ${\mathbb R} ^d$ in $n$ steps when starting from the
point $O$. Then:
$$P_n(O,M)=P(S_n=M)\sim_{n\rightarrow\infty}\frac{K}{n^{d/2}}$$
where $K$ is a positive constant (independent of n).
\end{theorem}
\subsection{A convergence theorem}
Let $O(d)$ be the group of orthogonal linear transformations on ${\mathbb R}
^d$ ($d\geq 1$) and $K$ be a compact subgroup of $O(d)$ and $G = K
\ltimes {\mathbb R} ^d$ be a group of motions of ${\mathbb R} ^d$. Let $Y_i = (R_i,
T_i)$ be independent random variables. Let $S_n = T_1+R_1T_2 +
\ldots + R_1 R_2 \ldots R_{n-1} T_n$ and $X_n = R_1 R_2 \ldots
R_{n-1} T_n$ with $R_0 = 1$.
\begin{theorem}\label{lln}
Assume the following:
\begin{enumerate}
\item $R_1 \ldots R_n$ converges in law to $\omega _K$, the normalized
Haar measure on $K$;
\item ${\mathbb R} ^d$ has no non-zero $K$-invariant vectors;
\item $X_n$ has second moment;
\item $E(T_n)$ is bounded.
\end{enumerate}
Then
$${S_n\over b_n}\to 0~~ a.s$$
for any sequence $(b_n)$ such that $\sum {E(||X_n-E(X_n)||^2)\over
b_n ^2}<\infty$. \end{theorem}
\begin{proof} We recall that a random vector $T$ in ${\mathbb R}^d$ is said to have
finite expectation if there is a vector $v\in {\mathbb R}^d$ such that $<v,u>
= E(<T, u>)$ for any $u \in {\mathbb R} ^d$ and in this case we write $E(T) =
v$. Also if $R$ is a random rotation on ${\mathbb R} ^d$, then $E(R)$ is an
operator on ${\mathbb R} ^d$ defined by
$$<E(R)u, v> = E(<Ru, v>)$$ for any two vectors $u, v \in {\mathbb R} ^d$.
It follows that $E(X_n) = E( R_1 R_2 \ldots R_{n-1}) E(T_n)$ for all
$n \geq 1$. For $u , v\in {\mathbb R}^d$,
$$<E(R_1R_2\ldots R_n)u, v> = \int <R_1R_2\ldots R_n u, v> d\omega =
\int <T(u), v> \rho _n (dT)$$ where $\rho _n$ is the distribution of
$R_1R_2\ldots R_n$. Since $R_1 R_2 \ldots R_n $ converges in law to
$\omega _K$, we get that $E(R_1 R_2 \ldots R_{n-1}) \to P_K$ where
$P_K$ is the projection onto $K$-fixed vectors.
We first claim that $E(X_n) \to 0$. Since ${\mathbb R} ^d$ has no
non-zero $K$-invariant vectors, $P_K = 0$, so $E(R_1 R_2 \ldots R_{n-1}) \to 0$
in the operator topology. Since $E(T_n)$ is bounded, it follows that
$E(X_n) = E(R_1 R_2 \ldots R_{n-1})E(T_n) \to 0$.
Let $u \in {\mathbb R} ^d$. Take $Z_n = <X_n -E(X_n), u>$. Then $E(Z_n )
=0$. $Z_n$ are independent real random variables with finite second
moments. Then $${1\over b_n} \sum _{i=1} ^n Z_i \to 0~~ a.s.$$ for
any sequence $(b_n)$ such that $\sum _{n=1} ^\infty {Var(Z_n) \over
b_n^2} <\infty$ (cf. \cite{S}). This implies that
$${1\over b_n} \sum _{i=1}^n(X_i - E(X_i)) \to 0 ~~a.s.$$ We have shown that
$E(X_n) \to 0$ and hence ${1\over b_n}\sum _{i=1}^n E(X_i) \to 0$.
Thus,
$${1\over b_n}\sum _{i=1}^n X_i \ra 0 ~~a.s.$$
\end{proof}
The conditions in Theorem \ref{lln} are verified if we take the $R_i$ to
be iid such that the support of the common law is aperiodic (that is, the
support is not contained in a coset of a proper normal subgroup) and
the $T_i$ to be a dynamic random walk, with $b_n= n^\ap$ for any
$\ap >{1\over 2}$. Thus, under these assumptions we get that
$${1\over n^\ap}(T_1+R_1T_2 + \ldots + R_1 R_2 \ldots R_{n-1} T_n)
\to 0 ~~a.s.$$
\section{Dynamic random walks}
\subsection{Preliminaries and known results}
Let $S=(E,{\cal A},\rho ,\tau )$ be a dynamical system
\index{dynamical
system} where $(E,{\cal A},\rho)$ is a probability space \index{probability space} and
$\tau $ is a transformation defined on $E$. Let $d\geq 1$ and
$h_{1},\ldots,h_{d}$ be functions defined on $E$ with values in
$[0,\frac{1}{d}]$. Let $(T_{i})_{i\geq 1}$ be a sequence of
independent random vectors with values in ${\mathbb Z} ^{d}$. Let $x\in E$
and $(e_{j})_{1\leq j\leq d}$ be the unit coordinate vectors of ${\mathbb Z}
^{d}$. For every $i\geq 1$, the law of the random vector $T_{i}$ is
given by
$$P(T_{i}=z)=\left\{\begin{array}{ll}
h_{j}(\tau ^{i}x) & \mbox{if}\ \ z=e_{j}\\
\frac{1}{d}-h_{j}(\tau ^{i}x) & \mbox{if}\ \ z=-e_{j}\\
0 & \mbox{otherwise}
\end{array} \right.
$$
We write $$S_{0}=0,\ \ S_{n}=\sum_{i=1}^{n} T_{i}\ \mbox{for}\ n\geq
1$$ for the ${{\mathbb Z}}^{d}$-random walk generated by the family
$(T_{i})_{i\geq 1}$. The random sequence $(S_n)_{n\geq 0}$ is called
a dynamic
${{\mathbb Z}}^{d}$-random walk.
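As an illustration, a dynamic $\mathbb{Z}^d$ random walk can be simulated as follows. The dynamical system (an irrational rotation of the circle) and the functions $h_j$ below are illustrative choices (they must take values in $[0,1/d]$), not taken from the references.

```python
import numpy as np

# Sketch of a dynamic Z^d random walk (d = 2) driven by an irrational
# rotation: E = [0,1), tau(x) = x + omega mod 1, and, at time i,
#   P(T_i = +e_j) = h_j(tau^i x),   P(T_i = -e_j) = 1/d - h_j(tau^i x).
d = 2
omega = np.sqrt(2) - 1.0                   # irrational rotation number

def h(y):
    # Illustrative h_j with values inside [0, 1/d] = [0, 0.5].
    return np.array([0.25 + 0.1 * np.sin(2 * np.pi * y),
                     0.25 + 0.1 * np.cos(2 * np.pi * y)])

def step_distribution(x, i):
    """Probabilities of the 2d steps (+e_1..+e_d, -e_1..-e_d) at time i."""
    y = (x + i * omega) % 1.0              # tau^i x
    hj = h(y)
    return np.concatenate([hj, 1.0 / d - hj])

def sample_walk(x, n, rng):
    steps = np.vstack([np.eye(d, dtype=int), -np.eye(d, dtype=int)])
    S = np.zeros(d, dtype=int)
    for i in range(1, n + 1):
        S += steps[rng.choice(2 * d, p=step_distribution(x, i))]
    return S

rng = np.random.default_rng(1)
print(sample_walk(0.0, 1000, rng))
```

When $h_1 = h_2 = 1/4$ identically, this reduces to the simple symmetric random walk on $\mathbb{Z}^2$.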
It is worth remarking that if the functions $h_{j}$ are constant
then we obtain the classical random walks, but if these functions are
not all constant, $(S_{n})_{n\in {\mathbb N}}$ is a non-homogeneous
Markov chain.\\
Let ${\cal C}_{1}(S)$ denote
the class of functions $f\in L^{1}(E,\rho)$ satisfying the
following condition $(H_1)$:
$$\left|\sum_{i=1}^{n}\Big(f(\tau ^{i}x)-\int_{E}f(x) d\rho (x)\Big)\right|=
o\Big(\frac{\sqrt{n}}{\log(n)}\Big)$$
Let ${\cal C}_{2}(S)$ denote
the class of functions $f\in L^{1}(E,\rho)$ satisfying the
following condition $(H_2)$:
$$\sup_{x\in E}\left|\sum_{i=1}^{n}\Big(f(\tau ^{i}x)-\int_{E}f(x) d\rho (x)\Big)\right|=
o\Big(\sqrt{n}\Big)$$
A Central Limit Theorem:
\begin{theorem}\label{tcl}
Assume that for every $j,l\in\{1,\ldots,d\}$, $h_{j}\in{\cal
C}_2(S)$, $h_{j}h_{l}\in{\cal C}_2(S)$ and
$\int_{E}h_{j}d\rho=\frac{1}{2d}$. Then, for every $x\in E$, the
sequence of processes $(\frac{1}{\sqrt{n}}S_{[nt]})_{t\geq 0}$
weakly converges in the Skorohod space ${\cal D}={\cal D}([0,\infty[)$ to the $d$-dimensional
Brownian motion
$$B_{t}=(B_{t}^{(1)},\ldots,B_{t}^{(d)})$$
with zero mean and covariance matrix $A t$.\index{Brownian motion}
\end{theorem}
The proof of this theorem is in \cite{GPS}.\\
A Local Limit Theorem:
\begin{theorem}
Let $h_{j}\in{\cal C}_1(S)$, $h_{j}h_{l}\in{\cal C}_1(S)$ and
$\int_{E}h_{j}d\rho =\frac{1}{2d}$. Then, for almost every $x\in E$,
$P(S_{2n}=0)$, the probability that the random walk, starting from the
point $O$, comes back to $O$ in $2n$ steps, has the following
asymptotic behavior:
$$P(S_{2n}=0)\sim\frac{2}{\sqrt{det(A)}(4\pi n)^{d/2}}$$
as $n\rightarrow\infty$.
\end{theorem}
The proof of this theorem is also in \cite{GPS}.\\
\subsection{Dynamic random walks on the motion group}
Recall that we consider the random walk
$$S_n=T_1+R_1(T_2)+R_1\circ R_2(T_3)+...+R_1\circ R_2\ldots\circ R_{n-1}(T_n)$$
where $T_i$, $i\in{{\mathbb N}}$ are dynamic random variables as
defined above and we now define dynamic random rotations $R_i$.\\
If the rotations are classical random variables and translations are
dynamic random variables then one can adapt the result in \cite{ro}
and prove a Central Limit Theorem and a Local Limit Theorem
\cite{bbc} for $S_n$ thanks to the Central Limit Theorem and the
Local Limit Theorem for dynamic random walks \cite{GPS} given in the
above section. We do not state these theorems explicitly because
their formulation is almost the same as in \cite{bbc}, \cite{ro}.
Similar Central Limit and Local Limit Theorems hold true
under a Lindeberg
condition on the translations $T_i$.\\
If both rotations and translations are dynamic random walks the
problem is still open. \\
We consider now the $2$-dimensional case. It is known that $SO(2)$
is a compact abelian group (isomorphic to $U(1)$) and for any
irrational number $\theta \in {\mathbb R}$, $e^{2\pi i\theta }$ generates a
dense subgroup of $SO(2)$. Using this fact we prove that the
convolution product $\mu_1*\mu_2*\ldots*\mu_n$ of dynamic measures
corresponding to dynamic rotations $R_1$,\ldots, $R_n$ converges
weakly to the Haar measure of $SO(2)$.
Let $\theta$ be an irrational number and $R_j$ be random rotations
on ${\mathbb R} ^2$ defined by
$$P(R_{j}=z)=\left\{\begin{array}{ll}
f(\tau ^{j}x) & \mbox{if}\ \ z=e^{2\pi i\theta }\\
1-f(\tau ^{j}x) & \mbox{if}\ \ z=e^{-2\pi i \theta}\\
0 & \mbox{otherwise}
\end{array} \right.$$
and $f\colon E \to [0, 1]$ satisfies $f(1-f)\in {\cal C}_2(S)$ where
$E$ and ${\cal C}_2(S)$ are as in 3.1 with $d=1$.
If $f$ is an indicator function taking values $0$ and $1$, then the
$R_i$ are degenerate and hence the product
$R_1\ldots R_n$ does not converge (in law), as the set $\{ e^{2\pi i
k \theta} \mid k\geq 1 \}$ is dense in $SO(2)$. This forces us to
assume that $f$ is not an indicator function. In this case, we have
the following:
\begin{theorem}\label{dr} Almost surely $R_1R_2 \ldots R_n $ converges in law to
the Haar measure on $SO(2)$. \end{theorem}
In order to prove the above result we need to recall some details on
the dual of compact abelian groups and Fourier transform of
probability measures on compact groups.
\noindent {\bf Dual of compact groups:} For a compact abelian group
$K$, continuous homomorphisms from $K$ into $SO(2)$ are known as
characters and characters form a (locally compact abelian) group
which is denoted by $\hat K$ and is called the dual group of $K$:
cf. \cite{HR} for details on duality of locally compact abelian
groups. For each integer $n$, the map $z\mapsto z^n$ defines a
character on $SO(2)$ and defines an isomorphism between the group
${\mathbb Z}$ of integers with the dual of $SO(2)$ (cf. 23.27 (a) of
\cite{HR}). It is known that if $K_1, \ldots , K_d$ are compact
abelian groups, then the dual of $K_1\times \ldots \times K_d$ is
isomorphic to $\hat K_1 \times \ldots \times \hat K_d$ (cf. 23.18 of
\cite{HR}).
\noindent {\bf Fourier transform:} Let $K$ be a compact abelian group and
$\mu$ be a probability measure on $K$. Then the Fourier transform of
$\mu$, denoted by $\hat \mu$ is a function on $\hat K$ and is defined by
$$\hat \mu (\chi ) = \int \chi (x) d\mu (x)$$ for all $\chi \in \hat K$.
It is known that $\mu $ is the normalized Haar measure on $K$ if and only
if
$$\hat \mu (\chi )=\left\{\begin{array}{ll}
0 & \mbox{if}\ \ \chi {\rm ~~ is ~~non-trivial}\\
1 & \mbox{if}\ \ \chi {\rm ~~ is ~~trivial}
\end{array} \right.
$$
and if $X_n$ are $K$-valued random variables with Fourier transform
$f_n$, then $X_n$ converges in law to a $K$-valued random variable
$X$ if and only if $f_n $ converges to the Fourier transform of $X$
pointwise (cf. \cite{Ru}).
\begin{proof} $\!\!\!\!\!$ {\bf of Theorem \ref{dr}}\ \ Let $k$ be any
non-zero integer. It is sufficient to show that
$$\prod _{j=1}^n|\int e^{2\pi i kx }d\mu _j |\to 0$$ as $n \ra
\infty$.
$$\begin{array}{lcl}
|\int e^{2\pi i kx }d\mu _j |^2 & =&|e^{2\pi ik\theta } f(\tau ^jx)
+ e^{-2\pi i k\theta}(1-f(\tau ^jx)) |^2 \\ &=& |\cos (2\pi k\theta)
+i \sin (2\pi k\theta )(f(\tau ^jx)-1+f(\tau ^jx)) |^2 \\
& = & \cos ^2(2\pi k\theta)+\sin ^2(2\pi k\theta )(1-2f(\tau ^jx)) ^2\\
& = & 1- 4\sin ^2 (2\pi k\theta )f(\tau ^jx)(1-f(\tau ^jx))\\
\end{array}$$
Suppose $f(\tau ^jx) (1-f(\tau ^jx) )\not \to 0$. Then $1- 4\sin ^2
(2\pi k\theta )f(\tau ^jx)(1-f(\tau ^jx))\not \to 1$, so infinitely many
factors are bounded away from $1$, and hence
$\prod _{j=1} ^n |\int e^{2\pi i kx } d\mu _j | \to 0$. Thus,
it is sufficient to show that $f(\tau ^jx)(1-f(\tau ^jx))\not \to
0$.
If $f(\tau ^jx) (1-f(\tau ^jx)) \to 0$, then
$${1\over n}\sum _{j=1} ^n f(\tau ^jx) (1-f(\tau ^jx)) \to 0 = \int
f(x) (1-f(x))d\rho (x)$$ and hence $f$ is an indicator function
$\rho$-almost everywhere. This is a contradiction, which proves the result. \end{proof}
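The decay of the Fourier coefficients in this proof can be checked numerically: for each non-zero $k$, the modulus squared of the $k$-th Fourier coefficient of the law of $R_1\cdots R_n$ is the product $\prod_j \big(1-4\sin^2(2\pi k\theta)\, f(\tau^j x)(1-f(\tau^j x))\big)$ computed above. The sketch below (with an illustrative choice of $\theta$, of the rotation $\tau$ on $E=[0,1)$, and of a non-indicator $f$) shows it tending to $0$.

```python
import numpy as np

# Numerical illustration of the proof: the k-th Fourier coefficient of
# the law of R_1...R_n has modulus squared
#   prod_j (1 - 4 sin^2(2*pi*k*theta) f(tau^j x)(1 - f(tau^j x))),
# which tends to 0 whenever f is not an indicator function.
theta = np.sqrt(2)                        # irrational rotation angle
tau_step = np.sqrt(3) - 1.0               # tau(x) = x + tau_step mod 1
f = lambda y: 0.5 + 0.4 * np.sin(2 * np.pi * y)   # values in [0.1, 0.9]

def fourier_modulus_sq(k, x, n):
    j = np.arange(1, n + 1)
    fj = f((x + j * tau_step) % 1.0)      # f(tau^j x)
    factors = 1.0 - 4.0 * np.sin(2 * np.pi * k * theta) ** 2 * fj * (1.0 - fj)
    return np.prod(factors)

for n in (10, 50, 200):
    print(n, fourier_modulus_sq(1, 0.2, n))
```

Since here $f(1-f)\geq 0.09$ is bounded away from $0$, every factor is bounded away from $1$ and the product decays geometrically, in line with the argument above.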
Let $K$ be a compact connected abelian subgroup of $SO(d)$, for
instance one may take $K$ to be the maximal torus in $SO(d)$. In
this situation one can define dynamic random walks in many ways and
we will now consider two forms of dynamic random walks on $K$. The
first one is the following: $a \in K$ is such that the closed
subgroup generated by $a$ is $K$ (see 25.15, \cite{HR} for existence
of such $a$) and $R_j$ are random variables taking values in $K$
defined by
$$P(R_{j}=z)=\left\{\begin{array}{ll}
f(\tau ^{j}x) & \mbox{if}\ \ z=a\\
1-f(\tau ^{j}x) & \mbox{if}\ \ z=a^{-1}\\
0 & \mbox{otherwise}
\end{array} \right.$$
and $f\colon E \to [0, 1]$ satisfies $f(1-f)\in {\cal C}_2(S)$ where
$E$ and ${\cal C}_2(S)$ are as in Section 3.
In this situation we have the following as a consequence of Theorem
\ref{dr}.
\begin{theorem}\label{ac1} Almost surely $R_1R_2 \ldots R_n $ converges in law
to the Haar measure on $K$. \end{theorem}
\begin{proof} For any non-trivial character $\chi $ on $K$, the element $\chi
(a)$ in $SO(2)$ corresponds to an irrational rotation, hence we get
from Theorem \ref{dr} that $\chi (R_1 R_2 \ldots R_n)$ converges in
law to the Haar measure on $SO(2)$ which is $\chi (\omega _K)$. This
implies that $(R_1 R_2 \ldots R_n)$ converges in law to the Haar
measure on $K$. \end{proof}
\begin{remark} Theorem \ref{ac1} could be proved for any monothetic
compact connected abelian group in a similar way but for simplicity
and for the purpose of the article we restrict our attention to
compact connected abelian subgroups of $SO(d)$: a topological group
$K$ is called monothetic if $K$ contains an element $a$ such that
the closed subgroup generated by $a$ is $K$ itself (cf. 9.2 of
\cite{HR} for monothetic compact groups). \end{remark}
We will now consider the second form of dynamic random walks on $K$.
Let $v_1,\cdots , v_r$ be a basis for the Lie algebra of $K$ and
$\exp$ be the exponential map of the Lie algebra of $K$ into $K$.
Let $e_k = \exp (v_k)$, $1\leq k \leq r$. Let $R_j$ be the random
variables taking values in $K$ defined by
$$P(R_{j}=z)=\left \{ \begin{array}{ll}
f_k(\tau ^{j}x) & \mbox{if}\ \ z=e_k\\
\frac{1}{r}-f_k(\tau ^{j}x) & \mbox{if}\ \ z=e_k^{-1}\\
0 & \mbox{otherwise}
\end{array} \right.$$
and $f_k$ are functions from $E$ taking values in $[0, {1\over r}]$
where $E$ is as in Section 3. We further assume that the $k$-th
coordinate of $v_k$ is irrational, so that $e_k$ is an irrational
rotation by an angle $\theta _k$, and that all other coordinates of $v_k$
are $0$. We also assume that $1$ and $\theta _k$ are independent
over ${\mathbb Q}$.
In this situation we also have the following, which can be proved
in the same way as Theorem \ref{ac1}.
\begin{theorem}\label{ar} Almost surely $R_1R_2 \ldots R_n $ converges in law to
the Haar measure on $K$. \end{theorem}
As an application of the results proved in the previous section and
the above results on compact groups we get the following:
\begin{theorem}\label{ct} Let $(R_j, T_j)$ be dynamic random walk on $K\ltimes
{\mathbb R} ^{d}$ where $R_j$ is the dynamic random walk on $K$ given in
Theorem \ref{ac1} or Theorem \ref{ar} and $T_j$ is dynamic random
walk on ${\mathbb R} ^{d}$ defined in 3.1. Then for $\ap >{1\over 2}$,
$${1\over n^\ap}(T_1+R_1T_2 + \ldots + R_1 R_2 \ldots R_{n-1} T_n)
\to P_K(v _0) ~~a.s$$ where $P_K$ is the projection onto the
$K$-invariant vectors in $R^d$ and $v_0= (2E(h_j|{\cal I})-{1\over
d})_{1\leq j \leq d}$.
\end{theorem}
\begin{proof} We first assume that ${\mathbb R} ^d$ has no non-zero $K$-invariant
vectors. Condition (1) of Theorem \ref{lln} follows from Theorems
\ref{ac1} and \ref{ar}. Let $X_n = R_1 R _2 \ldots R_{n-1} T_n$.
Then $E(<X_n, u>^2) = \int <R_1R_2\ldots R_{n-1}T_n, u> ^2 d\omega $
is finite as $T_n$ takes only finitely many values and rotations
preserve the norm. Thus, Condition (3) of Theorem \ref{lln} is
verified and condition (4) is easy to verify. Hence $${1\over
n^\ap}(T_1+R_1T_2 + \ldots + R_1 R_2 \ldots R_{n-1} T_n) \to 0
~~a.s$$
In general, let $V$ be the space of $K$-invariant vectors in ${\mathbb R}
^d$. Let $P_K$ be the orthogonal projection onto $V$ and $Q$ be the
projection onto the orthogonal complement of $V$. Then for any $v\in
{\mathbb R} ^d$, $v = P_K(v) +Q(v)$ and $Q({\mathbb R} ^d)$ has no non-zero
$K$-invariant vector. Since both $V$ and $Q({\mathbb R}^d)$ are
$K$-invariant, we get that $P_K(R(v))=P_K(v)$ and $Q(R(v)) = R(Q(v))$
for any $v \in {\mathbb R} ^d$ and $R \in K$. Now the result follows from
the previous case and Theorem 2.1 of \cite{GPS}.
\end{proof}
\section{Concluding remarks}
We have proved a new convergence result for classical random walks on
the motion group. Our results for the dynamic case are still partial
and we are planning to characterize recurrent and transient random
walks (in this model) on the motion group and the corresponding
homogeneous spaces.
So far, dynamic random walks have only been considered on Heisenberg
groups, the dual of $SU(2)$ \cite{GPS}, the motion group and Clifford
algebras \cite{sss}. A more general study of dynamic random walks
on Lie groups, homogeneous spaces and quantum groups is still to be
done. This is a challenging research project.
\begin{acknowledgement}
This work was done as a part of the IFIM project on ``Dynamic random
walks on algebraic structures and applications in quantum
computing''. The authors would like to thank IFIM for its support.
\end{acknowledgement}
\section{Introduction}
String compactifications with background fluxes (see e.g. \cite{Grana:2005jc,Douglas:2006es,Blumenhagen:2006ci,Denef:2007pq} for reviews) provide a simple framework in which the stabilization of moduli fields can be discussed in a very controlled and natural way. A complete stabilization of all moduli may also require the inclusion of quantum corrections, as e.g. in \cite{Kachru:2003aw}, but there are also scenarios where the fluxes alone are sufficient for a tree-level stabilization of all closed string moduli \cite{DeWolfe:2005uu}.
From a cosmological point of view, it is especially interesting to understand moduli stabilization at positive potential energy, either in order to obtain local dS minima so as to describe the present accelerated cosmic expansion, or in order to stabilize possible runaway directions in inflationary potentials. A particularly well controlled situation would be one in which this could be achieved at a purely classical level, i.e., by the dimensional reduction of the standard two-derivative 10D supergravity action supplemented with the lowest order actions for brane type sources.
It has been known for a long time, however, that there are powerful no-go theorems \cite{Gibbons:1984kp,deWit:1986xg,Maldacena:2000mw} that forbid such tree-level dS compactifications under a few simple assumptions, one of them being the absence of negative-tension objects such as orientifold planes. As orientifold planes are a common ingredient in phenomenologically interesting type II compactifications, it seems natural to explore the possibility of tree-level dS vacua or inflation models in type II orientifolds. It is the purpose of this note to give an overview of the most promising controlled models of this type. For simplicity, we do not consider D-branes and the associated open string moduli (although the analysis would be similar). Moreover, we take the O-planes to be smeared over their transverse directions \cite{Grimm:2004ua,DeWolfe:2005uu,Koerber:2008rx,Caviezel:2008ik}, assuming that the results approximate fully localized warped solutions \cite{Acharya:2006ne} consistent with the results of \cite{Douglas:2010rt}.
\section{No-go theorems in the volume-dilaton plane}
Constructions of dS vacua or inflation models from classical string compactifications are severely limited by a set of very simple ``no-go theorems''. These no-go theorems go beyond \cite{Gibbons:1984kp,deWit:1986xg,Maldacena:2000mw}, as they do allow for orientifold planes and generalize the theorems used in \cite{Hertzberg:2007wc} for IIA flux compactifications on Calabi-Yau spaces with O6-planes and the IIB setup of \cite{Giddings:2001yu}. They follow from the scaling behavior of the different scalar potential contributions with respect to two universal moduli fields that occur in any perturbative and geometric string compactification. These two fields are the volume modulus $\rho \equiv (\textrm{vol}_6)^{1/3}$ and an orthogonal modulus related to the dilaton: $\tau \equiv e^{-\phi} \sqrt{\textrm{vol}_6} = e^{-\phi} \rho^{3/2}$, where $\phi$ denotes the 10D dilaton, and $\textrm{vol}_6$ is the 6D volume in string frame. After going to the four-dimensional Einstein frame, one then finds the following scalings for the contributions to the 4D scalar potential coming from the $H$-flux, the RR-fluxes $F_p$, as well as from O$q$-planes and the six-dimensional Ricci scalar $R_6$:
\begin{equation}
V_H \sim \tau^{-2} \rho^{-3}, \quad V_{F_p} \sim \tau^{-4} \rho^{3-p}, \quad V_{Oq} \sim \tau^{-3} \rho^{\frac{q-6}{2}}, \quad V_{R_6} \sim \tau^{-2} \rho^{-1}. \label{Scalings}
\end{equation}
Note that $V_H, V_{F_p} \geq 0$ and $V_{Oq} \leq 0$ while $V_{R_6}$ can have either sign.
The most widely studied classes of compactifications are based on special holonomy manifolds such as $CY_3$'s, $T^2 \times K3$ or $T^6$, which are Ricci-flat, i.e. they have $V_{R_6}=0$. In order to find the minimal necessary ingredients for classical dS vacua in this simplified case, we act on $V=V_H + \sum_p V_{F_p} + \sum_q V_{Oq}$ \footnote{The possible values for $p$ and $q$ consistent with four-dimensional Lorentz invariance are $p\in\{0,2,4,6\}$, $q\in\{4,6,8\}$ in type IIA theory and $p\in\{1,3,5\}$, $q\in \{3,5,7,9\}$ in type IIB theory. To cancel the charges of the O$q$-planes in the Ricci-flat case we need $V_H \neq 0$ and $\sum_p V_{F_p}\neq 0$. For compactifications with $V_{R_6} \neq 0$ we need $\sum_p V_{F_p}\neq 0$.} with a differential operator $D:= -a \tau \partial_\tau - b \rho \partial_\rho$, where $a$ and $b$ denote some as yet arbitrary real constants. If one can show that there is a constant $c>0$ such that $D V \geq c V$, then dS vacua and, generically, slow-roll inflation are excluded. Indeed, a dS extremum requires $D V=0$ and $V>0$, which is inconsistent with $D V \geq c V >0$. Similarly, the slow-roll parameter $\epsilon =\frac{K^{i\bar{j}} \partial_i V \partial_{\bar{j}} V}{V^2} \geq \frac{c^2}{4a^2+3b^2}$ is normally of order one so that slow-roll inflation with $\epsilon \ll 1$ is forbidden. Using (\ref{Scalings}), this means that, if we can find $a,b$ such that
\begin{eqnarray}
&&D V_H = (2a+3b) V_H, \quad D V_{F_p} = (4a+(p-3)b) V_{F_p}, \quad D V_{Oq} = \left( 3a + \frac{6-q}{2} b\right) V_{Oq}, \nonumber\\
&& \text{with} \quad (2a+3b) \geq c \geq \left( 3a + \frac{6-q}{2} b\right), \quad (4a+(p-3)b) \geq c \geq \left( 3a + \frac{6-q}{2} b\right), \quad \forall p,q, \nonumber
\end{eqnarray}
then we have a no-go theorem that forbids classical dS vacua and inflation. The two inequalities above have a solution if and only if $q+p-6\geq 0,\, \forall p,q$. This condition is for example satisfied for type IIA compactifications on a $CY_3$ with O6-planes and arbitrary RR-fluxes or, analogously, for the type IIB theory with O3-planes and $F_3$-flux \cite{Hertzberg:2007wc}. Conversely, avoiding this no-go theorem at the classical level would require compactifications with $H$-flux that, in type IIA, allow for O4-planes and $F_0$-flux or, in type IIB, allow for O3-planes and $F_1$-flux. However, the $F_0$-flux needs to be odd under the O4 orientifold projection and therefore normally has to vanish. Similarly, all one-forms are normally odd under the O3 orientifold projection, but the $F_1$-flux has to be even and should therefore also vanish in this constellation\footnote{It might in principle be possible to consider compactifications on toroidal orbifolds that have for example $F_1$-flux only in the bulk and O3-planes in the twisted sector. In this note we restrict ourselves to the bulk sector only.}.
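The feasibility of the two inequality chains above can be checked mechanically. The brute-force sketch below is our own illustration (the rational grid search and list-based interface are not an algorithm from the literature): it searches for $a,b$ and $c>0$ satisfying the chains and reproduces the criterion $q+p-6\geq 0,\,\forall p,q$ in sample cases.

```python
from fractions import Fraction as F

def nogo_exists(qs, ps):
    """Search a rational grid for a, b such that some c > 0 satisfies
    2a+3b >= c, 4a+(p-3)b >= c for all fluxes p in ps, and
    c >= 3a+(6-q)b/2 for all O-planes q in qs (H-flux always included)."""
    for a10 in range(-50, 51):
        for b10 in range(-50, 51):
            a, b = F(a10, 10), F(b10, 10)
            lo = max(3*a + F(6 - q, 2)*b for q in qs)           # Oq eigenvalues
            hi = min([2*a + 3*b] + [4*a + (p - 3)*b for p in ps])  # H, Fp eigenvalues
            if hi >= lo and hi > 0:  # c = hi > 0 then satisfies every chain
                return True
    return False

# IIA on a Ricci-flat space with O6-planes and all RR fluxes: no-go holds
print(nogo_exists([6], [0, 2, 4, 6]))   # True  (q + p - 6 >= 0 for all p)
# O4-planes with F_0-flux evade the no-go (q + p - 6 = -2 < 0)
print(nogo_exists([4], [0]))            # False
```

The search succeeds exactly when $q+p-6\geq 0$ holds for every O-plane dimension in `qs` and every flux rank in `ps`, in agreement with the analytic criterion.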
A possible way out of these difficulties might be to allow also for non-Ricci-flat manifolds. This would contribute the additional term $V_{R_6} \sim -R_6 \sim \tau^{-2} \rho^{-1}$ to the scalar potential. It is easy to check that for positively curved manifolds ($V_{R_6} < 0$) the above conditions cannot be relaxed. Although $H$-flux is not necessary anymore to cancel the O$q$-plane tadpole, one still needs it to avoid a no-go theorem with $b=0$. For manifolds with integrated negative curvature, on the other hand, the condition for a no-go theorem is relaxed to $q+p-8\geq 0,\, \forall p,q$. The only exception is the case with O3-planes and $F_5$-flux, which saturates this inequality, but would require $c=0$ and therefore cannot be excluded based on this inequality. Table \ref{table} summarizes the no-go theorems against classical dS vacua and slow-roll inflation\footnote{In \cite{Danielsson:2009ff} a similar conclusion is reached for dS vacua with small cosmological constant using the ``abc''-method of \cite{Silverstein:2007ac}.}.
\begin{table}[h!]
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
Curvature & No-go, if & No no-go in IIA with & No no-go in IIB with \\ \hline \hline
$V_{R_6} \sim -R_6 \leq 0$ & \begin{tabular}{c} $q+p-6\geq 0,\, \forall p,q,$\\ $\epsilon \geq \frac{(3+q)^2}{3+q^2} \geq \frac{12}{7}$ \end{tabular} & O4-planes and $H$, $F_0$-flux & O3-planes and $H$, $F_1$-flux \\ \hline
$V_{R_6} \sim -R_6 > 0$ & \begin{tabular}{c}
$q+p-8\geq 0,\, \forall p,q,$ \\
(except $q=3,p=5$)\\ $\epsilon \geq \frac{(q-3)^2}{q^2-8q+19} \geq \frac{1}{3}$ \end{tabular} & \begin{tabular}{c}
O4-planes and $F_0$-flux \\
O4-planes and $F_2$-flux \\
O6-planes and $F_0$-flux
\end{tabular} &
\begin{tabular}{c}
O3-planes and $F_1$-flux \\
O3-planes and $F_3$-flux \\
O3-planes and $F_5$-flux \\
O5-planes and $F_1$-flux
\end{tabular} \\ \hline
\end{tabular}
\end{center}
\caption{The table summarizes the conditions that are needed in order to find a no-go theorem in the $(\rho,\tau)$-plane and the resulting lower bound on the slow-roll parameter $\epsilon$. The third and fourth column spell out the minimal ingredients necessary to evade such a no-go theorem.\label{table}}
\end{table}
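The lower bounds on $\epsilon$ quoted in the table follow from minimizing over the allowed O-plane dimensions $q$; a short numerical check (our own sketch; $q=3$ is omitted in the negatively curved case since it corresponds to the O3/$F_5$ exception, for which the bound vanishes):

```python
from fractions import Fraction as F

# epsilon bounds from the table, as functions of the O-plane dimension q
def eps_flat(q):      # Ricci-flat or positively curved internal space
    return F((3 + q)**2, 3 + q**2)

def eps_curved(q):    # negatively curved internal space
    return F((q - 3)**2, q**2 - 8*q + 19)

qs = [3, 4, 5, 6, 7, 8, 9]          # O-plane dimensions occurring in IIA/IIB
print(min(eps_flat(q) for q in qs))               # 12/7 (attained at q = 9)
print(min(eps_curved(q) for q in qs if q != 3))   # 1/3  (attained at q = 4)
```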
As we have argued above, it is difficult to find explicit examples with O3-planes and $F_1$-flux or with O4-planes and $F_0$-flux. The same turns out to be true for O3-planes with non-vanishing curvature \cite{Caviezel:2009tu}. The prospects of stabilizing all moduli at tree-level in IIA compactifications with O4-planes are not clear, so we will restrict ourselves in the rest of these notes to compactifications on manifolds with negative curvature and O6-planes in type IIA or O5-planes in type IIB. Moreover, we will focus on those compactifications that give an effective 4D, $\mathcal{N}=1$ supergravity action, which leads us to SU(3)-structure manifolds with O6-planes in IIA, and SU(2)-structure compactifications with O5- and O7-planes in type IIB string theory.\footnote{We need to compactify on an SU(2)-structure manifold in IIB, because the $F_1$-flux requires a 1-form. $\mathcal{N}=1$ supersymmetry then also requires O7-planes in addition to the O5-planes.}
\section{Type IIA}
The attempts to construct classical dS vacua in IIA compactifications on manifolds with negative curvature and O6-planes were initiated in \cite{Silverstein:2007ac}, where other types of sources, such as KK5-monopoles, were also used. A similar construction with only the ingredients of eq. \eqref{Scalings} was attempted in \cite{Haque:2008jz}, whereas in \cite{Danielsson:2009ff} the authors argued that the constructions of \cite{Silverstein:2007ac} and \cite{Haque:2008jz} cannot be lifted to full 10D solutions.
In this note, we review IIA compactifications on a special class of SU(3)-structure manifolds, namely coset spaces \cite{Koerber:2008rx,Caviezel:2008ik,Caviezel:2008tf} involving semisimple and Abelian groups, as well as twisted tori (solvmanifolds) \cite{Ihl:2007ah,Flauger:2008ad}. The underlying Lie group structure endows these spaces with a natural expansion basis (the left-invariant forms)
for the various higher-dimensional fields and fluxes, and one expects that the resulting 4D, $\mathcal{N}=1$ theory is a consistent truncation of the full 10D theory \cite{Cassani:2009ck}. Furthermore, in these compactifications it is possible to stabilize all moduli in AdS vacua \cite{Grimm:2004ua,DeWolfe:2005uu,Ihl:2007ah}. This means that the scalar potential generically depends on all moduli, which is a prerequisite for the construction of metastable dS vacua.
Whereas the previous analysis focused on the behavior of the potential in the volume-dilaton plane, it is clear that once the no-go theorems using these fields are circumvented, one must still make sure that there are no other steep directions of the scalar potential outside the $(\rho,\tau)$-plane. For the coset spaces and twisted tori studied in \cite{Caviezel:2008tf,Flauger:2008ad}, the volume turns out to factorize further into a two-dimensional and a four-dimensional part: $\textrm{vol}_6 = \rho^3 = \rho_{(2)} \rho^2_{(4)}$. In such cases one can then study directions involving $\rho_{(2)}$ or $\rho_{(4)}$ and finds that, if for a given model
\begin{equation}
(-2 \tau\partial_\tau -\rho_{(4)} \partial_{\rho_{(4)}}) V_{R_6} \geq 6 V_{R_6},
\end{equation}
then the full scalar potential also satisfies $(-2 \tau\partial_\tau -\rho_{(4)} \partial_{\rho_{(4)}}) V \geq 6 V$, and one obtains the bound $\epsilon \geq 2$. In \cite{Caviezel:2008tf} six out of seven coset spaces could be excluded by this refined no-go theorem. In \cite{Flauger:2008ad} many similar no-go theorems were discussed and used to exclude almost all concrete models of twisted tori.
The only spaces that could not be excluded in this manner are $SU(2) \times SU(2)$ with four O6-planes and a twisted version of $T^6/\mathbb{Z}_2 \times \mathbb{Z}_2$. These two spaces are closely related \cite{Aldazabal:2007sn}, and therefore it is not surprising that they have very similar properties. In particular, for both of these models it is possible to find (numerical) dS extrema \cite{Caviezel:2008tf,Flauger:2008ad}. Unfortunately, these dS extrema are unstable as one of the 14 real scalar fields turns out to be tachyonic with an $\eta$ parameter of order one. Interestingly, this tachyon is not the potential tachyon candidate identified for certain types of K\"ahler potentials in \cite{Covi:2008ea}. This can also be seen from the results in \cite{deCarlos:2009fq}, where a similar K\"ahler potential and a modified superpotential based on non-geometric fluxes lead to stable dS vacua (see also \cite{Dall'Agata:2009gv,Roest:2009tt,Dibitetto:2010rg}).
\section{Type IIB}
For type IIB compactifications we have seen that it is possible to evade simple no-go theorems in the $(\rho,\tau)$-plane if one includes O5-planes and $F_1$-flux. A concrete class of compactifications that allows for these ingredients and also preserves $\mathcal{N}=1$ supersymmetry in 4D was presented in \cite{Caviezel:2009tu}, based on 6D SU(2)-structure spaces with O5- and O7-planes. As discussed there, these compactifications are formally T-dual to compactifications of type IIA on SU(3)-structure spaces with O6-planes; however, these T-duals are generically non-geometric and hence do not fall under the analysis of the previous section.
This particular class of IIB compactifications has the very interesting property that the tree-level scalar potential allows for fully stabilized supersymmetric AdS vacua with large volume and small string coupling \cite{Caviezel:2009tu}. This is very different from the no-scale property of classical type IIB compactifications on $CY_3$ manifolds along the lines of \cite{Giddings:2001yu}. It also shows that the scalar potential generically depends on all moduli.
For six-dimensional SU(2)-structure spaces the split of the volume $\textrm{vol}_6 = \rho^3 = \rho_{(2)} \rho^2_{(4)}$ into a two-dimensional and a four-dimensional part is very natural, and also the complex structure moduli naturally split into two classes. This allows one \cite{Caviezel:2009tu} to derive many no-go theorems and exclude most concrete examples of coset spaces and twisted tori with SU(2)-structure. The only space that was found to evade all the no-go theorems is $SU(2) \times SU(2)$ with an SU(2)-structure and O5- and O7-planes. Just as in the IIA analogue, we can find dS critical points, but again these have at least one tachyonic direction with a large $\eta$ parameter. It would be interesting to understand the geometrical meaning of this tachyon as well as the relation of the dS extrema found in \cite{Caviezel:2008tf,Flauger:2008ad,Caviezel:2009tu} to fully localized warped 10D solutions \cite{Acharya:2006ne,Douglas:2010rt}.
\begin{acknowledgement}
This work was supported by the German Research Foundation (DFG) within the Emmy Noether Program (Grant number ZA 279/1-2) and the Cluster of Excellence ``QUEST''.
\end{acknowledgement}
\section{Introduction}
The key idea of valleytronics is in using the valley index as an additional active degree of freedom of charge carriers~\cite{PhysRevLett.99.236809,PhysRevLett.108.196802} in gapped graphene~\cite{PhysRevLett.99.236809}, monolayers of transition metal dichalcogenides (TMDs)~\cite{PhysRevLett.108.196802}, among other two-dimensional (2D) Dirac materials.
One of the representatives of TMDs is MoS$_2$: a material with a structure composed of molybdenum atoms sandwiched between pairs of sulfur atoms.
In contrast to graphene, it is characterized by inversion symmetry breaking, and it possesses a large band gap with a width in the optical range, absent in monolayer graphene~\cite{Pan2014}.
It is a direct band gap material, with the minima of the conduction band and the maxima of the valence band located at the points $K$ and $K'$ in reciprocal space.
Moreover, electrons in MoS$_2$ are subject to strong spin-orbital interaction, which also makes it different from graphene, where the spin-orbital interaction is relatively weak.
This latter property, which is due to the electrons occupying d-orbitals in MoS$_2$, results in an extra band splitting~\cite{Silva_Guill_n_2016}.
It has been shown that the interband (between the conduction and valence bands) transitions in Dirac materials are valley-sensitive: at a given circular polarization of the external electromagnetic perturbation, the interband transitions occur predominantly in one valley since the electrons in each valley couple with a specific polarization of light~\cite{Kovalev_2018}. %
Switching to the opposite circular polarization changes the valley where the interband transitions take place~\cite{Zeng_2012}.
These optical selection rules are fulfilled exactly for interband optical transitions, where the electron momentum is a good quantum number.
However, each material is to some extent disordered: it contains impurities, some of which are unintentional and emerge due to the imperfections of the growth technique, whereas some of the impurities are embedded intentionally in order to enhance electronic (or other) properties of the sample.
As a result, there emerge additional donor and acceptor energy levels in the vicinity of the conduction and valence bands, respectively.
Then, if the sample is exposed to external light
with the frequency smaller than the band gap of the material, the optical properties become mostly determined by the electron transitions from donor impurities to the conduction band and from the valence band to the acceptor states~\cite{Li1993}.
In this case, the electron states on impurities are characterized by some quantum numbers instead of the translational momentum due to localization.
The theoretical description of optical transitions from these states to the bands and the analysis of the corresponding optical selection rules, which take into account the valley quantum number, represent an important problem of valley optoelectronics, and they have been studied theoretically.
In particular, work~\cite{PhysRevLett.121.167402} presents a numerical study of a special type of disorder: the vacancy defects.
\begin{figure}[t!]
\centering
\includegraphics[width=0.48\textwidth]{system1.pdf}
\caption{System schematic. (a) A monolayer of MoS$_2$ (with impurities) exposed to a circularly polarized electromagnetic (EM) field of light (green cylinder).
(b) The bandstructure of MoS$_2$. The band gap is $\Delta$, and the system is exposed to an EM field with frequency $\omega$.
}
\label{Fig1}
\end{figure}
Usually, the energy gap between the impurity states and the conduction band corresponds to the terahertz (THz) frequency range (10--20~meV).
It can be used to design (pulsed) terahertz radiation detectors.
In such a detector, a polarized optical signal is transformed into an electric current (to be measured).
The analysis of the optical selection rules here is thus of utmost importance for applications.
We also want to mention another potential application of the theory of impurity-band transitions.
It has recently been proposed that TMD monolayers containing atomic impurities can be used as single-photon emitters~\cite{barthelmi2020atomistic}.
Utilizing artificially-created atomic vacancies, one can achieve the single-photon regime of operation.
This has recently become one of the fast-developing topics of research.
For fundamental purposes and for device design, it is useful to study the general properties of defects of any type, and an analytical treatment would be beneficial here.
However, one of the problems to face is that the simple model, which assumes the impurity potential energy to be a Dirac delta-function~\cite{doi:10.1063/1.3556738}, is not applicable in the case of a Dirac Hamiltonian since the electron wave function becomes singular exactly at the origin~\cite{PhysRevLett.96.126402}.
In this Letter, we build a theory of impurity-band transitions in 2D Dirac materials, utilizing and modifying the model of zero-radius impurity potential, which is frequently used for the description of shallow impurities in semiconductors and semiconductor nanostructures~\cite{PhysRevLett.96.126402, pakhomov1996local}.
We investigate the optical properties of disordered TMDs, examining the light absorption and photoinduced transport effects, accounting for the spin-orbital coupling of electrons.
We study the behavior of drag electric current density and the absorption coefficient for different key parameters of the sample and different polarizations of the incident light.
It should be mentioned that the generation of the electric current in 2D Dirac materials due to the \textit{interband} optical transitions (the photon and phonon drag effects) has been extensively studied~\cite{PhysRevB.81.165441, golub2011valley, PhysRevLett.122.256801, PhysRevB.102.235405, PhysRevB.102.045407}, but the impurity-band transitions have not been addressed.
\textit{Hamiltonian and eigenstates. }
Light absorption is governed by microscopic transitions of electrons from the
bound impurity states to the conduction band.
The Hamiltonian of the electron bound at the attractive potential $u(\textbf{r})$ reads
\begin{equation}
\label{EqHam1}
H=\left(\frac{\Delta}{2}\sigma_z + \textbf{v}\cdot\textbf{p}\right)\otimes \id - \frac{\lambda\eta}{2}(\sigma_z-\id)\otimes\hat{s}_z+u(\textbf{r}),
\end{equation}
where $\Delta$ is the band gap, $\textbf{v} = v(\eta\sigma_x,\sigma_y)$ is the velocity operator,
$\textbf{p}$ is the electron momentum, and $\sigma_\alpha$ with $\alpha=x,y,z$ are the pseudospin Pauli matrices.
The index $\eta=\pm1$ indicates the valley;
$\lambda$ is the intrinsic spin-orbital coupling constant; and $\hat{s}_z$ is the matrix of the electron spin.
The first term in Hamiltonian~(\ref{EqHam1})
describes a two-band model of gapped graphene (or a band structure of a TMD material).
We consider a shallow impurity potential $u(\textbf{r})$; that is, we assume that the ionization potential of the donor is much smaller than $\Delta$.
To find the eigenfunctions and eigenenergies, we write the Schr\"odinger equation in the momentum representation,
\begin{eqnarray}
\label{EqMatHam}
&&
\begin{pmatrix}
\frac{\Delta}{2}-E & vp_{-}
\\
vp_{+} & -\frac{\Delta}{2}+s\lambda\eta-E
\end{pmatrix}
\chi_{s,m}(\textbf{p})
\\
\nonumber
&&~~~~~~~~~~~~~~~~~~~~
+\int\frac{d\textbf{p}'
}{(2\pi\hbar)^2}
u({\textbf{p}-\textbf{p}'})\chi_{s,m}(\textbf{p}')
=0,
\end{eqnarray}
where $s=\pm 1$, $p_\pm=\eta p_x\pm ip_y=pe^{\pm i\eta\varphi}$ with $\varphi$ the angle of the vector $\textbf{p}$ with respect to the $x$-axis, and $m$ is the eigenvalue of the $z$-projection of the electron angular momentum (the quantum number characterizing the electron localized on the impurity),
\begin{eqnarray}
&&u(\textbf{p}-\textbf{p}')
=
2\pi\int^\infty_0rdru(r)J_0(|\textbf{p}-\textbf{p}'|r)
\\
\nonumber
&&~~~~~=
2\pi\sum_{k}\int^\infty_0rdru(r)J_k(pr)J_k(p'r)\cos k(\varphi-\varphi '),
\end{eqnarray}
where
$J_k(x)$ are the $k$-order Bessel functions.
We search for the spinor eigenfunctions in the form,
\begin{equation}
\label{EqEig1}
\chi_{s,m}(\textbf{p})=\left(
\begin{array}{c}
a_{s,m}(p)e^{i m\varphi}\\
b_{s,m}(p)e^{i(m+\eta)\varphi}
\end{array}
\right)
\end{equation}
since this form reflects the axial symmetry,
and the eigenstates are characterized by the angular momentum projection with quantum number $m$.
Substituting Eq.~\eqref{EqEig1} in Eq.~\eqref{EqMatHam} and performing the integration over $\varphi'$, we find the system of equations for the coefficients $a_{s,m}$ and $b_{s,m}$,
\begin{eqnarray}
\label{Bessel}
&&0=
\begin{pmatrix}
\frac{\Delta}{2}-E & vp
\\
vp & -\frac{\Delta}{2}+s\lambda\eta-E
\end{pmatrix}
\begin{pmatrix}
a_{s,m}(p)\\
b_{s,m}(p)
\end{pmatrix}
\\
\nonumber
&&+
\int\limits^\infty_0rdru(r)\int\limits^\infty_0
\frac{p'dp'}{\hbar^2}
\begin{pmatrix}
J_m(pr)J_m(p'r)a_{s,m}(p')\\
J_{(m+\eta)}(pr)J_{(m+\eta)}(p'r)b_{s,m}(p')
\end{pmatrix}.
\end{eqnarray}
To draw principal conclusions, we can now simplify these equations.
For a shallow impurity ($\epsilon^i_{s,m,\eta}\ll\Delta$) and low enough temperatures, only the low-lying impurity states are occupied. Then, we can consider the transitions from impurity states corresponding to the $m=0$ and $m=\pm1$ levels only.
Assuming that the potential of each impurity $u(r)$ is sharply peaked in the vicinity of its center $r=0$ and rapidly decreases with $r$~\cite{[{On one hand, in the framework of our model we assume that the potential is narrow enough, thus, we make certain qualitative simplifications in Eq.~\eqref{simple}. On the other hand, in order to disregard the intervalley mixing (and thus, disregard the splitting of the impurity states due to the intervalley mixing of electron wave functions on impurities), the potential has to be wide enough. Formally, if we denote as $a$ the characteristic size of the impurity potential and as $L$ the characteristic width of the wave function of electron localized on the impurity, one should fulfill $L\gg a\gg 1/(2K_0)$, where $2K_0$ is the distance between nonequivalent valleys in reciprocal space}]c2}, we can evaluate the Bessel functions under the integral at $r=0$. For the $m=0$ state, $J_0(0)=1$ and $J_{\eta}(0)=0$ for $\eta=\pm1$, and we find the simplified form of Eq.~\eqref{Bessel},
\begin{equation}
\label{simple}
\begin{split}
\begin{pmatrix}
\epsilon^i_{s,0,\eta} & vp \\
vp & -\Delta+s\lambda\eta
\end{pmatrix}
\begin{pmatrix}
a_{s,0} \\ b_{s,0}
\end{pmatrix}
+
\begin{pmatrix}
Au_0 \\ 0
\end{pmatrix}
=0,
\end{split}
\end{equation}
where
\begin{equation}
\label{Parms}
A=\int^\infty_0\frac{p'dp'}{\hbar^2}a_{s,0}(p');~~~ u_0 = \int^\infty_0u(r)rdr.
\end{equation}
The solution of Eq.~\eqref{simple} reads
\begin{equation}\label{WF}
\begin{pmatrix}
a_{s,0} \\ b_{s,0}
\end{pmatrix}
=-
\hbar v
\sqrt{\frac{2\pi\epsilon^i_{s,0,\eta}}{\Delta}}
\begin{pmatrix}
\frac{\Delta-s\lambda\eta}{(vp)^2+\epsilon^i_{s,0,\eta}(\Delta-s\lambda\eta)}
\\
\frac{vp}{(vp)^2+\epsilon^i_{s,0,\eta}(\Delta-s\lambda\eta)}
\end{pmatrix}.
\end{equation}
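As a quick consistency check (our own sketch; the numerical values below are arbitrary test inputs, not fitted parameters), one can verify that the spinor of Eq.~\eqref{WF} satisfies the homogeneous second row of Eq.~\eqref{simple}, $vp\,a_{s,0}+(-\Delta+s\lambda\eta)\,b_{s,0}=0$, identically in $vp$; the common prefactor of Eq.~\eqref{WF} is dropped since it cancels:

```python
import numpy as np

# Arbitrary test values (in eV): band gap, SOC, impurity ionization energy
Delta, lam, eps = 1.16, 0.075, 0.010
s, eta = 1, -1                  # spin and valley indices

for vp in np.linspace(0.0, 0.5, 11):
    D = vp**2 + eps * (Delta - s * lam * eta)   # common denominator of Eq. (WF)
    a = (Delta - s * lam * eta) / D             # a_{s,0} up to the prefactor
    b = vp / D                                  # b_{s,0} up to the prefactor
    # second row of Eq. (simple): vp * a + (-Delta + s*lam*eta) * b = 0
    assert abs(vp * a + (-Delta + s * lam * eta) * b) < 1e-12
print("second row of Eq. (simple) holds for all tested vp")
```

The first (inhomogeneous) row then fixes the combination $Au_0$ and, through Eq.~\eqref{Parms}, the energy $\epsilon^i_{s,0,\eta}$.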
For the $m = 1$, $\eta = -1$ state, using
$J_0(0)=1$, $J_1(0)=0$, and $J_2(0)=0$, Eq.~\eqref{Bessel} can be simplified,
\begin{equation}\label{M1eq}
\begin{split}
\begin{pmatrix}
\epsilon^i_{s,1,-1} & vp \\
vp & -\Delta-s\lambda
\end{pmatrix}
\begin{pmatrix}
a_{s,1} \\ b_{s,1}
\end{pmatrix}
+
\begin{pmatrix}
0 \\ Bu_0
\end{pmatrix}
=0,
\end{split}
\end{equation}
where now
\begin{equation}
\label{defB}
B=\int^\infty_0\frac{p'dp'}{\hbar^2}b_{s,1}(p').
\end{equation}
The solution of Eq.~\eqref{M1eq} reads
\begin{equation}
\begin{pmatrix}
a_{s,1} \\ b_{s,1}
\end{pmatrix}
=-
\hbar v
\sqrt
{
\frac
{2\pi(\Delta^2-\lambda^2)}
{\Delta\epsilon^i_{s,1,-1}}
}
\begin{pmatrix}
\frac{vp}{(vp)^2+\epsilon^i_{s,1,-1}(\Delta+s\lambda)}
\\
\frac{-\epsilon^i_{s,1,-1}}{(vp)^2+\epsilon^i_{s,1,-1}(\Delta+s\lambda)}
\end{pmatrix}.
\end{equation}
The energy $\epsilon^i_{s,1,-1}$ of this state can also be found using the definition Eq.~\eqref{defB}. We see that within the framework of the shallow-impurity model, the state $m=1$ forms `under' the $\eta=-1$ valley (and vice versa).
In other words, the following rule holds: $m+\eta=0$ for the $m=\pm1$ states.
The electron states in the
conduction band are described by the wave function,
\begin{equation}
\psi_{s,\eta}(\textbf{p})=\left(
\begin{array}{c}
\cos\left(\frac{\theta_{s,\eta}}{2}\right) \\
\sin\left(\frac{\theta_{s,\eta}}{2}\right)e^{i\eta\varphi_{\textbf{p}}}
\end{array}
\right),
\end{equation}
where we use the notations $\cos\theta_{s,\eta}=(\Delta-s\lambda\eta)/2E_{s,\eta}(\textbf{p})$ and $\sin\theta_{s,\eta}=\eta vp/E_{s,\eta}(\textbf{p})$
with the conduction band electron energy $E_c(\textbf{p}) = s\eta\lambda/2 + E_{s,\eta}(\textbf{p})$, $E_{s,\eta}(\textbf{p})=\sqrt{(vp)^2+\left[(\Delta-s\lambda\eta)/2\right]^2}$.
Since the transitions from the impurity state with a given valley number $\eta$ to the conduction band of the other valley $\eta'\neq\eta$ are strongly suppressed due to the large distance between the valleys in the reciprocal space~\cite{Gorycaeaau4899}, the main contribution to the light absorption comes from the impurity-band transitions with the same valley number $\eta' = \eta$.
From the point of view of applications, the most interesting is the circularly-polarized EM field case.
The Hamiltonian describing the interaction of electrons with the external EM perturbation reads $\hat{V}(\textbf{r},t)=-e\textbf{v}\cdot\mathbf{A}(\textbf{r},t)$,
where $\mathbf{A}(\mathbf{r},t) = \mathbf{A}_0 \exp(i\mathbf{k}\cdot\mathbf{r}-i\omega t) + \mathbf{A}^{*}_0 \exp(-i\mathbf{k}\cdot\mathbf{r}+i\omega t)$ is the vector potential of the EM field.
Here, $\textbf{A}_0 = A_0\hat{x}+A_0i\sigma\hat{y}$ with $\sigma$ the light polarization, $\hat{x}$ and $\hat{y}$ the unit vectors in the corresponding directions in direct space, and $\mathbf{k}$ and $\omega$ the photon wave vector and frequency, respectively.
\begin{figure}[t!]
\centering
\includegraphics[width=0.4\textwidth]{current_m0.pdf}
\caption{Spectrum of electric current density due to the transitions from $m=0$, $\eta=-1$
impurity states to spin-up conduction band states for the polarization of light $\sigma = -1$ (red) and $\sigma = 1$ (blue); $\gamma=\epsilon^{i}_{1,0,-1}/\hbar$,
where $\epsilon^{i}_{1,0,-1} = 10$~meV is the energy of impurity counted from the bottom of the conduction band.
Black curves show the positive and negative contributions to current in $\sigma = -1$ case.
We used the density of impurities
$n_i \approx 5\times10^{12}$~cm$^{-2}$;
electron relaxation time $\tau=2\times10^{-13}$~s, velocity $v=at/\hbar$, the lattice constant of MoS$_{2}$ $a=3.193$~\AA,
effective hopping integral $t=1.10$~eV~\cite{Hatami_2014}, amplitude of light
$A_0=3.8\times10^{12}$~eV$\cdot$s/(C$\cdot$m), the band gap $\Delta=1.16$~eV, and the spin-orbit coupling strength $\lambda=75$~meV.
}
\label{Fig2}
\end{figure}
\textit{Electric current density. }
The general expression for the (partial) component of photon-drag electric current density, corresponding to electron transitions from the impurity state with a quantum number $m$ to the conduction band, reads $(\alpha = x, y)$
\begin{equation}
\label{EqCurrentMain}
j_{m\alpha}=\frac{2\pi en_i\tau}{\hbar}\int
\frac{v_\alpha(\textbf{p})d\textbf{p}}{(2\pi\hbar)^2}
|M_m(\textbf{p},\textbf{k})|^2
\delta(E_{c}(\textbf{p})-E_i-\hbar \omega),
\end{equation}
where $n_i$ is the impurity concentration, $e$ is the elementary charge, $\tau$ is the electron relaxation time in conduction band, $M_m(\mathbf{p},\mathbf{k})=\langle\psi_{s,\eta}(\mathbf{p})|\hat{V}|\chi_{s,m}(\mathbf{p}-\mathbf{k})\rangle$ is the matrix element of impurity-band transitions, and the electron velocity components read $v_x(\textbf{p})=\eta\sin\theta_\textbf{p}\cos\varphi_\textbf{p}$ and $v_y(\textbf{p})=-i\eta\sin\theta_\textbf{p}\sin\varphi_\textbf{p}$;
$E_i = \Delta/2 - \epsilon^i_{s,m,\eta}$ is the impurity energy level.
\begin{figure}[t!]
\centering
\includegraphics[width=0.4\textwidth]{current_both_m.pdf}
\caption{Spectrum of electric current density due to the transitions from $m=1$, $\eta=-1$ (solid) and $m=-1$, $\eta=1$ (dashed) impurity states to spin-up conduction band states for the polarizations of light $\sigma = -1$ (red) and $\sigma = 1$ (blue); $\gamma=\epsilon^{i}_{1,1,-1}/\hbar$.
The other parameters are taken the same as in Fig.~\ref{Fig2}. }
\label{Fig3}
\end{figure}
Considering the $m = 0$ impurity state, we find the corresponding matrix element,
\begin{eqnarray}
\label{EqM0}
|M_0(\textbf{p,k})|^2 &=&
(evA_0)^2
(\hbar v)^2
\frac
{2\pi\epsilon^i_{s,0,\eta}}
{\Delta}
\\
\nonumber
&&
\times
\left\{
\frac{(\eta+\sigma)^2v^2(\textbf{p}-\textbf{k})^2\cos^2\left(\frac{\theta_{s,\eta}}{2}\right)
}{\Big[v^2(\textbf{p}-\textbf{k})^2+\epsilon^i_{s,0,\eta}(\Delta-s\lambda\eta)\Big]^2}
\right.
\\
\nonumber
&&~~~~~~~\left.
+
\frac{
(\eta-\sigma)^2(\Delta-s\lambda\eta)^2\sin^2\left(\frac{\theta_{s,\eta}}{2}\right)
}{\Big[v^2(\textbf{p}-\textbf{k})^2+\epsilon^i_{s,0,\eta}(\Delta-s\lambda\eta)\Big]^2}
\right\}.
\end{eqnarray}
Without loss of generality, let us choose $\mathbf{k}$ to be directed along the $x$-axis ($\mathbf{k}=k\hat{x}$).
Then, in Eq.~\eqref{EqCurrentMain} only the term containing $\cos\varphi_\textbf{p}$ survives and $j_y = 0$, which reflects the fact that the photon-drag current is directed along the photon wave vector.
Substituting Eq.~\eqref{EqM0} in Eq.~\eqref{EqCurrentMain}, we find
\begin{equation}
\begin{split}
&
j_{0x}
=
\beta_{0}'\Theta[\delta\hbar \omega_{s,0,\eta}]
\frac{k\pi}{v}\frac{(\Delta-s\lambda\eta)+\delta\hbar \omega_{s,0,\eta}}{\Big[(\delta\hbar \omega_{s,0,\eta})^2 + (\Delta - s\lambda\eta)\hbar \omega\Big]^2}
\\
&
\times
\frac{\delta\hbar \omega_{s,0,\eta}}{(\Delta - s\lambda\eta) + \delta\hbar \omega_{s,0,\eta}}
\Bigg\{
\frac{4(\Delta-s\lambda\eta)^2}{(\delta\hbar \omega_{s,0,\eta})^2 + (\Delta - s\lambda\eta)\hbar \omega}
\\
&
\times
\delta\hbar \omega_{s,0,\eta}(\eta-\sigma)^2
+
\Big[(\Delta-s\lambda\eta)+\delta\hbar \omega_{s,0,\eta}\Big](\eta+\sigma)^2
\\
\label{EqCurrent0}
&
\times
\left[
\frac{4\Big((\Delta-s\lambda\eta)+\delta\hbar \omega_{s,0,\eta}\Big)\delta\hbar \omega_{s,0,\eta}}{(\delta\hbar \omega_{s,0,\eta})^2 +(\Delta-s\lambda\eta)\hbar \omega}
-2\right]
\Bigg\},
\end{split}
\end{equation}
where
$\beta_{0}'=en_i\tau v^2\epsilon_i(evA_0)^2/\hbar\Delta$ and $\delta\hbar\omega_{s,m,\eta}=\hbar\omega-\epsilon^i_{s,m,\eta}$.
Figure~\ref{Fig2} shows the spectrum of electric current density for different polarizations of light and $m=0$, $\eta=-1$.
It is interesting to note that, in the case of $\sigma = -1$ light polarization, the electric current flows in the opposite direction in a certain range of frequencies and then changes its direction.
A similar inversion of the direction of the electric current density was demonstrated in Ref.~\cite{PhysRevB.81.165441}.
Mathematically, it happens due to an interplay of different terms in Eq.~\eqref{EqCurrent0}, shown as dashed curves.
Such behavior is not observed for $\sigma=1$. For the case $m = 1$ (and, correspondingly, $\eta=-1$), we find
\begin{eqnarray}
\label{EqM1}
|M_{1}(\textbf{p},\textbf{k})|^2&=&
(evA_0)^2
(\hbar v)^2
\frac
{2\pi(\Delta^2-\lambda^2)}
{\Delta\epsilon^i_{s,1,-1}}
\\
\nonumber
&\times&
\left\{
\frac{(\sigma-1)^2(\epsilon^i_{s,1,-1})^2\cos^2\left(\frac{\theta_{s,-1}}{2}\right)}{\Big[v^2(\textbf{p}-\textbf{k})^2+\epsilon^i_{s,1,-1}(\Delta+s\lambda)\Big]^2}\right.\\
\nonumber
&&~~~~~\left.+
\frac{(\sigma+1)^2v^2(\textbf{p}-\textbf{k})^2\sin^2\left(\frac{\theta_{s,-1}}{2}\right)}{\Big[v^2(\textbf{p}-\textbf{k})^2+\epsilon^i_{s,1,-1}(\Delta+s\lambda)\Big]^2}
\right\}.
\end{eqnarray}
Again, only the $x$-component of the current is finite,
\begin{equation}
\label{EqCurrent1}
\begin{split}
j_{1x}
=
\beta_1'\Theta[\delta\hbar\omega_{s,1,-1}]\frac{k\pi}{v}\frac{(\Delta+s\lambda)+\delta\hbar\omega_{s,1,-1}}{\Big((\delta\hbar\omega_{s,1,-1})^2 + (\Delta + s\lambda)\hbar \omega\Big)^2}
\\
\times
\frac{\delta\hbar\omega_{s,1,-1}}{(\Delta+s\lambda) + 2\delta\hbar\omega_{s,1,-1}}
\Bigg\{
\frac{(\Delta+s\lambda)+\delta\hbar\omega_{s,1,-1}}{(\delta\hbar\omega_{s,1,-1})^2 + (\Delta + s\lambda)\hbar \omega}
\\
\times
4(\epsilon{^i_{s,1,-1}})^2(\sigma-1)^2+\delta\hbar\omega_{s,1,-1}(\sigma+1)^2
\\
\times
\left(
\frac{4\Big((\Delta+s\lambda)+\delta\hbar\omega_{s,1,-1}\Big)\delta\hbar\omega_{s,1,-1}}{(\delta\hbar\omega_{s,1,-1})^2 +(\Delta+s\lambda)\hbar \omega}
-2\right)
\Bigg\},
\end{split}
\end{equation}
where $\beta_{1}'=\beta_{0}'[\Delta^2-\lambda^2]/\epsilon_i^2$.
Figure~\ref{Fig3} shows the spectrum of electric current density for $m = -1$ and $m = 1$.
We choose $\eta = -1$ for $m = 1$ and $\eta = 1$ for $m = -1$.
Furthermore, for $m = 1$ it is the $\sigma=1$ polarization that produces the regions of positive and negative electric current (blue solid curve), whereas for $m = -1$ the $\sigma = -1$ polarized light produces such a current (red dashed curve).
For a given $\sigma$, we have optical transitions in both the K (for $m=0$ or $-1$) and K$'$ (for $m=0$ or $1$) valleys.
They can sum up or partially compensate each other.
\textit{Symmetry analysis of the electric current density.}
Let us now analyse the formulas for the electric current density [Eqs.~\eqref{EqCurrent0} and~\eqref{EqCurrent1}] from the symmetry point of view, and compare them with the case of a graphene monolayer~\cite{glazov2014high}.
Single-layer graphene (without a substrate) possesses the $D_{6h}$ point group, while single-layer MoS$_2$ has the $D_{3h}$ point group.
However, for both groups, the fourth-rank (generalized conductivity) tensor $\Phi_{\alpha\beta\gamma\mu}$ is the same~\cite{boyd2020nonlinear}.
The general expression for the electric current density reads~\cite{glazov2014high}
\begin{equation}
\label{EqCur1x}
j_{x} = T_1 k_x \frac{|E_x|^2 + |E_y|^2}{2} +
T_2 k_x \frac{|E_x|^2 - |E_y|^2}{2},
\end{equation}
where $E_x$ and $E_y$ are the components of the electric field;
$T_1$ and $T_2$ are constants describing linear photon drag effect.
Since the electric field is circularly polarized in our case, $\mathbf{E}=E_0(1,i)$,
only the first term in Eq.~\eqref{EqCur1x} remains, and the other one vanishes since $|E_x|=|E_y|$.
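This cancellation can be checked directly. The snippet below uses placeholder values for $T_1$, $T_2$, $k_x$, and $E_0$ (chosen only for illustration, not taken from any fit) and verifies that the second term of Eq.~\eqref{EqCur1x} vanishes for circular polarization $\mathbf{E}=E_0(1,i)$:

```python
# Numerical check that the linear-dichroic (T2) term of Eq. (EqCur1x)
# vanishes for circularly polarized light E = E0 * (1, i).
# T1, T2, kx, E0 are placeholder values for illustration only.
E0 = 2.5
Ex, Ey = E0 * 1.0, E0 * 1j          # circular polarization: Ey = i*Ex

T1, T2, kx = 0.7, 0.3, 1.2

term1 = T1 * kx * (abs(Ex)**2 + abs(Ey)**2) / 2
term2 = T2 * kx * (abs(Ex)**2 - abs(Ey)**2) / 2

assert term2 == 0.0                           # |Ex| = |Ey| kills the T2 term
assert abs(term1 - T1 * kx * E0**2) < 1e-12   # surviving term: T1 * kx * |E0|^2
```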
We see that Eqs.~\eqref{EqCurrent0} and~\eqref{EqCurrent1} obey the symmetry properties of the system.
\textit{Light absorption coefficient. }
Furthermore, let us study the light absorption coefficient for the $m$-th impurity state.
It is defined as the ratio of the energy flux of absorbed photons to the average energy flux of incident photons~\cite{fang2013quantum}, $\alpha_m(\hbar \omega) = \hbar \omega W_m/P$,
where $P=n_rc\epsilon_0\omega^2A_0^2/2$ is the average Poynting flux (the incident light intensity)~\cite{chuang2012physics},
$n_r$ is the refractive index of MoS$_2$, and $\epsilon_0$ is the vacuum permittivity.
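The normalization $\alpha = \hbar\omega W/P$ can be sketched numerically. In the snippet below the refractive index, vector-potential amplitude, and absorption rate are illustrative placeholders (not values derived in this work); only the structure of the formula is taken from the text:

```python
import math

# Sketch of the absorption-coefficient normalization, alpha = hbar*omega*W/P,
# with P = n_r*c*eps0*omega^2*A0^2/2 in SI units.
c    = 2.998e8        # speed of light, m/s
eps0 = 8.854e-12      # vacuum permittivity, F/m
hbar = 1.055e-34      # reduced Planck constant, J*s

n_r   = 4.0           # assumed refractive index (placeholder)
A0    = 3.8e-9        # vector-potential amplitude, V*s/m (placeholder scale)
omega = 1.2 * 1.602e-19 / hbar   # photon frequency for hbar*omega = 1.2 eV

P = n_r * c * eps0 * omega**2 * A0**2 / 2   # average incident energy flux, W/m^2
W = 1.0e20            # absorption rate per unit area (placeholder), 1/(m^2 s)
alpha = hbar * omega * W / P                # dimensionless absorbance
```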
The probability of light absorption in a given valley $\eta$ and from a particular impurity state $m$ is given by the Fermi golden rule,
\begin{equation}
\nonumber
W_m(\omega)=\frac{2\pi n_i}{\hbar}\int\frac{d\textbf{p}}{(2\pi\hbar)^2}|M_m(\textbf{p},0)|^2\delta(E_{c}(\textbf{p})-E_i-\hbar \omega).
\end{equation}
For the transition from $m=0$ impurity state, we find
\begin{equation}
\begin{split}
&\alpha_0 =
\frac{2\pi n_ie^2v^2\epsilon^i_{s,0,\eta}}{n_rc\omega\Delta\epsilon_0}
\Theta[\delta\hbar \omega_{s,0,\eta}]
\frac{\delta\hbar \omega_{s,0,\eta}}{(\Delta - s\lambda\eta)+2\delta\hbar \omega_{s,0,\eta}}
~~~
\\
&\times
\frac{\sqrt{4\Big(\Delta-s\lambda\eta+\delta\hbar \omega_{s,0,\eta}\Big)\delta\hbar \omega_{s,0,\eta}+(\Delta-s\lambda\eta)^2}}{\Big((\delta\hbar \omega_{s,0,\eta})
^2+(\Delta-s\lambda\eta-\epsilon^i_{s,0,\eta})\hbar \omega\Big)^2}
\\
&\times
\left\{
(\eta+\sigma)^2\Big(\Delta-s\lambda\eta+\delta\hbar \omega_{s,0,\eta}\Big)^2\right.
\\
\nonumber
&~~~~~~~~~~~~~~~~~~~~~\left.
+(\eta-\sigma)^2(\Delta - s\lambda\eta)^2
\right\},
\end{split}
\end{equation}
and for $m=1,\eta=-1$ state,
\begin{equation}
\begin{split}
\alpha_1 & =
\frac{2\pi n_ie^2v^2}{n_rc\omega\Delta\epsilon_0}
\Theta[\delta\hbar \omega_{s,1,-1}]
\frac{\Delta^2-\lambda^2}{\epsilon^i_{s,1,-1}}
\frac{(\Delta+s\lambda)+\delta\hbar \omega_{s,1,-1}}{(\Delta + s\lambda)+2\delta\hbar \omega_{s,1,-1}}
\\\times&
\frac{\sqrt{4\Big(\Delta+s\lambda+\delta\hbar \omega_{s,1,-1}\Big)\delta\hbar \omega_{s,1,-1}+(\Delta+s\lambda)^2}}{\Big((\delta\hbar \omega_{s,1,-1})
^2+(\Delta+s\lambda-\epsilon^i_{s,1,-1})\hbar \omega\Big)^2}
\\\times&
\left\{
(\sigma-1)^2(\epsilon^i_{s,1,-1})^2
+(\sigma+1)^2(\delta\hbar \omega_{s,1,-1})^2
\right\}.
\end{split}
\end{equation}
\begin{figure}[t!]
\centering
\includegraphics[width=0.4\textwidth]{absorbance.pdf}
\caption{Spectrum of
absorbance for $m = -1$ (red), $m = 0$ (green), and $m = 1$ (blue); $\sigma = 1$ (dashed) and $\sigma = -1$ (solid).
}
\label{Fig4}
\end{figure}
Figure~\ref{Fig4} shows the spectra of absorbance.
For the transitions from the state $m = 1$, the magnitude of absorbance is higher for the $\sigma = -1$ light, but with increasing photon energy the valley dependence disappears.
For the transitions from the state $m = 0$, both light polarizations give comparable contributions.
It is enlightening to compare the matrix element corresponding to impurity-band transitions with the matrix element for the interband transitions, $|M_{cv}(\mathbf{p})|^2$~\cite{Kovalev_2018, PhysRevB.103.035434}.
The valley selectivity for interband transitions is to a large extent satisfied only at small values of momentum $p$, giving $|M_{cv}(0)|^2\propto(\eta+\sigma)^2$.
In our case, the transitions from $m=0$ impurity states are strongly suppressed due to $|M_0(0,0)|^2\rightarrow0$, whereas for $m=\pm1$ we find $|M_{m=\pm1}(0,0)|^2\propto\epsilon_i^2(\sigma+\eta)^2$ under the condition $m+\eta=0$.
It means that the valley selectivity takes place for the orbital impurity states $m=\pm1$ (for which $\exp(im\varphi)\neq1$), reflecting the chirality of the band electron wavefunction. These general conclusions are supported by the numerical analysis.
For instance, the absorption coefficient in Fig.~\ref{Fig4} is large in the vicinity of the threshold for $m=1,\eta=-1$ state at $\sigma=-1$ polarization.
\textit{In conclusion}, we have studied the selection rules for the light-induced transitions from impurity states to the conduction band in two-dimensional gapped Dirac materials.
For that, we calculated and investigated
the absorption coefficient
and the photon-drag-induced electric current.
For clarity, we used the shallow impurity potential model.
Nevertheless, this model correctly reflects the selection rules of any impurity possessing the azimuthal symmetry.
Thus, our conclusions on the optical selection rules are sufficiently general.
\textit{Acknowledgements.}
We thank Dr.~Meng Sun and Ihor Vakulchyk for useful discussions.
The part of the calculations devoted to the absorption coefficient was supported by the Russian Science Foundation (Project No.~17-12-01039).
The current density analysis was supported by the Institute for Basic Science in Korea (Project No.~IBS-R024-D1).
\section{Introduction} \label{sec:intro}
The Triangulum Galaxy (M33) is one of two dwarf spirals in the Local Group \citep{Gaia_Co2021}. With a stellar mass of roughly $4.8 \times 10^{9}M_{\odot}$ \citep{corbelli2014}, it is ten times less massive than the Milky Way (MW) and Andromeda (M31) and twice as massive as the Large Magellanic Cloud (LMC). It has a high star formation rate (SFR), $\sim0.7M_{\odot} \rm yr^{-1}$ \citep{Blitz_2006}, characteristic of later type spirals. Furthermore, it is a relatively isolated galaxy, making its disk more pristine. As high redshift galaxy observations push down to lower stellar masses, M33 becomes an important local comparison point.
\par While M33 is sufficiently close to be studied in detail, much remains unknown. For example, there is debate as to its interaction history with M31. Some of M33's properties could be explained by an interaction, such as its highly warped H\,{\smcap i}\ disk that extends far beyond the stellar disk, out to $22\rm\ kpc$ \citep{braun2004, putman2009, semczuk2018}, the apparent disturbances to the outskirts of M33's stellar disk \citep{mcconnachie09, McConnachie_2010}, and the mutual burst of star formation $2-4$ Gyr ago in both M31 and M33 \citep[e.g.,][]{putman2009, mcconnachie09, lewis2015, Ferguson2016, semczuk2018, Williams2017}. However, new studies using the proper motion of both galaxies from Gaia DR2, the Hubble Space Telescope (HST) and the Very Long Baseline Array (VLBA), and cosmological simulations support the notion that M33 is on its first infall \citep{patel2017a, patel2017b, vandermarel18} and has had no significant interaction with M31.
\par Furthermore, M33 is more complex than expected for a low luminosity galaxy. Such galaxies are thought to be dominated by a single component, although the highest mass dwarfs can have a thick disk in addition to a stellar halo \citep{Roychowdhury2013, van_der_Wel_2014, Patra2020, Kado_Fong_2020, kado-fong2021}. M33, however, is proving to be more complex, with a newly confirmed centrally-concentrated halo \citep{Gilbert2021}, a bar \citep{Williams2021, smercina2022, lazzarini2022}, and the mysterious warps described above \citep{braun2004, putman2009, semczuk2018}.
\par Resolved stellar spectroscopy is a valuable tool to unlock the history of M33, especially when information from spectroscopic observations is examined as a function of stellar age. This is {\it{only}} possible in the Local Group, in which the distances to galaxies allow us to view their entire disks while resolving individual stars. For example, stellar line-of-sight velocity dispersion as a function of stellar age can distinguish gradual heating from heating via a single event; in M31, the high velocity dispersion increases monotonically with stellar age, suggesting a constant and violent merger history \citep{Dorman2015}. Comparing stellar kinematics to gas kinematics can also give insight into a galaxy's dynamical heating history, as was demonstrated in M31 \citep{Quirk2019}, where the asymmetric drift of stellar populations is correlated with their velocity dispersion in the plane of the disk. Furthermore, comparing this observed asymmetric drift to that in simulations suggests that M31 had a relatively recent 4:1 merger \citep{Quirk2020}. M33's distance of $859 \rm\ kpc$ \citep[]{degrijs2014} is comparable to that of M31 \citep{mcconnachie09}, so we can measure spectroscopy of individually resolved stars in M33. These techniques may also be capable of constraining the merger history of M33.
\par We expect M33 to be a particularly interesting target for these studies. Unlike M31's obvious remnants from its violent history, M33 is morphologically less disturbed. However, it has a much higher SFR \citep{Blitz_2006} and lower disk mass surface density \citep{corbelli2014}, placing its disk in a very different regime than M31's. M33 is also the prime environment to observe the effects of internal heating (i.e., bursts from star formation, perturbations from giant molecular clouds and the bar, and density waves from spiral arms) because of its low mass and low inclination \citep[$\sim54 \degr$]{warner1973}. Stellar feedback can cause powerful inflows and outflows of gas and bursts of star formation, which can result in drastic radial migration in low mass galaxies and can even lead to the creation of a stellar halo \citep[e.g.,][]{stinson2009, maxwell2012, el-badry2016}. This internal feedback could have drastically changed the stellar kinematics of M33 since its birth. Stellar disks are fragile \citep{toth1992}, and while more massive disks are likely able to survive major merger events \citep[e.g.,][]{DSouza2018, Hammer2018} and low mass disks are believed to be able to survive many minor events \citep{helmi2012}, low mass disks, like that of M33, are unlikely to remain intact after major merger events. This notion, paired with the fact that M33 is relatively isolated ($\sim 230\rm\ kpc$ from M31) and that about half of its stellar mass comes from stars that are $\sim 10$ Gyr old \citep{Williams_2009}, means that the disk of M33 is fairly pristine and can therefore give us insight into the evolution of isolated high redshift galaxies.
\par {\it In this paper, we present the TRiangulum EXtended (TREX) Survey of $\sim 7000$ targets, making it the largest stellar spectroscopic survey of M33.} The survey spans across the entire disk of M33, out to a deprojected radius of $\sim 11$ kpc. It is the first dataset that consists of individually resolved stars that extends across the entire inner and into the outer disk. {\it With this dataset, we examine the kinematics of stars in the disk of M33 as a function of stellar age to measure the dynamical heating of the evolving disk.} This analysis, which uses a subset of the total sample, is the first study of disk kinematics as a function of stellar age in M33 using only individual stars and is overall the third of its kind (after the MW and M31). The robust dataset presented here has already been used to confirm the existence of a dynamically hot component in M33 \citep{Gilbert2021}.
\par This paper is organized as follows. In Section \ref{sec:Data} we present our new spectroscopic dataset and archival datasets used in this study. Section \ref{sec:ages} describes the separation of stars into broad age bins and the removal of possible halo stars, and Section \ref{sec:velocities} shows the calculation of local velocity dispersion, rotation velocity, and asymmetric drift. Section \ref{sec:illustris} highlights a comparison of observed kinematics to the kinematics seen in M33-like simulated galaxies, and Section \ref{sec:LG} compares them to the kinematics of M31 and the MW Solar Neighborhood. We summarize the main points of this work in Section \ref{sec:summary}, and in Section \ref{sec:rare}, we show the kinematics of rare spectral types.
\section{Data} \label{sec:Data}
Our study made use of large catalogs of stellar photometry and spectroscopy, as well as imaging data of the M33 gas content. Below we describe the photometric, stellar spectroscopic, and gas imaging catalogs.
\subsection{Stellar Data}
We started with large libraries of resolved stellar data, including space-based and ground-based photometry. These, in turn, allowed us to obtain our stellar spectroscopic dataset. We selected targets from a wide range of masses and evolutionary stages, including massive main sequence (MS), massive helium burning (HeB), intermediate mass asymptotic giant branch (AGB), and low mass red giant branch (RGB) stars. The use of these different stellar types is described later in Section \ref{sec:ages}. These different stages were not prioritized equally over the four observing epochs; see the description below and Table \ref{tab:masks_info}. The broad evolutionary stage of a star comes from color magnitude diagrams (CMDs) of the photometry catalogs described in Section \ref{sec:phot}. We describe this process in detail below.
\subsubsection{Photometry} \label{sec:phot}
Our strategy for target selection relies on selecting bright isolated stars from resolved stellar photometry. The high precision of this photometry allows us to target stars in the crowded central regions of M33 with confidence that we were observing isolated stars instead of blended light from multiple sources.
\par Over the four years of spectroscopic observations, our stellar selection varied in response to the available photometry and evolving scientific opportunities, as stated in Table \ref{tab:masks_info}. We relied on a mix of photometry from the Hubble Space Telescope (HST) and the Canada-France-Hawaii Telescope (CFHT). Targets observed in 2016 were selected using archival HST data with broad bands F475W + F814W or F606W + F814W, or, where there was a gap in HST coverage, using data from MegaCam on CFHT with $i-$ and $g-$ bands. The HST fields were observed with the Advanced Camera for Surveys (ACS), and the reduction is described in \cite{Williams_2009, Williams2014}. Each of the 2016 masks overlapped with several of these ACS fields. The CFHT/MegaCam data were reduced using the MegaPipe reduction pipeline \citep{gwyn2008}. The primary targets for these masks were RGB stars, but the masks also included some AGB and red HeB stars and a small number of MS and BHeB stars.
\par The 2018 and 2019 slitmasks had targets selected from HST photometry from the Panchromatic Hubble Andromeda Treasury: Triangulum Extended Region \citep[PHATTER;][]{Williams2021} and CFHT/MegaCam (same as described above). The PHATTER survey observed stars in the Andromeda and Triangulum galaxies with six filter photometry using ACS and WFC3: F275W, F336W, F475W, F814W, F110W, and F160W \citep{Dalcanton2012, Williams2021}. The photometric catalogs are described in \cite{Williams2014, Williams2021}.
\par The availability of six filter photometry allows us to more precisely divide stars into broad age bins, as described later in Section \ref{sec:ages}. With the six filter photometry, we were able to target a range of stellar evolutionary stages for the 2018 masks: MS, HeB, AGB, and RGB stars. To sample a broad range of stellar ages, we preferentially targeted rarer stars, including a large number of bright HeB stars. These stars were identified from PHATTER CMDs. CFHT/MegaCam data was used to fill in the outer parts of the DEIMOS slitmasks that extended beyond the HST PHATTER footprint, into the low density outer disk where HST resolution is less needed. For the 2019 masks, RGB stars were the primary targets. These stars were identified from PHATTER CMDs if in the PHATTER range. If they came from the CFHT/MegaCam MegaPipe reduction, we prioritized stars with $g - i > 0.5$ and $21 < i < 22$.
\par The 2020 slitmasks were positioned to probe the outer disk, and they are beyond the PHATTER footprint and any other continuous HST coverage. For these outer slitmasks, we used the catalog from the Pan-Andromeda Archaeological Survey (PAndAS) \citep{McConnachie2018}. PAndAS used CFHT/Megacam to observe $>$400 square degrees of sky centered on Andromeda and Triangulum with $i-$ and $g-$ bands. The observations and data reduction are described in \cite{McConnachie2018}. Only objects that were flagged by the PAndAS team to most likely be stars were included in our target list. We prioritized placing RGB stars on these masks ($g - i > 0.5$ and $20.5 < i < 22.5$).
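As a hedged illustration of the color and magnitude cuts quoted above (the catalog structure here is a stand-in, not the actual PAndAS format), RGB candidates could be selected as:

```python
# Illustrative RGB candidate selection with the PAndAS-style cuts from the
# text: g - i > 0.5 and 20.5 < i < 22.5. The list-of-dicts catalog below is
# synthetic and only shows the shape of the selection.
def is_rgb_candidate(g, i, color_min=0.5, i_bright=20.5, i_faint=22.5):
    """Return True if the star passes the color and magnitude cuts."""
    return (g - i) > color_min and i_bright < i < i_faint

catalog = [
    {"id": 1, "g": 22.4, "i": 21.5},   # red, in magnitude range -> keep
    {"id": 2, "g": 21.6, "i": 21.5},   # too blue (g - i = 0.1)   -> drop
    {"id": 3, "g": 21.0, "i": 20.2},   # too bright (i < 20.5)    -> drop
]
targets = [s["id"] for s in catalog if is_rgb_candidate(s["g"], s["i"])]
# targets == [1]
```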
\par To avoid blends, especially in the crowded central regions, we applied an isolation criterion, $I_{\rm neighbor} < I_{\rm target} - (\frac{d}{0.8\arcsec})^{2} +3$, to exclude stars with neighbors that are too close and/or too bright and that therefore might contaminate the target's light during spectroscopic observation with DEIMOS \citep{Dorman2012}. We applied this criterion to all of the photometry catalogs used, although it was most critical for the crowded regions targeted using PHATTER photometry. If a target candidate has even a single neighbor that fulfills this criterion, it is excluded from the slitmask.
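A minimal sketch of this isolation test follows, assuming magnitudes and angular separations in arcseconds as inputs; the function name and data layout are ours for illustration, not the survey pipeline's:

```python
# Sketch of the isolation criterion from the text: a target is rejected if
# any neighbor satisfies I_neighbor < I_target - (d / 0.8)**2 + 3, where d is
# the separation in arcsec and I are magnitudes (smaller = brighter).
def is_isolated(i_target, neighbors):
    """neighbors: iterable of (i_neighbor_mag, separation_arcsec) pairs."""
    for i_nb, d in neighbors:
        if i_nb < i_target - (d / 0.8)**2 + 3:
            return False   # neighbor close/bright enough to contaminate
    return True

# A 21.0 mag target with a faint neighbor 2" away passes ...
assert is_isolated(21.0, [(23.5, 2.0)])
# ... but an equally bright neighbor 0.8" away fails (21.0 < 21.0 - 1 + 3).
assert not is_isolated(21.0, [(21.0, 0.8)])
```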
\par Even with this criterion, it is possible to have multiple objects in a given slit. The majority of these serendipitous observations have good quality spectra and well-measured velocities that did not interfere with the main target's spectrum. However, since we do not have easily paired photometry for these objects, we do not include them in this particular study, although they will eventually be incorporated. The total number of targets is 7684, with 2381 from 2016, 2457 from 2018, 906 from 2019, and 1940 from 2020. Adding in the serendipitous targets would increase the sample by $\sim 27 \%$.
\subsubsection{Keck II DEIMOS Spectroscopy}\label{sec:spec}
The spectroscopic data come from four epochs of observing. All observations were taken with the DEIMOS Spectrograph \citep{faber2003} on the Keck II 10 meter telescope. The program uses thirty-six DEIMOS slitmasks and two different grating setups -- one to target a wide range of stellar evolutionary phases and one to target older, redder stars. The first ten slitmasks were observed in 2016 using the 600 line mm$^{-1}$ grating, which has a resolution of R$\sim 2000$ and a central wavelength and wavelength range of $7200$~\AA\ and $\lambda\sim 4600$--9800~\AA, respectively. This setting allows us to target a wide range of spectral types. In 2018, we observed eleven slitmasks across the central disk using the same configuration. In 2019, we obtained four additional slitmasks of spectroscopic data. The first used the 600 line mm$^{-1}$ grating configuration, and the remaining three were observed using the 1200 line mm$^{-1}$ grating (R$\sim 6000$) to account for additional moon light and to target older stars. With this grating, we used a central wavelength of 7800~\AA\ and a wavelength range of $\lambda\sim 6300$--9800~\AA, focusing on the redder part of the spectrum where RGB stars emit significant flux. In the Fall of 2020, we observed the last eleven slitmasks in the outer disk with the 1200 line mm$^{-1}$ configuration.
\par The layout of the thirty-six slitmasks can be seen in Figure \ref{fig:m33_masks}. Because of the high density of stars in the inner regions, we were able to have slitmasks targeting different stars at the same slitmask location and orientation. Some of the targets on different slitmasks were repeated to obtain higher signal for faint targets and to help calibrate velocity measurement errors. The positions of the 2019 slitmasks are the same as those of the two most northern 2018 slitmasks. Each slitmask was observed for approximately 2 hours. Table \ref{tab:masks_info} lists the names, positions, orientations, exposure times, numbers of stars observed, years observed, gratings used, photometry sources for target selection, and the main targets for each DEIMOS slitmask.
\begin{figure}[bp!]
\centering
\includegraphics[width=\columnwidth]{m33_mask.pdf}
\caption{Layout and orientation of the thirty-six DEIMOS slitmasks across the disk of M33 for the TREX Survey. Each rectangle on the image represents the rough shape and size of a DEIMOS slitmask (approximately 16\arcmin\ by 4\arcmin). The cyan bricks were observed in 2016, the green in 2018 and 2019, and the red in 2020. Because of the high density of stars in the central regions, we targeted different stars at the same slitmask positions in 2018 and 2019. The 2019 slitmask placements are the top two green slitmasks. The ellipse represents the approximate location of a break in the exponential surface density profile of the disk at $\sim 36$\arcmin\ \citep{ferguson2007, barker2011}. The background image of M33 is from the Digital Sky Survey (DSS). In this orientation, north is up, and east is to the left.}
\label{fig:m33_masks}
\end{figure}
\begin{deluxetable*}{c|c|c|c|c|c|c|c|c}
\rotate
\centering
\tablehead{Name & Center (J2000) & Mask PA ($^\circ$) & Exposure Time (min) & N Targets & Year & Grating (l mm$^{-1}$) & Photometry Source & Primary Targets}
\startdata
M33D2A & 1:33:47.95 30:27:18.8 & 110 & 76 & 246 & 2016 & 600 & Archival HST + CFHT & RGB \\
M33D2B & 1:33:47.95 30:27:18.8 & 110 & 88 & 244 & 2016 & 600 & Archival HST + CFHT & RGB \\
M33D3A & 1:33:22.21 30:22:50.3 & 310 & 72 & 250 & 2016 & 600 & Archival HST + CFHT & RGB \\
M33D3B & 1:33:22.21 30:22:50.3 & 330 & 70 & 252 & 2016 & 600 & Archival HST + CFHT & RGB \\
M33D3D & 1:33:22.21 30:22:50.3 & 310 & 70 & 244 & 2016 & 600 & Archival HST + CFHT & RGB \\
M33D4A & 1:33:17.50 30:16:56.9 & 240 & 70 & 239 & 2016 & 600 & Archival HST + CFHT & RGB \\
M33D4B & 1:33:17.50 30:16:56.9 & 240 & 84 & 227 & 2016 & 600 & Archival HST + CFHT & RGB \\
M33D4C & 1:33:17.50 30:16:56.9 & 240 & 76 & 233 & 2016 & 600 & Archival HST + CFHT & RGB \\
M33MA1 & 1:35:16.71 30:28:25.4 & 100 & 76 & 225 & 2016 & 600 & Archival HST + CFHT & RGB \\
M33MA2 & 1:35:15.66 30:27:08.6 & 100 & 67 & 221 & 2016 & 600 & Archival HST + CFHT & RGB \\
A1M33P & 1:33:52.61 30:32:15.9 & 90 & 112 & 234 & 2018 & 600 & PHATTER + CFHT & range \\
A2M33P & 1:33:52.60 30:32:15.9 & 90 & 112 & 245 & 2018 & 600 & PHATTER + CFHT & range \\
B1M33P & 1:33:55.69 30:36:10.8 & 90 & 149.2* & 226 & 2018 & 600 & PHATTER + CFHT & range \\
B2M33P & 1:33:55.23 30:36:14.4 & 90 & 81.5 & 209 & 2018 & 600 & PHATTER + CFHT & range \\
C1M33P & 1:33:56.46 30:40:10.4 & 90 & 120 & 208 & 2018 & 600 & PHATTER + CFHT & range \\
C2M33P & 1:33:56.46 30:40:10.4 & 90 & 110 & 200 & 2018 & 600 & PHATTER + CFHT & range \\
D1M33P & 1:34:02.73 30:44:11.0 & 90 & 132.5 & 220 & 2018 & 600 & PHATTER + CFHT & range \\
D2M33P & 1:34:02.73 30:44:11.0 & 90 & 110 & 213 & 2018 & 600 & PHATTER + CFHT & range \\
E1M33P & 1:34:07.06 30:48:07.2 & 90 & 120 & 228 & 2018 & 600 & PHATTER + CFHT & range\\
E2M33P & 1:34:07.06 30:48:07.2 & 90 & 120 & 224 & 2018 & 600 & PHATTER + CFHT & range\\
K1M33P & 1:33:55.69 30:33:31.8 & 90 & 117.7 & 250 & 2018 & 600 & PHATTER + CFHT & range \\
D1M33R & 1:34:02.73 30:44:11.0 & 90 & 100 & 240 & 2019 & 1200G & PHATTER + CFHT & RGB \\
D2M33R & 1:34:02.73 30:44:11.0 & 90 & 193* & 226 & 2019 & 1200G & PHATTER + CFHT & RGB \\
E1M33R19 & 1:34:07.06 30:48:07.2 & 90 & 62.5 & 201 & 2019 & 600 & PHATTER + CFHT & RGB \\
E2M33R & 1:34:07.06 30:48:07.2 & 90 & 100 & 239 & 2019 & 1200G & PHATTER + CFHT & RGB \\
pTE1 & 1:35:00.02 30:36:01.6 & 100 & 108 & 200 & 2020 & 1200G & PAndAS & RGB \\
pTN1a & 1:34:51.22 31:13:10.7 & 100 & 120 & 194 & 2020 & 1200G & PAndAS & RGB \\
pTN1b & 1:34:51.22 31:13:10.7 & 22.5 & 120 & 174 & 2020 & 1200G & PAndAS & RGB \\
pTN2a & 1:34:21.72 30:57:46.6 & 22.5 & 180* & 218 & 2020 & 1200G & PAndAS & RGB \\
pTN2b & 1:34:21.72 30:57:46.6 & 22.5 & 108 & 214 & 2020 & 1200G & PAndAS & RGB \\
pTN3 & 1:34:04.46 31:17:45.8 & 90 & 120 & 143 & 2020 & 1200G & PAndAS & RGB \\
pTN4 & 1:33:59.33 31:13:43.0 & 90 & 129.2 & 166 & 2020 & 1200G & PAndAS & RGB \\
pTN5 & 1:35:41.37 31:11:27.2 & 90 & 108 & 147 & 2020 & 1200G & PAndAS & RGB \\
pTS1 & 1:32:59.31 30:09:19.0 & 22.5 & 108 & 183 & 2020 & 1200G & PAndAS & RGB \\
pTS2 & 1:32:05.42 30:05:53.2 & 90 & 71 & 137 & 2020 & 1200G & PAndAS & RGB \\
pTS3 & 1:33:44.32 30:06:35.3 & 90 & 115 & 164 & 2020 & 1200G & PAndAS & RGB \\
\enddata
\caption{Information for the 36 DEIMOS slitmasks that make up the TREX Survey. The position angle (PA) of the long axis of the slitmask is measured counterclockwise from north. Exposure times marked with an asterisk do not represent the effective exposure time and include exposures with bad seeing. For primary targets, "range" indicates main sequence (MS), helium burning (HeB), asymptotic giant branch (AGB), and red giant branch (RGB) stars. \label{tab:masks_info}}
\end{deluxetable*}
\par The DEIMOS spectra were reduced with the {\tt spec2d} and {\tt spec1d} programs \citep{cooper2012, newman2013}. This software has been adapted from its original use for the Sloan Digital Sky Survey to be used on DEIMOS spectroscopy. The resulting one-dimensional spectra were flat-fielded and sky subtracted and then cross correlated against stellar template spectra to measure the line-of-sight velocity of the target star \citep{simon2007}. The velocity measurements were confirmed visually using the {\tt zspec} software. At this step, each measurement is given a quality rating, and rare stars and MW foreground stars are identified (more details below). We then shifted the velocities to the heliocentric frame.
\par We account for possible miscentering of the star in the slit width direction, which causes a systematic wavelength shift. To do so, we calculated the wavelength shift of the atmospheric absorption line at 7600 \AA. We call this shift the A-band correction and applied it to the measured velocity of the star. We found that the A-band correction varies across the mask depending on the slit's position along the length of the mask, possibly because of a slight positional and/or rotational misalignment of the mask in the sky. To account for the spatial variation, we fitted a polynomial to the A-band velocity as a function of mask position for the stars with the best quality spectra. The polynomial was then applied to all stars to calculate the A-band correction based on the stars' positions along the mask. A typical A-band correction is $\sim -1.3$\,km~s$^{-1}$\ and varies by $\sim 7$\,km~s$^{-1}$\ across a mask.
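A simplified sketch of this correction follows, using synthetic data and an assumed quadratic fit (the actual polynomial degree and measurements are not specified in the text):

```python
import numpy as np

# Sketch of the spatially varying A-band correction: fit a polynomial to the
# A-band velocity shift vs. position along the mask for high-quality stars,
# then evaluate it at every star's position and subtract it.
rng = np.random.default_rng(0)
x_good = rng.uniform(0.0, 16.0, 40)                       # arcmin along mask
v_aband = -1.3 + 0.4 * (x_good - 8.0) / 8.0 \
          + rng.normal(0.0, 0.5, 40)                      # measured shifts, km/s

coeffs = np.polyfit(x_good, v_aband, deg=2)               # fit on best spectra
correction = np.polyval(coeffs, x_good)                   # evaluate per star

v_corrected = v_aband - correction                        # apply the correction
```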
\par The systematic uncertainties for the old stars observed with the 600 line mm$^{-1}$ and 1200 line mm$^{-1}$ gratings were calculated as in \cite{simon2007, Collins2011}, giving 5.6 \,km~s$^{-1}$ for the 600 line mm$^{-1}$ grating and 2.2 \,km~s$^{-1}$ for the 1200 line mm$^{-1}$ grating. We also estimate random uncertainties, derived from duplicate velocity cross correlation measurements of RGB stars (1.65 \,km~s$^{-1}$ for the 600 line mm$^{-1}$ grating and 1.85 \,km~s$^{-1}$ for the 1200 line mm$^{-1}$ grating). The final error is the result of adding the estimated random uncertainties to the systematic uncertainties in quadrature. We do not yet have enough duplicate observations of young and intermediate age stars to calculate an estimate of their systematic uncertainty. Initial analysis of the duplicate young stars suggests that a typical velocity measurement error for these stars is $\sim 12$ \,km~s$^{-1}$. For now, we take the velocity errors from the {\tt spec1d} pipeline \citep{cooper2012, newman2013} for the young and intermediate age stars. A typical velocity error measurement is assumed to be $\sim 6$ \,km~s$^{-1}$, but this is likely an underestimate of the true velocity uncertainty. We will rectify this in the future after obtaining a larger sample of repeat observations.
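A minimal sketch of the error budget described above, combining the random and systematic terms in quadrature (the dictionary layout and function name are illustrative; the per-grating values are those quoted in the text):

```python
import numpy as np

# km/s, per grating (lines per mm), values quoted in the text
SYSTEMATIC = {600: 5.6, 1200: 2.2}
RANDOM = {600: 1.65, 1200: 1.85}   # from duplicate RGB measurements

def velocity_error(grating):
    """Final per-star velocity error: random + systematic in quadrature."""
    return np.hypot(RANDOM[grating], SYSTEMATIC[grating])
```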
\par MW foreground stars are not identified during target selection. Instead, they are removed from our sample if there is Na\,{\smcap i} absorption present during visual inspection of their spectra, as this indicates the star is likely a dwarf star \citep{Gilbert_2006}. Once these visually classified foreground stars are removed, we compare the line-of-sight velocities of the remaining stars to a Besancon model \citep{robin2003,robin2014,robin2017,amores2017} at the location of the TREX Survey with a color and magnitude cut similar to the range of the targets, shown in Figure \ref{fig:cmds}. The MW foreground stars have radial velocities that peak at $-39$ \,km~s$^{-1}$, with $57\%$ of the distribution at $> -50$ \,km~s$^{-1}$. Only 18 stars, or $0.70\%$ of our {\it{final}} sample (described in Section \ref{sec:ages}), have line-of-sight velocities $> -50$ \,km~s$^{-1}$, so our study is largely free of MW contamination.
\par Targets are also removed from our sample if their spectra suggest the target is an extended source (i.e., a background galaxy) or if the quality of the spectrum is too poor to return a well measured velocity. We also eliminate stars with $|v_{\rm LOS}| > 500$\,km~s$^{-1}$\ or that are miscentered in the slit enough to require a correction on the order of $80$\,km~s$^{-1}$\ \citep{sohn2007, simon2007} based on the polynomial fit estimate. Stars with extreme velocities compared to M33's systemic velocity are unlikely to be members of M33 or to have properly measured velocities. With foreground stars, poor quality targets, and duplicate measurements removed, our spectroscopic dataset consists of $4118$ stars in M33. In this specific study, we further narrow our sample based on velocity measurement error (Section \ref{sec:velocities}), CMD placement (Section \ref{sec:ages}), and, for the older stars, the probability of belonging to the stellar disk component (Section \ref{sec:halo}). With the final sample of resolved spectroscopy, we can examine the line-of-sight velocity for individual stars across the disk in M33.
\subsection{Gas Data}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{CMD_panels.pdf}
\caption{Color-magnitude diagrams of the subsets of photometric catalogs used for target selection with the final stellar sample overplotted. The left panel shows stars with HST photometry in the F475W and F814W bands. Most of these stars are from the PHATTER survey; some are from archival HST images. The center panel shows stars selected from archival HST photometry with the F606W and F814W bands only. The right CMD shows stars selected from the CFHT/Megacam + MegaPipe and from the PAndAS catalogue. In each panel, the blue points represent the young stars, the orange points represent the intermediate age stars, the red squares represent the old stars, and the grey shows a Hess diagram version of the full photometric catalogs. The points outlined in black are young weak CN stars (blue) and intermediate age carbon stars (orange) with ages derived from spectroscopic analysis instead of CMD divisions; see Appendix \ref{sec:rare} for more details. We list the adopted average age for each bin in the legend. All magnitudes have been extinction corrected.}
\label{fig:cmds}
\end{figure*}
\par Unlike stars, which retain kinematic memories of past dynamical heating, dense gas cools comparatively rapidly and dissipates energy, leaving the gas as a low velocity dispersion tracer of the disk's gravitational potential. In this study, we use velocity measurements of H\,{\smcap i}, CO, and H$\alpha$ to make comparisons between the dynamics of the gas and stars in the disk of M33.
\par The H\,{\smcap i}\ measurements from the Very Large Array (VLA) are described in \cite{gratier2010} and have 5\arcsec--25\arcsec\ resolution (FWHM), depending on the specific pointing (see their Tables 2 and 3). The data are from archival VLA imaging obtained in 1997, 1998, and 2001. The resulting gas line-of-sight velocities have an RMS uncertainty of $\sim 1.3$\,km~s$^{-1}$.
\par The CO(2-1) data were observed using the Institute for Radio Astronomy in the Millimeter Range (IRAM) 30 meter antenna by \cite{gratier2010, druard2014}. The observations began in 2008 and build on those from \cite{gardan2007}. The angular resolution of the data is 12\arcsec\ with a spectral resolution of 2.6 \,km~s$^{-1}$\ (RMS).
\par The H$\alpha$ measurements were observed by \cite{kam2015} at Observatoire du Mont Megantic with a 1.6-m telescope and the Fabry-Perot interferometer in September 2012, producing an angular resolution of $\le 3$\arcsec\ and a typical velocity measurement uncertainty of $\sim 10$\,km~s$^{-1}$\ (FWHM).
\par Further details on the observations and data reduction are described in the references listed above. All of the gas velocity measurements have been shifted to the heliocentric frame. Each star corresponds to a single pixel of the gas maps, which allows us to make local and direct comparisons of the gas and stellar kinematics. In Section \ref{sec:smoothing}, we discuss how we locally average the stellar kinematics to better match the resolution of the gas imaging so that one star does not represent the stellar kinematics of an entire pixel in a gas map.
\par The extent of the H\,{\smcap i}\ dataset is vast, so we are able to pair almost all stars with the nearest (in projection) H\,{\smcap i}\ velocity measurement. The percentages of stars paired with H\,{\smcap i}\ measurements are $91\%, 79\%$, and $71\%$ for the young, intermediate age, and old stars, respectively. (We discuss the division of stars into age bins in Section \ref{sec:ages}.) The CO and H$\alpha$ velocity measurement maps have a smaller extent, so only stars with a projected radius of $\le 4$ kpc can be paired to a CO or H$\alpha$ measurement. For H$\alpha$, this includes $24\%, 12\%$, and $4\%$ of the young, intermediate age, and old stars, respectively. For the CO, it covers $34\%, 25\%$, and $12\%$ of the young, intermediate age, and old stars.
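The star-to-gas pairing described above can be sketched as a nearest-neighbor match in projected coordinates; the brute-force search, coordinate convention, and `max_sep` tolerance are illustrative assumptions, not survey parameters:

```python
import numpy as np

def pair_with_gas(star_xy, gas_xy, gas_v, max_sep=0.01):
    """Pair each star with the nearest (in projection) gas velocity pixel.
    Returns NaN where the nearest pixel is farther than max_sep, i.e. the
    star falls outside the gas map's footprint."""
    d2 = ((star_xy[:, None, :] - gas_xy[None, :, :])**2).sum(axis=2)
    idx = d2.argmin(axis=1)
    dist = np.sqrt(d2[np.arange(len(star_xy)), idx])
    return np.where(dist <= max_sep, gas_v[idx], np.nan)
```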
\section{Broad Age Groups}\label{sec:ages}
We divide stars loosely into three age groups based on average stellar age at the present day. First, we use color-magnitude diagrams (CMDs). Different regions of a CMD are dominated by stars of different masses and ages. For example, the MS-dominated region we target consists almost entirely of massive stars with short stellar lifetimes. Regions dominated by evolving AGB stars are populated by intermediate mass stars with present day ages older than the main sequence, but not as old as the targeted RGB stars, which occupy a region dominated by older low mass stars with present day ages $>2.5$ Gyr.
\par We can use stellar population models to estimate average ages for each CMD stellar region \citep{Williams2014, Williams2021}. In the rest of the paper, we will refer to stars in the AGB region as ``intermediate age," and stars in the RGB region as ``old." We combine MS, blue HeB, and red HeB stars into a single broad age group that we will refer to as ``young." Even though the RHeB stars are far to the red, they are put into the young group because we targeted high mass ones with short lifetimes. See Figure \ref{fig:cmds} for the approximate location of each stellar lifetime division for our sample.
\par After the CMD division, we re-categorize weak CN and carbon stars based on their spectroscopic information, regardless of their CMD location. Both the intermediate age carbon and young weak CN stars are identified using a combination of visual inspection and machine classification of stellar spectra; they are discussed in greater detail in Appendix \ref{sec:rare}. We assign weak CN stars to the young age group and carbon stars to the intermediate age group because the average ages of these stars are consistent with the young and intermediate age groups, respectively. We have marked these stars in Figure \ref{fig:cmds} with black outlines to distinguish them from the CMD divisions.
\par We assign each broad bin an average age using simulated CMDs: $\sim 80$ Myr for the young group; $\sim 1$ Gyr for the intermediate age stars; and $\sim 4$ Gyr for the old stars. These average ages come from \cite{Williams2021, smercina2022}, who compare the PHATTER targets to simulated CMDs using Padova isochrones \citep{marigo2017}. These age ranges are quite broad. The $16^{\rm th}$ to $84^{\rm th}$ percentile range in the young group is $\sim 20-180$ Myr, the intermediate age bin spans ages $\sim 0.56-2.2$ Gyr, and the old age bin spans $1.4-9$ Gyr. (See \cite{Williams_2009} for specific star formation histories of regions in M33.) Additionally, these age bins have some overlap and contamination due to the approximate nature of CMD divisions. However, the average ages for each bin are distinct enough to broadly study stellar kinematics as a function of age, which is the goal of this work. We compare the dynamics of these three broad groups and look for trends with stellar age.
\subsection{Removing Halo Contamination}\label{sec:halo}
\cite{Gilbert2021} provide evidence for the existence of a dynamically hot component in M33, using a subset of the spectroscopic dataset used in this paper. They do not find evidence for this component in their young stellar sample, made up of weak CN stars, which are best described by a single dynamically cold component. The stars of the hot component make up $\sim 22\%$ of the total old sample of the TREX Survey, which we correct for using the model described in \cite{Gilbert2021} to remove likely halo contaminants from the old disk population.
\begin{figure}[h!]
\centering
\includegraphics[scale=.7]{all_halo_map.pdf}
\caption{Map of the intermediate age and old stars color coded by probability of belonging to a dynamically cold component. The ellipse represents the approximate location of the disk break. The center of M33 is marked with a blue cross.}
\label{fig:halo_map}
\end{figure}
\par \cite{Gilbert2021} model the disk and halo assuming a tight kinematic connection to the H\,{\smcap i}\ disk. They compare the line-of-sight velocities of stars to the line-of-sight velocity of the H\,{\smcap i}\ at the same radius using the tilted ring model in \citet{kam2017}, rather than individual H\,{\smcap i}\ measurements.
They model the disk and halo as Gaussians in a transformed line-of-sight velocity space defined as the difference between a star's velocity and the calculated line-of-sight velocity of the disk or halo component at the star's disk location, assuming the fractional rotation speed for that component.
This allows each component to rotate at a fraction of the speed of the H\,{\smcap i}\ disk model. The best fit model from \cite{Gilbert2021} then returns a probability that a given star's kinematics belong to a dynamically cold component. Although the \citet{Gilbert2021} analysis focuses only on the old stars, we see preliminary evidence that the intermediate age population may host a similar component, and thus we apply the same model to remove candidate kinematically hot AGB stars. The model was run separately on the AGB stars utilizing the same model formalism and procedure used by \citet{Gilbert2021} for RGB stars. We use the probability from the model to keep all intermediate age and old stars with velocities that are at least $80\%$ likely to belong to the dynamically cold component, eliminating velocity outliers and producing a more pure disk-like population.
We assume all young stars are disk stars. Figure \ref{fig:halo_map} shows a map of the intermediate age and old stars color coded by probability their kinematics are consistent with a cold component.
\par Removing stars with disk probabilities below $80\%$ eliminates $\sim 14\%$ of the initial intermediate age bin and $\sim 23\%$ of the initial old age bin. For the old stars, the percentage of stars eliminated from the disk sample is consistent with the expected fraction of RGB halo stars from \cite{Gilbert2021}. Future work, utilizing an increased AGB sample, will characterize the kinematics of the AGB population as a whole and explore the nature of the AGB stars which have velocities well removed from the M33 disk velocity. In Sections \ref{sec:illustris} and \ref{sec:LG}, we explore the implications of not removing possible halo stars.
\par With the quality cuts and the elimination of old halo stars and intermediate age halo star candidates, our study consists of 952 young stars, 521 intermediate age stars, and 1088 old stars for a total of 2561 stars.
\section{Stellar Kinematics as a Function of Age}
We get line-of-sight velocities for the stars in our sample from the stellar spectroscopic observations. In this section, we describe how we use the line-of-sight velocities to calculate local line-of-sight velocity dispersion, construct rotation curves, and calculate asymmetric drift for each stellar age bin.
\label{sec:velocities}
\subsection{Local Velocity Smoothing}\label{sec:smoothing}
\begin{figure*}
\centering
\includegraphics[scale=.85]{M33_map_radius.pdf}
\caption{Line-of-sight velocity and velocity dispersion as a function of position for the three broad age bins. The top row shows the young stars. The middle represents the intermediate age population, and the bottom row shows the old population. For all rows, the color in the first column represents individual line-of-sight velocity. For the second column, color represents the smoothed line-of-sight velocity. The third column shows the local velocity dispersion. The last column shows the size of the radius of the smoothing circle used at that point. The smallest and largest smoothing circle used are also shown. The ellipse represents the approximate location of the disk break. The center of M33 is marked with a black cross. The inner disk ($r < 3.75$ kpc) has higher velocity dispersion for all age bins, and the young stars show an extended area of high dispersion.}
\label{fig:LOS_map}
\end{figure*}
We calculate the local velocity dispersion by examining the velocities of neighbors around each star. We start with a $50\arcsec$ radius aperture for selecting neighbors and then grow in radius by $5\arcsec$ until there are at least fifteen stars of the same broad age bin within the circle. The sizes of the circles used are illustrated in the last column of Figure \ref{fig:LOS_map}. For the young group, the median radius was $85\arcsec$, for the intermediate age group the median radius was $120\arcsec$, and for the old age group the median radius was $100\arcsec$, which is 0.35 kpc, 0.5 kpc, and 0.42 kpc, respectively, at the distance of M33.
If the circle reaches $300\arcsec$ and still does not contain fifteen members, we do not calculate a line-of-sight velocity average or velocity dispersion at that location; however, the velocity of the skipped star can still contribute to the analysis of its neighbors. This smoothing technique is similar to that used in \citet{Dorman2015}, \citet{Quirk2019}, and \citet{Quirk2020}. After smoothing, we have local velocity dispersion measurements centered on 879 young stars, 462 intermediate age stars, and 1053 old stars.
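The adaptive aperture described above can be sketched as follows (a hypothetical reimplementation; positions are projected offsets in arcsec, and only stars of the same age bin would be passed in):

```python
import numpy as np

def smoothing_radius(center, positions, r0=50.0, step=5.0,
                     rmax=300.0, nmin=15):
    """Grow a circular aperture from r0 in 5-arcsec steps until it holds
    at least nmin stars; give up (return None) beyond rmax."""
    d = np.hypot(positions[:, 0] - center[0], positions[:, 1] - center[1])
    r = r0
    while r <= rmax:
        if (d <= r).sum() >= nmin:
            return r
        r += step
    return None   # no local dispersion measured at this star
```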
\par The resulting velocity maps are shown in Figure \ref{fig:LOS_map}. The second column shows the locally averaged line-of-sight velocities for each age bin. The third column shows the local velocity dispersion, the calculation of which we describe below. The fourth column shows the size of the smoothing circle that was used for each center, along with the size of the smallest and largest smoothing circle used for that age bin.
\subsection{Velocity Dispersion}
We calculate the weighted mean of the line-of-sight velocities and the inverse variance weighted root mean square dispersion (RMSE) of the members (Equations \ref{eq:m} and \ref{eq:rmse}). In these two equations, the weights are normalized inverse variances of the velocity measurement errors ($\sigma_{\rm err, \it i}$ in the following equations), i.e. $w_i \propto \sigma_{\rm err, \it i}^{-2}$ with $\sum_{i=1}^{n} w_{i} = 1$. The velocity measurement error model is discussed in Section \ref{sec:spec}.
\begin{equation}\label{eq:m}
\overline{x} = \sum_{i=1}^{n} x_{i} \times w_{i}
\end{equation}
\begin{equation}\label{eq:rmse}
\sigma_{v} = \sqrt{\sum_{i=1}^{n} (x_{i} - \overline{x})^{2} \times w_{i}}
\end{equation}
We use the inverse variance weighted RMSE as the dispersion. We only consider stars of the same age bin in the averaging and dispersion calculation.
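Equations \ref{eq:m} and \ref{eq:rmse} translate directly into code; a minimal sketch:

```python
import numpy as np

def weighted_mean_dispersion(v, v_err):
    """Weighted mean and inverse variance weighted RMS dispersion of
    line-of-sight velocities, with weights normalized to sum to one."""
    w = 1.0 / np.asarray(v_err)**2
    w /= w.sum()                                # enforce sum(w_i) = 1
    mean = np.sum(v * w)                        # weighted mean
    sigma = np.sqrt(np.sum((v - mean)**2 * w))  # weighted RMS dispersion
    return mean, sigma
```

With equal measurement errors this reduces to the ordinary mean and population RMS.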
\par Along with local velocity dispersion, we compute the median velocity dispersion for each of the three age bins, which is reported in Table \ref{tab:LOS_disp}. We can compare these values to the global model fits from \citet{Gilbert2021}, who find a global velocity dispersion of $\sim 16$\,km~s$^{-1}$\ for the young stars and $\sim 21$\,km~s$^{-1}$\ for the old stars, similar to the median local velocity dispersions reported here.
\par We also show details of the velocity dispersion distributions in Figure \ref{fig:disp_box}. The left panel shows the median value, interquartile range, and outliers for the three age bins across the full extent of the TREX Survey. Overall, we find that velocity dispersion does not vary strongly with stellar age, as the median values of the three populations do not differ significantly. Furthermore, the median values of each age bin are relatively low, roughly twice the average dispersion of the H\,{\smcap i}\ \citep[$8$ \,km~s$^{-1}$]{chemin2020}. The low magnitude of the velocity dispersion and the lack of an increase with stellar age are at odds with expectations from simulations of slightly more massive disk galaxies \citep{martig2014} and observations of star clusters in M33 \citep{beasley2015}. However, these studies do not remove dynamically heated populations that are likely to belong to a halo, so it is not an exact comparison.
\par \citet{martig2014} find that the shape of the age-velocity dispersion relation for young and intermediate age stars is dependent on recent merger or other heating activity, whereas the velocity dispersion for the oldest stars is more dependent on birth kinematics. They also find that uncertainties in ages can obscure possible trends in the age-velocity dispersion relation. Since the age bins in this work are broad, we could be missing a more subtle trend, but this would not explain the low magnitudes.
\par While the medians of the distributions are similar, the distributions themselves are broad enough that they may be widened by trends with radius. In the right panel of Figure \ref{fig:disp_box}, we show the distributions of velocity dispersion for each age group broken into an inner ($r < 5$ kpc) and outer ($r > 5$ kpc) subgroup. For all age bins, the distribution of velocity dispersion shifts lower in the outer region than in the inner region, although the median velocity dispersion for the young stars is higher in the outer region. The number of outliers is higher in the inner region for the young and intermediate age stars.
\begin{table}[]
\centering
\begin{tabular}{c|c}
\hline
Age Group & Median $\sigma_{\rm LOS}$ (km~s$^{-1}$) \\
\hline
Young & $15.9_{-0.2}^{+0.3}$ \\
Intermediate Age & $15.2_{-0.2}^{+0.2}$ \\
Old & $16.5_{-0.1}^{+0.2}$ \\
\hline
\end{tabular}
\caption{Medians of weighted velocity dispersion for each broad age bin. The errors on the median values represent the difference between the $16^{\rm th}$ and $84^{\rm th}$ percentiles divided by $\sqrt{N}$, where $N$ is the number of stars.}
\label{tab:LOS_disp}
\end{table}
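The error bars in Table \ref{tab:LOS_disp} can be reproduced as sketched below, under our reading of the caption: asymmetric errors built from the median-to-percentile distances, each divided by $\sqrt{N}$ (this interpretation, and the function name, are assumptions):

```python
import numpy as np

def median_with_error(sigma_los):
    """Median velocity dispersion with asymmetric errors from the
    16th/84th percentile distances scaled by sqrt(N)."""
    x = np.asarray(sigma_los)
    med = np.median(x)
    p16, p84 = np.percentile(x, [16, 84])
    n = len(x)
    return med, (med - p16) / np.sqrt(n), (p84 - med) / np.sqrt(n)
```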
\begin{figure}[bp!]
\centering
\includegraphics[width=\columnwidth]{vel_disp_box.pdf}
\caption{Velocity dispersion distributions for the three age bins. The left panel shows the distributions for the full final population. The right panel shows each age bin divided into an inner ($r\ <\ 5$ kpc) subgroup and an outer subgroup ($r\ >\ 5$ kpc). For all boxplots, the shaded box represents the interquartile range, the horizontal line across the box represents the median, and the open circles show the outliers. The outliers are stars with a velocity dispersion at least 1.5 times the interquartile range beyond the first or third quartile, the distance marked by the whiskers. Velocity dispersion does not vary significantly with stellar age and is on average higher in the inner region.}
\label{fig:disp_box}
\end{figure}
\par We look more closely at velocity dispersion as a function of radius in Figure \ref{fig:disp_dist}, which shows the distributions of velocity dispersion and radius. There is a clear downward trend in the intermediate age and old star populations as one goes out to greater radii. This suggests the outer disk is dynamically cooler than the inner disk, which is consistent with the findings of \citet{Gilbert2021} and disk galaxies in general \citep[e.g.][]{bottema1993}. \citet{koch2018} find that the velocity dispersion of the H\,{\smcap i}\ decreases by $\sim 2$ \,km~s$^{-1}$ from the center of M33 to $r \sim 8$ kpc. For the young stars, any trend is less clear because of the extended area of high dispersion, which we examine below; there is, however, a slight trend of increasing velocity dispersion with radius.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{disp_dist.pdf}
\caption{Velocity dispersion as a function of deprojected radius for the young, intermediate age, and old populations. Median lines are also plotted. For the intermediate and old age bins, velocity dispersion is higher in the inner regions than in the outer regions. The old stars show one concentrated area of extreme velocity dispersion, while the young stars show an extended area of high dispersion, and the intermediate age stars show no high velocity dispersion. The area of extreme velocity dispersion for the old stars is overlapped with an area that also has extreme velocities for the young stars.}
\label{fig:disp_dist}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{young_disp.pdf}
\caption{Velocity dispersion as a function of position for the young stars. Panel a) shows the young stars overplotted on an image of M33 from the DSS. The color of the points corresponds to velocity dispersion. The smaller red points mark H\,{\smcap ii} regions \citep{hodge1999}. Panel b) shows a UV image of M33 from the Swift Observatory (image credit: NASA/Swift/Stefan Immler) with a yellow ellipse showing the approximate location of the high velocity dispersion. (The DSS map is slightly enlarged compared to the image of M33 on the right.) These locations do not have higher concentrations of UV emission than other parts of the disk.}
\label{fig:young_disp}
\end{figure*}
\par Although there is little trend of velocity dispersion with stellar age, there is an extended area of extreme velocity dispersion in the youngest group of stars. Figure \ref{fig:LOS_map} shows this extended area of high velocity dispersion in the young star population. We examined this anomalous region in detail to test whether the substructure is real or a velocity measurement artifact. First, we looked at young stars with extreme velocities that contribute to the high velocity dispersion in this extended region; their velocities are well measured and pass our quality cuts.
Second, this region is not dominated by the largest smoothing circles. Because of this, we think the areas of high velocity dispersion in the young stars are real and will investigate them further using duplicate velocity measurements in future work. A smaller portion of this same area is also coincident with high velocity dispersion for the old stars, which do have well characterized velocity measurement errors.
\par We also compare the location of the high velocity dispersion region to a UV map of M33 in Figure \ref{fig:young_disp}. The yellow ellipse in the right panel shows the approximate location of the extended area of high velocity dispersion with respect to the UV emission. Because there is ongoing star formation across the disk, there does not appear to be anything notable about the specific high dispersion area, and there is no correlation between stellar velocity dispersion and whether a location is UV dark or UV bright. The left panel shows velocity dispersion as a function of position compared to H\,{\smcap ii} regions from \citet{hodge1999}. The H\,{\smcap ii} regions likewise show no particular concentration around the area of high stellar velocity dispersion.
\par \citet{koch2018} examine the atomic ISM across the disk of M33 and find a filament of H\,{\smcap i}\ that extends to 8 kpc across the southern disk (see their Figure 18). While the filament overlaps the area of high velocity dispersion reported here, the young velocity dispersion is coincident with a void in the filament. It is further unlikely that the filament is related to the high stellar velocity dispersion because the line-of-sight velocities of the young stars would need to be blueshifted by $>30$ \,km~s$^{-1}$, which they are not. There are some localized bright areas of H\,{\smcap i}\ with broad line widths that are close to the area of high velocity dispersion shared by the young and old stars, but it is unclear if they are related to the high stellar dispersion.
\par It is possible that other phenomena are causing the high velocity dispersion in the young stars and small number of old stars. For example, there could be unmarked massive gas clouds or a significant number of stellar binaries. Additionally, this area could contain substructure from a relatively recent minor merger that lies in front of the disk. If M31 and M33 did have an interaction in the past, perhaps the interaction could have increased the dispersion of the H\,{\smcap i}\ and young stellar disk.
\subsection{Rotation Curves}
We use the weighted average line-of-sight velocities to calculate the rotation velocities of the stars, which we compare to the rotation velocities of gas tracers such as H\,{\smcap i}, CO, and H$\alpha$. To calculate the rotation velocity, we convert the line-of-sight velocity to a circular velocity using the tilted ring model described in \cite{kam2017}. \cite{kam2017} divide the H\,{\smcap i}\ disk of M33 into forty-nine annuli from $r=0\arcmin$ to $r=96\arcmin$ and measure the position angle (PA) and inclination ($i$) of each annulus, as tabulated in Table 4 of their paper. We interpolate this table to create thinner rings of width $0.2'$. We then calculate a star's deprojected distance from M33's center using the global PA and $i$ of M33. With that distance, we match the star/gas measurement to a tilted ring from the interpolated table and assign it the corresponding ring's PA and $i$. We recalculate the deprojected distance, and reassign the star/gas measurement to another ring if needed. We repeat this process twice before adopting the final PA and $i$ for the star/gas measurement.
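The iterative ring assignment described above can be sketched as follows; `rings` stands in for the interpolated \cite{kam2017} table, and the ($\alpha$, $\beta$) deprojection matches the coordinate definitions given later in this section. All names are illustrative:

```python
import numpy as np

def deproject(xi, eta, pa, incl):
    """In-plane galactocentric radius for sky offsets (xi, eta),
    given a ring's position angle and inclination (radians)."""
    alpha = eta * np.cos(pa) + xi * np.sin(pa)   # along major axis
    beta = xi * np.cos(pa) - eta * np.sin(pa)    # along minor axis
    return np.hypot(alpha, beta / np.cos(incl))

def assign_ring(xi, eta, rings, pa0, incl0):
    """Match a star/gas measurement to a tilted ring: deproject with a
    starting PA/i, pick the ring, adopt its PA/i, and repeat twice."""
    pa, incl = pa0, incl0
    for _ in range(3):   # initial assignment + two refinement passes
        r = deproject(xi, eta, pa, incl)
        k = np.minimum(np.searchsorted(rings['r_out'], r),
                       len(rings['r_out']) - 1)
        pa, incl = rings['pa'][k], rings['incl'][k]
    return k, pa, incl
```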
\begin{figure}[h!]
\centering
\includegraphics[scale=.8]{rc_full.pdf}
\caption{Rotation velocity as a function of deprojected radius. Rotation velocities are calculated with the tilted ring model in \cite{kam2017} (Equation \ref{eq:vrot}). The top panel shows the youngest age bin (light blue), the middle panel shows the intermediate age group (orange), and the bottom panel shows the old group (red). Each star has been paired with a H\,{\smcap i}\ velocity measurement along the same line-of-sight, and the gas is represented by the grey dots. The size of the points is proportional to the $\cos(\theta)$ factor in Equation \ref{eq:vrot} to illustrate the limitations of the equation around the minor axis. The solid (dotted) line shows the median rotation velocity for 0.5 kpc bins for the stars (H\,{\smcap i}). The deprojection effects around the minor axis cannot explain the full amount of scatter, especially for the gas.}
\label{fig:rc_full}
\end{figure}
\begin{equation}\label{eq:vrot}
v_{\rm rot} = \frac{v_{\rm LOS} - v_{\rm sys}}{\cos(\theta)\sin(i_{TR, \star})}
\end{equation}
\begin{figure*}
\centering
\includegraphics[scale=.74]{rc_inner.pdf}
\caption{Rotation velocity as a function of deprojected radius for the inner 4 kpc of the disk. Rotation velocities are calculated with the tilted ring model in \cite{kam2017} (Equation \ref{eq:vrot}). The top row shows the youngest age bin (light blue), the middle row shows the intermediate age group (orange), and the bottom row shows the old group (red). Each star has been paired with a H\,{\smcap i}\ (first column), CO (second column), and H$\alpha$ (third column) velocity measurement along the same line-of-sight, and the gas is represented by the grey dots. The size of the points is proportional to the $\cos(\theta)$ factor in Equation \ref{eq:vrot} to illustrate the limitations of the equation around the minor axis. The solid (dotted) line shows the median rotation velocity for 0.5 kpc bins for the stars (gas). The deprojection effects around the minor axis cannot explain the full amount of scatter, especially for the gas. The bar is believed to extend to 0.5 kpc \citep{Williams2021, smercina2022, lazzarini2022}.}
\label{fig:rc_inner}
\end{figure*}
The above equation projects a star onto a circular orbit. We use $v_{\rm sys}= -180$\,km~s$^{-1}$\ \citep{kam2017}. Here $\theta$ is the azimuthal angle in the plane of M33, and $i$ is the inclination that comes directly from matching a star/gas measurement to a tilted ring. The angle $\theta$ is calculated from $\tan(\theta) = \beta \times [\alpha \cos(i_{TR})]^{-1}$, where $\alpha = \eta \cos(\rm PA_{\it TR}) + \xi \sin(\rm PA_{\it TR})$ and $\beta = \xi \cos(\rm PA_{\it TR}) - \eta \sin(\rm PA_{\it TR})$. We use the assigned PA of the ring that a star/gas measurement lies in and the deprojected coordinates of the star/gas measurement centered on M33. Since each star is paired with a gas measurement, the star and the gas measurement share the same deprojected geometric parameters but can have different values for rotation velocities, as they have different line-of-sight velocities.
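Equation \ref{eq:vrot} and the angle definitions above can be combined into a short sketch (angles in radians; the systemic velocity is the value quoted in the text; `np.arctan2` resolves the quadrant of $\theta$):

```python
import numpy as np

V_SYS = -180.0   # km/s, systemic velocity of M33 (kam2017)

def v_rot(v_los, xi, eta, pa, incl):
    """Deproject a line-of-sight velocity onto a circular orbit.
    Diverges near the minor axis, where cos(theta) -> 0."""
    alpha = eta * np.cos(pa) + xi * np.sin(pa)
    beta = xi * np.cos(pa) - eta * np.sin(pa)
    theta = np.arctan2(beta, alpha * np.cos(incl))  # in-plane azimuth
    return (v_los - V_SYS) / (np.cos(theta) * np.sin(incl))
```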
\begin{figure}[tp!]
\centering
\includegraphics[width=\columnwidth]{ad_full.pdf}
\caption{Asymmetric drift distributions for the young stars (blue solid), intermediate age (orange hatched), and old star (red outline). AD is calculated with respect to the H\,{\smcap i}\ using $v_a = v_{\rm rot,\ gas} - v_{\rm rot,\star}$. The medians are marked by the vertical lines that run from the peak of the corresponding distribution to the top of the plot. There is no clear trend between AD and stellar age, except that the width of the distribution decreases with stellar age.}
\label{fig:ad_full}
\end{figure}
With the above equations, we construct a rotation curve for each of our broad stellar age bins. The three rotation curves are shown in Figure \ref{fig:rc_full}. These rotation curves show the rotation velocity as a function of deprojected radius for the three stellar age bins and for the H\,{\smcap i}\ along the same line-of-sight of each star for the full extent of the TREX Survey. We also plot the inner rotation curves compared to the CO and H$\alpha$ datasets, which do not extend beyond 4 kpc so are not shown in the full rotation curve. These inner rotation curves are plotted in Figure \ref{fig:rc_inner} for the inner 4 kpc for the three age bins and H\,{\smcap i}, CO, and H$\alpha$.
\par In both Figures \ref{fig:rc_full} and \ref{fig:rc_inner} it is clear that the rotation curves of both the stars and the gas show significant scatter. \cite{Quirk2019} demonstrate that the deprojection factor of the tilted ring model (the denominator) approaches zero along the minor axis, regardless of inclination, because of the $\cos(\theta)$ factor, which can explain some but not all of the scatter in the rotation curves. Other sources of scatter in the stellar rotation curves could be poorly measured velocities, particularly for the young stars, or disturbances from M33's bar \citep{Williams2021, smercina2022, lazzarini2022} and spiral arms. Scatter in the gas rotation curves could reflect the impact of star formation or turbulence in the ISM. The amount of scatter in the stellar rotation curves is largest for the youngest group and decreases with stellar age, which suggests M33's high star formation rate could be driving turbulence not only in the gas but also in the birth kinematics of stars born from that gas. This could also be causing some of the extreme velocities of the young stars.
\par To quantify the scatter, we have added median lines to each stellar and gas rotation curve and have made the marker size proportional to the $\cos(\theta)$ factor. The scatter cannot entirely be attributed to deprojection effects along the minor axis, especially for the gas, which appears to have the most scatter away from the minor axis. For the median lines, we used 0.5 kpc bins to remove likely outliers while preserving local discrepancies. Any velocity difference between the stellar and gas rotation curves is a visual representation of asymmetric drift and will be explored further in the next section.
\subsection{Asymmetric Drift}
We use the stellar and gas rotation curves in Figures \ref{fig:rc_full} and \ref{fig:rc_inner} to calculate the asymmetric drift (AD or $v_{a}$) of the three broad age bins. Often, AD is defined as the difference between the circular velocity derived from the potential of a galaxy and the rotation velocity of the stars \citep{Stromberg}. For the purpose of this study, we define AD to be the difference between the rotation velocity of the gas and that of the stars \citep{Quirk2019, Quirk2020}. This choice allows us to make local and empirical AD measurements without relying on models of the potential of M33.
\par AD measurements can be used to measure dynamical heating \citep{Quirk2019}, as gas, which is collisional, can fairly easily dissipate energy and maintain a low energy orbit \citep{Sellwood11999}, while stars retain a non-circular orbit if they have been perturbed onto an eccentric orbit \citep{Leaman2017, Sellwood2002}. In M31, comparing the observed AD to that in simulated IllustrisTNG-100 M31-like analogs has provided evidence that M31 experienced a relatively recent major merger \citep{Quirk2020}. In our own galaxy, AD is routinely used to correct the local standard of rest and, before Gaia, was used to predict rotation curves outside the solar neighborhood \citep{Golubov2014, Huang2015}.
\par For every star and gas measurement pair, we calculate an AD value, $v_{a}$, from their respective rotation velocities, using $v_a = v_{\rm rot,\ gas} - v_{\rm rot,\star}$. We make AD measurements with respect to H\,{\smcap i}, CO, and H$\alpha$. (This equation is in rotation velocity space, whereas the offset used in \cite{Gilbert2021} for halo star identification is in line-of-sight space.) The CO and H$\alpha$ measurements only exist for the inner $\sim4$ kpc, so we calculate AD in the inner regions with respect to all three kinds of gas but only with respect to H\,{\smcap i}\ for the full survey extent (out to $\sim 11$ kpc). The distributions of these values are plotted in Figures \ref{fig:ad_full} and \ref{fig:ad_inner}, and the median values, widths of the distributions, and percentages of outliers are listed in Tables \ref{tab:ad_HI} and \ref{tab:ad_inner}. The outliers are stars with AD values at least $1.5\sigma$ from the first or third quartile.
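As a minimal sketch of these statistics (Python with synthetic velocities; the exact outlier thresholding, at least $1.5\sigma$ beyond the first or third quartile, is our reading of the description in the text):

```python
import numpy as np

def ad_stats(v_gas, v_star):
    """Median AD, its error, and the outlier percentage for one age bin.

    Implements v_a = v_rot,gas - v_rot,star; the error on the median is
    (84th - 16th percentile) / sqrt(N).  The outlier rule (at least
    1.5*sigma beyond the first or third quartile) follows the text; the
    exact thresholding here is an assumption.
    """
    v_a = np.asarray(v_gas) - np.asarray(v_star)
    p16, q1, med, q3, p84 = np.percentile(v_a, [16, 25, 50, 75, 84])
    med_err = (p84 - p16) / np.sqrt(v_a.size)
    sigma = np.std(v_a)  # width of the distribution
    outlier = (v_a < q1 - 1.5 * sigma) | (v_a > q3 + 1.5 * sigma)
    return med, med_err, 100.0 * outlier.mean()

# Toy example: gas leading the stars by ~15 km/s on average
rng = np.random.default_rng(0)
v_star = rng.normal(90.0, 20.0, 1000)
v_gas = v_star + rng.normal(15.0, 5.0, 1000)
med, err, pct = ad_stats(v_gas, v_star)
```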
\begin{table}
\centering
\begin{tabular}{c|c|c|c}
\hline
Age Group & AD w.r.t H\,{\smcap i}\ & Width & Outliers \\
& ($\rm kms^{-1}$) & ($\rm kms^{-1}$) & \% \\
\hline
Young & $15.1_{-1.7}^{+1.6}$ & 73.3 & 3.2\\
Intermediate Age & $8.0_{-1.6}^{+1.7}$ & 68.2 & 5.8 \\
Old & $24.5_{-1.1}^{+0.9}$ & 51.1 & 3.7\\
\hline
\end{tabular}
\caption{Statistics of the AD distributions for the three age bins with respect to H\,{\smcap i}\ for the full extent of the survey. The median AD, the width ($\sigma$) of the distribution, and the percentage of outliers in the distribution are shown. The errors on the median represent the difference between the $16^{\rm th}$ and $84^{\rm th}$ percentiles divided by $\sqrt{\rm N}$, where N is the number of stars. The outliers are stars with AD values at least $1.5\sigma$ from the first or third quartile.}
\label{tab:ad_HI}
\end{table}
\begin{table*}[]
\centering
\begin{tabular}{c|c|c|c|c|c|c|c|c|c}
\hline
Age Group & AD w.r.t H\,{\smcap i}\ & Width & Outliers & AD w.r.t CO & Width & Outliers & AD w.r.t H$\alpha$ & Width & Outliers\\
& ($\rm kms^{-1}$) & ($\rm kms^{-1}$) & \% & ($\rm kms^{-1}$) & ($\rm kms^{-1}$) & \% & ($\rm kms^{-1}$) & ($\rm kms^{-1}$) & \% \\
\hline
Young & $16.8_{-1.9}^{+1.8}$ & 72.0 & 2.7 & $18.9_{-3.0}^{+2.2}$ & 70.6 & 3.2 & $9.8_{-2.6}^{+2.4}$ & 64.0 & 3.9 \\
Intermediate Age & $5.9_{-1.7}^{+1.6}$ & 65.7 & 8.3 & $6.0_{-2.7}^{+2.4}$ & 49.0 & 3.6 & $-10.0_{-4.3}^{+8.5}$ & 72.7 & 3.6\\
Old & $23.3_{-1.4}^{+1.0}$ & 38.1 & 6.4 & $26.0_{-1.3}^{+1.3}$ & 17.4 & 0.9 & $-3.4_{-3.3}^{+3.5}$ & 21.8 & 0.0\\
\hline
\end{tabular}
\caption{Statistics of the AD distributions for the three age bins with respect to H\,{\smcap i}, H$\alpha$, and CO for the inner 4 kpc. The median AD, the width ($\sigma$) of the distribution, and the percentage of outliers in the distribution are shown. The errors on the median represent the difference between the $16^{\rm th}$ and $84^{\rm th}$ percentiles divided by $\sqrt{\rm N}$, where N is the number of stars. The outliers are stars with AD values at least $1.5\sigma$ from the first or third quartile.}
\label{tab:ad_inner}
\end{table*}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{ad_inner.pdf}
\caption{AD distributions for the young stars (blue solid), intermediate age (orange hatched), and old groups (red outline) for the inner 4 kpc. AD is calculated with respect to the H\,{\smcap i}\ (left panel), CO (middle panel), and H$\alpha$ (right panel) using $v_a = v_{\rm rot,\ gas} - v_{\rm rot,\star}$. The medians are marked by the vertical lines that run from the peak of the corresponding distribution to the top of the plot. On average, the young and old stars have similar median AD values, suggesting there is no trend between AD and stellar age. For the H$\alpha$, the gas is lagging the intermediate age and old stars, resulting in a negative AD, but both AD values are consistent with zero within 1$\sigma$.}
\label{fig:ad_inner}
\end{figure*}
For the H\,{\smcap i}\ and CO, the intermediate age stars show the lowest AD values, so there is no monotonic trend with stellar age. When compared to H$\alpha$, the intermediate age stars have the greatest offset between the rotation velocities of the gas and stars; however, this difference is barely larger than 1$\sigma$ and so is not robust. Furthermore, the median AD values are negative for the intermediate age and old stars, which means that on average, the H$\alpha$ is lagging behind the stars, and therefore the stars could be more settled on circular orbits than the H$\alpha$. However, both of these AD values are consistent with 0 within 1$\sigma$. We expect AD to increase with stellar age \citep[e.g.,][]{Sellwood2002, Westfall2007, Gaia_Co2021, Quirk2019}, so a high magnitude of AD in the young stars suggests something is perturbing the young stars on short timescales, causing their kinematics to diverge from those of the gas.
\par Overall, the old stars tend to have the greatest magnitude of AD, but their distributions of AD values are narrower. In the inner 4 kpc, the young and intermediate age stars have the largest widths, at times double that of the old stars for each of the three types of gas used in calculating AD, which is the opposite of what is expected for a canonical model of steady dynamical heating of stars. The old stars are thus more systematically offset from the kinematics of the nearby gas, whereas the young and intermediate age stars are more randomly offset. Over the full radial range, even though the young stars still have the widest AD distribution, the three groups of stars have more similar widths. This suggests the stars in our sample are most similar between radii of 4 and 11 kpc.
\par The percentage of outliers (stars with AD values that are $1.5\sigma$ or greater away from the $16^{\rm th}$ or $84^{\rm th}$ percentiles) in each distribution shows no clear pattern with median AD, width of the distribution, or stellar age. The intermediate age stars have the highest percentage of outliers in the distribution of AD values calculated with respect to H\,{\smcap i}\ in the inner 4 kpc.
\par It is important to note that the three gas datasets used come from different telescopes and were reduced using different methods. Subtle differences in the derivation of the gas velocity measurements, and not just physical phenomena, could therefore be causing variations in the AD median values. However, such differences would systematically affect the three age bins equally, which means they cannot be responsible for the observed lack of trend between age and asymmetric drift.
\par In summary, M33 does not show the expected increase in AD with stellar age because of the low AD of the intermediate age stars and the high AD value for the young stars. The distributions of AD get narrower with stellar age, showing that the younger stars are more randomly offset from the gas than the older stars.
\section{Comparison to IllustrisTNG}
\label{sec:illustris}
\par We analyze M33-like analogs from the IllustrisTNG50-1 cosmological simulation to comment on the uniqueness of the results presented above. The IllustrisTNG Project is a suite of N-body and hydrodynamic simulations from redshift $z=127$ to $z=0$. It uses the \texttt{AREPO} moving-mesh code \citep{marinacci18, naiman18, springel18, pillepich18, nelson18, springel10}. The simulations have a cosmological volume of (51.7 Mpc)$^3$ and have the following initial cosmological parameters from \cite{planck15}: $\Omega_m= 0.3089$, $\Omega_{\Lambda}=0.6911$, $\Omega_{b}=0.0486$, $\sigma_8=0.8159$, $n_s=0.9667$, and $h=0.6774$. For our analysis, we adopt a value of $h=0.704$ \citep[WMAP-9;][]{hinshaw13}.
\par We use data from the IllustrisTNG50-1 simulation (hereafter IllustrisTNG), the smallest but highest resolution simulation in the project that includes both dark matter and baryons. IllustrisTNG follows the evolution of 2160$^3$ dark matter particles and 2160$^3$ hydrodynamical cells, resulting in a baryonic mass resolution of $m_{\rm bary} = $ $8.1 \times 10^4 \, M_{\sun}$ and a dark matter particle mass resolution of $m_{\rm DM}=4.4 \times 10^5 \, M_{\sun}$. Halos and subhalos are identified using \texttt{SUBFIND} \citep{springel01, dolag09}.
\par We identify general M33-like analogs as halos that fit the following criteria: the subhalo is the central/primary subhalo at $z=0$ in a Friends-of-Friends (FoF) group with a virial mass of $M_{\rm vir}=1-2.5 \times 10^{11}M_{\sun}$ \citep[and references therein]{patel18}; the subhalo has a stellar mass of $M_{\star}=2.8-5.6 \times 10^{9} M_{\sun}$ \citep{guo2010}; and the subhalo's maximum dark matter particle circular velocity is $< 70$\,km~s$^{-1}$. These cuts produce a sample of 224 analog galaxies. We eliminated eight of these due to a lack of baryon particles across the inner 10 kpc, leaving 216 M33-like galaxy analogs.
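These selection cuts can be sketched as boolean masks over group-catalog quantities (a minimal Python illustration with made-up catalog values; in practice the quantities come from the IllustrisTNG \texttt{SUBFIND}/FoF catalogs):

```python
import numpy as np

# Hypothetical catalog arrays, for illustration only; real values would be
# loaded from the IllustrisTNG group catalogs.
m_vir = np.array([0.8e11, 1.5e11, 2.0e11, 3.0e11])   # FoF virial masses [Msun]
m_star = np.array([3.0e9, 4.0e9, 6.0e9, 3.5e9])      # subhalo stellar masses [Msun]
v_max = np.array([65.0, 68.0, 60.0, 66.0])           # max DM circular velocity [km/s]
is_central = np.array([True, True, True, True])      # central/primary subhalo at z=0

# The M33-analog cuts quoted in the text
mask = (
    is_central
    & (m_vir >= 1e11) & (m_vir <= 2.5e11)
    & (m_star >= 2.8e9) & (m_star <= 5.6e9)
    & (v_max < 70.0)
)
analog_ids = np.flatnonzero(mask)  # indices of subhalos passing every cut
```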
\begin{figure*}[]
\centering
\includegraphics[width=\textwidth]{illustris_rc.pdf}
\caption{Cumulative stellar rotation curves for the 216 M33-like analogs from the IllustrisTNG50-1 simulation and the observations presented in this analysis. Left panel: the grey line represents the gas cells, the blue dashed line represents star particles with ages $<1$ Gyr, the purple line represents star particles with ages 1-5 Gyr, the green line represents star particles with ages 5-10 Gyr, and the red dotted line represents star particles with ages $>10$ Gyr. The vertical bars represent the width of the distribution ($16^{\rm th} - 84^{\rm th}$ percentile) of rotation velocities in a given radial bin. The median rotation velocities of the 0.1 kpc radial bins are shown. Right panel: the magenta filled region represents star particles with ages 1-5 Gyr; the width of the filled region represents the width of the distribution ($16^{\rm th} - 84^{\rm th}$ percentile) of rotation velocities in a given radial bin. The observed M33 rotation curves for the three age bins are plotted on top. The observations are consistent with star particles with ages of 1-5 Gyr, which are similar in age to the observed sample.}
\label{fig:TNG_RC}
\end{figure*}
\par For each analog, we rotate the particles/gas cells to be face-on so that the ``line-of-sight'' direction is the $z$ component for all analogs. We exclude particles with $|z| > 10$ kpc or with a galactic radius of $> 20$ kpc to target star particles that likely belong to the disk. We then locally average the velocities to mimic the resolution of the observations, following Section 3 of \cite{Quirk2020}. We divide the stellar particles into four groups based on exact stellar age: $< 1$ Gyr, 1-5 Gyr, 5-10 Gyr, and $> 10$ Gyr. Unlike the rough age divisions used for the observational part of the analysis, these age bins use a star particle's exact age and span the full cosmological time. We use the gas cells to calculate AD for each age bin as well.
\par For each analog, we use the smoothed kinematics to construct an azimuthally averaged rotation curve using the equations below.
\begin{equation}
v_{rad} = \frac{x \cdot v_{x} + y \cdot v_{y}}{\sqrt{x^{2} + y^{2}}}
\end{equation}
\begin{equation}
v_{tot} = \sqrt{v_{x}^{2} + v_{y}^{2}}
\end{equation}
\begin{equation}
v_{rot} = v_{tan} = \sqrt{v_{tot}^{2} - v_{rad}^{2}}
\end{equation}
We limit the analysis to the 2D $xy$ plane to mimic the observed line-of-sight velocities and choose to azimuthally average the rotation curve for each analog. To do so, the particles/cells are placed into 0.1 kpc bins based on their distance from the center of the analog, and we take the median of the rotation velocities in each bin for the final rotation curve. This process creates a rotation curve similar to the median lines shown in Figures \ref{fig:rc_full} and \ref{fig:rc_inner}.
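The three equations above and the binning step can be combined into a short sketch (Python; the toy circular-orbit input is for illustration only):

```python
import numpy as np

def rotation_curve(x, y, vx, vy, bin_width=0.1, r_max=20.0):
    """Azimuthally averaged rotation curve in the 2D x-y plane.

    Implements v_rad = (x*vx + y*vy)/sqrt(x^2 + y^2),
    v_tot = sqrt(vx^2 + vy^2), v_rot = sqrt(v_tot^2 - v_rad^2),
    then takes the median v_rot in radial bins of width bin_width (kpc).
    """
    r = np.hypot(x, y)
    v_rad = (x * vx + y * vy) / r
    v_tot = np.hypot(vx, vy)
    # guard against tiny negative values from floating-point round-off
    v_rot = np.sqrt(np.maximum(v_tot**2 - v_rad**2, 0.0))
    edges = np.arange(0.0, r_max + bin_width, bin_width)
    idx = np.digitize(r, edges) - 1
    curve = np.full(edges.size - 1, np.nan)
    for i in range(edges.size - 1):
        sel = idx == i
        if sel.any():
            curve[i] = np.median(v_rot[sel])
    return edges, curve

# Toy check: pure circular motion at 100 km/s at a radius of ~5 kpc
phi = np.linspace(0, 2 * np.pi, 200, endpoint=False)
x, y = 5.0 * np.cos(phi), 5.0 * np.sin(phi)
vx, vy = -100.0 * np.sin(phi), 100.0 * np.cos(phi)
edges, curve = rotation_curve(x, y, vx, vy)
```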
\begin{figure}[bp!]
\centering
\includegraphics[width=\columnwidth]{comp_plot_whole_pop.pdf}
\caption{Cumulative median AD measurements for M33-like analogs and for observations of young, intermediate age, and old stars in the disk of M33. The shaded grey regions represent the median AD and the widths of the distribution ($16^{\rm th} - 84^{\rm th}$ percentile) of the AD for the analogs. The colored points represent the median AD for the young (blue), intermediate age (orange), and old (red) stars. The black points represent the analog with AD values closest to what is observed. For the youngest stars, the observed AD is significantly higher than that seen in the simulated analogs.}
\label{fig:TNG_comp}
\end{figure}
\par To calculate a single set of rotation curves and median AD values as a function of stellar age for the entire analog sample, we first calculate both the rotation curve and AD for each individual analog, as described above. Since the rotation curves are radially binned, the cumulative rotation curve for the entire sample of analogs is made by using the median rotation velocity for each stellar particle age group at each radial bin. We calculate AD values for the entire sample using this cumulative rotation curve. The set of cumulative rotation curves is shown in the left panel of Figure \ref{fig:TNG_RC}. Also shown in this figure are the median rotation curves for the observed M33 young stars, intermediate age stars, and old stars in the right panel.
\par The set of observed M33 rotation curves is consistent with the rotation curves for star particles with ages $1-5$ Gyr from the simulated analogs. However, these rotation curves are not a perfect comparison, as the observational one comes from a tilted ring model, and the simulated ones come from calculating the actual tangential velocity of the stellar particles.
\par The cumulative AD values for the simulated M33 analogs as a function of stellar particle age are shown in Figure \ref{fig:TNG_comp} along with the observational median M33 AD measurements. For the simulated analogs, AD increases with stellar age. A gradient in AD with stellar age is also seen in M31-like simulated analogs in the same simulation \citep{Quirk2020}. There is no such gradient in the observed AD in M33.
\par For the young and intermediate age stars, the observed AD is higher than what is seen in simulated M33-like analogs. The AD for the oldest stars is consistent within the observed age errors. While some of the difference between the observed M33 AD and that of the simulated analogs most likely comes from the differences in the way AD is calculated for each, such a difference would likely affect all of the age bins equally. Thus, the difference in AD between the observed intermediate age stars and the simulated analogs could be a nonphysical systematic offset. However, the observed young stars have an AD value that is more offset from the AD of the simulated analogs than that of the other two age groups. Thus, even if there is a systematic offset, the young observed stars still break the gradient in age and AD that is seen in the simulated M33-like analogs.
\par The high AD observed in the young stars in M33 suggests that some phenomenon is dynamically heating the young stars. Of the M33-like analogs, seven have young stars with an AD greater than 10 \,km~s$^{-1}$. Six have young star AD values between 12 and 19 \,km~s$^{-1}$, and one has an AD value of 33 \,km~s$^{-1}$ for the young stars. Of these seven, four have AD distributions that are similar for all stars 0-10 Gyr (the age bins that best represent the observed bins). For three analogs, the intermediate age stars (1-5 Gyr) have the lowest AD. Upon visual inspection of the star and gas particles used in these analogs, only one analog shows a slight disturbance in the disk, and only one has had any mergers (minor or major) in the past 4 Gyr. Thus, there is no immediate connection between these analogs that could point towards a cause of the high AD in the young stars. Figure \ref{fig:TNG_comp} shows the AD values for the analog that is closest to the observed AD values. This analog has not had any mergers within the past 4 Gyr and does not show a disturbance or asymmetry in its disk. Closer inspection of the stellar assembly history of this analog is future work.
\par The observed sample of stars and the sample of simulated star particles are meant to exclude possible halo stars. However, halo cuts for each are different, and those for the simulated star particles were not based on a kinematic model. Including all halo stars roughly doubles the AD value for the intermediate age and old bins (see Section \ref{sec:LG} and Table \ref{tab:AD_ill_comp}). It is possible that the gap between the AD in the observations and the simulated M33-like analogs is larger for the intermediate age and old stars, but even if the simulated AD halved for the older bins, the largest gap would still be for the young stars.
\section{Discussion: Contextualizing M33 in the Local Group}
\label{sec:LG}
\par Resolved stellar spectroscopy of individual stars allows us to study a galaxy in detail. For example, one can examine dynamics as a function of age, spatial position, and metallicity, which places tight constraints on a galaxy's origin and evolution. Studies of this kind have already proved valuable for the comparative evolution of the LG's two massive spirals, M31 and the MW. In spite of their similar masses ($M_{\rm vir} \approx\ 10^{12}\ M_{\odot}$), the disk kinematics of the MW and M31 differ significantly. The MW has thin disk stars, while the majority of disk stars in M31 belong to a thick disk or kicked up disk \citep{Dorman2013, Dalcanton2015}, with all the analysis pointing towards M31 having a more violent merger history than the MW \citep[e.g.][]{Tanaka2009, Mackey2019, Ferguson2016}.
\par \citet{Leaman2017} use data from the literature to compare the age-velocity dispersion relation of eight LG members, including the MW, M31, and M33 to model the evolving interstellar medium (ISM) turbulence as a galaxy experiences heating from various sources. In this section, we focus on the observed velocity dispersion of the three most massive LG members and add measurements for M33 that include a robust sample of individually resolved stars across the inner and outer disk of M33. We also present the first comparison of AD as a function of stellar age of LG members.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{LG_comp.pdf}
\caption{A comparison of velocity dispersion as a function of stellar age for the MW \citep[black triangles]{Nordstrom2004}, M31 \citep[black squares]{Dorman2015}, M33 star clusters \citep[open circles; data later used by \citet{Leaman2017}]{beasley2015}, and M33 (this work, stars). The open stars show the velocity dispersion for the intermediate age and old stars in M33 if halo candidates are not removed. For M31 and M33, the velocity dispersion is in the line-of-sight, whereas for the MW, the azimuthal component of the velocity dispersion is shown. The MW measurements are also confined to the solar neighborhood, while the measurements of M31 come from across the northern disk, and those of M33 come from across the entire disk. The MW and M31 show a clear increase in velocity dispersion with stellar age. M33 star clusters also show this increase, but the data from this work do not. Additionally, the magnitude of the velocity dispersion from this work is much lower than that seen in the M33 star clusters.}
\label{fig:LG_disp_comp}
\end{figure}
\par It is important to note that possible halo stars were not removed from the M31 velocity dispersion or AD analysis.
\citet{Dorman2012} find that $\sim 15\%$ of stars in the M31 sample are part of a dynamically hot component. \cite{Dorman2013} characterize the bulge, disk, and halo of M31 using the luminosity function of stars from the PHAT survey, line-of-sight velocities of RGB stars from the SPLASH survey, and surface brightness profiles. Using these criteria, \citet{Dorman2013} find that the hot component in the M31 sample is best described by a disk-like surface brightness profile and luminosity function but spheroid-like kinematics, suggesting that $\sim 30\%$ of the dynamically hot stars belong to a kicked up disk and $\sim 10\%$ of the M31 sample is halo contamination. Since \citet{Dorman2013} did not fit for both a disk and a kicked up disk along with a halo component, it is possible that the halo contamination is lower.
\par The MW data presented here consist of nearby F and G dwarf stars in the Solar Neighborhood that were targeted using Hipparcos parallaxes as part of the Geneva-Copenhagen Survey \citep{Nordstrom2004}. \citet{Nordstrom2004} use kinematics to identify thick disk stars and distances to target nearby stars but do not formally remove possible halo stars. The MW halo fraction in the Solar Neighborhood is very small \citep[$0.15\%$;][]{du2018}, so while the sample may contain a few halo stars, it is unlikely that these bias the velocity dispersion measurements significantly.
\par It was important for this work to remove possible M33 halo stars because M33's halo is centrally concentrated and dense, so there is more overlap between the sightlines of halo and disk stars, giving a larger halo fraction among the RGB stars in M33. For the sake of comparison, though, we show the median velocity dispersion and AD values for M33 if halo stars are not removed. We also acknowledge that there is not necessarily a clear distinction between a galaxy's halo and its kicked up disk, even if they formed via different mechanisms, and that differentiating between a disk, a kicked up disk, and a halo can be somewhat arbitrary.
\par Figure \ref{fig:LG_disp_comp} shows velocity dispersion as a function of stellar age for the MW solar neighborhood \citep{Nordstrom2004, Holmberg2009}, the northern disk of M31 \citep{Dorman2015}, star clusters in M33 \citep{beasley2015}, and stars in the disk of M33 (this work). M31 has both a greater velocity dispersion \citep{Nordstrom2004, Holmberg2009, Sysoliatina2018, Dorman2015, Budanova2017} and a steeper gradient in the age-velocity dispersion relation than the MW and M33 \citep{Dorman2015, Bhattacharya_2019}. The median velocity dispersion values are significantly lower in M33, which is expected, as M33 is less massive and therefore has a more fragile disk.
\begin{table*}[]
\centering
\begin{tabular}{c|c|c|c}
\hline
Age Group & AD w.r.t H\,{\smcap i}\ in & AD in M33 & AD w.r.t H\,{\smcap i}\ in\\
& M33 ($\rm kms^{-1}$) & inc. halo ($\rm kms^{-1}$)& M31 ($\rm kms^{-1}$) \\
\hline
Young & $15.1_{-1.8}^{+1.6}$ & $15.1_{-1.8}^{+1.6}$ & $-8.2_{-0.72}^{+0.74}$ \\
Intermediate Age & $8.0_{-1.6}^{+1.7}$ & $20.6_{-1.8}^{+1.9}$ & $34.1_{-4.0}^{+4.4}$ \\
Old & $24.5_{-1.1}^{+0.9}$ & $42.0_{-1.2}^{+1.1}$ & $63.0_{-0.4}^{+0.59}$ \\
\hline
\end{tabular}
\caption{Median values of AD with respect to H\,{\smcap i}\ for stars in M33 with and without halo candidates removed (this work) and in M31 \citep{Quirk2019}.}
\label{tab:LG_comp_ad}\label{tab:AD_ill_comp}
\end{table*}
\par The velocity dispersion reported in this work is not consistent with that measured using star clusters in M33 \citep{beasley2015}. The star clusters show a higher magnitude, consistent with the MW Solar Neighborhood, and a gradient of velocity dispersion with stellar age. One potential explanation for the difference is that we remove stars with line-of-sight velocities that are consistent with a dynamically hot component. If we do not remove these stars, we do recover a sloped age-velocity dispersion relation and higher magnitudes, although not high enough to match the dispersion of the star clusters. It is possible that a larger fraction of the star clusters than of the individual stars belongs to the hot component.
\par It is unexpected that this work does not see a trend between velocity dispersion and age, as such a trend is predicted and is seen in the MW and M31 \citep{Leaman2017, beasley2015, Nordstrom2004, Dorman2015}. This lack of trend is what \citet{Leaman2017} calculate for dwarf spheroidals in the LG with masses $\sim 1000$ times less than M33's, not for a more massive galaxy like M33. Their models, which do not distinguish between galactic components, predict that M33 should have a velocity dispersion of $\sim 40$ \,km~s$^{-1}$ for stars with ages of several Gyr, which is also consistent with the observations of M33 star clusters \citep{beasley2015}. If we do not remove likely halo stars, the oldest M33 stars in this work have a velocity dispersion that approaches 40 \,km~s$^{-1}$. This demonstrates the effect that contaminating halo stars can have on a measured age-velocity dispersion relation. The velocity dispersions of the individual stars and of the clusters were also measured in two different ways for two different samples, which results in different systematic uncertainties in the intrinsic dispersion calculation and could contribute to the discrepancy.
\par The existence of a substantial kinematically hot halo component in M33's inner disk \citep{Gilbert2021} may suggest internal heating mechanisms, like gas flows from stellar feedback \citep{el-badry2016}, could have caused enough disk heating to substantially contribute to a stellar halo, which could also cause stellar clusters and individual stars that were born in the disk to be heated or kicked up into the halo component.
\par Additionally, because of their respective inclinations, the observed line-of-sight velocity dispersion of M33 includes a larger $z$ component than that of M31. \citet{martig2014} find that in more massive disks, the vertical component of the velocity dispersion is largely dependent on merger history, but these trends can be obscured when there are large uncertainties on age estimates. It is possible that our wide age bins, combined with the large influence of the vertical component on M33's line-of-sight velocity dispersion, are obscuring a subtle trend in the age-velocity dispersion relation, but this is unlikely to fully explain the low magnitude of the velocity dispersion for the intermediate age and old stars.
\par We also make the first comparisons of the AD as a function of stellar age observed in M31 and M33. There are no current measurements of AD as a function of stellar age in the MW or in other galaxies in the LG. Table \ref{tab:LG_comp_ad} compares the AD (with respect to H\,{\smcap i}) in M31 and M33. Like the velocity dispersion, M31 has greater values of AD and a steeper rise in AD as a function of age than observed in M33, suggesting that it has experienced several significant and ongoing heating events throughout its relatively recent history. If possible halo stars are not removed from the M33 dataset, there is a trend between velocity dispersion/AD and stellar age, but this trend disappears once the likely halo stars are removed. Furthermore, the AD values of the intermediate age and old stars roughly double when possible halo stars are not removed.
\par There are also measurements of AD from integral field units (IFUs): \cite{Martinsson2013} use IFUs to measure the AD with respect to ionized gas in face-on spiral galaxies and find that stars lag behind the ionized gas by $\sim 11\pm8\%$, which is consistent with our measurements: the young stars lag on average by $20\%$, the intermediate age stars by $9\%$, and the old stars by $18\%$. This amount of lag is similar to studies of AD in other inclined local galaxies \citep{Ciardullo2004,Herrmann2009,Westfall2007, Westfall2011} and in the MW \citep{Ratnatunga1997,Olling}.
\section{Summary and Conclusions}
\label{sec:summary}
In this work, we present the largest stellar spectroscopic survey of M33, the TREX Survey, along with an initial analysis of the stellar disk kinematics as a function of stellar age using only individually resolved stars, the first analysis of its kind in M33. Below we summarize our main findings and conclusions about this complex, yet low mass, galaxy.
\begin{itemize}
\item 1. The TREX survey consists of $\sim 4500$ stars with good quality spectra across the entire disk of M33, ranging from several evolutionary stages: massive MS and HeB stars, intermediate mass AGB stars, and low mass RGB stars. This work uses a subset (2561 spectra) of the full survey.
\item 2. We find that M33's stellar disk has an average velocity dispersion of $\sim 16$ \,km~s$^{-1}$, which is significantly lower than what is observed in the disk of MW and M31 \citep{Holmberg2009, Dorman2015} and lower than what is measured using star clusters \citep{beasley2015}. The average magnitude of AD is on the order of $\sim 16$, which is also lower than what is observed in M31.
\item 3. Velocity dispersion and AD do not increase with stellar age in the disk, which is unexpected. We highlight the importance of removing potential halo stars when measuring the age-velocity dispersion and age-AD relation.
\item 4. This analysis suggests that M33 has experienced a significantly different dynamical heating history than M31 and the MW, which may have been dominated by internal heating mechanisms rather than external ones.
\item 5. The young stars are as dynamically hot as the older stars in the stellar disk component. These young stars also have a wider distribution of AD values than the old disk stars, and they are dynamically hotter than simulated M33-like analogs predict. Possible mechanisms for the heating of the young stars include turbulence from ongoing star formation or scattering from giant molecular clouds. It is also possible that these stars are remnants of a relatively recent minor merger or other galaxy interaction and lie in front of the disk.
\end{itemize}
\section*{Acknowledgements}
The authors would like to thank everyone who contributed to the general public's safety, health, and well being during the ongoing COVID-19 pandemic. A very large thank you to all of the essential workers and medical professionals. Thank you also to the folks at Keck who worked to make PJ observing happen.
The authors recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain.
Thank you to Bruno Villasenor, without whom, our code would have been too slow. ACNQ and PG are grateful to the Sulphurous AA Overspray project for their generous support.
We would like to thank the anonymous referee for improving the clarity of this paper.
Support for this work was provided by NSF grants AST-1909066 (KMG), AST-1909759 (PG), DGE-1842400 (ACNQ), GO-14610 (BW), and ANID/FONDECYT Regular Project 1210992 (LC). The analysis pipeline used to reduce the DEIMOS data was developed at UC Berkeley with support from NSF grant AST-0071048. \\
\noindent {\it Facilities:} Keck:II(DEIMOS), CFHT(MegaCam), HST(ACS)\\
\noindent {\it Software:} This research made use of \texttt{astropy} \citep{astropy2013, astropy2018}, \texttt{matplotlib} \citep{Hunter:2007}, \texttt{numpy} \citep{numpy}, \texttt{scipy} \citep{scipy}, \texttt{scikit-learn} \citep{scikit-learn}, and \texttt{Illustris Python} \citep{nelson15}. \\
\noindent {\it Data Availability Statement:} The data used in this article is from the PAndAS catalogue, which is available to download at: http://www.cadc-ccda.hia-iha.nrc-cnrc.gc.ca/en/community/pandas/query.html. The other data in this work come from sources available on the Keck and HST archive or are not publicly available. \\
| 24,229 |
\section{Introduction}
\par Due to the advances of trigger-action platforms (e.g. IFTTT \cite{IFTTT2020}) in the IoT domain, IoT networks have become more vulnerable to malicious event injection attacks. Since IoT devices create a chain of interactions that maintains functional dependencies between entities and actions \cite{Celik2019b} \cite{Alam2021}, adversaries can remotely inject malicious events somewhere in the interaction chain using a ghost device and activate a critical action by exploiting an autonomous trigger-action scenario. For instance, an adversary can inject a fake thermometer reading of 110\textdegree F into the chain to trigger a critical \textit{window opening} action.
\par A number of research efforts in the existing literature attempt to address the vulnerabilities caused by trigger-actions in an IoT network. Most are designed to validate security properties by identifying unsafe or insecure state transitions in the network \cite{Celik2019b} \cite{Nguyen-IoTSan-2018} \cite{Leonardo2018}. Another line of research addresses policy violations by checking sensitive user actions that may violate security policies \cite{Leonardo2018}. The work closest to our proposition is PEEVES \cite{sbirnbach2019}, where physical fingerprints of the devices are extracted using machine learning techniques to verify whether or not a certain event actually occurred.
\par In this paper, we propose IoTMonitor, a security system that adopts a Hidden Markov Model based approach to determine the optimal attack path an attacker may follow to implement a trigger-action attack, thus providing guidance for subsequent patching and security measures. Our system examines the physical changes that event occurrences produce in an IoT environment, discovers the probabilistic relation between physical evidence and the underlying events using the Baum-Welch algorithm \cite{Baum1967} \cite{Baum1968}, and discerns the optimal attack path using the Viterbi algorithm \cite{viterbi1967}. Once the optimal attack path is determined, IoTMonitor identifies the crucial nodes in the path that the attacker must compromise to carry out the attack. Such information can be used to prioritize security measures for IoT platforms.
\par The contributions of the paper can be summarized as follows:
\vspace{-5pt}
\begin{itemize}
\item We propose IoTMonitor, a Hidden Markov model based system that identifies the optimal attack path in a trigger-action IoT environment based on the probabilistic relation between actual IoT events and corresponding physical evidence;
\item We implement the Baum-Welch algorithm to estimate transition and emission probabilities, and Viterbi algorithm to discern the attack path;
\item We propose an algorithm to detect the crucial nodes in an extracted optimal attack path, thus providing guidelines for subsequent security measures;
\item We thoroughly evaluate the performance of IoTMonitor in detecting the optimal attack path and achieve high accuracy scores.
\end{itemize}
\vspace{-5pt}
\par The rest of the paper is organized as follows. In Section II, we define the attack landscape, discuss an attack scenario, and present the threat model. In Section III, we present IoTMonitor and discuss each of its components in detail. In Section IV, we present the evaluation results of our approach. Finally, in Section V, we conclude the paper by summarizing the methodology and outputs of our experiments and presenting future extensions.
\section{Attack Landscape}
\subsection{A Sample Attack Scenario}
\par Assume that Alice has a limited number of trigger-action enabled IoT devices including Smart Lock, Motion Detector, Accelerometer, Smart Light, Coffee Machine, and Smart Window. Alice controls each device through a mobile application from her cell phone. The devices communicate with each other through a hub. Since the platform supports trigger-action functionality, a device has the capability to trigger an action of another device.
\par Alice sets up the trigger events as follows. When she unlocks the smart lock of the front door and walks in, the motion sensor in the living room detects the motion and activates ``home-mode''. The home-mode activation event automatically turns on the smart light. When the light is turned on, the events associated with coffee
grinding and window opening are triggered.
When coffee is ready, Alice takes the coffee and enters her bedroom by opening and subsequently closing the door. The vibration generated by the opening and closing of the door is measured by an accelerometer. Thus, a chain of events is triggered by the initial action.
\par Now, Bob, an attacker, wants to compromise the smart window remotely when Alice is not at home and the front door is locked. His objective is to inject malicious events into the network to create a chain of interactions that eventually trigger the events associated with the window.
\subsection{Threat Model}
\par We assume that the attacker knows the locations of the IoT devices in the target system but he does not have physical access to the home. He can eavesdrop on wireless communication taking place between devices and the hub. His goal is to perform a trigger-action attack by injecting fake events into the IoT network through ghost applications. The ghost applications impersonate target devices just by mimicking their characteristics and functionalities. Therefore, he does not need to deploy any real IoT devices to conduct the attack.
\section{The IoTMonitor System}
\par Since the attacker exploits the trigger-action functionality of an IoT network to generate a chain of interactions by injecting fake events, we can thwart a trigger-action attack effectively if we can identify the optimal attack path the attacker may follow and perform security hardening on the crucial nodes in that path. In this work, we propose \textit{IoTMonitor}, a system that discerns optimal attack paths by analyzing physical evidence generated during the attack cycle, which is probabilistically correlated with the actual underlying events. IoTMonitor formulates the attack as a Hidden Markov Model (HMM) problem and solves it to determine the most likely sequence of events that occur during an attack cycle, and further identifies the crucial nodes in that sequence. In this paper, a \textit{node} represents an event occurring at a particular device.
\subsection{Our Assumption}
\par We assume that a configured trigger-action sequence contains $N$ events: $d_1, d_2,..., d_N$. The attacker injects fake events $\{d_i\}$ into the chain to achieve his final goal. Note that the attacker does not necessarily have to inject $d_1$, since he can wait for the occurrence of some real events to trigger the automatic chain occurrence of the remaining events required to implement the attack. When an event is triggered, it causes physical changes in the environment, which can be perceived as corresponding physical evidence $\{ph_i\}$ captured by an array of sensors and harnessed to verify the occurrence of that specific event. Note that some events may produce no observable evidence, while others may produce more than one piece of evidence.
\par Given this assumption, IoTMonitor models the trigger-action scenario as an HMM problem, where the physical evidence is visible to the analysis agent but the actual events remain hidden. The tasks of the agent are to determine the probabilistic relation between events and evidence, and to employ it to figure out the optimal attack path and diagnose the crucial nodes in that path.
\begin{figure}
\centering
\includegraphics[scale=0.35]{figures/IoTMonitor_Framework_Updated.png}
\caption{IoTMonitor System}
\label{fig:IoTMonitor_system}
\vspace{-10pt}
\end{figure}
\subsection{IoTMonitor}
The proposed IoTMonitor comprises three main components: 1) a state machine generator, 2) a sequence extractor, and 3) a crucial node detector. Fig.~\ref{fig:IoTMonitor_system} shows the architecture of IoTMonitor. We discuss each component in detail below.
\subsubsection{\textbf{State Machine Generator}}
\par When events are triggered in the environment and the deployed sensors capture corresponding evidence per event occurrence, this component constructs a \textit{state machine} representing how the state of the environment changes due to the exploitation of trigger-action functionalities across a series of time instances $t=1, 2,..., T$. \textit{States} capture useful information regarding the occurrence of the different events $d_i$ and the corresponding evidence $\{ph_i\}$.
\par The state machine accommodates two types of states: 1) \textit{true states}, which correspond to the actual event occurrences, and 2) \textit{observation states}, which represent the physical evidence. The true states remain hidden; the analysis agent leverages the observation states to infer the hidden true state sequence.
We define our state space as follows:
\begin{itemize}
\item true state, $x_i$ : state responding to the occurrence of $d_i$
\item observation state, $y_j$ : a subset of the physical evidence $\{ph_1, ph_2,..., ph_M\}$, which are emitted when the environment makes transition to a new state
\end{itemize}
\begin{figure}
\centering
\includegraphics[scale=0.35]{figures/State_Machine_Updated.png}
\caption{A Sample State Machine}
\label{fig:State_machine}
\vspace{-10pt}
\end{figure}
\par We assume that the state machine contains $N$ true states $X=\{x_1, x_2,..., x_N\}$ and $T$ observation states $Y = \{y_1, y_2,..., y_T\}$, where $X_t$ and $Y_t$ denote the true state and observation state at time $t$, respectively. Each $y_j$ contains a subset of the $M$ pieces of physical evidence $\{ph_1, ph_2,..., ph_M\}$. Note that each observation state $Y_t$ in our experiment is determined with the help of a \textit{sliding window} function, discussed in detail in Section IV.
\vspace{1pt}
\par When the environment is in $x_i$ at time instance $t$ and makes a transition to any $x_j \in X$ at time instance $t+1$, it changes its true state with a \textit{transition probability} $q_{ij} \in Q$, which can be defined as:
\vspace{-5pt}
\small
\begin{equation}
\label{define-state-transition-probability}
q_{ij} = Pr (X_{t+1}=x_j | X_t=x_i), \hspace{0.3cm} 1 \leq i,j \leq N
\end{equation}
\normalsize
Suppose, because of this state transition, the environment emits a new observation $y_k \in Y$ with an \textit{emission probability} $\mu_j(y_k) \in E$, which can be defined as:
\vspace{-2pt}
\small
\begin{equation}
\label{define-emission-probablility}
\begin{split}
\mu_j(y_k) = Pr (Y_{t+1}=y_k | X_{t+1}=x_j), \hspace{0.3cm} & 1 \leq j \leq N \\ & 1 \leq k \leq T
\end{split}
\end{equation}
\normalsize
\par In equation \eqref{define-state-transition-probability}, $Q = \{q_{ij}\}$ is termed the \textit{state transition probability distribution}, while $E = \{\mu_j(y_k)\}$ in equation \eqref{define-emission-probablility} is termed the \textit{emission probability distribution}.
\par To model the attack as an HMM, we also need an \textit{initial state distribution} $\sigma = \{\sigma_i\}$, defined as:
\vspace{-3pt}
\small
\begin{equation}
\begin{aligned}
\label{initial-state-probability}
\sigma_i = Pr(X_1 = x_i), \hspace{0.3cm} 1 \leq i \leq N
\end{aligned}
\end{equation}
\normalsize
Here, $\sigma_i$ is the probability that the system starts in state $x_i$ at time instance $t=1$.
\par Combining the five elements above, IoTMonitor models the trigger-action attack as the HMM problem $\big \langle N, M, Q, E, \sigma \big \rangle$ and solves it to determine the optimal attack path given a sequence of observation states. IoTMonitor also maintains a parameter $\theta = (\sigma, Q, E)$, called the \textit{current model} of the HMM.
Figure \ref{fig:State_machine} shows a sample state machine where \textit{blue} circles represent the true states and \textit{yellow} circles represent the observation states.
\textbf{Note:} For the rest of the paper, we sometimes shorten \textit{observation state} to \textit{observation}, and use the terms \textit{true state} and \textit{state} interchangeably.
\subsubsection{\textbf{Sequence Extractor}}
\par Once the trigger-action sequence is modeled as an HMM problem, IoTMonitor estimates the probability values and retrieves the optimal hidden state sequence from the observations. First, it estimates the converged state distribution, transition probabilities, and emission probabilities. Then, it determines the underlying state sequence that maximizes the probability of a given observation sequence. To accomplish these two tasks, the \textit{sequence extractor} employs two subcomponents: a) a probability estimator, and b) a sequence retriever, described below.
\par a) \textbf{Probability Estimator}: Given a complete observation sequence $\langle Y_1, Y_2,..., Y_T \rangle$, the goal of this component is to determine the following:
\vspace{-2pt}
\small
\begin{equation}
\begin{split}
\label{eqn1-baum-welch-general}
\theta^* & = \underset{\theta}{argmax} \ Pr(Y_1, Y_2,..., Y_T | \theta)
\end{split}
\end{equation}
\normalsize
\vspace{-20pt}
\par We use the Baum-Welch algorithm \cite{Baum1967} \cite{Baum1968} to iteratively update the current model $\theta$ and solve equation \eqref{eqn1-baum-welch-general}. It uses a \textit{forward-backward procedure} to find the maximum likelihood estimate of $\theta$ given a set of observations. We assume that each observation $Y_t$ is emitted by the environment at a discrete time instance $t=1,2,...,T$.
\vspace{3pt}
\par \textbf{Forward-backward Procedure}: Let $\alpha_t(i)$ and $\beta_t(i)$ be the probabilities of observing the sequences $ \langle Y_1, Y_2,..., Y_t \rangle$ and $\langle Y_{t+1}, Y_{t+2},..., Y_T \rangle$, respectively, while the system is in the true state $x_i$ at time $t$.
Then,
\vspace{-4pt}
\small
\begin{equation}
\begin{aligned}
\label{forward-backward-procedure}
\alpha_t(i) &= Pr(Y_1, Y_2, ..., Y_t, X_t = x_i | \theta) \\
\beta_t(i) &= Pr(Y_{t+1}, Y_{t+2}, ..., Y_T | X_t = x_i, \theta)
\end{aligned}
\end{equation}
\normalsize
\par We can compute $\alpha_t(i)$ and $\beta_t(i)$ using the following steps:
1. Initialization
\small
\begin{equation}
\begin{aligned}
\label{forward-backward-procedure-initialization}
\alpha_1(i) &= \sigma_i \mu_i(y_1), \hspace{0.3cm} 1 \leq i \leq N \\
\beta_T(i) &= 1, \hspace{0.3cm} 1 \leq i \leq N
\end{aligned}
\end{equation}
\normalsize
\vspace{-5pt}
2. Induction
\vspace{-10pt}
\small
\begin{equation}
\begin{aligned}
\label{forward-backward-procedure-induction}
& \alpha_{t+1}(j) = \mu_j(y_{t+1}) \sum_{i=1}^N \alpha_t(i) q_{ij}, \hspace{0.2cm} 1 \leq t \leq T-1, \hspace{0.2cm} 1 \leq j \leq N \\
& \beta_t(i) = \sum_{j=1}^N q_{ij} \mu_j(y_{t+1}) \beta_{t+1}(j), \hspace{0.1cm} t = T-1, ...,2, 1, \hspace{0.2cm} 1 \leq i \leq N \\
\end{aligned}
\end{equation}
\normalsize
\par Together, these two steps are called the \textit{forward-backward procedure}; $\alpha_t(i)$ and $\beta_t(i)$ are termed the \textit{forward variable} and \textit{backward variable}, respectively.
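As a concrete illustration, the initialization and induction steps above translate directly into a few lines of NumPy. The sketch below is ours, not the authors' implementation; variable names mirror the notation ($\sigma$, $Q$, $E$, $\alpha$, $\beta$), and observations are assumed to be encoded as integer indices.

```python
import numpy as np

def forward_backward(sigma, Q, E, obs):
    """Forward-backward procedure: alpha_t(i) and beta_t(i) for an HMM.

    sigma: (N,) initial state distribution
    Q:     (N, N) transition probabilities q_ij
    E:     (N, K) emission probabilities mu_j(y_k)
    obs:   length-T list of observation indices
    """
    N, T = len(sigma), len(obs)
    alpha, beta = np.zeros((T, N)), np.zeros((T, N))
    # Initialization: alpha_1(i) = sigma_i * mu_i(y_1); beta_T(i) = 1
    alpha[0] = sigma * E[:, obs[0]]
    beta[-1] = 1.0
    # Induction: forward pass, then backward pass
    for t in range(1, T):
        alpha[t] = E[:, obs[t]] * (alpha[t - 1] @ Q)
    for t in range(T - 2, -1, -1):
        beta[t] = Q @ (E[:, obs[t + 1]] * beta[t + 1])
    return alpha, beta
```

A useful sanity check: for every $t$, $\sum_i \alpha_t(i)\beta_t(i)$ equals the total likelihood of the observation sequence.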
\par Now, suppose $\delta_t(i)$ is the probability of the system being in the true state $x_i$ at time instance $t$ given the complete observation sequence $\langle Y_1, Y_2,..., Y_T \rangle$ and the current model $\theta$. We can define this probability in terms of the forward and backward variables $\alpha_t(i)$ and $\beta_t(i)$, i.e.,
\small
\begin{equation}
\label{eqn1-update-delta}
\begin{split}
\delta_t(i) & = Pr(X_t = x_i | Y_1, Y_2, ..., Y_T, \theta) \\
& = \frac{Pr(X_t = x_i, Y_1, Y_2, ..., Y_T | \theta)}{Pr(Y_1, Y_2, ..., Y_T | \theta)} \\
& = \frac{\alpha_t(i)\beta_t(i)}{\sum_{j=1}^N \alpha_t(j)\beta_t(j)}
\end{split}
\end{equation}
\normalsize
\par Again, given the complete observation sequence $\langle Y_1, Y_2,..., Y_T \rangle$ and the current model $\theta$, suppose, $\xi_t(i,j)$ is the probability of the system being in the true states $x_i$ and $x_j$ at time instances $t$ and $t+1$, respectively. So,
\vspace{-3pt}
\small
\begin{equation}
\label{eqn2-update-xi}
\begin{split}
\xi_t(i,j) & = Pr(X_t = x_i, X_{t+1} = x_j | Y_1, Y_2, ..., Y_T, \theta) \\
& = \frac{Pr(X_t = x_i, X_{t+1} = x_j, Y_1, Y_2, ..., Y_T | \theta)}{Pr(Y_1, Y_2, ..., Y_T | \theta)} \\
& = \frac{\alpha_t(i) q_{ij} \beta_{t+1}(j) \mu_j(y_{t+1})}{\sum_{i=1}^N \sum_{j=1}^N \alpha_t(i) q_{ij} \beta_{t+1}(j) \mu_j(y_{t+1})}
\end{split}
\end{equation}
\normalsize
\par Now, we can update the initial state distribution $\Bar{\sigma}_i$, transition probability $\Bar{q}_{ij}$, and emission probability $\Bar{\mu}_j(y_k)$ using these two parameters $\delta_t(i)$ and $\xi_t(i,j)$. The state distribution can be updated as:
\small
\begin{equation}
\label{eqn1-update-final-state-distribution}
\Bar{\sigma}_i = \delta_1(i)
\vspace{-3pt}
\end{equation}
\normalsize
\par where $\delta_1(i)$ is the probability of the system being in the true state $x_i$ at time instance $t=1$.
\par To update the transition probabilities, we compute the ratio of \textit{the expected number of transitions from $x_i$ to $x_j$} (the numerator of equation \eqref{eqn2-update-final-transition-probabalities}) to \textit{the expected number of transitions out of $x_i$} (the denominator of equation \eqref{eqn2-update-final-transition-probabalities}).
\small
\begin{equation}
\label{eqn2-update-final-transition-probabalities}
\Bar{q}_{ij} = \frac{\sum_{t=1}^{T-1} \xi_t(i,j)}{\sum_{t=1}^{T-1} \delta_t(i)}
\end{equation}
\normalsize
\par To update the emission probabilities, we take the ratio of \textit{the expected number of times of being in state $x_j$ and observing $y_k$} (the numerator of equation \eqref{eqn3-update-final-emission probabilities}) to \textit{the expected number of times of being in state $x_j$} (the denominator of equation \eqref{eqn3-update-final-emission probabilities}).
\small
\vspace{-5pt}
\begin{equation}
\label{eqn3-update-final-emission probabilities}
\Bar{\mu}_j(k) = \frac{\sum_{t=1}^{T} 1_{(Y_t = y_k)} \delta_t(j)}{\sum_{t=1}^{T} \delta_t(j)}
\end{equation}
\normalsize
where,
\small
\begin{equation}
\label{eqn2-getting-observation_y_k}
1_{(Y_t = y_k)} = \left\{
\begin{array}{@{}ll@{}}
1, \hspace{0.5cm} \text{if } \hspace{0.1cm} Y_t = y_k \\
0, \hspace{0.5cm} \text{Otherwise} \\
\end{array}\right.
\end{equation}
\normalsize
\par The updated parameters $\Bar{\sigma} = \{\Bar{\sigma}_i\}$, $\Bar{Q} = \{ \Bar{q}_{ij} \}$, and $\Bar{E} = \{ \Bar{\mu}_j(y_k) \}$ now constitute the new model $\Bar{\theta} = (\Bar{\sigma}, \Bar{Q}, \Bar{E})$. We iterate equations \eqref{eqn1-update-final-state-distribution}, \eqref{eqn2-update-final-transition-probabalities}, and \eqref{eqn3-update-final-emission probabilities} until $\Bar{\theta} \approx \theta$. Convergence is guaranteed by Baum et al. \cite{Baum1968}: either 1) the initial model $\theta$ defines a critical point of the likelihood function, in which case $\Bar{\theta}=\theta$, or 2) $\Bar{\theta}$ explains the observation sequence $\langle Y_1, Y_2,..., Y_T \rangle$ better than $\theta$, i.e. $Pr(Y_1, Y_2,..., Y_T | \Bar{\theta}) > Pr(Y_1, Y_2,..., Y_T | \theta)$ \cite{Rabiner1989}.
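One full re-estimation step, combining the forward-backward recursions with the update equations above, can be sketched as follows. This is an illustrative NumPy version written by us (observations again encoded as integer indices), not the evaluated implementation.

```python
import numpy as np

def baum_welch_step(sigma, Q, E, obs):
    """One Baum-Welch re-estimation step: returns updated (sigma, Q, E)."""
    N, T = len(sigma), len(obs)
    # Forward and backward passes
    alpha, beta = np.zeros((T, N)), np.zeros((T, N))
    alpha[0] = sigma * E[:, obs[0]]
    beta[-1] = 1.0
    for t in range(1, T):
        alpha[t] = E[:, obs[t]] * (alpha[t - 1] @ Q)
    for t in range(T - 2, -1, -1):
        beta[t] = Q @ (E[:, obs[t + 1]] * beta[t + 1])
    lik = alpha[-1].sum()
    # delta_t(i): probability of being in state i at time t
    delta = alpha * beta / lik
    # xi_t(i,j): probability of a transition i -> j at time t
    xi = np.array([alpha[t][:, None] * Q
                   * (E[:, obs[t + 1]] * beta[t + 1])[None, :] / lik
                   for t in range(T - 1)])
    # Update rules
    new_sigma = delta[0]
    new_Q = xi.sum(axis=0) / delta[:-1].sum(axis=0)[:, None]
    new_E = np.zeros_like(E)
    for k in range(E.shape[1]):
        mask = np.array([o == k for o in obs])
        new_E[:, k] = delta[mask].sum(axis=0) / delta.sum(axis=0)
    return new_sigma, new_Q, new_E
```

Iterating this step yields a non-decreasing likelihood, in line with the convergence guarantee cited above.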
\BlankLine
\par b) \textbf{Sequence Retriever}: Once the probability estimator determines the converged HMM model $\theta^*$, it is the job of the \textit{Sequence Retriever} to extract the optimal sequence of hidden events using the Viterbi algorithm \cite{viterbi1967}. Given an observation sequence $\langle Y_1, Y_2,..., Y_t \rangle$ at time instance $t$ with $Y_t = y_k$, the goal is to determine the following:
\small
\begin{equation}
\label{viterbi-objective-eqn}
\begin{split}
\omega _t(i) &= \underset{x_1,..., x_{t-1}}{max} \Big \{ Pr(X_1 = x_1,..., X_t = x_i, Y_1,..., Y_t = y_k | \theta) \Big \} \\
& = \mu_i (y_k) \ \underset{1 \leq j \leq N}{max} \Big \{ \omega _{t-1}(j) \, q_{ji} \Big \}, \hspace{0.5cm} 2 \leq t \leq T, \hspace{0.2cm} 1 \leq i \leq N
\end{split}
\end{equation}
\normalsize
Here, $\omega _t(i)$ is the maximum, over all state sequences ending in $x_i$ at time $t$, of the joint probability of that state sequence and the observation sequence $\langle Y_1, Y_2,..., Y_t \rangle$.
\par Equation \eqref{viterbi-objective-eqn} can be solved recursively for $2 \leq t \leq T$, starting from $\omega _1(i) = \sigma_i \mu_i(y_1)$. The recursion stops after computing $\omega _T(i)$, where:
\small
\begin{equation}
\label{viterbi-omegha_final_instant}
\omega _T^* = \underset{1 \leq i \leq N}{max} \ \omega _T(i)
\end{equation}
\normalsize
\par To obtain the optimal hidden sequence, we must also trace the arguments that maximize equation \eqref{viterbi-objective-eqn} in each recursion step. To achieve that, we introduce a variable $\chi$ to hold the traces:
\small
\begin{equation}
\label{viterbi-trace-arguments}
\chi _t(i) = \underset{1 \leq j \leq N }{argmax} \Big \{ \omega _{t-1}(j) \, q_{ji} \Big \}, \hspace{0.3cm} 2 \leq t \leq T, \hspace{0.2cm} 1\leq i \leq N
\end{equation}
\normalsize
Note that $\chi _1(i) = 0$, since tracing starts at time instance $t=2$, once there is at least one previous state.
\par Once we have $\chi _T(i)$, all we need is backtracking through the traces to discern the optimal hidden sequence such as:
\small
\begin{equation}
\label{viterbi-backtracking}
\psi _t ^ * = \chi_{t+1} (\psi_{t+1} ^ *), \hspace{0.3cm} t = T-1, \dots, 2, 1
\end{equation}
\normalsize
Here, $\psi _T ^ * = \underset{1 \leq i \leq N}{argmax} \ \omega _T(i)$, and $\Upsilon = \{\psi_1 ^*, \psi_2 ^*,..., \psi_T ^*\}$ is the extracted optimal sequence. Note that each $\psi_t ^* \in \Upsilon$ represents a true state in $X$.
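The recursion, trace, and backtracking steps above map onto a short dynamic program. The sketch below is our illustration (names and integer-index encoding are assumptions, not the evaluated code):

```python
import numpy as np

def viterbi(sigma, Q, E, obs):
    """Viterbi decoding: most likely hidden state sequence for obs."""
    N, T = len(sigma), len(obs)
    omega = np.zeros((T, N))           # omega_t(i)
    chi = np.zeros((T, N), dtype=int)  # back-pointers chi_t(i)
    omega[0] = sigma * E[:, obs[0]]
    for t in range(1, T):
        # scores[j, i] = omega_{t-1}(j) * q_ji
        scores = omega[t - 1][:, None] * Q
        chi[t] = scores.argmax(axis=0)
        omega[t] = scores.max(axis=0) * E[:, obs[t]]
    # Backtracking from psi_T^* = argmax_i omega_T(i)
    path = [int(omega[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(chi[t][path[-1]]))
    return path[::-1]
```

On a toy two-state model where each state deterministically emits its own symbol, the decoded path simply follows the observations.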
\vspace{5pt}
\normalsize
\subsubsection{\textbf{Crucial Node Detector}}
\par After the \textit{sequence retriever} extracts the optimal hidden sequence $\Upsilon = \{\psi_1 ^*, \psi_2 ^*,..., \psi_T ^* \}$, the \textit{crucial node detector} applies Algorithm \ref{algo:crucial_node_detection} to detect the crucial events in the attack chain that the attacker must compromise to successfully implement the attack. The most frequently triggered events are defined as \textit{crucial events}.
\par If there are $p$ different extracted sequences $\Upsilon_1, \Upsilon_2,..., \Upsilon_p$ from $p$ different attempts, Algorithm \ref{algo:crucial_node_detection} first determines the \textit{longest common subsequence} $S_i$ between each $\Upsilon_i$ and the original sequence $X = \{x_1, x_2, ..., x_N\}$. It then computes a \textit{SCORE} value for each pair of consecutive states in the subsequence:
\small
\begin{equation}
\label{equation:SCORE_definition}
\begin{split}
& SCORE \Big[S_i[j],S_i[j+1] \Big] = \text{number of times a pair} \\
& \big \{ S_i[j],S_i[j+1] \big \} \text{ is present in the subsequence} \
\end{split}
\end{equation}
\normalsize
\vspace{-5pt}
\vspace{-5pt}
\begin{algorithm}[h]
\footnotesize
\caption{Crucial node detection algorithm}
\label{algo:crucial_node_detection}
\KwIn{$X, \Upsilon_1, \Upsilon_2, ...., \Upsilon_p$}
\KwOut{ Pairs of true states responding to the most frequently triggered events}
\BlankLine
\begin{algorithmic}[1]
\STATE $i \gets 1$
\WHILE{$i \leq p$}
\STATE $S_i \gets$ LCS between $X$ and $\Upsilon_i$ \tcp{\footnotesize LCS = Longest Common Subsequence}
\FOR{$j \gets 1$ \KwTo $(|S_i|-1)$}
\STATE $E[i, j] \gets \{S_i[j], S_i[j+1]\}$ \\
\IF{$E[i, j]$ not in $SCORE.Keys()$}
\STATE $SCORE[E[i,j]] \gets 1$ \\
\ELSE
\STATE $SCORE[E[i,j]] \gets SCORE[E[i,j]] + 1$
\ENDIF
\ENDFOR
\STATE $i \gets i + 1$
\ENDWHILE
\RETURN $\underset{E[i,j]}{argmax} \ (SCORE[E[i,j]])$
\end{algorithmic}
\end{algorithm}
\vspace{-5pt}
\par Finally, the algorithm updates the \textit{SCORE} values based on the presence of pairs in all subsequences and retrieves the pairs with the maximum \textit{SCORE} value. It may output several pairs of states $\{ x_{c_i}, x_{c_j} \}$, each marking a crucial state transition from $x_{c_i}$ to $x_{c_j}$ in the state machine. Our goal is to identify the events (we call them \textit{nodes}) associated with such transitions, which attackers exploit to compromise the chain.
\vspace{5pt}
\par \textbf{A Simple Example}
\par Suppose there is a sequence of states (responding to some triggered events): \{\textbf{door-opened, light-on, camera-on, fan-on, window-opened}\}. After three separate attempts, the sequence retriever returns the following three sequences:
Sequence-1: \{door-opened, light-on, light-on, camera-on, fan-on\}, Sequence-2: \{fan-on, light-on, camera-on, fan-on, window-opened\}, Sequence-3: \{door-opened, light-on, camera-on, window-opened, fan-on\}.
\par Now, if we apply Algorithm \ref{algo:crucial_node_detection} to this scenario, we find that the pair \{\textbf{light-on, camera-on}\} obtains the highest score. Consequently, we conclude that the transition from the state \textbf{light-on} to \textbf{camera-on} is the most vital one in the state machine, and the nodes associated with those states are the most crucial ones in the chain. IoTMonitor identifies these crucial nodes so that we can perform security hardening to minimize the attacker's chance of compromising an IoT network. The security hardening itself is out of the scope of this paper; we plan to incorporate such capability in an extended version of IoTMonitor in the near future.
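The scoring logic of Algorithm \ref{algo:crucial_node_detection} can be sketched in Python as below. Note that the longest common subsequence is not unique in general, so ties in the top score are possible; this sketch of ours therefore returns all top-scoring pairs rather than a single winner.

```python
from collections import Counter

def lcs(a, b):
    """Longest common subsequence of two event lists (standard DP)."""
    m, n = len(a), len(b)
    dp = [[[] for _ in range(n + 1)] for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + [a[i - 1]]
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1], key=len)
    return dp[m][n]

def crucial_pairs(X, sequences):
    """Score consecutive state pairs over the LCS of each extracted sequence
    against the original sequence X; return all top-scoring pairs."""
    score = Counter()
    for seq in sequences:
        s = lcs(X, seq)
        for j in range(len(s) - 1):
            score[(s[j], s[j + 1])] += 1
    best = max(score.values())
    return [pair for pair, v in score.items() if v == best]
```

Applied to the example above, the pair (\textbf{light-on}, \textbf{camera-on}) appears among the top-scoring pairs, matching the discussion.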
\section{Results and Evaluation}
\par To evaluate the performance of IoTMonitor, we utilize the PEEVES dataset \cite{sbirnbach2019}, which records IoT event occurrences from 12 different IoT devices and measurements from 48 deployed sensors to verify those events. We use 24 hours of data for our experiment, which runs on a system with 16 GB RAM and 4 CPU cores.
\subsection{Dataset Processing}
\par Our experiment deals with three types of data: 1) event data (used as true states), 2) sensor measurements (used as observations), and 3) timestamps. We concentrate only on event occurrences that can be verified by sensor measurements. Since the sensor measurements capture the physical changes in the environment caused by event occurrences, they can be used to crosscheck whether a certain event has occurred. We use a \textit{sliding window} function to determine whether an event is verifiable: the function defines a time window (in milliseconds) $w_i$ that starts at the timestamp of a particular event occurrence. If an event occurrence recorded at time instance $t_i$ is matched by the necessary sensor measurements within $t_i + w_i$, we consider the event verifiable and keep it in the sequence of occurred events; otherwise, we discard it. In our experiment, we consider 20 sliding windows with sizes from 105 to 200 milliseconds in increments of 5 milliseconds.
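The verification step can be sketched as follows. The flat (timestamp, event) representation is a simplifying assumption of ours; the real dataset associates each event with multiple raw sensor streams.

```python
def verifiable_events(events, measurements, window_ms):
    """Keep only events whose verifying evidence arrives within window_ms.

    events:       list of (timestamp_ms, event_id) occurrences
    measurements: list of (timestamp_ms, event_id) sensor readings that
                  verify the given event_id (hypothetical simplified form)
    """
    kept = []
    for t_e, ev in events:
        # An event is verifiable if some matching measurement falls
        # inside [t_e, t_e + window_ms]
        if any(ev == m_ev and t_e <= t_m <= t_e + window_ms
               for t_m, m_ev in measurements):
            kept.append((t_e, ev))
    return kept
```

Sweeping `window_ms` over 105-200 ms in 5 ms steps reproduces the 20 window sizes used in the experiment.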
\subsection{Experiment Setting}
\par At the beginning of our experiment, we use a Gaussian distribution to randomly assign the transition probabilities and initial state probabilities for each true state, and a Dirichlet distribution to assign the emission probabilities. We use the same seed value for each execution.
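A minimal sketch of this initialization, with hypothetical counts of true states and evidence types chosen by us: Gaussian draws are made non-negative and row-normalized to form valid distributions, while Dirichlet rows are valid distributions by construction.

```python
import numpy as np

rng = np.random.default_rng(42)  # fixed seed, reused for each execution
N, M = 5, 8  # hypothetical numbers of true states and evidence types

# Gaussian-based initial state distribution and transition matrix,
# made non-negative and normalized into valid probability distributions
sigma = np.abs(rng.normal(size=N))
sigma /= sigma.sum()
Q = np.abs(rng.normal(size=(N, N)))
Q /= Q.sum(axis=1, keepdims=True)

# Dirichlet-distributed emission probabilities (rows already sum to 1)
E = rng.dirichlet(np.ones(M), size=N)
```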
\subsection{Probability Estimation Time}
\par The probability estimation time is the time required to estimate the converged transition probability distribution $Q$ and emission probability distribution $E$. Figure \ref{fig:estimation_time_decoding_time}(a) presents the estimation time for four event sequences of different lengths (5, 10, 15, and 20) against a range of sliding windows; we show the average over 10 executions.
\par As Figure \ref{fig:estimation_time_decoding_time}(a) shows, the longest estimation time is $<4$ seconds for the sequence of length 20, while in most cases it is $<0.5$ seconds. As the window size increases, the estimation time decreases and stabilizes. There are a few exceptional cases where the estimation time increases sharply with an increase in window size. For example, when the window size increases from 105 to 110 for the sequence of length 20, we see a sudden spike. Examining the source data, we find that this spike is caused by the appearance of two new events that were not present earlier. Since the number of unique events increases and the repetition of the same events decreases in the sequence, the initial state distribution and transition probabilities must be readjusted, which increases the total estimation time. However, this type of exception is transient, and the graph eventually stabilizes. We do not present estimation times for sequences of length $>20$ in Figure \ref{fig:estimation_time_decoding_time}(a) since we observe very little change in pattern for those sequences.
\vspace{-8pt}
\begin{figure}[h]
\centering
\includegraphics[scale=0.33]{figures/v2/EST_DCD_in_ms_final_v2.png}
\vspace{-10pt}
\caption{a) Probability estimation time with respect to sliding window size and length of the event sequence; b) Decoding time with respect to sliding window size and length of the event sequence }
\label{fig:estimation_time_decoding_time}
\vspace{-15pt}
\end{figure}
\subsection{Decoding Time}
\par Decoding time represents the time required to extract the hidden sequence once we have the converged $\theta^*$. As with probability estimation time, we report the average decoding time over 10 executions. Figure \ref{fig:estimation_time_decoding_time}(b) presents the decoding time for four event sequences with lengths 5, 10, 15, and 20 against a range of sliding window sizes.
\par As Figure \ref{fig:estimation_time_decoding_time}(b) shows, the decoding time decreases as the window size increases. The longest decoding time observed is $<2.5$ milliseconds, which is very fast for the retrieval of hidden event sequences. Although there are a few small temporary spikes for the sequence of length 15 after sliding window size 150, the decoding time remains $<2.0$ milliseconds.
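The decoding step timed here is the standard Viterbi algorithm. Below is a minimal log-domain sketch (illustrative, not the authors' implementation); wrapping the call with \texttt{time.perf\_counter()} reproduces this kind of timing measurement:

```python
import numpy as np

def viterbi(obs, init, trans, emit):
    """Most likely hidden-state path given converged HMM parameters.
    Works in the log domain for numerical stability."""
    T, S = len(obs), len(init)
    logt = np.log(trans)
    loge = np.log(emit)
    dp = np.log(init) + loge[:, obs[0]]        # best log-score ending in each state
    back = np.zeros((T, S), dtype=int)         # backpointers
    for t in range(1, T):
        cand = dp[:, None] + logt              # cand[i, j]: score of moving i -> j
        back[t] = np.argmax(cand, axis=0)
        dp = cand[back[t], np.arange(S)] + loge[:, obs[t]]
    path = [int(np.argmax(dp))]
    for t in range(T - 1, 0, -1):              # follow backpointers
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```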
\begin{figure}[h]
\centering
\includegraphics[scale=0.40]{figures/v2/iteration_to_convergence_ratio_final_version.png}
\caption{Number of iterations to estimate the converged transition probabilities and emission probabilities with respect to the ratio between number of observation states and number of true states}
\label{fig:computational overhead}
\vspace{-10pt}
\end{figure}
\vspace{-3pt}
\subsection{Computational Overhead}
\par Since our experiment dedicates most of the computation time to estimating the probabilities, we measure \textit{computational overhead} as the total number of iterations of the \textit{forward-backward procedure} required to reach convergence of the transition and emission probabilities. In Figure \ref{fig:computational overhead}, we plot the required total number of iterations (y-axis) against the ratio between \textit{the total number of unique observation states} and \textit{the total number of unique true states} (x-axis). The computational overhead increases roughly linearly with this ratio.
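A compact way to reproduce this metric is to count EM iterations until the log-likelihood improvement falls below a tolerance. The sketch below (illustrative, not the authors' code) implements scaled forward-backward re-estimation for a discrete HMM and returns the iteration count:

```python
import numpy as np

def baum_welch(obs, n_states, n_symbols, tol=1e-4, max_iter=500, seed=0):
    """EM (forward-backward) for a discrete HMM; returns the estimated
    parameters and the number of iterations needed to converge."""
    rng = np.random.default_rng(seed)
    pi = rng.dirichlet(np.ones(n_states))
    A = rng.dirichlet(np.ones(n_states), size=n_states)   # transitions
    B = rng.dirichlet(np.ones(n_symbols), size=n_states)  # emissions
    prev_ll = -np.inf
    T = len(obs)
    for it in range(1, max_iter + 1):
        # scaled forward pass
        alpha = np.zeros((T, n_states))
        c = np.zeros(T)
        alpha[0] = pi * B[:, obs[0]]
        c[0] = alpha[0].sum(); alpha[0] /= c[0]
        for t in range(1, T):
            alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
            c[t] = alpha[t].sum(); alpha[t] /= c[t]
        # scaled backward pass
        beta = np.zeros((T, n_states))
        beta[-1] = 1.0
        for t in range(T - 2, -1, -1):
            beta[t] = (A @ (B[:, obs[t + 1]] * beta[t + 1])) / c[t + 1]
        gamma = alpha * beta
        gamma /= gamma.sum(axis=1, keepdims=True)
        xi = np.zeros((n_states, n_states))
        for t in range(T - 1):
            x = alpha[t][:, None] * A * (B[:, obs[t + 1]] * beta[t + 1])[None, :]
            xi += x / x.sum()
        # M-step: re-estimate parameters from expected counts
        pi = gamma[0]
        A = xi / gamma[:-1].sum(axis=0)[:, None]
        for k in range(n_symbols):
            B[:, k] = gamma[obs == k].sum(axis=0)
        B /= gamma.sum(axis=0)[:, None]
        ll = np.log(c).sum()                  # log-likelihood of the observations
        if ll - prev_ll < tol:
            return A, B, pi, it
        prev_ll = ll
    return A, B, pi, max_iter
```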
\begin{figure}[h]
\centering
\includegraphics[scale=0.32]{figures/v2/Accuracy_score_heatmap_updated_v2.png}
\vspace{-15pt}
\caption{Accuracy score vs Sliding window size vs Length of the event sequence}
\label{fig:accuracy_score}
\vspace{-5pt}
\end{figure}
\subsection{Accuracy Score}
\par To determine how accurately the extracted hidden sequence of events represents the real events, we compute the f-score for 29 different sequences of events, with lengths ranging from 2 to 30. We do not consider sequences of length 1 because they offer no uncertainty in terms of transition and emission probabilities. We present a heatmap to visualize the relationship among accuracy score, sliding window size, and length of the event sequence. In Figure \ref{fig:accuracy_score}, the accuracy scores are encoded as colors.
\par As we can see, when the length of the event sequence is $<15$, increasing the window size beyond 160 assures a very high accuracy score; we even reach an accuracy score of 1.0 on some occasions. There is one exception, the sequence of length 5: the accuracy score drops after window size 105 because a completely new sequence appears for window sizes 110 to 200. A similar pattern arises, to a lesser extent, for the sequence of length 7. Still, it is quite evident that increasing the window size for smaller lengths ensures a higher accuracy score (equal or close to 1.0). When the length increases considerably, the impact of the sliding window on the accuracy score slowly diminishes. Since our system relies on the functional dependencies (in terms of transition probabilities) among events to extract the hidden sequence, the longer the sequence becomes, the looser these dependencies get.
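The exact matching rule behind the f-score is not spelled out above, so as one plausible reading, a position-wise f-score between the true and decoded sequences can be computed as follows (\texttt{f\_score} is our hypothetical helper; equal-length sequences are assumed, as \texttt{zip} truncates otherwise):

```python
def f_score(true_seq, decoded_seq):
    """Position-wise F1 between a ground-truth event sequence and a
    decoded one. This matching rule is an illustrative assumption,
    not necessarily the scoring used in the original experiments."""
    matches = sum(t == d for t, d in zip(true_seq, decoded_seq))
    precision = matches / len(decoded_seq) if decoded_seq else 0.0
    recall = matches / len(true_seq) if true_seq else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```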
\vspace{-5pt}
\section{Conclusion}
\par In this work, we propose IoTMonitor, which focuses on extracting the underlying event sequence with an HMM-based approach, given a set of physical evidence emitted during a trigger-action attack in an IoT environment. We use the Baum-Welch algorithm to estimate transition and emission probabilities, and the Viterbi algorithm to extract the underlying event sequence.
Our experiments show that both probability estimation and sequence extraction operations converge reasonably fast.
In terms of accuracy score, IoTMonitor achieves 100\% in multiple cases and $\geq 90\%$ in a number of others. We draw a heatmap to visualize the relationship among accuracy score, sliding window size, and length of the event sequence. We also present an algorithm to identify the crucial events in the extracted sequence that attackers need to compromise to implement a trigger-action attack.
Immediate extensions to our approach include the following. First, we currently focus on crucial nodes that appear in multiple attack paths; extending our analysis to an attack graph would let us identify crucial node pairs on different attack paths. Second, the physical evidence collected by sensors could contain noise or even inaccurate data. We will improve our algorithm to provide a more robust attack detection capability for IoT platforms.
\bibliographystyle{./bibliography/IEEEtran}
\section{Introduction}\label{sec-introduction}
Two-qubit gates with very low error rates are essential for making a practical quantum processor. In superconducting architectures~\cite{Devoret2013, Wendin2017, Kjaergaard2020_review}, such operations with fidelity approaching 99.9\% have already been demonstrated~\cite{Sung2021, Kandala2021, Negirneac2021, Wei2022}. In the family of two-qubit gates utilizing microwave control, one of the most successful and popular gate schemes is based on the cross-resonance (CR) effect, which results in the controlled-\textsc{not} (\textsc{cnot}) operation up to local single-qubit rotations~\cite{Paraoanu2006, Rigetti2010}. This gate is activated by driving the control qubit at the frequency of the target qubit (CR drive).
It has been implemented experimentally multiple times, including experiments with ordinary flux qubits~\cite{deGroot2010}, capacitively shunted flux qubits~\cite{Chow2011}, and transmons~\cite{Chow2012, Corcoles2013, Takita2016, Sheldon2016b, Jurcevic2021, Kandala2021, Wei2022}.
These experiments are accompanied by several theoretical publications that discuss the gate and techniques of its improvement in great detail~\cite{deGroot2012, Kirchhoff2018, Tripathi2019, Magesan2020, Sundaresan2020, Malekakhlagh2020, Malekakhlagh2022}.
Despite being a common superconducting qubit, the transmon has the drawback of a weakly anharmonic energy spectrum, which generally complicates qubit control. A promising alternative to the transmon is the fluxonium circuit with its strongly anharmonic spectrum and long coherence time of the low-frequency transition between the ground and first excited states~\cite{Manucharyan2007, Nguyen2019, Somoroff2021}. One of the microwave-activated two-qubit gate schemes that has been implemented experimentally with fluxonium qubits is based on driving in proximity with transitions leading to higher (noncomputational) excited states of the two-qubit spectrum~\cite{Ficheux2021, Xiong2022}. Such gate operations are facilitated by several-gigahertz transmon-like frequencies of those transitions, which results in stronger interactions between noncomputational levels in comparison to interactions between computational levels~\cite{Nesterov2018}. A gate scheme based on populating higher excited states was also suggested for heavy fluxoniums~\cite{Abdelhafez2020}. To fully benefit from the long coherence time of the main qubit transitions of fluxoniums, the population, however, should remain in the computational subspace during the gate operation. This reasoning has been used in a recent proposal of a two-qubit gate based on activating a two-photon transition by a strong microwave drive~\cite{Nesterov2021}. Because the fluxonium spectrum is anharmonic, a strong drive amplitude in that proposal does not cause significant leakage to noncomputational levels, and gate fidelity is therefore not spoiled by shorter coherence times of higher excited states. Other methods to remain in the computational subspace are to implement flux-tunable gates~\cite{Zhang2021, Chen2022a, Bao2022} or tunable couplers~\cite{Moskalenko2021}, although these schemes may reintroduce first-order sensitivity to flux noise and require added hardware complexity.
In this paper, we explore theoretically the CR gate for fluxoniums and therefore continue studying gate schemes that benefit in full from long coherence time of computational transitions.
We focus on the selective-darkening (SD) variant of the CR gate, which was studied both experimentally~\cite{deGroot2010} and theoretically~\cite{deGroot2012} for flux qubits and has lately been used for transmons under the name of direct \textsc{cnot}~\cite{Jurcevic2021, Kandala2021, Malekakhlagh2022, Wei2022}. More recently, the SD variant was demonstrated for fluxonium circuits~\cite{Dogan2022}. In this scheme, a strong CR drive of the control qubit is accompanied by a weak drive of the target qubit that is chosen to cancel rotation of the target qubit when the control qubit is in its ground state. When the control qubit is in its excited state, the target qubit rotates, which, with proper calibration, results in \textsc{cnot} operation. Using the language of effective Hamiltonian models~\cite{Paraoanu2006, Magesan2020}, these two tones result in effective $ZX$ and $IX$ terms of equal magnitudes but opposite signs. The primary effect of the CR drive without the second tone is to produce some unbalanced combination of $ZX$ and $IX$ terms, which requires an additional single-qubit rotation of the target qubit to achieve \textsc{cnot}. This basic scheme can be improved by using an echo sequence of two CR drives of opposite signs and of additional $\pi$ rotations of the control qubit, which eliminates various spurious terms in the effective Hamiltonian such as $ZI$ and $ZZ$ and makes the operation insensitive to low-frequency amplitude noise~\cite{Corcoles2013, Takita2016, Sheldon2016b, Malekakhlagh2020, Sundaresan2020}.
Nevertheless, the SD approach, which produces a direct \textsc{cnot} operation without using an echo sequence and additional single-qubit rotations, results in faster gates and has been demonstrated experimentally to work well for transmons~\cite{Jurcevic2021, Kandala2021, Wei2022}. We illustrate via simulations that such a scheme produces high-fidelity and fast gates for fluxoniums as well, which holds even when $ZZ$ crosstalk is relatively strong. We show that this operation is facilitated by the structure of interaction matrix elements, which are enhanced for transitions to higher energy levels, eventually allowing one to achieve reasonably fast speeds for the proposed two-qubit gate. The operation requires single-qubit $Z$ rotations, which can be done instantly in software~\cite{McKay2017}.
For realistic fluxonium parameters with qubit lifetimes of 500 $\mu$s and 50 $\mu$s of the first and second excited states, we find greater than $99.99\%$ coherent fidelity for a gate duration of 50 ns without using any advanced pulse shaping. In the experiment of Ref.~\cite{Dogan2022}, the gate error was shown to be about 0.5\% with qubits having short coherence times in the range of 10-20 $\mu$s. The incoherent error is dominated by the lifetime of the computational subspace: even when the relaxation time of the second excited states in simulations is only 1 $\mu$s, its contribution to this error is below 0.1\%.
The outline of the paper is as follows. In Sec.~\ref{sec-model}, we review the model of two coupled fluxoniums and the gate concept, discuss in detail the structure of charge matrix elements relevant for the gate operation, and elaborate on fundamental limitations on the gate rate. In Sec.~\ref{sec-fidelity}, we discuss our simulations of the unitary dynamics, the coherent error budget, and the reduction of gate fidelity by nonunitary processes. Finally, we conclude in Sec.~\ref{sec-conclusions}.
\begin{figure}[t]
\includegraphics[width=\columnwidth]{fig_intro.pdf}\caption{
Circuit diagram (a) and energy levels (b) of two capacitively coupled fluxonium qubits $A$ and $B$ driven by two local microwave fields with frequency $\omega_d$. When $\omega_d \approx \omega_{01}^B$, a strong drive of qubit $A$ and a properly chosen much weaker drive of qubit $B$ realize a controlled-$U$ gate operation on qubit $B$ (target), so qubit $B$ is rotated only when qubit $A$ (control) is in $\ket{1}$. Wavy lines in panel (b) illustrate pairs of hybridized two-qubit levels that give dominant contributions to $\langle 10 |\hat{n}_A| 11\rangle$.
}\label{Fig-introduction}
\end{figure}
\section{Model and gate concept}\label{sec-model}
\subsection{Interacting fluxoniums}
We consider the system of two capacitively coupled fluxonium circuits in the presence of two local microwave drives, which is shown schematically in Fig.~\ref{Fig-introduction}(a). Without loss of generality, we assume that qubits $A$ and $B$ are the control and target qubits, respectively, so the local drive applied to qubit $A$ is the CR drive, whose amplitude is larger than the amplitude of the second drive. Both drives are applied at the same frequency $\omega_d\approx \omega_{01}^B$, where $\omega^\alpha_{kl}$ is the frequency of the single-qubit transition $\ket{k_\alpha}-\ket{l_\alpha}$ between two eigenstates of qubit $\alpha$ ($\alpha = A, B$).
We model this system by the Hamiltonian
\begin{equation}\label{Hamiltonian-two-qubit}
\hat{H} = \hat{H}^{(0)}_{A} + \hat{H}^{(0)}_B + \hat{V} + \hat{H}_{\rm drive}\,,
\end{equation}
where the first two terms describe individual qubits and are given by
\begin{equation}\label{Hamiltonian-fluxonium}
\hat{H}_{\alpha}^{(0)} = 4E_{C,\alpha} \hat{n}_\alpha^2 + \frac 12 E_{L,\alpha} \hat{\varphi}_\alpha^2 - E_{J,\alpha} \cos(\hat{\varphi}_\alpha - \phi_{\rm ext,\alpha})\,
\end{equation}
with $\alpha = A, B$. Here $\hat{\varphi}_\alpha$ and $\hat{n}_\alpha$ are the dimensionless position-like flux operator and momentum-like charge operator with the commutation relation $[\hat{\varphi}_\alpha, \hat{n}_{\alpha'}] = i\delta_{\alpha\alpha'}$.
The third term in Eq.~\eqref{Hamiltonian-two-qubit} describes the capacitive interaction according to
\begin{equation}\label{interaction-charge}
\hat{V} = J_C \hat{n}_A \hat{n}_B\,.
\end{equation}
Parameters of Eqs.~\eqref{Hamiltonian-fluxonium} and \eqref{interaction-charge} are discussed in detail in Refs.~\cite{Nesterov2018, Nesterov2021}.
Here we briefly note that each qubit is characterized by its charging ($E_{C, \alpha}$), inductive ($E_{L, \alpha}$), and Josephson ($E_{J,\alpha}$) energies as well as the dimensionless variable $\phi_{{\rm ext}, \alpha}$, which is proportional to the external magnetic flux threading the loop formed by the superinductor and Josephson junction. The value of $\phi_{{\rm ext}, \alpha}$ is tunable \emph{in situ}, and, similarly to other microwave-activated schemes, here we consider fluxoniums permanently parked at their sweet spots of maximal coherence at $\phi_{{\rm ext}, \alpha} = \pi$~\cite{Nguyen2019, Somoroff2021}. The interaction strength $J_C$ in Eq.~(\ref{interaction-charge}) is determined by the mutual capacitance and individual capacitances of the two qubits~\cite{Nesterov2018, Nesterov2021}.
Finally, the last term in Eq.~\eqref{Hamiltonian-two-qubit} describes the coupling to two external microwave drives of frequency $\omega_d$:
\begin{equation}\label{drive}
\hat{H}_{\rm drive} =2\hbar f(t)\cos(\omega_d t) \left(\hat{n}_A + \eta\hat{n}_B\right) \,.
\end{equation}
Here $\hbar = h/2\pi$ is the reduced Planck constant, $f(t)$ is the time-dependent field envelope, and $\eta$ captures the combined effect of different drive amplitudes and of classical crosstalk. We emphasize that because of this crosstalk in a realistic system, each local drive couples to both qubits, so $\eta$ is not simply the ratio of two drive amplitudes. However, if the ratio of two amplitudes can be tuned, the value of $\eta$ can be tuned as well.
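The single-qubit spectra and charge matrix elements used below can be obtained by diagonalizing Eq.~\eqref{Hamiltonian-fluxonium} in a truncated harmonic-oscillator basis. The following is a minimal numerical sketch (ours, not the authors' code; the basis size $N$ is a convergence choice, energies are in units of $h\cdot$GHz), evaluated for the qubit-$A$ parameters of Table~\ref{Table-params}:

```python
import numpy as np
from scipy.linalg import cosm, eigh

def fluxonium(EC, EL, EJ, phi_ext=np.pi, N=60):
    """Diagonalize H = 4*EC*n^2 + EL*phi^2/2 - EJ*cos(phi - phi_ext)
    in a truncated harmonic-oscillator basis (h = 1, energies in GHz)."""
    a = np.diag(np.sqrt(np.arange(1, N)), 1)     # annihilation operator
    phi_zpf = (2.0 * EC / EL) ** 0.25            # zero-point flux fluctuation
    n_zpf = 0.5 / phi_zpf                        # enforces [phi, n] = i
    phi = phi_zpf * (a + a.T)                    # flux operator (real, symmetric)
    n = 1j * n_zpf * (a.T - a)                   # charge operator (Hermitian)
    H = (4 * EC * (n @ n).real + 0.5 * EL * phi @ phi
         - EJ * cosm(phi - phi_ext * np.eye(N)).real)
    evals, evecs = eigh(H)
    return evals - evals[0], evecs, n

# Qubit A parameters from Table I (GHz)
E, V, n_op = fluxonium(EC=1.06, EL=1.09, EJ=4.62)
w01 = E[1]                                        # qubit frequency
n01 = abs(V[:, 0].conj() @ n_op @ V[:, 1])        # |<0|n|1>|
```

Running the analogous call with the qubit-$B$ parameters should reproduce the remaining entries of Table~\ref{Table-params} to the accuracy of the basis truncation.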
\subsection{Gate concept}
An essential condition of any CR scheme is the dependence of the drive matrix element for target-qubit transitions on the state of the control qubit. Let $\ket{kl}$ be the dressed two-qubit eigenstate of Hamiltonian~\eqref{Hamiltonian-two-qubit} at $\hat{H}_{\rm drive}=0$ corresponding to the noninteracting tensor-product state $\ket{k_A}\ket{l_B}$. Then, for our choice of the control and target qubits, each target-qubit transition is between states $\ket{k0}$ and $\ket{k1}$ for some $k$. Therefore, the essential CR condition implies
\begin{equation}\label{drive_inequality}
\langle 00|\hat{H}_{\rm drive}| 01\rangle \ne \langle 10|\hat{H}_{\rm drive}| 11\rangle\,.
\end{equation}
This way, the Bloch-sphere trajectory of qubit $B$ in the presence of $\hat{H}_{\rm drive}\ne 0$ depends on the state of qubit $A$.
When $J_C=0$, we find that the inequality \eqref{drive_inequality} is violated since both sides reduce to the same value determined by the single-qubit charge matrix element $\langle 0_B|\hat{n}_B|1_B\rangle$ for the target qubit. When $J_C\ne 0$, two-qubit bare states $\ket{k_A}\ket{l_B}$ hybridize to form dressed states $\ket{kl}$. Because of this hybridization, $\bra{k0} \hat{H}_{\rm drive} \ket{k1}$ acquires a $k$-dependent correction coming from corrections to both $\bra{k0} \hat{n}_{A} \ket{k1}$ and $\bra{k0} \hat{n}_{B} \ket{k1}$.
The SD condition requires one of the transition matrix elements of $\hat{H}_{\rm drive}$ to vanish. To be specific, we take
\begin{equation}\label{sd_condition}
\langle 00|\hat{H}_{\rm drive}| 01\rangle = 0\,,
\end{equation}
which, together with inequality \eqref{drive_inequality}, implies that only $\ket{10}-\ket{11}$ transition is activated, while $\ket{00}-\ket{01}$ transition is made forbidden, see Fig.~\ref{Fig-introduction}(b).
Using Eq.~\eqref{drive}, we find that the SD condition~\eqref{sd_condition} is equivalent to
\begin{equation}\label{eta}
\eta = - \frac{\bra{00}\hat{n}_A \ket{01} }{ \bra{00}\hat{n}_B \ket{01}}\,.
\end{equation}
The resonance Rabi frequency for the $\ket{10}-\ket{11}$ transition for the continuous drive $f(t)={\rm const.}$ in Eq.~\eqref{drive} is then given by
\begin{equation}\label{Omega_10_11_exact}
\Omega_{10-11} = 2f\left| \bra{10}\hat{n}_A \ket{11} - \bra{00}\hat{n}_A \ket{01} \frac{\bra{10}\hat{n}_B \ket{11}}{\bra{00}\hat{n}_B \ket{01}}\right|\,.
\end{equation}
The \textsc{cnot} gate duration is given by the half period of the Rabi oscillations: $t_{\rm gate} = \pi / \Omega_{10-11}$.
We further refer to the two-qubit charge matrix elements of the type $\bra{k0} \hat{n}_{A} \ket{k1}$ as cross matrix elements; they are zero at $J_C=0$. Matrix elements of the second type, $\bra{k0} \hat{n}_{B} \ket{k1}$, are referred to as direct matrix elements; they reduce to single-qubit values at $J_C=0$.
The first nonvanishing correction to the cross matrix elements is linear in $J_C$, while it is only quadratic for the direct matrix elements because of the parity selection rules for the charge operators at half flux quantum~\cite{Nesterov2018, Nesterov2021}. Therefore, to linear order in $J_C$, we find
\begin{equation}\label{Omega_10_11_nAonly}
\Omega_{10-11} = 2f \left|\bra{10}\hat{n}_A \ket{11} - \bra{00}\hat{n}_A \ket{01} \right| + O\left(J_C^2\right)\,.
\end{equation}
By analogy, if qubit $A$ is chosen as the target qubit and qubit $B$ is the control, we find
\begin{equation}\label{Omega_01_11_nBonly}
\Omega_{01-11} = 2f \left|\bra{01}\hat{n}_B \ket{11} - \bra{00}\hat{n}_B \ket{10} \right| + O\left(J_C^2\right)\,.
\end{equation}
Therefore, to linear order in $J_C$, Rabi rates for the CR effect are determined by the cross matrix elements.
We calculate the values of both cross and direct charge matrix elements in the next section.
\subsection{Charge matrix elements}\label{sec-charge-mel}
\begin{table*}[t]
\centering
\begin{tabular}{cccccccccc}
\hline\hline
\multirow{2}{*}{Qubit}
& $E_{L,\alpha}/h$ & $E_{C,\alpha}/h$ & $E_{J,\alpha}/h$ & $\omega^\alpha_{01}/2\pi$ & $\omega^\alpha_{12}/2\pi$
& $\omega^\alpha_{03}/2\pi$
&
\multirow{2}{*}{$|\langle 0_\alpha |\hat{n}_\alpha |1_\alpha\rangle |$} &
\multirow{2}{*}{$|\langle 1_\alpha |\hat{n}_\alpha |2_\alpha\rangle |$}
&
\multirow{2}{*}{$|\langle 0_\alpha |\hat{n}_\alpha |3_\alpha\rangle |$}
\\
& (GHz) & (GHz) & (GHz) & (GHz) & (GHz) & (GHz) & & &\\
\hline
$A$ & 1.09 & 1.06 & 4.62 & 0.53 & 3.80
& 7.03 & 0.14 & 0.58 & 0.41
\\
$B$ & 1.88 & 1.03 & 5.05 & 1.02 & 3.75
& 8.25 & 0.22 & 0.63 & 0.32
\\
\hline\hline
\end{tabular}
\caption{Fluxonium parameters used for numerical simulations.}
\label{Table-params}
\end{table*}
\begin{figure}[t]
\includegraphics[width=\columnwidth]{fig_matrels.pdf}\caption{(a), (b) Matrix elements of qubit charge operators $\hat{n}_A$ (red squares) and $\hat{n}_B$ (blue circles) for various computational transitions calculated for parameters of Table~\ref{Table-params} at $J_C/h = 350$ MHz. The cross matrix elements, e.g., $\bra{10}\hat{n}_A \ket{11}$, give the dominant contribution to the CR effect. (c) The rates of effective $ZX$ (blue solid line), $IX$ (brown dash-dot line), and $ZZ$ (cyan dotted line) interactions calculated using Eqs.~\eqref{xi_zx}-\eqref{xi_zz} vs $J_C$ for $\lambda=0.2$ [see Eq.~\eqref{lambda}] and $\eta=0$. (d) The rates of $ZX$, negative $ZX$ (blue dotted line), and $IX$ interactions vs $\eta$ at $\lambda=0.2$ and $J_C/h=350$ MHz. }\label{Fig-matrels}
\end{figure}
For quantitative analysis, we use realistic hardware parameters shown in Table~\ref{Table-params} with variable interaction strength $J_C$. Single-qubit frequencies for these parameters are in the 500-1000 MHz range, which ensures that $n_{01}^\alpha$, where $n_{kl}^\alpha = -i\bra{k_\alpha}\hat{n}_\alpha\ket{l_\alpha}$, is not too small, allowing sufficient hybridization of computational levels with relevant states both inside and outside of the computational subspace. In comparison, devices with single-qubit transition frequencies in the 100-200 MHz range have $n_{01}^\alpha \lesssim 0.05$, making gate operations based on hybridization of computational levels more challenging to implement. Such devices are better suited for gate schemes involving higher noncomputational levels because of their larger $n_{12}^\alpha$ and $n_{03}^\alpha$.
In particular, single-qubit transition frequencies in the controlled-$Z$ gate realization of Ref.~\cite{Ficheux2021}, which was based on driving in proximity with the $\ket{11}-\ket{21}$ transition, were only 70 and 130 MHz.
In Figs.~\ref{Fig-matrels}(a) and \ref{Fig-matrels}(b), we show both direct and cross matrix elements of $\hat{n}_A$ and $\hat{n}_B$ calculated numerically
for parameters of Table~\ref{Table-params} and $J_C/h = 350$ MHz. We make three main observations.
First, we notice that direct charge matrix elements are large in comparison to cross matrix elements, e.g., $|\bra{00} \hat{n}_A \ket{10}|\gg |\bra{00} \hat{n}_B \ket{10}|$ in Fig.~\ref{Fig-matrels}(a), since the cross matrix elements become nonzero only due to interaction-induced corrections, i.e., $\bra{00} \hat{n}_B \ket{10} = 0$ when $J_C=0$.
Second, we notice, however, that the \emph{change} of the matrix element with the control-qubit state is greater for cross matrix elements, e.g., $|\bra{10} \hat{n}_B \ket{11} - \bra{00} \hat{n}_B\ket{01}| < |\bra{10} \hat{n}_A \ket{11} - \bra{00} \hat{n}_A\ket{01}|$ in Fig.~\ref{Fig-matrels}(b). This fact is in line with our previous reasoning that the interaction effects are linear in $J_C$ in cross matrix elements and are quadratic in $J_C$ in direct matrix elements.
Finally, we also observe that $\Omega_{10-11} > \Omega_{01-11}$ since $|\bra{10} \hat{n}_A \ket{11} - \bra{00} \hat{n}_A\ket{01}| > |\bra{01} \hat{n}_B \ket{11} - \bra{00} \hat{n}_B\ket{10}|$. Therefore, we have chosen qubit $B$ as the target qubit for parameters of Table~\ref{Table-params}.
We find that magnitudes of cross matrix elements are well explained by an approximation based on first-order perturbation theory that accounts only for contributions coming from computational levels and from $\ket{2k}$, $\ket{k2}$, $\ket{3k}$, and $\ket{k3}$, where $k=0, 1$. Analytic expressions for this approximation are derived in Appendix~\ref{sec-perturbation}. As an example, in Fig.~\ref{Fig-introduction}(b), wavy lines show contributions to $\bra{10}\hat{n}_A\ket{11}$ coming from various pairs of hybridized levels. We emphasize that in comparison to true two-level systems, here couplings to higher levels such as that between $\ket{10}$ and $\ket{21}$ are relevant and cannot be ignored.
In fact, the dominant contributions to $\bra{10}\hat{n}_A\ket{11}$ are coming from hybridization of $\ket{10}$ with $\ket{21}$ and of $\ket{11}$ with $\ket{20}$ rather than from hybridization of levels within the computational subspace.
We also notice that because $\omega_{01}^A < \omega_{01}^B$, see Table~\ref{Table-params}, all four contributions to $\bra{10}\hat{n}_A\ket{11}$ interfere constructively, while, e.g., there is a destructive interference between contributions to $\bra{01}\hat{n}_B\ket{11}$. In addition, since charge matrix elements for the qubit transition approximately scale with frequency, $n_{01}^B$ is almost twice as large as $n_{01}^A$, which further increases contributions from hybridization of $\ket{10}$ and $\ket{21}$ and of $\ket{11}$ and $\ket{20}$ and eventually makes $\bra{10}\hat{n}_A\ket{11}$ the largest cross matrix element among four of them, see Figs.~\ref{Fig-matrels}(a) and \ref{Fig-matrels}(b).
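These cross and direct matrix elements can be reproduced numerically by projecting each fluxonium onto its lowest eigenstates and diagonalizing the coupled Hamiltonian~\eqref{Hamiltonian-two-qubit}. Below is a self-contained sketch (ours, not the authors' code; truncation to five levels per qubit and the basis size are convergence choices):

```python
import numpy as np
from scipy.linalg import cosm, eigh

def fluxonium_levels(EC, EL, EJ, phi_ext=np.pi, N=60, levels=5):
    """Lowest eigenenergies (h = 1, GHz) of one fluxonium at its sweet spot
    and its charge operator projected onto those eigenstates."""
    a = np.diag(np.sqrt(np.arange(1, N)), 1)
    phi_zpf = (2.0 * EC / EL) ** 0.25
    phi = phi_zpf * (a + a.T)
    n = 1j * (0.5 / phi_zpf) * (a.T - a)
    H = (4 * EC * (n @ n).real + 0.5 * EL * phi @ phi
         - EJ * cosm(phi - phi_ext * np.eye(N)).real)
    E, V = eigh(H)
    return E[:levels] - E[0], V[:, :levels].conj().T @ n @ V[:, :levels]

EA, nA = fluxonium_levels(EC=1.06, EL=1.09, EJ=4.62)  # qubit A, Table I
EB, nB = fluxonium_levels(EC=1.03, EL=1.88, EJ=5.05)  # qubit B, Table I
L = len(EA)
I = np.eye(L)
Jc = 0.35  # GHz
H2 = np.kron(np.diag(EA), I) + np.kron(I, np.diag(EB)) + Jc * np.kron(nA, nB)
E2, U = eigh(H2)

# label each dressed state |kl> by maximum overlap with the bare |k_A>|l_B>
idx = {(k, l): int(np.argmax(np.abs(U[k * L + l, :])))
       for k in (0, 1) for l in (0, 1)}

def mel(op, bra, ket):
    return abs(U[:, idx[bra]].conj() @ op @ U[:, idx[ket]])

cross = mel(np.kron(nA, I), (1, 0), (1, 1))   # |<10|n_A|11>|, vanishes at Jc = 0
direct = mel(np.kron(I, nB), (1, 0), (1, 1))  # |<10|n_B|11>|, ~ single-qubit |n_01^B|
```

Only magnitudes are computed here; combining matrix elements with relative signs, as in Eq.~\eqref{Omega_10_11_exact}, additionally requires fixing the eigenvector phase convention.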
\subsection{Effective Hamiltonian}
In this section, we use the language of effective Hamiltonian models~\cite{Paraoanu2006, Magesan2020} to give a complementary perspective on gate operation. The effective Hamiltonian is restricted to the computational subspace, has a block-diagonal structure with respect to control-qubit states, is written in the appropriate rotating frame of two qubits, and describes an effective interaction induced by a microwave drive in addition to a static $ZZ$ coupling. To focus on gate concept, we write it only to linear order in $f$, which yields
\begin{equation}\label{H_eff}
\hat{H}_{\rm eff} = \frac{\xi_{ZX}}{2} \hat{\sigma}_z^A \otimes \hat{\sigma}_x^B + \frac{\xi_{IX}}{2} \hat{\sigma}_0^A \otimes \hat{\sigma}_x^B + \frac{\xi_{ZZ}}{4} \hat{\sigma}_z^A \otimes \hat{\sigma}_z^B\,.
\end{equation}
Here $\xi_{ZX}$, $\xi_{IX}$, and $\xi_{ZZ}$ are the rates of effective $ZX$, $IX$, and $ZZ$ interactions, $\hat{\sigma}_0^A$ is the identity $2\times 2$ matrix, and $\hat{\sigma}_{i}^\alpha$ with $i=x, z$ is the Pauli matrix for qubit $\alpha$. It is $ZX$ interaction that is essential for any gate based on the CR effect. For the SD gates, particularly, $\xi_{ZX} = \pm \xi_{IX}$.
We note that even though we use tensor-product notation in Eq.~\eqref{H_eff}, this Hamiltonian is written in the interacting (dressed) eigenbasis rather than in the basis of tensor-product noninteracting states.
To linear order in $f$, we find
\begin{subequations}
\begin{equation}\label{xi_zx}
\begin{aligned}
\xi_{ZX} = f &\left[\bra{00}\hat{n}_A\ket{01} - \bra{10}\hat{n}_A \ket{11} \right. \\
& \left. + \eta \left(\bra{00}\hat{n}_B\ket{01} - \bra{10}\hat{n}_B \ket{11}\right) \right]\,,
\end{aligned}
\end{equation}
\begin{equation}\label{xi_ix}
\begin{aligned}
\xi_{IX} = f &\left[\bra{00}\hat{n}_A\ket{01} + \bra{10}\hat{n}_A \ket{11} \right. \\
& \left. + \eta \left(\bra{00}\hat{n}_B\ket{01} + \bra{10}\hat{n}_B \ket{11}\right) \right]\,,
\end{aligned}
\end{equation}
and
\begin{equation}\label{xi_zz}
\xi_{ZZ} = \omega_{10-11} - \omega_{00-01}\,.
\end{equation}
\end{subequations}
If higher-order terms are properly taken into account, $IX$ and $ZX$ rates saturate with increasing drive amplitude $f$, $ZZ$ rates acquire small corrections, and two new terms, $ZI$ and $IZ$, appear in Eq.~\eqref{H_eff}~\cite{Tripathi2019, Magesan2020}. The origin of the $ZI$ term is the ac Stark effect due to the off-resonance drive of the control qubit. While this effect is formally quadratic in $f$, its magnitude is relatively large because it is of zeroth order in $J_C$, so the induced $ZI$ rate quickly becomes dominant in the effective Hamiltonian~\cite{Magesan2020}. In comparison, the $IZ$ rate and the $f$-dependent correction to the $ZZ$ rate are of higher order in both $J_C$ and $f$ and are rather small~\cite{Magesan2020}.
A possibly large magnitude of $ZI$ rate is not relevant for gate operation as its effect can be easily absorbed into virtual single-qubit $Z$ rotations~\cite{McKay2017}.
In Fig.~\ref{Fig-matrels}(c), we plot $ZX$, $IX$, and $ZZ$ rates calculated using perturbative Eqs.~\eqref{xi_zx}-\eqref{xi_zz} as a function of the interaction strength $J_C$ assuming only qubit $A$ is driven, i.e., at $\eta=0$. The drive amplitude $f$ is chosen to correspond to $\lambda=0.2$, where $\lambda$ is the dimensionless drive amplitude for qubit $A$ defined according to
\begin{equation}\label{lambda}
\lambda = \frac{\Omega_{A, 0}}{\Delta_{AB}}\,.
\end{equation}
Here $\Omega_{A, 0} = 2f n^A_{01}$ is the single-qubit resonance Rabi frequency for qubit $A$ and $\Delta_{AB} = \omega^B_{01} - \omega^A_{01}$ is the detuning between qubit frequencies. The amplitude $\lambda$ is a measure of the strength of the off-resonance Rabi oscillations of qubit $A$ during the CR pulse; their contrast is given by $\lambda^2 / (\lambda^2 + 1)$. Linear results of Eqs.~\eqref{xi_zx}-\eqref{xi_zz} are valid under the condition $\lambda \ll 1$.
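For concreteness (a worked number, not a result from the simulations): at the value $\lambda = 0.2$ used in Fig.~\ref{Fig-matrels}(c), the contrast of these off-resonance oscillations is
\begin{equation*}
\frac{\lambda^2}{\lambda^2 + 1} = \frac{0.04}{1.04} \approx 3.8\%\,,
\end{equation*}
so the control qubit acquires only a small spurious excited-state population during the CR pulse.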
Figure~\ref{Fig-matrels}(c) illustrates that the strengths of $IX$ and $ZX$ terms are comparable, which signifies the contribution of higher noncomputational levels into these rates. In comparison, in purely two-level models, $\xi_{IX}=0$~\cite{Chow2011, Magesan2020}.
The SD variant of the CR scheme, Eq.~\eqref{sd_condition}, implies that $\xi_{IX} + \xi_{ZX} = 0$, which can be achieved by varying the drive amplitude of the second pulse applied to qubit $B$, i.e., the parameter $\eta$. We illustrate this statement in Fig.~\ref{Fig-matrels}(d) by plotting $IX$, $ZX$, and negative $ZX$ rates vs $\eta$ for the same $\lambda$ as in Fig.~\ref{Fig-matrels}(c) and the same $J_C$ as in the top panels of Fig.~\ref{Fig-matrels}. We observe that $\xi_{ZX}$ is almost unaffected by changing $\eta$ because the difference between the two direct matrix elements in Eq.~\eqref{xi_zx} is negligible, as discussed in Sec.~\ref{sec-charge-mel}. In comparison, $\xi_{IX}$ contains the sum of two direct matrix elements, so it depends strongly on $\eta$. In other words, a direct resonant drive of qubit $B$ induces a rotation of that qubit irrespective of the state of qubit $A$, which manifests itself as a change in $\xi_{IX}$.
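Indeed, summing Eqs.~\eqref{xi_zx} and \eqref{xi_ix} term by term gives
\begin{equation*}
\xi_{ZX} + \xi_{IX} = 2f\left[\bra{00}\hat{n}_A\ket{01} + \eta\, \bra{00}\hat{n}_B\ket{01}\right]\,,
\end{equation*}
which vanishes exactly when $\eta$ takes the value of Eq.~\eqref{eta}, consistent with the SD condition~\eqref{sd_condition}.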
\subsection{Speed limit}\label{sec_speed_limit}
In addition to activating controlled rotations of the target qubit, the CR drive \eqref{drive} is applied off-resonantly to transitions of the control qubit, which contributes to coherent control errors and eventually limits gate speed. The simplest estimate for the maximum allowed drive amplitude of the CR drive can be found by equating the control-qubit effective drive amplitude given by $\Omega_{A,0}$ and drive detuning $\Delta_{AB}$, which results in $\lambda \sim 1$, see Eq.~\eqref{lambda}. Then, using Eq.~\eqref{Omega_10_11_nAonly}, we find the corresponding speed limit for the gate duration
\begin{equation}\label{t_min}
t_{\rm fsl} = \frac{n^A_{01}}{2 \left|\bra{10}\hat{n}_A \ket{11} - \bra{00}\hat{n}_A \ket{01} \right|} \frac{2\pi}{\Delta_{AB}}\,,
\end{equation}
which we refer to as the fundamental speed limit. A similar criterion based on the off-resonant drive of the control qubit, but resulting in $\lambda \sim 1/2$, was used in Refs.~\cite{deGroot2010, deGroot2012} to estimate the maximum possible gate rate. The detuning between qubit frequencies $\Delta_{AB}$ is not restricted to small values for fluxoniums, unlike for transmons, where the choice of $\Delta_{AB}$ is often constrained by the weak anharmonicity~\cite{Tripathi2019}.
We note that Eq.~\eqref{Omega_10_11_nAonly} used to derive Eq.~\eqref{t_min} is valid only in the limit of small $\lambda$.
When terms of higher order in the drive amplitude are accounted for, Eq.~\eqref{Omega_10_11_nAonly} is modified and $\Omega_{10-11}$ saturates as a function of $f$ at larger $f$~\cite{Tripathi2019}. Therefore, for $\lambda\sim 1$, the actual value of $\Omega_{10-11}$ is lower than that given by Eq.~\eqref{Omega_10_11_nAonly} and the corresponding gate is longer than $t_{\rm fsl}$. In practice, experimentally implemented CR gates are typically at least three times longer than the speed limit based on Eq.~\eqref{t_min} modified for an appropriate system.
Nevertheless, a quantum speed limit close to Eq.~\eqref{t_min} was discovered numerically in the analysis of Ref.~\cite{Kirchhoff2018} for a system of coupled transmons, with Eq.~\eqref{t_min} modified for transmons. Close to that speed limit, however, very complex optimal-control pulses were necessary; they were found assuming piecewise-constant controls with unconstrained amplitude and a high sampling rate of 0.1 ns.
In transmons, because of their weak anharmonicity, error due to coherent leakage into the $\ket{1}-\ket{2}$ transition of the control qubit can be significant and imposes an additional speed-limit restriction. Even though fluxoniums are strongly anharmonic, the corresponding charge matrix element $n^\alpha_{12}$, and thus the resonance Rabi frequency, is typically a few times larger than that for the main $\ket{0}-\ket{1}$ transition, see Table~\ref{Table-params}, which can still contribute to the coherent control error. To elaborate on this issue,
we first define the dimensionless drive amplitude for the $\ket{1_A}-\ket{2_A}$ transition similarly to Eq.~\eqref{lambda}:
\begin{equation}
\lambda_{1-2} = \frac{\Omega_{A, 0}^{12}}{\Delta^{12}_{AB}}\,.
\end{equation}
Here $\Omega_{A, 0}^{12} = 2f n^A_{12}$ and $\Delta^{12}_{AB} = \omega^A_{12} - \omega^B_{01}$ are the corresponding single-qubit resonance Rabi frequency and detuning between the transition and drive frequencies.
For the parameters of Table~\ref{Table-params}, we find $\lambda_{1-2}/\lambda\approx 0.73 < 1$. Therefore, gate speed is primarily limited by the main $\ket{0_A}-\ket{1_A}$ transition of qubit $A$ as discussed above, although the leakage error can be comparable to the error due to rotation of the control qubit; see Sec.~\ref{sec-error-budget} below for a discussion of the error budget.
\section{Gate fidelity}\label{sec-fidelity}
\subsection{Unitary dynamics}\label{sec-unitary}
\begin{figure}
\includegraphics[width=\columnwidth]{fig_tdomain.pdf}\caption{(a), (b) Unitary time evolution of the populations of various two-qubit states during gate operation for the initial states $\ket{00}$ (a) and $\ket{10}$ (b). The gate is optimized over drive frequency and two drive amplitudes at fixed $J_C/h = 350$ MHz and $t_{\rm gate}=50$ ns, resulting in $F_{\rm coherent}>99.998\%$. (c), (d) Coherent gate error (blue solid lines) and target-rotation errors for control qubit in $\ket{0}$ (dashed purple lines) and in $\ket{1}$ (dotted red lines) vs drive frequency (c) and dimensionless drive amplitude (d) around their optimal values.}\label{fig_tdomain}
\end{figure}
In numerical simulations of gate dynamics, we use Gaussian envelopes
\begin{equation}\label{pulse_shape}
f(t) \propto \exp\left[-\frac{(t-t_{\rm gate}/2)^2}{2\sigma^2}\right] - \exp\left[-\frac{(t_{\rm gate}/2)^2}{2\sigma^2}\right]
\end{equation}
for the microwave-drive term~\eqref{drive}.
Here $t_{\rm gate}$ is the gate duration, $0<t<t_{\rm gate}$, and $\sigma = t_{\rm gate}/4$. We have verified that substituting these envelopes with Gaussian flat-top pulses does not significantly affect gate error. For given pulse and other parameters, we first find the unitary evolution operator in the 25-level Hilbert space spanned by five levels of each qubit and then project it onto the computational subspace to obtain a $4\times 4$ matrix $\hat{U}_{\rm sim}$. To compare it with the ideal \textsc{cnot} operation defined by
\begin{equation}\label{cnot_def}
\hat{U}_{\textsc{cnot}} =
\begin{pmatrix}
1 & 0 & 0 & 0\\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 \\
0 & 0 & 1 & 0
\end{pmatrix}\,,
\end{equation}
we apply additional single-qubit $Z$ rotations both before and after the gate, see Appendix~\ref{sec-single-z}. In an experimental setting, such rotations can be performed instantly in software and do not contribute to gate error~\cite{McKay2017}. When $\hat{U}_{\rm sim}$ has the same matrix-element amplitudes as $\hat{U}_{\textsc{cnot}}$, these $Z$ rotations reduce $\hat{U}_{\rm sim}$ to $\hat{U}_{\textsc{cnot}}$ exactly, so any phase error in $\hat{U}_{\rm sim}$ does not contribute to gate infidelity.
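As a consistency check of the pulse parametrization, the shifted Gaussian envelope~\eqref{pulse_shape} vanishes at both pulse edges and, for $\sigma = t_{\rm gate}/4$, peaks at $1-e^{-2}$ of the bare Gaussian amplitude at mid-pulse. A minimal sketch (illustrative Python, not the simulation code used here):

```python
import numpy as np

def envelope(t, t_gate):
    """Shifted Gaussian envelope of Eq. (pulse_shape) with sigma = t_gate/4;
    the constant offset makes f(0) = f(t_gate) = 0 exactly."""
    sigma = t_gate / 4.0
    return (np.exp(-(t - t_gate / 2) ** 2 / (2 * sigma ** 2))
            - np.exp(-(t_gate / 2) ** 2 / (2 * sigma ** 2)))

t = np.linspace(0.0, 50.0, 501)   # a 50 ns gate, for illustration
f = envelope(t, 50.0)             # peaks at 1 - exp(-2) at t = 25 ns
```

The vanishing endpoints avoid the spectral leakage that an abruptly truncated Gaussian would introduce.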
Let $\hat{U}$ be the simulated gate operator $\hat{U}_{\rm sim}$ modified by the additional $Z$ rotations. To calculate the coherent gate fidelity, we use the standard expression~\cite{Pedersen2007}
\begin{equation}\label{fidelity-definition}
F_{\rm coherent} = \frac{{\rm Tr}\left(\hat{U}^\dagger \hat{U}\right) + \left|{{\rm Tr}\left(\hat{U}_{\textsc{cnot}}^\dagger \hat{U}\right)}\right|^2}{20}\,.
\end{equation}
We optimize it numerically over the drive frequency, the overall drive amplitude, and the parameter $\eta$, see Eq.~\eqref{drive}, which is equivalent to optimizing over the two drive amplitudes applied to the control and target qubits. As starting values in the optimization procedure for a given $t_{\rm gate}$, we use $\omega_d = \omega_{10-11}$, take $\eta$ from Eq.~\eqref{eta}, and calculate the overall amplitude prefactor from Eq.~\eqref{Omega_10_11_nAonly} assuming $\Omega_{10-11} = \pi / t_{\rm gate}$, with an extra rescaling to account for the Gaussian pulse shape~\eqref{pulse_shape}.
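Equation~\eqref{fidelity-definition} is straightforward to evaluate from the projected $4\times 4$ operator. The sketch below (illustrative NumPy code, not our simulation code) returns $F_{\rm coherent}=1$ for an ideal \textsc{cnot} and penalizes leakage through the ${\rm Tr}(\hat{U}^\dagger \hat{U})$ term:

```python
import numpy as np

U_CNOT = np.array([[1, 0, 0, 0],
                   [0, 1, 0, 0],
                   [0, 0, 0, 1],
                   [0, 0, 1, 0]], dtype=complex)

def coherent_fidelity(U):
    """Eq. (fidelity-definition) for the 4x4 operator U projected onto the
    computational subspace; U need not be unitary when leakage is present."""
    return (np.trace(U.conj().T @ U).real
            + abs(np.trace(U_CNOT.conj().T @ U)) ** 2) / 20
```

For a 4-level subspace the normalization $1/20 = 1/[d(d+1)]$ with $d=4$ makes the expression equal to the average gate fidelity.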
In Figs.~\ref{fig_tdomain}(a) and \ref{fig_tdomain}(b), we illustrate such an optimized gate in the time domain for $t_{\rm gate}=50$ ns and $J_C/h=350$ MHz by plotting populations of several two-qubit states for two initial states. The coherent gate fidelity~\eqref{fidelity-definition} for these parameters is greater than $99.998\%$. In the insets, we show intermediate populations of some of the states that should have zero occupation probabilities in an ideal gate operation. Having $P_{00\to 10}\ne 0$ in Fig.~\ref{fig_tdomain}(a) and $P_{10\to 00}\ne 0$ in Fig.~\ref{fig_tdomain}(b), where $P_{kl \to k'l'}$ is the population of state $\ket{k'l'}$ for the initial state $\ket{kl}$, implies a small chance of spurious control-qubit rotations, while $P_{10\to 20}\ne 0$ and $P_{10\to 21}\ne 0$ in Fig.~\ref{fig_tdomain}(b) illustrate leakage to noncomputational states. We emphasize that all such control-qubit rotations and leakage probabilities are very small at the end of the pulse; a more detailed analysis of the error budget is given in Sec.~\ref{sec-error-budget}.
To check the stability of this optimized gate operation with respect to variations of pulse parameters, we plot the coherent gate error vs the drive frequency detuning $\omega_d-\omega_{10-11}$ and vs the dimensionless drive amplitude $\lambda$ around their optimal values, see the blue solid lines in Figs.~\ref{fig_tdomain}(c) and \ref{fig_tdomain}(d). We observe that the coherent fidelity stays above $99.99\%$ in a frequency window of about 500 kHz and in a drive-amplitude window of about 2\% of the optimal drive amplitude, which is readily achievable in experiments.
We emphasize that the optimal value of $\omega_d$ is close to the frequency of the bright (driven) transition $\omega_{10-11}$ rather than to the frequency of the darkened transition $\omega_{00-01}$, which is shown by a vertical arrow in Fig.~\ref{fig_tdomain}(c). This way, a controlled rotation of the target qubit is activated by an on-resonance drive, which reduces the effects of the static $ZZ$ interaction~\eqref{xi_zz}. To illustrate a high-fidelity gate in a system with strong $ZZ$, here we have chosen system parameters with a $ZZ$ rate of $|\xi_{ZZ}|/2\pi \approx 2$ MHz, see the distance between $\omega_{10-11}$ and $\omega_{00-01}$ in Fig.~\ref{fig_tdomain}(c). In comparison, in simpler schemes to implement a CR gate with a single CR drive followed by additional rotations of the target qubit~\cite{Chow2011}, an ideal operation implies rotations of the target qubit for both states of the control. This condition is hard to achieve when static $ZZ$ coupling is strong, which makes it impossible to drive the target qubit on resonance for both states of the control qubit. In this situation, the $ZZ$ term can be mitigated by an echo sequence~\cite{Corcoles2013, Takita2016, Sheldon2016b, Malekakhlagh2020, Sundaresan2020}, which increases gate duration. Therefore, the SD scheme has a definite advantage in systems with stronger $ZZ$. Another advantage of our technique is that the error due to the extra dynamical phase induced by the $ZZ$ interaction does not contribute to gate infidelity because of the additional single-qubit $Z$ rotations applied both before and after the gate operation as described in Appendix~\ref{sec-single-z}.
We next study coherent gate fidelity as a function of the pulse duration $t_{\rm gate}$ at fixed $J_C/h=350$ MHz and as a function of $J_C/h$ at fixed $t_{\rm gate}=50$ ns with the gate optimizations performed separately for each value of $t_{\rm gate}$ or $J_C/h$. The resulting error $1-F_{\rm coherent}$ is shown by the solid blue lines in Figs.~\ref{Fig-fidelity-tgdep} and \ref{Fig-fidelity-jdep}, where parameters of Fig.~\ref{fig_tdomain} are indicated by squares and vertical arrows. We discuss the observed behavior of coherent error in more detail together with the error budget in Sec.~\ref{sec-error-budget}.
\subsection{Coherent error budget}\label{sec-error-budget}
Here we define several contributions to $1-F_{\rm coherent}$, relating them to those transition probabilities $P_{kl\to k'l'}(t_{\rm gate})$ that should be zero for an ideal gate operation. For brevity, we omit the argument $t_{\rm gate}$ in these probabilities below. To derive the error-budget contributions, we first notice that $|\langle k'l' |\hat{U}|kl\rangle| = \sqrt{P_{kl \to k'l'}}$ when both states $\ket{kl}$ and $\ket{k'l'}$ are in the computational subspace. Then, we use Eq.~(\ref{fidelity-definition}) and the probability conservation law $\sum_{k'l'}P_{kl \to k'l'} = 1$, where the sum runs over all the two-qubit states, including noncomputational ones, to express $1-F_{\rm coherent}$ in terms of only those $P_{kl\to k'l'}$ that should be zero. Linearizing the resulting expression, we write
\begin{equation}\label{error_budget}
1 - F_{\rm coherent} = \mathcal{E}^{c0}_{\rm target} + \mathcal{E}^{c1}_{\rm target} + \mathcal{E}_{\rm control} + \mathcal{E}_{\rm leakage}
\end{equation}
with the individual terms explained below.
The first two terms of Eq.~\eqref{error_budget} represent the errors in the rotation of the target qubit for two states of the control qubit.
When the control qubit is in its ground state, we have
\begin{equation}\label{error_target0}
\mathcal{E}^{c0}_{\rm target} = \frac 15 \left(P_{00\to 01} + P_{01\to 00}\right)\,,
\end{equation}
implying the error due to rotation of the target qubit when it should remain idle.
When the control qubit is in its state $\ket{1}$, we similarly find
\begin{equation}\label{error_target1}
\mathcal{E}^{c1}_{\rm target} = \frac 15 \left(P_{10\to 10} + P_{11\to 11}\right)\,,
\end{equation}
implying the error due to rotation of the target qubit on the Bloch sphere by an angle different from $\pi$. These two errors are shown by dashed purple and dotted red lines in Figs.~\ref{fig_tdomain}(c), \ref{fig_tdomain}(d), \ref{Fig-fidelity-tgdep}, and \ref{Fig-fidelity-jdep}. We observe that $\mathcal{E}^{c1}_{\rm target}$ gives the dominant contribution to $1-F_{\rm coherent}$ when the pulse frequency or overall amplitude is slightly detuned from its optimal value, see Figs.~\ref{fig_tdomain}(c) and \ref{fig_tdomain}(d). At the same time, $\mathcal{E}^{c0}_{\rm target}$ remains unimportant in these cases since the ratio of the two drive amplitudes is kept fixed, which enforces the SD condition~\eqref{sd_condition}. When $\eta$ is also detuned from its optimal value (not shown in plots), the $\ket{00}-\ket{01}$ transition is no longer darkened, leading to a larger $\mathcal{E}^{c0}_{\rm target}$. A small increase in $\mathcal{E}^{c0}_{\rm target}$ with deviation of the drive amplitude $\lambda$ in Fig.~\ref{fig_tdomain}(d) is associated with nonlinear corrections, which are absent in the SD condition~\eqref{sd_condition} but lead to a small change of the optimal $\eta$. For pulses optimized for a given $t_{\rm gate}$, we find that $\mathcal{E}^{c0}_{\rm target}$ and $\mathcal{E}^{c1}_{\rm target}$ have similar values at short $t_{\rm gate}$, with their sum being the dominant contribution to $1-F_{\rm coherent}$, see Fig.~\ref{Fig-fidelity-tgdep}. At long $t_{\rm gate}$ or large $J_C/h$, we find that $\mathcal{E}^{c1}_{\rm target}$ increases and eventually determines $1-F_{\rm coherent}$.
The third term in Eq.~\eqref{error_budget} is the error due to transitions in the control qubit within the computational subspace:
\begin{equation}\label{error_control}
\mathcal{E}_{\rm control} = \frac 15 \sum_{k, l, l'=0, 1} P_{kl \to \bar{k}l'}\,,
\end{equation}
where $\bar{0}=1$ and vice versa. This error never dominates $1-F_{\rm coherent}$ for the data points presented in this paper and is therefore not shown in the plots.
Finally, the last term in Eq.~\eqref{error_budget} describes the coherent leakage to higher noncomputational levels and is given by
\begin{equation}\label{error_leakage}
\mathcal{E}_{\rm leakage} = 1 - \frac 14 {\rm Tr}\left(\hat{U}^\dagger \hat{U}\right)\,.
\end{equation}
It is shown by green dash-dot lines in Figs.~\ref{Fig-fidelity-tgdep} and \ref{Fig-fidelity-jdep}. This leakage error is the dominant contribution to $1-F_{\rm coherent}$ at intermediate $t_{\rm gate}$ and is the reason for its local minimum at $t_{\rm gate}\approx 32$ ns.
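For reference, the linearized budget~\eqref{error_budget} can be assembled directly from the projected operator. A sketch (illustrative Python, assuming the basis ordering $\ket{00},\ket{01},\ket{10},\ket{11}$ with the control qubit first, not our simulation code):

```python
import numpy as np

def error_budget(U):
    """Linearized coherent-error budget, Eqs. (error_budget)-(error_leakage),
    from the projected 4x4 operator U with convention U[final, initial];
    basis order |00>, |01>, |10>, |11> (control qubit first)."""
    P = np.abs(U) ** 2                       # transition probabilities
    e_t0 = (P[1, 0] + P[0, 1]) / 5           # target rotates, control in |0>
    e_t1 = (P[2, 2] + P[3, 3]) / 5           # imperfect pi rotation, control in |1>
    e_ctrl = sum(P[fin, ini]                 # control-qubit flips
                 for ini in range(4) for fin in range(4)
                 if (ini < 2) != (fin < 2)) / 5
    e_leak = 1 - np.trace(U.conj().T @ U).real / 4
    return e_t0, e_t1, e_ctrl, e_leak
```

All four terms vanish for an ideal \textsc{cnot}, and $e_{\rm leak}$ is nonzero whenever $U$ fails to be unitary on the computational subspace.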
We observe that $\mathcal{E}_{\rm target}^{c0}$, $\mathcal{E}_{\rm target}^{c1}$, and $\mathcal{E}_{\rm leakage}$ have discontinuities as functions of $J_C/h$ at $J_C/h\approx 210$ and $\approx 240$ MHz, which are accompanied by kinks in $1-F_{\rm coherent}$, see Fig.~\ref{Fig-fidelity-jdep}. This behavior is explained by the crossing of different local minima of $1-F_{\rm coherent}$ at these values of $J_C/h$. A different optimization protocol, e.g., one that minimizes $\mathcal{E}_{\rm leakage}$ rather than $1-F_{\rm coherent}$, could result in smoother behavior of the total error.
We notice that $1-F_{\rm coherent}$ vs $t_{\rm gate}$ in Fig.~\ref{Fig-fidelity-tgdep} crosses the 0.01 threshold at around 27 ns. At the same time, the fundamental speed limit~\eqref{t_min} is less than 10 ns for these parameters, which can be easily understood by noticing that $\lambda \lesssim 0.2$ at $t_{\rm gate} = 50$ ns, see Fig.~\ref{fig_tdomain}(d), and that Eq.~\eqref{t_min} is based on the $\lambda \sim 1$ criterion. This discrepancy between the shortest possible $t_{\rm gate}$ in a realistic gate and $t_{\rm fsl}$ given by Eq.~\eqref{t_min} is in line with our previous reasoning based on effects that are nonlinear in the drive amplitude, see text below Eq.~\eqref{t_min}. To approach the limit given by Eq.~\eqref{t_min}, optimal-control pulses, which have more complicated shapes than the Gaussian envelope~\eqref{pulse_shape}, are necessary~\cite{Kirchhoff2018}.
\subsection{Dissipation effects}\label{sec-dissipation}
\begin{figure}[t]
\includegraphics[width=\columnwidth]{fig_fid_tgdep.pdf}\caption{ Gate error vs $t_{\rm gate}$ at $J_C/h=350$ MHz with pulse parameters optimized for each value of $t_{\rm gate}$. Lines show the total coherent error (blue solid line), leakage error (green dash-dot line), and target-qubit rotation errors for the control qubit in states $\ket{0}$ (purple dashed line) and in $\ket{1}$ (red dotted line). The square marker and vertical arrow point at the parameters of Figs.~\ref{fig_tdomain}(a) and \ref{fig_tdomain}(b). Empty markers show gate error in the presence of relaxation processes and relaxation-limited dephasing for $T_1^{0-1}=100$ $\mu$s and $T_1^{1-2}=1$ $\mu$s (circles), for $T_1^{0-1}=100$ $\mu$s and $T_1^{1-2}=\infty$ (triangles), and for $T_1^{0-1}=500$ $\mu$s and $T_1^{1-2}=50$ $\mu$s (diamonds). }\label{Fig-fidelity-tgdep}
\end{figure}
\begin{figure}[t]
\includegraphics[width=\columnwidth]{fig_fid_jdep.pdf}\caption{
Gate error vs $J_C/h$ at $t_{\rm gate}=50$ ns with pulse parameters optimized for each value of $J_C/h$. Lines show the total coherent error (blue solid line), leakage error (green dash-dot line), and target-qubit rotation errors for the control qubit in states $\ket{0}$ (purple dashed line) and in $\ket{1}$ (red dotted line). The square marker and vertical arrow point at the parameters of Figs.~\ref{fig_tdomain}(a) and \ref{fig_tdomain}(b). Empty markers show gate error in the presence of relaxation processes and relaxation-limited dephasing for $T_1^{0-1}=100$ $\mu$s and $T_1^{1-2}=1$ $\mu$s (circles), for $T_1^{0-1}=100$ $\mu$s and $T_1^{1-2}=\infty$ (triangles), and for $T_1^{0-1}=500$ $\mu$s and $T_1^{1-2}=50$ $\mu$s (diamonds).
}\label{Fig-fidelity-jdep}
\end{figure}
To complete our analysis of gate operation, we discuss the error due to incoherent effects. We focus on qubit relaxation processes and assume that they give the dominant contribution to the loss of coherence, resulting in the $T_2 = 2T_1$ relation between the coherence ($T_2$) and relaxation ($T_1$) times of all the relevant transitions. Since the system mostly stays in the computational subspace, with only a small intermediate excitation of the qubits' second excited states, see Figs.~\ref{fig_tdomain}(a) and \ref{fig_tdomain}(b), the dominant error is expected to come from incoherent effects in the $\ket{0_\alpha}-\ket{1_\alpha}$ transitions of both qubits. We denote the corresponding relaxation time by $T_1^{0-1}$ and assume that it is the same in both qubits. Because the lifetime of the second excited states can be much shorter than $T_1^{0-1}$, we also account for the relaxation in $\ket{1_\alpha}-\ket{2_\alpha}$ transitions with the corresponding time $T_1^{1-2}$ for both qubits.
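The $T_2 = 2T_1$ relation for purely relaxation-limited coherence can be verified with a minimal single-qubit Lindblad integration (illustrative Python with simple Euler stepping, not the full two-qubit master-equation solver used in this work):

```python
import numpy as np

# Single-qubit amplitude damping: integrate the Lindblad equation
# drho/dt = (1/T1) (L rho L^dag - 1/2 {L^dag L, rho}) with L = |0><1|,
# then extract T2 from the decay of the coherence rho_01.
T1, dt, n_steps = 1.0, 1e-4, 20000
L = np.array([[0, 1], [0, 0]], dtype=complex)   # lowering operator
rho = 0.5 * np.ones((2, 2), dtype=complex)      # |+> state, maximal coherence

coh0 = abs(rho[0, 1])
for _ in range(n_steps):
    lind = (L @ rho @ L.conj().T
            - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L)) / T1
    rho = rho + dt * lind                       # first-order Euler step
t = n_steps * dt                                # total evolution time = 2 T1
T2 = -t / np.log(abs(rho[0, 1]) / coh0)         # assumes exponential decay
```

The extracted $T_2$ equals $2T_1$ up to the $O(dt)$ discretization error of the Euler scheme.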
We follow the procedure outlined in Ref.~\cite{Nesterov2021} to simulate gate dynamics by solving the full master equation, to find the resulting $16 \times 16$ $\chi$ matrix, and then to calculate gate fidelity by comparing this matrix to its ideal value. We implement this procedure for pulse parameters that are optimized for coherent error~\eqref{fidelity-definition}.
We first consider a suboptimal value $T_1^{0-1}=100$ $\mu$s for the main qubit transitions and a very short relaxation time $T_1^{1-2}=1$ $\mu$s of the $\ket{1_\alpha}-\ket{2_\alpha}$ transitions. We find that these parameters still result in a gate error below $10^{-3}$ for most of the data points, see circles in Figs.~\ref{Fig-fidelity-tgdep} and \ref{Fig-fidelity-jdep}. To estimate the contribution of relaxation specifically in $\ket{0_\alpha}-\ket{1_\alpha}$ transitions, we then remove the collapse operators corresponding to the $\ket{2_\alpha}-\ket{1_\alpha}$ relaxation from the master equation and compare the two results. We find that gate error for simulations without relaxation in $\ket{1_\alpha}-\ket{2_\alpha}$ transitions is slightly smaller, see triangles in Figs.~\ref{Fig-fidelity-tgdep} and \ref{Fig-fidelity-jdep} labeled as $T_1^{1-2}=\infty$. Even though $T_1^{1-2}$ is only 1 $\mu$s, the contribution of relaxation in $\ket{1_\alpha}-\ket{2_\alpha}$ transitions to gate error is below $0.1\%$, which agrees with the very low excitation probability of noncomputational states during the gate operation, see Figs.~\ref{fig_tdomain}(a) and \ref{fig_tdomain}(b). Finally, we present the results for devices with $T_1^{0-1}=500$ $\mu$s and $T_1^{1-2}=50$ $\mu$s, see diamond-shaped markers in Figs.~\ref{Fig-fidelity-tgdep} and \ref{Fig-fidelity-jdep}. These longer relaxation times, which are realistically achievable in modern fluxonium devices~\cite{Nguyen2019, Somoroff2021}, reduce gate error to the $10^{-4}$ range.
\section{Conclusions}\label{sec-conclusions}
In conclusion, we have investigated a microwave-activated two-qubit gate scheme for fluxonium circuits based on the selective darkening of a transition in the computational subspace~\cite{deGroot2010, deGroot2012}. The scheme is facilitated by the cross-resonance effect~\cite{Paraoanu2006, Rigetti2010, Chow2011} and leads to high-fidelity \textsc{cnot} gates even for strong $ZZ$ coupling and large detuning between qubit frequencies. A gate fidelity in excess of $99.99\%$, evaluated for the unitary dynamics, is achievable with a basic microwave pulse shape and does not require complicated pulse sequences or a special arrangement of qubit energy levels. The population of higher excited states remains low during gate pulses, so the gate performance is nearly unaffected by relaxation processes from energy levels outside of the computational subspace. Even for a short lifetime of 1 $\mu$s of the second excited state, the corresponding contribution to the gate error remains below 0.1\%.
The optimized gate operation was analyzed in this paper for a device with a relatively strong $ZZ$ coupling, which was $\xi_{ZZ}/2\pi \approx 2$ MHz in the case discussed in Fig.~\ref{fig_tdomain}. This example illustrates an excellent resilience of the SD gates against spurious $ZZ$ coupling. While this specific \textsc{cnot} gate works well for this value of $\xi_{ZZ}$, it is generally preferred to have processors with much smaller values of the $ZZ$ crosstalk during single-qubit gates and while idling.
The magnitude of $ZZ$ coupling can be greatly reduced by applying an always-on off-resonance microwave drive, which was demonstrated for fluxoniums~\cite{Xiong2022} and transmons~\cite{Mitchell2021, Wei2022}. Additional techniques to mitigate this crosstalk are based on using a multipath coupling scheme~\cite{Kandala2021, Mundada2019} and/or a tunable coupler~\cite{Mundada2019, Yan2018}, which will likely become an ultimate scalable solution for a multiqubit processor.
Without such techniques, for fluxonium parameters used in our analysis, a 50 ns-long high-fidelity gate is possible for a smaller value of $J_C/h$ with $\xi_{ZZ}/2\pi < 1$ MHz, see Fig.~\ref{Fig-fidelity-jdep}. Because gate rate~\eqref{Omega_10_11_nAonly} scales with $J_C$ linearly, while $\xi_{ZZ}$ is quadratic in $J_C$, an additional reduction of $J_C/h$ by a factor of 2 increases $t_{\rm gate}$ to 100 ns, but reduces $\xi_{ZZ}$ by an extra factor of 4. We also note that here we have performed a very basic optimization procedure of microwave pulses, which can be greatly improved with optimal control to allow a fast gate with even smaller $J_C/h$ or $\xi_{ZZ}$.
In transmons, the rate of the cross-resonance gate is maximized when the two qubits are in the straddling regime, where the detuning between qubit frequencies is smaller than their anharmonicity~\cite{Tripathi2019}. In addition, in a multiqubit transmon processor, extra care is required to tune frequencies to reduce spectator errors~\cite{Sundaresan2020}. These requirements of simultaneously having small detunings and avoiding frequency collisions, including transitions to the second excited states, lead to a complicated frequency-allocation procedure for transmon-based processors~\cite{Morvan2022}. Modern fabrication techniques result in a 1\% imprecision in transmon frequencies~\cite{Kreikebaum2020}, which is not precise enough to obtain a high yield of devices satisfying the frequency conditions unless additional postfabrication tuning is performed~\cite{Zhang2022, Hertzberg2021}.
In comparison, frequency requirements for fluxoniums are much less stringent, which guarantees a better fabrication yield of fluxonium-based processors. The road map for a scalable fluxonium-based processor, including frequency allocation and estimates for the fabrication yield, has been recently presented in Ref.~\cite{Nguyen2022}. There, it was argued that the two-qubit gate realized via the CR effect is a viable solution for a scalable design for such a processor. In addition, it was shown that a combination of capacitive and inductive couplings can effectively suppress the static $ZZ$ rate, while maintaining high-fidelity CR gates.
Finally, another attractive feature of the fluxonium is that its low frequency implies a long coherence time, which currently exceeds 1 ms in the best devices~\cite{Somoroff2021}. We have shown that comparable qubit lifetimes together with a realistic relaxation time of 50 $\mu$s for the second excited states result in gate error within the $10^{-4}$ range. We note that the fluxonium frequency in Ref.~\cite{Somoroff2021} was 163 MHz, while devices with higher qubit frequencies are preferred for the realization of a high-fidelity SD scheme to increase hybridization of qubit states in a system with capacitive coupling. Thus, simulations in this paper were performed for two qubits with frequencies in the 500 MHz -- 1 GHz range. Generally, higher-frequency fluxoniums have proportionally shorter coherence times.
However, much less community effort has been devoted to improving fluxonium devices in comparison to transmons. In particular, suboptimal fabrication procedures and antenna designs were used in the high-coherence fluxonium devices of Refs.~\cite{Nguyen2019, Somoroff2021}, resulting in effective dielectric loss tangents that are an order of magnitude larger than in the best 3D transmons~\cite{Wang2015}. Given recent advances in materials and fabrication of 2D transmons~\cite{Place2021, Wang2022a}, the fluxonium coherence time can be pushed up significantly in both planar and 3D geometries.
These arguments indicate that the fluxonium is an excellent candidate for a scalable processor, and the SD gate scheme is suitable for its realization.
\begin{acknowledgements}
We would like to thank Long Nguyen, Quentin Ficheux, Haonan Xiong, Lo\"ick Le Guevel, Ebru Dogan, and Dario Rosenstock for stimulating discussions. We acknowledge the support from ARO-LPS HiPS program (grant No. W911NF-18-1-0146). V.E.M. and M.G.V acknowledge the Faculty Research Award from Google and fruitful conversations with the members of the Google Quantum AI team. We used the QuTiP software package~\cite{Johansson2012, Johansson2013} and performed computations using resources and assistance of the UW-Madison Center For High Throughput Computing (CHTC) in the Department of Computer Sciences. The CHTC is supported by UW-Madison, the Advanced Computing Initiative, the Wisconsin Alumni Research Foundation, the Wisconsin Institutes for Discovery, and the National Science Foundation.
\end{acknowledgements}
\begin{widetext}
| 16,752 |
\section{}
Repulsive interactions in Fermi systems are at the heart of some of the most interesting phenomena in quantum many-body physics. For instance, the interplay between the spin and orbital degrees of freedom gives rise to Stoner's itinerant ferromagnetism in the continuum~\cite{Stoner_1933} and to the complex phases of the repulsive Hubbard model on a lattice~\cite{mielke_1993}.
The dilute repulsive spin-1/2 Fermi gas, where the interactions between two spin states $\uparrow$ and $\downarrow$ are described by a positive $s$-wave scattering length $a$, is one of the most fundamental quantum many-body models~\cite{Huang_1957,lee_1957,Galitskii_1958}. Among its important features, it is amenable to first-principles perturbative calculations (for $k_\text{F}a\ll 1$, where $k_\text{F}$ is the Fermi wavenumber). In that limit, its properties (e.g. the ground-state energy, Landau parameters, etc.) are universal, i.e. they depend on $a$ alone, not on the details of short-range physics~\cite{Galitskii_1958,Landau_1957,Efremov_2000}.
Ultracold atomic gases have emerged as a powerful platform for studying this model, because effective repulsion can be implemented on the so-called `upper' (repulsive) branch using short-range attractive potentials~\cite{Jo_2009,Sanner_2012,Lee_2012,Valtolina_2017,Scazza_2017,Amico_2018,Scazza_2020}. This implementation is particularly interesting because it can realize the regime of strong ($k_\text{F}a\gtrsim 1$) yet short-range interactions ($k_\text{F}r_0\ll 1$, where $r_0$ is the potential range), see e.g.~\cite{Pricoupenko_2004,Shenoy_2011}.
However, the repulsive Fermi gas with short-range attractive potentials is intrinsically metastable. This originates from the existence of a universal bound state in the two-body problem for $a>0$, with a binding energy $\epsilon_\text{b} = \frac{\hbar^2}{m a^2}$ where $m$ is the mass of the atom. The pairing instability of the repulsive branch of the many-body system towards the lower (attractive) branch of bound pairs, depicted in Fig.~\ref{FIG:1}(a), is a complex problem; it is expected to evolve from an instability driven by \emph{universal} three-body recombination for $\epsilon_\text{b}\gg E_\text{F}$~\cite{Petrov_2003,Esry_2001}, to many-body pairing effects when $\epsilon_\text{b}\lesssim E_\text{F}$~\cite{Petrov_2003,Pekker_2011,He_2016,Amico_2018} where $E_\text{F}$ is the Fermi energy.
This pairing instability has played a central role in the study of the strongly repulsive Fermi gas and the search for the itinerant-ferromagnet phase~\cite{Duine_2005,LeBlanc_2009,conduit_2009,conduit_2009_2,conduit_2009_3,chang_2010,schmidt_2011,pilati_2010,von_2011,chang_2011,Shenoy_2011,massignan_2013,pilati_2014,zintchenko_2016,He_2016}. Pioneering experiments have shown a decreased lifetime of the gas with increasing interactions~\cite{Jo_2009,Sanner_2012} and an initial rate of reduction of repulsive correlations (possibly due to the ferromagnetic instability) that is larger than the initial pairing rate~\cite{Amico_2018,Scazza_2020}.
However, complex dynamics arising from the in-trap density inhomogeneity as well as the far-from-equilibrium nature of the initial quenched states have hindered the study of the homogeneous system's stability~\cite{Jo_2009,Amico_2018}. The advent of homogeneous gases prepared in optical box traps~\cite{Gaunt_2013,Chomaz_2015,Mukherjee_2017,navon_2021} has enabled the investigation of complex stability problems in clean settings~\cite{eigen_2017,bause_2021,shkedrov_2022}. Here, we revisit the fundamental problem of the stability of the repulsive Fermi gas by measuring the three-body recombination law in a homogeneous gas.
The experiment starts with a weakly attractive gas of $^6$Li atoms in a balanced mixture of the first and third lowest Zeeman sublevels (labeled $\uparrow$ and $\downarrow$, respectively), trapped in a red-detuned optical dipole trap. The gas is evaporatively cooled at a bias magnetic field $B= 287$~G. It is then loaded into a blue-detuned (at a wavelength of $639$~nm) cylindrical box trap constructed by intersecting a `tube' beam (produced with a set of axicons) with two thin sheets, see Fig.~\ref{FIG:1}(b). The magnetic field is then ramped to $B=597$~G, where the interactions are weakly repulsive ($a \approx 500~a_0$, where $a_0$ is the Bohr radius~\cite{zurn_2013_2}). At this stage, we typically have $N_{\uparrow} \approx N_{\downarrow} \approx 6 \times 10^5$ atoms per spin state at $T \approx 0.3~T_\text{F}$ with $E_\text{F} \approx k_{\mathrm{B}} \times 0.5\;\mu\mathrm{K}$ and a spin imbalance of $\frac{N_{\downarrow}-N_{\uparrow}}{N_{\downarrow} + N_{\uparrow}} = 0.2(3)\%$. The interaction field is then ramped to its final value over $100$~ms and left to settle for an additional $25$~ms. We then hold the atoms for a variable duration $t_\text{hold}$. We image the gas near the zero crossing of $a$ ($|a| \le 50~a_0$) by quickly ramping the field to $B= 569$~G, so that trapped pairs are converted into tightly bound molecules and are thus detuned from the atomic imaging resonance~\cite{imaging,SuppMat}.
\begin{figure}[!h]
\includegraphics[width=1\columnwidth]{figure1}
\caption{A homogeneous repulsive Fermi gas prepared in an optical box. (a) Sketch of the two lowest energy branches of a Fermi gas with a positive scattering length $a$; the `upper' (repulsive) branch is shown in red, the `lower' branch (a gas of fermion pairs) is shown in blue. The red dashed line is the repulsive Fermi gas energy up to second order in $k_\text{F}a$~\cite{Huang_1957,lee_1957}; the red shaded area depicts the energy width associated with the finite lifetime of the upper branch. (b) \emph{In-situ} imaging of the box-trapped Fermi gas. Gravity, here oriented along $-\mathbf{\hat{y}}$, is compensated by magnetic levitation. The image on the left is the column-integrated optical density (OD). The plots on the right are cuts along the white dashed lines of the image. The solid lines are derived from the fit used to extract the volume of the box; $V=7.3(6) \times 10^{-4}~\mathrm{mm}^3$. The slanted profile in the horizontal cut is caused by the slightly conical shape of our cylindrical box~\cite{SuppMat}.}
\label{FIG:1}
\end{figure}
We show in Fig.~\ref{FIG:2}(a) examples of time evolution of the atom number $N$ per spin state for different values of $a$, normalized to the initial atom number $N_0$. Qualitatively, the gas lifetime decreases with increasing $a$, even though $N_0$ also decreases (because of losses during the interaction field ramp and the settling time~\cite{SuppMat}). The average kinetic energy per particle $\epsilon_\text{kin}$, measured after time-of-flight expansion and shown in Fig.~\ref{FIG:2}(b), also slowly decreases with $t_\text{hold}$.
The origin of the decay is revealed model-independently by plotting the atom loss rate $\dot{N}/N_0$ versus $N/N_0$ (Fig.~\ref{FIG:2}(c)). The examples shown follow a scaling relation $\dot{N} \propto -N^{\gamma}$ (fits are shown as solid lines, and the fitted values of $\gamma$ are given in the legend). We observe that $\gamma\approx 1$ at weak interactions ($a\ll 10^3~a_0$), where the losses are caused by density-independent collisions with the residual background gas. For stronger interactions, we observe $\gamma\approx 3$, consistent with an atom loss rate per unit volume
\begin{equation}\label{eq:loss}
\dot{n} = -L_3 n^3
\end{equation}
due to three-body collisions, with a constant loss coefficient $L_3$ and a uniform density $n=N/V$, where $V$ is the volume of the box.
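For constant $L_3$, Eq.~(\ref{eq:loss}) integrates in closed form to $n(t) = n_0/\sqrt{1 + 2 L_3 n_0^2 t}$. The sketch below fits this solution to synthetic decay data; it is a minimal illustration of the fitting procedure with purely hypothetical numbers in arbitrary units, not the actual analysis (which also includes a one-body loss term~\cite{onebody}).

```python
import numpy as np
from scipy.optimize import curve_fit

# Closed-form solution of dn/dt = -L3 n^3: n(t) = n0 / sqrt(1 + 2 L3 n0^2 t).
# All numbers are in arbitrary units and purely illustrative (hypothetical).
def n_of_t(t, n0, L3):
    return n0 / np.sqrt(1.0 + 2.0 * L3 * n0**2 * t)

rng = np.random.default_rng(0)
n0_true, L3_true = 1.0, 0.05
t = np.linspace(0.0, 20.0, 30)
# Synthetic decay data with 1% multiplicative noise.
n_obs = n_of_t(t, n0_true, L3_true) * (1 + 0.01 * rng.standard_normal(t.size))

# Fit the decay curve to recover the loss coefficient.
(n0_fit, L3_fit), _ = curve_fit(n_of_t, t, n_obs, p0=[0.8, 0.02])
```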
\begin{figure}[!h]
\includegraphics[width=1\columnwidth]{{figure2}}
\caption{Decay of a uniform repulsive Fermi gas. (a) Evolution of atom numbers for different interaction strengths, normalized to the initial atom numbers $N_0$. The solid blue, yellow, and red lines are fits to a three-body loss model that includes a one-body loss rate determined from the green-line fit~\cite{onebody}. The three-body loss fits are limited to the region where $\epsilon_\text{kin}$ changes by less than $20\%$ of its initial value, indicated by solid circles; open circles are not used in the fit. The same marker style is used in (b) and (c). Dotted lines are extensions of the fits beyond the fitting range. (b) Evolution of the average kinetic energy per particle during atom losses. (c) Scaling relation between atom loss rate and atom number. Solid lines are power law fits and the extracted exponents $\gamma$ are listed in the legend. }
\label{FIG:2}
\end{figure}
\begin{figure*}[!hbt]
\centerline{\includegraphics[width=\textwidth]{{figure3}}}
\caption{Threshold law of three-fermion recombination. (a) Scaling relation between $L_3$ and the (time-averaged) kinetic energy $\bar{\epsilon}_\text{kin}$. The solid line shows the power law fit to three sets of data, rescaled by factors for clarity (see legend). The dashed line is the fit assuming $\lambda = 1$~\cite{SuppMat}. (b) Temperature evolution during three-body losses. The dashed lines are theoretical predictions without adjustable parameters, given the initial measured $(T/T_\text{F})_0$ (see legend). The solid lines are linear fits to extract the coefficient $\theta$; the dotted lines show the estimated uncertainty of $\theta$, see panel (c). (c) Temperature-change coefficient $\theta$ versus $T/T_\text{F}$. The vertical dashed line marks the critical $(T/T_\text{F})^*$ at which $\theta$ changes sign, and the horizontal dashed line shows the asymptotic value of $\theta$ in the classical limit.}
\label{FIG:3}
\end{figure*}
Now that we have established a range over which losses are dominated by three-body recombination, we quantitatively characterize the process. The event rate per unit volume for each type of event is $ \Omega \equiv K_3 n^3$ ($=\Omega_{\uparrow\uparrow\downarrow} = \Omega_{\uparrow\downarrow\downarrow}$) where $K_3$ is the recombination coefficient; $K_3$ can be studied through losses, since $K_3 = L_3/d$, where $d$ is the average number of atoms lost per event (either because their release energy from recombination exceeds the trap depth or because they form molecules that are optically detuned). We obtain $L_3$ by fitting $N(t)$ to the solution of Eq.~(\ref{eq:loss})~\cite{onebody} (solid lines in Fig.~\ref{FIG:2}(a)). To ensure that $L_3$ is approximately constant with $t_\text{hold}$, the fits are restricted to a range where ${\epsilon}_\text{kin}$ changes by at most $20\%$ of the initial value, see solid points in Fig.~\ref{FIG:2}~\cite{SuppMat}.
We examine this assumption more carefully by studying the relationship between $L_3$ and $\epsilon_\text{kin}$. We control $\epsilon_{\text{kin}}$ by varying the box depth at an intermediate evaporative cooling stage, keeping the final box depth $U_\text{box}$ the same. As shown in Fig.~\ref{FIG:3}(a) for three different values of $a$, we observe that $L_3$ scales as a power law of $\epsilon_\text{kin}$ averaged over time, $\bar\epsilon_\text{kin}$.
Theoretically, $K_3\propto \epsilon_\text{kin}^\lambda$, where the exponent $\lambda$ is determined by the three-body threshold laws, which crucially depend on the symmetries imposed by the quantum statistics of the collision participants~\cite{Esry_2001}. For instance, for three distinguishable particles or indistinguishable bosons, there is no energy dependence ($\lambda=0$); for three indistinguishable fermions, $\lambda=2$~\cite{Yoshida_2018,top_2021}. The generic process in the spin-$1/2$ Fermi gas corresponds to the previously-unverified case of collisions involving two indistinguishable fermions. The three-body event rate in a unit volume $\omega_3$ depends on the momenta $\mathbf{k}_1$ and $\mathbf{k}_2$ of the indistinguishable fermions, and is independent of the third participant's momentum $\mathbf{k}'$~\cite{petrov}:
\begin{equation}\label{eq:omega}
\omega_3(\mathbf{k}_1,\mathbf{k}_2,\mathbf{k}')\propto (\mathbf{k}_1-\mathbf{k}_2)^2.
\end{equation}
Integrating Eq.~(\ref{eq:omega}) over the phase space density of the three participants, one finds $\lambda=1$. Experimentally, we measure $\lambda_\text{exp} = 1.36(14)$~\cite{epsilonkin} (solid line in Fig.~\ref{FIG:3}(a)), in reasonable agreement with the theoretical prediction.
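In the classical (non-degenerate) limit, the linear scaling $\lambda = 1$ can be checked numerically: averaging the weight $(\mathbf{k}_1-\mathbf{k}_2)^2$ of Eq.~(\ref{eq:omega}) over Maxwell--Boltzmann momenta gives $\langle(\mathbf{k}_1-\mathbf{k}_2)^2\rangle = 2\langle k^2\rangle \propto \epsilon_\text{kin}$. A minimal Monte Carlo sketch (illustrative only, not the full phase-space integral):

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_rate(sigma, nsamp=200_000):
    # Sample momenta of the two identical fermions from an isotropic
    # Maxwell-Boltzmann distribution of width sigma and average the
    # threshold-law weight (k1 - k2)^2.
    k1 = sigma * rng.standard_normal((nsamp, 3))
    k2 = sigma * rng.standard_normal((nsamp, 3))
    return np.mean(np.sum((k1 - k2) ** 2, axis=1))

# Doubling the temperature doubles <k^2> (i.e. eps_kin), sigma -> sqrt(2)*sigma;
# the averaged event rate should double as well, i.e. lambda = 1.
r1 = mean_rate(1.0)
r2 = mean_rate(np.sqrt(2.0))
ratio = r2 / r1   # expect ~2
```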
The dependence of $\omega_3$ on momentum has interesting implications on the temperature dynamics of the gas during decay. In Fig.~\ref{FIG:3}(b), we show $T/T_0$ versus $N/N_0$ (where $T_0$ is the initial $T$). Depending on $T/T_\text{F}$, the system either cools down or heats up.
This effect results from an interplay between Fermi correlations and the momentum dependence of $\omega_3$. The cooling effect from the preferential removal of particles with large momenta (without spatial selectivity)~\cite{SuppMat}, strongest for $T\gg T_\text{F}$, competes with the heating from the perforation of the Fermi sea, which dominates in the deeply degenerate regime~\cite{timmermans_2001}. A theoretical model for a closed system, shown as colored dashed lines in Fig.~\ref{FIG:3}(b), yields good agreement with the observed evolution of the temperature for $N/N_0\gtrsim0.7$~\cite{SuppMat}. We attribute the discrepancy at late times for low $(T/T_\text{F})_0$ to additional cooling from plain evaporation.
Quantitatively, we define the coefficient $\theta \equiv \frac{N}{T}\left(\frac{\partial T}{\partial N}\right)_V$ under this rarefaction, and measure it at $t_{\text{hold}}=0$ for various $T/T_\text{F}$ (Fig.~\ref{FIG:3}(c)). We observe that the transition from heating to cooling occurs at a critical degeneracy $(T/T_\text{F})^* \approx 0.7$.
The measurements are in excellent agreement with the theoretical prediction (solid line in Fig.~\ref{FIG:3}(c))~\cite{SuppMat}, which establishes the crossing at $(T/T_\text{F})^* = 0.71$ (vertical dashed line). For $T\gg T_\text{F}$, $\theta$ approaches $2/9$, where the cooling effect is most pronounced. Note that for all $T$, $\theta<2/3$, so that this process does not increase the quantum degeneracy of the gas (see related scenarios for bosons~\cite{schemmer_2018,Dogra_2019}, and fermions near a narrow Feshbach resonance~\cite{Peng_2021}).
We now turn to the dependence of $L_3$ on interactions. In Fig.~\ref{FIG:4}(a), we display $\gamma$ versus $a$; the solid points are data where losses are three-body dominated (see Fig.~\ref{FIG:4} and caption).
We subsequently extract $L_3$ for all interactions by fixing $\gamma=3$ and taking one-body decay into account~\cite{onebody}; to factor out the effect of the threshold law, we display $L_3/\bar \epsilon_{\text{kin}}$, see Fig.~\ref{FIG:4}(b). We observe that over more than four orders of magnitude, $L_3/\bar \epsilon_{\text{kin}}$ follows a power law of $a$. Fitting the data in the three-body-dominated region (solid blue points in Fig.~\ref{FIG:4}(b)), we find $L_3/\bar \epsilon_{\text{kin}} \propto a^{6.1(2)}$ (solid blue line).
The fact that $L_3$ scales precisely as $a^6$ is strong evidence for the universality of this process. Indeed, should three-body recombination be universal, i.e. be independent of short-range physics, the threshold law implies the scaling of $K_3$ with interaction strength~\cite{DIncao_2005}. Specifically, if $K_3\propto \epsilon_\text{kin}^\lambda$, then on dimensional grounds one should have $K_3\propto \epsilon_\text{kin}^\lambda \frac{m^{\lambda-1}}{\hbar^{2\lambda-1}}a^{4+2\lambda}$. For two identical fermions, one finds $K_3\propto a^6$, in excellent agreement with our measurements. It is interesting to note that the $a^4$ scaling for bosons is not universal, due to effects related to Efimov physics~\cite{braaten_2007,naidon_2017}.
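The dimensional argument can be verified mechanically: writing each quantity as an exponent vector over (mass, length, time), the combination $\epsilon_\text{kin}^\lambda\, m^{\lambda-1}\, \hbar^{1-2\lambda}\, a^{4+2\lambda}$ carries $\mathrm{length}^6/\mathrm{time}$, the dimension of $K_3$ (since $K_3 n^3$ is an event rate per unit volume), for every $\lambda$. A sketch:

```python
import numpy as np

# Dimensions encoded as exponent vectors (mass, length, time).
EPS  = np.array([1, 2, -2])   # energy
MASS = np.array([1, 0, 0])    # mass
HBAR = np.array([1, 2, -1])   # action
A    = np.array([0, 1, 0])    # scattering length

def K3_dims(lam):
    # K3 ~ eps^lam * m^(lam-1) / hbar^(2 lam - 1) * a^(4 + 2 lam)
    return lam * EPS + (lam - 1) * MASS - (2 * lam - 1) * HBAR + (4 + 2 * lam) * A

# K3 must carry length^6 / time ([0, 6, -1]) regardless of the threshold law.
for lam in (0, 1, 2):
    assert list(K3_dims(lam)) == [0, 6, -1]
```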
Compared to the bosonic case, the additional factor $\epsilon_{\text{kin}}/\epsilon_\text{b}$, proportional to $(k_\text{F}a)^2$ at low $T$, can be interpreted as a suppression factor due to Pauli blocking, which arises because two identical fermions need to come within $\approx a$ of each other to form a final bound state.
Now that we have established $L_3 \propto \epsilon_\text{kin} a^6$, we can extract the dimensionless constant $A$ in $L_3 = d A \epsilon_{\text{kin}} a^6/\hbar$, predicted to be universal~\cite{Auniv}. As some or all products of the recombination can be lost, $d$, the link between losses and recombination events, depends on the box depth $U_\text{box}$ and $\epsilon_\text{b}$. To gain insight into this link, we implement a second imaging protocol where we image the atoms directly at the interaction field (depicted in the top left inset of Fig.~\ref{FIG:4}(b)); in our range of $a$, molecules and atoms are optically unresolved~\cite{imaging}. The measurements are displayed as red circles in Figs.~\ref{FIG:4}(a)-(b).
\begin{figure}[!t]
\includegraphics[width=1\columnwidth]{{figure4}}
\caption{Universality of three-body recombination. (a) Atom-loss scaling exponent $\gamma$. Blue and red circles correspond to imaging near the zero crossing of $a$ and directly at the interaction field, respectively. Data in the three-body-dominant region, selected by $|\gamma - 3| \le 0.5$ (blue band) and with a relative uncertainty $\le 20\%$, are shown by solid points and left open otherwise, in all panels. (b) Universal scaling of $L_3$ with $a$. The experiment sequence is shown in the upper insets. The blue line is the power law fit to the solid blue points. Vertical grey dashed lines mark the threshold values of $a$ such that $\epsilon_\text{b}/3 = 2 U_{\text{box}}$ and $2\epsilon_\text{b}/3 = U_{\text{box}}$, and the bands account for averaging over initial energies~\cite{SuppMat}. Bottom cartoons depict imaging and trapping regimes after recombination for the atoms and molecules. (c) Universal constant $A$. Data points are the experimental values of $A = \hbar L_3/(3 \bar\epsilon_{\text{kin}} a^6)$, and the solid purple line is derived from a global $a^6$ fit to the data in (b) (not shown). The systematic error from the volume calibration is shown by the light purple band \cite{SuppMat}.}
\label{FIG:4}
\end{figure}
At low $a$, the values of $L_3$ measured by both imaging methods coincide, as $d=3$ in both cases. The separation at $a \gtrsim 1300~a_0$ occurs close to the condition $\epsilon_\text{b}/3 \approx 2 U_{\text{box}}$ at which the molecules remain trapped (see cartoons at the bottom of Fig.~\ref{FIG:4}(b))~\cite{deposit}. For larger $a$, $d<3$ for the `interaction field' imaging.
For the `zero-crossing' imaging, $d=3$ still holds; the $a^6$ scaling extends up to the point where $2\epsilon_\text{b}/3 < U_{\text{box}}$, beyond which all recombination products may be trapped~\cite{Petrov_2003,unitary}. The maximum of $L_3(a)$ is located marginally beyond this threshold. Fixing $d=3$, we fit $L_3/\bar \epsilon_{\text{kin}}$ (solid blue points) and find $A = 143(16)_{\mathrm{stat.}}(24)_{\mathrm{sys.}}$. To examine more closely the quality of the $a^6$ scaling, we extract $A$ without free parameters from $\hbar L_3/(3 \bar\epsilon_{\text{kin}} a^6)$ (Fig.~\ref{FIG:4}(c)). Our measurements are in excellent agreement with the theoretical prediction $A=148$ for the mass-balanced three-fermion problem~\cite{Petrov_2003}.
The range over which the $a^6$ scaling law applies is surprisingly large. First, it extends even at large $a$ where the measured $\gamma$ is only marginally close to $3$ (see open circles in Fig.~\ref{FIG:4}). Secondly, at the highest $a$ for which we observe $a^6$ scaling, $\epsilon_{\text{kin}}\gtrsim k_\text{B}\times 0.5$~$\mu$K is only slightly smaller than $\epsilon_\text{b}\approx k_\text{B}\times 1.2$~$\mu$K, even though the condition for the universal scaling is expected to be valid for $\epsilon_{\text{kin}} \ll \epsilon_\text{b}$~\cite{Petrov_2003}.
Finally, our measurement of $K_3$ provides an important ingredient for assessing the limits of equilibrium for a strongly interacting repulsive Fermi gas. To ensure equilibrium, $\Gamma_3 \equiv 3 K_3 n^2$~\cite{gamma3} must be significantly slower than $\Gamma_2$, the two-body elastic collision rate. We find $\Gamma_2/\Gamma_3 = (k_{\mathrm{F}} a)^{-4} I(T/T_\mathrm{F})$ where $I(T/T_\mathrm{F})$ is a universal function that reaches its maximum at $T \approx 1.2~T_{\mathrm{F}}$. At this temperature, $\Gamma_2=\Gamma_3$ at $k_\text{F} a \approx 1.3$, providing an upper bound to the interaction strength of a repulsive Fermi gas in equilibrium~\cite{SuppMat,kFalim}. This limit is close to the predicted point for the ferromagnetic transition, $k_\text{F} a=\pi/2$ in the mean-field approximation~\cite{houbiers_1997} and $\approx 1$ in quantum Monte Carlo simulations~\cite{pilati_2010,chang_2011,He_2016}.
In conclusion, we studied the stability of the repulsive Fermi gas with short-range interactions. We measured the universal recombination law for three particles of equal mass involving two identical fermions. This work paves the way for the study of complex stability problems of Fermi systems in clean uniform settings, e.g. multi-component gases~\cite{ottenstein_2008,huckans_2009,nakajima_2010}, mass-imbalanced mixtures~\cite{Taglieber_2008,Wille_2008,Barontini_2009,Pires_2014,Tung_2014}, and molecules~\cite{hoffmann_2018,duda_2022}. A future work could leverage uniform Fermi gases to explore the regime $\epsilon_\text{b}\lesssim\epsilon_{\text{kin}}$, where $K_3\propto \epsilon_{\text{kin}} a^6$ should no longer hold, and at low temperature many-body pairing mechanisms are expected to take over~\cite{Pekker_2011,He_2016}. To access the shorter time scales expected, fast state preparation and probing techniques such as internal state manipulation could be useful~\cite{Amico_2018,Scazza_2020}.
We thank D.S. Petrov, F. Scazza, M. Zaccanti, and G. Roati for fruitful discussions. We also thank Z. Hadzibabic, F. Werner, and L. Chambard for comments on the manuscript. This work was supported by the NSF, DARPA, the David and Lucile Packard Foundation, and the Alfred P. Sloan Foundation.
| 6,674 |
\section{Introduction}
\label{sec:Introduction}
In functional magnetic resonance imaging (fMRI) studies of the human brain, connectivity matrices are often constructed from the Pearson correlation between the average blood-oxygen-level dependent (BOLD) signals of parcellations \cite{chung.2019.NN}. The whole brain is often parcellated into $p$ disjoint regions, where $p$ is usually a few hundred \cite{arslan.2018,glasser.2016}. Subsequently, either functional or structural information is overlaid on top of the parcellation, and a $p \times p$ connectivity matrix $C = (c_{ij})$ that measures the strength of connectivity between brain regions $i$ and $j$ is obtained. Recently, we have begun to see large-scale brain networks that are more effective in localizing regions and increasing prediction power \cite{chung.2018.SPL,valencia.2009}. However, increasing the parcellation resolution also increases the computational burden exponentially.
For an undirected network with $p$ nodes, there are $p(p-1)/2$ edges, and thus the brain network is considered an object in dimension $p(p-1)/2$. Such high dimensional data often requires $\mathcal{O}(p^3)$ run time for various matrix manipulations such as matrix inversion and singular value decomposition (SVD). Even at 3mm resolution, functional magnetic resonance images have more than 25000 voxels in the brain \cite{chung.2018.SPL}. It requires about 5GB of storage to store a matrix of size $25000 \times 25000$ (Figure \ref{fig:persist-cycle}). At 1mm resolution, there are more than 300000 voxels, requiring more than 670GB of storage. Considering that large-scale brain imaging datasets often contain thousands of images, learning and inference at higher spatial resolution pose a serious computational and storage challenge.
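The quoted storage figures follow from simple arithmetic on dense double-precision matrices; a quick check (in binary gigabytes):

```python
# Memory footprint of a dense double-precision p x p matrix, in GiB.
def gib(p, bytes_per_entry=8):
    return p * p * bytes_per_entry / 2**30

print(round(gib(25_000), 1))   # ~4.7 GiB, quoted as "about 5GB"
print(round(gib(300_000)))     # ~671 GiB, quoted as "more than 670GB"
```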
We directly address these challenges by embedding brain networks onto the unit sphere $S^2$. Although there are a few available techniques for embedding graphs into hyperbolic spaces, hyperbolic spaces are not intuitive and not necessarily easier to compute with \cite{munzner.1998,shi.2019}. Since numerous computational tools have been developed for spherical data, we propose to embed brain networks onto the sphere. The spherical network embedding offers far more adaptability and applicability in various problems.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.95\linewidth]{network5000nodes.pdf}
\caption{The correlation brain network (left) obtained from the resting-state functional magnetic resonance images (rs-fMRI) of a healthy subject at 5000 regions \cite{chung.2018.SPL,gritsenko.2020}. The network is too dense to visualize all the edges; in the right panel, only the connections with positive correlations are shown. Beyond visualization, performing various matrix and graph operations on such a dense network is computationally time consuming and often not feasible.}
\label{fig:persist-cycle}
\end{center}
\end{figure}
\section{Embedding onto a sphere}
Consider a weighted complete graph $G=(V, C)$ consisting of node set $V = \left\{ 1, \dots, p \right\}$ and edge weights $C=(c_{ij})$, where $c_{ij}$ is the weight between nodes $i$ and $j$. In most brain networks, the edge weights are given by some similarity measure between nodes \cite{lee.2011.MICCAI,li.2009,mcintosh.1994,newman.1999,song.2005}; most often, the Pearson correlation is used \cite{chung.2019.NN} (Figure \ref{fig:persist-cycle}).
Suppose measurement vector ${\bf x}_j = (x_{1j},\cdots,$ $x_{nj})^\top \in \mathbb{R}^n$ is given on node $j$ over $n$ subjects. We center and rescale the measurement ${\bf x}_j$ such that
$$ {\bf x}_{j}^{\top}{\bf x}_{j} = \sum_{i=1}^n x_{ij}^2 = 1, \quad \sum_{i=1}^n x_{ij} = 0.$$
We can show that $c_{ij} = {\bf x}_i^\top{\bf x}_j$ is the Pearson correlation \cite{chung.2019.NN}.
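This identity is easy to verify numerically. The sketch below (with hypothetical random data) centers and scales the columns of a data matrix as above and checks that the resulting Gram matrix equals the Pearson correlation matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 20, 5
X = rng.standard_normal((n, p))

# Center each column and scale it to unit norm, as in the text.
X = X - X.mean(axis=0)
X = X / np.linalg.norm(X, axis=0)

# The Gram matrix X^T X now equals the Pearson correlation matrix.
gram = X.T @ X
pearson = np.corrcoef(X, rowvar=False)
assert np.allclose(gram, pearson)
```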
Such points are in the $n$-dimensional sphere $S^{n-1}$. Let the $n \times p$ data matrix be
${\bf X} = [{\bf x}_1, \cdots, {\bf x}_p].$
Then the correlation matrix is given by ${\bf X}^{\top}{\bf X}$
\begin{eqnarray} {\bf X}^{\top}{\bf X} = ({\bf x}_i^{\top} {\bf x}_j) = U^{\top} D(\eta_1, \cdots, \eta_p) U \label{eq:SVD}\end{eqnarray}
with $U^{\top}U = I_p$ and diagonal matrix $D$ with eigenvalues $\eta_1 \geq \cdots \geq \eta_p$. Since there are usually more nodes than subjects in brain images, i.e., $n \ll p$, the correlation matrix is not positive definite and some eigenvalues may be zero \cite{chung.2018.SPL}. Since the correlation is a similarity measure, it is not a distance.
The often-used correlation distance
$d_{ij} = 1- {\bf x}_i^\top{\bf x}_j$ is not a metric. To see this, consider the following 3-node counterexample:
\begin{eqnarray*} {\bf x}_1 = \Big(0, \frac{1}{\sqrt{2}}, -\frac{1}{\sqrt{2}}\Big)^\top, \quad {\bf x}_2 = \Big(\frac{1}{\sqrt{2}}, 0, -\frac{1}{\sqrt{2}}\Big)^\top, \quad {\bf x}_3 = \Big(\frac{1}{\sqrt{6}}, \frac{1}{\sqrt{6}}, -\frac{2}{\sqrt{6}}\Big)^\top.
\end{eqnarray*}
Then $d_{12} = 1/2$ while $d_{13} + d_{23} = 2 - \sqrt{3} \approx 0.27$, so $d_{12} > d_{13} + d_{23}$, violating the triangle inequality. The question is then under what condition the Pearson correlation becomes a metric.
\begin{theorem}For centered and scaled data ${\bf x}_1, \cdots, {\bf x}_p$,
$\theta_{ij} = \cos^{-1}({\bf x}_i^\top{\bf x}_j)$ is a metric.
\label{theorem:metric}
\end{theorem}
{\em Proof.}
The centered and scaled data ${\bf x}_i$ and ${\bf x}_j$ are residing on the unit sphere $S^{n-1}$. The correlation between ${\bf x}_i$ and ${\bf x}_j$ is the cosine angle $\theta_{ij}$ between the two vectors, i.e., $${\bf x}_i^\top {\bf x}_j = \cos \theta_{ij}.$$
The geodesic distance $d$ between nodes ${\bf x}_i$ and ${\bf x}_j$ on the unit sphere is given by angle $\theta_{ij}$:
$$\theta_{ij} = \cos^{-1} ({\bf x}_i^\top {\bf x}_j).$$
For nodes ${\bf x}_i, {\bf x}_j \in S^{n-1}$, there are two possible angles $\theta_{ij}$ and $2\pi - \theta_{ij}$, depending on whether we measure the angle along the shorter or the longer arc. We take the convention of using the smaller angle in defining $\theta_{ij}$.
With this convention,
$$\theta_{ij} \leq \pi.$$
Given three nodes ${\bf x}_i, {\bf x}_j$ and ${\bf x}_k$, which form a spherical triangle,
we then have spherical triangle inequality
\begin{eqnarray} \theta_{ij} \leq \theta_{ik} + \theta_{kj}. \label{eq:STI} \end{eqnarray}
The inequality (\ref{eq:STI}) is proved in \cite{reid.2005}. The other conditions for a metric hold trivially. \hfill $\qed$
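Both claims can be checked numerically on the 3-node counterexample above: the correlation distance $1 - {\bf x}_i^\top{\bf x}_j$ violates the triangle inequality, while the angular distance $\theta_{ij}$ satisfies it (here with equality, since ${\bf x}_3$ lies on the great-circle arc between ${\bf x}_1$ and ${\bf x}_2$). A minimal sketch:

```python
import numpy as np

# The three unit vectors from the counterexample (centered, unit norm).
x1 = np.array([0.0, 1.0, -1.0]) / np.sqrt(2)
x2 = np.array([1.0, 0.0, -1.0]) / np.sqrt(2)
x3 = np.array([1.0, 1.0, -2.0]) / np.sqrt(6)

def corr_dist(u, v):   # 1 - correlation
    return 1.0 - u @ v

def angle(u, v):       # geodesic (angular) distance on the unit sphere
    return np.arccos(np.clip(u @ v, -1.0, 1.0))

# 1 - correlation violates the triangle inequality ...
assert corr_dist(x1, x2) > corr_dist(x1, x3) + corr_dist(x3, x2)
# ... while the angular distance satisfies it (with equality here).
assert angle(x1, x2) <= angle(x1, x3) + angle(x3, x2) + 1e-12
```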
Theorem \ref{theorem:metric} directly shows that the embedding of correlation matrices on $S^{n-1}$ can be done by simply centering and scaling the data. A similar approach is proposed for embedding an arbitrary distance matrix onto a sphere in \cite{wilson.2014}. In our case, the problem is further simplified due to the geometric structure of correlation matrices. On $S^2$, the network is simply embedded as nodes ${\bf x}_1, \cdots, {\bf x}_p$ and edges with weights $\cos^{-1} ({\bf x}_i^{\top} {\bf x}_j)$ \cite{chung.2019.NN}.
\section{Hyperspherical harmonic expansion of brain networks}
Once we embedded correlation networks onto a sphere, it is possible to algebraically represent such networks as basis expansion involving the hyperspherical harmonics \cite{domokos.1967,higuchi.1987,hosseinbor.2015.MIA2,hosseinbor.2015.MIA1,hosseinbor.2014.MICCAI}.
Let $\boldsymbol{\varphi} = (\theta, \varphi_2, \cdots, \varphi_{n-1})$ be the spherical coordinates of $S^{n-1}$ such that $\theta \in [0,\pi), \varphi_i \in [0,2\pi)$ where $\theta$ is the axial angle. Then the spherical Laplacian $\Delta_{n-1}$ is iteratively given as \cite{cohl.2011}
$$\Delta_{n-1} = \frac{\partial^2}{\partial \theta^2} + (n-2) \cot \theta \frac{\partial}{\partial \theta}+ \frac{1}{\sin^2 \theta} \Delta_{n-2}.$$
With respect to the spherical Laplacian $\Delta_{n-1}$, the hyperspherical harmonics $Y_{\bf l} (\boldsymbol{\varphi} )$ with ${\bf l} = (l_1, \cdots, l_{n-1})$ satisfies
$$ \Delta_{n-1} Y_{\bf l}(\boldsymbol{\varphi} ) = - \lambda_{\bf l} Y_{\bf l}(\boldsymbol{\varphi} )$$
with eigenvalues $\lambda_{\bf l} = l_{n-1}(l_{n-1} + n-2)$ for $|l_1 | \leq l_2 \leq \cdots \leq l_{n-1}$; for $S^2$ this reduces to the familiar $l(l+1)$. The hyperspherical harmonics are given in terms of the Legendre polynomials.
We can compute the hyperspherical harmonics inductively from the previous dimension starting with $S^2$, which we parameterize with $(\theta, \varphi_2) \in [0,\pi) \otimes [0,2\pi)$. Then the traditional spherical harmonics $Y_{l_2 l_1}$ is given as \cite{chung.2008.sinica,courant.1953}
\begin{eqnarray*}
Y_{l_2 l_1} =
\begin{cases}
c_{l_2 l_1}P^{|l_1|}_{l_2}(\cos\theta)\sin
(|l_1|\varphi_2), &-l_2 \leq l_1\leq -1, \\
\frac{c_{l_2 l_1}}{\sqrt{2}}P_{l_2}^{| l_1|}(\cos\theta),& l_1=0,\\
c_{l_2 l_1} P^{| l_1 |
}_{l_2}(\cos\theta)\cos (|l_1 |\varphi_2),& 1 \leq l_1\leq l_2,
\end{cases}
\end{eqnarray*}
where $c_{l_2 l_1}=\sqrt{\frac{2l_2+1}{2\pi}\frac{(l_2-| l_1 |)!}{(l_2+| l_1|)!}}$ and $P^{ l_1}_{l_2}$ is the associated Legendre polynomial satisfying
\begin{eqnarray*} P_{l_2}^{l_1}(x) = \frac{(1-x^2)^{l_1/2}}{2^{l_2} l_2!} \frac{d^{l_2+l_1}}{dx^{l_2+ l_1}}
(x^2 -1)^{l_2}, \; x \in [-1,1].\label{eq:ass.legendre}\end{eqnarray*}
Previous imaging literature often used the complex-valued spherical harmonics \cite{bulow.2004,gerig.2001,gu.2004,shen.2006}. In practice, it suffices to use only real-valued spherical harmonics \cite{courant.1953}, which are more convenient in setting up real-valued models. The relationship between the real- and complex-valued spherical harmonics is given in \cite{blanco.1997,homeier.1996}.
The hyperspherical harmonics are orthonormal with respect to area element
$$d\mu(\boldsymbol{\varphi}) = \sin^{n-2} \theta \sin^{n-3} \varphi_2 \cdots \sin \varphi_{n-1} d \boldsymbol{\varphi}$$
such that
\begin{eqnarray*} \langle Y_{{\bf l}_1}, Y_{{\bf l}_2} \rangle &=& \int_{S^{n-1}}Y_{{\bf l}_1} (\boldsymbol{\varphi} ) Y_{{\bf l}_2} (\boldsymbol{\varphi} ) \;d\mu(\boldsymbol{\varphi}) = \delta_{{\bf l}_1 {\bf l}_2},\end{eqnarray*}
where $\delta_{{\bf l}_1 {\bf l}_2}$ is the Kronecker delta. Then using the hyperspherical harmonics, we can build
the multiscale representation of networks through the heat kernel expansion \cite{chung.2008.TMI,chung.2008.sinica}
\begin{eqnarray*} K_{t}(\boldsymbol{\varphi},\boldsymbol{\varphi}') = \sum_{\bf l} e^{-\lambda_{\bf l} t} Y_{\bf l}(\boldsymbol{\varphi} ) Y_{\bf l}(\boldsymbol{\varphi}'), \end{eqnarray*}
where the summation is over all possible valid integer values of ${\bf l}$.
Given initial data $g(t = 0, \boldsymbol{\varphi}) = f(\boldsymbol{\varphi})$ on $S^{n-1}$, the solution to diffusion equation
$$\frac{dg}{dt} = \Delta_{n-1} g$$
at time $t$ is given by
\begin{eqnarray*} g(t, \boldsymbol{\varphi}) &=& \int_{S^{n-1}} K_{t}(\boldsymbol{\varphi},\boldsymbol{\varphi}') f(\boldsymbol{\varphi}') \; d\mu(\boldsymbol{\varphi}')\\
&=& \sum_{{\bf l}} e^{-\lambda_{\bf l} t} f_{\bf l}Y_{\bf l}(\boldsymbol{\varphi})
\end{eqnarray*}
with spherical harmonic coefficients $f_{\bf l} = \langle f, Y_{\bf l} \rangle$. The coefficients are often estimated in a least squares fashion in the spherical harmonic representation (SPHARM) used in brain cortical shape modeling
\cite{chung.2008.sinica,gerig.2001,gu.2004,shen.2006}. The network embedding problem, however, does not require the least squares method. The embedded network nodes can be modeled as a sum of Dirac delta functions such that
$$f (\boldsymbol{\varphi}) = \frac{1}{p} \sum_{i=1}^p \delta(\boldsymbol{\varphi} - {\bf x}_i).$$
We normalize the expression such that
$\int_{S^{n-1}} f (\boldsymbol{\varphi}) d\mu(\boldsymbol{\varphi}) =1.$
Then we can algebraically show that the solution is given by
$$g(t, \boldsymbol{\varphi}) = \frac{1}{p} \sum_{{\bf l}} e^{-\lambda_{\bf l} t} Y_{\bf l} (\boldsymbol{\varphi} ) \sum_{i=1}^p Y_{\bf l}({\bf x}_i).$$
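On $S^2$, where $\lambda_l = l(l+1)$, the sum over $m$ can be collapsed with the spherical harmonic addition theorem, $\sum_m Y_{lm}(\boldsymbol{\varphi}) Y_{lm}({\bf x}_i) = \frac{2l+1}{4\pi} P_l(\boldsymbol{\varphi} \cdot {\bf x}_i)$, so evaluating the diffused node density only needs Legendre polynomials. A band-limited sketch; the band limit $L$, the node placement, and the diffusion times are arbitrary illustrative choices:

```python
import numpy as np
from scipy.special import eval_legendre

# Band-limited diffusion of point masses {x_i} on S^2 via the addition theorem:
# g(t, w) = (1/p) sum_i sum_{l=0}^{L} (2l+1)/(4 pi) e^{-l(l+1) t} P_l(w . x_i)
def diffuse(nodes, w, t, L=40):
    cosg = np.clip(nodes @ w, -1.0, 1.0)   # cosines of angles node -> query point
    g = 0.0
    for l in range(L + 1):
        g += (2 * l + 1) / (4 * np.pi) * np.exp(-l * (l + 1) * t) \
             * eval_legendre(l, cosg)
    return float(np.sum(g)) / len(nodes)

pole = np.array([0.0, 0.0, 1.0])
nodes = pole[None, :]                      # a single point mass at the pole

# Heat concentrates near the source and flattens to 1/(4 pi) as t grows.
near, far = diffuse(nodes, pole, t=0.1), diffuse(nodes, -pole, t=0.1)
flat = diffuse(nodes, -pole, t=100.0)
```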
\section{Spherical multidimensional scaling}
We have shown how to embed correlation matrices into $S^{n-1}$ and model them parametrically using spherical harmonics. In many large scale brain imaging studies, the number of subjects $n$ can be in the thousands. Embedding into such a high dimensional sphere may not be useful in practice. We propose to embed correlation matrices into the 2-sphere $S^2$, which is much easier to visualize and provides a parametric representation through available SPHARM tools \cite{chung.2008.TMI}.
Given metric $\theta_{ij} = \cos^{-1} ({\bf x}_i^{\top} {\bf x}_j)$, we want to find the lower dimensional embedding ${\bf y}_j =(y_{1j}, y_{2j}, y_{3j})^{\top} \in S^2$ satisfying
$$\| {\bf y}_j \|^2 = {\bf y}_j ^{\top} {\bf y}_j = \sum_{i=1}^3 y_{ij}^2 = 1.$$
This is a spherical multidimensional scaling (MDS) often encountered in analyzing spherical data such as earthquakes \cite{dzwinel.2005} and colors in computer vision \cite{maejima.2012}.
Consider $3 \times p$ data matrix ${\bf Y}= [{\bf y}_1, \cdots, {\bf y}_p]$. Then the spherical MDS proceeds as follows. Given data matrix ${\bf X}$ in (\ref{eq:SVD}), we find ${\bf Y}= [{\bf y}_1, \cdots, {\bf y}_p]$ that minimizes the loss
\begin{eqnarray} \mathcal{L}({\bf X}, {\bf Y}) = \sum_{i,j=1}^p \big[ \cos^{-1} ({\bf x}_i^{\top} {\bf x}_j) - \cos^{-1} ({\bf y}_i ^{\top} {\bf y}_j) \big]^2. \label{eq:sphericalMDS} \end{eqnarray}
The spherical MDS (\ref{eq:sphericalMDS}) is usually solved via the gradient descent on spherical coordinates \cite{maejima.2012}. However, the approximate version of (\ref{eq:sphericalMDS}) can be solved exactly. At the minimum of (\ref{eq:sphericalMDS}), the loss is expected to be small and we can approximate the expression using the Taylor expansion \cite{abramowitz.1988}
$$\cos^{-1} ({\bf x}_i^{\top} {\bf x}_j) = \frac{\pi}{2} - {\bf x}_i^{\top} {\bf x}_j + \cdots.$$
Ignoring any higher order terms, we minimize the linearized loss
\begin{eqnarray} \mathcal{L}({\bf X}, {\bf Y}) = \sum_{i,j=1}^p \big[ {\bf x}_i^{\top} {\bf x}_j - {\bf y}_i ^{\top} {\bf y}_j \big]^2. \label{eq:LXY} \end{eqnarray}
The loss (\ref{eq:LXY}) can be written in the matrix form
\begin{eqnarray} \mathcal{L}({\bf X}, {\bf Y}) = \mbox{tr} ( {\bf X}^{\top}{\bf X} - {\bf Y}^{\top}{\bf Y})^2 = \| {\bf X}^{\top}{\bf X} - {\bf Y}^{\top}{\bf Y} \|_F^2 \label{eq:LRAP},
\end{eqnarray}
with the Frobenius norm $\| \cdot \|_F$. The minimization of loss (\ref{eq:LRAP}) is a {\em low-rank approximation problem}, which can be solved exactly through the Eckart--Young--Mirsky theorem \cite{golub.1987}:
$$ \arg \min_{ {\bf Y}^{\top}{\bf Y} } \mathcal{L}({\bf X}, {\bf Y}) = U^{\top} D(\eta_1, \eta_2, \eta_3, 0, \cdots, 0) U,$$
where $U$ is the orthogonal matrix obtained from the SVD of ${\bf X}^{\top}{\bf X}$ in (\ref{eq:SVD}). The proof is in \cite{dax.2014}. The theorem states that in order to match a given symmetric matrix in the Frobenius norm with another symmetric matrix of lower rank, we need to align with the principal directions of the given matrix.
Let $[{\bf v}_1, \cdots, {\bf v}_p]$ be the $3 \times p$ matrix consisting of the first three rows of $D(\sqrt{\eta_1}, \sqrt{\eta_2}, \sqrt{\eta_3}, 0, \cdots, 0) U$; all the rows below are zero. Each column satisfies $\frac{{\bf v}_j}{ \| {\bf v}_j \|} \in S^2$. Further,
$$ ({\bf v}_i^{\top} {\bf v}_j) = U^{\top} D(\eta_1, \eta_2, \eta_3, 0, \cdots, 0) U.$$
Thus ${\bf Y} = \Big[ \frac{{\bf v}_1}{ \| {\bf v}_1 \|}, \cdots, \frac{{\bf v}_p}{ \| {\bf v}_p \|} \Big]$ solves (\ref{eq:LRAP}) and we claim
\begin{theorem}
$$ \arg \min_{{\bf Y} \in S^2} \mathcal{L}({\bf X}, {\bf Y}) = \Big[ \frac{{\bf v}_1}{ \| {\bf v}_1 \|}, \cdots, \frac{{\bf v}_p}{ \| {\bf v}_p \|} \Big] .$$
\end{theorem}
The embedding is not unique: for any rotation matrix $Q$, $Q{\bf Y}$ is another embedding.
In brain imaging applications, where we need to embed multiple brain networks, the rotation matrix $Q$ should not be estimated from each individual subject but from a fixed average template.
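For concreteness, the closed-form solution above can be sketched in a few lines of code. The following is a minimal illustration (the function name is ours, and we use a symmetric eigendecomposition of ${\bf X}^{\top}{\bf X}$ in place of its SVD; for a symmetric positive semidefinite matrix the two coincide up to signs):

```python
import numpy as np

def spherical_mds_linearized(G, k=3):
    """Best rank-k positive semidefinite approximation of a symmetric
    matrix G (the Eckart-Young-Mirsky step), with the columns of the
    resulting factor normalized onto the unit sphere S^{k-1}."""
    eigval, eigvec = np.linalg.eigh(G)            # symmetric eigendecomposition
    idx = np.argsort(eigval)[::-1][:k]            # top-k eigenvalues
    lam = np.clip(eigval[idx], 0.0, None)         # discard any negative part
    V = (eigvec[:, idx] * np.sqrt(lam)).T         # k x p factor with V^T V ~ G
    return V / np.linalg.norm(V, axis=0, keepdims=True)
```

For a correlation matrix $G$ the diagonal entries equal 1, so the column norms of the factor are already close to 1 and the final normalization is only a small correction.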
\begin{figure}[t]
\begin{center}
\includegraphics[width=1\linewidth]{embedding-sphere.pdf}
\caption{Left: embedding of a brain network with 5000 nodes onto the sphere $S^2$. Each scatter point can be modeled as a Dirac delta function. Right: embedding into hyperbolic space, where the embedded points form a torus-like circular pattern.}
\label{fig:embedding-sphere}
\end{center}
\end{figure}
\section{Experiment}
We applied the proposed spherical embedding to a functional brain network obtained through the Human Connectome Project (HCP) \cite{smith.2013,vanessen.2012}. The resting-state functional magnetic resonance image (rs-fMRI) of a healthy subject was collected with a 3T scanner at 2.0 mm voxel resolution ($91 \times 109 \times 91$ spatial dimensionality), with 1200 frames acquired at a rate of one frame every 720 ms. The scan went through spatial and temporal preprocessing, including spatial distortion and motion correction, frequency filtering, and artifact removal, as part of the HCP preprocessing pipeline \cite{smith.2013}. The fMRI data were then denoised using the Fourier series expansion with cosine basis \cite{gritsenko.2020}. A correlation matrix of size 5000 $\times$ 5000 was obtained by computing the Pearson correlation of the expansion coefficients across 5000 uniformly sampled brain regions (Figure \ref{fig:persist-cycle}). Following the proposed method, we embedded the brain network into 5000 scatter points on $S^2$ (Figure \ref{fig:embedding-sphere}). The method appears to embed brain networks uniformly across $S^2$. The Shepard diagram displaying the embedded distance $\cos^{-1} ({\bf y}_i^{\top} {\bf y}_j)$ against the original distance $\cos^{-1} ({\bf x}_i^{\top} {\bf x}_j)$ is given in Figure \ref{fig:dist-dist}. The correlation between the distances is 0.51, indicating reasonable embedding performance.
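The Shepard-diagram correlation used here can be computed directly from the coordinate matrices; a minimal sketch (the function name is ours), assuming the original and embedded points are stored as unit-norm columns:

```python
import numpy as np

def shepard_correlation(X, Y):
    """Pearson correlation between the original geodesic distances
    arccos(x_i^T x_j) and the embedded ones arccos(y_i^T y_j)."""
    def geodesic(U):
        G = np.clip(U.T @ U, -1.0, 1.0)   # guard against rounding outside [-1, 1]
        return np.arccos(G)[np.triu_indices(U.shape[1], k=1)]
    return np.corrcoef(geodesic(X), geodesic(Y))[0, 1]
```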
\begin{figure}[t]
\begin{center}
\includegraphics[width=1\linewidth]{dist-dist.pdf}
\caption{The Shepard diagrams of all the pairwise distances (gray points) on 5000 nodes with spherical MDS (left) and hyperbolic MDS (right). Red points are the first 500 pairwise distances. The geodesic distance $\cos^{-1} ({\bf x}_i^{\top} {\bf x}_j)$ is used for the Pearson correlations. The correlations between distances are 0.51 for spherical MDS and 0.0018 for hyperbolic MDS, demonstrating the far better embedding performance of spherical MDS.}
\label{fig:dist-dist}
\end{center}
\end{figure}
\section{Discussion}
The embedding of brain networks onto a sphere makes various computations on brain networks straightforward.
Instead of estimating the network gradient discretely in a coarse fashion, it is possible to estimate
the network gradient more smoothly on a sphere using spherical coordinates \cite{elad.2005}. We are further able to obtain various differential quantities of brain networks, such as gradients and curls, often used in the Hodge decomposition \cite{anand.2021.arXiv}. This is left as a future study.
The major body of literature on network embedding is oriented toward hyperbolic spaces, where the 2D Poincar\'e disk $D$ is often used \cite{munzner.1998,shi.2019,wilson.2014}. Figure \ref{fig:embedding-sphere} shows the embedding of the brain network into $D$ using hyperbolic MDS \cite{zhou.2021}. It is usually characterized by a torus-like circular pattern. Unlike the spherical embedding, the hyperbolic embedding does not scatter points uniformly.
The Shepard diagram displaying the geodesic distance in the Poincar\'e disk against $\cos^{-1} ({\bf x}_i^{\top} {\bf x}_j)$ is given in Figure \ref{fig:dist-dist}. The correlation between distances is 0.0018, indicating poor embedding performance compared to the spherical MDS. For correlation-based brain networks, spherical MDS may be a better alternative to hyperbolic MDS. However, further investigation is needed to determine the optimal embedding method for brain networks.
\bibliographystyle{plain}
\section{Introduction}
\label{sec:intro}
The bouncing Universe is an interesting alternative to inflation; for
reviews and general
discussion see, e.g.,
Refs.~\cite{Novello:2008ra,Lehners:2008vx,Lehners:2011kr,Battefeld:2014uga,Brandenberger:2016vhg,Ijjas:2018qbo}.
Its realization within classical field theory requires
the violation of the null convergence condition (null energy condition in General
Relativity). One set of models where this property may occur without
obvious pathologies is the class of Horndeski theories~\cite{Horndeski:1974wa};
explicit examples of healthy bouncing stages are numerous in these
theories~\cite{Qiu:2011cy,Easson:2011zy,Cai:2012va,Osipov:2013ssa,Qiu:2013eoa,Koehn:2013upa,Qiu:2015nha,Ijjas:2016tpn},
for reviews see, e.g.,
Refs.~\cite{Kobayashi:2019hrl,Mironov:2019haz}. Within Horndeski
theories, extending bouncing cosmology to the whole time
interval
$-\infty<t<+\infty$ is not straightforward, however. Namely, perturbations about
bouncing spatially flat FLRW
backgrounds $ds^2 = -dt^2 + a^2(t)\delta_{ij} dx^i dx^j$ have either
gradient or ghost instabilities, or both, provided that the following two integrals diverge:
\begin{subequations}
\begin{align}
\label{jan21-22-1}
\int_{-\infty}^{t} a(t) (\mathcal{ F}_T +\mathcal{ F}_S) dt &= \infty \; ,
\\
\int_t^{+\infty} a(t) (\mathcal{ F}_T +\mathcal{ F}_S) dt &=
\infty \; ,
\end{align}
\end{subequations}
where
$\mathcal{ F}_T$ and $\mathcal{ F}_S$ are time-dependent coefficients in the
quadratic
actions for tensor (transverse traceless $h_{ij}$) and scalar ($\zeta$)
perturbations,
\begin{subequations}
\label{jan21-22-2}
\begin{align}
\label{jan21-22-2a}
\mathcal{ S}_{hh} & =\int dt d^3x \frac{ a^3}{8}\left[
\mathcal{ G}_T
\dot h_{ij}^2
-\frac{\mathcal{ F}_T}{a^2}
h_{ij,k} h_{ij,k} \right] \; ,
\\
\mathcal{ S}_{\zeta\zeta} &=\int dt d^3x a^3\left[
\mathcal{ G}_S
\dot\zeta^2
-\frac{\mathcal{ F}_S}{a^2}
\zeta_{,i}\zeta_{,i}
\right] \; .
\end{align}
\end{subequations}
This property, known as the no-go
theorem~\cite{Libanov:2016kfc,Kobayashi:2016xpl,Kolevatov:2016ppi,Akama:2017jsa},
rules out the simplest bouncing Horndeski setups where $a(t)$ tends to infinity
while $\mathcal{ F}_T$ and $\mathcal{ F}_S$ stay positive and
finite
as $t \to \pm \infty$.
One
way~\cite{Cai:2017dyi,Kolevatov:2017voe,Ye:2019sth,Mironov:2019qjt,Ilyas:2020qja,Zhu:2021whu}
of getting around this
theorem
is to employ beyond Horndeski theories~\cite{Zumalacarregui:2013pma,Gleyzes:2014dya} or their
Degenerate Higher-Order Scalar-Tensor (DHOST) generalizations~\cite{Langlois:2015cwa}. These, however, have their
own problems, since adding conventional matter fields often results in
superluminality~\cite{Mironov:2020pqh}. Without employing these generalizations, i.e.,
staying within the Horndeski
class, there is still a possibility to allow the coefficients
$\mathcal{ G}_T$, $\mathcal{ F}_T$, $\mathcal{ G}_S$ and $\mathcal{ F}_S$ to
tend to zero as $t\to -\infty$ in such a way that the integral in the left hand side of
\eqref{jan21-22-1} converges in the lower limit~\cite{Kobayashi:2016xpl}. At first sight
this is dangerous from the viewpoint of strong coupling: the coefficients
$\mathcal{ G}_T$, $\mathcal{ F}_T$, $\mathcal{ G}_S$ and $\mathcal{ F}_S$ are
analogs of the Planck mass squared, so their early-time behavior
$\mathcal{ G}_T, \mathcal{ F}_T, \mathcal{ G}_S, \mathcal{ F}_S \to 0$ as
$t \to -\infty$
implies
that the gravitational interaction is strong in remote past; note that we always work in the
Jordan frame and that $a(t)$ grows backwards in time at early times. Nevertheless,
depending on the model, the cosmological evolution may, in fact, be legitimately described
within classical field theory at all times, since even at early times
the classical energy scale
(which is $|t|^{-1}$ for power-law bounce) may be lower than the quantum strong coupling
scale~\cite{Ageeva:2018lko,Ageeva:2020gti,Ageeva:2020buc,Ageeva:2021yik}.
It is worth emphasizing that this idea of healthy bounce with
``strong gravity in the past''
(meaning that $\mathcal{ G}_T, \mathcal{ F}_T, \mathcal{ G}_S, \mathcal{ F}_S \to 0$ as
$t \to -\infty$) has been re-invented, albeit not quite explicitly,
in Refs.~\cite{Nandi:2020sif,Nandi:2020szp} from
a different
perspective: bounce in the Jordan frame is obtained
there
via conformal
transformation from an inflationary setup in the Einstein frame.
Unlike in
Refs.~\cite{Nandi:2020sif,Nandi:2020szp}, we
work directly in the Jordan frame.
It is relatively straightforward to construct Horndeski models which admit bouncing
solutions with power-law asymptotics at early times~\cite{Kobayashi:2016xpl,Ageeva:2021yik},
\begin{equation}
a(t) \propto (-t)^\chi \;, \;\;\;\;\;
\mathcal{ G}_T, \mathcal{ F}_T, \mathcal{ G}_S, \mathcal{ F}_S \propto \frac{1}{(-t)^{2\mu}}
\;, \;\;\; t \to -\infty \; ,
\label{jan24-22-1}
\end{equation}
with time-independent parameters $1> \chi> 0$,
$2\mu > \chi +1$.
The latter property guarantees that the divergence condition \eqref{jan21-22-1}
does not hold, which is a prerequisite for a healthy bounce.
In this paper we concentrate on this
simple case and consider the generation of cosmological perturbations at the early
contraction stage.
In terms of conformal time $\eta \propto - (-t)^{1-\chi}$, the quadratic action
\eqref{jan21-22-2a} for tensor perturbations coincides with that
in General Relativity
with the background scale factor
\begin{equation}
a_{E} (\eta) = a(\eta) \mathcal{ G}_T^{1/2}(\eta)
\propto \frac{1}{(-\eta)^{\frac{\mu - \chi}{1-\chi}}} \; .
\label{jan24-22-11}
\end{equation}
In fact, this is precisely
the scale factor in the Einstein frame in our class of models.
Now, for $\mu <1$ the
Einstein frame cosmic time $t_{E} =\int~a_{E}(\eta) d\eta
= - (-\eta)^{\frac{1-\mu}{1-\chi}}$ runs
from $t_{E} = - \infty$, and the effective scale factor increases as
$a_{E} = (-t_{E})^{-b}$ where $b = \frac{\mu - \chi}{1-\mu} > 1$
~\cite{Ageeva:2020gti}. Such a geometry is singular as
$t_{E} \to -\infty$: it is past geodesically incomplete
and cannot be completed\footnote{This property is not pathological in
our case,
since by assumption particles with time-independent mass feel
the Jordan frame geometry rather than the Einstein frame one.}.
On the other hand, for
$\mu >1$ one immediately recognizes effective power-law inflation with
\begin{equation}
a_{E}(t) \propto t_{E}^{\frac{\mu - \chi}{\mu -1}} \; ,
\label{jun23-22-1}
\end{equation}
where
$t_{E} = (-\eta)^{- \frac{\mu -1}{1-\chi}}$ runs from $t_{E} =0$.
In the Einstein frame, this is a version of
$G$-inflation~\cite{Kobayashi:2010cm}. In either case,
for $\mu \approx 1$ the Einstein frame expansion is nearly exponential, so
the power spectrum of generated tensor perturbations
is nearly flat;
similar observation applies to scalar perturbations as well.
This implies that Horndeski bounce with ``strong gravity in the past'' may be capable
of generating realistic cosmological perturbations; we again emphasize the
similarity with Refs.~\cite{Nandi:2020sif,Nandi:2020szp,Kobayashi:2010cm}
(where approximate flatness
of the spectra is built in by construction).
In this paper we consider the models from the class of
Refs.~\cite{Kobayashi:2016xpl,Ageeva:2021yik}, as described
below.
The issues we would like to understand are: (i) what governs
spectral tilts and the overall amplitudes of scalar and tensor perturbations;
(ii) is it possible to obtain small tensor-to-scalar
ratio $ r = \mathcal{P}_{h}/\mathcal{P}_{\zeta}$ and, if so,
what sort of tuning is required for that; (iii) is it possible to generate
perturbations in a controllable way, i.e., in the regime where the background evolution
and perturbations are legitimately described within classical field theory and
weakly coupled quantum theory, respectively ---
and if so, does this constrain
the values of observables.
We emphasize that we design and study our models
in the Jordan frame where they have fairly simple structure. We could
equivalently work in the Einstein frame, but then the analysis would
be more cumbersome. We comment on the Einstein frame counterparts
of our findings where appropriate.
An alert reader would anticipate that the spectral tilts
(both scalar and tensor) are to a large extent determined by the
value of $\mu$ in \eqref{jan24-22-1}; in particular, red tilts occur at
$\mu > 1$. This is indeed the case, see Sec.~\ref{subsec:perturbations}.
In this paper we mostly stick to the $\Lambda$CDM value of the scalar
spectral index~\cite{Planck:2018vyg},
\begin{equation}
n_S = 0.9649 \pm 0.0042 \; .
\label{jul17-22-1}
\end{equation}
We comment, however, that the possible presence of early dark
energy makes the scale invariant
Harrison--Zeldovich spectrum with $n_S = 1$ consistent with
observations~\cite{Ye:2021nej,Jiang:2022uyg}. Hence, we also briefly
consider a bounce model with the flat power spectrum.
This paper is organized as follows. We introduce our class of Horndeski
models and discuss early contracting stage of bouncing universe
in Secs.~\ref{sec:preliminaries}, \ref{sec:solution-powerlaw}.
We derive the properties of linearized scalar and tensor perturbations
generated at that stage in Sec.~\ref{subsec:perturbations}, while in
Sec.~\ref{sec:dilatation} we point out the relation between
the flatness of the spectra and approximate dilatation invariance
of the models. Sections \ref{sec:srong-preliminaries} and \ref{sec:u-bound-gen}
are central in our
relatively
general discussion in Sec.~\ref{preliminary_bounce}:
we observe that there is
tension between the small
value of $r$
(and to lesser extent the red scalar spectral tilt),
on the one hand, and the requirement of the absence of
strong coupling, on the other. We consider this issue at qualitative level
in Sec.~\ref{sec:srong-preliminaries} and proceed to quantitative analysis
in Sec.~\ref{sec:u-bound-gen}. We illustrate this tension
in Sec.~\ref{sec:explicit-bounce}, where we
derive
lower bounds on $r$ in very concrete models,
first
with the $\Lambda$CDM scalar tilt \eqref{jul17-22-1},
and then with $n_S=1$.
We conclude in Sec.~\ref{sec:conclude}.
Appendices A, B, and C contain details of more technical character.
\section{ Horndeski models with power-law contraction}
\label{preliminary_bounce}
\subsection{Preliminaries}
\label{sec:preliminaries}
We
consider
a subclass of Horndeski theories whose action has the form (in the Jordan frame)
\begin{align}
\mathcal{S}=&\int d^4x \sqrt{-g}
\left\{ G_2(\phi, X)-G_3(\phi, X)\Box \phi
+ G_4(\phi,X)R + G_{4X}\big[(\Box \phi)^2 - (\nabla_{\mu}\nabla_{\nu}\phi)^2\big]\right\}\;,
\label{Hor_L}\\
X =& -\frac{1}{2}g^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi\;,
\nonumber
\end{align}
where
$\Box \phi = g^{\mu\nu} \nabla_\mu \nabla_\nu \phi$
and $(\nabla_{\mu}\nabla_{\nu}\phi)^2 = \nabla_{\mu}\nabla_{\nu}\phi \, \nabla^{\mu}\nabla^{\nu}\phi$,
$R$ is the Ricci scalar. The metric
signature is $(-,+,+,+)$. Unlike
the general Horndeski theory,
the Lagrangian \eqref{Hor_L} involves three arbitrary functions
$G_{2,3,4}$ rather than four.
It is convenient to work in the ADM formalism~\cite{Kobayashi:2019hrl}.
To this end, the metric is written as
\begin{equation*}
ds^2=-N^2 d\hat{t}^2 +
\gamma_{ij}\left( dx^i+N^i d\hat{t}\right)\left(dx^j+N^j d\hat{t}\right) \;,
\end{equation*}
where $\gamma_{ij}$ is the three-dimensional metric,
$N$ is the lapse function, and $N_i=\gamma_{ij}N^j$
is the shift vector.
We denote the general time variable by $\hat{t}$ and reserve the
notation $t$ for cosmic time.
By choosing the unitary gauge (in which $\phi$ depends on $\hat{t}$
only and has prescribed form $\phi=\phi(\hat{t})$), one rewrites the
action as follows,
\begin{align}
\label{adm_lagr}
\mathcal{S} = \int d^4x \sqrt{-g} \left[ A_2 (\hat{t}, N) + A_3 (\hat{t}, N) K
+ A_4 (\hat{t}, N)
(K^2 - K_{ij}^2) + B_4 (\hat{t}, N) R^{(3)} \right]\; \text{,}
\end{align}
where
\begin{equation*}
A_4(\hat{t},N) = - B_4(\hat{t},N) - N\frac{\partial B_4(\hat{t},N)}{\partial N}\;,
\end{equation*}
$^{(3)} R_{ij}$ is the Ricci tensor made of $\gamma_{ij}$,
$\sqrt{-g} = N\sqrt{\gamma}$,
$K= \gamma^{ij}K_{ij}$, $^{(3)} R = \gamma^{ij} \phantom{0}^{(3)} R_{ij}$ and
\begin{align*}
K_{ij} &\equiv\frac{1}{2N}\left(\frac{d\gamma_{ij}}{d\hat{t}}
-\,^{(3)}\nabla_{i}N_{j}-\;^{(3)}\nabla_{j}N_{i}\right) ,
\end{align*}
is the extrinsic curvature of the hypersurfaces $\hat{t}=\mbox{const}$. The relationship between the
Lagrangian functions in the covariant and ADM formalisms is given
by~\cite{Gleyzes:2014dya, Gleyzes:2013ooa, Fasiello:2014aqa}
\begin{equation}
G_2 = A_2 - 2XF_{\phi} \text{,} \; \;\;\;\;
G_3 = - 2XF_X - F \text{,} \; \;\;\;
G_4 = B_4 \;\text{,}
\label{jan23-22-1}
\end{equation}
where $N$ and $X$ are related by
\begin{equation}
N^{-1} d\phi/d\hat{t} = \sqrt{2X}\;,
\label{jan25-22-30}
\end{equation}
and
\begin{equation}
\label{F}
F_X = - \frac{A_3}{\left(2X\right)^{3/2}} - \frac{B_{4\phi}}{X}\; \text{.}
\end{equation}
We note in passing a subtlety here. Equation \eqref{F} defines
$F(\phi, X)$ up to additive term $D(\phi)$. This term modifies the
Lagrangian functions,
\begin{equation}
G_2 \to G_2 - 2X D_\phi\;, \;\;\;\;\; G_3 \to G_3 - D \;.\nonumber
\end{equation}
However, the additional contribution to the action \eqref{Hor_L} vanishes
upon integration by parts,
\begin{equation}
\int d^4 x \sqrt{-g} (-2X D_\phi + D \Box \phi) =
\int d^4 x \sqrt{-g} \nabla_\mu(D \nabla^\mu \phi) =0\; .
\label{jun9-22-1}
\end{equation}
Therefore,
this freedom is, in fact, irrelevant.
To describe FLRW background and perturbations about it, one writes
\begin{subequations}
\label{jul17-22-2}
\begin{align}
N &=N_0(\hat{t}) (1+\alpha)\;, \\%\nonumber\\
N_{i} &=\partial_{i}\beta + N^T_i\;,
\\%\nonumber\\
\gamma_{ij} &=a^{2}(\hat{t}) \Big(\text{e}^{2\zeta}(\text{e}^{h})_{ij} + \partial_i\partial_j Y
+ \partial_i W^T_j + \partial_j W^T_i\Big) \; ,
\end{align}
\end{subequations}
where
$a(\hat{t})$ and $N_0(\hat{t})$ are background solutions,
$\partial_i N^{Ti}=0$ and
\begin{align*}
(\text{e}^h)_{ij} &=\delta_{ij}+h_{ij}+\frac{1}{2}h_{ik}h_{kj}+\frac{1}{6}h_{ik}h_{kl}h_{lj}+
\cdots\;, \quad h_{ii}=0\;, \quad
\partial_{i}h_{ij}=0 \; .
\end{align*}
Throughout this paper, we denote the background lapse
function by $N$ instead of $N_0$.
The residual gauge freedom is fixed
by setting $Y = 0$ and $W^T_i = 0$.
Variables
$\alpha$, $\beta$ and $N^T_i$ enter the action without temporal
derivatives;
the dynamical degrees of freedom are $\zeta$
and transverse traceless $h_{ij}$, i.e., scalar and tensor perturbations.
Upon integrating out variables $\alpha$ and $\beta$, one obtains
the quadratic actions for scalar and tensor perturbations~\cite{Kobayashi:2015gga}
\begin{subequations}
\label{jan23-22-5}
\begin{align}
\label{second_scalar}
\mathcal{S}_{\zeta \zeta}^{(2)}&=\int d\hat{t} d^{3} x N a^3
\left[
\frac{\mathcal{G}_S}{N^2}
\left(\frac{\partial{\zeta}}{\partial \hat{t}}\right)^{2} -
\frac{\mathcal{F}_S}{a^2} \left (\vec{\nabla} \zeta\right)^{2}\right] \; ,
\\
\mathcal{ S}_{hh}^{(2)}&=\int d\hat{t} d^3x \frac{N a^3}{8}\left[
\frac{\mathcal{G}_T}{N^2}
\left (\frac{\partial h_{ij}}{\partial\hat{t}}\right)^2
-\frac{\mathcal{ F}_T}{a^2}
h_{ij,k} h_{ij,k}
\right] \; .
\label{second_tensor}
\end{align}
\end{subequations}
Explicit expressions for the coefficients $\mathcal{G}_S$, $\mathcal{F}_S$, $\mathcal{G}_T$, and
$\mathcal{F}_T$
in general models of the type \eqref{adm_lagr}
as well as equations for background are collected in Appendix A.
\subsection{Power-law contraction}
\label{sec:solution-powerlaw}
To build a bouncing model with the early-time behavior \eqref{jan24-22-1}, we choose
the following form ~\cite{Ageeva:2021yik} of the Lagrangian functions in \eqref{adm_lagr}
at early times, $t \to -\infty$:
\begin{subequations}
\label{A_old}
\begin{align}
&A_2(\hat{t},N) = \hat{g} (-\hat{t})^{-2\mu -2} \cdot a_2 (N) \;\text{,} \\
&A_3 (\hat{t},N)= \hat{g} (-\hat{t})^{-2\mu -1} \cdot a_3 (N)\; \text{,}
\\
&A_4 = A_4 (\hat{t})= -B_4(\hat{t}) = - \frac{\hat{g}}{2} (-\hat{t})^{-2\mu}\; ,
\label{A4old}
\end{align}
\end{subequations}
where $\hat{g}$ is some positive
constant. Then the background equations, eqs.~\eqref{eoms}, take the
following form
\begin{subequations}
\begin{align*}
&\frac{(Na_2)_N}{(-\hat{t})^{2}} + \frac{3 N a_{3N} H}{(-\hat{t})} + 3 H^2 =0\;,\\
&\frac{a_2}{(- \hat{t})^{2}} + 3 H^2 -
\frac{1}{N}\left[\frac{(2\mu +1) a_3}{(- \hat{t})^2} -
\frac{4 \mu H}{(-\hat{t})} - 2 \frac{dH}{d\hat{t}} +
\frac{a_{3N}}{(-\hat{t})} \frac{d N}{d\hat{t}} \right] =0 \; ,
\end{align*}
\end{subequations}
where $H$ is the physical Hubble parameter.
We make use of the {\it Ansatz}
\begin{equation}
N=\mbox{const} \; , \;\;\;\;\; a= d (-t)^\chi \; ,
\label{jan31-22-1}
\end{equation}
where $\chi > 0$ is a constant and $t= N\hat{t}$ is cosmic time, so that $H=\chi/t$,
and find the algebraic equations for $N$ and $\chi$:
\begin{subequations}
\label{jan24-22-10}
\begin{align}
(Na_2)_N - 3\chi a_{3 N} +3 \frac{\chi^2}{N^2} &= 0\;,
\label{aug12-21-1}
\\
a_2 - \frac{1}{N} (2\mu+1) \Big(a_3 + 2 \frac{\chi}{N} \Big) + 3 \frac{\chi^2}{N^2} &=0\;.
\label{aug12-21-2}
\end{align}
\end{subequations}
In what follows we assume that these equations have a solution with $N>0$ and $1>\chi>0$
(the reason for requiring that $\chi<1$ will become clear shortly, see also
eq.~\eqref{jan24-22-11}).
Let us emphasize that the form of the Lagrangian functions
\eqref{A_old}, as well as the entire discussion in this paper,
refer to the early contraction stage only. To obtain the bounce itself,
as well as subsequent expansion stage, one has to make use of
more sophisticated Lagrangian functions that can be obtained, e.g., by the
gluing procedure elaborated in Ref.~\cite{Ageeva:2021yik}. A lesson
from Ref.~\cite{Ageeva:2021yik} is that designing stable bouncing models with
given early-time asymptotics with ``strong gravity in the past''
is relatively straightforward.
In this
paper we do not
aim at constructing complete cosmological models and stick to early times
when the cosmological perturbations are supposed to be generated.
The coefficients entering the quadratic actions
\eqref{jan23-22-5} for perturbations
are straightforwardly calculated by making use of the general
expressions \eqref{eq:Ft_Gt_form}, \eqref{eq:Fs_Gs_form}.
In what follows it is convenient to work in cosmic time $t= N\hat{t}$ and
write the quadratic actions in convenient forms (hereafter dot denotes
the derivative with respect to cosmic time $t$)
\begin{subequations}
\label{jan21-24-2}
\begin{align}
\label{jan24-22-2a}
\mathcal{ S}_{hh} & =\int dt d^3x \frac{ a^3}{8}\left[
\mathcal{ G}_T
\dot h_{ij}^2
-\frac{\mathcal{ F}_T}{a^2}
h_{ij,k} h_{ij,k} \right] \; ,
\\
\mathcal{ S}_{\zeta\zeta} &=\int dt d^3x a^3\left[
\mathcal{ G}_S
\dot\zeta^2
-\frac{\mathcal{ F}_S}{a^2}
\zeta_{,i}\zeta_{,i}
\right] \; .
\label{feb1-22-3}
\end{align}
\end{subequations}
Then
\begin{equation}
\mathcal{G}_T= \mathcal{F}_T = \frac{g}{(-t)^{2\mu}}\;,
\label{jan31-22-2}
\end{equation}
where
\begin{equation}
g = N^{2\mu} \hat{g}\;,
\label{jul7-22-1}
\end{equation}
and
\begin{equation}
\mathcal{G}_S= g \frac{g_S}{2(-t)^{2\mu}}
\; , \;\;\;\;\; \mathcal{F}_S = g \frac{f_S}{2(-t)^{2\mu}}\;,
\label{jan31-22-3}
\end{equation}
with
\begin{subequations}
\label{feb5-22-1}
\begin{align}
f_S &= \frac{2(2-4 \mu + N^2 a_{3N})}{2\chi - N^2 a_{3N}}\;,
\label{jan25-22-21a}
\\
g_S &= 2 \left[\frac{2 \Big(2 N^3 a_{2N}+ N^4 a_{2NN} -
3 \chi (2 \chi + N^3 a_{3NN})\Big)}{(N^2
a_{3N}-2\chi)^2} + 3\right]\;,
\label{jan25-22-21b}
\end{align}
\end{subequations}
where $a_{2,3}(N)$ and their derivatives are to be evaluated on the solution to
eqs.~\eqref{jan24-22-10}, so that $g$, $f_S$ and $g_S$
are independent of time.
Note that the propagation of the tensor perturbations is luminal,
\begin{equation}
u_T^2 = \frac{\mathcal{F}_T}{\mathcal{G}_T} = 1 \; ,\nonumber
\end{equation}
whereas the sound speed in the scalar sector is given by
\begin{equation}
u_S^2 = \frac{\mathcal{F}_S}{\mathcal{G}_S} = \frac{f_S}{g_S} \; ,\nonumber
\end{equation}
and can be substantially smaller than 1.
To end this Section, we notice that the conformal
transformation
\begin{equation}
g_{\mu \nu} = \frac{M_P^2}{2B_4 } g_{\mu \nu \, (E)}\;, \nonumber
\end{equation}
where $M_P = (8\pi G)^{-1/2}$ is the reduced Planck mass, converts the
theory into the Einstein frame. In the Einstein frame, the action
is cubic Horndeski, and the scale factor is given by
eq.~\eqref{jan24-22-11}. This justifies the discussion in
Sec.~\ref{sec:intro} after eq.~\eqref{jan24-22-11}.
\subsection{Generating perturbations}
\label{subsec:perturbations}
As pointed out in the Introduction, the quadratic actions for perturbations coincide
(modulo the fact that $u_S^2 \neq 1$) with the action for tensor perturbations in
a power-law expansion setup. So, cosmological perturbations with a nearly flat
power spectrum may be generated at the early contraction epoch. Let us consider for definiteness
the scalar perturbation $\zeta$. It obeys the linearized equation
\begin{equation}
\ddot{\zeta}+\frac{2 \mu-3 \chi}{|t|} \dot{\zeta}+\frac{u_{S}^{2} k^{2}}{d^{2} |t|^{2 \chi}} \zeta=0\;.
\label{jan31-22-5}
\end{equation}
For $0<\chi<1$, the mode is effectively
subhorizon at early times (in the sense that
the effective Hubble time scale $|t|$ is greater
than
the period of oscillations
$d \cdot |t|^\chi/ (u_S k)$),
so it is adequately described within the WKB
approximation. At later times, the mode is superhorizon;
in what follows we consider the case
\begin{equation}
2\mu - 3 \chi >0 \; ,
\label{jul5-22-100}
\end{equation}
in which
the superhorizon mode
experiences the Hubble friction and freezes out. The horizon exit occurs at
time $t_f (k)$ when
\begin{equation}
\frac{2\mu - 3\chi}{|t_f|} \sim \frac{u_{S} k}{d |t_f|^{ \chi}}\;,\nonumber
\end{equation}
i.e.,
\begin{equation}
\label{t_f}
|t_f|(k) \sim \left[\frac{d}{k}\cdot \frac{(2\mu-3\chi)}{u_S}\right]^{\frac{1}{1-\chi}} \; .
\end{equation}
Thus, once the parameters of the theory are chosen in such a way that
$1> \chi >0$, $2\mu - 3 \chi >0$,
the perturbations are
generated in a straightforward way,
{\it provided that the weak coupling regime occurs
all the way down to
$|t| \sim |t_f|$}.
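For orientation, the freeze-out time \eqref{t_f} can be evaluated numerically, up to the $O(1)$ factor implied by the $\sim$ sign (a sketch with purely illustrative parameter values):

```python
def freezeout_time(k, d, mu, chi, u_S):
    """Horizon-exit time |t_f|(k) from eq. (t_f); longer-wavelength
    (smaller k) modes freeze out earlier, i.e. at larger |t_f|."""
    return ((d / k) * (2.0 * mu - 3.0 * chi) / u_S) ** (1.0 / (1.0 - chi))
```

For instance, with $\chi = 1/2$ one has $|t_f| \propto k^{-2}$, so halving $k$ quadruples the freeze-out time.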
We outline the calculation in Appendix B, and here we
give the results for the power spectra:
\begin{equation}
\mathcal{P}_{\zeta} \equiv \mathcal{A}_{\zeta}\left(\frac{k}{k_*}\right)^{n_S-1}\;,
\;\;\;\;\; \mathcal{P}_{T} \equiv \mathcal{A}_{T}\left(\frac{k}{k_*}\right)^{n_T} \; ,
\label{general_ampl}
\end{equation}
where $k_*$ is the pivot scale, the spectral tilts are
\begin{equation}
n_S - 1 = n_T= 2\cdot \left(\frac{1-\mu}{1-\chi}\right)
\label{general_n_s}\;,
\end{equation}
and the amplitudes are given by
\begin{subequations}
\label{jan25-22-20}
\begin{align}
\label{amplitude}
\mathcal{A}_{\zeta} &= \frac{1}{g}
\cdot\frac{1}{g_S u_{S}^{2 \nu}} \frac{(1-\chi)^{2 \frac{\mu-\chi}{1-\chi}}}{\pi
\sin ^{2}(\nu \pi) \Gamma^{2}(1-\nu)}\left(\frac{k_*}{2 d}\right)^{2 \frac{1-\mu}{1-\chi}} \; , \\
\mathcal{A}_{T} &= \frac{8}{g} \cdot \frac{(1-\chi)^{2 \frac{\mu-\chi}{1-\chi}}}{\pi
\sin ^{2}(\nu \pi) \Gamma^{2}(1-\nu)}\left(\frac{k_*}{2 d}\right)^{2 \frac{1-\mu}{1-\chi}},
\label{a_T}
\end{align}
\end{subequations}
where
\begin{equation}
\nu = \frac{1+2 \mu-3 \chi}{2(1-\chi)} = \frac{3}{2}
+ \frac{1-n_S}{2}\; .
\label{feb4-22-1}
\end{equation}
We immediately see from eqs.~\eqref{jan25-22-20} that the smallness
of the scalar and tensor amplitudes is guaranteed by the large value of the
overall pre-factor $\hat{g}$ in eqs.~\eqref{A_old},
and hence the factor $g$ given by \eqref{jul7-22-1}.
Also, we see from eqs.~\eqref{jan25-22-20}
that the tensor-to-scalar ratio in our model is
\begin{equation}
r = \frac{\mathcal{A}_{T}}{\mathcal{A}_{\zeta}} =
8 \frac{f_S^{\nu}}{g_S^{\nu - 1}}
= 8 g_S u_S^{2\nu}
\; .
\label{feb1-22-2}
\end{equation}
This shows that it is not straightforward to have a
small value of $r$, as required by
observations~\cite{Planck:2018vyg,BICEP:2021xfz,Tristram:2021tvh},
which give
\begin{equation}
r<0.032 \; .\nonumber
\end{equation}
Since $f_S\leq g_S$ to avoid superluminality, small $r$ requires
that either $f_S\leq g_S \ll 1$ or $u_S\ll 1$ or both.
It is clear from \eqref{feb5-22-1} that obtaining both $g_S \ll 1$
and $f_S \ll 1$
requires strong fine-tuning. On the contrary,
eq.~\eqref{jan25-22-21a} suggests that ensuring that
$f_S\ll 1$ while $g_S \sim 1$,
and hence $u_S^2\ll 1$ may not be so problematic.
We give
concrete examples in Sec.~\ref{sec:explicit-bounce}.
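The interplay between the tilt \eqref{general_n_s}, the index \eqref{feb4-22-1} and the ratio \eqref{feb1-22-2} is easy to explore numerically. A minimal sketch (the parameter values below are purely illustrative, not fits to data):

```python
def spectra_params(mu, chi, g_S, u_S):
    """Spectral tilt (n_S - 1 = n_T), Bessel index nu, and
    tensor-to-scalar ratio r for the power-law contraction."""
    n_S = 1.0 + 2.0 * (1.0 - mu) / (1.0 - chi)
    nu = (1.0 + 2.0 * mu - 3.0 * chi) / (2.0 * (1.0 - chi))
    r = 8.0 * g_S * u_S ** (2.0 * nu)
    return n_S, nu, r

# mu slightly above 1 gives a red tilt; a small sound speed u_S suppresses r
n_S, nu, r = spectra_params(mu=1.01, chi=0.3, g_S=1.0, u_S=0.1)
```

Here $g_S \sim 1$ with $u_S \ll 1$ reproduces the mechanism discussed in the text: $r = 8 g_S u_S^{2\nu}$ drops below the observational bound while the tilt stays slightly red.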
Thus, the small tensor-to-scalar ratio in our set of models is
due to small sound speed of scalar perturbations. This is reminiscent
of the situation in
k-inflation~\cite{Garriga:1999vw,Mukhanov:2005bu,Langlois:2008wt},
where the tensor-to-scalar ratio is also suppressed for small $u_S$.
We now turn to the scalar and tensor tilts given by
eq.~\eqref{general_n_s}. First, we note that the two tilts are
equal to each other, unlike in most of inflationary
models. Second, we
point out that
approximate flatness of the spectra
is ensured in our set of models by choosing
$\mu \approx 1$, while the slightly red $\Lambda$CDM spectrum
\eqref{jul17-22-1}
is found for
\begin{equation}
\mu > 1 \; .\nonumber
\end{equation}
As we discuss below,
the small value of $r$, especially
in the case $\mu > 1$, is
non-trivial from the viewpoint
of the strong coupling problem. Before coming to the
strong coupling issue, let us comment
on approximate flatness itself.
\subsection{Flatness of power spectra and dilatation invariance}
\label{sec:dilatation}
Flatness of the power spectra at $\mu=1$ is not an accident:
the model with $\mu=1$ is invariant under scale transformations.
Let us see this explicitly.
One immediately observes that
for $\mu = 1$, the ADM action \eqref{adm_lagr} with the Lagrangian functions
given by \eqref{A_old} is invariant under scale transformation
\begin{equation}
\hat{t} = \lambda \hat{t}^\prime \; , \;\;\;\;\;
x^i = \lambda x^{\prime i} \; , \;\;\;\;\;
(N, N_i, \gamma_{ij} )(x^i, \hat{t}) = (N', N_i^\prime, \gamma_{ij}^\prime)
(x^{\prime i}, \hat{t}^{\prime})\;,
\label{jan25-22-60}
\end{equation}
with $\lambda = \mbox{const}$.
However, in the ADM language this is a somewhat murky point. To clarify it,
we move to covariant formalism with the action \eqref{Hor_L}. To this end,
we
define the field $\phi$, without loss of generality, in such a way that
\begin{equation*}
-\hat{t} = \text{e}^{-\phi} \; .
\end{equation*}
Then eq.~\eqref{jan25-22-30} gives
\begin{equation}
N = \frac{\text{e}^{\phi}}{\sqrt{2X}} \;,
\label{jan25-22-40}
\end{equation}
and the Lagrangian functions take the following forms
\begin{subequations}
\begin{align*}
&A_2 = \hat{g}
\text{e}^{(2\mu+2)\phi} a_2\Big(\frac{\text{e}^{\phi}}{\sqrt{2X}}\Big) \;\text{,} \\
&A_3 = \hat{g} \text{e}^{(2\mu+1)\phi} a_3\Big(\frac{\text{e}^{\phi}}{\sqrt{2X}}\Big)\; \text{,} \\
&B_4 = - \frac{\hat{g}}{2}
\text{e}^{2\mu\phi}\; .
\end{align*}
\end{subequations}
They define the Lagrangian functions $G_2$, $G_3$, $G_4$ in accordance with
eqs.~\eqref{jan23-22-1} and \eqref{F}.
Now, in covariant formalism we introduce the scale transformation of metric
and scalar field,
\begin{equation}
g_{\mu \nu} = \lambda^2 g_{\mu \nu}^\prime \; , \;\;\;\;
\phi = \phi^\prime - \ln \lambda\;,
\label{jan25-22-70}
\end{equation}
so that $X= \lambda^{-2} X^\prime$.
The combination in the right hand
side of eq.~\eqref{jan25-22-40} is invariant under this transformation.
The meaning of this property is that $N^\prime =
\frac{\text{e}^{\phi'}}{\sqrt{2X'}}$
(which is actually equal to $N$)
is the lapse function in coordinates $x^{\prime \mu}$ introduced in
\eqref{jan25-22-60}. With this understanding,
it is fairly straightforward to check that the action \eqref{Hor_L}
with $\mu=1$
is invariant under scale transformation \eqref{jan25-22-70}. This
is clear from the fact that under scale transformation one
has
\begin{equation}
A_2 (\phi, X) = \lambda^{-2\mu-2} A_2 (\phi', X')\; ,
\; \;\; A_3 (\phi, X) = \lambda^{-2\mu-1} A_3 (\phi', X') \; ,
\;\;\; B_4 (\phi) = \lambda^{-2\mu} B_4 (\phi')\;,\nonumber\end{equation}
and
$\Box \phi = \lambda^{-2} \Box^\prime \phi'$,
$R = \lambda^{-2} R'$.
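These scaling laws are easy to spot-check numerically. The sketch below (plain Python; the placeholder functions standing in for $a_2$, $a_3$ and the sample values of $\mu$, $\lambda$, $\phi'$, $X'$ are arbitrary choices for illustration, with $\hat{g}=1$) verifies that $A_2$, $A_3$ and $B_4$ transform with the stated powers of $\lambda$:

```python
import math

# Spot check of the scaling laws above, with arbitrary smooth test
# functions standing in for a_2(N), a_3(N) (illustration only, g-hat = 1).
mu, lam = 1.3, 2.0
a2, a3 = math.sin, math.cos          # placeholder Lagrangian functions

def N_of(phi, X):                    # eq. (jan25-22-40): N = e^phi / sqrt(2X)
    return math.exp(phi) / math.sqrt(2*X)

def A2(phi, X): return math.exp((2*mu + 2)*phi) * a2(N_of(phi, X))
def A3(phi, X): return math.exp((2*mu + 1)*phi) * a3(N_of(phi, X))
def B4(phi):    return -0.5 * math.exp(2*mu*phi)

phi_p, X_p = 0.7, 1.1
phi, X = phi_p - math.log(lam), X_p / lam**2   # eq. (jan25-22-70)

assert math.isclose(A2(phi, X), lam**(-2*mu - 2) * A2(phi_p, X_p))
assert math.isclose(A3(phi, X), lam**(-2*mu - 1) * A3(phi_p, X_p))
assert math.isclose(B4(phi),    lam**(-2*mu)     * B4(phi_p))
```

The check exercises the key point made around eq.~\eqref{jan25-22-40}: the argument $\text{e}^{\phi}/\sqrt{2X}$ of $a_2$ and $a_3$ is invariant under \eqref{jan25-22-70}, so only the explicit exponential prefactors scale.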
A subtlety here concerns
the function $F$. Its derivative $F_X$
transforms as $F_X = \lambda^{2-2\mu} F^\prime_{X^\prime}$, as it should,
so that one would think that
\begin{equation}
F = \int F_X dX = \lambda^{-2\mu} \int F^\prime_{X^\prime} dX' =
\lambda^{-2\mu} F^\prime\; .\nonumber
\end{equation}
This is not quite true, though.
$F_X$ defined by eq.~\eqref{F} may contain a term
$c\mbox{e}^{2\mu \phi} X^{-1}$,
i.e., $F$ may contain a term $c \mbox{e}^{2\mu \phi} \ln X$.
Then, upon the scale transformation, the function $F$, and hence the
functions $G_2$ and $G_3$, acquire log-$\lambda$ terms,
\begin{equation}
(G_2)_{log} = -4 c \mu \ X^\prime \mbox{e}^{2\mu \phi'}
\cdot \lambda^{-2\mu -2}
\ln \lambda^{-2}\; , \;\;\;\;\;
(G_3)_{log} = - c \ \mbox{e}^{2\mu \phi'}\cdot \lambda^{-2\mu}
\ln \lambda^{-2}\; .\nonumber
\end{equation}
However, their contribution to the action \eqref{Hor_L} vanishes upon
integration by parts, in the same way as in
eq.~\eqref{jun9-22-1}. Modulo this subtlety, the functions
$G_2$, $G_3$, $G_4$ defined by
eqs.~\eqref{jan23-22-1} and \eqref{F}, have correct scaling at $\mu=1$
which ensures the scale invariance of the theory in covariant formulation.
\subsection{Tension between small $r$ and
strong coupling: preliminaries}
\label{sec:srong-preliminaries}
In this Section we discuss
in general terms the problematic issue
with our mechanism of the generation of the cosmological perturbations
at the
Horndeski bounce. It has to do with dangerous strong coupling, on the one
hand,
and the small value of $r$, on the other --- especially for
positive $(\mu -1)$, as required for
the red scalar tilt.
Using a concrete example,
we will see in Sec.~\ref{sec:explicit-bounce} that the problem may be overcome,
but in a quite narrow range of parameters. This makes our mechanism
particularly interesting and falsifiable.
In this Section
we mostly consider for definiteness
the case $\mu > 1$, as required by the $\Lambda$CDM value of the
scalar spectral index \eqref{jul17-22-1},
see eq.~\eqref{amplitude}. Our formulas, however,
are valid also in the Harrison--Zeldovich case $\mu=1$, $n_S=1$.
We make specific
comments on the latter case in appropriate places.
Taken literally,
the model with $\mu>1$
suffers from the strong coupling problem in the asymptotic past,
$t \to -\infty$. This has been discussed in detail
in Ref.~\cite{Ageeva:2021yik} (see also
Refs.~\cite{Ageeva:2018lko,Ageeva:2020gti,Ageeva:2020buc}); here we
sketch the argument.
The characteristic classical energy scale in the power-law bounce model
is the inverse
time scale of evolution,
\begin{equation}
E_{class} (t) = |t|^{-1} \; .\nonumber
\end{equation}
Indeed, both the background values of physical quantities and the parameters governing the
perturbations evolve in a power-law manner and undergo order-1 changes over a time interval
of order $|t|$ (as an exception,
this does not apply to the scale factor $a(t)$ for $\chi \ll 1$, but
does apply to the Hubble parameter, since $|\dot{H}/H| \sim |t|^{-1}$). To see whether this
classical energy scale is lower than the quantum strong coupling scale, one has to estimate the
latter.
Let us consider first
the tensor sector of the model. Its quadratic action is given
by eq.~\eqref{jan24-22-2a}; importantly, the coefficient
${\mathcal{ G}_T}={\mathcal{ F}_T}$ tends to zero as $t \to - \infty$,
see \eqref{jan31-22-2}.
The cubic action reads~\cite{Gao:2011vs}
\begin{eqnarray}
\mathcal{S}^{(3)}_{hhh}
= \int dt\text{ }a^3d^3x\Big[\frac{\mathcal{ F}_T}{4a^2}\left(h_{ik}h_{jl}
-\frac{1}{2}h_{ij}h_{kl}\right)h_{ij,kl}
\Big] \; .
\label{jun11-22-30}
\end{eqnarray}
Thus, the quantum strong coupling energy scale $E_{strong}$
in the tensor sector
is determined by the behavior of ${\mathcal{ F}_T}$.
To estimate this scale at a given moment of time, we note first that we can
rescale spatial coordinates at that moment
of time to set
\begin{equation}
a=1 \; .\nonumber
\end{equation}
Now, if the strong coupling scale $E_{strong}$ is much higher than
the energy scale
$|t|^{-1}$ of the classical evolution, the
background can be treated as slowly varying, and at a given moment of time
it is natural to introduce the canonically normalized field $h_{ij}^{(c)}$ by
\begin{equation}
h_{ij} = {\mathcal{G}_T}^{-1/2} h_{ij}^{(c)} \; .\nonumber
\end{equation}
Then the cubic interaction term becomes
\begin{equation}
\mathcal{S}^{(3)}_{hhh}
=\int d t \text{ }d^3x
\Big[\frac{\mathcal{ F}_T}{4\mathcal{G}_T^{3/2}}\left(h^{(c)}_{ik}h^{(c)}_{jl}
-\frac{1}{2}h^{(c)}_{ij}h^{(c)}_{kl}\right)
\partial_k \partial_l h^{(c)}_{ij}
\Big]\; .\nonumber
\end{equation}
On dimensional grounds, the quantum strong coupling scale is estimated
as
\begin{equation}
\label{ETTT}
E_{strong}^{TTT} \sim \frac{\mathcal{G}_T^{3/2}}{\mathcal{ F}_T} =
\frac{g^{1/2}}{|t|^\mu} \; ,
\end{equation}
where we use \eqref{jan31-22-2}. This scale is indeed higher than the
classical energy scale $H\sim |t|^{-1}$ provided that
\begin{equation}
|t|^{2\mu-2} < g \; .
\label{feb1-22-1}
\end{equation}
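The equivalence between $E_{strong}^{TTT} > |t|^{-1}$ and \eqref{feb1-22-1} follows by rearranging $g^{1/2}/|t|^\mu > |t|^{-1}$; a numerical spot check (the sample values are arbitrary, chosen for illustration):

```python
# E_strong^TTT = g^(1/2)/|t|^mu exceeds E_class = |t|^(-1)
# exactly when |t|^(2 mu - 2) < g.
for g, mu, t in [(1.0e4, 1.2, 50.0),     # mu > 1, moderate |t|: holds
                 (1.0e4, 1.2, 1.0e12),   # mu > 1, asymptotic past: fails
                 (10.0, 0.8, 1.0e9)]:    # mu < 1: holds at large |t|
    assert (g**0.5 / t**mu > 1.0/t) == (t**(2*mu - 2) < g)
```

The second sample line illustrates the statement made next: for $\mu > 1$ the condition is violated at sufficiently large $|t|$, whereas for $\mu < 1$ it holds at arbitrarily large $|t|$.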
As pointed out in
Refs.~\cite{Ageeva:2018lko,Ageeva:2020gti,Ageeva:2020buc,Ageeva:2021yik},
this inequality is indeed valid at arbitrarily large $|t|$
for $\mu < 1$, but {\it it does not hold in the asymptotic past}
for $\mu > 1$, as required for the red spectral tilt.
Thus, there is potential
tension between the red tilt and the validity of the
(semi-)classical field theory treatment, i.e.,
absence of strong coupling. One may
take various attitudes towards this
potential problem. First, one may
pretend to be ignorant about
the situation at very early times, and consider only the evolution
at the epoch when the theory is weakly coupled, in the sense that
$|t|^{-1} < E_{strong}$. Second, one
may
think of a slow change of the
exponent $\mu = \mu(t)$ from $\mu<1$ in the asymptotic past
to $\mu > 1$ at later times, when the perturbations are generated.
In any case, however, our calculation of
the power spectra in Sec.~\ref{subsec:perturbations} and Appendix B
is valid {\it provided that the WKB evolution before
the exit from effective horizon occurs in the weak coupling regime}.
This means
that the freeze-out time \eqref{t_f} must obey the weak
coupling condition \eqref{feb1-22-1} for any relevant momentum
$k$.
In fact, the tensor sector is not problematic in this
regard. To see this, we recall that the tensor modes exit the effective
horizon at
\begin{equation}
t_f^{(T)} (k) \sim \left(\frac{d}{k}\right)^{\frac{1}{1-\chi}},\nonumber
\end{equation}
(see eq.~\eqref{t_f} with $u_S = 1$ for tensor modes). Then the
relation~\eqref{feb1-22-1} with $ t= t_f^{(T)}$ becomes
\begin{equation}
\frac{1}{g} \left(\frac{d}{k}\right)^{2\frac{\mu-1}{1-\chi}} \ll 1.\nonumber
\end{equation}
The left hand side here is of the order of the tensor amplitude
${\cal A}_T$, eq.~\eqref{a_T}, so that the absence of strong
coupling at the horizon exit time is guaranteed by the smallness of
the tensor amplitude.
The latter property is actually obvious from
the Einstein frame viewpoint.
For $\mu > 1$, the Einstein frame universe experiences the
power-law inflation~\eqref{jun23-22-1}. Strong coupling in the asymptotics
$t\to -\infty$ ($t_E \to 0$) is interpreted as a mere fact that
the inflationary Hubble parameter $H_E \sim t_E^{-1}$ formally exceeds
$M_P$ at small $t_E$. Now, the tensor amplitude is of order $H_E^2/M_P^2$
at the exit time from the inflationary horizon; small tensor amplitude
means the absence of strong coupling at that time, $H_E \ll M_P$.
In the case $\mu = 1$, the condition of
validity of the classical description is time-independent,
\begin{equation*}
g\gg1 \; .
\end{equation*}
Again, this condition is automatically satisfied provided that
the tensor amplitude \eqref{a_T} is small.
The situation is more
subtle in the scalar sector,
since
the scalar sound speed $u_S$ is small, as required by
the small tensor-to-scalar ratio (see eq.~\eqref{feb1-22-2}).
To appreciate this new aspect,
we consider
scalar perturbations whose quadratic action is given by
\eqref{feb1-22-3}, i.e.,
\begin{equation}
\mathcal{ S}_{\zeta\zeta} =\int dt d^3x a^3 \mathcal{ G}_S \left[
\dot\zeta^2
-\frac{u_S^2}{a^2}
\zeta_{,i}\zeta_{,i}
\right] \; .\nonumber
\end{equation}
Hereafter we assume, in view of the above discussion,
that $g_S$ in eqs.~\eqref{jan31-22-3}, \eqref{jan25-22-21b}
is of order 1, and the smallness of $u_S$ is due to small $f_S$.
Cubic terms in the action for $\zeta$ are calculated in
Refs.~\cite{DeFelice:2011zh,Gao:2011qe,DeFelice:2011uc,Gao:2012ib}.
As we discuss in Sec.~\ref{sec:hierarchy}
and Appendix C,
the most relevant terms
reduce to
just one term \eqref{jun11-22-22}
in the cubic action (with $a=1$, as before):
\begin{align}
\mathcal{S}^{(3)}_{\zeta\zeta\zeta}
= \int d {t}~d^3 {x}
\Lambda_\zeta
{\partial}^2 \zeta \left( {\partial}_i \zeta \right)^2,
\label{jun11-22-22a}
\end{align}
with
\begin{equation}
\Lambda_\zeta =
\frac{\mathcal{ G}_T^3}{4\Theta^2} \; ,\nonumber
\end{equation}
where $\partial^2 = \partial_i \partial_i$,
and $\Theta$ is given by
\eqref{theta}. In our model \eqref{A_old}, we have
\begin{equation}
\Theta = g \frac{\vartheta}{|t|^{2\mu +1}} \; , \;\;\;\;\;\;
\vartheta = \frac{1}{2} N^2 a_{3N} - \chi\;.
\label{jun11-22-15}
\end{equation}
Thus,
\begin{equation}
\Lambda_\zeta = g \frac{\lambda_\zeta}{|t|^{2\mu - 2}} \; ,\nonumber
\end{equation}
where\footnote{In the
model of Sec.~\ref{sec:explicit-bounce}
the property $\lambda_\zeta = O(1)$ is valid provided that
$\chi$ is not fine-tuned to be
very close to 1, which is the case we consider.}
\begin{equation}
\lambda_\zeta = \frac{1}{4\vartheta^2} = O(1)\;,
\label{jun12-22-10}
\end{equation}
for all values of $u_S$ including $u_S \ll 1$.
To get rid of the sound speed in the quadratic part of the action,
we not only set $a=1$, but
rescale the spatial coordinates further,
$x^i = u_S y^i$. Upon introducing the canonically normalized field
\begin{equation*}
\zeta^{(c)} = (2\mathcal{G}_S)^{1/2} u_S^{3/2} \zeta\;,
\end{equation*}
we obtain the quadratic action
in canonical form (with effective
sound speed equal to 1),
whereas the
cubic action becomes
\begin{equation}
\mathcal{S}^{(3)}_{\zeta\zeta\zeta} = \int dt d^3y
\Lambda_\zeta (2\mathcal{G}_S)^{-3/2} u_S^{-11/2}
\partial^2_y \zeta^{(c)} (\partial_y\zeta^{(c)})^2
\; .
\label{jun10-22-1}
\end{equation}
On dimensional grounds, the strong coupling energy scale is determined by
\begin{equation}
(E_{strong}^{\zeta \zeta \zeta} )^{-3} \sim \Lambda_\zeta
(\mathcal{G}_S)^{-3/2} u_S^{-11/2}\;.\nonumber
\end{equation}
Collecting all factors, and omitting factors
of order 1, we get
\begin{equation}
E_{strong}^{\zeta \zeta \zeta} \sim \frac{1}{|t|} \left( \frac{g^{1/2}
u_S^{11/2}}{|t|^{\mu -1}} \right)^{1/3}.
\label{jun11-22-50}
\end{equation}
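As a sketch, one can verify that the dimensional estimate reproduces \eqref{jun11-22-50} exactly once power-law scalings are inserted. The Python check below assumes $\mathcal{G}_S \sim g/|t|^{2\mu}$ and uses $\Lambda_\zeta \sim g/|t|^{2\mu-2}$ from above, with all $O(1)$ factors set to one and illustrative parameter values:

```python
import math

# Assumed power-law scalings (O(1) constants set to 1; illustration only):
#   G_S ~ g / |t|^(2 mu),   Lambda_zeta ~ g / |t|^(2 mu - 2)
g, mu, u_S, t = 1.0e6, 1.25, 0.1, 1.0e3
G_S = g / t**(2*mu)
Lam = g / t**(2*mu - 2)

# dimensional estimate: (E_strong)^(-3) ~ Lambda_zeta G_S^(-3/2) u_S^(-11/2)
E_estimate = (G_S**1.5 * u_S**5.5 / Lam)**(1.0/3.0)
# closed form of eq. (jun11-22-50)
E_formula = (1.0/t) * (g**0.5 * u_S**5.5 / t**(mu - 1))**(1.0/3.0)
assert math.isclose(E_estimate, E_formula)
```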
The condition of validity of the (semi-)classical approximation, $E_{strong} > |t|^{-1}$, now reads
\begin{equation}
\left( \frac{g u_S^{11}}{|t|^{2(\mu -1)}} \right)^{1/6} > 1 \; .
\label{feb3-22-1}
\end{equation}
For small $u_S$ it is stronger than the condition \eqref{feb1-22-1}, i.e., eq.~\eqref{feb3-22-1}
is valid at
later times (smaller $|t|$) than eq.~\eqref{feb1-22-1}.
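In other words, the scalar condition pushes the onset of the weakly coupled regime later by a factor $u_S^{11/(2\mu-2)}$ in $|t|$. A short numerical illustration (parameter values arbitrary):

```python
import math

# |t| ranges where the two conditions hold:
#   eq. (feb1-22-1):  |t| < g^(1/(2 mu - 2))
#   eq. (feb3-22-1):  |t| < (g u_S^11)^(1/(2 mu - 2))
g, mu, u_S = 1.0e6, 1.25, 0.1
t_tensor = g**(1.0/(2*mu - 2))
t_scalar = (g * u_S**11)**(1.0/(2*mu - 2))
assert t_scalar < t_tensor                    # scalar bound is the stronger one
assert math.isclose(t_scalar / t_tensor, u_S**(11.0/(2*mu - 2)))
```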
Let us see whether the condition \eqref{feb3-22-1} can be satisfied
at the times when the
relevant modes of perturbations exit the effective horizon.
The most dangerous are the
longest modes, i.e., the smallest $k=k_{min}$. To
obtain a rough estimate,
we take $k_{min}\approx k_*$ (the momentum dependence is weak in view of
small
$|n_S - 1|$), and relate the exit time \eqref{t_f} at $k=k_*$
with the scalar amplitude~\eqref{amplitude}.
We omit factors of order 1 and obtain
\begin{equation}
t_f^{2(\mu - 1)} \sim g {\cal A}_\zeta u_S^3 \; .\nonumber
\end{equation}
In this way we find
\begin{equation}
\left( \frac{g u_S^{11}}{|t_f(k_{min})|^{2(\mu -1)}} \right)^{1/6}
\sim \left( \frac{u_S^{8}}{\mathcal{A}_{\zeta}} \right)^{1/6}
\sim \left( \frac{r^{4/\nu}}{\mathcal{A}_{\zeta}} \right)^{1/6} \; ,\nonumber
\end{equation}
where we make use of eq.~\eqref{feb1-22-2} with $\nu$ given
by \eqref{feb4-22-1}.
So, the validity condition \eqref{feb3-22-1}
for our weak coupling calculations is roughly
\begin{equation}
\left( \frac{r^{4/\nu}}{\mathcal{A}_{\zeta}} \right)^{1/6} > 1 \; .
\label{mar22-22-1}
\end{equation}
We see that there is an interplay between two small numbers,
$r$ and $\mathcal{A}_{\zeta}$. For a crude estimate,
we take $\chi \ll 1$ and $\mu \approx 1$,
consistent with small $(1-n_S)$ as given by \eqref{general_n_s}. Then
$\nu \approx 3/2$. If we then take, as an example,
$r=0.02$ and insert $\mathcal{A}_{\zeta} \simeq 2\cdot 10^{-9}$ into the left
hand side of eq.~\eqref{mar22-22-1}, we obtain its numerical value
approximately equal
to 5, suspiciously close to 1. The lesson we
learn from this back-of-envelope estimate is twofold.
First, one cannot neglect numerical
factors ``of order 1'' here. In particular, one has to
be more precise when evaluating the
strong coupling scale $E_{strong}$: instead of
naive dimensional analysis, one has to consider
unitarity bounds. We
study this point in general terms in
Sec.~\ref{sec:u-bound-gen} and apply the results to
a concrete model
in Sec.~\ref{sec:explicit-bounce}. Second,
it is clear that one cannot have arbitrarily
small tensor-to-scalar ratio $r$ in our class
of models; indeed, $r\simeq 0.02$ appears to be already
on the edge of the validity of the
weakly coupled description that we make use of.
We substantiate the latter observation in
Sec.~\ref{sec:explicit-bounce} within the concrete model.
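The numbers quoted in the back-of-envelope estimate above are reproduced by a one-line evaluation of the left hand side of \eqref{mar22-22-1} with $\nu = 3/2$, $r = 0.02$, $\mathcal{A}_\zeta = 2\cdot 10^{-9}$:

```python
# Left-hand side of eq. (mar22-22-1) for nu = 3/2, r = 0.02, A_zeta = 2e-9.
nu, r, A_zeta = 1.5, 0.02, 2.0e-9
lhs = (r**(4.0/nu) / A_zeta)**(1.0/6.0)
assert 4.5 < lhs < 5.5        # ~ 5: larger than 1, but not by much
```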
The above analysis goes through also in the case $\mu=1$,
$n_S=1$. Instead of \eqref{feb3-22-1},
we
obtain the condition for the absence of strong coupling, which is
again time-independent,
\begin{equation}
\label{mu_1_NOSC}
\left( g
u_S^{11} \right)^{1/6}>1 \; .
\end{equation}
With $\nu = 3/2$ this gives
\begin{equation*}
\left( \frac{r^{8/3}}{\mathcal{A}_{\zeta}} \right)^{1/6} > 1\; ,
\end{equation*}
which is similar to \eqref{mar22-22-1}.
We refine this qualitative argument in Sec.~\ref{sec:explicit-bounce}.
\subsection{Tree level unitarity and strong coupling energy scale}
\label{sec:u-bound-gen}
\subsubsection{Unitarity relations with different sound speeds}
\label{sec:speeds-unitarity}
The quantum energy scale of strong coupling is conveniently evaluated by
making use of the unitarity bounds on partial wave amplitudes (PWAs)
of
$2\to 2$ scattering~\cite{Oller:2019opk,oller.190503.1,Lacour:2009ej,Gulmez:2016scm}.
In our model we have nine $2 \to 2$ channels, which we collectively
denote by $\alpha \beta$, where $\alpha = (\alpha 1, \alpha 2)$
and $\beta = (\beta 1, \beta 2)$
refer to initial state and final state, respectively:
\begin{subequations}
\begin{align}
\alpha 1, \alpha 2 \to \beta 1 , \beta 2 ~
=~ &\zeta \zeta \to \zeta \zeta \; ,
\label{jun10-22-2a}\\
& \zeta h \to \zeta \zeta \; , \;\;\;\; \zeta \zeta \to \zeta h \; ,
\label{jun10-22-2c} \\
& \zeta h \to \zeta h \; ,
\\
&\zeta \zeta \to hh \; , \;\;\;\; hh \to \zeta \zeta \; ,
\label{jun10-22-2b}\\
& \zeta h \to hh \; , \;\;\;\; hh \to \zeta h \; ,\\
& hh \to hh \; .
\end{align}
\end{subequations}
An additional complication is that the perturbations $\zeta$ and $h$
have different sound speeds.
In this situation a (fairly obvious) generalization of the PWA unitarity
relation is~\cite{Ageeva:2022nbw}
\begin{equation}
\mbox{Im}~ a^{(l)}_{\alpha \beta} = \sum_\gamma a^{(l)}_{\alpha \gamma}
\frac{g_\gamma}{u_{\gamma 1} u_{\gamma 2}(u_{\gamma 1}+ u_{\gamma 2})}
a^{(l) \, *}_{\gamma \beta} \; ,
\label{jun9-22-2}
\end{equation}
where
$ a^{(l)}_{\alpha \beta}$ is PWA with angular momentum $l$ and initial
and final states $\alpha$ and $\beta$, respectively,
$\gamma$ refers to two particles in the intermediate state
with sound speeds $u_{\gamma 1}$ and $u_{\gamma 2}$, and $g_\gamma =2$ if these
intermediate particles are
distinguishable and $g_\gamma =1$ if these particles are
identical.\footnote{Equation \eqref{jun9-22-2} is not the most general
unitarity relation, but it is valid if the right hand side
is saturated by the tree level amplitudes. This is sufficient for our
purposes.} We omitted contributions to the right hand side
due to multiparticle intermediate states, since
they can only strengthen the unitarity bound.
Upon redefining
\begin{equation}
\tilde{a}^{(l)}_{\alpha \beta} =
\left(\frac{g_\alpha }{u_{\alpha 1} u_{\alpha 2}(u_{\alpha 1}
+ u_{\alpha 2})}\right)^{1/2}
a^{(l)}_{\alpha \beta} \left(\frac{g_\beta}{u_{\beta 1} u_{\beta 2}
(u_{\beta 1}+ u_{\beta 2})}\right)^{1/2} \; ,
\label{jun9-22-5}
\end{equation}
we arrive at the familiar form of the unitarity relation, which we write
in matrix form
for the matrix $ \tilde{a}^{(l)}_{\alpha \beta}$:
\begin{equation}
\mbox{Im}~ \tilde{a}^{(l)} = \tilde{a}^{(l)} \tilde{a}^{(l)\, \dagger} \;.\nonumber
\end{equation}
The most stringent tree level
unitarity bound is obtained for the largest
eigenvalue of the tree level
matrix $\tilde{a}^{(l)}$ (which is real and symmetric). This bound
reads~\cite{Grojean:2007zz}
\begin{equation}
|\mbox{maximum~eigenvalue~of}~ \tilde{a}^{(l)}| \leq \frac{1}{2} \; .\nonumber
\end{equation}
All these properties are derived in detail in Ref.~\cite{Ageeva:2022nbw}.
\subsubsection{Dimensional analysis for $u_S \ll 1$}
The model we consider has the large parameter $u_S^{-1}$ which governs
small tensor-to-scalar ratio. So, as we have seen, the earliest time
after which we can trust our setup depends on $u_S$. Let us see that
the dependence
of the rescaled amplitudes $\tilde{a}^{(l)}_{\alpha \beta}$ on $u_S$ is the
only source of
refinement
of the naive estimate
for the time $t_{cl}$ of the onset of the classical theory
(cf.
\eqref{feb1-22-1})
\begin{equation}
|t_{cl}|^{2\mu -2} \sim g \; .
\label{jun10-22-10}
\end{equation}
We restrict ourselves to the cubic order in perturbations; by experience,
higher orders are expected not to give anything
new~\cite{Ageeva:2020buc,Ageeva:2021yik}.
{\it
Let us ignore the fact that $u_S \ll 1$ for the time being.}
Then the entire Lagrangian defined by the functions \eqref{A_old}
is proportional to $g (-t)^{-2\mu}$, while the only other ``parameter''
is $t$ (we ignore constants of order 1).
Note that $g (-t)^{-2\mu}$
has dimension $(\mbox{mass})^2$. So, on dimensional grounds,
before rescaling to canonically
normalized fields, the terms in the cubic Lagrangian have the following
schematic form
\begin{equation}
\frac{g}{|t|^{2\mu}} \cdot (|t|\partial)^n \cdot \partial^2 \cdot\varphi^3 \;, \nonumber
\end{equation}
where $\varphi$ stands collectively for (dimensionless)
scalar and tensor perturbations, and, with a slight abuse of notation,
we do not distinguish between
temporal and spatial derivatives at this stage.
Going to canonically normalized fields
$\varphi^{(c)} \sim (g^{1/2}/|t|^{\mu}) \varphi$, we write the cubic Lagrangian
as follows
\begin{equation}
\frac{|t|^\mu}{g^{1/2}} \cdot (|t|\partial)^n \cdot\partial^2 \cdot \varphi^{(c) \, 3} \; .
\label{jun11-22-44}
\end{equation}
With this cubic coupling, its contribution to
the $2\to 2$ amplitude is, schematically,
\begin{equation}
a^{(l)} \sim \frac{\left( (|t|^\mu/g^{1/2}) E^2 (E|t|)^n\right)^2}{E^2}
\sim \frac{|t|^{2\mu -2}}{g} (E|t|)^{2n + 2} \; ,\nonumber
\end{equation}
where $E$ is the center-of-mass energy, and $E^2$ in
the denominator comes from the propagator, see Fig.~\ref{fig:diagram}.
This reiterates that ignoring the fact that $u_S \ll 1$, one
would obtain the estimate \eqref{jun10-22-10}
for the time of the onset of the classical theory irrespectively of the
channel considered: at that time the amplitude at energy scale
$E \sim E_{class} =|t|^{-1}$ saturates the unitarity bound.
{\it
Let us now reintroduce $u_S \ll 1$.}
Importantly,
the coefficients in the cubic
Lagrangian do not contain inverse powers of $u_S$, so, no enhancement
by $u_S^{-1}$ occurs due to the cubic Lagrangian
itself.
Note that this is not entirely trivial. First,
the theory involves
non-dynamical variables
$\alpha$, $\beta$, $N_i^T$
entering \eqref{jul17-22-2}. Their expressions
$\alpha (\zeta, h_{ij})$, $\beta(\zeta, h_{ij})$ and $N_i^T (\zeta, h_{ij})$,
obtained by solving the constraint equations, may in principle be enhanced
by inverse powers of $u_S$. As a matter of fact,
one can check that this is not the case.
Second, one may be tempted to use linearized equations of motion
when obtaining the cubic action. This would introduce spurious
inverse powers of $u_S$ when inserting $\partial_i \partial_i \zeta = u_S^{-2}
\ddot \zeta$. The latter subtlety is taken care of by working
consistently off-shell, as we do in what follows.
Still,
the rescaled amplitudes
$\tilde{a}^{(l)}$ acquire the dependence on $u_S$. Schematically,
the rescaled amplitudes are now
\begin{equation}
\tilde{a}^{(l)} \sim \frac{|t|^{2\mu -2}}{g} (E|t|)^{2n + 2} u_S^{-K},\nonumber
\end{equation}
where $K$ depends on the process. Now the time of the
onset of the classical theory is determined by
\begin{equation}
|t_{cl}|^{2\mu -2} \sim g u_S^K \;.\nonumber
\end{equation}
The larger $K$, the smaller $|t_{cl}|$, the later
the system
enters the
classical theory/weak coupling regime. So, to figure out the actual time
$t_{cl}$ (the latest of the ``strong coupling times''),
we are going to hunt for processes whose
rescaled amplitudes are enhanced by $u_S^{-K}$ with
the largest value of $K$.
In the case $\mu=1$, the validity condition for the (semi-)classical treatment
is
$gu_S^K >1$, so, again, the strongest bound on the parameters of a model
is obtained for the largest value of $K$.
\subsubsection{Hierarchy of rescaled amplitudes}
\label{sec:hierarchy}
Let us consider tree-level
diagrams of the types shown in Fig.~\ref{fig:diagram}.
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{general-diagramm.pdf}
\caption{Tree-level diagrams. Particles $\alpha 1$, $\alpha 2$, $\beta 1$,
$\beta 2$ and $\delta$ can be scalar or tensor.}
\label{fig:diagram}
\end{figure}
In our class of models, the amplitudes $\tilde{a}^{(l)}_{\alpha \beta}$
with one and the same energy $E$ in the center-of-mass frame
${\bf p}_{\alpha 1} = -{\bf p}_{\alpha 2}$
and different particles in the initial, final and
intermediate states
show hierarchical pattern
in terms of the large parameter
$u_S^{-1}$.
This pattern is due to the following properties:
(i) Due solely to rescaling,
the rescaled amplitudes \eqref{jun9-22-5} are enhanced by
a factor $u_S^{-3/2}$ for two initial (or two final)
{\it scalar} external legs; by a factor $u_S^{-1/2}$ if initial
(or final) legs are $\zeta h$;
no enhancement of
this sort is associated with two tensor initial (or final)
external legs.
(ii) Since the energy and momentum of a {\it scalar} are related by
$\omega = u_S p$ (we reserve the notation $E$ for the center-of-mass energy),
spatial momentum of an incoming or outgoing scalar may be either of
order $p \sim E/u_S$ or of order $p \sim E$. In the former case (only!)
every {\it spatial} derivative in a vertex, that acts
on external leg $\zeta$
gives enhancement
$u_S^{-1}$. The same observation applies to an internal $\zeta$ line in the $t$-
and $u$-channels,
if the spatial momentum transfer is of order $E/u_S$.
(iii) The scalar propagator is given by
\begin{equation}
S(\omega,p) = \frac{1}{\omega^2 - u_S^2 p^2} \; .\nonumber
\end{equation}
For $\omega=0$ ($t$-
and/or $u$-channel diagrams with internal line $\zeta$)
this gives enhancement $u_S^{-2}$, provided that the momentum transfer is
$p \sim E$ (but not $E/u_S$).
To proceed further, we note that the maximum number of {\it spatial}
derivatives in triple-$\zeta$ vertex is 4.
In the particular class of Horndeski models \eqref{Hor_L}
with $G_5=0$,
and, furthermore, with $G_4 = G_4 (\phi)$,
there are at most 2 {\it spatial} derivatives in other vertices.
We discuss this point in Appendix C.
Another useful observation is that for a given center-of-mass
energy $E$, incoming (outgoing) momenta are of order
$p\sim E/u_S$ if {\it both} initial (final) particles are $\zeta$, and
$p\sim E$ otherwise.
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{pure-scalars.pdf}
\caption{Purely scalar tree-level diagrams:
case $\textbf{(a)}$.}
\label{fig:diagr_3scalar}
\end{figure}
We now consider various channels and diagrams separately.
{\bf (a)}. Purely scalar diagrams,
Fig.~\ref{fig:diagr_3scalar}, process \eqref{jun10-22-2a}.
In this case all spatial momenta, including intermediate momentum
in $t$- and $u$-channel diagrams, are of order $E/u_S$. Hence, the enhancement
mechanisms (i) and (ii) are at work, while the mechanism (iii) is not.
The maximum number of spatial derivatives at each vertex is 4,
so the diagrams are of order
\begin{equation}
u_S^{-3/2} \cdot u_S^{-3/2} \cdot 1 \cdot u_S^{-4} \cdot u_S^{-4} = u_S^{-11} \; , \nonumber
\end{equation}
(hereafter the first two factors are
due to enhancement (i),
the third factor is due to enhancement (iii), and the last two factors are
due to enhancement (ii)).
This precisely matches the amplitude that one obtains from the
cubic action \eqref{jun10-22-1}.
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{h-propagator.pdf}
\caption{Tree-level diagrams with $\zeta$-legs and $h$-propagators:
case $\textbf{(b)}$.}
\label{fig:h-prop}
\end{figure}
{\bf (b)}. Strong enhancement would appear to occur also for diagrams with
scalar external legs and tensor exchange,
Fig.~\ref{fig:h-prop}.
However, as we pointed out,
the maximum number of spatial derivatives in each
$\zeta \zeta h$
vertex is 2 (rather than 4). Therefore, the enhancement factor is
\begin{equation}
u_S^{-3/2} \cdot u_S^{-3/2}\cdot 1 \cdot u_S^{-2} \cdot u_S^{-2} = u_S^{-7} \; .
\label{jun11-22-1}
\end{equation}
So, tensor exchange gives subdominant contribution.
{\bf (c)}. Process \eqref{jun10-22-2c} with $t$-channel $\zeta$-exchange,
Fig.~\ref{fig:c_d}, left diagram.
The incoming spatial momenta are of order $E$, while outgoing
ones are of order $E/u_S$. So, the spatial
momentum transfer is of order $E/u_S$ and the mechanism (iii)
does not work. The enhancement factor is
\begin{equation}
u_S^{-1/2}\cdot u_S^{-3/2} \cdot 1 \cdot u_S^{-2} \cdot u_S^{-4}
= u_S^{-8} \; ,
\label{jun11-22-2}
\end{equation}
so this process is also subdominant.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{c_and_d_cases.pdf}
\caption{Tree-level t-channel diagrams:
cases $\textbf{(c)}$ (left) and $\textbf{(d)}$
(right).}
\label{fig:c_d}
\end{figure}
{\bf (d)}. To illustrate the mechanism (iii), we consider
the process \eqref{jun10-22-2b} with $t$-channel
$\zeta$-exchange,
Fig.~\ref{fig:c_d}, right diagram. Spatial momenta
of incoming and outgoing particles are of order $p \sim E$,
the spatial momentum transfer is also of order $E$,
so the mechanism (ii) does not work. We find the enhancement factor
\begin{equation}
u_S^{-1/2} \cdot u_S^{-1/2} \cdot u_S^{-2} \cdot 1 \cdot 1 = u_S^{-3}\;,
\label{jun11-22-3}
\end{equation}
which is again weak.
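The $u_S$ bookkeeping of the four cases can be tallied mechanically. The sketch below transcribes the factor lists of case {\bf (a)} and eqs.~\eqref{jun11-22-1}, \eqref{jun11-22-2}, \eqref{jun11-22-3}, with the factors ordered as in the text: two state rescalings from (i), the propagator factor from (iii), and the two vertex factors from (ii):

```python
# Tally of u_S power counting for the four cases discussed above.
# Factors, in order: two (initial/final)-state rescalings (i),
# the propagator factor (iii), and the two vertex factors (ii).
cases = {
    'a': [-1.5, -1.5,  0, -4, -4],   # zeta zeta -> zeta zeta, zeta exchange
    'b': [-1.5, -1.5,  0, -2, -2],   # zeta external legs, h exchange
    'c': [-0.5, -1.5,  0, -2, -4],   # zeta h -> zeta zeta, zeta exchange
    'd': [-0.5, -0.5, -2,  0,  0],   # zeta zeta -> h h, zeta exchange
}
K = {name: -sum(f) for name, f in cases.items()}   # amplitude ~ u_S^(-K)
assert K == {'a': 11, 'b': 7, 'c': 8, 'd': 3}
assert max(K, key=K.get) == 'a'      # purely scalar process dominates
```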
Other diagrams can be studied in a similar way, and all of them are
suppressed as compared to purely scalar process {\bf (a)}.
The general argument is straightforward. By replacing one or more external
scalar legs in a purely scalar diagram of
the case {\bf (a)} by tensor leg(s),
one loses at least
an enhancement factor $u_S^{-1}$ from (i) and another factor $u_S^{-2}$
from (ii). One could in principle gain a factor $u_S^{-2}$ due to
(iii) (we have seen in our example {\bf (c)} that this is actually
not the case if one replaces just one scalar leg),
but in any case, the overall suppression of a new
diagram is at least $u_S$ as compared to the original, purely scalar
one\footnote{In reality the suppression is always even stronger, cf.
\eqref{jun11-22-1}, \eqref{jun11-22-2}, \eqref{jun11-22-3}.
Moreover,
in mixed sectors some
couplings
in the cubic action are themselves suppressed by positive powers of
$u_S$, leading to even
stronger suppression of the contributions to the amplitudes. }.
Replacing the scalar internal line by the tensor line does not improve
the situation, and in the particular case {\bf (b)} produces suppressed
result.
We conclude that
the latest time of the onset of the classical theory $t_{cl}$
is associated with the scalar sector of the theory.
This means, in particular, that the search for the largest eigenvalue
of the rescaled PWA matrix $\tilde{a}$ (Sec.~\ref{sec:speeds-unitarity})
is unnecessary. The
relevant terms in the cubic scalar action are those with four
{\it spatial} derivatives.
Modulo numerical factors, the time $t_{cl}$
is indeed determined from \eqref{feb3-22-1},
\begin{equation}
|t_{cl}|^{2\mu - 2} \sim g u_S^{11} \; ,\nonumber
\end{equation}
whereas for $\mu=1$ the condition is $ g u_S^{11} >1$.
To refine these estimates, we perform the calculation of the dominant
partial
wave amplitudes in the scalar sector.
\subsubsection{Strong coupling scale from tree-level unitarity}
We are now ready to perform the calculation of the strong
coupling energy scale, as implied by the tree-level unitarity bound.
The (off-shell)
cubic
action with 4 spatial derivatives
in the scalar sector is given by \eqref{jun11-22-22a}.
We continue
to use the approach of Sec.~\ref{sec:hierarchy},
set $a=1$ as before
and work with
the field $\tilde{\zeta}= (2\mathcal{G}_S)^{-1/2} \zeta$, which has
canonical time-derivative term and gradient term with sound speed $u_S$.
Then the cubic action
reads
\begin{align*}
\mathcal{S}^{(3)}_{\zeta\zeta\zeta}
= \int d {t}~d^3 {x}
\tilde{\Lambda}_\zeta
\partial^2 \tilde{\zeta} (\partial \tilde{\zeta})^2 \; ,
\end{align*}
where
\begin{equation}
\tilde{\Lambda}_\zeta =
(2\mathcal{G}_S)^{-3/2} \Lambda_\zeta =
\frac{\lambda_\zeta |t|^\mu}{g_S^{3/2} g^{1/2}} \cdot |t|^2 \; .\nonumber
\end{equation}
It is now straightforward to calculate the $2\to 2$
matrix element $M (\cos \theta, E)$
as function of scattering angle $\theta$ and center-of-mass energy $E$.
We get
\begin{equation*}
M = \frac{E^6}{4 u_S^8}(1-\cos^2\theta) \tilde{\Lambda}_\zeta^2\;,
\end{equation*}
(the origin of the dependence on $u_S$ is the mechanism (ii) in
Sec.~\ref{sec:hierarchy}).
We now write the rescaled PWA amplitude
\begin{equation}
\tilde{a}^{(l)} = \frac{1}{2u_S^3} \cdot
\frac{1}{32\pi}\int ~d(\text{cos}\theta)~P_l(\text{cos}\theta)\, M \; ,\nonumber
\end{equation}
where $P_l$ are Legendre polynomials,
the factor $(2u_S^3)^{-1}$ comes from the redefinition
\eqref{jun9-22-5} (i.e.,
it is due to the mechanism (i) in Sec.~\ref{sec:hierarchy}),
and obtain
\begin{subequations}
\begin{align*}
\tilde{a}^{(0)} &= \frac{\tilde{\Lambda}_\zeta^2 E^6}{192\pi u_S^{11}}\;,
\\
\tilde{a}^{(2)} &= -\frac{ \tilde{\Lambda}_\zeta^2 E^6}{960 \pi u_S^{11}}\;.
\end{align*}
\end{subequations}
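The angular integrals behind these two coefficients are elementary; the following sketch verifies them with exact rational arithmetic (Python; the prefactor transcribes the rescaled-PWA definition above):

```python
from fractions import Fraction as F

def integral_sym(coeffs):
    # integrate sum_n c_n x^n over [-1, 1]; odd powers drop out
    return sum(2*c / F(n + 1) for n, c in enumerate(coeffs) if n % 2 == 0)

# (1 - x^2) P_0(x) = 1 - x^2 ;  (1 - x^2) P_2(x) = -1/2 + 2 x^2 - 3/2 x^4
I0 = integral_sym([F(1), 0, F(-1)])                      # 4/3
I2 = integral_sym([F(-1, 2), 0, F(2), 0, F(-3, 2)])      # -4/15

# a~^(l) = (1/(2 u_S^3)) (1/(32 pi)) (E^6 Lambda^2 / (4 u_S^8)) I_l;
# c_l below is the coefficient of Lambda^2 E^6 / (pi u_S^11):
c0 = F(1, 2) * F(1, 32) * F(1, 4) * I0
c2 = F(1, 2) * F(1, 32) * F(1, 4) * I2
assert (c0, c2) == (F(1, 192), F(-1, 960))

# the s-wave saturates |a~^(0)| = 1/2 at E^6 = 96 pi u_S^11 / Lambda^2:
assert F(96) * c0 == F(1, 2)
```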
The
lowest
bound on the energy of strong coupling comes from
the $s$-wave amplitude. It saturates the unitarity bound
$|\tilde{a}^{(0)}| \leq 1/2$ at energy
\begin{equation}
E_{strong} (t) = \left(\frac{96\pi u_S^{11}}{\tilde{\Lambda}_\zeta^2}\right)^{1/6}
= \frac{(96\pi)^{1/6}}{|t|} \left( \frac{g_S^3}{\lambda_\zeta^2}
\frac{g u_S^{11}}{|t|^{2\mu -2}} \right)^{1/6}.\nonumber
\end{equation}
This refines the estimate \eqref{jun11-22-50}. Proceeding as in
Sec.~\ref{sec:srong-preliminaries}, we calculate the ratio of quantum
and classical energy scales at the time when the mode $k_* \approx k_{min}$
exits the effective horizon:
\begin{equation}
\frac{E_{strong} (t_f (k_{*}))}{E_{class} (t_f (k_{*}))}
\equiv
\frac{E_{strong} (k_{*})}{E_{class} (k_{*})}
= E_{strong} (t_f (k_{*})) \cdot |t_f (k_*)| =
C\cdot \left(\frac{u_S^8}{{\cal A}_\zeta}\right)^{1/6},
\label{jul4-22-1}
\end{equation}
where
\begin{equation}
\label{C}
C = \frac{96^{1/6} g_S^{1/3} }{|\lambda_\zeta|^{1/3}}\left(\frac{2^{2\frac{\mu-1}{1
-\chi}}(1-\chi)^{2\frac{\mu
-\chi}{1-\chi}}(2\mu-3\chi)^{2\frac{1-\mu}{1-\chi}}}{\Gamma^2
(1-\nu)\sin^2(\nu\pi)}\right)^{1/6}.
\end{equation}
This is the desired result for general models from the class
\eqref{A_old}.
In the case $\mu=1$, $n_S=1$ (and hence $\nu = 3/2$)
we have
\begin{equation}
C = \frac{96^{1/6} g_S^{1/3} }{|\lambda_\zeta|^{1/3}}
\left(\frac{(1-\chi)^2}{4\pi} \right)^{1/6} \; .
\nonumber
\end{equation}
The result \eqref{jul4-22-1} depends
in a fairly complicated way,
through the parameters $\chi$,
$g_S$ and $\lambda_\zeta$,
on both the form of the Lagrangian functions ($a_2(N)$ and $a_3(N)$ in
\eqref{A_old}) and the solution to the equations of motion,
eqs.~\eqref{jan24-22-10}.
To get an idea of how restrictive the condition for the absence of strong
coupling at the horizon
exit
is, we now turn to
concrete examples where all above points are seen explicitly.
\section{Examples}
\label{sec:explicit-bounce}
\subsection{$\mu > 1$, $n_S<1$.}
\label{sec:mu_less_1}
In this Section we consider a particular model of the type
\eqref{A_old} with the simple forms of the
Lagrangian functions:
\begin{subequations}
\label{jul18-22-1}
\begin{align}
\label{a_2}
a_2(N) &= c_2 + \frac{d_2}{N}\; ,\\
\label{a_3}
a_3(N) &= c_3+ \frac{d_3}{N} \; ,
\end{align}
\end{subequations}
where $c_2$, $c_3$, $d_2$, $d_3$ are dimensionless constants.
Making use of eqs.~\eqref{feb5-22-1}, we obtain
\begin{subequations}
\begin{align*}
f_S &= -2\left(\frac{4\mu -2 + d_3}{2\chi + d_3}\right)\; ,
\\
g_S &= \frac{6 d_3^2}{(2\chi + d_3)^2} \; .
\end{align*}
\end{subequations}
In accordance with
our discussion in Sec.~\ref{subsec:perturbations}, one
finds that the only way to obtain small
tensor-to-scalar ratio \eqref{feb1-22-2} is to ensure that $f_S \ll 1$
and $g_S \sim 1$, so that $u_S^2 = f_S/g_S \ll 1$.
We begin with the case $\mu > 1$, which corresponds to
$n_S<1$ in accordance with the $\Lambda$CDM value \eqref{jul17-22-1}, and
ensure the small value of $u_S$ by
imposing {\it a fine tuning relation}
\begin{equation}
d_3 = -2 \; .\nonumber
\end{equation}
This
choice appears rather remarkable, but we do not know whether it
may be a consequence of some symmetry or dynamical principle. In any case,
with this choice,
we have
\begin{subequations}
\begin{align*}
f_S &= \frac{4(\mu -1)}{1-\chi} = 2(1 - n_S)\; ,
\\
g_S &= \frac{6 }{(1 -\chi)^2} \; ,
\end{align*}
\end{subequations}
where we recall eq.~\eqref{general_n_s}.
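These tuned expressions follow directly from the general formulas above; a quick symbolic check (a sketch, not part of the paper's text):

```python
# Sketch check (not from the paper): substituting the fine-tuned value
# d3 = -2 into the general expressions for f_S and g_S.
import sympy as sp

mu, chi, d3 = sp.symbols('mu chi d3')
f_S = -2*(4*mu - 2 + d3)/(2*chi + d3)
g_S = 6*d3**2/(2*chi + d3)**2

f_S_tuned = sp.simplify(f_S.subs(d3, -2))
g_S_tuned = sp.simplify(g_S.subs(d3, -2))

assert sp.simplify(f_S_tuned - 4*(mu - 1)/(1 - chi)) == 0
assert sp.simplify(g_S_tuned - 6/(1 - chi)**2) == 0
```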
Interestingly, the small tensor-to-scalar ratio $r \sim f_S^\nu/g_S^{\nu-1}$
and the small scalar tilt $(1-n_S)$
are now governed
by one and the same small parameter $(\mu -1)$.
Proceeding
with $d_3 = -2$, we find that
the equations for the background take a relatively simple form.
These are algebraic equations:
\begin{subequations}
\begin{align*}
3\chi^2 - 6\chi + c_2 N^2 &=0\;,
\\
3\chi^2 +2 (2\mu + 1) (1 - \chi) - \kappa N + c_2 N^2 &=0\;,
\end{align*}
\end{subequations}
where
\begin{equation}
\kappa = c_3 (1+2\mu) - d_2 \; .\nonumber
\end{equation}
The relevant solution to these equations is (the second solution
does not yield a stable bounce)
\begin{subequations}
\begin{align*}
\chi &= \frac{3 + 8\rho (\mu-1)(2\mu+1) - \sqrt{9 -
12\rho (2\mu+1)(5-2\mu)}}{3 + 16\rho (\mu - 1)^2}\; ,
\\
N &= \frac{2}{\kappa}\left[1 + 2\mu - 2(\mu -1)\chi\right] \; ,
\end{align*}
\end{subequations}
where
\begin{equation}
\rho = \frac{c_2}{\kappa^2}\; .\nonumber
\end{equation}
While the expression for $N$ is not of particular physical significance
(the only requirement is that $N>0$),
the value of $\chi$ is an important characteristic of the solution.
Note that while $N$ depends on $c_2$ and $\kappa$ separately,
the parameter $\chi$ depends (for given
$\mu$)
on the single combination $\rho$
of the
three Lagrangian parameters remaining after setting $d_3=-2$. We will
see in what follows that $\mu$ and $\rho$ (or, equivalently, $\chi$)
are
the only parameters relevant
also for the strong coupling issue.
For small and positive $(\mu -1)$, the allowed range of
parameters is
\begin{equation}
\kappa >0 \; , \;\;\;\;
0< \rho \lesssim \frac{2}{27} \; .
\nonumber \end{equation}
These relations ensure that $N>0$ and,
importantly,
$2\mu-3\chi>0$, see
\eqref{jul5-22-100}.
We are now equipped with the explicit formulas to see what
range of the
tensor-to-scalar ratio is consistent with our weak
coupling calculations. We obtain from~\eqref{jun11-22-15},
\eqref{jun12-22-10}
\begin{subequations}
\label{jul18-22-10}
\begin{align}
\theta &= 1-\chi \; ,
\\
\lambda_\zeta &= \frac{1}{4(1-\chi)^2} \; ,
\end{align}
\end{subequations}
while the parameter $\nu$ is still given by \eqref{feb4-22-1}.
Besides the dependence on $\mu$,
these parameters again depend on $\rho$ only.
We express the parameter $\mu$ through $n_S$ and $\chi$ using
\eqref{general_n_s}. Then we are left with a single free
parameter $\chi$ (or, equivalently, $\rho$). Our final formulas
are
obtained from \eqref{feb1-22-2} and \eqref{jul4-22-1}:
\begin{align*}
r &= 48 (1-\chi)^{2(\nu-1)} \left( \frac{1-n_S}{3} \right)^\nu,
\\
\frac{E_{strong} (k_{*})}{E_{class} (k_{*})}
&= E_{strong} (t_f (k_{*})) \cdot |t_f (k_*)| = \tilde{C}\cdot
\left( \frac{r^{4/\nu}}{{\cal A}_\zeta}\right)^{1/6},
\end{align*}
where, as before, $\nu = 2- n_S/2$ and
\begin{equation*}
\tilde{C} = \frac{C}{(8g_S)^{2/3\nu}}\;,
\end{equation*}
where $C$ is given by \eqref{C}, so that
\begin{align*}
\tilde{C}
&= 2^{\frac{12-11n_S}{24-6n_S}}3^{\frac{4-3n_S}{24-6n_S}} (1-\chi)^{\frac{12-n_S}{3(4
-n_S)}}\left(\frac{(2-2\chi)^{1-n_S}\Big(2+(1-n_S)-\chi\big(3
+(1-n_S)\big)\Big)^{-(1-n_S)}}{\Gamma^2(\frac{n_S}{2}-1)
\sin^2\left[(2-\frac{n_S}{2})\pi\right]}\right)^{1/6}
\nonumber\\
&\approx \frac{3^{1/18}(1-\chi)^{11/9}}{2^{5/18}\pi^{1/6}}
= 0.7 \cdot (1-\chi)^{11/9}\;,
\end{align*}
where we set $n_S = 1$ in the
last two expressions.
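The numerical prefactor quoted here can be checked directly (a sketch, not from the paper's code); the same combination of constants reappears as the prefactor of $\tilde{B}$ in the next subsection.

```python
# Quick numerical check (not from the paper): the prefactor quoted for
# C~ at n_S = 1 indeed rounds to 0.7, and it equals the equivalent form
# 6^(1/18)/(4 pi)^(1/6).
import math

prefactor = 3**(1/18) / (2**(5/18) * math.pi**(1/6))
assert abs(prefactor - 0.7) < 0.05                                # ~0.72
assert abs(prefactor - 6**(1/18)/(4*math.pi)**(1/6)) < 1e-12      # same number
```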
In Fig.~\ref{fig:r} we show the tensor-to-scalar ratio $r$
and the ratio $E_{strong}(k_*)/E_{cl}(k_*)$ as
functions of $\chi$ for the
$\Lambda$CDM
central value
$n_S = 0.9649$ suggested by observations. The main message is that
the value of $r$ is bounded from below in our model,
$r >0.018$ for
$n_S = 0.9649$,
even for the very generous unitarity bound $E_{strong}(k_*)/E_{cl}(k_*)>1$.
Note that the parameters should obey $(2\mu - 3\chi)>0$ which translates to
\begin{equation}
\chi < \frac{3-n_S}{4- n_S} \approx \frac{2}{3} \; .\nonumber
\end{equation}
This bound is also shown in Fig.~\ref{fig:r}.
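An illustrative evaluation (our sketch, not the paper's code): taking $\chi$ at this boundary for the central value $n_S=0.9649$ reproduces the order of the quoted lower bound on $r$.

```python
# Illustrative evaluation (not from the paper's code): the
# tensor-to-scalar ratio at the boundary chi -> (3-n_S)/(4-n_S) for the
# central value n_S = 0.9649; the result is close to the quoted 0.018.
n_S = 0.9649
nu = 2 - n_S/2
chi_max = (3 - n_S)/(4 - n_S)
r = 48*(1 - chi_max)**(2*(nu - 1))*((1 - n_S)/3)**nu
assert abs(r - 0.018) < 0.001
```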
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{r_estecl.pdf}
\caption{\label{fig:r} The ratio $E_{strong}(k_*)/E_{cl}(k_*)$ (blue line)
and $r$-ratio (red line)
as functions of $\chi$
for the central value $n_S = 0.9649$. The allowed range $r< 0.032$ and the
unitarity bound $E_{strong}(k_*)/E_{cl}(k_*) > 1$ restrict the
parameter space to the
lower right part.
}
\end{figure}
In Fig.~\ref{fig:region} we show
what happens if the scalar tilt $n_S$ is varied within the
observationally allowed range. A point to note is that in the
entire allowed range, the parameter $r$ is fairly large,
$r > 0.015$,
while the strong coupling scale is always close to
the classical energy scale, $E_{strong} \lesssim 3 E_{cl}$.
We conclude that our simple model is on the verge of being ruled out.
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{SC_r.pdf}
\caption{Space of parameters $n_S$ and $\chi$.
Colored strips correspond to different
ratios of strong coupling scale to classical scale:
$1< E_{strong}(k_*)/E_{cl}(k_*)<1.5$ (red),
$1.5< E_{strong}(k_*)/E_{cl}(k_*) <2.2$ (orange),
$2.2< E_{strong}(k_*)/E_{cl}(k_*) <3$ (green),
$3< E_{strong}(k_*)/E_{cl}(k_*) <4.5$ (blue),
$4.5 < E_{strong}(k_*)/E_{cl}(k_*) $ (magenta).
Dashed lines show
the tensor-to-scalar ratio:
$r = 0.02$ (green), $r = 0.025$ (blue),
$r = 0.032$ (black), $r = 0.036$ (red), and $r = 0.04$ (magenta).}
\label{fig:region}
\end{figure}
\subsection{$\mu=1$, $n_S=1$}
We now consider the case $\mu=1$, $n_S=1$ consistent with
the early dark energy idea~\cite{Ye:2021nej,Jiang:2022uyg}.
Our model is still defined by the functions \eqref{jul18-22-1},
now with
$\mu=1$.
Unlike in Sec.~\ref{sec:mu_less_1},
we ensure that $f_S\ll 1$, and hence that the scalar sound speed is small,
by choosing
\begin{equation*}
d_3 = -2 + \epsilon \; , \;\;\;\;\;\; \epsilon \ll 1 \; .
\end{equation*}
Then
\begin{align*}
f_S &= \frac{2 \epsilon}{2-2\chi -\epsilon}\;, \\
g_S &= \frac{6(2-\epsilon)^2}{(2-2\chi - \epsilon)^2} \; ,
\end{align*}
while $\theta$ and $\lambda_\zeta$ are still given by \eqref{jul18-22-10}.
Note that $\epsilon>0$, since we
require $f_S>0$.
The background equations of motion are
again algebraic, and their solution is
\begin{align*}
\chi &= \frac{(2-\epsilon)\left(1+6\epsilon \rho -\sqrt{1-12(1-\epsilon)\rho}\right)}{2(1+3\epsilon^2\rho)}\;,\\
N &= \frac{3(2-\epsilon)\left(2-\epsilon(1-\sqrt{1-12(1-\epsilon)\rho})\right)}{2\kappa(1 + 3\epsilon^2\rho)}\;,
\end{align*}
with $\kappa = 3c_3 - d_2$, $\rho = c_2 /\kappa^2$.
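As a consistency sketch (not part of the paper's text), the $\epsilon \to 0$ limit of this solution for $\chi$ coincides with the $\mu \to 1$ limit of the solution found in the previous subsection:

```python
# Consistency sketch (not from the paper's code): at eps = 0 the mu = 1
# solution for chi reduces to 1 - sqrt(1 - 12 rho), which is also the
# mu -> 1 limit of the solution of Sec. "mu > 1, n_S < 1".
import math

def chi_mu1(eps, rho):
    return ((2 - eps)*(1 + 6*eps*rho - math.sqrt(1 - 12*(1 - eps)*rho))
            / (2*(1 + 3*eps**2*rho)))

def chi_gen(mu, rho):
    return ((3 + 8*rho*(mu - 1)*(2*mu + 1)
             - math.sqrt(9 - 12*rho*(2*mu + 1)*(5 - 2*mu)))
            / (3 + 16*rho*(mu - 1)**2))

for rho in (0.01, 0.03, 0.05):
    assert abs(chi_mu1(0.0, rho) - chi_gen(1.0, rho)) < 1e-12
    assert abs(chi_mu1(0.0, rho) - (1 - math.sqrt(1 - 12*rho))) < 1e-12
```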
For given $\epsilon$, we can again trade the parameter $\rho$ for $\chi$,
hence the two relevant parameters are now $\chi$ and $\epsilon$.
With all the formulas at hand, we are ready to determine what range of the
$r$-ratio is consistent with the weak coupling regime.
We recall \eqref{feb1-22-2} and \eqref{jul4-22-1}, set $\nu =3/2$ and find
\begin{equation*}
r = \frac{16 \epsilon^{3/2}}{[3(2-2\chi -\epsilon)]^{1/2}(2-\epsilon)}\;,
\end{equation*}
\begin{equation*}
\frac{E_{strong} }{E_{class} }
= \tilde{B}\cdot
\left( \frac{r^{8/3}}{{\cal A}_\zeta}\right)^{1/6}\;,
\end{equation*}
where the coefficient $\tilde{B}$ is
\begin{align*}
\tilde{B} = 6^{1/18} (1-\chi) \left(\frac{ \left[\frac{2-2\chi - \epsilon}{
2-\epsilon}\right]^{4/3}}{4\pi}\right)^{1/6} = 0.7 \cdot (1-\chi) \left(
\frac{2-2\chi - \epsilon}{2-\epsilon}\right)^{2/9}.
\end{align*}
In Fig.~\ref{fig:mu1} we show $\frac{E_{strong} }{E_{class} }$ and $r$
on the plane of the
two parameters $\epsilon$ and $\chi$. One observes that in the model
with $\mu = 1$ it is easier
to obtain a
small tensor-to-scalar ratio while still insisting on the
generation of the perturbations in the weak
coupling regime. Nevertheless, the value of $r$ cannot be much smaller
than 0.01; otherwise one would face the strong coupling problem.
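As a rough illustration of the last statement (our own estimate, not the paper's calculation: we assume a Planck-like amplitude $\mathcal{A}_\zeta \simeq 2.1\times 10^{-9}$ and a representative value $\tilde{B}\simeq 0.3$, corresponding to $\chi \sim 0.5$ and small $\epsilon$):

```python
# Rough illustration with ASSUMED inputs (not the paper's numbers):
# requiring E_strong/E_class > 1 with an assumed amplitude
# A_zeta ~ 2.1e-9 and a representative B~ ~ 0.3 bounds r from below.
A_zeta = 2.1e-9   # assumed Planck-like scalar amplitude (assumption)
B_tilde = 0.3     # illustrative value of B~ for chi ~ 0.5 (assumption)

# E_strong/E_class = B~ (r^{8/3}/A_zeta)^{1/6} > 1
#   =>  r > (A_zeta / B~^6)^{3/8}
r_min = (A_zeta / B_tilde**6)**(3.0/8.0)
assert 0.001 < r_min < 0.05   # of order 0.01, consistent with the text
```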
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{SC_r_mu1.pdf}
\caption{Space of parameters $\epsilon$ and $\chi$ in the case $\mu = 1$.
Colored strips correspond to different
ratios of strong coupling scale to classical scale:
$1< E_{strong}/E_{cl}<1.8$ (red),
$1.8< E_{strong}/E_{cl} <2.7$ (orange),
$2.7< E_{strong}/E_{cl} <4.2$ (green),
$4.2< E_{strong}/E_{cl} <6$ (blue),
$6 < E_{strong}/E_{cl}$ (magenta).
Dashed lines show
the tensor-to-scalar ratio:
$r = 0.005$ (magenta), $r=0.01$ (green), $r = 0.022$ (blue),
$r = 0.032$ (black), and $r = 0.036$ (red).}
\label{fig:mu1}
\end{figure}
\section{Conclusion}
\label{sec:conclude}
In this paper we have studied,
in the
framework of the Horndeski theory, the contracting cosmological
stage which can afterwards pass through a bounce
to a realistic cosmological expansion, as discussed in detail, e.g., in
Ref.~\cite{Ageeva:2021yik}. We have found that this stage
is capable of producing an experimentally
consistent scalar power spectrum and a small enough
tensor-to-scalar ratio $r$. A small value of $(1-n_S)$ is obtained at the expense
of mild fine tuning, while small $r$ requires a small scalar sound speed,
$r\sim u_S^3$. The latter property is in potential tension with
the requirement of
weak coupling
at
the time of the generation of scalar perturbations: small $u_S$ enhances the
scattering amplitudes and modifies the unitarity relation, which is violated
at energies dangerously close to the energy scale of the classical evolution.
Thus, very small values of $r$ are strongly disfavored in our class of models.
We have illustrated these properties with
concrete examples within
the Horndeski theory.
While our motivation originates from the interest in constructing
complete, singularity-free cosmologies, our initial stage of the bounce
is conformally equivalent to a rapidly expanding Universe, and, indeed,
the red scalar tilt is obtained in a model conformally
equivalent\footnote{The action for our set of models has a fairly simple form
in our Jordan frame and is a lot more contrived in the inflationary
Einstein frame. For this reason we have performed our analysis entirely
in the Jordan frame.} to
$G$-inflation~\cite{Kobayashi:2010cm}.
So, we expect that our observation of the
importance of the quantum strong coupling problem may be relevant to
the models of $G$-inflation, and possibly also k-inflation. We think this
line of thought is worth pursuing in the future.
\section*{Acknowledgements}
The authors are grateful to M. Shaposhnikov,
A. Starobinsky,
A. Westphal, and C. Wetterich
for helpful
discussions
and Yun-Song Piao for useful correspondence.
This work has been supported by Russian Science Foundation
Grant No. 19-12-00393.
\section*{ Appendix A. General expressions in the Horndeski model}
Here we give explicit formulas for a theory with action \eqref{adm_lagr}.
Equations of motion for spatially flat FLRW background read \cite{Kobayashi:2011nu}
\begin{subequations}
\label{eoms}
\begin{eqnarray}
& (NA_2)_N + 3NA_{3N}H + 6N^2(N^{-1}A_4)_N H^2 = 0\;,\\
& A_2
-6A_4H^2-\frac{1}{N}\frac{d}{d\hat{t}}\left( A_3+4A_4H \right) = 0 \; ,
\end{eqnarray}
\end{subequations}
where $H = (Na)^{-1} (da/d\hat{t})$ is the Hubble parameter.
The coefficient functions in the quadratic actions for perturbations \eqref{jan23-22-5}
are given by \cite{Kobayashi:2015gga}
\begin{subequations}
\label{eq:Ft_Gt_form}
\begin{align}
\mathcal{ G}_T &= -2A_4\;,\\
\mathcal{ F}_T &= 2B_4\;,
\end{align}
\end{subequations}
and
\begin{subequations}
\label{eq:Fs_Gs_form}
\begin{eqnarray}
\mathcal{ F}_S&=&\frac{1}{a N}\frac{d}{d \hat{t}}\left(\frac{a}{\Theta}\mathcal{ G}_T^2\right)
-\mathcal{ F}_T\;,
\label{eq:Fs_form}
\\
\mathcal{ G}_S&=&\frac{\Sigma }{\Theta^2}\mathcal{ G}_T^2+3\mathcal{ G}_T\;, \label{eq:Gs_form}
\end{eqnarray}
\end{subequations}
with
\begin{subequations}
\begin{align}
\Sigma&=
N A_{2N}+\frac{1}{2}N^2A_{2NN}+
\frac{3}{2}N^2A_{3NN}H+3\big(2A_4-2NA_{4N}+N^2A_{4NN}\big)H^2\;,
\\
\Theta&=2H\Big(\frac{NA_{3N}}{4H}-A_4 + NA_{4N}\Big)\;.
\label{theta}
\end{align}
\end{subequations}
\section*{Appendix B. Power spectra}
In this Appendix we present, for completeness, the calculation
of the power spectra of perturbations. The quadratic actions
for perturbations are given by eqs.~\eqref{jan21-24-2}, where
the scale factor is written in eq.~\eqref{jan31-22-1} and the
coefficients are given by eqs.~\eqref{jan31-22-2}, \eqref{jan31-22-3}.
We give the calculation for the scalar perturbations; tensor
perturbations are treated in the same way.
We introduce the canonically normalized field $\psi$ via
\begin{equation*}
\zeta=\frac{1}{\left(2\mathcal{G}_S a^{3}\right)^{1 / 2}} \cdot \psi\;,
\end{equation*}
so that the quadratic action is
\begin{equation*}
\mathcal{S}_{\psi \psi}^{(2)}=\int d^{3} x d t
\left[\frac{1}{2} \dot{\psi}^{2}+\frac{1}{2}
\frac{\ddot{\alpha}}{\alpha} \psi^{2}-\frac{u_S^2}{2a^2} (\vec{\nabla} \psi)^{2}\right]\;,
\end{equation*}
where
\begin{equation*}
\alpha=\left(2\mathcal{G}_S a^{3}\right)^{1 / 2} =
\frac{\mbox{const}}{(-t)^{\frac{2\mu - 3\chi}{2}}} \; .
\end{equation*}
Once the inequality
\eqref{jul5-22-100} is satisfied,
the second term in the integrand is negligible at early times
$t \to -\infty$, and the field $\psi$
can be treated within the WKB approximation.
The properly normalized negative-frequency
WKB solution is
\begin{equation}
\psi_{WKB} =\frac{1}{(2 \pi)^{3 / 2}} \frac{1}{\sqrt{2 \omega}} \cdot
\mathrm{e}^{- i \int \omega dt}
=\frac{1}{(2 \pi)^{3 / 2}} \sqrt{\frac{d}{2 u_{S} k}} (-t)^{\chi / 2}
\cdot \mathrm{e}^{ i \frac{u_{S}}{d} \frac{k}{1-\chi} (-t )^{1-\chi}},\nonumber
\end{equation}
where
\begin{equation*}
\omega = \frac{u_S k}{a}=
\frac{u_S\cdot k}{d (-t)^{\chi}}\;.
\end{equation*}
We now solve the complete equation \eqref{jan31-22-5}
for perturbation $\zeta$ with the early time asymptotics
$\zeta \to \zeta_{WKB} = \left(2\mathcal{G}_S a^{3}\right)^{-1 / 2}
\psi_{WKB}$ and obtain
\begin{equation*}
\zeta=\mathfrak{C} \cdot (-t)^{\delta} \cdot H_{\nu}^{(2)}\left(\beta (-t)^{1-\chi}\right)\;,
\end{equation*}
where
\begin{align*}
\delta &=\frac{1+2 \mu-3 \chi}{2}\;, \\
\beta &=\frac{u_{S} k}{d(1-\chi)}\;, \\
\nu &=\frac{\delta}{\gamma}=\frac{1+2 \mu-3 \chi}{2(1-\chi)}\;,
\end{align*}
and the normalization factor $\mathfrak{C}$ is found by matching
to the WKB solution;
modulo an irrelevant time-independent phase, we have
\begin{equation*}
\mathfrak{C}=\frac{1}{(g g_S)^{1 / 2}}
\frac{1}{2^{5 / 2} \pi(1-\chi)^{1 / 2}} \frac{1}{d^{3 / 2}}\;.
\end{equation*}
At late times (formally, $|t| \to 0$), this solution is
time-independent,
\begin{equation*}
\zeta=
(- i) \frac{\mathfrak{C}}{\sin (\nu \pi)} \frac{(1-\chi)^{\nu}}{u_{S}^{\nu} \Gamma(1
-\nu)}\left(\frac{2 d}{k}\right)^{\nu}.
\end{equation*}
It determines the scalar power spectrum via
\begin{equation*}
\mathcal{P}_{\zeta}=4 \pi k^{3} \zeta^{2}\;.
\end{equation*}
Collecting all factors, we obtain the result quoted in
\eqref{general_ampl}.
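The late-time constancy of $\zeta$ rests on the small-argument asymptotics of the Hankel function, $H_\nu^{(2)}(z) \simeq i (z/2)^{-\nu}/[\Gamma(1-\nu)\sin(\nu\pi)]$ for $z \to 0$; this can be checked numerically (a sketch, not the paper's code):

```python
# Numerical check of the small-argument Hankel asymptotics used in the
# late-time limit (a sketch, not from the paper): for small z,
#   H_nu^(2)(z) ~ i (z/2)^(-nu) / (Gamma(1-nu) sin(nu pi)).
import mpmath as mp

nu = mp.mpf('1.4')     # representative non-integer order, nu > 1
z = mp.mpf('1e-8')
exact = z**nu * mp.hankel2(nu, z)
limit = 1j * 2**nu / (mp.gamma(1 - nu) * mp.sin(nu*mp.pi))
assert abs(exact - limit) < 1e-6
```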
The tensor power spectrum is obtained by replacing
$2\mathcal{G}_S \to \mathcal{G}_T/4$
(i.e., $g_S \to 1/4$) and $u_S \to u_T=1$,
and multiplying by 2 due to the two tensor polarizations.
This gives the result for $\mathcal{P}_T$ quoted in
\eqref{general_ampl}.
\section*{Appendix C. Largest terms in cubic actions.}
In this Appendix we collect the expressions for cubic
actions that contain the largest number of {\it spatial}
derivatives.
As we discuss in the main text, we consistently work
off-shell, i.e., do not use equations of motion for dynamical
perturbations $\zeta$, $h_{ij}$ when
evaluating the unconstrained cubic action.
Nor do we employ field redefinitions to get rid
of the terms in the cubic action which are
proportional to the linearized field
equations; we only use the background equations of motion and
perform integrations by parts. This is precisely what is done in
Refs.~\cite{DeFelice:2011zh,Gao:2011qe,Ageeva:2020gti}.
In this approach, no
coefficients in the cubic action are enhanced by inverse powers of $u_S$.
The terms with the largest number of spatial
derivatives are readily extracted from
Ref.~\cite{Ageeva:2020gti}. We stick to the action \eqref{Hor_L} with $G_5=0$, or,
equivalently, the action~\eqref{adm_lagr}; furthermore,
in our set of models we have
$G_4 = G_4 (\phi)$, or, equivalently,
\begin{equation}
A_4 = - B_4 = -B_4 (\hat{t})\;,\nonumber
\end{equation}
(no dependence on $N$, see eq.~\eqref{A4old}).
At a given moment of time we rescale spatial coordinates to set
$a=1$ as we do in Sec.~\ref{sec:srong-preliminaries} and later in the main
text. We work in cosmic time with $N=1$.
We consider various cubic terms in turn.
\subsection*{C1. Triple-$\zeta$ action.}
In the purely scalar sector, the maximum number of spatial derivatives
in the cubic action is 4, and the relevant terms, as given in
Ref.~\cite{Ageeva:2020gti}, are
\begin{align*}
\mathcal{S}^{(3)}_{\zeta\zeta\zeta}
&= \int d {t}~d^3 {x} \left[
\Lambda_7 \dot{\zeta} \left( {\partial}^2 \zeta \right)^2
+ \Lambda_8 \zeta \left( {\partial}^2 \zeta \right)^2
+ \Lambda_9 {\partial}^2 \zeta \left( {\partial}_i \zeta \right)^2
\right.
\nonumber \\
&~~~~~~~~~~~~~~~~~\left. + \Lambda_{10} \dot{\zeta} \left( {\partial}_i
{\partial}_j \zeta \right)^2
+ \Lambda_{11} \zeta \left( {\partial}_i {\partial}_j \zeta \right)^2
\right] \; ,
\end{align*}
where, as before, $\partial^2 = \partial_i \partial_i$ is the spatial Laplacian, and
\begin{subequations}
\label{jun11-22-10}
\begin{align}
& \Lambda_8 = - \Lambda_{11} =
-\frac{3\mathcal{ G}_T^3}{2\Theta^2}\;,
\label{jun11-22-21}
\\
& \Lambda_9 =
-\frac{2\mathcal{ G}_T^3}{\Theta^2} \;,
\end{align}
\end{subequations}
with $\Lambda_7 = - \Lambda_{10}$.
The function $\Theta$, entering \eqref{jun11-22-10}, is given by
\eqref{theta}, and
for the solution in Sec.~\ref{sec:solution-powerlaw}, the expression
for $\Theta$ reduces to \eqref{jun11-22-15}.
One notices
that terms with $\Lambda_7$ and $\Lambda_{10}$ cancel
out upon integration by parts
(using $\Lambda_7 = - \Lambda_{10}$
and neglecting the term of order
$\dot{\Lambda}_{10} \, \partial_i\partial_j \zeta\, \partial_i \zeta \, \partial_j\zeta$).
Moreover, among the remaining three monomials, only two are independent,
modulo integration by parts, since
\begin{equation}
\int~d^3x~ \zeta \left( {\partial}_i {\partial}_j \zeta \right)^2 =
\int~d^3x \left[ \zeta \left( {\partial}^2 \zeta \right)^2
+ \frac{3}{2}
{\partial}^2 \zeta \left( {\partial}_i \zeta \right)^2 \right].\nonumber
\end{equation}
Making use of \eqref{jun11-22-21}, one finds that the
relevant part of the
triple-$\zeta$
action has just one term
\begin{align}
\mathcal{S}^{(3)}_{\zeta\zeta\zeta}
= \int d {t}~d^3 {x}
\Lambda_\zeta
{\partial}^2 \zeta \left( {\partial}_i \zeta \right)^2 \; ,
\label{jun11-22-22}
\end{align}
where
\begin{equation}
\Lambda_\zeta = \Lambda_9 - \frac{3}{2} \Lambda_8 =
\frac{\mathcal{ G}_T^3}{4\Theta^2} \; .\nonumber
\end{equation}
\subsection*{C2. $h \zeta \zeta$, $hh \zeta$ and triple-$h$ actions.}
In general Horndeski theory with $G_5 \neq 0$, and/or
$G_4 = G_4 (\phi, X)$, the cubic $h \zeta \zeta$ action
has the following general form (see Ref.~\cite{Gao:2012ib} where explicit
expressions are given)
\begin{align}
\label{ssh}
\mathcal{ S}^{(3)}_{\zeta \zeta h} &= \int d {t}~d^3 {x}
\left[ c_1
h_{ij}\zeta_{,i}\zeta_{,j}
+c_2 \dot h_{ij} \zeta_{,i}\zeta_{,j}
+c_3 \dot h_{ij}\zeta_{,i}\psi_{,j} \right.
\nonumber \\
&~~~~~~~~~~~~~~~+\left. c_4 \partial^2h_{ij}\zeta_{,i}\psi_{,j}
+c_5\partial^2 h_{ij}\zeta_{,i}\zeta_{,j}
+c_6\partial^2 h_{ij}\psi_{,i}\psi_{,j}\right] \; ,
\nonumber
\end{align}
where
\begin{equation*}
\psi=\partial^{-2} \partial_t \zeta \; .
\end{equation*}
The term with $c_5$ involves 4 spatial derivatives. However,
in the particular case that we consider, $G_5=0$, $G_4=G_4(\phi)$, we have
\begin{equation*}
c_4 = c_5 = 0\;.
\end{equation*}
So, the cubic $h \zeta \zeta$ action involves 2 spatial derivatives only.
The general structure of the cubic $\zeta h h$ action is~\cite{Gao:2012ib}
\begin{align*}
\mathcal{ S}^{(3)}_{\zeta hh} &= \int d {t}~d^3 {x}
\left[
d_1\zeta\dot h_{ij}^2
+\frac{d_2}{a^2}\zeta h_{ij,k}h_{ij,k}
+d_3\psi_{,k}\dot h_{ij}h_{ij,k} + d_4\dot\zeta\dot h_{ij}^2 \right.
\nonumber
\\
&\left. +\frac{d_5}{a^2}\partial^2\zeta \dot h_{ij}^2
+d_6\psi_{,ij}\dot h_{ik}\dot h_{jk}
+\frac{d_7}{a^2}\zeta_{,ij}\dot h_{ik}\dot h_{jk}
\right] \; ,
\end{align*}
and in our case we have
\begin{equation*}
d_4 = d_5=d_6=d_7 = 0\;.
\end{equation*}
The cubic $\zeta hh$ action also involves at most 2 spatial derivatives.
The cubic action with tensor modes only is given by
\eqref{jun11-22-30}. It involves at most 2 spatial derivatives
as well.
| 36,719 |
\section{Classical derivation of the soliton growth rate}
\label{app:class}
In this appendix we derive the expression (\ref{eq:NsGen}) for the
soliton growth rate as a consequence of the classical equations of
motion. It is convenient to integrate out the gravitational potential
and rewrite the Schr\"odinger--Poisson system as a single equation
with non-local interaction,
\begin{equation}
\label{SPsingle}
i\partial_t\psi+\frac{\Delta \psi}{2m} -4\pi Gm^2\psi\frac{1}{\Delta}|\psi|^2=0\;,
\end{equation}
where $\tfrac{1}{\Delta}$ denotes the Green's function of the
Laplacian. Clearly, this equation conserves the total mass of axions
in a box $M_{\rm tot}=m\int d^3x |\psi|^2$. Now, we make the split
(\ref{split}) into the soliton and gas and, using the fact that the
soliton is a solution of \cref{SPsingle}, obtain the equation
for the gas component,
\begin{equation}
\label{gaseqapp}
\begin{split}
i\partial_t\psi_g+\frac{\Delta\psi_g}{2m}&-4\pi Gm^2\bigg[
\psi_g\frac{1}{\Delta}|\psi_s|^2
+\psi_s\frac{1}{\Delta}(\psi_s^*\psi_g)
+\psi_s\frac{1}{\Delta}(\psi_s\psi_g^*)\bigg]\\
&-4\pi Gm^2\bigg[
\psi_g\frac{1}{\Delta}(\psi_s^*\psi_g)
+\psi_g\frac{1}{\Delta}(\psi_s\psi_g^*)
+\psi_s\frac{1}{\Delta}|\psi_g|^2
+\psi_g\frac{1}{\Delta}|\psi_g|^2\bigg]=0\;.
\end{split}
\end{equation}
In the first line we have grouped the terms that affect the gas field
at linear order, whereas the second line contains interactions. Note
that, despite the presence of the small factor $4\pi Gm^2$, all terms
in the first line are of the same order because $\psi_s$ is
proportional to $(4\pi Gm^2)^{-1/2}$, see
\cref{solitonWF}. Correspondingly, the leading interaction terms are
of order $\sqrt{4\pi G m^2}$.
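In a periodic box, the nonlocal operator $\frac{1}{\Delta}$ appearing above is conveniently evaluated spectrally; a minimal sketch (illustrative grid and units, not the production code used for the simulations in this paper):

```python
# Minimal numerical sketch (assumptions: periodic box; grid size and
# units illustrative; NOT the paper's production code): evaluating
# (1/Delta) rho, e.g. for rho = |psi|^2, via FFT.
import numpy as np

def inverse_laplacian(rho, L):
    """Solve Delta phi = rho on a periodic box of side L via FFT.
    The k = 0 mode (mean of rho) is projected out, as appropriate
    for a periodic self-gravitating system."""
    n = rho.shape[0]
    k = 2*np.pi*np.fft.fftfreq(n, d=L/n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing='ij')
    k2 = kx**2 + ky**2 + kz**2
    rho_k = np.fft.fftn(rho)
    phi_k = np.zeros_like(rho_k)
    np.divide(-rho_k, k2, out=phi_k, where=k2 > 0)  # phi_k = -rho_k/k^2
    return np.fft.ifftn(phi_k).real
```

For instance, the potential term in \cref{SPsingle} would then be $4\pi G m^2\,\psi\,\times$ `inverse_laplacian(|psi|**2, L)`.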
The mass of the gas is not constant. From \cref{gaseqapp} we have,
\begin{equation}
\frac{dM_g}{dt}=m\frac{d}{dt}\int d^3 x|\psi_g|^2
=-(8\pi Gm^3) \Im\int
d^3x\bigg[
\psi_s^*\psi_g\frac{1}{\Delta}(\psi_s^*\psi_g)
+\psi_s^*\psi_g\frac{1}{\Delta}|\psi_g|^2\bigg]\;,
\end{equation}
where we have canceled the boundary terms assuming periodic boundary
conditions. Since the total mass is conserved, this must be
compensated by the change in the soliton mass. Thus, we obtain for the
soliton growth rate,
\begin{equation}
\label{Gammaclass}
\Gamma_s=\frac{8\pi Gm^3}{M_s}
\Im\int
d^3x\bigg[
\psi_s^*\psi_g\frac{1}{\Delta}(\psi_s^*\psi_g)
+\psi_s^*\psi_g\frac{1}{\Delta}|\psi_g|^2\bigg]\;.
\end{equation}
If we neglect the interaction terms in \cref{gaseqapp}, it admits a
set of periodic-in-time solutions. We decompose the gas field into
these eigenmodes,\footnote{Due to the last term in the first line of
\cref{gaseqapp} that mixes $\psi_g$ and $\psi^*_g$, the eigenmodes
contain both positive and negative frequencies \cite{soliton2}. To
avoid cumbersome expressions, we neglect this subtlety in the following
discussion. It does not affect the final result for the soliton
growth rate.}
\begin{equation}
\label{psigdecomp}
\psi_g(t,\mathbf{x})=\sum_i a_i(t){\rm e}^{-i\mathcal{E}_i t}\psi_i(\mathbf{x})\;,
\end{equation}
where the amplitudes $a_i(t)$ slowly vary due to the
interactions. Substituting into \cref{Gammaclass} we obtain,
\begin{equation}
\label{Gammaclass1}
\Gamma_s=-\frac{2m}{M_s}\Im\bigg[
\sum_{i,j} a_i a_j{\rm e}^{-i(\mathcal{E}_i+\mathcal{E}_j-2\mathcal{E}_s)t}A'_{is,js}
+\sum_{i,j,k} a_i a_j a_k^*{\rm e}^{-i(\mathcal{E}_i+\mathcal{E}_j-\mathcal{E}_k-\mathcal{E}_s)t}A'_{is,jk}\bigg]\;,
\end{equation}
where the scattering amplitude $A'_{is,jk}$ is defined in
\cref{Astrip}, and $A'_{is,js}$ is defined similarly with the $k$th
wavefunction replaced by the soliton. All terms in the first sum
quickly oscillate since the gas states are separated from the ground
state by an energy gap of order $|\mathcal{E}_s|$. Thus, they disappear once we
average the growth rate over time scales of order $|\mathcal{E}_s|^{-1}$ and
we omit them in what follows.
The second sum does not vanish upon time averaging because the
combination of energies in the exponent can be small. However, to
obtain the physical growth rate we also have to average
over random initial phases of the gas amplitudes. In the
absence of interactions the amplitudes $a_i$ in \cref{Gammaclass1}
coincide with the initial amplitudes $a_i^{(0)}$ and thus averaging
over their phases will give $\Gamma_s=0$. To obtain a non-zero result,
we have to take into account gas interactions.
The first correction to the free gas field is due to
terms of order $\sqrt{4\pi Gm^2}$ in \cref{gaseqapp}. We can write it
schematically as
\begin{equation}
\label{psig1}
\psi_g^{(1)}=(4\pi Gm^2)\,{\cal G}_{\rm ret}*\bigg\{
\psi_g^{(0)}\frac{1}{\Delta}\big(\psi_s^*\psi_g^{(0)}\big)
+\psi_g^{(0)}\frac{1}{\Delta}\big(\psi_s{\psi_g^{(0)}}^*\big)
+\psi_s\frac{1}{\Delta}|\psi_g^{(0)}|^2\bigg\}\;,
\end{equation}
where $\psi_g^{(0)}$ is the free gas field and
${\cal G}_{\rm ret}$ is the retarded Green's function of the
operator in the first line of (\ref{gaseqapp}). Using the complete set
of eigenmodes, it can be written as,\footnote{For simplicity, we again
neglect the subtleties associated with the negative-frequency components
of the eigenmodes \cite{soliton2}.}
\begin{equation}
\label{Gret}
{\cal G}_{\rm ret}(t-t',\mathbf{x},\mathbf{x}')=\sum_i\int\frac{d\mathcal{E}}{2\pi}
\,\frac{\psi_i(\mathbf{x})\psi_i^*(\mathbf{x}')}{\mathcal{E}-\mathcal{E}_i+i\epsilon}\,{\rm e}^{-i\mathcal{E}(t-t')}\;.
\end{equation}
Substituting this expression into (\ref{psig1}) and expanding
$\psi_g^{(1)}$ and $\psi_g^{(0)}$ into eigenmodes, we obtain the
first-order correction to the amplitudes,
\begin{equation}
\begin{split}
a_i^{(1)}=-\sum_{j,k}\bigg[&
a_j^{(0)}a_k^{(0)}
\frac{{\rm e}^{-i(\mathcal{E}_j+\mathcal{E}_k-\mathcal{E}_i-\mathcal{E}_s)t}}{\mathcal{E}_j+\mathcal{E}_k-\mathcal{E}_i-\mathcal{E}_s+i\epsilon}
A'_{ks,ji}\\
&+a_j^{(0)}{a_k^{(0)}}^*
\frac{{\rm e}^{-i(\mathcal{E}_j-\mathcal{E}_k-\mathcal{E}_i+\mathcal{E}_s)t}}{\mathcal{E}_j-\mathcal{E}_k-\mathcal{E}_i+\mathcal{E}_s+i\epsilon}
(A'^*_{ks,ij}+A'^*_{is,kj})\bigg]\;.
\end{split}
\end{equation}
Next, we insert this expression into the first-order contribution to
the soliton growth rate,
\begin{equation}
\Gamma_s^{(1)}=-\frac{2m}{M_s}\sum_{i,j,k}
\Big(a_i^{(1)}a_j^{(0)}{a_k^{(0)}}^*
+a_i^{(0)}a_j^{(1)}{a_k^{(0)}}^*
+a_i^{(0)}a_j^{(0)}{a_k^{(1)}}^*\Big)
{\rm e}^{-i(\mathcal{E}_i+\mathcal{E}_j-\mathcal{E}_k-\mathcal{E}_s)t}A'_{is,jk}\;,
\end{equation}
and average over the phases of $a_i^{(0)}$ using
\begin{equation}
\langle a_i^{(0)}a_j^{(0)}{a_{i'}^{(0)}}^*{a_{j'}^{(0)}}^*\rangle
=f_if_j(\delta_{ii'}\delta_{jj'}+\delta_{ij'}\delta_{ji'})\;.
\end{equation}
Upon a somewhat lengthy, but straightforward calculation, we arrive at
\begin{equation}
\begin{split}
\langle \Gamma_s\rangle=\frac{m}{M_s}
\Im\sum_{i,j,k}\bigg\{&\frac{f_jf_k+f_if_k}{\mathcal{E}_k-\mathcal{E}_j-\mathcal{E}_i+\mathcal{E}_s+i\epsilon}
|A'_{is,jk}+A'_{js,ik}|^2\\
&+\frac{f_jf_k}{-\mathcal{E}_i+\mathcal{E}_s+i\epsilon}
\Big[(A'_{is,jj}+A'_{js,ij})(A'^*_{is,kk}+A'^*_{ks,ik})+{\rm
h.c.}\Big]\\
&+\frac{f_if_j}{\mathcal{E}_i+\mathcal{E}_j-\mathcal{E}_k-\mathcal{E}_s-i\epsilon}|A'_{is,jk}+A'_{js,ik}|^2\bigg\}.
\end{split}
\end{equation}
In the final step we use the formula
\begin{equation}
\Im\frac{1}{z+i\epsilon}=-\pi\delta(z)\;.
\end{equation}
Then the second term vanishes because $\mathcal{E}_i\neq \mathcal{E}_s$, whereas the
rest of the terms reproduce \cref{eq:NsGen}. Thus, we have shown that
the classical derivation leads to the same soliton
growth rate as the quantum mechanical one, upon averaging over the
ensemble of gas realizations with different initial phases.
The above derivation also allows us to estimate the r.m.s. fluctuations
of $\Gamma_s$ in individual realizations. To this end, let us return
to \cref{Gammaclass1} and smooth it with a Gaussian filter over time
scales $\langle\Gamma_s\rangle^{-1}>\tau\gg|\mathcal{E}_s|^{-1}$. We obtain,
\begin{equation}
\Gamma_s=-\frac{2m}{M_s}\Im\sum_{i,j,k} a_ia_ja_k^*
{\rm e}^{-i(\mathcal{E}_i+\mathcal{E}_j-\mathcal{E}_k-\mathcal{E}_s)t}A'_{is,jk}{\rm e}^{-\tau^2(\mathcal{E}_i+\mathcal{E}_j-\mathcal{E}_k-\mathcal{E}_s)^2/2}\;.
\end{equation}
To get the r.m.s. fluctuations, we subtract $\langle\Gamma_s\rangle$,
square the result and average over the gas phases. In the latter step
we can replace $a_i$ with $a_i^{(0)}$ to obtain the leading
contribution. Retaining only the unsuppressed terms we obtain,
\begin{equation}
\begin{split}
\langle\delta\Gamma_s^2\rangle&\simeq
\bigg(\frac{m}{M_s}\bigg)^2\sum_{i,j,k}f_if_jf_k\,
|A'_{is,jk}+A'_{js,ik}|^2{\rm e}^{-\tau^2(\mathcal{E}_i+\mathcal{E}_j-\mathcal{E}_k-\mathcal{E}_s)^2}\\
&\simeq\frac{\sqrt{\pi}}{\tau}
\bigg(\frac{m}{M_s}\bigg)^2\sum_{i,j,k}f_if_jf_k\,
|A'_{is,jk}+A'_{js,ik}|^2\delta(\mathcal{E}_i+\mathcal{E}_j-\mathcal{E}_k-\mathcal{E}_s)\;.
\end{split}
\end{equation}
Comparing this with the expression (\ref{eq:NsGen}) for the rate,
we get an estimate
\begin{equation}
\langle\delta\Gamma_s^2\rangle\sim\frac{1}{\tau}\frac{m}{M_s} f_g
\langle\Gamma_s\rangle\;.
\end{equation}
The fluctuations are much smaller than the average if
$\langle\Gamma_s\rangle\tau\gg mf_g/M_s$, which can always be achieved
by an appropriate choice of the smoothing scale, as long as the number
of particles in the soliton is much larger than the occupation numbers
of individual modes in the gas, $M_s/m\gg f_g$.
\section{Formation of axion soliton from the gas}
\label{sec:levkov}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.7\textwidth]{plot/input_fig.pdf}
\caption{
Gas parameters for the simulations with soliton
formation. Solid lines bound the regions without Jeans instability
for different simulation box sizes (see \cref{eqn:jeans_instability}).
The number of runs on different lattices is indicated in
parentheses.
\label{fig:formation_input}
}
\end{center}
\end{figure}
In this appendix we report the results of simulations with
formation of the soliton from the gas. We use the same numerical
scheme and initial conditions for the gas as described in
\cref{sec:setup}, but we do not put the initial soliton. Instead, we
wait for the soliton to emerge spontaneously.
The purpose
of these simulations is twofold. First, we cross-check our numerical
approach by comparing with the simulations carried out in
\cite{Levkov:2018kau}.\footnote{We thank Dmitry Levkov and Alexander
Panin for sharing with us their results for a detailed comparison.}
Second, we investigate to what extent the evolution of spontaneously
formed solitons is similar to the evolution of the solitons inserted
into the gas from the start.
We perform 118 independent simulations with the parameters summarized
in \cref{fig:formation_input}. The parameter space is restricted by
the requirement that the Jeans instability be absent, so that the gas
does not form a halo and remains homogeneous.
Figure \ref{fig:formation_case} shows the results of a typical
simulation run. The maximal axion density within the simulation box
remains small for times shorter
than the relaxation time (\ref{trelax}), marked with the red
dotted line. Then it starts growing, which signals the formation of a
soliton. As in \cref{sec:simulation}, we determine the
soliton mass from its peak density using \cref{eq:Mg}. We also
construct smoothed peak density and soliton mass using a top-hat
filter with the width $t_{\rm width}=70/k_g^2$. The smoothed
dependences are shown in the figure with thick blue lines.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.9\textwidth]{plot/case_study_fig.png}
\caption{
Example of spontaneous soliton formation in axion gas with parameters
${({\cal N}=128,k_g=0.5,f_g=0.02)}$. From top to
bottom: maximal density in the simulation box,
soliton mass estimated from the peak density,
virial ratio $E_U/E_K$, total energy in the box. Thick blue lines show
the smoothed dependences. Yellow dotted line is the fit
(\ref{eq:fitting_fig}). Vertical red and green dotted lines mark the
relaxation time (\ref{trelax}) and the measured soliton formation
time, respectively.
\label{fig:formation_case}
}
\end{center}
\end{figure}
To pin down the moment of soliton formation, we use the method
proposed in \cite{Veltmaat:2018}. We identify the density maximum
within the simulation box and compute the kinetic ($E_K$) and
potential ($E_U$) energy in a spherical region around it.
The radius of the sphere is chosen as the radius at which the
shell-averaged density drops to half of its peak value. To calculate
the kinetic energy, we evaluate the field gradient,
subtract the center-of-mass velocity
contribution, square the result and integrate
over the volume of the sphere. The potential energy is approximated
by the potential energy of a uniform ball with the mass enclosed
inside the sphere.
For a random peak in the gas the ratio $E_{U}/E_K$ is
close to zero, whereas for the soliton it obeys the virial
condition\footnote{The ratio is different from $-2$ because we
consider only the inner part of the whole soliton.}
$E_{U}/E_K\simeq -2.8$. In \cref{fig:formation_case} we see that this
ratio changes abruptly from $0$ to $-2.8$ around $t\sim\tau_{\rm
rel}$. We identify the soliton formation time $\tau_{\rm form}$ as
the moment when the smoothed curve $E_{U}/E_K$ crosses half of its
virial value,
\begin{equation}
E_U/E_K\Big|_{\tau_{\rm form}}=-1.4\;.
\end{equation}
This time is marked with the green dotted line in the plot. We see
that it agrees well with the relaxation time $\tau_{\rm rel}$.
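The detection criterion above reduces to a simple threshold crossing. A minimal Python sketch (the helper name \texttt{formation\_time} and the toy time series are illustrative; the smoothing of the ratio and the computation of $E_U$, $E_K$ are assumed to have been done upstream):

```python
# Sketch of the soliton-formation detector described above: the formation
# time is the first moment when the smoothed virial ratio E_U/E_K crosses
# half of its solitonic value, i.e. -1.4.

def formation_time(times, eu_over_ek, threshold=-1.4):
    """Return the first time at which the (smoothed) ratio E_U/E_K
    drops below `threshold`, or None if it never does."""
    for t, r in zip(times, eu_over_ek):
        if r <= threshold:
            return t
    return None

# Toy time series: the ratio sits near 0 before formation and
# settles near the virial value -2.8 afterwards.
times = [100.0 * i for i in range(10)]
ratio = [0.0, -0.1, 0.05, -0.3, -1.0, -1.6, -2.5, -2.8, -2.7, -2.8]
tau_form = formation_time(times, ratio)
```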
Ref.~\cite{Levkov:2018kau} suggested that upon formation the growth of
the soliton is described by a power-law
\begin{equation}
M_s(t) = M_0\left( \dfrac{t}{\tau_0}-1\right)^\alpha
\label{eq:fitting_fig}
\end{equation}
with $\alpha=1/2$, $\tau_0=\tau_{\rm rel}$ and $M_0\simeq 12\pi
k_g$. To verify if this law is obeyed in our simulations, we fit the
smoothed soliton mass at $t>\tau_{\rm form}$
with the formula (\ref{eq:fitting_fig}) allowing
$\alpha$, $\tau_0$, $M_0$ to vary as free fitting parameters. The
fitting time range is restricted by the condition that the
energy error
$\left|E(t)/E(0)-1\right|$ does not exceed
$0.1\%$.
The result of the
fit is shown by yellow dotted line
in \cref{fig:formation_case}. The best-fit parameters for this run are
$\alpha=0.22$, $\tau_0=8.2\times 10^5$, $M_0=17.03$.
Note that
the value of $\alpha$ is significantly lower than $1/2$.
We will discuss shortly how this result can be reconciled with those
of Ref.~\cite{Levkov:2018kau}.
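The structure of the fit can be illustrated with a minimal sketch. In the actual analysis all three parameters $\alpha$, $\tau_0$, $M_0$ vary in a nonlinear fit; for illustration we fix $\tau_0$ at its true value, in which case $\log M_s$ is linear in $\log(t/\tau_0-1)$ and $\alpha$, $M_0$ follow from ordinary least squares (the helper \texttt{fit\_powerlaw} and the synthetic data are hypothetical):

```python
import math

# Sketch of the power-law fit M_s(t) = M0*(t/tau0 - 1)**alpha.
# With tau0 held fixed, taking logs linearizes the model, so alpha is
# the least-squares slope of log(M_s) versus log(t/tau0 - 1).

def fit_powerlaw(ts, ms, tau0):
    xs = [math.log(t / tau0 - 1.0) for t in ts]
    ys = [math.log(m) for m in ms]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    alpha = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    m0 = math.exp(my - alpha * mx)
    return alpha, m0

# Synthetic noiseless data following the law with alpha = 0.5.
tau0, alpha_true, m0_true = 1.0e5, 0.5, 18.85
ts = [tau0 * (1.0 + 0.1 * i) for i in range(1, 21)]
ms = [m0_true * (t / tau0 - 1.0) ** alpha_true for t in ts]
alpha_fit, m0_fit = fit_powerlaw(ts, ms, tau0)
```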
\begin{figure}[t]
\begin{center}
\includegraphics[width=1.0\textwidth]{plot/stat_fig.pdf}
\caption{
Results of the measurements in the simulations with soliton
formation. The histograms show the distributions
of the soliton formation time $\tau_{\rm form}$, and the
parameters in the power-law fit (\ref{eq:fitting_fig})
of the soliton mass growth: $\alpha$, $\tau_0$, $M_0$. The
relaxation time $\tau_{\rm rel}$ is given by \cref{trelax} and $k_g$
is the gas momentum.
\label{fig:formation}
}
\end{center}
\end{figure}
We repeat the above analysis for each of 118 runs and construct the
histograms of
$\tau_{\rm form}$, $\alpha$, $\tau_0$, $M_0$
measured in different runs. These histograms are shown in
\cref{fig:formation} together with their means and standard
deviations. The mean values of $\tau_{\rm form}$, $\tau_0$ and
$M_0$ are in good agreement with the findings of
\cite{Levkov:2018kau}. On the other hand, for the exponent we obtain a
lower mean, $\alpha=0.33\pm 0.02$. It is important to notice,
however, that the distribution of $\alpha$ is quite broad, extending
from\footnote{There are three outlier runs with very high
($\alpha\simeq 0.8$) and very low ($\alpha\simeq 0.1$)
exponents. The origin of these large fluctuations is unknown.}
$0.2$ to $0.5$. From the analysis in the main text we know
that the soliton growth rate decreases when the soliton gets
heavier. This suggests that the spread in $\alpha$ can arise due to
different soliton masses achieved in different simulations. In this
picture, the runs with larger duration should yield lower values of
$\alpha$ since the solitons in them have more time to grow.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.7\textwidth]{plot/tend_alpha_fig.pdf}
\caption{The exponent in the power-law fit (\ref{eq:fitting_fig}) for
the soliton mass against final simulation time $t_{\rm end}$ measured in
units of the relaxation time (\ref{trelax}).
Longer simulations produce more massive solitons which have slower
growth rate and hence lower values of $\alpha$.
Three outlier
simulations with $\alpha\simeq 0.8$ and $\alpha\simeq 0.1$ represent
large fluctuations of unknown origin.
\label{fig:tend}
}
\end{center}
\end{figure}
To check this expectation, we plot in \cref{fig:tend} the best-fit value
of $\alpha$ as function of the duration of the
simulation\footnote{More precisely, we take $t_{\rm end}$ to be the
end of the time range used in the fit (\ref{eq:fitting_fig}).} in units of
relaxation time.
Apart from a few outliers, the bulk of the data exhibit a pronounced
anti-correlation between $\alpha$ and $t_{\rm end}/\tau_{\rm
rel}$. The exponent varies from $\alpha\simeq 0.5$ for newly-born
solitons down to $\alpha\lesssim 0.25$ for long-lived solitons. Thus,
the value $\alpha=1/2$ found in \cite{Levkov:2018kau} can be explained
by short duration of the simulations used in the analysis, whereas
longer simulations carried out in the present work uncover a trend for
the decrease of $\alpha$ with time. This trend is consistent, both
qualitatively and quantitatively, with the results on heavy soliton
growth from the main text. Indeed, the scaling (\ref{powerfit}) of the
soliton growth rate implies
\begin{equation}
\frac{1}{M_s}\frac{dM_s}{dt}\propto\frac{1}{M_s^n}~~~~~
\Longrightarrow~~~~~M_s\propto \bigg(\frac{t}{\tau_0}-1\bigg)^{1/n}\;,
\end{equation}
which leads to the identification $\alpha=1/n$. Thus, the slow-down of
the soliton growth with $\alpha$ decreasing from $1/2$ to $1/4$ as
time goes on matches the steepening of the $\Gamma_s$ dependence on
$k_gr_s$ with $n$ increasing from $2$ to $4$ at smaller $k_gr_s$ (see
\cref{sec:results}).
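The integration step above can be checked numerically: $\mathrm{d}M_s/\mathrm{d}t = C\,M_s^{1-n}$ has the closed-form solution $M_s(t)=\big(M_s(0)^n+nCt\big)^{1/n}$, which approaches the pure power law $M_s\propto t^{1/n}$ at late times. A sketch in arbitrary units ($n$, $C$ and the initial mass are illustrative):

```python
import math

# Numeric check: dM/dt = C*M**(1-n), i.e. (1/M)dM/dt ~ 1/M**n, has the
# closed-form solution M(t) = (M(0)**n + n*C*t)**(1/n).

def evolve(m0, c, n, t_end, dt):
    """Forward-Euler integration of dM/dt = c*M**(1-n)."""
    m, t = m0, 0.0
    while t < t_end - 0.5 * dt:
        m += dt * c * m ** (1 - n)
        t += dt
    return m

n, c, m0 = 3, 1.0, 1.0
m_num = evolve(m0, c, n, t_end=1000.0, dt=0.01)
m_exact = (m0 ** n + n * c * 1000.0) ** (1.0 / n)

# Late-time logarithmic slope between t=500 and t=1000 is close to 1/n.
m_mid = (m0 ** n + n * c * 500.0) ** (1.0 / n)
slope = math.log(m_exact / m_mid) / math.log(2.0)
```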
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.7\textwidth]{plot/stack_fig.png}
\caption{Same as upper panel in
\cref{fig:stack} with the addition of the soliton mass evolution
from a run with
soliton formation (in grey). The spontaneously
formed soliton approaches the same
growth rate as the solitons embedded in the gas from the start.
\label{fig:stack_fig}
}
\end{center}
\end{figure}
The above match is non-trivial. The simulations of
\cref{sec:simulation}
are
performed with Maxwellian gas and the growth rate is extracted from
time ranges shorter than half of the relaxation time to avoid any
significant change in the gas distribution. On the other hand, the
simulations in this appendix, by construction, span more than the
relaxation time. Moreover, it is known \cite{Levkov:2018kau} that the
soliton formation is preceded by a dramatic change in the gas
distribution with enhanced population of low-momentum modes. Thus, the
solitons in the two simulation suites are embedded in environments with
very different histories and their growth rate need not be the
same. Nevertheless, it turns out that the soliton growth exhibits a
remarkable universality. In \cref{fig:stack_fig} we superimpose the
time-dependent mass of a soliton born in the gas on top of the soliton
masses from our main simulation suite with solitons incorporated in
the initial conditions. We see that after a brief transient period of
faster growth, the formed soliton approaches the same time
dependence as the solitons with the same mass
that are present in the gas from the
start.
\begin{figure}[t]
\begin{center}
\includegraphics[width=\textwidth]{plot/VelDist.png}
\caption{
{\it Left panel:} Evolution of momentum distribution of axions in the
simulation box.
The mode amplitudes are spherically
averaged over shells with fixed $k=|\k|$.
{\it Right panel:} Zoom-in on the low-$k$ part
of the spectrum, where we divide the distribution by $k^2$ to make
the difference between curves more pronounced. The
distribution in a simulation with spontaneous formation of
the soliton from the gas (${\cal N}=128,\,k_g=0.5,\,f_g=0.06$)
is shown by solid lines with circles. It is
sampled at three moments of time: at the beginning of the simulation
(black),
shortly before soliton formation (red) and after
the soliton has formed (blue). Just before the soliton
forms the distribution features a pronounced bump at low momenta which
disappears afterwards. For comparison, we show with dashed lines
the distribution in a
simulation with soliton inserted in the initial conditions
($k_gr_{s}^{\rm init}=1.51$) sampled at the same time intervals. The Maxwell
distribution corresponding to the input gas parameters
is shown with the thick green line.
The momentum wavefunction of the soliton, with the mass achieved at
the latest sampling time, is plotted with the thick yellow line.
\label{fig:Maxwell}
}
\end{center}
\end{figure}
This suggests that the gas distribution restores its Maxwellian form
after the soliton formation. We check this conjecture by measuring the
amplitudes of axion modes $|\psi_\k|^2$ in the simulation from
\cref{fig:stack_fig} at several moments of time: at the beginning of
the simulation ($t=0$), before the soliton formation ($t=0.89\,\tau_{\rm
rel}$), and after the soliton has formed ($t=1.78\,\tau_{\rm rel}$). The
amplitudes are averaged over spherical shells with fixed values of
$k=|\k|$. The results are shown in \cref{fig:Maxwell} (solid
lines with circles). We see that right
before the soliton formation, the distribution
develops a pronounced bump in the low-$k$ part of the spectrum,
consistently with the results of \cite{Levkov:2018kau}. This bump,
however, disappears after the soliton is formed and at late times the
distribution qualitatively resembles Maxwellian (shown by the thick green
line). We also superimpose in the same figure the distribution for the
run with soliton initially present in the gas sampled at the same
intervals from the start of the simulation
(dashed lines). The parameters of this run are
$({\cal N}=128,\,k_g=0.5,\,f_g=0.06,\,k_gr_{s}^{\rm init}=1.51)$ and correspond to
the blue curve in \cref{fig:stack_fig}. In this case we see that
the distribution preserves the Maxwellian shape at all times, without
any excess at low-$k$ modes. We conclude that the presence of
the soliton affects the axion gas in a curious way: it stabilizes
the Maxwell distribution of axion momenta.
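The shell averaging behind these spectra can be sketched in a few lines of Python; a toy radially decreasing function on a small lattice stands in for the measured $|\psi_\k|^2$, and the helper name \texttt{shell\_average} is illustrative:

```python
import math
from itertools import product

# Sketch of the spherical shell averaging used for the momentum
# distribution: mode amplitudes |psi_k|^2 on the FFT lattice are binned
# by the integer shell index nearest to |k| and averaged within each bin.

def shell_average(amplitude, n):
    """Average amplitude(kx,ky,kz) over shells of fixed |k| on an
    n^3 FFT lattice with integer mode numbers in [-n/2, n/2)."""
    sums, counts = {}, {}
    modes = list(range(-n // 2, n // 2))
    for kx, ky, kz in product(modes, repeat=3):
        s = int(round(math.sqrt(kx * kx + ky * ky + kz * kz)))
        sums[s] = sums.get(s, 0.0) + amplitude(kx, ky, kz)
        counts[s] = counts.get(s, 0) + 1
    return {s: sums[s] / counts[s] for s in sums}

# Toy spectrum: a radially decreasing function of |k|.
spec = shell_average(
    lambda kx, ky, kz: math.exp(-(kx**2 + ky**2 + kz**2) / 8.0), n=8)
```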
It is worth stressing that we are talking about the distribution in
the gas and not in the soliton itself. Though our numerical procedure
does not allow us to separate the two, we can compare the total
distribution to the wavefunction of the soliton in momentum
space. This is shown by thick yellow line in \cref{fig:Maxwell}. We take
the soliton mass to be $M_s=20$ corresponding to the latest sampling time.
We see that the contamination of the distribution
from the soliton is negligible.
We do not attempt to explore this ``Maxwellization'' phenomenon further
in this work. The axion momentum distribution is
subject to significant temporal fluctuations which form an obstruction
for moving beyond qualitative statements. For a quantitative study,
one needs to devise less noisy probes.
We leave this task for the
future.
\section{Details of the numerical simulations}
\label{sec:nums}
\subsection{Convergence tests}
\label{sec:resolution}
In this work, we adopt the second-order drift-kick-drift
operator (\ref{DKD}) to
evolve the wave function at each time step ${\rm d}t$.
The gravitational potential $\Phi$ and the kinetic energy operator
$\Delta$ are calculated with the CUDA Fast Fourier Transform library
(cuFFT)\footnote{\url{https://developer.nvidia.com/cufft}}.
We notice that the single precision of cuFFT causes $\approx 10\%$
mass loss in $10^6$ time steps.
We therefore conduct the simulations in this work using double precision.
This makes the mass loss negligible (less than $10^{-6}$).
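The structure of one drift-kick-drift step can be illustrated with a one-dimensional toy version in Python. A naive $O(N^2)$ DFT stands in for cuFFT, and the potential is a fixed mock array rather than the self-consistent solution of the Poisson equation; since every factor is a pure phase, the total mass $\sum|\psi|^2$ is conserved to machine precision (the $\approx 10\%$ loss quoted above was a single-precision artifact):

```python
import cmath, math

# 1D toy drift-kick-drift step: half drift in Fourier space, potential
# kick in real space, half drift. Naive DFT replaces cuFFT; the mock
# potential phi replaces the self-consistent gravitational potential.

def dft(psi, sign):
    n = len(psi)
    out = [sum(psi[x] * cmath.exp(sign * 2j * math.pi * k * x / n)
               for x in range(n)) for k in range(n)]
    if sign > 0:  # inverse transform
        out = [v / n for v in out]
    return out

def dkd_step(psi, phi, dt):
    n = len(psi)
    ks = [2 * math.pi * (k if k < n // 2 else k - n) / n for k in range(n)]
    half = [cmath.exp(-0.5j * k * k * dt / 2) for k in ks]
    psi_k = [h * p for h, p in zip(half, dft(psi, -1))]        # half drift
    psi = [cmath.exp(-1j * v * dt) * p
           for v, p in zip(phi, dft(psi_k, +1))]               # kick
    psi_k = [h * p for h, p in zip(half, dft(psi, -1))]        # half drift
    return dft(psi_k, +1)

n = 16
psi = [cmath.exp(-((x - n / 2) ** 2) / 8.0) for x in range(n)]
phi = [math.cos(2 * math.pi * x / n) for x in range(n)]        # mock potential
mass0 = sum(abs(p) ** 2 for p in psi)
for _ in range(50):
    psi = dkd_step(psi, phi, dt=0.64)
mass1 = sum(abs(p) ** 2 for p in psi)
```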
The requirement that the gas and the soliton must be resolved by the
spatial lattice puts an upper bound on the gas momentum
$k_g$ and a lower bound on the initial soliton size
$r_s^{\rm init}$ accessible in the simulations.
To determine the domain of validity of our code, we perform
several convergence tests. First, we evolve the gas without the
soliton using three different time steps: ${\rm d}t=2/\pi\simeq 0.64$
(our fiducial value),
${\rm d}t=1/\pi\simeq 0.32$ and ${\rm d}t=1/(2\pi)\simeq
0.16$.
The gas parameters in all three runs are
$({\cal N}=128,\,k_g=0.5,\,f_g=0.04)$. The maximal density within the box
and the total energy measured in these runs are shown in the left panel
of \cref{fig:res_gas_sol}. We observe that the density curves
essentially coincide, while the energy error is proportional to
$({\rm d}t)^2$, as it should be. For our fiducial value of ${\rm d}t=2/\pi$, the
error
stays well
below $10^{-7}$. We conclude that the gas with $k_g= 0.5$ is
comfortably resolved in our simulations.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.5\textwidth]{plot/res_gas.png}~~~~
\includegraphics[width=0.49\textwidth]{plot/res_sol.png}
\caption{Convergence tests in the simulations with pure gas (left) and
an isolated soliton (right). In each case we perform three runs:
one with the fiducial time step ${\rm d}t=0.64$,
and two with time steps
reduced by a factor of 2 and 4. The gas momentum is $k_g=0.5$,
whereas the soliton radius is $r_{s}^{\rm init}=1.5$. The lattice size is
${\cal N}=128$ in both cases.
\label{fig:res_gas_sol}
}
\end{center}
\end{figure}
Next, we repeat the same convergence test with an isolated soliton of
radius $r_{s}^{\rm init}=1.5$. The results are shown in the right panel of
\cref{fig:res_gas_sol}. Since the analytical template (\ref{eq:rhos})
used in the simulations to set the initial conditions slightly
deviates from the exact soliton profile, the soliton is initiated in
an excited state which leads to the oscillations of the peak
density. The oscillations obtained with three different time steps
match almost identically. The energy error also exhibits the proper
scaling, $|E(t)/E(0)-1|\propto ({\rm d}t)^2$. However, now it is
significantly larger, reaching up to $10^{-3}$ for the fiducial
${\rm d}t$. This is likely
due to the high frequency of the soliton phase rotation
$|\mathcal{E}_s|\simeq 0.52$, which is less well resolved with the large time
step. Therefore, to correctly capture the evolution of the soliton
wavefunction, we restrict our simulations to $r_{s}^{\rm init}\geq 1.5$.
\begin{figure}[t]
\begin{center}
\includegraphics[width=1.0\textwidth]{plot/res.png}
\caption{Temporal (left) and spatial (right) convergence tests for the
extreme values of the gas momentum and soliton radius $k_g=1$,
$r_{s}^{\rm init}=1.5$.
The temporal test consists of three simulations with the time step
${\rm d}t$ decreased by a factor of 2 or 4 relative to the fiducial value,
whereas the spatial test consists of two simulations with
the box size ${\cal N}$ differing by a factor of 2.
The simulations for the spatial test follow the scaling
relation (\ref{eq:scaling}).
\label{fig:resolution}
}
\end{center}
\end{figure}
For a third test, we superimpose the soliton and the gas and again run
three simulations with decreasing time step. We take the soliton with
$r_{s}^{\rm init}=1.5$ and push the gas momentum up to $k_g=1$. The evolution of
the soliton mass and the total energy in these runs is shown in the
left panel of \cref{fig:resolution}. The soliton mass growth in the
three cases is broadly the same, though detailed features are slightly
different. The energy error is low in the initial time range
$t\lesssim 10^3$ where it also obeys the $({\rm d}t)^2$ scaling. However,
from $t\sim 10^3$ it starts to steadily grow and its scaling with
$({\rm d}t)^2$ gets violated. Still, the error remains small until very late
times. For the fiducial time step it reaches $10^{-3}$ when the
soliton mass exceeds $M_s\simeq 27$ and hence its radius drops below
$r_s\simeq 1.2$. This suggests that the soliton-gas system with
$r_s\sim 1.2$ and $k_g\sim 1$ is at the extreme of our numerical
resolution. Since we are interested in the averaged properties of the
soliton evolution, rather than fine details, we accept $k_g=1$ as the
upper boundary for admissible gas momenta. To ensure the absence of
any excessive loss of precision, we monitor the energy conservation
throughout our simulations and only use data where the energy is
conserved with accuracy better than $10^{-3}$.
Finally, we perform a spatial convergence test.
Instead of varying ${\rm d}x$, which is fixed to $1$ in our
code, we make use of the scaling
symmetry~(\ref{eq:scaling}). It implies that
decreasing ${\rm d}x$ is equivalent to an increase of ${\cal N}$
accompanied by an appropriate rescaling of other parameters. Thus we
consider two simulation runs with
$({\cal N}=128,\, k_g=1,\, f_g=0.04,\, r_{s}^{\rm init}=1.5)$ and
$({\cal N}=256,\, k_g=0.5,\, f_g=0.02,\, r_{s}^{\rm init}=3.0)$.
Note that we do not rescale the time step ${\rm d}t=2/\pi$ which
is tied to the lattice spacing in order to avoid aliasing
\cite{May:2021wwp}.
The results of these two runs are compared in the right panel of
\cref{fig:resolution}. While energy conservation is much better
satisfied on the bigger lattice, the broad features of the mass
evolution in these two runs agree. This further supports the validity
of our numerical results up to the extreme values $k_g=1$,
$r_{s}^{\rm init}=1.5$.
\subsection{Conversion of peak density into soliton mass}
\label{sec:peak}
As discussed in \cref{sec:setup}, we estimate the mass of the soliton
and its radius from the maximal density in the box $\rho_{\rm max}$, assuming
that it corresponds to the soliton peak density $\rho_{s,\,{\rm peak}}$. However,
the interference of the soliton wavefunction with the gas waves can
increase the maximal density above that of the soliton. The
increase is proportional to the product of the soliton and gas
wavefunctions, hence to
the geometric mean of their densities. In more detail, we can
estimate the bias as
\begin{equation}
\frac{\rho_{\rm max}}{\rho_{s,\,{\rm peak}}}-1\sim 2\sqrt{\frac{\rho_g}{\rho_{s,\,{\rm peak}}}}\;,
\end{equation}
which can be significant even for large density contrasts. For
example, the density bias is about $40\%$ for
$\rho_{s,\,{\rm peak}}/\rho_g=30$. The situation is further complicated by large
fluctuations in the local gas density that can further increase the
bias. In particular, when the soliton is too light, its peak becomes
completely obscured by the gas.
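The estimate above follows from adding amplitudes rather than densities: at a point of maximal constructive interference $\rho_{\rm max}=(\sqrt{\rho_{s,\,{\rm peak}}}+\sqrt{\rho_g})^2$, so the relative bias is $2\sqrt{\rho_g/\rho_{s,\,{\rm peak}}}$ up to a subleading $\rho_g/\rho_{s,\,{\rm peak}}$ term. A quick numeric check (the helper \texttt{density\_bias} is illustrative):

```python
import math

# Interference bias check: amplitudes of the soliton and gas
# wavefunctions add, so rho_max = (sqrt(rho_s) + sqrt(rho_g))**2 and the
# relative bias is ~ 2*sqrt(rho_g/rho_s) for large density contrast.

def density_bias(contrast):
    """Relative bias rho_max/rho_s - 1 for peak-to-gas contrast rho_s/rho_g."""
    rho_g, rho_s = 1.0, float(contrast)
    rho_max = (math.sqrt(rho_s) + math.sqrt(rho_g)) ** 2
    return rho_max / rho_s - 1.0

bias_30 = density_bias(30.0)   # contrast used as example in the text
```

This reproduces the quoted $\sim 40\%$ bias at $\rho_{s,\,{\rm peak}}/\rho_g=30$.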
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.75\textwidth]{plot/mass_peak.pdf}
\caption{Ratio of the soliton mass estimator to the true soliton mass
as a function of the density contrast in the axion field generated by
superposition of the soliton and gas wavefunctions. We adopt the threshold
$\rho_{\rm max}>30\,\rho_g$ when measuring the soliton mass from the
simulations.
\label{fig:peak2mass}
}
\end{center}
\end{figure}
To pin down the lowest density contrast between the soliton and the
gas for which the bias is unimportant, we conduct the following
series of auxiliary numerical experiments. We generate a gas field
with given mean density $\rho_g$ and superimpose on it a soliton of
mass $M_s$ {\em without any evolution}. Then we evaluate the estimator
of the soliton mass using our formula
\begin{equation}
M_{s,{\rm est}} = 25.04 \, \rho_{\rm max}^{1/4}\;,
\label{eq:rhos2mc}
\end{equation}
where $\rho_{\rm max}$ is the maximal density of the axion field in the
box. The estimator is compared to the true soliton mass in
\cref{fig:peak2mass}.
We observe that when the soliton is prominent enough, say
$\rho_{s,\,{\rm peak}}\simeq\rho_{\rm max}>100\,\rho_g$, the estimator is
unbiased.
On the other hand, for
$\rho_{\rm max}\lesssim20\,\rho_g$, we are essentially unable to distinguish the
soliton peak against the gas density fluctuations. We adopt the
threshold
$\rho_{\rm max}>30\,\rho_g$ when measuring the soliton mass in our simulations, which
introduces an error of at most 20\% in the mass estimate.
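The measurement procedure can be summarized in a few lines: the estimator (\ref{eq:rhos2mc}) is applied only when the maximal density clears the adopted threshold (the helper name and numeric inputs below are illustrative; units follow the code conventions of the simulations):

```python
# Sketch of the soliton mass measurement: apply the estimator
# M_s = 25.04 * rho_max**(1/4) only when rho_max > 30*rho_g, below which
# the soliton peak cannot be told apart from gas fluctuations.

def soliton_mass(rho_max, rho_g, threshold=30.0):
    if rho_max <= threshold * rho_g:
        return None            # peak indistinguishable from the gas
    return 25.04 * rho_max ** 0.25

m_ok = soliton_mass(rho_max=400.0, rho_g=1.0)   # accepted measurement
m_rejected = soliton_mass(rho_max=20.0, rho_g=1.0)
```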
\section{Discussion and outlook}
\label{sec:conclusion}
\paragraph{Absence of kinetic equilibrium.}
We have found that a soliton (boson star)
immersed into a homogeneous Maxwellian axion gas evaporates if its virial
temperature is about 12 times lower than the virial temperature of the
gas, and grows otherwise. This rules out the possibility of a stable
kinetic equilibrium between the gas and the soliton.
\paragraph{Evaporation of light solitons.}
Though evaporation of cold solitons
may at first sight appear surprising, the mechanism
behind it is quite intuitive.
Being a self-gravitating system, the soliton possesses
negative heat capacity. Thus, a transfer of energy from the hot gas to
the cold soliton makes the latter even colder. This leads to a
run-away of the soliton temperature, and hence its mass, towards zero.
The parametric dependence of the evaporation rate can be estimated
using the following simple considerations.\footnote{We thank Neal Dalal
and Junwu Huang for the discussion on this topic.} Wave interference
in the axion gas produces density inhomogeneities with the
characteristic size of half de Broglie wavelength
$\lambda_a/2=\pi/k_g$. These inhomogeneities can be thought of as
quasi-particles with the mass $M_{qp}\sim \rho_g(\lambda_a/2)^3$
\cite{Hui:2016ltb}. A single quasi-particle colliding with the soliton
transfers to it a recoil momentum
\begin{equation}
\delta p\sim \frac{G M_s M_{qp}}{r_s^2}\cdot \frac{r_s}{v_{qp}}\;,
\end{equation}
where $v_{qp}\sim k_g/m$ is the quasi-particle velocity, and $r_s$
appears as the typical impact parameter. This implies the soliton
recoil energy
\begin{equation}
\delta E_s\sim \frac{\delta p^2}{2M_s}\sim\frac{G^2M_{qp}^2 M_s}{2r_s^2
v_{qp}^2}\;.
\end{equation}
Since the size of the quasi-particle is smaller than $r_s$ for the
light soliton, the recoil energy is distributed non-uniformly
throughout the soliton volume. This
leads to excitation of its normal modes. The number of
axions that get excited from the ground state and hence get lost by
the soliton is of order $\delta N_s\sim -\delta E_s/|\mathcal{E}_s|$. Combining
everything together, we obtain the mass loss of the soliton in a
single quasi-particle collision,
\begin{equation}
\frac{\delta M_s}{M_s}\sim -\frac{G^2 M_{qp}^2 m^2}{2v_{qp}^2}\;,
\end{equation}
where we have used that $|\mathcal{E}_s|r_s^2\sim 1/m$. To obtain the
evaporation rate, we have to multiply this result by the number of
quasi-particles bombarding the soliton in a unit of time,
$J_{qp}\sim 4\pi r_s^2 (\lambda_a/2)^{-3} v_{qp}$. In this way we
arrive at
\begin{equation}
\Gamma_s\sim -\frac{2\pi^4G^2m^3\rho_g^2}{k_g^6}\,(k_g r_s)^2\;,
\end{equation}
which agrees with the exact expressions (\ref{Gammalast})
obtained from the kinetic theory
within a factor of two.
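The chain of estimates above can be verified numerically: multiplying the per-collision mass loss by the quasi-particle flux reproduces the quoted parametric form exactly. A sketch (all parameter values are arbitrary; only the algebra is being checked):

```python
import math

# Check that the quasi-particle estimates chain together into the quoted
# rate: |Gamma_s| ~ (|delta M_s|/M_s per collision) * (flux J_qp)
#              = 2*pi^4*G^2*m^3*rho_g^2/k_g^6 * (k_g*r_s)^2.

G, m, rho_g, k_g, r_s = 1.3, 0.7, 2.1, 0.9, 1.7

lam_half = math.pi / k_g                    # half de Broglie wavelength
m_qp = rho_g * lam_half ** 3                # quasi-particle mass
v_qp = k_g / m                              # quasi-particle velocity
loss = G ** 2 * m_qp ** 2 * m ** 2 / (2 * v_qp ** 2)   # |delta M_s|/M_s
flux = 4 * math.pi * r_s ** 2 * lam_half ** -3 * v_qp  # collisions per time
gamma_chain = loss * flux
gamma_quoted = 2 * math.pi ** 4 * G ** 2 * m ** 3 * rho_g ** 2 \
               / k_g ** 6 * (k_g * r_s) ** 2
```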
We have seen that the threshold for evaporation is set by the equality
of the evaporation rate and the relaxation rate in the gas --- a
competing process leading to the soliton
formation~\cite{Levkov:2018kau}. This explains why the solitons that
are formed
in the gas always have virial temperature comparable
to that of the gas: they are just hot (and heavy) enough to survive.
In what physical situation can the soliton evaporation be relevant?
For fuzzy dark matter, this is the case when a small subhalo with low
velocity dispersion and light solitonic core falls into a bigger halo
with higher velocity dispersion. Evaporation then adds a new
destruction mechanism for the subhalo soliton, on top of the tidal
stripping \cite{Du:2018qor}. The time scale of evaporation is given by
the inverse of $|\Gamma_s|$,
\begin{equation}
t_{\rm evap} \simeq 3\times 10^{9} \,
\left( \frac{ m }{ 10^{-21} \, {\rm eV} }\right)^3\,
\left( \frac{ \rho_g }{ {0.3 \,{\rm GeV/ cm^3}} }\right)^{-2}\,
\left( \frac{v_g }{ 30 \, {\rm km/s} }\right)^6 \,
\left( \frac{k_g r_s }{ 10 }\right)^{-2}
{\rm yr} \;,
\end{equation}
where $\rho_g$ and $v_g$ should be taken as the density and velocity
dispersion of the bigger halo at the orbit of the soliton. The
evaporation time is very sensitive to the halo parameters and can be
longer or shorter than the age of the universe depending on their
precise values.
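The scaling of the evaporation time is convenient to evaluate programmatically; a minimal sketch of the formula above (the function name is illustrative; inputs are the axion mass in eV, halo density in ${\rm GeV/cm^3}$, velocity dispersion in km/s, and $k_gr_s$):

```python
# Evaluation of the evaporation-time scaling quoted above, with the
# fiducial normalizations from the text. Returns the time in years.

def t_evap_yr(m_ev, rho_gev_cm3, v_kms, kg_rs):
    return 3.0e9 * (m_ev / 1.0e-21) ** 3 \
                 * (rho_gev_cm3 / 0.3) ** -2 \
                 * (v_kms / 30.0) ** 6 \
                 * (kg_rs / 10.0) ** -2

t_fid = t_evap_yr(1.0e-21, 0.3, 30.0, 10.0)   # fiducial parameters
t_cold = t_evap_yr(1.0e-21, 0.3, 10.0, 10.0)  # colder halo: faster evaporation
```

The steep $v_g^6$ dependence is what makes the evaporation time so sensitive to the halo parameters.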
Evaporation should also be taken into account in the evolution of
boson stars in merging QCD axion miniclusters. Though here the
particle mass is much higher, the evaporation time can still be much
shorter than the age of the universe due to the very small velocity
dispersion $v_g\sim 10^{-5}\,{\rm km/s}$ in the miniclusters and their
extremely high density $\rho_g\gtrsim 10^6\,{\rm GeV/cm^3}$
\cite{Kolb:1994fi}.
\paragraph{Growth of heavy solitons.}
For solitons with virial temperature above the evaporation threshold
($T_s\gtrsim 0.1\,T_g$) we have found that the growth rate quickly
decreases once the soliton temperature exceeds that of the gas. This
result is in qualitative agreement with other works
\cite{Levkov:2018kau,Chen:2020cef}. The growth rate of heavy solitons
measured from our numerical simulations is consistent with the power
law (\ref{powerfit}) with $n$ between $2$ and $4$. We have presented
analytic arguments favoring $n=4$ in the limit $k_gr_s\to 0$, which
is compatible with the numerical data in the lowest
$k_gr_s$ bins. These bins, however, suffer from large uncertainties and
it remains unclear if the range $k_gr_s\gtrsim 0.2$
probed
in the
simulations is sufficient to reach into the asymptotic heavy
soliton regime.
The power-law dependence of the rate (\ref{powerfit})
translates into power-law growth of the soliton mass,\footnote{Recall
that $r_s\propto M_s^{-1}$, whereupon the evolution equation for
the mass is easily integrated.}
\begin{equation}
\label{masspowerfit}
M_s\propto t^\alpha\;,~~~~~~\alpha=1/n\;.
\end{equation}
Ref.~\cite{Levkov:2018kau} established that $\alpha=1/2$ provides a
good fit to the soliton growth right after formation, whereas
Ref.~\cite{Chen:2020cef} found a dramatic flattening of the soliton
mass curve at
late times corresponding to $\alpha=1/8$. The results of
Ref.~\cite{Levkov:2018kau} are consistent with ours, though our
central value for the power $n=3$ predicts a somewhat shallower
dependence with $\alpha=1/3$. The steep growth observed in
\cite{Levkov:2018kau} might be due to a short duration of the
simulations. Indeed, by carrying out numerical experiments with the
same setup as in \cite{Levkov:2018kau} (see \cref{sec:levkov}) and
fitting the soliton mass with the formula (\ref{masspowerfit}), we
have
observed a correlation of the best-fit index $\alpha$
with the soliton lifetime: $\alpha$ is
about $1/2$ for newly formed solitons and decreases down to $1/4$ for
grown-up solitons long after the relaxation time (see
\cref{fig:tend}). This trend is in agreement with our main simulations
where we see indications of increasing $n$, and hence decreasing
$\alpha$, for heavier solitons.
However, at this point the numerical data are rather inconclusive as to
the robustness of this trend and the asymptotic value of $\alpha$ at
$t\to\infty$.
On the other hand, we do not see any evidence for the low
$\alpha=1/8$ found in \cite{Chen:2020cef}. Moreover, our analytic
considerations suggest that the asymptotic value of $\alpha$ is at
least as high as $1/4$. The discrepancy may be due to the difference
in the setups. We study a soliton in a homogeneous gas, whereas
Ref.~\cite{Chen:2020cef} considers a soliton in the center of an axion
halo. It is conceivable that suppression of the soliton growth in the
latter case stems from its back reaction on the halo.
It will be interesting to explore this possibility in more
detail in the future.
\paragraph{Soliton-host halo relation.}
One can ask whether our results have any implications for the
soliton-host halo relation. The answer is: Not directly, because in the
cosmological setting the solitons were found to form during the initial
halo collapse when axions are not yet in the kinetic regime. Still,
with some degree of extrapolation, one can argue that our results make
the formation of a light soliton unlikely, since it would be evaporated by
the fast axions from the halo.
mass which is just a factor of a few lower than $M_{s}^{\rm SSH}$,
the mass corresponding to the soliton-host halo
relation.\footnote{Note that by the soliton-host halo relation we
understand here correlation between the soliton mass and the
virial temperature of the halo, while in the literature the
soliton-host halo relation is commonly formulated in terms of the
halo mass. We believe that the former formulation reflects better
the underlying physical mechanisms behind the relation.}
Heavier solitons can, in principle, form with arbitrary masses and
will continue growing upon the halo virialization. The time scale for
this growth can, however, be very long and exceed the age of the
universe when the soliton mass exceeds $M_{s}^{\rm SSH}$. Moreover, it
is natural to speculate that the solitons are more likely to form as
light as they can which singles out $M_{s}^{\rm SSH}$ as the sweet
spot. This reasoning still does not tell us how far the soliton-host
halo relation can be extrapolated in the parameter space. In
particular, we do not know whether the solitons form in any halo and
for any value of axion mass, or for some parameters their formation
becomes improbable. More work is needed to answer these questions.
\paragraph{Persistence of Maxwell distribution.} It is known that
without a soliton the velocity distribution of the axion gas relaxes
towards a thermal form with a high population of low-momentum modes
\cite{Levkov:2018kau}. We have found evidence that the presence of a
soliton changes the picture. In this case the Maxwell distribution
appears to persist on timescales significantly longer than the kinetic
relaxation time. Moreover, in the simulations with soliton formation
we observed restoration of the Maxwell distribution after a transient
period with enhanced population of low-momentum modes preceding the
birth of the soliton. This ``Maxwellization'' manifests itself
indirectly in the universality of the soliton mass evolution in
simulations with different histories (figs.~\ref{fig:stack},
\ref{fig:stack_fig}), as well as in the directly measured momentum
distribution at different moments of time (\cref{fig:Maxwell}). The
latter, however, is subject to large temporal fluctuations which
presently do
not allow us to move beyond qualitative statements. It will be
interesting to study this phenomenon more quantitatively in the future by
developing methods of measuring the momentum distribution with reduced
noise. A complementary approach would be to track the distribution of
axions in energy, instead of momentum, as suggested in
Ref.~\cite{Levkov:2018kau}.
\section{Introduction}
\label{sec:intro}
The QCD axion
\cite{Weinberg:1977ma,Wilczek:1977pj,Shifman:1979if,Kim:1979if,Zhitnitsky:1980tq,Dine:1981rt,Preskill:1982cy,Abbott:1982af,Dine:1982ah}
and axion-like particles
\cite{Arvanitaki:2009fg,Marsh:2015xka,Hui:2016ltb,Hui:2021tkt} are widely
discussed in the literature as well-motivated dark matter (DM)
candidates.
The QCD axion, originally suggested as a solution to the strong CP
problem \cite{Peccei:1977hh,Peccei:1977ur}, was soon realized
\cite{Preskill:1982cy} to be produced in the early universe and to
behave as cold
dark matter after the QCD phase transition that endows it with a
mass. The requirement that the QCD axion accounts for all of DM leads
to a preferred mass window\footnote{The mass can be
smaller in scenarios where Peccei--Quinn symmetry is never restored
after inflation.}
$m_{aQCD}\sim 10^{-6}\div 10^{-4}~{\rm eV}$.
Axion-like particles with a broad range of masses and very weak
coupling to the
Standard Model naturally arise in many beyond-the-Standard-Model
scenarios and in string theory
\cite{Svrcek:2006yi,Arvanitaki:2009fg}. For brevity, we will refer to
DM made of such particles as axion DM.
Particularly interesting is the case of ultralight (also called ``fuzzy'')
DM with mass $m_a\sim 10^{-22}\div 10^{-19}~{\rm eV}$
\cite{Hu:2000ke}. The de Broglie wavelength of such an ultralight
particle, corresponding to the virial velocity in a galactic
halo,\footnote{Throughout the paper we
use the system of units $\hbar=c=1$.}
\begin{equation}
\lambda_a = \frac{2\pi }{ m_a v_a} \sim \, 1.2\times
\, \left( \frac { m_a}{10^{-22} \, \mathrm{eV} } \right)^{-1}
\, \left( \frac{ v_a}{100\,\mathrm{km/s}} \right)^{-1} \mathrm{kpc} \ ,
\end{equation}
is comparable to the typical cosmological and astrophysical
distances. Due to this property, ultralight dark matter exhibits rich
phenomenology affecting various cosmological observables and galactic
dynamics \cite{Marsh:2015xka,Hui:2016ltb,Hui:2021tkt}. The analysis of
Lyman-$\alpha$ forest
\cite{Armengaud:2017nkf,Kobayashi:2017jcf,Rogers:2020ltq}, galactic
rotation curves \cite{Bar:2018acw,Bar:2021kti}, halo
profiles of dwarf galaxies
\cite{Safarzadeh:2019sre,Zoutendijk:2021kee} and subhalo population in
the Milky Way \cite{DES:2020fxi} strongly disfavor DM lighter than
$10^{-21}~{\rm eV}$. Dynamical heating of stars by
ultralight DM in ultrafaint dwarf
galaxies has been used to infer tighter constraints
${m_a\gtrsim 10^{-19}~{\rm eV}}$ \cite{Marsh:2018zyw,Dalal:2022rmp}.
A distinctive feature of axion DM is its
huge occupation numbers (phase-space density)
which are allowed because axions are bosons,
\begin{equation}
f_{\textbf{k}} \sim 10^{86} \times \left( \frac{ \rho_a} { 0.3
\, {\rm GeV/cm}^3 }\right)
\, \left( \frac{m_a} {10^{-20} \, \mathrm{eV} } \right)^{-4}
\, \left( \frac{v_a}{100\, {\rm km/s}} \right)^{-3} \ .
\label{eq:Na}
\end{equation}
This implies that, rather than behaving as a collection of individual
particles, axion DM is best described by a coherent classical scalar
field with the
scattering rate of axions
increased due to the Bose enhancement. Typically, in the study of
structure formation all axion interactions besides gravity can be
neglected, resulting in universal wave dynamics described by the
Schr\"odinger--Poisson equations \cite{Hui:2021tkt}. The dependence of
these equations on the axion mass can be taken into account by a simple
rescaling, and thus they apply to any axion DM
as long as $f_{\bf k}\gg 1$.
The Schr\"odinger--Poisson system admits a spherically symmetric
localized solution known as {\it axion soliton} or {\it boson
star}\footnote{We will use the two names
interchangeably.}
\cite{Ruffini:1969qy}.
All axions comprising the soliton are in the same state, the ground
state of the gravitational potential, and hence the soliton can be
viewed as an inhomogeneous Bose--Einstein condensate sustained by its
own gravity \cite{Guzman:2006yc}.
Numerical simulations of axion DM have revealed formation of boson
stars in the centers of virialized
axion halos (also known as miniclusters
\cite{Hogan:1988mp,Kolb:1993zz}
in the
case of QCD axion). This phenomenon was observed in the cosmological
setting \cite{Schive:2014dra,Veltmaat:2018,Mina:2020eik,May:2021wwp},
in numerical
experiments with halos created by collisions of several seed solitons
\cite{Schive:2014hza,Schwabe:2016rze,Mocz:2017wlg},
and in the kinetic relaxation regime~\cite{Levkov:2018kau}.
It was also found that if the soliton is artificially removed from
the halo, the evolution readily reinstates it~\cite{Yavetz:2021pbc}.
Thus, the presence of a solitonic core appears to be a generic feature
of an axion halo. The rest of the halo represents a cloud of
randomly moving wavepackets with the velocities
roughly following the Maxwellian distribution and the average density
fitted by the NFW profile \cite{Navarro:1996gj},
similarly to the usual
cold DM.
It is natural to ask how the soliton interacts with this environment.
Refs.~\cite{Eggemeier:2019jsu,Schive:2019rrw,Li:2020ryg,Zagorac:2021}
showed that interference between the soliton and wavepackets leads to
oscillations of its density and to a random walk of the soliton center
around the halo center of mass. Further,
an interesting correlation between the soliton mass and the mass of
its host halo has been established in cosmological numerical
simulations \cite{Schive:2014dra,Schive:2014hza} and confirmed in
\cite{Veltmaat:2018,Eggemeier:2019jsu}. This relation can be rephrased
as equality between the virial temperatures of the soliton and the
host halo. While this relation may appear intuitive, the physical
mechanism behind it remains unclear. It is not reproduced by
simulations starting from
non-cosmological initial conditions
\cite{Schwabe:2016rze,Mocz:2017wlg,Chan:2021bja}, whereas more recent
cosmological simulations \cite{Nori:2020jzx,
May:2021wwp,Chan:2021bja} indicate that
it is subject to a large scatter, perhaps due to different merger
histories of different halos. The results of
Ref.~\cite{Levkov:2018kau} disfavor a potential interpretation of
the soliton-host halo relation as a condition for kinetic
equilibrium. Indeed, it was observed that, once formed, the solitons
continue to grow by condensation of axions from the surrounding
gas. On the other hand, Refs.~\cite{Eggemeier:2019jsu,Chen:2020cef}
argue that this growth slows down when the soliton
becomes heavy enough to heat up the inner part of the halo and,
given the finite time of the simulations, this can explain the observed
correlation. The mass of the soliton can also be significantly
affected by baryonic matter, typically leading to its
increase \cite{ChanEtal18,Veltmaat:2019hou}.
Boson stars give rise to important signatures, opening up various
opportunities for future discovery or constraints on axion DM.
In the case of fuzzy DM, they are expected to
play a prominent role in galactic dynamics modifying the rotation
curves \cite{Bar:2018acw,Bar:2021kti} and heating the
stars in the central regions through oscillations and random walk
\cite{Marsh:2018zyw,Chiang:2021uvt,Chowdhury:2021zik}.
When axion self-interaction is included, they become unstable if
their mass exceeds a certain threshold and collapse, producing bursts
of relativistic axions \cite{Levkov:2016rkk}. Further allowing for
possible axion coupling to photons, they can be sources of radio
emission \cite{Tkachev:2014dpa,Hertzberg:2018zte,Levkov:2020txo}.
The presence or absence of boson stars in axion miniclusters can have
important implications for lensing searches \cite{Ellis:2022grh}.
Very dense boson stars made of the inflaton field are produced in
inflationary models with delayed reheating, opening up a potentially
rich phenomenology, such as seeding primordial black holes or
contributing to the stochastic high-frequency gravitational-wave
background \cite{Eggemeier:2021smj}.
The dynamical range achievable in axion DM simulations is severely
limited by the computational costs (see the discussion in
\cite{May:2021wwp}). This calls for better theoretical understanding
of the physical laws governing the evolution of boson stars in various
environments which would allow their extrapolation outside of the
parameter regions explored in simulations. In the present
paper we take a step
in this direction by studying the evolution of a boson star immersed
in a box filled with homogeneous axion gas. Focusing on this setup
allows us to get rid of the uncertainties related to the dynamics of
the halo and keep under control the gas density and its velocity
distribution. The latter is chosen to be Maxwellian at the initial
moment of time. A similar setup was
employed in Ref.~\cite{Levkov:2018kau} to study the formation of the
soliton in the process of the gas kinetic relaxation. By contrast, we do not
assume the soliton to be formed from the gas and simply add it in the
initial conditions of our simulations. In this way we are able to
explore a wide range of soliton masses corresponding to different ratios
between the soliton virial temperature $T_s$ and the temperature of
the gas $T_g$.
The key quantity that we are interested in is the rate of change of
the soliton mass,
\begin{equation}
\label{rate_def}
\Gamma_s=\frac{1}{M_s}\frac{dM_s}{dt}\;.
\end{equation}
We study the dependence of this quantity on the parameters
characterizing the gas and the soliton by a combination of analytical
and numerical methods.
We find that solitons with $T_s/T_g\gtrsim 0.1$ grow by absorbing
particles from the gas. For fixed gas parameters, the growth rate is
essentially constant in the range $0.1\lesssim T_s/T_g\lesssim 1$,
whereas at $T_s/T_g\gtrsim 1$ it decreases as $(T_s/T_g)^{-n/2}$
with $n=2\div 4$.
Interestingly, we find that if $T_s/T_g\lesssim 0.08$, the soliton
{\em evaporates}, the time scale of this process being parametrically
shorter than the relaxation time.
This does not contradict previous results on soliton
formation from the gas by kinetic relaxation
\cite{Levkov:2018kau}.
Indeed, by running the simulations past the evaporation of the
initial soliton, we observe after a while the birth of a new soliton
with $T_s/T_g\gtrsim 0.1$, in agreement with
\cite{Levkov:2018kau}. It is
worth stressing the difference between the soliton evaporation and tidal
disruption by large-scale gradients of the halo gravitational field
\cite{Du:2018qor}. This is clear already from the fact that there is no
halo in
our
setup. Moreover, the qualitative direction of the process ---
evaporation vs. condensation ---
is
entirely determined by the soliton and gas temperatures and does not
depend on the density contrast between them.\footnote{Though the
  quantitative characteristic --- the evaporation rate --- does
  depend on the gas density,
$\Gamma_s\propto
\rho_g^2$ (see eq.~(\ref{Gammagamma})).}
The paper is organized as follows. In \cref{sec:Maxwell} we
introduce our framework and review the relevant properties of the
soliton solution to the Schr\"odinger--Poisson equations. In
\cref{sec:theory} we address the computation of the soliton
growth/evaporation rate formulating it as a quantum-mechanical
scattering
problem. We consider separately the cases of light (cold, $T_s/T_g\ll 1$)
and heavy (hot, $T_s/T_g\gg 1$) solitons and employ various approximations
to estimate the rate analytically. In
\cref{sec:simulation} we describe our numerical simulations,
extract the soliton growth rate from them
and compare it to the analytic predictions.
In \cref{sec:conclusion} we discuss the implications of our results
and compare to other works. Three appendices contain auxiliary
material.
In \cref{app:class} we provide an alternative derivation of the
soliton growth rate using only classical equations of motion. In
\cref{sec:levkov} we describe a suite of simulations
reproducing the setup of Ref.~\cite{Levkov:2018kau} where the soliton
forms from the gas spontaneously due to kinetic
relaxation. Appendix~\ref{sec:nums} contains additional details about
our numerical procedure.
\section{Wave Simulations}
\label{sec:simulation}
In this section we present our numerical simulations. We first
describe the setup. Then we provide three typical examples of
simulation runs for heavy, intermediate and light solitons and
introduce the procedure which we use to measure the soliton growth
rate.
Finally,
we assemble 195 individual simulation runs to extract the soliton
growth/evaporation rates and compare them to the theoretical
predictions of the previous section.
We focus here on the main suite of simulations where in each run we
assign a single soliton surrounded by Maxwellian axion gas as the
initial conditions. In \cref{sec:levkov} we also report the
simulations without the initial
soliton where it forms dynamically from the axion gas, as in
Ref.~\cite{Levkov:2018kau}.
\subsection{Setup}
\label{sec:setup}
\subsubsection*{Evolution}
We use the scaling transformation (\ref{eq:scaling}) to convert the
Schr\"odinger--Poisson equations into the following dimensionless form,
\begin{subequations}
\label{eq:dimensionlessEq}
\begin{align}
\label{eqSchrSim}
&i\partial_t\tilde\psi +\frac{1}{2} \Delta\tilde\psi - \tilde\Phi \,
\tilde\psi=0\;, \\
\label{eqPoisSim}
&\Delta\tilde \Phi=|\tilde\psi|^2\;,
\end{align}
\end{subequations}
which is equivalent to the choice of units $m=4\pi G=1$. This system
is solved on a cubic lattice of size ${\cal N}$ with periodic boundary
conditions on $\tilde \psi$ and $\tilde\Phi$. We use the residual
scaling symmetry to fix the lattice spacing to one, ${\rm d}x=1$. The
size of the lattice thus sets the length of the box side
and remains a free parameter. We run simulations for three different
values ${\cal N}=128,~256,~512$. In what follows we omit tildes over
dimensionless quantities.
The wavefunction is advanced by the leapfrog integration algorithm
(drift-kick-drift)~\cite{birdsall2018plasma,ChanEtal18},
\begin{equation}
\label{DKD}
\psi(t+ {\rm d} t, {\bf x} )= {\rm e}^{i \, \Delta \, {\rm d} t/4}
\cdot {\rm e}^{-i \, \Phi(t+{\rm d} t/2, {\bf x} ) \,
{\rm d} t }
\cdot {\rm e}^{i \, \Delta \, {\rm d} t/4} \, \psi(t, {\bf x} ) \, .
\end{equation}
We transform $\psi$ to momentum space to evolve with ${\rm e}^{i \, \Delta
\, {\rm d} t/4}$, where $\Delta$ is converted to $-{\bf k}^2$,
while the evolution with the gravitational potential, ${\rm e}^{-i \,
\Phi \, {\rm d} t }$, is performed in real
space. Fourier components of the gravitational potential with $\k\neq
0$ are found from \cref{eqPoisSim},
\begin{equation}
\Phi_\k=-\frac{(|\psi|^2)_\k}{\k^2}\;,
\end{equation}
whereas the zero mode is set to vanish,\footnote{This
corresponds to an arbitrary choice of the zero-point energy in the
Schr\"odinger equation~(\ref{eqSchrSim}).} $\Phi_{\bf k=0}=0$.
We use a uniform time step ${\rm d} t=2/\pi$, which is determined by
the requirement that the phase difference of the highest-momentum
mode on the lattice, $k=\pi$, between consecutive time slices does
not exceed $\pi$.
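For concreteness, one step of this scheme, combined with the Fourier-space Poisson solve described below, can be sketched as follows. This is a minimal Python/NumPy illustration under our unit conventions ($m=4\pi G=1$, ${\rm d}x=1$), not the production code; the function name is our choice.

```python
import numpy as np

def dkd_step(psi, dt=2 / np.pi):
    """One drift-kick-drift step for the dimensionless
    Schroedinger-Poisson system (units m = 4*pi*G = 1, dx = 1)."""
    N = psi.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(N)                        # lattice momenta
    k2 = sum(np.meshgrid(k**2, k**2, k**2, indexing="ij"))   # |k|^2 on the grid

    # half drift: exp(i*Delta*dt/4) -> exp(-i*k^2*dt/4) in momentum space
    psi = np.fft.ifftn(np.exp(-1j * k2 * dt / 4) * np.fft.fftn(psi))

    # kick: Poisson solve in Fourier space, Phi_k = -(|psi|^2)_k / k^2 with
    # the zero mode set to vanish; the potential is thus evaluated at t + dt/2
    rho_k = np.fft.fftn(np.abs(psi) ** 2)
    with np.errstate(divide="ignore", invalid="ignore"):
        Phi = np.fft.ifftn(np.where(k2 > 0, -rho_k / k2, 0.0)).real
    psi = np.exp(-1j * Phi * dt) * psi

    # second half drift
    psi = np.fft.ifftn(np.exp(-1j * k2 * dt / 4) * np.fft.fftn(psi))
    return psi
```

In a full simulation the consecutive half drifts of adjacent steps can be merged for efficiency; both sub-steps are exactly unitary, so the total mass $\sum_\mathbf{x}|\psi|^2$ is conserved to machine precision.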
To assess the accuracy of the simulations, we monitor the total energy
of the axion field in the box,
\begin{equation}
\label{Etot}
E=\frac{1}{2}\sum_\k\k^2|\psi_\k|^2+\frac{1}{2}\sum_\mathbf{x} \Phi(\mathbf{x})|\psi(\mathbf{x})|^2\;.
\end{equation}
We have observed that the energy conservation quickly deteriorates for
heavy solitons with sizes comparable to the lattice spacing,
$r_s\sim 1$ (see \cref{sec:resolution} for details). In our
analysis we only use runs where the energy is conserved with the
precision $\lesssim 0.1\%$.
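A sketch of this diagnostic is given below, assuming the unitary normalization $\psi_\k={\cal N}^{-3/2}\sum_\mathbf{x}\psi(\mathbf{x})\,{\rm e}^{-i\k\cdot\mathbf{x}}$, so that the two sums in (\ref{Etot}) are evaluated in consistent conventions.

```python
import numpy as np

def total_energy(psi):
    """Total energy: kinetic term in momentum space plus gravitational
    term in real space, with the zero mode of Phi set to vanish."""
    N = psi.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(N)
    k2 = sum(np.meshgrid(k**2, k**2, k**2, indexing="ij"))
    psi_k = np.fft.fftn(psi) / N**1.5        # unitary normalization (Parseval)
    rho_k = np.fft.fftn(np.abs(psi) ** 2)
    with np.errstate(divide="ignore", invalid="ignore"):
        Phi = np.fft.ifftn(np.where(k2 > 0, -rho_k / k2, 0.0)).real
    return 0.5 * np.sum(k2 * np.abs(psi_k) ** 2) + 0.5 * np.sum(Phi * np.abs(psi) ** 2)
```

For a single plane wave the gravitational term vanishes (uniform density sources only the discarded zero mode) and the result reduces to the kinetic energy of that mode.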
\subsubsection*{Initial conditions for axion gas}
The gas wavefunction is set up in the initial conditions through its
Fourier decomposition,
\begin{equation}
\psi_g(t=0,\mathbf{x})=\frac{1}{{\cal N}^{3/2}}
\sum_\mathbf{k}a_\k \cdot {\rm e}^{i\mathbf{k}\cdot\mathbf{x}} \, ,
\end{equation}
where the absolute values of the amplitudes $a_\k$ are taken to follow
the Maxwell distribution (\ref{fgas}). To ensure that the gas modes
are well resolved on the lattice, we restrict to $k_g\leq 1$. The
phases of $a_\k$ are assigned random values uniformly distributed
in the range $(0, 2\pi)$. We have repeated simulations for several
random initial phase realizations and have found that the choice of
realization does not affect our results. The mean gas density $\rho_g$ and its
total mass $M_g$ can be deduced as
\begin{equation}
\rho_g= \frac{1}{{\cal N}^3} \, \int {\rm d}^3 \mathbf{x}\,
|\psi(\mathbf{x}) |^2 = \frac{ f_g k_g^3}{(4\pi)^{3/2}} \, ,
\qquad\qquad M_g = \rho_g \, {\cal N}^3 = \frac{ f_g k_g^3 {\cal N}^3 } {
(4\pi)^{3/2} } \, .
\label{eq:rho0}
\end{equation}
The gas density is limited from above by the condition to avoid the
Jeans instability, which would trigger halo formation and thereby
complicate the interpretation of simulation results. Thus, we require
the size of the simulation box to be smaller than the Jeans length
(\ref{lJeans}), which yields the condition:
\begin{equation}
{\cal N} < l_J ~~~~~\Longleftrightarrow~~~~~
f_g \, k_g < 0.054 \, \left( \frac{{\cal N} }{ 128} \right)^{-2} \ .
\label{eqn:jeans_instability}
\end{equation}
This puts a stringent restriction on the parameter space of the
simulations.
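As an illustration, the gas wavefunction can be generated as below. We assume here the isotropic form $f_\k=f_g\,{\rm e}^{-k^2/k_g^2}$ for the Maxwell distribution (\ref{fgas}), which reproduces the mean density (\ref{eq:rho0}); the function name and the seed handling are our choices.

```python
import numpy as np

def gas_initial_conditions(N, k_g, f_g, seed=0):
    """psi_g(0, x) = N^{-3/2} sum_k a_k e^{ikx}, with |a_k|^2 following
    the Maxwell distribution and uniformly random phases."""
    rng = np.random.default_rng(seed)
    k = 2 * np.pi * np.fft.fftfreq(N)
    k2 = sum(np.meshgrid(k**2, k**2, k**2, indexing="ij"))
    phases = rng.uniform(0.0, 2 * np.pi, size=(N, N, N))
    a_k = np.sqrt(f_g * np.exp(-k2 / k_g**2)) * np.exp(1j * phases)
    # ifftn carries a factor 1/N^3, so multiply by N^{3/2}
    # to match the N^{-3/2} sum over k
    return N**1.5 * np.fft.ifftn(a_k)
```

By Parseval's theorem the mean density of the generated field is $(1/{\cal N}^3)\sum_\k f_\k$, independent of the random phases, which approaches $f_g k_g^3/(4\pi)^{3/2}$ for $k_g\ll\pi$.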
\subsubsection*{Initial conditions for soliton}
We superimpose
the soliton wavefunction on top of the gas wavefunction at the
beginning of the simulations.\footnote{Dynamical soliton formation from
the gas is discussed in \cref{sec:levkov}.}
The input soliton density profile uses the analytic fit
(\ref{eq:rhos}) characterized by a single parameter, the half-peak radius
$r_{s}^{\rm init}$. The peak density of the fit is taken to be
\cite{Schive:2014hza},
\begin{equation}
\rho_{s,\, {\rm peak}}^{\rm init} = \frac{2.794}{(r_{s}^{\rm init})^{4}},
\end{equation}
which is slightly lower (by less than $2\%$) than the exact value
implied by the formulas of \cref{sec:Maxwell}. This discrepancy is
negligible given other uncertainties of the simulations.
The initial phase of the soliton wave function is set to be zero. This
choice does not change our average result
since the phases of the axion gas are random.
We notice that the initial soliton gets
slightly deformed after being superposed on the gas wavefunction, but
this deformation has little effect on the late-time evolution.
We take $r_{s}^{\rm init}\geq 1.5$ for the soliton to be resolved on the lattice.
Periodic boundary conditions give rise to image solitons at distance
${\cal N}$ from the central one. We have observed that these images can
distort the central soliton wavefunction. To avoid this distortion, we
require the soliton size to be much smaller than the box, $r_s^{\rm
init}<0.1\,{\cal N}$.
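The superposition step can be sketched as follows. As a stand-in for the fit (\ref{eq:rhos}), which is defined in an earlier section, we assume here the commonly used analytic profile $\rho(r)=\rho_{\rm peak}\,[1+0.091\,(r/r_s)^2]^{-8}$ with $r_s$ the half-peak radius; the zero initial phase makes $\psi_s=\sqrt{\rho}$.

```python
import numpy as np

def add_soliton(psi_gas, r_s_init):
    """Superimpose a soliton with zero initial phase at the box center,
    with peak density 2.794/(r_s^init)^4 as in the text."""
    N = psi_gas.shape[0]
    x = np.arange(N) - N // 2                 # distances from the box center
    r2 = sum(np.meshgrid(x**2, x**2, x**2, indexing="ij"))
    rho_peak = 2.794 / r_s_init**4
    rho = rho_peak / (1 + 0.091 * r2 / r_s_init**2) ** 8
    return psi_gas + np.sqrt(rho)
```

The factor $0.091$ makes $r_s^{\rm init}$ the half-peak radius: $\rho(r_s^{\rm init})\simeq\rho_{\rm peak}/2$.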
\subsubsection*{Measurement of the soliton mass}
During the simulations the radius of the soliton evolves together with
its mass. We estimate $r_s$, $M_s$ at a given time using their
relation to the soliton peak density provided by the fit to the
soliton density profile,\footnote{The expression for the soliton mass
(\ref{eq:Mg}) is by $3\%$ lower for a given peak density than the value obtained
from the exact wavefunction, see \cref{sec:Maxwell}. This
error is insignificant for our analysis. Note that its effect is
opposite to the bias introduced by the interference with the axion gas
discussed below.}
\begin{equation}
r_s=1.293\, \rho_{s,\,{\rm peak}}^{-1/4}~,~~~~~~~
M_s = 25.04\, \rho_{s,\,{\rm peak}}^{1/4}.
\label{eq:Mg}
\end{equation}
Since the soliton moves through the box during simulations, the
position of its peak is unknown. We choose the maximal density in the
whole box as a proxy for the soliton peak density assuming that the
soliton is prominent within the axion gas. Note that due to interference
between the soliton and the gas, the peak density of the axion field
does not, in general, coincide with the soliton peak. Choosing the
maximal density in the box can bias our estimate of the soliton peak
density, and hence of its mass, upwards. Detailed investigation of
this bias is performed in \cref{sec:peak}. It shows that
the bias is at most $20\%$ when the maximal density is higher than the
mean gas density by a factor of $30$ and quickly decreases for higher
density contrasts. To obtain the soliton growth rate we analyze only
the parts of the
simulations with
$\rho_{s,\,{\rm peak}}>30\,\rho_g$.
On the other hand, we require the mass of the soliton to be
significantly smaller
than the total mass of the gas in order to avoid
any effects on the
soliton evolution that can arise due to a shortage of particles in the gas. We
implement this by the condition $M_s<0.5\, M_g$.
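In code, this proxy measurement, together with the validity cuts (c) and (d) discussed here, reads (a sketch; the function name is ours):

```python
import numpy as np

def measure_soliton(psi, rho_g, M_g):
    """Estimate the soliton radius and mass, eq. (eq:Mg), from the
    maximal density in the box, and flag whether the estimate is valid."""
    rho_peak = np.max(np.abs(psi) ** 2)       # proxy for the soliton peak
    r_s = 1.293 * rho_peak ** -0.25
    M_s = 25.04 * rho_peak ** 0.25
    valid = (rho_peak > 30 * rho_g) and (M_s < 0.5 * M_g)
    return r_s, M_s, valid
```

Note that eqs.~(\ref{eq:Mg}) imply the fixed product $r_s M_s = 1.293\times 25.04 \approx 32.37$ used later in eq.~(\ref{rsfit}).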
\subsubsection*{Parameter space}
\begin{figure}[t]
\begin{center}
\includegraphics[width=1.0\textwidth]{plot/input.pdf}
\caption{Parameters of 195 individual simulations used in this
work. The four-dimensional parameter space is projected on the
directions corresponding to the box size ${\cal N}$, the
soliton half-peak radius $r_{s}^{\rm init}$, and the parameters of the
Maxwell distribution of axion gas $k_g$, $f_g$.
The horizontal axis is common to all panels and shows the product
$k_gr_{s}^{\rm init}$.
Green circles correspond to simulations leading to soliton growth,
while red circles show the cases of soliton evaporation.
Darker circles indicate multiple realizations of axion gas by
changing the phases in the wavefunction.
\label{fig:input}
}
\end{center}
\end{figure}
Our simulations have
four input parameters: ${\cal N}$, $k_g$, $f_g$, and $r_{s}^{\rm init}$, which describe
the box size, the
momentum distribution of axion gas, and the size of soliton.
In this work, we use three box sizes, ${\cal N}=128$, $256$, and $512$.
For the light-soliton regime, most of the simulations are conducted
with ${\cal N}=128$, while for heavy solitons
we use large boxes, ${\cal N}=512$, in order to reach low $(k_gr_s)\sim
0.1$.
The remaining three parameters are sampled in the ranges
\begin{equation}
k_g \in ( 0.1\,,\, 1)~,~~~~~f_g \in (10^{-4}\,,\, 0.12)~,~~~~~
r_{s}^{\rm init} \in (1.5\,,\, 12 )\;.
\end{equation}
Their choice is dictated by the goal to efficiently capture the
soliton growth/evaporation within realistic simulation time, while
resolving the axion gas and the soliton on the lattice. In addition,
they are subject to constraints discussed above which we summarize
here for clarity:
\begin{itemize}
\item[a)] $f_g \, k_g < 0.054 \, \left({\cal N}/128 \right)^{-2}$: the
axion gas does not form a halo due to Jeans instability;
\item[b)] $r_{s}^{\rm init} < 0.1\, {\cal N}$: the effect of periodic images on the
soliton is suppressed;
\item[c)] $\rho_{s,\,{\rm peak}} > 30\rho_g$: the soliton is prominent enough to
  suppress the bias in its mass measurement;
\item[d)] $M_s < 0.5\, M_g$: the soliton does not overwhelm the axion gas.
\end{itemize}
Note that the conditions (a,b) are imposed on the initial
configuration, whereas the conditions (c,d) are monitored throughout
the whole duration of the simulations.
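The conditions imposed on the initial configuration amount to a simple check on the input parameters (a sketch; the function name is ours):

```python
def initial_conditions_valid(N, k_g, f_g, r_s_init):
    """Conditions (a) and (b), checked on the initial configuration."""
    no_jeans = f_g * k_g < 0.054 * (N / 128) ** -2   # (a) no Jeans collapse of the gas
    no_images = r_s_init < 0.1 * N                   # (b) periodic images suppressed
    return no_jeans and no_images
```

The conditions (c) and (d), by contrast, involve the evolving peak density and soliton mass and must be re-evaluated at every measurement.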
In total we have run 195 simulations with independent realizations of
random gas phases. Their parameters are shown in
\cref{fig:input} against the product $k_g r_s^{\rm init}$ which
controls
the physics of the soliton-gas interaction.
\subsection{Growing and evaporating solitons }
\label{sec:case}
In this section we present a case study of several simulations that
illustrate the possible evolution of the soliton-gas system. We use
these examples to introduce our procedure for extraction of the soliton
growth rate. We also provide evidence that the gas distribution
remains close to Maxwellian during the simulations.
We consider three simulation runs with the same initial gas
configuration characterized by $({\cal N}=128,k_g=1,f_g=0.01)$ and
different initial soliton sizes $r_s^{\rm init}$: $1.51$ (heavy
soliton), $2.71$ (median soliton), and $3.62$ (light
soliton). Figures~\ref{fig:cases}-\ref{fig:cases2}
show the evolution of the soliton
characteristics in the three runs. These include the soliton peak
density $\rho_{s,\,{\rm peak}}(t)$ (which we identify with the maximal density in the
box), the soliton mass $M_s(t)$ and the soliton radius $r_s(t)$. The
peak density is normalized to the mean density of the gas, whereas the
mass and radius are determined using the relations
(\ref{eq:Mg}). Clearly, the heavy soliton grows and the light soliton
evaporates, which is consistent with the analysis of
\cref{sec:theory}. The
median soliton remains approximately
unchanged indicating that the transition from growth to evaporation
occurs at $(k_gr_s)\sim 2.7$.
We also plot in figs.~\ref{fig:cases}-\ref{fig:cases2} the change in the
total energy of the axion field in the box.
For the median and light solitons the energy is conserved with high
precision $|E(t)/E(0)-1|\lesssim 10^{-5}$ throughout the whole duration of the
simulations. For the heavy soliton, the energy exhibits a slow drift
and the error exceeds $0.1\%$ by the end of the simulations. We
associate this with the loss of spatial and temporal resolution for
heavy solitons which have small sizes $r_s\sim 1$ and high oscillation
frequencies $|\mathcal{E}_s|\sim 1$ (see \cref{sec:resolution} for a detailed
discussion). In our analysis we use only the portion of the simulation
where $|E(t)/E(0)-1| < 10^{-3}$.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=0.9\textwidth]{plot/case_study_rc5.png}
\caption{Evolution of the soliton peak density,
mass and radius for the case of heavy soliton ($r_{s}^{\rm init}=1.51$).
The mass and radius are estimated from the
peak density. Thin blue curves show the instantaneous values,
whereas the thick curves are obtained by smoothing
with a top-hat filter. Yellow dots show the result of fitting the
soliton mass
with a quadratic polynomial.
We also show the time dependence of the
total energy in the simulation box used to control the precision of
numerical calculations. The gas parameters
are (${\cal N}=128$, $k_g = 1$, $f_g =0.01$).
\label{fig:cases}
}
\end{center}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.9\textwidth]{plot/case_study_rc9.png}
\caption{Same as \cref{fig:cases} for the case of median soliton
($r_{s}^{\rm init}=2.71$).
\label{fig:cases1}
}
\end{center}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.9\textwidth]{plot/case_study_rc12.png}
\caption{Same as \cref{fig:cases} for the case of light soliton
($r_{s}^{\rm init}=3.62$).
\label{fig:cases2}
}
\end{center}
\end{figure}
We now describe our algorithm to extract the soliton growth rate
$\Gamma_s$. The task is complicated by strong oscillations of the
soliton peak density, which are clearly visible in the plots and
translate into oscillations of the estimated soliton mass and
radius. Such oscillations have been observed in previous works
\cite{Veltmaat:2018,Eggemeier:2019jsu}
and correspond to the normal modes of the soliton
\cite{Guzman:2004wj,Guzman19}
with the frequency of the lowest mode $\omega\sim 0.5\,r_s^{-2}$. To
mitigate their effect, we construct running averages of the soliton
parameters by smoothing them with a top-hat function.\footnote{Note
that we smooth $\rho_{s,\,{\rm peak}}(t)$, $M_s(t)$ and $r_s(t)$ separately.}
We take the
width of the top-hat as a function of the initial soliton size,
$t_{\rm width}=70(r_{s}^{\rm init})^2$, which covers about five periods of the
oscillations. The resulting smoothed dependences are shown in
figs.~\ref{fig:cases}-\ref{fig:cases2} by thick curves.
While smoothing suppresses most of the chaotic oscillations, it still
leaves some irregularities in the time dependence of the soliton mass
that introduce significant noise when calculating its time
derivative. To further suppress this noise, we fit the smoothed mass
with an analytic function of time. We have found that a quadratic fit
is sufficient in all cases. Thus, we write
\begin{equation}
M_s^{\rm fit}(t)=a_0+a_1t+a_2t^2 \;,
\label{eq:Ms_fit}
\end{equation}
where $a_0$, $a_1$ and $a_2$ are fitting parameters.
The fitting time-range is determined by the following criteria:
\begin{itemize}
\item
Inside the range the soliton peak density, mass and radius satisfy the
conditions (c,d) from \cref{sec:setup};
\item
The total energy in the simulation box is conserved within precision
$|E(t) / E(0) -1| < 0.1\%$;
\item
The time duration is smaller than half of the relaxation time
(\ref{trelax}) to avoid possible changes in the gas distribution due
to kinetic relaxation \cite{Levkov:2018kau}.\footnote{In principle,
this requirement might be too stringent
since we observe that in the presence of a
soliton the gas distribution remains close to Maxwellian even on
time scales longer than the relaxation time, as will be discussed shortly.}
\end{itemize}
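Subject to these criteria, the smoothing and fitting steps can be sketched as follows. For simplicity, the top-hat average is taken only where the full window fits inside the time range, and the conversion of $t_{\rm width}=70(r_{s}^{\rm init})^2$ into a number of samples assumes uniformly spaced measurements.

```python
import numpy as np

def fit_soliton_mass(t, M_s, r_s_init):
    """Top-hat smoothing followed by the quadratic fit (eq:Ms_fit);
    returns the best-fit coefficients (a0, a1, a2)."""
    dt = t[1] - t[0]
    w = max(1, int(70 * r_s_init**2 / dt))          # filter width in samples
    kernel = np.ones(w) / w
    M_smooth = np.convolve(M_s, kernel, mode="valid")
    t_mid = t[(w - 1) // 2 : (w - 1) // 2 + M_smooth.size]  # window centers
    a2, a1, a0 = np.polyfit(t_mid, M_smooth, 2)     # highest power first
    return a0, a1, a2
```

Smoothing $\rho_{s,\,{\rm peak}}(t)$ and $r_s(t)$ proceeds identically, each quantity being filtered separately as noted above.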
The best-fit values of $a_0,a_1,a_2$ for the three sample runs are
given in \cref{table:cases}. The corresponding fitted curves are shown
in figs.~\ref{fig:cases}-\ref{fig:cases2} with yellow dots. We also define the ``fitted''
soliton radius by converting it from the soliton mass
in accordance with eqs.~(\ref{eq:Mg}),
\begin{equation}
\label{rsfit}
r_s^{\rm fit}(t)\equiv\frac{32.37}{M_s^{\rm
fit}(t)}=\frac{32.37}{a_0+a_1t+a_2t^2}\;.
\end{equation}
The result matches very well the smoothed dependence $r_s(t)$, see
figs.~\ref{fig:cases}-\ref{fig:cases2}.
We have verified that an independent fit of smoothed
$r_s(t)$ with a quadratic polynomial produces essentially identical
curves, which provides a consistency check of
our procedure.
\begin{table}[t]
\begin{center}
\begin{tabular}{| c |c||c|c|c | }
\hline
& $r_{s}^{\rm init}$ &$ a_0$ &$a_1$&$a_2$
\\
\hline
\makecell{heavy soliton }
& 1.51 & $20.79$&$0.283\times 10^{-5}$&$0.00239\times 10^{-10}$
\\
\hline
\makecell{median soliton }
& 2.71 & $11.35$&$-0.203\times 10^{-5}$&$0.0282\times 10^{-10}$
\\
\hline
\makecell{light soliton }
& 3.62 & $8.80$&$-0.595\times 10^{-5}$&$-0.0837\times 10^{-10}$
\\
\hline
\end{tabular}
\end{center}
\caption{Parameters of the soliton mass fit
for the three simulations
shown in figs.~\ref{fig:cases}-\ref{fig:cases2}.
The initial size of the soliton is
$r_{s}^{\rm init}$.
The parameters of axion gas are
${\cal N}=128$,
$k_g=1$, $f_g=0.01$.}
\label{table:cases}
\end{table}
We can now estimate the soliton growth rate by substituting the fitted
time dependence of the soliton mass into the defining formula
(\ref{rate_def}), which yields,
\begin{equation}
\Gamma_s^{\rm fit}(t) =\frac{a_1+2\,a_2\,t}{a_0+a_1\,t+a_2\,t^2}\;.
\end{equation}
We are interested in the dependence of the growth rate on the soliton
radius $r_s$. Both these quantities depend on time, so a single run
provides a continuous set of data points
$\big(r_s^{\rm fit}(t),\Gamma_s^{\rm fit}(t)\big)$
sampled at different moments of
time. In view of the uncertainties of our
smoothing and fitting procedure, we reduce this set to 20 data points
$\big(r_s^{\rm fit}(t_i),\Gamma_s^{\rm fit}(t_i)\big)$, $i=1,\ldots,20$,
evenly distributed in time within the range
of the fitting function $M_s^{\rm fit}(t)$.
These 20 data points represent the output of a
single run. In the next subsection we combine the outputs of 195 runs
to build the cumulative dependence of the growth rate on the soliton
and gas parameters.
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.75\textwidth}
\centering
\includegraphics[width=\textwidth]{plot/stack.png}
\label{fig:stack1}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.75\textwidth}
\centering
\includegraphics[width=\textwidth]{plot/stack_decay.png}
\label{fig:stack2}
\end{subfigure}
\caption{
Soliton mass evolution in
simulations with $k_gr_{s}^{\rm init}$
from 0.75 to 2.26 (top) and from 3.32 to 4.52 (bottom).
After shifting the curves along the time axis, they can be stacked
on top of each other.
\label{fig:stack}
}
\end{figure}
The soliton growth rate depends on the gas distribution, which can, in
principle, change during the simulations. This could lead to
incompatibility of the results read out at different moments from the
start of the runs. To verify that this is not the case, we compare the
runs that differ by the initial soliton mass, but have overlapping
soliton mass ranges spanned during the evolution. The top panel of
\cref{fig:stack} shows the evolution of the soliton mass in five
simulations of heavy solitons with $k_gr_{s}^{\rm init}$ varying from $0.75$ to
$2.26$. The gas parameters are chosen the same in all five runs
$({\cal N}=128,\,k_g=0.5,\,f_g=0.06)$. The curves have been shifted in
time until they overlap.
We observe that the curves are well
aligned with each other. In the lower panel of \cref{fig:stack} we
repeat the same exercise for five light soliton simulations with
$k_gr_{s}^{\rm init}$ from $3.32$ to $4.52$ and the gas parameters
$({\cal N}=128,\,k_g=1,\,f_g=0.01)$. The stacked curves are again well
aligned. We conclude that the soliton growth rate depends only on the
initial gas parameters and the instantaneous
soliton mass (or radius),
and is insensitive to the previous evolution of the soliton-gas
system. This justifies combining the results extracted from
different runs at different stages of the simulations.
The above results suggest that the gas distribution remains close to
Maxwellian during the simulations with solitons.
We have measured the distribution directly at different moments of
time and have seen that it is compatible with Maxwellian, though the
measurement is rather noisy, see \cref{fig:Maxwell} in
\cref{sec:levkov}.
This is in stark contrast with simulations \cite{Levkov:2018kau}
without initial soliton where the gas distribution exhibits distinct
evolution on the time scale $\tau_{\rm rel}$ (\cref{trelax})
towards populating low-momentum modes which culminates in the soliton
formation. However, as discussed in \cref{sec:levkov}, the
distribution appears to return to Maxwellian after the soliton is
formed. We also
find that the growth of the soliton mass, though
faster than in the Maxwellian gas right after the formation, approaches
the Maxwellian rate within time of order $\tau_{\rm rel}$, see
\cref{fig:stack_fig}. This provides further evidence that the presence of
the soliton ``Maxwellizes'' the gas.
\begin{figure}[t]
\centering
\includegraphics[width=0.75\textwidth]{plot/stack_rck0}
\caption{Growth of the soliton mass in the simulations with the same
values of
$({\cal N}=128,\,k_g=0.5,\, k_gr_{s}^{\rm init}=1.51)$ and varying $f_g$. The
time axis in different runs has been scaled by $f_g^2$ and
normalized to the case $f_g=0.06$. The time span of the curves is
restricted to half of the relaxation time (\ref{trelax}) and covers
the portion of the data used in the measurement of the soliton
growth rate.
\label{fig:stack_scale}
}
\end{figure}
The analytic derivation of \cref{sec:theory} implies that at fixed
$k_gr_s$ the soliton growth/eva\-po\-ra\-tion rate is proportional to
$\rho_g^2/k_g^6\propto f_g^2$. To verify that this scaling holds in the
simulations, we perform several runs with the same ${\cal N}$, $k_g$
and $r_{s}^{\rm init}$,
but different $f_g$. We measure the time dependence of the soliton
mass and scale the time axis by $f_g^2$. The results are shown in
\cref{fig:stack_scale}. We see a satisfactory agreement between
different curves. A slightly faster growth of the curve with the
highest value of $f_g$ at late times can be due to the fact that
the gas in this case is closer to the Jeans instability leading to the
development of an overdensity (proto-halo) around the soliton. We have
clearly seen this overdensity in the runs with the parameters near the
Jeans instability limit (\ref{eqn:jeans_instability}) and observed
that it is correlated with the increase of the ratio
$\Gamma_s/f_g^2$. The associated bias is comparable to the other
uncertainties in the measurement of $\Gamma_s$ and is included in
the error bars for our final results in the next section.
\subsection{Results}
\label{sec:results}
In this section, we construct the cumulative dependence of $\Gamma_s$
on the soliton and gas parameters. As explained above, each simulation
run produces 20 data points $(r_s,\Gamma_s)$. We collect the data
points from 195 runs and bin them in logarithmic scale in $k_gr_s$. In
each bin we compute the average value and variance of
\begin{equation}
\Gamma_s\times \frac{(4\pi)^3}{ f_g^2}=\Gamma_s\times \frac{k_g^6}{\rho_g^2}\;.
\end{equation}
The results of this procedure are shown in \cref{fig:growth}. Note
that we restore the dimensionful constants in the scale of $\Gamma_s$
in the figure.
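The binning step is straightforward; a Python sketch of the averaging procedure (the number of bins here is an illustrative choice, not necessarily the one used for the figure, and the sketch assumes every bin is populated):

```python
import numpy as np

def bin_log(k_g_r_s, gamma_scaled, n_bins=15):
    """Average the scaled growth rate Gamma_s * (4 pi)^3 / f_g^2 in
    logarithmic bins of k_g r_s; returns bin centres, means and
    standard deviations."""
    edges = np.logspace(np.log10(k_g_r_s.min()), np.log10(k_g_r_s.max()),
                        n_bins + 1)
    idx = np.clip(np.digitize(k_g_r_s, edges) - 1, 0, n_bins - 1)
    centres = np.sqrt(edges[:-1] * edges[1:])  # geometric bin centres
    means = np.array([gamma_scaled[idx == i].mean() for i in range(n_bins)])
    stds = np.array([gamma_scaled[idx == i].std() for i in range(n_bins)])
    return centres, means, stds
```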
Consistent with the analysis
of \cref{sec:theory}, the growth rate is positive at small $k_gr_s$
(heavy solitons) corresponding to the soliton growth, and is negative
at large $k_gr_s$ (light solitons) corresponding to
evaporation. Moreover, the data points with the largest values of
$k_gr_s$ match the asymptotic dependence (\ref{F2new}), including the
numerical coefficient (\ref{Clsnum}),\footnote{Recall the
proportionality between $\nu$ and $k_g r_s$, \cref{eq:alpharatio}.}
\begin{equation}
\label{Gammalast}
\Gamma_s\simeq -2.1\times \frac{(4\pi G)^2m^3\rho_g^2}{k_g^6}\,(k_gr_s)^2\;.
\end{equation}
This dependence is shown by the blue line. Thus, we conclude that the
asymptotics (\ref{F2new}) are reached already at $k_gr_s\gtrsim
5$. The transition from evaporation to growth happens at $k_gr_s\sim
2.5$ which is in reasonable agreement with the naive estimate
(\ref{nucrit}). In terms of the gas and soliton virial temperatures,
it corresponds to $T_g/T_s\simeq 12$.
For lower $k_gr_s$ the soliton grows. The growth rate stays almost
constant in the range $0.7<k_gr_s<2$ where it is comparable to the
inverse of the gas
relaxation time $\tau_{\rm rel}^{-1}$, see \cref{trelax}. The lower
end of the plateau corresponds to the equality of the gas and soliton
virial
temperatures, $T_g/T_s= 1$, which is marked by the dashed vertical
line in \cref{fig:growth}.
At $k_gr_s<0.7$ (equivalently $T_g/T_s<1$) the growth rate quickly decreases. We
find that this decrease is consistent with a power law
\begin{equation}
\label{powerfit}
\Gamma_s\propto (k_gr_s)^n
\end{equation}
with $n\simeq 3$ indicated by the dotted line in the
plot. The points with the smallest values of $k_gr_s$ hint at a
steepening dependence with $n=4$ at $k_gr_s\to 0$, in agreement
with the analytic estimate
(\ref{eq:heavyS_rate}).
There are, however, several caveats that prevent us from claiming
that we have reached the heavy soliton asymptotics. First,
as pointed out in \cref{sec:heavysoliton}, the expression
(\ref{eq:heavyS_rate}) has been obtained under the assumption that the
contribution of the bound states to the soliton growth scales with
$k_gr_s$ in the same way as the contribution of continuum
states. This assumption must be verified by analyzing the kinetic
cascade in the soliton--gas system which is beyond the scope of the
present paper. Second, the low-$(k_gr_s)$ bins in our simulations are
at the extreme of the numerical resolution and close to the threshold
for halo formation. Therefore they can be affected by
systematics.
Without the three lowest-$(k_gr_s)$ bins the numerical
data are compatible with a shallower slope $n=2$.
All in all, the heavy soliton limit is challenging both to numerical
and analytical methods.
Taking into account the uncertainties,
we conservatively conclude that the power $n$ in
\cref{powerfit} for heavy solitons lies in the range
$2\leq n\leq 4$. More work is needed to pin
down the precise asymptotic value of $n$ at $k_gr_s\to 0$.
\begin{figure}[t]
\begin{center}
\includegraphics[width=1.0\textwidth]{plot/growth_poly_single.pdf}
\caption{
The soliton growth/evaporation rate as a function of $k_gr_s$ --- the
product of the gas momentum and the soliton half-density radius.
The cumulative dependence is constructed using 3900 data points
extracted from 195 independent simulations with different gas and
soliton parameters.
The data are binned on logarithmic scale in
$k_gr_s$. Each dot gives the average value of the growth rate in the
bin, while the vertical error bars correspond to the standard deviation
within the bin. The blue solid line shows the asymptotic dependence
predicted by \cref{F2new}. At small $k_gr_s$ the dotted
lines indicate possible power-law dependences. The
dashed vertical line marks the value of $k_gr_s$ corresponding to the
equality of the gas and soliton virial temperatures, $T_g/T_s= 1$.
}
\label{fig:growth}
\end{center}
\end{figure}
\section{Soliton Wavefunction and Axion Gas}
\label{sec:Maxwell}
Non-relativistic axions with mass $m$ are described by a complex
scalar field $\psi$ obeying the Schr\"odinger--Poisson equations,
\begin{subequations}
\label{eq:SPeq}
\begin{align}
&i\partial_t\psi +\frac{\Delta\psi}{2m}-m\Phi\psi=0\;,\\
&\Delta\Phi=4\pi G m\,|\psi|^2\;,
\end{align}
\end{subequations}
where $G$ is the gravitational coupling, $\Phi$ is the Newton
potential and
$\Delta$ denotes the Laplacian.
The squared modulus of the field gives the particle number density,
$|\psi(t,\mathbf{x})|^2=n(t,\mathbf{x})$. Equations (\ref{eq:SPeq}) are invariant
under scaling transformations,
\begin{subequations}
\label{eq:scaling}
\begin{gather}
\psi \mapsto \tilde\psi(t,\mathbf{x})=\Lambda_3
\psi (\Lambda_1 t ,\Lambda_2\mathbf{x}) \, ,
\qquad
\Phi\mapsto \tilde\Phi( t, \mathbf{x} ) =
\frac{\Lambda_1^2}{\Lambda_2^2} \Phi (\Lambda_1t,\Lambda_2 \mathbf{x}) \, ,\\
m\mapsto \tilde m=\frac{\Lambda_2^2}{\Lambda_1}m\,,\qquad
G\mapsto \tilde G=\frac{\Lambda_1^3}{\Lambda_2^2\Lambda_3^2}G\,,
\end{gather}
\end{subequations}
where $\Lambda_{1,2,3}$ are arbitrary parameters. A one-parameter
family of these transformations that leaves $m$ and $G$ invariant
connects different solutions for a given axion; the
transformations that change the mass, but not $G$, allow one to map
between
solutions for axions with different masses; finally, the rescaling of
$G$ provides a freedom in the choice of units which is handy in
numerical simulations.
The system (\ref{eq:SPeq}) admits periodic spherically symmetric
solutions of the form,
\begin{equation}
\label{soliton}
\psi_s(t,\mathbf{x})=\chi(|\mathbf{x}|) {\rm e}^{ - i {\cal E}_s t}\;.
\end{equation}
The corresponding density $\rho_s(\mathbf{x})=m|\chi(|\mathbf{x}|)|^2$ is
time-independent and localized in space, hence these solutions are
called {\it solitons}. ${\cal E}_s$ represents the binding energy
(chemical potential) of axions in the soliton and is negative. There
is a continuous family of solitons differing by their mass $M_s$ and
related by the subgroup of the scaling transformations
(\ref{eq:scaling}) that leave $m$ and $G$ fixed. Using this symmetry,
the soliton wavefunction can be written as
\begin{equation}
\label{solitonWF}
\chi(x)=\frac{k_s^2}{\sqrt{4\pi G m^3}}\chi_0(k_s x)\;,
\end{equation}
where $k_s$ is the scaling parameter characterizing the soliton
width. By the uncertainty relation, it sets the typical momentum of
particles comprising the soliton. The dimensionless function
$\chi_0(\xi)$ describes the ``standard soliton'' normalized by the
condition
\begin{subequations}
\label{stndeq}
\begin{equation}
\label{standbc}
\chi_0(0)=1\;.
\end{equation}
It solves the eigenvalue problem following from the
Schr\"odinger--Poisson system,
\begin{align}
\label{standeq1}
&\chi_0''+\frac{2}{\xi}\chi_0'=2(\Phi_0-\varepsilon_0)\chi_0\;,\\
\label{standeq2}
&\Phi_0''+\frac{2}{\xi}\Phi_0'=\chi_0^2\;,
\end{align}
\end{subequations}
where $\Phi_0(\xi)$ is the standard soliton gravitational potential
and $\varepsilon_0$ is its binding energy. Fig.~\ref{fig:standsol} shows the
function $\chi_0(\xi)$ obtained by numerically solving
eqs.~(\ref{stndeq}). It is well approximated by an analytic fit,
\begin{equation}
\label{chifit}
\chi_{0,{\rm fit}}=\big(1+c_0\xi^2\big)^{-4}~,~~~~~~c_0=0.0539\;,
\end{equation}
also shown in the figure. The fit differs from the exact solution only
at the tail where the exact solution falls off exponentially, whereas
the fit behaves as a power-law.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.45\textwidth]{plot/chi0_linear}
\qquad
\quad
\includegraphics[width=0.45\textwidth]{plot/chi0_log}
\caption{The standard soliton profile in linear (left)
and in log (right) scale.
The solid lines show the exact solution of the Schr\"odinger--Poisson
equations,
while the dotted lines
correspond to the fitting function (\ref{chifit}).
\label{fig:standsol}
}
\end{center}
\end{figure}
The standard soliton is characterized by the following dimensionless
quantities:
\begin{subequations}
\label{standnums}
\begin{align}
&\varepsilon_0=-0.692&& \text{binding energy}\;,\\
&\mu_0=4\pi \int_0^\infty d\xi\, \xi^2
\chi_0^2(\xi) = 25.9&& \text{total mass}\;,\\
&\xi_0=1.299&&\text{half-density radius,}~|\chi_0(\xi_0)|^2=1/2\;.
\end{align}
\end{subequations}
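The quoted constants can be cross-checked against the analytic fit (\ref{chifit}) by direct quadrature; a short Python sketch (the fit reproduces $\mu_0$ only to a couple of percent because its tail differs from the exact exponential fall-off):

```python
import numpy as np
from scipy.integrate import quad

C0 = 0.0539

def chi0_fit(xi):
    """Analytic fit to the standard soliton profile."""
    return (1.0 + C0 * xi**2) ** (-4)

# total dimensionless mass mu_0 = 4 pi int_0^inf xi^2 chi_0^2 dxi
mu0_fit = 4 * np.pi * quad(lambda xi: xi**2 * chi0_fit(xi) ** 2,
                           0.0, np.inf)[0]
```

The half-density radius, on the other hand, is reproduced accurately: $|\chi_{0,{\rm fit}}(\xi_0)|^2$ differs from $1/2$ by well under a percent.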
The corresponding values for a general soliton are obtained by
rescaling,
\begin{equation}
\label{EsMsrs}
{\cal E}_s=\varepsilon_0\frac{k_s^2}{m}~,~~~~~~
M_s=\mu_0 \frac{k_s}{4\pi Gm^2}~,~~~~~~
r_s=\frac{\xi_0}{k_s}\;,
\end{equation}
and its density profile can be approximated as
\begin{equation}
\rho_s ( {\bf x} )
\approx \frac{\rho_{s,\,{\rm peak}}}{
\left[ 1 + c_s \, \left( |{\bf x}|/ r_{s} \right)^2 \right]^8
}~,~~~~~
\rho_{s,\,{\rm peak}}=\frac{k_s^4}{4\pi G m^2}~,~~c_s=0.091\;.
\label{eq:rhos}
\end{equation}
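These scaling relations can be packaged into a small helper; a sketch in units $G=m=1$ (an illustrative use of the unit freedom discussed below \cref{eq:scaling}):

```python
import numpy as np

XI0, MU0 = 1.299, 25.9   # standard-soliton constants from eq. (standnums)

def soliton_family(k_s, G=1.0, m=1.0):
    """Mass, half-density radius and peak density of the soliton with
    width parameter k_s, eqs. (EsMsrs) and (rhos)."""
    M_s = MU0 * k_s / (4 * np.pi * G * m**2)
    r_s = XI0 / k_s
    rho_peak = k_s**4 / (4 * np.pi * G * m**2)
    return M_s, r_s, rho_peak
```

Doubling $k_s$ doubles the mass, halves the radius (so $M_s r_s$ is constant along the family) and increases the peak density sixteenfold.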
Note that the width of the soliton is inversely proportional to its
mass. Accordingly, the peak density is proportional to the fourth
power of the mass. The total energy of the soliton consists of kinetic
and potential parts,
\begin{equation}
\label{Esoltot}
E_s=E_{s,{\rm kin}}+E_{s,{\rm pot}}=\int d^3x
\left(\frac{|\nabla\psi_s|^2}{2m} +\frac{m\Phi_s|\psi_s|^2}{2}\right)\;.
\end{equation}
Using the Schr\"odinger--Poisson equations one can show that they obey
the virial theorem, $E_s=-E_{s,{\rm kin}}=E_{s,{\rm pot}}/2$, and
\begin{equation}
E_s=\frac{M_s \mathcal{E}_s}{3m}\;.
\end{equation}
It is instructive to introduce the soliton virial temperature,
\begin{equation}
\label{Ts1}
T_s=\frac{2m E_{s,{\rm kin}}}{3M_s}=-\frac{2}{9}\mathcal{E}_s\;.
\end{equation}
Using eqs.~(\ref{EsMsrs}) one obtains alternative expressions,
\begin{equation}
\label{Ts2}
T_s=0.154\frac{k_s^2}{m}=\frac{0.259}{mr_s^2}\;.
\end{equation}
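The two numerical forms are consistent, up to rounding, with the defining relation (\ref{Ts1}) and the standard-soliton constants; a few lines of Python confirm this:

```python
EPS0, XI0 = -0.692, 1.299   # standard-soliton constants, eq. (standnums)

def t_s(k_s, m=1.0):
    """Soliton virial temperature, T_s = -(2/9) eps_0 k_s^2 / m."""
    return -(2.0 / 9.0) * EPS0 * k_s**2 / m
```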
We are interested to study how the mass of the soliton
varies due to its interaction with a gas of axion waves.
We assume the gas to fill
a box
of size
\begin{equation}
\label{Lrs}
L\gg r_s\;.
\end{equation}
Far away from the soliton,
it is
described by a collection of plane waves,\footnote{At $|\mathbf{x}|\lesssim
r_s$ the wavefunctions are modified by the gravitational field of
the soliton, see below.}
\begin{equation}
\label{psigas}
\psi_g(t,\mathbf{x})=\frac{1}{L^{3/2}}\sum_\k a_{\k}\,
{\rm e}^{-i\frac{k^2}{2m}t+i\k\mathbf{x}}~,~~~~~~~|\mathbf{x}|\gg r_s\;.
\end{equation}
We choose the occupation numbers to follow the Maxwell distribution,
consistent with the velocity distribution in a DM halo,
\begin{equation}
\label{fgas}
f_\k\equiv |a_{\k}|^2 = f_g\,{\rm e}^{-k^2/k_g^2}\;,
\end{equation}
where $k_g$ sets the characteristic momentum of particles in the
gas. The normalization $f_g$ is related to the gas density as
\begin{equation}
\label{fgrhog}
f_g=\frac{(4\pi)^{3/2}}{m k_g^3}\rho_g\;.
\end{equation}
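This normalization follows from $\rho_g = m\int [d\k]\, f_\k$ and can be verified numerically:

```python
import numpy as np
from scipy.integrate import quad

def rho_from_fg(f_g, k_g, m=1.0):
    """Gas density implied by the Maxwell occupation numbers (fgas):
    rho_g = m * int d^3k/(2 pi)^3 f_g exp(-k^2/k_g^2)."""
    radial = quad(lambda k: k**2 * np.exp(-(k / k_g) ** 2), 0.0, np.inf)[0]
    return m * f_g * 4 * np.pi * radial / (2 * np.pi) ** 3
```

Inverting the result reproduces $f_g = (4\pi)^{3/2}\rho_g/(m k_g^3)$ to quadrature accuracy.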
Validity of the classical description requires $f_g\gg 1$.
The phases of the amplitudes $a_{\k}$ are assumed to be random.
Using $k_g$ we can define an effective gas temperature,
\begin{equation}
\label{Tgas}
T_g=\frac{k_g^2}{2m}\;.
\end{equation}
To avoid confusion, we stress that this is not a true thermodynamic
temperature since \cref{fgas} is not an equilibrium distribution of
the boson gas, which should follow the
Bose--Einstein formula. However, the latter cannot be reached within the
classical field theory. Rather, as demonstrated in
Ref.~\cite{Levkov:2018kau},
a homogeneous axion gas with initial distribution (\ref{fgas})
will evolve
towards the Rayleigh--Jeans occupation numbers diverging at low
$k$. This relaxation proceeds on the time scale
\begin{equation}
\label{trelax}
\tau_{\rm rel}=
\frac{\sqrt{2}b\,k_g^6}{12\pi^3G^2m^3\rho_g^2\,
\ln(k_gL)}\;,~~~~b\approx 0.9\;,
\end{equation}
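For later estimates it is convenient to have \cref{trelax} available as a function; a direct transcription in Python (units $G=m=1$ assumed):

```python
import numpy as np

def tau_rel(k_g, rho_g, L, G=1.0, m=1.0, b=0.9):
    """Kinetic relaxation time of the homogeneous axion gas, eq. (trelax)."""
    return (np.sqrt(2) * b * k_g**6
            / (12 * np.pi**3 * G**2 * m**3 * rho_g**2 * np.log(k_g * L)))
```

The strong scalings $\tau_{\rm rel}\propto k_g^6$ and $\tau_{\rm rel}\propto\rho_g^{-2}$ are what make the relaxation time so sensitive to the gas parameters.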
and culminates in the spontaneous formation of a soliton.
We neglect the change of the gas distribution
in our theoretical considerations and discuss the validity of
this simplification later on. Numerically, we observe
that the Maxwell distribution appears to get reinstated in the gas once the
soliton is formed. Moreover, in the simulations where the soliton is
present for the whole duration, the distribution remains close to
Maxwellian at all moments of time.
Being a self-gravitating system, the homogeneous axion gas is unstable
with respect to gravitational collapse leading to a halo
formation.
The corresponding Jeans length
is
\begin{equation}
\label{lJeans}
l_J=\frac{k_g}{m}\sqrt{\frac{\pi}{2G\rho_g}}\;,
\end{equation}
where we have used that the sound speed in a non-relativistic
Maxwellian gas is $k_g/(\sqrt{2}m)$. We avoid this instability by
considering the box size smaller than the Jeans length,
\begin{equation}
\label{LlJ}
L<l_J\;.
\end{equation}
Note that this condition is compatible with \cref{Lrs} since $l_J$ can
be made arbitrarily large by decreasing the gas density. In practice,
however, \cref{LlJ} imposes strong limitations on the numerical
simulations, see \cref{sec:simulation}.
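A sketch of the two box-size conditions in Python (the factor used to enforce $L\gg r_s$ is an illustrative choice):

```python
import numpy as np

def jeans_length(k_g, rho_g, G=1.0, m=1.0):
    """Jeans length of the Maxwellian gas, eq. (lJeans), with sound
    speed k_g / (sqrt(2) m)."""
    return (k_g / m) * np.sqrt(np.pi / (2.0 * G * rho_g))

def box_is_admissible(L, r_s, k_g, rho_g):
    """Check the window r_s << L < l_J used in the simulations."""
    return 10.0 * r_s < L < jeans_length(k_g, rho_g)
```

Since $l_J\propto\rho_g^{-1/2}$, decreasing the gas density widens the admissible window, as noted in the text.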
The total axion field describing a soliton immersed in the gas is
given by the sum
\begin{equation}
\label{split}
\psi(t,\mathbf{x})=\psi_s(t,\mathbf{x}) + \psi_g(t,\mathbf{x})\;.
\end{equation}
For this decomposition
to be well-defined, the number of particles in the
soliton must be much larger than in any other state in the gas,
\begin{equation}
\label{largeNs}
M_s/m\gg f_\k\;.
\end{equation}
To compare the soliton size with the characteristic wavelength of
axion waves, we introduce
\begin{equation}
\nu \equiv \frac{k_g}{k_s} = 0.773\,r_s k_g
=0.555\sqrt{\frac{T_g}{T_s}} \ .
\label{eq:alpharatio}
\end{equation}
Recalling that the mass of the soliton is inversely proportional to
its size, we split solitons into three groups:
{\it light solitons} ($\nu \gg 1$), {\it heavy solitons} ($\nu \ll
1$), and {\it median solitons} ($\nu \sim 1$). Note that light
solitons are also {\it cold}, heavy solitons are {\it hot}, whereas
median solitons have the same virial temperature as the gas. We are
going to see that the evolution of solitons from different groups is
dramatically different.
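The alternative coefficients in \cref{eq:alpharatio} follow, up to rounding of the standard-soliton constants, from $r_s=\xi_0/k_s$ and eqs.~(\ref{Tgas}), (\ref{Ts2}); a quick numerical check:

```python
import numpy as np

XI0, T_S_COEF = 1.299, 0.154   # standard-soliton constants

# nu = k_g / k_s rewritten via the radius and via the temperatures:
coef_radius = 1.0 / XI0              # nu = coef_radius * r_s * k_g
coef_temp = np.sqrt(2.0 * T_S_COEF)  # nu = coef_temp * sqrt(T_g / T_s)
```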
\section{Particle Exchange between Soliton and Gas}
\label{sec:theory}
\subsection{Soliton growth rate from wave scattering}
\label{sec:solitonrate}
The soliton is composed of a Bose--Einstein condensate occupying the ground
state in its own gravitational potential. Several processes affect the
soliton in the axion gas. One of them is the interference of gas waves
with the soliton field which leads to fluctuations of its peak
density. Another one is elastic scattering of waves on the soliton
which endows it with momentum and leads to its Brownian motion. These
processes, however, do not change the number of particles in the
ground state and are not of interest to us. We focus on the processes
that lead to particle exchange between the gas and the soliton and
thereby affect the amplitude of the Bose--Einstein condensate.
In this section we develop their description
using scattering
theory. We adopt the language of quantum field theory as the most
convenient tool for this task. However, it is important to emphasize
that quantum physics is not essential for the soliton-gas
interaction. In \cref{app:class} we show how the same results can be
obtained within a purely classical approach.
We start by observing that the Schr\"odinger--Poisson equations can be
derived from the action
\begin{equation}
\label{Saxion}
S=\int dtd^3x\,\bigg(i\psi^*\partial_t\psi+\frac{\psi^*\Delta\psi}{2m}
+\frac{\Phi\Delta\Phi}{8\pi G}-m\Phi |\psi|^2\bigg)\;.
\end{equation}
We decompose the total axion field into the soliton and gas
components as in \cref{split}. At this point we should be more precise
about how we perform the split. The spectrum of particle states in the
soliton background contains unbound states with wavefunctions
becoming plane waves far away from the soliton, as well as bound
states in the soliton gravitational potential. In the literature,
the latter are usually
interpreted as excitations of the soliton. While
this is a valid interpretation, it is more convenient
for our purposes
to include them in the gas. The physical reason
is that no matter whether the state is bound or not, a
transfer of particles to it from the ground state will deplete
the coherence of the soliton, whereas the inverse process clearly has
an opposite effect. Thus, we adopt the following
convention: the soliton component refers to coherent particles
strictly in the ground state described by the wavefunction
(\ref{soliton}), whereas the gas includes all the rest of particles.
Decomposing also the Newton potential into the gravitational
potential of the soliton and perturbations, $\Phi=\Phi_s+\phi$,
substituting it into \cref{Saxion} and keeping only terms containing
perturbations, we obtain the gas action,
\begin{equation}
\label{Sgas}
S_g=\int dt d^3x\,\bigg(i\psi^*_g\partial_t\psi_g
+\frac{\psi_g^*\Delta\psi_g}{2m}-m\Phi_s|\psi_g|^2
+\frac{\phi\Delta\phi}{8\pi G}
-m\psi_s^*\,\phi\psi_g-m\psi_s\,\phi\psi_g^*
-m\phi|\psi_g|^2\bigg).
\end{equation}
In deriving this expression we have used that the soliton fields
$\psi_s$, $\Phi_s$ satisfy the Schr\"odinger--Poisson
equations. Following the rules of quantum field theory, we promote
$\psi_g$ and $\phi$ to second-quantized fields, whereas $\psi_s$,
$\Phi_s$ are treated as c-valued background.
The terms linear in $\psi_g$ break the phase-rotation symmetry of the
axion gas, $\psi_g\mapsto \psi_g{\rm e}^{i\alpha}$, and therefore lead to
non-conservation of gas particles. Of course, the total number
of non-relativistic axions is conserved, meaning that the particles
from the gas go into the soliton and vice versa. The last term in
\cref{Sgas} preserves the gas particle number and describes
interactions of axions in the absence of soliton. It is responsible
for the kinetic relaxation in a homogeneous gas \cite{Levkov:2018kau}.
Due to energy conservation, a particle can be absorbed or emitted by
the soliton only if it exchanges energy with another particle from the
gas. This leads us to consider the process $g+g\to g+s$ when two gas
particles scatter on each other and one of them merges into the
soliton, as well as the inverse process $s+g\to g+g$ when a particle
hits the soliton and kicks out another particle.
The Feynman diagrams for
these processes are shown in \cref{fig:scattering}. Solid straight
lines represent the gas particles, whereas the dashed line corresponds
to the soliton. The wavy line stands for the ``propagator'' of the
Newton potential, which is proportional to the inverse of the Laplacian. In
the approximation of infinite box size it reads,
\begin{equation}
\label{Newtprop}
\begin{fmffile}{Newtprop}
\parbox{85pt}{
\begin{fmfgraph*}(50,50)
\fmfpen{thick}
\fmfleft{l1}
\fmfright{r1}
\fmflabel{$(t,\mathbf{x})$}{l1}
\fmflabel{$(t',\mathbf{x}')$}{r1}
\fmf{photon}{l1,r1}
\end{fmfgraph*}
}
\end{fmffile}
=-i\,4\pi G\,\delta(t-t')\int \frac{[d\k]}{k^2} {\rm e}^{i\k(\mathbf{x}-\mathbf{x}')}\;,
\end{equation}
where we have introduced a shorthand notation for the integration measure
\begin{equation}
\label{measure}
[d\k]\equiv \frac{d^3 k}{(2\pi)^3}\; .
\end{equation}
Combining it with the vertices implied by the action
(\ref{Sgas}), we obtain the amplitude for the diagram $(a)$ in
\cref{fig:scattering},
\begin{equation}
\label{M1s23}
A_{1s,23}
=(2\pi)\delta(\mathcal{E}_1+\mathcal{E}_2-\mathcal{E}_3-\mathcal{E}_s)\, (4\pi Gm^2)
\int\frac{[d\k]}{k^2} V_{1s}(\k) V_{23}(-\k)\;,
\end{equation}
with the vertex form factors
\begin{equation}
\label{Vs}
V_{1s}(\k)=\int d^3x\,\psi_1(\mathbf{x})\chi(|\mathbf{x}|){\rm e}^{i\k\mathbf{x}}\;,\qquad
V_{23}(\k)=\int d^3x\,\psi_2(\mathbf{x})\psi_3^*(\mathbf{x}){\rm e}^{i\k\mathbf{x}}\;,
\end{equation}
where
$\psi_i(\mathbf{x})$, $i = {1,2,3}$, are the wavefunctions of the states with
energies $\mathcal{E}_i$. The diagram $(b)$ is obtained simply by interchanging
the particles $1$ and $2$, so the total absorption amplitude is
$A_{1s,23}+A_{2s,13}$. The emission process --- diagrams $(c,d)$ in
\cref{fig:scattering} --- is described by the complex conjugate
amplitude $A_{1s,23}^*+A_{2s,13}^*$.
\begin{figure}[tb]
\begin{center}
\begin{fmffile}{absorption1}
\parbox{70pt}{
\begin{fmfgraph*}(70,70)
\fmfpen{thick}
\fmfleft{l1,l2}
\fmfright{r1,r2}
\fmflabel{${\cal E}_1$}{l1}
\fmflabel{${\cal E}_2$}{l2}
\fmflabel{${\cal E}_3$}{r2}
\fmflabel{${\cal E}_s$}{r1}
\fmf{plain}{l1,b1}
\fmf{plain}{l2,b2}
\fmf{photon,label=$\k$}{b1,b2}
\fmf{plain}{b2,r2}
\fmf{dashes}{b1,r1}
\end{fmfgraph*}
}
\end{fmffile}
\quad
+
\quad
\begin{fmffile}{absorption2}
\parbox{70pt}{
\begin{fmfgraph*}(70,70)
\fmfpen{thick}
\fmfleft{l1,l2}
\fmfright{r1,r2}
\fmflabel{${\cal E}_2$}{l1}
\fmflabel{${\cal E}_1$}{l2}
\fmflabel{${\cal E}_3$}{r2}
\fmflabel{${\cal E}_s$}{r1}
\fmf{plain}{l1,b1}
\fmf{plain}{l2,b2}
\fmf{photon,label=$\k$}{b1,b2}
\fmf{plain}{b2,r2}
\fmf{dashes}{b1,r1}
\end{fmfgraph*}
}
\end{fmffile}
\qquad
\qquad
\begin{fmffile}{creation1}
\parbox{70pt}{
\begin{fmfgraph*}(70,70)
\fmfpen{thick}
\fmfleft{l1,l2}
\fmfright{r1,r2}
\fmflabel{${\cal E}_2$}{r2}
\fmflabel{${\cal E}_1$}{r1}
\fmflabel{${\cal E}_3$}{l2}
\fmflabel{${\cal E}_s$}{l1}
\fmf{plain}{r1,b1}
\fmf{plain}{l2,b2}
\fmf{photon,label=$\k$}{b1,b2}
\fmf{plain}{b2,r2}
\fmf{dashes}{b1,l1}
\end{fmfgraph*}
}
\end{fmffile}
\quad
+
\quad
\begin{fmffile}{creation2}
\parbox{70pt}{
\begin{fmfgraph*}(70,70)
\fmfpen{thick}
\fmfleft{l1,l2}
\fmfright{r1,r2}
\fmflabel{${\cal E}_1$}{r2}
\fmflabel{${\cal E}_2$}{r1}
\fmflabel{${\cal E}_3$}{l2}
\fmflabel{${\cal E}_s$}{l1}
\fmf{plain}{r1,b1}
\fmf{plain}{l2,b2}
\fmf{photon,label=$\k$}{b1,b2}
\fmf{plain}{b2,r2}
\fmf{dashes}{b1,l1}
\end{fmfgraph*}
}
\end{fmffile}
~~~~~\\
~~~~~\\
~~~~~\\
$(a)$\qquad\qquad\qquad\qquad\qquad
$(b)$\qquad\qquad\qquad\qquad\qquad\quad
$(c)$\qquad\qquad\qquad\qquad\qquad
$(d)$
\end{center}
\caption{Feynman diagrams describing absorption ($a,b$) and emission
($c,d$) of a particle by the soliton interacting with the axion gas.
Solid lines correspond to gas particles, the dashed line to
the soliton, and the wavy line to the Newtonian interaction.
The time direction is from left to right.
The
labels on the external legs represent the energies of the scattered
states, whereas $\k$ is the momentum exchange.
\label{fig:scattering}}
\end{figure}
The probability per unit time that two particles 1 and 2
scatter in such a way that one of them merges into the soliton
is given by
the usual formula,
\begin{equation}
\frac{dp_{12\to 3s}}{dt}=(2\pi)\delta(\mathcal{E}_1+\mathcal{E}_2-\mathcal{E}_3-\mathcal{E}_s)\,
|A_{1s,23}'+A_{2s,13}'|^2\;,
\end{equation}
where the prime denotes the amplitudes stripped of the energy
$\delta$-function,
\begin{equation}
\label{Astrip}
A_{1s,23}'=(4\pi Gm^2)
\int\frac{[d\k]}{k^2} V_{1s}(\k) V_{23}(-\k)\;,
\end{equation}
and similarly for $A_{2s,13}'$. To obtain the change in the soliton
mass, we have to subtract the rate of the inverse process and
sum over all states in the gas, weighting them with
the occupation numbers $f_i$. The weighting takes into account the
effect of the Bose enhancement due to non-zero occupation numbers of
the initial and final states. This yields,
\begin{align}
\Gamma_s&=\frac{m}{M_s}\times\frac{1}{2}
\!\sum_{\text{states 1,2,3}}
\!\!\!\!\!(2\pi) \delta(\mathcal{E}_1\!+\!\mathcal{E}_2\!-\!\mathcal{E}_3\!-\!\mathcal{E}_s)
\big(f_1 f_2 (1\!+\!f_3)-(1\!+\!f_1)(1\!+\!f_2)f_3\big)
|A'_{1s,23}+A'_{2s,13}|^2\notag\\
&\simeq \frac{m}{2M_s}
\sum_{\text{states 1,2,3}}
\!\!(2\pi) \delta(\mathcal{E}_1\!+\!\mathcal{E}_2\!-\!\mathcal{E}_3\!-\!\mathcal{E}_s)
\big(f_1 f_2 - f_1 f_3-f_2 f_3\big)|A'_{1s,23}+A'_{2s,13}|^2\;,
\label{eq:NsGen}
\end{align}
where the factor $1/2$ has been inserted to avoid double-counting
the pairs of states related by the interchange of particles 1 and
2. In going to the second line we used that the occupation
numbers are large and kept only the leading terms quadratic in $f_i$.
Equation
(\ref{eq:NsGen}) represents the key result of this subsection. It
describes the evolution of the soliton mass for arbitrary distribution
of the gas particles.
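The large-occupation reduction performed between the two lines of \cref{eq:NsGen} is easy to check numerically: the exact Bose-enhanced combination differs from the kept quadratic terms exactly by $-f_3$, which is negligible when $f_i\gg 1$.

```python
def rate_factor_exact(f1, f2, f3):
    """Bose-enhanced combination from the first line of the rate formula."""
    return f1 * f2 * (1 + f3) - (1 + f1) * (1 + f2) * f3

def rate_factor_classical(f1, f2, f3):
    """Leading quadratic terms kept at large occupation numbers."""
    return f1 * f2 - f1 * f3 - f2 * f3
```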
To proceed, we assume that the gas distribution far away from the
soliton is controlled by a single characteristic momentum $k_g$ as,
for example, in the case of the Maxwellian gas (\ref{fgas}). For the
bound states localized near the soliton, the occupation numbers can,
in principle, also depend on the soliton properties. These, as
discussed in \cref{sec:Maxwell}, are determined by a single parameter
$k_s$. Thus, we write an Ansatz,
\begin{equation}
\label{fAns}
f_i=\frac{\rho_g}{mk_g^3}\;
u\bigg(\frac{m\mathcal{E}_i}{k_g},\frac{k_g}{k_s}\bigg)\;,
\end{equation}
where $\rho_g$ is the density of the gas far away from the soliton,
and $u$ is a dimensionless function.
Next, it is convenient to rescale the coordinates, momenta, energies
and wavefunctions to units associated with the soliton,
\begin{equation}
\mathbf{x}=\boldsymbol{\xi}/{k_s}~,~~~\k={k_s} \boldsymbol{\kappa}
~,~~~\mathcal{E}_i=\varepsilon_i \frac{k_s^2}{m}~,~~~
\psi_i(\mathbf{x})=k_s^{3/2}\vf_i({k_s} \mathbf{x})\;.
\label{eq:rescale}
\end{equation}
Substituting these rescaled variables into eqs.~(\ref{Vs}),
(\ref{Astrip}), (\ref{eq:NsGen}) we obtain,
\begin{equation}
\label{Gammagamma}
\Gamma_s=\frac{(4\pi G)^2m^3\rho_g^2}{k_g^6}\,\gamma_s(\nu)\;,
\end{equation}
where $\nu=k_g/k_s$ is the parameter introduced in \cref{eq:alpharatio}. The
dimensionless function $\gamma_s(\nu)$ is computed by summing over the states in
the background of the standard soliton of \cref{sec:Maxwell},
\begin{equation}
\label{gammadef}
\gamma_s(\nu)=\frac{\pi}{\mu_0}
\sum_{\text{states 1,2,3}}
\!\!\delta(\varepsilon_1\!+\!\varepsilon_2\!-\!\varepsilon_3\!-\!\varepsilon_0)\;
\big(u_1 u_2 - u_1 u_3-u_2 u_3\big)|{\cal A}'_{1s,23}+{\cal A}'_{2s,13}|^2\;,
\end{equation}
where $\varepsilon_0$, $\mu_0$ are numerical coefficients
quoted in \cref{standnums} and
$u_i\equiv u(\varepsilon_i/\nu^2,\nu)$ are rescaled occupation
numbers. For the rescaled amplitudes we have
\begin{gather}
\label{dimlessA}
{\cal A}'_{1s,23}=\int\frac{[d\boldsymbol{\kappa}]}{\kappa^2}
{\cal V}_{1s}(\boldsymbol{\kappa})
{\cal V}_{23}(-\boldsymbol{\kappa})~,\\
{\cal V}_{1s}(\boldsymbol{\kappa})=
\int d^3\xi\,\vf_1(\boldsymbol{\xi})
\chi_0(\xi){\rm e}^{i\boldsymbol{\kappa\xi}}~,~~~~~~
{\cal V}_{23}(\boldsymbol{\kappa})=
\int d^3\xi\,\vf_2(\boldsymbol{\xi})
\vf_3^*(\boldsymbol{\xi}){\rm e}^{i\boldsymbol{\kappa\xi}}\;,
\label{dimlessVs}
\end{gather}
where $\chi_0(\xi)$ is the standard soliton profile.
In \cref{sec:simulation} we extract the function $\gamma_s(\nu)$ from numerical
simulations, whereas in the rest of this section
we estimate it analytically for the cases of
light and heavy solitons in Maxwellian gas.
Before moving on, let us comment on the structure of the eigenfunctions
in the soliton background which enter into the calculation of the
soliton growth rate through the form factors (\ref{Vs}) or
(\ref{dimlessVs})
(the details
will be presented in a forthcoming publication
\cite{soliton2}). First, it is clear from the third term in the action
(\ref{Sgas}) that the wavefunctions will be affected by the
soliton gravitational potential $\Phi_s$. While this effect is small for
highly excited unbound states with energies $\mathcal{E}_i\gg |\mathcal{E}_s|$,
it becomes important for the states with
$\mathcal{E}_i\lesssim |\mathcal{E}_s|$ and gives rise to a discrete spectrum of bound
states. Second, an additional
modification of the eigenfunctions comes from the term
$-m\psi_s^*\,\phi\psi_g$ and its complex
conjugate in \cref{Sgas}. These terms bring qualitatively new features by mixing
positive and negative frequencies in the eigenvalue equation
\cite{Guzman:2004wj,soliton2}. As a result, the eigenmodes contain
both positive and negative frequency components
which can be interpreted as a consequence of the
Bogoliubov transformation required to diagonalize the Hamiltonian in
the presence of the condensate \cite{pitaevskii2016bose}.
The negative-frequency part is
significant for low-lying modes and cannot be discarded. In
particular, it is
crucial for the existence of zero-energy excitations
required by the spontaneously broken
translation symmetry.
On the other hand, for the modes of the continuous spectrum
the negative-frequency component is
essentially negligible.
\subsection{Light soliton}
\label{sec:exactwf}
Calculation of $\gamma_s(\nu)$ is challenging in general. The
task simplifies in the case $\nu\gg 1$, which corresponds to
a {\it light soliton} as defined in \cref{sec:Maxwell}. The typical
momentum of particles in the gas in this case is much larger than the
momentum of particles in the soliton. In other words, the soliton is
colder than the gas.
Let us understand which kinematical region gives the dominant
contribution to the sum in \cref{gammadef}. To this end, consider
the amplitude (\ref{dimlessA}) and take particles 2 and 3 to be
typical particles in the gas. Since their energies are much higher than
the soliton binding energy, their wavefunctions are well described by
plane waves with momenta $\boldsymbol{\kappa}_2$,
$\boldsymbol{\kappa}_3$ which are of order $\nu$. Substituting these
into the vertex ${\cal V}_{23}$ we obtain,
\begin{equation}
\label{V23light}
{\cal
V}_{23}(\boldsymbol{-\kappa})=(2\pi)^3\delta(\boldsymbol{\kappa}_2
-\boldsymbol{\kappa}_3-\boldsymbol{\kappa})\;,
\end{equation}
and hence the amplitude
\begin{equation}
\label{M1s23-1}
{\cal A}_{1s,23}'=\frac{{\cal
V}_{1s}(\boldsymbol{\kappa})}{\kappa^2}~,~~~~~
\boldsymbol{\kappa}=\boldsymbol{\kappa}_2-\boldsymbol{\kappa}_3 \;.
\end{equation}
The denominator enhances the amplitude at soft momentum
exchange. However, the enhancement is cut off at small momenta, since the
matrix element ${\cal V}_{1s}(\boldsymbol{\kappa})$ vanishes at
$\boldsymbol{\kappa}=0$ due to orthogonality of the wavefunctions
$\vf_1$ and $\chi_0$. It can be further shown \cite{soliton2} that the
contribution linear in $\kappa$ also vanishes as a consequence of
(spontaneously broken) translation invariance. Thus,
\begin{equation}
\label{V1sk2}
{\cal
V}_{1s}(\boldsymbol{\kappa})\sim \kappa^2
\end{equation}
and the pole in the amplitude cancels
out. We conclude that the amplitude is
maximal at $\kappa\sim 1$,
where it is of order $1$. The corresponding wavefunction $\vf_1$ must
be one of the low-lying states with characteristic energy and momentum
$|\varepsilon_1|, \kappa_1\sim 1$. Notice that the amplitude obtained by the
interchange of particles 1 and 2 for the same kinematics is
suppressed,
\begin{equation}
\label{M2s13-2}
{\cal A}_{2s,13}'=\frac{{\cal
V}_{2s}(\boldsymbol{\kappa}_1-\boldsymbol{\kappa}_3)}{|\boldsymbol{\kappa}_1-\boldsymbol{\kappa}_3|^2}\sim \frac{1}{\kappa_3^2}\sim\frac{1}{\nu^2}\;.
\end{equation}
We now return to the expression (\ref{gammadef}) and rewrite it in the
following form,
\begin{equation}
\begin{split}
\gamma_s(\nu)=
\frac{\pi}{\mu_0}\sum_{\text{states 1,2,3}}
\delta(\varepsilon_1+\varepsilon_2-\varepsilon_3&-\varepsilon_0)\big[2u_1(u_2-u_3)|{\cal A}'_{1s,23}|^2
-2u_2u_3
|{\cal A}'_{1s,23}|^2\\
&+(u_1u_2-u_1u_3-u_2u_3)({\cal A}'_{1s,23}{\cal A}'^*_{2s,13}+\text{h.c})\big]\;.
\end{split}
\label{eq:dN_F2}
\end{equation}
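The equivalence of \cref{eq:dN_F2} with \cref{gammadef} relies only on expanding $|{\cal A}'_{1s,23}+{\cal A}'_{2s,13}|^2$ and on the invariance of the sum (the energy $\delta$-function and the combination $u_1u_2-u_1u_3-u_2u_3$) under relabeling the states $1\leftrightarrow 2$. A minimal numerical check of this algebra on a symmetrized pair of terms, with random occupation numbers and complex amplitudes (all values illustrative):

```python
import random

random.seed(0)

def rand_c():
    return complex(random.uniform(-1, 1), random.uniform(-1, 1))

u1, u2, u3 = (random.uniform(0, 1) for _ in range(3))
A1, A2 = rand_c(), rand_c()  # stand-ins for A'_{1s,23} and A'_{2s,13}

# a state triple plus its 1<->2 relabeling, under which u1 <-> u2
# and A'_{1s,23} <-> A'_{2s,13}; the delta-function is invariant
pair = [((u1, u2, u3), (A1, A2)), ((u2, u1, u3), (A2, A1))]

# original integrand of eq. (gammadef)
lhs = sum((a*b - a*c - b*c) * abs(x + y) ** 2
          for (a, b, c), (x, y) in pair)
# rewritten integrand of eq. (eq:dN_F2)
rhs = sum(2*a*(b - c)*abs(x)**2 - 2*b*c*abs(x)**2
          + (a*b - a*c - b*c) * 2*(x * y.conjugate()).real
          for (a, b, c), (x, y) in pair)

print(abs(lhs - rhs))  # vanishes to machine precision
```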
For the preferred kinematics,
the first term in brackets is small. Indeed, using the Maxwell
distribution for the unbound states we obtain,
\begin{equation}
u_2-u_3 = u_2 \,
\big(1-{\rm e}^{-2(\varepsilon_3-\varepsilon_2)/\nu^2}\big)=
u_2 \, \big(1-{\rm e}^{- 2 (\varepsilon_1-\varepsilon_0)/\nu^2}\big)
\approx u_2\frac{2(\varepsilon_1-\varepsilon_0)}{\nu^2} = O(\nu^{-2})\;,
\end{equation}
where in the second equality we used energy conservation.
The terms in the second line in \cref{eq:dN_F2} are also suppressed
due to \cref{M2s13-2}. Thus, up to corrections of order
$O(\nu^{-2})$, we have
\begin{equation}
\label{gammalight1}
\gamma_s(\nu)=-\frac{2\pi}{\mu_0}\sum_{\text{state 1}}
\int[d\boldsymbol{\kappa}_2][d\boldsymbol{\kappa}_3]
\delta\Big(\varepsilon_1-\varepsilon_0+\tfrac{\kappa_2^2}{2}-\tfrac{\kappa_3^2}{2}\Big)
(4\pi)^3{\rm e}^{-(\kappa_2^2+\kappa_3^2)/\nu^2}
\frac{|{\cal V}_{1s}(\boldsymbol{\kappa}_2-\boldsymbol{\kappa}_3)|^2}{|\boldsymbol{\kappa}_2-\boldsymbol{\kappa}_3|^4}\;.
\end{equation}
Two comments are in order. First, we observe that $\gamma_s(\nu)$ is
negative. Recalling that it multiplies the rate of the soliton mass
change, \cref{Gammagamma}, we conclude that the mass of a light
soliton decreases --- it {\em evaporates}. Second, the expression
(\ref{gammalight1}) does not depend on the occupation number of the
low-lying state 1.
This is a welcome property.
Particles from the low-lying energy levels are further
upscattered by the gas and eventually become
unbound.
Calculating the occupation numbers of these levels is a
nontrivial task.
Fortunately, we do not need to know them to determine the soliton
evaporation rate at leading order.
The next steps include changing the integration variables to
$\boldsymbol{\kappa}=\boldsymbol{\kappa}_2-\boldsymbol{\kappa}_3$ and
$\boldsymbol{\kappa}_+=(\boldsymbol{\kappa}_2+\boldsymbol{\kappa}_3)/2$ and
performing the integration over $\boldsymbol{\kappa}_+$. Discarding
suppressed terms, we obtain that $\gamma_s$ is proportional to $\nu^2$
with a numerical coefficient equal to a certain weighted
sum over states in the standard
soliton background,
\begin{equation}
\label{F2new}
\gamma_s(\nu)=-C_{ls}\,\nu^2~,~~~~~~C_{ls}=\frac{8\pi^2}{\mu_0}
\sum_{\text{state 1}}\int \frac{[d\boldsymbol{\kappa}]}{\kappa^5}|{\cal
V}_{1s}(\boldsymbol{\kappa})|^2\;.
\end{equation}
Despite an
apparent pole of the integrand
at $\kappa\to 0$, the coefficient $C_{ls}$ is finite due to the property
(\ref{V1sk2}).
Numerical evaluation gives
\cite{soliton2},
\begin{equation}
\label{Clsnum}
C_{ls}\simeq 3.5\;.
\end{equation}
To summarize, light solitons evaporate.
The change of the soliton mass
is dominated by the process $g+s \to
g+ g$, with gas particles kicking axions out of the soliton.
By considering the soft momentum exchange, we have obtained
the leading term in the function $\gamma_s(\nu)$ in the
evaporation rate, which is proportional to $\nu^2$ with an order-one
coefficient.
It is instructive to compare the time scale of evaporation
$|\Gamma_s|^{-1}$ with the relaxation time in the gas (\ref{trelax}).
We see that evaporation is faster than relaxation if $\nu$ exceeds the
critical value
\begin{equation}
\label{nucrit}
\nu_c=\sqrt{\frac{3\pi\,\ln(k_gL)}{4\sqrt 2\, b\, C_{ls}}}\simeq 1.6\;,
\end{equation}
where we have used $\ln(k_gL)\sim 5$.
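Numerically, the estimate can be reproduced from \cref{Clsnum} and $\ln(k_gL)\sim 5$; the value $b\approx 0.9$ for the order-one coefficient of the relaxation-time formula is an assumption here, since it is quoted elsewhere in the paper:

```python
import math

# C_ls from the numerical evaluation above; log_kgL and b are
# assumptions (b ~ 0.9 is the order-one coefficient entering the
# gas relaxation-time estimate, not restated in this section)
C_ls = 3.5
log_kgL = 5.0
b = 0.9

nu_c = math.sqrt(3 * math.pi * log_kgL / (4 * math.sqrt(2) * b * C_ls))
print(round(nu_c, 2))  # → 1.63, consistent with nu_c ≈ 1.6 in the text
```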
This is close to the threshold for soliton evaporation
found in numerical simulations, see \cref{sec:simulation}. For
$\nu>\nu_c$ the relaxation in the gas can be neglected and our
assumption of the stability of the Maxwell distribution is well
justified.
\subsection{Heavy soliton}
\label{sec:heavysoliton}
In this section we consider the opposite limit $\nu\ll 1$, corresponding
to a {\it heavy} or {\it hot} soliton. The analysis in this case is more
complicated, so we content ourselves with a semi-qualitative discussion
focusing on the overall scaling of the growth rate function $\gamma_s$
with $\nu$. A more detailed study is left for the future.
For a heavy soliton, the typical energy of gas particles is much smaller
than the soliton binding energy, which in our dimensionless
units is of order one. Then the process of kicking
particles out of the soliton, shown on the right of \cref{fig:scattering},
is strongly suppressed, since it requires particle $3$ to have an
order-one energy. We are left with the absorption process given by the
diagrams $(a,b)$ in \cref{fig:scattering} and corresponding to the
term proportional to $u_1u_2$ in \cref{gammadef}. This already allows
us to conclude that
the heavy soliton grows at a strictly positive
rate, thereby excluding the possibility of a kinetic
equilibrium between the soliton and the gas.
Particles 1 and 2 that participate in the absorption process can belong
either to unbound or to bound states. A problem
arises because the occupation numbers of the bound states are
unknown. In a complete treatment, they must be determined
self-consistently from the solution of the Boltzmann equation in the
gas. Such an analysis is beyond the scope of this paper. Below we
focus on the contribution to $\gamma_s(\nu)$ coming from the
processes in which both states 1 and 2 are unbound, assuming that it
correctly captures the scaling of the full result with $\nu$. We
stress that this assumption must be verified by a detailed study, which
we postpone to future work. We further assume that the
occupation numbers of the unbound states are Maxwellian.
Even for unbound states, the wavefunctions are significantly modified
by the long-range Newtonian potential of the soliton which in
the dimensionless units has the form,
\begin{equation}
U(\xi) = - \frac{\mu_0 }{4\pi \xi} \equiv
- \frac{\beta}{\xi}
\ .
\label{eq:Ur}
\end{equation}
We can capture its effect by approximating the exact eigenfunctions
with the Coulomb
wavefunctions,
\begin{equation}
\label{Coulombwave}
\vf_{\boldsymbol{\kappa}} (\boldsymbol{\xi} ) =
{\rm e}^{i(\beta/\kappa)(\ln{\beta/\kappa}-1)+i\pi/4}\,
\varGamma \bigg( 1 - i\frac{\beta}{\kappa}\bigg )
\, {\rm e}^{ \pi\beta /(2\kappa)}\, {\rm e}^{i \boldsymbol{ \kappa\xi} } \,
{}_{1}F_{1} \bigg( i \frac{\beta}{\kappa} ; 1; i (\kappa \xi -
\boldsymbol{\kappa\xi} ) \bigg) \ ,
\end{equation}
where $\varGamma$ stands for the gamma-function and
${}_{1}F_{1}$ is the confluent hypergeometric (Kummer)
function. This solution describes a scattered wave with initial momentum
$\boldsymbol{\kappa}$. Note that, compared to the standard definition,
we have added a phase in
\cref{Coulombwave} for later
convenience.
For modes with small asymptotic momenta the eigenfunctions simplify,
\begin{align}
\label{Coulombsoft}
\vf_{\boldsymbol{\kappa}} (\boldsymbol{\xi} )
\to \sqrt{ \frac{2 \pi\beta}{\kappa} } \,
J_0 \big(2 \sqrt{ \beta ( \xi - {\bf n} \boldsymbol{\xi} ) } \big)
\equiv \frac{1}{\sqrt{\kappa}} \, \hat{ \varphi}_{\bf n}
(\boldsymbol{\xi}) ~,~~~~~\kappa\ll 1 \ ,
\end{align}
where ${\bf n}=\boldsymbol{\kappa}/\kappa$ is the unit vector in the
direction of momentum. We observe that the dependence on the absolute
value of momentum factorizes. Note that the eigenfunctions get
enhanced at $\kappa\to 0$, which reflects the focusing effect of the
Coulomb field. Note also that, despite the small momentum at infinity,
the eigenfunctions oscillate with an order-one period at $\xi\sim 1$,
consistent with the fact that particles accelerate to an order-one momentum
in the vicinity of the soliton.
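The limit (\ref{Coulombsoft}) can be verified directly from (\ref{Coulombwave}). The sketch below evaluates the modulus of (\ref{Coulombwave}) from the Kummer power series and compares it with the small-$\kappa$ form; the values of $\beta$, $\boldsymbol{\kappa}$ and $\boldsymbol{\xi}$ are illustrative, and the exact identity $|\varGamma(1+iy)|^2=\pi y/\sinh(\pi y)$ is used to avoid complex gamma functions:

```python
import math

def hyp1f1(a, b, z, tol=1e-15, nmax=500):
    """Kummer 1F1(a;b;z) from its everywhere-convergent power series."""
    term, total = complex(1), complex(1)
    for n in range(nmax):
        term *= (a + n) * z / ((b + n) * (n + 1))
        total += term
        if abs(term) < tol * abs(total):
            return total
    raise RuntimeError("series did not converge")

def bessel_j0(x, nmax=60):
    """J0(x) from its Taylor series (adequate for moderate x)."""
    term, total = 1.0, 1.0
    for m in range(1, nmax):
        term *= -(x / 2) ** 2 / m ** 2
        total += term
    return total

beta, kappa = 1.0, 0.01                  # illustrative, kappa << 1
xi = (1.0, 0.7, 0.0)                     # position, in soliton units
n = (1.0, 0.0, 0.0)                      # direction of incoming momentum
r = math.sqrt(sum(c * c for c in xi))
n_dot_xi = sum(p * q for p, q in zip(n, xi))

y = beta / kappa
# |Gamma(1-iy)| e^{pi y/2} via |Gamma(1+iy)|^2 = pi*y/sinh(pi*y);
# the exponentially large factors cancel, leaving a stable expression
prefactor = math.sqrt(2 * math.pi * y / -math.expm1(-2 * math.pi * y))
kummer = hyp1f1(1j * y, 1.0, 1j * kappa * (r - n_dot_xi))
phi_mod = prefactor * abs(kummer)

# small-kappa limit, eq. (Coulombsoft)
limit = math.sqrt(2 * math.pi * y) * abs(
    bessel_j0(2 * math.sqrt(beta * (r - n_dot_xi))))

print(phi_mod, limit)  # the two agree to better than 0.1%
```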
We now use \cref{Coulombsoft} for the gas particles 1 and 2 (but not
for particle 3, which has $\kappa_3\sim 1$). This yields, for the
form factors and the amplitude,
\begin{subequations}
\label{eq:heavy_V23}
\begin{align}
& {\cal V}_{1s} (\boldsymbol{\kappa} ) =
\frac{1}{\sqrt{\kappa_1}} \int d^3\xi\,\hat\vf_{{\bf
n}_1}(\boldsymbol{\xi})
\chi_0(\xi){\rm e}^{i\boldsymbol{\kappa\xi}}
\equiv\frac{1} { \sqrt{\kappa_1}}
\hat{\cal V}_{1s}(\boldsymbol{\kappa} )\;,\\
& {\cal V}_{23} (\boldsymbol{\kappa} ) =
\frac{1}{\sqrt{\kappa_2}} \int d^3\xi\,\hat\vf_{{\bf
n}_2}(\boldsymbol{\xi})
\vf_{\boldsymbol{\kappa}_3}^*(\boldsymbol{\xi}){\rm e}^{i\boldsymbol{\kappa\xi}}
\equiv\frac{1} { \sqrt{\kappa_2}}
\hat{\cal V}_{23}(\boldsymbol{\kappa} )\;,\\
&{\cal A}'_{1s,23}=\frac{1}{\sqrt{\kappa_1\kappa_2}}
\int\frac{[d\boldsymbol{\kappa}]}{\kappa^2}
\hat{\cal V}_{1s}(\boldsymbol{\kappa} )
\hat{\cal V}_{23}(-\boldsymbol{\kappa} )
\equiv
\frac{1}{\sqrt{\kappa_1\kappa_2}} {\hat{\cal A}}'_{1s,23}\;,
\end{align}
\end{subequations}
where the hatted quantities do not depend on the absolute values of
the momenta $\kappa_1$, $\kappa_2$. We substitute this into
the expression for $\gamma_s$ and, upon neglecting $\varepsilon_1$, $\varepsilon_2$ in
the energy $\delta$-function, perform the integration over $\kappa_1$,
$\kappa_2$. In this way we obtain,
\begin{equation}
\label{gammasu}
\gamma_s^{(u)}(\nu)=\frac{\nu^4}{(2\pi)^2\mu_0}
\int d{\bf n}_1 d{\bf n}_2[d\boldsymbol{\kappa}_3]\;
\delta\bigg(\frac{\kappa_3^2}{2}+\varepsilon_0\bigg)
|\hat{\cal A}'_{1s,23}+\hat{\cal A}'_{2s,13}|^2\;,
\end{equation}
where the superscript $(u)$ indicates that we include only the
contribution from unbound states. All quantities inside the integral
are $\nu$-independent. Thus we conclude that $\gamma_s^{(u)}$ scales
as the fourth power of $\nu$. Assuming that this also holds for the
full contribution we write,
\begin{equation}
\label{eq:heavyS_rate}
\gamma_s(\nu)=C_{hs} \nu^4~,~~~~C_{hs}>0,~~~~~\text{at}~\nu\to 0\;.
\end{equation}
This implies that the soliton growth slows down as the soliton
mass increases.
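The origin of the $\nu^4$ scaling can be made explicit. With Maxwellian occupations $u_i\propto {\rm e}^{-\kappa_i^2/\nu^2}$, the factors $1/\sqrt{\kappa_i}$ in \cref{eq:heavy_V23} leave, for each of the two absorbed gas particles, a radial integral of the form
\begin{equation*}
\int_0^\infty d\kappa_i\,\kappa_i^2\,\frac{{\rm e}^{-\kappa_i^2/\nu^2}}{\kappa_i}=\frac{\nu^2}{2}\;,
\end{equation*}
i.e. one factor of $\nu^2$ per particle, up to the numerical factors collected in \cref{gammasu}, while the angular and $\boldsymbol{\kappa}_3$ integrations are $\nu$-independent.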
We do not attempt to estimate the numerical coefficient $C_{hs}$.
As already mentioned, this would require inclusion of the
bound-state contribution, which is beyond our present scope. Another
caveat comes from the fact that the time scale of the heavy soliton
growth $\Gamma_s^{-1}$ happens to be parametrically longer than the
gas relaxation time (\ref{trelax}). On these time scales the gas
distribution may evolve away from Maxwellian which we assumed in our
derivation.\footnote{As discussed below, numerical simulations suggest
that the Maxwell distribution may still be a good approximation, but this
question requires further study.}
Thus, the formula (\ref{eq:heavyS_rate}) should be taken
with a grain of salt. Its comparison with the results of simulations
is discussed in the next section.
\section{Introduction}
\label{sec:intro}
The kinetic theory of active particles (KTAP) \cite{bellomo2009complexity} provides a framework to describe large systems of interacting living particles on multiple scales.
Prominent examples of phenomena modeled in this setting include bacterial movement, cell migration, animal swarms and pedestrian crowds.
Viewed at very small length and time scales, one can observe individual particles, each with its own complex internal dynamic and interactions with the environment or other particles.
When many particles are involved, this level of detail is not practical.
As a first level of abstraction, the KTAP theory models the microscopic scale with PDEs for the expected distribution of particles in time, physical space and state space; so-called kinetic equations.
The connection between particle systems and kinetic equations has been established formally, for example, for neutron transport \cite{golse2012recent} and for the movement of a bacterium \cite{stroock1974some}.
However, in the context of the kinetic theory of active particles, the models are formulated directly as a kinetic equation \cite{PH13, BBNS10a, bellomo2006onset, hillenM5}.
Kinetic equations are characterized by a free-streaming transport term resulting from particle movement and a collision operator modeling particle interactions as instantaneous state changes.
At larger scales only the resulting macroscopic population behavior can be observed, that is, the total number density of particles regardless of their internal microscopic state.
To pass from the microscopic description to a population law, one considers the limit of the kinetic equation when the mean free path of particles tends to zero.
Analytically, passage to the limiting macroscopic equation has been extensively studied for neutron transport \cite{LarKel75, BSS84} and more recently also in the context of biological cell migration \cite{othmer2000diffusion, burini2017hilbert}.
When only interactions between particles and the environment are considered and interactions between particles are neglected, the collision operator is linear.
In this case the resulting macroscopic equations are of diffusion type \cite{burini2017hilbert}.
A macroscopic equation derived in this manner can of course only be an approximation and one may ask how accurate it is in any given situation.
From a computational standpoint this means that we would like to compare simulations of the microscopic and macroscopic models.
However, when the mean free path is small, the collision term is very stiff and a straightforward discretization of the kinetic equation would need infeasible spatial and temporal resolution to resolve the small scales accurately \cite{larsen1987asymptotic}.
Therefore, a variety of so-called asymptotic-preserving (AP) schemes have been developed \cite{jin1996numerical, klar1998asymptotic, jin2000uniformly, gosse2002asymptotic, lemou2008ap, buet2012design}.
These methods are constructed in such a way that---for a fixed resolution---they converge to a discretization of the limit equation.
A large portion of this work has been done in the context of the telegraph equation and the neutron transport equation, often in one space dimension.
To obtain analytical insights about the method, for instance stability conditions or consistency errors, it is reasonable to simplify the situation as much as possible.
But in this work the emphasis is on application rather than analysis.
As a step towards adapting AP methods for more applied situations we consider a kinetic model for glioma invasion in the human brain, developed in \cite{EHKS14, EHS}.
Malignant gliomas are a type of brain tumor arising from mutations of glia cells.
Tumor recurrence after treatment is very probable because glioma cells migrate far from the original tumor site without being detected by state-of-the-art imaging methods \cite{claes2007}.
Predictive models could be used to estimate the invisible parts of the tumor and improve treatment success.
The model takes haptic interactions between glioma cells and white matter tissue into account.
According to the classification in \cite{dickinson1993stochastic}, this effect can be classified as either klinokinesis or taxis.
In addition to an anisotropic diffusion, the resulting macroscopic model features a drift towards regions with denser fibers.
We develop an AP method for this prototype model, which introduces some extra real-world complications.
In clinical practice, information about the tissue structure of a patient's brain is contained in a diffusion tensor image (DTI) \cite{LeBihan2001DTI} obtained from an MRI scan.
The three-dimensional DTI data comes in the form of a constant tensor per voxel with a spatial resolution of a few millimeters.
To avoid interpolation artifacts, the discretization should respect the data resolution.
Also, the scheme must be robust against discontinuities in the data.
Our scheme is an extension of the method of Lemou and Mieussens \cite{lemou2008ap} who employ a micro-macro decomposition on staggered grids.
In the following \secref{sec:kinetic-equation}, we introduce the kinetic equation first in a general form and then in the specific form of the glioma invasion model.
We also introduce a parabolic scaling of this equation.
Then in \secref{sec:micro-macro}, we briefly introduce the micro-macro decomposition and use this to informally derive the macroscopic limit of the kinetic equation.
A large part of the paper is dedicated to a detailed description of the AP method.
In \secref{sec:ap-scheme}, we first describe the space discretization on general primal-dual mesh pairs and then also present the scheme for the special situation of a regular grid.
We also discuss the resulting numerical scheme in the parabolic limit and how to overcome some of the problems of this limit scheme.
Time stepping and boundary conditions will also be described briefly.
It remains to find a suitable discretization of the velocity.
The linear spectral method that we use is described in \secref{sec:spectral-method}.
We do not perform much analysis of the developed method but rather assess its properties numerically.
Therefore we present the results of a number of benchmark tests in \secref{sec:results}.
The emphasis is on situations close to the parabolic limit, also in the presence of discontinuous coefficients.
Finally we perform a series of computations on the glioma model with measured DTI data and realistic parameters.
\section{Haptotaxis models and their diffusion limit}
\label{sec:kinetic-equation}
First, we recall the general class of kinetic equations from \cite{corbin2018higher}.
Then we perform a parabolic scaling of this equation and present the resulting diffusion limit from \cite{EHKS14,corbin2018higher} without any derivation.
Finally we introduce a model for glioma invasion as a special case of the general setting.
\subsection{General microscopic setting}
The population is described by a distribution function $\ensuremath{f}(t,x,\normal{\vcoord})$ which can be interpreted as the number density of particles moving in direction $\normal{\vcoord} \in \mathbb{S}^2$ at time $t \in \mathbb{R}^{+}$ and position $x = (\xi, \eta, \zeta)$.
The particle distribution is governed by a linear kinetic equation of the form
\begin{align}
\de_t \ensuremath{f} + c \ensuremath{\nabla_x \cdot} (\normal{\vcoord} \ensuremath{f}) &= (\LCO[D] + \LCO[a]) \ensuremath{f} + \mathcal{S} \ensuremath{f},
\label{eq:lke-general}
\end{align}
on the domain
\begin{align*}
\Omega_{\tcoord\xcoord\vcoord} &= \Omega_{\tcoord} \times \Omega_{\xcoord} \times \Omega_{\vcoord}\\
&= T [0,1] \times X \normal{\Omega}_{\xcoord} \times \mathbb{S}^2.
\end{align*}
The left hand side models the free flight of particles with constant speed $c$ in arbitrary direction $\normal{\vcoord} \in \mathbb{S}^2$.
Changes in velocity happen in so-called collisions, i.e. particles change their velocity instantaneously at certain times.
This is modeled by the linear turning operator $(\LCO[D] + \LCO[a])$ on the right hand side of the equation.
Let $\kernelcol(x, \normal{\vcoord}, \normal{\vcoord}') := \kernelcol[D](x, \normal{\vcoord}, \normal{\vcoord}') + \kernelcol[a](x, \normal{\vcoord}, \normal{\vcoord}')$ be the rate at which particles at position $x$ with direction $\normal{\vcoord}'$ collide and change their direction to $\normal{\vcoord}$.
The interpretation as a rate is only meaningful if $\kernelcol$ is strictly positive and bounded from above:
\begin{alignat}{3}
0 &< \kernelcol[min] &&\leq \kernelcol[D](x,\normal{\vcoord}^\prime,\normal{\vcoord}) + \kernelcol[a] (x,\normal{\vcoord}^\prime,\normal{\vcoord}) &&\leq \kernelcol[max].
\label{eq:kernel-bounds2}
\end{alignat}
The turning operator $\LCO$ then maps the distribution $\ensuremath{f}$ onto another distribution $\LCO \ensuremath{f}$ via the kernel integral
\begin{align*}
\LCO \ensuremath{f} = (\LCO[D] + \LCO[a]) \ensuremath{f} &= \intVnprime{ \kernelcol (x, \normal{\vcoord}, \normal{\vcoord}') \ensuremath{f}(\normal{\vcoord}') - \kernelcol(x,\normal{\vcoord}',\normal{\vcoord}) \ensuremath{f}(\normal{\vcoord}) } .
\end{align*}
The first summand counts the gain for direction $\normal{\vcoord}$ due to particles turning from any direction $\normal{\vcoord}'$ to $\normal{\vcoord}$.
Accordingly the second term describes the particle losses for direction $\normal{\vcoord}$.
By this construction the operator $\LCO$ (as well as both parts $\LCO[D], \LCO[a]$ individually) preserves mass:
\begin{align}
\intVn { (\LCO \ensuremath{f})(\normal{\vcoord}) } = 0 .
\label{eq:lco-mass-conservation}
\end{align}
We need some additional structure on the turning operator to derive a diffusion limit.
The first kernel $\kernelcol[D]$ on its own is a turning rate, i.e. strictly positive and bounded from above:
\begin{alignat}{3}
\label{eq:kernel-bounds1}
0 &< \kernelcol[D, min] &&\leq \kernelcol[D](x,\normal{\vcoord}^\prime,\normal{\vcoord}) &&\leq \kernelcol[D, max].
\end{alignat}
There is a positive normalization factor
\begin{align*}
\factorcol[D](x) := \intVn{\kernelcol[D](x, \normal{\vcoord}, \normal{\vcoord}') }
\end{align*}
that does not depend on the velocity $\normal{\vcoord}'$.
The kernel admits a local equilibrium $\ensuremath{E}(x, \normal{\vcoord})$ that is strictly positive, normalized and first-order symmetric:
\begin{equation}
\label{eq:equilibrium-assumptions}
\begin{aligned}
\ensuremath{E}(x, \normal{\vcoord}) &> 0, \\
\intVn{\ensuremath{E}(x, \normal{\vcoord})} &= 1, \\
\intVn{\normal{\vcoord} \ensuremath{E} (x, \normal{\vcoord})} &= 0.
\end{aligned}
\end{equation}
Additionally, the equilibrium $\ensuremath{E}(x,\normal{\vcoord})$ fulfills the detailed balance condition
\begin{align}
\label{eq:kernel-equilibrium}
\kernelcol[D](x,\normal{\vcoord},\normal{\vcoord}') \ensuremath{E}(x,\normal{\vcoord}') = \kernelcol[D](x,\normal{\vcoord}',\normal{\vcoord}) \ensuremath{E}(x,\normal{\vcoord}).
\end{align}
This is a slightly more general assumption than the symmetry assumption $\kernelcol(\normal{\vcoord}, \normal{\vcoord}') = \kernelcol(\normal{\vcoord}', \normal{\vcoord})$ in classic linear kinetic theory.
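Both structural properties introduced so far --- mass conservation \eqref{eq:lco-mass-conservation} and the fact that detailed balance \eqref{eq:kernel-equilibrium} makes $\ensuremath{E}$ an equilibrium of $\LCO[D]$ --- are easy to check on a discrete-velocity toy model. The following sketch (node count, quadrature weights and kernel values are all illustrative) builds a kernel satisfying detailed balance by construction and applies the turning operator:

```python
import random

random.seed(1)
N = 16                       # number of discrete velocity nodes
w = [1.0 / N] * N            # quadrature weights (uniform, illustrative)

# equilibrium: strictly positive, normalized w.r.t. the weights
E = [random.uniform(0.5, 1.5) for _ in range(N)]
Z = sum(wi * Ei for wi, Ei in zip(w, E))
E = [Ei / Z for Ei in E]

# detailed balance by construction: K[i][j] = S[i][j] * E[i]
# with S symmetric, so K[i][j] * E[j] == K[j][i] * E[i]
S = [[0.0] * N for _ in range(N)]
for i in range(N):
    for j in range(i, N):
        S[i][j] = S[j][i] = random.uniform(0.5, 2.0)
K = [[S[i][j] * E[i] for j in range(N)] for i in range(N)]

def turning(f):
    """(L f)_i = sum_j w_j (K[i][j] f_j - K[j][i] f_i): gain minus loss."""
    return [sum(w[j] * (K[i][j] * f[j] - K[j][i] * f[i]) for j in range(N))
            for i in range(N)]

f = [random.uniform(0.0, 2.0) for _ in range(N)]
mass = sum(wi * Li for wi, Li in zip(w, turning(f)))       # mass conservation
residual = max(abs(x) for x in turning(E))                 # L E = 0
print(mass, residual)  # both vanish to machine precision
```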
The kernel $\kernelcol[a]$ should be interpreted as a perturbation of the turning rate $\kernelcol[D]$.
It is only restricted by the bounds \eqref{eq:kernel-bounds2} on the full kernel $\kernelcol = \kernelcol[D] + \kernelcol[a]$.
The integral
\begin{align*}
\tilde\factorcol_a(x, \normal{\vcoord}') &= \intVn{\kernelcol[a](x, \normal{\vcoord}, \normal{\vcoord}') }
\end{align*}
in general still depends on the direction $\normal{\vcoord}'$ and can even be negative.
We define the normalization factor
\begin{align*}
\factorcol[a](x) := \frac{1}{c}\max_{\normal{\vcoord}' \in \mathbb{S}^2} \lbrace \abs{\tilde\factorcol_a(x, \normal{\vcoord}')}\rbrace.
\end{align*}
Finally, birth and death of particles enters the model via the source term
\begin{align*}
\mathcal{S} \ensuremath{f} &= \mu(x, \rho) \normal{\Source} \ensuremath{f}.
\end{align*}
The net growth rate $\mu(x, \rho)$ depends on the local particle density $\rho = \intVn{\ensuremath{f}(\normal{\vcoord})}$.
The operator $\normal{\Source}$ accounts for direction changes during proliferation.
We define the growth rate such that the source is normalized, i.e., $\int_{\Omega_{\vcoord}} \normal{\Source}\ensuremath{f} dv = \rho$.
\subsection{Parabolic scaling and diffusion limit}
\label{sec:parabolic-scaling}
To derive the diffusion limit of \eqref{eq:lke-general}, it is helpful to write it in a dimensionless form.
Therefore we introduce non-dimensional coordinates via $x = X \normal x$, $t = T \normal t$ together with the non-dimensional particle distribution $\ensuremath{f}(t,x,\normal{\vcoord}) = \ensuremath{f}_0 \normal \ensuremath{f}(\normal{\tcoord}, \normal{\xcoord}, \normal{\vcoord})$ and $\factorcol[D](x) = K_{\dif} \nfactorcol[D](\normal x)$, $\factorcol[a](x) = \frac{K_{\adv}}{X} \nfactorcol[a](\normal x), \mu(x, \rho) = M \normal \mu(\normal x, \normal \rho)$.
With this we can define dimensionless kernels via $\kernelcol[D](x, \normal{\vcoord}, \normal{\vcoord}') = K_{\dif} \nfactorcol[D](\normal{\xcoord}) \nkernelcol[D](\normal{\xcoord}, \normal{\vcoord}, \normal{\vcoord}') $ and $\kernelcol[a](x, \normal{\vcoord}, \normal{\vcoord}') = \frac{K_{\adv}}{X} \nfactorcol[a](\normal{\xcoord}) \nkernelcol[a](\normal{\xcoord}, \normal{\vcoord}, \normal{\vcoord}') $.
The dimensionless turning operators are
\begin{align*}
\nLCO[D] \normal\ensuremath{f} &= \intVnprime{ \nkernelcol[D](\normal{\xcoord}, \normal{\vcoord}, \normal{\vcoord}') \normal \ensuremath{f}(\normal{\vcoord}') - \nkernelcol[D](\normal{\xcoord}, \normal{\vcoord}', \normal{\vcoord}) \normal \ensuremath{f}(\normal{\vcoord})}, \\
\nLCO[a] \normal\ensuremath{f} &= \intVnprime{ \nkernelcol[a](\normal{\xcoord}, \normal{\vcoord}, \normal{\vcoord}') \normal \ensuremath{f}(\normal{\vcoord}') - \nkernelcol[a](\normal{\xcoord}, \normal{\vcoord}', \normal{\vcoord}) \normal \ensuremath{f}(\normal{\vcoord})},
\end{align*}
and finally a non-dimensional form of \eqref{eq:lke-general} is
\begin{align}
\label{eq:lke-dimensionless}
\partial_{\normal t} \normal \ensuremath{f} + \frac{T c}{X} \nabla_{\normal x} (\normal v \normal \ensuremath{f}) &= T K_{\dif} \nfactorcol[D](\normal x) \nLCO[D] \normal \ensuremath{f} + \frac{T c}{X} K_{\adv} \nfactorcol[a](\normal x) \nLCO[a] \normal \ensuremath{f} + T M \normal \mu(\normal x, \normal \rho) \normal{\Source} \normal \ensuremath{f}.
\end{align}
We recognize the Strouhal number $\St = \frac{X}{c T}$, a Knudsen number for turning events $\Kn_t = \frac{1}{K_{\dif} T}$, and a Knudsen number for proliferation events $\Kn_p = \frac{1}{M T}$.
Using these characteristic numbers and dropping the hats everywhere, we write the equation as
\begin{align}
\de_t \ensuremath{f} + \frac{1}{\St} \ensuremath{\nabla_x \cdot} (v \ensuremath{f}) = \frac{1}{\Kn_t} \factorcol[D](x) \LCO[D] \ensuremath{f} + \frac{K_{\adv}}{\St} \factorcol[a](x) \LCO[a] \ensuremath{f} + \frac{1}{\Kn_p} \mu(x, \rho) \mathcal{S} \ensuremath{f}
\end{align}
on the unit domain
\begin{align*}
\normal \Omega_{txv} &= [0,1]\times \normal{\Omega}_x \times \mathbb{S}^2, \\
\normal \Omega_x & \subseteq [0,1]^{S}.
\end{align*}
In accordance with \cite{jin1996numerical}, we take the parabolic scaling parameter
\begin{align*}
\varepsilon := \frac{\Kn_t}{\St} = \frac{c}{X K_{\dif}}
\end{align*}
as the ratio of mean free path and domain length.
To make the parabolic scaling apparent, we write \eqref{eq:lke-dimensionless} as
\begin{align}
\label{eq:lke-scaled}
\de_t \ensuremath{f} + \frac{\delta}{\varepsilon} \ensuremath{\nabla_x \cdot} (v \ensuremath{f}) &= \frac{\delta}{\varepsilon^2}\factorcol[D](x) \LCO[D] \ensuremath{f} + \frac{\delta \nu}{\varepsilon} \factorcol[a](x) \LCO[a] \ensuremath{f} + \theta \mu(x, \rho) \mathcal{S} \ensuremath{f},
\end{align}
with the parameters $\delta = \frac{\Kn_t}{\St^2}$, $\nu = K_{\adv}$, $\theta = \frac{1}{\Kn_p}$.
In the literature, usually $\delta = \theta = 1, \nu = 0$ is assumed (see e.g. \cite{lemou2008ap, jin1996numerical, jin2000uniformly, klar1998asymptotic}), which is not a problem from a theoretical perspective.
From the perspective of the application, however, the characteristic numbers are determined by the physical parameters and thus cannot be chosen arbitrarily.
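To make these definitions concrete, the snippet below evaluates the characteristic numbers from a set of microscopic parameters; all numerical values are invented placeholders, not measured glioma data:

```python
# microscopic parameters -- purely illustrative placeholder values
c = 0.01       # particle speed
X = 100.0      # domain length
T = 1.0e6      # observation time
K_D = 10.0     # turning frequency scale
K_adv = 0.1    # relative strength of the perturbing kernel
M = 1.0e-6     # growth rate scale

St = X / (c * T)          # Strouhal number
Kn_t = 1.0 / (K_D * T)    # Knudsen number of turning
Kn_p = 1.0 / (M * T)      # Knudsen number of proliferation

eps = Kn_t / St           # parabolic scaling parameter = c/(X*K_D)
delta = Kn_t / St**2
nu = K_adv
theta = 1.0 / Kn_p

print(eps, delta, theta)  # a regime close to the parabolic limit
```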
For fixed characteristic numbers $\delta, \nu, \theta$, equation \eqref{eq:lke-scaled} converges to an advection-diffusion equation for the density ${\rho_0}(t,x)$ as the parabolic scaling parameter approaches zero:
\begin{align}
\de_t {\rho_0} + \delta \ensuremath{\nabla_x \cdot} \left( \frac{1}{\factorcol[D]} \ensuremath{\nabla_x \cdot} \left({\rho_0} \ints{v \LCO[D]^{-1}( v \ensuremath{E})} \right) - \frac{\nu \factorcol[a] }{\factorcol[D]} {\rho_0} \ints{v \LCO[D]^{-1} \LCO[a] \ensuremath{E}}\right) &= \theta \mu(x, {\rho_0}) {\rho_0}.
\label{eq:diffusion-limit-general}
\end{align}
Herein we use the angle brackets
\begin{align*}
\ints{\cdot} &= \intVn{\cdot~}
\end{align*}
as shorthand notation for the integral over the unit sphere.
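As a quick numerical sanity check of this notation, the spherical average can be approximated by a simple product quadrature on $\mathbb{S}^2$. The following Python sketch is purely illustrative (the function names are ours, not part of the implementation discussed later); it builds a Gauss--Legendre $\times$ uniform rule and verifies $\ints{1} = 4\pi$.

```python
import numpy as np

def sphere_quadrature(n_mu=16, n_phi=32):
    """Product quadrature on the unit sphere: Gauss-Legendre nodes in
    mu = cos(theta), a uniform (trapezoidal) rule in the azimuth phi."""
    mu, w_mu = np.polynomial.legendre.leggauss(n_mu)
    phi = 2 * np.pi * np.arange(n_phi) / n_phi
    MU, PHI = np.meshgrid(mu, phi, indexing="ij")
    sin_t = np.sqrt(1.0 - MU**2)
    v = np.stack([sin_t * np.cos(PHI), sin_t * np.sin(PHI), MU], axis=-1)
    w = w_mu[:, None] * (2 * np.pi / n_phi) * np.ones_like(PHI)
    return v.reshape(-1, 3), w.ravel()

def ints(values, w):
    """Discrete angle bracket <.>: integral of `values` over the sphere."""
    return np.tensordot(w, values, axes=1)

v, w = sphere_quadrature()
assert abs(ints(np.ones(v.shape[0]), w) - 4 * np.pi) < 1e-10  # <1> = 4*pi
```

This rule also reproduces the moments $\ints{v} = 0$ and $\ints{v v\trans} = \frac{4\pi}{3} I$ to machine precision, which is all we need for the integrands appearing below.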
We identify the symmetric positive definite diffusion tensor
\begin{align}
\label{eq:general-diffusion-tensor}
D:= -\frac{1}{\factorcol[D]}\ints{\LCO[D]^{-1}(v\ensuremath{E}) v\trans},
\end{align}
and the drift vector
\begin{align}
\label{eq:general-drift-vector}
a := -\frac{\factorcol[a]}{\factorcol[D]}\ints{v \LCO[D]^{-1}\LCO[a] \ensuremath{E}}.
\end{align}
Modulo hats, the diffusion equation transformed back to physical coordinates is
\begin{align*}
\de_t {\rho_0} - \frac{\delta X^2}{T D_0} \ensuremath{\nabla_x \cdot} \left(\ensuremath{\nabla_x \cdot}({\rho_0} D) - \frac{\nu D_0}{a_0 X} a {\rho_0}\right) = \frac{\theta}{M T} \mu(x, {\rho_0}) {\rho_0},
\end{align*}
with a characteristic diffusion speed $D_0$ and a characteristic drift speed $a_0$ related to the microscopic scales via
\begin{align*}
D_0 &= \frac{\delta X^2}{T} = \frac{c^2}{K_{\dif}}, \\
a_0 &= \frac{\nu D_0}{X} = \frac{c^2 K_{\adv}}{X K_{\dif}}.
\end{align*}
Then finally the parabolic limit of \eqref{eq:lke-general} in physical coordinates is
\begin{align}
\label{eq:diffusion-limit-physical}
\de_t {\rho_0} - \ensuremath{\nabla_x \cdot} \left(\ensuremath{\nabla_x \cdot} (D {\rho_0}) - a {\rho_0} \right) &= \mu(x, {\rho_0}) {\rho_0}.
\end{align}
A formal proof of the limit via a Hilbert expansion in $\varepsilon$ can be found in \cite{othmer2000diffusion, EHKS14,corbin2018higher}.
We will not repeat this proof here but rather use the micro-macro decomposition in the next section to compute the limit in a less rigorous way.
In any case, the limit only exists if the operator $\LCO[D]$ is invertible on an appropriate space.
This is guaranteed by \lemref{lem:lcol-properties} below, taken from \cite{bellomo2006onset}.
\begin{definition}[Weighted $L^2$ space]
With $L^2_{\ensuremath{E}}$ we denote the $L^2$-space on $\mathbb{S}^2$ with respect to the weighted scalar product
\begin{align*}
\left( f(\normal{\vcoord}),g(\normal{\vcoord}) \right)_{\ensuremath{E}} = \ints{\frac{f(\normal{\vcoord}) g(\normal{\vcoord})}{\ensuremath{E}(\normal{\vcoord})} }.
\end{align*}
\end{definition}
\begin{lemma}[Properties of ${\LCO[D]}$]
\label{lem:lcol-properties}
Under assumptions \eqref{eq:kernel-bounds1}, \eqref{eq:kernel-equilibrium}, the turning operator $\LCO[D]: L^2_{\ensuremath{E}} \rightarrow L^2_{\ensuremath{E}}$ has the following properties for each $x \in \Omega_x$:
\begin{enumerate}
\item $\LCO[D]$ is self-adjoint;
\item The one-dimensional nullspace of $\LCO[D]$ is $\mathcal{N}(\LCO[D]) = \vecspan{\ensuremath{E}}$;
\item There exists a unique solution to $\LCO[D] \ensuremath{f} = \ensuremath{g}$ for every $\ensuremath{g} \in \Nsp^\bot$, i.e. $\ensuremath{g}$ such that $\left( \ensuremath{g}, \ensuremath{E} \right)_{\ensuremath{E}} = \intVn{\ensuremath{g}(\normal{\vcoord})} = 0$.
\end{enumerate}
\end{lemma}
\subsection{A simple haptotaxis model for glioma}
\label{sec:glioma-model}
For the computations we use a model for haptotaxis-induced glioma migration from \cite{EHKS14,corbin2018higher} that can be cast into the general setting.
Because a detailed discussion of its derivation would exceed the scope of this paper, we give only a brief summary.
First of all, assume that a field of symmetric positive definite tensors $D_W : \Omega_{\xcoord} \rightarrow \mathbb{R}^{3\times 3}$ is given.
In practice, diffusion tensor imaging (DTI) provides piecewise constant measurements of the diffusion of water molecules through the tissue \cite{LeBihan2001DTI}.
As in \cite{EHS} we use this information to estimate the directional distribution of extracellular matrix (ECM) fibers $\ensuremath{E}[D_W](x, v)$ and the fraction of volume $\ensuremath{Q}[D_W](x)$ these fibers occupy.
One important aspect of the model is that glioma cells use ECM fibers for contact guidance, i.e., they align themselves to the fibers.
The fiber distribution $\ensuremath{E}$ plays the role of the collision equilibrium and therefore should fulfill assumptions \eqref{eq:equilibrium-assumptions} and \eqref{eq:kernel-equilibrium}.
A simple estimate for the fiber distribution is the so-called peanut distribution
\begin{align}
\label{eq:peanut}
\ensuremath{E}(x,\normal{\vcoord}) &= \frac{3}{4 \pi \trace (D_W)} (\normal{\vcoord}\trans D_W \normal{\vcoord}).
\end{align}
The turning rate for the first turning operator is constant, i.e. $\factorcol[D] = \lambda_0$, and the turning kernel $\kernelcol[D]$ is proportional to the fiber distribution
\begin{align*}
\kernelcol[D](x,\normal{\vcoord},\normal{\vcoord}') = \lambda_0 \ensuremath{E}(x,\normal{\vcoord}),
\end{align*}
such that the turning operator $\LCO[D] = \lambda_0 \left(\ints{\ensuremath{f}} \ensuremath{E} - \ensuremath{f} \right)$ is a simple relaxation to local equilibrium.
For any $\phi \in \Nsp^\bot$, i.e., $\ints{\phi} = 0$, the inverse of $\LCO[D]$ is simply
\begin{align}
\LCO[D]^{-1} (\phi) = -\frac{1}{\lambda_0}\phi.
\label{eq:glioma-lco-inverse}
\end{align}
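Both the inversion formula \eqref{eq:glioma-lco-inverse} and the self-adjointness from \lemref{lem:lcol-properties} can be checked numerically on a discrete velocity sphere. The following Python sketch assumes, for simplicity, an isotropic equilibrium $\ensuremath{E} = 1/(4\pi)$ and a hypothetical rate $\lambda_0 = 1.7$; it is an illustration, not the implementation used later.

```python
import numpy as np

# Discrete sphere quadrature: Gauss-Legendre in mu = cos(theta), uniform in phi.
mu, w_mu = np.polynomial.legendre.leggauss(16)
phi = 2 * np.pi * np.arange(32) / 32
MU, PHI = np.meshgrid(mu, phi, indexing="ij")
sin_t = np.sqrt(1.0 - MU**2)
v = np.stack([sin_t * np.cos(PHI), sin_t * np.sin(PHI), MU], -1).reshape(-1, 3)
w = (w_mu[:, None] * (2 * np.pi / 32) * np.ones_like(PHI)).ravel()

lam0 = 1.7                                    # hypothetical turning rate lambda_0
E = np.full(v.shape[0], 1.0 / (4.0 * np.pi))  # isotropic equilibrium, <E> = 1

ints = lambda f: w @ f                        # <.>: integral over the sphere
ip_E = lambda f, g: ints(f * g / E)           # weighted L^2_E scalar product

def L_D(f):
    """Relaxation turning operator L_D f = lambda_0 (<f> E - f)."""
    return lam0 * (ints(f) * E - f)

rng = np.random.default_rng(0)
f, g = rng.random(v.shape[0]), rng.random(v.shape[0])

# Self-adjointness with respect to the weighted scalar product.
assert np.isclose(ip_E(L_D(f), g), ip_E(f, L_D(g)))

# Inverse formula on mean-zero functions: L_D^{-1} phi = -phi / lambda_0.
phi_0 = f - ints(f) * E                       # (I - Pi) f, so <phi_0> = 0
assert np.allclose(L_D(-phi_0 / lam0), phi_0)
```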
The turning perturbation $\LCO[a]$ stems from a subcellular model that includes internal state changes of cells.
In this model cells change their turning behavior according to the ECM concentration.
This results in a collective movement in direction of the fiber gradient:
\begin{align*}
\kernelcol[a](x,\normal{\vcoord},\normal{\vcoord}') &= -\lambda_H(x) c \left( \nabla_x \ensuremath{Q}(x) \cdot \normal{\vcoord}' \right) \ensuremath{E}(x,\normal{\vcoord}), \\
\factorcol[a](x) &= \lambda_H(x) \norm{\nabla_x \ensuremath{Q}(x)}.
\end{align*}
For the source, we consider logistic growth towards the carrying capacity $\PM_{\text{cc}}$, thus the growth rate is given by
\begin{align*}
\mu(x, \rho) = M \left(1 - \frac{\rho}{\PM_{\text{cc}}}\right).
\end{align*}
We assume that no changes in direction occur during growth, which is expressed by
\begin{align*}
\normal{\Source} \ensuremath{f} = \ensuremath{f}.
\end{align*}
For a more detailed discussion the interested reader is referred to \cite{EHKS14,EHS,corbin2018higher}.
With these definitions, the glioma equation in physical coordinates reads
\begin{align*}
\de_t \ensuremath{f} + c \ensuremath{\nabla_x \cdot} (\normal{\vcoord} \ensuremath{f}) &= \lambda_0 \left( \ints{\ensuremath{f}} \ensuremath{E}(x,\normal{\vcoord}) - \ensuremath{f} \right) - c \lambda_H(x) \nabla_x\ensuremath{Q}(x) \cdot \left( \ints{\normal{\vcoord} \ensuremath{f}} \ensuremath{E}(x,\normal{\vcoord}) - \normal{\vcoord}\ensuremath{f} \right) + \mu(x, \rho) \ensuremath{f}.
\end{align*}
After applying the parabolic scaling from \secref{sec:parabolic-scaling}, the glioma model in dimensionless form becomes
\begin{align}
\de_t \ensuremath{f} + \frac{\delta}{\varepsilon} \ensuremath{\nabla_x \cdot} (v \ensuremath{f}) &= \frac{\delta}{\varepsilon^2} \left(\ensuremath{E} \ints{\ensuremath{f}} - \ensuremath{f}\right) - \frac{\delta\nu}{\varepsilon} \hat \lambda_H \nabla_x \ensuremath{Q} \cdot \left(\ensuremath{E} \ints{\ensuremath{f} v} - \ensuremath{f} v\right) + \theta \normal \mu \ensuremath{f},
\label{eq:lke-glioma-scaled}
\end{align}
with $\lambda_H = \frac{\lambda_1}{\lambda_0} \hat \lambda_H$ and the characteristic numbers
\begin{align*}
\varepsilon = \frac{c}{X \lambda_0}, \quad \delta = \frac{c^2}{\lambda_0} \frac{T}{X^2}, \quad \nu = \frac{\lambda_1}{\lambda_0}, \quad \theta = M T.
\end{align*}
The diffusion approximation is given by
\begin{align}
\de_t {\rho_0} - \delta \ensuremath{\nabla_x \cdot} \left( \ensuremath{\nabla_x \cdot} ({\rho_0} D_T) - \nu a_{T} {\rho_0} \right) = \theta \mu {\rho_0}
\label{eq:DiffusionLimitGlioma}.
\end{align}
Using the inversion formula \eqref{eq:glioma-lco-inverse}, the tumor diffusion tensor and drift resulting from the peanut distribution \eqref{eq:peanut} are given by
\begin{align}
\label{eq:DT-glioma}
D_T &= \ints{v v\trans \ensuremath{E}} = \frac{1}{5} \left(I + \frac{2D_W}{\trace D_W} \right), \\
\label{eq:drift-glioma}
a_{T} &= \hat\lambda_H \nabla_x \ensuremath{Q} \cdot D_T.
\end{align}
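The closed form \eqref{eq:DT-glioma} lends itself to a numerical check: integrating $v v\trans \ensuremath{E}$ over the sphere with the peanut distribution must reproduce it. The Python sketch below uses a hypothetical symmetric positive definite tensor $D_W$ and a product quadrature on $\mathbb{S}^2$.

```python
import numpy as np

# Quadrature on the sphere (Gauss-Legendre x uniform product rule).
mu, w_mu = np.polynomial.legendre.leggauss(16)
phi = 2 * np.pi * np.arange(32) / 32
MU, PHI = np.meshgrid(mu, phi, indexing="ij")
sin_t = np.sqrt(1.0 - MU**2)
v = np.stack([sin_t * np.cos(PHI), sin_t * np.sin(PHI), MU], -1).reshape(-1, 3)
w = (w_mu[:, None] * (2 * np.pi / 32) * np.ones_like(PHI)).ravel()

# Hypothetical symmetric positive definite water diffusion tensor D_W.
A = np.array([[2.0, 0.3, 0.1], [0.3, 1.5, 0.2], [0.1, 0.2, 1.0]])
D_W = A @ A.T

# Peanut distribution E(v) = 3 / (4 pi tr D_W) * v^T D_W v; note <E> = 1.
E = 3.0 / (4.0 * np.pi * np.trace(D_W)) * np.einsum("ni,ij,nj->n", v, D_W, v)
assert abs((w * E).sum() - 1.0) < 1e-10

# Tumor diffusion tensor D_T = <v v^T E> versus its closed form.
D_T = np.einsum("n,ni,nj->ij", w * E, v, v)
D_T_exact = (np.eye(3) + 2.0 * D_W / np.trace(D_W)) / 5.0
assert np.allclose(D_T, D_T_exact)
```

Since the integrand is a polynomial of degree four in $v$, the product rule integrates it exactly and the two expressions agree to machine precision.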
\section{Micro-Macro decomposition and the diffusion limit}
\label{sec:micro-macro}
In this section, we follow the work of Lemou and Mieussens \cite{lemou2008ap} quite closely to perform a micro-macro decomposition of equation \eqref{eq:lke-scaled} in its parabolic dimensionless form.
This serves as the starting point for the numerical discretization scheme.
From \lemref{lem:lcol-properties} we recall the nullspace $\mathcal{N}(\LCO[D]) = \vecspan{\ensuremath{E}}$ and range $\mathcal{R}(\LCO[D]) = \Nsp^\bot$ of the turning operator.
Orthogonal projections onto those spaces are
\begin{align*}
\Pi(\phi) &= \ints{\phi} \ensuremath{E}, \\
(\Identity - \Nspproj)(\phi) &= \phi - \ints{\phi} \ensuremath{E},
\end{align*}
respectively.
Using these projections, we split the particle distribution into an equilibrium part and a perturbation:
\begin{equation}
\begin{aligned}
\ensuremath{f} &= \Pi \ensuremath{f} + (\Identity - \Nspproj) \ensuremath{f} \\
&= \rho \ensuremath{E} + \varepsilon \ensuremath{g}.
\label{eq:APsplitPD}
\end{aligned}
\end{equation}
Here, $\rho(t, x) = \ints{\ensuremath{f}}$ is the local particle density.
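The decomposition can be illustrated on a discrete velocity sphere. The Python sketch below (hypothetical values; an isotropic equilibrium is assumed for simplicity) verifies that $\Pi$ is a projection, that the perturbation $\ensuremath{g}$ carries no mass, and that the splitting is exact.

```python
import numpy as np

# Discrete sphere quadrature (as in the earlier sketches).
mu, w_mu = np.polynomial.legendre.leggauss(16)
phi = 2 * np.pi * np.arange(32) / 32
MU, PHI = np.meshgrid(mu, phi, indexing="ij")
sin_t = np.sqrt(1.0 - MU**2)
v = np.stack([sin_t * np.cos(PHI), sin_t * np.sin(PHI), MU], -1).reshape(-1, 3)
w = (w_mu[:, None] * (2 * np.pi / 32) * np.ones_like(PHI)).ravel()
ints = lambda f: w @ f

E = np.full(v.shape[0], 1.0 / (4.0 * np.pi))  # isotropic equilibrium, <E> = 1
Pi = lambda f: ints(f) * E                    # projection onto span{E}

eps = 1e-3
f = 1.0 + 0.2 * np.sin(v[:, 0])               # hypothetical distribution
rho = ints(f)                                 # macroscopic density rho = <f>
g = (f - rho * E) / eps                       # perturbation: f = rho E + eps g

assert np.allclose(Pi(Pi(f)), Pi(f))          # Pi is a projection
assert abs(ints(g)) < 1e-9                    # the perturbation carries no mass
assert np.allclose(rho * E + eps * g, f)      # decomposition is exact
```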
Now the kinetic equation is split into a system of two equations\textemdash one for the macroscopic density $\rho$ and one for the microscopic perturbation $\ensuremath{g}$.
We obtain the $\rho$-equation by inserting the perturbation formula \eqref{eq:APsplitPD} into \eqref{eq:lke-scaled} and applying the projection $\Pi$:
\begin{align}
\de_t \rho + \delta \ensuremath{\nabla_x \cdot} \ints{\ensuremath{g} v} &= \theta \mu \rho,
\label{eq:APrho}
\end{align}
where we use the positivity and symmetry of the equilibrium \eqref{eq:equilibrium-assumptions} and the mass conservation $\ints{\LCO[D] \ensuremath{f}} = 0, \ints{\LCO[a] \ensuremath{f}} = 0$ of the turning operators \eqref{eq:lco-mass-conservation}.
Then, applying $(\Identity - \Nspproj)$ to \eqref{eq:lke-scaled} and dividing by $\varepsilon$ gives
\begin{align}
\de_t \ensuremath{g} + \frac{\delta}{\varepsilon} (\Identity - \Nspproj) \ensuremath{\nabla_x \cdot} (v \ensuremath{g}) &= -\frac{\delta}{\varepsilon^2} \ensuremath{\nabla_x \cdot} (v \rho \ensuremath{E}) + \frac{\delta \factorcol[D]}{\varepsilon^2} \LCO[D] \ensuremath{g} + \frac{\delta \nu \factorcol[a]}{\varepsilon^2} \LCO[a] \ensuremath{f} + \frac{\theta \mu}{\varepsilon} (\Identity - \Nspproj) \mathcal{S} \ensuremath{f},
\label{eq:APremainder}
\end{align}
where we use
\begin{align*}
\Pi \LCO \phi &= \ints{\LCO \phi} \ensuremath{E} \\
&= 0,\\
(\Identity - \Nspproj) \LCO[D] \ensuremath{f} &= \LCO[D] (\rho \ensuremath{E} + \varepsilon \ensuremath{g}) \\
&= \varepsilon \LCO[D] \ensuremath{g}, \\
\Pi \ensuremath{\nabla_x \cdot} (v \rho \ensuremath{E} ) &= \ints{\ensuremath{\nabla_x \cdot} (v \rho \ensuremath{E})} \ensuremath{E} \\
&= \ensuremath{\nabla_x \cdot} \ints{v \rho \ensuremath{E}} \ensuremath{E} \\
&= 0.
\end{align*}
Apart from the new $\LCO[a]$ term, this formulation coincides with the decomposition in \cite{lemou2008ap}. The authors of \cite{lemou2008ap} show that\textemdash for compatible initial and boundary conditions\textemdash the micro-macro decomposition is equivalent to the original kinetic equation \eqref{eq:lke-scaled}.
It is easy to see the diffusion limit from the decomposition in a rather informal way.
In the limit of $\varepsilon \rightarrow 0$, only the $\frac{1}{\varepsilon^2}$ terms remain in \eqref{eq:APremainder} and thus it is reduced to
\begin{align*}
\ensuremath{g}_0 &= \frac{1}{\factorcol[D]} \LCO[D]^{-1} \left( \ensuremath{\nabla_x \cdot} (v {\rho_0} \ensuremath{E}) - \nu \factorcol[a] {\rho_0} \LCO[a] \ensuremath{E} \right).
\end{align*}
Since $\ints{v\ensuremath{E}} = 0$ and $\ints{\LCO[a] \ensuremath{E}} = 0$, \lemref{lem:lcol-properties} ensures that the inverse of $\LCO[D]$ in this expression exists and is unique.
Inserting this into the macro equation \eqref{eq:APrho} immediately gives the diffusion limit \eqref{eq:diffusion-limit-general}.
The main idea behind the asymptotic preserving scheme is to do something similar in a discrete way.
First the perturbation $\ensuremath{g}^{n+1}$ on the next time-level is computed using the micro equation, then this is inserted into the macro equation to update the density $\rho^{n+1}$.
\section{The asymptotic preserving method}
\label{sec:ap-scheme}
In general, a numerical scheme is called asymptotic preserving (AP) with respect to a scaling limit if, for a fixed spatial discretization, it converges to a valid scheme for the limit equation as $\varepsilon \rightarrow 0$, and if the stability restriction on the time step size $\Delta t$ is bounded from below by a positive value independent of $\varepsilon$.
The main objective of this work is to develop such an asymptotic preserving scheme for the kinetic equation \eqref{eq:lke-scaled}.
We start from the micro-macro decomposition from the previous \secref{sec:micro-macro} and write it as
\begin{equation}
\label{eq:continuous-system}
\begin{alignedat}{2}
\de_t \rho &= \Phi^{\PM}(\rho, \ensuremath{g}) &+& \Gamma^{\PM}(\rho,\ensuremath{g}), \\
\de_t \ensuremath{g} &= \left({\Phi^{\PP}_{\FD}}(\rho) + \Phi^{\PP}(\rho, \ensuremath{g}) \right)&+& \Gamma^{\PP}(\rho,\ensuremath{g}).
\end{alignedat}
\end{equation}
Here the individual terms are grouped into those that will later be discretized explicitly in time
\begin{equation}
\label{eq:continuous-system-explicit-terms}
\begin{aligned}
\Phi^{\PM}(\rho,\ensuremath{g}) &= -\delta \ensuremath{\nabla_x \cdot} \ints{\ensuremath{g} v} + \theta \mu \rho,\\
{\Phi^{\PP}_{\FD}}(\rho) &= -\frac{\delta}{\varepsilon^2} \ensuremath{\nabla_x \cdot} (v \rho \ensuremath{E}), \\
\Phi^{\PP}(\rho, \ensuremath{g}) &= -\frac{\delta}{\varepsilon} (\Identity - \Nspproj) (\ensuremath{\nabla_x \cdot} (vg)) + \frac{\delta \nu \factorcol[a]}{\varepsilon^2} \LCO[a] \ensuremath{f} + \frac{\theta \mu}{\varepsilon} (\Identity - \Nspproj) (\mathcal{S} \ensuremath{f}),
\end{aligned}
\end{equation}
and those that will be discretized partially implicitly
\begin{equation}
\label{eq:continuous-system-implicit-terms}
\begin{aligned}
\Gamma^{\PM}(\rho,\ensuremath{g}) &= 0, \\
\Gamma^{\PP}(\rho, \ensuremath{g}) &= \frac{\delta \factorcol[D]}{\varepsilon^2} \LCO[D] \ensuremath{g}.
\end{aligned}
\end{equation}
In \cite{lemou2008ap} the authors argued that it is enough to treat only the term $\LCO[D]$ implicitly to obtain an AP scheme.
We call the first-order scheme derived from the micro-macro decomposition in the form \eqref{eq:continuous-system}-\eqref{eq:continuous-system-implicit-terms}, in which only $\LCO[D]$ is treated implicitly, \mischeme{1}; and the second-order scheme \mischeme{2}.
But it is also possible to solve the source and $\LCO[a]$ terms implicitly in time.
That is, we regroup the terms into
\begin{equation}
\label{eq:continuous-system-iv}
\begin{aligned}
\tilde \Phi^{\PM}(\rho,\ensuremath{g}) &= -\delta \ensuremath{\nabla_x \cdot} \ints{\ensuremath{g} v}, \\
{\tilde \Phi^{\PP}_{\FD}}(\rho) &= -\frac{\delta}{\varepsilon^2} \ensuremath{\nabla_x \cdot} (v \rho \ensuremath{E}), \\
\tilde \Phi^{\PP}(\rho, \ensuremath{g}) &= -\frac{\delta}{\varepsilon} (\Identity - \Nspproj) (\ensuremath{\nabla_x \cdot} (vg)), \\
\tilde \Gamma^{\PM}(\rho,\ensuremath{g}) &= \theta \mu \rho, \\
\tilde \Gamma^{\PP}(\rho, \ensuremath{g}) &= \frac{\delta \factorcol[D]}{\varepsilon^2} \LCO[D] \ensuremath{g} + \frac{\delta \nu \factorcol[a]}{\varepsilon^2} \LCO[a] \ensuremath{f} + \frac{\theta \mu}{\varepsilon} (\Identity - \Nspproj) (\mathcal{S} \ensuremath{f}),
\end{aligned}
\end{equation}
and solve $\tilde \Phi^{\PM}, {\tilde \Phi^{\PP}_{\FD}}, \tilde \Phi^{\PP}$ explicitly and $\tilde \Gamma^{\PM}, \tilde \Gamma^{\PP}$ implicitly.
This variant of the scheme will be called \ivscheme{1}, or \ivscheme{2}.
In the following sections, we will see that the implicit time update for this scheme can still be done on each grid cell separately.
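The explicit/implicit grouping above corresponds, in time, to an IMEX (implicit-explicit) Euler step. The following self-contained Python sketch illustrates the principle on a generic stiff model problem (all values hypothetical, not the glioma system itself): the nonstiff part is advanced explicitly, the stiff linear part implicitly, which permits time steps independent of the stiffness parameter.

```python
import numpy as np

def imex_euler_step(y, dt, Phi, Gamma):
    """One first-order IMEX Euler step for d/dt y = Phi(y) + Gamma @ y:
    the nonstiff part Phi is explicit, the stiff linear part Gamma is
    implicit, i.e. (I - dt*Gamma) y^{n+1} = y^n + dt*Phi(y^n)."""
    return np.linalg.solve(np.eye(len(y)) - dt * Gamma, y + dt * Phi(y))

# Hypothetical stiff relaxation test: Gamma ~ -I/eps^2 mimics the L_D term.
eps = 1e-4
Gamma = -np.eye(2) / eps**2
Phi = lambda y: np.array([y[1], -y[0]])  # mild, nonstiff coupling

y = np.array([1.0, 0.5])
dt = 0.1                                 # dt >> eps^2; explicit Euler would blow up
for _ in range(50):
    y = imex_euler_step(y, dt, Phi, Gamma)
assert np.all(np.abs(y) < 1e-6)          # stiff damping is captured stably
```

Because $\Gamma$ here acts cell-locally (as $\LCO[D]$ does in the scheme), the linear solve decouples into small independent systems, which is exactly the property exploited below.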
\subsection{Space discretization}
In \cite{lemou2008ap} the authors discretize the micro and macro equation with finite differences on staggered grids in one space dimension.
To generalize the method to arbitrary dimension ${S}$, we reformulate the method in the context of finite volumes on primal-dual mesh pairs.
Although the implementation supports only tensor-product grids at the moment, we write the scheme for conforming polyhedral meshes.
This has several benefits.
Most aspects of the scheme do not depend on the tensor-product structure, and the implementation in DUNE (see \cite{dune-web-page}) is also grid-agnostic in most parts.
The general notation is quite close to the implementation, which makes the code easier to understand and will also simplify a future implementation on unstructured conforming meshes.
We choose a notation that is similar to that in \cite{buet2012design}.
We use the symbol $\mentity{i}$ wherever any kind of grid entity (cell, face, edge, dual cell, \dots) can be inserted.
The index $i$ is used to label these generic entities.
Considering only the topology, the dual mesh belonging to a primal mesh is defined as follows:
each cell in the original mesh is identified with a vertex in the dual mesh, and each primal vertex with a dual cell.
Wherever two primal cells intersect in a face, the corresponding dual vertices are connected by an edge; wherever two primal vertices are connected by an edge, there is a face between the corresponding dual cells.
We always use the indices $j, k \in \mathbb{N}$ to label cells $\mcell{\pcellidx}, \mcell{\pcellidxnb}$ in the primal grid and $r, s$ to identify primal vertices $\mvert{\pvertidx}, \mvert{\pvertidxnb}$.
Considering the primal-dual mapping, any primal cell index $\pcellidx$ also identifies a dual vertex $\mvert{\dvertidx}$ and a primal index $\pvertidx$ corresponds to a dual cell $\mcell{\dcellidx}$.
In one mesh two cells $\mcell{i}, \mcell{i'}$ are neighbors, if they intersect in a face $\mface{i}{i'} = \mcell{i} \cap \mcell{i'}$.
Then the two vertices $\mvert{i}, \mvert{i'}$ in the other grid are also neighbors, i.e., they are connected with an edge $\medge{i}{i'}$.
In this sense, the neighbors of an index $i$ are those indices $i'$ for which in one mesh the corresponding cells are neighbors and thus in the other grid the corresponding vertices are neighbors.
We write $\mneighbors{i}$ for the set of all neighbors of $i$.
A related concept is the adjacency between entities of different dimension.
If the edge $\medge{\pvertidx}{\pvertidxnb}$ is part of the cell $\mcell{\pcellidx}$ we say that $\mcell{\pcellidx}$ is adjacent to $\medge{\pvertidx}{\pvertidxnb}$, and denote this by $j \in \madjacents{r}{s}$.
The index pair $(r, s)$ also identifies a dual face $\mface{\dcellidx}{\dcellidxnb}$, thus $\madjacents{\pvertidx}{\pvertidxnb}$ equivalently is the set of all dual vertices $\mvert{\dvertidx}$ that are part of that face.
Lastly we denote the set of vertices of a cell $i$ with $\mverticesof{i}$.
The example mesh in \figref{fig:annotated-mesh-2d-a} is helpful to visualize these definitions.
\begin{figure}
\centering
\parbox{\figuretwocol}{%
\centering
\tikztitle{Faces}
\settikzlabel{fig:annotated-mesh-2d-a}
\withfiguresize{\figuretwocol}{\figuretwocol}{\externaltikz{annotated_mesh_2d_a}{\input{figures/primal_dual_mesh/annotated_mesh_2d_a}}}
}
\hspace{\figurehorizontalsep}
\parbox{\figuretwocol}{%
\centering
\tikztitle{Facets}
\settikzlabel{fig:annotated-mesh-2d-b}
\withfiguresize{\figuretwocol}{\figuretwocol}{\externaltikz{annotated_mesh_2d_b}{\input{figures/primal_dual_mesh/annotated_mesh_2d_b}}}
}
\caption{The primal-dual mesh pair in two dimensions. The primal cell $\mcell{\pcellidx}$ is marked green and the dual cell $\mcell{\dcellidx}$ in gray. \ref{fig:annotated-mesh-2d-a}: Highlighted are the primal face $\mface{\pcellidx}{\pcellidxnb}$ and the dual face $\mface{\dcellidx}{\dcellidxnb}$. \ref{fig:annotated-mesh-2d-b}: Highlighted are the subcell $\msubcell{\pcellidx}{\pvertidx} = \mcell{\pcellidx} \cap \mcell{\dcellidx}$, the primal facet $\mfacet{\pcellidx}{\pcellidxnb}{\dcellidx} = \mface{\pcellidx}{\pcellidxnb} \cap \mcell{\dcellidx}$ and the dual facet $\mfacet{\dcellidx}{\dcellidxnb}{\pcellidx} = \mface{\dcellidx}{\dcellidxnb} \cap \mcell{\pcellidx}$.}
\label{fig:annotated-mesh-2d}
\end{figure}
Given a primal mesh, the topological mapping alone does not define the geometry of the dual mesh uniquely.
For instance the dual vertex $\mvert{\dvertidx}$ can be anywhere inside the primal cell $\mcell{\pcellidx}$.
For the numerical scheme we need to know the geometry of the dual cells and especially their faces.
First note that a dual face $\mface{\dcellidx}{\dcellidxnb}$, which is the intersection between two dual cells, does not need to be planar.
In two space dimensions it can be constructed, however, from one planar facet $\mfacet{\dcellidx}{\dcellidxnb}{\pcellidx} = \mface{\dcellidx}{\dcellidxnb} \cap \mcell{\pcellidx}$ for each intersection with an adjacent primal cell $\mcell{\pcellidx}; j \in \madjacents{\pvertidx}{\pvertidxnb}$.
The facet $\mfacet{\dcellidx}{\dcellidxnb}{\pcellidx}$ is just the line $\medge{j}{\pvertidx, \pvertidxnb}$ between the primal `cell center' $\mvert{\dvertidx}$ and some arbitrary point $\mvert{\pvertidx, \pvertidxnb}$ on the edge $\medge{\pvertidx}{\pvertidxnb}$ (which coincides with a face $\mface{\pcellidx}{\pcellidxnb}$ for some $k$).
This construction is depicted in \figref{fig:annotated-mesh-2d-b} and is identical to the definition of a control volume in \cite{buet2012design}.
In three space dimensions the construction is similar but a bit more complicated.
For a sketch of the construction, see \figref{fig:annotated-mesh-3d}.
Because the primal mesh is polyhedral and conforming, the facet $\mfacet{\dcellidx}{\dcellidxnb}{\pcellidx}$ is bounded by line segments connecting the four points $\mvert{\dvertidx}, \mvert{j, k}, \mvert{\pvertidx, \pvertidxnb}, \mvert{j, k'}$.
The indices $k, k' \in \mneighbors{j} \cap \madjacents{\pvertidx}{\pvertidxnb}$ label those two neighbors of cell $\mcell{\pcellidx}$ that have $\medge{\pvertidx}{\pvertidxnb}$ as an edge.
With $\mvert{j, k}, \mvert{j, k'}$ we denote arbitrary points on the faces $\mface{\pcellidx}{\pcellidxnb}, \mface{j}{k'}$, for example their barycenters.
As in the two-dimensional setting, $\mvert{\pvertidx, \pvertidxnb}$ is an arbitrary point on the edge $\medge{\pvertidx}{\pvertidxnb}$.
In general, the four points do not have to lie in a plane.
Thus if we want to have a polyhedral dual mesh, the facet $\mfacet{\dcellidx}{\dcellidxnb}{\pcellidx}$ must be split into two triangles $\mfacet{\dcellidx}{\dcellidxnb}{\pcellidx, 1} \cup \mfacet{\dcellidx}{\dcellidxnb}{\pcellidx, 2} = \mfacet{\dcellidx}{\dcellidxnb}{\pcellidx}$ defined by the triplets $\mvert{\dvertidx}, \mvert{j, k}, \mvert{\pvertidx, \pvertidxnb}$, and $\mvert{\dvertidx}, \mvert{\pvertidx, \pvertidxnb}, \mvert{j, k'}$.
For tensor product grids and tetrahedral meshes (see \cite{weiss2013primal}), the four points lie in a plane if they are chosen as the barycenters of their respective entities, making the split into triangles unnecessary.
\begin{figure}
\centering
\withfiguresize{\figureonecol}{\figureonecol}{\externaltikz{annotated_mesh_3d}{\input{figures/primal_dual_mesh/annotated_mesh_3d}}}
\caption{The primal-dual mesh pair in three dimensions. Shown is the primal cell $\mcell{\pcellidx}$ (green wireframe) and the facet $\mfacet{\dcellidx}{\dcellidxnb}{\pcellidx} = \mface{\dcellidx}{\dcellidxnb} \cap \mcell{\pcellidx}$ (gray solid).}
\label{fig:annotated-mesh-3d}
\end{figure}
We write the average of some function over the domain $\mentity{i}$ as
\begin{align*}
\mavg{\mentity{i}}{\cdot} &:= \frac{1}{\abs{\mentity{i}}}\int_{\mentity{i}} \cdot dx,
\end{align*}
in which
\begin{align*}
\abs{\mentity{i}} &= \int_{\mentity{i}} 1 dx
\end{align*}
is the volume of entity $\mentity{i}$.
In the following, we derive the minimally implicit variant \mischeme{1} of the scheme.
All that is required to obtain the variant with implicit volume terms \ivscheme{1} is a reordering of terms, analogously to \eqref{eq:continuous-system-iv}.
Let $(\rho, \ensuremath{g})$ be the solution of \eqref{eq:continuous-system}, with the average densities $\mavg{\mcell{\dcellidx}}{\rho}$ on dual cells, and the average perturbations $\mavg{\mcell{\pcellidx}}{\ensuremath{g}}$ on the primal cells.
The projection of equation \eqref{eq:continuous-system} onto the cell averages is a finite system of equations for the values $\PM_{\dcellidx} \approx \mavg{\mcell{\dcellidx}}{\rho}$, $\PP_{\pcellidx} \approx \mavg{\mcell{\pcellidx}}{\ensuremath{g}}$ which approximate the averages of the exact solution.
We collect these values in the vectors $\avgvec{\PM} = (\dots, \PM_{\dcellidx}, \PM_{\dcellidx + 1}, \dots)\trans$ and $\avgvec{\PP} = (\dots, \PP_{\pcellidx}, \PP_{\pcellidx + 1}, \dots)\trans$ and write the resulting space-discrete system as
\begin{equation}
\label{eq:semi-discrete-system}
\begin{alignedat}{2}
\de_t \avgvec{\PM} &= \avgvec{\Phi}^{\PM}(\avgvec{\PM}, \avgvec{\PP}) &+& \avgvec{\Gamma}^{\PM}(\avgvec{\PM}, \avgvec{\PP}) \\
\de_t \avgvec{\PP} &= (\avgvec{\Phi}^{\PP}_{\FD}(\avgvec{\PM}) + \avgvec{\Phi}^{\PP}(\avgvec{\PM}, \avgvec{\PP})) &+& \avgvec{\Gamma}^{\PP}(\avgvec{\PM}, \avgvec{\PP}),
\end{alignedat}
\end{equation}
using the same notation for the approximations of the projected operators. For instance we have $\avgvec{\Phi}^{\PM}(\avgvec{\PM}, \avgvec{\PP}) = (\dots, \Phi^{\PM}_{\pvertidx}(\avgvec{\PM}, \avgvec{\PP}), \Phi^{\PM}_{\pvertidx+1}(\avgvec{\PM}, \avgvec{\PP}), \dots)\trans$, where $\Phi^{\PM}_{\pvertidx}$ is an approximation to $\mavg{\mcell{\dcellidx}}{\Phi^{\PM}}$.
To second-order accuracy, the average $\mavg{\mentity{i}}{\cdot}$ commutes with products and with function composition, i.e., given functions $u(x), w(x) \in C^2(\mentity{i})$ and $z(u) \in C^2(u(\mentity{i}))$ we have
\begin{align*}
\mavg{\mentity{i}}{u(x) w(x)} &= \mavg{\mentity{i}}{u(x)} \mavg{\mentity{i}}{w(x)} + \orderof{\Delta x^2} \\
\mavg{\mentity{i}}{z(u(x))} &= z\left(\mavg{\mentity{i}}{u(x)}\right) + \orderof{\Delta x^2}.
\end{align*}
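These identities are easy to confirm numerically. The toy one-dimensional Python sketch below (with the hypothetical choice $u = \sin$, $w = \cos$) checks that the error committed by swapping average and product decays like $\Delta x^2$.

```python
import numpy as np

def avg(fun, a, b, n=4000):
    """Cell average of `fun` over [a, b] via a midpoint rule."""
    x = a + (np.arange(n) + 0.5) * (b - a) / n
    return fun(x).mean()

# Average of a product vs. product of averages on shrinking cells around x0.
x0, errs = 0.3, []
for dx in (0.2, 0.1, 0.05):
    a, b = x0 - dx / 2, x0 + dx / 2
    lhs = avg(lambda x: np.sin(x) * np.cos(x), a, b)
    rhs = avg(np.sin, a, b) * avg(np.cos, a, b)
    errs.append(abs(lhs - rhs))

# Halving dx reduces the error by roughly a factor of four: O(dx^2).
assert errs[1] / errs[0] < 0.3 and errs[2] / errs[1] < 0.3
```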
Approximations to the explicit operators on each cell, accurate up to second order, are
\begin{equation}
\label{eq:semidiscrete-system-explicit-terms}
\begin{aligned}
\Phi^{\PM}_{\pvertidx} &= -\delta \sum_{\pvertidxnb \in \mneighbors{\pvertidx}} \fluxrg + \theta \mu(\PM_{\dcellidx}) \PM_{\dcellidx} \\
{\Phi^{\PP}_{\FD}}_{j} &= -\frac{\delta}{\varepsilon^2} \sum_{k \in \mneighbors{j}} \fluxgr \\
\Phi^{\PP}_{j} &= -\frac{\delta}{\varepsilon} \sum_{k \in \mneighbors{j}} \fluxgg +\frac{\delta \nu \factorcol[a, j]}{\varepsilon^2} \LCO[a] \left(\tilde\rho_{j} \ensuremath{E}_{j} + \varepsilon \PP_{\pcellidx}\right) + \frac{\theta \mu(\tilde \rho_{j})}{\varepsilon} (\Identity - \Nspproj) \mathcal{S} (\tilde\rho_{j} \ensuremath{E}_{j} + \varepsilon \PP_{\pcellidx}).
\end{aligned}
\end{equation}
The average density on a primal cell $\tilde{\rho}_{j}$ is not a degree of freedom of the scheme and needs to be computed from the averages on contributing dual cells:
\begin{align}
\label{eq:density-on-primal-cell}
\tilde{\rho}_{j} &= \frac{1}{\abs{\mcell{\pcellidx}}} \sum_{\pvertidx \in \mverticesof{j}} \abs{\msubcell{\pcellidx}{\pvertidx}} \PM_{\dcellidx}.
\end{align}
The fluxes $\fluxrg$ are obtained by using Gauss' theorem on the term $\mavg{\mcell{\dcellidx}}{\Phi^{\PM}}$ from equation \eqref{eq:continuous-system-explicit-terms}:
\begin{align*}
\fluxrg &= \frac{\abs{\mface{\dcellidx}{\dcellidxnb}}}{\abs{\mcell{\dcellidx}}} \mapproxavg{\mface{\dcellidx}{\dcellidxnb}}{\ints{v \sprec{\ensuremath{g}}} \cdot n_{\pvertidx, \pvertidxnb}} \\
&= \frac{1}{\abs{\mcell{\dcellidx}}} \sum_{j \in \madjacents{\pvertidx}{\pvertidxnb}} \abs{\mfacet{\dcellidx}{\dcellidxnb}{\pcellidx}} \mapproxavg{\mfacet{\dcellidx}{\dcellidxnb}{\pcellidx}}{\ints{v \sprec{\ensuremath{g}}} \cdot n_{\pvertidx, \pvertidxnb}^{j}} \\
&\overset{(SO_1)}{=} \frac{1}{\abs{\mcell{\dcellidx}}} \sum_{j \in \madjacents{\pvertidx}{\pvertidxnb}} \left( \abs{\mfacet{\dcellidx}{\dcellidxnb}{\pcellidx, 1}} \ints{v \PP_{\pcellidx}} \cdot n_{\pvertidx, \pvertidxnb}^{j,1} + \abs{\mfacet{\dcellidx}{\dcellidxnb}{\pcellidx, 2}} \ints{v \PP_{\pcellidx}} \cdot n_{\pvertidx, \pvertidxnb}^{j,2} \right)\\
&\overset{(P)}{=} \frac{1}{\abs{\mcell{\dcellidx}}} \sum_{j \in \madjacents{\pvertidx}{\pvertidxnb}} \abs{\mfacet{\dcellidx}{\dcellidxnb}{\pcellidx}} \ints{v \PP_{\pcellidx}} \cdot n_{\pvertidx, \pvertidxnb}^{j}
\end{align*}
together with a quadrature rule $\mathcal{Q}$.
The unit outer normal of a facet $\mfacet{\dcellidx}{\dcellidxnb}{\pcellidx}$ is $n_{\pvertidx, \pvertidxnb}^{j}$.
The reconstruction $\sprec{\ensuremath{g}}(x)$ is a function that is piecewise continuous on primal cells and interpolates the averages: $\mavg{\mcell{\pcellidx}}{\sprec{\ensuremath{g}}} = \PP_{\pcellidx}$.
In the first-order scheme the reconstruction is piecewise constant and equal to the cell average:
\begin{align*}
\evalat{\sprec{\ensuremath{g}}(x)}{\mcell{\pcellidx}} &= \PP_{\pcellidx}.
\end{align*}
In the second-order scheme we make a piecewise linear ansatz
\begin{align*}
\evalat{\sprec{\ensuremath{g}}(x)}{\mcell{\pcellidx}} = \PP_{\pcellidx} + b \cdot (x-x_{j}),
\end{align*}
for the reconstruction, where $b$ is a limited estimate of the slope that can be obtained by a minmod or WENO ansatz.
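A minimal Python sketch of such a limited reconstruction, assuming a uniform one-dimensional grid and the minmod limiter (the function names are ours and purely illustrative):

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter: picks the smaller-magnitude slope, zero at extrema."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def limited_slopes(gbar, dx):
    """Limited slopes b_j for the piecewise linear reconstruction
    g(x)|_{cell j} = gbar[j] + b[j] * (x - x_j); boundary cells stay flat.
    Reconstructing around the cell center preserves the cell average."""
    fwd = np.diff(gbar)[1:] / dx   # forward differences  (g_{j+1} - g_j)/dx
    bwd = np.diff(gbar)[:-1] / dx  # backward differences (g_j - g_{j-1})/dx
    b = np.zeros_like(gbar)
    b[1:-1] = minmod(bwd, fwd)
    return b

gbar = np.array([0.0, 1.0, 4.0, 2.0, 2.0])
b = limited_slopes(gbar, dx=1.0)
# Slopes vanish at the extremum (cell 2) and on the plateau.
assert np.allclose(b, [0.0, 1.0, 0.0, 0.0, 0.0])
```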
Because we compute the flux on dual faces which are inside the primal cells where $\sprec{\ensuremath{g}}$ is continuous, we do not need an approximate flux function and only have to approximate the integrals by some quadrature.
Using a piecewise constant reconstruction, these simplify to a single evaluation of the cell mean.
Next we consider the fluxes $\fluxgr$ resulting from $\mavg{\mcell{\pcellidx}}{{\Phi^{\PP}_{\FD}}}$ in \eqref{eq:continuous-system-explicit-terms}:
\begin{align*}
\fluxgr &= \frac{\abs{\mface{\pcellidx}{\pcellidxnb}}}{\abs{\mcell{\pcellidx}}} \mapproxavg{\mface{\pcellidx}{\pcellidxnb}}{(v \sprec{\rho} \ensuremath{E}) \cdot n_{j, k}} \\
&= \frac{1}{\abs{\mcell{\pcellidx}}} \left( \sum_{\pvertidx \in \madjacents{j}{k}} \abs{\mfacet{\pcellidx}{\pcellidxnb}{\dcellidx}}\mapproxavg{\mfacet{\pcellidx}{\pcellidxnb}{\dcellidx}}{(v \sprec{\rho} \ensuremath{E})} \right) \cdot n_{j, k} \\
&\overset{(SO_1)}{=} \frac{1}{\abs{\mcell{\pcellidx}}} \left( \sum_{\pvertidx \in \madjacents{j}{k}} \abs{\mfacet{\pcellidx}{\pcellidxnb}{\dcellidx}} v \PM_{\dcellidx} \ensuremath{E}_{j} \right) \cdot n_{j, k}.
\end{align*}
This time, the facets $\mfacet{\pcellidx}{\pcellidxnb}{\dcellidx}$ which are parts of the primal face $\mface{\pcellidx}{\pcellidxnb}$ all share the same constant normal $n_{j, k}$.
$\sprec{\rho}(x)$ is a piecewise continuous reconstruction of the density on dual cells.
Finally, application of the divergence theorem to $\mavg{\mcell{\pcellidx}}{\Phi^{\PP}}$ in equation \eqref{eq:continuous-system-explicit-terms}, together with the projection
\begin{align*}
(\Identity - \Nspproj) (\ensuremath{\nabla_x \cdot} (v \ensuremath{g})) &= \ensuremath{\nabla_x \cdot} (v \ensuremath{g}) - \ensuremath{\nabla_x \cdot} \ints{v \ensuremath{g}} \ensuremath{E}
\end{align*}
gives:
\begin{align*}
\fluxgg &= \frac{\abs{\mface{\pcellidx}{\pcellidxnb}}}{\abs{\mcell{\pcellidx}}} \left( \mapproxavg{\mface{\pcellidx}{\pcellidxnb}}{(\widehat{v \ensuremath{g}})} - \mapproxavg{\mface{\pcellidx}{\pcellidxnb}}{\widehat{\ints{v \ensuremath{g}}} \ensuremath{E}_{\mcell{\pcellidx}} } \right) \cdot n_{j, k}.
\end{align*}
Here, $\widehat{v \ensuremath{g}}$ is an approximate flux function, for example the upwind flux, that depends on the left and right state $\sprec{\ensuremath{g}}_{\mcell{\pcellidx}}, \sprec{\ensuremath{g}}_{\mcell{\pcellidxnb}}$ of the face $\mface{\pcellidx}{\pcellidxnb}$.
The second term of the projection is not in conservation form. In the spirit of wave propagation for heterogeneous media as proposed by LeVeque \cite{leveque2002finite}, we simply evaluate the equilibrium function $\ensuremath{E}_{\mcell{\pcellidx}}$ on the current cell $\mcell{\pcellidx}$.
The approximate implicit operators are
\begin{align*}
\Gamma^{\PM}_{\pvertidx} &= 0 \\
\Gamma^{\PP}_{j} &= \frac{\delta \factorcol[D, j]}{\varepsilon^2} \LCO[D] \PP_{\pcellidx} = \mavg{\mcell{\pcellidx}}{\Gamma^{\PP}} + \orderof{\Delta x^2}.
\end{align*}
If $\factorcol[D](x)$ is constant on each cell, this is even exact, because $\LCO[D]$ is linear.
Note that the implicit operator on a cell only depends on the cell mean.
Thus the implicit part can be solved on each cell separately.
This is still true for the \ivscheme{1} and \ivscheme{2} variants in which all of the volume terms are treated implicitly.
\subsection{The resulting scheme on a square grid}
\label{sec:ap-scheme-square-grid}
We consider the tensor-product grid defined by a list of nodes $\left(x_{d, 1}, \dots, x_{d, i^{\max}_{d}} \right)$ for each space dimension $d \in \{1,\dots, {S}\}$.
Let $\bs{i} = (i_1, \dots, i_{S})$ be a multi-index.
The vertices of the tensor-product grid are all the points $x_{\bs{i}} = (x_{1, i_1},\dots, x_{{S}, i_{{S}}})$ such that $1 \leq i_{d} \leq i^{\max}_{d}$.
The primal cells $\mcell{\bs{i}+\ensuremath{\frac{1}{2}}}$ of this grid are the boxes $\cuboid_{{S}} (x_{\bs{i}}, x_{\bs{i} + 1})$ with centers $x_{\bs{i} + \ensuremath{\frac{1}{2}}} := \frac{x_{\bs{i}} + x_{\bs{i} + 1}}{2}$.
The box spanned by the two points $x_{low}, x_{up}$ is defined as
\begin{align*}
\cuboid_{{S}}(x_{low},x_{up}) = \left \lbrace x \in \mathbb{R}^{{S}}: x_{low, d} \leq x_{d} \leq x_{up, d} \text{ for } d = 1, \dots, {S} \right \rbrace.
\end{align*}
With a slight abuse of multi-index notation, the sum of a multi-index and a scalar as in $\bs{i} + 1 := (i_1 + 1, \dots, i_{{S}} + 1)$ is applied component-wise.
The dual cell $\mcell{\bs{i}}$ with center $x_{\bs{i}}$ is the box $\cuboid_{{S}}(x_{\bs{i} - \ensuremath{\frac{1}{2}}}, x_{\bs{i} + \ensuremath{\frac{1}{2}}})$.
In the following we show the \mischeme{1} scheme on a two-dimensional square grid, i.e., a tensor-product grid where all nodes are equally spaced:
\begin{align*}
x_{\bs{i}} = (\xidx, \yidx) \Delta x.
\end{align*}
In the first-order \mischeme{1} scheme, the reconstructions $\sprec{\rho}, \sprec{\ensuremath{g}}$ are piecewise constant and equal to the cell means.
All occurrences of a quadrature rule $\mathcal{Q}$ are replaced by the midpoint-rule.
Then the right-hand side of the macro equation becomes
\begin{align*}
\Phi^{\PM}_{(\xidx, \yidx)} = -\delta \frac{1}{2\Delta x} &\left\langle -v_{\xi}(\ensuremath{g}_{(\xidx - \onehalf, \yidx - \onehalf)} + \ensuremath{g}_{(\xidx - \onehalf, \yidx + \onehalf)}) -v_{\eta} (\ensuremath{g}_{(\xidx - \onehalf, \yidx - \onehalf)} + \ensuremath{g}_{(\xidx + \onehalf, \yidx - \onehalf)} ) \right. \\
&\left. +v_{\xi}(\ensuremath{g}_{(\xidx + \onehalf, \yidx - \onehalf)} + \ensuremath{g}_{(\xidx + \onehalf, \yidx + \onehalf)}) +v_{\eta} (\ensuremath{g}_{(\xidx + \onehalf, \yidx + \onehalf)} + \ensuremath{g}_{(\xidx - \onehalf, \yidx + \onehalf)}) \right\rangle + \theta \mu(\rho_{(\xidx, \yidx)})\rho_{(\xidx, \yidx)},
\end{align*}
when we insert the fluxes on all four faces.
The term ${\Phi^{\PP}_{\FD}}$ is
\begin{align*}
{\Phi^{\PP}_{\FD}}_{(\xidx + \onehalf, \yidx + \onehalf)} = -\frac{\delta}{\varepsilon^2} \frac{1}{2\Delta x} \ensuremath{E}_{(\xidx + \onehalf, \yidx + \onehalf)}
&\left[ -v_{\xi} (\rho_{(\xidx, \yidx)} + \rho_{(\xidx, \yidx + 1)}) - v_{\eta} (\rho_{(\xidx, \yidx)} + \rho_{(\xidx + 1, \yidx)}) \right. \\
&\left. +v_{\xi} (\rho_{(\xidx + 1, \yidx)} + \rho_{(\xidx + 1, \yidx + 1)}) + v_{\eta} (\rho_{(\xidx + 1, \yidx + 1)} + \rho_{(\xidx, \yidx + 1)}) \right].
\end{align*}
Finally, we have:
\begin{align*}
\Phi^{\PP}_{(\xidx + \onehalf, \yidx + \onehalf)} =
&-\frac{\delta}{\varepsilon}\frac{1}{2\Delta x} \left[ \widehat{-v_{\xi}\ensuremath{g}}(\ensuremath{g}_{(\xidx + \onehalf, \yidx + \onehalf)}, \ensuremath{g}_{(l-\ensuremath{\frac{1}{2}}, m+\ensuremath{\frac{1}{2}})})
+ \widehat{-v_{\eta} \ensuremath{g}}(\ensuremath{g}_{(\xidx + \onehalf, \yidx + \onehalf)}, \ensuremath{g}_{(l+\ensuremath{\frac{1}{2}}, m-\ensuremath{\frac{1}{2}})}) \right. \\
&\qquad\qquad\left. + \widehat{v_{\xi}\ensuremath{g}}(\ensuremath{g}_{(\xidx + \onehalf, \yidx + \onehalf)}, \ensuremath{g}_{(l+\ensuremath{\frac{3}{2}}, m+\ensuremath{\frac{1}{2}})})
+ \widehat{v_{\eta} \ensuremath{g}}(\ensuremath{g}_{(\xidx + \onehalf, \yidx + \onehalf)}, \ensuremath{g}_{(l+\ensuremath{\frac{1}{2}}, m+\ensuremath{\frac{3}{2}})}) \right] \\
&+ \frac{\delta \nu \factorcol[a, j]}{\varepsilon^2} \left[ \LCO[a]\left( \tilde\rho_{(\xidx + \onehalf, \yidx + \onehalf)} \ensuremath{E}_{(\xidx + \onehalf, \yidx + \onehalf)} + \varepsilon \ensuremath{g}_{(\xidx + \onehalf, \yidx + \onehalf)} \right) \right] \\
&+ \frac{\theta \mu( \tilde \rho_{(\xidx + \onehalf, \yidx + \onehalf)} )}{\varepsilon} \left[ \mathcal{S} \left( \tilde \rho_{(\xidx + \onehalf, \yidx + \onehalf)} \ensuremath{E}_{(\xidx + \onehalf, \yidx + \onehalf)} + \varepsilon \ensuremath{g}_{(\xidx + \onehalf, \yidx + \onehalf)} \right) -\tilde \rho_{(\xidx + \onehalf, \yidx + \onehalf)} \ensuremath{E}_{(\xidx + \onehalf, \yidx + \onehalf)} \right]
\end{align*}
with an average density $\tilde \rho_{(\xidx + \onehalf, \yidx + \onehalf)} = \frac{1}{4}(\rho_{(\xidx, \yidx)} + \rho_{(\xidx + 1, \yidx)} + \rho_{(\xidx + 1, \yidx + 1)} + \rho_{(\xidx, \yidx + 1)} )$ over the primal cell $\mcell{(\xidx + \onehalf, \yidx + \onehalf)}$.
The numerical flux function can be any of the usual methods, for example the upwind flux
\begin{align*}
\widehat{v_{\xi} \ensuremath{g}}(\ensuremath{g}_{(\xidx + \onehalf, \yidx + \onehalf)}, \ensuremath{g}_{(l+\ensuremath{\frac{3}{2}}, m+\ensuremath{\frac{1}{2}})}) = \max(v_{\xi},0) \ensuremath{g}_{(\xidx + \onehalf, \yidx + \onehalf)} + \min(v_{\xi}, 0) \ensuremath{g}_{(l+\ensuremath{\frac{3}{2}}, m+\ensuremath{\frac{1}{2}})}.
\end{align*}
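The upwind flux above translates directly into code. The following sketch (a plain Python/NumPy illustration of the formula, not an implementation accompanying this work) applies it component-wise in the velocity variable:

```python
import numpy as np

def upwind_flux(v_xi, g_left, g_right):
    """Upwind flux for the transport term v_xi * g: take the left
    state where v_xi > 0 and the right state where v_xi < 0."""
    return np.maximum(v_xi, 0.0) * g_left + np.minimum(v_xi, 0.0) * g_right
```

Because the selection happens per velocity node, the same routine covers both the $\xi$- and $\eta$-directional fluxes.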
\subsection{Time discretization}
\newcommand{\lcol}[1]{\begin{array}{l} #1 \end{array}}
\newcommand{\rcol}[1]{\begin{array}{r} #1 \end{array}}
We use the IMEX schemes from \cite{Ascher1997}. The time-step size is denoted by $\Delta t$. In the first-order scheme, the forward-backward Euler scheme is used. For the particular system \eqref{eq:semi-discrete-system}, this reads
{
\renewcommand{\arraycolsep}{1.5pt}
\renewcommand{\arraystretch}{1.2}
\begin{alignat*}{3}
\rcol{\avgvec{\PM}^{*} \\ \avgvec{\PP}^{*}} &
\lcol{= \\ =} &&
\lcol{\avgvec{\PM}^{n} + \Delta t \avgvec{\Phi}^{\PM}(\avgvec{\PM}^{n}, \avgvec{\PP}^{n}) \\ \avgvec{\PP}^{n} + \Delta t \left(\avgvec{\Phi}^{\PP}_{\FD}(\avgvec{\PM}^{n}, \avgvec{\PP}^{n}) + \avgvec{\Phi}^{\PP}(\avgvec{\PM}^{n}, \avgvec{\PP}^{n}) \right)} &
\left. \lcol{~ \\ ~} \right\} &\, \text{explicit Euler step} \\
\rcol{\avgvec{\PM}^{n+1} \\ \avgvec{\PP}^{n+1} } &
\lcol{= \\ =} &&
\lcol{\avgvec{\PM}^{*} + \Delta t \avgvec{\Gamma}^{\PM}(\avgvec{\PM}^{n+1}, \avgvec{\PP}^{*}) \\ \avgvec{\PP}^{*} + \Delta t \avgvec{\Gamma}^{\PP}(\avgvec{\PM}^{*}, \avgvec{\PP}^{n + 1}) } &
\left. \lcol{~ \\ ~} \right\} &\, \lcol{\text{implicit solve} \\ \text{without coupling}}
\end{alignat*}
}
In the minimally implicit variant \mischeme[]{1} we have $\avgvec{\Gamma}^{\PM} = 0$ and thus the implicit solve for density reduces to $\avgvec{\PM}^{n + 1} = \avgvec{\PM}^{*}$.
Lemou and Mieussens proved that their scheme is stable under the time step restriction
\begin{align}
\label{eq:stability-time-step}
\Delta t \leq \frac{1}{2} \left( \Delta t_{\text{micro}} + \Delta t_{\text{macro} } \right).
\end{align}
We do not attempt to prove a stability result, but all our computations indicate that this choice leads to a stable scheme.
The microscopic time step restriction comes from the CFL condition in the discretization of the transport part and is given by
\begin{align*}
\Delta t_{\text{micro}} = \frac{1}{2} \frac{\Delta x}{c}.
\end{align*}
On the macroscopic scale, the scheme must respect the stability condition of the diffusion approximation as well as the CFL condition from advection:
\begin{align*}
\Delta t_{\text{macro}} = \min \left(\frac{\Delta x^2}{2 \norm{D}}, \frac{\Delta x}{2\norm{a}} \right).
\end{align*}
\begin{remark}[Glioma equation]
Considering the glioma equation \eqref{eq:lke-glioma-scaled}, the implicit part in the \mischeme[]{1} scheme can be solved analytically.
We have
\begin{align*}
\PP_{\pcellidx}^{n+1} &= \PP_{\pcellidx}^{*} + \Delta t \frac{\delta \factorcol[D, j]}{\varepsilon^2}\LCO[D] \PP_{\pcellidx}^{n+1}\\
&= \PP_{\pcellidx}^{*} - \Delta t \frac{\delta \factorcol[D, j]}{\varepsilon^2} \PP_{\pcellidx}^{n+1}
\end{align*}
which is easily solved for the update:
\begin{align*}
\PP_{\pcellidx}^{n+1} &= \frac{1}{1 + \Delta t \frac{\delta \factorcol[D, j]}{\varepsilon^2} } \PP_{\pcellidx}^{*}.
\end{align*}
This is of course no longer possible for the schemes \ivscheme{1} and \ivscheme{2} with implicitly discretized volume terms.
\end{remark}
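The analytic solve in the remark amounts to a single division per cell. A minimal sketch, with scalar placeholders for $\delta$, $\factorcol[D, j]$ and $\varepsilon$:

```python
def implicit_relaxation_update(g_star, dt, delta, sigma_D, eps):
    """Analytic solve of g^{n+1} = g^* - dt*(delta*sigma_D/eps^2)*g^{n+1},
    valid because L_D acts as minus the identity on the perturbation."""
    return g_star / (1.0 + dt * delta * sigma_D / eps**2)
```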
The second-order scheme has to be chosen carefully to keep the asymptotic-preserving property. The subclass of stiffly accurate schemes in \cite{Ascher1997}, in which the updated solution is identical to the last stage of a time step, seems to maintain the AP-property.
The second-order time-stepping scheme for \eqref{eq:semi-discrete-system} is
{
\renewcommand{\arraycolsep}{1.5pt}
\renewcommand{\arraystretch}{1.2}
\begin{alignat*}{3}
\rcol{(\avgvec{\Phi}^{\PM})^{(1)} \\ (\avgvec{\Phi}^{\PP}_{\FD})^{(1)} \\ (\avgvec{\Phi}^{\PP})^{(1)}} &
\lcol{= \\ = \\ = } &&
\lcol{\avgvec{\Phi}^{\PM}(\avgvec{\PM}^{n}, \avgvec{\PP}^{n}) \\ \avgvec{\Phi}^{\PP}_{\FD}(\avgvec{\PM}^{n}, \avgvec{\PP}^{n}) \\ \avgvec{\Phi}^{\PP}(\avgvec{\PM}^{n}, \avgvec{\PP}^{n}) } &
\left.\lcol{~\\~\\~}\right\}& \, \lcol{\text{compute operators} \\ \text{at time $t$} }\\
\rcol{\avgvec{\PM}^{*} \\ \avgvec{\PP}^{*}} &
\lcol{= \\ =} &&
\lcol{\avgvec{\PM}^{n} + \tau \Delta t (\avgvec{\Phi}^{\PM})^{(1)} \\ \avgvec{\PP}^{n} + \tau \Delta t \left((\avgvec{\Phi}^{\PP}_{\FD})^{(1)} + (\avgvec{\Phi}^{\PP})^{(1)} \right) } &
\left.\lcol{~\\~}\right\}& \, \lcol{\text{intermediate explicit} \\ \text{step to } t+\tau \Delta t } \\
\rcol{\avgvec{\PM}^{(n, 1)} \\ \avgvec{\PP}^{(n,1)} } &
\lcol{= \\ = } &&
\lcol{\avgvec{\PM}^{*} + \tau \Delta t \avgvec{\Gamma}^{\PM}(\avgvec{\PM}^{(n,1)}, \avgvec{\PP}^{*})\\ \avgvec{\PP}^{*} + \tau \Delta t \avgvec{\Gamma}^{\PP}(\avgvec{\PM}^{*}, \avgvec{\PP}^{(n, 1)})} &
\left.\lcol{~ \\ ~}\right\}& \, \lcol{\text{intermediate implicit} \\ \text{step}}\\
\rcol{(\avgvec{\Phi}^{\PM})^{(2)} \\ (\avgvec{\Phi}^{\PP}_{\FD})^{(2)} \\ (\avgvec{\Phi}^{\PP})^{(2)} \\ (\avgvec{\Gamma}^{\PM})^{(2)} \\ (\avgvec{\Gamma}^{\PP})^{(2)} } &
\lcol{= \\ = \\ = \\ = \\ = } &&
\lcol{\avgvec{\Phi}^{\PM}(\avgvec{\PM}^{(n,1)},\avgvec{\PP}^{(n,1)}) \\ \avgvec{\Phi}^{\PP}_{\FD}(\avgvec{\PM}^{(n,1)}, \avgvec{\PP}^{(n,1)}) \\ \avgvec{\Phi}^{\PP}(\avgvec{\PM}^{(n,1)}, \avgvec{\PP}^{(n,1)}) \\ \avgvec{\Gamma}^{\PM}(\avgvec{\PM}^{(n,1)}, \avgvec{\PP}^{(n,1)}) \\ \avgvec{\Gamma}^{\PP}(\avgvec{\PM}^{(n,1)}, \avgvec{\PP}^{(n,1)}) } &
\left.\lcol{~\\~\\~\\~\\~}\right\}&\, \lcol{\text{compute operators} \\ \text{at time $t+\tau \Delta t$}} \\
\rcol{\avgvec{\PM}^{**} \\ ~ \\ \avgvec{\PP}^{**}\\ ~ } &
\lcol{= \\ ~ \\ = \\ ~} &&
\lcol{\avgvec{\PM}^{n} + (1-\tau)\Delta t (\avgvec{\Gamma}^{\PM})^{(2)} \\ + \Delta t (\sigma (\avgvec{\Phi}^{\PM})^{(1)} + (1-\sigma) (\avgvec{\Phi}^{\PM})^{(2)} ) \\ \avgvec{\PP}^{n} + ( 1 - \tau ) \Delta t (\avgvec{\Gamma}^{\PP})^{(2)} \\ + \Delta t (\sigma (\avgvec{\Phi}^{\PP}_{\FD} + \avgvec{\Phi}^{\PP})^{(1)} + (1-\sigma) (\avgvec{\Phi}^{\PP}_{\FD} + \avgvec{\Phi}^{\PP})^{(2)} )} &
\left.\lcol{~\\~\\~\\~}\right\}&\, \text{explicit step to $t + \Delta t$} \\
\rcol{\avgvec{\PM}^{n+1} \\ \avgvec{\PP}^{n+1} } &
\lcol{= \\ = } &&
\lcol{\avgvec{\PM}^{**} + \tau \Delta t \avgvec{\Gamma}^{\PM}(\avgvec{\PM}^{n+1}, \avgvec{\PP}^{**})\\ \avgvec{\PP}^{**} + \tau \Delta t \avgvec{\Gamma}^{\PP}(\avgvec{\PM}^{**}, \avgvec{\PP}^{n+1})} &
\left.\lcol{~ \\ ~}\right\}& \, \text{implicit step}
\end{alignat*}
with the constants
\begin{align*}
\tau &= \frac{2 - \sqrt{2}}{2} \\
\sigma &= 1 - \frac{1}{2 \tau}.
\end{align*}
}
Our numerical experiments indicate that the time step \eqref{eq:stability-time-step} needs to be restricted further by a factor of $0.2$ to achieve stability with this scheme.
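As a sanity check (our own verification; the weights and stage times below are our reading of the update formulas above), the constants $\tau$ and $\sigma$ satisfy the classical second-order quadrature conditions of both the explicit and the implicit part:

```python
import math

tau = (2.0 - math.sqrt(2.0)) / 2.0
sigma = 1.0 - 1.0 / (2.0 * tau)

# Explicit weights (sigma, 1-sigma) at stage times (0, tau):
explicit_order2 = sigma * 0.0 + (1.0 - sigma) * tau   # should equal 1/2

# Implicit weights (1-tau, tau) at stage times (tau, 1):
implicit_order2 = (1.0 - tau) * tau + tau * 1.0       # should equal 1/2
```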
\subsection{The asymptotic limit of the scheme}
\label{sec:asymptotic-limit}
We consider the first-order minimally implicit variant which can, with some reordering of the steps, be written as
\begin{align*}
\avgvec{\PP}^{*} &= \avgvec{\PP}^{n} + \Delta t \left(\avgvec{\Phi}^{\PP}_{\FD}(\avgvec{\PM}^{n}, \avgvec{\PP}^{n}) + \avgvec{\Phi}^{\PP}(\avgvec{\PM}^{n}, \avgvec{\PP}^{n}) \right) \\
\avgvec{\PP}^{n + 1} &= \avgvec{\PP}^{*} + \Delta t \avgvec{\Gamma}^{\PP}(\avgvec{\PP}^{n + 1}) \\
\avgvec{\PM}^{n+1} &= \avgvec{\PM}^{n} + \Delta t \avgvec{\Phi}^{\PM}(\avgvec{\PM}^{n}, \avgvec{\PP}^{n+1}).
\end{align*}
This already resembles a discrete version of the derivation of the diffusion limit \eqref{eq:diffusion-limit-general}, where we first computed the perturbation and then inserted it into the density equation.
In the diffusion limit, only those terms with a factor $\frac{1}{\varepsilon^2}$ in front remain.
Thus the implicit perturbation update reduces to
\begin{align*}
\ensuremath{g}_{j}^{n + 1} &= -\frac{\varepsilon^2}{\Delta t \delta \factorcol[D, j]} \LCO[D]^{-1} \ensuremath{g}_{j}^{*}
\end{align*}
with
\begin{align*}
\ensuremath{g}_{j}^{*} &= \Delta t \left( {\Phi^{\PP}_{\FD}}_{j}(\avgvec{\PM}^{n}, \avgvec{\PP}^{n}) + \Phi^{\PP}_{j}(\avgvec{\PM}^{n}, \avgvec{\PP}^{n})\right) \\
&= \Delta t \left(
-\frac{\delta}{\varepsilon^2} \frac{1}{\abs{\mcell{\pcellidx}}} \sum_{k \in \mneighbors{j}} \left( \sum_{\pvertidx \in \madjacents{j}{k}} \abs{\mfacet{\pcellidx}{\pcellidxnb}{\dcellidx}} v \rho_{\pvertidx}^{n} \ensuremath{E}_{j} \right) \cdot n_{j, k}
+ \frac{\delta \nu \factorcol[a]}{\varepsilon^2} \LCO[a]\left(\ensuremath{E}_{j}\right) \tilde{\rho}_{j}^{n}
\right).
\end{align*}
Combining these two expressions yields
\begin{align*}
\ensuremath{g}_{j}^{n + 1} &= -\frac{1}{\factorcol[D, j]} \left(
-\frac{1}{\abs{\mcell{\pcellidx}}} \sum_{k \in \mneighbors{j}} \left( \sum_{\pvertidx \in \madjacents{j}{k}} \abs{\mfacet{\pcellidx}{\pcellidxnb}{\dcellidx}} \LCO[D]^{-1} (v \ensuremath{E}_{j}) \rho_{\pvertidx}^{n} \right) \cdot n_{j, k}
+\nu \factorcol[a, j] \LCO[D]^{-1} \LCO[a]\left(\ensuremath{E}_{j} \right) \tilde{\rho}_{j}^{n}
\right).
\end{align*}
Finally, we get the limit of the scheme as $\varepsilon \rightarrow 0$, when we insert this expression into the update for the density:
\begin{align*}
\rho_{\pvertidx}^{n+1} &= \rho_{\pvertidx}^{n} + \Delta t \left( -\delta \frac{1}{\abs{\mcell{\dcellidx}}} \sum_{\pvertidxnb \in \mneighbors{\pvertidx}} \sum_{j \in \madjacents{\pvertidx}{\pvertidxnb}} \abs{\mfacet{\dcellidx}{\dcellidxnb}{\pcellidx}} \ints{v \ensuremath{g}_{j}^{n+1} } \cdot n_{\pvertidx, \pvertidxnb}^{j} + \theta \mu(\rho_{\pvertidx}^{n}) \rho_{\pvertidx}^{n} \right).
\end{align*}
This is an explicit scheme for the density $\rho_{\pvertidx}^{n + 1}$.
The updated value $\rho_{\pvertidx}^{n + 1}$ only depends on the previous values on the same dual cell $\mcell{\dcellidx}$ and on those dual cells $\mcell{\pvertidxnb'}$ that share at least one vertex with it, i.e.,
$\mcell{\dcellidx} \cap \mcell{\pvertidxnb'} \neq \emptyset$, or equivalently $\exists \pcellidx : \mvert{\dvertidx} \in \mverticesof{\pcellidx} \wedge \mvert{\pvertidxnb'} \in \mverticesof{\pcellidx}$.
On a square grid in two dimensions, this is equivalent to
\begin{align*}
\ensuremath{g}_{(\xidx + \onehalf, \yidx + \onehalf)}^{n+1} &= \frac{1}{\factorcol[D, j]} \frac{1}{2\Delta x} \LCO[D]^{-1} \left(\ensuremath{E}_{(\xidx + \onehalf, \yidx + \onehalf)} \right.
\left. \left[ -v_{\xi} \left(\rho_{(\xidx, \yidx + 1)}^{n} + \rho_{(\xidx, \yidx)}^{n}\right) -v_{\eta} \left(\rho_{(\xidx, \yidx)}^{n} + \rho_{(\xidx + 1, \yidx)}^{n} \right) \right. \right. \\
& \qquad\qquad\qquad\qquad\qquad\qquad \left. \left. + v_{\xi} \left(\rho_{(\xidx + 1, \yidx)}^{n} + \rho_{(\xidx + 1, \yidx + 1)}^{n}\right) + v_{\eta} \left(\rho_{(\xidx + 1, \yidx + 1)}^{n} + \rho_{(\xidx, \yidx + 1)}^{n} \right)\right] \right) \\
&\quad -\frac{\nu \factorcol[a, j]}{\factorcol[D]} \LCO[D]^{-1} \LCO[a](\ensuremath{E}_{(\xidx + \onehalf, \yidx + \onehalf)}) \frac{1}{4}\left(\rho_{(\xidx, \yidx)}^{n} + \rho_{(\xidx + 1, \yidx)}^{n} + \rho_{(\xidx + 1, \yidx + 1)}^{n} + \rho_{(\xidx, \yidx + 1)}^{n} \right)
\end{align*}
\begin{align*}
\rho_{(\xidx, \yidx)}^{n + 1} = \rho_{(\xidx, \yidx)}^{n} - \frac{\Delta t \delta}{2\Delta x}
&\left\langle -v_{\xi}\left(\ensuremath{g}_{(\xidx - \onehalf, \yidx - \onehalf)}^{n+1} + \ensuremath{g}_{(\xidx - \onehalf, \yidx + \onehalf)}^{n+1}\right) -v_{\eta} \left(\ensuremath{g}_{(\xidx - \onehalf, \yidx - \onehalf)}^{n+1} + \ensuremath{g}_{(\xidx + \onehalf, \yidx - \onehalf)}^{n+1} \right) \right. \\
& \left. +v_{\xi}\left(\ensuremath{g}_{(\xidx + \onehalf, \yidx - \onehalf)}^{n+1} + \ensuremath{g}_{(\xidx + \onehalf, \yidx + \onehalf)}^{n+1}\right) +v_{\eta} \left(\ensuremath{g}_{(\xidx + \onehalf, \yidx + \onehalf)}^{n+1} + \ensuremath{g}_{(\xidx - \onehalf, \yidx + \onehalf)}^{n+1}\right) \right\rangle + \Delta t \theta \mu(\rho_{(\xidx, \yidx)}^{n})\rho_{(\xidx, \yidx)}^{n}.
\end{align*}
For the special case that the equilibrium $\ensuremath{E}$ and the factors $\factorcol[D], \factorcol[a]$ are constant in space, we write the resulting scheme as one equation for the density by eliminating the perturbations.
After tedious calculations, we arrive at
\begin{align*}
\rho_{(\xidx, \yidx)}^{n + 1} = \rho_{(\xidx, \yidx)}^{n} + \Delta t\frac{\delta}{\factorcol[D]}\left( \overline{\ensuremath{\nabla_x \cdot} (D \nabla_x\rho)} \right) - \Delta t\frac{\delta \nu \factorcol[a]}{\factorcol[D]} \left(\overline{\ensuremath{\nabla_x \cdot}(a \rho)} \right) + \Delta t \theta \mu(\rho_{(\xidx, \yidx)}^{n})\rho_{(\xidx, \yidx)}^{n}
\end{align*}
with approximations to the diffusion
\begin{align*}
\overline{\ensuremath{\nabla_x \cdot} (D \nabla_x\rho)} = \frac{1}{4 \Delta x^2}\Big(
&\rho_{(l , m )}^{n} (-4D_{\xi \xi} -4 D_{\eta \eta}) \\
&+\rho_{(l-1, m )}^{n} (2D_{\xi \xi} - 2D_{\eta \eta}) \\
&+\rho_{(l+1, m )}^{n} (2D_{\xi \xi} -2D_{\eta \eta}) \\
&+\rho_{(l , m-1)}^{n} (-2D_{\xi \xi} + 2D_{\eta \eta}) \\
&+\rho_{(l , m+1)}^{n} (-2D_{\xi \xi} + 2D_{\eta \eta}) \\
&+\rho_{(l-1, m-1)}^{n} (D_{\xi \xi} + 2D_{\xi \eta} + D_{\eta \eta}) \\
&+\rho_{(l+1, m-1)}^{n} (D_{\xi \xi} - 2D_{\xi \eta} + D_{\eta \eta}) \\
&+\rho_{(l-1, m+1)}^{n} (D_{\xi \xi} - 2D_{\xi \eta} + D_{\eta \eta}) \\
&+\rho_{(l+1, m+1)}^{n} (D_{\xi \xi} + 2D_{\xi \eta} + D_{\eta \eta}) \Big)
\end{align*}
and drift
\begin{align*}
\overline{\ensuremath{\nabla_x \cdot}(a \rho)} = \frac{1}{8 \Delta x} \Big(
&\rho_{(l-1, m )}^{n} (-2a_{\xi} ) \\
&+\rho_{(l+1, m )}^{n} ( 2a_{\xi} ) \\
&+\rho_{(l , m-1)}^{n} ( -2a_{\eta}) \\
&+\rho_{(l , m+1)}^{n} ( 2a_{\eta}) \\
&+\rho_{(l-1, m-1)}^{n} (- a_{\xi} - a_{\eta}) \\
&+\rho_{(l+1, m-1)}^{n} ( a_{\xi} - a_{\eta}) \\
&+\rho_{(l-1, m+1)}^{n} (- a_{\xi} + a_{\eta}) \\
&+\rho_{(l+1, m+1)}^{n} ( a_{\xi} + a_{\eta}) \Big)
\end{align*}
wherein $D$ is the diffusion tensor from \eqref{eq:general-diffusion-tensor} and $a$ is the drift vector from \eqref{eq:general-drift-vector}.
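The two stencils can be checked mechanically. The sketch below (plain Python; the weights are copied from the diffusion stencil above) verifies that the nine-point approximation is consistent, i.e., exact on quadratic polynomials:

```python
import numpy as np

def diffusion_weights(Dxx, Dxy, Dyy, dx):
    """Nine-point weights of the limit diffusion stencil, keyed by the
    neighbor offset (dl, dm) relative to the cell (l, m)."""
    w = {
        ( 0,  0): -4*Dxx - 4*Dyy,
        (-1,  0):  2*Dxx - 2*Dyy,      ( 1,  0):  2*Dxx - 2*Dyy,
        ( 0, -1): -2*Dxx + 2*Dyy,      ( 0,  1): -2*Dxx + 2*Dyy,
        (-1, -1):  Dxx + 2*Dxy + Dyy,  ( 1, -1):  Dxx - 2*Dxy + Dyy,
        (-1,  1):  Dxx - 2*Dxy + Dyy,  ( 1,  1):  Dxx + 2*Dxy + Dyy,
    }
    return {k: v / (4.0 * dx**2) for k, v in w.items()}

def apply_stencil(weights, rho, dx):
    """Apply the stencil to a smooth function rho(xi, eta) at the origin."""
    return sum(w * rho(dl * dx, dm * dx) for (dl, dm), w in weights.items())
```

Applied to $\rho = \xi^2$, $\xi\eta$ and $\eta^2$, the stencil returns $2D_{\xi\xi}$, $2D_{\xi\eta}$ and $2D_{\eta\eta}$, i.e., $\ensuremath{\nabla_x \cdot} (D \nabla_x \rho)$ exactly, and the weights sum to zero.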
If the diffusion tensor is the identity $D = I$, which is the case for example in the glioma equation with isotropic equilibrium $\ensuremath{E}(v) = 1$, then the discrete diffusion reduces to a diagonal five-point stencil:
\begin{align*}
\overline{\ensuremath{\nabla_x \cdot} (D \nabla_x\rho)} = \overline{\ensuremath{\nabla_x \cdot} (\nabla_x\rho)} = \frac{1}{2 \Delta x^2} \left( - 4 \rho_{(l , m )}^{n} +
\rho_{(l-1, m-1)}^{n} + \rho_{(l+1, m-1)}^{n} + \rho_{(l-1, m+1)}^{n} + \rho_{(l+1, m+1)}^{n} \right).
\end{align*}
In this special case, the presented AP-method is identical to the nodal scheme proposed in \cite{buet2012design}.
As already discussed therein, the scheme leads to a decoupling of meshes.
If we start with a Dirac initial condition on cell $(l, m)$, only cells $(l + l', m + m')$ with even index sum $l' + m'$ will ever receive mass.
Computations of the linesource test show a strong checkerboard pattern, see \figref{fig:spmc-a}.
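The decoupling is easy to reproduce numerically. The following sketch (our own illustration; periodic padding via `np.roll` is used purely for brevity) applies the diagonal stencil to a Dirac initial condition:

```python
import numpy as np

def diagonal_laplacian_step(rho, dt, dx):
    """One explicit Euler step with the diagonal five-point stencil."""
    diag = (np.roll(rho, ( 1,  1), (0, 1)) + np.roll(rho, ( 1, -1), (0, 1))
          + np.roll(rho, (-1,  1), (0, 1)) + np.roll(rho, (-1, -1), (0, 1)))
    return rho + dt / (2.0 * dx**2) * (diag - 4.0 * rho)

# Dirac initial condition in the center of an 11 x 11 grid.
rho = np.zeros((11, 11))
rho[5, 5] = 1.0
for _ in range(4):
    rho = diagonal_laplacian_step(rho, dt=1e-5, dx=0.1)
```

After any number of steps (short of wrap-around), all cells with odd index sum remain exactly zero: the checkerboard pattern of \figref{fig:spmc-a}.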
The drift is approximated by a central scheme, which is also not ideal.
For example, inserting the first unit vector $a = (1, 0)\trans$ for the drift, we get
\begin{align*}
\overline{\ensuremath{\nabla_x \cdot} (a \rho)} = \overline{\partial_{\xi} \rho} = \frac{1}{8 \Delta x} \left( -2 \rho_{(l-1, m)}^{n} +2\rho_{(l+1, m)}^{n} - \rho_{(l-1, m-1)}^{n} + \rho_{(l+1, m-1)}^{n} - \rho_{(l-1, m+1)}^{n} + \rho_{(l+1, m+1)}^{n}\right).
\end{align*}
In the next two subsections we show how to modify the AP-method in such a way that the diffusion and drift are better approximated in the limit.
Particularly, on a tensor-product grid the diffusion will be approximated by a standard five-point stencil, and the drift by an upwind method.
\subsection{An improved diffusion stencil in the limit}
\label{sec:spmc}
In the last section we have seen that the numerical diffusion approximation results from a concatenation of the macroscopic fluxes $\Delta t \Phi^{\PM}_{\pvertidx}(\avgvec{\PM}^{n}, \avgvec{\PP}^{n+1})$ with $-\frac{\varepsilon^2}{\Delta t \delta \factorcol[D]} \LCO[D]^{-1} {{\Phi^{\PP}_{\FD}}}_{j}(\avgvec{\PM}^{n}, \avgvec{\PP}^{n})$ on overlapping primal cells $j \in \mverticesof{\pvertidx}$.
The goal of this section is to modify $\Phi^{\PM}_{\pvertidx}$ and ${\Phi^{\PP}_{\FD}}_{j}$ such that---on a square grid in two dimensions---the resulting diffusion approximation becomes the standard five-point stencil.
To simplify the following computations as much as possible, we set $\delta = 1$, $\factorcol[D] = 1$ and use a constant-in-space equilibrium $\ensuremath{E}(x, v) = \ensuremath{E}(v)$ such that the diffusion tensor is $D = I$.
Recall the flux over primal faces in the most general form:
\begin{align*}
\fluxgr = \frac{\abs{\mface{\pcellidx}{\pcellidxnb}}}{\abs{\mcell{\pcellidx}}} \mapproxavg{\mface{\pcellidx}{\pcellidxnb}}{(v \sprec{\rho} \ensuremath{E}) \cdot n_{j, k}}.
\end{align*}
Together with a piecewise constant reconstruction of the density $\evalat{\sprec{\rho}}{\mcell{\dcellidx}} = \PM_{\dcellidx}$ this results in the formulation
\begin{align*}
\fluxgr = \frac{1}{\abs{\mcell{\pcellidx}}} \left( \sum_{\pvertidx \in \madjacents{j}{k}} \abs{\mfacet{\pcellidx}{\pcellidxnb}{\dcellidx}} v \PM_{\dcellidx} \ensuremath{E}_{j} \right) \cdot n_{j, k}.
\end{align*}
This is a sum of constant fluxes over the facets $\mfacet{\pcellidx}{\pcellidxnb}{\dcellidx}$, weighted by the facet volumes $\abs{\mfacet{\pcellidx}{\pcellidxnb}{\dcellidx}}$.
In the derivation of the AP scheme on square grids in \secref{sec:ap-scheme-square-grid} we used this method.
Considering the primal face between the cells $(l+\ensuremath{\frac{1}{2}}, m+\ensuremath{\frac{1}{2}})$ and $(l+\ensuremath{\frac{3}{2}}, m+\ensuremath{\frac{1}{2}})$, this method in effect assigns equal weights $\frac{1}{2 \Delta x}$ to both overlapping dual cells $(l+1, m)$ and $(l+1, m+1)$.
We get the same weights if we reconstruct $\sprec{\rho}$ as a globally continuous function from bilinear elements on each dual cell and use a midpoint quadrature rule on the faces.
Starting from this interpretation, we define four variants of the microscopic flux
\begin{align*}
\microexplfibressk{(\xidx + \onehalf, \yidx + \onehalf)}{\xi,+}, \microexplfibressk{(\xidx + \onehalf, \yidx + \onehalf)}{\xi,-}, \microexplfibressk{(\xidx + \onehalf, \yidx + \onehalf)}{\eta,+}, \microexplfibressk{(\xidx + \onehalf, \yidx + \onehalf)}{\eta,-}
\end{align*}
that use different quadratures on different faces.
In the $(\xi,+)$-variant, the flux on $\xi$-normal faces is evaluated at the uppermost points, while for the $\eta$-normal faces the midpoint rule is used:
\begin{align*}
\microexplfibressk{(\xidx + \onehalf, \yidx + \onehalf)}{\xi,+} = -\frac{\delta}{\varepsilon^2}\frac{1}{\Delta x} \ensuremath{E}_{(\xidx + \onehalf, \yidx + \onehalf)} \left[ -v_{\xi} \rho_{(l, m+1)} - \onehalf v_{\eta} (\rho_{(l,m)}+ \rho_{(l+1, m)}) + v_{\xi} \rho_{(l+1, m+1)} + \onehalf v_{\eta}(\rho_{(l, m+1)} + \rho_{(l+1,m+1)})\right].
\end{align*}
Similarly, the $(\xi,-)$-variant uses evaluations at the lowermost points of the $\xi$-normal faces:
\begin{align*}
\microexplfibressk{(\xidx + \onehalf, \yidx + \onehalf)}{\xi,-} = -\frac{\delta}{\varepsilon^2}\frac{1}{\Delta x} \ensuremath{E}_{(\xidx + \onehalf, \yidx + \onehalf)} \left[ -v_{\xi} \rho_{(l, m)} - \onehalf v_{\eta} (\rho_{(l,m)} +\rho_{(l+1, m)}) + v_{\xi} \rho_{(l+1, m)} + \onehalf v_{\eta}(\rho_{(l, m+1)} + \rho_{(l+1,m+1)})\right].
\end{align*}
The other variants are defined analogously for the $\eta$-normal faces.
The shifted evaluations are zeroth-order accurate Gauss-Radau quadrature rules, which is sufficient for a first-order scheme.
In a second-order scheme, they have to be replaced by the correct first-order Gauss-Radau rules.
We use each flux variant in the perturbation update $\ensuremath{g}_{(\xidx + \onehalf, \yidx + \onehalf)}^{n+1}$ in turn to compute the four modified perturbations
\begin{align*}
\ensuremath{g}_{(\xidx + \onehalf, \yidx + \onehalf)}^{n+1, (\xi,+)}, \ensuremath{g}_{(\xidx + \onehalf, \yidx + \onehalf)}^{n+1, (\xi,-)}, \ensuremath{g}_{(\xidx + \onehalf, \yidx + \onehalf)}^{n+1, (\eta,+)}, \ensuremath{g}_{(\xidx + \onehalf, \yidx + \onehalf)}^{n+1, (\eta,-)}.
\end{align*}
Now we modify the density flux $\Phi^{\PM}_{(\xidx, \yidx)}$.
In each flux over a dual facet, the ``correct'' variant of the perturbation is used:
\begin{align*}
\rho_{(\xidx, \yidx)}^{n + 1} = \rho_{(\xidx, \yidx)}^{n} - \frac{\Delta t \delta}{2\Delta x}
&\left\langle -v_{\xi}\left(\ensuremath{g}_{(\xidx - \onehalf, \yidx - \onehalf)}^{n+1, (\xi,+)} + \ensuremath{g}_{(\xidx - \onehalf, \yidx + \onehalf)}^{n+1, (\xi,-)}\right) -v_{\eta} \left(\ensuremath{g}_{(\xidx - \onehalf, \yidx - \onehalf)}^{n+1, (\eta,+)} + \ensuremath{g}_{(\xidx + \onehalf, \yidx - \onehalf)}^{n+1, (\eta,-)} \right) \right. \\
& \left. +v_{\xi}\left(\ensuremath{g}_{(\xidx + \onehalf, \yidx - \onehalf)}^{n+1, (\xi,+)} + \ensuremath{g}_{(\xidx + \onehalf, \yidx + \onehalf)}^{n+1, (\xi,-)}\right) +v_{\eta} \left(\ensuremath{g}_{(\xidx + \onehalf, \yidx + \onehalf)}^{n+1, (\eta,-)} + \ensuremath{g}_{(\xidx - \onehalf, \yidx + \onehalf)}^{n+1, (\eta,+)}\right) \right\rangle + \Delta t \theta \mu(\rho_{(\xidx, \yidx)}^{n})\rho_{(\xidx, \yidx)}^{n}.
\end{align*}
The same tedious calculations as in the previous \secref{sec:asymptotic-limit} show that the diffusion is approximated by
\begin{align*}
\overline{\ensuremath{\nabla_x \cdot} (D \nabla_x\rho)} = \frac{1}{4 \Delta x^2}\Big(
&\rho_{(l , m )}^{n} (-8D_{\xi \xi} -8 D_{\eta \eta}) \\
&+\rho_{(l-1, m )}^{n} (4D_{\xi \xi})
+\rho_{(l+1, m )}^{n} (4D_{\xi \xi})
+\rho_{(l , m-1)}^{n} (4D_{\eta \eta})
+\rho_{(l , m+1)}^{n} (4D_{\eta \eta}) \\
&+\rho_{(l-1, m-1)}^{n} (2D_{\xi \eta})
+\rho_{(l+1, m-1)}^{n} (-2D_{\xi \eta})
+\rho_{(l-1, m+1)}^{n} (-2D_{\xi \eta})
+\rho_{(l+1, m+1)}^{n} (2D_{\xi \eta}) \Big)
\end{align*}
in the limit.
For an isotropic diffusion tensor $D = I$, this is the classical five-point stencil
\begin{align*}
\overline{\ensuremath{\nabla_x \cdot} (D \nabla_x\rho)} = \overline{\ensuremath{\nabla_x \cdot} (\nabla_x\rho)} = \frac{1}{\Delta x^2} \left( - 4 \rho_{(l , m )}^{n} +
\rho_{(l-1, m)}^{n} + \rho_{(l+1, m)}^{n} + \rho_{(l, m-1)}^{n} + \rho_{(l, m+1)}^{n} \right).
\end{align*}
\begin{remark}[Extension to three dimensions]
In three space dimensions the procedure is structurally very similar but the notation becomes even more unwieldy.
The computational cost also increases, because we need twelve variants, four for each normal direction.
For example, in the variant $(\xi,++)$, the fluxes over $\xi$-normal faces are evaluated at the top right node.
\end{remark}
\begin{figure}[h]
\deffigures/Brain/{figures/SpuriousModesCorrector/}
\centering
\withfiguresize{\figurethreecol}{\figurethreecol}{\externaltikz{spurious_modes_corrector}{\input{figures/Brain/ spurious_modes_corrector}}}
\settikzlabel{fig:spmc-a}
\settikzlabel{fig:spmc-b}
\settikzlabel{fig:spmc-c}
\caption{Comparison between the direct application of the scheme $\mischeme{1}$ and the scheme with improved diffusion stencil \mischeme[+]{1} from \secref{sec:spmc} on the linesource benchmark.
Plots of the density for \mischeme[\times]{1} (\ref{fig:spmc-a}) and \mischeme[+]{1} (\ref{fig:spmc-b}). In \ref{fig:spmc-c}, the relative difference $\reldiff{\rho_{\times}}{\rho_{+}} = \frac{1}{\max\abs{\rho_{+}}} (\rho_{\times} - \rho_{+})$ is plotted on a signed truncated logarithmic scale.}
\label{fig:spmc}
\end{figure}
\subsection{Upwind discretization of the drift in the limit}
\label{sec:upwind-drift}
The limit drift approximation follows from a concatenation of the macroscopic flux $\Delta t \Phi^{\PM}$ with $-\frac{\nu \factorcol[a]}{\factorcol[D]} \LCO[D]^{-1} \LCO[a](\ensuremath{E}_{j}) \tilde{\rho}_{j}^{n}$.
Using an average density $\tilde{\rho}_{j}$ weighted by the subcell volumes as in \eqref{eq:density-on-primal-cell} leads to a central approximation of the drift.
But we know the local drift direction
\begin{align*}
a_{j} = \ints{v \LCO[D]^{-1}\LCO[a]\ensuremath{E}_{j}}
\end{align*}
and want to assign more weight to those cells $\mcell{\dcellidx}$ that are upwind of the center $\mvert{\dvertidx}$.
We write $\mvert{\dvertidx}^{*}$ for the intersection of the ray
\begin{align*}
\mvert{\dvertidx} - \tau a_{j}, \quad \tau \in \mathbb{R}^+
\end{align*}
with the cell boundary $\partial \mcell{\pcellidx}$.
Then we define
\begin{align*}
\tilde{\rho}_{j} = \sprec{\rho}(\mvert{\dvertidx}^{*})
\end{align*}
with a continuous, piecewise linear reconstruction $\sprec{\rho}$ by hat-functions.
This is of course only a first-order accurate approximation of the drift.
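A minimal sketch of the construction of the upwind evaluation point (restricted, as a simplification of our own, to an axis-aligned box cell; the names are illustrative):

```python
import numpy as np

def upwind_eval_point(center, cell_lo, cell_hi, a):
    """First intersection of the ray center - tau*a (tau > 0) with the
    boundary of the box [cell_lo, cell_hi]; assumes a != 0 and that
    center lies inside the cell."""
    center, d = np.asarray(center, float), -np.asarray(a, float)
    taus = []
    for k in range(center.size):
        if d[k] > 0:
            taus.append((cell_hi[k] - center[k]) / d[k])
        elif d[k] < 0:
            taus.append((cell_lo[k] - center[k]) / d[k])
    return center + min(taus) * d
```

For a drift $a = (1, 0)$ the point lands on the left (upwind) face, so the reconstruction $\sprec{\rho}$ weights the upwind dual cells more strongly.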
\subsection{Treatment of boundary conditions}
We consider only boundary conditions that preserve mass.
On a macroscopic level this translates to a zero-flux Robin-type boundary condition for the density in \eqref{eq:diffusion-limit-general}:
\begin{align}
\label{eq:bc-limit-zero-flux}
\evalat{\left(-\ensuremath{\nabla_x \cdot} ({\rho_0} D) + \nu {\rho_0} a\right) \cdot n}{\partial \Omega_{\xcoord}} = 0.
\end{align}
This does not determine the boundary conditions on the microscopic level uniquely.
All microscopic boundary conditions for $\ensuremath{f}$ that can be cast into the class of reflective boundary conditions preserve mass.
At a reflective boundary, the values $\ensuremath{f}(v)$ are prescribed for incoming velocities $v \cdot n < 0$ and follow from the outgoing values via the reflection integral:
\begin{align}
\label{eq:bc-full}
\ensuremath{f}(v) &= \intBplus{v'}{B(v, v') \ensuremath{f}(v')} & \forall \Vminus{v}.
\end{align}
Of course, the reflection kernel $B$ is defined such that the net mass flux across the boundary is zero, that is, it fulfills
\begin{equation}
\label{eq:bc-zero-flux}
\begin{aligned}
0 = \intV{(v \cdot n) \ensuremath{f}(v)}
&= \intBplus{v}{(v \cdot n) \ensuremath{f}(v)} + \intBminus{v}{(v \cdot n) \ensuremath{f}(v)}\\
&= \intBplus{v}{(v \cdot n) \ensuremath{f}(v)} + \intBminus{v}{(v \cdot n) \intBplus{v'}{B(v, v') \ensuremath{f}(v')}}.
\end{aligned}
\end{equation}
From the last line, we see that this is the case if
\begin{align*}
\intBminus{v}{(v \cdot n) B(v, v')} = -v' \cdot n
\end{align*}
holds.
To see the boundary condition for $\ensuremath{g}$ that is equivalent to \eqref{eq:bc-full}, we insert the micro-macro decomposition \eqref{eq:APsplitPD} and obtain
\begin{align*}
\ensuremath{g}(v) &= \frac{\rho}{\varepsilon} \left[ \intBplus{v'}{B(v, v') \ensuremath{E}(v')} - \ensuremath{E}(v) \right] + \intBplus{v'}{B(v, v') \ensuremath{g}(v')}.
\end{align*}
If the kernel is not compatible with the equilibrium state then in the limit when $\varepsilon$ tends to zero, $\ensuremath{g}$ becomes unbounded at the boundary and we need to solve a half-space problem to compute the boundary condition.
Here we do not want to consider boundary layers and therefore demand that \eqref{eq:bc-full} hold for the equilibrium state $\ensuremath{E}$.
Then we have the condition
\begin{align}
\label{eq:bc-micro}
\ensuremath{g}(v) &= \intBplus{v'}{B(v, v') \ensuremath{g}(v')}
\end{align}
for $\ensuremath{g}$.
The value for $\rho$ is left unconstrained.
For the kernel, we consider two options.
The `u-turn' kernel models cells turning around by 180 degrees when they encounter a wall, independently of the angle of incidence.
It is given by
\begin{align*}
B_{\text{uturn}}(v, v') &= \dirac{v}{-v'}.
\end{align*}
Because the equilibrium fulfills $\ensuremath{E}(v) = \ensuremath{E}(-v)$, the reflection equation \eqref{eq:bc-full} holds for the equilibrium.
It is easy to check the zero-mass-flux condition \eqref{eq:bc-zero-flux} for this kernel.
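This check can also be carried out numerically. The following sketch (with an assumed uniform discretization of the unit circle of directions; none of the specific values come from the text) prescribes an arbitrary outgoing distribution at a wall with outward normal $n = (1,0)$, fills the incoming values via the u-turn rule $f(v) = f(-v)$, and evaluates the flux quadrature:

```python
# Minimal sketch: verify zero net mass flux for the u-turn kernel
# B(v, v') = delta(v + v'), on a symmetric discrete velocity set.
import numpy as np

N = 8  # even, so the velocity set is closed under v -> -v
theta = 2 * np.pi * (np.arange(N) + 0.5) / N
v = np.stack([np.cos(theta), np.sin(theta)], axis=1)
n = np.array([1.0, 0.0])
vn = v @ n  # v . n for each discrete velocity

# Arbitrary outgoing distribution; incoming values set by the u-turn rule
f = np.where(vn > 0, 1.0 + 0.3 * np.sin(theta), 0.0)
opposite = (np.arange(N) + N // 2) % N  # index of -v for each v
f = np.where(vn < 0, f[opposite], f)

net_flux = np.sum(vn * f) * (2 * np.pi / N)  # quadrature of int (v.n) f dv
print(f"net mass flux: {net_flux:.2e}")
```

The flux vanishes exactly (up to rounding) because each incoming velocity cancels against its outgoing opposite.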
Another option is that after a collision with the wall, the incoming particles are in equilibrium
\begin{align*}
\ensuremath{f}(v) &= \alpha \ensuremath{E}(v) & \forall \Vminus{v}.
\end{align*}
This so-called thermal boundary condition can be achieved with the kernel
\begin{align*}
B_{\text{thermal}}(v, v') &= \frac{\alpha \ensuremath{E}(v)}{\intBplus{w}{\ensuremath{f}(w)}}.
\end{align*}
The parameter $\alpha$ is defined by
\begin{align*}
\alpha &= -\frac{\intBplus{v}{(v \cdot n) \ensuremath{f}(v)}}{\intBminus{v}{(v \cdot n) \ensuremath{E}(v)}}
\end{align*}
to fulfill the zero-mass-flux condition \eqref{eq:bc-zero-flux}.
For a symmetric equilibrium we have $\alpha = 1$ and thus the boundary condition is compatible with the equilibrium.
\begin{remark}[Specular reflection]
The specular reflection kernel
\begin{align*}
B_{\text{spec}}(v, v') &= \dirac{v}{v' - 2 (v' \cdot n) n}
\end{align*}
models hard-sphere collisions between particles and the wall.
It conserves mass but is in general not compatible with the equilibrium; compatibility requires the equilibrium to be mirror symmetric with respect to the boundary:
\begin{align*}
\ensuremath{E}(v) &= \ensuremath{E}(v - 2 (v \cdot n) n).
\end{align*}
\end{remark}
If we want to, we can additionally constrain the density
\begin{align*}
\evalat{{\rho_0}}{\partial \Omega_{\xcoord}} = {\rho_0}_b.
\end{align*}
Then, together with \eqref{eq:bc-limit-zero-flux} this implies a condition for $\nabla_x {\rho_0}$, which can always be fulfilled because $D$ is invertible.
On the particle level, this means that we get the additional condition
\begin{align*}
\evalat{\rho}{\partial \Omega_{\xcoord}} = \intV{\ensuremath{f}(v)} = {\rho_0}_b.
\end{align*}
\section{Discretization of the velocity space by a linear spectral method}
\label{sec:spectral-method}
The scheme that we derived in the previous sections is discrete in time and space.
It remains to find a suitable discretization for the velocity.
We use a linear spectral Galerkin method based on real-valued spherical harmonics, which is a slight modification of the well-known $P_N$ method \cite{brunner2005two,seibold2014starmap,garrett2013comparison}.
First we define the spherical harmonics basis for the full space $L^2(\mathbb{S}^2)$ of particle distributions $\ensuremath{f}$.
A basis for the constrained space of perturbations $\ensuremath{g}$ from \lemref{lem:lcol-properties}
\begin{align}
\label{eq:perturbed-space}
\ensuremath{g} \in V := \Nsp^\bot(\LCO[D]) := \left\{ \ensuremath{g} \in L^2_{\ensuremath{E}} , (\ensuremath{g},\ensuremath{E})_{\ensuremath{E}} = \ints{\ensuremath{g}} = 0 \right\},
\end{align}
is then obtained by removing the first element in the full basis.
We collect the $2l+1$ harmonics of exactly order $l$ in the vector $\basisPDord{l}$.
For example, there is one zeroth-order harmonic $\basisPDord{0} = \frac{1}{\sqrt{4\pi}}$, and there are three first-order harmonics $\basisPDord{1} = \frac{1}{\sqrt{4\pi}}(\sqrt{3} v_{\xi}, \sqrt{3} v_{\eta}, \sqrt{3} v_{\zeta})$.
For an exact definition of the real-valued spherical harmonics refer to \cite{seibold2014starmap}.
The $(N+1)^2$ spherical harmonics up to order $N$
\begin{align*}
\bs{m} = \left(\basisPDord{0}, \basisPDord{1}, \dots, \basisPDord{N}\right) = \left(\basisPDcomp{0}, \dots, \basisPDcomp{n} \right),
\end{align*}
span a finite-dimensional subspace of $L^2(\mathbb{S}^2)$---the space of polynomials up to order $N$.
An infinite-dimensional basis of the full space $L^2(\mathbb{S}^2)$ is given by
\begin{align*}
\bs{m}^{\infty} = \left(\basisPDord{0}, \basisPDord{1}, \dots \right).
\end{align*}
One important property of the spherical harmonics is that they are orthonormal, that is
\begin{align*}
\ints{\basisPDcomp{i} \basisPDcomp{j}} = \delta_{ij}
\end{align*}
holds for any $i,j$.
Thus all basis components except for $\basisPDcomp{0}$ fulfill the constraint $\ints{\ensuremath{g}} = 0$ in \eqref{eq:perturbed-space}:
\begin{align*}
\ints{\basisPDcomp{i}} = \sqrt{4 \pi} \ints{\basisPDcomp{i} \basisPDcomp{0}} = 0 \qquad i > 0.
\end{align*}
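The orthonormality of the low-order real spherical harmonics is easy to confirm by quadrature. The sketch below (grid resolution is an assumption, not from the text) uses the normalizations stated above, $\basisPDord{0} = \frac{1}{\sqrt{4\pi}}$ and $\basisPDord{1} = \sqrt{\frac{3}{4\pi}}\,(v_{\xi}, v_{\eta}, v_{\zeta})$, and checks the Gram matrix of the first four basis functions:

```python
# Minimal sketch: check orthonormality of the order-0 and order-1 real
# spherical harmonics by midpoint quadrature on the sphere.
import numpy as np

nt, nphi = 200, 400
th = (np.arange(nt) + 0.5) * np.pi / nt           # polar angle
ph = (np.arange(nphi) + 0.5) * 2 * np.pi / nphi   # azimuth
TH, PH = np.meshgrid(th, ph, indexing="ij")
w = np.sin(TH) * (np.pi / nt) * (2 * np.pi / nphi)  # surface element

c0 = 1 / np.sqrt(4 * np.pi)
c1 = np.sqrt(3 / (4 * np.pi))
basis = [
    c0 * np.ones_like(TH),
    c1 * np.sin(TH) * np.cos(PH),   # ~ v_xi
    c1 * np.sin(TH) * np.sin(PH),   # ~ v_eta
    c1 * np.cos(TH),                # ~ v_zeta
]
gram = np.array([[np.sum(a * b * w) for b in basis] for a in basis])
print(np.round(gram, 6))  # close to the 4x4 identity
```

In particular the rows $i > 0$ of the first column vanish, which is the constraint $\ints{\basisPDcomp{i}} = 0$ used above.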
We obtain bases for the constrained space $V$, and corresponding finite-dimensional subspaces by omitting the function $\basisPDcomp{0}$:
\begin{align*}
\bs{a}^{\infty} &= \left(\basisPPord{1}, \basisPPord{2}, \dots\right) := \left(\basisPDord{1}, \basisPDord{2}, \dots\right), \\
\bs{a} &= \left(\basisPPord{1}, \dots, \basisPPord{N} \right).
\end{align*}
A perturbation $\ensuremath{g}$ has a unique basis representation
\begin{align*}
\ensuremath{g}(v) &= \bs{u}^{\infty} \cdot \bs{a}^{\infty}(v),
\end{align*}
wherein the coefficients $\momPPcomp{i}^{\infty}$ are equal to the moments
\begin{align*}
\ints{\ensuremath{g} \basisPPcomp{i}^{\infty}} = \ints{\sum_{j} \momPPcomp{j}^{\infty} \basisPPcomp{j}^{\infty} \basisPPcomp{i}^{\infty} } = \momPPcomp{i}^{\infty},
\end{align*}
by orthonormality.
The orthogonal projection of $\ensuremath{g}$ onto the finite-dimensional subspace $V_h$ is
\begin{align*}
\mathfrak{g}(v) &= \bs{u} \cdot \bs{a}(v),
\end{align*}
with moments
\begin{align*}
\momPPcomp{i} = \ints{\mathfrak{g} \basisPPcomp{i}} = \begin{cases}
\momPPcomp{i}^{\infty} = \ints{\ensuremath{g} \basisPPcomp{i}} & i < n, \\
0 & i \geq n.
\end{cases}
\end{align*}
The discrete-in-velocity approximation of problem \eqref{eq:continuous-system} is to find $(\rho, \mathfrak{g})$ that solve
\begin{equation}
\label{eq:velocity-discrete-system}
\begin{alignedat}{2}
\de_t \rho &= \Phi^{\PM}(\rho, \mathfrak{g}) &+& \Gamma^{\PM}(\rho,\mathfrak{g}), \\
\de_t \ints{\mathfrak{g} \bs{a}} = \de_t \bs{u} &= \ints{{\Phi^{\PP}_{\FD}}(\rho) \bs{a}} + \ints{\Phi^{\PP}(\rho, \mathfrak{g}) \bs{a}} &+& \ints{\Gamma^{\PP}(\rho,\mathfrak{g}) \bs{a}}.
\end{alignedat}
\end{equation}
This is a set of $n+1 = (N+1)^2$ equations for the $n+1$ unknowns $(\rho, \bs{u})$.
The individual terms therein are
\begin{align*}
\Phi^{\PM}(\rho,\mathfrak{g}) &= -\delta \ensuremath{\nabla_x \cdot} \ints{v \mathfrak{g}} + \theta \mu \rho,\\
\ints{{\Phi^{\PP}_{\FD}}(\rho) \bs{a}} &= -\frac{\delta}{\varepsilon^2} \ensuremath{\nabla_x \cdot} \left(\rho \ints{v\ensuremath{E}\bs{a}}\right), \\
\ints{\Phi^{\PP}(\rho, \mathfrak{g}) \bs{a}} &= -\frac{\delta}{\varepsilon} \left[\ensuremath{\nabla_x \cdot} \ints{v \mathfrak{g} \bs{a}} - \ensuremath{\nabla_x \cdot} \ints{v \mathfrak{g}} \ints{\ensuremath{E} \bs{a}} \right] + \frac{\delta \nu \factorcol[a]}{\varepsilon^2} \ints{\LCO[a] (\rho \ensuremath{E} + \varepsilon \mathfrak{g}) \bs{a}} + \frac{\theta \mu}{\varepsilon} \ints{ (\Identity - \Nspproj) \mathcal{S} (\rho \ensuremath{E} + \varepsilon \mathfrak{g}) \bs{a}},
\end{align*}
and
\begin{align*}
\Gamma^{\PM}(\rho,\mathfrak{g}) &= 0, \\
\ints{ \Gamma^{\PP}(\rho, \mathfrak{g}) \bs{a}} &= \frac{\delta \factorcol[D]}{\varepsilon^2} \ints{ \LCO[D](\mathfrak{g}) \bs{a}}.
\end{align*}
The equations are coupled through the flux moments $\ints{v \mathfrak{g}} \in \mathbb{R}^{{S}}$, $\ints{v \mathfrak{g} \bs{a}} \in \mathbb{R}^{n \times {S}}$ and moments of the collision term and source on the right hand side.
The macro equation is coupled with the micro equations through the moments
\begin{align*}
\ints{v \mathfrak{g}} = \frac{\sqrt{4\pi}}{\sqrt{3}}\ints{\basisPPord{1} \mathfrak{g}} = \frac{\sqrt{4\pi}}{\sqrt{3}} \momPPord{1}.
\end{align*}
In general, $i$-th order flux moments $\ints{v \mathfrak{g} \basisPPord{i}}$ can be written as a combination of the moments $\ints{\mathfrak{g} \bs{a}^{(i+1)}} = \bs{u}^{(i+1)}$ of order $i+1$.
Usually this relation is written in matrix form.
For instance for the $\xi$-component of the velocity, we write
\begin{align*}
\ints{v_{\xi} \mathfrak{g} \bs{a}} &= M_{\xi} \bs{u} \\
&:= \ints{v_{\xi} \bs{a} \bs{a}\trans} \bs{u}.
\end{align*}
For details on how to compute these matrices for the full basis $\bs{m}$, see for example \cite{seibold2014starmap}.
Due to orthogonality of the basis, we can simply remove the first row and column of the matrix $\ints{v_{\xi} \bs{m} \bs{m}\trans}$ to get the matrices for the restricted basis $\bs{a}$.
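This construction can be sketched numerically: compute the full matrix $\ints{v_{\xi} \bs{m} \bs{m}\trans}$ by quadrature and drop the first row and column. The code below (quadrature resolution and the $N=1$ truncation are assumptions for illustration) does this for the order-$\leq 1$ basis:

```python
# Minimal sketch: assemble the flux matrix <v_xi m m^T> by quadrature,
# then drop the first row/column to obtain M_xi for the restricted basis.
import numpy as np

nt, nphi = 200, 400
th = (np.arange(nt) + 0.5) * np.pi / nt
ph = (np.arange(nphi) + 0.5) * 2 * np.pi / nphi
TH, PH = np.meshgrid(th, ph, indexing="ij")
w = np.sin(TH) * (np.pi / nt) * (2 * np.pi / nphi)

vx = np.sin(TH) * np.cos(PH)  # v_xi on the sphere
c0, c1 = 1 / np.sqrt(4 * np.pi), np.sqrt(3 / (4 * np.pi))
m = np.stack([c0 * np.ones_like(TH), c1 * vx,
              c1 * np.sin(TH) * np.sin(PH), c1 * np.cos(TH)])

M_full = np.einsum("itp,jtp,tp->ij", m, m, vx * w)  # <v_xi m m^T>
M_xi = M_full[1:, 1:]  # restricted basis a: drop the zeroth harmonic
print(np.round(M_full, 4))
```

For this $N=1$ truncation the restricted block $M_{\xi}$ vanishes; the only nonzero entries of the full matrix sit in the dropped first row and column (here $\ints{v_{\xi} \basisPDcomp{0} \basisPDcomp{1}} = 1/\sqrt{3}$), consistent with the flux coupling between the macro and micro equations.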
Because the turning operators are linear, we can also write their contribution to the moment system in matrix form:
\begin{align*}
\ints{\LCO[D](\mathfrak{g}) \bs{a}} &= C_{D} \bs{u}, \\
\ints{\LCO[a](\mathfrak{g}) \bs{a}} &= C_{a} \bs{u}.
\end{align*}
\begin{remark}[Turning operators in the glioma equation]
From equation \eqref{eq:glioma-lco-inverse} we have
\begin{align*}
\LCO[D](\mathfrak{g}) = -\mathfrak{g},
\end{align*}
thus
\begin{align*}
\ints{\LCO[D](\mathfrak{g}) \bs{a}} = -\ints{\mathfrak{g} \bs{a}} = -\bs{u},
\end{align*}
and $C_{D} = -I$.
The turning perturbation is given by
\begin{align*}
\LCO[a](\mathfrak{g}) &= \hat\lambda_H \nabla_x \ensuremath{Q} \cdot (\ensuremath{E} \ints{v \mathfrak{g}} - v \mathfrak{g}).
\end{align*}
Its moments are
\begin{align*}
\ints{\LCO[a](\mathfrak{g}) \bs{a}} &= \hat\lambda_H \nabla_x\ensuremath{Q} \cdot \left(\ints{\ensuremath{E} \bs{a}} \ints{v \mathfrak{g}} - \ints{v \mathfrak{g} \bs{a}} \right).
\end{align*}
The dot product is between components of the gradient $\nabla_x\ensuremath{Q}$ and components of the velocity $v$.
The moments appearing in this expression have been calculated before.
With some abuse of vector notation, we have
\begin{align*}
\ints{\LCO[a](\mathfrak{g}) \bs{a}} = \hat\lambda_H \nabla_x\ensuremath{Q} \cdot \left(\ints{\ensuremath{E} \bs{a}} \frac{\sqrt{4\pi}}{\sqrt{3}} \momPPord{1} - (M_{\xi} \bs{u}, M_{\eta} \bs{u}, M_{\zeta} \bs{u}) \right).
\end{align*}
Because the source is just the identity $\mathcal{S} \ensuremath{f} = \ensuremath{f}$, the source moments can be simplified to
\begin{align*}
\ints{ (\Identity - \Nspproj) \mathcal{S} (\rho \ensuremath{E} + \varepsilon \mathfrak{g}) \bs{a}} = \varepsilon \bs{u}.
\end{align*}
\end{remark}
\begin{remark}
Equation \eqref{eq:velocity-discrete-system} is equivalent to the moment system
\begin{align*}
\de_t \bs{w} &= -\frac{\delta}{\varepsilon} \ensuremath{\nabla_x \cdot} \ints{v \mathfrak{f} \bs{m}} + \frac{\delta}{\varepsilon^2}\factorcol[D] \ints{\LCO[D] \mathfrak{f} \bs{m}} + \frac{\delta \nu}{\varepsilon} \factorcol[a] \ints{\LCO[a] \mathfrak{f} \bs{m}} + \theta \mu \ints{\mathcal{S} \mathfrak{f} \bs{m}}
\end{align*}
for the original equation \eqref{eq:lke-scaled} with the approximation $\mathfrak{f}$ and moments $\bs{w}$ of the particle distribution $\ensuremath{f}$ given by
\begin{align*}
\mathfrak{f} &= \bs{w} \cdot \bs{m} = \rho \ensuremath{E} + \varepsilon \mathfrak{g} , \\
\momPDcomp{i} &= \ints{\mathfrak{f} \basisPDcomp{i}} = \begin{cases}
\frac{1}{\sqrt{4\pi}} \rho & i = 0\\
\rho \ints{\ensuremath{E} \basisPPcomp{i}} + \varepsilon \momPPcomp{i} & i > 0
\end{cases}
\end{align*}
\end{remark}
The space and time discretization can be carried over to the moment system without change.
\section{Results}
\label{sec:results}
Whenever we know the analytical solution to a problem, we use it to numerically evaluate the convergence of our code with respect to grid refinement.
One such convergence test consists of $L+1$ runs with identical parameters but increasing grid refinement, starting with $M_{0}$ grid points per space direction and increasing by a constant factor $r$ in each step.
In run $l$, the number of grid points per dimension is then
\begin{align*}
M_l &= \lfloor M_0 r^l \rfloor \qquad l = 0,\dots, L,
\end{align*}
and the size of each grid cell
\begin{align*}
\Delta x_l = \frac{1}{M_l} = \frac{1}{\lfloor M_0 r^l \rfloor } \qquad l = 0,\dots,L.
\end{align*}
The error $e_l$ in each run is defined as the $L^2$-difference between the computed density $\rho_l$ and the exact solution $\rho_{ex}$, evaluated at the final time $T$
\begin{align*}
e_l = \Vert \rho_l(T, x) - \rho_{ex}(T, x) \Vert_{2} = \left( \int_{\Omega_{\xcoord}} (\rho_l - \rho_{ex})^2 dx \right)^{\frac{1}{2}}.
\end{align*}
The integral is computed by a quadrature of appropriate order.
Convergence rates between successive refinement steps are computed with the formula
\begin{align*}
\frac{\log(e_l) - \log(e_{l+1})}{\log(\Delta x_l) - \log(\Delta x_{l+1})}.
\end{align*}
In the presentation and discussion of results, we will make use of the pointwise relative difference
\begin{align*}
\reldiff{f}{g}(x) = \frac{1}{\underset{x \in \Omega_{\xcoord}}{\max}\abs{g}} (f(x) - g(x))
\end{align*} between two functions $f(x), g(x)$.
In error plots, a signed truncated logarithmic scale
\begin{align*}
\text{sign}(f) \big(\log(\max(\abs{f}, f_L)) - \log(f_L)\big)
\end{align*}
is useful to show a wide range of absolute values as well as their signs; here $f_L$ denotes the truncation threshold below which values are shown as zero.
All computations are performed on the glioma model from \secref{sec:glioma-model} with the peanut distribution \eqref{eq:peanut}.
When not otherwise mentioned, we use the minimally implicit scheme with the stencil improvements from \secref{sec:spmc} and \secref{sec:upwind-drift}.
For the computations in \secref{sec:diffusion-limit-convergence} and \secref{sec:discontinuities} we need to prescribe the macroscopic diffusion tensor $D_T$.
We achieve this by constructing artificial values for the water diffusion tensor
\begin{align*}
D_W = \frac{1}{2} \left(5 D_T - I \right),
\end{align*}
according to the inverse of \eqref{eq:DT-glioma}.
Whenever we prescribe the macroscopic drift $a_{T}$, we define the volume fraction $\ensuremath{Q}$ according to the inverse of \eqref{eq:drift-glioma}:
\begin{align*}
\nabla_x \ensuremath{Q} &= \frac{1}{\hat \lambda_H} a_{T}\trans D_T^{-1}.
\end{align*}
When the physical values of $D_T, a_{T}$ are given together with $X$ we can compute corresponding parameters $c, \lambda_0, \lambda_1$ for the microscopic glioma equation using the scaling relations in \secref{sec:parabolic-scaling}:
\begin{align*}
c = \frac{D_0}{X \varepsilon}, \quad \lambda_0 = \frac{D_0}{X^2 \varepsilon^2}, \quad \lambda_1 = \frac{a_0}{X \varepsilon^2}.
\end{align*}
\subsection{Fundamental solution of the limit equation}
\label{sec:diffusion-limit-convergence}
When the diffusion tensor $D$ and drift $a$ are constant and the growth factor $\theta$ is zero, the limiting advection-diffusion equation in physical coordinates \eqref{eq:diffusion-limit-physical} has the fundamental solution
\begin{align}
\label{eq:fundamental-solution}
{\rho_0}_{,f} &= \left((4\pi)^{{S}} \det D\right)^{-\ensuremath{\frac{1}{2}}} t^{-\frac{{S}}{2}} \exp\left(-\frac{1}{4t} (x - a t)\trans D^{-1} (x - a t)\right).
\end{align}
Our scheme should reproduce this solution when $\varepsilon$ is small.
For the test we choose
\begin{align*}
D_T &= D_0 \frac{1}{4.5} R \begin{pmatrix}
2.5 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1
\end{pmatrix} R\trans, \\
a_{T} &= a_0 \frac{1}{\sqrt{10}} \begin{pmatrix}
3 \\ 1 \\ 0
\end{pmatrix}.
\end{align*}
Herein the matrix $R$ rotates $e_1$ onto the main diffusion direction $(-1, 2, 0)\trans$.
We choose a characteristic diffusion speed $D_0 = \frac{1}{100}$.
We perform two tests, one without drift, i.e., $a_0 = 0$, and one with drift speed $a_0 = 0.1$.
To smoothen the initial Dirac-delta distribution, we choose the initial condition $\rho(0, x) = {\rho_0}_{,f}(t_O, x)$ with the time offset $t_O = 0.2$.
Then the solution at time $t$ is given by ${\rho_0}_{,f}(t + t_O, x)$.
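As a consistency check, the fundamental solution conserves unit mass for all times. The sketch below evaluates \eqref{eq:fundamental-solution} in the two-dimensional case ${S}=2$ and integrates it on a grid; the tensor $D$ and drift $a$ are illustrative values, not the ones used in the test above:

```python
# Minimal sketch (S = 2): evaluate the fundamental solution of the
# advection-diffusion equation and check that its total mass stays 1.
import numpy as np

D = np.array([[2.0, 0.5], [0.5, 1.0]]) * 1e-2  # assumed diffusion tensor
a = np.array([0.05, 0.02])                      # assumed drift
Dinv = np.linalg.inv(D)

def rho_f(t, X, Y):
    dx, dy = X - a[0] * t, Y - a[1] * t  # shift by the drift a*t
    quad = Dinv[0, 0] * dx**2 + 2 * Dinv[0, 1] * dx * dy + Dinv[1, 1] * dy**2
    norm = ((4 * np.pi) ** 2 * np.linalg.det(D)) ** -0.5
    return norm * t ** -1.0 * np.exp(-quad / (4 * t))  # t^(-S/2), S = 2

h = 4.0 / 800
x = np.linspace(-2, 2, 800, endpoint=False) + h / 2
X, Y = np.meshgrid(x, x, indexing="ij")
for t in (0.2, 0.5, 1.0):
    mass = np.sum(rho_f(t, X, Y)) * h**2
    print(f"t = {t}: mass = {mass:.6f}")
```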
First we test convergence of the first and second order schemes with respect to grid refinement, starting at a $40\times40$ grid and refining by factor $1.5$ five times.
The analytical solution is of course only valid in the diffusion limit, therefore we choose $\varepsilon = 10^{-5}$.
The $L^2$ error over the number of grid points is plotted in \figref{fig:fundamental-convergence-grid}.
Without the drift term, both schemes converge with second order accuracy to the analytic solution, as is to be expected for a discretization of the pure diffusion equation.
With the drift, the order of both schemes is reduced to about $0.9$ and absolute errors are also much greater.
We are also interested in convergence as $\varepsilon$ tends to zero.
From the grid refinement study, we see that at about $200 \times 200$ grid points, the error is roughly $\scinum{2}{-5}$ without drift and $\scinum{4}{-4}$ with the drift term.
As $\varepsilon$ approaches zero, we expect the total error to be dominated by this discretization error.
In \figref{fig:fundamental-convergence-epsilon}, the $L^2$ error of the first order scheme at $200 \times 200$ grid points is plotted, over values of $\varepsilon$ from one to $\scinum{1}{-9}$.
We observe that the error levels out at the expected discretization error below a threshold value of $\varepsilon$---roughly $\scinum{1}{-4}$ without drift and $\scinum{1}{-3}$ with drift.
Note that for certain intermediate values of $\varepsilon$, the error reaches a local minimum slightly below the limit discretization error because kinetic effects cancel out some of the numerical diffusion of the scheme.
Numerical solutions in the kinetic to intermediate regime ($\varepsilon \in [0.1, 0.01]$) are shown in \figref{fig:fundamental-limit-kinetic}.
In the kinetic regime, the problem is similar to the linesource problem \cite{garrett2013comparison}, but with anisotropic scattering.
Indeed the $P_1$ solutions feature a single ellipsoidal wave, which travels at speed $\frac{1}{\sqrt{3}}c$ along the main diffusion direction and is biased towards the drift direction.
With decreasing $\varepsilon$ the diffusion dominates and the wave maximum is smeared out into a Gaussian.
Below $\varepsilon \approx \scinum{1}{-2}$ the solutions are too similar for direct visual comparisons.
Therefore, in \figref{fig:fundamental-limit-diffusive} we show relative differences on a signed logarithmic scale instead.
\figref{fig:fundamental-limit-diffusive-a} to \figref{fig:fundamental-limit-diffusive-e} show relative differences between the numerical solution and the fundamental solution to the diffusion equation \eqref{eq:fundamental-solution}.
Although not visible in a plot of the solution itself, at $\varepsilon = \scinum{1}{-2}$ the solution still exhibits some small kinetic effects of relative magnitude $\scinum{1}{-2}$ (see \figref{fig:fundamental-limit-diffusive-a}).
In \figref{fig:fundamental-limit-diffusive-f} the relative difference between the numerical solutions at $\varepsilon = \scinum{1}{-3}$ and $\varepsilon = \scinum{1}{-9}$ is plotted.
We see that already at $\varepsilon = \scinum{1}{-3}$ the discretization error dominates the kinetic effects.
From \figref{fig:fundamental-limit-diffusive-b} we see how the kinetic effects cancel some of the numerical diffusion.
The numerical diffusion from the drift discretization becomes apparent from \figref{fig:fundamental-limit-diffusive-e}: Looking in drift direction the solution at $\varepsilon = \scinum{1}{-9}$ overestimates the fundamental solution before and after the peak and underestimates at the peak.
With the fundamental solution we can also quantify the numerical diffusion of the scheme.
We fit a multivariate Gaussian to the numerical result and view the corresponding estimated diffusion tensor as the sum of the exact diffusion tensor and a contribution from the numerical scheme.
In \figref{fig:fundamental-numerical-diffusion}, the two eigenvalues and the main direction of this estimated numerical diffusion are plotted.
We observe that numerical diffusion converges at the same rate as the $L^2$ error.
When the drift term is active, it dominates the overall numerical diffusion by two orders of magnitude and the main axis of the numerical diffusion is parallel to the drift direction.
Without the drift, we observe an interesting difference between the \mischeme[]{1} scheme and the \mischeme[]{2} scheme.
For the \mischeme[]{2} scheme, both eigenvalues are positive and their ratio is close to the anisotropy factor $2.5$.
Additionally, the main axes of physical and numerical diffusion are aligned.
Thus, the numerical diffusion is proportional to the physical diffusion.
In the \mischeme[]{1} scheme, the eigenvalue ratio and the main axis are the same.
However, both eigenvalues are negative, which indicates that the leading numerical error is dispersive rather than diffusive.
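The moment-based estimation described above can be sketched as follows: a Gaussian with covariance $2tD$ is the exact solution, so fitting the second moments of the computed density and dividing by $2t$ recovers an effective diffusion tensor, whose difference from $D$ is attributed to the scheme. Here the "numerical" density is the exact Gaussian broadened by a hypothetical isotropic numerical diffusion `eps_num`; all values are illustrative assumptions:

```python
# Minimal sketch: estimate numerical diffusion from second moments of a
# Gaussian density (covariance of the exact solution is 2 t D).
import numpy as np

D = np.array([[2.5, 0.0], [0.0, 1.0]]) / 450   # assumed diffusion tensor
t, eps_num = 1.0, 1e-4                          # hypothetical numerical diffusion

h = 2.0 / 600
x = np.linspace(-1, 1, 600, endpoint=False) + h / 2
X, Y = np.meshgrid(x, x, indexing="ij")
cov = 2 * t * (D + eps_num * np.eye(2))         # broadened covariance
P = np.linalg.inv(cov)
rho = np.exp(-0.5 * (P[0, 0]*X**2 + 2*P[0, 1]*X*Y + P[1, 1]*Y**2))
rho /= np.sum(rho) * h**2                       # normalize to unit mass

mx, my = np.sum(X * rho) * h**2, np.sum(Y * rho) * h**2
C = np.array([[np.sum((X-mx)**2 * rho), np.sum((X-mx)*(Y-my) * rho)],
              [np.sum((X-mx)*(Y-my) * rho), np.sum((Y-my)**2 * rho)]]) * h**2
D_num = C / (2 * t) - D  # estimated numerical diffusion tensor
print(np.round(D_num, 6))  # eigenvalues ~ eps_num
```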
\begin{figure}[h]
\deffigures/Brain/{figures/DiffusionLimit/}
\centering
\withfiguresize{\figuretwocol}{\figuretwocol}{\externaltikz{diffusion_limit_convergence}{\input{figures/Brain/ diffusion_limit_convergence}}}
\settikzlabel{fig:fundamental-convergence-grid}
\settikzlabel{fig:fundamental-convergence-epsilon}
\caption{Convergence study for the fundamental solution test from \secref{sec:diffusion-limit-convergence}. \ref{fig:fundamental-convergence-grid}: $L^2$ errors over number of gridpoints on each axis. Shown are the errors for both \mischeme[]{1}, and \mischeme[]{2}, each without drift $a_{T} = 0$ and with some drift $a_{T} = 0.1$. \ref{fig:fundamental-convergence-epsilon}: $L^2$ errors over parabolic scaling parameter $\varepsilon$, for a fixed grid with $200 \times 200$ cells. Errors for the \mischeme[]{1} are shown both without and with drift.}
\label{fig:fundamental-convergence}
\end{figure}
\begin{figure}[h]
\deffigures/Brain/{figures/DiffusionLimit/}
\centering
\withfiguresize{\figurethreecol}{\figurethreecol}{\externaltikz{diffusion_limit_kinetic}{\input{figures/Brain/ diffusion_limit_kinetic}}}
\settikzlabel{fig:fundamental-limit-kinetic-a}
\settikzlabel{fig:fundamental-limit-kinetic-b}
\settikzlabel{fig:fundamental-limit-kinetic-c}
\settikzlabel{fig:fundamental-limit-kinetic-d}
\settikzlabel{fig:fundamental-limit-kinetic-e}
\settikzlabel{fig:fundamental-limit-kinetic-f}
\caption{The numerical solution to the fundamental solution test of \secref{sec:diffusion-limit-convergence}, using the \mischeme{1}-$P_1$ scheme. The density $\rho$ is depicted for solutions with various values of $\varepsilon$, ranging from the kinetic regime $\varepsilon = 0.1$ in \ref{fig:fundamental-limit-kinetic-a} to the intermediate regime in \ref{fig:fundamental-limit-kinetic-f} with $\varepsilon = 10^{-2}$.}
\label{fig:fundamental-limit-kinetic}
\end{figure}
\begin{figure}[h]
\deffigures/Brain/{figures/DiffusionLimit/}
\centering
\withfiguresize{\figurethreecol}{\figurethreecol}{\externaltikz{diffusion_limit_diffusive}{\input{figures/Brain/ diffusion_limit_diffusive}}}
\settikzlabel{fig:fundamental-limit-diffusive-a}
\settikzlabel{fig:fundamental-limit-diffusive-b}
\settikzlabel{fig:fundamental-limit-diffusive-c}
\settikzlabel{fig:fundamental-limit-diffusive-d}
\settikzlabel{fig:fundamental-limit-diffusive-e}
\settikzlabel{fig:fundamental-limit-diffusive-f}
\caption{The numerical solution to the fundamental solution test of \secref{sec:diffusion-limit-convergence}, using the \mischeme{1}-$P_1$ scheme. Each plot shows the relative difference in density $\rho$ between two solutions on a signed truncated logarithmic scale. \figref{fig:fundamental-limit-diffusive-a} - \ref{fig:fundamental-limit-diffusive-e} show the relative difference between the numerical solution at various $\varepsilon$, and the exact solution \eqref{eq:fundamental-solution}. \figref{fig:fundamental-limit-diffusive-f} shows the difference between the numerical solutions at $\varepsilon = 10^{-3}$ and $\varepsilon = 10^{-9}$.}
\label{fig:fundamental-limit-diffusive}
\end{figure}
\begin{figure}[h]
\deffigures/Brain/{figures/DiffusionLimit/}
\centering
\withfiguresize{\figurethreecol}{\figurethreecol}{\externaltikz{numerical_diffusion}{\input{figures/Brain/ numerical_diffusion}}}
\settikzlabel{fig:fundamental-numerical-diffusion-a}
\settikzlabel{fig:fundamental-numerical-diffusion-b}
\settikzlabel{fig:fundamental-numerical-diffusion-c}
\caption{Estimates of the numerical diffusion of the \mischeme[]{1} and \mischeme[]{2} schemes using the fundamental solution. Shown are the larger eigenvalue of the numerical diffusion tensor in \ref{fig:fundamental-numerical-diffusion-a}, the smaller eigenvalue in \ref{fig:fundamental-numerical-diffusion-b} and the direction of the main eigenvector in \ref{fig:fundamental-numerical-diffusion-c} for each scheme without and with the drift term.}
\label{fig:fundamental-numerical-diffusion}
\end{figure}
\subsection{Convergence analysis with manufactured solutions}
\label{sec:ms-convergence}
Convergence tests with manufactured solutions are useful to detect errors in the scheme and bugs in its implementation.
If we achieve the expected convergence order we can be more confident that we actually solve the correct problem.
We only consider the two-dimensional setting.
On the domain
\begin{align*}
\Omega_{\tcoord\xcoord\vcoord} &= [0, \frac{1}{4}] \times [0,1]^2 \times \mathbb{S}^2
\end{align*}
we prescribe the solution
\begin{align*}
\ensuremath{f}_{ex}(t, x, v) &= \ensuremath{E}(x, v) \left( \cos(2 \pi t) (p_6(\xi) + p_6(\eta)) + 2 \right).
\end{align*}
In terms of the density and perturbation, this is expressed as
\begin{equation}
\label{eq:ms-micro-macro}
\begin{aligned}
\rho_{ex}(t, x) &= \cos(2 \pi t) (p_6(\xi) + p_6(\eta)) + 2, \\
\ensuremath{g}_{ex}(t, x, v) &= 0.
\end{aligned}
\end{equation}
The analytic solution at final time is simply $\evalat{\rho_{ex}}{t = \frac{1}{4}} \equiv 2$, $\evalat{\ensuremath{g}_{ex}}{t = \frac{1}{4}} \equiv 0$.
We choose a solution with zero micro part, because this makes the expression for the source easier.
Nevertheless, due to the coupling of the micro and macro parts, errors in the $\ensuremath{g}$ equation can still be detected with this method.
The sixth-order polynomial
\begin{align*}
p_6(\xi) &= 32 \left( -\xi^6 + 3 \xi^5 - 3 \xi^4 + \xi^3 \right)
\end{align*}
is carefully chosen such that its value and its first and second derivatives are zero at the boundary:
\begin{align*}
0 = p_6(0) = p_6(1) = p_6'(0) = p_6'(1) = p_6''(0) = p_6''(1).
\end{align*}
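These six boundary conditions are quickly verified by evaluating $p_6$ and its derivatives symbolically differentiated by hand:

```python
# Minimal sketch: check that p6 and its first two derivatives vanish
# at both endpoints of [0, 1].
p6   = lambda x: 32 * (-x**6 + 3*x**5 - 3*x**4 + x**3)
dp6  = lambda x: 32 * (-6*x**5 + 15*x**4 - 12*x**3 + 3*x**2)
d2p6 = lambda x: 32 * (-30*x**4 + 60*x**3 - 36*x**2 + 6*x)

for x in (0.0, 1.0):
    print(x, p6(x), dp6(x), d2p6(x))  # all zero
```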
We add artificial source terms $\hat S_{\rho}, \hat S_{\ensuremath{g}}$ to the right hand side of \eqref{eq:APrho}, \eqref{eq:APremainder} and insert the solution \eqref{eq:ms-micro-macro} to obtain
\begin{align*}
\hat S_{\rho} &= \de_t \rho_{ex} = -2 \pi \sin(2 \pi t) (p_6(\xi) + p_6(\eta)) \\
\hat S_{\ensuremath{g}} &= \frac{\delta}{\varepsilon^2} \ensuremath{\nabla_x \cdot} (v \rho_{ex} \ensuremath{E}) -\frac{\delta}{\varepsilon^2} \rho_{ex} \lambda_H \nabla_x \ensuremath{Q} \cdot (v \ensuremath{E})
\end{align*}
that will produce the desired solution.
To see the correct order, we need of course a smoothly varying fiber distribution.
Here we use a distribution with increasing anisotropy along the $\xi$-axis:
\begin{align*}
D_W(x) &= \begin{pmatrix}
1 + \xi & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{pmatrix}
\end{align*}
In each convergence test, we refine the grid five times, starting at $20$ grid points and increasing by a factor of 1.5 in each step.
\begin{figure}[h]
\deffigures/Brain/{figures/convergence/}
\centering
\withfiguresize{\figuretwocol}{\figuretwocol}{\externaltikz{convergence_all}{\input{figures/Brain/ convergence_all}}}
\settikzlabel{fig:ms-convergence-sto11}
\settikzlabel{fig:ms-convergence-sto22}
\settikzlabel{fig:ms-convergence-sto11-drift}
\settikzlabel{fig:ms-convergence-sto22-drift}
\caption{$L^2$ errors with respect to the manufactured solution at the final time, for various values of $\varepsilon$.}
\label{fig:ms-convergence}
\end{figure}
We set $\delta = 0.1$ and ignore natural growth, i.e., set $\theta = 0$.
Convergence tests were run with the first and the second order code, each with advection $\nu = 10$ and without advection $\nu = 0$.
Each of these tests was repeated for different values of the scaling parameter $\varepsilon$ ranging from one to $10^{-5}$.
The results are plotted in \figref{fig:ms-convergence}.
Without the drift ($\nu = 0$), the first order code (see \figref{fig:ms-convergence-sto11}) shows the expected first order of convergence in the kinetic regime $\varepsilon = 1$ and second order of convergence in the diffusive regime $\varepsilon = 10^{-5}$.
In the transition between the regimes, the convergence order increases from one to two.
As expected, this increase in order is lost when the drift term is active (see \figref{fig:ms-convergence-sto11-drift}) and the convergence order is one for all considered values of $\varepsilon$.
We observe second order convergence for the second order code without drift, independently of the flow regime (see \figref{fig:ms-convergence-sto22}).
However, the presence of the drift term reduces the order to one (see \figref{fig:ms-convergence-sto22-drift}).
This is due to the first order approximation of the drift term.
The second order code still produces smaller absolute errors than the first order code.
Interestingly, absolute errors for the second order code are much smaller with $\varepsilon = 1$ compared to all other values of $\varepsilon$.
\subsection{Strong discontinuities in the diffusion coefficients}
\label{sec:discontinuities}
The coefficients in the glioma model from \secref{sec:glioma-model} are estimated from DTI measurements of the brain, which give a water diffusion tensor $D_W$ per voxel.
Voxels typically have a length of a few millimeters.
On each voxel, the tensor is assumed constant and as such the resulting coefficients jump across the voxel boundaries.
Apart from these artifacts, there are genuine jumps in the data when the underlying tissue orientation changes rapidly.
Thus we are interested in the behavior of our scheme in the presence of discontinuous coefficients, especially if $\varepsilon$ is small.
In the context of flow through porous media, a number of benchmarks with strong jumps in the diffusion coefficient have been developed \cite{eigestad2005convergence, rivie2000part}.
We adapt two benchmarks with an analytical solution for our scheme.
The first is a special case of a benchmark by Eigestad and Klausen \cite{eigestad2005convergence} with discontinuities in permeability at quadrant boundaries, which we call the isotropic quadrants test.
The domain is divided into four quadrants of which each is assigned a constant and isotropic permeability.
The other test is similar to the 'piecewise uniform flow' in \cite{eigestad2005convergence}.
It features two domains of constant diffusion tensor with a single discontinuity.
But here we align the discontinuity with the $x_2$-axis and choose constant anisotropic diffusion tensors whose main axes meet at an angle at the interface.
Note that the benchmarks are designed for the stationary porous media equation
\begin{align*}
\ensuremath{\nabla_x \cdot} (D \nabla_x {\rho_0} ) &= 0.
\end{align*}
Our code is neither stationary nor does it solve the porous media equation.
If growth and drift are neglected, the code should approximately solve
\begin{align}
\label{eq:diffusion-benchmark-eq}
\de_t {\rho_0} - \delta \ensuremath{\nabla_x \cdot} (\ensuremath{\nabla_x \cdot} (D {\rho_0})) &= 0
\end{align}
for small $\varepsilon$.
However, we can run the simulations for a sufficiently long time $T^*$ until a steady state is reached, and choose a very small $\varepsilon$, e.g., $10^{-5}$.
In the steady state, the choice of $\delta$ does not play a role.
Effectively, this is a very inefficient iterative solver of the stationary equation.
As a convergence criterion we use the relative $L^2$-difference between successive time steps, i.e., we abort the simulation if
\begin{align*}
\frac{\Vert \rho(t_{i-1}) - \rho(t_i)\Vert_2}{\Vert \rho(t_i) \Vert_2 \Delta t_i} &< \mathrm{tol}.
\end{align*}
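This stopping rule can be sketched on a toy fixed-point iteration; the update step, the target profile, and the tolerance below are placeholders, not the actual scheme:

```python
# Minimal sketch: steady-state stopping criterion
# ||rho_{i-1} - rho_i|| / (||rho_i|| * dt) < tol, on a toy relaxation.
import numpy as np

x = np.linspace(0, 1, 101)
target = np.sin(np.pi * x)              # hypothetical steady state
rho, dt, tol = np.zeros_like(x), 0.1, 1e-8

for i in range(100000):
    rho_new = rho + dt * (target - rho)  # toy update toward the target
    diff = np.linalg.norm(rho - rho_new) / (np.linalg.norm(rho_new) * dt)
    rho = rho_new
    if diff < tol:
        break
print(f"converged after {i + 1} steps, residual {diff:.2e}")
```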
In the benchmarks, we prescribe Dirichlet boundary conditions for $\rho$ according to the exact solution and Maxwellian boundary conditions \eqref{eq:bc-micro} for the micro equation $\ensuremath{g}$.
\subsubsection{Quadrants with jump in permeability}
\label{sec:quadrants}
First, we switch to polar coordinates
\begin{align*}
\begin{pmatrix}
\xi \\ \eta
\end{pmatrix}
&=
r \begin{pmatrix}
\cos(\theta) \\
\sin(\theta)
\end{pmatrix}.
\end{align*}
The $i$-th quadrant is then $\quadrant{i} = \left\{ (r, \theta) \in [0, \infty) \times \left[\frac{i \pi}{2}, \frac{(i+1) \pi}{2}\right) \right\}$, for $i = 0,\dots,3$.
On each quadrant, we have a constant isotropic diffusion tensor $D_i = \kappa_i I$.
\begin{figure}
\centering
\deffigures/Brain/{figures/DiffusionBenchmark/}
\withfiguresize{\figuretwocollegend}{\figuretwocollegend}{\externaltikz{quadrants_solution}{\input{figures/Brain/ quadrants_solution.tex}}}
\settikzlabel{fig:quadrants-solution}
\settikzlabel{fig:quadrants-convergence}
\caption{The benchmark described in \secref{sec:quadrants}, with discontinuities in permeability at the quadrant boundaries.
\ref{fig:quadrants-solution}: Analytic solution \eqref{eq:quadrants-solution} for the permeability values in \tabref{tab:quadrants-coeffs}. \ref{fig:quadrants-convergence}: Convergence of $L^2$-error with respect to grid refinement.}
\label{fig:quadrants}
\end{figure}
The stationary solution to \eqref{eq:diffusion-benchmark-eq} has the form
\begin{align}
\label{eq:quadrants-solution}
{\rho_0}_{,ex}(r, \theta) &= r^{\alpha} \left( a_i \cos(\alpha \theta) + b_i \sin(\alpha \theta) \right) & (r, \theta) \in \quadrant{i},
\end{align}
with coefficients $\alpha, a_i, b_i$ determined by the continuity of the density and the flux at the interfaces.
Continuity of the density gives the four conditions
\begin{align*}
{\rho_0}_{,ex}(r, \theta_i^-) = {\rho_0}_{,ex}(r, \theta_i^+),
\end{align*}
where $\theta_i^\pm$ means that the interface at $\frac{i \pi}{2}$ is approached from the left or the right, respectively.
Continuity of the fluxes translates into the conditions
\begin{align*}
\pfrac{}{n} D {\rho_0}_{,ex}(r, \theta_i^-) = \pfrac{}{n} D {\rho_0}_{,ex}(r, \theta_i^+),
\end{align*}
with
\begin{align*}
\pfrac{}{n} D {\rho_0}_{,ex}(r, \theta) = \kappa_i \pfrac{}{n} {\rho_0}_{,ex} = \kappa_i \alpha r^{\alpha-1}(-a_i \sin(\alpha \theta) + b_i \cos(\alpha \theta)).
\end{align*}
Here we used that on each quadrant the coefficients are constant.
Altogether we have eight conditions for nine coefficients.
We arbitrarily set $a_0 = 1$ and solve for the remaining coefficients numerically.
Similar to \cite{eigestad2005convergence}, we take the permeability $\kappa$ equal at diagonally opposite quadrants, and set
\begin{align*}
\kappa_0 &= \kappa_2 = 100, \\
\kappa_1 &= \kappa_3 = 1.
\end{align*}
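As a consistency check, one can verify numerically that the coefficients listed in \tabref{tab:quadrants-coeffs} satisfy the density and flux continuity conditions at all four interfaces, including the wrap-around from $\theta = 2\pi$ back to $\theta = 0$. The following Python sketch performs this verification; it is not the solver used to compute the coefficients.

```python
import numpy as np

# coefficients of the exact quadrant solution (values of the table)
alpha = 0.126902069721
kappa = [100.0, 1.0, 100.0, 1.0]
a = [1.0, 2.96039604, -0.88275659, -6.45646175]
b = [0.1, -9.6039604, -0.48035487, 7.70156488]

def angular(i, theta):
    """Angular factor of the density on quadrant i."""
    return a[i] * np.cos(alpha * theta) + b[i] * np.sin(alpha * theta)

def angular_flux(i, theta):
    """Angular factor of the normal flux kappa_i * d rho / d n on quadrant i."""
    return kappa[i] * (-a[i] * np.sin(alpha * theta)
                       + b[i] * np.cos(alpha * theta))

residuals = []
for i in range(4):
    j = (i + 1) % 4                  # neighbouring quadrant
    th_i = (i + 1) * np.pi / 2       # interface angle seen from quadrant i
    th_j = th_i if j > 0 else 0.0    # quadrant 0 starts again at theta = 0
    residuals.append(angular(i, th_i) - angular(j, th_j))
    residuals.append(angular_flux(i, th_i) - angular_flux(j, th_j))

print(max(abs(r) for r in residuals))  # small: all conditions are satisfied
```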
In the code this is achieved by prescribing the turning rate
\begin{align*}
\lambda_0(x) = \frac{3}{\kappa(x)}
\end{align*}
and an isotropic water diffusion tensor $D_W = I$.
The coefficients that belong to this choice are listed in \tabref{tab:quadrants-coeffs}.
They are identical to the values reported in \cite{eigestad2005convergence}.
A plot of the analytic solution \eqref{eq:quadrants-solution} corresponding to these coefficients is shown in \figref{fig:quadrants-solution}.
Due to the discontinuous permeability, the solution of the diffusion equation only belongs to the fractional Sobolev space $H^{1+\alpha-\nu}$ for all $\nu > 0$, i.e., it possesses slightly less than $1+\alpha$ weak derivatives.
Therefore, the maximum order of convergence we can expect with respect to grid refinement is $2\alpha$.
We performed a grid refinement study with five refinements, a refinement factor of $1.5$ and 20 grid points on the coarsest grid.
Surprisingly, the observed order of convergence (see \figref{fig:quadrants-convergence}) is about $0.4$---significantly greater than the theoretical order $2\alpha \approx 0.25$.
The error at $45$ grid points is exceptionally large because, for an odd number of grid points, the quadrant boundaries do not coincide with primal cell edges.
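The observed order in such a refinement study can be estimated from successive error pairs via $e \approx C h^p$. A small helper sketch (the error values below are synthetic and for illustration only):

```python
import math

def observed_orders(h, e):
    """Estimate convergence orders from successive error pairs via
    e ~ C h^p  =>  p = log(e_i / e_{i+1}) / log(h_i / h_{i+1})."""
    return [math.log(e[i] / e[i + 1]) / math.log(h[i] / h[i + 1])
            for i in range(len(h) - 1)]

# synthetic errors decaying with order 0.4 on grids refined by factor 1.5
h = [1.0 / 20, 1.0 / 30, 1.0 / 45, 1.0 / 68]
e = [hi ** 0.4 for hi in h]
print(observed_orders(h, e))   # -> [0.4, 0.4, 0.4] (up to round-off)
```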
\begin{table}
\centering
\begin{tabular}{lllll}
$i$ & 0 & 1 & 2 & 3 \\
\hline
$\kappa_i $ & 100. & 1. & 100. &1. \\
$a_i$ & 1. & 2.96039604 &-0.88275659 &-6.45646175 \\
$b_i$ & 0.1 & -9.6039604 &-0.48035487 & 7.70156488 \\
\hline
$\alpha$ & \multicolumn{4}{l}{0.126902069721}
\end{tabular}
\caption{Coefficients for the exact solution \eqref{eq:quadrants-solution} of the quadrants test described in \secref{sec:quadrants}.}
\label{tab:quadrants-coeffs}
\end{table}
\subsubsection{Interface with change in diffusion tensor axis}
\label{sec:halfplane}
In this test, the diffusion tensor is constant but anisotropic on the left and right half-planes.
At the interface---the $\eta$-axis---there is an abrupt change in the main direction of diffusion.
Let $R(\theta) \in SO(3)$ be the rotation around the $\zeta$-axis with angle $\theta$.
The diffusion tensor field is parametrized by left and right anisotropies $a^L, a^R$ and left and right angles of main diffusion $\theta^L, \theta^R$:
\begin{align*}
D(x) &= \begin{cases}
D^{L} = \frac{1}{a^L + 2} R\trans(\theta^L) \begin{pmatrix}
a^L & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{pmatrix}
R(\theta^L) & \xi < 0\\
D^{R} = \frac{1}{a^R + 2} R\trans(\theta^R) \begin{pmatrix}
a^R & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{pmatrix}
R(\theta^R) & \xi >0
\end{cases}
\end{align*}
The piecewise linear function
\begin{align*}
{\rho_0}(x) = \begin{cases}
{\rho_0}^{L} = s^L \cdot x & \xi < 0 \\
{\rho_0}^{R} = s^R \cdot x & \xi > 0
\end{cases}
\end{align*}
is a stationary solution to the diffusion equation \eqref{eq:diffusion-benchmark-eq} on each half-plane.
For a given left slope $s^L$, we use the continuity of the solution and normal fluxes at the interface to compute the right slope $s^R$.
Continuity of the solution gives us
\begin{align*}
s^{R}_{\eta} = s^{L}_{\eta},
\end{align*}
and continuity of the normal fluxes translates to
\begin{align*}
\ensuremath{\nabla_x \cdot}( D^L {\rho_0}^{L}(0^-, \eta)) \cdot e_1 &= \ensuremath{\nabla_x \cdot}( D^R {\rho_0}^{R}(0^+, \eta) ) \cdot e_1\\
D^{L}_{\xi \xi} s^{L}_{\xi} + D^{L}_{\eta \xi} s^{L}_{\eta} &= D^{R}_{\xi \xi} s^{R}_{\xi} + D^{R}_{\eta \xi} s^{R}_{\eta} \\
s^{R}_{\xi} &= \frac{1}{D^{R}_{\xi \xi}} \left( -D^{R}_{\eta \xi} s^{R}_{\eta} + D^{L}_{\xi \xi} s^{L}_{\xi} + D^{L}_{\eta \xi} s^{L}_{\eta}\right).
\end{align*}
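These relations determine $s^R$ explicitly. The following Python sketch reproduces the slopes of \tabref{tab:diffusion-benchmark-halfplane-coeffs}; note that the order of the rotation and its transpose in the construction of the two-dimensional tensor is chosen here such that the tabulated values are matched, and is therefore an assumption of this sketch rather than a statement about the convention in the text.

```python
import numpy as np

def D2(theta_deg, a):
    """2-D restriction of the diffusion tensor: rotation of diag(a, 1)
    by theta in the (xi, eta) plane, normalised by (a + 2)."""
    t = np.radians(theta_deg)
    R = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
    return R @ np.diag([a, 1.0]) @ R.T / (a + 2.0)

def right_slope(sL, thL=80.0, thR=20.0, a=2.5):
    """Compute s^R from s^L via continuity of solution and normal flux."""
    DL, DR = D2(thL, a), D2(thR, a)
    s_eta = sL[1]                                  # continuity of the solution
    s_xi = (-DR[1, 0] * s_eta                      # continuity of normal flux
            + DL[0, 0] * sL[0] + DL[1, 0] * sL[1]) / DR[0, 0]
    return np.array([s_xi, s_eta])

print(right_slope([1.0, 0.0]))   # -> approx [0.44965177, 0.]  (Test 1)
print(right_slope([1.0, 1.0]))   # -> approx [0.35261053, 1.]  (Test 2)
```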
We compute two different situations, whose parameters are summarized in \tabref{tab:diffusion-benchmark-halfplane-coeffs}; they differ only in the tangential flux at the interface, which is determined by $s^{L}_{\eta}$.
In the first test there is no tangential flux at the interface.
In this case the numerical solution is identical to the analytic solution.
However, in the second test, in which the tangential flux component is nonzero, the numerical solution differs significantly from the analytic solution.
Relative differences in density and flux between the computation results on a $50\times50$ grid and the analytic solution are plotted in \figref{fig:tensorjump}.
The errors are largest at the interface, especially at the lower and upper boundary.
In density the error is about $10\%$, but in the fluxes it reaches $300\%$.
\begin{table}
\centering
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{llll}
& & L & R\\
\hline
&$\theta$ & $80$\textdegree & $20$\textdegree \\
&$a$ & 2.5 & 2.5 \\
\hline
Test 1&$s$ & (1,0) & (0.44965177, 0. ) \\
Test 2&$s$ & (1,1) & (0.35261053, 1. )
\end{tabular}
\caption{Coefficients for the anisotropic half-plane test described in \secref{sec:halfplane}. }
\label{tab:diffusion-benchmark-halfplane-coeffs}
\end{table}
\begin{figure}
\centering
\deffigures/Brain/{figures/DiffusionBenchmark/}
\withfiguresize{\figurethreecol}{\figurethreecol}{\externaltikz{tensorjump}{\input{figures/Brain/ tensorjump.tex}}}
\settikzlabel{fig:tensorjump-a}
\settikzlabel{fig:tensorjump-b}
\settikzlabel{fig:tensorjump-c}
\caption{Numerical solution to the half-plane test in \secref{sec:halfplane}. Shown are the relative errors in density $\rho$ (\ref{fig:tensorjump-a}) and flux components $\ints{v_{\xi} \ensuremath{g}}, \ints{v_{\eta} \ensuremath{g}}$ (\ref{fig:tensorjump-b}, \ref{fig:tensorjump-c}) on a signed truncated logarithmic scale. }
\label{fig:tensorjump}
\end{figure}
\subsection{Computation using DTI data from human brain}
\label{sec:brain}
To demonstrate the full capabilities of the scheme, we simulate the model of glioma invasion in the human brain (see \secref{sec:glioma-model}) with the parameters in \tabref{tab:brain-parameters}.
We do not claim that these parameters, which are similar to those in \cite{EHS}, are accurate, not even to an order of magnitude.
However, the results are qualitatively similar to clinical observations (see e.g. \cite{swan2018patient}) and therefore serve as a starting point for testing the scheme under more realistic conditions.
The diffusion tensor field $D_W$ is the same as in \cite{EHS, corbin2018higher}.
It remains to estimate the volume fraction $\ensuremath{Q}[D_W](x)$ and the function $\hat \lambda_H$.
We use the same estimates as in \cite{EHKS14,corbin2018higher}, namely
\begin{align*}
\ensuremath{Q}[D_W](x) &= CL(D_W(x)) := 1 - \left( \frac{\trace (D_W)}{4 \max\Eigenvalue(D_W)} \right)^{\frac{3}{2}}
\end{align*}
for the volume fraction and
\begin{align*}
\hat \lambda_H[\ensuremath{Q}](x) &= \frac{1}{1 + \frac{\alpha(\ensuremath{Q})}{\lambda_0}}h'(\ensuremath{Q}), \\
\alpha &= k^+ \ensuremath{Q} + k^-, \\
h &= \frac{k^+ \ensuremath{Q}}{\alpha},
\end{align*}
for the activation function, with positive constants $k^+, k^-$.
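Pointwise evaluation of these closures can be sketched as follows. The derivative $h'(\ensuremath{Q})$ is written out here under the definitions above, and the constants default to the reference values of \tabref{tab:brain-parameters}.

```python
import numpy as np

def volume_fraction(DW):
    """CL(D_W) = 1 - (tr(D_W) / (4 * max eigenvalue))^(3/2)."""
    lam = np.linalg.eigvalsh(DW)
    return 1.0 - (np.trace(DW) / (4.0 * lam.max())) ** 1.5

def lambda_hat(Q, lam0=8.0e-1, kp=1.0e-1, km=1.0e-1):
    """Activation function hat(lambda)_H = h'(Q) / (1 + alpha(Q)/lambda_0),
    with alpha = kp*Q + km and h = kp*Q / alpha, hence (by the quotient
    rule, derived here) h'(Q) = kp*km / alpha^2."""
    alpha = kp * Q + km
    h_prime = kp * km / alpha**2
    return h_prime / (1.0 + alpha / lam0)

# isotropic water diffusion: tr(D) = 3 * lambda_max, hence
# Q = 1 - (3/4)^(3/2) ~ 0.3505 independently of the magnitude
print(volume_fraction(np.eye(3)))
```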
We are not interested in absolute values of $\rho$ but rather in the ratio $\frac{\rho}{\PM_{\text{cc}}}$ and therefore set the carrying capacity $\PM_{\text{cc}} = 1$ in the computations.
A two dimensional slice through the three dimensional data set is visualized in \figref{fig:brain-setup}.
The two dimensional computations are performed on a $40mm \times 40mm$ square subset of that slice.
We simulate the tumor growth over a time span of two years starting from its original appearance at an isolated site.
Therefore, initially we set $\rho = 1$ on the grid cell at the center of the computation domain and $\rho = 0$ everywhere else.
It is reasonable to assume that the tumor starts in equilibrium, i.e., $\ensuremath{g}(0, x) = 0$ everywhere.
In \figref{fig:brain-evolution} snapshots of the simulated density $\rho$ at half-year intervals are shown.
The tumor evolves essentially like a traveling wave of the Fisher equation, with a heterogeneous wave speed due to the heterogeneous diffusion tensor and drift.
We observe an increased speed of the invasion front along the white matter tracts.
The solution inside the tumor is almost stationary and fluctuates around the carrying capacity of the growth.
Note that due to the drift, the model allows migration into regions that are already full and thus the density can become larger than the carrying capacity.
This can be seen in \figref{fig:brain-evolution-4c}, wherein we show contours of $\rho$ at selected percentages of the carrying capacity.
Next we compare solutions of the model with different settings.
To this end, in \figref{fig:brain-comparisons} we plot the $10\%$ contour lines of the solution at the final time of two years.
In \figref{fig:brain-comparisons-a}, we compare solutions for various values of $\varepsilon$, between $\varepsilon = \scinum{1}{-3}$ and $\varepsilon = \scinum{1}{-10}$.
The original parameters describe a situation very close to the diffusion limit, with $\varepsilon \approx \scinum{3.3}{-6}$.
As we can expect from the results in \secref{sec:diffusion-limit-convergence} there is no difference between the original model and the model with $\varepsilon = \scinum{1}{-10}$.
However, we start to see differences when we artificially choose a greater $\varepsilon$.
Generally, the invasion front is faster for greater $\varepsilon$: due to the reduced turning rate, individual cells have a higher chance of overtaking the diffusive invasion front.
At $\varepsilon = \scinum{1}{-4}$ we observe a distance of contours comparable to the $2mm$ resolution of the DTI data set.
This value of $\varepsilon$ corresponds to the cell speed $c \approx \scinum{6.9}{-6} \frac{mm}{s}$, which is about a hundredth of the original value, and turning rates $\lambda_0 \approx \scinum{8.6}{-4}, \lambda_1 \approx \scinum{1.1}{-1}$ approximately one thousandth of the original rates.
Thus the kinetic model could be relevant for cell species that migrate very slowly and change their orientation very rarely (in this example once every twenty minutes).
We also investigate the influence of the spatial and temporal discretization scheme on the solution and compare the \mischeme[]{1}, \ivscheme[]{1}, \mischeme[]{2}, \ivscheme{2} schemes (see \figref{fig:brain-comparisons-b}).
The second-order variants \mischeme[]{2} and \ivscheme[]{2} agree very well in most of the domain.
The contours of both second-order schemes lie everywhere between the contour for the \mischeme[]{1} scheme on the inside and the contour for the \ivscheme[]{1} scheme on the outside.
Hence the \mischeme[]{1} scheme seems to underestimate the invasion front, whereas the \ivscheme[]{1} scheme overestimates it.
Considering the explicit and implicit discretizations of $\dot x = x$, this behavior is to be expected.
Because the situation is very close to the diffusion limit, higher moment orders in the velocity discretization make no difference to the solution.
In \figref{fig:brain-comparisons-c} the contours for the $P_1$ and the $P_3$ solutions are plotted and are visually identical.
Finally, we compare the solution of the two-dimensional model with a slice of the three dimensional model (see \figref{fig:brain-comparisons-d}).
\begin{table}
\centering
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{l@{\hspace{5pt}}lr@{.\hspace{0pt}}l@{$\times$\hspace{0pt}}lcl}
\multicolumn{2}{l}{Parameter} &\multicolumn{3}{l}{Value}& & Description \\
\hline
T && $6$&$31$& $10^{ 7}$ & $s$ & time span = two years \\
c && $2$&$1$ & $10^{-4}$ & $ \frac{mm}{s}$ & cell speed \\
$\lambda_0$ && $8$&$0$ & $10^{-1}$ & $\frac{1}{s}$ & cell-state independent part of turning rate\\
$\lambda_1$ && $1$&$5$ & $10^{ 2}$ & $\frac{1}{s}$ & cell-state dependent part of turning rate\\
$k^+$ && $1$&$0$ & $10^{-1}$ & $\frac{1}{s} $ & attachment rate of cells to ECM\\
$k^-$ && $1$&$0$ & $10^{-1}$ & $\frac{1}{s} $ & detachment rate of cells to ECM\\
$M$ && $8$&$44$ &$10^{-7}$& $\frac{1}{s} $ & growth rate\\
\hline
$\St$ &&$1$&$21$ & $10^{-2}$ & & Strouhal number\\
$\Kn$ &&$3$&$96$ & $10^{-8}$ & & Knudsen number\\
$\varepsilon$ &&$3$&$28$ & $10^{-6}$ & & parabolic scaling number\\
$\delta$ &&$2$&$72$ & $10^{-4}$ & & parabolic scaling number\\
$\nu$ &&$1$&$25$ & $10^{2}$ & & Ratio of turning rate coefficients\\
\end{tabular}
\caption{The reference parameters and the resulting characteristic numbers used in the simulations of glioma invasion in the human brain.}
\label{tab:brain-parameters}
\end{table}
\begin{figure}
\centering
\deffigures/Brain/{figures/Brain/}
\withfiguresize{\figurethreecol}{\figurethreecol}{\externaltikz{brain_time_evolution}{\input{figures/Brain/ brain_time_evolution.tex}}}
\settikzlabel{fig:brain-setup}
\settikzlabel{fig:brain-evolution-1}
\settikzlabel{fig:brain-evolution-2}
\settikzlabel{fig:brain-evolution-3}
\settikzlabel{fig:brain-evolution-4}
\settikzlabel{fig:brain-evolution-4c}
\caption{\ref{fig:brain-setup}: A two dimensional slice through the DTI data set. The white box indicates the computational domain and the white arrow the initial tumor location. \ref{fig:brain-evolution-1} - \ref{fig:brain-evolution-4}: Plots of the glioma simulation in six-month intervals. \ref{fig:brain-evolution-4c}: Contours at $100\%, 10\%, 1\%$ and $0.1\%$ of the carrying capacity at the final time. Tumor density $\rho$ is shown in color. The volume fraction $\ensuremath{Q}$ is encoded in the grayscale background image; brighter color means greater $\ensuremath{Q}$. The black arrows show the limit drift vector $a_{T}$. }
\label{fig:brain-evolution}
\end{figure}
\begin{figure}
\centering
\deffigures/Brain/{figures/Brain/}
\withfiguresize{\figuretwocollegend}{\figuretwocollegend}{\externaltikz{brain_comparisons}{\input{figures/Brain/ brain_comparisons.tex}}}
\settikzlabel{fig:brain-comparisons-a}
\settikzlabel{fig:brain-comparisons-b}
\settikzlabel{fig:brain-comparisons-c}
\settikzlabel{fig:brain-comparisons-d}
\caption{Results of glioma simulations (\secref{sec:brain}) with varied parameters and schemes. In all panels, the $10\%$ contours are shown. For the interpretation of the background image, refer to \figref{fig:brain-evolution}. \ref{fig:brain-comparisons-a}: Solutions for various $\varepsilon$ in the intermediate to diffusive regime. \ref{fig:brain-comparisons-b}: Comparison between the numerical schemes. \ref{fig:brain-comparisons-c}: Comparison between moment orders. \ref{fig:brain-comparisons-d}: Comparison between the two-dimensional and three-dimensional models.}
\label{fig:brain-comparisons}
\end{figure}
\section{Discussion}
\label{sec:discussion}
The goal of this work was to develop a numerical tool for a special class of transport equations that lead to an advection-diffusion-reaction equation in the parabolic limit.
This method should be applicable to a wide range of scaling regimes, from almost free transport to very close to the diffusion limit.
One example of an application that is very close to the parabolic limit is a model of glioma invasion in the human brain.
The method was developed mainly with this model and the corresponding data in mind.
This means that in the implementation, we could take advantage of the simplifications it offers; for example that the turning operator is explicitly invertible or that the equilibrium distribution is of a quadratic form.
But probably the most significant influence on the method development came from the associated data.
DTI data are measured and delivered on regular grids with fixed spatial resolution.
On each grid cell, the water diffusion tensor is assumed constant, because there is no natural way to interpolate between those tensors.
To avoid interpolation artifacts in the solution, the space discretization has to use the same grid as the original data.
As a consequence, the method was implemented only for tensor-product grids and not tested for more general grids.
However, the method has to address the strong heterogeneities and discontinuities of the DTI data.
As a starting point for our scheme, we used the method developed by Lemou and Mieussens \cite{lemou2008ap}.
This scheme employs a micro-macro decomposition and discretizes the microscopic and macroscopic components on different parts of a staggered grid.
In this work, we generalized the method to an asymptotic preserving finite-volume formulation on primal-dual mesh pairs that works in two and three space dimensions.
In the description of the method, we used a mostly mesh-agnostic notation because we are confident that it also is applicable on unstructured meshes.
Most parts of the implementation in DUNE \cite{dune-web-page} are already written mesh-independently, but a complete implementation is still only available for tensor-product grids.
Development and testing of the unstructured implementation are left for the future.
To discretize the velocity space in the micro equation, we employ the method of moments.
More specifically, we use spherical harmonic basis functions and a linear reconstruction ansatz.
In the diffusive regime, first-order basis polynomials are accurate enough, which means that only one degree of freedom per space dimension is needed.
Compare this to the discrete ordinates method, which needs at least two degrees of freedom per space dimension to maintain symmetry.
For successively less diffusive regimes, higher moment orders can be added as needed.
Of course, in the kinetic regime the linear moment method has the usual drawback of producing unphysical Gibbs phenomena.
But this is not a problem in the diffusive regime.
For asymptotic preserving methods, one special point of interest is the resulting discretization in the parabolic limit.
We show the limit diffusion and drift approximations only for a very simplified setting---a regular grid with constant isotropic coefficients---but this is enough to identify two drawbacks of the basic method.
First, the limit diffusion approximation is a five-point diagonal stencil that leads to a decoupling of the grids and spurious oscillations.
The same effect was also described in \cite{buet2012design} and seems to be a general problem for primal-dual discretizations.
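The decoupling caused by the diagonal stencil can be seen directly on a checkerboard grid mode. The following illustrative sketch (periodic wrap-around, unit spacing, scaling factors omitted) shows that the checkerboard mode lies in the kernel of the diagonal stencil but not of the classical five-point stencil:

```python
import numpy as np

n = 8
i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
u = (-1.0) ** (i + j)                 # checkerboard grid mode

def apply_stencil(u, offsets):
    """Apply a (-4*u + sum of 4 neighbours) stencil with periodic wrap."""
    out = -4.0 * u
    for di, dj in offsets:
        out += np.roll(np.roll(u, di, axis=0), dj, axis=1)
    return out

diagonal = apply_stencil(u, [(1, 1), (1, -1), (-1, 1), (-1, -1)])
classical = apply_stencil(u, [(1, 0), (-1, 0), (0, 1), (0, -1)])

print(np.abs(diagonal).max())   # 0: checkerboard is invisible to the stencil
print(np.abs(classical).max())  # 8: the classical stencil damps it
```

Because the diagonal stencil only couples points of equal index parity, a checkerboard perturbation is never damped, which is the source of the spurious oscillations.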
We propose alterations of the basic method that effectively allow us to modify the limiting discretization of the diffusion and drift terms.
In effect, this leads to the classical five-point stencil for the diffusion and an upwind approximation of the drift.
However, the drift discretization comes at the price of being inherently first-order accurate.
We perform a wide range of benchmarks to numerically test some of the method's properties.
The fundamental solution test demonstrates that the method indeed is asymptotic preserving and in the limit converges with the correct order to the fundamental solution.
Moreover, we use this benchmark to estimate properties of the modified equation of the scheme.
Of special interest is the behavior of the method in presence of strong discontinuities as encountered in the DTI data.
For this, we adapt two stationary benchmark tests from the porous media community.
The scheme deals well with strong jumps in permeability and surprisingly exhibits a higher rate of convergence than could be expected from the regularity of the solution.
Also, jumps in diffusion direction across an interface are resolved well, as long as the flux is only normal to the interface.
Any tangential flux drastically reduces the approximation quality.
Last but not least we demonstrate the capabilities of the method in the glioma invasion model.
Although the parameters are only very rough estimates, the overall situation is similar to the application.
The method performs well even on the coarse and heterogeneous real-world DTI data.
\newpage
\addcontentsline{toc}{section}{References}
\bibliographystyle{plain}
\section{Introduction and statement of results}
Let $\Omega_1\subset\Omega_2\subset...\subset\Omega_{m+1}\subset {\bf R}^n$, $m\ge 1$, $n\ge 2$, be bounded, strictly convex domains with smooth boundaries $\Gamma_k=\partial\Omega_k$, $\Gamma_k\cap\Gamma_{k+1}=\emptyset$. Let also $\Omega_0\subset\Omega_1$ be a bounded domain with smooth boundary $\Gamma_0=\partial\Omega_0$ such that ${\bf R}^n\setminus\Omega_0$ is connected. In the present paper we are interested in studying the large time behavior of the solutions of the following mixed boundary value problems:
$$
\left\{
\begin{array}{l}
(i\partial_t-c_k^2\Delta)u_k^0(x,t)=0\quad\mbox{in}\quad(\Omega_k\setminus\Omega_{k-1})\times (0,+\infty),\,k=1,...,m+1,\\
Bu_1^0(x,t)=0\quad\mbox{on}\quad\Gamma_0\times (0,+\infty),\\
u_k^0(x,t)=u_{k+1}^0(x,t),\,\partial_\nu u_k^0(x,t)=\partial_\nu u_{k+1}^0(x,t)
\quad\mbox{on}\quad \Gamma_k\times(0,+\infty),\,k=1,...,m,\\
\partial_\nu u_{m+1}^0(x,t)+ia(x)u_{m+1}^0(x,t)=0\quad\mbox{on}\quad \Gamma_{m+1}\times(0,+\infty),\\
u_k^0(x,0) = f_k^0(x), \, k=1,...,m+1,
\end{array}
\right.
\eqno{(1.1)}
$$
and
$$
\left\{
\begin{array}{l}
(\partial_t^2-c_k^2\Delta)u_k^1(x,t)=0\quad\mbox{in}\quad(\Omega_k\setminus\Omega_{k-1})\times (0,+\infty),\,k=1,...,m+1,\\
Bu_1^1(x,t)=0\quad\mbox{on}\quad\Gamma_0\times (0,+\infty),\\
u_k^1(x,t)=u_{k+1}^1(x,t),\,\partial_\nu u_k^1(x,t)=\partial_\nu u_{k+1}^1(x,t)
\quad\mbox{on}\quad \Gamma_k\times(0,+\infty),\,k=1,...,m,\\
\partial_\nu u_{m+1}^1(x,t)+a(x)\partial_tu_{m+1}^1(x,t)=0\quad\mbox{on}\quad \Gamma_{m+1}\times(0,+\infty),\\
u_k^1(x,0) = f_k^1(x), \, \partial_t u_k^1(x,0) = g_k^1(x), \, k=1,...,m+1,
\end{array}
\right.
\eqno{(1.2)}
$$
where either $B=Id$ (Dirichlet boundary conditions) or $B=\partial_\nu$ (Neumann boundary conditions), $\partial_\nu$ denotes the normal derivative to the boundary, $c_k$ are constants satisfying
$$c_1>c_2>...>c_{m+1}>0,\eqno{(1.3)}$$
and $a(x)$ is a continuous, real-valued function on $\Gamma_{m+1}$ supposed to satisfy
$$a(x)\ge a_0 \quad\mbox{on}\quad \Gamma_{m+1},\eqno{(1.4)}$$
with some constant $a_0>0$. The equation (1.2) describes the propagation of acoustic waves in different media with different speeds $c_k$, $k=1,...,m+1$, which do not penetrate into $\Omega_0$.
The boundary condition on $\Gamma_{m+1}$ is a strong dissipative one which guarantees that the energy of the solutions of (1.2) with finite energy initial data tends to zero as $t\to +\infty$. The equation (1.1) is of Schr\"odinger type with weak dissipative boundary conditions. In fact, the large time behavior of the solutions of (1.1) and (1.2) is closely related to the behavior on the real axis of the corresponding resolvent operator, $R^j(\lambda)$, $\lambda\in{\bf C}$, defined for ${\rm Im}\,\lambda<0$ as follows. Given $$v^j=(v_1^j,...,v_{m+1}^j)\in H:=\oplus_{k=1}^{m+1}L^2\left(\Omega_{k}\setminus\Omega_{k-1},c_k^{-2}dx\right),$$
$R^j(\lambda)v^j=(u_1^j,...,u_{m+1}^j)\in H$ solves the equation
$$
\left\{
\begin{array}{l}
(\lambda^2+c_k^2\Delta)u_k^j=v_k^j\quad\mbox{in}\quad\Omega_k\setminus\Omega_{k-1},\,k=1,...,m+1,\\
Bu_1^j=0\quad\mbox{on}\quad\Gamma_0,\\
u_k^j=u_{k+1}^j,\,\partial_\nu u_k^j=\partial_\nu u_{k+1}^j
\quad\mbox{on}\quad \Gamma_k,\,k=1,...,m,\\
\partial_\nu u_{m+1}^j+i\lambda^j a(x) u_{m+1}^j=0\quad\mbox{on}\quad \Gamma_{m+1}.
\end{array}
\right.
\eqno{(1.5)}
$$
It is well known that $\lambda^jR^j(\lambda):H\to H$ extends meromorphically to the whole complex plane ${\bf C}$ with no poles on the real axis (the latter can be derived from the Carleman estimates of \cite{kn:B}). In the present paper we will study the behavior of $R^j(\lambda)$
for $\lambda\in{\bf R}$, $|\lambda|\gg 1$. To this end we need to impose some conditions on $\Omega_0$ (as weak as possible). We first make the following assumption:
$$\mbox{every generalized ray in $\Omega_1\setminus\Omega_0$ hits the boundary $\Gamma_1$}.\eqno{(1.6)}$$
Clearly, (1.6) is fulfilled if $\Omega_0$ is strictly convex. However, the class of the domains for which (1.6) is satisfied is much larger than the class of strictly convex domains. We can now state our first result.
\begin{Theorem} Assume (1.3), (1.4) and (1.6) fulfilled. Then, there exist constants $C,C_1>0$ so that $R^j(\lambda)$ ($j=0,1$) satisfies the bound
$$\left\|R^j(\lambda)\right\|_{H\to H}\le C|\lambda|^{-j},\quad\lambda\in{\bf R},\,|\lambda|\ge C_1.\eqno{(1.7)}$$
\end{Theorem}
One can derive from this theorem the following
\begin{corol} Under the assumptions of Theorem 1.1, the solutions $$u^j(x,t)=(u_1^j(x,t),...,u_{m+1}^j(x,t))$$ of (1.1) and (1.2) satisfy the estimates (for $t\gg 1$):
$$\left\| u^0(\cdot,t)\right\|_H\le \widetilde Ce^{-Ct}\left\|u^0(\cdot,0)\right\|_H,\eqno{(1.8)}$$
with constants $\widetilde C,C>0$ independent of $t$ and $u^0$, and
$$\left\|\nabla_x u^1(\cdot,t)\right\|_H+\left\|\partial_t u^1(\cdot,t)\right\|_H\le \widetilde Ce^{-Ct}\left(\left\|\nabla_x u^1(\cdot,0)\right\|_H+\left\|\partial_t u^1(\cdot,0)\right\|_H\right),\eqno{(1.9)}$$
with constants $\widetilde C,C>0$ independent of $t$ and $u^1$.
\end{corol}
To prove (1.8) and (1.9) it suffices to show that the solutions of (1.1) and (1.2) are given by semi-groups $e^{itA_j}$, respectively, acting on suitable Hilbert spaces
${\cal H}_j$ with generators $A_j$ of compact resolvent and hence of discrete spectrum. Then Theorem 1.1 implies that
$$\left\|(A_j-z)^{-1}\right\|_{{\cal H}_j\to{\cal H}_j}=O(1)\quad\mbox{for}\quad z\in{\bf R},\,|z|\gg 1,$$
which in turn implies (1.8) and (1.9), respectively (see Section 2 for more details).
In the case when there is no transmission of waves (which corresponds to taking $m=0$ in the setting above) the above estimates follow from the results of \cite{kn:BLR}. In fact, in \cite{kn:BLR} a more general situation is studied, namely $\Omega_1$ is not necessarily strictly convex and (1.4) is supposed to hold on a non-empty subset $\widetilde\Gamma_1$ of $\Gamma_1$. Then (1.6) is replaced by the assumption that every generalized ray in $\Omega_1\setminus\Omega_0$ hits $\widetilde\Gamma_1$ at a non-diffractive point (see \cite{kn:BLR} for the definition and more details). The situation changes drastically in the case of transmission (which corresponds to taking $m\ge 1$ in the setting above) due to the fact that the classical flow for this problem is much more complicated. Indeed, when a ray in $\Omega_{k+1}\setminus\Omega_k$ hits the boundary $\Gamma_k$ (if $1\le k\le m$) or the boundary $\Gamma_{k+1}$ (if $0\le k\le m-1$), it splits into two rays: one staying in
$\Omega_{k+1}\setminus\Omega_k$ and another entering into $\Omega_{k}\setminus\Omega_{k-1}$ or $\Omega_{k+2}\setminus\Omega_{k+1}$, respectively. Consequently, there are infinitely many rays which do not reach the boundary $\Gamma_{m+1}$ where the dissipation is active. The condition (1.3), however, guarantees that these rays carry a negligible amount of energy, and therefore (1.3) is crucial for the above estimates to hold. Indeed, if for example we have $c_{k_0}<c_{k_0+1}$
for some $1\le k_0\le m$, then one can construct quasi-modes concentrated on the boundary $\Gamma_{k_0}$ (see \cite{kn:PV}). Consequently, we have in this case a sequence, $\{\lambda_k\}_{k=1}^\infty$, of poles of $R^j(\lambda)$ such that $|\lambda_k|\to \infty$ and $0<{\rm Im}\,\lambda_k\le C_N|\lambda_k|^{-N}$, $\forall N\ge 1$. Note also that the fact that the domains $\Omega_k$, $k=1,...,m+1$, are strictly convex is crucial for our proof to work, and
quite probably Theorem 1.1 as well as the estimates (1.8) and (1.9) are no longer true without this condition. The strict convexity is essential for the proof of Proposition 2.3 below (proved in \cite{kn:CPV}). It also guarantees nice properties of the Neumann operator (denoted by $N_k(\lambda)$, $k=1,...,m$ below) associated to the Helmholtz equation in ${\bf R}^n\setminus \Omega_k$ (see Lemmas 4.2 and 4.4).
To prove Theorem 1.1 we make use of the results of \cite{kn:CPV} where an exterior transmission problem has been studied. Consider the exterior stationary problem
$$
\left\{
\begin{array}{l}
(\lambda^2+c_1^2\Delta)u=v\quad\mbox{in}\quad{\bf R}^n\setminus\Omega_0,\\
Bu=0\quad\mbox{on}\quad\Gamma_0,\\
u \quad \lambda-\mbox{outgoing}.
\end{array}
\right.
\eqno{(1.10)}
$$
Then the outgoing resolvent, ${\cal R}_0(\lambda)$, for the exterior problem is defined by $u={\cal R}_0(\lambda)v$. Let $\chi\in
C_0^\infty({\bf R}^n)$, $\chi=1$ on $\Omega_0$. It is well known that the cut-off resolvent $\chi {\cal R}_0(\lambda)\chi$ is analytic in ${\rm Im}\,\lambda<0$ and meromorphic in ${\rm Im}\,\lambda>0$ with no poles on the real axis. Clearly, the condition (1.6) implies that $\Omega_0$
is non-trapping, that is, all generalized rays in ${\bf R}^n\setminus\Omega_0$ escape at infinity. In particular, this implies that the cut-off resolvent $\chi {\cal R}_0(\lambda)\chi$ satisfies the bound
$$\left\|\chi {\cal R}_0(\lambda)\chi\right\|_{L^2({\bf R}^n\setminus\Omega_0)\to L^2({\bf R}^n\setminus\Omega_0)}\le C|\lambda|^{-1}\quad\mbox{for}\quad \lambda\in{\bf R},\,|\lambda|\ge 1.\eqno{(1.11)}$$
In fact, the only thing we use in the proof of Theorem 1.1 is the estimate (1.11). In other words, we can replace the condition (1.6) by the estimate (1.11). Note also that (1.11) implies that $\chi {\cal R}_0(\lambda)\chi$ extends analytically in a strip $\{\lambda\in{\bf C}:|{\rm Im}\,\lambda|\le Const,\,|\lambda|\ge 1\}$ and that (1.11) still holds in this larger region (see \cite{kn:V}).
An interesting open problem is to get estimates similar to those stated above for more general domains $\Omega_0$ for which (1.6) and (1.11)
are not satisfied. A typical example for such domains is $\Omega_0={\cal O}_1\cup{\cal O}_2$, where ${\cal O}_1$ and ${\cal O}_2$ are strictly convex domains with smooth boundaries, ${\cal O}_1\cap{\cal O}_2=\emptyset$. In this case there is one periodic ray between ${\cal O}_1$ and ${\cal O}_2$ which does not reach $\Gamma_1$. It is well known that in this case (1.11) does not hold. Instead, we have that, in the case of Dirichlet boundary conditions (i.e. $B=Id$), the cut-off resolvent $\chi {\cal R}_0(\lambda)\chi$ is analytic in a strip $\{\lambda\in{\bf C}:|{\rm Im}\,\lambda|\le Const,\,|\lambda|\ge 1\}$ with polynomially bounded norm (see \cite{kn:G}, \cite{kn:I1}). Our purpose is to treat such more general domains $\Omega_0$. More precisely, we make the following assumption:\\
There exist constants $C,C_1, C_2,p>0$ so that the cutoff resolvent $\chi {\cal R}_0(\lambda)\chi$ is analytic in a strip $\{\lambda\in{\bf C}:|{\rm Im}\,\lambda|\le C_1,\,|\lambda|\ge C_2\}$ and satisfies there the bound
$$\left\|\chi {\cal R}_0(\lambda)\chi\right\|_{L^2({\bf R}^n\setminus\Omega_0)\to L^2({\bf R}^n\setminus\Omega_0)}\le C|\lambda|^p.\eqno{(1.12)}$$
Note that (1.12) is also satisfied for domains $\Omega_0=\cup_{\ell=1}^L{\cal O}_\ell$, $L\ge 3$, where ${\cal O}_\ell$ are strictly convex domains with smooth boundaries, ${\cal O}_{\ell_1}\cap{\cal O}_{\ell_2}=\emptyset$, $\ell_1\neq\ell_2$, satisfying some natural conditions (see \cite{kn:I2} for more details). Note that in this case there could be infinitely many periodic broken rays which do not reach the boundary $\Gamma_1$. Let us also mention that semi-classical analogues of (1.12) have been recently proved in \cite{kn:NZ}, \cite{kn:NZ1} in a very general situation.
Our main result is the following
\begin{Theorem} Assume (1.3), (1.4) and (1.12) fulfilled. Then, there exist constants $C,C_1>0$ so that $R^j(\lambda)$ ($j=0,1$) satisfies the bound
$$\left\|R^j(\lambda)\right\|_{H\to H}\le C|\lambda|^{-j}(\log|\lambda|)^{2^{m+1}},\quad\lambda\in{\bf R},\,|\lambda|\ge C_1.\eqno{(1.13)}$$
\end{Theorem}
Given an integer $k\ge 0$, set $\alpha_k=(2^k+1)^{-1}$. One can derive from this theorem the following
\begin{corol} Under the assumptions of Theorem 1.3, the solutions $$u^j(x,t)=(u_1^j(x,t),...,u_{m+1}^j(x,t))$$ of (1.1) and (1.2) satisfy the estimates (for $t\gg 1$):
$$\left\| u^0(\cdot,t)\right\|_H\le \widetilde C\exp\left(-C\varepsilon t^{\alpha_{m+1}}\right)\left\|u^0(\cdot,0)\right\|_{H^\varepsilon},\eqno{(1.14)}$$
for every $0<\varepsilon\le\varepsilon_0$, with constants $C,\varepsilon_0>0$ independent of $t$, $\varepsilon$ and $u^0$, $\widetilde C$ independent of $t$ and $u^0$, and
$$\left\|\nabla_x u^1(\cdot,t)\right\|_H+\left\|\partial_t u^1(\cdot,t)\right\|_H\le \widetilde C\exp\left(-C\varepsilon t^{\alpha_{m+1}}\right)\left(\left\|\nabla_x u^1(\cdot,0)\right\|_{H^\varepsilon}+\left\|\partial_t u^1(\cdot,0)\right\|_{H^\varepsilon}\right),\eqno{(1.15)}$$
for every $0<\varepsilon\le\varepsilon_0$, with constants $C,\varepsilon_0>0$ independent of $t$, $\varepsilon$ and $u^1$, and $\widetilde C$ independent of $t$ and $u^1$,
where $H^\varepsilon :=\oplus_{k=1}^{m+1}H^\varepsilon\left(\Omega_{k}\setminus\Omega_{k-1}\right)$ denotes the corresponding Sobolev space.
\end{corol}
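To get a feeling for the rates in Corollary 1.4, note that the exponents $\alpha_k=(2^k+1)^{-1}$ decrease rapidly with the number of transmission interfaces:
$$\alpha_1=\frac{1}{3},\quad\alpha_2=\frac{1}{5},\quad\alpha_3=\frac{1}{9},\quad\mbox{and in general}\quad\alpha_{m+1}=\frac{1}{2^{m+1}+1}\to 0\quad\mbox{as}\quad m\to\infty.$$
Thus, for example, a single interior interface ($m=1$) gives the decay rate $\exp\left(-C\varepsilon t^{1/5}\right)$ in (1.14) and (1.15).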
Note that the estimate (1.15) (with $\alpha_{m+1}=1/2$) has been proved in \cite{kn:C}, \cite{kn:C1} in the case of the damped wave equation on a bounded manifold without boundary under the assumption that there is only one closed geodesic of hyperbolic type which does not pass through the support of the dissipative term but
all other geodesics do. This result has recently been improved in \cite{kn:S} for a class of manifolds with negative curvature, where a strip free of eigenvalues has been obtained and, as a consequence, an analogue of (1.15) (with $\alpha_{m+1}=1$) has been proved.
If $\Omega_0$ is strictly convex, the conclusions of Theorem 1.1 still hold if we admit transmission of waves in the interior of $\Omega_0$ moving with a speed $>c_1$, i.e. if we replace the boundary condition $Bu=0$ on $\Gamma_0$ by a transmission problem. Indeed, in this case we have (1.11) according to the results of \cite{kn:CPV}. Thus, it is natural to ask whether Theorem 1.3 still holds if $\Omega_0$ consists of two strictly convex bodies and we admit transmission of waves in the interior. To be more precise, we define the resolvent $\widetilde {\cal R}_0(\lambda)$ as $u=\widetilde{\cal R}_0(\lambda)v$, where $u=(u_1,u_2,u_3)$ and $v=(v_1,v_2,v_3)$ satisfy the equation
$$
\left\{
\begin{array}{l}
(\lambda^2+\alpha_k^2\Delta)u_k=v_k\quad\mbox{in}\quad{\cal O}_k,\,k=1,2,\\
(\lambda^2+c_1^2\Delta)u_3=v_3\quad\mbox{in}\quad{\bf R}^n\setminus({\cal O}_1\cup{\cal O}_2),\\
u_k=u_3,\,\partial_\nu u_k=\partial_\nu u_3\quad\mbox{on}\quad\partial{\cal O}_k,\,k=1,2,\\
u_3 - \lambda - \mbox{outgoing},
\end{array}
\right.
\eqno{(1.16)}
$$
where $\alpha_k>c_1$, $k=1,2$, are constants, ${\cal O}_1$ and ${\cal O}_2$ are strictly convex domains with smooth boundaries, ${\cal O}_1\cap{\cal O}_2=\emptyset$. In analogy with the case of one strictly convex body discussed above, it is natural to make the following\\
\noindent
{\bf Conjecture.} {\it The resolvent} $\widetilde {\cal R}_0(\lambda)$ {\it satisfies the condition (1.12).}\\
Clearly, if this conjecture holds true, so does Theorem 1.3 in this more complex situation. However, it seems quite hard to prove.
The method we develop to prove the above results allows us to obtain a decay of the local energy of the solutions of the following problem:
$$
\left\{
\begin{array}{l}
(\partial_t^2-c_k^2\Delta)u_k(x,t)=0\quad\mbox{in}\quad(\Omega_k\setminus\Omega_{k-1})\times (0,+\infty),\,k=1,...,m,\\
(\partial_t^2-c_{m+1}^2\Delta)u_{m+1}(x,t)=0\quad\mbox{in}\quad({\bf R}^n\setminus\Omega_m)\times (0,+\infty),\\
Bu_1(x,t)=0\quad\mbox{on}\quad\Gamma_0\times (0,+\infty),\\
u_k(x,t)=u_{k+1}(x,t),\,\partial_\nu u_k(x,t)=\partial_\nu u_{k+1}(x,t)
\quad\mbox{on}\quad \Gamma_k\times(0,+\infty),\,k=1,...,m,\\
u_k(x,0) = f_k(x), \, \partial_t u_k(x,0) = g_k(x), \, k=1,...,m+1.
\end{array}
\right.
\eqno{(1.17)}
$$
More precisely, we have the following
\begin{Theorem} Under the assumptions (1.3) and (1.6), for every compact $K\subset{\bf R}^n\setminus\Omega_0$ there exists a constant
$C_K>0$ so that the solution $$u(x,t)=(u_1(x,t),...,u_{m+1}(x,t))$$ of (1.17) satisfies the estimate (for $t\gg 1$)
$$\left\|\nabla_x u(\cdot,t)\right\|_{L^2(K)}+\left\|\partial_t u(\cdot,t)\right\|_{L^2(K)}\le C_Kp_0(t)\left(\left\|\nabla_x u(\cdot,0)\right\|_{L^2(K)}+\left\|\partial_t u(\cdot,0)\right\|_{L^2(K)}\right),\eqno{(1.18)}$$
provided ${\rm supp}\,u(\cdot,0)$, ${\rm supp}\,\partial_tu(\cdot,0)\subset K$, where
$$p_0(t)=\left\{\begin{array}{l}
e^{-\gamma t}\quad \mbox{if }n\mbox{ is odd},\\
t^{-n}\quad \mbox{if }n\mbox{ is even},
\end{array}\right.$$
with a constant $\gamma>0$ independent of $t$. Furthermore, under the assumptions (1.3) and (1.12), we have the weaker estimate
$$\left\|\nabla_x u(\cdot,t)\right\|_{L^2(K)}+\left\|\partial_t u(\cdot,t)\right\|_{L^2(K)}\le C_{K,\varepsilon}p_\varepsilon(t)\left(\left\|\nabla_x u(\cdot,0)\right\|_{H^\varepsilon(K)}+\left\|\partial_t u(\cdot,0)\right\|_{H^\varepsilon(K)}\right),\eqno{(1.19)}$$
for every $0<\varepsilon\le\varepsilon_0$, provided ${\rm supp}\,u(\cdot,0)$, ${\rm supp}\,\partial_tu(\cdot,0)\subset K$, where
$$p_\varepsilon(t)=\left\{\begin{array}{l}
\exp\left(-\gamma\varepsilon t^{\alpha_{m}}\right)\quad \mbox{if }n\mbox{ is odd},\\
t^{-n}\quad \mbox{if }n\mbox{ is even},
\end{array}\right.$$
with constants $\varepsilon_0,\gamma>0$ independent of $t$ and $\varepsilon$.
\end{Theorem}
Note that the estimate (1.18) is known to hold for non-trapping compactly supported perturbations of the Euclidean Laplacian (see \cite{kn:Va}). Note also that an estimate similar to (1.19) (with $\alpha_m=1/2$) has been proved in \cite{kn:C2} in the case of compactly supported metric perturbations of the Euclidean Laplacian under the assumption that there is only one closed geodesic of hyperbolic type.
According to the results of \cite{kn:V}, to prove (1.18) it suffices to show that the corresponding cutoff resolvent is analytic in some
strip near the real axis with a suitable control of its norm at high frequencies. Thus, (1.18) follows from Theorem 2.2 below applied with $k=m$ (which is actually proved in \cite{kn:CPV}), while (1.19) is a consequence of Theorem 3.2 applied with $k=m$.
The paper is organized as follows. In Section 2 we prove Theorem 1.1, Corollary 1.2 and (1.18) using in an essential way the results of \cite{kn:CPV}. Similar ideas have already been used in \cite{kn:AV}. In Section 3 we prove Theorem 1.3, Corollary 1.4 and (1.19). To this end
we prove in Section 4 an analogue of the results of \cite{kn:CPV} under (1.12) (see Theorem 3.2 below).
\section{The case $\Omega_0$ non-trapping}
Let $w=(w_1,...,w_{m+1})$, $v=(v_1,...,v_{m+1})$ satisfy the equation
$$
\left\{
\begin{array}{l}
(\lambda^2+c_k^2\Delta)w_k=v_k\quad\mbox{in}\quad\Omega_k\setminus\Omega_{k-1},\,k=1,...,m+1,\\
Bw_1=0\quad\mbox{on}\quad\Gamma_0,\\
w_k=w_{k+1},\,\partial_\nu w_k=\partial_\nu w_{k+1}
\quad\mbox{on}\quad \Gamma_k,\,k=1,...,m.
\end{array}
\right.
\eqno{(2.1)}
$$
We will first show that Theorem 1.1 follows from the following
\begin{Theorem} Assume (1.3) and (1.6) fulfilled. Then, there exist constants $C,\lambda_0>0$ so that for $\lambda\ge\lambda_0$ the solution to (2.1) satisfies the estimate
$$\|w\|_H\le C\lambda^{-1}\|v\|_H+C\left\|w_{m+1}|_{\Gamma_{m+1}}\right\|_{L^2(\Gamma_{m+1})}+C\lambda^{-1}\left\|\partial_\nu w_{m+1}|_{\Gamma_{m+1}}\right\|_{L^2(\Gamma_{m+1})}.\eqno{(2.2)}$$
\end{Theorem}
Applying Green's formula to the solution of (1.5) in each domain $\Omega_k\setminus\Omega_{k-1},\,k=1,...,m+1,$ and summing up these identities leads to the identity
$${\rm Im}\,\left\langle u^j,v^j\right\rangle_H:=\sum_{k=1}^{m+1}{\rm Im}\,\left\langle c_k^{-2}u_k^j,v_k^j\right\rangle_{L^2(\Omega_k\setminus\Omega_{k-1})}$$ $$=-{\rm Im}\,\left\langle \partial_\nu u_{m+1}^j,u_{m+1}^j\right\rangle_{L^2(\Gamma_{m+1})}=\lambda^j\,\left\langle a u_{m+1}^j,u_{m+1}^j\right\rangle_{L^2(\Gamma_{m+1})}.\eqno{(2.3)}$$
By (1.4) and (2.3) we conclude
$$a_0\lambda^j\left\|u_{m+1}^j|_{\Gamma_{m+1}}\right\|_{L^2(\Gamma_{m+1})}^2\le \gamma\lambda^j\|u^j\|_H^2+\gamma^{-1}\lambda^{-j}\|v^j\|_H^2,\eqno{(2.4)}$$
for every $\gamma>0$. On the other hand, applying (2.2) with $w=u^j$ yields
$$\|u^j\|_H^2\le C\lambda^{-2}\|v^j\|_H^2+C\left\|u_{m+1}^j|_{\Gamma_{m+1}}\right\|_{L^2(\Gamma_{m+1})}^2.\eqno{(2.5)}$$
Combining (2.4) and (2.5) and taking $\gamma$ small enough, independent of $\lambda$, we get
$$\|u^j\|_H\le C\lambda^{-j}\|v^j\|_H,\eqno{(2.6)}$$
which is equivalent to (1.7) for real $\lambda\gg 1$. Clearly, the case $-\lambda\gg 1$ can be treated in the same way.\\
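For the reader's convenience, let us spell out the absorption step behind (2.6). Substituting the bound on the boundary term from (2.4) into (2.5) gives
$$\|u^j\|_H^2\le C\lambda^{-2}\|v^j\|_H^2+\frac{C}{a_0}\left(\gamma\|u^j\|_H^2+\gamma^{-1}\lambda^{-2j}\|v^j\|_H^2\right),$$
so the choice $\gamma=a_0/2C$ absorbs the term $\frac{1}{2}\|u^j\|_H^2$ into the left-hand side; since $\lambda^{-2}\le\lambda^{-2j}$ for $\lambda\ge 1$ and $j=0,1$, this yields (2.6).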
{\it Proof of Theorem 2.1.} Given any $1\le k\le m$, define the resolvent ${\cal R}_k(\lambda)$ as $u={\cal R}_k(\lambda)v$, where $u=(u_1,...,u_{k+1})$, $v=(v_1,...,v_{k+1})$ satisfy the equation
$$
\left\{
\begin{array}{l}
(\lambda^2+c_\ell^2\Delta)u_\ell=v_\ell\quad\mbox{in}\quad\Omega_\ell\setminus\Omega_{\ell-1},\,\ell=1,...,k,\\
(\lambda^2+c_{k+1}^2\Delta)u_{k+1}=v_{k+1}\quad\mbox{in}\quad{\bf R}^n\setminus\Omega_k,\\
Bu_1=0\quad\mbox{on}\quad\Gamma_0,\\
u_\ell=u_{\ell+1},\,\partial_\nu u_\ell=\partial_\nu u_{\ell+1}
\quad\mbox{on}\quad \Gamma_\ell,\,\ell=1,...,k,\\
u_{k+1} - \lambda-\mbox{outgoing}.
\end{array}
\right.
\eqno{(2.7)}
$$
Let us first see that Theorem 2.1 follows from the following
\begin{Theorem} Assume (1.3) and (1.6) fulfilled. Then, for every $1\le k\le m$ the cutoff resolvent $\chi{\cal R}_k(\lambda)\chi$ satisfies the estimate
$$\left\|\chi {\cal R}_k(\lambda)\chi\right\|_{L^2({\bf R}^n\setminus\Omega_k)\to L^2({\bf R}^n\setminus\Omega_k)}\le C|\lambda|^{-1}\quad\mbox{for}\quad \lambda\in{\bf R},\,|\lambda|\ge 1,\eqno{(2.8)}$$
where $\chi\in C_0^\infty({\bf R}^n)$, $\chi=1$ on $\Omega_k$.
\end{Theorem}
Choose a real-valued function $\rho\in C_0^\infty({\bf R})$, $0\le\rho\le 1$, $\rho(t)=1$ for $|t|\le\delta/2$, $\rho(t)=0$ for $|t|\ge\delta$,
$d\rho(t)/dt\le 0$ for $t\ge 0$, where $0<\delta\ll 1$ is a parameter. Given $x\in \Omega_{m+1}\setminus\Omega_m$, denote by $d(x)$ the distance between $x$ and $\Gamma_{m+1}$. Set $\psi(x)=\rho(d(x))$; then $\psi\in C^\infty(\Omega_{m+1})$, $\psi=1$ near $\Gamma_{m+1}$, $\psi=0$ on $\Omega_m$. The following estimate is proved in \cite{kn:CPV} (see Proposition 2.2) using in an essential way that the boundary $\Gamma_{m+1}$ is strictly concave viewed from the interior.
\begin{prop} There exist constants $C,\lambda_0,\delta_0>0$ so that if $0<\delta\le\delta_0$, $\lambda\ge\lambda_0$, we have the estimate
$$\|\psi u\|_{H^1(\Omega_{m+1}\setminus\Omega_m)}\le C\lambda^{-1}\|(\lambda^2+c_{m+1}^2\Delta) u\|_{L^2(\Omega_{m+1}\setminus\Omega_m)}$$
$$+C\left\|u|_{\Gamma_{m+1}}\right\|_{L^2(\Gamma_{m+1})}+C\lambda^{-1}\left\|\partial_\nu u|_{\Gamma_{m+1}}\right\|_{L^2(\Gamma_{m+1})}$$ $$
+ O_\delta(\lambda^{-1/2})\| u\|_{H^1(\Omega_{m+1}\setminus\Omega_m)},\quad\forall u\in H^2(\Omega_{m+1}\setminus\Omega_m),\eqno{(2.9)}$$
where the Sobolev space $H^1$ is equipped with the semi-classical norm with a small parameter $\lambda^{-1}$.
\end{prop}
Let $\chi\in C_0^\infty({\bf R}^n)$, $\chi=1$ on $\Omega_m$, supp$\,\chi\subset\Omega_{m+1}$. Clearly, the solution to (2.1) satisfies the equation
$$
\left\{
\begin{array}{l}
(\lambda^2+c_k^2\Delta)w_k=v_k\quad\mbox{in}\quad\Omega_k\setminus\Omega_{k-1},\,k=1,...,m,\\
(\lambda^2+c_{m+1}^2\Delta)\chi w_{m+1}=\chi v_{m+1}+c_{m+1}^2[\Delta,\chi] w_{m+1}\quad\mbox{in}\quad{\bf R}^n\setminus\Omega_m,\\
Bw_1=0\quad\mbox{on}\quad\Gamma_0,\\
w_k=w_{k+1},\,\partial_\nu w_k=\partial_\nu w_{k+1}
\quad\mbox{on}\quad \Gamma_k,\,k=1,...,m.
\end{array}
\right.
\eqno{(2.10)}
$$
Therefore, applying (2.8) with $k=m$ leads to the estimate
$$\sum_{k=1}^m\|w_k\|_{L^2(\Omega_k\setminus\Omega_{k-1})}+\|\chi w_{m+1}\|_{L^2({\bf R}^n\setminus\Omega_m)}$$ $$\le C\lambda^{-1}\sum_{k=1}^{m+1}\|v_k\|_{L^2(\Omega_k\setminus\Omega_{k-1})}+C\lambda^{-1}\|[\Delta,\chi] w_{m+1}\|_{L^2(\Omega_{m+1}\setminus\Omega_m)}.\eqno{(2.11)}$$
Choose $\chi$ so that $\psi=1$ on both supp$\,[\Delta,\chi]$ and supp$\,(1-\chi)|_{\Omega_{m+1}}$. Then (2.11) can be rewritten as follows
$$\sum_{k=1}^{m+1}\|w_k\|_{L^2(\Omega_k\setminus\Omega_{k-1})}\le C\lambda^{-1}\sum_{k=1}^{m+1}\|v_k\|_{L^2(\Omega_k\setminus\Omega_{k-1})}+C\|\psi w_{m+1}\|_{H^1(\Omega_{m+1}\setminus\Omega_m)},\eqno{(2.12)}$$
where again $H^1$ is equipped with the semiclassical norm. Using (2.9) with $u=w_{m+1}$ and combining with (2.12) leads to the estimate
$$\sum_{k=1}^{m+1}\|w_k\|_{L^2(\Omega_k\setminus\Omega_{k-1})}\le C\lambda^{-1}\sum_{k=1}^{m+1}\|v_k\|_{L^2(\Omega_k\setminus\Omega_{k-1})}+C\lambda^{-1/2}\| w_{m+1}\|_{H^1(\Omega_{m+1}\setminus\Omega_m)}$$ $$+C\left\|w_{m+1}|_{\Gamma_{m+1}}\right\|_{L^2(\Gamma_{m+1})}+C\lambda^{-1}\left\|\partial_\nu w_{m+1}|_{\Gamma_{m+1}}\right\|_{L^2(\Gamma_{m+1})}.\eqno{(2.13)}$$
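Let us record the two elementary observations behind the passage from (2.11) to (2.12). Since $\psi=1$ on supp$\,(1-\chi)|_{\Omega_{m+1}}$, we have
$$\|w_{m+1}\|_{L^2(\Omega_{m+1}\setminus\Omega_m)}\le\|\chi w_{m+1}\|_{L^2({\bf R}^n\setminus\Omega_m)}+\|\psi w_{m+1}\|_{L^2(\Omega_{m+1}\setminus\Omega_m)},$$
while, since $[\Delta,\chi]$ is a first-order differential operator and $\psi=1$ on a neighbourhood of its support,
$$\lambda^{-1}\|[\Delta,\chi]w_{m+1}\|_{L^2(\Omega_{m+1}\setminus\Omega_m)}\le C\|\psi w_{m+1}\|_{H^1(\Omega_{m+1}\setminus\Omega_m)},$$
with $H^1$ equipped with the semiclassical norm.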
On the other hand, by Green's formula we have
$$\sum_{k=1}^{m+1}{\rm Re}\,\left\langle w_k,v_k\right\rangle_{L^2(\Omega_k\setminus\Omega_{k-1},c_k^{-2}dx)}=\lambda^2\sum_{k=1}^{m+1}\|w_k\|^2_{L^2(\Omega_k\setminus\Omega_{k-1},c_k^{-2}dx)}$$ $$-\sum_{k=1}^{m+1}\|\nabla w_k\|^2_{L^2(\Omega_k\setminus\Omega_{k-1})}-{\rm Re}\,\left\langle \partial_\nu w_{m+1},w_{m+1}\right\rangle_{L^2(\Gamma_{m+1})},$$
which in turn implies
$$\sum_{k=1}^{m+1}\|\nabla w_k\|_{L^2(\Omega_k\setminus\Omega_{k-1})}\le C\lambda^{-1}\sum_{k=1}^{m+1}\|v_k\|_{L^2(\Omega_k\setminus\Omega_{k-1})}+C\sum_{k=1}^{m+1}\| w_k\|_{L^2(\Omega_k\setminus\Omega_{k-1})}$$
$$+C\lambda^{-1/2}\left(\left\|w_{m+1}|_{\Gamma_{m+1}}\right\|_{L^2(\Gamma_{m+1})}+\left\|\lambda^{-1}\partial_\nu w_{m+1}|_{\Gamma_{m+1}}\right\|_{L^2(\Gamma_{m+1})}\right).\eqno{(2.14)}$$
Combining (2.13) and (2.14) and taking $\lambda$ large enough, we conclude that the second term in the right-hand side of (2.13) can be absorbed, thus obtaining (2.2).
\mbox{\ }\hfill $\Box$ \par \vskip 10pt
{\it Proof of Theorem 2.2.} Since (2.8) holds true for $k=0$ in view of the assumption (1.6), one needs to show that (2.8) with $k-1$ implies (2.8) with $k$. This, however, is proved in \cite{kn:CPV} (see Theorem 1.1; see also Section 4 below).
\mbox{\ }\hfill $\Box$ \par \vskip 10pt
The fact that (1.7) implies (1.8) and (1.9) is more or less well known. In what follows we will sketch the main points. Define the operator $A_0$ on the Hilbert space ${\cal H}_0=H$ as follows
$$A_0u=(-c_1^2\Delta u_1,...,-c_{m+1}^2\Delta u_{m+1}),\quad u=(u_1,...,u_{m+1}),$$
with domain of definition
$${\cal D}(A_0)=\left\{u\in H: A_0u\in H, Bu_1|_{\Gamma_0}=0, u_k|_{\Gamma_k}=u_{k+1}|_{\Gamma_k}, \partial_\nu u_k|_{\Gamma_k}=\partial_\nu u_{k+1}|_{\Gamma_k}, k=1,...,m,\right.$$ $$\left. \partial_\nu u_{m+1}|_{\Gamma_{m+1}}=-ia(x)u_{m+1}|_{\Gamma_{m+1}}\right\}.$$
By Green's formula we have
$${\rm Im}\,\left\langle A_0u,u\right\rangle_H=-{\rm Im}\,\left\langle\partial_\nu u_{m+1},u_{m+1}\right\rangle_{L^2(\Gamma_{m+1})}=
\left\langle a u_{m+1},u_{m+1}\right\rangle_{L^2(\Gamma_{m+1})}\ge 0,$$
which in turn implies that $A_0$ is a generator of a semi-group $e^{itA_0}$. Then the solutions to (1.1) can be expressed by the formula
$$u^0(t)=e^{itA_0}u^0(0),\quad t\ge 0.$$
It follows from \cite{kn:B} that, under the assumption (1.4), $A_0$ has no eigenvalues on the real axis. Moreover, applying (1.7) with $j=0$ and $z=\lambda^2$ yields that the resolvent $(A_0-z)^{-1}$ is analytic in a strip $|{\rm Im}\,z|\le\gamma_0$, $\gamma_0>0$, and satisfies in this region the bound
$$\left\|(A_0-z)^{-1}\right\|_{{\cal H}_0\to {\cal H}_0}\le Const,$$
which in turn implies
$$\left\|e^{itA_0}\right\|_{{\cal H}_0\to {\cal H}_0}\le \widetilde C e^{-Ct},\quad t>0,\eqno{(2.15)}$$
with constants $\widetilde C,C>0$ independent of $t$. Clearly, (2.15) is equivalent to (1.8).
We would like to treat the equation (1.2) in a similar way. To this end, introduce the Hilbert space ${\cal H}_1=\dot H^1_B\oplus H$, where
$$\dot H^1_B=\dot H^1_B(\Omega_1\setminus\Omega_0)\oplus\oplus_{k=2}^{m+1}\dot H^1(\Omega_k\setminus\Omega_{k-1}),$$
$$\dot H^1(\Omega_k\setminus\Omega_{k-1})=\left\{u:\,\int_{\Omega_k\setminus\Omega_{k-1}}|\nabla u|^2dx<+\infty\right\},\quad 2\le k\le m+1,$$
$$\dot H^1_B(\Omega_1\setminus\Omega_0)=\left\{u:\,\int_{\Omega_1\setminus\Omega_0}|\nabla u|^2dx<+\infty\right\},\quad \mbox{if}\quad B=\partial_\nu,$$
$$\dot H^1_B(\Omega_1\setminus\Omega_0)=\left\{u:\,\int_{\Omega_1\setminus\Omega_0}|\nabla u|^2dx<+\infty,\,u|_{\Gamma_0}=0\right\},\quad \mbox{if}\quad B=Id.$$
On ${\cal H}_1$ define the operator $A_1$ as follows
$$A_1= -i\left( \begin{array}{ll} \hskip 0.5cm 0& Id\\ c^2(x) \Delta &0
\end{array}
\right),$$ where $$c^2(x) \Delta u:=(c_1^2\Delta u_1,...,c_{m+1}^2\Delta u_{m+1}),\quad u=(u_1,...,u_{m+1}),$$
with domain of definition
$${\cal D}(A_1)=\left\{(u,v)\in {\cal H}_1: v\in \dot H^1_B, c^2(x) \Delta u\in H, Bu_1|_{\Gamma_0}=0, u_k|_{\Gamma_k}=u_{k+1}|_{\Gamma_k}, \right.$$ $$\left. \partial_\nu u_k|_{\Gamma_k}=\partial_\nu u_{k+1}|_{\Gamma_k},k=1,...,m,\,\partial_\nu u_{m+1}|_{\Gamma_{m+1}}=-a(x)v_{m+1}|_{\Gamma_{m+1}}\right\}.$$
By Green's formula we have
$${\rm Im}\,\left\langle A_1\left(
\begin{array}{ll} u\\ v
\end{array}\right),\left(
\begin{array}{ll} u\\ v
\end{array}\right)\right\rangle_{{\cal H}_1}=-{\rm Re}\,\left\langle\left(
\begin{array}{ll} v\\ c^2(x)\Delta u
\end{array}\right),\left(
\begin{array}{ll} u\\ v
\end{array}\right)\right\rangle_{{\cal H}_1}$$ $$=-{\rm Re}\,\left\langle\partial_\nu u_{m+1},v_{m+1}\right\rangle_{L^2(\Gamma_{m+1})}=
\left\langle a v_{m+1},v_{m+1}\right\rangle_{L^2(\Gamma_{m+1})}\ge 0,$$
which in turn implies that $A_1$ is a generator of a semi-group $e^{itA_1}$. Then the solutions to (1.2) can be expressed by the formula
$$\left(
\begin{array}{ll} u^1(t)\\ \partial_tu^1(t)
\end{array}\right)=e^{itA_1}\left(\begin{array}{ll} u^1(0)\\ \partial_tu^1(0)
\end{array}\right),\quad t\ge 0.$$
It follows from \cite{kn:B} that, under the assumption (1.4), $A_1$ has no eigenvalues on the real axis. Moreover, applying (1.7) with $j=1$ and $z=\lambda$ yields that the resolvent $(A_1-z)^{-1}$ is analytic in a strip $|{\rm Im}\,z|\le\gamma_1$, $\gamma_1>0$, and satisfies in this region the bound
$$\left\|(A_1-z)^{-1}\right\|_{{\cal H}_1\to {\cal H}_1}\le Const,$$
which in turn implies
$$\left\|e^{itA_1}\right\|_{{\cal H}_1\to {\cal H}_1}\le \widetilde C e^{-Ct},\quad t>0,\eqno{(2.16)}$$
with constants $\widetilde C,C>0$ independent of $t$. It is easy to see that (2.16) is equivalent to (1.9).
Introduce the Hilbert space ${\cal H}=\dot H^1_{B,sc}\oplus H_{sc}$, where
$$H_{sc}:=\oplus_{k=1}^{m}L^2\left(\Omega_{k}\setminus\Omega_{k-1},c_k^{-2}dx\right)\oplus L^2\left({\bf R}^n\setminus\Omega_m,c_{m+1}^{-2}dx\right),$$
$$\dot H^1_{B,sc}=\dot H^1_B(\Omega_1\setminus\Omega_0)\oplus\oplus_{k=2}^{m}\dot H^1(\Omega_k\setminus\Omega_{k-1})\oplus\dot H^1\left({\bf R}^n\setminus\Omega_m\right),$$
$$\dot H^1({\bf R}^n\setminus\Omega_m)=\left\{u:\,\int_{{\bf R}^n\setminus\Omega_m}|\nabla u|^2dx<+\infty\right\}.$$
On ${\cal H}$ define the operator $A$ as follows
$$A= -i\left( \begin{array}{ll} \hskip 0.5cm 0& Id\\ c^2(x) \Delta &0
\end{array}
\right), $$
with domain of definition
$${\cal D}(A)=\left\{(u,v)\in {\cal H}: v\in \dot H^1_{B,sc}, c^2(x) \Delta u\in H_{sc}, Bu_1|_{\Gamma_0}=0, u_k|_{\Gamma_k}=u_{k+1}|_{\Gamma_k}, \right.$$ $$\left. \partial_\nu u_k|_{\Gamma_k}=\partial_\nu u_{k+1}|_{\Gamma_k},k=1,...,m\right\}.$$
By Green's formula we have
$${\rm Im}\,\left\langle A\left(
\begin{array}{ll} u\\ v
\end{array}\right),\left(
\begin{array}{ll} u\\ v
\end{array}\right)\right\rangle_{{\cal H}}=0,$$
which in turn implies that $A$ is a generator of a group $e^{itA}$. Then the solutions to (1.17) can be expressed by the formula
$$\left(
\begin{array}{ll} u(t)\\ \partial_tu(t)
\end{array}\right)=e^{itA}\left(\begin{array}{ll} u(0)\\ \partial_tu(0)
\end{array}\right),\quad t\ge 0.$$
As in \cite{kn:V}, it follows from (2.8) applied with $k=m$ and $z=\lambda$ that the cutoff resolvent $\chi(A-z)^{-1}\chi$ is analytic in a strip $|{\rm Im}\,z|\le\gamma$, $\gamma>0$, and satisfies in this region the bound
$$\left\|\chi(A-z)^{-1}\chi\right\|_{{\cal H}\to {\cal H}}\le Const,$$
where $\chi\in C_0^\infty({\bf R}^n)$, $\chi=1$ on $\Omega_m$.
This in turn implies (see \cite{kn:V}, \cite{kn:K})
$$\left\|\chi e^{itA}\chi\right\|_{{\cal H}\to {\cal H}}\le C_\chi p_0(t),\quad t>0,\eqno{(2.17)}$$
with a constant $C_\chi>0$ independent of $t$. It is easy to see that (2.17) is equivalent to (1.18).
\section{The case $\Omega_0$ trapping}
As in the previous section, Theorem 1.3 follows from the following
\begin{Theorem} Assume (1.3) and (1.12) fulfilled. Then, there exist constants $C,\lambda_0>0$ so that for $\lambda\ge\lambda_0$ the solution to (2.1) satisfies the estimate
$$(\log\lambda)^{-2^m}\|w\|_H\le C\lambda^{-1}\|v\|_H+C\left\|w_{m+1}|_{\Gamma_{m+1}}\right\|_{L^2(\Gamma_{m+1})}+C\lambda^{-1}\left\|\partial_\nu w_{m+1}|_{\Gamma_{m+1}}\right\|_{L^2(\Gamma_{m+1})}.\eqno{(3.1)}$$
\end{Theorem}
Moreover, proceeding as in Section 2 it is easy to see that Theorem 3.1 follows from the following theorem, the proof of which will be given in the next section.
\begin{Theorem} Assume (1.3) and (1.12) fulfilled. Then, for every $0\le k\le m$ the cutoff resolvent $\chi{\cal R}_k(\lambda)\chi$ is analytic in $\{\lambda\in{\bf C}:|{\rm Im}\,\lambda|\le C_1(\log|\lambda|)^{-2^k},\,|\lambda|\ge C_2\}$ and satisfies in this region the estimate
$$\left\|\chi {\cal R}_k(\lambda)\chi\right\|_{L^2({\bf R}^n\setminus\Omega_k)\to L^2({\bf R}^n\setminus\Omega_k)}\le C|\lambda|^{-1}(\log|\lambda|)^{2^k},\eqno{(3.2)}$$
where $C,C_1$ and $C_2$ are positive constants.
\end{Theorem}
\noindent
{\bf Remark.} It is natural to expect that (1.12) implies that all cutoff resolvents $\chi{\cal R}_k(\lambda)\chi$, $k=1,...,m$, are analytic in
some strip $\{|{\rm Im}\,\lambda|\le C_1,|\lambda|\ge C_2\}$, $C_1,C_2>0$. However, this remains a difficult open problem. Note that large resonance-free regions far from the real axis are obtained in \cite{kn:CPV2} under some natural assumptions.
To prove Corollary 1.4 observe first that (1.13) is equivalent to the estimate (with $j=0,1$)
$$\left\|(A_j-z)^{-1}\right\|_{{\cal H}_j\to {\cal H}_j}\le C\left(\log|z|\right)^{2^{m+1}}\quad{\rm for}\,\, z\in{\bf R},\,|z|\ge C',\eqno{(3.3)}$$
with some constants $C>0$, $C'>2$ independent of $z$. Clearly, (3.3) implies that $(A_j-z)^{-1}$ is analytic in
$$\Lambda=\left\{z\in {\bf C}: |{\rm Im}\,z|\le C_1\left(\log|z|\right)^{-2^{m+1}},\,|z|\ge C_2\right\}$$
and satisfies in this region the bound (3.3). Therefore, using the fact that the operators $A_j$ are elliptic together with a standard interpolation argument, we conclude that
$$\left\|(A_j-z)^{-1}\right\|_{{\cal H}_j^\varepsilon\to {\cal H}_j}\le C_\varepsilon\quad{\rm for}\,\, z\in\Lambda,\eqno{(3.4)}$$
for every $\varepsilon>0$ with a constant $C_\varepsilon>0$ independent of $z$, where ${\cal H}_0^\varepsilon:= H^\varepsilon$, while the norm
$\|\cdot\|_{{\cal H}_1^\varepsilon}$ is defined by replacing in the definition of ${\cal H}_1$ all norms $L^2$ by the Sobolev norms $H^\varepsilon$.
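The interpolation gains only an arbitrarily small power of $|z|$, but this is enough because the logarithmic loss in (3.3) is dominated by any such power: for every $\delta>0$ there is a constant $C_\delta>0$ such that
$$\left(\log|z|\right)^{2^{m+1}}\le C_\delta|z|^\delta\quad\mbox{for}\quad|z|\ge 2.$$
Hence an $\varepsilon$-derivative gain coming from the ellipticity of $A_j$ suffices to trade the factor $\left(\log|z|\right)^{2^{m+1}}$ in (3.3) for a constant, uniformly on $\Lambda$.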
On the other hand, proceeding as in \cite{kn:L} one can show that (3.4) implies
$$\left\|e^{itA_j}\right\|_{{\cal H}_j^\varepsilon\to {\cal H}_j}\le \widetilde C_\varepsilon\exp\left(-C\varepsilon t^{\alpha_{m+1}}\right),\quad t>0,\eqno{(3.5)}$$
for $0<\varepsilon\le\varepsilon_0$, with constants $C,\varepsilon_0>0$ independent of $t$ and $\varepsilon$, $\widetilde C_\varepsilon>0$ independent of $t$. Clearly, (3.5) is equivalent to (1.14) and (1.15), respectively.
Similarly, the estimate (3.2) with $k=m$ implies that the cutoff resolvent $\chi(A-z)^{-1}\chi$ is analytic in $\{z\in{\bf C}: |{\rm Im}\,z|\le C_1(\log|z|)^{-2^m},\,|z|\ge C_{2}\}$ and satisfies in this region the estimate
$$\left\|\chi(A-z)^{-1}\chi\right\|_{{\cal H}^\varepsilon\to {\cal H}}\le C_\varepsilon,\eqno{(3.6)}$$
where ${\cal H}^\varepsilon$ is defined as ${\cal H}_1^\varepsilon$ above. On the other hand, as in \cite{kn:PV1} one can show that (3.6) implies
$$\left\|\chi e^{itA}\chi\right\|_{{\cal H}^\varepsilon\to {\cal H}}\le C_{\chi,\varepsilon} p_\varepsilon(t),\quad t>0,\eqno{(3.7)}$$
with a constant $C_{\chi,\varepsilon}>0$ independent of $t$. It is easy to see that (3.7) is equivalent to (1.19).
\section{Proof of Theorem 3.2}
We will prove (3.2) by induction in $k$. Let us first see that the assumption (1.12) implies (3.2) with $k=0$. This is essentially proved in \cite{kn:Bu} (see Proposition 4.4 and Lemma 4.7). The idea is to apply the Phragm\`en-Lindel\"of principle to the operator-valued function
$$g(\lambda)=\frac{\lambda e^{iN\lambda\log\lambda}}{\log\lambda}\chi{\cal R}_0(\lambda)\chi,\quad {\rm Re}\,\lambda\ge C_2,$$
where $\log\lambda=\log|\lambda|+i\arg\lambda$ and $N>0$ is a sufficiently large constant. It is well known that the outgoing resolvent satisfies the bound
$$\left\|{\cal R}_0(\lambda)\right\|_{L^2({\bf R}^n\setminus\Omega_0)\to L^2({\bf R}^n\setminus\Omega_0)}\le\frac{1}{|\lambda||{\rm Im}\,\lambda|}\quad\mbox{for}\quad {\rm Im}\,\lambda<0.\eqno{(4.1)}$$
Hence, on ${\rm Im}\,\lambda=-(N\log|\lambda|)^{-1}$, ${\rm Re}\,\lambda\ge C_2$, we have the bound
$$\left\|g(\lambda)\right\|_{L^2\to L^2}\le \frac{Ce^{-N{\rm Im}\,(\lambda\log\lambda)}}{|{\rm Im}\,\lambda|\log|\lambda|}\le \frac{Ce^{N|{\rm Im}\,\lambda|\log|\lambda|}}{|{\rm Im}\,\lambda|\log|\lambda|}\le Const.\eqno{(4.2)}$$
On the other hand, by (1.12), on ${\rm Im}\,\lambda=C_1>0$, ${\rm Re}\,\lambda\ge C_2$, we have the bound
$$\left\|g(\lambda)\right\|_{L^2\to L^2}\le C|\lambda|^{p+1}e^{-N{\rm Im}\,(\lambda\log\lambda)}\le Ce^{(p+1-N{\rm Im}\,\lambda)\log|\lambda|}\le Const,\eqno{(4.3)}$$
if we choose $N=(p+1)/C_1$. By the Phragm\`en-Lindel\"of principle, we conclude from (4.2) and (4.3) that the function $g(\lambda)$ satisfies the bound
$$\left\|g(\lambda)\right\|_{L^2\to L^2}\le Const,\eqno{(4.4)}$$
in $-(N\log|\lambda|)^{-1}\le {\rm Im}\,\lambda\le C_1$, ${\rm Re}\,\lambda\ge C_2$. It follows from (4.4) that for $-(N\log|\lambda|)^{-1}\le {\rm Im}\,\lambda\le \varepsilon/2N$, ${\rm Re}\,\lambda\ge C_2$, $0<\varepsilon\ll 1$, we have
$$\left\|\lambda\chi{\cal R}_0(\lambda)\chi\right\|_{L^2\to L^2}\le C\log|\lambda|e^{N{\rm Im}\,(\lambda\log\lambda)}\le C\log|\lambda|e^{\frac{\varepsilon \log|\lambda|}{2}}\le C\varepsilon^{-1}|\lambda|^\varepsilon,\eqno{(4.5)}$$
with a constant $C>0$ independent of $\lambda$ and $\varepsilon$. On the other hand, for $-\varepsilon/2N\le {\rm Im}\,\lambda\le -(N\log|\lambda|)^{-1}$ the estimate (4.5) follows from (4.1). Thus we conclude that (4.5) holds for $|{\rm Im}\,\lambda|\le \varepsilon/2N$, ${\rm Re}\,\lambda\ge C_2$. Clearly, the case ${\rm Re}\,\lambda\le -C_2$ can be treated similarly. Taking $\varepsilon$ such that $|\lambda|^\varepsilon=2$, we obtain (3.2) with $k=0$.
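Let us make the last step explicit. Taking $\varepsilon=\log 2/\log|\lambda|$, so that $|\lambda|^\varepsilon=2$, the estimate (4.5) becomes
$$\left\|\lambda\chi{\cal R}_0(\lambda)\chi\right\|_{L^2\to L^2}\le\frac{2C}{\log 2}\,\log|\lambda|\quad\mbox{for}\quad|{\rm Im}\,\lambda|\le\frac{\log 2}{2N\log|\lambda|},\,|{\rm Re}\,\lambda|\ge C_2,$$
which is exactly (3.2) with $k=0$ (recall $2^0=1$), including the logarithmic width of the region of analyticity.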
Thus, to prove Theorem 3.2 it suffices to show that (3.2) with $k-1$, $1\le k\le m$, implies (3.2) with $k$.
Let $w=(w_1,...,w_k)$, $v=(v_1,...,v_k)$ satisfy the equation
$$
\left\{
\begin{array}{l}
(\lambda^2+c_\ell^2\Delta)w_\ell=v_\ell\quad\mbox{in}\quad\Omega_\ell\setminus\Omega_{\ell-1},\,\ell=1,...,k,\\
Bw_1=0\quad\mbox{on}\quad\Gamma_0,\\
w_\ell=w_{\ell+1},\,\partial_\nu w_\ell=\partial_\nu w_{\ell+1}
\quad\mbox{on}\quad \Gamma_\ell,\,\ell=1,...,k-1.
\end{array}
\right.
\eqno{(4.6)}
$$
We need the following extension of Theorem 2.1.
\begin{Theorem} Assume (3.2) fulfilled with $k-1$. Then, there exist constants $C,\lambda_0>0$ so that for $\lambda\ge\lambda_0$
the solution to (4.6) satisfies the estimate
$$(\log\lambda)^{-2^{k-1}}\|w\|_{H_k}\le C\lambda^{-1}\|v\|_{H_k}+C\left\|w_k|_{\Gamma_k}\right\|_{L^2(\Gamma_k)}+C\lambda^{-1}\left\|\partial_\nu w_k|_{\Gamma_k}\right\|_{L^2(\Gamma_k)},\eqno{(4.7)}$$
where $H_k:=\oplus_{\ell=1}^kL^2\left(\Omega_\ell\setminus\Omega_{\ell-1},c_\ell^{-2}dx\right)$.
\end{Theorem}
{\it Proof.} Let $\chi\in C_0^\infty({\bf R}^n)$, $\chi=1$ on $\Omega_{k-1}$, supp$\,\chi\subset\Omega_k$, be such that $\psi=1$ on both supp$\,[\Delta,\chi]$ and supp$\,(1-\chi)|_{\Omega_k}$, where $\psi$ now denotes the analogous cut-off function constructed as in Section 2 with $m+1$ replaced by $k$. We have
$$\chi w=\chi_1{\cal R}_{k-1}(\lambda)\chi_1\left(\chi v+[\Delta,\chi]w_k\right),\eqno{(4.8)}$$
where $\chi_1=1$ on supp$\,\chi$, supp$\,\chi_1\subset\Omega_k$. By (3.2) with $k-1$ and (4.8) we conclude
$$(\log\lambda)^{-2^{k-1}}\|w\|_{H_k}\le(\log\lambda)^{-2^{k-1}}\left(\|\chi w\|_{H_k}+\|\psi w_k\|_{L^2(\Omega_k\setminus\Omega_{k-1})}\right)$$ $$\le C\lambda^{-1}\|v\|_{H_k}+C\|\psi w_k\|_{H^1(\Omega_k\setminus\Omega_{k-1})},\eqno{(4.9)}$$
where $H^1$ is equipped with the semiclassical norm. By (2.9) (applied with $m+1$ replaced by $k$) and (4.9),
$$(\log\lambda)^{-2^{k-1}}\|w\|_{H_k}\le C\lambda^{-1}\|v\|_{H_k}+C\lambda^{-1/2}\| w_k\|_{H^1(\Omega_k\setminus\Omega_{k-1})}$$ $$+C\left\|w_k|_{\Gamma_k}\right\|_{L^2(\Gamma_k)}+C\lambda^{-1}\left\|\partial_\nu w_k|_{\Gamma_k}\right\|_{L^2(\Gamma_k)}.\eqno{(4.10)}$$
On the other hand, we have an analogue of (2.14) with $m+1$ replaced by $k$, which together with (4.10) yields
$$(\log\lambda)^{-2^{k-1}}\left(\|w\|_{H_k}+\| w_k\|_{H^1(\Omega_k\setminus\Omega_{k-1})}\right)\le C\lambda^{-1}\|v\|_{H_k}+C\lambda^{-1/2}\| w_k\|_{H^1(\Omega_k\setminus\Omega_{k-1})}$$ $$+C\left\|w_k|_{\Gamma_k}\right\|_{L^2(\Gamma_k)}+C\lambda^{-1}\left\|\partial_\nu w_k|_{\Gamma_k}\right\|_{L^2(\Gamma_k)}.\eqno{(4.11)}$$
Clearly, we can absorb the second term in the right-hand side of (4.11) by taking $\lambda$ big enough, thus obtaining (4.7).
\mbox{\ }\hfill $\Box$ \par \vskip 10pt
Note that it suffices to prove (3.2) for $\lambda\in{\bf R}$, $|\lambda|\gg 1$, only (see \cite{kn:V}). Without loss of generality we may suppose $\lambda>0$.
Let $u=(u_1,...,u_{k+1})$, $v=(v_1,...,v_{k+1})$ satisfy the equation (2.7) with supp$\,v_{k+1}\subset K$, where $K\subset{\bf R}^n\setminus\Omega_k$ is a compact set. Set $f_k=u_{k+1}|_{\Gamma_k}=u_{k}|_{\Gamma_k}$. Define the outgoing Neumann operator, $N_k(\lambda)$, for the exterior problem in ${\bf R}^n\setminus\Omega_k$ as follows
$$N_k(\lambda)f=\lambda^{-1}\partial_{\nu'}U_k(\lambda)f|_{\Gamma_k},$$
where $\nu'$ is the outer unit normal to $\Gamma_k$, and $U_k(\lambda)$ solves the equation
$$
\left\{
\begin{array}{l}
(\lambda^2+c_{k+1}^2\Delta)U_k(\lambda)f=0\quad\mbox{in}\quad{\bf R}^n\setminus\Omega_k,\\
U_k(\lambda)f=f\quad\mbox{on}\quad\Gamma_k,\\
U_k(\lambda)f - \lambda - \mbox{outgoing}.
\end{array}
\right.
\eqno{(4.12)}
$$
Define also the operator $G_k(\lambda)$ via the equation
$$
\left\{
\begin{array}{l}
(\lambda^2+c_{k+1}^2\Delta)G_k(\lambda)f=f\quad\mbox{in}\quad{\bf R}^n\setminus\Omega_k,\\
G_k(\lambda)f=0\quad\mbox{on}\quad\Gamma_k,\\
G_k(\lambda)f - \lambda - \mbox{outgoing}.
\end{array}
\right.
\eqno{(4.13)}
$$
Set $\widetilde u_{k+1}=u_{k+1}-G_k(\lambda)v_{k+1}$. Then, the equation (2.7) can be rewritten as follows
$$
\left\{
\begin{array}{l}
(\lambda^2+c_\ell^2\Delta)u_\ell=v_\ell\quad\mbox{in}\quad\Omega_\ell\setminus\Omega_{\ell-1},\,\ell=1,...,k,\\
(\lambda^2+c_{k+1}^2\Delta)\widetilde u_{k+1}=0\quad\mbox{in}\quad{\bf R}^n\setminus\Omega_k,\\
Bu_1=0\quad\mbox{on}\quad\Gamma_0,\\
u_\ell=u_{\ell+1},\,\partial_\nu u_\ell=\partial_\nu u_{\ell+1}
\quad\mbox{on}\quad \Gamma_\ell,\,\ell=1,...,k-1,\\
\widetilde u_{k+1}=u_k,\,\partial_{\nu'}\widetilde u_{k+1}=-\partial_\nu u_k+\lambda h_k,\quad\mbox{on}\quad\Gamma_k,\\
\widetilde u_{k+1} - \lambda-\mbox{outgoing},
\end{array}
\right.
\eqno{(4.14)}
$$
where $h_k=\lambda^{-1}\partial_{\nu'}G_k(\lambda)v_{k+1}|_{\Gamma_k}$, and we have used that $\nu'=-\nu$. Hence $\widetilde u_{k+1}=U_k(\lambda)f_k$, and (4.14) implies
$$
\left\{
\begin{array}{l}
(\lambda^2+c_\ell^2\Delta)u_\ell=v_\ell\quad\mbox{in}\quad\Omega_\ell\setminus\Omega_{\ell-1},\,\ell=1,...,k,\\
Bu_1=0\quad\mbox{on}\quad\Gamma_0,\\
u_\ell=u_{\ell+1},\,\partial_\nu u_\ell=\partial_\nu u_{\ell+1}
\quad\mbox{on}\quad \Gamma_\ell,\,\ell=1,...,k-1,\\
u_{k}=f_k,\,\lambda^{-1}\partial_{\nu} u_{k}=-N_k(\lambda)f_k+h_k,\quad\mbox{on}\quad\Gamma_k.\\
\end{array}
\right.
\eqno{(4.15)}
$$
The fact that $\Omega_k$ is strictly convex implies the bounds (see Theorem 3.1 of \cite{kn:CPV}):
$$\|h_k\|_{L^2(\Gamma_k)}\le C_K\lambda^{-1}\|v_{k+1}\|_{L^2({\bf R}^n\setminus\Omega_k)},\eqno{(4.16)}$$
$$\|u_{k+1}\|_{L^2(K)}\le \|U_{k}(\lambda)f_k\|_{L^2(K)}+\|G_{k}(\lambda)v_{k+1}\|_{L^2(K)}$$ $$\le C_K\|f_k\|_{H^1(\Gamma_k)}+C_K\lambda^{-1}\|v_{k+1}\|_{L^2({\bf R}^n\setminus\Omega_k)}.\eqno{(4.17)}$$
Hereafter all Sobolev spaces $H^1$ will be equipped with the semi-classical norm. Applying Green's formula to the solutions of (4.15) leads to the identity
$$-\lambda{\rm Im}\,\left\langle N_k(\lambda)f_k,f_k\right\rangle_{L^2(\Gamma_k)}+\lambda{\rm Im}\,\left\langle h_k,f_k\right\rangle_{L^2(\Gamma_k)}={\rm Im}\,\left\langle \partial_\nu u_k|_{\Gamma_k},u_k|_{\Gamma_k}\right\rangle_{L^2(\Gamma_k)}$$
$$=-\sum_{\ell=1}^k{\rm Im}\,\left\langle u_\ell,v_\ell\right\rangle_{L^2(\Omega_\ell\setminus\Omega_{\ell-1},c_\ell^{-2}dx)}.\eqno{(4.18)}$$
Hence, $\forall\beta>0$, we have
$$-{\rm Im}\,\left\langle N_k(\lambda)f_k,f_k\right\rangle_{L^2(\Gamma_k)}\le \beta^2\|f_k\|_{L^2(\Gamma_k)}^2+\beta^{-2}\|h_k\|_{L^2(\Gamma_k)}^2+\beta^2\|u\|_{H_k}^2+\beta^{-2}\lambda^{-2}\|v\|_{H_k}^2.\eqno{(4.19)}$$
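For clarity, (4.19) follows from (4.18) by an elementary step: divide (4.18) by $\lambda$, bound each inner product by the Cauchy-Schwarz inequality, and then apply the estimate
$$ab\le\beta^2a^2+\beta^{-2}b^2,\qquad a,\,b\ge 0,\quad \beta>0,$$
to each of the resulting products.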
Since $\Omega_k$ is strictly convex, the Neumann operator satisfies the bound (e.g. see Corollary 3.3 of \cite{kn:CPV})
$$\left\| N_k(\lambda)f_k\right\|_{L^2(\Gamma_k)}\le C\|f_k\|_{H^1(\Gamma_k)}.\eqno{(4.20)}$$
Applying Theorem 4.1 with $w=(u_1,...,u_k)$ and using (4.20), we get
$$(\log\lambda)^{-2^{k-1}}\|u\|_{H_k}\le C\lambda^{-1}\|v\|_{H_k}+C\|f_k\|_{H^1(\Gamma_k)}.\eqno{(4.21)}$$
Choose a function $\eta_k\in C^\infty(T^*\Gamma_k)$ such that $\eta_k=1$ on $\{\zeta\in T^*\Gamma_k:\|\zeta\|\le c_k^{-1}+\epsilon\}$,
$\eta_k=0$ on $\{\zeta\in T^*\Gamma_k:\|\zeta\|\ge c_{k+1}^{-1}-\epsilon\}$, $0<\epsilon\ll 1$, which is possible in view of (1.3).
Recall that $\|\zeta\|^2$ is the principal symbol of the (positive) Laplace-Beltrami operator on $\Gamma_k$ evaluated at $\zeta$.
We will denote by ${\rm Op}_\lambda(\eta_k)$ the $\lambda-\Psi$DO on $\Gamma_k$ with symbol $\eta_k$. Since $\Omega_k$ is strictly convex and $\eta_k$ is supported in the hyperbolic region for the corresponding exterior boundary value problem, it is well known that $N_k(\lambda){\rm Op}_\lambda(\eta_k)$ is a $\lambda-\Psi$DO with principal symbol $-i\eta_k(\zeta)\sqrt{c_{k+1}^{-2}-\|\zeta\|^2}$ (e.g. see the appendix of \cite{kn:G}). This together with (4.20) and G\"arding's inequality imply immediately the following
\begin{lemma} There exist constants $C_1,C_2>0$ such that we have
$$-{\rm Im}\,\left\langle N_k(\lambda)f_k,f_k\right\rangle_{L^2(\Gamma_k)}\ge C_1\|f_k\|_{L^2(\Gamma_k)}^2-C_2\|{\rm Op}_\lambda(1-\eta_k)f_k\|_{H^1(\Gamma_k)}^2.\eqno{(4.22)}$$
\end{lemma}
By (4.16), (4.17), (4.19), (4.21), (4.22), taking $\beta=\beta'(\log\lambda)^{-2^{k-1}}$ with $\beta'>0$ small enough independent of $\lambda$, we conclude
$$(\log\lambda)^{-2^{k-1}}\left(\|u\|_{H_k}+\|u_{k+1}\|_{L^2(K)}+\|f_k\|_{L^2(\Gamma_k)}\right)$$ $$\le C\lambda^{-1}(\log\lambda)^{2^{k-1}}\left(\|v\|_{H_k}+\|v_{k+1}\|_{L^2({\bf R}^n\setminus\Omega_k)}\right)+C\|{\rm Op}_\lambda(1-\eta_k)f_k\|_{H^1(\Gamma_k)}.\eqno{(4.23)}$$
On the other hand, the fact that $1-\eta_k$ is supported in the elliptic region for the corresponding interior boundary value problem implies the following
\begin{prop} There exist constants $C,\lambda_0>0$ so that for $\lambda\ge\lambda_0$ we have
$$\|{\rm Op}_\lambda(1-\eta_k)f_k\|_{H^1(\Gamma_k)}\le C\lambda^{-3/2}\|v_k\|_{L^2(\Omega_k\setminus\Omega_{k-1})}+C\lambda^{-1/2}\|u_k\|_{L^2(\Omega_k\setminus\Omega_{k-1})}$$ $$
+C\lambda^{-1/3}\|f_k\|_{L^2(\Gamma_k)}+C\|h_k\|_{L^2(\Gamma_k)}.\eqno{(4.24)}$$
\end{prop}
{\it Proof.} Choose a smooth function $\psi$ such that $\psi=1$ on $\{x: {\rm dist}(x,\Gamma_k)\le\delta\}$, $\psi=0$ outside $\{x: {\rm dist}(x,\Gamma_k)\le 2\delta\}$, where $\delta>0$ is a small parameter independent of $\lambda$. Set $\varphi(\zeta)=(1-\eta_k(\zeta))\langle\zeta\rangle$, $\zeta\in T^*\Gamma_k$, $w_k=\psi{\rm Op}_\lambda(\varphi)u_k$. Clearly,
$g_k:=w_k|_{\Gamma_k}={\rm Op}_\lambda(\varphi)f_k$,
$$\lambda^{-1}\partial_\nu w_k|_{\Gamma_k}=\lambda^{-1}{\rm Op}_\lambda(\varphi)\partial_\nu u_k|_{\Gamma_k}=-{\rm Op}_\lambda(\varphi)N_k(\lambda)f_k+{\rm Op}_\lambda(\varphi)h_k$$
$$=-N_k(\lambda)g_k+[{\rm Op}_\lambda(\varphi),N_k(\lambda)]f_k+{\rm Op}_\lambda(\varphi)h_k.$$
By Green's formula we have
$$\lambda M+\lambda^{-1}{\rm Re}\,\left\langle\left(c_k^2\Delta+\lambda^{2}\right)w_k,w_k\right\rangle_{L^2(\Omega_k\setminus\Omega_{k-1},c_k^{-2}dx)}=
-{\rm Re}\,\left\langle\lambda^{-1}\partial_\nu w_k|_{\Gamma_k},w_k|_{\Gamma_k}\right\rangle_{L^2(\Gamma_k)}$$
$$={\rm Re}\,\left\langle N_k(\lambda)g_k,g_k\right\rangle_{L^2(\Gamma_k)}-{\rm Re}\,\left\langle [{\rm Op}_\lambda(\varphi),N_k(\lambda)]f_k,g_k\right\rangle_{L^2(\Gamma_k)}-{\rm Re}\,\left\langle {\rm Op}_\lambda(\varphi)h_k,g_k\right\rangle_{L^2(\Gamma_k)},\eqno{(4.25)}$$
where
$$M=\left\|\lambda^{-1}\nabla w_k\right\|_{L^2(\Omega_k\setminus\Omega_{k-1})}^2-c_k^{-2}\left\| w_k\right\|_{L^2(\Omega_k\setminus\Omega_{k-1})}^2.$$
Let us see that
$$\|g_k\|_{L^2(\Gamma_k)}^2\le C\lambda M,\quad C>0.\eqno{(4.26)}$$
Denote by $x_n>0$ the normal coordinate to $\Gamma_k$, i.e. given $x\in \Omega_k$, we have $x_n={\rm dist}(x,\Gamma_k)$. Given $0<x_n\le 2\delta\ll 1$, set $\Gamma_k(x_n)=\{x\in \Omega_k:{\rm dist}(x,\Gamma_k)=x_n\}$. Clearly, $M$ can be written in the form
$$M=\left\|\lambda^{-1}\partial_{x_n}w_k\right\|_{L^2}^2+\left\langle\left(-\lambda^{-2}\Delta_{\Gamma_k(x_n)}-c_k^{-2}\right)w_k,w_k\right\rangle_{L^2},$$
where $\Delta_{\Gamma_k(x_n)}$ denotes the (negative) Laplace-Beltrami operator on $\Gamma_k(x_n)$. Since $1-\eta_k$ is supported in the elliptic region $\{\zeta\in T^*\Gamma_k:\|\zeta\|>c_k^{-1}\}$, taking $\delta>0$ small enough we can arrange that on supp$\,\psi(1-\eta_k)$ the principal symbol of the operator $-\lambda^{-2}\Delta_{\Gamma_k(x_n)}-c_k^{-2}$ (considered as a semi-classical differential operator with a small parameter $\lambda^{-1}$) is lower bounded by a constant $C>0$ times the principal symbol of $-\lambda^{-2}\Delta_{\Gamma_k(x_n)}+1$. Therefore, by G\"arding's inequality we conclude
$$M\ge C\|w_k\|_{H^1(\Omega_k\setminus\Omega_{k-1})}^2,\quad C>0.\eqno{(4.27)}$$
On the other hand, by the trace theorem we have
$$\|g_k\|_{L^2(\Gamma_k)}^2\le C\lambda \|w_k\|_{H^1(\Omega_k\setminus\Omega_{k-1})}^2,\quad C>0.\eqno{(4.28)}$$
Clearly, (4.26) follows from (4.27) and (4.28).
Since $\Omega_k$ is strictly convex, the Neumann operator $N_k(\lambda)$ is a $\lambda-\Psi$DO with a principal symbol having a non-positive real part. The following properties of $N_k$ are proved in Section 3 of \cite{kn:CPV} (see Proposition 3.4).
\begin{lemma} There exists a constant $C>0$ such that we have
$${\rm Re}\,\left\langle N_k(\lambda)f,f\right\rangle_{L^2(\Gamma_k)}\le C\lambda^{-1/3}\|f\|_{L^2(\Gamma_k)}^2,\eqno{(4.29)}$$
$$\left\|[{\rm Op}_\lambda(\varphi),N_k(\lambda)]f\right\|_{L^2(\Gamma_k)}\le C\lambda^{-1/3}\|f\|_{H^1(\Gamma_k)}.\eqno{(4.30)}$$
\end{lemma}
Since $\|f_k\|_{H^1(\Gamma_k)}$ is equivalent to $\|g_k\|_{L^2(\Gamma_k)}$, and using the estimate
$$\|f_k\|_{H^1(\Gamma_k)}\le C\|f_k\|_{L^2(\Gamma_k)}+\|{\rm Op}_\lambda(1-\eta_k)f_k\|_{H^1(\Gamma_k)},$$
one can easily see that (4.24) follows by combining (4.25), (4.26), (4.29) and (4.30).
\mbox{\ }\hfill $\Box$ \par \vskip 10pt
Combining (4.23) and (4.24) and taking $\lambda$ big enough, we conclude
$$\|u\|_{H_k}+\|u_{k+1}\|_{L^2(K)}\le C\lambda^{-1}(\log\lambda)^{2^{k}}\left(\|v\|_{H_k}+\|v_{k+1}\|_{L^2({\bf R}^n\setminus\Omega_k)}\right).\eqno{(4.31)}$$
Clearly, (4.31) is equivalent to (3.2) for real $\lambda\gg 1$, which is the desired result.
\mbox{\ }\hfill $\Box$ \par \vskip 10pt
{\bf Acknowledgements.} A part of this work was carried out while F. Cardoso was visiting the University of Nantes in May 2009 with the support of the agreement Brazil-France in Mathematics - Proc. 69.0014/01-5. The first author is also partially supported by the CNPq-Brazil.
| 23,649 |
\section{Introduction}
We shall study a generalized eigenvalue problem of the form
\begin{equation}
\label{prob}
\sigma A(k) U = L(k) U
\end{equation}
where $L(k)$ and $A(k)$
are (possibly unbounded) operators on some Hilbert space $H$
which depend smoothly on the real parameter $k$, with $L(k)$ symmetric.
Our aim is to give an elementary criterion which ensures the existence of $\sigma>0$ and
$k\neq 0$ such that \eqref{prob} has a nontrivial solution $U$. Our motivation for this problem
is the study of transverse instability of solitary waves in Hamiltonian partial differential equations.
Indeed, let us consider a formally Hamiltonian PDE, say in $\mathbb{R}^2$, of the form
\begin{equation}
\label{pde}
\partial_{t} \,\mathcal{U} = \mathcal{J} \nabla H( \mathcal{U}), \quad \mathcal{J}^*=-\mathcal{J}
\end{equation}
and assume that there is a critical point of the Hamiltonian (hence a stationary solution) $ \mathcal{U}(x,y)= Q(x)$ which depends only on one variable.
Note that many equations of mathematical physics have one-dimensional
solitary wave solutions which can be seen as critical points of a modified Hamiltonian after
a suitable change of frame. We shall consider a few examples below. An interesting question
is the stability of the one-dimensional state when it is submitted to general two-dimensional
perturbations. There are many examples where the one-dimensional state, even if it is stable
under one-dimensional perturbations, is destabilized by transverse oscillations; we refer for
example to \cite{Z2}, \cite{APS}, \cite{Kiv}.
In our previous works \cite{RT1}, \cite{RT2}, \cite{RT3}, we have developed a framework which allows
one to pass from spectral instability to nonlinear instability. The aim of this note
is to state a general criterion which yields spectral instability.
Note that the linearization of \eqref{pde} about $Q$ reads
\begin{equation}
\label{pdelin}
\partial_{t} V = \mathcal{J} \mathcal{L} V
\end{equation}
where $\mathcal{L}= D \nabla H(Q)$ is a symmetric operator. Since $Q$ does not depend on
the transverse variable $y$, if $\mathcal{J}$ and $H$ are invariant under translations in $y$, we can look for a solution of \eqref{pdelin} of the form
\begin{equation}
\label{fourier} V(t,x,y)= e^{\sigma t } e^{iky } U(x).\end{equation}
This yields an eigenvalue problem for $U$ of the form
\begin{equation}
\label{probham} \sigma U= (J M) (k) U\end{equation}
where $M(k)$, $J(k)$ defined by
$$ M(k) U= e^{-i ky } \mathcal{L}(e^{iky} U), \quad J(k)U = e^{-i ky } \mathcal{J}(e^{iky} U) $$
are operators acting only in the $x$ variable.
Consequently, if $J(k)$ is invertible, we can put the problem in the form \eqref{prob} with
$A(k)= J(k)^{-1}$. As we shall see in the examples,
it may happen that the skew-symmetric operator $J(k)$
(which very often does not depend on $k$) is
not invertible.
In these cases, we can also recast the problem in the form \eqref{prob}.
For example, we can look for solutions of \eqref{probham} of the form $U=J(k)^*V$
and thus get a problem of the form \eqref{prob} with $A(k)= J(k)^*$ and
$L(k) = J(k)M(k) J(k)^*.$
For the sake of simplicity, we shall work within a real framework but
our result can be easily generalized to complex Hilbert spaces.
We shall also study \eqref{prob} only for $k>0$.
A similar instability criterion for $k<0$ can be obtained by setting $\tilde{A}(k)= A(-k)$,
$\tilde{L}(k)= L(-k)$ and by studying the problem \eqref{prob} for $\tilde{A}$ and $\tilde{L}$.
Let us fix the functional framework. We consider a (real) Hilbert space $H$
with scalar product $(\cdot, \cdot)$. We
assume that
$L(k)$ is a self-adjoint unbounded operator with domain $\mathcal{D}$
continuously imbedded in $H$ and independent of
the real parameter $k$. Moreover,
$L(k)$ as an operator from $\mathcal{D}$ to $H$ is assumed to depend smoothly on $k$.
Finally, we also assume that $A(k)\in \mathcal{L}(\mathcal{D}, H)$ and depends smoothly
on $k$. A $\mathcal{C}^1$ dependence is actually sufficient for our purpose.
Our aim here is to present a criterion which allows one to prove transverse instability in solitary
wave stability problems. This amounts to proving the existence of a nontrivial solution
of \eqref{prob} with $k \neq 0$ and $\sigma$ with positive real part.
In solitary wave stability problems, $0$ is very often (when the problem is translation invariant in $x$)
an eigenvalue of $L(0)$ with eigenvector $Q'$. Consequently, since we know
that \eqref{prob} has a nontrivial solution for $\sigma=0$, $k=0$, we can look for a solution
$(\sigma, U, k)$
of \eqref{prob} in the vicinity of this particular solution. The main difficulty in implementing
this strategy is that, again very often in solitary wave stability problems,
$0$ lies in the essential spectrum of $JM(0)$; therefore the standard Lyapunov-Schmidt reduction
cannot be used. One way to solve this problem is to introduce an Evans function
with parameter
$D(\sigma, k)$ (we refer for example to \cite{AGJ}, \cite{GZ}, \cite{PW}, \cite{KS} for the definition
of the Evans function)
for the operator $JM(k)$ and then to study the zeroes of $D$ in the vicinity of $(0,0)$
(after having proven that $D$ has in a suitable sense a smooth continuation in the vicinity of $(0,0)$).
We refer for example to \cite{Bridges}, \cite{Benzoni}, \cite{Zumbrun-Serre}
for the study of various examples.
Let us also mention \cite{chug}, \cite{PS}, \cite{GHS}, for other approaches,
where the eigenvalue problem
is not reformulated as an ODE with parameters.
Here we shall present a simple approach which relies only on properties
of $L(k)$ that are rather easy to check (mainly because it is a self-adjoint operator) and does not
rely, in principle, on the reformulation of the problem as an ODE.
Our main assumptions are the following:
\begin{itemize}
\item[{\bf (H1)}] There exist $K>0$ and $\alpha>0$ such that $L(k) \geq \alpha\,{\rm Id}$ for $|k| \geq K$;
\item[{\bf(H2)}] The essential spectrum $Sp_{ess}(L(k))$ of $L(k)$
is included in $[c_{k}, + \infty)$ with $c_{k}>0$ for $k \neq 0$;
\item [\bf{(H3)}] For every $k_{1} \geq k_{2} \geq 0$, we have $L(k_{1}) \geq L(k_{2})$. In addition, if for some $k>0$ and $U \neq 0$, we have $L(k)U= 0$, then
$(L'(k)U, U) > 0$
(with $L'(k)$ the derivative of $L$ with respect to $k$);
\item[\bf{(H4)}] The spectrum $Sp(L(0))$ of $L(0)$ is of the form
$ \{- \lambda \} \cup I$ where $- \lambda <0$ is an isolated
simple eigenvalue and $I$ is included in $[0, + \infty)$.
\end{itemize}
Let us point out that the structure of the spectrum of $L(0)$ assumed in (H4)
is one of the assumptions needed to have the one-dimensional stability of the wave
(at least when there is a one-dimensional group of invariance in the problem),
we refer to \cite{GSS}. Note that $0$ may be embedded in the essential spectrum
of $L(0)$.
\bigskip
Our main result is the following:
\begin{theoreme}
\label{main}
Assuming (H1-4), there exists $\sigma >0$, $k \neq 0$ and $U\in \mathcal{D}\backslash\{0\}$
solutions of \eqref{prob}.
\end{theoreme}
Note that we get an instability with $\sigma$ real and positive.
Once the spectral instability is established, one may use the general framework developed
in \cite{RT2} to prove the nonlinear instability of the wave.
The assumption (H3) is clearly satisfied if $L'(k)$ is positive for every $k>0$. This last property holds
for all the examples that we shall discuss in this paper. Moreover, if $L'(k)$ is positive for $k>0$, the proof of Theorem~\ref{main} can be slightly simplified (see Remark~\ref{referee} below).
The paper is organized as follows.
In the following section, we shall give the proof of Theorem \ref{main}.
Next, in order to show how our abstract result can be applied, we shall study various examples: the KP-I equation,
the Euler-Korteweg system and the Gross-Pitaevskii equation.
Note that we have already used similar arguments to prove the instability of capillary-gravity
solitary water-waves in \cite{RT3}.
We hope that our approach can be useful
for other examples; we also believe that it can be adapted to many situations
with slight modifications.
\section{Proof of Theorem \ref{main}}
The first step is to prove that there exists $k_{0}>0$ such that
$L(k_{0})$ has a one-dimensional kernel.
Let us set
$$ f(k) = \inf_{ \|U \|=1} (L(k) U, U).$$
Note that by (H4) $L(0)$ has a negative eigenvalue, hence
we have on the one hand that
$f(0)<0$. On the other hand by assumption (H1), we have that $f(k)>0$
for $k \geq K$.
Since $f$ is continuous, this implies that there exists a minimal $k_{0} >0$
such that $f(k_{0})=0$.
For every $k<k_{0}$, we get that $L(k)$ has a negative eigenvalue
(since $f(k)$ is negative and $L(k)$ self-adjoint, this is a direct consequence of the variational characterization
of the bottom of the spectrum and of (H2) which gives that the essential spectrum of $L (k)$
is in $(0, + \infty)$).
Actually, there is a unique negative simple eigenvalue. Indeed, if we assume
that $L(k)$ has two (with multiplicity) negative eigenvalues, then $L(k)$ is negative
on a two-dimensional subspace. By (H3), this yields that $L(0) \leq L(k)$
is also negative on this two-dimensional subspace.
This contradicts (H4),
which implies that $L(0)$ is nonnegative on a codimension one subspace.
By the choice of $k_{0}$ and (H2), we also have that the kernel of $L(k_{0})$ is non-trivial.
To conclude, we first note that
if for every $k \in (0, k_{0})$ the kernel of $L(k)$ is trivial, then the kernel of $L(k_{0})$
is exactly one-dimensional.
Indeed, let us pick $k<k_{0}$, then, since $L(k)$ has a unique simple negative eigenvalue
and a trivial kernel, we get that $L(k)$ is positive on a codimension one subspace.
Since $L(k_{0}) \geq L(k)$ by (H3), this implies that the kernel of $L(k_{0})$
is exactly one-dimensional.
Next, we consider the case where there exists $k_{1}\in(0, k_{0})$
such that $L(k_{1})$ has a nontrivial kernel.
Since $L(k_{1})$ has a unique simple negative eigenvalue, we get that
$L(k_{1})$ is nonnegative on a codimension $1$ subspace $\mathcal{V}= (\varphi)^\perp
\equiv \{V\in \mathcal{D}\,:\, (V,\varphi)=0\}$,
$\varphi$ being an eigenvector associated to the negative eigenvalue.
Moreover, thanks to (H2), we have an orthogonal decomposition of $\mathcal{V}$,
\begin{equation}
\label{ortho}\mathcal{V}= \mbox{Ker } L(k_{1}) \oplus_{\perp} \mathcal{P}
\end{equation}
with $\mathcal{P}$ stable
for $L(k_{1})$ and $L(k_{1})$ restricted to $\mathcal{P}$ coercive.
Note that moreover $ \mbox{Ker } L(k_{1}) $
is of finite dimension. For every $U \in \mathcal{S}$ where
$\mathcal{S}$ is the unit sphere of $\mbox{Ker }L(k_{1})$ i.e.
$\mathcal{S}=\{ U \in \mbox{Ker }L(k_{1}), \, \|U\|=1\}$, we have by (H3)
that $ (L'(k_{1})U, U)>0$.
From the compactness of $\mathcal{S}$, we get that $c_{0}= \inf_{U \in \mathcal{S}}
( L'(k_{1})U, U) $ is positive. Since $(L(k_{1})U,U)=0$ on $\mathcal{S}$, a first-order Taylor expansion in $k$ yields that for every $k \geq k_{1}$ close to $k_{1}$
and $U$ in $\mathcal{S}$,
$$
(L(k) U, U) \geq {c_{0} \over 2} (k- k_{1})$$
and hence by homogeneity that
\begin{equation}
\label{L'}
(L(k) U, U) \geq {c_{0} \over 2} (k- k_{1}) \|U\|^2, \quad \forall U \in \mbox{Ker }L(k_{1}).
\end{equation}
Now, according to the decomposition \eqref{ortho} of $\mathcal{V}$, we can write
$ L(k)$ with the block structure
$$ L(k) = \left(\begin{array}{cc} L_{1}(k) & A(k) \\ A^*(k) & L_{2}(k) \end{array}\right),$$
where, with a slight abuse of notation, $A(k)$ now denotes the off-diagonal block rather than the operator in \eqref{prob}.
By the choice of $\mathcal{P}$, $L_{2}(k_{1})$ is coercive, therefore, there exists $\alpha>0$
such that for every $k$ close to $k_{1}$, we have
\begin{equation}
\label{L2+}( L_{2}(k) U, U) \geq \alpha \|U\|^2, \quad \forall U \in \mathcal{P}.
\end{equation}
Moreover, we also have that $A(k_{1})=0$ (since $\mathcal{P}$ is a stable subspace for $L(k_{1})$).
By the assumed regularity with respect to $k$, we thus get that
\begin{equation}
\label{A-} \|A(k)\|_{\mathcal{L}(\mathcal{P}, Ker \, L(k_{1}) )} \leq M \, |k-k_{1}|, \quad \forall
k \in [k_{1}/2, 2k_{1}] \end{equation}
for some $M>0$.
Consequently, by using \eqref{L'}, \eqref{L2+} and \eqref{A-}, we get that for every
$U=(U_{1}, U_{2}) \in \mathcal{V}$ and every $k>k_{1}$ close to $k_{1}$, we have
$$ (L(k) U, U) \geq {c_{0} \over 2} (k- k_{1}) \|U_{1}\|^2+ \alpha \|U_{2}\|^2 - 2 M (k-k_{1}) \|U_{1}\|\, \|U_{2}\|.$$
From the Young inequality, we can write
$$ 2 M (k-k_{1}) \|U_{1}\|\, \|U_{2}\| \leq {c_{0} \over 4} \|U_{1}\|^2(k-k_{1}) + \tilde{M}(k-k_{1}) \|U_{2}\|^2$$
with $\tilde{M}= 4M^2/c_{0}$ and hence, we obtain
$$ (L(k) U, U) \geq {c_{0} \over 4} (k- k_{1}) \|U_{1}\|^2+ ( \alpha - \tilde{M}(k-k_{1})) \|U_{2}\|^2.$$
In particular, we get that for every $k>k_{1}$ close to $k_{1}$, $L(k)$ is coercive on $\mathcal{V}$ and
hence positive. Let us take some $k<k_{0}$ with this last property.
Since by (H3), $L(k_{0}) \geq L(k)$, we get
that $L(k_{0})$ is also positive on $\mathcal{V}$ which has codimension $1$.
Therefore the kernel of $L(k_{0})$ is exactly one-dimensional.
We have thus obtained as claimed that there exists $k_{0}>0$ such that $L(k_{0})$ has a one-dimensional
kernel. Thanks to (H2), we also have that $L(k_{0})$ is a Fredholm operator with zero index.
We can therefore use the Lyapunov-Schmidt method to study the eigenvalue problem
\eqref{prob} in the vicinity of $\sigma=0$, $k=k_{0}$ and $U= \varphi$ where
$\varphi$ is in the kernel of $L(k_{0})$ and such that $\| \varphi \|=1$.
We look for $U$ of the form $U=\varphi+V$, where
$$
V\in {\varphi}^{\perp}\equiv \{V\in \mathcal{D}\,:\, (V,\varphi)=0\}.
$$
Therefore we need to solve $G(V,k,\sigma)=0$ with $\sigma>0$, where
$$
G(V,k,\sigma)=L(k)\varphi+L(k)V-\sigma A(k) \varphi-\sigma A(k) V,\quad V\in {\varphi}^{\perp}\,.
$$
We shall use the implicit function theorem to look for $V$ and $k$ as functions of $\sigma$.
Note that the same approach is for example used in \cite{GHS}.
We have that
\begin{equation}
\label{DG}
D_{V,k}G(0,k_0,0)[w,\mu]=\mu\Big[\frac{d}{dk}L(k)\Big]_{k=k_0}\varphi+L(k_0)w\,.
\end{equation}
By using (H3), we obtain that $D_{V,k}G(0,k_0,0)$ is a
bijection from $ {\varphi}^{\perp} \times \mathbb{R}$ to $H$. Indeed, since the kernel of $L(k_{0})$ is spanned by $\varphi$ and $L(k_{0})$ is a Fredholm operator of index zero, $L(k_{0})w$ describes the range $({\varphi})^{\perp}$ when $w$ runs over ${\varphi}^{\perp}$, while (H3) gives $\big(L'(k_{0})\varphi, \varphi\big)>0$, so the term $\mu\big[\frac{d}{dk}L(k)\big]_{k=k_0}\varphi$ covers the remaining direction $\varphi$.
We can thus apply the implicit function theorem to get that
for $\sigma$ in a neighborhood of zero there exists $k(\sigma)$ and $V(\sigma)$ such that
$ G(V(\sigma), k(\sigma), \sigma)=0$. This ends
the proof of Theorem~\ref{main}.
\begin{rem}\label{referee}
Let us remark that if we assume that $L'(k)$ is positive for $k>0$ in place of (H3), then we can simplify the argument
giving a $k_0\neq 0$ such that $L(k_0)$ has a one-dimensional kernel.
Namely, in this case by using (H4), we have
that $L(0)$ is nonnegative on a codimension $1$ subspace $\mathcal{V}$
(given by $\mathcal{V}= \pi_{[0, + \infty)}(L(0)) \cap \mathcal{D}$, where $\pi_{[0, + \infty)}(L(0))$
is the spectral projection on the nonnegative spectrum of $L(0)$).
Next, using that $L'(s)$ is positive for $s>0$, we get for every $k>0$ that
$$ (L(k)U, U) = \int_{0}^k (L'(s)U, U)\, ds + (L(0)U, U) \geq 0, \quad \forall U \in \mathcal{V}.$$
Moreover, if $(L(k)U, U)= 0$ for $U \in \mathcal{V}$ then the above identity yields
$$ \int_{0}^k (L'(s)U, U)\, ds=0$$
and hence by using again that $L'(k)$ is positive for $k>0$,
we obtain that $U=0$.
Consequently, we get that for $k>0$, $L(k)$ is positive on a codimension $1$ subspace.
This yields that the dimension of the kernel of $L(k_{0})$ is exactly one.
\end{rem}
\section{Examples}
In this section we shall study various physical examples where Theorem \ref{main}
can be used to prove the instability of line solitary waves.
\subsection{KP-I equation}
We shall first see that the instability argument given in \cite{RT2} can be interpreted
in the framework of \eqref{prob}.
Let us consider the generalized KP-I equation where
\begin{equation}
\label{KP} \partial_{t} u = \partial_{x}\big( - \partial_{xx} u - u^p \big) + \partial_{x}^{-1}
\partial_{yy} u , \quad p=2,\, 3, \, 4
\end{equation}
and $u(t,x,y)$ is real valued. There is an explicit one-dimensional solitary wave
(which thus solves the generalized KdV equation):
$$ u(t,x)= Q(x-t) = \Big({ p+ 1 \over 2} \Big)^{1 \over p-1} \Big(
\mbox{sech}^2 \big( {(p-1) (x-t) \over 2 } \big) \Big)^{ 1 \over p-1 }.$$
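Substituting $u=Q(x-t)$ into \eqref{KP} with no $y$-dependence and integrating once in $x$ (the integration constant vanishing at infinity) gives the profile equation $Q''=Q-Q^p$. As a convenience check, not part of the argument, the explicit formula above can be verified numerically against a centered second difference; the helper name `Q` and the grid choices below are ours:

```python
import numpy as np

def Q(x, p):
    # explicit gKdV solitary-wave profile from the text; solves Q'' = Q - Q^p
    return ((p + 1) / 2) ** (1 / (p - 1)) / np.cosh((p - 1) * x / 2) ** (2 / (p - 1))

x = np.linspace(-10.0, 10.0, 20001)
h = x[1] - x[0]
for p in (2, 3, 4):
    q = Q(x, p)
    qxx = (q[2:] - 2 * q[1:-1] + q[:-2]) / h ** 2   # centered second difference
    residual = qxx - (q[1:-1] - q[1:-1] ** p)       # should vanish up to O(h^2)
    assert np.max(np.abs(residual)) < 1e-5
```

The residual is of the size of the $O(h^2)$ truncation error of the difference scheme, confirming the closed-form profile for $p=2,3,4$.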
Note that in this problem, it suffices to study the stability of the speed one solitary wave
since the solitary wave with speed $c>0$ can be deduced from it by scaling: the solitary wave
with speed $c>0$ is given by
$$ Q_{c}(\xi)= c^{ 1 \over p-1} Q(\sqrt{c}\, \xi).$$
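The scaling claim can be checked directly: assuming the profile equation $Q''=Q-Q^p$ for the speed-one wave, the rescaled profile satisfies
$$ Q_{c}''(\xi)= c^{1+\frac{1}{p-1}}\big(Q-Q^p\big)(\sqrt{c}\,\xi)= c\,Q_{c}(\xi)-Q_{c}^p(\xi),$$
since $c^{1+\frac{1}{p-1}}=\big(c^{\frac{1}{p-1}}\big)^p$, which is exactly the profile equation for the wave of speed $c$.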
After changing $x$ into $x-t$ (and keeping the notation $x$) and linearizing about $Q$,
we can look for solutions of the form
$$ e^{\sigma t } e^{iky } V(x)$$
to get
the equation
$$\sigma V= \partial_{x} \big( - \partial_{xx} - k^2 \partial_{x}^{-2}+ 1 - p Q^{p-1} \big) V.$$
We can look for a solution $V$ of the form $V= \partial_{x} U$
to get that $U$ solves
$$ - \sigma \partial_{x} U=\Big(- \partial_{x}( - \partial_{xx} +1 - p Q^{p-1} ) \partial_{x} +k^2 \Big)U.$$
Therefore, this eigenvalue problem is of the form \eqref{prob} with
$$ A(k)= - \partial_{x}, \quad L(k)
=- \partial_{x}( - \partial_{xx} + 1 - p Q^{p-1} ) \partial_{x} +k^2.$$
By choosing $H= L^2(\mathbb{R})$ and $\mathcal{D}= H^4(\mathbb{R})$, we are in an appropriate functional framework.
Note that $L(k)$ has a self-adjoint realization.
Let us check the assumptions (H1-4).
Since we have
$$ (L(k) U, U) \geq \| \partial_{xx} U \|_{L^2}^2 + k^2 \|U \|_{L^2}^2 + \| \partial_{x} U \|_{L^2}^2
- p \|Q^{p-1} \|_{L^\infty } \| \partial_{x} U \|_{L^2}^2$$
and that
for every $\delta >0$, there exists $C(\delta) >0$ such that
$$ \| \partial_{x} U \|_{L^2}^2 \leq \delta \| \partial_{xx} U \|_{L^2}^2 + C(\delta) \| U \|_{L^2}^2,$$
we immediately get that (H1) is verified.
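The interpolation inequality used above can be seen on the Fourier side: for every $\delta>0$ and $\xi\in\mathbb{R}$, the Young inequality gives
$$ \xi^2\le \delta\,\xi^4+\frac{1}{4\delta},$$
so that one may take $C(\delta)=1/(4\delta)$ after applying the Plancherel theorem.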
Next, we note that $L(k)$ is a compact perturbation of
$$L_{\infty}(k)= - \partial_{x}( - \partial_{xx} + 1 ) \partial_{x} +k^2, $$
so from the Weyl Lemma and the explicit knowledge of the spectrum of $L_{\infty}(k)$ we get that
the essential spectrum of $L(k)$ is included in $[k^2, + \infty)$ and thus
that (H2) is verified.
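For the record, on the Fourier side $L_{\infty}(k)$ acts as multiplication by
$$ \xi^2(\xi^2+1)+k^2=\xi^4+\xi^2+k^2,\qquad \xi\in\mathbb{R},$$
whose range is exactly $[k^2,+\infty)$; this gives the claimed localization of the essential spectrum.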
Assumption (H3) is obviously verified since $L'(k)=2k\,{\rm Id}$ is positive for every $k>0$.
Finally, let us check (H4). Note that $L(0)= -\partial_{x} C \partial_{x}$ where
$C$ is a second order differential operator. We notice that
$C Q' =0$ and that by the same argument as above, the essential spectrum of $C$
is contained in $[1, + \infty)$. Since $ Q'$ vanishes only once, we get by
Sturm-Liouville theory (we refer for example to \cite{Dunford}, chapter XIII) that $C$ has a unique negative eigenvalue with associated eigenvector $\psi$.
Moreover, we also have that
\begin{equation}
\label{Calpha}
(Cu, u ) \geq 0
\quad \forall u \in (\psi)^{\perp}
\end{equation}
After these preliminary remarks, we can get that
$L(0)$ has a negative eigenvalue. Indeed,
by an approximation argument, we can construct a sequence $u_{n}$ in $\mathcal{D}$
such that $\partial_{x} u_{n}$ tends to $\psi $ in $\mathcal{D}$; then, for
$n$ sufficiently large, $(L(0) u_{n}, u_{n}) = (C\partial_{x} u_{n}, \partial_{x}u_{n})$ is negative.
By the variational characterization of the lowest eigenvalue, we get that
$L(0)$ has a negative eigenvalue.
Moreover, for every $U$ such that $ (\partial_{x}U, \psi)=0$,
we have that
$$ \big(L(0) U, U\big) = \big(C \partial_{x} U, \partial_{x} U\big)\geq 0.$$
This proves that $L(0)$ is nonnegative on a codimension one subspace
and hence that there is at most one negative eigenvalue.
We have thus proven that (H4) is verified.
Consequently, we get from Theorem \ref{main} that the solitary wave
is transversally unstable.
\subsection{Euler-Korteweg models}
We consider a general class of models describing the isothermal motion of compressible fluids
and taking into account internal capillarity.
The main feature of these models is that the free energy $F$ depends both on
$\rho$ and $\nabla \rho$. In the isentropic case, we have:
$$ F(\rho, \nabla \rho) = F_{0}(\rho) + {1 \over 2} K(\rho) |\nabla \rho |^2$$
where $F_{0}(\rho)$ is the standard part and $K(\rho)>0$ is a capillarity coefficient. The pressure
which is defined by
$p= \rho {\partial F \over \partial \rho} - F$
reads
$$ p(\rho, \nabla \rho)= p_{0}(\rho) + {1 \over 2} \big( \rho K'(\rho) - K(\rho) \big) | \nabla \rho|^2.
$$
The equations of motion read
\begin{eqnarray}
\label{euler1}
& & \partial_{t} \rho + \nabla \cdot( \rho u ) = 0 , \\
& & \label{euler2} \partial_{t} u + u \cdot \nabla u + \nabla (g_{0}(\rho) ) = \nabla \big( K(\rho) \Delta \rho +
{1 \over 2} K'(\rho ) | \nabla \rho |^2 \big).
\end{eqnarray}
In this model, $\rho>0$ is the density of the fluid and $u$ the velocity,
$g_{0}$ (which is linked to $p_{0}$ by
$\rho g_{0}'(\rho)=p_{0}'(\rho)$)
and $K(\rho)>0$ are smooth functions of $\rho$ for $\rho>0$.
We shall consider a one-dimensional solitary wave of \eqref{euler1}, \eqref{euler2} of the form
$$(\rho(t,x,y), u(t,x,y))= (\rho_{c}(x-ct), u_{c}(x-ct))= Q_{c}(x-ct)$$
such that
\begin{equation}
\label{rhoinfty}
\lim_{x \rightarrow \pm \infty} Q_{c}= Q_{\infty}=(\rho_{\infty}, u_{\infty}), \quad \rho_{\infty}>0.
\end{equation}
We shall assume that
\begin{equation}
\label{hypeuler}
\rho_{\infty} g_{0}'(\rho_{\infty}) > (u_{\infty}- c)^2.
\end{equation}
This condition ensures that $Q_{\infty}$ is a saddle point in the ordinary differential equations
satisfied by the profile. Under this condition, one can find solitary waves, moreover,
they have the interesting property that
$ \rho_{c}'$ vanishes only once. We refer for example to \cite{Benzoni-Danchin}
for the study of the existence of
solitary waves for this system.
Here we shall study the (linear) transverse instability of these solitary waves.
We shall restrict our study to potential solutions of \eqref{euler1}, \eqref{euler2},
that is to say solutions such that $u= \nabla \varphi$. Note that this gives a stronger
instability result: we are able to find instabilities even in the framework of potential
solutions.
This yields the system
\begin{eqnarray}
\label{euler3}
& & \partial_{t} \rho + \nabla\varphi \cdot \nabla \rho + \rho \Delta \varphi = 0 , \\
& & \label{euler4} \partial_{t} \varphi + {1 \over 2} | \nabla \varphi|^2 + g_{0}(\rho) = K(\rho) \Delta \rho +
{1 \over 2} K'(\rho ) | \nabla \rho |^2.
\end{eqnarray}
Changing $x$ into $x-ct$ (and keeping the notation $x$) to make the wave stationary, linearizing
\eqref{euler3}, \eqref{euler4} about a solitary wave $Q_{c}= (\rho_{c}, u_{c})$ and
looking for solutions $(\eta, \varphi)$ of the form
$$ (\eta, \varphi)= e^{\sigma t } e^{iky } U(x), $$
we find an eigenvalue problem
of the form \eqref{prob} with $A(k)= J^{-1}$ and
$$ J= \left( \begin{array}{cc} 0 & 1 \\ -1 & 0 \end{array} \right),$$
$$ L(k)= \left( \begin{array}{cc}
- \partial_{x}\big( K(\rho_{c}) \partial_{x} \cdot \big) +k^2 K(\rho_{c}) - m & -c \partial_{x} + u_{c} \partial_{x} \\
c \partial_{x} - \partial_{x}\big( u_{c}\cdot \big) & - \partial_{x}\big(\rho_{c} \partial_{x} \cdot \big)
+ \rho_{c}k^2 \end{array} \right)$$
where the function $m(x)$ is defined by
$$ m= K'(\rho_{c}) \rho_{c}'' + {1 \over 2} K''(\rho_{c}) (\rho_{c}')^2 - g_{0}'(\rho_{c}).$$
By taking $H= L^2(\mathbb{R}) \times L^2(\mathbb{R})$, $\mathcal{D}= H^2 \times H^2$,
we are in the right functional framework; in particular, $L(k)$ has a self-adjoint realization. Let us now check assumptions
(H1-4):
\begin{itemize}
\item (H1): with $U= (\rho, \varphi)$, we have
\begin{multline*}
\big(L(k)U, U \big) \geq \int_{\mathbb{R}} \Big( K(\rho_{c}) \big( | \partial_{x} \rho |^2 +
k^2 |\rho|^2 \big) + \rho_{c} \big(| \partial_{x} \varphi |^2 + k^2
| \varphi |^2 \big) \\
- \mathcal{O}(1)\big( |\rho |( |\rho | + | \varphi |) + | \partial_{x} \rho | \, | \varphi| \big)\Big)\,dx
\end{multline*}
where $\mathcal{O}(1)$ is independent of $k$.
Since $K(\rho_{c}) \geq \alpha >0$, we get
by using Young's inequality:
\begin{equation}
\label{young}
ab \leq {\delta \over 2} a^2 + { 1 \over 2 \delta } b^2, \quad a,\, b \geq 0, \quad \delta >0,\end{equation}
that (H1) is verified for $k$ sufficiently large.
\bigskip
\item (H2) By standard arguments (see \cite{Henry}, for example),
to locate the essential spectrum of $L(k)$, we have to study the
spectrum of
$$ L_{\infty}(k) =
\left( \begin{array}{cc}
K(\rho_{\infty})\big( - \partial_{xx} +k^2 \big) +
g_{0}'(\rho_{\infty})& -c \partial_{x} + u_{\infty} \partial_{x} \\
c \partial_{x} - u_{\infty } \partial_{x} & \rho_{\infty} \big( - \partial_{xx} + k^2\big)
\end{array}
\right).$$
By using the Fourier transform, we can compute the spectrum of this operator explicitly:
$\mu$ is in the spectrum of $L_{\infty}(k)$ if and only if there exists $\xi$
such that
$$ \mu^2 - s(\xi, k) \mu + p(\xi, k)= 0$$
with
\begin{eqnarray*}
& & s= (K(\rho_{\infty}) + \rho_{\infty})(k^2+ \xi^2) + g_{0}'(\rho_{\infty} ), \\
& & p=
\rho_{\infty} K(\rho_{\infty})(k^2 + \xi^2 )^2 + \rho_{\infty} g_{0}'(\rho_{\infty}) k^2
+ \big(\rho_{\infty } g_{0}'(\rho_{\infty}) - (u_{\infty} - c )^2 \big) \xi^2 \geq 0 .\end{eqnarray*}
By using that $\rho_{\infty}$ and $K(\rho_{\infty})$ are positive and the condition \eqref{hypeuler},
we get that the two roots are nonnegative for all $k$ and strictly positive for $k \neq 0$.
This proves that (H2) is satisfied.
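As an illustrative sanity check, one can diagonalize the $2\times 2$ Fourier symbol of $L_{\infty}(k)$ numerically and confirm that its eigenvalues, i.e.\ the roots $\mu$ of the quadratic above, behave as claimed. The parameter values below are assumptions chosen only so that \eqref{hypeuler} holds; they are not taken from the text.

```python
import numpy as np

# Fourier symbol of L_infty(k): replacing d/dx by i*xi gives, for each (xi, k),
# a 2x2 Hermitian matrix whose eigenvalues are the roots mu of the quadratic above.
# Illustrative (assumed) parameter values satisfying rho*g0' > (u - c)^2:
rho_inf, K_inf, g0p_inf, u_inf, c = 1.0, 0.25, 1.0, 0.3, 0.8
assert rho_inf * g0p_inf > (u_inf - c) ** 2  # condition (hypeuler)

def symbol(xi, k):
    off = 1j * (u_inf - c) * xi  # Fourier transform of (u_inf - c) d/dx
    return np.array([[K_inf * (xi**2 + k**2) + g0p_inf, off],
                     [np.conj(off), rho_inf * (xi**2 + k**2)]])

xis = np.linspace(-10.0, 10.0, 201)
# Roots are nonnegative for all k (zero is attained at xi = k = 0) ...
min_eig = min(np.linalg.eigvalsh(symbol(xi, k)).min()
              for xi in xis for k in (0.0, 0.5, 2.0))
assert min_eig >= -1e-12
# ... and strictly positive for k != 0.
min_eig_pos = min(np.linalg.eigvalsh(symbol(xi, 1.0)).min() for xi in xis)
assert min_eig_pos > 0.0
```

The off-diagonal entry $i(u_{\infty}-c)\xi$ makes the symbol Hermitian, which is why the roots $\mu$ are real.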
\bigskip
\item(H3) We have
$$L'(k) = \left( \begin{array}{cc} 2 k K(\rho_{c}) & 0 \\ 0 & 2 \rho_{c} k \end{array} \right).$$
Consequently, (H3) is verified since $\rho_{c}$ and $K(\rho_{c})$ are positive.
\bigskip
\item (H4) We shall use the following algebraic lemma:
\begin{lem}
\label{lemalg}
Consider a symmetric operator on $H$ of the form
$$ L= \left( \begin{array}{cc} L_{1} & A \\ A^* & L_{2} \end{array}\right)$$
with $L_{2}$ invertible. Then we have
$$ (LU, U)= \Big( \big( L_{1}- A L_{2}^{-1} A^*\big) U_{1}, U_{1} \Big) + \Big( L_{2}
\big( U_{2}+ L_{2}^{-1} A^* U_{1} \big), U_{2} + L_{2}^{-1} A^*U_{1} \Big).$$
\end{lem}
The proof is a direct calculation. Note that
the above lemma remains true as soon as the quadratic form on the right-hand side
makes sense (and hence even if $L_{2}^{-1}$ is not well-defined).
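The identity of Lemma~\ref{lemalg} is easily checked on finite-dimensional blocks; the following sketch uses random test matrices (arbitrary data, chosen only so that $L_{2}$ is symmetric positive definite and hence invertible).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# Random symmetric block operator L = [[L1, A], [A^T, L2]] with L2 invertible.
L1 = rng.standard_normal((n, n)); L1 = L1 + L1.T
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n)); L2 = B @ B.T + np.eye(n)  # SPD, hence invertible

U1, U2 = rng.standard_normal(n), rng.standard_normal(n)
L = np.block([[L1, A], [A.T, L2]])
U = np.concatenate([U1, U2])

w = np.linalg.solve(L2, A.T @ U1)   # w = L2^{-1} A^T U1
lhs = U @ (L @ U)                   # (L U, U)
# Schur-complement factorization from the lemma:
rhs = U1 @ ((L1 @ U1) - A @ w) + (U2 + w) @ (L2 @ (U2 + w))
assert abs(lhs - rhs) < 1e-9
```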
Let us apply this lemma to $L(0)$. We see that with
$$ A= (u_{c} - c ) \partial_{x}, \quad L_{2}= - \partial_{x} (\rho_{c} \partial_{x} \cdot),$$
if $u\in H^2$ solves the equation
$L_{2} u = - A^* U_{1}$, then
$$ \partial_{x} u= - {1 \over \rho_{c} } (u_{c} - c) U_{1}.$$
Consequently, we get
$$ AL_{2}^{-1} A^* U_{1} = (u_{c}- c ) \partial_{x} u = {(u_{c} -c )^2 \over \rho_{c}} U_{1}$$
and hence we have the following factorization:
\begin{equation}
\label{factor1} (L(0) U, U)= (MU_{1}, U_{1}) +
\int_{\mathbb{R}} \rho_{c} \Big| \partial_{x} U_{2} + {1 \over \rho_{c}} (u_{c} - c) U_{1} \Big|^2\, dx \end{equation}
where
$$ MU_{1}= -\partial_{x}\big( K(\rho_{c}) \partial_{x} U_{1}\big) - m \,U_{1} - {(u_{c} - c)^2 \over \rho_{c}}
U_{1}.$$
Note that $M$ is a second order differential operator and that, by using the profile equation satisfied
by $Q_{c}$, we can check that $\rho_{c}'$ is in the kernel of $M$. Since $\rho_{c}'$
has a unique zero, Sturm--Liouville theory yields that $0$ is the second eigenvalue of $M$,
and hence that $M$ has exactly one negative eigenvalue, with corresponding
eigenfunction $R$.
From the condition \eqref{hypeuler}, we also get that the essential spectrum of $M$
is included in $[ \alpha, +\infty)$ for some $\alpha>0$. In particular (since $M$ is self-adjoint), we get that
\begin{equation}
\label{M1}
(MU_{1}, U_{1}) \geq 0, \quad \forall \,U_{1} \in (R)^\perp.
\end{equation}
We can now use these properties of $M$ to prove that (H4) is satisfied.
We first deduce from \eqref{factor1} that $L(0)$ indeed has one negative direction. A first try
would be to take $U_{1}= R$ and
$$ \partial_{x} U_{2}= - {1 \over \rho_{c}} (u_{c} - c) R.$$
The problem is that this equation does not have a solution in $L^2$. Nevertheless, we can get the
result by using an approximation argument. Indeed, again by cutting the low frequencies, we can choose a sequence $U_{2}^n \in H^2$
such that
$$\partial_{x} U_{2}^n \rightarrow - {1 \over \rho_{c}} (u_{c} - c)R$$
in $H^2$.
Then since $(MR, R)<0$, we get that
$ (L(0) U^n, U^n)<0$, with $U^n= (R, U_{2}^n)$, for $n$ sufficiently large
and hence by the variational characterization of the smallest eigenvalue, we get that
$L(0)$ has a negative eigenvalue.
From \eqref{factor1} and \eqref{M1}, we then get that this negative eigenvalue is unique.
This proves that (H4) is verified.
\end {itemize}
Consequently, we can use Theorem~\ref{main}
to get the instability of the solitary wave.
We have thus proven:
\begin{theoreme}
\label{theoEK}
If a solitary wave satisfies the condition \eqref{hypeuler}, then it is unstable
with respect to transverse perturbations.
\end{theoreme}
Note that a similar result has been obtained in \cite{Benzoni} by using an Evans function
calculation.
\subsection{Travelling waves of the Gross-Pitaevskii equation}
In this subsection, we consider the Gross-Pitaevskii equation which is a standard model
for Bose-Einstein condensates,
\begin{equation}
\label{GP}
i \partial_{t} \psi + { 1 \over 2 }\Delta \psi + \psi (1 - | \psi|^2) =0 \end{equation}
where the unknown $\psi$ is complex-valued. This equation has well-known explicit
one-dimensional travelling waves (the so-called dark solitary waves) whose modulus tends to $1$
at infinity; for every $|c| < 1$ they read:
\begin{equation}
\label{dark} \psi(t,x,y)= \Psi_{c}(z) =
\sqrt{ 1 - c^2 }\, \mbox{tanh}\,\big( z \sqrt{ 1 - c^2} \big) + i c , \quad z= x-ct.\end{equation}
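Inserting $\psi(t,x)=\Psi_{c}(x-ct)$ into \eqref{GP} shows that $\Psi_{c}$ must satisfy $-ic\Psi_{c}' + \frac{1}{2}\Psi_{c}'' + \Psi_{c}(1-|\Psi_{c}|^2)=0$; this can be checked to machine precision using the exact derivatives of $\tanh$. The value $c=0.4$ below is an arbitrary illustration.

```python
import numpy as np

# Check that Psi_c(z) = sqrt(1-c^2) tanh(z sqrt(1-c^2)) + i c solves the
# travelling-wave equation -i c Psi' + (1/2) Psi'' + Psi (1 - |Psi|^2) = 0.
c = 0.4
s = np.sqrt(1.0 - c**2)
z = np.linspace(-10, 10, 2001)

t = np.tanh(s * z)
Psi = s * t + 1j * c
dPsi = s**2 * (1 - t**2)             # Psi'  (exact derivative)
d2Psi = -2 * s**3 * t * (1 - t**2)   # Psi'' (exact derivative)

residual = -1j * c * dPsi + 0.5 * d2Psi + Psi * (1 - np.abs(Psi) ** 2)
assert np.max(np.abs(residual)) < 1e-12
```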
In the case of the standard solitary waves of the cubic focusing Schr\"odinger equation,
the transverse instability shown by
Zakharov and Rubenchik can be studied by a standard bifurcation argument, since
$0$ is not in the essential spectrum of the linearized operator; we refer to
\cite{RT1}, for example, for the details. This is not the case for the dark solitary waves: $0$ lies in the essential spectrum of the linearized
operator, and we shall thus use the criterion given by Theorem \ref{main}.
Note that for $c \neq 0$, $\Psi_{c}$ does not vanish. Consequently, we can study
the stability of these waves (travelling bubbles) by using the Madelung transform, i.e.\ by seeking solutions
of \eqref{GP} of the form
$$ \psi = \sqrt{\rho}e^{i \varphi}$$
with smooth $\rho$ and $\varphi$. We then classically find that
$(\rho, u= \nabla \varphi)$ is a solution of \eqref{euler1}, \eqref{euler2} with:
$$ g_{0}(\rho)= \rho-1, \quad K(\rho) = {1 \over 4 \rho}.$$
For $c \neq 0$, the dark solitary wave becomes a solitary wave $(\rho_{c} , u_{c})$
of \eqref{euler1}, \eqref{euler2} with
$$ \rho_{c}(z)= c^2 + (1-c^2) \mbox{tanh}^2 \big( z \sqrt{ 1 - c^2} \big),$$
$$ u_{c}(z)= - {c (1 -c^2) \over \rho_{c} } \Big( 1 - \mbox{tanh}^2 \big( z \sqrt{ 1 - c^2} \big)
\Big).$$
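As a consistency check of the Madelung variables, one can verify numerically that the formulas above agree with $\rho_{c}=|\Psi_{c}|^2$ and $u_{c}=\partial_{z}\arg\Psi_{c}$ (again with the illustrative value $c=0.4$).

```python
import numpy as np

# Consistency of the Madelung variables with the dark soliton:
# rho_c = |Psi_c|^2 and u_c = d/dz arg(Psi_c) = Im(conj(Psi_c) Psi_c') / |Psi_c|^2.
c = 0.4
s2 = 1.0 - c**2
z = np.linspace(-10, 10, 2001)
t = np.tanh(np.sqrt(s2) * z)

Psi = np.sqrt(s2) * t + 1j * c
dPsi = s2 * (1 - t**2)              # Psi' (real)

rho_c = c**2 + s2 * t**2            # formula from the text
u_c = -c * s2 * (1 - t**2) / rho_c  # formula from the text

assert np.allclose(rho_c, np.abs(Psi) ** 2)
assert np.allclose(u_c, np.imag(np.conj(Psi) * dPsi) / np.abs(Psi) ** 2)
```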
In particular, we thus have $\rho_{\infty}= 1$ and $u_{\infty}= 0$. Since $g_{0}'(\rho)=1$, the condition
\eqref{hypeuler} reduces to $c^2<1$. Consequently, we have by Theorem \ref{theoEK}
that all the dark solitary waves with
$|c|<1$, $c \neq 0$, are unstable with respect to transverse perturbations.
Note that the one-dimensional stability of these travelling bubbles was shown in \cite{Lin}.
It remains to study the case $c=0$. Note that $\Psi_{0}$ is a stationary solution, the so-called black soliton, which has
the very simple expression
$$ \Psi_{0}(x)= \mbox{tanh}(x).$$
Its one-dimensional orbital stability has been shown in \cite{Gerard}, \cite{Gravejat-Saut}.
Here we shall prove that it becomes transversally unstable by using Theorem \ref{main}.
Since the Madelung transform is not appropriate (the solitary wave vanishes
at the origin), we shall work directly with
the formulation \eqref{GP}.
Linearizing \eqref{GP} about $\Psi_{0}$, splitting real and imaginary parts and
seeking solutions of the form \eqref{fourier} yields a problem
of the form \eqref{prob} with $A(k)= J^{-1}$ and
$$ J= \left( \begin{array}{cc} 0 & 1 \\ -1 & 0 \end{array}\right), \quad
L(k) = \left( \begin{array}{cc} {1 \over 2 }( - \partial_{xx} + k^2) + 3 \Psi_{0}^2 - 1 & 0 \\ 0
& {1\over2}(-\partial_{xx} + k^2) - (1-\Psi_{0}^2) \end{array}\right).$$
Again, with $H= L^2 \times L^2$ and $\mathcal{D}= H^2 \times H^2$, we can
check (H1-4).
(H1) and (H3) are obviously satisfied. Thanks to the decay of the solitary wave, we
have that $L(k)$ is a compact perturbation of
$$ L_{\infty}(k)= \left( \begin{array}{cc} {1 \over 2 }( - \partial_{xx} + k^2) + 2 & 0 \\ 0
& {1\over2}(-\partial_{xx} + k^2) \end{array}\right)$$
and hence a simple computation shows that (H2) is also satisfied.
Finally, we can also easily check (H4).
Let us set $L(0)= \mbox{diag }(L_{1}, L_{2})$. We first notice
that
$$ L_{1} \Psi_{0}'=0, \quad L_{2} \Psi_{0}=0,$$
and that the essential spectrum of $L_{1}$ is contained in $[2, + \infty)$ and that
of $L_{2}$ in $[0, +\infty)$.
Since $\Psi_{0}'$ does not vanish and $\Psi_{0}$ vanishes only once, we get by Sturm-Liouville theory that
$0$ is the first eigenvalue of $L_{1}$ and that
$L_{2}$ has a unique negative eigenvalue.
This proves that (H4) is satisfied.
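The kernel relations $L_{1}\Psi_{0}'=0$ and $L_{2}\Psi_{0}=0$ used above can be verified to machine precision with the exact derivatives of $\tanh$:

```python
import numpy as np

# Verify L1 Psi0' = 0 and L2 Psi0 = 0 for Psi0 = tanh(x) at k = 0.
x = np.linspace(-10, 10, 2001)
t = np.tanh(x)

Psi0 = t
dPsi0 = 1 - t**2                            # Psi0'   = sech^2
d2Psi0 = -2 * t * (1 - t**2)                # Psi0''
d3Psi0 = -2 * (1 - t**2) * (1 - 3 * t**2)   # Psi0'''

# L1 f = -(1/2) f'' + (3 Psi0^2 - 1) f, applied to f = Psi0'
L1_on_dPsi0 = -0.5 * d3Psi0 + (3 * Psi0**2 - 1) * dPsi0
# L2 f = -(1/2) f'' - (1 - Psi0^2) f, applied to f = Psi0
L2_on_Psi0 = -0.5 * d2Psi0 - (1 - Psi0**2) * Psi0

assert np.max(np.abs(L1_on_dPsi0)) < 1e-12
assert np.max(np.abs(L2_on_Psi0)) < 1e-12
```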
Consequently, we get from Theorem \ref{main} that the black soliton $\Psi_{0}$ is transversally unstable.
We have thus proven:
\begin{theoreme}
For every $c$, $|c|<1$, the dark solitary waves \eqref{dark} are transversally unstable.
\end{theoreme}
\begin{rem}
Using arguments similar to those above, we can also prove the transverse instability of the one-dimensional localized solitary waves of the nonlinear Schr\"odinger equation and thus
obtain another proof of the classical Zakharov--Rubenchik instability result.
\end{rem}
\begin{rem}
The most difficult assumption to check is often (H4). Note that in the above
examples it is always a direct consequence of Sturm--Liouville theory, which is an ODE result;
indeed, in these examples the eigenvalue problem for $L(0)$ is itself an ODE.
For the capillary-gravity solitary waves problem studied in \cite{RT3}, however,
a nonlocal operator arises in the definition of $L(0)$,
and hence the eigenvalue problem for $L(0)$ cannot be formulated as an ODE.
Nevertheless, it was proven by Mielke \cite{Mielke} that
(H4) is satisfied, since in the KdV limit the spectral properties of $L(0)$ are the
same as those of the linearized KdV Hamiltonian about the KdV solitary wave,
which are known (again thanks to Sturm--Liouville theory).
\end{rem}
\vspace{1cm}
{\bf Acknowledgements.}
We thank Jean-Claude Saut and David Chiron for fruitful discussions about this work.
We also warmly thank the referee for the careful reading of the manuscript and many remarks
which
have greatly improved the result and the presentation.
\section{Introduction}
Due to their ubiquitous presence in everyday life, the importance of polymers can hardly be overestimated. Indeed, polymer science has had a major impact on the way we live. Just 50 years ago, materials we now take for granted did not exist at all. Owing to their structural complexity, polymers are generally not crystalline at low temperatures. Rather, they exhibit an amorphous, glassy structure. The glass transition (the transition from a liquid to an amorphous solid) is thus an important concept when it comes to an understanding of the properties of polymer systems.
As for applications, polymers are often used as protective coatings in many fields ranging from the car industry (corrosion resistance) to microelectronics (protection against thermal as well as mechanical load)~\cite{Zhang::PolEngSci::1999, Rayss::JApplPolSci::1993, Armstrong::ElectrochemicaActa::1993}.
In such situations, the polymer is always in contact with a solid substrate or even confined between two solid plates (film geometry). An important piece of information for materials design is therefore how the thermal and mechanical properties of a polymer system are affected by interaction with a solid substrate.
In addition to its technological importance, the investigation of polymers close to solid surfaces is also of great theoretical interest. One is, for example, interested in understanding how the broken symmetry of space in the proximity of the substrate, as well as related energetic issues, modifies static and dynamic properties (chain conformation, diffusion, reorientation dynamics, etc.) of polymers close to the interface. A particularly interesting aspect is also how the glass transition is affected by the presence of the solid substrates.
However, the presence of a wide range of time and length scales makes the study of polymer systems a very challenging multiscale problem \cite{LodgeMuthu_JPC1996}. In the simplest case of linear homopolymers, each macromolecule contains $N_{\mathrm{p}}$ identical repeat units (monomers), connected to form a chain. In experiments, the chain length may vary in the range $10^2 \le N_{\mathrm{p}} \le 10^6$. This implies that the average size of a polymer, measured for instance by the radius of gyration ${R_{\text{g}}}$ \cite{DoiEdwards,RubinsteinColby}, varies between ${R_{\text{g}}} \sim\! 10 d$ and ${R_{\text{g}}} \sim\! 1000 d$, where $d$ denotes the diameter of a single monomer ($d$ being of the order of a few \AA).
These different length scales are reflected in the particular
features of a polymer melt. In the melt the monomers pack densely,
leading to an amorphous short-range order on a local scale and to an
overall low compressibility of the melt. Both features are
characteristic of the liquid state. Qualitatively, the collective
structure of the melt thus agrees with that of non-polymeric
liquids. Additional features, however, occur if one considers the
scale of a chain. A long polymer in a (three-dimensional) melt is
not a compact, but a self-similar object
\cite{RubinsteinColby,DeGennes,Witten_ROP1998}. It possesses a
fractal `open' structure which allows other chains to penetrate into
the volume defined by its radius of gyration. On average, a polymer
interacts with $\sqrt{N_{\mathrm{p}}}$ other chains,
a huge number in the large-$N_{\mathrm{p}}$ limit. This strong
interpenetration of the chains has important consequences. For
instance, intrachain excluded volume interactions, which would swell
the polymer in dilute solution, are screened by neighboring chains
\cite{DoiEdwards,RubinsteinColby,DeGennes,Edwards_JPhysA1975,MuthukumarEdwards_JCP1982,WittmerEtal:PRL2004}, although nontrivial non-Gaussian orientational correlations due to excluded
volume remain \cite{WittmerEtal:PRL2004}.
A polymer in a melt thus in many respects behaves on large scales as if it were a
random coil, implying that its radius of gyration scales with chain
length like ${R_{\text{g}}} \sim \sqrt{N_{\mathrm{p}}}$. Furthermore, the
interpenetration of the chains creates a temporary network of
topological constraints
\cite{DoiEdwards,RubinsteinColby,DeGennes,McLeish_AdvPhys2002}.
These entanglements greatly slow down the chain dynamics and render
the melt in general very viscous compared to low-molecular weight liquids.
In this article, we will provide a brief overview of computer simulations of polymers close to interfaces, while at the same time briefly touching upon some of the techniques aimed at a reduction in the computational cost related to the presence of multiple length and time scales. It should, however, be emphasized here that the present survey is far from exhaustive, with regard both to the modeling of polymer melts and to specific computational aspects. We refer the interested reader to extended reviews on these topics, e.g.\ references~\cite{MullerPlathe:2002,MullerPlathe:2003,GlotzerPaul:AnnRevMatSci2002,Kroger:PhysRep2004,Binder_MCMD1995,BinderBaschnagelPaul2003,BaVa2005}.
In principle, a full multiscale simulation approach explicitly constructs the connection from the quantum description of interactions (Fig.~1), via the all-atom description of a chemically realistic model with effective classical potentials, up to the coarse-grained level of mesoscale models. Such an explicit multiscale approach, which derives effective interactions on the mesoscale ab initio, is still an ambitious and difficult task
\cite{DelleSite2005a,DelleSite2005b,Praprotnik2008,Mulder2009},
and most of the work in the literature considers only partial steps of the full problem. Thus, in this article, we will concentrate on the description in terms of coarse-grained bead-spring type models lacking explicit chemical details, and we shall describe the steps (Fig.~1) to derive these models in a schematic way only.
\section{Coarse-graining}
\label{subsec:cg} On a fundamental level, interaction forces originate from the adaption of the electronic degrees of freedom to the positions of the nuclei. It may therefore appear natural to model polymer melts via the Car--Parrinello method \cite{MarxHutter_NIC2000}. This method is a molecular dynamics (MD) technique \cite{Allen_NIC2004,BinderEtal_JPCM2004} which allows the electrons to adiabatically follow the motion of the nuclei, thereby replicating the energy landscape that the nuclei feel at any instant of their motion. Using recent extensions of the method, one is now able to study system sizes up to about 1000 nuclei for about $10$ ns \cite{Kuehne_PRL2007}. This time, however, barely suffices to equilibrate the system at high temperature in the liquid state \cite{Binder_MCMD1995,Kremer:MacroChemPhys2003,BaVa2005}.
\begin{figure} \rsfig{0.45}{coarse_graining.eps}
\caption[]{A schematic view of different scales which can be focused upon when describing polymers. The quantum level takes account of the electrons to calculate the interactions between the nuclei. On the computationally less demanding atomistic level, the electronic degrees of freedom are replaced by a force field \cite{PaulSmith_RPP2004}. Computationally still less demanding than atomistic models are simulations at the coarse-grained level. Here, a monomer is associated with a spherical site and the realistic potentials are replaced by simpler ones~\cite{JBETal:AdvPolySci2000,MullerPlathe:2002,MullerPlathe:2003,BaVa2005}.}
\label{fig:coarse_graining} \end{figure}
Some kind of coarse-graining procedure is therefore necessary in order to adequately describe statistical mechanical properties of polymer systems on the time and length scales of interest. Such a procedure usually consists of the elimination of fast degrees of freedom by incorporating them into effective potentials \cite{JBETal:AdvPolySci2000,MullerPlathe:2003,PaulSmith_RPP2004}.
\subsection{Atomistic models}
A first degree of simplification may consist in replacing the electronic degrees of freedom by empirical potentials for the bond lengths, the bond angles, the torsional angles and the nonbonded interactions between distant monomers along the chain (`quantum level $\rightarrow$ atomistic level', see \fref{fig:coarse_graining}). This step introduces a `force field', i.e., the form of the potentials is postulated and the corresponding parameters (e.g.\ equilibrium bond length, force constants, etc.) are determined from quantum-chemical calculations and experiments \cite{JBETal:AdvPolySci2000,PaulSmith_RPP2004}.
Several such force fields have been proposed throughout the past decades for both explicit atom models and united atom models. An explicit atom model treats every atom as a separate interaction site, whereas a united atom model lumps a small number of real atoms together into one site \cite{JBETal:AdvPolySci2000,MullerPlathe:2003,PaulSmith_RPP2004}. Typical united atoms are CH, $\mathrm{CH}_2$, and $\mathrm{CH}_3$. The reduction of force centers translates into the computational advantage of allowing longer simulation times. With a time step of $\sim\! 10^{-15}$ s---compared to $\sim\! 10^{-17}$ s for the Car-Parrinello method---a few thousand united atoms can be simulated over a time lapse of several $\mu$s, about an order of magnitude longer than an explicit atom simulation of comparable system size.
Both explicit atom models and united atom models have been used in the study of glass-forming polymers (see e.g.\ \cite{Clarke_review1995,Clarke:currentopinion1998} for reviews on older work). Current examples include polyisoprene (explicit atom; \cite{ColmeneroEtal:PRE2002,ColmeneroEtal:JPCM2003}), atactic polystyrene (united atom; \cite{LyulinEtal:Macromolecules2002_1,LyulinEtal:Macromolecules2002_2,LyulinEtal:Macromolecules2003}) and {\em cis-trans} 1,4-polybutadiene (united and explicit atom models; \cite{PaulSmith_RPP2004,SmithEtal:JCP2004,KrushevPaul_PRE2003,KrushevEtal_Macromolecules2002,SmithEtal:JCP2002,GrantPaul_ChemPhys2000}). Certainly, the ultimate objective of these modeling efforts is that the simulation results lend themselves to a quantitative comparison with experiments. Such a comparison may, however, require a careful fine-tuning of the force field. For the family of neutral hydrocarbon polymers the optimization of the torsional potential appears particularly crucial. Not only the position and the relative depth of the minima, but also the barriers between them should be accurately determined, as local relaxation processes, involving transitions between the minima, are exponentially sensitive to them. In extreme cases, imprecise barrier heights may seriously affect the dynamics while leaving structural features of the melt unaltered.
\begin{figure} \rsfig{0.4}{PB_SqWq_MSD.eps} \caption[]{Simulation results for
{\em cis-trans} 1,4-polybutadiene (adapted from reference~\cite{KrushevPaul_PRE2003}). Main panel: Collective static structure factor, $S(q)$, and single-chain structure factor, $w(q)$, versus the modulus of the wave vector $\vec{q}$ at $T = 273$ K. Two united atom models are compared: a chemically realistic model (CRC) and the same model but without torsional potential (FRC). The vertical arrows indicate the $q$-values associated with the radius of gyration and with the first maximum of $S(q)$ (`amorphous halo'). The maximum occurs at $q^*\simeq 1.47$ {\AA}$^{-1}$. In real space, this value would correspond to an intermonomer distance of $\approx 4.3$ {\AA} which is roughly compatible with the average Lennard--Jones diameter of the model ($d \approx 3.8$ {\AA}). Inset: Mean-square displacement $g_0(t)$, averaged over all monomers, versus time for the CRC and FRC models at $T= 273$ K. The horizontal line indicates the radius of gyration ${R_{\text{g}}}^2= 218$ {\AA}$^2$ (which is found to be the same for both models \cite{KrushevEtal_Macromolecules2002}).}
\label{fig:PBD_CrcFrc}
\end{figure}
Such an example is shown in \fref{fig:PBD_CrcFrc}. The figure compares simulation results for two models of a polybutadiene melt: a carefully validated united atom model which reproduces the experimentally found structure and dynamics of the melt, and the same model with the torsional potential switched off. Apparently, suppression of the torsional potential has no influence on the structure, but considerably accelerates the monomer dynamics.
The above example demonstrates that different potentials may lead to realistic representations of structural properties, but to diverging predictions for the dynamics. Such an observation is not limited to polymers, but was also made e.g.\ for amorphous $\mathrm{SiO}_2$ \cite{kobreview1999}. It suggests that the design of a chemically realistic model, aiming at a parameter-free comparison between simulation and experiment, should involve information about both the structural and the dynamic properties.
\subsection{Generic models}
In many cases, one is interested in a basic understanding of polymeric features rather than in a detailed description of a specific system. For uncharged linear polymers, these features are presumed to be the dependence of material properties on chain connectivity, excluded-volume interactions, monomer--monomer attractions and/or some stiffness along the chain backbone. In this case, it appears permissible to forgo fast degrees of freedom (bond length and bond angle vibrations, etc.) in favor of a coarse-grained model. A special type of coarse-grained model is the so-called `generic model' \cite{MullerPlathe:2003}. Various such generic models have been studied in the literature (for reviews see references~\cite{Binder_MCMD1995,KremerBinder1988,BaschnagelWittmerMeyer:NIC_Review2004,Kotelyanskii2004}). For lack of space, in the following we present in more detail only one of these models, which was used in recent simulations \cite{BennemannPaulBinder1998,BennemannBaschnagelPaul1999_incoherent,BennemannPaulBaschnagel1999,BennemannPaulBaschnagel1999_Rouse,natureBDBG1999,betaDynamics,alphaDynamics,BuchholzPaulVarnikBinder:JCP2002,AicheleEtal_2002,AicheleEtal_PRE2004,VarnikEtal:JCP2000,Varnik:CPC2002,VarnikEtal:PRE2002,VarnikEtal:EPJE2002,VarnikBinder:JCP2002,VarnikEtal:EPJE2003,Peter2006,Peter2007,Peter2008}. Certainly, this choice is biased by our own experience.
Indeed, the issue of polymers at interfaces in particular and confined liquids in general has received considerable attention in simulation studies. Various systems---simple liquids \cite{ScheidlerEtal:EPL2000,ScheidlerEtal:EPL2002,ScheidlerEtal:JPCB2004,GalloEtal:EPL2002,FehrLoewen:PRE1995,BoddekerTeichler:PRE1999}, hydrogen-bonded or molecular liquids \cite{GalloEtal:JCP2000,TeboulSimionesco:JPCM2002}, silica \cite{Roder:JCP2001}, polymers \cite{XuMattice:Macro2003,StarrEtal:PRE2001,StarrEtal:Macro2002,YoshimotoEtal:JCP2005,JaindePablo:PRL2004,JaindePablo:Macro2002,BohmedePablo:JCP2002,torresEtal:PRL2000,MansfieldEtal:Macro1991,ManiasEtal:ColSurf2001,BaljonEtal:Macro2005,BaljonEtal:PRL2004}---and confining geometries---pores \cite{ScheidlerEtal:EPL2000,GalloEtal:JCP2000,TeboulSimionesco:JPCM2002}, fillers in glass-forming matrices \cite{StarrEtal:PRE2001,StarrEtal:Macro2002,GalloEtal:EPL2002}, thin films \cite{XuMattice:Macro2003,YoshimotoEtal:JCP2005,JaindePablo:PRL2004,JaindePablo:Macro2002,BohmedePablo:JCP2002,torresEtal:PRL2000,MansfieldEtal:Macro1991,ManiasEtal:ColSurf2001,BaljonEtal:Macro2005,BaljonEtal:PRL2004,Roder:JCP2001,ScheidlerEtal:EPL2002,ScheidlerEtal:JPCB2004,FehrLoewen:PRE1995,BoddekerTeichler:PRE1999}---have been considered. It is hoped that the references provided here, as well as throughout this paper, at least partially compensate for this shortcoming.
\section{A bead-spring model for polymer melts}
\label{subsec:beadspring} In 1990, Kremer and Grest \cite{KremerGrest1990} proposed a versatile bead-spring model for the simulation of polymer systems. The `Kremer--Grest' model has since been deployed to investigate numerous problems in polymer physics, including relaxation processes in polymer solutions \cite{DuenwegEtal:KB_review1995} and melts \cite{KremerGrest_review1995,PuetzKremerGrest2000,Kremer:NIC2004} or the behavior of polymer brushes \cite{Grest_review1995,Grest_review1999}, to name just a few.
In a variant of this model, proposed by Bennemann and coworkers \cite{BennemannPaulBinder1998}, the chains contain $N_{\mathrm{p}}$ identical monomers of mass $m$. All monomers, bonded and nonbonded ones, interact by a truncated Lennard--Jones (LJ) potential
\begin{equation}
{\ensuremath{U_{\mathrm{LJ}}}} (r) = \left \{
\begin{array}{ll}
4 \epsilon \left[ (d/r)^{12} - (d/r)^{6} \right] & \quad \mbox{for} \; r \leq \ensuremath{r_{\mathrm{c}}}\;, \\
0 & \quad \mbox{else} \;.
\end{array}
\right . \label{eq:LJ12-6TS}
\end{equation}
The parameter $\ensuremath{r_{\mathrm{c}}} = 2\ensuremath{r_{\mathrm{min}}}$ is the cut-off distance, where $\ensuremath{r_{\mathrm{min}}} = 2^{1/6}d$ is the minimum of \eref{eq:LJ12-6TS}. In contrast to the original version of the Kremer--Grest model,
where $\ensuremath{r_{\mathrm{c}}} = \ensuremath{r_{\mathrm{min}}}$ (leading to purely repulsive intermolecular interactions), the cutoff distance proposed by Bennemann is motivated by the wish to work with a potential that is as short-ranged as possible while still including the major part of the attractive van der Waals interaction. Even though attractive interactions are not expected to appreciably affect the local structure in a dense melt, they have a significant effect on thermodynamic properties. Furthermore, they are important for simulations of e.g.\ the phase behavior of polymer solutions \cite{MuellerMacDowell:Macro2000,VirnauEtal:NewJP2004}, thin films with a film-air interface \cite{HeineEtal:PRE2003,TsigeGrest:Macromolecules2004} or crazing in polymer glasses \cite{BaljonRobbins:Macro2001,RottlerRobins:PRE2003}.
The chain connectivity is ensured by a FENE (finitely extensible non-linear elastic) potential
\begin{equation}
{\ensuremath{U_{\mathrm{FENE}}}} (r) = - \frac{1}{2}k R_0^2 \ln \Big [1 -
\Big(\frac{r}{R_0}\Big)^2 \Big], \; R_0=1.5d,\,
k=\frac{30\epsilon}{d^2} \; \label{eq:FENE}
\end{equation}
The FENE potential diverges logarithmically in the limit of $r \rightarrow R_0$ (`finite extensibility') and vanishes parabolically as $r \rightarrow 0$ (`elastic behavior').
The superposition of the FENE- and the LJ-potentials yields a steep effective bond potential with a minimum at $\ensuremath{r_{\mathrm{b}}} \approx 0.96d$ (see for instance Ref.~\cite{VarnikEtal:EPJE2002}).
The difference between $\ensuremath{r_{\mathrm{b}}}$ and $\ensuremath{r_{\mathrm{min}}}$ is crucial for the ability of the model
to bypass crystallization and exhibit glass-like freezing.
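The location of the bond-length minimum can be reproduced with a few lines of code; the following sketch works in LJ units ($\epsilon = d = 1$) and simply minimizes the superposition of \eref{eq:LJ12-6TS} and \eref{eq:FENE} on a fine grid.

```python
import numpy as np

# LJ units: epsilon = d = 1. Parameters as in the text.
r_min = 2.0 ** (1.0 / 6.0)   # minimum of the LJ potential
r_c = 2.0 * r_min            # cut-off distance of the Bennemann variant
k, R0 = 30.0, 1.5            # FENE parameters

def U_LJ(r):
    return np.where(r <= r_c, 4.0 * (r ** -12 - r ** -6), 0.0)

def U_FENE(r):
    return -0.5 * k * R0 ** 2 * np.log(1.0 - (r / R0) ** 2)

# Effective bond potential = LJ + FENE; locate its minimum on a fine grid.
r = np.linspace(0.8, 1.2, 400001)
r_b = r[np.argmin(U_LJ(r) + U_FENE(r))]

assert abs(r_b - 0.96) < 0.01   # r_b ~ 0.96 d, clearly below r_min ~ 1.12 d
```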
\subsection{Approximate mapping to real units} The parameters of \eref{eq:LJ12-6TS} define the characteristic scales of the melt: $\epsilon$ the energy scale, $d$ the length scale, and $\tau_{\mathrm{LJ}} = (m d^2 / \epsilon)^{1/2}$ the time scale. In the following, we utilize LJ-units. That is, $\epsilon = 1$, $d= 1$, and $m = 1$. Furthermore, temperature is measured in units of $\epsilon/k_{\mathrm{B}}$ with the Boltzmann constant $k_{\mathrm{B}} = 1$.
Although reduced units are commonly employed in simulations and are
of technical advantage \cite{AllenTildesley,FrenkelSmit}, it might
still be interesting to obtain a feeling for how they translate into
physical units. Such a mapping to real
systems has recently been carried out by Virnau and coworkers \cite{VirnauEtal:NewJP2004,VirnauEtal:JCP2004} and by Paul and Smith
\cite{PaulSmith_RPP2004}. Virnau and coworkers explored the phase
separation kinetics of a mixture of hexadecane
($\mathrm{C}_{16}\mathrm{H}_{34}$) and carbon dioxide
($\mathrm{CO}_2$). By identifying the critical point of the
liquid-gas transition in hexadecane with that of bead-spring chains
containing 5 monomers, they found $d \simeq 4.5 \times 10^{-10}$
m and $\epsilon \simeq 5.8 \times 10^{-21}$ J. Paul and Smith
used the data on the dynamics of chemically realistic models for nonentangled melts
of polyethylene and polybutadiene and obtained $\tau_\mathrm{LJ}\simeq 0.21 \times 10^{-12}$s.
These values for $d$, $\epsilon$, and $\tau_\mathrm{LJ}$ are compatible with the
estimates obtained by Kremer and Grest when comparing the dynamics
of entangled bead-spring melts to real polymers (see table III of
\cite{KremerGrest1990}).\label{page:estimate-for-d-LJ}
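As a back-of-the-envelope illustration, the quoted estimates can be combined in a few lines of Python (the numerical values of $d$, $\epsilon$ and $\tau_\mathrm{LJ}$ below are the literature estimates cited above, not exact conversion factors):

```python
K_B = 1.380649e-23   # Boltzmann constant in J/K

# literature estimates cited in the text
d_SI = 4.5e-10       # length unit in m
eps_SI = 5.8e-21     # energy unit in J
tau_SI = 0.21e-12    # time unit in s

# one LJ temperature unit in kelvin
T_unit = eps_SI / K_B
print(f"T = 1 (LJ) corresponds to ~{T_unit:.0f} K")

# e.g. a simulation temperature of T = 0.44 (LJ units)
print(f"T = 0.44 (LJ) corresponds to ~{0.44 * T_unit:.0f} K")
```

With these numbers, one LJ temperature unit corresponds to roughly $420$~K, so typical supercooled simulation temperatures map onto a few hundred kelvin.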
\subsection{Choice of the chain length} In polymer glass simulations the chain length is usually chosen as a compromise between two opposing wishes: On the one hand, it should be sufficiently large to separate the scales of the monomer and of the chain size so that polymer-specific effects (or at least the onset thereof) become observable. On the other hand, computational expedience suggests working with short chains. Because the simulations aim at following the increase of the monomeric relaxation time $\tau_0$ with decreasing temperature over as many decades as possible, slow relaxation processes, already present at high temperatures ($T$) due to entanglements, should be avoided. Thus, the chain length should be smaller (or at least not much larger) than the entanglement length $N_\mathrm{e}$. Extensive studies of the Kremer--Grest model show that $N_\mathrm{e} \approx 32$. Shorter chains exhibit Rouse-like dynamics, \begin{equation} \tau(N_{\mathrm{p}}) = \tau_0 N_{\mathrm{p}}^{\approx 2} \;. \label{eq:tau_Rouse} \end{equation} As the Bennemann model is expected to have a similar $N_\mathrm{e}$, the chain length $N_{\mathrm{p}} = 10$ was proposed as a possible compromise \cite{BennemannPaulBinder1998}. This chain length was used in all subsequent studies pertaining to glass-forming polymer melts \cite{BennemannPaulBinder1998,BennemannBaschnagelPaul1999_incoherent,BennemannPaulBaschnagel1999,BennemannPaulBaschnagel1999_Rouse,natureBDBG1999,betaDynamics,alphaDynamics,BuchholzPaulVarnikBinder:JCP2002,AicheleEtal_2002,AicheleEtal_PRE2004,VarnikEtal:JCP2000,Varnik:CPC2002,VarnikEtal:PRE2002,VarnikEtal:EPJE2002,VarnikBinder:JCP2002,VarnikEtal:EPJE2003}.
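The practical consequence of the Rouse scaling of \eref{eq:tau_Rouse} is the rapid growth of the chain relaxation time with $N_\mathrm{p}$; a trivial sketch (taking the exponent as exactly $2$ and an arbitrary placeholder value for $\tau_0$):

```python
def tau_rouse(n_p, tau0=1.0, exponent=2.0):
    """Chain relaxation time from the Rouse prediction tau = tau0 * N_p^~2."""
    return tau0 * n_p ** exponent

# Doubling the chain length roughly quadruples the relaxation time:
ratio = tau_rouse(20) / tau_rouse(10)
print(ratio)  # 4.0
```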
\subsection{Including solid substrates}
\label{subsec:films}
Even though real substrates can have a complex structure, it appears natural, in the spirit of the polymer models discussed above, to also treat the substrate at a generic level. One obvious feature is its impenetrability, so a minimal model must at least respect monomer--substrate excluded-volume interactions. Further generic features could be some surface roughness and adhesive power. Based on this reasoning, simulations often model the substrate as a crystal \cite{PatrykiejewSurfSci2000,Steele:SurfSci1973} made of particles that interact with each other and with the monomers by LJ-potentials.
\begin{figure}
\unitlength=1mm
\begin{picture}(100,20)(5,0)
\put(18,-11){
\includegraphics*[width=60mm,clip=]{T0.44_NEQ_rho0.99_h450_w600_40chains.ps}}
\end{picture}
\vspace*{5mm} \caption[]{Snapshot of a polymer system between two substrates of triangular lattice structure (only $40$ chains out of $200$, each containing $N_{\mathrm{p}} = 10$ monomers, are shown).}
\label{fig:polymerfilm-snapshot}
\end{figure}
Such crystalline substrates may be implemented by tethering the substrate (`wall') atoms to the sites of a triangular lattice \cite{VarnikBinder:JCP2002} via harmonic springs (`Tomlinson model' \cite{RobbinsMueser2001,He-Robbins})
\begin{equation}
U_{\mathrm{T}}(\vec{r}) = \frac{1}{2} k_{\mathrm{T}} \big ( \vec{r} -
\vec{r}_{\mathrm{eq}} \big )^2, \quad k_{\mathrm{T}} = 100 \mathrm{~(LJ~units).}
\label{eq:Tomlinson}
\end{equation}
Here, $\vec{r}_\mathrm{eq}$ denotes the equilibrium position of an atom on the triangular lattice and $k_{\mathrm{T}}$ the spring constant. The substrate atoms are LJ-particles that interact with each other and with the monomers. The parameters ($\epsilon$ and $d$) for these interactions---wall-wall and monomer-wall---are the same as in \eref{eq:LJ12-6TS}.
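The tethering potential of \eref{eq:Tomlinson} amounts to a simple restoring force on each wall atom. A minimal Python sketch (two-dimensional positions for brevity, $k_\mathrm{T}=100$ as in the text):

```python
K_T = 100.0  # spring constant in LJ units, Eq. (Tomlinson)

def tether_energy_force(r, r_eq):
    """Harmonic tether: U = (k_T/2)|r - r_eq|^2, F = -k_T (r - r_eq)."""
    dx = [ri - ei for ri, ei in zip(r, r_eq)]
    energy = 0.5 * K_T * sum(c * c for c in dx)
    force = [-K_T * c for c in dx]
    return energy, force

# a wall atom displaced by 0.5 from its lattice site along x
u, f = tether_energy_force([1.5, 0.0], [1.0, 0.0])
print(u, f)  # 12.5 [-50.0, -0.0]
```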
Since the spatial arrangement of the substrate atoms may have a strong influence on the properties of the melt in the very vicinity of the substrate, it is interesting to also study the effect of substrates with an amorphous structure. This can be achieved in a way quite similar to the implementation of crystalline walls (Fig.~\ref{fig:polymerfilm-snapshot}), the only difference being the random (instead of regular) distribution of substrate atoms.
If one is only interested in the average force which the substrate exerts on a monomer, one may treat the wall as a continuum and integrate over the parallel $(x,y)$-directions and the vertical one up to the wall-melt interface. Carrying out this calculation for the LJ-potential one obtains
\begin{equation}
U_{\mathrm{w}}(z) = \epsilon_{\mathrm{w}} \bigg [\Big (\frac{d}{z}
\Big)^{9} - f_{\mathrm{w}} \Big (\frac{d}{z} \Big)^{3} \bigg ]
\label{eq:LJ9-3}
\end{equation}
where $\epsilon_\mathrm{w}$ denotes the monomer-wall interaction energy and $f_{\mathrm{w}}$ is a constant. While the second attractive term is important if one wants to study polymer adsorption \cite{MetzgerEtal:MacroTheo2002} or wetting phenomena \cite{MetzgerEtal:JCP2003,MuellerEtal:IJMPC2001}, the first term of \eref{eq:LJ9-3} suffices to impose a geometric confinement. This is the stance we have adopted in most of the simulations on supercooled polymer films \cite{VarnikEtal:JCP2000,Varnik:CPC2002,VarnikEtal:PRE2002,VarnikEtal:EPJE2002,VarnikEtal:EPJE2003}.
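\eref{eq:LJ9-3} is straightforward to evaluate; a short Python sketch, with the purely repulsive choice $f_\mathrm{w}=0$ and $\epsilon_\mathrm{w}=\epsilon$ used in most of the film simulations cited above:

```python
def u_wall(z, eps_w=1.0, f_w=0.0, d=1.0):
    """LJ 9-3 wall potential, Eq. (LJ9-3); f_w = 0 gives pure repulsion."""
    s = d / z
    return eps_w * (s ** 9 - f_w * s ** 3)

# the purely repulsive wall decays quickly with distance:
print(u_wall(1.0))   # 1.0
print(u_wall(2.0))   # ~0.002
```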
\section{Molecular dynamics versus Monte Carlo}
In the framework of computer simulations, it appears natural to address dynamical problems via MD techniques. However, if one is interested in equilibrating long-chain glass-forming polymer melts at low $T$, MD might not be the most efficient approach. Realistic molecular dynamics has the drawback that the available simulation time is often not sufficient to equilibrate the polymer configuration at low temperatures or for long chain lengths.
Therefore, one might envisage resorting to Monte Carlo (MC)
techniques \cite{LandauBinder,Binder2008,Binder2009,Binder2009b}.
The strategic advantage offered by this method is the number of ways in which MC moves may be designed
to explore configuration space. The hope is to find an algorithm
that, freed of the need to capture the real dynamics, efficiently
decorrelates the configurations of glass-forming polymer melts at
low $T$. This demand on the algorithm appears to exclude the
simplest MC technique, the application of only local MC moves, as a
possible candidate. A local MC move consists of selecting a
monomer at random and in attempting to displace it by a small amount
in a randomly chosen direction
\cite{BaschnagelWittmerMeyer:NIC_Review2004}. Not only should the
local character of coordinate updating share the essential
problematic features of the (local) molecular dynamics at low $T$ or for
large $N_{\mathrm{p}}$, but one may also expect that local MC moves will yield
an unfavorably large prefactor of the relaxation time due to their
stochastic character. This conjecture is based on an observation
made by Gleim and coworkers \cite{GleimKobBinder1998}. They compared
the relaxation dynamics of a glass-forming binary mixture simulated,
on the one hand, by MD and, on the other hand, by a stochastic
(Brownian) dynamics (which is in some respect similar to MC). They
demonstrated that, although the structural relaxation at
long times is the same for both methods, MD is roughly an order of
magnitude faster than the stochastic dynamics.
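For concreteness, a local MC move of the kind described above can be sketched as a single-monomer Metropolis update (a toy illustration: the harmonic energy function used here is a hypothetical placeholder, not the melt Hamiltonian):

```python
import math
import random

def local_mc_move(positions, energy_fn, T, delta=0.1, rng=random):
    """Attempt one local Metropolis move: displace a randomly selected
    monomer by a small random amount and accept with min(1, exp(-dE/T))."""
    i = rng.randrange(len(positions))
    old = positions[i]
    trial = tuple(c + rng.uniform(-delta, delta) for c in old)
    e_old = energy_fn(positions)
    positions[i] = trial
    d_e = energy_fn(positions) - e_old
    if d_e > 0 and rng.random() >= math.exp(-d_e / T):
        positions[i] = old          # reject: restore old position
        return False
    return True                     # accept

# toy harmonic energy pinning each monomer to the origin (placeholder)
energy = lambda pos: 0.5 * sum(x * x + y * y for x, y in pos)
chain = [(0.1 * i, 0.0) for i in range(10)]
acc = sum(local_mc_move(chain, energy, T=0.5) for _ in range(1000))
print(f"acceptance rate ~ {acc / 1000:.2f}")
```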
However, MC moves need not be local. They can be tailored to alter
large portions of a chain. A prominent example of such nonlocal
moves is the configuration-bias Monte Carlo (CBMC) technique
\cite{FrenkelSmit,BaschnagelWittmerMeyer:NIC_Review2004}.
Application of this technique to dense polymer systems in the
canonical ensemble usually involves the attempt to remove a portion
of a chain starting from one of its monomers that is randomly chosen
and to regrow the removed portion subject to the constraints imposed
by the local potential energy. If successful, this implies a large
modification of the chain configuration, thereby promising efficient
equilibration. However, Bennemann found that even in the
limit where only the end is reconstructed (`smart reptation'), CBMC
is inferior to ordinary MD \cite{BennemannPaulBinder1998}. In a
dense melt, the probability of inserting a monomer becomes vanishingly
small anywhere except at the position where it was removed. So, the
old configuration of the chain is just restored. This trapping of
the chain makes the relaxation become very slow.
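The regrowth step of CBMC can be illustrated in strongly simplified form: trial positions for a removed end monomer are generated, and one is selected with probability proportional to its Boltzmann weight. The sketch below regrows a single bead in two dimensions against a hypothetical external potential; a full CBMC implementation regrows several beads and uses the Rosenbluth weights of the old and new configurations in the acceptance rule:

```python
import math
import random

def regrow_end(chain, energy_of, T, n_trials=8, bond=1.0, rng=random):
    """CBMC-style regrowth of the last monomer: generate n_trials trial
    positions on a circle of radius `bond` around the penultimate monomer
    and pick one with probability proportional to its Boltzmann weight."""
    anchor = chain[-2]
    trials, weights = [], []
    for _ in range(n_trials):
        theta = rng.uniform(0.0, 2.0 * math.pi)
        pos = (anchor[0] + bond * math.cos(theta),
               anchor[1] + bond * math.sin(theta))
        trials.append(pos)
        weights.append(math.exp(-energy_of(pos) / T))
    pick = rng.choices(trials, weights=weights)[0]
    return chain[:-1] + [pick]

field = lambda p: p[0] ** 2          # hypothetical external potential
chain = [(0.0, 0.0), (1.0, 0.0)]     # dimer: anchor plus end monomer
new_chain = regrow_end(chain, field, T=1.0)
print(len(new_chain))  # 2
```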
Thus, successful nonlocal chain updates in dense systems should
involve moves that do not require (much) empty space. A promising
candidate is double-bridging algorithms which were successfully
employed in simulations of polyethylene chains
\cite{KarayiannisEtal:JCP2002,BanaszakEtal:JCP2003}, of the
Kremer--Grest model \cite{AuhlEtal:2003}, and of a lattice model, the
bond-fluctuation model \cite{BaschnagelWittmerMeyer:NIC_Review2004}.
The basic idea of the algorithm is to find pairs of neighboring
chains which one can decompose into two halves and reconnect in a
way that preserves the monodispersity of the polymers. Such a
connectivity-altering move drastically modifies the conformation of
the two chains and thus strongly reduces the slowing of the dynamics
due to large values of $N_{\mathrm{p}}$. However, if we attempt to repeat this
move over and over again on the melt configuration we started with,
a successful double-bridging event is likely to annihilate one of
its predecessors by performing the transition between two chains in
the reverse direction. To avoid this inefficiency this nonlocal
chain updating should be complemented by a move which efficiently
mixes up the local structure of the melt. At low $T$, efficient
relaxation of the liquid structure calls for a method which
alleviates the glassy slowing down in general. Thus, any algorithm
achieving this aim in non-polymeric liquids should also accelerate
the equilibration of glassy polymer melts, provided that it can be
generalized to respect chain connectivity. At present, no technique
has been established to fully solve this problem (see Ref.~\cite{BrumerReichman:JPC2004} for a topical review).
However, promising candidates appear to be `parallel tempering'
\cite{BunkerDunweg:PRE2000,YamamotoKob2000} (see however
\cite{DeMicheleSciortiono:PRE2002}), a recent MC approach proposed by Harmandaris and
Theodorou combining scission and fusion moves \cite{Harmandaris2002a,Harmandaris2002b} or variants of
`Wang-Landau sampling' \cite{YanEtal:PRL2004,Binder2008}.
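Of these methods, parallel tempering is the simplest to state: the configurations of two replicas at neighboring inverse temperatures $\beta_1$, $\beta_2$ with instantaneous energies $E_1$, $E_2$ are swapped with the Metropolis probability $\min\{1,\exp[(\beta_1-\beta_2)(E_1-E_2)]\}$. A minimal sketch:

```python
import math
import random

def swap_accepted(beta1, e1, beta2, e2, rng=random):
    """Parallel-tempering swap: accept with min(1, exp((b1-b2)*(e1-e2)))."""
    return rng.random() < min(1.0, math.exp((beta1 - beta2) * (e1 - e2)))

# A swap that hands the lower energy to the colder replica is always accepted:
print(swap_accepted(2.0, 5.0, 1.0, 3.0))  # True
```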
\section{Polymers at interfaces: some salient features}
\label{subsection:simulation-fene-polymer-in-film}
In order to demonstrate the capabilities of the simple model described above, we focus here on our recent simulation studies. Compared to the earlier Monte Carlo studies of the bond-fluctuation lattice model \cite{BaschnagelWittmerMeyer:NIC_Review2004}, reviewed in references~\cite{BinderBaschnagelPaul2003,MischlerEtal:ACIS2001}, our work deployed MD simulations to explore the features of a continuum model, spatially confined to a slab geometry \cite{VarnikEtal:PRE2002,VarnikEtal:EPJE2002,VarnikEtal:EPJE2003,VarnikBinder:JCP2002}.
As an example, \fref{fig:rho+msd} compiles results of molecular dynamics simulations of
the polymer model of Eqs.\ (\ref{eq:LJ12-6TS}) and~(\ref{eq:FENE}) in the proximity of a solid substrate (at distances of a few monomer diameters), showing the strong heterogeneity induced by the substrate both in the static structure (exemplified by the local density) and in the transport properties of the liquid. The figure also illustrates the influence of different substrate structures, such as a perfectly smooth substrate as compared to substrates with atomic-scale corrugation (amorphous as well as crystalline).
\begin{figure}
\hspace*{1mm}
\includegraphics[width=6cm]{density_profiles_p1_D20_T0.55_allWalls.eps}\vspace*{4mm}
\includegraphics[width=6cm]{msd_layer_resolved_allWalls.eps}
\caption[]{Molecular dynamics simulations of substrate effects on the structure (a) and dynamics (b) of a model polymer melt described by Eqs.~(\ref{eq:LJ12-6TS}) and (\ref{eq:FENE}). (a): Density profile (normalized to the liquid density at infinite distance) versus distance z from the substrate. (b): Local mean square displacements (MSD) versus time at two different distances, $z$, from a solid substrate \cite{BaVa2005}.}
\label{fig:rho+msd}
\end{figure}
As shown in \fref{fig:rho+msd}, the liquid density exhibits oscillations in the proximity of a substrate. These oscillations are of comparable magnitude both for an ideally flat and for an amorphous corrugated substrate. Despite this similarity in the behavior of the local density, the effects on the dynamics are opposite in nature. While a perfectly smooth substrate (no corrugation at all) leads to an acceleration of the diffusive dynamics, the dynamics of monomers close to an amorphous substrate is slowed down. A qualitative understanding of this behavior, unexpected at first glance, can be obtained by invoking the concept of effective friction \cite{VarnikEtal:PRE2002,VarnikEtal:EPJE2002}.
\begin{figure}
\vspace*{5mm}
\includegraphics[width=6cm]{fig18a.eps}\vspace*{3mm}
\includegraphics[width=6cm]{fig18b.eps}
\caption[]{Growth of both the strength and the range of substrate effects upon cooling. (a): Relaxation time versus distance from the substrate for a model polymer melt described by Eqs.~(\ref{eq:LJ12-6TS}) and (\ref{eq:FENE}) at various temperatures (Lennard--Jones units). The horizontal dashed lines indicate the bulk values (expected at large distances from the substrate). (b): Monomer number density profile for the same range of temperatures \cite{VarnikEtal:PRE2002}.}
\label{fig:tau+rho}
\end{figure}
Figure \ref{fig:rho+msd} also underlines the fact that the spatial arrangement of substrate atoms may play a crucial role in the properties of the adjacent liquid. Strong layering of liquid particles has a dynamic counterpart manifest in a temporary arrest within the liquid layer closest to the substrate (note the extended plateau in the mean square displacement).
Substrate effects both on the packing behavior as well as on dynamics can become quite dramatic when cooling the liquid towards the freezing temperature. This aspect is demonstrated in Fig.~\ref{fig:tau+rho} for the case of the polymer model of Eqs.~(\ref{eq:LJ12-6TS}) and~(\ref{eq:FENE}) close to a purely repulsive and perfectly smooth substrate (Eq.~(\ref{eq:LJ9-3}) with $\epsilon_\mathrm{w}=\epsilon$ and $f_\mathrm{w}=0$). The figure shows the structural relaxation time versus the distance from the substrate for various temperatures. While the effect of the substrate is rather weak and short ranged at high temperatures, the strength of the substrate effects grows significantly as the temperature is decreased. Furthermore, the spatial extension of the region affected by the substrate also becomes larger upon cooling. At the highest temperature shown, the substrate effects are visible only within a distance of 2--3 particle diameters. In contrast to this, the range of substrate effects exceeds 6 particle diameters at the lowest temperature investigated. The most remarkable effect, however, is on dynamic properties: At the lowest temperature investigated, the relaxation time decreases by roughly two orders of magnitude when approaching the substrate from infinity.
It is noteworthy that the above observations via computer simulations of a very simple model are qualitatively in line with experimental findings on real polymers \cite{Ellison2003,Ellison2005}. This indicates that, at least on a qualitative level, generic features such as geometric confinement as well as adhesive/repulsive nature of the substrate play a more important role than material specific details.
\begin{figure} \rsfig{0.45}{TcTg_vs_thickness_relative_values.eps}
\caption[]{Scaling plot of $\ensuremath{T_{\mathrm{c}}}(h)$ and $\ensuremath{T_{\mathrm{g}}}(h)$.
Here $h$ is the film thickness and $\ensuremath{T_{\mathrm{c}}}(h)$ is the so-called critical temperature of the mode coupling theory determined in MD simulations of the polymer model specified via Eqs.~(\ref{eq:LJ12-6TS}) and (\ref{eq:FENE}) confined between two purely repulsive and perfectly smooth walls. It is compared to the glass transition temperatures $\ensuremath{T_{\mathrm{g}}}(h)$ of three studies:
(i) Monte Carlo simulations of a lattice model for free-standing atactic polypropylene (PP) films \cite{XuMattice:Macro2003};
(ii) experiments on supported atactic polystyrene (PS) films (spin cast
from toluene solution onto silicon wafers) \cite{HerminghausEtal:EPJE2001};
(iii) experiments on supported, high-molecular-weight PS films \cite{KeddieEtal:EPL1994,ForrestDalnoki:AdvColSci2001}. The solid line indicates $\ensuremath{T_{\mathrm{g}}}(h)/\ensuremath{T_{\mathrm{g}}}=1 - (h_0/{h})^\delta$
($h_0$ is a material dependent characteristic length)
\cite{KimEtal:Langmuir2000,KimEtal:Langmuir2001}, the dashed line $ \ensuremath{T_{\mathrm{g}}}(h)/\ensuremath{T_{\mathrm{g}}}=1/(1+h_0/h)$
\cite{KeddieEtal:EPL1994} and the dotted line the approximation $1-h_0/h$, valid for small $h_0/h$.}
\label{fig:TcTg}
\end{figure}
The above-discussed substrate effects on the dynamics of structural relaxation have strong implications for the thermal and mechanical properties of the system. Indeed, in the case of perfectly smooth and repulsive walls, the confined system behaves more liquid-like than the same polymer system in the bulk (infinitely far from the substrate). Similarly, the presence of atomistically corrugated substrates with adhesive interactions increases the solid character of the system. For sufficiently small slab or film thickness ($h \le 100$ nm), this behavior translates into a dependence of the glass transition temperature $\ensuremath{T_{\mathrm{g}}}$ (the temperature at which the polymer forms an amorphous solid) on both the type and the thickness of the slab.
Figure \ref{fig:TcTg} illustrates this issue for the case of smooth non-adhesive walls, where the expected reduction in $\ensuremath{T_{\mathrm{g}}}$ is observed in experiments, accompanied by a similar reduction in $\ensuremath{T_{\mathrm{c}}}$ in simulations ($\ensuremath{T_{\mathrm{c}}}$ is the so-called ideal glass transition temperature within the mode coupling theory \cite{GoetzeSjoegren1992_RPP,Goetze:MCTessentials,Goetze:JPCM1999}). It is noteworthy that, in the case of adhesive substrates, the expected opposite effect, i.e.\ an increase in the glass transition temperature upon confinement, is indeed observed (see e.g.\ Torres and coworkers \cite{torresEtal:PRL2000} and references therein).
Let us now turn to the question of how polymer-specific properties, such as the conformation of a chain, may change close to a substrate. As shown in \fref{fig:fig4}, the polymer conformation is significantly stretched along the parallel direction in the proximity of the substrate. To quantify this, we compare the parallel and perpendicular components of the radius of gyration, $R^{2}_{\rm g,\parallel}$, $R^{2}_{\rm g,\bot}$, as well as the same components of the end-to-end distance, $R^{2}_{\rm ee,\parallel}$, $R^{2}_{\rm ee,\bot}$, in a slab geometry.
\begin{figure}
\vspace*{5mm}
\includegraphics[width=6cm]{fig4a.eps}
\caption[]{Components of the radius of gyration and the end-to-end distance in directions parallel ($R^{2}_{\rm g,\parallel}$ and $R^{2}_{\rm ee,\parallel}$) and perpendicular ($R^{2}_{\rm g,\bot}$ and $R^{2}_{\rm ee,\bot}$) to the substrate versus the distance, $z$, from the substrate. $R^{2}_{\rm g,\parallel}(z)$ (left ordinate) and $R^{2}_{\rm ee,\parallel}(z)$ (right ordinate) behave qualitatively similar. They develop a maximum close to the substrate and then converge towards a constant (bulk) value in the film center indicated by horizontal dashed lines. (Note that $R^{2}_{\rm g,\parallel}$ and $R^{2}_{\rm g,\bot}$ are shifted upwards by an amount of
unity in order to avoid crossing with the end-to-end curves)
\cite{VarnikEtal:EPJE2002}.}
\label{fig:fig4}
\end{figure}
The components are plotted versus the distance, $z$, from the substrate, where $z$ denotes the position of the chain's center of mass. So, $R^{2}_{\rm g,\parallel}(z)$, for instance, is the radius of gyration parallel to the substrate, averaged over all chains whose centers of mass are located at $z$. The figure shows that both the radius of gyration and the end-to-end distance agree with the bulk value if $z > z_{\rm w} \!+\! 2 R_{\rm g}^{\rm bulk}$. Here, $z_{\rm w} \approx 1$ is the wall position, i.e.\ the smallest distance between a monomer and the substrate. As the chain's center of mass approaches the substrate, $R^{2}_{\rm g,\parallel}$ and $R^{2}_{\rm ee,\parallel}$ first develop a shallow minimum and then increase to about twice the bulk value, followed by a sharp decrease to zero in the very vicinity of the substrate, where practically no chain is present. On the other hand, the perpendicular components, $R^{2}_{\rm g,\bot}$ and $R^{2}_{\rm ee,\bot}$, first pass through a maximum before decreasing to almost zero at the substrate. This behavior has been observed in several other simulations (see \cite{BaschnagelBinderMilchev2000} and references therein), also for larger chain lengths than studied here \cite{BitsanisHadziioannou1990,WangBinder1991}.
\section{Conclusion}
In this paper, we provide a brief survey of modeling and simulation studies of dense systems of flexible linear polymers close to solid substrates. The challenge of polymer science stems from the fact that the smallest and largest length scales present in a polymer system may span many orders of magnitude, usually from $\sim\!1$~\AA\ (the size of an atom) up to hundreds of nanometers (the end-to-end distance). This broad range of length scales brings about a correspondingly wide time window. An adequate study of polymer systems thus necessarily involves the use of simplifying concepts allowing one to focus on essential features. It is, therefore, not surprising that coarse-graining procedures are common practice in polymer science. In this article, we address some of the basic ideas behind the development of coarse-grained models for computational studies of polymer systems. Along this route, we also compare Monte Carlo methods with molecular dynamics simulations.
Some salient properties of polymers close to a solid substrate are also presented. In particular, we show how the presence of a solid substrate may affect both the static and the dynamic properties of a polymer melt. This is exemplified by a significant slowing down of the local diffusion dynamics (and of the closely related structural relaxation) in the proximity of attractive substrates. Similarly, a generic enhancement of diffusion (resulting in a reduction of the glass transition temperature in a slab geometry) is observed close to perfectly smooth non-adhesive surfaces. These findings are evidenced both by experiments and by computer simulations. Polymer-specific features, on the other hand, reflect themselves, e.g., in a change of the conformational degrees of freedom from fully isotropic in the bulk to a state elongated in the direction parallel to the substrate in the very vicinity of the substrate.
We gratefully acknowledge J. Baschnagel, Wolfgang Paul and Marcus M\"uller for useful discussions and for providing us with results of their recent work. FV acknowledges support from the industrial sponsors of the ICAMS, the state of North Rhine-Westphalia and the European Union.
\section{Introduction}
Consider a group of $K$ sensors measuring a common phenomenon, such as the weather. In this paper, we investigate a communication scenario in which some sensors desire to obtain the measurements of the other nodes with the help of some existing relay nodes in the network. In the language of information theory, we can consider the measurements of the sensors as outputs of discrete memoryless correlated sources and model the communication network as a cooperative relay network in which each node can simultaneously be a transmitter, a relay and a receiver. The problem can thus be stated as follows:\par
\emph{Given a set of sources $U_{{\mathcal A}}=\{U_{a_j}:a_j\in{\mathcal A}\}$ observed at nodes ${\mathcal A}=\{a_1,\cdots,a_M\}\subseteq{\mathcal V}$ respectively (${\mathcal V}=\{1,\cdots,V\}$ is the set of nodes in the network) and a set of receivers at nodes ${\mathcal D}=\{d_1,\cdots,d_K\}\subseteq{\mathcal V}$ which is not necessarily disjoint from ${\mathcal A}$, what conditions must be satisfied to enable us to reliably multicast $U_{{\mathcal A}}$ to all the nodes in ${\mathcal D}$ over the cooperative relay network?} \par
The problem of Slepian-Wolf (SW) coding over multi-user channels has been considered for some special networks. First in \cite{tuncel}, Tuncel investigated the problem of multicasting a source over a broadcast channel with side information at the receivers. He proposed a joint source-channel coding scheme which achieves \emph{operational separation} between source coding and channel coding in the sense that the source and channel variables are separated. He also proved the optimality of his scheme. In a recent work \cite{gunduz}, this problem was generalized to the problem of lossy multicasting of a source over a broadcast channel with side information. In \cite{babu}, a necessary and sufficient condition for multicasting a set of correlated sources over acyclic Aref networks \cite{aref} was derived. The problem of multicasting correlated sources over networks was also studied in the network coding literature \cite{ho,effros}.\par
Cooperative relay networks have been widely studied in terms of achievable rate regions for relay networks \cite{xie,kramer2005}, multiple access relay channels \cite{marc} and multi-source, multi-relay and multi-destination networks \cite{xie:2007}. In all the mentioned works, the two main strategies of Cover and El Gamal for relay channels \cite{cover}, namely, \emph{decode and forward} (DF) and \emph{compress and forward} (CF), were generalized to cooperative relay networks. In a more general setting \cite{goldsmith}, G\"{u}nd\"{u}z et~al.\ consider a compound multiple access channel with a relay, in which three transmitters, one of which acts as a relay for the others, want to multicast their messages to two receivers. Several inner bounds to the capacity region of this network were derived using DF, CF and also structured lattice codes. Although finding the capacity of the simple relay channel is a longstanding open problem, an approximation for the Gaussian relay network with multicast demands has recently been found in \cite{aves:isit,aves:phd,aves:sub}. In these works, the authors propose a scheme that uses Wyner-Ziv coding at the relays and a distinguishability argument at the receivers.\par
In this paper, we first study the problem of multi-layer Slepian-Wolf coding of multi-component correlated sources, in which each source encodes its components according to a given hierarchy. Using the sub-modularity of the entropy function and a covering lemma, we prove an identity which states that each point of the SW region with respect to joint encoding/decoding of the components is achievable by a multi-layer SW coding scheme. To the best of our knowledge, this identity is new, and we call it the SW-identity. Then, we propose a \emph{joint source-Wyner-Ziv encoding/sliding window decoding} scheme for Slepian-Wolf coding over cooperative networks. In this scheme, each node compresses its channel observation using Wyner-Ziv coding and then jointly maps its source observation and compressed channel observation to a channel codeword. For decoding, each receiver uses sliding window decoding with respect to an ordered partition of the other nodes. For each ordered partition, we obtain a set of DMCS which can reliably be multicast over the cooperative relay network. By utilizing the SW-identity, we obtain the union of the sets of all feasible DMCS with respect to all ordered partitions. Our scheme results in \emph{operational separation} between the source and channel coding. In addition, this scheme does not depend on the graph of the network, so the result can easily be applied to any arbitrary network.
We show that the sufficient conditions for our scheme are indeed necessary conditions for Slepian-Wolf coding over arbitrary Aref networks and linear finite-field cooperative relay networks. Moreover, we prove the feasibility of multicasting all DMCS whose Slepian-Wolf region overlaps the cut-set bound within a constant number of bits over a Gaussian cooperative relay network. This establishes a large subset of the DMCS which can reliably be multicast in the operational separation sense. Note that the model considered in this paper encompasses the model of the multiple access channel with correlated sources, so the set of feasible DMCS in the operational separation sense is a subset of all feasible DMCS. We extract an achievable rate region for cooperative relay networks by specializing the sufficient conditions for reliable multicasting. We show that this achievable rate region subsumes some recent achievable rates based on the CF strategy \cite{kramer2005,yassaee}. In addition, we estimate the capacity region of Gaussian cooperative relay networks within a constant number of bits from the cut-set bound. Our result improves the capacity approximation of Gaussian relay networks given in \cite{aves:sub}.\par
The rest of the paper is organized as follows. In section \ref{sec:2}, we introduce notations and definitions used in this paper. Section \ref{sec:3} derives necessary conditions for reliable multicasting of DMCS over cooperative networks. Section \ref{sec:4} studies the multi-layer Slepian-Wolf coding, in particular, a novel identity related to the entropy function is derived. In section \ref{sec:5}, we obtain feasibility constraints which are the main results of the paper. In sections \ref{sec:6} and \ref{sec:7}, we derive necessary and sufficient conditions for multicasting of DMCS over some classes of semi-deterministic networks and Gaussian cooperative relay networks, respectively. Section \ref{sec:8} employs results of the previous sections to derive an inner bound and an outer bound for the capacity region of a cooperative relay networks. Section \ref{sec:9} concludes the paper.
\section{Preliminaries and Definitions}\label{sec:2}
\subsection{Notation}
We denote discrete random variables with capital letters, e.g., $X$, $Y$, and their realizations with lower case letters $x$, $y$. A random variable $X$ takes values in a set ${\mathcal X}$. We use $|{\mathcal X}|$ to denote the cardinality of a finite discrete set ${\mathcal X}$, and $p_X(x)$ to denote the probability mass function (p.m.f.) of $X$ on ${\mathcal X}$; for brevity, we may omit the subscript $X$ when it is obvious from the context. We denote vectors with boldface letters, e.g., $\mathbf{x}$, $\mathbf{y}$. The superscript identifies the number of samples to be included in a given vector, e.g., $X^i=(X_1,\cdots,X_i)$. We use $\mathit{T}_{\epsilon}^n(X)$ to denote the set of $\epsilon$-strongly typical sequences of length $n$ with respect to the p.m.f. $p_X(x)$ on ${\mathcal X}$. Further, we use $\mathit{T}_{\epsilon}^n(Y|\mathbf{x})$ to denote the set of all $n$-sequences $\mathbf{y}$ such that $(\mathbf{x},\mathbf{y})$ is jointly typical w.r.t. $p_{XY}(x,y)$. We denote the vectors in the $j$th block by a subscript $[j]$. For a given set ${\mathcal S}$, we use the shortcuts $X_{{\mathcal S}}=\{X_i:i\in{\mathcal S}\}$ and $R_{{\mathcal S}}=\sum_{i\in{\mathcal S}}R_i$. We use ${\mathcal S}\backslash{\mathcal T}$ to denote the set-theoretic difference of ${\mathcal S}$ and ${\mathcal T}$. We say that $a_n\stackrel{.}{\le}2^{nb}$ if, for each $\epsilon>0$ and sufficiently large $n$, the relation $a_n\le 2^{n(b-\epsilon)}$ holds.
\subsection{Sub-modular Function}
Let ${\mathcal V}$ be a finite set and $2^{{\mathcal V}}$ be its power set, i.e., the collection of all subsets of ${\mathcal V}$. A function $f:2^{{\mathcal V}}\rightarrow\mathbb{R}$ is called sub-modular if, for each ${\mathcal S},{\mathcal T}\subseteq{\mathcal V}$,
\begin{equation}
f({\mathcal S}\cap{\mathcal T})+f({\mathcal S}\cup{\mathcal T})\le f({\mathcal S})+f({\mathcal T})
\end{equation}
Function $f$ is called super-modular, if $-f$ is sub-modular. Given two sets ${\mathcal S},{\mathcal T}$ and a sub-modular function $f$, we define $f({\mathcal S}|{\mathcal T})\triangleq f({\mathcal S}\cup{\mathcal T})-f({\mathcal T})$.\\
Let $X_{{\mathcal A}}$ be a set of DMCS with distribution $p(x_{{\mathcal A}})$. For each ${\mathcal S}\subseteq{\mathcal A}$, we define the entropy function $h$ as $h({\mathcal S})=H(X_{{\mathcal S}})$, where $H(X)$ denotes the entropy of the random variable $X$. It is well known that the entropy function $h$ is sub-modular over the set ${\mathcal A}$ \cite{yeoung:book}. The sub-modularity of the entropy function plays an essential role in the remainder of the paper, in contrast to its non-decreasing property, i.e., $h({\mathcal S})\ge h({\mathcal T})$ for all ${\mathcal T}\subseteq{\mathcal S}$.
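The sub-modularity of the entropy function can be checked by brute force on a small example. The following sketch uses a hypothetical joint p.m.f. over three correlated bits (the distribution is ours, chosen only for illustration) and verifies $h({\mathcal S}\cap{\mathcal T})+h({\mathcal S}\cup{\mathcal T})\le h({\mathcal S})+h({\mathcal T})$ over all subset pairs:

```python
import itertools
import math

# Hypothetical joint p.m.f. over three correlated bits (X_1, X_2, X_3):
# X_1 is a fair bit, X_2 agrees with X_1 w.p. 0.9, and X_3 = X_1 XOR X_2.
joint = {}
for x1 in (0, 1):
    for x2 in (0, 1):
        p = 0.5 * (0.9 if x2 == x1 else 0.1)
        key = (x1, x2, x1 ^ x2)
        joint[key] = joint.get(key, 0.0) + p

def h(S):
    """Entropy function h(S) = H(X_S) in bits; S is a set of coordinate indices."""
    if not S:
        return 0.0
    marg = {}
    for x, p in joint.items():
        key = tuple(x[i] for i in sorted(S))
        marg[key] = marg.get(key, 0.0) + p
    return -sum(p * math.log2(p) for p in marg.values() if p > 0)

V = {0, 1, 2}
subsets = [frozenset(c) for r in range(len(V) + 1)
           for c in itertools.combinations(sorted(V), r)]
# sub-modularity: h(S & T) + h(S | T) <= h(S) + h(T) for every pair of subsets
for S in subsets:
    for T in subsets:
        assert h(S & T) + h(S | T) <= h(S) + h(T) + 1e-12
```

The same brute-force check applies to any candidate set function, which is how one would test sub-modularity of the layer functions introduced later.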
\subsection{Some Geometry}
A polytope is a generalization of a polygon to higher dimensions. A point, a segment and a polygon are polytopes of dimension $0$, $1$ and $2$, respectively. A polytope of dimension $d\ge 3$ can be viewed as a region bounded by a set of polytopes of dimension $d-1$; each bounding polytope of dimension $d-1$ is called a facet. For a given polytope $\mathbf{P}$, a collection of polytopes $\{\mathbf{P}_1,\cdots,\mathbf{P}_n\}$ is called a \emph{closed covering} of $\mathbf{P}$, if $\mathbf{P}=\cup_{i=1}^n \mathbf{P}_i$.
\begin{lemma}\label{le:cover}
Let $\mathbf{P}$ be a polytope and ${\mathcal F}=\{\mathbf{P}_1,\mathbf{P}_2,\cdots,\mathbf{P}_n\}$ be a collection of polytopes with the same dimension as $\mathbf{P}$. If $\mathbf{P}$ and ${\mathcal F}$ satisfy the following conditions:
\begin{enumerate}
\item $\forall i:\quad\mathbf{P}_i\subset\mathbf{P}$
\item Each facet of $\mathbf{P}$ is covered by facets of some of the polytopes $\mathbf{P}_{i_1},\cdots,\mathbf{P}_{i_k}$.
\item For each facet of $\mathbf{P}_i$ lying inside $\mathbf{P}$, there is a $\mathbf{P}_j\neq\mathbf{P}_i$ such that
$\mathbf{P}_i$ and $\mathbf{P}_j$ share exactly that facet.
\end{enumerate}
\par then ${\mathcal F}$ is a \emph{closed covering} of $\mathbf{P}$.
\end{lemma}
\begin{proof}
The proof is provided in the Appendix \ref{app:1}.
\end{proof}
Lemma \ref{le:cover} provides a powerful tool for dealing with regions that are described by a set of inequalities.
\begin{definition}
\label{def:majorize}
A point $Q=(q_1,\cdots,q_d)$
in $\mathbb{R}^d$ is said to majorize point $P =(p_1,\cdots,p_d)$, if $q_i\ge p_i$ for all $i$. In addition, point $Q$ is said to majorize set ${\mathcal P}$ (denoted by $Q\succ{\mathcal P}$), if there exists a point $X\in{\mathcal P}$ which is majorized by $Q$.
\end{definition}
It is easy to show that majorization has the following simple property:
\begin{equation}
\label{eq:maj-property}
Q\succ {\mathcal P}_1\cup{\mathcal P}_2 \quad \Leftrightarrow\quad Q\succ {\mathcal P}_1\ \mbox{or}\ Q\succ {\mathcal P}_2
\end{equation}
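For finite point sets, property \eqref{eq:maj-property} can be verified directly from the definition. A minimal sketch (the point sets below are hypothetical examples):

```python
def majorizes_point(q, p):
    """q majorizes p: q_i >= p_i componentwise (Definition of majorization)."""
    return all(qi >= pi for qi, pi in zip(q, p))

def majorizes_set(q, pts):
    """Q majorizes the set P: some point of P is majorized by Q."""
    return any(majorizes_point(q, p) for p in pts)

# hypothetical finite sets P1, P2 in R^2
P1 = [(1.0, 3.0), (2.0, 2.0)]
P2 = [(4.0, 0.0)]
# Q > P1 u P2  iff  Q > P1 or Q > P2, checked at a few sample points
for q in [(2.0, 2.5), (0.5, 5.0), (5.0, 0.0)]:
    assert majorizes_set(q, P1 + P2) == (majorizes_set(q, P1) or majorizes_set(q, P2))
```

The equivalence is immediate because an existential quantifier over a union splits into a disjunction of existential quantifiers over the parts.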
\begin{definition}
\label{def:associate}
Let $f$ be a sub-modular function over the set ${\mathcal V}$. The \emph{essential polytope} associated with $f$ is:
\begin{equation}
\label{eq:esspoly}
\mathbf{P}_f=\{\mathbf{x}\in\mathbb{R}^{|{\mathcal V}|}: x_{{\mathcal V}}= f({\mathcal V})\ \mbox{and}\ \forall {\mathcal S}\subset{\mathcal V}, x_{{\mathcal S}}\ge f({\mathcal S}|{\mathcal S}^C)\}
\end{equation}
where $\mathbf{x}=[x_1,x_2,\cdots,x_{|{\mathcal V}|}]$ and $x_{{\mathcal S}}=\sum_{i\in{\mathcal S}}x_i$.
\end{definition}
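For a small ground set, the defining constraints of \eqref{eq:esspoly} can be enumerated directly. A minimal sketch, using the hypothetical sub-modular function $f({\mathcal S})=\sqrt{|{\mathcal S}|}$ (a concave function of the cardinality, hence sub-modular); the function and set are illustrative only:

```python
import itertools
import math

def f(S):
    """Hypothetical sub-modular function: square root of the cardinality."""
    return math.sqrt(len(S))

def essential_polytope_constraints(f, V):
    """Equality and inequalities describing P_f in eq. (esspoly):
    x_V = f(V) and, for each proper non-empty S, x_S >= f(S | S^C)."""
    V = frozenset(V)
    cons = [("eq", sorted(V), f(V))]
    for r in range(1, len(V)):
        for S in itertools.combinations(sorted(V), r):
            S = frozenset(S)
            cons.append(("ge", sorted(S), f(V) - f(V - S)))  # f(S|S^C)
    return cons

cons = essential_polytope_constraints(f, {1, 2, 3})
# one equality plus 2^|V| - 2 inequalities
assert len(cons) == 1 + 2 ** 3 - 2
```

Feeding these constraints to any LP solver gives a concrete representation of $\mathbf{P}_f$ for small $|{\mathcal V}|$.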
%
\par The essential polytope of a sub-modular function $f$ over the set ${\mathcal V}$ is a polytope of dimension $|{\mathcal V}|-1$, which has $2^{|{\mathcal V}|}-2$ facets, each corresponding to the intersection of the hyperplane $x_{{\mathcal T}}=f({\mathcal T}|{\mathcal T}^C)$ with $\mathbf{P}_{f}$ for a non-empty subset ${\mathcal T}\subset{\mathcal V}$. We denote by $\mathbf{F}_{f,{\mathcal T}}$ the facet corresponding to the subset ${\mathcal T}$. Since $g({\mathcal T})=f({\mathcal T}|{\mathcal T}^C)=f({\mathcal V})-f({\mathcal T}^C)$ is a super-modular function, one can easily show that $\mathbf{F}_{f,{\mathcal T}}$ is a non-empty polytope of dimension $|{\mathcal V}|-2$ (see, for example, \cite{polytope}).
\begin{lemma}
\label{le:facet}
The facet $\mathbf{F}_{f,{\mathcal T}}$ of polytope $\mathbf{P}_f$ can be decomposed to projections of $\mathbf{P}_f$ on $\mathbb{R}^{{\mathcal T}}$ and $\mathbb{R}^{{\mathcal T}^C}$ (in which $\mathbb{R}^{{\mathcal S}}$ stands for the space $\{\mathbf{x}\in\mathbb{R}^{|{\mathcal V}|}:\forall s\in{\mathcal S}^C, x_{s}=0\}$). More precisely,
\begin{equation}
\label{eq:app}
\mathbf{F}_{f,{\mathcal T}}=\{\mathbf{x}\in\mathbb{R}^{|{\mathcal V}|}:\mathbf{x}_{{\mathcal T}}\in \mathbf{F}_{f,{\mathcal T}}^{(1)}, \mathbf{x}_{{\mathcal T}^C}\in\mathbf{F}_{f,{\mathcal T}}^{(2)}\}
\end{equation}
where
\begin{equation}
\label{eq:pface1}
\mathbf{F}_{f,{\mathcal T}}^{(1)}=\{\mathbf{x}\in\mathbb{R}^{{\mathcal T}}: x_{{\mathcal T}}=f({\mathcal T}|{\mathcal T}^C),\ \mbox{and}\ \forall{\mathcal S}\subset{\mathcal T}, x_{{\mathcal S}}\ge f({\mathcal S}|{\mathcal S}^C)\}
\end{equation}
and
\begin{equation}
\label{eq:pface2}
\mathbf{F}_{f,{\mathcal T}}^{(2)}=\{\mathbf{x}\in\mathbb{R}^{{\mathcal T}^C}: x_{{\mathcal T}^C}=f({\mathcal T}^C),\ \mbox{and}\ \forall{\mathcal S}\subset{\mathcal T}^C, x_{{\mathcal S}}\ge f({\mathcal S}|{\mathcal T}^C\backslash{\mathcal S})\}.
\end{equation}
Moreover, $\mathbf{F}_{f,{\mathcal T}}^{(1)}$ and $\mathbf{F}_{f,{\mathcal T}}^{(2)}$ are the essential polytopes of the functions $f_1:2^{{\mathcal T}}\rightarrow\mathbb{R}$ and $f_2:2^{{\mathcal T}^C}\rightarrow\mathbb{R}$ respectively, where $f_1({\mathcal S})=f({\mathcal S}|{\mathcal T}^C)$ and $f_2({\mathcal S})=f({\mathcal S})$.
\end{lemma}
\begin{proof}
The proof is provided in Appendix \ref{app:2}.
\end{proof}
\begin{lemma}[\cite{polytope}]
\label{le:sum-polytope}
Let $f_1$ and $f_2$ be two sub-modular functions defined on a set ${\mathcal V}$. Then,
\begin{equation}
\mathbf{P}_{f_1+f_2}=\mathbf{P}_{f_1}+\mathbf{P}_{f_2}
\end{equation}
where the sum of two sets is defined as ${\mathcal X}+{\mathcal Y}=\{x+y:x\in{\mathcal X},y\in{\mathcal Y}\}$.
\end{lemma}
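For $|{\mathcal V}|=2$ the essential polytope reduces to a line segment, so Lemma \ref{le:sum-polytope} can be checked directly: the Minkowski sum of two such segments is the segment whose endpoints are the componentwise sums of the corresponding endpoints. A minimal numeric sketch (the two sub-modular functions below are hypothetical):

```python
def segment(f):
    """Endpoints of the essential polytope of f over V = {1, 2}.
    f is a dict keyed by frozensets; P_f is the segment between
    (f(1|2), f(V) - f(1|2)) and (f(V) - f(2|1), f(2|1))."""
    fV = f[frozenset({1, 2})]
    f1c = fV - f[frozenset({2})]  # f(1|2)
    f2c = fV - f[frozenset({1})]  # f(2|1)
    return ((f1c, fV - f1c), (fV - f2c, f2c))

# two hypothetical sub-modular functions on {1, 2}
g = {frozenset({1}): 2.0, frozenset({2}): 3.0, frozenset({1, 2}): 4.0}
h = {frozenset({1}): 1.0, frozenset({2}): 1.5, frozenset({1, 2}): 2.0}
s = {S: g[S] + h[S] for S in g}  # the sum function g + h

A_g, B_g = segment(g)
A_h, B_h = segment(h)
A_s, B_s = segment(s)
# P_{g+h} = P_g + P_h: the endpoints add componentwise
assert A_s == tuple(a + b for a, b in zip(A_g, A_h))
assert B_s == tuple(a + b for a, b in zip(B_g, B_h))
```

For higher dimensions the same identity holds, but verifying it numerically requires comparing vertex sets or LP feasibility rather than two endpoints.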
\subsection{System Model}
A cooperative relay network is a discrete memoryless network with $V$ nodes ${\mathcal V} =\{1, 2,\cdots, V\}$, and a channel of the form
\[
({\mathcal X}_1,{\mathcal X}_2,\cdots,{\mathcal X}_V,p(y_1,y_2,\cdots,y_V|x_1,x_2,\cdots,x_V),{\mathcal Y}_1,{\mathcal Y}_2,\cdots,{\mathcal Y}_V).
\]
At each time $t = 1, 2,\cdots$, every node $v\in{\mathcal V}$ sends an input $X_{v,t}\in{\mathcal X}_v$ and receives an output $Y_{v,t}\in{\mathcal Y}_v$, which are related via $p(Y_{1,t},\cdots , Y_{V,t}|X_{1,t},\cdots,X_{V,t})$.
\begin{definition}[Reliable multicasting of correlated sources over cooperative networks]
\label{def:sw}
Let ${\mathcal A}$ and ${\mathcal D}$ be two subsets of ${\mathcal V}$ corresponding to the sets of sources and destinations, respectively. We say that the set of DMCS $U_{\mathcal A}$ can reliably be multicast over a discrete memoryless cooperative network to all nodes in ${\mathcal D}$, if there exist a sequence of pairs of positive integers $(s_n,r_n)$ such that $s_n\rightarrow\infty,\ r_n\rightarrow\infty,\ \dfrac{r_n}{s_n}\rightarrow 1$ as $n\rightarrow\infty$, and a sequence of encoding functions
\[ f_{v,t}^{(s_n)}:{\mathcal U}_v^{s_n}\times{\mathcal Y}_v^{t-1}\rightarrow{\mathcal X}_v\quad \mbox{for}\quad t=1,\cdots,r_n\]
at all nodes $v\in{\mathcal V}$ (for non-source nodes we set ${\mathcal U}_v=\emptyset$), and a set of decoding functions, one at each node $d_i\in{\mathcal D}$,
\[g_{d_i}^{(s_n,r_n)}:{\mathcal U}_{d_i}^{s_n}\times{\mathcal Y}_{d_i}^{r_n}\rightarrow{\mathcal U}_{{\mathcal A}}^{s_n}\]
such that the probability of error
\[
P_{e,d_i}^{(s_n,r_n)}=\Pr\left(g_{d_i}^{(s_n,r_n)}(U_{d_i}^{s_n},Y_{d_i}^{r_n})\neq U_{{\mathcal A}}^{s_n}\right)
\]
vanishes for all $d_i\in{\mathcal D}$ as $n$ goes to infinity.
\end{definition}
According to Definition \ref{def:sw}, the joint probability distribution of the random variables factors as,
\begin{equation}
p(\mathbf{u}_{{\mathcal A}},\mathbf{x}_{{\mathcal V}},\mathbf{y}_{{\mathcal V}})=\prod_{j=1}^{s_n} p(u_{{\mathcal A},j})\prod_{t=1}^{r_n}\prod_{v=1}^V p(x_{v,t}|y_v^{t-1},\mathbf{u}_v)p(y_{{\mathcal V},t}|x_{{\mathcal V},t})
\end{equation}
\begin{remark}
The network model described in Definition \ref{def:sw} includes several network models such as the MAC with feedback, relay networks and multi-way channels (i.e., a generalization of the two-way channel).
\end{remark}
\section{Cut-set type necessary conditions for reliable multicasting}\label{sec:3}
In this section, we prove necessary conditions for reliable multicasting of correlated sources over a cooperative network.
\begin{proposition}
\label{pro:ob}
A set of DMCS $U_{{\mathcal A}}$ can reliably be multicast over a cooperative network, only if there exists a joint p.m.f. $p(x_{{\mathcal V}})$ such that
\begin{equation}
\label{eq:sw2}
H(U_{{\mathcal S}}|U_{{\mathcal A}\backslash{\mathcal S}})< \min_{d_i\in{\mathcal D}\backslash{\mathcal S}}\min_{{\mathcal V}\supseteq{\mathcal W}\supseteq{\mathcal S}:\atop d_i\in{\mathcal W}^C} I(X_{{\mathcal W}};Y_{{\mathcal W}^C}|X_{{\mathcal W}^C})
\end{equation}
\end{proposition}
\begin{proof}
Using Fano's inequality and imposing the condition $P_{e,d_i}^{(s_n,r_n)}\rightarrow 0$ as $n\rightarrow\infty$, it follows that:
\begin{equation}
\label{eq:ob}\forall {\mathcal S}\subseteq{\mathcal V}, d_i\in{\mathcal D}\backslash{\mathcal S}: \frac{1}{s_n}H(U_{{\mathcal A}}^{s_n}|Y_{d_i}^{r_n},U_{d_i}^{s_n})\leq\epsilon_n
\end{equation}
with $\epsilon_n\rightarrow 0$ as $n\rightarrow\infty$. We also have $\frac{1}{s_n}H(U_{{\mathcal S}}^{s_n}|U_{{\mathcal A}\backslash{\mathcal S}}^{s_n}Y_{d_i}^{r_n}U_{d_i}^{s_n})\leq\epsilon_n$. For each $({\mathcal W},d_i)$ such that ${\mathcal S}\subseteq{\mathcal W}\subseteq{\mathcal V}$ and $d_i\in{\mathcal W}^C$, we have:
\begin{align}
H(U_{{\mathcal S}}|U_{{\mathcal A}\backslash{\mathcal S}})&=\frac{1}{s_n}H(U_{{\mathcal S}}^{s_n}|U_{{\mathcal A}\backslash{\mathcal S}}^{s_n})
\label{fa:1}\\ &=\frac{1}{s_n}(I(U_{{\mathcal S}}^{s_n};Y_{d_i}^{r_n}|U_{{\mathcal A}\backslash{\mathcal S}}^{s_n})+H(U_{{\mathcal S}}^{s_n}|U_{{\mathcal A}\backslash{\mathcal S}}^{s_n}Y_{d_i}^{r_n}))
\label{fa:2}\\
&\leq\frac{1}{s_n}I(U_{{\mathcal S}}^{s_n};Y_{{\mathcal W}^C}^{r_n}|U_{{\mathcal A}\backslash{\mathcal S}}^{s_n})+\epsilon_n
\label{fa:3}\\
&=\frac{1}{s_n}\sum_{i=1}^{r_n}
I(U_{{\mathcal S}}^{s_n};Y_{{\mathcal W}^C,i}|U_{{\mathcal A}\backslash{\mathcal S}}^{s_n}Y_{{\mathcal W}^C}^{i-1}X_{{\mathcal W}^C,i})+\epsilon_n
\label{fa:4}\\
&=\frac{1}{s_n}\sum_{i=1}^{r_n}H(Y_{{\mathcal W}^C,i}|U_{{\mathcal A}\backslash{\mathcal S}}^{s_n}Y_{{\mathcal W}^C}^{i-1}X_{{\mathcal W}^C,i})-
H(Y_{{\mathcal W}^C,i}|U_{{\mathcal A}}^{s_n}Y_{{\mathcal W}^C}^{i-1}X_{{\mathcal W}^C,i})+\epsilon_n
\label{fa:5}\\
&\leq\frac{1}{s_n}\sum_{i=1}^{r_n}H(Y_{{\mathcal W}^C,i}|X_{{\mathcal W}^C,i})-H(Y_{{\mathcal W}^C,i}|U_{{\mathcal A}}^{s_n}Y_{{\mathcal V}}^{i-1}X_{{\mathcal V},i})+\epsilon_n
\label{fa:6}\\
&=\frac{1}{s_n}\sum_{i=1}^{r_n}I(X_{{\mathcal W},i};Y_{{\mathcal W}^C,i}|X_{{\mathcal W}^C,i})+\epsilon_n
\label{fa:7}\\
&=\frac{r_n}{s_n} I(X_{{\mathcal W},Q};Y_{{\mathcal W}^C,Q}|X_{{\mathcal W}^C,Q},Q)+\epsilon_n
\label{fa:8}\\
&\leq\frac{r_n}{s_n}I(X_{{\mathcal W},Q};Y_{{\mathcal W}^C,Q}|X_{{\mathcal W}^C,Q})+\epsilon_n
\label{fa:9}\\
&\rightarrow I(X_{{\mathcal W}};Y_{{\mathcal W}^C}|X_{{\mathcal W}^C})
\label{fa:10}
\end{align}
\noindent where \eqref{fa:4} follows from the fact that $X_{{\mathcal W}^C,i}$ is a function of $(Y_{{\mathcal W}^C}^{i-1},U_{{\mathcal W}^C\cap{\mathcal A}}^{s_n})$ and the fact that ${\mathcal W}^C\cap{\mathcal A}\subseteq{\mathcal A}\backslash{\mathcal S}$, \eqref{fa:6} follows since conditioning reduces entropy, \eqref{fa:7} follows because $(U_{{\mathcal A}}^{s_n},Y_{{\mathcal V}}^{i-1})-X_{{\mathcal V},i}-Y_{{\mathcal V},i}$ form a Markov chain, \eqref{fa:8} is obtained by introducing a time-sharing random variable $Q$ which is uniformly distributed over the set $\{1,2,\cdots,r_n\}$ and is independent of everything else, \eqref{fa:10} follows by allowing $s_n,r_n\rightarrow\infty$ with $\frac{r_n}{s_n}\rightarrow 1$ and defining $Y_{{\mathcal V}}\triangleq Y_{{\mathcal V},Q}$ and $X_{{\mathcal V}}\triangleq X_{{\mathcal V},Q}$.
\end{proof}
\section{Multi-Layer Slepian-Wolf Coding}\label{sec:4}
Before describing our scheme and the related results, in this section we deal with the problem of \emph{multi-layer Slepian-Wolf coding} (ML-SW). Studying ML-SW provides a new tool for analyzing the main problem. In previous works (for example, \cite{mlsw}, \cite{mlsw1}), ML-SW is used to describe a source by smaller components (for example, by its binary representation) and then to successively encode these components with SW-coding, instead of encoding the whole source at once. For example, if we describe an i.i.d. source $S$ by $(X,Y)$, i.e., $S=(X,Y)$, then instead of encoding $S$ by $R=H(S)$ bits/symbol, we can first describe $X$ by $R_X=H(X)$ bits/symbol and then apply SW-coding to describe $Y$ by $R_Y=H(Y|X)$ bits/symbol, assuming that the receiver knows $X$, from decoding the previous layer, as side information. Since the total number of bits required to describe $S$ in two layers is $R_X+R_Y=H(X,Y)=H(S)$, there is no loss in two-layer SW-coding compared with jointly encoding the source components. A natural question is: \emph{how can this result be generalized to the more general setting of multi-terminal SW-coding?}
\begin{figure}[t]
\centering
\includegraphics[scale=.8]{2L}
\caption{Two-layer Slepian-Wolf coding for a pair of two-component correlated sources. This coding is suboptimal in the sense that it does not achieve the entire Slepian-Wolf region.}
\label{fig:2L}
\end{figure}
First, let us look at two-terminal SW-coding. Suppose two sources $S_1=(X_1,Y_1)$
and $S_2=(X_2,Y_2)$ are given. Joint SW-coding yields that lossless description of $(S_1,S_2)$ with rates $(R_1,R_2)$ is feasible, provided that $(R_1,R_2)\in\{(r_1,r_2):r_1\ge H(X_1Y_1|X_2Y_2),r_2\ge H(X_2Y_2|X_1Y_1), r_1+r_2\ge H(X_1X_2Y_1Y_2)\}$. Now consider the following simple ML-SW scheme. In the first layer, $X_1$ and $X_2$ are encoded by SW-coding with rates $(R_{11},R_{21})$, and in the next layer $Y_1$ and $Y_2$ are encoded by SW-coding with rates $(R_{12},R_{22})$, assuming that the receiver knows $(X_1,X_2)$ from decoding the previous layer (see Fig. \ref{fig:2L}). Lossless description of $(S_1,S_2)$ in this manner is possible if:
\begin{align*}
R_1=R_{11}+R_{12}&\ge H(X_1|X_2)+H(Y_1|X_1X_2Y_2)\ge H(X_1Y_1|X_2Y_2)\\
R_2=R_{21}+R_{22}&\ge H(X_2|X_1)+H(Y_2|X_1X_2Y_1)\ge H(X_2Y_2|X_1Y_1)\\
R_1+R_2&\ge H(X_1X_2)+H(Y_1Y_2|X_1X_2)=H(X_1X_2Y_1Y_2)
\end{align*}
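These rate bounds can be checked numerically: the sum rate of the two-layer scheme matches the SW sum rate by the chain rule, while the individual-rate bounds are in general only looser. A minimal sketch, using a randomly generated (hypothetical) joint p.m.f.:

```python
import itertools
import math
import random

random.seed(0)
# hypothetical joint p.m.f. over (X_1, Y_1, X_2, Y_2), all binary
outcomes = list(itertools.product((0, 1), repeat=4))
w = [random.random() for _ in outcomes]
joint = {o: wi / sum(w) for o, wi in zip(outcomes, w)}

def H(coords):
    """Joint entropy (bits) of the variables at the given coordinate indices."""
    if not coords:
        return 0.0
    marg = {}
    for o, p in joint.items():
        k = tuple(o[i] for i in sorted(coords))
        marg[k] = marg.get(k, 0.0) + p
    return -sum(p * math.log2(p) for p in marg.values() if p > 0)

def Hc(a, b):
    """Conditional entropy H(vars a | vars b)."""
    return H(set(a) | set(b)) - H(set(b))

X1, Y1, X2, Y2 = 0, 1, 2, 3
# sum rate of the two-layer scheme equals the SW sum rate (chain rule):
assert abs(Hc({X1, X2}, set()) + Hc({Y1, Y2}, {X1, X2})
           - H({X1, Y1, X2, Y2})) < 1e-9
# the per-source bound of the layered scheme dominates the SW per-source bound:
assert Hc({X1}, {X2}) + Hc({Y1}, {X1, X2, Y2}) >= Hc({X1, Y1}, {X2, Y2}) - 1e-9
```

The second assertion makes concrete why the corner points of the SW-region need not be reachable with this particular layering.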
\begin{figure}[t]
\centering
\includegraphics[scale=.7]{SW-region}
\caption{Slepian-Wolf rate region vs.\ rate regions with two- and three-layer Slepian-Wolf coding. Segment $AC$ corresponds to the three-layer SW-coding in which, in the first layer, $X_2$ is encoded; in the second layer, $(Y_2,X_1)$ is encoded assuming that $X_2$ has already been decoded at the receiver; and in the third layer, $Y_1$ is encoded assuming that $(X_2,Y_2,X_1)$ is already available at the receiver. Segment $CD$ corresponds to the two-layer SW-coding of Fig. \ref{fig:2L}. Segment $DB$ is obtained from a three-layer SW-coding similar to that of segment $AC$. Notice that each corner point of any multi-layer SW-coding that lies inside the SW-region coincides with a corner point of another multi-layer SW-coding.}
\label{fig:SW}
\end{figure}
This shows that this simple layering can not achieve all the points in the SW-region; in particular, the corner points $A=(H(X_1Y_1|X_2Y_2),H(X_2Y_2))$ and $B=(H(X_1Y_1),H(X_2Y_2|X_1Y_1))$ can not be achieved by this scheme (see Fig. \ref{fig:SW}). But the point $A$ can be achieved by successive SW-coding of $X_2$, $Y_2$, $X_1$ and $Y_1$, each layer assuming that the previously encoded sources are available at the receiver. This suggests that instead of dividing the SW-coding into two layers, SW-coding can be performed in three layers: in the first layer, $X_2$ is described for the receiver with rate $R_{21}\ge H(X_2)$; in the second layer, $(Y_2,X_1)$ is encoded by SW-coding in the presence of $X_2$ at the receiver; and finally, in the last layer, $Y_1$ is described using SW-coding assuming $(X_2,Y_2,X_1)$ is available to the receiver. Analyzing this strategy yields that $(R_1,R_2)$ is achievable if,
\begin{align*}
R_1=R_{11}+R_{12}&\ge H(X_1|X_2Y_2)+H(Y_1|X_1X_2Y_2)= H(X_1Y_1|X_2Y_2)\\
R_2=R_{21}+R_{22}&\ge H(X_2)+H(Y_2|X_1X_2)\ge H(X_2Y_2|X_1Y_1)\\
R_1+R_2&\ge H(X_2)+H(X_1Y_2|X_2)+H(Y_1|X_2Y_2X_1)=H(X_1X_2Y_1Y_2)
\end{align*}
With this strategy, the corner point $A$ is achieved, but the corner point $B$ is not. In addition, as can be seen in Fig. \ref{fig:SW}, the other corner point of this scheme ($C$) coincides with one of the corner points of the two-layer scheme. By symmetry, the corner point $B$ is achieved by a three-layer scheme in which $X_1$, $(X_2,Y_1)$ and $Y_2$ are encoded in the first, second and third layers, respectively. Moreover, as can be seen in Fig. \ref{fig:SW}, the union of the regions of the three different layering schemes is a closed covering of the SW-region. Note that in all three schemes there is a hierarchy, in the sense that the first component of each source (i.e., $X_i$) is encoded prior to its second component (i.e., $Y_i$).
The result for two-terminal SW-coding suggests that, to obtain the entire SW-region of a multi-component DMCS, it suffices to consider all possible layering schemes satisfying a given hierarchy on each source.
\begin{definition}
An ordered partition $\mathbf{C}$ of a set ${\mathcal V}$ is a sequence $[{\mathcal L}_1,{\mathcal L}_2,\cdots,{\mathcal L}_K]$ of non-empty, pairwise disjoint subsets of ${\mathcal V}$ whose union is ${\mathcal V}$. We denote the family of all ordered partitions of a given set ${\mathcal V}$ by ${\mathcal F}_{{\mathcal V}}$.
\end{definition}
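For small ground sets, the family ${\mathcal F}_{{\mathcal V}}$ can be enumerated recursively by choosing the first block ${\mathcal L}_1$ and recursing on the remainder. A minimal sketch:

```python
import itertools

def ordered_partitions(V):
    """Yield every ordered partition [L_1, ..., L_K] of the set V."""
    V = frozenset(V)
    if not V:
        yield []
        return
    elems = sorted(V)
    # choose the non-empty first block L_1, then recurse on V \ L_1
    for r in range(1, len(elems) + 1):
        for first in itertools.combinations(elems, r):
            L1 = frozenset(first)
            for rest in ordered_partitions(V - L1):
                yield [L1] + rest

# |F_V| is the ordered Bell (Fubini) number: 13 ordered partitions for |V| = 3
assert len(list(ordered_partitions({1, 2, 3}))) == 13
```

The count grows as the ordered Bell numbers ($1, 3, 13, 75, \dots$), so exhaustive enumeration is only practical for small $|{\mathcal V}|$.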
\par
Consider a DMCS $S_{{\mathcal V}}$ with two component sources, i.e., $S_v=(X_v,Y_v)$. Now we describe ML-SW with respect to a given ordered partition $\mathbf{C}=[{\mathcal L}_1,\cdots,{\mathcal L}_K]$. In addition, we assume that the decoder has access to side information $Z$ which is correlated with $(X_{{\mathcal V}},Y_{{\mathcal V}})$ according to an arbitrary distribution $p(x_{{\mathcal V}},y_{{\mathcal V}},z)$.
\begin{enumerate}
\item In the first layer, using SW-coding, $X_{{\mathcal L}_1}^n$ is encoded with rates $R_1=(R_{11},R_{12},\cdots,R_{1V})$ in which for $v\notin{\mathcal L}_1$, we set $R_{1v}=0$. The receiver can reliably decode $X^n_{{\mathcal L}_1}$ provided that
\begin{equation}
\label{eq:sw-l1}
\forall{\mathcal S}\subseteq{\mathcal L}_1: R_{1{\mathcal S}}\ge H(X_{{\mathcal S}}|X_{{\mathcal L}_1\backslash{\mathcal S}}Z)
\end{equation}
Define the function $h_{\mathbf{C},1}:2^{{\mathcal V}}\rightarrow \mathbb{R}$ as
\[
h_{\mathbf{C},1}({\mathcal S})=H(X_{{\mathcal S}\cap{\mathcal L}_1}|Z)
\]
Now using the sub-modularity of the entropy function, we have
\begin{align}
h_{\mathbf{C},1}({\mathcal S}\cap{\mathcal T})+h_{\mathbf{C},1}({\mathcal S}\cup{\mathcal T})&=H(X_{{\mathcal S}\cap{\mathcal T}\cap{\mathcal L}_1}|
Z)+H(X_{({\mathcal S}\cup{\mathcal T})\cap{\mathcal L}_1}|Z)\nonumber\\
&=H(X_{({\mathcal S}\cap{\mathcal L}_1)\cap({\mathcal T}\cap{\mathcal L}_1)}|Z)+H(X_{({\mathcal S}\cap{\mathcal L}_1)\cup({\mathcal T}\cap{\mathcal L}_1)}|Z)\nonumber\\
&\le H(X_{{\mathcal S}\cap{\mathcal L}_1}|Z)+H(X_{{\mathcal T}\cap{\mathcal L}_1}|Z)\nonumber\\
&=h_{\mathbf{C},1}({\mathcal S})+h_{\mathbf{C},1}({\mathcal T})\label{eq:sub-h}
\end{align}
Hence $h_{\mathbf{C},1}$ is sub-modular. In addition, we have: $h_{\mathbf{C},1}({\mathcal S}|{\mathcal S}^C)=H(X_{{\mathcal V}\cap{\mathcal L}_1}|Z)-H(X_{{\mathcal S}^C\cap{\mathcal L}_1}|Z)=H(X_{{\mathcal S}}|X_{{\mathcal L}_1\backslash{\mathcal S}},Z)$. Note that $R_{1{\mathcal S}}=R_{1{\mathcal S}\cap{\mathcal L}_1}$, thus \eqref{eq:sw-l1} is equivalent to
\begin{equation}
\label{eq:sw-eq1}
\forall{\mathcal S}\subseteq{\mathcal V}: R_{1{\mathcal S}}\ge h_{\mathbf{C},1}({\mathcal S}|{\mathcal S}^C)
\end{equation}
Now it follows from Definition \ref{def:associate} that $R_1$ is contained in the SW-region of the first layer, iff it majorizes the essential polytope of $h_{\mathbf{C},1}$, i.e., $R_1\succ \mathbf{P}_{h_{\mathbf{C},1}}$.
\item In layer $i$, for $2\le i\le K+1$, assuming that $(X^n_{{\mathcal L}^i},Y^n_{{\mathcal L}^{i-1}})$ has been decoded at the receiver from the previous layers (where ${\mathcal L}^i=\cup_{k=1}^{i-1} {\mathcal L}_k$), $(X_{{\mathcal L}_i}^n,Y^n_{{\mathcal L}_{i-1}})$ is encoded using SW-coding with rates $R_i=(R_{i1},R_{i2},\cdots,R_{iV})$, in which for $v\notin{\mathcal L}_{i-1}\cup{\mathcal L}_i$ we set $R_{iv}=0$ (with the convention ${\mathcal L}_{K+1}=\emptyset$). The receiver can reliably decode $(X_{{\mathcal L}_i}^n,Y^n_{{\mathcal L}_{i-1}})$ provided that,
\begin{equation}
\label{eq:sw-li}
\forall{\mathcal S}\subseteq{\mathcal L}_{i-1}\cup{\mathcal L}_i: R_{i{\mathcal S}}\ge H(X_{{\mathcal S}\cap{\mathcal L}_i}Y_{{\mathcal S}\cap{\mathcal L}_{i-1}}|X_{{\mathcal L}_i\backslash{\mathcal S}}Y_{{\mathcal L}_{i-1}\backslash{\mathcal S}}X_{{\mathcal L}^i}Y_{{\mathcal L}^{i-1}}Z)
\end{equation}
Define the function $h_{\mathbf{C},i}:2^{{\mathcal V}}\rightarrow \mathbb{R}$ as follows:
\[
h_{\mathbf{C},i}({\mathcal S})=H(X_{{\mathcal S}\cap{\mathcal L}_i}Y_{{\mathcal S}\cap{\mathcal L}_{i-1}}|X_{{\mathcal L}^i}Y_{{\mathcal L}^{i-1}}Z)
\]
In a similar manner to \eqref{eq:sub-h}, it can be shown that $h_{\mathbf{C},i}$ is sub-modular. Following the same steps as in the first layer, we conclude that $R_i$ is contained in the SW-region of layer $i$ iff it majorizes the essential polytope of $h_{\mathbf{C},i}$, i.e., $R_i\succ \mathbf{P}_{h_{\mathbf{C},i}}$.
\end{enumerate}
Define $R\triangleq\sum_{k=1}^{K+1}R_k$ (the overall rate vector) and $h_{\mathbf{C}}\triangleq \sum_{k=1}^{K+1}h_{\mathbf{C},k}$. We showed that $R\succ \mathbf{P}_{h_{\mathbf{C}}}$. Conversely, suppose that the point $R$ majorizes $\mathbf{P}_{h_{\mathbf{C}}}$, so there is a point $R^*\in\mathbf{P}_{h_{\mathbf{C}}}$ such that $R\succ R^*$. Applying Lemma \ref{le:sum-polytope} to $(h_{\mathbf{C},k}:1\le k\le K+1)$, we have $\mathbf{P}_{h_{\mathbf{C}}}=\sum_{k=1}^{K+1}\mathbf{P}_{h_{\mathbf{C},k}}$. Hence there are points $(R^*_k\in\mathbf{P}_{h_{\mathbf{C},k}}:1\le k\le K+1)$ such that $R^*=\sum_{k=1}^{K+1}R^*_k$. Let $R_k=R^*_k+\frac{\Delta R}{K+1}$, where $\Delta R=R-R^*$. Then $R=\sum_{k=1}^{K+1}R_k$ and, for all $k$, $R_k\succ \mathbf{P}_{h_{\mathbf{C},k}}$. Thus, every rate vector $R$ satisfying $R\succ\mathbf{P}_{h_{\mathbf{C}}}$ can be achieved using ML-SW coding with respect to $\mathbf{C}$. Therefore, the set of all achievable rates with respect to $\mathbf{C}$ is given by:
\begin{align}
\label{eq:sw-bc}
\mathcal{R}_{\mathbf{C}}&= \{R\in\mathbb{R}^{|{\mathcal V}|}:R\succ\mathbf{P}_{h_{\mathbf{C}}}\}\nonumber\\
&=\{R\in\mathbb{R}^{|{\mathcal V}|}:\forall {\mathcal S}\subseteq{\mathcal V}, R_{{\mathcal S}}\ge\sum_{i=1}^{K+1} H(X_{{\mathcal S}\cap{\mathcal L}_i}Y_{{\mathcal S}\cap{\mathcal L}_{i-1}}|X_{{\mathcal L}_i\backslash{\mathcal S}}Y_{{\mathcal L}_{i-1}\backslash{\mathcal S}}X_{{\mathcal L}^i}Y_{{\mathcal L}^{i-1}}Z)
\}
\end{align}
\par The next theorem is the main result of this section.
\begin{theorem}[SW-identity]
\label{thm:sw-covering}
The set $\{\mathcal{R}_{\mathbf{C}}:\mathbf{C}\in{\mathcal F}_{{\mathcal V}}\}$ is a closed covering of ${\mathcal R}_{SW}$, the SW-region defined by:
\begin{equation}
\label{eq:sw-region}
{\mathcal R}_{SW}= \{R\in\mathbb{R}^{|{\mathcal V}|}:\forall {\mathcal S}\subseteq{\mathcal V}, R_{{\mathcal S}}\ge H(X_{{\mathcal S}} Y_{{\mathcal S}}|X_{{\mathcal S}^C}Y_{{\mathcal S}^C}Z)\}
\end{equation}
\end{theorem}
\begin{proof}
Define the function $h:2^{{\mathcal V}}\rightarrow\mathbb{R}$ by $h({{\mathcal S}})=H(X_{{\mathcal S}}Y_{{\mathcal S}}|Z)$. Then $h$ is a sub-modular function with essential polytope $\mathbf{P}_h$. By definition, a point $R$ belongs to the SW-region iff it majorizes $\mathbf{P}_h$. To prove the theorem, we must show that
\begin{equation}
\label{eq:eq-thm}
{\mathcal R}_{SW}=\bigcup_{\mathbf{C}\in{\mathcal F}_{{\mathcal V}}}{\mathcal R}_{\mathbf{C}}
\end{equation}
Applying Equation \eqref{eq:maj-property} to the RHS of \eqref{eq:eq-thm} yields,
\begin{equation}
\label{eq:un}
\bigcup_{\mathbf{C}\in{\mathcal F}_{{\mathcal V}}}{\mathcal R}_{\mathbf{C}}= \{R\in\mathbb{R}^{|{\mathcal V}|}:R\succ\bigcup_{\mathbf{C}\in{\mathcal F}_{{\mathcal V}}}\mathbf{P}_{h_{\mathbf{C}}}\}
\end{equation}
\par Thus, to prove the theorem, we only need to show that $\{\mathbf{P}_{h_{\mathbf{C}}}:\mathbf{C}\in{\mathcal F}_{{\mathcal V}}\}$ is a closed covering of $\mathbf{P}_h$.
We prove this by strong induction on $N=|{\mathcal V}|$. The base case $N=1$ is trivial (the case $N=2$ was proved separately at the beginning of this section). Now assume that the claim holds for any ground set of size at most $N-1$. We show that $\{\mathbf{P}_{h_{\mathbf{C}}}:\mathbf{C}\in{\mathcal F}_{{\mathcal V}}\}$ and $\mathbf{P}_h$ satisfy the conditions of Lemma \ref{le:cover}; thus $\{\mathbf{P}_{h_{\mathbf{C}}}:\mathbf{C}\in{\mathcal F}_{{\mathcal V}}\}$ is a closed covering of $\mathbf{P}_h$.
\begin{claim}
\label{cl:1}
For any ordered partition $\mathbf{C}$ of ${\mathcal V}$, we have
\[ \mathbf{P}_{h_{\mathbf{C}}}\subseteq\mathbf{P}_h.\]
\end{claim}
\emph{Proof of Claim \ref{cl:1}}.
First, note that (see equation \eqref{eq:sw-li})
\begin{align}
h_{\mathbf{C}}({\mathcal S}|{\mathcal S}^C)&=\sum_{i=1}^{K+1} H(X_{{\mathcal S}\cap{\mathcal L}_i}Y_{{\mathcal S}\cap{\mathcal L}_{i-1}}|X_{{\mathcal L}_i\backslash{\mathcal S}}Y_{{\mathcal L}_{i-1}\backslash{\mathcal S}}X_{{\mathcal L}^i}Y_{{\mathcal L}^{i-1}}Z) \label{eq:app-1}\\
&\ge \sum_{i=1}^{K+1}H(X_{{\mathcal S}\cap{\mathcal L}_i}Y_{{\mathcal S}\cap{\mathcal L}_{i-1}}|X_{{\mathcal S}^C}Y_{{\mathcal S}^C}X_{{\mathcal S}\cap{\mathcal L}^i}Y_{{\mathcal S}\cap{\mathcal L}^{i-1}}Z)\label{eq:p1-1}\\
&= H(X_{{\mathcal S}}Y_{{\mathcal S}}|X_{{\mathcal S}^C}Y_{{\mathcal S}^C}Z)\label{eq:p1-2}\\
&= h({\mathcal S}|{\mathcal S}^C)\label{eq:p1-3}
\end{align}
where \eqref{eq:p1-1} follows from the fact that $({\mathcal L}_i\backslash{\mathcal S})\cup{\mathcal L}^i\subseteq{\mathcal S}^C\cup({\mathcal S}\cap{\mathcal L}^i)$ (with equality if ${\mathcal S}={\mathcal V}$) together with the fact that conditioning reduces entropy, and \eqref{eq:p1-2} follows from the chain rule, since $\{{\mathcal L}_i\cap{\mathcal S}\}_{i=1}^K$ is a partition of ${\mathcal S}$. The claim now follows from \eqref{eq:p1-3}. $\square$
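The inequality $h_{\mathbf{C}}({\mathcal S}|{\mathcal S}^C)\ge h({\mathcal S}|{\mathcal S}^C)$ of Claim \ref{cl:1} can also be checked numerically. A minimal sketch for two two-component sources $S_v=(X_v,Y_v)$, $v\in\{0,1\}$, with side information $Z$; the random joint p.m.f. below is hypothetical:

```python
import itertools
import math
import random

random.seed(1)
# coordinate layout: (X_0, X_1, Y_0, Y_1, Z), all binary, random joint p.m.f.
outcomes = list(itertools.product((0, 1), repeat=5))
w = [random.random() for _ in outcomes]
joint = {o: wi / sum(w) for o, wi in zip(outcomes, w)}

def H(coords):
    if not coords:
        return 0.0
    marg = {}
    for o, p in joint.items():
        k = tuple(o[i] for i in sorted(coords))
        marg[k] = marg.get(k, 0.0) + p
    return -sum(p * math.log2(p) for p in marg.values() if p > 0)

def Hc(a, b):  # conditional entropy H(vars a | vars b)
    return H(set(a) | set(b)) - H(set(b))

def X(S): return set(S)               # coordinates of X_S
def Y(S): return {2 + v for v in S}   # coordinates of Y_S
Z = {4}

V = frozenset({0, 1})
e = frozenset()

def h_cond(S):  # h(S|S^C) = H(X_S Y_S | X_{S^C} Y_{S^C} Z)
    Sc = V - S
    return Hc(X(S) | Y(S), X(Sc) | Y(Sc) | Z)

def hC_cond(C, S):  # h_C(S|S^C), the lower bound appearing in eq. (sw-bc)
    layers = [e] + list(C) + [e]  # L_0, L_1, ..., L_K, L_{K+1}
    total = 0.0
    for i in range(1, len(C) + 2):
        Li, Lp = layers[i], layers[i - 1]
        Lcum = frozenset().union(*layers[1:i])        # L^i
        Lcum_p = frozenset().union(*layers[1:i - 1])  # L^{i-1}
        total += Hc(X(S & Li) | Y(S & Lp),
                    X(Li - S) | Y(Lp - S) | X(Lcum) | Y(Lcum_p) | Z)
    return total

partitions = [[V], [frozenset({0}), frozenset({1})], [frozenset({1}), frozenset({0})]]
for C in partitions:
    for S in (frozenset({0}), frozenset({1}), V):
        assert hC_cond(C, S) >= h_cond(S) - 1e-9
    assert abs(hC_cond(C, V) - h_cond(V)) < 1e-9  # equality for S = V (chain rule)
```

The final assertion reflects that both sides equal $H(X_{{\mathcal V}}Y_{{\mathcal V}}|Z)$ when ${\mathcal S}={\mathcal V}$, as used in the proof.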
\begin{claim}
\label{cl:2}
Suppose ${\mathcal F}_{{\mathcal T}^C,{\mathcal T}}$ is the subset of ${\mathcal F}_{{\mathcal V}}$ consisting of all ordered partitions that are generated by concatenating an ordered partition of ${\mathcal T}^C$ with an ordered partition of ${\mathcal T}$, i.e.,
\[{\mathcal F}_{{\mathcal T}^C,{\mathcal T}}=\{\mathbf{C}\in{\mathcal F}_{{\mathcal V}}:\mathbf{C}=[\mathbf{C}_1,\mathbf{C}_2],\mathbf{C}_1\in{\mathcal F}_{{\mathcal T}^C}\ \mbox{and}\ \mathbf{C}_2\in{\mathcal F}_{{\mathcal T}}\}\]
Then, the set of facets $\{\mathbf{F}_{h_{\mathbf{C}},{\mathcal T}}:\mathbf{C}\in{\mathcal F}_{{\mathcal T}^C,{\mathcal T}}\}$ is a closed covering of $\mathbf{F}_{h,{\mathcal T}}$.
\end{claim}
\emph{Proof of Claim \ref{cl:2}}. By Lemma \ref{le:facet}, $\mathbf{F}_{h,{\mathcal T}}$ is given by:
\begin{equation}
\mathbf{F}_{h,{\mathcal T}}=\{\mathbf{x}\in\mathbb{R}^{{\mathcal V}}:\mathbf{x}_{{\mathcal T}}\in\mathbf{P}_{h_1},\mathbf{x}_{{\mathcal T}^C}\in\mathbf{P}_{h_2}\}
\end{equation}
where $\mathbf{P}_{h_1}$ and $\mathbf{P}_{h_2}$ are the essential polytopes associated with the sub-modular functions $h_1({\mathcal S})=H(X_{{\mathcal S}}Y_{{\mathcal S}}|Z)$ and $h_2({\mathcal S})=H(X_{{\mathcal S}}Y_{{\mathcal S}}|X_{{\mathcal T}^C}Y_{{\mathcal T}^C}Z)$, with domains $2^{{\mathcal T}^C}$ and $2^{{\mathcal T}}$, respectively. More precisely, $\mathbf{P}_{h_1}$ and $\mathbf{P}_{h_2}$ are given by:
\begin{small}
\begin{gather*}
\mathbf{P}_{h_1}=\{\mathbf{x}\in\mathbb{R}^{{\mathcal T}^C}:x_{{\mathcal T}^C}=H(X_{{\mathcal T}^C}Y_{{\mathcal T}^C}|Z)
,\ \mbox{and}\ \forall{\mathcal S}\subset{\mathcal T}^C,x_{{\mathcal S}}\ge H(X_{{\mathcal S}}Y_{{\mathcal S}}|X_{{\mathcal S}^C\cap{\mathcal T}^C}Y_{{\mathcal S}^C\cap{\mathcal T}^C}Z)\} \\
\mathbf{P}_{h_2}=\{\mathbf{x}\in\mathbb{R}^{{\mathcal T}}:x_{{\mathcal T}}=H(X_{{\mathcal T}}Y_{{\mathcal T}}|X_{{\mathcal T}^C}Y_{{\mathcal T}^C}Z)
,\ \mbox{and}\ \forall{\mathcal S}\subset{\mathcal T},x_{{\mathcal S}}\ge H(X_{{\mathcal S}}Y_{{\mathcal S}}|X_{{\mathcal S}^C\cap{\mathcal T}}Y_{{\mathcal S}^C\cap{\mathcal T}}X_{{\mathcal T}^C}Y_{{\mathcal T}^C}Z)\}
\end{gather*}
\end{small}
Now, since the sizes of ${\mathcal T}^C$ and ${\mathcal T}$ are smaller than $N$, applying the induction hypothesis to the essential polytopes $\mathbf{P}_{h_1}$ and $\mathbf{P}_{h_2}$ (with side information $\tilde{Z}=(X_{{\mathcal T}^C},Y_{{\mathcal T}^C},Z)$ at the decoder for the latter), we obtain:
\begin{align}
\mathbf{P}_{h_1}&=\bigcup_{\mathbf{C}_1\in{\mathcal F}_{{\mathcal T}^C}}\mathbf{P}_{h_{1,\mathbf{C}_1}}\nonumber\\
\mathbf{P}_{h_2}&=\bigcup_{\mathbf{C}_2\in{\mathcal F}_{{\mathcal T}}}\mathbf{P}_{h_{2,\mathbf{C}_2}}\label{eq:p-1-1}
\end{align}
\par where $\mathbf{C}_1=[{\mathcal L}_{1,1},\cdots,{\mathcal L}_{1,K_1}]$, $\mathbf{C}_2=[{\mathcal L}_{2,1},\cdots,{\mathcal L}_{2,K_2}]$, and the functions $h_{1,\mathbf{C}_1}$ and $h_{2,\mathbf{C}_2}$, whose domains are $2^{{\mathcal T}^C}$ and $2^{{\mathcal T}}$, are defined by:
\begin{align}
h_{1,\mathbf{C}_1}({\mathcal S})&=\sum_{k=1}^{K_1+1}H(X_{{\mathcal S}\cap{\mathcal L}_{1,k}}Y_{{\mathcal S}\cap{\mathcal L}_{1,k-1}}|X_{{\mathcal L}_1^k}Y_{{\mathcal L}_1^{k-1}}Z)\label{eq:p2-1}\\ h_{2,\mathbf{C}_2}({\mathcal S})&=\sum_{k=1}^{K_2+1}H(X_{{\mathcal S}\cap{\mathcal L}_{2,k}}Y_{{\mathcal S}\cap{\mathcal L}_{2,k-1}}|X_{{\mathcal L}_2^k}Y_{{\mathcal L}_2^{k-1}}\tilde{Z}) \label{eq:p2-2}
\end{align}
Using \eqref{eq:p2-1} and \eqref{eq:p2-2}, we obtain $\mathbf{P}_{h_{1,\mathbf{C}_1}}$ and $\mathbf{P}_{h_{2,\mathbf{C}_2}}$ as:
\begin{align}
\mathbf{P}_{h_{1,\mathbf{C}_1}}=&\big\{\mathbf{x}\in\mathbb{R}^{{\mathcal T}^C}:x_{{\mathcal T}^C}=H(X_{{\mathcal T}^C}Y_{{\mathcal T}^C}|Z),\ \mbox{and}\ \forall{\mathcal S}\subset{\mathcal T}^C\nonumber\\ &
x_{{\mathcal S}}\ge\sum_{k=1}^{K_1+1}H(X_{{\mathcal S}\cap{\mathcal L}_{1,k}}Y_{{\mathcal S}\cap{\mathcal L}_{1,k-1}}|X_{{\mathcal L}_{1,k}\backslash{\mathcal S}}Y_{{\mathcal L}_{1,k-1}\backslash{\mathcal S}}X_{{\mathcal L}_1^k}Y_{{\mathcal L}_1^{k-1}}Z)\big\}\label{eq:p0-1} \\
\mathbf{P}_{h_{2,\mathbf{C}_2}}=& \big\{\mathbf{x}\in\mathbb{R}^{{\mathcal T}}:x_{{\mathcal T}}=H(X_{{\mathcal T}}Y_{{\mathcal T}}|\tilde{Z}),\ \mbox{and}\ \forall{\mathcal S}\subset{\mathcal T}\nonumber\\ &
x_{{\mathcal S}}\ge\sum_{k=1}^{K_2+1}H(X_{{\mathcal S}\cap{\mathcal L}_{2,k}}Y_{{\mathcal S}\cap{\mathcal L}_{2,k-1}}|X_{{\mathcal L}_{2,k}\backslash{\mathcal S}}Y_{{\mathcal L}_{2,k-1}\backslash{\mathcal S}}X_{{\mathcal L}_2^k}Y_{{\mathcal L}_2^{k-1}}X_{{\mathcal T}^C}Y_{{\mathcal T}^C}Z)\big\}\label{eq:p0-2}
\end{align}
Let $\mathbf{C}=[{\mathcal L}_{1,1},\cdots,{\mathcal L}_{1,K_1},{\mathcal L}_{2,1},\cdots,{\mathcal L}_{2,K_2}]$ be the concatenation of $\mathbf{C}_1$ and $\mathbf{C}_2$. We assert that
\begin{equation}
\label{eq:p3}
\mathbf{F}_{h_{\mathbf{C}},{\mathcal T}}=\{\mathbf{x}\in\mathbb{R}^{{\mathcal V}}:\mathbf{x}_{{\mathcal T}^C}\in\mathbf{P}_{h_{1,\mathbf{C}_1}},\mathbf{x}_{{\mathcal T}}\in\mathbf{P}_{h_{2,\mathbf{C}_2}}\}
\end{equation}
By Lemma \ref{le:facet}, $\mathbf{x}$ belongs to $\mathbf{F}_{h_{\mathbf{C}},{\mathcal T}}$ if and only if
\begin{equation}
\label{eq:p4}
\begin{array}{lccr}
{\mathcal S}\subseteq{\mathcal T}^C:& x_{{\mathcal S}}\ge h_{\mathbf{C}}({\mathcal S}|{\mathcal T}^C\cap{\mathcal S}^C) &\mbox{with equality for}& {\mathcal S}={\mathcal T}^C\\
{\mathcal S}\subseteq{\mathcal T}:& x_{{\mathcal S}}\ge h_{\mathbf{C}}({\mathcal S}|{\mathcal S}^C) &\mbox{with equality for}& {\mathcal S}={\mathcal T}
\end{array}
\end{equation}
To evaluate \eqref{eq:p4}, consider
\begin{align}
h_{\mathbf{C}}({\mathcal S})&=\sum_{k=1}^{K_1}H(X_{{\mathcal S}\cap{\mathcal L}_{1,k}}Y_{{\mathcal S}\cap{\mathcal L}_{1,k-1}}|X_{{\mathcal L}_1^k}Y_{{\mathcal L}_1^{k-1}}Z)\nonumber\\&\qquad+H(X_{{\mathcal S}\cap{\mathcal L}_{2,1}}Y_{{\mathcal S}\cap{\mathcal L}_{1,K_1}}|X_{{\mathcal T}^C}Y_{{\mathcal L}_1^{K_1}}Z)+\sum_{k=2}^{K_2+1}H(X_{{\mathcal S}\cap{\mathcal L}_{2,k}}Y_{{\mathcal S}\cap{\mathcal L}_{2,k-1}}|X_{{\mathcal L}_2^k}Y_{{\mathcal L}_2^{k-1}}\tilde{Z})
\end{align}
where we have used the fact that ${\mathcal L}_1^{K_1+1}={\mathcal T}^C$.
Now, we compute the RHS of \eqref{eq:p4}:
\begin{align}
h_{\mathbf{C}}({\mathcal S}|{\mathcal T}^C\cap{\mathcal S}^C)&=h_{\mathbf{C}}({\mathcal T}^C)-h_{\mathbf{C}}({\mathcal T}^C\cap{\mathcal S}^C)\nonumber\\
&=\sum_{k=1}^{K_1+1}\big[H(X_{{\mathcal L}_{1,k}}Y_{{\mathcal L}_{1,k-1}}|X_{{\mathcal L}_1^k}Y_{{\mathcal L}_1^{k-1}}Z)-H(X_{{\mathcal S}^C\cap{\mathcal L}_{1,k}}Y_{{\mathcal S}^C\cap{\mathcal L}_{1,k-1}}|X_{{\mathcal L}_1^k}Y_{{\mathcal L}_1^{k-1}}Z)\big]\label{eq:p5-1}\\
&=\sum_{k=1}^{K_1+1}H(X_{{\mathcal S}\cap{\mathcal L}_{1,k}}Y_{{\mathcal S}\cap{\mathcal L}_{1,k-1}}|X_{{\mathcal L}_{1,k}\backslash{\mathcal S}}Y_{{\mathcal L}_{1,k-1}\backslash{\mathcal S}}X_{{\mathcal L}_1^k}Y_{{\mathcal L}_1^{k-1}}Z)\label{eq:p5-2}\\
h_{\mathbf{C}}({\mathcal S}|{\mathcal S}^C)&=\sum_{k=1}^{K_2+1}H(X_{{\mathcal S}\cap{\mathcal L}_{2,k}}Y_{{\mathcal S}\cap{\mathcal L}_{2,k-1}}|X_{{\mathcal L}_{2,k}\backslash{\mathcal S}}Y_{{\mathcal L}_{2,k-1}\backslash{\mathcal S}}X_{{\mathcal L}_2^k}Y_{{\mathcal L}_2^{k-1}}\tilde{Z})\label{eq:p5-3}
\end{align}
where \eqref{eq:p5-1} follows because ${\mathcal S}\subseteq{\mathcal T}^C$ and ${\mathcal T}^C$ is disjoint from all ${\mathcal L}_{2,i}$, and \eqref{eq:p5-3} follows from the fact that ${\mathcal T}$ is disjoint from all ${\mathcal L}_{1,i}$.\par
Now \eqref{eq:p0-1}, \eqref{eq:p0-2}, \eqref{eq:p5-2} and \eqref{eq:p5-3} together establish the assertion \eqref{eq:p3}. Finally, the assertion together with \eqref{eq:p-1-1} implies that for each point $\mathbf{x}\in\mathbf{F}_{h,{\mathcal T}}$, there exists an ordered partition $\mathbf{C}\in{\mathcal F}_{{\mathcal T}^C,{\mathcal T}}$ such that $\mathbf{F}_{h_{\mathbf{C}},{\mathcal T}}$ contains $\mathbf{x}$. This completes the proof of Claim 2. $\square$
\begin{claim}
\label{cl:3}
For each facet $\mathbf{F}_{h_{\mathbf{C}},{\mathcal T}}$ of the essential polytope of a given ordered partition $\mathbf{C}$ that lies inside $\mathbf{P}_h$, there exists an ordered partition $\mathbf{C}^*\neq\mathbf{C}$ such that
\begin{equation}
\label{eq:common}
\mathbf{P}_{h_{\mathbf{C}}}\bigcap\mathbf{P}_{h_{\mathbf{C}^*}}=\mathbf{F}_{h_{\mathbf{C}},{\mathcal T}}
\end{equation}
\end{claim}
\par \emph{Proof of Claim \ref{cl:3}}. Let $\mathbf{C}=[{\mathcal L}_1,\cdots,{\mathcal L}_K]$. From the proof of Claim \ref{cl:2}, the facets corresponding to the sets ${\mathcal L}_{i}^{K}=\cup_{k=i}^{K}{\mathcal L}_k$ (for $i\ge2$) lie on the boundary of $\mathbf{P}_h$. Thus, we only consider the facets corresponding to ${\mathcal T}\neq{\mathcal L}_{i}^{K}$. For such ${\mathcal T}$, set $\mathbf{C}^*=[{\mathcal L}_1^*,\cdots,{\mathcal L}_K^*,{\mathcal L}_{K+1}^*]$, where ${\mathcal L}_k^*=({\mathcal T}\cap{\mathcal L}_{k-1})\cup({\mathcal T}^C\cap{\mathcal L}_k)$. Now we show that
\begin{equation}
\label{eq:cl3}
\mathbf{F}_{h_{\mathbf{C}},{\mathcal T}}=\mathbf{F}_{h_{\mathbf{C}^*},{\mathcal T}^C}.
\end{equation}
This proves Claim \ref{cl:3}, because $\mathbf{P}_{h_{\mathbf{C}}}$ attains the minimum of $x_{{\mathcal T}}$ on $\mathbf{F}_{h_{\mathbf{C}},{\mathcal T}}$, while $\mathbf{P}_{h_{\mathbf{C}^*}}$ attains the maximum of $x_{{\mathcal T}}$ on $\mathbf{F}_{h_{\mathbf{C}^*},{\mathcal T}^C}$ (since $x_{{\mathcal T}}=H(X_{{\mathcal V}}Y_{{\mathcal V}}|Z)-x_{{\mathcal T}^C}$). \par
We provide the formal proof of \eqref{eq:cl3} in Appendix \ref{app:3}; here we give the main idea behind the construction of $\mathbf{C}^*$. First, consider the simple SW-coding of a DMCS $X_{{\mathcal V}}$ with rate-tuple $R_{{\mathcal V}}$. It is well known that the minimum of $R_{{\mathcal T}}$ is achieved by joint decoding of $X_{{\mathcal T}^C}$ with sum-rate $R_{{\mathcal T}^C}=H(X_{{\mathcal T}^C}|Z)$, followed by joint decoding of $X_{{\mathcal T}}$ in the presence of $X_{{\mathcal T}^C}$ at the decoder with sum-rate $R_{{\mathcal T}}=H(X_{{\mathcal T}}|X_{{\mathcal T}^C}Z)$. Lemma \ref{le:facet} confirms this result for any sub-modular function; moreover, it tells us that every point achieving the minimum of $R_{{\mathcal T}}$ can be obtained by this two-level decoding. Now consider the ML-SW coding with respect to $\mathbf{C}$. Each point of the ML-SW region can be written in the form $R=\sum_{k=1}^{K+1}R_k$, where $R_k$ lies in the SW-region of layer $k$. So $R_{{\mathcal T}}$ can be split into the rates $R_{k,{\mathcal T}}=R_{k,{\mathcal T}\cap({\mathcal L}_k\cup{\mathcal L}_{k-1})}$. Thus to minimize $R_{{\mathcal T}}$, we need to minimize each of $R_{k,{\mathcal T}\cap({\mathcal L}_k\cup{\mathcal L}_{k-1})}$. In layer $k$, SW-coding is done over $(X_{{\mathcal L}_k},Y_{{\mathcal L}_{k-1}})$; therefore, to achieve the minimum of $R_{k,{\mathcal T}\cap({\mathcal L}_k\cup{\mathcal L}_{k-1})}$, it suffices to consider two levels of decoding: the decoder first decodes $(X_{{\mathcal T}^C\cap{\mathcal L}_k},Y_{{\mathcal T}^C\cap{\mathcal L}_{k-1}})$ in the presence of $(X_{{\mathcal L}^k},Y_{{\mathcal L}^{k-1}},Z)$, then decodes $(X_{{\mathcal T}\cap{\mathcal L}_k},Y_{{\mathcal T}\cap{\mathcal L}_{k-1}})$ in the presence of $(X_{{\mathcal L}^k},Y_{{\mathcal L}^{k-1}},X_{{\mathcal T}^C\cap{\mathcal L}_k},Y_{{\mathcal T}^C\cap{\mathcal L}_{k-1}},Z)$.
Overall, to minimize $R_{{\mathcal T}}$ with respect to $\mathbf{C}$, one can consider the following $2K+2$ levels of decoding:
\begin{equation}
\label{eq:chain-1} X_{{\mathcal T}^C\cap{\mathcal L}_1},X_{{\mathcal T}\cap{\mathcal L}_1},\cdots,(X_{{\mathcal T}^C\cap{\mathcal L}_k},Y_{{\mathcal T}^C\cap{\mathcal L}_{k-1}}),(X_{{\mathcal T}\cap{\mathcal L}_k},Y_{{\mathcal T}\cap{\mathcal L}_{k-1}}),\cdots,Y_{{\mathcal T}^C\cap{\mathcal L}_{K}},Y_{{\mathcal T}\cap{\mathcal L}_K}
\end{equation}
On the other hand, to maximize $R_{{\mathcal T}}$ (or equivalently, to minimize $R_{{\mathcal T}^C}$) with respect to $\mathbf{C}^{*}$, the following decoding order for SW-coding is required:
\begin{equation}
\label{eq:chain-2} X_{{\mathcal T}\cap{\mathcal L}_1^*},X_{{\mathcal T}^C\cap{\mathcal L}_1^*},\cdots,(X_{{\mathcal T}\cap{\mathcal L}_k^*},Y_{{\mathcal T}\cap{\mathcal L}_{k-1}^*}),(X_{{\mathcal T}^C\cap{\mathcal L}_k^*},Y_{{\mathcal T}^C\cap{\mathcal L}_{k-1}^*}),\cdots,Y_{{\mathcal T}\cap{\mathcal L}_{K+1}^*},Y_{{\mathcal T}^C\cap{\mathcal L}_{K+1}^*}
\end{equation}
Now, note that ${\mathcal T}^C\cap{\mathcal L}_k^*={\mathcal T}^C\cap(({\mathcal T}\cap{\mathcal L}_{k-1})\cup({\mathcal T}^C\cap{\mathcal L}_k))={\mathcal T}^C\cap{\mathcal L}_k$ and ${\mathcal T}\cap{\mathcal L}_k^*={\mathcal T}\cap(({\mathcal T}\cap{\mathcal L}_{k-1})\cup({\mathcal T}^C\cap{\mathcal L}_k))={\mathcal T}\cap{\mathcal L}_{k-1}$; in particular ${\mathcal T}\cap{\mathcal L}_1^*={\mathcal T}^C\cap{\mathcal L}_{K+1}^*=\emptyset$. Comparing \eqref{eq:chain-1} with \eqref{eq:chain-2}, we see that these two multi-level decoding schemes are identical; thus the intersection of $\mathbf{P}_{h_{\mathbf{C}}}$ and $\mathbf{P}_{h_{\mathbf{C}^*}}$ is $\mathbf{F}_{h_{\mathbf{C}},{\mathcal T}}=\mathbf{F}_{h_{\mathbf{C}^*},{\mathcal T}^C}$. $\square$
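To make these set identities concrete, the following Python sketch (illustrative only; all names are ours) checks on random ordered partitions that the construction ${\mathcal L}_k^*=({\mathcal T}\cap{\mathcal L}_{k-1})\cup({\mathcal T}^C\cap{\mathcal L}_k)$ satisfies ${\mathcal T}^C\cap{\mathcal L}_k^*={\mathcal T}^C\cap{\mathcal L}_k$ and ${\mathcal T}\cap{\mathcal L}_k^*={\mathcal T}\cap{\mathcal L}_{k-1}$, and that $\mathbf{C}^*$ again partitions the ground set (possibly with empty layers).

```python
import random

def star_partition(C, T):
    """L*_k = (T ∩ L_{k-1}) ∪ (T^C ∩ L_k) for k = 1..K+1, with the
    convention L_0 = L_{K+1} = ∅, as in the construction of C* above."""
    K = len(C)
    L = [set()] + [set(block) for block in C] + [set()]   # L_0 .. L_{K+1}
    ground = set().union(*C)
    Tc = ground - T
    return [(T & L[k - 1]) | (Tc & L[k]) for k in range(1, K + 2)]

random.seed(0)
for _ in range(100):
    ground = set(range(8))
    # a random ordered partition C of the ground set into 3 layers
    labels = {v: random.randrange(3) for v in ground}
    C = [{v for v in ground if labels[v] == k} for k in range(3)]
    T = {v for v in ground if random.random() < 0.5}
    Tc = ground - T
    Cs = star_partition(C, T)
    L = [set()] + C + [set()]
    for k in range(1, len(C) + 2):
        assert Tc & Cs[k - 1] == Tc & L[k]        # T^C ∩ L*_k = T^C ∩ L_k
        assert T & Cs[k - 1] == T & L[k - 1]      # T   ∩ L*_k = T   ∩ L_{k-1}
    # C* covers the ground set with pairwise-disjoint (possibly empty) layers
    assert set().union(*Cs) == ground
    assert sum(len(b) for b in Cs) == len(ground)
```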
\par Now Claims \ref{cl:1}--\ref{cl:3} ensure that $\mathbf{P}_{h_{\mathbf{C}}}$ and $\mathbf{P}_h$ satisfy the conditions of Lemma \ref{le:cover}. This completes the proof.
\end{proof}
\section{Feasible Constraints for reliable multicasting of a DMCS over cooperative networks}\label{sec:5}
In this section, we obtain a set of DMCS which can reliably be multicast over a cooperative network. Our approach is based on a generalization of the CF strategy for relay networks. Two types of generalization have been considered in previous works \cite{kramer2005,rost}. In \cite{kramer2005}, the CF strategy was generalized in the following manner:
\begin{enumerate}
\item Each relay and destination partially decode the messages of the other relays.
\item Each relay compresses its observation $Y_v$, in the presence of side information from messages of the other relays.
\item Each relay sends its compressed observation through a Multiple Access Channel (MAC). Finally, destination decodes the source message.
\end{enumerate}
This scenario deals with the relays in a symmetric way, i.e., all relays lie in a single MAC layer. In \cite{rost}, a generalization of the \emph{mixed strategy} of \cite[Theorem 7]{cover} is proposed. By relaxing the partial decode-and-forward part of the mixed strategy, one obtains a generalization of the CF strategy. In this scenario, the relays are ordered according to a given permutation. Each relay compresses its observation using the multiple description (MD) method and sends the descriptions through a broadcast channel with a degraded message set. Each relay and the destination decode their respective descriptions after decoding their broadcast messages according to a sequential decoding scheme. However, if the relays use simple Wyner-Ziv coding rather than MD, the result is a special case of \cite[Theorem 3]{kramer2005}. In another scenario, proposed in \cite{rost:asilomar}, CF is generalized to half-duplex channels. Although this method is proposed for half-duplex relay networks, it can be generalized to (full-duplex) relay networks as well. In this scenario, each relay uses simple Wyner-Ziv coding. It differs from the previous generalization of CF in that the destination considers an ordering of the relays and decodes the compressed observation of relay $k$ in the presence of the compressed observations of relays $(k+1,k+2,\cdots,N-1)$, which were decoded in the previous blocks. This is similar to ML-SW coding. \par
We propose a joint source coding and Wyner-Ziv coding scheme for multicasting a DMCS $U_{{\mathcal A}}$ over cooperative networks. In this scenario, in each block, each node compresses its observation using Wyner-Ziv coding; then, in the next block, it jointly maps the compressed observation and its current source sequence to a channel input codeword and transmits the codeword. The joint encoding used in this scheme benefits from the advantage of joint source-channel coding over source-channel separation in the multicast scenario, which is illustrated in \cite{tuncel}. Moreover, in this scheme, each node carries two types of sources, the compressed observation and the source sequence, both of which must be decoded at each destination. By the nature of relaying, it is not possible to decode these two components simultaneously. This situation is similar to ML-SW coding, in which the two components of the source are not decoded simultaneously. Motivated by the results on ML-SW coding, e.g., Theorem \ref{thm:sw-covering}, each destination groups the other nodes into layers according to its ability to decode their information. Using insights from ML-SW, in the first level of decoding, the destination directly decodes the first component of the information of the nodes in the first layer, i.e., the source sequences of the first layer, through a MAC layer between layer one and the destination. In level $k$ of decoding, the destination jointly decodes the source sequences of layer $k$ and the compressed observations of layer $k-1$ (the second component of the information of layer $k-1$) through the MAC layer between layer $k$ and the destination, using the information decoded in levels $(1,2,\cdots,k-1)$ as side information. This side information plays two roles in improving the decoding:
\begin{enumerate}
\item It serves as side information for Slepian-Wolf coding, which enlarges the SW-region.
\item It serves as side information for the MAC, which enlarges the MAC-region. Unlike the first role, this role does not arise from the ML-SW.
\end{enumerate}
\par Enlarging the SW-region and the MAC-region creates the opportunity for an intersection between the two regions, which results in reliable transmission of the source sequences of the nodes in layer $k$ and the compressed observations of the nodes in layer $k-1$ in an operational separation sense, even if the original MAC-region does not intersect the original SW-region. \par
The next theorem is the main result of the paper.
\begin{theorem}
\label{thm:sw}
The set of DMCS $U_{{\mathcal A}}$ can reliably be multicast over a cooperative network to nodes in ${\mathcal D}$, if there exist auxiliary random variables $\hat{Y}_{{\mathcal V}}$ and $Q$, such that for each ${\mathcal S}\subseteq{\mathcal A}$, we have
\begin{equation}
\label{eq:sw}
H(U_{{\mathcal S}}|U_{{\mathcal A}\backslash{\mathcal S}})< \min_{d_i\in{\mathcal D}\backslash{\mathcal S}}\min_{{\mathcal V}\supseteq{\mathcal W}\supseteq{\mathcal S}: \atop d_i\in{\mathcal W}^C} [I(X_{{\mathcal W}};Y_{d_i}\hat{Y}_{{\mathcal W}^C\backslash\{d_i\}}|X_{{\mathcal W}^C}Q)-I(Y_{{\mathcal W}};\hat{Y}_{{\mathcal W}}|X_{{\mathcal V}}Y_{d_i}\hat{Y}_{{\mathcal W}^C\backslash\{d_i\}}Q)]
\end{equation}
\noindent where the joint p.m.f. of the random variables factors as
\begin{equation}
\label{eq:dist}
p(q)p(u_{{\mathcal A}})[\prod_{v\in{\mathcal V}}p(x_v|q)p(\hat{y}_v|x_v,y_v,q)]p(y_{{\mathcal V}}|x_{{\mathcal V}}).
\end{equation}
\end{theorem}
\begin{remark}
The constraint \eqref{eq:sw} separates source coding from channel coding in the operational separation sense \cite{tuncel}.
To see this, observe that the constraint \eqref{eq:sw} is equivalent to the following constraint,
\begin{equation}\label{eq:sw-5}
\forall{\mathcal W}\subseteq{\mathcal V},d_i\in{\mathcal W}^C: H(U_{{\mathcal W}\cap{\mathcal A}}|U_{{\mathcal A}\backslash{\mathcal W}})+I(Y_{{\mathcal W}};\hat{Y}_{{\mathcal W}}|X_{{\mathcal V}}Y_{d_i}\hat{Y}_{{\mathcal W}^C\backslash\{d_i\}}Q)<I(X_{{\mathcal W}};Y_{d_i}\hat{Y}_{{\mathcal W}^C\backslash\{d_i\}}|X_{{\mathcal W}^C}Q).
\end{equation}
Consider a cut $\Lambda=({\mathcal W},{\mathcal W}^C)$. The RHS of \eqref{eq:sw-5} provides an achievable flow through the cut $\Lambda$. The first term in the LHS of \eqref{eq:sw-5} represents the rate of the Slepian-Wolf coding for describing $U_{{\mathcal W}\cap{\mathcal A}}$ to the destinations on the other side of the cut in the presence of $U_{{\mathcal A}\backslash{\mathcal W}}$, which is available in ${\mathcal W}^C$. The second term in the LHS of \eqref{eq:sw-5} can be interpreted as the rate of the Wyner-Ziv coding for describing a compressed version of the observation $Y_{{\mathcal W}}$, i.e., $\hat{Y}_{{\mathcal W}}$, to the other side of the cut in the presence of $(X_{{\mathcal W}^C},\hat{Y}_{{\mathcal W}^C},Y_{d_i})$ and $X_{{\mathcal W}}$, where the latter can be regarded as the output of the channel decoder. Since the total compression rate of the sources is less than the information flow across the cut, one can expect multicasting of the sources to be feasible, in the spirit of source-channel separation. \end{remark}
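As a toy numerical illustration of the first LHS term of \eqref{eq:sw-5}, the following Python sketch computes the Slepian-Wolf rate $H(U_1|U_2)$ for a hypothetical pair of correlated binary sources via the chain rule $H(U_1|U_2)=H(U_1,U_2)-H(U_2)$; the pmf values are invented for illustration and are not taken from the paper.

```python
import math

def H(p):
    """Shannon entropy (bits) of a pmf given as a dict value -> probability."""
    return -sum(q * math.log2(q) for q in p.values() if q > 0)

# hypothetical joint pmf of two correlated binary sources (U1, U2)
p = {(0, 0): 0.45, (0, 1): 0.05, (1, 0): 0.05, (1, 1): 0.45}

# marginal of U2
pU2 = {}
for (u1, u2), q in p.items():
    pU2[u2] = pU2.get(u2, 0.0) + q

# chain rule: H(U1|U2) = H(U1, U2) - H(U2)  ≈ 0.47 bits here,
# i.e. the SW rate needed to describe U1 when U2 sits at the decoder
H_U1_given_U2 = H(p) - H(pU2)
```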
\begin{proof}[Proof of Theorem \ref{thm:sw}]
For the sake of simplicity, we assume that $|{\mathcal Q}|=1$, where $Q$ is a time-sharing random variable. First, we characterize a set of DMCS which can reliably be multicast over a cooperative network with respect to given ordered partitions at each destination. For each destination node $d_i$, let ${\mathcal V}_{-d_i}={\mathcal V}\backslash\{d_i\}$. The following lemma establishes a set of sufficient conditions for reliable multicasting of $U_{{\mathcal A}}$ over the cooperative network; we prove it in Subsection \ref{sub:a}.
\begin{lemma}
\label{le:first}
The set of DMCS $U_{{\mathcal A}}$ can reliably be multicast over a cooperative network to subset ${\mathcal D}$ of the nodes, if for each $d_i\in{\mathcal D}$, there exists an ordered partition $\mathbf{C}^{(d_i)}=[{\mathcal L}_1,{\mathcal L}_2,\cdots,{\mathcal L}_{\ell}]$ of ${\mathcal V}_{-d_i}$ such that for each ${\mathcal S}\subseteq{\mathcal V}_{-d_i}$, the following constraint is satisfied:
\begin{align}
\label{eq:sufficient}
\sum_{t\in{\mathcal S}}H(X_t)+H(\hat{Y}_t|X_tY_t)\geq&
\sum_{k=1}^{\ell+1}\Big(H(U_{{\mathcal S}\cap{\mathcal L}_k}|U_{{\mathcal L}_k\backslash{\mathcal S}}U_{{\mathcal L}^k}U_{d_i})+\nonumber\\
&\quad\qquad H(X_{{\mathcal S}\cap{\mathcal L}_k}\hat{Y}_{{\mathcal S}\cap{\mathcal L}_{k-1}}|X_{{\mathcal L}_k\backslash{\mathcal S}}\hat{Y}_{{\mathcal L}_{k-1}\backslash{\mathcal S}}X_{{\mathcal L}^k}\hat{Y}_{{\mathcal L}^{k-1}}Y_{d_i}X_{d_i})\Big),
\end{align}
\noindent where the random variables $(X_{{\mathcal V}},Y_{{\mathcal V}},\hat{Y}_{{\mathcal V}})$ are distributed according to \eqref{eq:dist}.
\end{lemma}
\par This lemma gives a partial solution to the problem of reliable multicasting of $U_{{\mathcal A}}$ over the cooperative network, in the sense that, to determine whether multicasting of $U_{{\mathcal A}}$ is feasible, we must consider all possible ordered partitions of each set ${\mathcal V}_{-d_i}$ and check whether the constraint \eqref{eq:sufficient} is satisfied. If for each destination node $d_i$ there exists at least one ordered partition of ${\mathcal V}_{-d_i}$ such that \eqref{eq:sufficient} is satisfied, then reliable multicasting is feasible. Since the number of ordered partitions of a set ${\mathcal V}$ grows rapidly with $|{\mathcal V}|$, this approach (checking the constraint \eqref{eq:sufficient} for all ordered partitions) is impractical. However, using Theorem \ref{thm:sw-covering}, we show that there exists a set of constraints that unifies the constraints \eqref{eq:sufficient} over all ordered partitions. The following lemma establishes this result.
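The growth claim can be quantified: the number of ordered partitions of an $n$-set into nonempty layers is the $n$-th Fubini (ordered Bell) number. The sketch below computes it by the standard recurrence (choose the first layer of size $k$, then order-partition the remaining $n-k$ elements); already for $|{\mathcal V}|=10$ there are over $10^8$ ordered partitions.

```python
from math import comb
from functools import lru_cache

@lru_cache(maxsize=None)
def ordered_partitions(n):
    """Fubini number: the number of ordered partitions of an n-set
    into nonempty layers (1, 1, 3, 13, 75, 541, ... for n = 0, 1, 2, ...)."""
    if n == 0:
        return 1
    # pick the first layer (size k >= 1), then order-partition the rest
    return sum(comb(n, k) * ordered_partitions(n - k) for k in range(1, n + 1))

counts = [ordered_partitions(n) for n in range(1, 11)]
```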
\begin{lemma}[Unified Sufficient Conditions]\label{le:6}
For a destination node $d_i$, there exists at least one ordered partition $\mathbf{C}^{(d_i)}$ of ${\mathcal V}_{-d_i}$ for which the constraint \eqref{eq:sufficient} is satisfied, if and only if the following constraint is satisfied,
\begin{equation}
\label{eq:uniuni}
\forall {\mathcal S}\subseteq{\mathcal V}_{-d_i}:\sum_{t\in{\mathcal S}}H(X_t)+H(\hat{Y}_t|X_tY_t)\geq
H(\hat{Y}_{{\mathcal S}}X_{{\mathcal S}}|X_{{\mathcal S}^C}\hat{Y}_{{\mathcal S}^C}Y_{d_i}X_{d_i})+H(U_{{\mathcal S}}|U_{{\mathcal S}^C}U_{d_i}).
\end{equation}
\end{lemma}
\emph{Proof of Lemma \ref{le:6}}. For each $v\in{\mathcal V}$, define $R_v=H(X_v)+H(\hat{Y}_v|Y_vX_v)$ and
\[R^{(d_i)}=(R_1,\cdots,R_{d_i-1},R_{d_i+1},\cdots,R_V).\]
Consider the RHS of \eqref{eq:sufficient}. Since random variables $U_{{\mathcal A}}$ and $(X_{{\mathcal V}},\hat{Y}_{{\mathcal V}},Y_{{\mathcal V}})$ are independent, the constraint \eqref{eq:sufficient} can be rewritten as
\begin{equation}\label{eq:correc}
\forall{\mathcal S}\subseteq{\mathcal V}_{-d_i}: R^{(d_i)}_{{\mathcal S}}\ge \sum_{k=1}^{\ell+1}H(U_{{\mathcal S}\cap{\mathcal L}_k}X_{{\mathcal S}\cap{\mathcal L}_k}\hat{Y}_{{\mathcal S}\cap{\mathcal L}_{k-1}}|U_{{\mathcal L}_k\backslash{\mathcal S}}X_{{\mathcal L}_k\backslash{\mathcal S}}\hat{Y}_{{\mathcal L}_{k-1}\backslash{\mathcal S}}U_{{\mathcal L}^k}X_{{\mathcal L}^k}\hat{Y}_{{\mathcal L}^{k-1}}U_{d_i}Y_{d_i}X_{d_i}).
\end{equation}
The RHS of \eqref{eq:correc} can be expressed in the form of \eqref{eq:sw-bc} with ${\mathcal V}={\mathcal V}_{-d_i}$, $X_v=(X_v,U_v)$, $Y_v=\hat{Y}_v$ and $Z=(Y_{d_i},X_{d_i},U_{d_i})$; thus the constraint \eqref{eq:sufficient} is equivalent to $R^{(d_i)}\in{\mathcal R}_{\mathbf{C}^{(d_i)}}$. Therefore, for the node $d_i$, there exists at least one ordered partition of ${\mathcal V}_{-d_i}$ such that \eqref{eq:sufficient} is satisfied iff $R^{(d_i)}\in\cup_{\mathbf{C}^{(d_i)}\in{\mathcal F}_{{\mathcal V}_{-d_i}}}{\mathcal R}_{\mathbf{C}^{(d_i)}}$. Applying Theorem \ref{thm:sw-covering}, we conclude that such a $\mathbf{C}^{(d_i)}$ exists iff \eqref{eq:uniuni} is satisfied. $\square$
\par The constraint \eqref{eq:uniuni} can be rewritten in the following form:
\begin{subequations} \label{eq:adi}\begin{align}
\forall {\mathcal S}\subseteq{\mathcal A}\backslash\{d_i\} :&
H(U_{{\mathcal S}}|U_{{\mathcal A}\backslash{\mathcal S}})\le \min_{{\mathcal W}\supseteq{\mathcal S}\atop d_i\in{\mathcal W}^C} R^{(d_i)}_{{\mathcal W}}-H(\hat{Y}_{{\mathcal W}}X_{{\mathcal W}}|X_{{\mathcal W}^C}\hat{Y}_{{\mathcal W}^C\backslash\{d_i\}}Y_{d_i}),\label{eq:adi1}\\ \label{eq:adi2}
\forall {\mathcal S}\subseteq{\mathcal A}^C\backslash\{d_i\}:
&R^{(d_i)}_{{\mathcal S}} - H(\hat{Y}_{{\mathcal S}}X_{{\mathcal S}}|X_{{\mathcal S}^C}\hat{Y}_{{\mathcal S}^C\backslash\{d_i\}}Y_{d_i})\ge 0 .
\end{align}\end{subequations}
Consider the constraint \eqref{eq:adi}. In Appendix \ref{app:simplify}, using the joint p.m.f. \eqref{eq:dist}, we show that this constraint is equivalent to the following one:
\begin{subequations} \label{eq:adi-adi}\begin{align}
\forall {\mathcal S}\subseteq{\mathcal A}\backslash\{d_i\} :&
H(U_{{\mathcal S}}|U_{{\mathcal A}\backslash{\mathcal S}})< \min_{{\mathcal W}\supseteq{\mathcal S}: \atop d_i\in{\mathcal W}^C} [I(X_{{\mathcal W}};Y_{d_i}\hat{Y}_{{\mathcal W}^C\backslash\{d_i\}}|X_{{\mathcal W}^C})-I(Y_{{\mathcal W}};\hat{Y}_{{\mathcal W}}|X_{{\mathcal V}}Y_{d_i}\hat{Y}_{{\mathcal W}^C\backslash\{d_i\}})]
,\label{eq:adi-adi1}\\ \label{eq:adi-adi2}
\forall {\mathcal S}\subseteq{\mathcal A}^C\backslash\{d_i\}:
& I(\hat{Y}_{{\mathcal S}};Y_{{\mathcal S}}|X_{{\mathcal V}}\hat{Y}_{{\mathcal S}^C\backslash\{d_i\}}Y_{d_i})\le I(X_{{\mathcal S}};\hat{Y}_{{\mathcal S}^C\backslash\{d_i\}}Y_{d_i}|X_{{\mathcal S}^C}).
\end{align}\end{subequations}
The first constraint \eqref{eq:adi-adi1} is the same as the constraint \eqref{eq:sw}, so we only need to show that the second constraint \eqref{eq:adi-adi2} can be dropped. The second constraint represents a sufficient condition for reliable multicasting of the compressed observations of the non-source nodes to the destinations. Since the destinations only need to decode the sources and are not required to decode any other information, this constraint can be neglected, which completes the proof of Theorem \ref{thm:sw}. We provide a rigorous proof of this fact in Subsection \ref{sub:c}.
\end{proof}
\subsection{Multi-Layer Slepian-Wolf coding over a cooperative network (Proof of Lemma \ref{le:first})}
\label{sub:a}
We transmit an $s_{nB}=nB$-length source stream over the cooperative network in $B+2V-3$ blocks of length $n$, where $V$ is the cardinality of ${\mathcal V}$. Observe that $r_{nB}=n(B+2V-3)$ and $\dfrac{r_{nB}}{s_{nB}}\rightarrow 1$ as $B\rightarrow \infty$; thus the sequence $\{(s_{nB},r_{nB})\}_{B=1}^{\infty}$ satisfies the condition of Definition \ref{def:sw}.
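A quick numeric check of this vanishing overhead (taking $V=4$ nodes, as in the four-node example discussed below):

```python
def overhead(n, B, V):
    """Ratio of channel uses to source symbols: r_nB / s_nB = (B + 2V - 3) / B."""
    s = n * B                  # source symbols transmitted
    r = n * (B + 2 * V - 3)    # channel uses consumed
    return r / s

# with V = 4, the per-symbol overhead (B + 5) / B shrinks to 1 as B grows
ratios = [overhead(n=100, B=B, V=4) for B in (2, 10, 100, 1000)]
```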
\subsubsection*{Codebook generation at node $v$}
Fix $0<\epsilon''<\epsilon'<\epsilon$. Also fix $\delta>0$ such that $|\mathit{T}_{\epsilon}^n(U_v)|<2^{n(H(U_{v})+\delta)}$. To each element of $\mathit{T}_{\epsilon}^n(U_v)$, assign a number $w_v\in[1,2^{n(H(U_{v})+\delta)}]$ using a one-to-one mapping. Moreover, for each non-typical sequence, set $w_v =1$. Denote the result by $\mathbf{u}_{v}(w_v)$. For channel coding, independently repeat the following procedure $V$ times. Denote the resulting $k$-th codebook by ${\mathcal C}_v(k)$.\par
Choose $2^{n(H(U_{v})+I(Y_v;\hat{Y}_v|X_v)+2\delta)}$ codewords $\mathbf{x}_v(w_v,z_v)$, each drawn uniformly and independently from the set $\mathit{T}_{\epsilon''}^n(X_v)$ where $z_v\in[1,2^{n(I(Y_v;\hat{Y}_v|X_v)+\delta)}]$. For Wyner-Ziv coding, for each $\mathbf{x}_v(w_v,z_v)$ choose $2^{n(I(Y_v;\hat{Y}_v|X_v)+\delta)}$ codewords $\mathbf{\hat{y}}_v(z'_v|\mathbf{x}_v)$, each drawn uniformly and independently from the set $\mathit{T}_{\epsilon'}^n(\hat{Y}_v|\mathbf{x}_v)$ where $z'_v\in[1,2^{n(I(Y_v;\hat{Y}_v|X_v)+\delta)}]$.
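A minimal sketch of the random codebook generation, under the simplifying assumption (ours, not the paper's) that codewords are drawn i.i.d. from $p(x_v)$ rather than uniformly from the typical set $\mathit{T}_{\epsilon''}^n(X_v)$; for large $n$ the two ensembles behave similarly.

```python
import random

def random_codebook(n, rate, px, seed=0):
    """Draw 2^(n*rate) length-n codewords with symbols i.i.d. from the pmf px.
    This is an illustrative stand-in for uniform sampling from T_eps''^n(X_v)."""
    rng = random.Random(seed)
    symbols, weights = zip(*px.items())
    size = int(round(2 ** (n * rate)))
    return [tuple(rng.choices(symbols, weights, k=n)) for _ in range(size)]

# e.g. n = 8 and rate 0.5 bit/symbol give 2^4 = 16 codewords of length 8
cb = random_codebook(n=8, rate=0.5, px={0: 0.5, 1: 0.5})
```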
\subsubsection*{Encoding at node $v$}
Divide the $nB$-length source stream $u_v^{nB}$ into $B$ vectors $(\mathbf{u}_{v,[j]}:1\leq j\leq B)$ where $\mathbf{u}_{v,[j]}=(u_{v,(j-1)n+1},\cdots,u_{v,jn})$. We say that the channel encoder receives $\mathbf{m}_v=(m_{v,[1]},\cdots,m_{v,[B]})$, if for $1\leq j\leq B$, $\mathbf{u}_{v,[j]}$ is assigned to $m_{v,[j]}\in[1,2^{n(H(U_{v})+\delta)}]$. Encoding is performed in $B+2V-3$ blocks where in block $b$, we use the codebook ${\mathcal C}_v(b\mod V)$. For $1\leq b\leq B+2V-3$, define:
\[
w_{v,[b]}= \left\{
\begin{array}{ll}
m_{v,[b-V+1]}, & V\le b\le B+V-1\\
1, & \mbox{otherwise}.
\end{array}\right.
\]
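The schedule of $w_{v,[b]}$ can be sketched as follows; with $B=2$ and $V=4$ it reproduces the pattern of Table \ref{ta:enc}, where fresh source indices ride only in blocks $4$ and $5$ and the default index $1$ is used elsewhere.

```python
def message_schedule(B, V):
    """w_{v,[b]} for b = 1..B+2V-3: which source index (if any) enters the
    channel codeword of block b; "1" denotes the default index."""
    n_blocks = B + 2 * V - 3
    w = {}
    for b in range(1, n_blocks + 1):
        if V <= b <= B + V - 1:
            w[b] = f"m[{b - V + 1}]"   # fresh source block m_{v,[b-V+1]}
        else:
            w[b] = "1"                 # warm-up / flush blocks
    return w

# the four-node, B = 2 example: 7 blocks, messages only in blocks 4 and 5
w = message_schedule(B=2, V=4)
```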
\par In block $1$, a default codeword, $\mathbf{x}_v(1,1)$ is transmitted. In block $b>1$, knowing $z_{v,[b-1]}$ from Wyner-Ziv coding at the end of block $b-1$ (described below), node $v$ transmits $\mathbf{x}_v(w_{v,[b]},z_{v,[b-1]})$.
\subsubsection*{Wyner-Ziv coding}
At the end of block $b-1$, node $v$ knows $(\mathbf{x}_{v,[b-1]},\mathbf{y}_{v,[b-1]})$ and sets $z_{v,[b-1]}=z_v$, where $z_{v}$ is the smallest index such that $(\mathbf{\hat{y}}_{v,[b-1]}(z_{v}|\mathbf{x}_{v,[b-1]}),\mathbf{x}_{v,[b-1]},\mathbf{y}_{v,[b-1]})$ are jointly typical. Since there are more than $2^{nI(Y_v;\hat{Y}_v|X_v)}$ codewords, such a $z_v$ exists with high probability. (See Table \ref{ta:enc}, which illustrates the encoding for a network with four nodes in which node $4$ is only a destination, i.e., ${\mathcal U}_4={\mathcal X}_4=\emptyset$.)
\begin{table*}
\centering
\caption{Encoding scheme for multicasting two blocks of source sequences over a network with ${\mathcal V}=\{1,2,3,4\}$, ${\mathcal A}=\{1,2,3\}$, ${\mathcal D}=\{3,4\}$, where node $4$ has no channel input, i.e., ${\mathcal U}_4={\mathcal X}_4=\emptyset$.}
\label{ta:enc}
\vspace{-0.6cm}
\resizebox{\textwidth}{!} {%
\begin{tabular}[t]{|c|c|c|c|c|c|c|c|}
\hline
Node &Block 1& Block 2& Block 3& Block 4& Block 5& Block 6& Block 7 \\
\hline \hline
& & & & $\mathbf{u}_1(m_{1[1]})$ & $\mathbf{u}_1(m_{1[2]})$ & & \\
1 & $\mathbf{x}_1(1,1)$ & $\mathbf{x}_1(1,z_{1[1]})$ & $\mathbf{x}_1(1,z_{1[2]})$ & $\mathbf{x}_1(m_{1[1]},z_{1[3]})$ & $\mathbf{x}_1(m_{1[2]},z_{1[4]})$ & $\mathbf{x}_1(1,z_{1[5]})$ & $\mathbf{x}_1(1,z_{1[6]})$ \\
& $\mathbf{\hat{y}}_1(z_{1[1]}|\mathbf{x}_{1[1]})$ & $\mathbf{\hat{y}}_1(z_{1[2]}|\mathbf{x}_{1[2]})$ & $\mathbf{\hat{y}}_1(z_{1[3]}|\mathbf{x}_{1[3]})$ & $\mathbf{\hat{y}}_1(z_{1[4]}|\mathbf{x}_{1[4]})$ & $\mathbf{\hat{y}}_1(z_{1[5]}|\mathbf{x}_{1[5]})$ & $\mathbf{\hat{y}}_1(z_{1[6]}|\mathbf{x}_{1[6]})$ & $\mathbf{\hat{y}}_1(z_{1[7]}|\mathbf{x}_{1[7]})$\\
\hline
& & & & $\mathbf{u}_2(m_{2[1]})$ & $\mathbf{u}_2(m_{2[2]})$ & & \\
2 & $\mathbf{x}_2(1,1)$ & $\mathbf{x}_2(1,z_{2[1]})$ & $\mathbf{x}_2(1,z_{2[2]})$ & $\mathbf{x}_2(m_{2[1]},z_{2[3]})$ & $\mathbf{x}_2(m_{2[2]},z_{2[4]})$ & $\mathbf{x}_2(1,z_{2[5]})$ & $\mathbf{x}_2(1,z_{2[6]})$ \\
& $\mathbf{\hat{y}}_2(z_{2[1]}|\mathbf{x}_{2[1]})$ & $\mathbf{\hat{y}}_2(z_{2[2]}|\mathbf{x}_{2[2]})$ & $\mathbf{\hat{y}}_2(z_{2[3]}|\mathbf{x}_{2[3]})$ & $\mathbf{\hat{y}}_2(z_{2[4]}|\mathbf{x}_{2[4]})$ & $\mathbf{\hat{y}}_2(z_{2[5]}|\mathbf{x}_{2[5]})$ & $\mathbf{\hat{y}}_2(z_{2[6]}|\mathbf{x}_{2[6]})$ & $\mathbf{\hat{y}}_2(z_{2[7]}|\mathbf{x}_{2[7]})$\\
\hline
& & & & $\mathbf{u}_3(m_{3[1]})$ & $\mathbf{u}_3(m_{3[2]})$ & & \\
3 & $\mathbf{x}_3(1,1)$ & $\mathbf{x}_3(1,z_{3[1]})$ & $\mathbf{x}_3(1,z_{3[2]})$ & $\mathbf{x}_3(m_{3[1]},z_{3[3]})$ & $\mathbf{x}_3(m_{3[2]},z_{3[4]})$ & $\mathbf{x}_3(1,z_{3[5]})$ & $\mathbf{x}_3(1,z_{3[6]})$ \\
& $\mathbf{\hat{y}}_3(z_{3[1]}|\mathbf{x}_{3[1]})$ & $\mathbf{\hat{y}}_3(z_{3[2]}|\mathbf{x}_{3[2]})$ & $\mathbf{\hat{y}}_3(z_{3[3]}|\mathbf{x}_{3[3]})$ & $\mathbf{\hat{y}}_3(z_{3[4]}|\mathbf{x}_{3[4]})$ & $\mathbf{\hat{y}}_3(z_{3[5]}|\mathbf{x}_{3[5]})$ & $\mathbf{\hat{y}}_3(z_{3[6]}|\mathbf{x}_{3[6]})$ & $\mathbf{\hat{y}}_3(z_{3[7]}|\mathbf{x}_{3[7]})$ \\
\hline
\end{tabular}}
\end{table*}
\subsubsection*{Decoding at node $d_i$}
Let $\mathbf{C}^{(d_i)}=[{\mathcal L}_1,\cdots,{\mathcal L}_{\ell}]$ be an ordered partition of the set ${\mathcal V}_{-d_i}={\mathcal V}\backslash\{d_i\}$. We propose a sliding-window decoding with respect to $\mathbf{C}^{(d_i)}$. Define $s_{v,[b]}=(w_{v,[b]},z_{v,[b-1]})$. Suppose that $(s_{{\mathcal L}_1,[b-1]},s_{{\mathcal L}_2,[b-2]},\cdots,s_{{\mathcal L}_\ell,[b-\ell]})$ have been correctly decoded at the end of block $b-1$. Node $d_i$ declares that $(\hat{s}_{{\mathcal L}_1,[b]},\cdots,\hat{s}_{{\mathcal L}_{\ell},[b-\ell+1]})$ has been sent if it is the unique tuple that satisfies the following conditions for each $1\le k\le\ell+1$:
\begin{small}
\begin{equation}
\label{eq:typ}
\begin{array}{lc}
\begin{split}
\Big(\mathbf{x}_{{\mathcal L}_k}(\hat{s}_{{\mathcal L}_k,[b-k+1]}),\mathbf{\hat{y}}_{{\mathcal L}_{k-1}}(\hat{z}_{{\mathcal L}_{k-1},[b-k+1]}|\mathbf{x}_{{\mathcal L}_{k-1},[b-k+1]}),\mathbf{x}_{{\mathcal L}^k,[b-k+1]},&\\
\mathbf{\hat{y}}_{{\mathcal L}^{k-1},[b-k+1]},\mathbf{y}_{d_i,[b-k+1]},\mathbf{x}_{d_i,[b-k+1]}\Big)\in\mathit{T}_{\epsilon}^n,&\quad\mbox{for all $k$ such that $k\le b $}
\\
(\mathbf{u}_{{\mathcal L}_k}(\hat{w}_{{\mathcal L}_k,[b-k+1]}),\mathbf{u}_{{\mathcal L}^k}(w_{{\mathcal L}^k,[b-k+1]}), \mathbf{u}_{d_i}(w_{d_i,[b-k+1]}))\in\mathit{T}_{\epsilon}^n,&\quad \mbox{for all $k$ such that $V\le b-k+1\le V+B-1$}
\end{split}
\end{array}
\end{equation}
\end{small}
\noindent where $\hat{s}_{{\mathcal L}_k,[b-k+1]}=(\hat{w}_{{\mathcal L}_k,[b-k+1]},\hat{z}_{{\mathcal L}_k,[b-k]})$. Note that at the end of block $b+V+\ell-2$, the vector $w_{{\mathcal A},[b+V-1]}=m_{{\mathcal A},[b]}$ is decoded. Since each tuple $(\mathbf{u}_{v,[b]}:v\in{\mathcal A})$ is jointly typical with high probability, we recover the source sequences $\mathbf{u}_{{\mathcal A},[b]}$ with small probability of error. Hence, at the end of block $B+V+\ell-2$, $u_{{\mathcal A}}^{nB}$ is decoded with small probability of error. \par
Note that in the first $V-1$ blocks of decoding, no source information is decoded. The advantage of decoding the compressed observations in these blocks is to provide side information at the receiver, which improves decoding in the subsequent blocks.
\begin{table*}
\centering
\caption{Illustration of the decoding scheme for the four-node network of Table \ref{ta:enc}, at node $4$, with respect to the ordered partition $\mathbf{C}^{(4)}=[\{1,2\},\{3\}]$, at the end of blocks $2$ and $5$. The gray cells highlight the random variables corresponding to the unknown indices, and the yellow cells highlight the random variables available at the decoder, which are used for decoding the unknown indices through a joint typicality test with the gray random variables.}
\label{ta:dec}
\vspace{-0.6cm}
\resizebox{\textwidth}{!} {%
\begin{tabular}[t]{|c|c|c||c|c|c|c|c|}
\hline
Node &Block 1& Block 2& Block 3& Block 4& Block 5& Block 6& Block 7 \\
\hline \hline
& & & &\cellcolor{gray!20!yellow} $\mathbf{u}_1(m_{1[1]})$ & \cellcolor{gray!30}$\mathbf{u}_1(m_{1[2]})$ & & \\
1 &\cellcolor{gray!20!yellow} $\mathbf{x}_1(1,1)$ & $\cellcolor{gray!30}\mathbf{x}_1(1,z_{1[1]})$&\cellcolor{gray!20!yellow} $\mathbf{x}_1(1,z_{1[2]})$ & \cellcolor{gray!20!yellow}$\mathbf{x}_1(m_{1[1]},z_{1[3]})$ & \cellcolor{gray!30}$\mathbf{x}_1(m_{1[2]},z_{1[4]})$ & $\mathbf{x}_1(1,z_{1[5]})$ & $\mathbf{x}_1(1,z_{1[6]})$ \\
& \cellcolor{gray!30}$\mathbf{\hat{y}}_1(z_{1[1]}|\mathbf{x}_{1[1]})$ & $\mathbf{\hat{y}}_1(z_{1[2]}|\mathbf{x}_{1[2]})$ &\cellcolor{gray!20!yellow} $\mathbf{\hat{y}}_1(z_{1[3]}|\mathbf{x}_{1[3]})$ & \cellcolor{gray!30}$\mathbf{\hat{y}}_1(z_{1[4]}|\mathbf{x}_{1[4]})$ & $\mathbf{\hat{y}}_1(z_{1[5]}|\mathbf{x}_{1[5]})$ & $\mathbf{\hat{y}}_1(z_{1[6]}|\mathbf{x}_{1[6]})$ & $\mathbf{\hat{y}}_1(z_{1[7]}|\mathbf{x}_{1[7]})$\\
\hline
& & & & \cellcolor{gray!20!yellow}$\mathbf{u}_2(m_{2[1]})$ &\cellcolor{gray!30} $\mathbf{u}_2(m_{2[2]})$ & & \\
2 &\cellcolor{gray!20!yellow} $\mathbf{x}_2(1,1)$ & \cellcolor{gray!30}$\mathbf{x}_2(1,z_{2[1]})$ &\cellcolor{gray!20!yellow} $\mathbf{x}_2(1,z_{2[2]})$ & \cellcolor{gray!20!yellow}$\mathbf{x}_2(m_{2[1]},z_{2[3]})$ &\cellcolor{gray!30} $\mathbf{x}_2(m_{2[2]},z_{2[4]})$ & $\mathbf{x}_2(1,z_{2[5]})$ & $\mathbf{x}_2(1,z_{2[6]})$ \\
& \cellcolor{gray!30}$\mathbf{\hat{y}}_2(z_{2[1]}|\mathbf{x}_{2[1]})$ & $\mathbf{\hat{y}}_2(z_{2[2]}|\mathbf{x}_{2[2]})$ &\cellcolor{gray!20!yellow} $\mathbf{\hat{y}}_2(z_{2[3]}|\mathbf{x}_{2[3]})$ & \cellcolor{gray!30}$\mathbf{\hat{y}}_2(z_{2[4]}|\mathbf{x}_{2[4]})$ & $\mathbf{\hat{y}}_2(z_{2[5]}|\mathbf{x}_{2[5]})$ & $\mathbf{\hat{y}}_2(z_{2[6]}|\mathbf{x}_{2[6]})$ & $\mathbf{\hat{y}}_2(z_{2[7]}|\mathbf{x}_{2[7]})$\\
\hline
& & & & \cellcolor{gray!30}$\mathbf{u}_3(m_{3[1]})$ & $\mathbf{u}_3(m_{3[2]})$ & & \\
3 & $\mathbf{x}_3(1,1)$ & $\mathbf{x}_3(1,z_{3[1]})$ &\cellcolor{gray!20!yellow} $\mathbf{x}_3(1,z_{3[2]})$ & \cellcolor{gray!30}$\mathbf{x}_3(m_{3[1]},z_{3[3]})$ & $\mathbf{x}_3(m_{3[2]},z_{3[4]})$ & $\mathbf{x}_3(1,z_{3[5]})$ & $\mathbf{x}_3(1,z_{3[6]})$ \\
& $\mathbf{\hat{y}}_3(z_{3[1]}|\mathbf{x}_{3[1]})$ & $\mathbf{\hat{y}}_3(z_{3[2]}|\mathbf{x}_{3[2]})$ &\cellcolor{gray!30} $\mathbf{\hat{y}}_3(z_{3[3]}|\mathbf{x}_{3[3]})$ & $\mathbf{\hat{y}}_3(z_{3[4]}|\mathbf{x}_{3[4]})$ & $\mathbf{\hat{y}}_3(z_{3[5]}|\mathbf{x}_{3[5]})$ & $\mathbf{\hat{y}}_3(z_{3[6]}|\mathbf{x}_{3[6]})$ & $\mathbf{\hat{y}}_3(z_{3[7]}|\mathbf{x}_{3[7]})$ \\
\hline & &&& \cellcolor{gray!20!yellow}$\mathbf{u}_{4[4]}$ & \cellcolor{gray!20!yellow}$\mathbf{u}_{4[5]}$ & &\\
4& $\cellcolor{gray!20!yellow}\mathbf{y}_{4[1]}$ & \cellcolor{gray!20!yellow}$\mathbf{y}_{4[2]}$ &\cellcolor{gray!20!yellow} $\mathbf{y}_{4[3]}$ & \cellcolor{gray!20!yellow}$\mathbf{y}_{4[4]}$ & \cellcolor{gray!20!yellow}$\mathbf{y}_{4[5]}$ & $\mathbf{y}_{4[6]}$ & $\mathbf{y}_{4[7]}$\\
\hline
Decoding& & & & $\hat{m}_{\{1,2\},[1]}$&\cellcolor{gray!30}$\hat{m}_{\{1,2\},[2]}$,$\hat{m}_{3,[1]}$ &$\hat{m}_{3,[2]}$ &\\
at node 4& $\emptyset$& \cellcolor{gray!30}$\hat{z}_{\{1,2\},[1]}$&$\hat{z}_{\{1,2\},[2]}$,$\hat{z}_{3,[1]}$&$\hat{z}_{\{1,2\},[3]}$,$\hat{z}_{3,[2]}$&\cellcolor{gray!30}$\hat{z}_{\{1,2\},[4]}$,$\hat{z}_{3,[3]}$&$\hat{z}_{\{1,2\},[5]}$,$\hat{z}_{3,[4]}$&$\hat{z}_{\{1,2\},[6]}$,$\hat{z}_{3,[5]}$\\
\hline
\end{tabular}}
\end{table*}
\begin{example} Consider the four-node network of Table \ref{ta:enc}. Here we assume that node $4$ observes a source $U_4$ correlated with the other sources. Let $\mathbf{C}^{(4)}=[\{1,2\},\{3\}]$. Decoding at node $4$ begins at the end of block $2$. In block $2$, node $4$ declares that $\hat{z}_{\{1,2\},[1]}$ is decoded if $(\mathbf{x}_1(1,\hat{z}_{1,[1]}),\mathbf{x}_2(1,\hat{z}_{2,[1]}),\mathbf{y}_{4[2]})$ and \\ $(\mathbf{\hat{y}}_1(\hat{z}_{1,[1]}|\mathbf{x}_{1[1]}),\mathbf{\hat{y}}_2(\hat{z}_{2,[1]}|\mathbf{x}_{2[1]}),\mathbf{x}_{1[1]}(1,1),\mathbf{x}_{2[1]}(1,1),\mathbf{y}_{4[1]})$ are jointly typical. In the next block, $(\hat{z}_{\{1,2\},[2]},\hat{z}_{3,[1]})$ are decoded, and in block $b$, $(\hat{w}_{\{1,2\},[b]},\hat{z}_{\{1,2\},[b-1]},\hat{w}_{3,[b-1]},\hat{z}_{3,[b-2]})$ are decoded if (see Table \ref{ta:dec})
\begin{align}
(\mathbf{u}_{\{1,2\}}(\hat{w}_{\{1,2\},[b]}),\mathbf{u}_{4}(w_{4[b]}))&\in\mathit{T}_{\epsilon}^n\\
(\mathbf{x}_{\{1,2\}}(\hat{w}_{\{1,2\},[b]},\hat{z}_{\{1,2\},[b-1]}),\mathbf{y}_{4[b]})&\in\mathit{T}_{\epsilon}^n\\
(\mathbf{u}_{3}(\hat{w}_{3,[b-1]}),\mathbf{u}_{\{1,2\},[b-1]},\mathbf{u}_{4}(w_{4[b-1]}))&\in\mathit{T}_{\epsilon}^n\\
(\mathbf{x}_{3}(\hat{w}_{3,[b-1]},\hat{z}_{3,[b-2]}),\mathbf{\hat{y}}_{\{1,2\}}(\hat{z}_{\{1,2\},[b-1]}|\mathbf{x}_{\{1,2\}[b-1]}),\mathbf{x}_{\{1,2\}[b-1]},\mathbf{y}_{4[b-1]})&\in\mathit{T}_{\epsilon}^n\\
(\mathbf{\hat{y}}_{3}(\hat{z}_{3,[b-2]}|\mathbf{x}_{3[b-2]}),\mathbf{x}_{\{1,2,3\}[b-2]},\mathbf{\hat{y}}_{\{1,2\}[b-2]},\mathbf{y}_{4[b-2]})&\in\mathit{T}_{\epsilon}^n
\end{align}
\end{example}
\subsubsection*{ Error Probability Analysis}
Let $\mathbf{U}_{v,[b-V+1]}$ be the observed sequence at node $v$, which is used for encoding in block $b$. We bound the error probability of decoding at the end of block $b$ averaged over $(\mathbf{U}_{{\mathcal A}[b-V+1]},\mathbf{U}_{{\mathcal A}[b-V]},\cdots,\mathbf{U}_{{\mathcal A}[b-\ell-V+2]})$ and all random codebooks, assuming that no error occurred in the decoding of the previous blocks. Let $S_{v[j]}=(W_{v[j]},Z_{v[j-1]})$, in which $W_{v[j]}$ and $Z_{v[j-1]}$ are the indices of $\mathbf{U}_{v,[b-V+1]}$ and $\mathbf{\hat{Y}}_{v,[b-1]}$, respectively. Define $\mathbf{S}_b=(S_{{\mathcal L}_1[b]},\cdots,S_{{\mathcal L}_{\ell}[b-\ell+1]})$. Also, let $\mathbf{s}=(s_{{\mathcal L}_1},\cdots,s_{{\mathcal L}_{\ell}})$, in which $s_v=(w_v,z_v):w_v\in[1,2^{n(H(U_v)+\delta)}],z_v\in[1,2^{n(I(Y_v;\hat{Y}_v|X_v)+\delta)}]$. Define the events,
\begin{align}
{\mathcal E}_0(b,k)&:=\{(\mathbf{U}_{{\mathcal L}_k,[b-k-V+2]},\mathbf{U}_{{\mathcal L}^k,[b-k-V+2]},\mathbf{U}_{d_i,[b-k-V+2]})\notin\mathit{T}_{\epsilon}^n\} \nonumber\\
{\mathcal E}_1(b,k,v)&:=\{(\mathbf{X}_{v,[b-k+1]},\mathbf{Y}_{v,[b-k+1]},\mathbf{\hat{Y}}_{v}(z_v|\mathbf{X}_{v,[b-k+1]}))\notin\mathit{T}_{\epsilon'}^n,\ \mbox{for all $z_v\in[1,2^{n(I(Y_v;\hat{Y}_v|X_v)+\delta)}]$}\}\nonumber\\
{\mathcal E}_2(b,k,\mathbf{s})&:=\{(\mathbf{u}_{{\mathcal L}_k}(w_{{\mathcal L}_k}),\mathbf{U}_{{\mathcal L}^k,[b-k-V+2]},\mathbf{U}_{d_i,[b-k-V+2]})\in\mathit{T}_{\epsilon}^n\}\nonumber\\
{\mathcal E}_3(b,k,\mathbf{s})&:=\{(\mathbf{X}_{{\mathcal L}_k}(s_{{\mathcal L}_k}),\mathbf{\hat{Y}}_{{\mathcal L}_{k-1}}(z_{{\mathcal L}_{k-1}}|\mathbf{X}_{{\mathcal L}_{k-1}[b-k+1]}),\mathbf{X}_{{\mathcal L}^k,[b-k+1]},
\mathbf{\hat{Y}}_{{\mathcal L}^{k-1},[b-k+1]},\mathbf{Y}_{d_i,[b-k+1]},\mathbf{X}_{d_i,[b-k+1]})\in\mathit{T}_{\epsilon}^n\}.
\end{align}
Then the error event ${\mathcal E}(b)$ corresponding to decoding at the end of block $b$ can be expressed as
\[ {\mathcal E}(b)=\cup_{k=1}^{\ell+1}\big({\mathcal E}_0(b,k)\bigcup\cup_{v\in{\mathcal V}}{\mathcal E}_1(b,k,v)\bigcup{\mathcal E}_3^C(b,k,\mathbf{S}_b)\big)\bigcup\cup_{\mathbf{s}\neq\mathbf{S}_b}\big(\cap_{k=1}^{\ell+1}{\mathcal E}_2(b,k,\mathbf{s})\cap{\mathcal E}_3(b,k,\mathbf{s})\big).\]
Using the union bound, we upper-bound the probability of error as follows:
\begin{align}
\mathbb{P}[{\mathcal E}(b)]&\le \mathbb{P}[\cup_{k=1}^{\ell+1}{\mathcal E}_0(b,k)]+\mathbb{P}[\cup_{k=1}^{\ell+1}\cup_{v\in{\mathcal V}}{\mathcal E}_1(b,k,v)]+\mathbb{P}[\cup_{k=1}^{\ell+1}({\mathcal E}_3^C(b,k,\mathbf{S}_b)\bigcap\cap_{v\in{\mathcal V}}{\mathcal E}_1^C(b,k,v))]+\nonumber\\
&\qquad \mathbb{P}[\cup_{\mathbf{s}\neq\mathbf{S}_b}\big(\cap_{k=1}^{\ell+1}{\mathcal E}_2(b,k,\mathbf{s})\cap{\mathcal E}_3(b,k,\mathbf{s})\big)].
\end{align}
By the typicality lemma \cite[Theorem 1.1]{kramer:book}, the first term vanishes as $n\rightarrow\infty$; the second term vanishes since at each node $v$ and for each input $\mathbf{x}_{v,[b-k+1]}$, there are more than $2^{nI(Y_v;\hat{Y}_v|X_v)}$ codewords $\mathbf{\hat{y}}_v(z_v|\mathbf{x}_{v,[b-k+1]})$; and the third term vanishes by \cite[Theorem 1.2]{kramer:book}. For the last term, let ${\mathcal E}_1(b)=\cup_{\mathbf{s}\neq\mathbf{S}_b}\big(\cap_{k=1}^{\ell+1}{\mathcal E}_2(b,k,\mathbf{s})\cap{\mathcal E}_3(b,k,\mathbf{s})\big)$.
\begin{align}
\mathbb{P}[{\mathcal E}_1(b)]&\le\sum_{\mathpalette\mathclapinternal{\mathbf{u}_{{\mathcal A}[b-\ell-V+2]},\cdots,\mathbf{u}_{{\mathcal A}[b-V+1]},\mathbf{s}_b}}p(\mathbf{u}_{{\mathcal A}[b-\ell-V+2]})\cdots p(\mathbf{u}_{{\mathcal A}[b-V+1]})\mathbb{P}[\mathbf{S}_b=\mathbf{s}_b|\mathbf{u}_{{\mathcal A}[b-\ell-V+2]},\cdots,\mathbf{u}_{{\mathcal A}[b-V+1]}]\nonumber\\
&\qquad\sum_{\mathbf{s}\neq\mathbf{s}_b}\mathbb{P}[\cap_{k=1}^{\ell+1}{\mathcal E}_2(b,k,\mathbf{s})|\mathbf{u}_{{\mathcal A}[b-\ell-V+2]},\cdots,\mathbf{u}_{{\mathcal A}[b-V+1]}]\mathbb{P}[\cap_{k=1}^{\ell+1}{\mathcal E}_3(b,k,\mathbf{s})|\mathbf{S}_b=\mathbf{s}_b]\label{sal:1}\\
&=\sum_{\mathpalette\mathclapinternal{\mathbf{u}_{{\mathcal A}[b-\ell-V+2]},\cdots,\mathbf{u}_{{\mathcal A}[b-V+1]},\mathbf{s}_b}}p(\mathbf{u}_{{\mathcal A}[b-\ell-V+2]})\cdots p(\mathbf{u}_{{\mathcal A}[b-V+1]})\mathbb{P}[\mathbf{S}_b=\mathbf{s}_b|\mathbf{u}_{{\mathcal A}[b-\ell-V+2]},\cdots,\mathbf{u}_{{\mathcal A}[b-V+1]}]\nonumber\\
&\qquad\sum_{\mathbf{s}\neq\mathbf{s}_b}\prod_{k=1}^{\ell+1}
\mathbb{P}[{\mathcal E}_2(b,k,\mathbf{s})|\mathbf{u}_{{\mathcal A}[b-k-V+2]}]\mathbb{P}[{\mathcal E}_3(b,k,\mathbf{s})|\mathbf{S}_b=\mathbf{s}_b]\label{sal:2}\\
&=\sum_{\mathpalette\mathclapinternal{\mathbf{u}_{{\mathcal A}[b-\ell-V+2]},\cdots,\mathbf{u}_{{\mathcal A}[b-V+1]},\mathbf{s}_b}}p(\mathbf{u}_{{\mathcal A}[b-\ell-V+2]})\cdots p(\mathbf{u}_{{\mathcal A}[b-V+1]})\mathbb{P}[\mathbf{S}_b=\mathbf{s}_b|\mathbf{u}_{{\mathcal A}[b-\ell-V+2]},\cdots,\mathbf{u}_{{\mathcal A}[b-V+1]}]\nonumber\\
&\qquad\sum_{\mathbf{s}\neq\mathbf{s}_b}\prod_{k=1}^{\ell+1}
\mathbf{1}[(\mathbf{u}_{{\mathcal L}_k}(w_{{\mathcal L}_k}),\mathbf{u}_{{\mathcal L}^k,[b-k-V+2]},\mathbf{u}_{d_i,[b-k-V+2]})\in\mathit{T}_{\epsilon}^n]\mathbb{P}[{\mathcal E}_3(b,k,\mathbf{s})|\mathbf{S}_b=\mathbf{s}_b]\label{sal:3}
\end{align}
where \eqref{sal:1} follows from the fact that the codebook generation is independent of the sources $U_{{\mathcal A}}^{nB}$; \eqref{sal:2} follows because the codebooks used in any $\ell\le V$ consecutive blocks are generated independently and the sources are i.i.d., so the source sequences of consecutive blocks are generated independently; and \eqref{sal:3} follows from the definition of ${\mathcal E}_2(b,k,\mathbf{s})$, where $\mathbf{1}$ denotes the indicator function. Define,
\begin{equation}\label{sal:def-mn}\begin{split}
{\mathcal N}_{{\mathcal S},{\mathcal Z}}(\mathbf{u}_{{\mathcal A}[b-\ell-V+2]},\cdots,\mathbf{u}_{{\mathcal A}[b-V+1]},\mathbf{s}_b)=\Big\{\mathbf{s}: s_t\neq s_{t,[b-k+1]}, z_{t'}\neq z_{t',[b-k]},\\ \mbox{for all}\ k\in[1,\ell+1],t\in{\mathcal S}\cap{\mathcal L}_k,t'\in{\mathcal Z}\cap{\mathcal L}_k,
\ \mbox{and}\ s_{{\mathcal L}_k\backslash{\mathcal S}}=s_{{\mathcal L}_k\backslash{\mathcal S},[b-k+1]}, \mbox{for all $k\in[1,\ell]$}, \\
\mbox{and}\quad (\mathbf{u}_{{\mathcal L}_k}(w_{{\mathcal L}_k}),\mathbf{u}_{{\mathcal L}^k,[b-k-V+2]},\mathbf{u}_{d_i,[b-k-V+2]})\in\mathit{T}_{\epsilon}^n,\ \mbox{for all $k\in[1,\ell]$}\Big\}.
\end{split}\end{equation}
Then, \eqref{sal:3} can be rewritten as,
\begin{align}
\mathbb{P}[{\mathcal E}_1(b)]&\le\sum_{\mathpalette\mathclapinternal{\mathbf{u}_{{\mathcal A}[b-\ell-V+2]},\cdots,\mathbf{u}_{{\mathcal A}[b-V+1]},\mathbf{s}_b}}p(\mathbf{u}_{{\mathcal A}[b-\ell-V+2]})\cdots p(\mathbf{u}_{{\mathcal A}[b-V+1]})\mathbb{P}[\mathbf{S}_b=\mathbf{s}_b|\mathbf{u}_{{\mathcal A}[b-\ell-V+2]},\cdots,\mathbf{u}_{{\mathcal A}[b-V+1]}]\nonumber\\
&\qquad
\sum_{\emptyset\neq{\mathcal S}\subseteq{\mathcal V}_{-d_i}}\sum_{{\mathcal Z}\subseteq{\mathcal S}}\sum_{\mathbf{s}\in {\mathcal N}_{{\mathcal S},{\mathcal Z}}(\mathbf{u}_{{\mathcal A}[b-\ell-V+2]},\cdots,\mathbf{u}_{{\mathcal A}[b-V+1]},\mathbf{s}_b)}\prod_{k=1}^{\ell+1}\mathbb{P}[{\mathcal E}_3(b,k,\mathbf{s})|\mathbf{S}_b=\mathbf{s}_b]\label{sal:5}
\end{align}
Define,
\begin{equation}\label{sal:le}
\mathbb{P}_{{\mathcal S},{\mathcal Z}}=\sum_{\mathbf{s}\in {\mathcal N}_{{\mathcal S},{\mathcal Z}}(\mathbf{u}_{{\mathcal A}[b-\ell-V+2]},\cdots,\mathbf{u}_{{\mathcal A}[b-V+1]},\mathbf{s}_b)}\prod_{k=1}^{\ell+1}\mathbb{P}[{\mathcal E}_3(b,k,\mathbf{s})|\mathbf{S}_b=\mathbf{s}_b].\end{equation}
Notice that there are $3^{|{\mathcal V}_{-d_i}|}$ pairs $({\mathcal S},{\mathcal Z})$ such that ${\mathcal Z}\subseteq{\mathcal S}\subseteq{\mathcal V}_{-d_i}$. Using this fact, $\mathbb{P}[{\mathcal E}_1(b)]$ is upper bounded by,
\begin{equation}
\mathbb{P}[{\mathcal E}_1(b)]\le 3^{|{\mathcal V}_{-d_i}|}\max_{{\mathcal Z}\subseteq{\mathcal S}\subseteq{\mathcal V}_{-d_i}}\mathbb{P}_{{\mathcal S},{\mathcal Z}}.
\end{equation}
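The factor $3^{|{\mathcal V}_{-d_i}|}$ arises because each node of ${\mathcal V}_{-d_i}$ lies either in ${\mathcal Z}$, in ${\mathcal S}\backslash{\mathcal Z}$, or outside ${\mathcal S}$. As a quick sanity check, the following sketch (illustrative Python, not part of the coding scheme) enumerates all nested pairs:

```python
from itertools import chain, combinations

def powerset(items):
    """All subsets of `items`, as tuples."""
    s = list(items)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def count_nested_pairs(nodes):
    """Count pairs (S, Z) with Z a subset of S and S a subset of `nodes`."""
    return sum(1 for S in powerset(nodes) for Z in powerset(S))

# each node is in Z, in S \ Z, or outside S, hence 3^|nodes| pairs
for n in range(6):
    assert count_nested_pairs(range(n)) == 3 ** n
```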
Therefore, to show that $\mathbb{P}[{\mathcal E}_1(b)]$ vanishes as $n\rightarrow\infty$, it suffices to show that each $\mathbb{P}_{{\mathcal S},{\mathcal Z}}$ vanishes as $n\rightarrow\infty$. To upper-bound $\mathbb{P}_{{\mathcal S},{\mathcal Z}}$, we use the following lemmas, which upper-bound the probability inside the last summation.
\begin{lemma}\label{le:9}
For each $\mathbf{s}\in {\mathcal N}_{{\mathcal S},{\mathcal Z}}(\mathbf{u}_{{\mathcal A}[b-\ell-V+2]},\cdots,\mathbf{u}_{{\mathcal A}[b-V+1]},\mathbf{s}_b)$, we have
\begin{equation*}
\mathbb{P}[{\mathcal E}_3(b,k,\mathbf{s})|\mathbf{S}_b=\mathbf{s}_b]\stackrel{.}{\le} 2^{-n\beta_{{\mathcal S},{\mathcal Z}}(k)},
\end{equation*}
where $\beta_{{\mathcal S},{\mathcal Z}}(k)$ is given by
\begin{equation}\label{sal:simp1}
\beta_{{\mathcal S},{\mathcal Z}}(k)=\sum_{t\in{\mathcal S}\cap{\mathcal L}_k}H(X_t)+\sum_{t'\in{\mathcal Z}\cap{\mathcal L}_{k-1}}H(\hat{Y}_{t'}|X_{t'})-H(X_{{\mathcal S}\cap{\mathcal L}_k}\hat{Y}_{{\mathcal Z}\cap{\mathcal L}_{k-1}}|X_{{\mathcal S}^C\cap{\mathcal L}_k}\hat{Y}_{{\mathcal Z}^C\cap{\mathcal L}_{k-1}}X_{{\mathcal L}^k}\hat{Y}_{{\mathcal L}^{k-1}}X_{d_i}Y_{d_i}).
\end{equation}
\end{lemma}
\begin{proof}
See Appendix \ref{app:5}.
\end{proof}
\begin{lemma}\label{le:7}
For each $(\mathbf{u}_{{\mathcal A}[b-\ell-V+2]},\cdots,\mathbf{u}_{{\mathcal A}[b-V+1]},\mathbf{s}_b)$, we have
\begin{equation*}
|{\mathcal N}_{{\mathcal S},{\mathcal Z}}(\mathbf{u}_{{\mathcal A}[b-\ell-V+2]},\cdots,\mathbf{u}_{{\mathcal A}[b-V+1]},\mathbf{s}_b)|\stackrel{.}{\le} 2^{n(\sum_{k=1}^{\ell}H(U_{{\mathcal S}\cap{\mathcal L}_k}|U_{{\mathcal L}_k\backslash{\mathcal S}}U_{{\mathcal L}^k}U_{d_i})+\sum_{t\in{\mathcal Z}}I(Y_t;\hat{Y}_t|X_t))}.
\end{equation*}
\end{lemma}
\begin{proof}
See Appendix \ref{app:6}.
\end{proof}
Applying Lemma \ref{le:9} and Lemma \ref{le:7} to \eqref{sal:le} yields,
\begin{equation}
\mathbb{P}_{{\mathcal S},{\mathcal Z}}\stackrel{.}{\le} 2^{-n(\sum_{k=1}^{\ell+1}\beta_{{\mathcal S},{\mathcal Z}}(k)-\sum_{k=1}^{\ell}H(U_{{\mathcal S}\cap{\mathcal L}_k}|U_{{\mathcal L}_k\backslash{\mathcal S}}U_{{\mathcal L}^k}U_{d_i})-\sum_{t'\in{\mathcal Z}}I(Y_{t'};\hat{Y}_{t'}|X_{t'}))}.
\end{equation}
Thus $\mathbb{P}_{{\mathcal S},{\mathcal Z}}$ vanishes as $n\rightarrow\infty$, provided that
\begin{equation}\label{sal:simp2}
\sum_{k=1}^{\ell+1}\beta_{{\mathcal S},{\mathcal Z}}(k)-\sum_{k=1}^{\ell}H(U_{{\mathcal S}\cap{\mathcal L}_k}|U_{{\mathcal L}_k\backslash{\mathcal S}}U_{{\mathcal L}^k}U_{d_i})-\sum_{t'\in{\mathcal Z}}I(Y_{t'};\hat{Y}_{t'}|X_{t'})> 0.
\end{equation}
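A strictly positive exponent suffices because the bound on $\mathbb{P}_{{\mathcal S},{\mathcal Z}}$ then decays exponentially in the block length $n$. A toy numerical sketch (with hypothetical exponents, merely illustrating the union-bound mechanism, not the actual quantities in \eqref{sal:simp2}):

```python
# roughly 2^{n*a} wrong index tuples, each accepted with probability
# about 2^{-n*b}; the union bound 2^{n(a-b)} vanishes whenever b > a
def union_bound(n, a, b):
    return 2.0 ** (n * (a - b))

a, b = 1.2, 1.5  # hypothetical entropy / typicality exponents
values = [union_bound(n, a, b) for n in (10, 100, 1000)]
assert values[0] > values[1] > values[2]  # exponential decay in n
```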
Substituting \eqref{sal:simp1} into \eqref{sal:simp2} simplifies it as follows:
\begin{align}
0&<\sum_{t\in{\mathcal S}}H(X_t)+\sum_{t'\in{\mathcal Z}}H(\hat{Y}_{t'}|X_{t'}Y_{t'})-\sum_{k=1}^{\ell+1}\left(H(X_{{\mathcal S}\cap{\mathcal L}_k}\hat{Y}_{{\mathcal Z}\cap{\mathcal L}_{k-1}}|X_{{\mathcal L}_k\backslash{\mathcal S}}\hat{Y}_{{\mathcal L}_{k-1}\backslash{\mathcal Z}}X_{{\mathcal L}^{k}}\hat{Y}_{{\mathcal L}^{k-1}}X_{d_i}Y_{d_i})\right.\nonumber\\
&\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\left.+H(U_{{\mathcal S}\cap{\mathcal L}_k}|U_{{\mathcal L}_k\backslash{\mathcal S}}U_{{\mathcal L}^k}U_{d_i})\right)\label{sal:simp}\\
&=\sum_{k=1}^{\ell+1}\left(H(X_{{\mathcal S}\cap{\mathcal L}_k})+H(\hat{Y}_{{\mathcal Z}\cap{\mathcal L}_{k-1}}|X_{{\mathcal Z}\cap{\mathcal L}_{k-1}}Y_{{\mathcal Z}\cap{\mathcal L}_{k-1}})-H(X_{{\mathcal S}\cap{\mathcal L}_k}\hat{Y}_{{\mathcal Z}\cap{\mathcal L}_{k-1}}|X_{{\mathcal L}_k\backslash{\mathcal S}}\hat{Y}_{{\mathcal L}_{k-1}\backslash{\mathcal Z}}X_{{\mathcal L}^{k}}\hat{Y}_{{\mathcal L}^{k-1}}Y_{d_i})\right.\nonumber\\
&\qquad\qquad\qquad\left.-H(U_{{\mathcal S}\cap{\mathcal L}_k}|U_{{\mathcal L}_k\backslash{\mathcal S}}U_{{\mathcal L}^k}U_{d_i})\right)\label{sal:7}\\
&= \sum_{k=1}^{\ell+1}\left(I(X_{{\mathcal S}\cap{\mathcal L}_k};Y_{d_i}\hat{Y}_{({\mathcal L}_{k-1}\backslash{\mathcal Z})\cup{\mathcal L}^{k-1}}|X_{d_i}X_{({\mathcal L}_k\backslash{\mathcal S})\cup {\mathcal L}^k})-I(Y_{{\mathcal Z}\cap{\mathcal L}_{k-1}};\hat{Y}_{{\mathcal Z}\cap{\mathcal L}_{k-1}}|X_{d_i}X_{{\mathcal L}^{k+1}}Y_{d_i}\hat{Y}_{({\mathcal L}_{k-1}\backslash{\mathcal Z})\cup{\mathcal L}^{k-1}})\right.\nonumber\\
&\qquad\qquad\left. -H(U_{{\mathcal S}\cap{\mathcal L}_k}|U_{{\mathcal L}_k\backslash{\mathcal S}}U_{{\mathcal L}^k}U_{d_i})\right)\label{sal:8}
\end{align}
where in \eqref{sal:7} and \eqref{sal:8} we have used the facts that the $X_t$'s are independent and that $\hat{Y}_t$ given $(X_t,Y_t)$ is independent of all the other random variables. Now consider the RHS of \eqref{sal:8}. Since ${\mathcal Z}\subseteq{\mathcal S}$, it is easily shown that the first term inside the summation attains its minimum at ${\mathcal Z}={\mathcal S}$, while the second term simultaneously attains its maximum at ${\mathcal Z}={\mathcal S}$. On the other hand, ${\mathcal Z}={\mathcal S}$ corresponds to the probability $\mathbb{P}_{{\mathcal S},{\mathcal S}}$. Hence if $\mathbb{P}_{{\mathcal S},{\mathcal S}}$ vanishes, then every $\mathbb{P}_{{\mathcal S},{\mathcal Z}}$ with ${\mathcal Z}\subseteq{\mathcal S}$ vanishes as $n\rightarrow\infty$. Therefore $\mathbb{P}[{\mathcal E}_1(b)]$ vanishes if all $\mathbb{P}_{{\mathcal S},{\mathcal S}}$ (${\mathcal S}\subseteq{\mathcal V}_{-d_i}$) vanish. Finally, substituting ${\mathcal Z}={\mathcal S}$ in \eqref{sal:simp} yields \eqref{eq:sufficient}, which completes the proof of Lemma \ref{le:first}. \par
\begin{remark}
If there is only a single destination, one can use the offset encoding scheme of \cite{xie} and \cite{kramer2005}, which incurs less delay than the proposed encoding scheme, to prove Lemma \ref{le:first}. In general, however, since the ordered partitions required for reliable decoding differ from one receiver to another, a single offset encoding scheme cannot work for all destinations. This explains why the encoding scheme does not transmit any information in the first $V-1$ blocks.
\end{remark}
\subsection{Removing additional constraints}
\label{sub:c}
In this subsection we show that, for each $d_i$, the constraints of \eqref{eq:adi} can be reduced to its first term. A special case of this claim, for the single relay channel, was studied in \cite{kang:itw}. We prove the claim by induction on $|{\mathcal V}_{-d_i}|$. For $|{\mathcal V}_{-d_i}|=1$, the claim holds trivially. Now suppose it holds for all $k<|{\mathcal V}_{-d_i}|$.
For each ${\mathcal Z}\subseteq{\mathcal V}$ which contains $d_i$ and each ${\mathcal S}\subseteq{\mathcal Z}\backslash\{d_i\}$, let
\[h^{(d_i)}_{{\mathcal Z}}({\mathcal S})=R^{(d_i)}_{{\mathcal S}} - H(\hat{Y}_{{\mathcal S}}X_{{\mathcal S}}|X_{{\mathcal Z}\backslash{\mathcal S}}\hat{Y}_{{\mathcal Z}\backslash({\mathcal S}\cup\{d_i\})}Y_{d_i})\]
\par Assume there exists a subset ${\mathcal T}$ of ${\mathcal A}^C\backslash\{d_i\}$ such that $h^{(d_i)}_{{\mathcal V}}({\mathcal T})<0$. For each ${\mathcal W}\subseteq{\mathcal V}_{-d_i}$ observe that,
\begin{IEEEeqnarray}{rLl}
h^{(d_i)}_{{\mathcal V}}({\mathcal W}\cup{\mathcal T})&=&h^{(d_i)}_{{\mathcal V}}({\mathcal T})+R^{(d_i)}_{{\mathcal W}\backslash{\mathcal T}}-
H(\hat{Y}_{{\mathcal W}}X_{{\mathcal W}}|X_{{\mathcal W}^C\backslash{\mathcal T}}\hat{Y}_{{\mathcal W}^C\backslash({\mathcal T}\cup\{d_i\})}Y_{d_i}) \nonumber\\
&<&R^{(d_i)}_{{\mathcal W}\backslash{\mathcal T}}-H(\hat{Y}_{{\mathcal W}}X_{{\mathcal W}}|X_{{\mathcal W}^C\backslash{\mathcal T}}\hat{Y}_{{\mathcal W}^C\backslash({\mathcal T}\cup\{d_i\})}Y_{d_i}) \nonumber\\
&\le&R^{(d_i)}_{{\mathcal W}}-H(\hat{Y}_{{\mathcal W}}X_{{\mathcal W}}|X_{{\mathcal W}^C}\hat{Y}_{{\mathcal W}^C\backslash\{d_i\}}Y_{d_i}) \nonumber\\ \label{eq:symp}
&=&h^{(d_i)}_{{\mathcal V}}({\mathcal W})
\end{IEEEeqnarray}
Using \eqref{eq:symp}, \eqref{eq:adi1} can be simplified as follows:
\begin{IEEEeqnarray}{rCl}
H(U_{{\mathcal S}}|U_{{\mathcal A}\backslash{\mathcal S}})&\le& \min_{{\mathcal V}\supset{\mathcal W}\supseteq{\mathcal S}:\atop d_i\in{\mathcal W}^C} h^{(d_i)}_{{\mathcal V}}({\mathcal W})\nonumber\\
&\stackrel{(a)}{=}&\min_{{\mathcal V}\supset{\mathcal W}\supseteq{\mathcal S}:\atop d_i\in{\mathcal W}^C} h^{(d_i)}_{{\mathcal V}}({\mathcal W}\cup{\mathcal T})\nonumber\\
&\stackrel{(b)}{\le}&\min_{{\mathcal V}\supset{\mathcal W}\supseteq{\mathcal S}:\atop d_i\in{\mathcal W}^C}h^{(d_i)}_{{\mathcal V}\backslash{\mathcal T}}({\mathcal W}\backslash{\mathcal T})\nonumber\\
\label{eq:fin}
&=& \min_{{\mathcal V}\backslash{\mathcal T}\supset{\mathcal W}\supseteq{\mathcal S}:\atop d_i\in{\mathcal W}^C}h^{(d_i)}_{{\mathcal V}\backslash{\mathcal T}}({\mathcal W})
\end{IEEEeqnarray}
where (a) follows from \eqref{eq:symp}, since ${\mathcal S}\subset{\mathcal W}\cup{\mathcal T}$ and $d_i\notin{\mathcal T}$, and (b) follows from the first inequality in \eqref{eq:symp}.\par
Now by the induction assumption, the last term of \eqref{eq:fin} corresponds to the feasibility constraints for reliable transmission of $U_{{\mathcal A}}$ to node $d_i$ over the cooperative network with node set ${\mathcal V}\backslash{\mathcal T}$. Hence node $d_i$ can decode $U_{{\mathcal A}}$ by treating $(X_{{\mathcal T}},\hat{Y}_{{\mathcal T}})$ as noise. We note that the encoding scheme only incurs more delay than the corresponding encoding/decoding scheme for the cooperative network with node set ${\mathcal V}\backslash{\mathcal T}$. Therefore, the encoding scheme requires no changes, and decoding is performed with respect to the cooperative network restricted to ${\mathcal V}\backslash{\mathcal T}$. This proves our claim.
\section{Slepian-Wolf coding over some classes of cooperative networks}\label{sec:6}
In this section, we derive some corollaries of Proposition \ref{pro:ob} and Theorem \ref{thm:sw} for semi-deterministic networks, Aref networks, and linear finite-field and state-dependent deterministic networks, for which the conditions of Proposition \ref{pro:ob} and Theorem \ref{thm:sw} (partially) match.
\begin{definition}
A cooperative network with one destination $d$ is said to be \emph{semi-deterministic} if each node $v\in{\mathcal V}\backslash\{d\}$ observes a deterministic function of all the channel inputs and the destination channel output, i.e., $Y_v=f_v(X_{{\mathcal V}},Y_d)$.
\end{definition}
\begin{remark}
The semi-deterministic cooperative network is a generalization of the semi-deterministic relay channel \cite{aref} and of a class of deterministic relay channels recently defined in \cite{yhk}.
\end{remark}
\begin{definition}
A cooperative network is said to be \emph{deterministic} if each node observes a deterministic function of all the channel inputs, i.e., $Y_v=f_v(X_{{\mathcal V}})$.
\end{definition}
\begin{definition}
A deterministic network is said to be an \emph{Aref network} if each channel output $Y_v$ can be decomposed into $|{\mathcal V}|-1$ components $(Y_{v',v}:v'\in{\mathcal V}\backslash\{v\})$, where $Y_{v',v}$ is a deterministic function of $X_{v'}$. A semi-deterministic network with destination node $d$ is said to be a \emph{semi-deterministic Aref network} if each channel output $Y_v$ can be decomposed into $|{\mathcal V}|-1$ components $(Y_{v',v}:v'\in{\mathcal V}\backslash\{v\})$, where $Y_{v',v}$ is a deterministic function of $X_{v'}$ for $v\in{\mathcal V}_{-d}$ and $Y_{v',d}$ is a stochastic function of $X_{v'}$.
\end{definition}
\begin{definition}
A deterministic network is said to be a \emph{linear finite-field deterministic network} if all the channel inputs and outputs lie in the same field $\mathbf{GF}(q)$ and each channel output can be expressed as a linear combination of all the channel inputs. The relation between the channel inputs and outputs is given by a matrix product, $Y_{{\mathcal V}}=\mathbf{G}X_{{\mathcal V}}$, where $\mathbf{G}$ is called the channel matrix of the network. $\mathbf{G}_{{\mathcal T}_1,{\mathcal T}_2}$ denotes the sub-matrix obtained by deleting the rows and columns of $\mathbf{G}$ corresponding to ${\mathcal T}_1$ and ${\mathcal T}_2$, respectively.
\end{definition}
\begin{definition}
A cooperative network is state-dependent (SD) \cite{yhk:isit09}, if there exists a set of states ${\mathcal S}$ such that the channel inputs and the channel outputs at each time are related via the current state of the network. A SD-cooperative network is said to be deterministic if each node observes a deterministic function of all the channel inputs and the state of the network, i.e., $Y_v=f_v(X_{{\mathcal V}},S)$. A SD-deterministic network is said to be an Aref network, if each channel output $Y_v$ can be decomposed into $|{\mathcal V}|-1$ components $(Y_{v',v}:v'\in{\mathcal V}\backslash\{v\})$, where $Y_{v',v}$ is a deterministic function of $(X_{v'},S)$. A SD-linear finite-field deterministic network is a network described by $Y_{{\mathcal V}}=\mathbf{G}(S)X_{{\mathcal V}}$, where $\mathbf{G}(S)$ is the matrix of coefficients corresponding to state $S$.
\end{definition}
\begin{proposition}
\label{pro:semi}
The set of DMCS $U_{{\mathcal A}}$ can reliably be transmitted over a semi-deterministic network if there exists a random variable $Q$ such that for each ${\mathcal S}\subseteq{\mathcal V}$, we have:
\begin{equation}
\label{eq:semi}
H(U_{{\mathcal S}}|U_{{\mathcal A}\backslash{\mathcal S}})<\min_{{\mathcal V}_{-d}\supseteq{\mathcal W}\supseteq{\mathcal S}}I(X_{{\mathcal W}};Y_{{\mathcal W}^C}|X_{{\mathcal W}^C}Q)
\end{equation}
where the joint p.m.f. of the random variables factors as $p(q)[\prod_{v\in{\mathcal V}}p(x_v|q)]p(y_{{\mathcal V}}|x_{{\mathcal V}})$.\\
Conversely, reliable transmission is feasible only if there exists a joint p.m.f. $p(x_{{\mathcal V}})$ such that
\begin{equation}
H(U_{{\mathcal S}}|U_{{\mathcal A}\backslash{\mathcal S}})<\min_{{\mathcal V}_{-d}\supseteq{\mathcal W}\supseteq{\mathcal S}}I(X_{{\mathcal W}};Y_{{\mathcal W}^C}|X_{{\mathcal W}^C}).
\end{equation}
\end{proposition}
\begin{proposition}
\label{pro:det}
The set of DMCS $U_{{\mathcal A}}$ can reliably be multicast over a deterministic network if there exists a product distribution $\prod_{v\in{\mathcal V}}p(x_v)$ such that for each ${\mathcal S}\subseteq{\mathcal V}$, we have:
\begin{equation}
\label{eq:det}
H(U_{{\mathcal S}}|U_{{\mathcal A}\backslash{\mathcal S}})<\min_{d_i\in{\mathcal D}}\min_{{\mathcal V}_{-d_i}\supseteq{\mathcal W}\supseteq{\mathcal S}}H(Y_{{\mathcal W}^C}|X_{{\mathcal W}^C})
\end{equation}
Conversely, multicasting is feasible only if there exists a joint p.m.f. $p(x_{{\mathcal V}})$ such that
\begin{equation}
H(U_{{\mathcal S}}|U_{{\mathcal A}\backslash{\mathcal S}})<\min_{d_i\in{\mathcal D}}\min_{{\mathcal V}_{-d_i}\supseteq{\mathcal W}\supseteq{\mathcal S}}H(Y_{{\mathcal W}^C}|X_{{\mathcal W}^C}).
\end{equation}
\end{proposition}
\begin{remark}
Comparing the direct and converse parts of Propositions \ref{pro:semi} and \ref{pro:det}, we see that the sufficient conditions partially match the necessary conditions; they match completely if the set of joint p.m.f.s in the converse part can be restricted to product distributions.
\end{remark}
\begin{proposition}
\label{pro:aref}
The set of DMCS $U_{{\mathcal A}}$ can reliably be multicast over an Aref network if and only if there exists a product distribution $\prod_{v\in{\mathcal V}}p(x_v)$ such that for each ${\mathcal S}\subseteq{\mathcal V}$, we have:
\begin{equation}
\label{eq:aref}
H(U_{{\mathcal S}}|U_{{\mathcal A}\backslash{\mathcal S}})<\min_{d_i\in{\mathcal D}}\min_{{\mathcal V}_{-d_i}\supseteq{\mathcal W}\supseteq{\mathcal S}}\sum_{v\in{\mathcal W}}H(Y_{v,{\mathcal W}^C})
\end{equation}
\end{proposition}
\begin{remark}
This proposition was partially proved in \cite{babu} for acyclic Aref networks.
\end{remark}
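The right-hand side of \eqref{eq:aref} is a sum of single-letter entropies of the cut components, so it can be estimated directly from the component maps. A small sketch (with a hypothetical component map and alphabet, for illustration only):

```python
import math
from collections import Counter

def entropy_bits(samples):
    """Empirical entropy (bits) of a finite list of equally likely outcomes."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# toy Aref component: Y_{v',v} = f(X_{v'}) with X_{v'} uniform on {0,1,2,3}
f = lambda x: x % 2                 # hypothetical deterministic component map
outputs = [f(x) for x in range(4)]  # image of the uniform input
H = entropy_bits(outputs)           # contributes H(Y_{v',v}) to the cut sum
```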
\begin{proposition}
\label{pro:semi-aref}
The set of DMCS $U_{{\mathcal A}}$ can reliably be transmitted over a semi-deterministic Aref network if and only if there exists a product distribution $\prod_{v\in{\mathcal V}}p(x_v)$ such that for each ${\mathcal S}\subseteq{\mathcal V}$, we have:
\begin{equation}
\label{eq:semi-aref}
H(U_{{\mathcal S}}|U_{{\mathcal A}\backslash{\mathcal S}})<\min_{{\mathcal V}_{-d}\supseteq{\mathcal W}\supseteq{\mathcal S}}\sum_{v\in{\mathcal W}}I(X_v;Y_{v,{\mathcal W}^C})
\end{equation}
\end{proposition}
\begin{proposition}
\label{pro:finite}
The set of DMCS $U_{{\mathcal A}}$ can reliably be multicast over a linear finite-field deterministic network if and only if
\begin{equation}
\label{eq:finite}
H(U_{{\mathcal S}}|U_{{\mathcal A}\backslash{\mathcal S}})<\min_{d_i\in{\mathcal D}}\min_{{\mathcal V}_{-d_i}\supseteq{\mathcal W}\supseteq{\mathcal S}}\mbox{rank}(\mathbf{G}_{{\mathcal W},{\mathcal W}^C})\log q
\end{equation}
\end{proposition}
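The cut quantity $\mbox{rank}(\mathbf{G}_{{\mathcal W},{\mathcal W}^C})\log q$ can be evaluated mechanically. The sketch below (an illustrative toy network over $\mathbf{GF}(2)$ with a hypothetical channel matrix, not an example from the text) computes the rank of a cut submatrix by Gaussian elimination mod 2:

```python
import numpy as np

def gf2_rank(M):
    """Rank of a 0/1 matrix over GF(2) via Gaussian elimination."""
    M = np.array(M, dtype=np.int64) % 2
    rank, rows, cols = 0, M.shape[0], M.shape[1]
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]  # move pivot row up
        for r in range(rows):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]              # clear column c elsewhere
        rank += 1
    return rank

# hypothetical 3-node channel matrix: Y = G X over GF(2)
G = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])
W = [0]                            # cut: inputs of W drive outputs of W^c
Wc = [v for v in range(3) if v not in W]
G_cut = G[np.ix_(Wc, W)]           # rows indexed by W^c, columns by W
cut_value_bits = gf2_rank(G_cut)   # times log q = 1 bit for q = 2
```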
Now, consider SD-networks. In the sequel, we assume that the state process $S$ is i.i.d.
\begin{proposition}
\label{pro:state}
For reliable multicasting over an SD-deterministic network, if all destinations have the state information $S$, then a sufficient condition is given by,
\begin{equation}
\label{eq:state}
\forall{\mathcal S}\subseteq{\mathcal A}: H(U_{{\mathcal S}}|U_{{\mathcal A}\backslash{\mathcal S}})<\min_{d_i\in{\mathcal D}}\min_{{\mathcal V}_{-d_i}\supseteq{\mathcal W}\supseteq{\mathcal S}}H(Y_{{\mathcal W}^C}|X_{{\mathcal W}^C},S)
\end{equation}
Moreover, condition \eqref{eq:state} is necessary for reliable multicasting over an SD-Aref network and an SD-linear finite-field deterministic network with state information available at the destinations. In these cases, \eqref{eq:state} simplifies to,
\begin{align*}
\mbox{SD-Aref network}:& H(U_{{\mathcal S}}|U_{{\mathcal A}\backslash{\mathcal S}})<\min_{d_i\in{\mathcal D}}\min_{{\mathcal V}_{-d_i}\supseteq{\mathcal W}\supseteq{\mathcal S}}\sum_{v\in{\mathcal W}}H(Y_{v,{\mathcal W}^C}|S)\\
\mbox{SD-linear finite-field deterministic network}:&
H(U_{{\mathcal S}}|U_{{\mathcal A}\backslash{\mathcal S}})<\min_{d_i\in{\mathcal D}}\min_{{\mathcal V}_{-d_i}\supseteq{\mathcal W}\supseteq{\mathcal S}}\mathbb{E}_{S}[\mbox{rank}(\mathbf{G}_{{\mathcal W},{\mathcal W}^C}(S))] \log q
\end{align*}
\end{proposition}
\begin{proof}[Proof of Propositions 2-7]
The direct parts of Propositions \ref{pro:semi} and \ref{pro:det} follow from Theorem \ref{thm:sw} by setting $\hat{Y}_v=Y_v$, because $(Y_{{\mathcal W}}: {\mathcal W}\subseteq{\mathcal V}_{-d})$ and $(Y_{{\mathcal W}}: {\mathcal W}\subseteq{\mathcal V})$ are deterministic functions of $(Y_{d},X_{{\mathcal V}})$ and $X_{{\mathcal V}}$, respectively. The converse parts of Propositions \ref{pro:semi} and \ref{pro:det} are direct consequences of Proposition \ref{pro:ob}. The direct part of Proposition \ref{pro:aref} follows from Proposition \ref{pro:det}, and the converse is deduced from Proposition \ref{pro:ob} as follows:
\begin{align*}
H(U_{{\mathcal S}}|U_{{\mathcal A}\backslash{\mathcal S}})&<H(Y_{{\mathcal W}^C}|X_{{\mathcal W}^C})\\
&\le H(\cup_{v\in{\mathcal W}}Y_{v,{\mathcal W}^C})\\
&\le \sum_{v\in{\mathcal W}}H(Y_{v,{\mathcal W}^C})
\end{align*}
Now, since $Y_{v,{\mathcal W}^C}$ depends only on $X_v$, the last term of the chain of inequalities depends only on the marginal p.m.f.s of the random variables. Thus, we can restrict the set of joint p.m.f.s in Proposition \ref{pro:ob} to product distributions, which completes the proof of Proposition \ref{pro:aref}. The direct part of Proposition \ref{pro:semi-aref} follows from Proposition \ref{pro:semi}, and the converse part is obtained from Proposition \ref{pro:ob} as follows:
\begin{align}
H(U_{{\mathcal S}}|U_{{\mathcal A}\backslash{\mathcal S}})&<I(X_{{\mathcal W}};Y_{{\mathcal W},{\mathcal W}^C}Y_{{\mathcal W}^C,{\mathcal W}^C}|X_{{\mathcal W}^C})\label{eq:semi-aref-1}\\
&=I(X_{{\mathcal W}};Y_{{\mathcal W},{\mathcal W}^C}|X_{{\mathcal W}^C}Y_{{\mathcal W}^C,{\mathcal W}^C})\label{eq:semi-aref-2}\\
&\le I(X_{{\mathcal W}};Y_{{\mathcal W},{\mathcal W}^C})\label{eq:semi-aref-3}\\
&\le \sum_{v\in{\mathcal W}}I(X_v;Y_{v,{\mathcal W}^C})\label{eq:semi-aref-4}
\end{align}
where \eqref{eq:semi-aref-2} follows because $X_{{\mathcal W}}-X_{{\mathcal W}^C}-Y_{{\mathcal W}^C,{\mathcal W}^C}$ form a Markov chain, \eqref{eq:semi-aref-3} follows from the fact that $(X_{{\mathcal W}^C}Y_{{\mathcal W}^C,{\mathcal W}^C})-X_{{\mathcal W}}-Y_{{\mathcal W},{\mathcal W}^C}$ form a Markov chain, and \eqref{eq:semi-aref-4} follows since $Y_{v,{\mathcal W}^C}$ given $X_v$ is independent of the other random variables. Finally, note that the RHS of \eqref{eq:semi-aref-4} depends only on the marginal p.m.f. of the random variables $X_{{\mathcal V}}$, which implies the converse.\par
The direct part of Proposition \ref{pro:finite} is deduced from Proposition \ref{pro:det}, by computing the RHS of \eqref{eq:det} for the product distribution $\prod_{v\in{\mathcal V}}p(x_{v})$, in which each $X_v$ is uniformly distributed over the field $\mathbf{GF}(q)$. The converse follows from Proposition \ref{pro:ob}, since the product distribution with uniform marginals simultaneously maximizes the RHS of \eqref{eq:sw2} for all ${\mathcal W}\subseteq{\mathcal V}$.\par
The sufficient condition of Proposition \ref{pro:state} is deduced from Theorem \ref{thm:sw}, by treating the state information at each destination as an additional output of the network and the fact that $(Y_{{\mathcal W}}:{\mathcal W}\subseteq{\mathcal V}_{-d_i})$ is a deterministic function of $(X_{{\mathcal V}},S)$. The necessary conditions for the SD-Aref network and the SD-linear finite-field deterministic network follow from similar arguments for the converse of these networks without state.
\end{proof}
\section{Slepian-Wolf coding over Gaussian cooperative networks}\label{sec:7}
In the previous section, we focused on some networks for which the cut-set type necessary conditions became sufficient conditions, at least for product distributions of the channel inputs. In this section, we focus on Gaussian networks, for which simple forwarding of each node's observations is impossible. Instead, following \cite{aves:phd,aves:sub}, each node quantizes its observations at the noise level and then transmits them to the destinations. We compute sufficient conditions corresponding to this approach and compare them with the necessary conditions. \par
Consider a Gaussian cooperative network, in which the received signal $\mathbf{y}_{v}$ is given by,
\begin{equation}
\mathbf{y}_{v}=\sum_{v'\in{\mathcal V}_{-v}}h_{v',v}\mathbf{x}_{v'}+\mathbf{z}_{v}
\end{equation}
\noindent where $h_{v',v}$ is a complex number representing the channel gain from node $v'$ to node $v$. Furthermore, we assume that each node has an average power constraint equal to one on its transmitted signal. Moreover, $\mathbf{z}_{v}$ is an i.i.d. complex Gaussian random process with variance $\sigma^2_v$. Theorem \ref{thm:gauss} is the main result of this section.
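As a quick illustrative sketch (our own, not part of the paper; the network size and channel gains below are hypothetical), one use of this channel model can be simulated directly:

```python
import numpy as np

def channel_use(x, H, sigma2, rng):
    """One use of y_v = sum_{v' != v} h_{v',v} x_{v'} + z_v.

    H[v_prime, v] holds the gain h_{v',v}; z_v is circularly symmetric
    complex Gaussian noise with variance sigma2[v].
    """
    V = len(x)
    z = np.sqrt(np.asarray(sigma2) / 2) * (
        rng.standard_normal(V) + 1j * rng.standard_normal(V))
    return H.T @ x + z

rng = np.random.default_rng(0)
V = 4                                            # hypothetical toy network size
H = rng.standard_normal((V, V)) + 1j * rng.standard_normal((V, V))
np.fill_diagonal(H, 0)                           # a node does not hear itself
x = np.exp(1j * rng.uniform(0, 2 * np.pi, V))    # unit average power inputs
y = channel_use(x, H, np.full(V, 0.5), rng)
```

Setting the noise variances to zero recovers the deterministic part $\sum_{v'} h_{v',v}x_{v'}$ of the model.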
\begin{theorem}\label{thm:gauss}
A set of DMCS $U_{{\mathcal A}}$ can reliably be transmitted over a Gaussian network, if for each ${\mathcal S}\subseteq{\mathcal V}$, we have:
\begin{equation}
\label{eq:gauss-in}
H(U_{{\mathcal S}}|U_{{\mathcal A}\backslash{\mathcal S}})<\min_{d_i\in{\mathcal D}}\min_{{\mathcal V}_{-d_i}\supseteq{\mathcal W}\supseteq{\mathcal S}}C_{wf}({\mathcal W}\rightarrow{\mathcal W}^C)-\kappa_{{\mathcal W}}
\end{equation}
where
\[
C_{wf}({\mathcal W}\rightarrow{\mathcal W}^C)=\max_{p(x_{{\mathcal W}}):\sum_{v\in{\mathcal W}}\mathbb{E}X^2_v=|{\mathcal W}|}I(X_{{\mathcal W}};Y_{{\mathcal W}^C}|X_{{\mathcal W}^C})
\]
and
\[
\kappa_{{\mathcal W}}=\min\{|{\mathcal W}|,|{\mathcal W}^C|\}\log(1+\dfrac{|{\mathcal W}|}{\min\{|{\mathcal W}|,|{\mathcal W}^C|\}})+V-1
\]
Moreover, $\kappa_{{\mathcal W}}$ is bounded above by $\frac{3}{2}V-1$.\\
Conversely, multicasting is feasible only if:
\begin{equation}
\label{eq:gauss-out}
H(U_{{\mathcal S}}|U_{{\mathcal A}\backslash{\mathcal S}})<\min_{d_i\in{\mathcal D}}\min_{{\mathcal V}_{-d_i}\supseteq{\mathcal W}\supseteq{\mathcal S}}C_{wf}({\mathcal W}\rightarrow{\mathcal W}^C)
\end{equation}
\end{theorem}
\begin{remark}
This theorem establishes that multicasting is feasible for all DMCS whose Slepian-Wolf region intersects the cut-set bound region within $\dfrac{3}{2}V-1$ bits.
\end{remark}
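For a concrete feel for the gap term, $\kappa_{{\mathcal W}}$ can be evaluated numerically. The sketch below (an illustration only, not part of the theorem) computes it and shows that a balanced cut $|{\mathcal W}|=V/2$ meets the $\frac{3}{2}V-1$ bound with equality:

```python
import math

def kappa(w, V):
    """kappa_W = min(|W|, |W^c|) * log2(1 + |W| / min(|W|, |W^c|)) + V - 1,
    for a cut with |W| = w out of V nodes."""
    n = min(w, V - w)
    return n * math.log2(1 + w / n) + V - 1

# Balanced cut: |W| = V/2, so n = V/2 and the log term contributes 1 bit
# per antenna, giving kappa = V/2 + V - 1 = 3/2 * V - 1.
print(kappa(3, 6))   # 3 * log2(2) + 5 = 8 = 3/2 * 6 - 1
```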
\begin{proof}
$C_{wf}({\mathcal W}\rightarrow{\mathcal W}^C)$ is the capacity of the ${\mathcal W}\times{\mathcal W}^C$ MIMO channel with input antennas $X_{{\mathcal W}}$ and output antennas $Y_{{\mathcal W}^C}$. Constraint \eqref{eq:gauss-out} is a direct result of Proposition \ref{pro:ob}, since there is an average power constraint equal to one at each node $v\in{\mathcal W}$. To show \eqref{eq:gauss-in}, we apply Theorem \ref{thm:sw} to the Gaussian network. Let $(X_v:v\in{\mathcal V})$ be jointly complex Gaussian random variables with covariance matrix $I_{V\times V}$. Let $\hat{Y}_v=Y_v+\hat{Z}_v$, where $\hat{Z}_v$ is a complex Gaussian random variable with variance equal to $\sigma_v^2$ (in other words, $\hat{Y}_{v}$ quantizes $Y_v$ at the noise level \cite{aves:sub}). Now consider,
\begin{align}
I(X_{{\mathcal W}};Y_{{\mathcal W}^C}|X_{{\mathcal W}^C})&=I(X_{{\mathcal W}};Y_{{\mathcal W}^C}\hat{Y}_{{\mathcal W}^C\backslash\{d_i\}}|X_{{\mathcal W}^C})\label{eq:g-1}\\
&=I(X_{{\mathcal W}};Y_{d_i}\hat{Y}_{{\mathcal W}^C\backslash\{d_i\}}|X_{{\mathcal W}^C})+I(X_{{\mathcal W}};Y_{{\mathcal W}^C\backslash\{d_i\}}|X_{{\mathcal W}^C}\hat{Y}_{{\mathcal W}^C\backslash\{d_i\}})\label{eq:g-2}
\end{align}
where \eqref{eq:g-1} follows since $X_{{\mathcal W}}-(X_{{\mathcal W}^C},Y_{{\mathcal W}^C})-\hat{Y}_{{\mathcal W}^C}$ forms a Markov chain. Next, consider
\begin{align}
I(X_{{\mathcal W}};Y_{{\mathcal W}^C\backslash\{d_i\}}|X_{{\mathcal W}^C}\hat{Y}_{{\mathcal W}^C\backslash\{d_i\}})&=I(X_{{\mathcal W}};\hat{Z}_{{\mathcal W}^C\backslash\{d_i\}}|X_{{\mathcal W}^C},Y_{{\mathcal W}^C\backslash\{d_i\}}+\hat{Z}_{{\mathcal W}^C\backslash\{d_i\}})\label{eq:g-3}\\
&\le h(\hat{Z}_{{\mathcal W}^C\backslash\{d_i\}})-h(\hat{Z}_{{\mathcal W}^C\backslash\{d_i\}}|Z_{{\mathcal W}^C\backslash\{d_i\}}+\hat{Z}_{{\mathcal W}^C\backslash\{d_i\}})\label{eq:g-4}\\
&= I(\hat{Z}_{{\mathcal W}^C\backslash\{d_i\}};Z_{{\mathcal W}^C\backslash\{d_i\}}+\hat{Z}_{{\mathcal W}^C\backslash\{d_i\}})\label{eq:g-5}\\
&= |{\mathcal W}^C|-1\label{eq:g-6}
\end{align}
where \eqref{eq:g-3} follows from the definition of $\hat{Y}_{v}$; \eqref{eq:g-4} follows from the fact that conditioning does not increase entropy, that conditioning on $(Y_{{\mathcal W}^C}+\hat{Z}_{{\mathcal W}^C},X_{{\mathcal V}})$ is equivalent to conditioning on $(Z_{{\mathcal W}^C}+\hat{Z}_{{\mathcal W}^C},X_{{\mathcal V}})$, and that $(Z_{{\mathcal W}^C},\hat{Z}_{{\mathcal W}^C})$ is independent of $X_{{\mathcal V}}$; and \eqref{eq:g-6} follows because $\{(Z_v,\hat{Z}_v):v\in{\mathcal W}^C\}$ are independent and $Z_v$ and $\hat{Z}_v$ are complex Gaussian random variables with the same variance.
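Step \eqref{eq:g-6} rests on the identity $I(\hat{Z};Z+\hat{Z})=1$ bit per node when $Z$ and $\hat{Z}$ are independent complex Gaussians of equal variance, since $h(Z+\hat{Z})-h(Z)=\log_2\frac{\sigma^2+\hat{\sigma}^2}{\sigma^2}$. A one-line numerical check (illustrative only):

```python
import math

def mi_quantization(sz, shz):
    """I(Zhat; Z + Zhat) for independent circularly symmetric complex
    Gaussians Z ~ CN(0, sz) and Zhat ~ CN(0, shz):
    h(Z + Zhat) - h(Z) = log2(pi e (sz + shz)) - log2(pi e sz)."""
    return math.log2((sz + shz) / sz)

print(mi_quantization(0.5, 0.5))   # equal variances -> exactly 1 bit
```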
In a similar way consider,
\begin{align}
I(Y_{{\mathcal W}};\hat{Y}_{{\mathcal W}}|X_{{\mathcal V}}Y_{d_i}\hat{Y}_{{\mathcal W}^C\backslash\{d_i\}})&=I(Z_{{\mathcal W}};Z_{{\mathcal W}}+\hat{Z}_{{\mathcal W}}|X_{{\mathcal V}},Z_{d_i},Z_{{\mathcal W}^C}+\hat{Z}_{{\mathcal W}^C})\nonumber
\\&=I(Z_{{\mathcal W}};Z_{{\mathcal W}}+\hat{Z}_{{\mathcal W}})\nonumber\\
&=|{\mathcal W}|\label{eq:g}
\end{align}
Next, we derive a slightly modified version of the Beam-Forming Lemma \cite[Appendix F]{aves:sub}. The water-filling capacity $C_{wf}({\mathcal W}\rightarrow{\mathcal W}^C)$ is given by
\[
C_{wf}({\mathcal W}\rightarrow{\mathcal W}^C)=\sum_{i=1}^n \log(1+Q_{ii}\lambda_i)
\]
where $n=\min(|{\mathcal W}|,|{\mathcal W}^C|)$, the $\lambda_i$'s are the singular values of the channel matrix of the MIMO channel, and the $Q_{ii}$'s are given by the water-filling solution satisfying $\sum_{i=1}^n Q_{ii}=|{\mathcal W}|$.
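The water-filling solution referenced here can be computed by a standard bisection on the water level; the following generic sketch (our own illustration, not tied to a specific channel matrix) returns the allocation $Q_{ii}=(\mu-1/\lambda_i)^+$ satisfying $\sum_i Q_{ii}=P$:

```python
import numpy as np

def water_filling(singular_values, P):
    """Return Q with Q_i = max(mu - 1/lambda_i, 0) and sum(Q) = P."""
    lam = np.asarray(singular_values, dtype=float)
    lo, hi = 0.0, P + (1.0 / lam).max()      # the water level mu lies in [lo, hi]
    for _ in range(200):                     # bisection on mu
        mu = 0.5 * (lo + hi)
        if np.maximum(mu - 1.0 / lam, 0.0).sum() > P:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.5 * (lo + hi) - 1.0 / lam, 0.0)

def wf_capacity(singular_values, P):
    """sum_i log2(1 + Q_ii * lambda_i) under the water-filling allocation."""
    lam = np.asarray(singular_values, dtype=float)
    return float(np.log2(1.0 + water_filling(lam, P) * lam).sum())
```

With equal singular values the allocation is uniform: `water_filling([1.0, 1.0], 2.0)` puts one unit of power on each mode, for a capacity of $2$ bits.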
Following \cite[Appendix F, Equations 140-143]{aves:sub}, we obtain,
\begin{align}\label{eq:g-7}
C_{wf}({\mathcal W}\rightarrow{\mathcal W}^C)-I(X_{{\mathcal W}};Y_{{\mathcal W}^C}|X_{{\mathcal W}^C})&\le n\log(1+\dfrac{|{\mathcal W}|}{n})\\
&\le n\log(\dfrac{V}{n})\label{eq:gg}
\end{align}
Finally, comparing \eqref{eq:g-2}, \eqref{eq:g-6}, \eqref{eq:g} and \eqref{eq:g-7} we get,
\begin{equation}
I(X_{{\mathcal W}};Y_{d_i}\hat{Y}_{{\mathcal W}^C\backslash\{d_i\}}|X_{{\mathcal W}^C})\ge C_{wf}({\mathcal W}\rightarrow{\mathcal W}^C)-\kappa_{{\mathcal W}}
\end{equation}
Substituting this in \eqref{eq:sw}, we conclude that constraint \eqref{eq:gauss-in} is a sufficient condition. Now note that $n\in [1,\frac{V}{2}]$. Define $f(x)=x\log(\dfrac{V}{x})$ on $[1,\frac{V}{2}]$; $f$ attains its maximum over this interval at the endpoint $\frac{V}{2}$, hence the RHS of \eqref{eq:gg} is at most $\dfrac{V}{2}$, which yields $\kappa_{{\mathcal W}}\le\frac{3}{2}V-1$.
\end{proof}
\section{Achievable rate region for cooperative relay networks}\label{sec:8}
Let ${\mathcal A}={\mathcal V}$ and suppose that the sources $(U_v:v\in{\mathcal V})$ are statistically independent and uniformly distributed over the sets ${\mathcal M}_v=\{1,2,\cdots,2^{R_v}\}$; thus $H(U_v)=R_v$. Substituting these values in Theorem \ref{thm:sw}, we obtain an achievable rate region, based on CF, for \emph{cooperative relay networks} with multicast demands.
\begin{theorem}
\label{thm:ach}
A $V$-tuple $(R_1,R_2,\cdots,R_V)$ is contained in the achievable rate region of a cooperative network with multicast demands at each node $d_i\in{\mathcal D}$, if for each ${\mathcal S}\subseteq{\mathcal V}$ the following constraint holds:
\begin{equation}
\label{eq:ach}
R_{{\mathcal S}}< \min_{d_i\in{\mathcal D}\backslash{\mathcal S}}\min_{{\mathcal V}\supseteq{\mathcal W}\supseteq{\mathcal S}: \atop d_i\in{\mathcal W}^C} \big[I(X_{{\mathcal W}};Y_{d_i}\hat{Y}_{{\mathcal W}^C\backslash\{d_i\}}|X_{{\mathcal W}^C}Q)-I(Y_{{\mathcal W}};\hat{Y}_{{\mathcal W}}|X_{{\mathcal V}}Y_{d_i}\hat{Y}_{{\mathcal W}^C\backslash\{d_i\}}Q)\big]^+
\end{equation}
where $[x]^+=\max\{x,0\}$ and the joint p.m.f. of $(q,x_{{\mathcal V}},y_{{\mathcal V}},\hat{y}_{{\mathcal V}})$ factors as $p(q)\big[\prod_{v\in{\mathcal V}}p(x_v|q)p(\hat{y}_v|x_v,y_v,q)\big]p(y_{{\mathcal V}}|x_{{\mathcal V}})$.
\end{theorem}
\begin{proof}
Let ${\mathcal T}$ be the largest subset of ${\mathcal V}$ such that the RHS of \eqref{eq:sw} is non-negative for each ${\mathcal S}\subseteq{\mathcal T}$ (note that if two subsets ${\mathcal T}_1,{\mathcal T}_2$ have this property, then ${\mathcal T}_1\cup{\mathcal T}_2$ also has it; hence ${\mathcal T}$ is unique). Substituting $R_{{\mathcal S}}=H(U_{{\mathcal S}}|U_{{\mathcal S}^C})$ in Theorem \ref{thm:sw} yields that $U_{{\mathcal T}}$ can reliably be multicast if \eqref{eq:ach} holds. Hence $(R_1,\cdots,R_V)$ is achievable (note that $R_v=0$ for each node $v\in{\mathcal T}^C$).
\end{proof}
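The selection of the largest feasible subset ${\mathcal T}$ in the proof above is effectively constructive: since the family of subsets with a non-negative RHS is closed under union, the maximum is unique and can be found by brute force over subsets. A toy sketch (with a hypothetical gap function standing in for the RHS of \eqref{eq:sw}):

```python
from itertools import combinations, chain

def subsets(T):
    """All nonempty subsets of the tuple T."""
    return chain.from_iterable(combinations(T, k) for k in range(1, len(T) + 1))

def largest_feasible(nodes, gap):
    """Largest T with gap(S) >= 0 for every nonempty S contained in T.

    Union-closure (if T1 and T2 qualify then so does T1 | T2, as argued
    in the proof) guarantees this maximum is unique.
    """
    for size in range(len(nodes), -1, -1):
        for T in combinations(nodes, size):
            if all(gap(frozenset(S)) >= 0 for S in subsets(T)):
                return frozenset(T)

# hypothetical gap function: any cut involving node 3 is infeasible
gap = lambda S: -1.0 if 3 in S else 1.0
print(sorted(largest_feasible([1, 2, 3], gap)))   # [1, 2]
```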
\begin{corollary}
\label{cor:rel-1}
Consider a relay network with node $1$ as a transmitter which has no channel output, i.e., $Y_1=\emptyset$, with $V-2$ relay nodes $\{2,\cdots,V-1\}$, and with node $V$ as a destination which has no channel input, i.e., $X_V=\emptyset$. Substituting $R_2=\cdots=R_{V}=0$ in Theorem \ref{thm:ach} gives the following achievable rate ($R_{CF}$) for the relay network.
\begin{equation}
\label{eq:ach:rel}
R_{CF}=\min_{{\mathcal S}\subseteq{\mathcal V}:\atop 1\in{\mathcal S},V\in{\mathcal S}^C}\big[I(X_{{\mathcal S}};\hat{Y}_{{\mathcal S}^C\backslash\{V\}}Y_V|X_{{\mathcal S}^C}Q)-I(Y_{{\mathcal S}};\hat{Y}_{{\mathcal S}}|X_{{\mathcal V}}Y_V\hat{Y}_{{\mathcal S}^C\backslash\{V\}}Q)\big]^+
\end{equation}
\end{corollary}
\begin{remark}
For the single relay channel, the achievable rate reduces to the CF rate with time-sharing given in \cite{elgamal}.
\end{remark}
\begin{remark}
In \cite{yassaee}, we obtained an achievable rate based on CF which subsumes the CF rate given in \cite{kramer2005} when the partial decoding part of the CF strategy is relaxed. The CF rate in \cite[Theorem 3]{yassaee} is given by:
\begin{equation}
R^*_{CF}=I(X_1;Y_V\hat{Y}_{{\mathcal V}_{-V}}|X_{{\mathcal V}_{-V}})
\end{equation}
\emph{subject to the constraints}
\begin{equation}
\label{eq:cf-cons}
\forall{{\mathcal S}\subseteq{\mathcal V}\backslash\{1,V\}}: I(Y_{{\mathcal S}};\hat{Y}_{{\mathcal S}}|X_{{\mathcal V}_{-1}}Y_V\hat{Y}_{{\mathcal S}^C\backslash\{V\}})\le I(X_{{\mathcal S}};Y_V\hat{Y}_{{\mathcal S}^C\backslash\{V\}}|X_{{\mathcal S}^C\backslash\{V\}})
\end{equation}
Now let $Q=\emptyset$ in Corollary \ref{cor:rel-1}. It can easily be shown that when the constraints \eqref{eq:cf-cons} hold, ${\mathcal S}={\mathcal V}$ attains the minimum of the RHS of \eqref{eq:ach:rel}. Therefore, the rate of Corollary \ref{cor:rel-1} subsumes the CF rate given in \cite[Theorem 3]{yassaee}.
\end{remark}
\begin{corollary}
Consider a two-way relay network with nodes $1$ and $V$ as the two transmitters, each demanding the message of the other, and $V-2$ relay nodes $\{2,\cdots,V-1\}$. Substituting $R_2=\cdots=R_{V-1}=0$ and $\hat{Y}_1=\hat{Y}_V=\emptyset$ in Theorem \ref{thm:ach} gives the following achievable rate region for the two-way relay network.
\begin{equation}
k=1,V:\ R_{k}=\min_{{\mathcal S}\subseteq{\mathcal V}:\atop k\in{\mathcal S},\bar{k}\in{\mathcal S}^C}\big[I(X_{{\mathcal S}};\hat{Y}_{{\mathcal S}^C\backslash\{\bar{k}\}}Y_{\bar{k}}|X_{{\mathcal S}^C})-I(Y_{{\mathcal S}\backslash\{k\}};\hat{Y}_{{\mathcal S}\backslash\{k\}}|X_{{\mathcal V}}Y_{\bar{k}}\hat{Y}_{{\mathcal S}^C\backslash\{\bar{k}\}})\big]^+
\end{equation}
\noindent where $\bar{1}=V$ and $\bar{V}=1$.
\end{corollary}
\begin{remark}
Propositions \ref{pro:semi}-\ref{pro:state} are generalizations of several recent works on deterministic relay networks including \cite[Theorem 3.9]{aref}, \cite[Theorem 4.2]{aves:sub}, \cite[Theorem 4.4]{aves:sub}, \cite[Theorem 1]{yhk}, \cite[Theorem 1]{multicast} and \cite[Theorem 1]{yhk:isit09}.
\end{remark}\par
Next, consider the Gaussian cooperative network. Applying Theorem \ref{thm:gauss} to $U_{{\mathcal V}}$, we conclude the following corollary which shows that the cut-set bound region is achievable within a constant number of bits.
\begin{corollary}\label{cor:gauss}
A $V$-tuple $(R_1,R_2,\cdots,R_V)$ is contained in the achievable rate region of a Gaussian cooperative network with multicast demands at each node $d_i\in{\mathcal D}$, if for each ${\mathcal S}\subseteq{\mathcal V}$ the following constraint holds:
\begin{equation}
\label{eq:rel-gauss-in}
R_{{\mathcal S}}<\min_{d_i\in{\mathcal D}}\min_{{\mathcal V}_{-d_i}\supseteq{\mathcal W}\supseteq{\mathcal S}}C_{wf}({\mathcal W}\rightarrow{\mathcal W}^C)-\kappa_{{\mathcal W}}
\end{equation}
where $C_{wf}$ and $\kappa_{{\mathcal W}}$ are as defined in Theorem \ref{thm:gauss}.
\end{corollary}
\begin{remark}
In \cite[Theorem 4.6]{aves:sub}, the authors showed that, by quantizing at the noise level, a Gaussian relay network achieves the cut-set bound within $14V$ bits. Corollary \ref{cor:gauss}, however, implies that quantization at the noise level achieves the cut-set bound within $\dfrac{3}{2} V-1$ bits; thus we have tightened the gap between the achievable rate and the cut-set bound. A similar result holds for the two-way Gaussian relay network.
\end{remark}
\section{Conclusions}\label{sec:9}
We derived sufficient and necessary conditions for reliable multicasting of DMCS over cooperative networks. The necessary conditions were based on a cut-set type outer bound for the relay network. The sufficient conditions were based on joint source-channel coding, the compress-and-forward strategy for the relay network, and an identity related to the sub-modularity of the entropy function. We showed that the sufficient conditions are also necessary for some classes of deterministic networks, including Aref networks and linear finite-field deterministic networks. We also proved that multicasting of DMCS whose Slepian-Wolf region intersects the cut-set outer bound within a constant number of bits is feasible. In particular, we specialized all results of the paper to obtain achievable rate regions for multiple-message multicast over cooperative relay networks. We showed that this achievable rate region subsumes some recent achievable rates (regions) for relay networks.
\appendices
\section{Proof of Lemma \ref{le:cover}}\label{app:1}
We prove this lemma by contradiction. Let $d$ be the dimension of $\mathbf{P}$. Suppose ${\mathcal F}$ is not a closed covering of $\mathbf{P}$; then there exists a point $A$ inside $\mathbf{P}$ which is not covered by ${\mathcal F}$ (note that by assumption 2, the points lying on the boundary of $\mathbf{P}$ are covered). Let $B$ be the point of $\cup_{i=1}^n\mathbf{P}_i$ closest to $A$. Clearly, $B$ must lie on a facet of at least one of the polytopes $(\mathbf{P}_i:1\le i\le n)$; denote this facet by $\mathbf{F}_{\mathbf{P}_j}$. Two situations arise:
\begin{enumerate}
\item $\mathbf{F}_{\mathbf{P}_j}$ lies inside $\mathbf{P}$. By assumption 3, there exists $k\neq j$ such that $\mathbf{P}_j\cap\mathbf{P}_k=\mathbf{F}_{\mathbf{P}_j}$. Let $\mathbf{S}(B,\epsilon)$ be a $d$-\emph{dimensional} sphere with center $B$ and radius $\epsilon$ small enough that $\mathbf{S}(B,\epsilon)$ is contained in $\mathbf{P}_j\cup\mathbf{P}_k$. Then the segment $AB$ intersects $\mathbf{S}(B,\epsilon)$ at a point $C$ which belongs to one of $\mathbf{P}_j$ or $\mathbf{P}_k$. Now $C$ is closer to $A$ than $B$ and lies in $\cup_{i=1}^n\mathbf{P}_i$, a contradiction, which proves the lemma in this case.
\item $\mathbf{F}_{\mathbf{P}_j}$ lies on the boundary of $\mathbf{P}$. Let $\mathbf{S}(B,\epsilon)$ be a sphere with center $B$ and radius $\epsilon$ small enough that $\mathbf{S}(B,\epsilon)$ intersects only $\mathbf{P}_j$. Since $A$ lies inside $\mathbf{P}$, the segment $AB$ intersects $\mathbf{S}(B,\epsilon)$ at a point $C$ inside $\mathbf{P}$. By assumption, $C$ belongs to $\mathbf{P}_j$, which again yields a contradiction and proves the lemma.
\end{enumerate}
\section{Proof of Lemma \ref{le:facet}}\label{app:2}
Denote the RHS of \eqref{eq:app} by $\mathbf{F}^*_{f,{\mathcal T}}$. First, we prove that $\mathbf{F}^*_{f,{\mathcal T}}\subseteq\mathbf{F}_{f,{\mathcal T}}$. Suppose $\mathbf{x}$ belongs to $\mathbf{F}^*_{f,{\mathcal T}}$. Now for each ${\mathcal U}\subseteq{\mathcal V}$, we have:
\begin{align}
x_{{\mathcal U}}&=x_{{\mathcal U}\cap{\mathcal T}}+x_{{\mathcal U}\cap{\mathcal T}^C}\label{eq:app21}\\
&\ge f({\mathcal U}\cap{\mathcal T}|({\mathcal U}\cap{\mathcal T})^C)+f({\mathcal U}\cap{\mathcal T}^C|{\mathcal T}^C\cap{\mathcal U}^C)\label{eq:app22}\\
&=f({\mathcal V})-f({\mathcal U}^C\cup{\mathcal T}^C)+f({\mathcal T}^C)-f({\mathcal U}^C\cap{\mathcal T}^C)\label{eq:app23}\\
&\ge f({\mathcal V})-f({\mathcal U}^C)\label{eq:app24}\\
&=f({\mathcal U}|{\mathcal U}^C)\label{eq:app25}
\end{align}
where \eqref{eq:app22} follows from the definition of $\mathbf{F}^*_{f,{\mathcal T}}$ and \eqref{eq:app24} follows since $f$ is a sub-modular function. Now, \eqref{eq:app25} yields $\mathbf{x}\in\mathbf{F}_{f,{\mathcal T}}$; hence $\mathbf{F}^*_{f,{\mathcal T}}\subseteq\mathbf{F}_{f,{\mathcal T}}$. Conversely, assume $\mathbf{x}\in\mathbf{F}_{f,{\mathcal T}}$. Note that by definition, $\mathbf{x}_{{\mathcal T}}\in\mathbf{F}^{(1)}_{f,{\mathcal T}}$. For each ${\mathcal S}\subseteq{\mathcal T}^C$, consider:
\begin{align}
x_{{\mathcal S}}&=x_{{\mathcal T}\cup{\mathcal S}}-x_{{\mathcal T}}\\
&=x_{{\mathcal T}\cup{\mathcal S}}-f({\mathcal T}|{\mathcal T}^C)\label{eq:app26}\\
&\ge f({\mathcal T}\cup{\mathcal S}|{\mathcal T}^C\cap{\mathcal S}^C)-f({\mathcal T}|{\mathcal T}^C)\\
&=f({\mathcal T}^C)-f({\mathcal T}^C\cap{\mathcal S}^C)\\
&=f({\mathcal S}|{\mathcal T}^C\backslash{\mathcal S})\label{eq:app27}
\end{align}
where \eqref{eq:app26} follows, because $\mathbf{x}$ lies on the hyperplane $x_{{\mathcal T}}=f({\mathcal T}|{\mathcal T}^C)$. Now, \eqref{eq:app27} implies that $\mathbf{x}_{{\mathcal T}^C}\in\mathbf{F}^{(2)}_{f,{\mathcal T}}$ which results in $\mathbf{F}_{f,{\mathcal T}}\subseteq\mathbf{F}^*_{f,{\mathcal T}}$. Thus $\mathbf{F}^*_{f,{\mathcal T}}=\mathbf{F}_{f,{\mathcal T}}$. \par
Next, we show that $\mathbf{F}_{f,{\mathcal T}}^{(1)}=\mathbf{P}_{f_1}$ and $\mathbf{F}_{f,{\mathcal T}}^{(2)}=\mathbf{P}_{f_2}$. First observe that since $f$ is sub-modular, $f_1$ and $f_2$ are sub-modular functions. Hence $\mathbf{P}_{f_1}$ and $\mathbf{P}_{f_2}$ are well defined. Moreover, note that
\begin{align}
\forall{\mathcal S}\subseteq{\mathcal T}:
f_1({\mathcal S}|{\mathcal T}\backslash{\mathcal S}) &= f_1({\mathcal S}\cup{\mathcal T})-f_1({\mathcal T}\backslash{\mathcal S})\nonumber\\
&= f({\mathcal S}\cup{\mathcal T}|{\mathcal T}^C)-f({\mathcal T}\backslash{\mathcal S}|{\mathcal T}^C)\nonumber\\
&= f({\mathcal V})-f([{\mathcal T}\backslash{\mathcal S}]\cup{\mathcal T}^C)\nonumber\\
&= f({\mathcal V})-f({\mathcal S}^C)\nonumber\\
&= f({\mathcal S}|{\mathcal S}^C)\label{eq:eq}
\end{align}
Comparing \eqref{eq:eq} and \eqref{eq:pface1} with Definition \ref{def:associate}, we conclude that $\mathbf{F}_{f,{\mathcal T}}^{(1)}$ is the essential polytope of $f_1$ with dimension $|{\mathcal T}|-1$. Likewise, we can show that $\mathbf{F}_{f,{\mathcal T}}^{(2)}$ is the essential polytope of $f_2$ with dimension $|{\mathcal T}^C|-1$. This completes the proof.
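The inequality used in \eqref{eq:app24}, $f({\mathcal U}^C\cup{\mathcal T}^C)+f({\mathcal U}^C\cap{\mathcal T}^C)\le f({\mathcal U}^C)+f({\mathcal T}^C)$, holds for any sub-modular $f$, and can be sanity-checked exhaustively on a small example. The sketch below uses a coverage function, a standard sub-modular function chosen purely for illustration (not from the paper):

```python
from itertools import combinations

# Coverage function f(S) = |union of the ground sets indexed by S|; such
# functions are sub-modular: f(A | B) + f(A & B) <= f(A) + f(B).
members = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c", "d"}, 4: {"a", "d"}}

def f(S):
    return len(set().union(*(members[i] for i in S))) if S else 0

ground = frozenset(members)
all_subsets = [frozenset(S) for k in range(len(ground) + 1)
               for S in combinations(ground, k)]

# Count violations of sub-modularity over every pair of subsets.
violations = sum(1 for A in all_subsets for B in all_subsets
                 if f(A | B) + f(A & B) > f(A) + f(B))
print(violations)   # 0: sub-modularity holds for every pair
```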
\section{Formal Proof of Equation \eqref{eq:cl3}}\label{app:3}
By Lemma \ref{le:facet}, it suffices to prove the following identities:
\begin{align}
{\mathcal S}\subseteq{\mathcal T}^C:\quad & h_{\mathbf{C}}({\mathcal S}|{\mathcal T}^C\backslash{\mathcal S})=h_{\mathbf{C}^*}({\mathcal S}|{\mathcal S}^C)\\
{\mathcal S}\subseteq{\mathcal T}:\quad & h_{\mathbf{C}}({\mathcal S}|{\mathcal S}^C)=h_{\mathbf{C}^*}({\mathcal S}|{\mathcal T}\backslash{\mathcal S})
\end{align}
We prove the first identity. Proof of the second identity is similar. For each ${\mathcal S}\subseteq{\mathcal T}^C$ consider,
\begin{align}
h_{\mathbf{C}}({\mathcal S}|{\mathcal T}^C\backslash{\mathcal S})&=h_{\mathbf{C}}({\mathcal T}^C)-h_{\mathbf{C}}({\mathcal T}^C\cap{\mathcal S}^C)\\
&=\sum_{k=1}^{K+1}H(X_{{\mathcal T}^C\cap{\mathcal L}_k}Y_{{\mathcal T}^C\cap{\mathcal L}_{k-1}}|X_{{\mathcal L}^k}Y_{{\mathcal L}^{k-1}})-H(X_{{\mathcal T}^C\cap{\mathcal S}^C\cap{\mathcal L}_k}Y_{{\mathcal T}^C\cap{\mathcal S}^C\cap{\mathcal L}_{k-1}}|X_{{\mathcal L}^k}Y_{{\mathcal L}^{k-1}})\\
&=\sum_{k=1}^{K+1}H(X_{{\mathcal S}\cap{\mathcal L}_k}Y_{{\mathcal S}\cap{\mathcal L}_{k-1}}|X_{{\mathcal T}^C\cap{\mathcal S}^C\cap{\mathcal L}_k}Y_{{\mathcal T}^C\cap{\mathcal S}^C\cap{\mathcal L}_{k-1}}X_{{\mathcal L}^k}Y_{{\mathcal L}^{k-1}})\label{eq:app:c1}
\end{align}
Note that ${\mathcal L}^{*k}={\mathcal L}^{k-1}\cup({\mathcal T}^C\cap{\mathcal L}_{k-1})$. Moreover, for each ${\mathcal S}\subseteq{\mathcal T}^C$, simple calculations yield:
\begin{align}
{\mathcal S}\cap{\mathcal L}_k^*&={\mathcal S}\cap\left[({\mathcal T}\cap{\mathcal L}_{k-1})\cup({\mathcal T}^C\cap{\mathcal L}_k)\right]={\mathcal S}\cap{\mathcal L}_k\nonumber\\
{\mathcal S}^C\cap{\mathcal L}_k^*&=\left[{\mathcal T}\cap{\mathcal L}_{k-1}\right]\cup\left[{\mathcal S}^C\cap{\mathcal T}^C\cap{\mathcal L}_k\right]\nonumber\\
{\mathcal L}^{*k}\cup({\mathcal S}^C\cap{\mathcal L}_k^*)&={\mathcal L}^k\cup\left[{\mathcal S}^C\cap{\mathcal T}^C\cap{\mathcal L}_k\right]\label{eq:app:c2}
\end{align}
Substituting \eqref{eq:app:c2} in \eqref{eq:app:c1} gives:
\begin{align}
h_{\mathbf{C}}({\mathcal S}|{\mathcal T}^C\backslash{\mathcal S})&=\sum_{k=1}^{K+1}H(X_{{\mathcal S}\cap{\mathcal L}_k^*}Y_{{\mathcal S}\cap{\mathcal L}_{k-1}^*}|X_{{\mathcal S}^C\cap{\mathcal L}_k^*}Y_{{\mathcal S}^C\cap{\mathcal L}_{k-1}^*}X_{{\mathcal L}^{*k}}Y_{{\mathcal L}^{*k-1}}Z )\\
&=\sum_{k=1}^{K+2}H(X_{{\mathcal S}\cap{\mathcal L}_k^*}Y_{{\mathcal S}\cap{\mathcal L}_{k-1}^*}|X_{{\mathcal S}^C\cap{\mathcal L}_k^*}Y_{{\mathcal S}^C\cap{\mathcal L}_{k-1}^*}X_{{\mathcal L}^{*k}}Y_{{\mathcal L}^{*k-1}}Z)\\
&=h_{\mathbf{C}^*}({\mathcal S}|{\mathcal S}^C)
\end{align}
where in the last step, we have used the fact that ${\mathcal S}\cap{\mathcal L}_{K+1}^*={\mathcal S}\cap{\mathcal L}_{K+1}=\emptyset$. This completes the proof.$\square$
\section{Equivalence of Constraints \eqref{eq:adi} and \eqref{eq:adi-adi}}\label{app:simplify}
It is sufficient to show that the RHS of \eqref{eq:adi1} and \eqref{eq:adi-adi1} are equal. Substituting $R_v=H(X_v)+H(\hat{Y}_v|X_vY_v)$ in the RHS of \eqref{eq:adi1} gives,
\begin{align}
R_{{\mathcal W}}^{(d_i)}-H(\hat{Y}_{{\mathcal W}}X_{{\mathcal W}}|X_{{\mathcal W}^C}\hat{Y}_{{\mathcal W}^C\backslash\{d_i\}}Y_{d_i})&=H(X_{{\mathcal W}})+H(\hat{Y}_{{\mathcal W}}|X_{{\mathcal W}}Y_{{\mathcal W}})-H(\hat{Y}_{{\mathcal W}}X_{{\mathcal W}}|X_{{\mathcal W}^C}\hat{Y}_{{\mathcal W}^C\backslash\{d_i\}}Y_{d_i})\label{eq:sal:7}\\
&=I(X_{{\mathcal W}};\hat{Y}_{{\mathcal W}^C\backslash\{d_i\}}Y_{d_i}|X_{{\mathcal W}^C})+H(\hat{Y}_{{\mathcal W}}|X_{{\mathcal W}}Y_{{\mathcal W}})\nonumber\\&\qquad\qquad -H(\hat{Y}_{{\mathcal W}}|X_{{\mathcal V}}\hat{Y}_{{\mathcal W}^C\backslash\{d_i\}}Y_{d_i})\nonumber\\
&=I(X_{{\mathcal W}};\hat{Y}_{{\mathcal W}^C\backslash\{d_i\}}Y_{d_i}|X_{{\mathcal W}^C})-I(Y_{{\mathcal W}};\hat{Y}_{{\mathcal W}}|X_{{\mathcal V}}\hat{Y}_{{\mathcal W}^C\backslash\{d_i\}}Y_{d_i})\label{eq:sal:8}
\end{align}
where \eqref{eq:sal:7} follows from the fact that $d_i\notin{\mathcal W}$, that the $X_t$'s are independent, and that $\hat{Y}_t$ given $(X_t,Y_t)$ is independent of all other random variables, and \eqref{eq:sal:8} follows since $(X_{{\mathcal W}^C}\hat{Y}_{{\mathcal W}^C}Y_{d_i})-(X_{{\mathcal W}},Y_{{\mathcal W}})-\hat{Y}_{{\mathcal W}}$ forms a Markov chain. Substituting \eqref{eq:sal:8} in \eqref{eq:adi1} shows that \eqref{eq:adi1} and \eqref{eq:adi-adi1} are equal. Also, using \eqref{eq:sal:8} with ${\mathcal W}={\mathcal S}$ shows that \eqref{eq:adi2} and \eqref{eq:adi-adi2} are equal.
\section{Proof of Lemma \ref{le:9}}\label{app:5}
According to the codebook generation and the definition of ${\mathcal N}_{{\mathcal S},{\mathcal Z}}(\mathbf{u}_{{\mathcal A}[b-\ell-V+2]},\cdots,\mathbf{u}_{{\mathcal A}[b-V+1]},\mathbf{s}_b)$ in \eqref{sal:def-mn}, $(\mathbf{X}_t(s_t):t\in{\mathcal S}\cap{\mathcal L}_k)$ and $(\mathbf{X}_{{\mathcal V}}(s_{{\mathcal V},[b-k+1]}))$ are drawn independently and uniformly from the sets $\mathit{T}_{\epsilon''}^n(X_t)$ and $\mathit{T}_{\epsilon}^n(X_{{\mathcal V}})$. Also, given $\mathbf{X}_{t',[b-k+1]}$ ($t'\in{\mathcal Z}\cap{\mathcal L}_{k-1}$), $\mathbf{\hat{Y}}_{t'}(z_{t'}|\mathbf{X}_{t',[b-k+1]})$ is drawn uniformly from the set $\mathit{T}_{\epsilon'}^n(\hat{Y}_{t'}|\mathbf{X}_{t',[b-k+1]})$ and is independent of the other random variables. Hence the joint p.m.f. of\\ $(\mathbf{x}_{{\mathcal S}\cap{\mathcal L}_k}(s_{{\mathcal S}\cap{\mathcal L}_k}),\mathbf{x}_{{\mathcal V}}(s_{{\mathcal V},[b-k+1]}),\mathbf{\hat{y}}_{{\mathcal Z}\cap{\mathcal L}_{k-1}}(z_{{\mathcal Z}\cap{\mathcal L}_{k-1}}),\mathbf{\hat{y}}_{{\mathcal L}^{k-1}\cup({\mathcal L}_{k-1}\backslash{\mathcal Z}),[b-k+1]},\mathbf{y}_{d_i,[b-k+1]})$ factors as
\begin{equation}\label{sal:p}
\mathbb{P}[\mathbf{x}_{{\mathcal V}}(s_{{\mathcal V},[b-k+1]}),\mathbf{\hat{y}}_{{\mathcal L}^{k-1}\cup({\mathcal L}_{k-1}\backslash{\mathcal Z}),[b-k+1]},\mathbf{y}_{d_i,[b-k+1]}]\prod_{t\in{\mathcal S}\cap{\mathcal L}_k}P_{\mathbf{X}_t}(\mathbf{x}_{t}(s_t))\prod_{t'\in{\mathcal Z}\cap{\mathcal L}_{k-1}}P_{\mathbf{\hat{Y}}_t|\mathbf{X}_t}(\mathbf{\hat{y}}_{t'}(z_{t'}|\mathbf{x}_{t',[b-k+1]})),
\end{equation}
where $P_{\mathbf{X}_t}$ and $P_{\mathbf{\hat{Y}}_t|\mathbf{X}_t}$ are uniform distributions on the sets $\mathit{T}_{\epsilon''}^n(X_t)$ and $\mathit{T}_{\epsilon'}^n(\hat{Y}_t|X_t)$, respectively. Now, we upper bound $\mathbb{P}[{\mathcal E}_3(b,k,\mathbf{s})]$ for each $\mathbf{s}\in {\mathcal N}_{{\mathcal S},{\mathcal Z}}(\mathbf{u}_{{\mathcal A}[b-\ell-V+2]},\cdots,\mathbf{u}_{{\mathcal A}[b-V+1]},\mathbf{s}_b)$ as follows,
\begin{align}
\mathbb{P}[{\mathcal E}_3(b,k,\mathbf{s})]&=\sum_{\mathpalette\mathrlapinternal{\big(\mathbf{x}_{{\mathcal L}_k\backslash{\mathcal S}}(s_{{\mathcal L}_k\backslash{\mathcal S},[b-k+1]}),\mathbf{\hat{y}}_{{\mathcal L}_{k-1}\backslash{\mathcal Z}}(z_{{\mathcal L}_{k-1}\backslash{\mathcal Z},[b-k]}),\mathbf{x}_{{\mathcal L}^k,[b-k+1]},\mathbf{\hat{y}}_{{\mathcal L}^{k-1},[b-k+1]},\mathbf{x}_{d_i},\mathbf{y}_{d_i}\big)\in\mathit{T}_{\epsilon}^n}}
\mathbb{P}[\mathbf{x}_{{\mathcal L}_k\backslash{\mathcal S}}(s_{{\mathcal L}_k\backslash{\mathcal S},[b-k+1]}),\mathbf{\hat{y}}_{{\mathcal L}_{k-1}\backslash{\mathcal Z}}(z_{{\mathcal L}_{k-1}\backslash{\mathcal Z},[b-k]}),\mathbf{\hat{y}}_{{\mathcal L}^{k-1}[b-k+1]},\mathbf{x}_{d_i,[b-k+1]},\mathbf{y}_{d_i,[b-k+1]}]\nonumber\\
&\qquad\quad\sum_{\mathpalette\mathclapinternal{\mathbf{x}_{{\mathcal S}\cap{\mathcal L}_k}(s_{{\mathcal S}\cap{\mathcal L}_k}),\mathbf{\hat{y}}_{{\mathcal Z}\cap{\mathcal L}_{k-1}}(z_{{\mathcal Z}\cap{\mathcal L}_{k-1}})\in\atop\mathit{T}_{\epsilon}^n(X_{{\mathcal S}\cap{\mathcal L}_k}\hat{Y}_{{\mathcal Z}\cap{\mathcal L}_{k-1}}|\mathbf{x}_{{\mathcal L}_k\backslash{\mathcal S},[b-k+1]},\mathbf{\hat{y}}_{{\mathcal L}_{k-1}\backslash{\mathcal Z},[b-k]},\mathbf{\hat{y}}_{{\mathcal L}^{k-1}[b-k+1]},\mathbf{y}_{d_i,[b-k+1]})}}\qquad\qquad\qquad\prod_{t\in{\mathcal S}\cap{\mathcal L}_k}P_{\mathbf{X}_t}(\mathbf{x}_{t}(s_t))\prod_{t'\in{\mathcal Z}\cap{\mathcal L}_{k-1}}P_{\mathbf{\hat{Y}}_t|\mathbf{X}_t}(\mathbf{\hat{y}}_{t'}(z_{t'}|\mathbf{x}_{t',[b-k+1]}))\label{sal:p1}\\
&=\mathbb{P}[(\mathbf{X}_{{\mathcal L}_k\backslash{\mathcal S}}(s_{{\mathcal L}_k\backslash{\mathcal S},[b-k+1]}),\mathbf{\hat{Y}}_{{\mathcal L}_{k-1}\backslash{\mathcal Z}}(z_{{\mathcal L}_{k-1}\backslash{\mathcal Z},[b-k]}),\mathbf{\hat{Y}}_{{\mathcal L}^{k-1}[b-k+1]},\mathbf{X}_{d_i,[b-k+1]},\mathbf{Y}_{d_i,[b-k+1]})\in\mathit{T}_{\epsilon}^n]\nonumber\\&
\qquad\dfrac{|\mathit{T}_{\epsilon}^n(X_{{\mathcal S}\cap{\mathcal Z}}\hat{Y}_{{\mathcal Z}\cap{\mathcal L}_{k-1}}|X_{{\mathcal L}_k\backslash{\mathcal S}}\hat{Y}_{{\mathcal L}_{k-1}\backslash{\mathcal Z}}X_{{\mathcal L}^{k}}\hat{Y}_{{\mathcal L}^{k-1}}X_{d_i}Y_{d_i})|}{\prod_{t\in{\mathcal S}\cap{\mathcal L}_k}|\mathit{T}_{\epsilon''}^n(X_t)|\prod_{t'\in{\mathcal Z}\cap{\mathcal L}_{k-1}}|\mathit{T}_{\epsilon'}^n(\hat{Y}_{t'}|X_{t'})|}\label{sal:p2}\\
&\stackrel{.}{\le} 2^{-n\beta_{{\mathcal S},{\mathcal Z}}(k)}\label{sal:p3}
\end{align}
where \eqref{sal:p1} follows from \eqref{sal:p}, \eqref{sal:p2} follows from the definition of $P_{\mathbf{X}_t}$ and $P_{\mathbf{\hat{Y}}_t|\mathbf{X}_t}$, and \eqref{sal:p3} is a result of the properties of jointly typical sequences.
\section{Proof of Lemma \ref{le:7}}\label{app:6}
According to the definition of ${\mathcal N}_{{\mathcal S},{\mathcal Z}}(\mathbf{u}_{{\mathcal A}[b-\ell-V+2]},\cdots,\mathbf{u}_{{\mathcal A}[b-V+1]},\mathbf{s}_b)$ in \eqref{sal:def-mn}, for $\mathbf{s}=(w_{{\mathcal L}_1},z_{{\mathcal L}_1},\cdots,w_{{\mathcal L}_{\ell}},z_{{\mathcal L}_{\ell}})$, each $(z_v:v\in{\mathcal Z})$ takes $2^{n(I(\hat{Y}_v;Y_v|X_v)+\delta)}-1$ different values and each $(z_v:v\in{\mathcal V}_{-d_i}\backslash{\mathcal Z})$ takes a fixed value; thus $z_{{\mathcal V}_{-d_i}}$ takes fewer than $2^{n(\sum_{t\in{\mathcal Z}}I(Y_t;\hat{Y}_t|X_t))}$ different values. Also, according to the definition, for each $k\in[1,\ell]$, $w_{{\mathcal L}_k\backslash{\mathcal S}}$ takes the fixed value $w_{{\mathcal L}_k\backslash{\mathcal S},[b-k+1]}$ and $w_{{\mathcal S}\cap{\mathcal L}_k}$ must satisfy the following relation:
\[
\mathbf{u}_{{\mathcal S}\cap{\mathcal L}_k}(w_{{\mathcal L}_k\cap{\mathcal S}})\in\mathit{T}_{\epsilon}^n(U_{{\mathcal L}_k}(w_{{\mathcal L}_k\cap{\mathcal S}})|\mathbf{u}_{{\mathcal L}_k\backslash{\mathcal S}}(w_{{\mathcal L}_k\backslash{\mathcal S},[b-k+1]}),\mathbf{u}_{{\mathcal L}^k,[b-k-V+2]},\mathbf{u}_{d_i,[b-k-V+2]}).
\]
Thus $\mathbf{u}_{{\mathcal S}\cap{\mathcal L}_k}(w_{{\mathcal L}_k\cap{\mathcal S}})$ (or equivalently $w_{{\mathcal L}_k\cap{\mathcal S}}$) takes at most $2^{n(H(U_{{\mathcal S}\cap{\mathcal L}_k}|U_{{\mathcal L}_k\backslash{\mathcal S}}U_{{\mathcal L}^k}U_{d_i}))}$ different values. Therefore, $w_{{\mathcal V}_{-d_i}}$ takes at most $2^{n(\sum_{k=1}^{\ell}H(U_{{\mathcal S}\cap{\mathcal L}_k}|U_{{\mathcal L}_k\backslash{\mathcal S}}U_{{\mathcal L}^k}U_{d_i}))}$ different values. Now, comparing the bounds on the number of possible choices for $z_{{\mathcal V}_{-d_i}}$ and $w_{{\mathcal V}_{-d_i}}$ yields the lemma.
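Explicitly, since the tuples counted in ${\mathcal N}_{{\mathcal S},{\mathcal Z}}$ are determined by the pair $(w_{{\mathcal V}_{-d_i}},z_{{\mathcal V}_{-d_i}})$, the two bounds combine multiplicatively into
\[
2^{n\sum_{t\in{\mathcal Z}}I(Y_t;\hat{Y}_t|X_t)}\cdot
2^{n\sum_{k=1}^{\ell}H(U_{{\mathcal S}\cap{\mathcal L}_k}|U_{{\mathcal L}_k\backslash{\mathcal S}}U_{{\mathcal L}^k}U_{d_i})}
=2^{n\left(\sum_{t\in{\mathcal Z}}I(Y_t;\hat{Y}_t|X_t)+\sum_{k=1}^{\ell}H(U_{{\mathcal S}\cap{\mathcal L}_k}|U_{{\mathcal L}_k\backslash{\mathcal S}}U_{{\mathcal L}^k}U_{d_i})\right)}.
\]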
\section*{Acknowledgement}
We would like to thank the anonymous reviewers and the Associate Editor for their suggestions which greatly improved the paper in terms of its presentation as well as technical clarity and context. We would also like to thank the members of Information Theory and Security Lab at Sharif University for their comments.
\section{Introduction}
Let $S_{g,p}$ be the orientable surface of genus $g$ with $p$
punctures. The set of essential simple closed curves on the surface
$S_{g,p}$ can be arranged into the curve complex
$\mathcal{C}=\mathcal{C}(S_{g,p})$. The combinatorics of this
intricate object is used in particular to study the mapping class
group of $S_{g,p}$. The curve complex is particularly useful in view
of the result of Masur and Minsky \cite[Theorem 1.1]{MM} which says
that $\mathcal{C}$ is hyperbolic in the sense of Gromov. Hence,
using the Gromov product (see \cite[Chapter III.H]{BH}), one can
define the Gromov boundary $\partial \mathcal{C}$ in the usual way.
Note that the boundary $\partial \mathcal{C}$ is in general not
compact, since the curve complex $\mathcal{C}$ is not locally
finite. Hence the topology of $\partial \mathcal{C}$ can be a rich
subject to explore.
Interestingly, the boundary $\partial \mathcal{C}$ arises naturally
also from another construction. Namely, Klarreich \cite[Theorem
1.3]{Kla} (see also \cite[Section 1]{Ham}) proved that $\partial
\mathcal{C}$ is homeomorphic, by an explicit homeomorphism, to the
ending lamination space $\EL=\EL(S_{g,p})$. To define this space, we
need to recall some standard notions.
We denote the space of geodesic laminations on $S_{g,p}$ as usual by
$\mathcal{L}=\mathcal{L}(S_{g,p})$. An \emph{ending lamination} is a
minimal filling geodesic lamination. The set of all ending
laminations is denoted by $\EL \subset \mathcal{L}$. It remains to
describe the topology on the set $\EL$. Let $\PML$ denote the space
of projective measured laminations. Let $\phi\colon \PML \rightarrow
\mathcal{L}$ be the measure forgetting map which maps a projective
measured lamination to its support. Note that $\phi$ is not
surjective, because there are geodesic laminations which do not
admit a transverse measure of full support. However, every ending
lamination admits a transverse measure of full support, so $\EL$ is
contained in the image of $\phi$. Let $\MPML\subset \PML$ be the
preimage of $\EL$ under $\phi$. The space $\EL$ is naturally
equipped with the topology induced from $\MPML$ by the quotient map
$\phi$.
Here is a short account of what is known about the ending lamination
space $\EL$ depending on $g$ and $p$. We call $\xi = 3g-3+p$ the
\emph{complexity} of the surface. The cases where the complexity is
at most one are easy and well-known, in particular for $\xi=1$
(i.e.\ in the case where the surface is the four-punctured sphere or
the once-punctured torus) we have $\EL\simeq\R\setminus \Q$. Assume
now $\xi>1$, i.e.\ the surface in question is hyperbolic and
non-exceptional. In this case Gabai \cite{G} showed that $\EL$ is
connected, locally path-connected, and cyclic, which concluded a
series of results in this direction. Previously Leininger--Schleimer
\cite{LS} proved that $\EL$ is connected, provided $g\geq 4$, or
$g\geq 2$ and $p\geq 1$. Moreover, Leininger--Mj--Schleimer
\cite{LMS} proved that if $g\geq 2$ and $p=1$, then $\EL$ is locally
path connected.
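For later reference, the formula $\xi=3g-3+p$ gives for the surfaces appearing in this article:
\[
\xi(S_{0,4})=3\cdot 0-3+4=1,\qquad
\xi(S_{1,1})=3\cdot 1-3+1=1,\qquad
\xi(S_{0,5})=3\cdot 0-3+5=2,\qquad
\xi(S_{1,2})=3\cdot 1-3+2=2.
\]
In particular, the five-punctured sphere and the twice-punctured torus both have complexity $2$.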
Despite how little is known about $\EL$ so far, we
were encouraged by Mladen Bestvina to address the following.
\begin{conj}
\label{conjecture} The ending lamination space of $S_{g,p}$ is
homeomorphic to the $(\xi-1)$--dimensional N\"obeling space.
\end{conj}
\begin{defin}
\label{definition noebeling} The \emph{$m$--dimensional N\"obeling
space} $N^{2m+1}_m$ is the topological space obtained from
$\R^{2m+1}$ by removing all points with at least $m+1$ rational
coordinates.
\end{defin}
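As an illustration of the defining condition (not needed for the sequel), membership in $N^{2m+1}_m$ amounts to counting rational coordinates. In the sketch below, purely as a modelling assumption, a coordinate counts as rational exactly when it is an exact \texttt{fractions.Fraction}; any other value stands in for an irrational number.

```python
from fractions import Fraction

def in_noebeling(point, m):
    """Membership test for the m-dimensional Noebeling space N^{2m+1}_m.

    A point of R^{2m+1} lies in N^{2m+1}_m iff at most m of its
    coordinates are rational.  For illustration only, a coordinate is
    modelled as rational exactly when it is a fractions.Fraction; any
    other value stands in for an irrational number.
    """
    assert len(point) == 2 * m + 1, "point must lie in R^{2m+1}"
    rationals = sum(1 for x in point if isinstance(x, Fraction))
    return rationals <= m

# m = 1: the Noebeling curve inside R^3 -- remove every point with at
# least two rational coordinates.
print(in_noebeling(("sqrt2", "pi", Fraction(1, 2)), 1))      # one rational coordinate: True
print(in_noebeling((Fraction(0), Fraction(1), "sqrt2"), 1))  # two rational coordinates: False
```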
In this terminology, the ending lamination space of the
four-punctured sphere (and the once-punctured torus) is homeomorphic
to the $0$--dimensional N\"obeling space. This agrees with
Conjecture~\ref{conjecture}.
The \emph{N\"obeling curve} is the $1$--dimensional N\"obeling
space, i.e.\ the topological space obtained from $\R^3$ by removing
all points with at least two rational coordinates. The main result
of this article is to confirm Conjecture~\ref{conjecture} in the
following case.
\begin{theorem}
\label{main} The ending lamination space of the five-punctured
sphere is homeomorphic to the N\"obeling curve.
\end{theorem}
Since $\EL$ is homeomorphic to the Gromov boundary of the curve
complex, which is the same (as a simplicial complex) for the
twice-punctured torus and the five-punctured sphere (see \cite[Lemma
2.1(a)]{Luo}), we have the following.
\begin{cor}
The ending lamination space of the twice-punctured torus is
homeomorphic to the N\"obeling curve.
\end{cor}
\medskip
The article is organised as follows. In Section~\ref{section The
Nobeling curve} we provide a topological characterisation of the
N\"obeling curve which we use in the proof of Theorem~\ref{main}. In
Section~\ref{section The ending lamination space}, using train
tracks, we choose a convenient neighbourhood basis for the ending
lamination space. Then, in Section~\ref{section Almost filling
paths}, we give an account of Gabai's method for constructing paths
in $\EL$.
The proof of Theorem~\ref{main} splits into two parts. In Section
\ref{section Outline of the proof} we begin the proof and in
particular we obtain a dimension bound. The main part of the proof
is to obtain a universality property, with which we conclude in
Section~\ref{section Universality}.
\medskip
We thank Andrzej Nag\'orko for discussions on the N\"obeling curve
and for the characterisation in Section~\ref{section The Nobeling
curve}; Ursula Hamenst\"adt for discussions on the ending lamination
space and for suggestions on how to improve our preprint; and
Mladen Bestvina and Saul Schleimer for encouragement. This work was
partially carried out during the stay of the second author at the
Hausdorff Institute of Mathematics in Bonn.
\section{The N\"obeling curve}
\label{section The Nobeling curve}
In this section we give a useful characterisation of the N\"obeling
curve following from Kawamura--Levin--Tymchatyn~\cite{KLT}. We
learned about this characterisation and the way to derive it from
\cite{KLT} using a standard topological argument from Andrzej
Nag\'orko.
\begin{theorem}
\label{characterization_easy} If a Polish space of dimension $1$ is
connected, locally path connected and satisfies the locally finite
$1$--discs property, then it is homeomorphic to the N\"obeling
curve.
\end{theorem}
Recall that a topological space is \emph{Polish} if it is separable
and admits a complete metric. A topological space has
\emph{dimension $0$} (resp.\ \emph{dimension at most $m$}, for
$m>0$) if each point has a basis of neighbourhoods with empty
boundaries (resp.\ with boundaries of dimension at most $m-1$). A
space has \emph{dimension $m$}, for $m>0$, if it has dimension at
most $m$, but does not have dimension $m-1$. In case of a Polish
space this coincides with the usual covering dimension (see
\cite[Theorem 1.7.7]{Eng}).
\begin{defin}
A topological space $X$ satisfies the \emph{locally finite
$m$--discs property} if we have the following. For any family of
continuous maps $f_n \colon I^m=[0,1]^m \rightarrow X$, where $n\in
\N$, and any open cover $\mathcal{U}$ of $X$, there are continuous
maps $g_n\colon I^m \rightarrow X$ such that
\begin{enumerate}[(i)]
\item
for each $x\in X$ there is a neighbourhood $U\ni x$ satisfying
$g_n(I^m)\cap U=\emptyset$ for sufficiently large $n$,
\item
for each $t\in I^m, n\in \N,$ there is $U\in \mathcal{U}$ such that
both $f_n(t)$ and $g_n(t)$ lie in $U$ (we say that such $f_n$ and
$g_n$ are \emph{$\mathcal{U}$--close}).
\end{enumerate}
If, additionally, the maps $g_n$ may be chosen so that the images
$g_n(I^m)$ are pairwise disjoint, then $X$ satisfies the \emph{discrete $m$--discs property}.
\end{defin}
In the remaining part of this section we explain how to derive
Theorem~\ref{characterization_easy} from the following.
\begin{theorem}[{\cite[Theorem 2.2]{KLT}}]
\label{characterization_hard} A $1$--dimensional Polish space is the
N\"obeling curve if and only if it is an absolute extensor in
dimension $1$ and strongly universal in dimension $1$.
\end{theorem}
In fact, in order to address Conjecture~\ref{conjecture} in the future,
we have decided to discuss the higher-dimensional analogue of Theorem~\ref{characterization_hard}.
\begin{theorem}[{\cite[Topological rigidity theorem]{N}}]
\label{characterization_hard multidim} An $m$--dimensional Polish
space is the $m$--dimensional N\"obeling space if and only if it is
an absolute extensor in dimension $m$ and strongly universal in
dimension $m$.
\end{theorem}
A metric space $X$ is an \emph{absolute extensor in dimension $m$},
if every continuous map into $X$ from a closed subset of an at most
$m$--dimensional metric space extends over the entire space. Assume
now that $X$ is \emph{locally $k$--connected} for every $k<m$ (see
\cite[Definition 3.1]{Du}, for $m=1$ this means that $X$ is locally
path connected). In that case, by Dugundji \cite[Theorem 9.1]{Du},
$X$ is an absolute extensor in dimension $m$ if and only if all of
its homotopy groups in dimension less than $m$ vanish. For $m=1$
this means that $X$ is connected. Summarising, if a metric space is
locally $k$--connected for every $k<m$, and all of its homotopy
groups in dimension less than $m$ vanish, then it is an absolute
extensor in dimension $m$. In particular, if a metric space is
connected and locally path connected, then it is an absolute
extensor in dimension $1$.
A Polish space $X$ is \emph{strongly universal} in dimension $m$ if
any continuous map $f\colon Y \rightarrow X$ from an at most
$m$--dimensional Polish space $Y$ to $X$ is \emph{approximable} by
closed embeddings. This means that for any open cover $\mathcal{U}$
of $X$ there is a closed embedding $g\colon Y \rightarrow X$ such
that $f$ and $g$ are $\mathcal{U}$--close. We discuss below under
what hypotheses strong universality in dimension $m$ follows from
the locally finite $m$--discs property.
By \cite[discussion after Theorem 2.4]{Cu}, any Polish space $X$
satisfying the locally finite $m$--discs property satisfies also the
discrete $m$--discs property. Bowers \cite[Theorem in Appendix, part
(2)]{Bo} proves that the latter implies strong universality in
dimension $m$, under the hypothesis that $X$ is an ANR. Recall that
a topological space $X$ is an \emph{absolute neighbourhood retract}
(an \emph{ANR} for short) if for each closed subset $A\subset X$,
which is normal, there is an open neighbourhood $U\subset X$ such
that $A$ is a retract of $U$. Unfortunately, N\"obeling spaces are
not ANRs, hence Bowers' theorem as stated is not sufficient for our
purposes. However, his proof yields the following.
\begin{theorem}
\label{Bowers} Let $X$ be a Polish space which is locally
$k$--connected for all $k<m$. If $X$ satisfies the discrete $m$--discs
property, then it is strongly universal in dimension $m$.
\end{theorem}
In other words, we can replace the ANR hypothesis in
\cite[Theorem in Appendix]{Bo} by local $k$--connectedness for all $k<m$.
Indeed, the only two places in the proof where the ANR hypothesis
is used are lines 1 and 5 on page 129, in the proof of Lemma C.
However, in both cases the argument only requires the following
property (which is satisfied if $X$ is an ANR). Namely, let $k<m$ and
let $S^{k}$ be the $k$--sphere. Bowers' argument requires that for
every open cover $\mathcal{U}$ of $X$, there is a refinement
$\mathcal{U}'$, such that if $f_0,f_1\colon S^{k} \rightarrow X$ are
$\mathcal{U}'$--close, then there is a homotopy between $f_0$ and
$f_1$ with each track contained in some $U\in \mathcal{U}$. By
\cite[Theorem 5.1]{Du} this property follows from local
$k$--connectedness. This concludes the argument for Theorem~\ref{Bowers}.
\medskip
By Theorems~\ref{characterization_hard multidim} and~\ref{Bowers}
and by the preceding discussion we conclude with the following,
which in the case of $m=1$ amounts exactly to
Theorem~\ref{characterization_easy}.
\begin{cor}
\label{characterization_easy multidim} Let $X$ be a Polish space of
dimension $m$ which is locally $k$--connected for every $k<m$, and
all of whose homotopy groups in dimension less than $m$ vanish.
Assume that $X$ satisfies the locally finite $m$--discs property.
Then $X$ is homeomorphic to the $m$--dimensional N\"obeling space.
\end{cor}
\section{Train track partitions}
\label{section The ending lamination space} Our strategy of proving
Theorem~\ref{main} is to use the topology of $\PML$, the space of
projective measured laminations, to obtain information about the
topology of the ending lamination space $\EL$. To this end, we
construct a sequence of finer and finer partitions of $\PML$ into
polyhedra using Thurston's notion of train tracks (see \cite[Section
8.9]{Th}). We then show that these polyhedra project to a convenient
neighbourhood basis of $\EL$.
For a thorough treatment of train tracks, as well as the basic
definitions, we refer the reader to the book of Penner and Harer
\cite{PH}. Note however that, in contrast to the treatment in
\cite{PH}, for us every train track is \emph{generic} (i.e.\ each
switch is at most trivalent). In the following, we briefly recall
some definitions and statements important to the current work.
Let $\tau$ be a recurrent train track. We denote by $P(\tau)$ the
\emph{polyhedron of projective measures} of $\tau$, that is the set
of all projective measured laminations which are carried by $\tau$.
$P(\tau)$ has the structure of an affine polyhedron, where the faces of
$P(\tau)$ correspond to recurrent proper subtracks $\tau' < \tau$
(see \cite[pp.\ 116--117]{PH}). The inclusion map $P(\tau) \subset
\PML$, where $P(\tau)$ is equipped with the topology coming from the
polyhedral structure, is continuous. In particular, for any train
track $\tau$ the polyhedron of projective measures $P(\tau)$ is a
closed set in $\PML$. The \emph{interior of $P(\tau)$} is its interior
with respect to the polyhedral structure, i.e.\ the set of transverse
measures which put positive mass on each branch of $\tau$. Note that
in general this is not the interior of the set $P(\tau) \subset \PML$ with
respect to the topology of $\PML$. In the sequel we denote the
interior of the polyhedron of projective measures by $V(\tau)\subset
\PML$. We denote the boundary $P(\tau) \setminus V(\tau)$ of the
polyhedron of projective measures by $\partial V(\tau)$. From now
on, the expression \emph{boundary of $X$} will always mean
$\mathrm{Fr} X=\overline{X}\setminus \mathrm{int} X$ (the boundary
in the topological sense). Note that in this terminology $\partial
V(\tau)$ might not be the boundary of $V(\tau) \subset \PML$. Let
$$U(\tau) = \phi(V(\tau) \cap \MPML)$$ (equivalently, $U(\tau)$ is the
set of ending laminations which are fully carried by $\tau$). We
denote the inverse correspondence between (families of) these sets
by $\Psi$, i.e.\ $\Psi(U(\tau)) = V(\tau)$.
Unless stated otherwise, from now on we restrict to the case of
$\Sp$, where $\PML$ is $3$--dimensional. We call a train track
$\eta$ \emph{complete} if it is recurrent, transversely recurrent
and maximal. Recall that if $\eta$ is complete, then $V(\eta)$ is
$3$--dimensional, hence open in $\PML$ (see e.g.\ \cite[Lemma
3.1.2]{PH}) and consequently $U(\eta)$ is open in $\EL$. In
particular we have $\partial V(\eta) = \mathrm{Fr}V(\eta)$. We call
a train track $\sigma$ \emph{nearly complete}, if it is birecurrent,
carries an ending lamination and $P(\sigma)$ is $2$--dimensional (in
particular $\sigma$ is not complete).
\begin{rem}
\label{no crossing boundaries}
Let $\mu_0, \mu_1$ be measured geodesic
laminations which do not intersect transversally. Suppose that for some train track $\tau$ its polyhedron of projective measures $P(\tau)$ contains a projective
class of $\mu_t = (1-t)\mu_0 + t\mu_1$ for
some $t \neq 0, 1$. Then the whole interval $\{\mu_t\}_{t \in [0,1]}$ projects into $P(\tau)$. This is because the support of $\mu_t$
equals $\phi(\mu_0) \cup \phi(\mu_1)$ except maybe for $t=0$ or $1$, and projective measured laminations are carried by train tracks if
and only if their supports are.
\end{rem}
We will need the following lemma, which shows how $\MPML$ can
intersect the polyhedron of projective measures of a complete or
nearly complete train track.
\begin{lemma}
\label{key lemma}~\\
\vspace{-.5cm}
\begin{enumerate}[(i)]
\item Let $\sigma$ be a nearly complete train track. Then $\partial V(\sigma)$ contains no filling
lamination. In particular, $\partial V(\sigma)$ is disjoint from $\MPML$.
\item Let $\eta$ be a complete train track. Then the $1$--skeleton of $\partial V(\eta)$ contains
no filling lamination. In particular, the intersection of the $1$--skeleton of $\partial V(\eta)$ with $\MPML$ is empty.
\item Let $\sigma$ be a nearly complete train track. Then $U(\sigma)$ is closed in $\EL$.
\end{enumerate}
\end{lemma}
\begin{proof}
\begin{enumerate}[(i)]
\item Let $\sigma$ be a nearly complete train track. Recall that $\partial V(\sigma)$ is the union
of $P(\tau)$ over all recurrent proper subtracks $\tau < \sigma$. We show that no proper subtrack
of $\sigma$ is filling, which immediately implies assertion (i).
Since $\sigma$ is nearly complete, it carries a filling lamination. Hence its complementary regions
are topological discs or once-punctured discs. Thus, on a five-punctured sphere a nearly complete
train track has at least five complementary regions. Each of those regions gives a
contribution of at least $-\frac{1}{2}$ to the (generalised) Euler characteristic of $\Sp$, with
a contribution of exactly $-\frac{1}{2}$ if and only if the component is a triangle
or a once-punctured monogon.
If $\sigma$ had more than five complementary regions, the fact that $\chi(\Sp)$ equals $-3$ would imply
that these regions have to be five once-punctured monogons and one triangle --- which
would mean that $\sigma$ was complete.
Hence, $\sigma$ has four once-punctured monogons and one once-punctured bigon as complementary
regions. A proper subtrack $\tau < \sigma$ needs to erase at least one branch of $\sigma$, and hence
join two of these regions (or one to itself). Thus some complementary region of $\tau$
contains either two punctures or an essential curve --- hence $\tau$ is not filling.
\item Let $\eta$ be complete. The $1$--skeleton of $\partial V(\eta)$ is the
union of $P(\tau)$ over recurrent $\tau<\eta$ which are obtained from $\eta$ by removing at least
two branches. Now assertion (ii) follows with the same Euler characteristic argument as in the
proof of assertion (i).
\item Let $\sigma$ be a nearly complete train track. The polyhedron of projective measures $P(\sigma)$ is
a closed set in $\PML$. Since the topology of $\EL$ is induced by the map $\phi$, the set
$\phi(P(\sigma) \cap \MPML)$ is closed in $\EL$. However, by assertion (i), we have
$P(\sigma) \cap \MPML = V(\sigma) \cap \MPML$ and hence $U(\sigma)=\phi(V(\sigma)\cap\MPML)$ is closed in $\EL$.
\end{enumerate}
\qed
\end{proof}
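The arithmetic behind the Euler characteristic argument in parts (i) and (ii) can be checked directly. The sketch below is only an illustration: the function \texttt{contribution} encodes the standard index formula (a disc with $c$ cusps contributes $1-c/2$, a once-punctured disc with $c$ cusps contributes $-c/2$), which is used implicitly in the proof.

```python
def contribution(punctured, cusps):
    """Generalised Euler characteristic contribution of a complementary
    region of a train track: a disc with c cusps contributes 1 - c/2,
    and a once-punctured disc with c cusps contributes 0 - c/2.
    (Standard index formula, stated here only to check the arithmetic.)"""
    return (0 if punctured else 1) - cusps / 2

CHI = 2 - 5  # Euler characteristic of the five-punctured sphere S_{0,5}

# Complete train track: five once-punctured monogons and one triangle.
complete = 5 * [contribution(True, 1)] + [contribution(False, 3)]
assert sum(complete) == CHI

# Nearly complete train track from the proof: four once-punctured
# monogons and one once-punctured bigon.
nearly = 4 * [contribution(True, 1)] + [contribution(True, 2)]
assert sum(nearly) == CHI

print("both region patterns sum to", CHI)
```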
\medskip
In the remaining part of this section, our aim is to find a
convenient neighbourhood basis for $\EL$, determined by a certain
set of train tracks.
\begin{defin}
\label{track basis partition}
Let $\Tau$ be a finite collection of complete
train tracks and let $\Sigma$ be a finite collection of nearly complete train tracks.
The pair $(\Tau, \Sigma)$ is called a \emph{train track partition} if
all $V(\tau)$ are pairwise
disjoint, for $\tau \in \Tau \cup \Sigma$, and together cover all of $\MPML$.
Note that in particular all $U(\tau)$ are pairwise
disjoint, for $\tau \in \Tau \cup \Sigma$, and cover all of $\EL$.
\end{defin}
\begin{example}
\label{examples}~\\
\vspace{-.5cm}
\begin{enumerate}[(i)]
\item
Let $P$ be any pants decomposition for $\Sp$. Let $\Tau$ be the set of complete
\emph{standard train tracks} with respect to $P$ (see \cite[Section 2.6]{PH}) and let $\Sigma$
be the set of their nearly complete subtracks.
We claim that $(\Tau, \Sigma)$ is a train track partition. To this end, first note that the $P(\eta)$,
for $\eta \in \Tau$,
cover all of $\PML$ and each projective measured lamination $\lambda$ is fully carried by a unique
subtrack $\tau$ of one of the $\eta \in \Tau$ (see \cite[Sections 2.7 and 2.8]{PH}).
In particular, $V(\tau)$ are disjoint for all $\tau \in \Tau \cup \Sigma$.
By Lemma~\ref{key lemma}(ii), every $\lambda \in \MPML$ lies in $V(\tau)$ for
some $\tau \in \Tau \cup \Sigma$.
We call such a pair $(\Tau,\Sigma)$ a \emph{standard train track partition}.
\item
Let $(\Tau, \Sigma)$ be a train track partition, and let $\eta \in \Tau$ be a complete train track. Denote
by $\eta_L$ (and $\eta_R$) the left (respectively right) split of $\eta$ along some large branch $b$.
Note that splitting $\eta$ amounts to cutting $P(\eta)$ along a hyperplane, so that we have
$P(\eta) = P(\eta_L) \cup P(\eta_R)$ and $P(\eta_L) \cap P(\eta_R) = P(\sigma)$ for
a common subtrack $\sigma$ of $\eta_L$ and $\eta_R$.
If both $\eta_L$ and $\eta_R$ are complete, define $\Tau'$ by replacing $\eta \in \Tau$ by
$\{\eta_L, \eta_R\}$. Then, if $\sigma$ is nearly complete, add $\sigma$ to $\Sigma$ to obtain $\Sigma'$. If only one of the two train tracks $\eta_L$ and $\eta_R$ is
complete, replace $\eta \in \Tau$ by this
train track to get $\Tau'$ and set $\Sigma' = \Sigma$.
Note that in both cases the resulting pair $(\Tau', \Sigma')$ is a train track partition.
We say that $(\Tau', \Sigma')$ is obtained from $(\Tau, \Sigma)$ by
a \emph{complete splitting move along $b$}.
\item
Let $(\Tau, \Sigma)$ be a train track partition. Let $\sigma \in \Sigma$ be any nearly
complete train
track. Consider the left (respectively right) split $\sigma_L$ (and $\sigma_R$) along some large
branch $b$. As above we have
$P(\sigma) = P(\sigma_L) \cup P(\sigma_R)$ and $P(\sigma_L) \cap P(\sigma_R) = P(\tau)$ for
a common subtrack $\tau$ of $\sigma_L$ and $\sigma_R$. If both $\sigma_L$ and $\sigma_R$ are nearly complete,
define $\Sigma'$ by replacing $\sigma \in \Sigma$ by $\{\sigma_L, \sigma_R\}$. Otherwise, replace $\sigma$
by the train track $\sigma_L$ or $\sigma_R$ which is nearly complete.
Note that in both cases, by Lemma~\ref{key lemma}(i), the resulting pair $(\Tau, \Sigma')$ is a train
track partition.
We say that $(\Tau, \Sigma')$ is obtained from $(\Tau, \Sigma)$ by a \emph{nearly complete
splitting move along $b$}.
\end{enumerate}
\end{example}
We now use the above examples to obtain the following.
\begin{theorem}
\label{tracks form basis}
There exists a sequence $\S = ((\Tau_k, \Sigma_k))_{k=0}^\infty$ of train track partitions satisfying the
following two properties.
\begin{description}
\item[(Subdivision)] Let $K\geq k \geq 0$. For each $\eta \in \Tau_{K}$ there is an $\eta' \in \Tau_k$ satisfying $V(\eta) \subset
V(\eta')$. For each $\sigma \in \Sigma_{K}$ there is a $\tau \in \Tau_k \cup
\Sigma_k$ satisfying
$V(\sigma) \subset V(\tau)$.
\item[(Fineness)] For each ending lamination $\lambda$ and each open set $W$ in $\PML$ containing
$\phi^{-1}(\lambda)$ there is an open set $V$ in $\PML$ satisfying $W \supset V \supset \phi^{-1}(\lambda)$ of the
following form.
Either $V = V(\eta)$ with $\eta \in \Tau_k$, for some $k \geq 0$, or
$V = V(\eta_1) \cup V(\sigma) \cup V(\eta_2)$ with $\eta_i \in \Tau_{k_i}, \sigma \in \Sigma_k$, for
some $k_1,k_2,k\geq 0$. In the latter case we additionally require
$P(\sigma) \subset \partial V(\eta_1) \cap \partial V(\eta_2)$ (see Figure~\ref{fig:triple_neighbourood}).
We denote by ${\cal V}(\S)$ the family of all open sets $V$ of above type.
\end{description}
\end{theorem}
\begin{figure}[htbp!]
\centering
\psfrag{v1}{$V(\eta_1)$}
\psfrag{v2}{$V(\sigma)$}
\psfrag{v3}{$V(\eta_2)$}
\psfrag{v}{$V$}
\includegraphics[width=0.5\textwidth]{triple_neighbourhood}
\caption{The case where $V = V(\eta_1) \cup V(\sigma) \cup V(\eta_2)$; we draw $V(\sigma)$ thick just to emphasise its presence; note that for simplicity the configuration is depicted in dimension $2$ instead of $3$.}
\label{fig:triple_neighbourood}
\end{figure}
Before we begin the proof of Theorem~\ref{tracks form basis}, we
record the following.
\begin{rem}
\label{subdiv} If we have a sequence of train track partitions so
that each $(\Tau_{k+1},\Sigma_{k+1})$ is obtained from
$(\Tau_{k},\Sigma_{k})$ by a complete splitting move or a nearly
complete splitting move (see Examples~\ref{examples}(ii,iii)), then
they satisfy property (Subdivision). Moreover, property
(Subdivision) is preserved under passing to a subsequence.
\end{rem}
\begin{defin}
\label{partition sequence} We call sequences $\S = ((\Tau_k,
\Sigma_k))_{k=0}^\infty$ satisfying (Subdivision) and (Fineness)
\emph{good partition sequences}. In this terminology Theorem~\ref{tracks form basis}
says that there exists a good partition sequence.
\end{defin}
\begin{rem}
\label{discarding} If $((\Tau_k, \Sigma_k))_{k=0}^\infty$ is a good
partition sequence, then for any $K\geq 0$ the sequence $((\Tau_k,
\Sigma_k))_{k=K}^\infty$ is also a good partition sequence.
\end{rem}
Properties (Subdivision) and (Fineness) have the following immediate
consequences for the sets $U(\tau)$ in the ending lamination space.
\begin{cor}
\label{properties of Us}
Let $\S = ((\Tau_k, \Sigma_k))_{k=0}^\infty$ be a good partition sequence. Then (Subdivision) holds
after replacing each $V(\tau)$ with $U(\tau)$.
Furthermore, let ${\cal U}(\S) = \{\phi(V \cap \MPML)\}_{V \in {\cal
V}(\S)}$.
Then ${\cal U}(\S)$ is a neighbourhood basis of $\EL$.
\end{cor}
We denote by $\Psi\colon {\cal U}(\S) \rightarrow {\cal V}(\S)$ the
map extending the map $\Psi(U(\eta))=V(\eta)$ by $\Psi(U(\eta_1)\cup
U(\sigma) \cup U(\eta_2))=\Psi(U(\eta_1))\cup \Psi(U(\sigma)) \cup
\Psi(U(\eta_2))$.
\medskip
The remaining part of this section is devoted to the proof of
Theorem~\ref{tracks form basis}. We need to recall some facts about
full splitting sequences.
Let $b_1, \ldots, b_l$ be the large branches of a train track
$\tau$. Note that, if $\tau'$ is obtained from $\tau$ by a split at
$b_i$, every $b_j$ is still a large branch of $\tau'$ for $j\neq i$.
A \emph{full split of $\tau$} (see \cite[Section 5]{Ham_MCG1}) is a
train track which is obtained from $\tau$ by splitting at each large
branch $b_i$ exactly once (we also say that this train track is \emph{obtained
from $\tau$ by a full split}). A \emph{full splitting sequence} is a
sequence of train tracks $(\tau^i)_i$ such that $\tau^{n+1}$ is
obtained from $\tau^n$ by a full split. For an ending lamination
$\lambda$ carried by $\tau$, a \emph{full $\lambda$--splitting
sequence of $\tau$} is a full splitting sequence $(\tau^{i})_i$ with
$\tau^0 = \tau$ and such that each $\tau^{i}$ carries $\lambda$.
The following immediate consequence of \cite[Theorem 8.5.1]{Mos} is
the central part of the upcoming proof of Theorem~\ref{tracks form
basis}. (A similar theorem is obtained in \cite{Ham_MCG1}.)
\begin{theorem}
\label{nesting}
Let $\lambda$ be an ending lamination and let $(\tau^i)_i$ be a full $\lambda$--splitting sequence
of some train track $\tau$. Then we have
$$\bigcap_{i=1}^\infty P(\tau^i) = \phi^{-1}(\lambda).$$
In particular, for any open neighbourhood $W$ of
$\phi^{-1}(\lambda)$ in $\PML$, there is some
$i_0 > 0$ such that for all $i > i_0$ we have $P(\tau^i) \subset W$.
\end{theorem}
\medskip\par\noindent\textbf{Proof of Theorem~\ref{tracks form basis}.} \ignorespaces
Let $P$ be a pants decomposition for $\Sp$ and let $(\Tau_0, \Sigma_0)$ be the associated standard
train track partition. We now describe an inductive procedure for building $(\Tau_k,
\Sigma_k)$, where $k\geq 0$,
which will satisfy property (Subdivision) and the following two additional properties.
\begin{description}
\item[(Adjacency)] If $\mu\in \MPML$ lies in $V(\sigma)$, where $\sigma\in \Sigma_k$ for some $k\geq 0$, then there are
$\eta\neq\eta'\in \Tau_k$ such that $\mu$ lies in $P(\eta)\cap P(\eta')$.
Moreover, the set obtained from $$V(\eta)\cup
V(\eta')\cup (\partial V(\eta)\cap \partial
V(\eta'))$$ by removing the $1$--skeleta of $P(\eta)$ and
$P(\eta')$ is an open neighbourhood of $\mu$ (see Figure~\ref{fig:adjecency}).
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{adjecency3}
\caption{The open neighbourhood as in property (Adjacency).}
\label{fig:adjecency}
\end{figure}
\item[(Full splits)] If $\tau \in \Tau_k \cup \Sigma_k$ for some $k \geq 0$ and $\tau'$ is
a complete or nearly complete train track obtained from $\tau$ by a full split, then
$\tau'$ belongs to $\Tau_{k+1} \cup \Sigma_{k+1}$.
\end{description}
Note that the standard train track partition $(\Tau_0, \Sigma_0)$
satisfies property (Adjacency). Moreover, by Lemma~\ref{key
lemma}(ii), a train track partition obtained by a complete splitting
move or a nearly complete splitting move (see Examples
\ref{examples}(ii,iii)) from another train track partition satisfying
property (Adjacency) satisfies property (Adjacency) itself.
\medskip
We now describe our inductive procedure. Suppose that the train
track partition $(\Tau_k, \Sigma_k)$ has already been defined.
Roughly speaking we now perform the following operation. For each
$\eta \in \Tau_k$ we use complete splitting moves along all large
branches of $\eta$ to obtain $(\Tau_k', \Sigma_k')$. In the second
step we perform nearly complete splitting moves for each $\sigma \in
\Sigma_k$ along all large branches of $\sigma$ to obtain
$(\Tau_{k+1}, \Sigma_{k+1})$.
More precisely, let $\eta \in \Tau_k$ be a complete train track.
Denote the large branches of $\eta$ by $b_1, \ldots, b_l$. We now
perform a complete splitting move along $b_1$ to obtain a new
partition $(\Tau^1_k, \Sigma_k^1)$. The set $\Tau^1_k$ contains one
or two train tracks corresponding to the two possible splits of
$\eta$ along $b_1$. Each of those still contains $b_2, \ldots, b_l$
as large branches. We perform complete splitting moves along $b_2$
for those (one or two) train tracks in $\Tau^1_k$ to obtain the partition $(\Tau_k^2,
\Sigma_k^2)$. The set $\Tau_k^2$ contains now up to four train
tracks corresponding to the possible splits of $\eta$ along $b_1$
and $b_2$. We split all these (up to four) train tracks along $b_3$ and continue
this way for all large branches $b_1, \ldots, b_l$ until we
terminate with $(\Tau_k^l, \Sigma_k^l)$.
Note that this partition now contains every full split $\eta^1$ of
$\eta$, if $\eta^1$ is a complete train track. Moreover, we have $\Tau_k \setminus \{\eta\}\subset
\Tau^l_k$. We now repeat the same procedure for all $\eta' \in
\Tau_k\setminus\{\eta\}$. The resulting partition $(\Tau_k',
\Sigma_k')$ contains for every $\eta \in
\Tau_k$ all of its full splits which are complete.
In the second step, we obtain $(\Tau_{k+1}, \Sigma_{k+1})$ from
$(\Tau'_k, \Sigma'_k)$ by performing the analogous operation with
all elements of $\Sigma_k\subset \Sigma'_k$. Note that
$\Tau_{k+1}=\Tau'_k$. This completes the definition of
$\S=((\Tau_k, \Sigma_k))_{k=0}^\infty$.
\medskip
Since all $(\Tau_k, \Sigma_k)$ are obtained from $(\Tau_0,
\Sigma_0)$ by a sequence of complete splitting moves and nearly complete splitting
moves, $\S$ satisfies property (Adjacency). By Remark~\ref{subdiv},
$\S$ satisfies also property (Subdivision). Furthermore, by
construction, $\S$ satisfies property (Full splits). We now use this
information to derive property (Fineness).
\medskip
Let $\lambda \in \EL$, and let $W \subset \PML$ be any open set
containing $\phi^{-1}(\lambda)$. Let $(\eta^i)_i$ be a full
$\lambda$--splitting sequence of some $\eta^0\in \Tau_0$ carrying
$\lambda$. By property (Full splits), we have $\eta^k\in \Tau_k$. By
Theorem~\ref{nesting}, for sufficiently large $k$ we have
$V(\eta^k)\subset W$. Hence if $\phi^{-1}(\lambda)\subset
V(\eta^k)$, we are done. Otherwise, $\phi^{-1}(\lambda)$ is
contained in $\partial V(\eta^k)$.
Then the strategy is roughly speaking the following. We consider the
polyhedron $P(\eta'^k)$ ``on the other side'' of the face of
$P(\eta^k)$ containing $\phi^{-1}(\lambda)$. We then split $\eta'^k$
until $P(\eta'^K)$ is contained in $W$. The face of $P(\eta'^K)$
containing $\phi^{-1}(\lambda)$ might not itself be the polyhedron of a
train track occurring in our partition sequence. However, there is some nearly
complete train track $\sigma^K \in \Sigma_K$ carrying $\lambda$. We
split $\sigma^K$ until its polyhedron of projective measures lies
in the interior of the appropriate faces of both $P(\eta^K)$ and
$P(\eta'^K)$. Then the resulting three train tracks define the
required neighbourhood.
More precisely, by property (Adjacency), there is $\eta'^k\in \Tau_k$
with $\phi^{-1}(\lambda)\subset P(\eta^k)\cap P(\eta'^k)$. By
Theorem~\ref{nesting} and property (Full splits), there are some
$K\geq k$ and $\eta'^K\in \Tau_{K}$ with $\phi^{-1}(\lambda)\subset
P(\eta'^K)\subset W$.
Let $\sigma^K \in \Sigma_K$ be the nearly complete train track carrying $\lambda$.
By property
(Adjacency), the set $N$ obtained from
$$V(\eta^K)\cup V(\eta'^K)\cup (\partial
V(\eta^K)\cap
\partial V(\eta'^K))$$
by removing the $1$--skeleta of $P(\eta^K)$ and $P(\eta'^K)$, is an
open neighbourhood of $\phi^{-1}(\lambda)$.
By Theorem~\ref{nesting} and property (Full splits), there is some
$L\geq 0$ and $\sigma^L\in \Sigma_{L}$ with $\phi^{-1}(\lambda)\subset
P(\sigma^L)\subset N$. By Lemma~\ref{key lemma}(i), we have
$\phi^{-1}(\lambda)\subset V(\sigma^L)$. We put $V=V(\eta^K)\cup
V(\sigma^L)\cup V(\eta'^K)$. Then we have $\phi^{-1}(\lambda)\subset
V\subset W$.
Since $P(\sigma^L)$ is $2$--dimensional in $\partial V(\eta^K)$ and
$\partial V(\eta'^K)$, we have that $V(\sigma^L)$ is open in
$\partial V(\eta^K)\cap \partial V(\eta'^K)$. Hence each point of
$V(\sigma^L)$ lies in the interior of $V$, and we conclude that $V$ is
open, as desired. \qed
\section{Almost filling paths}
\label{section Almost filling paths} In this section we give an
account of Gabai's method of constructing paths in $\EL$. This
discussion is valid for any surface $S_{g,p}$ with $\xi=3g-3+p\geq
2$. Gabai's main result is the following.
\begin{theorem}[{\cite[Theorem 0.1]{G}}]
\label{theorem gabai}
$\EL$ is connected, path connected and cyclic.
\end{theorem}
Here we give some details on Gabai's construction, which we need in the proof of Theorem
\ref{main}. Recall \cite{G} that a geodesic lamination $\lambda$
is \emph{almost minimal almost filling} if it has the form $\lambda=\lambda^*\cup\gamma$ where $\lambda^*$ has no isolated leaves, the closed (with respect to the path metric) complement of $\lambda^*$ supports at most one simple
closed geodesic, and $\gamma$ is either this geodesic or is empty.
We denote by $\AML\supset \EL$ the set of all almost minimal almost
filling geodesic laminations.
Gabai uses \emph{PL almost filling} paths $h\colon
I=[0,1]\rightarrow \PML$ satisfying $\phi(h(t))\in \AML,\
\phi(h(0)),\phi(h(1))\in \EL$ and some additional properties
satisfied by generic PL paths. We do not recall these properties,
since we use only the combination of the following results. We
assume that $\PML$ is equipped with a fixed metric, and we say that
two points in $\PML$ are \emph{$\e$--close} if they are at distance at
most $\e$ in this metric.
\begin{lemma}[{\cite[Lemma 2.9]{G}}]
\label{existence of pl filling paths} Let $h\colon I\rightarrow
\PML$ be a path with $\phi(h(0)),\phi(h(1))\in \EL$. Then for
any $\e>0$ there is a PL almost filling path $h'\colon I\rightarrow
\PML$ with the same endpoints and such that $h'(t)$ is $\e$--close to $h(t)$, for
all $t\in I$.
\end{lemma}
We now fix a hyperbolic metric on $S_{g,p}$ and consider geodesic
laminations as subsets of the projective tangent bundle of
$S_{g,p}$.
The hyperbolic metric on $S_{g,p}$ induces a natural (Sasaki) metric on the
projective tangent bundle.
For a geodesic lamination $\lambda$, we denote by
$N^{PT}_\e(\lambda)$ its $\e$--neighbourhood in this metric. The key
element of the proof of Theorem~\ref{theorem gabai} is the following
result.
\begin{lemma}[{\cite[Lemma 5.1]{G}}]
\label{Gabai} If $h\colon I\rightarrow \PML$ is a PL almost filling
path, $\e>0,\delta>0$, then there exists a path $g\colon I
\rightarrow \EL$ with $g(0)=\phi(h(0)),\ g(1)=\phi(h(1))$ such that
for each $t\in [0,1]$ there exists $s\in I$ with $|s-t|<\delta$
satisfying $$h(s)^*\subset N^{PT}_\e(\tilde{g}(t)),$$ for some
diagonal extension $\tilde{g}(t)$ of $g(t)$.
\end{lemma}
We also need the following lemma, which roughly says that for $h$ and $g$
as in the assertion of Lemma~\ref{Gabai}, the preimage
$\phi^{-1}(g(I))$ is not far away from $h(I)$ in $\PML$. We restrict
to the case of the five-punctured sphere (although a version of this
result is true in general). This way we may choose a good partition
sequence $\S$ (see Definition~\ref{partition sequence}). We denote
by $\mathcal{V}(\S),\,\mathcal{U}(\S)$ its associated families of
open sets in $\PML$ and $\EL$, respectively (see Theorem~\ref{tracks
form basis} and Corollary~\ref{properties of Us}).
\begin{lemma}
\label{local perturbing} Let $\lambda_0\in U\in \mathcal{U}(\S)$.
Then there is $U'\in \mathcal{U}(\S)$ with $\lambda_0\in U'$ and
$\e>0$ satisfying the following.
\begin{enumerate}[(i)]
\item
Let $\mu\in \Psi(U')$ with $\phi(\mu)\in \AML$. If
$\phi(\mu)^*$ lies in $N^{PT}_\e(\tilde{\lambda})$ for some diagonal extension
$\tilde{\lambda}$ of $\lambda\in \EL$, then $\lambda\in U$.
\item
Let $\lambda\in U'$, and let $\mu\in \PML$ with $\phi(\mu)\in \AML$.
If $\phi(\mu)^*$ lies in $N^{PT}_\e(\tilde{\lambda})$ for some diagonal
extension $\tilde{\lambda}$ of $\lambda$, then $\mu\in \Psi(U)$.
\end{enumerate}
\end{lemma}
\proof Part (i) is proved in the course of the proof of \cite[Theorem
6.1]{G} (local path connectedness of $\EL$).
For part (ii), let $V=\Psi(U)$. By Theorem~\ref{tracks form
basis}(Fineness), there are neighbourhoods $V_1,V_2,V'\in
\mathcal{V}(\S)$ of $\phi^{-1}(\lambda_0)$ satisfying
$\overline{V_1}\subset V, \ \overline{V}_2\subset V_1$, and
$\overline{V}'\subset V_2$ (see Figure~\ref{fig:nesting}).
\begin{figure}
\centering
\scalebox{0.75}{
\psfrag{v1}{$V_1$}
\psfrag{v2}{$V_2$}
\psfrag{v}{$V$}
\psfrag{vv}{$V'$}
\psfrag{phi}{$\phi^{-1}(\lambda)$}
\psfrag{mu}{$\mu$}
\psfrag{nu}{$\nu$}
\includegraphics[width=0.6\textwidth]{nesting} }
\caption{Nested neighbourhoods with possible position of $\mu$ and $\phi^{-1}(\lambda)$.}
\label{fig:nesting}
\end{figure}
We prove that $U'=\phi(V'\cap \MPML)$ satisfies the assertion of the
lemma.
First we claim that if we have $\mu\in \PML \setminus V$ with
$\phi(\mu)\in \AML$, then there is a projective measured lamination
$\nu\in\PML\setminus V_1$ with support $\phi(\mu)^*$. Indeed, if
$\phi(\mu)$ is an ending lamination, then we can take $\nu=\mu$.
Otherwise we have $\phi(\mu)=\phi(\mu)^*\cup \gamma$ for some simple
closed geodesic $\gamma$. Let $\nu$ be the projective measured
lamination with support $\phi(\mu)^*$ obtained by restricting the
measure $\mu$ to $\phi(\mu)^*$. By Remark~\ref{no crossing
boundaries}, the interval of projective measured laminations
determined by $\nu$ and $\gamma$ is contained in $\PML\setminus
V_1$. This justifies the claim.
By Remark~\ref{no crossing boundaries}, the supports of any pair of
projective measured laminations in $\overline{V}'$ and
$\PML\setminus V_1$ intersect transversally. Observe that
$\overline{V}'$ and $\PML\setminus V_1$ are compact in $\PML$. By
super convergence of supports (\cite[Proposition 3.2(i)]{G}), there
is $\delta>0$ satisfying the following. For any $\lambda\in
\phi(\overline{V}'\cap \MPML)$ and $\nu\in\PML\setminus V_1$, the
maximal angle of intersection between $\lambda$ and $\phi(\nu)$ is
at least $\delta$. If we pick $\e$ sufficiently small, and we have
$\phi(\nu)=\phi(\mu)^*$, this violates $\phi(\mu)^*\subset
N^{PT}_\e(\tilde{\lambda})$, for any diagonal extension
$\tilde{\lambda}$ of $\lambda$. \qed
\section{Proof of the main theorem}
\label{section Outline of the proof}
Our goal is to prove Theorem~\ref{main}, i.e.\ to verify that
$\mathcal{EL}(\Sp)$ is homeomorphic to the N\"obeling curve. By
Theorem~\ref{characterization_easy}, in order to do this we must
show that $\EL$ is a Polish space, that it is connected, locally
path connected, of dimension $1$, and satisfies the locally finite
$1$--discs property.
To see that $\EL$ is separable note that the orbit of any ending
lamination under the action of the mapping class group is dense in
$\EL$ (see e.g.\ \cite[Corollary 4.2]{Ham2}). Because the mapping
class group is finitely generated, this orbit is countable. Since
$\EL$ is homeomorphic to the Gromov boundary of the curve complex
(\cite[Theorem 1.3]{Kla}, compare \cite[Section 1]{Ham}), it carries
a metric defined using the Gromov product (see Bridson--Haefliger
\cite[Chapter III.H]{BH} and Bonk--Schramm \cite[Section 6]{BS}).
This metric is complete by \cite[Proposition 6.2]{BS}. Hence $\EL$
is a Polish space.
\medskip
By Theorem~\ref{theorem gabai}, $\EL$ is connected and locally path
connected.
\medskip
Now we prove that $\EL$ is of dimension $1$. Since there are paths
in $\EL$ (Theorem~\ref{theorem gabai}), it is not of dimension $0$.
In order to prove that $\EL$ is of dimension at most $1$, we need to
check that any point in $\EL$ has a neighbourhood basis with
boundaries of dimension $0$. Let $\S=((\Tau_k,\Sigma_k))_k$ be a
good partition sequence, guaranteed by Theorem~\ref{tracks form
basis}. Thus, by Corollary~\ref{properties of Us}, it is enough to
prove that the boundary of any $U\in\mathcal{U}(\S)$ is of dimension $0$.
The boundary of any $U\in\mathcal{U}(\S)$ is contained in a union of
boundaries of up to three $U(\tau)$'s, where $\tau\in \Tau_k\cup
\Sigma_k$, for some $k$'s. For $\sigma\in \Sigma_k$ the set
$U(\sigma)$ is closed (Lemma~\ref{key lemma}(iii)), so that
$\mathrm{Fr}\, U(\sigma)\subset U(\sigma)$. For fixed $k$, all
$U(\eta)$ with $\eta\in \Tau_k$ are open and disjoint. Hence, if we
denote $X_k=\bigcup_{\sigma\in\Sigma_k}U(\sigma)$, we have
$\mathrm{Fr}\, U(\eta)\subset X_k$. By property (Subdivision), for
all $K\geq k$ we have $X_K\supset X_k$. Hence the boundary of any
$U\in\mathcal{U}(\S)$ is contained in $X_K$, for sufficiently large
$K$. Thus it is enough to prove that for each $k$ the set
$X_k=\bigcup_{\sigma\in\Sigma_k} U(\sigma)\subset \EL$ is of
dimension $0$.
We obtain a neighbourhood basis for each $\lambda\in X_k$ by
restricting to $X_k$ the sets $U'\ni \lambda$ from the open
neighbourhood basis $\mathcal{U}(\S)$. By Remark~\ref{discarding},
we may assume that $U'=U(\eta)$ or $U'=U(\eta_1)\cup U(\sigma) \cup
U(\eta_2)$ where $\eta$ or all of $\eta_1, \sigma, \eta_2$ lie in
$\Tau_K$'s and $\Sigma_K$'s with $K$'s at least $k$. Since for
$K\geq k$ we have $\EL\setminus X_K\subset \EL \setminus X_k$, we
get that all $U(\eta)$, with $\eta\in \Tau_K$, are disjoint from
$X_k$. Hence $U'$ is of the form $U(\eta_1)\cup U(\sigma) \cup
U(\eta_2)$ and $U'\cap X_k\subset U(\sigma)$, for some $\sigma\in
\Sigma_K$.
We now prove that in fact we have the equality $U'\cap X_k=
U(\sigma)$. By property (Subdivision), $U(\sigma)$ is contained in
some $U(\tau)$, where $\tau\in \Tau_k\cup\Sigma_k$. Since
$\lambda\in U(\sigma)\subset U(\tau)$ and $\lambda\in X_k$, we have
$\tau\in \Sigma_k$. Hence $U(\sigma)\subset X_k$. Summarizing, the
restriction of $U'$ to $X_k$ equals $U(\sigma)$. By Lemma~\ref{key
lemma}(iii), $U(\sigma)$ is closed in $\EL$, hence it is also closed
in $X_k$. Moreover, $U(\sigma)$ is also open in $X_k$, since its
complement is a finite union of some disjoint closed $U(\sigma')$
with $\sigma'\in \Sigma_K$. Thus the boundary of $U(\sigma)$ in
$X_k$ is empty, as desired.
\medskip
In order to finish the proof of Theorem~\ref{main}, it remains to
prove the following, which we do in Section~\ref{section
Universality}.
\begin{prop}
\label{main proposition}
$\EL(S_{0,5})$ satisfies the locally finite
$1$--discs property.
\end{prop}
\section{Universality}
\label{section Universality} In this section we prove Proposition
\ref{main proposition}, which completes the proof of Theorem
\ref{main}.
\medskip
We have to prove that for any family of paths $f_n \colon I
\rightarrow \EL$, where $n\in \N$, and any open cover $\mathcal{U}$
of $\EL$, there are paths $g_n\colon I \rightarrow \EL$ such that
\begin{description}
\item[(Local finiteness)]
for each $\lambda\in \EL$ there is a neighbourhood $U\ni \lambda$
satisfying
$g_n(I)\cap U=\emptyset$ for sufficiently large $n$,
\item[(Approximation)]
for each $t\in I, n\in \N,$ there is $U\in \mathcal{U}$ such that
both $f_n(t)$ and $g_n(t)$ lie in $U$.
\end{description}
Since the proof is technically involved, we first prove the
following as an illustration.
\begin{prop} The N\"obeling curve $N^3_1\subset \R^3$ satisfies the locally finite $1$--discs
property.
\end{prop}
\proof We learned the idea of this proof from Andrzej Nag\'orko.
We say that a cube $I_1\times I_2\times I_3\subset \R^3$ is
\emph{$m$--diadic} if the lengths of all $I_i$ equal $\frac{1}{2^m}$
and the endpoints of $I_i$ lie in $\frac{1}{2^m}\Z$.
Assume first, for simplicity, that we are in the special case where
there is $m>0$ such that the open cover $\mathcal{U}$ consists of
the interiors of unions of pairs of adjacent $m$--diadic cubes. Let
$\Gamma\subset \R^3$ be the closed set which is the union of all
lines parallel to coordinate axes with the fixed coordinate in
$\frac{1}{2^m}\Z$. In other words $\Gamma$ is the union of
$1$--skeleta of all $m$--diadic cubes. Observe that $\Gamma$ is
disjoint from $N^3_1$: every point of $\Gamma$ has at least two
rational coordinates, whereas every point of the N\"obeling curve has
at most one.
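As an illustrative aside (not part of the proof; the function name is ours), the disjointness of $\Gamma$ and $N^3_1$ can be probed computationally: a point of $\Gamma$ has at least two coordinates in $\frac{1}{2^m}\Z$, hence at least two rational coordinates, while a point of $N^3_1$ has at most one rational coordinate. Of course a computer can only test rational points, so the sketch below checks grid membership, not irrationality.

```python
from fractions import Fraction

def on_grid(p, m):
    """Test membership in the grid Gamma: a point of R^3 lies on one of
    the grid lines iff at least two of its coordinates are integer
    multiples of 1/2^m (the other coordinate runs along the line)."""
    step = Fraction(1, 2 ** m)
    return sum(1 for c in p if c % step == 0) >= 2

# A grid point has at least two rational coordinates, so it cannot lie
# in the Noebeling curve N^3_1 (points with at most one rational
# coordinate); a point with no coordinate in (1/4)Z avoids the grid.
p = (Fraction(1, 2), Fraction(3, 4), Fraction(1, 3))
q = (Fraction(1, 3), Fraction(1, 5), Fraction(1, 7))
assert on_grid(p, 2) and not on_grid(q, 2)
```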
Let $f_n \colon I \rightarrow N^3_1$ be the family of paths which we
want to approximate. We describe the construction of $g_n$ for a
fixed $n\in \N$.
There is a partition $J_1\cup I_1\cup\ldots \cup I_{l-1}\cup J_l$ of
$I$ into closed intervals with disjoint interiors, satisfying the
following. There are $m$--diadic open cubes $V_k$, where $0\leq
k\leq l$, satisfying $f_n(I_k)\subset V_k$ and $f_n(J_k)\subset
\mathrm{int}\, \overline{V_{k-1}\cup V_k}$. Denote the endpoints of
$J_k$ by $s_k,t_k$. We can assume $l\geq 1$.
For each pair of adjacent cubes $V_{k-1}, V_k$ we denote by
$\Gamma_k\subset \Gamma$ the square loop which is the intersection
of $\Gamma$ with $\partial V_{k-1}\cap \partial V_k$. For $A\subset
\R^3$ and $\delta>0$ we denote by $N_\delta(A)$ the open
$\delta$--neighbourhood of $A$ in $\R^3$.
For each $1< k\leq l$ we choose some $p_k\in
V_{k-1}\cap N_\frac{1}{n}(\Gamma_k)$ satisfying $p_k\in N^3_1$. We
put $g_n(s_k)=p_k$. Analogously, for each $1\leq k< l$ we choose
some $q_k\in V_k\cap N_\frac{1}{n}(\Gamma_k)$
satisfying $q_k\in N^3_1$. We put $g_n(t_k)=q_k$. We also put
$g_n(0)=q_1$ and $g_n(1)=p_l$, and for notational convenience we set
$p_1=q_1$ and $q_l=p_l$ (so that the paths $h^{J_1}$ and $h^{J_l}$
chosen below are defined).
For $0<k<l$ we choose paths $h^{I_k}$ between $q_k$ and $p_{k+1}$ in
the open sets $V_k\cap N_\frac{1}{n}(\Gamma)$. This is possible,
since the latter sets are neighborhoods of $1$--skeleta of $V_k$,
hence they are path connected. We define the path $g_n$ on $I_k$ by
slightly perturbing $h^{I_k}$ relative to the endpoints so that we
obtain paths in $N^3_1$.
Similarly, for $1\leq k\leq l$ we choose paths $h^{J_k}$ between
$p_k$ and $q_k$ in the open sets $\mathrm{int}\,
\overline{V_{k-1}\cup V_k}\cap N_\frac{1}{n}(\Gamma_k)$. The latter
sets are path connected because $\Gamma_k$ are $1$--spheres. We
define the path $g_n$ on $J_k$ by slightly perturbing $h^{J_k}$
relative to the endpoints so that we obtain paths in $N^3_1$.
By construction, the paths $g_n$ are $\mathcal{U}$--close to $f_n$,
which means that they satisfy property (Approximation). Moreover,
for each $n$ the image of the path $g_n$ is contained in
$N_\frac{1}{n}(\Gamma)$, where $\Gamma$ is a closed set disjoint
from $N^3_1$. This yields property (Local finiteness). This ends the
proof in the case of a special $\mathcal{U}$.
\medskip
In general, we may only assume that $\mathcal{U}$ consists of the
interiors of unions of pairs of adjacent $m$--diadic cubes without
the assumption that $m$ is fixed. In other words the cubes might be
arbitrarily small. However, we can at least assume that no element
of $\mathcal{U}$ is properly contained in another one. We also note
the property that if two open diadic cubes intersect, then one of
them is contained in the other. We define the ``attracting
grid'' $\Gamma$ as the complement in $\R^3$ of the union of all
elements of $\mathcal{U}$.
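The nesting property of open diadic cubes noted above reduces coordinatewise to its one-dimensional analogue: two open diadic intervals are either disjoint or nested. As a small aside (not part of the argument; names are of our choosing), this can be checked exhaustively for small parameters:

```python
from fractions import Fraction
from itertools import combinations

def diadic_interval(m, j):
    # the open diadic interval (j/2^m, (j+1)/2^m)
    return (Fraction(j, 2 ** m), Fraction(j + 1, 2 ** m))

def relation(a, b):
    """Classify two open intervals: 'disjoint', 'nested' (one contains
    the other, possibly equal), or 'overlap' (proper crossing)."""
    (a0, a1), (b0, b1) = a, b
    if a1 <= b0 or b1 <= a0:
        return "disjoint"
    if (a0 <= b0 and b1 <= a1) or (b0 <= a0 and a1 <= b1):
        return "nested"
    return "overlap"

# Exhaustively verify, for small m, that open diadic intervals in [0,1]
# never cross properly; the statement for open diadic cubes in R^3
# follows by applying this in each coordinate.
intervals = [diadic_interval(m, j) for m in range(5) for j in range(2 ** m)]
assert all(relation(a, b) != "overlap" for a, b in combinations(intervals, 2))
```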
\medskip\par\noindent\textbf{Claim.}\ignorespaces
\
\begin{enumerate}[(i)]
\item
For each pair of adjacent cubes $V_1,V_2$ with $\mathrm{int} \,
\overline{V_1\cup V_2}\in \mathcal{U}$, the square loop which is the
boundary of the common face of $V_1$ and $V_2$ is contained in
$\Gamma$.
\item
Let $V$ be a maximal (open) cube among all cube pairs from
$\mathcal{U}$. Then $\partial V\cap \Gamma$ is connected (and
non-empty).
\end{enumerate}
Assertion (i) of the claim follows directly from the maximality
assumption on the elements of $\mathcal{U}$. For assertion (ii)
observe first that $\partial V$ is a $2$--sphere. We obtain
$\partial V\cap \Gamma$ by removing from $\partial V$ the
intersections with elements of $\mathcal{U}$. By maximality
assumption on $V$, these elements have the form $\mathrm{int} \,
\overline{V_1\cup V_2}$, where $V_1\subset V$ and $V_2\subset
\R^3\setminus V$. Each intersection of such a set with $\partial V$
is an open $2$--disc. By the maximality assumption on the elements
of $\mathcal{U}$, all those $2$--discs are disjoint. Hence $\partial
V\cap \Gamma$ is obtained from a $2$--sphere by removing a disjoint
union of open $2$--discs, which yields assertion (ii).
\medskip
We leave it to the reader to verify that the claim allows us to perform
the same argument as in the special case. \qed
\medskip
We are now prepared for the following.
\medskip\par\noindent\textbf{Proof of Proposition~\ref{main proposition}.}\ignorespaces
\ By Theorem~\ref{tracks form basis}, we may assume that
$\mathcal{U}\subset \mathcal{U}(\S)$, where $\mathcal{U}(\S)$ is the
open neighbourhood basis coming from some fixed good partition
sequence $\S=((\Tau_k, \Sigma_k))_k$ (see Definition~\ref{partition
sequence} and Corollary~\ref{properties of Us}). Let
$\mathcal{U}'\subset \mathcal{U}(\S)$ be an open cover which is a
refinement of $\mathcal{U}$ satisfying the assertion of Lemma
\ref{local perturbing}(i). In other words, we require that for any
$U'\in \mathcal{U}'$, there is $U=U(U')\in \mathcal{U}$ and
$\e=\e(U')>0$, so that we have the following. For any $\mu\in
V'=\Psi(U')$ with $\phi(\mu)\in \AML$, if $\lambda\in \EL$ and
$\phi(\mu)^*\subset N^{PT}_\e(\tilde{\lambda})$ for some diagonal
extension $\tilde{\lambda}$ of $\lambda$, then $\lambda\in U$.
Without loss of generality we may assume that whenever $U(\eta_1)
\cup U(\sigma) \cup U(\eta_2)$ belongs to $\mathcal{U}'$, then also
$U(\eta_1)$ and $U(\eta_2)$ belong to $\mathcal{U}'$.
We say that a train track $\tau\in \Tau_k\cup \Sigma_k$
\emph{participates in} $\mathcal{U}'$, if $U(\tau)\in \mathcal{U}'$
or $\tau$ equals $\sigma$ for $U(\eta_1)\cup U(\sigma)\cup
U(\eta_2)\in \mathcal{U}'$. Let $\Tau'\subset \bigcup_k\Tau_k$ be
the family of all complete train tracks $\eta$ participating in
$\mathcal{U}'$ with maximal $V(\eta)$ with respect to inclusion. In
other words, we take all complete train tracks participating in
$\mathcal{U}'$ and remove those $\eta$ whose $V(\eta)$ is properly
contained in some $V(\eta')$, where $\eta'$ is also participating in
$\mathcal{U}'$. Note that by property (Subdivision) $V(\eta)$ and
$V(\eta')$ can only intersect if one is contained in the other. We
denote the family of $U(\eta)$ over $\eta\in \Tau'$ by
$\mathcal{U}(\Tau')$. Denote
$\mathcal{V}(\Tau')=\Psi(\mathcal{U}(\Tau'))$. Since $V(\eta)$ were
required to be maximal, the elements of $\mathcal{V}(\Tau')$ are
pairwise disjoint. Hence the elements of $\mathcal{U}(\Tau')$ are
also pairwise disjoint.
Let $\Sigma'\subset \bigcup_k\Sigma_k$ be the family of all nearly
complete train tracks $\sigma$ participating in $\mathcal{U}'$ with
maximal $V(\sigma)$ with respect to inclusion, among all $V(\tau)$ with
$\tau$ participating in $\mathcal{U}'$. We denote the family of
$U(\sigma)$ over $\sigma\in \Sigma'$ by $\mathcal{U}(\Sigma')$ and
we put $\mathcal{V}(\Sigma')=\Psi(\mathcal{U}(\Sigma'))$. The
elements of $\mathcal{V}(\Sigma')$ are pairwise disjoint and
disjoint from the elements of $\mathcal{V}(\Tau')$. Hence the
elements of $\mathcal{U}(\Sigma')$ are also pairwise disjoint and
disjoint from the elements of $\mathcal{U}(\Tau')$.
Let $\Gamma\subset \PML$ be the closed set which is the complement
of the union of all sets in $\mathcal{V}'=\Psi(\mathcal{U}')$. We have
$\Gamma \cap \PEML=\emptyset$.
\medskip\par\noindent\textbf{Claim 1.}\ignorespaces
\
\begin{enumerate}[(i)]
\item
For any $\sigma\in \Sigma'$, we have $\partial V(\sigma)\subset
\Gamma$.
\item
For any $\eta\in\Tau'$ the set $\partial V(\eta) \cap \Gamma$ is
connected and non-empty.
\end{enumerate}
\medskip\par\noindent\textbf{Proof of Claim 1.}\ignorespaces
\begin{enumerate}[(i)]
\item Let $\mu\in \partial V(\sigma)$. If
$\mu\notin \Gamma$, then there is $V'\in \mathcal{V}'$ with $\mu\in
V'$. Since $V'$ is open in $\PML$, it intersects $V(\sigma)$. The
set $V'$ is of the form $V'=V(\eta)$ or $V'=V(\eta_1)\cup
V(\sigma')\cup V(\eta_2)$. Thus $V(\sigma)$ intersects $V(\tau)$,
for $\tau$ equal to one of $\eta, \eta_i,\sigma'$. Since $\sigma\in
\Sigma'$, we have $V(\tau)\subset V(\sigma)$. Hence $\tau$ is a
nearly complete train track, and therefore $V'=V(\eta_1)\cup
V(\sigma')\cup V(\eta_2)$ where $\sigma'$ is equal to $\tau$. Since
$\sigma\in\Sigma'$, we have $V(\sigma)\supset V(\sigma')$. By
hypothesis $\mu$ is outside of $V(\sigma)$, hence it lies in
$V(\eta_i)$ for some $i$. But then $V(\eta_i)$ intersects
$V(\sigma)$ and like before we get $V(\eta_i)\subset V(\sigma)$,
which is a contradiction.
\item First note that $\partial V(\eta)$ is a
$2$--sphere. Moreover, $\partial V(\eta)$ is disjoint from any $V(\eta')$, for
$\eta'$ participating in $\mathcal{U'}$: otherwise $V(\eta')$
intersects $V(\eta)$ and by maximality of $V(\eta)$ (since $\eta\in
\Tau'$) we have $V(\eta')\subset V(\eta)$, which is a contradiction.
Hence, if for some $V'\in \mathcal{V}'$ the intersection $\partial
V(\eta)\cap V'$ is non-empty, then $V'=V(\eta_1)\cup V(\sigma)\cup
V(\eta_2)$, where $\sigma\in \Sigma'$, and $\partial V(\eta)\cap
V'\subset V(\sigma)$. Moreover, since $V'$ is open, we have that
$V(\sigma)$ or one of $V(\eta_i)$ intersects $V(\eta)$. Since
$\sigma\in \Sigma'$, this must be one of $V(\eta_i)$. Because
$\eta\in \Tau'$, we then have $V(\eta_i)\subset V(\eta)$. In
particular, $P(\sigma) \subset
\partial V(\eta_i) \subset P(\eta)$. Again, since $\sigma\in
\Sigma'$ we have $P(\sigma)\subset \partial V(\eta)$. Summarizing,
for any $V'\in \mathcal{V}'$ we have $\partial V(\eta)\cap
V'=V(\sigma)$, for some $\sigma\in \Sigma'$, which is an open
$2$--disc. Hence $\partial V(\eta)\cap \Gamma$ is obtained from the
$2$--sphere $\partial V(\eta)$ by removing a (possibly infinite)
union of disjoint open $2$--discs.
\end{enumerate}
\qed
\medskip
Let $f_n\colon I\rightarrow \EL$, where $n\in \N$, be the family of
paths which we want to approximate. We now independently construct
the paths $g_n$. To this end, we fix $n\in\N$ and note the
following.
\medskip\par\noindent\textbf{Claim 2.}\ignorespaces
\ There is a partition $I_0\cup J_1\cup I_1\cup\ldots \cup J_l\cup I_l$ of $I$
into closed intervals with disjoint interiors, with possibly empty
$I_0, I_l$, satisfying the following (see Figure~\ref{fig:path}).
\begin{itemize}
\item
$f_n(I_k)\subset U'_k$, for some $U'_k\in \mathcal{U}(\Tau')\subset
\mathcal{U}'$, if $I_k$ is non-empty, where $0\leq k\leq l$.
\item
$f_n(J_k)\subset \hat{U}_k$, where $\hat{U}_k=U(\eta^1_k)\cup
U(\sigma_k)\cup U(\eta^2_k)\in \mathcal{U'}$ with $\sigma_k\in
\Sigma'$. Moreover, for $j=k-1,k$ there is $i$ such that
$U(\eta^i_k)\subset U'_j$, if $I_j$ is non-empty, where $1\leq k\leq
l$.
\end{itemize}
\medskip\par\noindent\textbf{Proof of Claim 2.}\ignorespaces
\ For the proof it is convenient to introduce the following
terminology. We call an element $U$ of $\mathcal{U}(\S)$ a
\emph{vertex block} if it is of the form $U(\eta)$ for some complete
train track $\eta$. We call the other elements of $\mathcal{U}(\S)$
\emph{edge blocks}.
\begin{figure}[htbp!]
\centering
\scalebox{0.75}{
\psfrag{v1}{$V'_0$}
\psfrag{v2}{$V'_1$}
\psfrag{v3}{$V'_2$}
\psfrag{v4}{$V'_3$}
\psfrag{v5}{$V'_4$}
\psfrag{w1}{$\hat{V}_1$}
\psfrag{w2}{$\hat{V}_2$}
\psfrag{w3}{$\hat{V}_3$}
\psfrag{w4}{$\hat{V}_4$}
\includegraphics[width=\textwidth]{path} }
\caption{Combinatorics of vertex blocks and edge blocks.}
\label{fig:path}
\end{figure}
Consider now the family $\mathcal{Y}$ consisting of all
vertex blocks in $\mathcal{U}(\Tau')\subset \mathcal{U}'$ and of the edge
blocks $U(\eta^1)\cup U(\sigma)\cup U(\eta^2)\in \mathcal{U'}$ with
$\sigma\in \Sigma'$, where we pick only one such set for each
$\sigma$. Observe that
$\mathcal{Y}$ forms an open cover of $\EL$. By compactness of $I$, a
finite subset of $\mathcal{Y}$ covers $f_n(I)$. In particular, there
is a partition $\mathcal{I}$ of $I$ into finitely many nontrivial
closed intervals with disjoint interiors so that each interval is
mapped into a set of $\mathcal{Y}$. Observe that two consecutive
intervals in $\mathcal{I}$ cannot be mapped into different vertex
blocks, since the latter are disjoint. Moreover, if two consecutive
intervals $I_-, I_+\in \mathcal{I}$ are mapped into edge blocks
$U(\eta_-^1)\cup U(\sigma_-)\cup U(\eta_-^2),\ U(\eta_+^1)\cup
U(\sigma_+)\cup U(\eta_+^2)\in \mathcal{U'}$, then we have the
following. Since $\sigma_-\neq \sigma_+$, we have that $f_n(I_-\cap
I_+)$ lies in, say, $U(\eta_-^1)\cap U(\eta_+^1)$, hence
$U(\eta_-^1)$ and $U(\eta_+^1)$ are contained in the same vertex
block $U'\in \mathcal{U}(\Tau')$. We can then represent $I_-\cup
I_+=I_-'\cup J\cup I_+'$ with $I_-'\subset I_-, \ I_+'\subset I_+$
and $f_n(J)\subset U'$, where $J$ is nontrivial. To conclude this
discussion, we can assume that the intervals in $\mathcal{I}$ are
mapped alternately into vertex blocks and edge blocks of
$\mathcal{Y}$. Furthermore, observe that for each pair of
consecutive intervals in $\mathcal{I}$ mapped by $f_n$ to $U(\eta)
\in \mathcal{Y}$ and $U(\eta^1)\cup U(\sigma)\cup U(\eta^2) \in
\mathcal{Y}$ there is some $i$ with $U(\eta^i)\subset U(\eta)$. This
gives rise to $I_k,J_k$ as required.\qed
\medskip
From now on we fix the objects and the notation as in Claim 2.
Before we describe the construction of the path $g_n$, note that to
guarantee property (Approximation) it suffices that $g_n$ satisfies the following
two properties.
\begin{description}
\item[(Approximation i)]
For each $0\leq k\leq l$ such that $I_k$ is non-empty we have
$g_n(I_k)\subset U(U'_k)$.
\item[(Approximation ii)]
For each $1\leq k\leq l$ we have $g_n(J_k)\subset U(\hat{U}_k)$.
\end{description}
\begin{figure}[htbp!]
\centering
\scalebox{0.6}{
\psfrag{g}{$N_{\frac{1}{n}}(\Gamma)$}
\psfrag{pi}{$p_k$}
\psfrag{qi}{$q_k$}
\psfrag{qii}{$p_{k+1}$}
\psfrag{hj}{$h^{J_k}$}
\psfrag{hi}{$h^{I_k}$}
\includegraphics[width=\textwidth]{gamma2} }
\caption{Construction of approximating paths attracted by $\Gamma$.}
\label{fig:gamma}
\end{figure}
\medskip
At this point we can finally define the path $g_n$. Denote
$V'_k=\Psi(U'_k)$ and $\hat{V}_k=\Psi(\hat{U}_k)$. If $l=0$, we
choose any point $p\in V'_0\cap N_ \frac{1}{n}(\Gamma)$ satisfying
$\phi(p)\in \EL$. This is possible since the open set $V'_0\cap N_
\frac{1}{n}(\Gamma)$ is non-empty by Claim 1(ii) and $\MPML$ is
dense in $\PML$. We put $g_n(I) \equiv p$, which obviously satisfies
properties (Approximation i) and (Approximation ii).
From now on we assume $l\geq 1$. Denote the endpoints of $J_k$ by
$s_k,t_k$. First we claim that for any $1\leq k \leq l$ with $s_k
\neq 0$ the open set $V'_{k-1}\cap \hat{V}_k\cap
N_\frac{1}{n}(\partial V(\sigma_k))$ is non-empty. Indeed, there is
an $i$ satisfying $V(\eta_k^i) \subset V'_{k-1}$. Then we have
$\partial V(\sigma_k) \subset \partial V(\eta_k^i) \subset
\overline{V}_{k-1}'$. Hence $\partial V(\sigma_k)$ is contained in
the closure of $V'_{k-1}\cap \hat{V}_k = V(\eta^i_k)$ and the claim
follows.
Thus for each $1\leq k\leq l$ with $s_k\neq 0$ we can choose some
$p_k\in V'_{k-1}\cap \hat{V}_k\cap N_\frac{1}{n}(\partial
V(\sigma_k))$ satisfying $\phi(p_k)\in \EL$. We put
$g_n(s_k)=\phi(p_k)$. Analogously, for each $1\leq k\leq l$ with
$t_k\neq 1$ we choose some $q_k\in V'_k\cap \hat{V}_k\cap
N_\frac{1}{n}(\partial V(\sigma_k))$ satisfying $\phi(q_k)\in \EL$.
We put $g_n(t_k)=\phi(q_k)$. By Claim 1(i) all $p_k,q_k$ lie in
$N_\frac{1}{n}(\Gamma)$.
If $s_1=0$ (i.e.\ if $I_0$ is empty), then we put $g_n(0)=q_1$,
otherwise we put $g_n(I_0)\equiv p_1$. Analogously, if $t_l=1$
(i.e.\ if $I_l$ is empty), then we put $g_n(1)=p_l$, otherwise we
put $g_n(I_l)\equiv q_l$.
\medskip
For $0<k<l$, we choose some PL almost filling paths $h^{I_k}$ between $q_k$ and $p_{k+1}$ in
the sets $V'_k\cap N_\frac{1}{n}(\Gamma)$ (see Figure~\ref{fig:gamma}).
This is possible, since the latter sets are open and path connected by
Claim 1(ii). To each $h^{I_k}$ we apply Lemma~\ref{Gabai} with $\e =
\e'_n = \min \{\e(U'_k),\ \frac{1}{n}\}$ (and any $\delta$). We obtain
paths $g_n^{I_k}\colon I_k\rightarrow\EL$ with endpoints $\phi(q_k),
\phi(p_{k+1})$, such that the image of $g_n^{I_k}$ lies in
$U(U'_k)$. We define $g_n$ on $I_k$ to be equal to $g_n^{I_k}$,
which gives property (Approximation i).
Similarly, for $1\leq k\leq l$ we choose some PL almost
filling paths $h^{J_k}$ between $p_k$ and $q_k$ in the sets $\hat{V}_k\cap N_\frac{1}{n}(\partial V(\sigma_k))$ (also see Figure~\ref{fig:gamma}).
This is possible since the latter sets are open and path
connected (because $\partial V(\sigma_k)$ are $1$--spheres).
To each
$h^{J_k}$ we apply Lemma~\ref{Gabai} with $\e = \hat{\e}_n = \min
\{\e(\hat{U}_k),\ \frac{1}{n}\}$. We obtain paths
$g_n^{J_k}\colon J_k\rightarrow\EL$ with endpoints $\phi(p_k),
\phi(q_k)$, such that the image of $g_n^{J_k}$ lies in
$U(\hat{U}_k)$. We define $g_n$ on $J_k$ to be equal to $g_n^{J_k}$,
which gives property (Approximation ii).
This concludes the construction of the paths $g_n\colon I
\rightarrow \EL$. By the discussion above they satisfy property
(Approximation).
\medskip
It remains to verify property (Local finiteness). Let $\lambda_0\in
\EL$. Let $V\in \mathcal{V}(\S)$ be a neighbourhood of
$\phi^{-1}(\lambda_0)$ such that its closure is disjoint from
$\Gamma$ (guaranteed by Theorem~\ref{tracks form basis}(Fineness)).
Put $U=\phi(V\cap \MPML)\in \mathcal{U}(\S)$. Let $U'\subset U$ and
$\e>0$ be as in the assertion of Lemma~\ref{local perturbing}(ii)
applied to $U$ and $\lambda_0$. For sufficiently large $n$ we have
that $V$ is disjoint from $N_\frac{1}{n}(\Gamma)$ and $\frac{1}{n}\leq\e$.
Then both $\hat{\e}_n$ and $\e'_n$ are smaller than $\e$, and
therefore the image of $g_n$ is outside $U'$. \qed
\begin{bibdiv}
\begin{biblist}
\bib{BS}{article}{
author={Bonk, M.},
author={Schramm, O.},
title={Embeddings of Gromov hyperbolic spaces},
journal={Geom. Funct. Anal.},
volume={10},
date={2000},
number={2},
pages={266--306}
}
\bib{Bo}{article}{
author={Bowers, P. L.},
title={Dense embeddings of sigma-compact, nowhere locally compact metric
spaces},
journal={Proc. Amer. Math. Soc.},
volume={95},
date={1985},
number={1},
pages={123--130}
}
\bib{BH}{book}{
author={Bridson, M. R.},
author={Haefliger, A.},
title={Metric spaces of non-positive curvature},
series={Grundlehren der Mathematischen Wissenschaften [Fundamental
Principles of Mathematical Sciences]},
volume={319},
publisher={Springer-Verlag},
place={Berlin},
date={1999},
pages={xxii+643}
}
\bib{Cu}{article}{
author={Curtis, D. W.},
title={Boundary sets in the Hilbert cube},
journal={Topology Appl.},
volume={20},
date={1985},
number={3},
pages={201--221}
}
\bib{Du}{article}{
author={Dugundji, J.},
title={Absolute neighborhood retracts and local connectedness in
arbitrary metric spaces},
journal={Compositio Math.},
volume={13},
date={1958},
pages={229--246 (1958)}
}
\bib{Eng}{book}{
author={Engelking, R.},
title={Dimension theory},
note={Translated from the Polish and revised by the author;
North-Holland Mathematical Library, 19},
publisher={North-Holland Publishing Co.},
place={Amsterdam},
date={1978},
pages={x+314 pp. (loose errata)}
}
\bib{G}{article}{
author={Gabai, D.},
title={Almost filling laminations and the connectivity of ending
lamination space},
journal={Geom. Topol.},
volume={13},
date={2009},
number={2},
pages={1017--1041}
}
\bib{Ham}{article}{
author={Hamenst{\"a}dt, U.},
title={Train tracks and the Gromov boundary of the complex of curves},
conference={
title={Spaces of Kleinian groups},
},
book={
series={London Math. Soc. Lecture Note Ser.},
volume={329},
publisher={Cambridge Univ. Press},
place={Cambridge},
},
date={2006},
pages={187--207}
}
\bib{Ham2}{article}{
author={Hamenst{\"a}dt, U.},
title={Geometric properties of the mapping class group},
conference={
title={Problems on mapping class groups and related topics},
},
book={
series={Proc. Sympos. Pure Math.},
volume={74},
publisher={Amer. Math. Soc.},
place={Providence, RI},
},
date={2006},
pages={215--232}
}
\bib{Ham_MCG1}{article}{
author={Hamenst{\"a}dt, U.},
title={Geometry of the mapping class groups. I. Boundary amenability},
journal={Invent. Math.},
volume={175},
date={2009},
number={3},
pages={545--609}}
\bib{KLT}{article}{
author={Kawamura, K.},
author={Levin, M.},
author={Tymchatyn, E. D.},
title={A characterization of 1-dimensional N\"obeling spaces},
booktitle={Proceedings of the 12th Summer Conference on General Topology
and its Applications (North Bay, ON, 1997)},
journal={Topology Proc.},
volume={22},
date={1997},
number={Summer},
pages={155--174}
}
\bib{Kla}{article}{
author={Klarreich, E.},
title={The boundary at infinity of the curve complex and the
relative Teichm\"uller space},
eprint={http://www.msri.org/people/members/klarreic/curvecomplex.ps},
date={1999}
}
\bib{LS}{article}{
author={Leininger, C.},
author={Schleimer, S.},
title ={Connectivity of the space of ending laminations},
status={preprint},
eprint={arXiv:0801.3058},
date={2008}
}
\bib{LMS}{article}{
author={Leininger, C.},
author={Mj, M.},
author={Schleimer, S.},
title ={Universal Cannon--Thurston maps and the boundary of the curve complex},
status={preprint},
eprint={arXiv:0808.3521},
date ={2008}
}
\bib{Luo}{article}{
author={Luo, F.},
title={Automorphisms of the complex of curves},
journal={Topology},
volume={39},
date={2000},
number={2},
pages={283--298}
}
\bib{MM}{article}{
author={Masur, H. A.},
author={Minsky, Y. N.},
title={Geometry of the complex of curves. I. Hyperbolicity},
journal={Invent. Math.},
volume={138},
date={1999},
number={1}
}
\bib{Mos}{article}{
author={Mosher, L.},
title ={Train track expansions of measured foliations},
date={2003},
status={unpublished manuscript},
eprint={http://andromeda.rutgers.edu/~mosher/arationality_03_12_28.pdf}
}
\bib{N}{article}{
author={Nag\'orko, A.},
title ={Characterization and topological rigidity of N\"obeling manifolds},
date={2006},
status={submitted},
note={PhD thesis},
eprint={arXiv:0602574}
}
\bib{PH}{book}{
author={Penner, R. C.},
author={Harer, J. L.},
title={Combinatorics of train tracks},
series={Annals of Mathematics Studies},
volume={125},
publisher={Princeton University Press},
place={Princeton, NJ},
date={1992},
pages={xii+216}
}
\bib{Th}{article}{
author={Thurston, W. P.},
title ={The geometry and topology of three-manifolds},
publisher={Princeton Univ. Math. Dept. Lecture Notes},
date={1980},
eprint={http://msri.org/publications/books/gt3m/}
}
\end{biblist}
\end{bibdiv}
\end{document}
\section{The CDMS experiment}
\label{sec:cdms_experiment}
The Cryogenic Dark Matter Search (CDMS) experiment is designed to search for WIMPs through their elastic scattering off nuclei. The experiment is located in the Soudan Underground Laboratory at 2090 meters of water equivalent (m.w.e.) depth. Nineteen Ge (250~grams each) and eleven Si (100~grams each) detectors are mounted in five towers, each tower consisting of six vertically stacked detectors. Each detector is a high-purity Ge or Si crystal in the shape of a 1~cm thick, 7.6~cm diameter disk and carries four photolithographically patterned phonon sensors on one side and two concentric ionization electrodes on the other. The detectors are operated at $\sim$40 mK to collect the athermal phonons caused by particle interactions in the crystal. A combination of active and passive shielding is used to reject events caused by cosmogenic muons and to reduce the external environmental radioactivity. A detailed description of the CDMS apparatus is given in~\cite{cdms_PRD2005}.
At the heart of the experiment are Z(depth)-sensitive Ionization Phonon (ZIP) detectors, which measure the ionization and phonon signals from the interaction of particles with the crystal. External gammas and beta particles interact with an electron in the crystal (``electron recoils''), whereas neutrons and WIMPs interact with a nucleus (``nuclear recoils''). The main signature of nuclear-recoil events is that they produce roughly a third as many charge pairs as electron recoils. Four independent phonon sensors provide phonon energy and position information for each event. Inner and outer electrodes on the ``ionization side'' veto events from the outer part of the detector and provide the inner ionization-energy measurement. The independent, simultaneous measurements of the phonon and ionization energies of an event allow us to discriminate between nuclear and electron recoils by examining the ratio of ionization to phonon recoil energy (the ``ionization yield'' parameter). The ionization yield alone provides an electron-recoil rejection factor of $>10^{4}$, which rises to $10^{6}$ when combined with timing information.
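As an illustration of the yield-based discrimination just described, the following sketch classifies events by a simple yield cut. The band boundaries and energies are hypothetical placeholders: the actual CDMS bands are defined energy-dependently from calibration data.

```python
# Sketch of yield-based recoil discrimination. The nuclear-recoil band
# (0.1-0.5 here) is a hypothetical placeholder, not the CDMS band definition.
def ionization_yield(e_ionization_kev, e_recoil_kev):
    """Ratio of ionization energy to phonon recoil energy."""
    return e_ionization_kev / e_recoil_kev

def classify_recoil(e_ionization_kev, e_recoil_kev, nr_band=(0.1, 0.5)):
    """Nuclear recoils ionize much less than electron recoils,
    so their yield falls into a lower band."""
    y = ionization_yield(e_ionization_kev, e_recoil_kev)
    lo, hi = nr_band
    if lo <= y <= hi:
        return "nuclear recoil"
    if y > hi:
        return "electron recoil"
    return "unclassified"

print(classify_recoil(48.0, 50.0))  # yield ~0.96: electron recoil
print(classify_recoil(15.0, 50.0))  # yield ~0.30: nuclear recoil
```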
Passive and active shielding surround the icebox where the ZIP detectors and the cold hardware are located. Passive shielding consists of layers of lead to reduce external gammas with the inner layer made of ultra-low-activity ancient lead, and polyethylene to moderate neutrons from fission as well as from $(\alpha,n)$ processes due to U/Th decays. The active shielding consists of a muon veto system to reject events from cosmogenic muons or showers. Extensive Monte Carlo simulations of nuclear recoil events that are caused by neutrons due to radioactive processes or cosmogenic muons give an upper limit of $<0.1$ events from each source for the WIMP-search data presented in the next section.
\section{Data analysis and WIMP-search results}
\label{sec:wimp_search}
Data collected between October 2006 and July 2007, corresponding to cryogenic runs R123--R124, were analyzed, and the analysis summary is presented here. For the analysis, 15 good Ge detectors were used in R123 and 7 in R124, giving a total raw exposure of 397.8 kg-days. Calibration data with $^{133}$Ba and $^{252}$Cf radioactive sources were taken periodically to characterize the detectors' performance; they were also used to define the electron- and nuclear-recoil bands (Fig.~\ref{fig:yield}) and for WIMP-search efficiency studies.
The dominant background for this analysis consists of low-yield electron-recoil events, also called ``surface'' events. They originate from interactions in the first few microns of the crystal surfaces, have a suppressed ionization signal, and can be misidentified as nuclear recoils. The timing characteristics of the phonon pulse can be used to discriminate against surface events. A linear combination of the leading phonon-pulse risetime and the delay of the phonon signal with respect to the ionization signal was employed; its distribution for the calibration data is shown in Fig.~\ref{fig:timing}. Surface events have smaller risetime and delay values. The surface-event rejection factor from the timing quantity is $>200$, which improves the overall rejection of electron recoils to $>10^{6}$.
WIMP-search candidate events were required to lie inside the 2$\sigma$ nuclear-recoil band in ionization yield, to be veto-anticoincident single scatters within the detector fiducial volume, and to satisfy data-quality, noise-rejection, and phonon-timing cuts. The detection efficiency was calculated to be $\sim$31\%, giving a total spectrum-averaged exposure of 121 kg-days after all cuts for a 60 GeV/c$^{2}$ WIMP. The estimated number of surface events leaking into the signal region, based on the observed numbers of single- and multiple-scatter events within and surrounding the 2$\sigma$ nuclear-recoil band, is $0.6^{+0.5}_{-0.3}(\mathrm{stat})^{+0.3}_{-0.2}(\mathrm{syst})$ events~\cite{CDMS-09}.
The blinded WIMP-search signal region was unmasked after all analysis cuts were finalized. No events were observed, and the 90\% CL upper limit on the spin-independent WIMP-nucleon cross section reaches a minimum of $6.6\times10^{-44}$ cm$^{2}$ for a 60 GeV/c$^{2}$ WIMP with the currently analyzed dataset ($4.6\times10^{-44}$ cm$^{2}$ when combined with previous CDMS data).
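For zero observed events, the Poisson machinery behind such an upper limit is compact enough to sketch. The expected-event scale below is purely illustrative and is not the CDMS exposure model.

```python
import math

def poisson_upper_limit(n_observed, cl=0.90):
    """Classical Poisson upper limit on the signal mean. Only the
    zero-observed-events case is sketched: mu_UL = -ln(1 - CL)."""
    if n_observed != 0:
        raise NotImplementedError("only the zero-event case is sketched")
    return -math.log(1.0 - cl)

mu_ul = poisson_upper_limit(0, cl=0.90)  # ~2.303 expected signal events
# The cross-section limit scales this event ceiling by the number of
# events expected per unit cross section; the value below is purely
# illustrative (NOT the CDMS exposure model).
events_per_1e44_cm2 = 35.0
sigma_ul_cm2 = mu_ul / events_per_1e44_cm2 * 1e-44
print(round(mu_ul, 3), sigma_ul_cm2)
```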
\begin{figure}[t]
\centering
\includegraphics[width=80mm]{fig_1.pdf}
\caption{Ionization yield vs. the full recoil energy. Electron recoil band (blue dots, upper band) and nuclear recoil band (green dots, lower band) are defined with $^{133}$Ba and $^{252}$Cf calibration sources respectively.}
\label{fig:yield}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=80mm]{fig_2.pdf}
\caption{Timing parameter distributions for neutrons from $^{252}$Cf and low-yield events from $^{133}$Ba.}
\label{fig:timing}
\end{figure}
\subsection{Completing the 5-tower data run}
Data collected between July 2007 and September 2008 correspond to the four cryogenic runs R125--R128 and complete the data-taking with the 5-tower setup. The raw exposure of these four runs is $\sim$1.7 times that of R123--R124.
Improvements made since the previous analysis include a new, faster data-processing package written in C++, better timing-based position reconstruction for low-energy events, a functional-form time-domain fit for phonon pulses, and automation of several data-quality cuts. Timing discrimination of surface events compares well with the previous analysis and is shown in Fig.~\ref{fig:timing}.
A significant improvement was achieved in the procedure of setting the cut that allows a chosen number of surface events to leak into the signal region. An optimization technique was developed that varies the cut on each detector so as to maximize the total exposure-weighted neutron efficiency while keeping the total number of leaked events fixed. Systematic differences between Ba-calibration and WIMP-search data are accurately taken into account. The analysis is in progress, and final results are expected soon.
\section{Low-energy electron-recoil spectrum analysis}
\label{sec:low_energy}
The DAMA collaboration reported an excess in the detection rate in the low-energy spectrum, together with a modulation signal centered at $\sim$3 keV~\cite{DAMA-08}. Interpreting the result as nuclear-recoil interactions requires non-trivial explanations (e.g.~\cite{Pierce-09}) to reconcile it with the null results of other experiments~\cite{CDMS-09, XENON-08}. However, DAMA's result may instead be interpreted as the conversion of dark matter particles into electromagnetic energy in the detector, since the DAMA detector does not discriminate between electron and nuclear recoils. In that case it should be possible to observe the corresponding signal in the CDMS electron-recoil band.
The same dataset (R123 and R124) as described in section~\ref{sec:wimp_search} was analyzed, with the addition of those Ge detectors that were excluded from the WIMP-search analysis because of their poor surface-event discrimination but are adequate for the electron-recoil spectrum analysis. This increased the total raw exposure to 443.2 kg-days. Events were required to lie inside the 2$\sigma$ electron-recoil band in ionization yield. Other selection criteria were similar to those of the main WIMP-search analysis: events had to be veto-anticoincident single scatters within the detector fiducial volume satisfying data-quality and noise-rejection requirements. The detection efficiency varied from 30\% to 70\%, depending on the detector.
An efficiency-corrected, coadded low-energy spectrum is shown in Fig.~\ref{fig:low_energy_spec}. It is important to note that this analysis considered an electron-equivalent energy range of 2--8.5 keV based on the ionization signal. The spectrum was fit with the sum of a background model and three Gaussian distributions corresponding to the known spectral lines at 8.98 keV ($^{65}$Zn decay, a remnant of cosmogenic activation), 10.36 keV (X-rays and Auger electrons from $^{71}$Ge decay), and 6.54 keV, most likely due to de-excitation of $^{55}$Mn following electron-capture decay of $^{55}$Fe formed by cosmogenic activation of germanium. Note that the 8.98 and 10.36 keV lines lie outside the analysis window. The background rate is $\sim$1.5 counts/kg/day/keV.
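The shape of this fit can be sketched as a flat background plus Gaussians at the known line positions. The widths and amplitudes below are illustrative placeholders rather than the fitted CDMS values.

```python
import math

# Line positions in keV from the text (6.54: 55Mn; 8.98: 65Zn; 10.36: 71Ge).
# Widths and amplitudes are illustrative placeholders; the background is
# taken flat at ~1.5 counts/kg/day/keV.
LINES = [(6.54, 0.15, 2.0), (8.98, 0.15, 1.5), (10.36, 0.15, 3.0)]

def gaussian(e, mu, sigma, amp):
    return amp * math.exp(-0.5 * ((e - mu) / sigma) ** 2)

def rate_model(e_kev, flat_bkg=1.5):
    """Event-rate model: flat background plus the sum of Gaussian lines."""
    return flat_bkg + sum(gaussian(e_kev, mu, s, a) for mu, s, a in LINES)

print(rate_model(6.54))  # background + full 55Mn line amplitude
print(rate_model(4.0))   # far from any line: essentially the background
```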
\begin{figure}[t]
\centering
\includegraphics[width=80mm]{fig_3.pdf}
\caption{Efficiency corrected low-energy spectrum with the fit (red line) as described in the text.}
\label{fig:low_energy_spec}
\end{figure}
An unbinned log-likelihood fit was used to estimate the excess of the event rate above the background. The fit function was a simple sum of a Gaussian distribution (the signal) and the background model, together with the Gaussian for the 6.54 keV line multiplied by a weighting factor. The uncertainties in the production rate of $^{55}$Fe and in the time the detectors spent above ground prevent an accurate estimation of the $^{55}$Mn contribution to the spectrum; the weighting factor is therefore needed to suppress the importance of the 6.54 keV line in the background. By varying it, the most conservative limit on the excess of the event rate was adopted. The fit indicated no significant excess in the rate above the background.
An upper limit at the 90\% C.L. on the event-rate excess above the background is shown in Fig.~\ref{fig:low_energy_UL}~\cite{cdms_low_energy}, together with a naive $Z^{2}$ scaling of the Ge limits to the expected rate in NaI, allowing a comparison with the DAMA experiment. At 3.15 keV the upper-limit curve is 6.8$\sigma$ below the DAMA result. The insert in Fig.~\ref{fig:low_energy_UL} compares the upper limit on the modulation amplitude, assumed to be 6\% under the standard halo assumption, with the 2$\sigma$ region of the annual modulation from DAMA in the 2--4 keV and 2--6 keV ranges. The upper limits on the modulation are approximately half the value reported by DAMA.
\begin{figure}[t]
\centering
\includegraphics[width=80mm]{fig_4.pdf}
\caption{Upper limit at the 90\% C.L. on event rate excess above the background together with the DAMA result (data point with error bars). The insert shows the upper limit on the modulation amplitude and compares them with the DAMA modulation signal in 2--4 keV (red, filled) and 2--6 keV (green, hatched) ranges.}
\label{fig:low_energy_UL}
\end{figure}
\section{Axion search}
The low-energy electron-recoil spectrum can also be searched for a signal from axion-like dark matter particles. These might be relic axions, for which we set an upper limit on the axio-electric coupling $g_{a\bar{e}e}$, or solar axions, for which we set an upper limit on the Primakoff coupling $g_{a\gamma\gamma}$. Events were selected with the same requirements as in the low-energy electron-recoil spectrum analysis of section~\ref{sec:low_energy}.
For the low mass axion ($\sim$keV), pair production is kinematically forbidden. Thus, when interacting with the crystal, the axion is absorbed by a bound electron, which is then ejected from the atom, similar to the photoelectric effect. The interaction rate of the axion-like dark pseudoscalar is proportional to $A^{-1}g^{2}_{a\bar{e}e}\sigma_{p.e.}$, where $A$ is the atomic mass number~\cite{Pospelov:2008jk}. An expected event rate for germanium by the axio-electric coupling for $g_{a\bar{e}e}=10^{-10}$ is shown in the insert of Fig.~\ref{fig:coupling_aee}.
The expected observable from the interaction of axions with the Ge crystal is the peak at energy $m_{a}$ in the electron-recoil spectrum. The same profile likelihood calculation described in section~\ref{sec:low_energy} was used to set the upper limit on the axio-electric coupling in the absence of a statistically significant excess of event rate above the background. The 90\% C.L. upper limit on the coupling is shown in Fig.~\ref{fig:coupling_aee}~\cite{cdms_axions} together with the allowed region claimed by the DAMA experiment~\cite{Bernabei:2005ca} as a possible galactic axion interpretation of their signal\footnote{For comments on the DAMA allowed region see~\cite{Pospelov:2008jk}.}.
\begin{figure}[t]
\centering
\includegraphics[width=80mm]{fig_5.pdf}
\caption{The 90\% C.L. upper limit on the axio-electric coupling together with the other experiment results. The insert shows the expected event rate for germanium by the axio-electric coupling for $g_{a\bar{e}e}=10^{-10}$.}
\label{fig:coupling_aee}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=80mm]{fig_6.pdf}
\caption{Estimated spectrum of the solar axion flux at Earth.}
\label{fig:solar_flux}
\end{figure}
The estimated spectrum of the solar axion flux at Earth for a given axion-photon coupling is shown in Fig.~\ref{fig:solar_flux}. For axions interacting with a Ge crystal, the intense electric field in the proximity of a nucleus can trigger axion-to-photon conversion via the Primakoff effect. Light axions undergo Bragg scattering in the crystal, which implies that the axion energy is inversely proportional to the product of the reciprocal lattice vector and the direction to the Sun. Thus, a correlation of the expected rate with the position of the Sun is the signature of a solar axion signal.
\begin{figure}[t]
\centering
\includegraphics[width=80mm]{fig_7.pdf}
\caption{Calculated expected solar axion event rate in Ge for $g_{a\gamma\gamma}=10^{-8}$ GeV$^{-1}$.}
\label{fig:solar_rate}
\end{figure}
\begin{figure}[b]
\centering
\includegraphics[width=80mm]{fig_8.pdf}
\caption{The 95\% C.L. upper limit on the axion-photon coupling together with other experiment results.}
\label{fig:solar_result}
\end{figure}
Each detector in a tower is rotated by $60^{\circ}$ with respect to the one above it. Each crystal's alignment relative to true north is known to $\pm3^{\circ}$. The calculated solar-axion event rate expected in Ge for $g_{a\gamma\gamma}=10^{-8}$~GeV$^{-1}$ is shown in Fig.~\ref{fig:solar_rate}. An unbinned log-likelihood fit, similar to that described in section~\ref{sec:low_energy}, was used. The signal part of the fit consisted of the expected event rate depicted in Fig.~\ref{fig:solar_rate} multiplied by a scale factor for the actual value of the axion-photon coupling. The scaling factor returned by the fit is $(1\pm1.5)\times10^{-3}$ and is consistent with zero~\cite{cdms_axions}; thus we did not observe any signature of solar-axion conversion. The 95\% C.L. upper limit on the axion-photon coupling is shown in Fig.~\ref{fig:solar_result} together with other crystal-search experiments (SOLAX/COSME and DAMA).
\section{Summary}
The CDMS experiment has a world-leading upper limit on the WIMP-nucleon spin-independent cross section with a minimum of $4.6\times10^{-44}$ cm$^{2}$ at the 90\% CL for a 60 GeV/c$^{2}$ WIMP. It has the world's best sensitivity for WIMP masses above 44 GeV/c$^{2}$. Ongoing analysis of the remaining four runs with a raw exposure of $\sim$750 kg-days is in its final stage and will complete the 5-tower setup data analysis. Analysis of the low-energy electron-recoil spectrum sets a stringent experimental upper limit on the axio-electric coupling of $1.4\times10^{-12}$ at the 90\% CL for a 2.5~keV/c$^{2}$ axion. No excess in the counting rate above background in the 2--8.5 keV electron-recoil spectrum was found. In the solar axion search, the upper limit on the axion-photon coupling of $2.4\times10^{-9}$~GeV$^{-1}$ at the 95\% CL was set.
\bigskip
\section{Introduction}
A $k$-tree is a graph reducible to a $k$-clique by successive
removals of a vertex of degree $k$ whose neighbors form a
$k$-clique. This class of $k$-trees has been widely studied in
combinatorics (for enumeration and characteristic
properties~\cite{beineke_number_1969,rose_simple_1974}), in graph
algorithms (many NP-complete problems on graphs can be solved in
polynomial time on $k$-trees~\cite{arnborg_complexity_1987}), and in
many other fields where $k$-trees were naturally encountered
(see~\cite{arnborg_complexity_1987}). By construction, vertices in
such structures are remarkably close to one another, reflecting a
strongly interdependent graph structure, and they unsurprisingly
exhibit the scale-free property~\cite{gao_degree_2009}. Yet, somewhat
unexpectedly, many properties of random $k$-trees can be handled by
standard combinatorial, asymptotic, and probabilistic tools, making
$k$-trees an important model that balances mathematical tractability
with predictive power for real-world complex networks.
While the term ``$k$-trees'' is not very informative and may indeed
be misleading to some extent, these graphs stand out by their
underlying tree structure, related to their recursive definition,
which facilitates the analysis of their properties and the
exploration of their structure. Indeed, for $k=1$, $k$-trees are just
trees, and for $k\ge 2$ a bijection~\cite{darrasse_limiting_2009} can
be explicitly defined between $k$-trees and a nontrivial simple
family of trees.
The process of generating a $k$-tree begins with a $k$-clique, which
is itself a $k$-tree; then the $k$-tree grows by linking a new
vertex to every vertex of an existing $k$-clique, and to these
vertices only. The same process continues; see Figure~\ref{fg-2-tr}
for an illustration. Such a simple process is reminiscent of several
other models proposed in the literature such as
$k$-DAGs~\cite{devroye_long_2009}, random
circuits~\cite{arya_expected_1999}, preferential
attachment~\cite{barabsi_emergence_1999,bollobs_degree_2001,
hwang_profiles_2007}, and many other models (see, for
example,~\cite{boccaletti_complex_2006,
durrett_random_2006,newman_structure_2003}). While the construction
rule in each of these models is very similar, namely, linking a new
vertex to $k$ existing ones, the mechanism of choosing the existing
$k$ vertices differs from one case to another, resulting in very
different topology and dynamics.
\begin{figure}[!h]
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
\ding{172} & \ding{173} & \ding{174} & \ding{175} & \ding{176} \\
\includegraphics{figure-3-1} &
\includegraphics{figure-3-2} &
\includegraphics{figure-3-3} &
\includegraphics{figure-3-4} &
\includegraphics{figure-3-5} \\
\includegraphics{figure-4-1} &
\includegraphics{figure-4-2} &
\includegraphics{figure-4-3} &
\includegraphics{figure-4-4} &
\includegraphics{figure-4-5} \\ \hline
\end{tabular}
\end{center}
\caption{\emph{The first few steps of generating a $3$-tree and a
$4$-tree. Obviously, these graphs show the high connectivity of
$k$-trees.}} \label{fg-2-tr}
\end{figure}
Restricting to the procedure of choosing a $k$-clique each time a
new vertex is added, there are several variants of $k$-trees
proposed in the literature depending on the modeling needs. So
$k$-trees can be either labeled~\cite{beineke_number_1969},
unlabeled~\cite{labelle_labelled_2004},
increasing~\cite{zhang_high-dimensional_2006},
planar~\cite{zhang_high-dimensional_2006},
non-planar~\cite{beineke_number_1969}, or
plane~\cite{palmer_number_1973}, etc.
For example, the family of random Apollonian networks, corresponding
to planar 3-trees, has recently been employed as a model for complex
networks~\cite{andrade_apollonian_2005,zhang_high-dimensional_2006}.
In these frameworks, since the exact topology of the real networks
is difficult or even impossible to describe, one is often led to the
study of models that present similarities to some observed
properties such as the degree of a node and the distance between two
nodes of the real structures.
For the purpose of this paper, we distinguish between two models of
random labeled non-plane $k$-trees; by non-plane we mean that we
consider these graphs as given by their sets of edges (and not by
their graphical representations):
\begin{itemize}
\item[--] \emph{random simply-generated $k$-trees}, which correspond
to a uniform probability distribution on this class of $k$-trees,
and
\item[--] \emph{random increasing $k$-trees}, where we consider the
iterative generation process: at each time step, all existing
$k$-cliques are equally likely to be selected and the new vertex is
added with a label which is greater than the existing ones.
\end{itemize}
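The iterative generation process for increasing $k$-trees is straightforward to simulate. The sketch below is an illustrative helper, not code from this paper: it keeps the list of all $k$-cliques created so far, each new vertex contributing $k$ new ones.

```python
import random

def random_increasing_k_tree(n, k, seed=None):
    """Grow a random increasing k-tree: start from a k-clique, then n times
    pick an existing k-clique uniformly at random and join a new vertex
    (with the next, hence largest, label) to all of its members."""
    rng = random.Random(seed)
    edges = {(i, j) for i in range(k) for j in range(i + 1, k)}
    cliques = [tuple(range(k))]  # every k-clique created so far
    for v in range(k, k + n):
        base = rng.choice(cliques)  # uniform over existing k-cliques
        for u in base:
            edges.add((u, v))
        # v forms one new k-clique with each (k-1)-subset of the base clique
        for drop in base:
            cliques.append(tuple(sorted(set(base) - {drop})) + (v,))
    return edges, cliques

edges, cliques = random_increasing_k_tree(n=5, k=3, seed=42)
print(len(edges), len(cliques))  # 3 + 5*3 = 18 edges, 1 + 5*3 = 16 cliques
```

Each step adds exactly $k$ edges and $k$ cliques, which is the counting used below for the enumeration of these structures.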
The two models are in good analogy to the simply-generated family of
trees of Meir and Moon~\cite{meir_altitude_1978} marked specially by
the functional equation $f(z) = z\Phi(f(z))$ for the underlying
enumerating generating function, and the increasing family of trees
of Bergeron et al.~\cite{bergeron_varieties_1992}, characterized by
the differential equation $f'(z) = \Phi(f(z))$. Very different
stochastic behaviors have been observed for these families of trees.
While similar in structure to these trees, the analytic problems on
random $k$-trees we are dealing with here are however more involved
because instead of a scalar equation (either functional, algebraic,
or differential), we now have a system of equations.
\begin{table}
\begin{center}
\begin{tabular}{|r||c|c|} \hline
\backslashbox{Properties}{Model} & Simply-generated structures&
Increasing structures \\ \hline Combinatorial description &
$\mathcal{T}_s = \mbox{Set}(\mathcal{Z}\times\mathcal{T}_s^k)$ &
$\mathcal{T} =
\mbox{Set}(\mathcal{Z}^\square \times\mathcal{T}^k)$\\
\hline Generating function & $T_s(z) = \exp(z T_s^k(z))$
& $ T'(z) = T^{k+1}(z)$ \\
\hline Expansion near singularity& $T_s(z) = \tau -
h\sqrt{1-z/\rho}+\ldots$ & $ T(z) = (1-kz)^{-1/k}$ \\ \hline Mean
distance of nodes & $O(\sqrt{n})$ & $O(\log n)$ \\ \hline Degree
distribution & Power law with exp.\ tails & Power
law~\cite{gao_degree_2009}
\\\hline Root-degree distribution & Power law with exp.\ tails &
Stable law~(Theorem~\ref{thm-ld}) \\\hline Expected Profile &
Rayleigh limit law & Gaussian limit law (\ref{Ecp-LLT})
\\ \hline
\end{tabular}
\end{center}
\caption{\emph{The contrast of some properties between
random simply-generated $k$-trees and
random increasing $k$-trees. Here $\mathcal{Z}$ denotes a node
and $\mathcal{Z}^\square$ means a marked node.}} \label{tb1}
\end{table}
It is known that random trees in the family of increasing trees are
often less skewed, less slanted in shape, a typical description
being the logarithmic order for the distance of two randomly chosen
nodes; this is in sharp contrast to the square-root order for random
trees belonging to the simply-generated family; see for
example~\cite{bergeron_varieties_1992,drmota_random_2009,
fuchs_profiles_2006,marckert_families_2008,meir_altitude_1978}. Such
a contrast has inspired and stimulated much recent research. Indeed,
the majority of random trees in the literature of discrete
probability, analysis of algorithms, and random combinatorial
structures are either $\log n$-trees or $\sqrt{n}$-trees, $n$ being
the tree size. While the class of $\sqrt{n}$-trees have been
extensively investigated by probabilists and combinatorialists,
$\log n$-trees are comparatively less addressed, partly because most
of them were encountered not in probability or in combinatorics, but
in the analysis of algorithms.
Table~\ref{tb1} presents a comparison of the two models: the classes
${\mathcal{T}}_s$ and $\mathcal{T}$, corresponding respectively to
simply-generated $k$-trees and increasing $k$-trees. The results
concerning simple $k$-trees are given
in~\cite{darrasse_limiting_2009, darrasse_unifying_????}, and those
concerning increasing $k$-trees are derived in this paper (except
for the power law distribution~\cite{gao_degree_2009}). We start
with the specification, described in terms of operators of the
symbolic method~\cite{flajolet_analytic_2009}. A structure of
${\mathcal{T}}_s$ is a set of $k$ structures of the same type, whose
roots are attached to a new node: $\mathcal{T}_s =
\mbox{Set}(\mathcal{Z}\times\mathcal{T}_s^k)$, while a structure of
${\mathcal{T}}$ is an increasing structure, in the sense that the
new nodes get labels that are smaller than those of the underlying
structure (this constraint is reflected by the box-operator)
$\mathcal{T} = \mbox{Set} (\mathcal{Z}^\square\times\mathcal{T}^k)$.
The analytic difference immediately appears in the enumerative
generating functions that translate the specifications: the
simply-generated structures are defined by $T_s(z)= \exp(z
T_s^k(z))$ and corresponding increasing structures satisfy the
differential equation $ T'(z)= T^k(z)$. These equations lead to a
singular expansion of the square-root type in the simply-generated
model, and a singularity in $(1 - kz)^{-1/k}$ in the increasing
model. Similar analytic differences arise in the bivariate
generating functions of shape parameters.
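The closed form $T(z)=(1-kz)^{-1/k}$ can be cross-checked numerically: the $i$-th added vertex may choose among $1+(i-1)k$ existing $k$-cliques, so the number of increasing $k$-trees with $n$ added vertices is $\prod_{i=0}^{n-1}(1+ik)$, which should equal $n!\,[z^n]T(z)$. A small exact-arithmetic sketch (not code from this paper):

```python
from fractions import Fraction
from math import factorial

def egf_coeff(n, k):
    """[z^n] (1 - k z)^(-1/k), via the generalized binomial theorem."""
    c = Fraction(1)
    a = Fraction(-1, k)
    for i in range(n):
        c *= a - i
    return c / factorial(n) * (-k) ** n

def count_increasing_k_trees(n, k):
    """At step i+1 there are 1 + i*k k-cliques to attach to."""
    c = 1
    for i in range(n):
        c *= 1 + i * k
    return c

for k in (1, 2, 3, 4):
    for n in range(7):
        assert egf_coeff(n, k) * factorial(n) == count_increasing_k_trees(n, k)
print("coefficients of (1-kz)^(-1/k) match the clique-counting product")
```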
The expected distance between two randomly chosen vertices or the
average path length is one of the most important shape measures in
modeling complex networks as it indicates roughly how efficient the
information can be transmitted through the network. Following the
same $\sqrt{n}$-vs-$\log n$ pattern, it is of order $\sqrt n$ in the
simply-generated model, but $\log n$ in the increasing model.
Another equally important parameter is the degree distribution of a
random vertex: its limiting distribution is a power law with
exponential tails in the simply-generated model of the form
$d^{-3/2} \rho_k^d$, in contrast to a power-law in the increasing
model of the form $d^{-1-k/(k-1)}$, $d$ denoting the
degree~\cite{gao_degree_2009}. As regards the degree of the root,
its asymptotic distribution remains the same as that of any vertex
in the simply-generated model, but in the increasing model, the
root-degree distribution is different, with an asymptotic stable law
(which is Rayleigh in the case $k=2$); see Theorem~\ref{thm-ld}.
Our main concern in this paper is the connectivity-profile.
Recall that the profile of an ordinary tree is the sequence of
numbers enumerating the nodes at each distance from the root. For
example, the tree
\includegraphics{tree}
has the profile $(1,2,2,1,3)$. Profiles represent one of the
richest shape measures and they convey much information regarding
particularly the silhouette. On random trees, they have been
extensively studied recently; see~\cite{chauvin_martingales_2005,
drmota_profile_1997,drmota_functional_2008,fuchs_profiles_2006,
hwang_profiles_2007,marckert_families_2008,park_profiles_2009}.
Since $k$-trees have many cycles for $k\ge2$, we call the profile of
the transformed tree (see the next section) the
\emph{connectivity-profile}, as it measures to some extent the
connectivity of the graph. Indeed, this connectivity-profile
corresponds to the profile of the ``shortest-path tree'' of a
$k$-tree, as defined by
Proskurowski~\cite{proskurowski_k-trees:_1980}, which is nothing
more than the result of a breadth-first search (BFS) on the graph.
Moreover, in the domain of complex networks, such BFS trees are
important objects; for example, they describe the results of the
\texttt{traceroute} measuring
tool~\cite{stevens_chapter_1994,viger_detection_2008} in the study
of the topology of the Internet.
We will derive precise asymptotic approximations to the expected
connectivity-profile of random increasing $k$-trees, the major tool
being the resolution of a system of differential equations of
Cauchy-Euler type (see~\cite{chern_asymptotic_2002}).
In particular, viewed as a function of $d$, the expected number of
nodes at distance $d$ from the root follows asymptotically a
Gaussian profile, in contrast to the Rayleigh limit distribution in
the case of simply-generated $k$-trees. We also derive the limit
distribution of the number of nodes at distance $d$ from the root
when $d$ is bounded. Note that when $d=1$, the number of nodes at
distance $1$ from the root is nothing but the degree of the root.
This paper is organized as follows. We first present the definition
and combinatorial specification of random increasing $k$-trees in
Section~\ref{sec:def}, together with the enumerative generating
functions, on which our analytic tools will be based. We then
present two asymptotic approximations to the expected
connectivity-profile in Section~\ref{sec:profile}, one for $d=o(\log
n)$ and the other for $d\to\infty$ and $d=O(\log n)$. Interesting
consequences of our results will also be given. The limit
distribution of the connectivity-profile in the range when $d=O(1)$
is then given in Section~\ref{sec:ld}.
\section{Random increasing $k$-trees and generating functions}
\label{sec:def}
Since $k$-trees are graphs full of cycles and cliques, the key step
in our analytic-combinatorial approach is to introduce a bijection
between $k$-trees and a suitably defined class of trees (\emph{bona
fide} trees!) for which generating functions can be derived. This
approach was successfully applied to the simply-generated family of
$k$-trees in~\cite{darrasse_limiting_2009}, where it leads to a
system of algebraic equations. The bijection argument used there can
be adapted \emph{mutatis mutandis} to increasing $k$-trees, and
then yields a system of differential equations through the bijection
with a class of increasing trees~\cite{bergeron_varieties_1992}.
\begin{figure}
\begin{center}
\includegraphics{figure-bij}
\end{center}
\caption{\emph{A $2$-tree (left) and its corresponding increasing
tree representation (right).}}\label{fig:bij}
\end{figure}
\paragraph{Increasing $k$-trees and the bijection.}
Recall that a $k$-clique is a set of $k$ mutually adjacent vertices.
\begin{definition}
An increasing $k$-tree is defined recursively as follows. A
$k$-clique in which each vertex gets a distinct label from
$\{1,\dots,k\}$ is an increasing $k$-tree of $k$ vertices. An
increasing $k$-tree with $n > k$ vertices is constructed from an
increasing $k$-tree with $n-1$ vertices by adding a vertex labeled
$n$ and by connecting it by an edge to each of the $k$ vertices in
an existing $k$-clique.
\end{definition}
By \emph{random increasing $k$-trees}, we mean that all existing
$k$-cliques are equally likely each time a new vertex is
added. One sees immediately that the number $T_n$ of increasing
$k$-trees with $n+k$ nodes is given by $T_n = \prod_{0\le i<n}(ik+1)$.
Note that if we allow any permutation on all labels, we obtain the
class of simply-generated $k$-trees where monotonicity of labels
along paths fails in general.
Combinatorially, simply-generated $k$-trees are in
bijection~\cite{darrasse_limiting_2009} with the family of trees
specified by $\mathcal{K}_s = \mathcal{Z}^k \times \mathcal{T}_s$,
where $\mathcal{T}_s = \mbox{Set}(\mathcal{Z} \times
\mathcal{T}_s^k)$. Given a rooted $k$-tree $G$ of $n$ vertices, we
can transform $G$ into a tree $T$, with the root node labeled
$\{1,\dots,k\}$, by the following procedure. First, associate a
white node to each $k$-clique of $G$ and a black node to each
$(k+1)$-clique of $G$. Then add a link between each black node and
all white nodes associated to the $k$-cliques it contains. Each
black node is labeled with the only vertex not appearing in one of
the black nodes above it or in the root. The last step in order to
complete the bijection is to order the $k$ vertices of the root and
propagate this order to the $k$ sons of each black node. This
constructs a tree from a $k$-tree (see Figure~\ref{fig:bij});
conversely, we can obtain the $k$-tree through a simple traversal of
the tree.
Such a bijection translates directly to increasing $k$-trees by
restricting the class of corresponding trees to those respecting a
monotonicity constraint on the labels, namely, on any path from the
root to a leaf the labels are in increasing order. This yields the
combinatorial specification of the class of increasing trees
$\mathcal{T} = \mbox{Set}(\mathcal{Z}^\square \times
\mathcal{T}^k)$. An increasing $k$-tree is just a tree in
$\mathcal{T}$ together with the sequence $\{1,\dots,k\}$
corresponding to the labels of the root-clique\footnote{We call
\textit{root-clique} the clique composed by the $k$ vertices
$(1,\ldots,k)$. The increasing nature of the $k$-trees guarantees
that these vertices always form a clique. We call
\textit{root-vertex} the vertex with label $1$.}. A tree in
$\mathcal{K}$ is thus completely determined by its $\mathcal{T}$
component, giving $\mathcal{K}_{n+k} \equiv \mathcal{T}_n$. For
example, Figure~\ref{fig:bij} shows a $2$-tree with $19$ vertices and
its tree representation with $17$ black nodes. In the rest of this
paper we will thus focus on the class $\mathcal{T}$.
\paragraph{Generating functions.}
Following the bijection, we see that the complicated dependence
structure of $k$-trees is now completely described by the class of
increasing trees specified by $\mathcal{T} =
\mbox{Set}(\mathcal{Z}^\square \times \mathcal{T}^k)$. For example,
let $T(z) := \sum_{n\ge0} T_n z^n/n!$ denote the exponential
generating function of the number $T_n$ of increasing $k$-trees of
$n+k$ vertices. Then the specification translates into the equation
\[
T(z) = \exp\left(\int_0^z T^k(x) \dd x\right),
\]
or, equivalently, $T'(z) = T^{k+1}(z)$ with $T(0)=1$, whose
solution is
\[
    T(z) = (1-kz)^{-1/k},
\]
and one then checks that $T_n = \prod_{0\le i<n}(ik+1)$.
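This last identity can be verified mechanically. The following snippet (an illustrative aside, not part of the paper) compares $n![z^n](1-kz)^{-1/k}$, expanded via the generalized binomial theorem, with the product formula, using exact rational arithmetic:

```python
# Illustrative check: compare n! [z^n] (1-kz)^{-1/k} with
# the product formula T_n = prod_{0 <= i < n} (ik + 1).
from fractions import Fraction

def T_n_product(n, k):
    r = 1
    for i in range(n):
        r *= i * k + 1
    return r

def T_n_from_egf(n, k):
    # generalized binomial theorem:
    # [z^n](1-kz)^{-a} = k^n * a(a+1)...(a+n-1) / n!, with a = 1/k,
    # so n! [z^n] (1-kz)^{-1/k} = k^n * prod_{0<=i<n} (1/k + i)
    a = Fraction(1, k)
    coeff = Fraction(1)
    for i in range(n):
        coeff *= a + i
    return coeff * Fraction(k) ** n

for k in (1, 2, 3, 4):
    for n in range(8):
        assert T_n_from_egf(n, k) == T_n_product(n, k)
```

For $k=1$ this recovers $T_n = n!$, the number of increasing (recursive) trees.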
If we mark the number of neighbors of the root-node in $\mathcal{T}$
by $u$, we obtain
\[
T(z,u) = \exp\left(u \int_0^z T(x) T^{k-1}(x,u) \dd x\right),
\]
where the coefficients $n![u^\ell z^n] T(z,u)$ denote the number of
increasing $k$-trees of size $n+k$ with root degree equal to
$k+\ell-1$. Taking the derivative with respect to $z$ on both sides
and then solving the resulting equation, we get the closed-form expression
\begin{align} \label{F-sol}
T(z,u) = \left(1-u(1-(1-kz)^{1-1/k})\right)^{-1/(k-1)}.
\end{align}
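At $u=1$ the closed form must reduce to $T(z)$, since $1-(1-(1-kz)^{1-1/k})=(1-kz)^{1-1/k}$ and $-(1-1/k)/(k-1)=-1/k$. A quick numerical sanity check (illustrative only):

```python
# Illustrative check that the closed form (F-sol) reduces to T(z) at u = 1.
def T(z, k):
    return (1 - k * z) ** (-1 / k)

def T_closed(z, u, k):
    # the closed-form bivariate generating function
    return (1 - u * (1 - (1 - k * z) ** (1 - 1 / k))) ** (-1 / (k - 1))

for k in (2, 3, 5):
    for z in (0.0, 0.05, 0.15):
        assert abs(T_closed(z, 1.0, k) - T(z, k)) < 1e-12
```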
Since $k$-trees can be transformed into ordinary increasing trees,
the profiles of the transformed trees are naturally defined,
although they do not correspond to simple parameters of $k$-trees.
While the study of profiles may thus seem artificial, the results do
provide more insight into the structure of random $k$-trees. Roughly,
we expect all vertices of a $k$-tree to be close to one another, at
most of logarithmic order apart. The fine results we derive
provide in particular an upper bound for this distance.
Let $X_{n;d,j}$ denote the number of nodes at distance $d$ from $j$
vertices of the root-clique in a random $k$-tree of $n+k$ vertices.
Let $T_{d,j}(z,u)= \sum_{n\ge0} T_n \mathbb{E}(u^{X_{n;d,j}})
z^n/n!$ denote the corresponding bivariate generating function.
\begin{theorem} The generating functions $T_{d,j}$ satisfy the
differential equations
\begin{equation} \label{bgf-Tdj}
\frac{\partial}{\partial z} T_{d,j}(z,u) =
u^{\delta_{d,1}}T_{d,j-1}^j(z,u) T_{d,j}^{k-j+1}(z,u),
\end{equation}
with the initial conditions $T_{d,j}(0,u)=1$ for $1\le j\le k$,
where $\delta_{a,b}$ denotes the Kronecker function,
$T_{0,k}(z,u)=T(z)$ and $T_{d,0}(z,u) = T_{d-1,k}(z,u)$.
\end{theorem}
\begin{proof} The theorem follows from
\begin{equation*}
T_{d,j}(z,u) = \exp\left(u^{\delta_{d,1}}
\int_0^z T_{d,j-1}^j(x,u) T_{d,j}^{k-j}(x,u)\dd x\right),
\end{equation*}
with $T_{d,j}(z,1) = T(z)$.
\end{proof}
For operational convenience, we normalize all $z$ by $z/k$ and write
$\tilde{T}(z) := T(z/k) = (1-z)^{-1/k}$. Similarly, we define
$\tilde{T}_{d,j}(z,u) := T_{d,j}(z/k,u)$ and have, by
(\ref{bgf-Tdj}),
\begin{align} \label{Tdj}
\frac{\partial}{\partial z} \tilde{T}_{d,j}(z,u)
=\frac{u^{\delta_{d,1}}}{k}\tilde{T}_{d,j-1}^j(z,u)
\tilde{T}_{d,j}^{k-j+1}(z,u),
\end{align}
with $\tilde{T}_{d,j}(z,1) = \tilde{T}(z)$,
$\tilde{T}_{0,k}(z,u)=\tilde{T}(z)$ and $\tilde{T}_{d,0}(z,u) =
\tilde{T}_{d-1,k}(z,u)$.
\section{Expected connectivity-profile}
\label{sec:profile}
We consider the expected
connectivity-profile $\mathbb{E}(X_{n;d,j})$ in this section.
Observe first that
\[
\mathbb{E}(X_{n;d,j}) = \frac{k^n[z^n]\tilde{M}_{d,j}(z)}
{T_n} ,
\]
where $\tilde{M}_{d,j}(z) := \partial \tilde{T}_{d,j}(z,u)/(\partial
u)|_{u=1}$. It follows from (\ref{Tdj}) that
\begin{align}\label{Mdj}
\tilde{M}_{d,j}'(z) = \frac1{k(1-z)}
\left((k-j+1)\tilde{M}_{d,j}(z)
+j\tilde{M}_{d,j-1}(z) + \delta_{d,1}\tilde{T}(z)\right).
\end{align}
This is a standard differential equation of Cauchy-Euler type whose
solution is given by (see~\cite{chern_asymptotic_2002})
\[
\tilde{M}_{d,j}(z) = \frac{(1-z)^{-(k-j+1)/k}}k
\int_0^z (1-x)^{-(j-1)/k}\left( j\tilde{M}_{d,j-1}(x)
+\delta_{d,1}\tilde{T}(x) \right) \dd x,
\]
since $\tilde{M}_{d,j}(0)=0$. Then, starting from
$\tilde{M}_{0,k}=0$, we get
\begin{align*}
\tilde{M}_{1,1}(z)= \frac1{k-1}\left(\frac1{1-z}
- \frac1{(1-z)^{1/k}}\right)
= \frac{\tilde{T}^k(z) - \tilde{T}(z)}{k-1}.
\end{align*}
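One can verify that this closed form indeed satisfies the case $d=j=1$ of (\ref{Mdj}), namely $\tilde{M}_{1,1}'(z) = \bigl(k\tilde{M}_{1,1}(z)+\tilde{T}(z)\bigr)/(k(1-z))$, using $\tilde{M}_{1,0}=\tilde{M}_{0,k}=0$. A numerical check (illustrative, not part of the paper):

```python
# Verify the closed form of M_{1,1} against the Cauchy-Euler equation (Mdj)
# for d = j = 1:  M' = (k*M + Ttilde)/(k*(1-z)).
def Ttilde(z, k):
    return (1 - z) ** (-1 / k)

def M11(z, k):
    # closed form (Ttilde^k - Ttilde)/(k - 1)
    return (Ttilde(z, k) ** k - Ttilde(z, k)) / (k - 1)

def M11_prime(z, k):
    # analytic derivative of the closed form
    return ((1 - z) ** (-2) - (1 / k) * (1 - z) ** (-1 / k - 1)) / (k - 1)

for k in (2, 3, 4):
    for z in (0.1, 0.3, 0.6):
        rhs = (k * M11(z, k) + Ttilde(z, k)) / (k * (1 - z))
        assert abs(M11_prime(z, k) - rhs) < 1e-9
```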
Then by induction, we get
\[
\tilde{M}_{d,j}(z)
\sim \frac{j}{(k-1)(d-1)!}\cdot\frac{1}{1-z}
\log^{d-1}\frac1{1-z} \qquad(1\le j\le k;d\ge1; z\sim1).
\]
So we expect, by singularity analysis, that
\[
\mathbb{E}(X_{n;d,j}) \sim \Gamma(1/k)\frac{j}{k-1}
\cdot \frac{(\log n)^{d-1}}{(d-1)!}\,n^{1-1/k},
\]
for large $n$ and fixed $d$, $k$ and $1\le j\le k$. We can indeed
prove that the same asymptotic estimate holds in a larger range.
\begin{theorem} \label{thm-E}
The expected connectivity-profile $\mathbb{E}(X_{n;d,j})$
satisfies for $1\le d= o(\log n)$
\begin{align}\label{Ecp-1}
\mathbb{E}(X_{n;d,j}) \sim \Gamma(1/k)\frac{j}{k-1}
\cdot \frac{(\log n)^{d-1}}{(d-1)!}\,n^{1-1/k},
\end{align}
uniformly in $d$, and for $d\to\infty$, $d=O(\log n)$,
\begin{align}\label{Ecp2}
\mathbb{E}(X_{n;d,j}) \sim
\frac{\Gamma(1/k)h_{j,1}(\rho)
\rho^{-d} n^{\lambda_1(\rho)-1/k}}
{\Gamma(\lambda_1(\rho))\sqrt{2\pi(\rho\lambda_1'(\rho)
+\rho^2\lambda_1''(\rho))\log n}}
\end{align}
where $\rho=\rho_{n,d}>0$ solves the equation $\rho\lambda_1'(\rho)=
d/\log n$, and $\lambda_1(w)$ is the zero with the largest real part
of the equation $\prod_{1\le \ell\le k}(\theta-\ell/k)- k! w/k^k=0$;
it satisfies $\lambda_1(1) =(k+1)/k$.
\end{theorem}
An explicit expression for the $h_{j,1}$'s is given as follows. Let
$\lambda_1(w),\ldots,\lambda_k(w)$ denote the zeros of the equation
$\prod_{1\le \ell\le k}(\theta-\ell/k)- k! w/k^k=0$. Then for $1\le
j\le k$
\begin{align}\label{hj1}
h_{j,1}(w) = \frac{j!w(w-1)}{(k\lambda_1(w)-1)\left(
\sum_{1\le s\le k}\frac1{k\lambda_1(w)-s}\right)
\prod_{k-j+1\le s\le k+1}(k\lambda_1(w)-s)}.
\end{align}
The theorem cannot be proved by the above inductive argument; our
method of proof consists of the following steps. First, the
bivariate generating functions $\mathscr{M}_j(z,w) := \sum_{d\ge1}
\tilde{M}_{d,j}(z) w^d$ satisfy the linear system
\[
\left((1-z)\frac{\mbox{d}}{\dd{z}}
- \frac{k-j+1}{k}\right)\mathscr{M}_j
= \frac{j}{k} \mathscr{M}_{j-1}
+ \frac{w\tilde{T}}{k}\qquad(1\le j\le k).
\]
Second, this system is solved and has the solutions
\[
    \mathscr{M}_j(z,w) = \sum_{1\le m\le k}
    h_{j,m}(w)(1-z)^{-\lambda_m(w)}
    - \frac{w-(w-1)\delta_{k,j}}{k}\,\tilde{T}(z),
\]
where the $h_{j,m}$ have the same expression as $h_{j,1}$ but with
all $\lambda_1(w)$ in (\ref{hj1}) replaced by $\lambda_m(w)$. While
the form of the solution is well anticipated, the hard part is the
calculation of the coefficient-functions $h_{j,m}$. Third, by
singularity analysis and a delicate study of the zeros, we conclude
by the saddle-point method the estimates given in the theorem.
\begin{corollary} The expected degree of the root
$\mathbb{E}(X_{n;1,j})$ satisfies
\[
    \mathbb{E}(X_{n;1,j}) \sim \Gamma(1/k)\frac{j}{k-1}
    \,n^{1-1/k}\qquad(1\le j\le k).
\]
\end{corollary}
This estimate also follows easily from (\ref{F-sol}).
Let $H_k := \sum_{1\le\ell\le k} 1/\ell$ denote the $k$-th harmonic
number and $H_k^{(2)} := \sum_{1\le\ell\le k} 1/\ell^2$.
\begin{corollary} The expected number of nodes at distance
$d= \left\lfloor \frac{1}{kH_k}\log n + x\sigma\sqrt{\log
n}\right\rfloor$ from the root, where $\sigma =
\sqrt{H_k^{(2)}/(kH_k^3)}$, satisfies, uniformly for $x=o((\log
n)^{1/6})$,
\begin{align}\label{Ecp-LLT}
\mathbb{E}(X_{n;d,j})\sim
\frac{n e^{-x^2/2}}{\sqrt{2\pi\sigma^2\log n}}.
\end{align}
\end{corollary}
This Gaussian approximation justifies the last item corresponding to
increasing trees in Table~\ref{tb1}.
Note that $\lambda_1(1)=(k+1)/k$ and $\alpha = d/\log n \sim
1/(kH_k)$. In this case, $\rho=1$ and
\[
\rho\lambda_1'(\rho) = \frac1{\sum_{1\le \ell\le k}
\frac{1}{\lambda_1(\rho)-\frac \ell k}},
\]
which implies that $\lambda_1(\rho)-1/k -\alpha\log \rho \sim 1$.
\begin{corollary} \label{cor-height}
    Let $\mathscr{H}_{n} := \max\{d : X_{n;d,j} > 0\}$ denote the
    height of a random increasing $k$-tree of $n+k$ vertices. Then
\[
\mathbb{E}(\mathscr{H}_n) \le \alpha_+\log n -
\frac{\alpha_+}{2(\lambda_1(\alpha_+)-\frac1k)}\log\log n
+ O(1),
\]
where $\alpha_+>0$ is the solution of the system of equations
\[
\left\{\begin{array}{l}
\displaystyle\frac{1}{\alpha_+} = \sum_{1\le \ell\le k}
\frac1{v-\frac \ell k},\\ \displaystyle
v-\frac1k-\alpha_+\sum_{1\le \ell \le k}
\log\left(\frac{k}{\ell} v-1\right) = 0.
\end{array}\right.
\]
\end{corollary}
Table~\ref{tab1} gives the numerical values of $\alpha_+$ for small
values of $k$.
\begin{table}[h!]
\begin{center}
\begin{tabular}{|c||c|c|c|c|c|}\hline
$k$ & $2$ & $3$ & $4$ & $5$ & $6$ \\ \hline%
$\alpha_+$ & $1.085480$ & $0.656285$ & $0.465190$ &
$0.358501$ & $0.290847$\\ \hline\hline%
$k$ & $7$ & $8$ & $9$ & $10$ & $20$ \\ \hline %
$\alpha_+$ & $0.244288$ & $0.210365$ & $0.184587$ & $0.164356$ &
$0.077875$\\ \hline
\end{tabular}
\end{center}
\caption{\emph{Approximate numerical values of $\alpha_+$.}}
\label{tab1}
\end{table}
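The values in Table~\ref{tab1} can be reproduced numerically: for fixed $v$ the first equation determines $\alpha_+$, and substituting it into the second leaves a single equation in $v$, amenable to bisection. A pure-Python sketch (illustrative; the bracketing interval is our assumption):

```python
from math import log

def alpha_plus(k):
    # For fixed v > 1 the first equation gives alpha(v); substituting it
    # into the second leaves a single equation f(v) = 0, solved by
    # bisection (f > 0 near v = 1 and f -> -infinity as v grows).
    def alpha_of_v(v):
        return 1.0 / sum(1.0 / (v - l / k) for l in range(1, k + 1))

    def f(v):
        return v - 1.0 / k - alpha_of_v(v) * sum(
            log(k * v / l - 1.0) for l in range(1, k + 1))

    lo, hi = 1.0 + 1e-9, 50.0  # assumed bracketing interval
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return alpha_of_v(0.5 * (lo + hi))

assert abs(alpha_plus(2) - 1.085480) < 1e-4  # matches the table
assert abs(alpha_plus(3) - 0.656285) < 1e-4
```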
For large $k$, one can show that $\alpha_+\sim 1/(k\log 2)$ and
$\lambda_1(\alpha_+) \sim 2$.
Corollary~\ref{cor-height} confirms that the mean distance in random
$k$-trees is of logarithmic order in the size, as stated in
Table~\ref{tb1}.
\begin{corollary} The expected width $\mathscr{W}_{n} := \max_d X_{n;d,j}$
    is bounded below by
\[
    \mathbb{E}(\mathscr{W}_n) =
    \mathbb{E}(\max_d X_{n;d,j}) \ge \max_d \mathbb{E}(X_{n;d,j})
    \asymp \frac{n}{\sqrt{\log n}}.
\]
\end{corollary}
We may briefly conclude from all these results that \emph{in the
transformed increasing trees of random increasing $k$-trees, almost
all nodes are located at the levels $d= \frac{1}{kH_k}\log n +
O(\sqrt{\log n})$, each level containing of order $n/\sqrt{\log n}$
nodes.}
\section{Limiting distributions}
\label{sec:ld}
With the availability of the bivariate generating functions
(\ref{bgf-Tdj}), we can proceed further and derive the limit
distribution of $X_{n;d,j}$ in the range where $d=O(1)$. The case
when $d\to\infty$ is much more involved; we content ourselves in
this extended abstract with the statement of the result for bounded
$d$.
\begin{theorem} \label{thm-ld}
The random variables $X_{n;d,j}$, when normalized by their mean
orders, converge in distribution to
\begin{align}\label{Xndj-cid}
\frac{X_{n;d,j}}{n^{1-1/k}(\log n)^{d-1}/(d-1)!}
\stackrel{d}{\to}\Xi_{d,j},
\end{align}
where
\begin{align*}
\mathbb{E}(e^{\Xi_{d,j} u}) &=\Gamma(\tfrac1k)
\sum_{m\ge0} \frac{c_{d,j,m}}{m!\Gamma(m(1-1/k)+1/k)}\,u^m\\
&= \frac{\Gamma(\frac1k)}{2\pi i}\int_{-\infty}^{(0+)} e^\tau
\tau^{-1/k}C_{d,j}\left(\tau^{-1+1/k} u\right) \dd \tau,
\end{align*}
and $C_{d,j}(u) :=1+ \sum_{m\ge1} c_{d,j,m} u^m/m!$ satisfies the
system of differential equations
\begin{align} \label{Cdju}
(k-1)uC_{d,j}'(u) + C_{d,j}(u) = C_{d,j}(u)^{k+1-j}
C_{d,j-1}(u)^j\qquad(1\le j\le k),
\end{align}
with $C_{d,0}=C_{d-1,k}$. Here the symbol $\int_{-\infty}^{(0+)}$
denotes any Hankel contour starting from $-\infty$ on the real axis,
encircling the origin once counter-clockwise, and returning to
$-\infty$.
\end{theorem}
We indeed prove the convergence of all moments, which is stronger
than weak convergence; also the limit law is uniquely determined by
its moment sequence.
So far we have explicit solutions for the $C_{1,j}$ only in special
cases: $C_{1,1}(u) = (1+u)^{-1/(k-1)}$ and
\[
C_{1,2}(u) = \left\{\begin{array}{ll}
\frac{e^{1/(1+u)}}{1+u},&\text{if } k=2;\\
\frac1{1+u^{1/2}\arctan(u^{1/2})},&
\text{if } k=3.
\end{array}\right.
\]
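These closed forms can be checked against (\ref{Cdju}) with $C_{1,0}=C_{0,k}=1$: for $d=1$, $j=1$ the equation reads $(k-1)uC'+C=C^k$, and for $d=1$, $j=2$ it reads $(k-1)uC'+C=C^{k-1}C_{1,1}^2$. A numerical verification (illustrative only; derivatives by central differences):

```python
from math import exp, atan, sqrt

def dnum(f, u, h=1e-6):
    # central finite difference
    return (f(u + h) - f(u - h)) / (2 * h)

def C11(u, k):
    return (1 + u) ** (-1 / (k - 1))

# d = 1, j = 1:  (k-1) u C' + C = C^k   (using C_{1,0} = C_{0,k} = 1)
for k in (2, 3, 4):
    for u in (0.2, 0.7):
        lhs = (k - 1) * u * dnum(lambda x: C11(x, k), u) + C11(u, k)
        assert abs(lhs - C11(u, k) ** k) < 1e-6

# d = 1, j = 2, k = 2:  u C' + C = C * C11^2
C12_k2 = lambda u: exp(1 / (1 + u)) / (1 + u)
for u in (0.2, 0.7):
    lhs = u * dnum(C12_k2, u) + C12_k2(u)
    assert abs(lhs - C12_k2(u) * C11(u, 2) ** 2) < 1e-6

# d = 1, j = 2, k = 3:  2 u C' + C = C^2 * C11^2
C12_k3 = lambda u: 1 / (1 + sqrt(u) * atan(sqrt(u)))
for u in (0.2, 0.7):
    lhs = 2 * u * dnum(C12_k3, u) + C12_k3(u)
    assert abs(lhs - C12_k3(u) ** 2 * C11(u, 3) ** 2) < 1e-6
```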
Note that the result (\ref{Xndj-cid}) for $d=1$ can also be derived
directly from the explicit expression (\ref{F-sol}). In particular,
when $k=2$, the limit law is Rayleigh.
\bibliographystyle{plain}
| 10,789 |
\section{Introduction}
We study the spectral stability of small-amplitude periodic traveling waves in scalar Hamiltonian partial differential equations:
\begin{equation}
u_t = \partial_x \frac{\delta H}{\delta u}\, .
\label{HPDE}
\end{equation}
Here
$$
u = u(x,t) = u(x+L,t), \quad x \in [0,L], \quad t > 0, \qquad \mbox{and} \quad H = \int_0^L \mathcal{H} (u, u_x, \dots) \, dx
$$
is the Hamiltonian with density $\mathcal H$. Without loss of generality, we let $L = 2\pi$. This class of equations includes the Korteweg--de Vries (KdV) equation, the generalized and modified KdV equations, the Kawahara equation, and other equations that arise in the study of dispersive problems in water waves, plasma physics, etc.~\cite{AS, IR}.
We assume that \eq{HPDE} has a trivial solution, i.e.,
${\delta H}/{\delta u} = 0$ for $u = 0$, and that $H$ has an expansion $H = H^0 + H^1$, where $H^0$ is the quadratic part of $H$ and $H^1$ contains the higher-order terms:
\begin{equation}
H^0 = -\frac{1}{2} \int_0^{2\pi} \sum_{j=0}^{N} \alpha_j (\partial_x^{j} u)^2 \, dx\, .
\label{H0}
\end{equation}
As a consequence, all linear terms in (\ref{HPDE}) contain derivatives of odd order, as even-order terms would introduce dissipation. We assume that $N$ is a finite positive integer and that $\alpha_j \in \mathbb{R}$. These assumptions exclude problems like the Whitham equation \cite{DT} ($N = \infty$), which remains a topic of investigation.
The now-standard approach to examine the stability of waves in Hamiltonian problems with symmetries is the theory developed by Vakhitov and Kolokolov \cite{VK} and Grillakis, Shatah, and Strauss \cite{GSS1, GSS2}, which allows for the determination of spectral stability of waves of arbitrary amplitude. In that setup, spectral stability implies orbital (nonlinear) stability under certain conditions, emphasizing the importance of the spectral information of the linearized problem.
Extensions of these results are found in \cite{KKS, KS2012, Pelinovsky2005, Pelinovsky2012}. Periodic problems within the same framework were considered in \cite{deckap, KapHar}. The use of any of these results relies on index theory
requiring additional information about the PDE. That information is typically provided, for instance, by an assumption on the dimension of the kernel of the linearized problem. For small-amplitude waves, extra information is often obtained through a perturbation of the zero-amplitude problem. We avoid index theory and study directly the collision of eigenvalues. The parallel work \cite{TDK} illustrates how small-amplitude information is used to characterize the (in)stability of the waves. Here, we reduce the spectral stability problem for small-amplitude waves to the investigation of zeros of certain recurrently defined polynomials, which appear in the theory of proper polynomial mappings \cite[p.~172]{DAngelo} and in orthogonal polynomial theory \cite[Chapter 18]{DLMF}.
To our knowledge, the connection between stability theory and these polynomials is new to the literature.
Our approach allows us to rigorously analyze the stability of KdV-type equations, including the
generalized KdV equation (gKdV), its higher order analogues, and also
the two-term balanced KdV equation. The results agree with the existing literature on the spectral stability of periodic waves for gKdV, and in the case of balanced higher-order KdV equations they confirm and extend the analytical and numerical predictions in \cite{TDK}. Our method is closely related to the results in \cite{DT}, where the spectrum of small-amplitude periodic solutions of Hamiltonian PDEs is determined directly from the dispersion relation of the PDE linearized about the zero solution. Our theory adds to the results in \cite{DT}, and provides a simple and, importantly, a natural framework for studying the spectral stability of waves by perturbative methods. We refer the reader to \cite{DT} and \cite{TDK} for a number of numerical illustrations of the results presented here for KdV-type equations.
The spectral stability of small-amplitude waves bifurcating from the trivial solution $u=0$ at a critical velocity $c=c_0$ can be examined using regular perturbation theory of the spectrum of \eq{HPDE} linearized about $u=0$ at $c = c_0$. Our assumptions guarantee that $u=0$ is spectrally stable, i.e., the spectrum of the linearized problem is restricted to the imaginary axis, since \eq{HPDE} is Hamiltonian.
In the periodic setting the whole spectrum of the zero-amplitude problem is needed. However, Floquet theory \cite{KP} allows one to decompose the continuous spectrum into an infinite union of sets of discrete eigenvalues of eigenvalue problems parametrized by the Floquet exponent $\mu$. An important scenario for instability of small-amplitude waves on the bifurcation branch comes about through Hamiltonian-Hopf bifurcations \cite{MacKay,VDM1985} producing symmetric pairs of eigenvalues off the imaginary axis, i.e., exponentially growing and therefore unstable modes. Such bifurcations require non-simple eigenvalues of the linearized problem at zero amplitude, i.e., ``collided eigenvalues''. Furthermore, such colliding eigenvalues can split off from the imaginary axis only if they have opposite Krein signatures \cite{MacKay, KM2013}. Note that we stay away from the origin of the spectral plane and thus do not consider modulational or Benjamin-Feir instability.
Both the location of the eigenvalues and their Krein signatures are characterized by the dispersion relation of the linearized problem \cite{DT}. We show that even the collision of eigenvalues and the agreement of their signatures is directly characterized by the dispersion relation. This characterization is through the roots of a polynomial, which is a reduction of the dispersion relation to a polynomial approximately half its degree. This is a surprising fact as it is by no means clear why such a characterization is possible, as the collisions of eigenvalues and their types are not itself objects that can be identified directly algebraically, particularly with a simpler algebraic relation than the eigenvalues themselves.
\section{General Setting}
\label{sec:GeneralSetting}
We follow the steps outlined in Section III of \cite{DT}. We use a coordinate transformation $x \rightarrow x-ct$ to a frame moving with the wave,
\begin{equation}
\partial_t u = \partial_x \frac{\delta H}{\delta u} + c\partial_x u = \partial_x \left( \frac{\delta H}{\delta u} + c u\right) =\partial_x \frac{\delta H_c}{\delta u},
\label{HPDEc}
\end{equation}
where $H_c$ is the modified Hamiltonian. The quadratic part of $H_c$ is
\begin{equation}
H_c^0 = \frac{c}{2} \int_0^{2\pi} u^2 \, dx - \frac{1}{2} \int_0^{2\pi} \sum_{j=0}^{N} \alpha_j (\partial_x^{j} u)^2 \, dx\, .
\label{Hc0}
\end{equation}
Traveling wave solutions of \eq{HPDE} are stationary solutions $U(x)$ of \eq{HPDEc} and stationary points of $H_c$.
\subsection{Perturbation from the trivial state. Dispersion relation}
For all $c\in \mathbb{R}$, eq.~\eq{HPDEc} has the trivial solution $u(x,t) = 0$. We linearize \eq{HPDEc} about the zero solution to obtain an equation for the perturbation $v = v(x,t)$ from the trivial state
\begin{equation}
\partial_t v = c\partial_x v - \sum_{j=0}^{N} (-1)^j \alpha_j \partial_x^{2j+1} v \, .
\label{lineqc}
\end{equation}
We decompose $v$ into a Fourier series in $x$, $v = \sum_{k=-\infty}^{\infty} \exp(ikx) \hat{v}_k$, to obtain decoupled evolution equations for each of the Fourier coefficients $\hat{v}_k = \hat{v}_k(t)$:
\begin{equation}
\partial_t \hat{v}_k = -i \Omega(k)\hat{v}_k,
\qquad k \in \mathbb{Z},
\label{fourier}
\end{equation}
where $\Omega(k)$ is given by
\begin{equation}
\Omega(k) = \omega(k) - ck = \sum_{j=0}^N \eta_j k^{2j+1}\, , \qquad
\omega(k) = \sum_{j=0}^N \alpha_j k^{2j+1}\,, \qquad \eta_j = \alpha_j - c\,\delta_{j,0}\, ,
\label{DR}
\end{equation}
is the dispersion relation of \eq{lineqc}, obtained by letting $v(x,t) = \exp(ikx - i\Omega t)$ in \eq{lineqc}. Here $\omega = \omega(k)$ is the dispersion relation in the original frame of reference corresponding to \eq{HPDE}--\eq{H0}. Note that $\omega(k)$ is an odd function.
\subsection{Non-zero amplitude branches}
Next, we discuss non-zero amplitude periodic solution branches of \eq{HPDEc} bifurcating from the trivial state.
A requirement for this is that a non-trivial stationary solution of \eq{fourier} exists, i.e.,
$\Omega(k) = 0$ for some $k \in \mathbb{N}$, since we have imposed that the solutions are $2\pi$-periodic. Thus
\begin{equation}
c = c_k = \frac{\omega(k)}{k}, \qquad k \in \mathbb{N}.
\label{ckdef}
\end{equation}
For simplicity, we assume that a unique bifurcating branch emanates from $c=c_k$. The solutions with $k > 1$ are $2\pi / k$-periodic. We focus on $k = 1$, i.e., $c = \omega(1)$. The cases with $k >1$ may be treated analogously (see Section~\ref{s:ktwo} for a discussion of the case $k\ge 2$ in the context of the gKdV equation).
\subsection{Floquet theory at zero amplitude}
Using Floquet theory \cite{DK2006, KP} the spectral stability of the non-trivial solution $U = U(x)$ of \eq{HPDEc} on the bifurcation branch starting at $c$ is determined by the growth rates of perturbations of the form
\begin{equation}
v(x,t) = e^{\lambda t}V(x),
~~
V(x) = e^{i\tilde{\mu} x}\sum_{n = -\infty}^{\infty} a_n e^{i nx}\, .
\label{Floquet}
\end{equation}
\noindent
Here $\tilde{\mu} \in (-1/2, 1/2]$ is the Floquet exponent.
Using \eq{fourier} for the zero-amplitude case,
\begin{equation}
\lambda = \lambda_n^{(\tilde{\mu})}= -i \Omega(n+\tilde{\mu}) = - i \omega (n+\tilde{\mu}) + i (n+\tilde{\mu}) c, \qquad n \in \mathbb{Z}\, .
\label{spectrum}
\end{equation}
The expression \eq{spectrum} is an explicit expression for the spectrum of the linearized stability problem for solutions of zero amplitude. Next, we examine how the spectrum of the linearization changes as the solution bifurcates away from zero amplitude.
\subsection{Collisions of eigenvalues, Hamiltonian-Hopf bifurcations}
After Floquet decomposition \eq{Floquet}, the elements of the spectrum become eigenvalues of the $\tilde{\mu}$-parameterized operator obtained by replacing $\partial_x\rightarrow \partial_x+i \tilde{\mu}$ in the linear stability problem. The eigenfunctions associated with these eigenvalues are (quasi)periodic and are bounded on the whole real line, see \cite{KP,DO2011} for details.
For zero amplitude, the spectrum \eq{spectrum} is on the imaginary axis. Instabilities for small amplitude come about through collisions of purely imaginary eigenvalues at zero amplitude for a fixed value of $\tilde{\mu}$. Away from the origin, eigenvalues generically split off from the axis through Hamiltonian-Hopf bifurcations \cite{MacKay,VDM1985} as the solution amplitude increases. Each such Hamiltonian-Hopf bifurcation produces a pair of eigenvalues off the imaginary axis that is symmetric with respect to the imaginary axis, thus yielding an exponentially growing eigenmode.
From \eq{spectrum}, it is easy to detect eigenvalue collisions away from the origin. They correspond to solutions of $\lambda_{n_1}^{(\tilde{\mu})} = \lambda_{n_2}^{(\tilde{\mu})} \neq 0$, $n_1,n_2 \in \mathbb{Z}$, $n_1 \neq n_2$, $\tilde{\mu} \in (-1/2, 1/2]$, i.e.,
\begin{equation}
-i\Omega(n_1+\tilde{\mu}) = -i\omega(n_1+\tilde{\mu}) + i (n_1+\tilde{\mu}) c = -i\omega(n_2+\tilde{\mu}) + i (n_2+\tilde{\mu}) c =- i\Omega (n_2 +\tilde{\mu})\, ,
\label{eq:collision}
\end{equation}
where $c=c_1$ is given by \eq{ckdef} with $k=1$. Solving this equation results in values of $\tilde{\mu}$ and $n_1$ for which $\lambda^{(\tilde{\mu})}_{n_1}$ is an eigenvalue colliding with another one. Typically this is done by solving \eq{eq:collision} for $\tilde{\mu}$ for different fixed $n_1$.
\subsection{Krein signature}
A necessary condition for two eigenvalues colliding on the imaginary axis to cause a Hamiltonian-Hopf bifurcation is that the eigenvalues have opposite Krein signatures. The Krein signature is the sign of the energy of the eigenmode associated with the eigenvalue. For a collision of eigenvalues to produce an instability this energy needs to be indefinite: a definite sign would entail bounded level sets of the energy, leading to perturbations remaining bounded.
For Hamiltonian systems with quadratic part given by \eq{Hc0}, the eigenmode of the form
$v(x,t) = a_n \exp\left[i(n+\tilde{\mu}) x + \lambda_n^{(\tilde{\mu})}t\right] + \mbox{c.c.}$, where c.c.~stands for the complex conjugate of the preceding term, contributes to $H_c^0$ the relative energy (see \cite{DT})
$$
H_c^0|_{(n, \tilde{\mu})} \sim - |a_n|^2\, \frac{\Omega (n+\tilde{\mu})}{n+\tilde{\mu}}.
$$
Thus the Krein signature of $\lambda_n^{(\tilde{\mu})}$ is given by
\begin{equation*}
\kappa(\lambda_n^{(\tilde{\mu})}) = -\mathop{\rm sign}\nolimits \left( \frac{\Omega(n+\tilde{\mu})}{n+\tilde{\mu}} \right)\, .
\label{Ksign}
\end{equation*}
A simple characterization of agreement of the signatures of two colliding eigenvalues $\lambda_{n_1}^{(\tilde{\mu})}$ and $\lambda_{n_2}^{(\tilde{\mu})}$ immediately follows.
\begin{proposition}
Let two eigenvalues $\lambda_{n_1}^{(\tilde{\mu})} = \lambda_{n_2}^{(\tilde{\mu})} = \lambda \neq 0$, $n_1 \neq n_2$, of the Bloch wave decomposition \eq{Floquet} of
\eq{lineqc} coincide, i.e., \eq{eq:collision} holds. Then the product of Krein signatures of the eigenvalues is characterized by the sign of the quantity
\begin{equation}
q = q_{n_1,n_2}^{(\tilde{\mu})} = \frac{\Omega(n_1+\tilde{\mu})}{n_1+\tilde{\mu}} \cdot \frac{\Omega(n_2+\tilde{\mu})}{n_2+\tilde{\mu}}
= \frac{|\lambda|^2}{(n_1+\tilde{\mu})(n_2+\tilde{\mu})}\,.
\label{qdef}
\end{equation}
Let $Z = Z_{n_1,n_2}^{(\tilde{\mu})} = (n_1+\tilde{\mu}) (n_2+\tilde{\mu})$. Since $\lambda \neq 0$, the sign of $Z$ characterizes the agreement of the Krein signatures of the coinciding eigenvalues:
\begin{equation}
\kappa(\lambda_{n_1}^{(\tilde{\mu})}) \kappa(\lambda_{n_2}^{(\tilde{\mu})}) = \mathop{\rm sign}\nolimits (q) = \mathop{\rm sign}\nolimits \left[(n_1+\tilde{\mu})(n_2+\tilde{\mu})\right] = \mathop{\rm sign}\nolimits (Z)\, .
\label{eq:Kreinproduct}
\end{equation}
\label{prop:1}
\end{proposition}
We denote
\begin{equation}
\mu := n_2 + \tilde{\mu}, \qquad \mbox{and} \qquad
\triangle n := n_1 - n_2\, .
\label{ndef}
\end{equation}
Without loss of generality, $\triangle n > 0$. Then $Z = \mu (\triangle n+\mu)$ and the collision condition \eq{eq:collision} reduces to
\begin{equation}
\Omega(\triangle n+ \mu) = \Omega(\mu)
\, .
\label{resonance}
\end{equation}
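To make the reduction concrete, the collision condition \eq{resonance} can be solved numerically for $\mu$ once an odd polynomial $\Omega$ and $\triangle n$ are fixed, and the sign of $Z = \mu(\triangle n+\mu)$ then gives the signature product of Proposition~\ref{prop:1}. A minimal sketch in Python (the helper names are ours; $\Omega$ is passed as the coefficient list $[\eta_0,\dots,\eta_N]$ of its odd powers):

```python
import math
import numpy as np

def collision_mus(eta, dn):
    """Real roots mu of Omega(dn + mu) = Omega(mu), where
    Omega(k) = sum_j eta[j] * k**(2*j + 1) is an odd polynomial."""
    deg = 2 * len(eta) - 1
    diff = [0.0] * (deg + 1)          # coefficients of Omega(dn+mu) - Omega(mu)
    for j, e in enumerate(eta):
        m = 2 * j + 1
        for i in range(m + 1):        # binomial expansion of (dn + mu)**m
            diff[i] += e * math.comb(m, i) * dn ** (m - i)
        diff[m] -= e                  # subtract the Omega(mu) term
    roots = np.roots(diff[::-1])      # np.roots expects highest degree first
    return sorted(r.real for r in roots if abs(r.imag) < 1e-9)

def Z(mu, dn):
    """sign(Z) = product of the Krein signatures of the colliding pair."""
    return mu * (dn + mu)
```

For instance, for the cubic $\Omega(k) = -4k + k^3$ and $\triangle n = 3$ there are two real collision parameters $\mu$, both with $Z < 0$, i.e., colliding eigenvalues of opposite Krein signature.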
\section{Recurrent Sequences of Polynomials}
\label{sec:RecurrentSequences}
Before we revisit \eq{resonance} in the next section, we need to define some particular recurrent sequences of polynomials.
\begin{lemma}
Let $a, b \in \mathbb{C}$, $m \in \mathbb{N}_0$, and
$$
t_m = a^m + (-b)^m\, .
$$
Then
\begin{equation*}
t_{m+1} = (a-b) t_m + (ab)t_{m-1}\, ,\qquad m \ge 1.
\label{recurtn}
\end{equation*}
\label{lemma:recurrent}
\end{lemma}
\vspace*{-0.3in}
\begin{proof}
$$
t_{m+1} = (a-b)(a^m + (-1)^m b^m) + ab (a^{m-1} + (-1)^{m-1} b^{m-1}) =
(a-b)t_m + (ab)t_{m-1}\, .
$$
\end{proof}
Since $t_0 = 2$ and $t_1 = a-b$, by induction all $t_m$ can be written as polynomials in the two variables $a-b$ and $ab$,
$t_m = t_m (a-b,ab)$.
Further, $t_m$ is a homogeneous polynomial in $a$ and $b$ of degree $m$. We introduce $s_m$ by $t_m = (a-b)^m s_m (\gamma)$, i.e.,
\begin{equation}
s_m = s_m(\gamma) := \frac{t_m (a-b,ab)}{(a-b)^m}\, , \qquad
\mbox{with $\gamma := \displaystyle\frac{ab}{(a-b)^2}$.}
\label{def:ga}
\end{equation}
The sequence $s_m$ is characterized recursively by
\begin{eqnarray}
s_{m+1} & = & s_m + \gamma s_{m-1}\, , \quad m \ge 1, \qquad s_0= 2, \quad s_1 = 1, \label{rec:s}
\end{eqnarray}
which shows that $s_m$ is a polynomial in $\gamma$ of degree $m/2$ ($m$ even) or $(m-1)/2$ ($m$ odd).
One can easily see that
\begin{gather*}
s_2(\gamma) = 1+ 2\gamma , \quad
s_3(\gamma) = 1 + 3\gamma , \quad
s_4(\gamma) = 1 + 4\gamma + 2\gamma^2, \quad
s_5(\gamma) = 1 + 5\gamma + 5\gamma^2\, , \\
s_6(\gamma) = 1+ 6\gamma + 9 \gamma^2 + 2\gamma^3, \qquad
s_7(\gamma) = 1 + 7\gamma + 14\gamma^2 + 7 \gamma^3\, .
\end{gather*}
Solving the recurrence relationship,
\begin{equation}
s_m (\gamma) = \psi_+^m + \psi_-^m\, , \quad m \ge 0, \qquad
\psi_{\pm} := \frac{1}{2}\left(1 \pm \sqrt{1 + 4\gamma}\right)\, .
\label{s:exp}
\end{equation}
That implies
\begin{equation}\label{lemma2}
s_m(0)=1 \quad (m \ge 1),\qquad s_m(-1/4)=2^{1-m}.
\end{equation}
Note that for $m \ge 2$, $s_m(\gamma)$ is increasing on $(-1/4,0)$, as
\begin{equation}
s_m'(\gamma) = \frac{m}{\sqrt{1+4\gamma}} (\psi_+^{m-1} - \psi_-^{m-1}) > 0\, .
\label{growths}
\end{equation}
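The recurrence \eq{rec:s} and the closed form \eq{s:exp} are straightforward to cross-check numerically; the following sketch (function names ours) also verifies the special values \eq{lemma2}:

```python
import math

def s_rec(m, g):
    """s_m(g) via the recurrence s_{m+1} = s_m + g * s_{m-1},
    s_0 = 2, s_1 = 1 (eq. rec:s)."""
    if m == 0:
        return 2.0
    prev, cur = 2.0, 1.0
    for _ in range(m - 1):
        prev, cur = cur, cur + g * prev
    return cur

def s_closed(m, g):
    """Closed form s_m = psi_+**m + psi_-**m with
    psi_pm = (1 +- sqrt(1 + 4g))/2 (eq. s:exp); real for g >= -1/4."""
    r = math.sqrt(1 + 4 * g)
    return ((1 + r) / 2) ** m + ((1 - r) / 2) ** m
```

For example, `s_rec(6, g)` reproduces $1 + 6\gamma + 9\gamma^2 + 2\gamma^3$.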
A few lemmas characterizing the behavior of $s_m(\gamma)$ are proved in the Appendix.
\section{Reduction of the Equation for Signatures of Colliding Eigenvalues}
\label{Sec:Reduction}
We prove that for scalar Hamiltonian problems \eq{HPDEc}--\eq{Hc0} of order $2N+1$, the polynomial equation \eq{resonance}, which characterizes the zero-amplitude collisions of eigenvalues with indices $\triangle n+\mu$ and $\mu$ that may result in Hamiltonian-Hopf bifurcations, and thus in instability of non-zero amplitude periodic waves, can be expressed as a polynomial equation of degree $N$ in the real variable $\gamma$ with coefficients independent of $\mu$, where $\gamma$ is defined as
\begin{equation}
\gamma :=\frac{ \mu (\triangle n + \mu)}{(\triangle n)^2}\, . \label{gammadef}
\end{equation}
\begin{theorem}
Let
$$
\Omega := \Omega(k)=\sum_{j=0}^N \eta_j k^{2j+1}\, ,
$$
be an odd polynomial of degree $2N+1$, $\eta_j \in \mathbb{C}$ for $j=0,\dots, N$.
Then
\begin{equation}
\Omega(\triangle n+\mu) - \Omega(\mu) = \sum_{j=0}^{N} \eta_j ({\triangle n})^{2j+1}s_{2j+1}\left( \gamma \right)\, ,
\label{qeq}
\end{equation}
where the polynomial $s_{2j+1} = s_{2j+1}(\gamma)$ of degree $j$ is defined recursively by \eq{rec:s}.
\label{theorem1}
\end{theorem}
\begin{proof}
The claim follows immediately by \eq{def:ga} and Lemma~\ref{lemma:recurrent} by setting $a := \triangle n + \mu$ and $b:=\mu$:
$$
\Omega(a) - \Omega(b) = \sum_{j=0}^{N} \eta_j (a^{2j+1} - b^{2j+1})= \sum_{j=0}^{N} \eta_j t_{2j+1}(a-b,ab)
=\sum_{j=0}^{N} \eta_j ({\triangle n})^{2j+1}s_{2j+1}(\gamma)\,.
$$
\end{proof}
As before, the collision condition \eq{resonance} expressed using \eq{qeq} is solved for $\gamma$ for different fixed values of $\triangle n$. After solving for $\gamma$, it is necessary to check that $\gamma$ gives rise to a real value of $\mu$ by solving the quadratic equation with the unknown $\mu$:
$$
\mu(\mu + \triangle n) = \gamma (\triangle n)^2 \, .
$$
Thus
\begin{equation}
\mu_{1,2} = \frac{-\triangle n \pm \sqrt{(\triangle n)^2 + 4\gamma (\triangle n)^2 }}{2} = \frac{\triangle n}{2}\left(-1 \pm \sqrt{1 + 4\gamma} \right)
\, .
\label{mueq}
\end{equation}
By Proposition~\ref{prop:1} we are interested in negative values of $\gamma$, which characterize a possible coincidence of two eigenvalues of opposite signature, since by \eq{gammadef} $\gamma$ has the same sign as $Z$ in \eq{eq:Kreinproduct}. Then any root $\gamma \in [-1/4,0)$ corresponds to a collision of two eigenvalues of opposite signature. If $\gamma < -1/4$, then $\gamma$ does not correspond to a collision of two purely imaginary eigenvalues, as $\mu$ is not real. If $\gamma > 0$, then there is a collision of two eigenvalues of the same signature.
If $\gamma = 0$ the collision is located at the origin of the spectral plane, i.e., it does not correspond to the Hamiltonian-Hopf bifurcation.
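These cases condense into a small classifier (a sketch; the labels and function names are ours), with $\mu_{1,2}$ recovered from \eq{mueq}:

```python
import math

def mus_from_gamma(g, dn):
    """The collision parameters mu_{1,2} of eq. (mueq); empty if not real."""
    if g < -0.25:
        return []
    r = math.sqrt(1 + 4 * g)
    return [dn / 2 * (-1 - r), dn / 2 * (-1 + r)]

def classify(g):
    """Nature of a root gamma of the collision equation."""
    if g < -0.25:
        return "no purely imaginary collision"   # mu not real
    if g < 0:
        return "opposite signature"              # candidate Hamiltonian-Hopf
    if g == 0:
        return "collision at the origin"
    return "same signature"
```

By construction any returned $\mu$ satisfies $\mu(\mu + \triangle n) = \gamma(\triangle n)^2$.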
We have proved the following main theorem characterizing the spectral stability of small-amplitude traveling waves of \eq{HPDE}.
\begin{theorem}
\sloppypar Consider a scalar $2\pi$-periodic Hamiltonian partial diff\-er\-en\-tial equation of the form \eq{HPDE} and assume that $u = 0$ is a spectrally stable solution.
Let \eq{DR} be the dispersion relation of the equation linearized about $u = 0$ in a reference frame moving with the velocity $c$. Then a branch of traveling wave solutions of \eq{HPDE} with velocity $c$ bifurcates from the trivial solution at $c = \omega(1)$, see \eq{ckdef}. A necessary condition for a Hamiltonian-Hopf bifurcation at zero-amplitude characterizing a loss of spectral stability of small-amplitude solutions on the bifurcating branch is that
\begin{equation}
\sum_{j=0}^{N} \eta_j ({\triangle n})^{2j+1}s_{2j+1}(\gamma) = 0\,
\label{ceq}
\end{equation}
has a root $\gamma$, $\gamma \in [-1/4, 0)$.
\label{th:gamma}
\end{theorem}
\section{Generalized KdV Equations}
As a simple example illustrating an application of Theorem~\ref{th:gamma} to study spectral stability of small-amplitude periodic traveling waves, we consider the generalized KdV equation (gKdV)
\begin{equation}
\partial_t v + \alpha\partial_x^3 v + \partial_x f(v) = 0\, ,
\label{gKdV}
\end{equation}
and the generalized higher-order KdV equation ($p \ge 2$)
\begin{equation}
\partial_t v + \alpha\partial_x^{2p+1} v + \partial_x f(v) = 0\, .
\label{HOgKdV}
\end{equation}
Here we assume $f(0) = 0$ and periodic boundary conditions, $x\in [0,2\pi]$. Within this work we study high-frequency instabilities, staying away from the
origin in the spectral plane, i.e., we do not discuss the modulational or Benjamin-Feir instability.
For simplicity we consider \eq{gKdV} first and then discuss the case of \eq{HOgKdV} as the reduction process and the results are completely analogous.
We pay particular attention to the case of the KdV equation, with $f(v) = v^2$ in \eq{gKdV}.
For a detailed history of stability results of periodic traveling waves for KdV, mKdV (equation \eq{gKdV} with $f(u) = u^3$), and gKdV we refer the reader to \cite{BD2009, deckap},
see also \cite{KapHar, Johnson2009, DN2010}, and \cite{DT}, Section 3.1, for numerical results illustrating the theory developed here. The results in the literature can be briefly summarized as follows: periodic traveling waves are spectrally stable away from the origin
of the spectral plane (with the exception of cn solutions to mKdV), and also nonlinearly orbitally stable with respect to certain classes of perturbations. The
techniques used to prove the results for KdV are based on its integrability.
The dispersion relation of the linearization of \eq{gKdV} in the traveling frame is
\begin{equation}
\Omega = \Omega(k) = ck + \alpha k^3\, .
\label{OgKdV}
\end{equation}
Branches of small-amplitude waves bifurcate from the trivial solution for the critical values of $c$ for which $\Omega(k) = 0$ for a nonzero integer value of $k$:
\begin{equation}
c_k = -\alpha k^2\, .
\label{cgKdV}
\end{equation}
Let us now fix $k\in \mathbb{Z}\setminus\{0\}$ and set $c = c_k$. The condition \eq{resonance} for a collision of eigenvalues has the form
\begin{equation}
c\triangle n + \alpha \left[(\triangle n)^3 + 3\triangle n\mu (\triangle n+\mu)\right] = 0\, .
\label{Raux1}
\end{equation}
According to Theorem~\ref{th:gamma} equation \eq{Raux1} can be rewritten in the form \eq{ceq}, i.e.,
\begin{equation}
c(\triangle n) + \alpha (\triangle n)^3 (1+3\gamma) = 0\, .
\label{redKdV}
\end{equation}
The root $\gamma$ of \eq{redKdV} that characterizes the nature of collisions of eigenvalues at zero amplitude is given by
\begin{equation}
\gamma =- \frac{c}{3\alpha (\triangle n)^2} - \frac{1}{3} = \frac{1}{3}\left( \frac{k^2}{(\triangle n)^2} - 1\right).
\label{gammaeq}
\end{equation}
The condition $-1/4 \le \gamma < 0$ can be expressed as
$$
-\frac{3}{4} \le \frac{k^2}{(\triangle n)^2} - 1 < 0, \qquad \mbox{i.e.,} \qquad
\frac{1}{4}(\triangle n)^2 \le k^2 < (\triangle n)^2\, ,
$$
or equivalently
\begin{equation}
k^2 < (\triangle n)^2 \le 4k^2\, , \qquad \mbox{and thus} \qquad
|k| < |\triangle n| \le 2|k|\, .
\label{kbound2}
\end{equation}
It is easy to see that in this special case equality in the upper bound of \eq{kbound2} corresponds to a collision of eigenvalues $\lambda$ with indices $n_1+\tilde{\mu} = k$ and $n_2+\tilde{\mu} = -k$ in \eq{spectrum}. But $\Omega(k) = \Omega(-k) = 0$ by \eq{OgKdV}--\eq{cgKdV}. Hence the collision of opposite signature eigenvalues corresponding to the root $\gamma = -1/4$ is in this particular case located at the origin of the spectral plane, and it is therefore not associated with the Hamiltonian-Hopf bifurcation. Thus the instability condition is
\begin{equation}
|k| < |\triangle n| < 2|k|\, .
\label{kbound}
\end{equation}
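The chain \eq{gammaeq}--\eq{kbound} can be checked directly in a few lines (a sketch with $\alpha = 1$; function names are ours):

```python
def gamma_gkdv(k, dn):
    """Root of eq. (redKdV) for alpha = 1, c = c_k = -k**2 (eq. gammaeq)."""
    return (k ** 2 / dn ** 2 - 1) / 3

def unstable_pair(k, dn):
    """Opposite-signature collision away from the origin: gamma in (-1/4, 0),
    which by eq. (kbound) is equivalent to |k| < |dn| < 2|k|."""
    g = gamma_gkdv(k, dn)
    return -0.25 < g < 0
```

For $k=1$ no integer $\triangle n$ qualifies (stability of the $2\pi$-periodic branch), while for $k=2$ only $\triangle n = 3$ does.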
Since the stability results are independent of $\alpha$, without loss of generality we assume $\alpha = 1$ in the rest of this section.
\subsection{gKdV Equation. Solutions with base period ${\pmb{2\pi}}$}
First, we consider KdV, i.e., $f(v) = v^2$, as the linear analysis is identical for all $f(v)$ satisfying $f(0) = 0$ and the characterization of the
collision condition in Theorem~\ref{th:gamma} does not depend on the form of the nonlinearity.
In that case, the solution branch indexed by $k=1$ bifurcating at $c_1 = -1$ from the trivial solution corresponds to the cnoidal waves with base period $2\pi$; see \cite{DT}, Section 3.1, for the solution formula, numerical results, and analysis. The condition \eq{kbound} implies that collisions of eigenvalues with opposite Krein signature at zero amplitude happen only for two eigenmodes of the form \eq{Floquet} with Fourier indices $n_1, n_2$, $\triangle n = n_1 - n_2$, where $1 < \triangle n < 2$. As no such $\triangle n$ exists, the small-amplitude cnoidal waves of base period $2\pi$ are spectrally stable (away from the origin of the spectral plane). This is in agreement with the results obtained in \cite{BD2009} and \cite{DT}, Section 3.1, step 5. The same result is true for any nonlinearity $f(v)$, including the case of mKdV, and thus, not accounting for a possible modulational instability, small-amplitude periodic traveling waves with base period $2\pi$ are spectrally stable for gKdV \eq{gKdV}.
\subsection{KdV Equation. Solutions with base period $\pmb{2 \pi /k}$}
\label{s:ktwo}
We discuss the case $k\ge 2$. Solutions on the branch bifurcating from the trivial solution at $c_k = -k^2$ also correspond in the case of KdV to the
cnoidal wave solutions, as the cnoidal waves comprise all periodic traveling waves to KdV. However, these solutions are subharmonic compared to the solutions on
the branch with index $k=1$, i.e., their base-period is $2\pi/k$. One way to see this is to consider \eq{gKdV} with $f(v) = v^2$ in the frame traveling with velocity $c$:
\begin{equation}
v_t +\alpha v_{xxx} + (v^2)_x + cv_x = 0\, .
\label{KawaharaEq}
\end{equation}
We set
\begin{equation}
y = \frac{x}{k}, \qquad \tau = \frac{t}{k^3}, \qquad u = k^2 v, \qquad \tilde{c} = k^2c.
\label{transform}
\end{equation}
Then \eq{KawaharaEq} transforms to
\begin{equation}
u_{\tau} + \alpha u_{yyy} + (u^2)_y + \tilde{c}u_y = 0\, .
\label{KawaharaEqTrans}
\end{equation}
Thus any solution $v(x,t)$ of \eq{KawaharaEq} with base period $2\pi$ traveling with velocity $c$ corresponds one-to-one to a solution $u(y,\tau)$ of \eq{KawaharaEqTrans} with base period $2\pi / k$ traveling with velocity $\tilde{c} = c k^2$. The $k$-fold repetition of a $2\pi/k$-periodic solution of \eq{KawaharaEqTrans} is also a $2\pi$-periodic solution of \eq{KawaharaEqTrans}, which is equivalent to \eq{KawaharaEq} with $c = c_k$. This relation allows us to identify through \eq{transform} the branch of $2\pi$-periodic solutions of \eq{KawaharaEq} bifurcating at $c=c_k$ with the branch of solutions of the same equation bifurcating at $c = c_1$, i.e., the branch of solutions of \eq{KawaharaEq} bifurcating at $c= c_k$ consists of properly rescaled multiple copies of the solutions of the same equation located on the branch bifurcating at $c=c_1$. Therefore perturbations that are subharmonic for $k=1$ are co-periodic for $k\ge 2$, etc. This leads to more eigenvalue collisions for $k\ge 2$ than for $k=1$, since the co-periodic spectrum for $k\ge 2$, i.e., the spectrum for the Floquet multiplier $\mu=0$, includes (after a proper rescaling) the union of the spectra for $k=1$ and $\mu=0$, $\mu = 1/k$, $\mu = 2/k, \dots$.
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{Fig1.jpg}
\caption{
Illustration of the relation \eq{sigma8} of the spectrum $\sigma^{(2)}$ (left) and $\sigma^{(1)}$ (right) for KdV equation. Individual curves correspond to different values of $n$ with the index $n$ indicated. The spectrum partitions $\sigma_{\mu}$ correspond to all $\lambda$ for a given $\mu$.
Displayed are $\lambda = \lambda_n^{(\mu)}$ values for $\mu = -0.4$ ($k=2$, left) and $\mu = -0.2$ and $\mu = 0.3$ ($k=1$, right).
For better visibility we have removed the branches with indices $n$, $-2\le n\le 3$ ($k=2$) and $-1\le n \le 1$ ($k=1$), all undisplayed branches lie close to the horizontal axis. Note the scaling factor 8 on the $\lambda$ axis (left) for $\sigma^{(2)}$ compared to $\sigma^{(1)}$ (right).
\label{Fig:k12}}
\end{figure}
As an illustration consider the case $k=2$. The spectrum of the linearized problem is given by
\begin{equation}
\sigma^{(2)} = \displaystyle\bigcup_{\mu \in (-1/2,1/2]} \sigma^{(k=2)}_{\mu} =
\left\{ \lambda_n^{(\mu)}; \ \lambda_n^{(\mu)} = - i \left[4(n+\mu) - (n+\mu)^3\right], n \in \mathbb{Z} \right\}\,.
\label{sigma2}
\end{equation}
On the other hand, the spectrum for $k=1$ is given by
\begin{equation}
\sigma^{(1)} = \displaystyle\bigcup_{\mu \in (-1/2,1/2]} \sigma^{(k=1)}_{\mu} =
\left\{ \lambda_n^{(\mu)}; \ \lambda_n^{(\mu)} = - i \left[(n+\mu) - (n+\mu)^3\right], n \in \mathbb{Z} \right\}\,.
\label{sigma1}
\end{equation}
It is easy to see (see Fig.~\ref{Fig:k12} for a visualization) that for all $\mu \in (-1/2,1/2]$
\begin{equation}
\frac{1}{8}\sigma_{\mu}^{(k=2)} = \sigma_{\mu/2}^{(k=1)} \cup \sigma_{\mu/2 + 1/2}^{(k=1)}\, .
\label{sigma8}
\end{equation}
Here multiplication of the set by a scalar means multiplication of each of its elements by the scalar and we use the periodicity $\sigma_{\mu} = \sigma_{\mu+1}$ for all $\mu \in \mathbb{R}$ to properly define the second term $\sigma_{\mu/2 + 1/2}^{(k=1)}$.
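The identity \eq{sigma8} can be verified on a truncated range of Fourier indices: an even index $n = 2m$ on the left maps to index $m$ with Floquet exponent $\mu/2$, an odd index $n = 2m+1$ to index $m$ with exponent $\mu/2 + 1/2$. A quick numerical check (a sketch; the truncation size is arbitrary):

```python
def spec_k1(mu, indices):
    """Eigenvalues lambda_n^(mu) of eq. (sigma1) (k = 1), as imaginary parts,
    rounded so floating-point noise does not spoil set comparison."""
    return {round(-((n + mu) - (n + mu) ** 3), 6) for n in indices}

def spec_k2_scaled(mu, indices):
    """(1/8) * lambda_n^(mu) of eq. (sigma2) (k = 2), as imaginary parts."""
    return {round(-(4 * (n + mu) - (n + mu) ** 3) / 8, 6) for n in indices}

# Even/odd splitting of n in [-2N, 2N) gives m in [-N, N) on both pieces.
N, mu = 6, -0.4
left = spec_k2_scaled(mu, range(-2 * N, 2 * N))
right = spec_k1(mu / 2, range(-N, N)) | spec_k1(mu / 2 + 0.5, range(-N, N))
```

With these index ranges the two sides agree exactly, set by set.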
The condition \eq{kbound} indicates that there are collisions of eigenvalues of opposite signature at zero amplitude for modes of the form \eq{Floquet} with Fourier indices
$n_1, n_2$, with $\triangle n = n_1 - n_2$ satisfying $\triangle n \in \{ k+1, \dots, 2k-1\}$, which is a non-empty set for $k\ge2$. Generically, this would imply spectral instability of the waves. However, none of these collisions unfolds at non-zero amplitude into a Hamiltonian-Hopf bifurcation. Such bifurcations are not possible since according to
\cite{BD2009} all periodic traveling wave solutions to KdV are spectrally stable. As a collision of eigenvalues of opposite Krein signature is only a necessary condition for a Hamiltonian-Hopf bifurcation, the analysis presented here does not allow us to see this phenomenon directly. Some indication can be found in the fact that these new collisions at $c = c_k$ correspond to collisions of opposite signature eigenvalues arising from different components (as opposed to from the same component)
of the union on the right-hand side of \eq{sigma8}. The different spectrum partitions and associated eigenspaces do not interact with each other; see \cite{DeMeMa1992} and [Koll{\'a}r \& Miller, preprint 2018] for a thorough discussion of avoided Hamiltonian-Hopf bifurcations.
It is possible to see within the analysis presented here that the collisions of the opposite Krein signature eigenvalues of the $2\pi / k$
periodic solutions are just an artifact of the $2\pi$ periodic setting, i.e., when one considers the stability of
the $2\pi / k$ periodic solutions as the stability of its $k$-repetition in the $2\pi$ periodic frame in \eq{KawaharaEq}. Due to the
periodic character of the solution the stability of such a $k$-repetition is equivalent to the stability of a
single $2\pi / k$ periodic repetition in \eq{KawaharaEqTrans}. But we have proved above that the waves with period $L$ considered on the interval $[0,L]$ are spectrally stable (this corresponds to $k = 1$ for \eq{KawaharaEq} where we have set without loss of generality $L = 2\pi$). Therefore the $2\pi / k$ periodic waves are spectrally stable and
all collisions at zero amplitude of \eq{KawaharaEq} at $c = c_k$ are only due to multi-coverage of the spectrum $\sigma^{(k)}$ as in \eq{sigma8}.
The same argument can be used for gKdV with the nonlinearity $f(v) =v^n$, $n \ge 2$. However, regarding the spectral stability of
small-amplitude waves lying on branches bifurcating at $c = c_k$ for $k \ge 2$ for a general $f(v)$, $f(0) = 0$, we can only conclude that there are collisions of opposite signature eigenvalues at zero amplitude. The lack of a transformation analogous to \eq{transform}, which requires the existence of a positive $r$ such that $f(au) = a^r f(u)$ for all $a \in \mathbb{R}$, does not allow us to rule out the potential Hamiltonian-Hopf bifurcations.
\subsection{Higher-order gKdV Equation}
A similar analysis can be performed for the higher-order gKdV equation \eq{HOgKdV}. In that case $\Omega(k) = ck + (-1)^{p+1} \alpha k^{2p+1}$ and $c_k = (-1)^p \alpha k^{2p}$. The relation $\Omega(\triangle n+\mu) = \Omega(\mu)$ reduces to a polynomial equation of degree $p$ for $\gamma$.
Similarly to the case $p=1$, it is possible for $p=2$ to show explicitly that all the waves on the branch $k=1$ are spectrally stable, as none of the roots of $\Omega(\triangle n+\mu) = \Omega(\mu)$ in terms of $\gamma$ is located in the interval $(-1/4,0)$. To see this one needs to determine for which integer values of $\triangle n$ the roots of
$$
-k^4 + (\triangle n)^4\left( 1 + 5\gamma + 5\gamma^2\right) = 0
$$
lie in the interval $\gamma \in (-1/4,0)$. A short calculation reveals that the condition reduces to $|k| < |\triangle n| < 2|k|$, i.e., the same condition as for $p=1$ analyzed above, leading to stability for $k=1$. The same statement can be proved for any $p\ge 1$, for which the equation for $\gamma$ has the form
\begin{equation}
-k^{2p} + (\triangle n)^{2p} s_{2p+1}(\gamma) = 0\, .
\label{gammap}
\end{equation}
Here $s_{2p+1}(-1/4) = 2^{-2p}$ and $s_{2p+1}(0) = 1$ by \eq{lemma2}, and also $s_{2p+1}(\gamma)$ is continuous on $[-1/4,0]$ and increasing on $(-1/4,0)$ by \eq{growths}. Therefore the roots of \eq{gammap} lie in the interval $\gamma \in (-1/4,0)$ if and only if $|k| < |\triangle n| < 2|k|$. Hence the small-amplitude periodic traveling wave solutions to \eq{HOgKdV} with the base period $2\pi$ ($k=1$) are spectrally stable, except perhaps with respect to modulational perturbations. The question of spectral stability of small-amplitude wave solutions to \eq{HOgKdV} with the base period $2\pi/k$, $k\ge 2$ is not addressed here.
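The endpoint-sign argument translates directly into code (a sketch; the closed form \eq{s:exp} is used for $s_{2p+1}$, and the function names are ours):

```python
import math

def s(m, g):
    """s_m(g) via the closed form of eq. (s:exp); real for g >= -1/4."""
    r = math.sqrt(1 + 4 * g)
    return ((1 + r) / 2) ** m + ((1 - r) / 2) ** m

def opposite_signature_collision(p, k, dn):
    """Does -k**(2p) + dn**(2p) * s_{2p+1}(gamma) = 0 (eq. gammap) have a
    root gamma in (-1/4, 0)?  Since s_{2p+1} is increasing there, it does
    exactly when the endpoint values have opposite signs."""
    f = lambda g: -float(k) ** (2 * p) + float(dn) ** (2 * p) * s(2 * p + 1, g)
    return f(-0.25) < 0 < f(0.0)
```

For every $p$ this reproduces the window $|k| < |\triangle n| < 2|k|$.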
\section{Balanced Higher Order KdV equations}
\label{Sec:CaseStudy}
We demonstrate the full power of Theorem~\ref{th:gamma} on a more complicated example. Here we explicitly characterize stability regions for
small-amplitude periodic traveling wave solutions of KdV-type equations with two balanced linear terms of odd order:
\begin{equation}
u_t = \partial_x f(u)+ A\, \partial^{2q+1}_x u + B \, \partial_x^{2p+1} u,
\label{KdVmn}
\end{equation}
subject to periodic boundary conditions.
Here $p > q$ are positive integers, $A, B \in \mathbb{R}$ are non-zero coefficients, and $f(u)$ is a smooth function of $u$ and its spatial derivatives with $f(0) = 0$, containing no linear terms. The literature on this topic is limited. Most relevant is \cite{haraguslombardischeel}, where $f(u)\sim u^2$ (the Kawahara equation), and the period of the solutions is not fixed. It is concluded there that for solutions for which the amplitude scales as the 1.25-th power of the speed, solutions are spectrally stable. No conclusion is obtained for other solutions. Our investigation does not require this scaling, nor does it restrict the type of nonlinearity. Also relevant is \cite{deckap}, where the typical stability approach of \cite{KapHar} is extended to systems with singular Poisson operator like \eq{HPDE}, but the theory is not applied to \eq{KdVmn}. A mostly numerical investigation of equations like \eq{KdVmn} is undertaken in \cite{TDK}. As stated, our theory builds almost exclusively on \cite{DT} and our rigorous results agree with numerical results in \cite{TDK} where the special case $p = 2$, $q = 1$, and $A, B > 0$ was considered.
Traveling wave solutions $u=U(x-ct)$ with wave velocity $c$ satisfy
$$
-c U' =\partial _x f(U)+ A U^{(2q+1)} + B U^{(2p+1)}.
$$
The spectral stability of small-amplitude waves that bifurcate at zero amplitude from the trivial solution $U=0$ is characterized by the growth of the solutions of the linear equation
\begin{equation}
v_t = c v_x+A v_{(2q+1)x} + B v_{(2p+1)x},
\label{eqlinear}
\end{equation}
with dispersion relation
\begin{equation*}
\Omega = \Omega_{p,q}(k) = -ck - A (-1)^q k^{2q+1} - B (-1)^p k^{2p+1}=-ck-\alpha k^{2q+1}+\beta k^{2p+1},
\label{DRgeneral}
\end{equation*}
where we have introduced
\begin{equation}
\alpha = A (-1)^q, \qquad \qquad \beta = - B(-1)^p.
\label{ABdef}
\end{equation}
Without loss of generality, we assume that $\alpha > 0$. If not, the transformation $x \rightarrow - x$ (i.e., $k\rightarrow -k$) and $c \rightarrow -c$ can be used to switch the sign of $\alpha$. The scaling symmetry of the equation allows us to set $\alpha=1$ hereafter. The choice of opposite signs in front of $\alpha$ and $\beta$ in \eq{ABdef} is intuitive: if $\alpha$ and $\beta$ have opposite signs the Hamiltonian energy \eq{Hc0} is definite and all eigenvalues have the same signature. This rules out Hamiltonian-Hopf bifurcations and the spectral instabilities following from them. In other words, the interesting case for our considerations is that both $\alpha$ and $\beta$ are positive. Lastly, since we study bifurcations from the first Fourier mode $k = 1$,
$c = \beta-\alpha=\beta-1$.
According to Theorem~\ref{theorem1}, eigenvalue collisions at zero-amplitude are characterized by the roots $\gamma$ of
\begin{equation*}
\triangle n R(\gamma) := -c\triangle n - (\triangle n)^{2q+1} s_{2q+1}(\gamma) + \beta (\triangle n)^{2p+1} s_{2p+1}(\gamma) = 0.
\label{cd}
\end{equation*}
This is rewritten as
\begin{equation}
\beta \left[(\triangle n)^{2p} s_{2p+1}(\gamma) - 1 \right]- \left[ (\triangle n)^{2q} s_{2q+1}(\gamma) - 1\right]= 0.
\label{eq10}
\end{equation}
Our goal is to find the parameter range $(\beta, \triangle n)$ for which the root $\gamma$ of \eq{eq10} satisfies $\gamma\in [-1/4,0)$.
The results obtained in the next section are graphically summarized in Fig.~\ref{Fig:region}.
\begin{figure}[h]
\centering
\includegraphics[width=0.85\textwidth]{Fig2.jpg}
\caption{Spectral stability regimes of the small-amplitude $2\pi$ periodic traveling waves for the Kawahara equation \eq{KdVmn}, $p=2$, $q=1$, $\alpha = 1$, $k=1$. Unstable pairs $(\triangle n, \beta)$ are indicated by the dashed line segments, stable pairs
$(\triangle n, \beta)$ are above the curve $\beta= \beta_{-1/4}(\triangle n)$ and below the curve $\beta = \beta_0(\triangle n)$ given by \eq{beta00}--\eq{beta2} for $\triangle n \ge 3$, by \eq{betan2} for $\triangle n = 2$, and by $\eq{betan1}$ for $\triangle n = 1$.
\label{Fig:region}}
\end{figure}
An important role is played by the interval end points $\gamma = 0$ and $\gamma = -1/4$.
By \eqref{lemma2} for $\gamma= 0$ we have
$$
\beta((\triangle n)^{2p} - 1) - ((\triangle n)^{2q} - 1) = 0
$$
and therefore we set
\begin{equation}
\beta_0 = \beta_0(\triangle n) = \frac{(\triangle n)^{2q} -1}{(\triangle n)^{2p} - 1}.
\label{beta00}
\end{equation}
On the other hand \eq{eq10} reduces for $\gamma = -1/4$ by \eqref{lemma2} to
\begin{equation}
\beta_{-1/4} = \beta_{-1/4}(\triangle n) = \left[\left(\displaystyle\frac{\triangle n}{2}\right)^{2q} - 1\right]
/ \left[\left(\displaystyle\frac{\triangle n}{2}\right)^{2p} - 1\right] \, .
\label{beta2}
\end{equation}
It follows immediately from Lemma~\ref{thetalemma} that for $\triangle n\geq 3$, $\beta_0(\triangle n)<\beta_{-1/4}(\triangle n)$, since this inequality may be rewritten as $f_{2p,2q}(\triangle n)<f_{2p,2q}(2)$ (in the notation of the Lemma).
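For the Kawahara case $p=2$, $q=1$ the two thresholds are explicit and the ordering $\beta_0 < \beta_{-1/4}$ for $\triangle n \ge 3$ is easy to confirm numerically (a sketch; function names are ours):

```python
def beta_0(dn, p=2, q=1):
    """Threshold eq. (beta00), the gamma = 0 endpoint; defined for dn >= 2."""
    return (dn ** (2 * q) - 1) / (dn ** (2 * p) - 1)

def beta_quarter(dn, p=2, q=1):
    """Threshold eq. (beta2), the gamma = -1/4 endpoint; defined for dn >= 3."""
    return ((dn / 2) ** (2 * q) - 1) / ((dn / 2) ** (2 * p) - 1)
```

For example, $\beta_0(3) = 8/80 = 0.1$ while $\beta_{-1/4}(3) = 1.25/4.0625 \approx 0.3077$.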
\subsection{Collisions of eigenvalues of opposite signature}
Since the thresholds $\gamma = 0$ and $\gamma = -1/4$ correspond, respectively, to $\beta = \beta_0(\triangle n)$ and $\beta = \beta_{-1/4}(\triangle n)$,
where $\beta_0(\triangle n) < \beta_{-1/4}(\triangle n)$, one may conjecture (for $\triangle n\geq 3$, since for $\triangle n=1, 2$ either $\beta_0$ or $\beta_{-1/4}$ is not defined) that collisions of eigenvalues of opposite Krein signature happen for $\beta \in (\beta_0(\triangle n), \beta_{-1/4}(\triangle n)]$.%
\footnote{Such a result would follow from monotonicity properties of the location of roots $\gamma$ with respect to $\beta$. Alternatively, we use an argument that proves that $\beta_0$ and $\beta_{-1/4}$ are the bounds of the stability region.}
For $\beta < \beta_0(\triangle n)$ one expects collisions of eigenvalues of the same signature and finally for $\beta > \beta_{-1/4}(\triangle n)$ one expects no collisions as the roots $\mu$ of \eq{mueq} are not real (see Fig.~\ref{Fig:beta}). As we prove next, this is true. The cases $\triangle n = 1$ and $\triangle n = 2$ are treated separately.
See \cite{TDK} for detailed numerical results (wave profiles and Fourier coefficients, spectrum diagrams) in the case $p=2$, $q=1$, and $f(u) = u^2$ (the Kawahara equation), in particular numerical simulations at non-zero amplitude confirming the presence of Hamiltonian-Hopf bifurcations (and thus spectral instability) that completely agree with the collisions of opposite Krein signature eigenvalues at zero amplitude described here. In the numerical experiments all such collisions studied actually yielded the bifurcation.
\begin{figure}[h]
\centering
\includegraphics[width=0.7\textwidth]{Fig3.png}
\caption{Parameter regimes for $\beta$, $\beta \le \beta_0(\triangle n)$, $\beta \in (\beta_0(\triangle n), \beta_{-1/4}(\triangle n)]$, and $\beta > \beta_{-1/4}(\triangle n)$. \label{Fig:beta}}
\end{figure}
\begin{theorem}
{\bf Case $\mathbf{\triangle n \ge 3}$.}
Let $p, q$, $p > q$, be positive integers and let $\triangle n$ be an integer, $\triangle n \ge 3$. The presence and character of collisions of eigenvalues of the linearized problem \eq{eqlinear} at zero amplitude at $c = c_1=\beta-\alpha$
depends on the difference of the indices of the Fourier modes $\triangle n$ of the perturbation in the following way:
\begin{itemize}
\item[(i)]
If $\triangle n$ is such that $\beta < \beta_0(\triangle n)$, then there is a collision of eigenvalues of the same signature, i.e., there is a root of \eq{eq10} with $\gamma > 0$ and there is no root with $\gamma \in [-1/4,0)$;
\item[(ii)]
If $\triangle n$ is such that $\beta_0(\triangle n) < \beta \le \beta_{-1/4}(\triangle n)$, then there is a collision of eigenvalues of opposite signature, i.e., there is a root $\gamma$ of \eq{eq10} such that $\gamma \in [-1/4, 0)$;
\item[(iii)]
If $\triangle n$ is such that $\beta_{-1/4}(\triangle n) < \beta$, then there is no collision of eigenvalues, i.e., all roots $\gamma$ of \eq{eq10} satisfy $\gamma < -1/4$.
\end{itemize}
\label{maintheorem}
\end{theorem}
\begin{proof}
\noindent
{\bf Part (ii).} We show that for all $\triangle n \ge 3$ and $\beta_0(\triangle n) < \beta \le \beta_{-1/4}(\triangle n)$
there exists $\gamma \in [-1/4, 0)$ satisfying $R(\gamma) = 0$.
Therefore by \eq{eq10}, in such a parameter regime there is a collision of eigenvalues of opposite Krein signature.
It is easy to see that
$$
R(0) = \beta[(\triangle n)^{2p}-1] - [(\triangle n)^{2q}-1] > \beta_0 [(\triangle n)^{2p}-1] - [(\triangle n)^{2q}-1] = 0,
$$
and,
\begin{eqnarray*}
R( -1/4)& =&
\beta \left( \frac{(\triangle n)^{2p}}{2^{2p}} - 1\right) - \left(\frac{(\triangle n)^{2q}}{2^{2q}} - 1\right) \\
& \le &
\beta_{-1/4} \left( \frac{(\triangle n)^{2p}}{2^{2p}} - 1\right) - \left(\frac{(\triangle n)^{2q}}{2^{2q}} - 1\right) = 0.
\end{eqnarray*}
Thus $R(0) > 0 \ge R\left( -1/4 \right)$ and the polynomial $R(\gamma)$ has a real root $\gamma \in [-1/4, 0)$.
\vspace{\baselineskip}
\noindent
{\bf Part (i).}
Since $\beta < \beta_0(\triangle n) < \beta_{-1/4}(\triangle n)$ the same argument as in Part (ii) yields $R(-1/4) < 0$. Also,
$$
R(0) = \beta[(\triangle n)^{2p}-1] - [(\triangle n)^{2q}-1] < \beta_0 [(\triangle n)^{2p}-1] - [(\triangle n)^{2q}-1] = 0\, .
$$
We prove that $R(\gamma) = \beta [(\triangle n)^{2p} s_{2p+1}(\gamma) -1] - [(\triangle n)^{2q} s_{2q+1}(\gamma) - 1] < 0$ for all $\gamma \in [-1/4, 0]$.
By Lemma~\ref{slemma} for $\triangle n \ge 3$ and $p \ge 1$,
$$
(\triangle n)^{2p}s_{2p+1}(\gamma) \ge \frac{3^{2p}}{2^{2p+1}} > 1 \, .
$$
Thus for all $\gamma \in [-1/4,0)$ and $\beta < \beta_0$,
\begin{eqnarray}
R(\gamma) & =& \beta[(\triangle n)^{2p}s_{2p+1}(\gamma) - 1] - [(\triangle n)^{2q}s_{2q+1}(\gamma) - 1] \nonumber \\
& <& \beta_0(\triangle n) [(\triangle n)^{2p}s_{2p+1}(\gamma) - 1] - [(\triangle n)^{2q}s_{2q+1}(\gamma) - 1] \, .
\label{Req1}
\end{eqnarray}
We prove that the right-hand side of \eq{Req1} is non-positive. This is equivalent to
\begin{equation}
\beta_0(\triangle n) = \frac{(\triangle n)^{2q}-1}{(\triangle n)^{2p}-1} \le \frac{(\triangle n)^{2q}s_{2q+1}(\gamma) - 1}{(\triangle n)^{2p}s_{2p+1}(\gamma) - 1}\, ,
\label{betaeq}
\end{equation}
or to
\begin{equation}
s_{2q+1} \ge s_{2p+1}[1-\theta(\triangle n)] + \theta(\triangle n)\, , \qquad \mbox{where} \quad
\theta(n) : = \frac{(n)^{2p}-(n)^{2q}}{(n)^{2p+2q}-(n)^{2q}}.
\label{sest2}
\end{equation}
Clearly $0 < \theta(n) < 1$.
Since $s_{2p+1} \le 1$ on $[-1/4,0]$, it suffices to prove \eq{sest2} for the value of $\triangle n$ that maximizes $\theta(\triangle n)$ over $\triangle n\ge 2$.
However, by Lemma \ref{thetalemma} for $p > q \ge 1$,
$\max_{n \ge 2} \theta(n) = \theta(2)$ and it suffices to prove
$s_{2q+1} \ge s_{2p+1} (1-\theta (2)) + \theta(2)$,
i.e.,
\begin{equation*}
s_{2q+1} 2^{2q} (2^{2p} - 1) \ge s_{2p+1} 2^{2p} (2^{2q} -1) + 2^{2p} - 2^{2q}.
\label{sest3}
\end{equation*}
Therefore \eq{betaeq} follows directly from Lemma~\ref{decrease0} as it is equivalent for $p > q \ge 1$ to
\begin{equation*}
\frac{2^{2q} s_{2q+1} - 1}{2^{2q} - 1} \ge \frac{2^{2p} s_{2p+1} - 1}{2^{2p} - 1} \, .
\label{separat}
\end{equation*}
Hence we proved $R(\gamma) < 0$ for all $\gamma \in [-1/4,0]$. On the other hand $R(\gamma)$ is an even order polynomial with a positive leading coefficient, i.e., $R(\gamma) \rightarrow \infty$ as $\gamma \rightarrow \infty$. Therefore there exists $\gamma_0 > 0$ such that $R(\gamma_0) = 0$. Such a root corresponds by \eq{mueq} to a real value of $\mu$. Therefore in this regime there is a collision of two eigenvalues of the same signature.
\vspace{\baselineskip}
\noindent
{\bf Part (iii).}
Note that $R(0) >0$. We show that $R(\gamma) >0$ for $\gamma \ge -1/4$.
First,
$$
R( -1/4) =
\beta \left( \frac{(\triangle n)^{2p}}{2^{2p}} - 1\right) - \left(\frac{(\triangle n)^{2q}}{2^{2q}} - 1\right) >
\beta_{-1/4} \left( \frac{(\triangle n)^{2p}}{2^{2p}} - 1\right) - \left(\frac{(\triangle n)^{2q}}{2^{2q}} - 1\right) = 0\, .
$$
For $\gamma \ge -1/4$,
\begin{eqnarray}
R(\gamma) &= &
\beta \left[ (\triangle n)^{2p}s_{2p+1}(\gamma) - 1\right] - \left[ (\triangle n)^{2q} s_{2q+1}(\gamma) - 1\right] \nonumber \\
& > &
\beta_{-1/4} \left[ (\triangle n)^{2p}s_{2p+1}(\gamma) - 1\right] - \left[ (\triangle n)^{2q} s_{2q+1}(\gamma) - 1\right]\, ,
\label{eq40}
\end{eqnarray}
since, by Lemma~\ref{slemma}, $(\triangle n)^{2p}s_{2p+1}(\gamma) \ge 1$.
We prove that
\begin{equation}
\frac{(\triangle n/2)^q-1}{(\triangle n/2)^p-1}
\ge
\frac{(\triangle n)^{q} s_{q+1}(\gamma) - 1}{ (\triangle n)^{p}s_{p+1}(\gamma) - 1}\, ,
\label{part3a}
\end{equation}
for any $p > q$. Then \eq{part3a}, applied with $p \rightarrow 2p$ and $q \rightarrow 2q$, shows that the right-hand side of \eq{eq40} is non-negative, and hence $R(\gamma) > 0$ for $\gamma \ge -1/4$.
Denote $m = \triangle n/2 \ge 1$ and $u_j = 2^j s_{j+1}$ for $j \ge 0$ to rewrite \eq{part3a} as
\begin{equation}
u_q \le u_p (1-\omega(m)) + \omega(m)\, , \qquad \mbox{where} \quad
\omega(m) = \frac{m^p-m^q}{m^{p+q}-m^q}.
\label{omegaeq}
\end{equation}
By Lemma~\ref{thetalemma}, the function $\omega(m)\in (0,1)$ is non-increasing for $m \ge 1$. Also, by Lemma~\ref{slemma},
$u_p = 2^p s_{p+1} \ge 1$, and \eq{omegaeq} follows from
$u_q \le u_p [1-\omega(1)] + \omega(1)$, where $\omega(1) = (p-q)/p$. The latter inequality reduces to
$ (u_q - 1)/q \le (u_p-1)/p$, for $p > q \ge 1$. In terms of $s_q(\gamma)$ this is equivalent to
\begin{equation*}
\frac{2^q s_{q+1}(\gamma) - 1}{q} \le \frac{2^p s_{p+1}(\gamma) -1}{p}, \qquad \mbox{for $p > q \ge 1$},
\label{umono2}
\end{equation*}
which follows for $\gamma \ge -1/4$ from Lemma~\ref{decrease4}, since monotonicity of the positive sequence
$\displaystyle\frac{2^ms_{m +1}-1}{m(m+1)}$ directly implies monotonicity of the sequence $\displaystyle\frac{2^ms_{m +1}-1}{m}$.
Thus $R(\gamma) > 0$ for all $\gamma \ge -1/4$, so $R(\gamma)$ has no roots in $[-1/4, \infty)$ and there are no collisions of eigenvalues in this regime.
\end{proof}
For $\triangle n=1$, we use a similar argument. In this case, $\gamma = 0$ gives
$R(0) = 0$. Hence $\gamma = 0$ is always a root of $R(\gamma) = 0$, corresponding to the relation\footnote{These eigenvalues are present due to symmetries; they do not leave the imaginary axis.} $
\Omega (1) = 0 = \Omega(0)
$.
For $p > q > 0$, denote
\begin{equation}
\beta_0^{(\triangle n=1)} = \frac{2q+1}{2p+1}\, ,
\qquad \mbox{and} \qquad
\beta_{-1/4}^{(\triangle n=1)} = \frac{1-2^{-2q}}{1 - 2^{-2p}}.
\label{betan1}
\end{equation}
\begin{theorem}
{\bf Case $\mathbf{\triangle n=1}$.}
Let $p, q$ be positive integers with $p>q$.
For the linearized problem \eq{eqlinear} at zero amplitude with $c = c_1$, the
presence and the character of eigenvalue collisions depend on $\beta$ as follows:
\begin{itemize}
\item[(i)]
for $\beta < \beta^{(\triangle n=1)}_0$, eigenvalues of the same signature collide, i.e., there is a root of \eq{eq10} with $\gamma > 0$ and there is no root with $\gamma \in [-1/4,0)$;
\item[(ii)]
for $\beta^{(\triangle n=1)}_0 < \beta < \beta^{(\triangle n=1)}_{-1/4}$, eigenvalues of opposite signature collide, i.e., there is a root $\gamma$ of \eq{eq10} so that $\gamma \in [-1/4, 0)$;
\item[(iii)]
for $\beta_{-1/4}^{(\triangle n=1)} < \beta$, eigenvalues do not collide, i.e., $\gamma < -1/4$ for all non-zero roots $\gamma$ of \eq{eq10}.
\end{itemize}
\label{maintheorem1}
\end{theorem}
\begin{proof}
First, we show that $\beta_0^{(\triangle n=1)}< \beta_{-1/4}^{(\triangle n=1)}$, which follows from the function $f(y)=(1-2^{-y})/(1+y)$ being decreasing for $y>2$. Its derivative has the numerator $(1+y)2^{-y}\ln 2+2^{-y}-1$, which is negative at $y=2$, and itself has a derivative that is negative for $y>2$.
Next, for $\beta \le \beta_{-1/4}^{(\triangle n=1)}$,
\begin{eqnarray}
R(-1/4)& = & \beta \left(s_{2p+1}(-1/4)-1\right) -\left( s_{2q+1}(-1/4)-1\right)=
\beta(2^{-2p}-1) - (2^{-2q} - 1) \nonumber \\
& \ge & \beta_{-1/4}^{(\triangle n=1)} (2^{-2p}-1) - (2^{-2q} - 1) = 0\, ,
\label{eq90}
\end{eqnarray}
where equality holds only for $\beta = \beta_{-1/4}^{(\triangle n=1)}$.
On the other hand, if $\beta > \beta_{-1/4}^{(\triangle n=1)}$ then $R(-1/4) < 0$.
Further, for $\gamma = 0$ and all values of $\beta$, $R(0) = 0$. Finally,
$$
R'(0) = \beta (2p+1) - (2q+1).
$$
Therefore, for $\beta < \beta_0^{(\triangle n=1)}$,
\begin{equation}
R(0) = 0, \qquad R'(0) < 0,
\label{eq91}
\end{equation}
and, for $\beta > \beta_0^{(\triangle n=1)}$,
\begin{equation*}
R(0) = 0, \qquad R'(0) > 0.
\label{eq92}
\end{equation*}
Note that $R(0) = R'(0) = 0$ for $\beta = \beta^{(\triangle n=1)}_0$.
\noindent
{\bf Part (i).}
By \eq{eq90} one has $R(-1/4) >0$, and by \eq{eq91} $R(0) = 0$ and $R'(0) < 0$. We prove below that $R(\gamma) >0$ for all $\gamma \in [-1/4,0)$,
so that $R = R(\gamma)$ does not have any roots in $(-1/4,0)$.
Moreover, $R(\gamma)$ is a polynomial with a positive leading coefficient, so $R(\gamma) \rightarrow \infty$ as $\gamma \rightarrow \infty$, while $R(0) =0$ and $R'(0) < 0$.
Therefore $R$ has a positive root.
Assume $\gamma \in [-1/4,0)$ and $\beta < \beta^{(\triangle n=1)}_0$. Then, using \eq{sest2a},
$$
R(\gamma) = \beta (s_{2p+1}(\gamma) -1) - (s_{2q+1}(\gamma)-1) > \beta^{(\triangle n=1)}_0 (s_{2p+1}(\gamma) -1) - (s_{2q+1}(\gamma)-1)\, .
$$
To establish $R(\gamma) > 0$ it is enough to prove
\begin{equation}
\beta^{(\triangle n=1)}_{0} \le \frac{s_{2q+1}(\gamma)-1}{s _{2p+1}(\gamma) - 1}, \qquad
\mbox{for $\gamma \in [-1/4,0)$.}
\label{g5}
\end{equation}
By Lemma~\ref{slemma} one has $s_m(\gamma) < 1$ for $m\ge 2$, $\gamma \in [-1/4,0)$. Hence \eq{g5} can be rewritten as
\begin{equation*}
\frac{s _{2p+1}(\gamma) - 1}{2p+1} \ge \frac{s_{2q+1}(\gamma)-1}{2q+1},
\label{g6}
\end{equation*}
which follows for $p > q > 0$ and $\gamma \in [-1/4, 0)$ from Lemma~\ref{decrease2}. Therefore $R(\gamma) > 0$ for $\gamma \in [-1/4,0)$.
\noindent
{\bf Part (ii).}
By \eq{eq90} one has $R(-1/4) >0$, while $R(0) = 0$ and $R'(0) > 0$ since $\beta > \beta_0^{(\triangle n=1)}$. Hence $R$ is negative immediately to the left of $0$, and therefore there exists a $\gamma \in (-1/4,0)$ such that $R(\gamma) = 0$.
\noindent
{\bf Part (iii).}
In this case $R(-1/4) < 0$, while $R(0) = 0$ and $R'(0) > 0$ since $\beta > \beta_0^{(\triangle n=1)}$. We prove that $R(\gamma) < 0$ for $\gamma\in [-1/4,0)$ and $R(\gamma) > 0$ for $\gamma >0$. Therefore $R(\gamma)$ does not have a non-zero root for $\gamma \ge -1/4$.
First assume that $\gamma \in [-1/4,0)$. Then $\beta > \beta_{-1/4}^{(\triangle n=1)}$ implies, using \eq{sest2a},
$$
R(\gamma) =
\beta (s_{2p+1}(\gamma) - 1) - (s_{2q+1}(\gamma) - 1) < \beta_{-1/4}^{(\triangle n=1)} (s_{2p+1}(\gamma) - 1) - (s_{2q+1}(\gamma) - 1)\, .
$$
It suffices to prove
\begin{equation}
\beta^{(\triangle n=1)}_{-1/4} \ge \frac{s_{2q+1}(\gamma)-1}{s _{2p+1}(\gamma) - 1}, \qquad
\mbox{for $\gamma \in [-1/4,0)$,}
\label{g1}
\end{equation}
to establish $R(\gamma) < 0$.
The inequality \eq{g1} is rewritten as
\begin{equation*}
\frac{s _{2p+1}(\gamma) - 1}{2^{-2p}-1} \ge \frac{s_{2q+1}(\gamma)-1}{2^{-2q}-1},
\label{g2}
\end{equation*}
which follows from Lemma~\ref{increase2}. Thus $R(\gamma) < 0$ for $\gamma \in [-1/4,0)$.
Next, we assume $\gamma >0$. With $\beta > \beta_{-1/4}^{(\triangle n=1)}$ and using \eq{sest3a},
$$
R(\gamma) =
\beta (s_{2p+1}(\gamma) - 1) - (s_{2q+1}(\gamma) - 1) > \beta_{-1/4}^{(\triangle n=1)} (s_{2p+1}(\gamma) - 1) - (s_{2q+1}(\gamma) - 1)\, .
$$
It suffices to prove
\begin{equation*}
\frac{s _{2p+1}(\gamma) - 1}{2^{-2p}-1} \le \frac{s_{2q+1}(\gamma)-1}{2^{-2q}-1},
\label{g2h}
\end{equation*}
which follows from Lemma~\ref{increase2}. Thus $R(\gamma) > 0$ for $\gamma > 0$.
\end{proof}
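For a concrete illustration of the three regimes (not part of the proof), take $p = 2$, $q = 1$, so that $\beta_0^{(\triangle n=1)} = 3/5$ and $\beta_{-1/4}^{(\triangle n=1)} = 4/5$. With the recurrence \eq{rec:s} ($s_{m+1} = s_m + \gamma s_{m-1}$, $s_0 = 2$, $s_1 = 1$) one finds $R(\gamma) = \gamma\,[5\beta(1+\gamma) - 3]$, so the non-zero root can be located explicitly:

```python
def s(m, g):                      # s_{m+1} = s_m + g*s_{m-1}, s_0 = 2, s_1 = 1
    a, b = 2.0, 1.0
    for _ in range(m):
        a, b = b, b + g * a
    return a

def R(g, beta):                   # dn = 1, p = 2, q = 1
    return beta * (s(5, g) - 1) - (s(3, g) - 1)

# R(g) = g*(5*beta*(1 + g) - 3), so the non-zero root is g* = 3/(5*beta) - 1
root = lambda beta: 3 / (5 * beta) - 1
for beta in (0.5, 0.7, 0.9):
    assert abs(R(root(beta), beta)) < 1e-12

print(root(0.5) > 0)              # True: same-signature collision (beta < 3/5)
print(-0.25 < root(0.7) < 0)      # True: opposite-signature collision
print(root(0.9) < -0.25)          # True: no collision (gamma = 0 is the symmetry root)
```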
It is easy to see that for $\triangle n= 2$, $R(-1/4)= 0$.
Thus $\gamma = -1/4$ is a root of $R(\gamma) = 0$ for all $\beta$. It corresponds to the fact that
$\Omega(-1) = 0 = \Omega(1)$, i.e., there is a collision of two eigenvalues of opposite Krein signature at the origin for all $\beta$.
This collision is due to the symmetries of the problem and these eigenvalues do not leave the imaginary axis in the weakly nonlinear regime. Thus this
collision does not affect stability. We focus on the remaining roots of $R(\gamma) = 0$.
We denote
\begin{equation}
\beta_{0}^{(\triangle n=2)} = \frac{2^{2q}-1}{2^{2p}-1}\, ,
\qquad \mbox{and} \qquad
\beta_{-1/4}^{(\triangle n=2)} = \frac{(2q+1)2q}{(2p+1)2p}\, .
\label{betan2}
\end{equation}
The inequality $\beta_0^{(\triangle n=2)}<\beta_{-1/4}^{(\triangle n=2)}$ follows similarly to $\beta_0^{(\triangle n=1)}<\beta_{-1/4}^{(\triangle n=1)}$, in the proof of the previous theorem.
\begin{theorem}
{\bf Case $\mathbf{\triangle n=2}$.}
Let $p, q$, $p > q$, be positive integers.
For the linearized problem \eq{eqlinear} at zero amplitude at $c = c_1$,
the presence and the character of collisions of eigenvalues depend on $\beta$ in the following way:
\begin{itemize}
\item[(i)]
for $\beta < \beta^{(\triangle n=2)}_0$, eigenvalues of the same signature collide, i.e. there is a root of \eq{eq10} with $\gamma > 0$ and there is no root with $\gamma \in (-1/4,0)$;
\item[(ii)]
for $\beta^{(\triangle n=2)}_0 < \beta < \beta^{(\triangle n=2)}_{-1/4}$, eigenvalues of the opposite signature collide, i.e. there is a root $\gamma$ of \eq{eq10} such that $\gamma \in (-1/4, 0)$;
\item[(iii)]
for $\beta_{-1/4}^{(\triangle n=2)} < \beta$, eigenvalues do not collide, i.e. all roots $\gamma$ of \eq{eq10} satisfy $\gamma \le -1/4$.
\end{itemize}
\label{maintheorem2}
\end{theorem}
\begin{proof}
{\bf Part (i).}
We prove that $R(\gamma) < 0$ for $\gamma \in (-1/4,0]$. First, $R(\gamma)$ is a polynomial with a positive leading coefficient, so $R(\gamma) \rightarrow \infty$ as $\gamma \rightarrow \infty$, while $R(0) < 0$ by the estimate below. Thus $R$ has a root $\gamma > 0$.
Assume $\gamma \in (-1/4,0]$ and $\beta < \beta^{(\triangle n=2)}_0$. Then
\begin{eqnarray*}
R(\gamma) &= & \beta (2^{2p}s_{2p+1}(\gamma) -1) - (2^{2q}s_{2q+1}(\gamma)-1)\\
& <& \beta^{(\triangle n=2)}_0 (2^{2p}s_{2p+1}(\gamma) -1) - (2^{2q}s_{2q+1}(\gamma)-1)\, .
\end{eqnarray*}
To establish $R(\gamma) < 0$ it suffices to prove
\begin{equation*}
\beta^{(\triangle n=2)}_{0} \le \frac{2^{2q}s_{2q+1}(\gamma)-1}{2^{2p} s _{2p+1}(\gamma) - 1}, \qquad
\mbox{for $\gamma \in (-1/4,0]$.}
\label{g7}
\end{equation*}
This inequality is rewritten as
\begin{equation*}
\frac{2^{2p}s _{2p+1}(\gamma) - 1}{2^{2p}-1} \le \frac{2^{2q}s_{2q+1}(\gamma)-1}{2^{2q}-1},
\label{g8}
\end{equation*}
which follows from Lemma~\ref{decrease0}. Therefore $R(\gamma) < 0$ for $\gamma \in (-1/4,0]$.
\noindent
{\bf Part (ii).}
First,
\begin{eqnarray*}
R(0) &= & \beta (2^{2p} s_{2p+1}(0) - 1) - (2^{2q}s_{2q+1}(0) - 1)
= \beta (2^{2p}-1) - (2^{2q}-1) \\
& >& \beta_0^{(\triangle n=2)} (2^{2p} - 1) - (2^{2q} - 1) = 0.
\end{eqnarray*}
Next we show that $\lim_{\gamma \rightarrow -1/4^+} R'(\gamma) < 0$.
Indeed, for $\gamma > -1/4$, we have
\begin{eqnarray*}
R'(\gamma) &= & \beta \frac{2p+1}{\sqrt{1+4\gamma}} 2^{2p} (\psi_+^{2p} - \psi_-^{2p}) - \frac{2q+1}{\sqrt{1+4\gamma}} 2^{2q} (\psi_+^{2q} - \psi_-^{2q})\\
& <& \beta_{-1/4}^{(\triangle n=2)} \frac{2p+1}{\sqrt{1+4\gamma}} \, 2^{2p} (\psi_+^{2p} - \psi_-^{2p}) - \frac{2q+1}{\sqrt{1+4\gamma}} 2^{2q} (\psi_+^{2q} - \psi_-^{2q})
\end{eqnarray*}
as $\psi_+^2 > \psi_-^2 \ge 0$. The result follows from l'Hopital's rule, since
\begin{eqnarray*}
\lim_{\gamma \rightarrow -1/4^+}&&\!\!\!\!
\frac{(2q+1) 2^{2q} (\psi_+^{2q}(\gamma) - \psi_-^{2q}(\gamma))}{(2p+1)2^{2p} (\psi_+^{2p}(\gamma) - \psi_-^{2p}(\gamma))}\\
&&~~~~=\lim_{\gamma \rightarrow -1/4^+}
\frac{2q(2q+1) 2^{2q}\frac{1}{\sqrt{1+4\gamma}} (\psi_+^{2q-1}(\gamma) + \psi_-^{2q-1}(\gamma))}
{2p(2p+1) 2^{2p} \frac{1}{\sqrt{1+4\gamma}} (\psi_+^{2p-1}(\gamma) + \psi_-^{2p-1}(\gamma))} \\
&&~~~~= \lim_{\gamma \rightarrow -1/4^+}
\frac{2q(2q+1) 2^{2q} s_{2q-1}(\gamma)}
{2p(2p+1) 2^{2p} s_{2p-1}(\gamma)} \\
&&~~~~= \frac{2q(2q+1) 2^{2q} 2^{-(2q-2)}}
{2p(2p+1) 2^{2p} 2^{-(2p-2)}} = \frac{2q(2q+1)}{2p(2p+1)}=\beta_{-1/4}^{(\triangle n=2)}.
\end{eqnarray*}
Thus, since $R(-1/4) = 0$, one has $R(\gamma)< 0$ for $\gamma \in (-1/4,-1/4+\varepsilon)$ for some small $\varepsilon > 0$. Since $R(0) > 0$, there exists $\gamma \in (-1/4,0)$ so that $R(\gamma) = 0$.
\noindent
{\bf Part (iii).}
We show that $R(\gamma) > 0$ for $\gamma > -1/4$. One has
\begin{eqnarray*}
R(\gamma) &= & \beta (2^{2p} s_{2p+1}(\gamma) - 1) - (2^{2q}s_{2q+1} (\gamma) - 1)\\
&>& \beta_{-1/4}^{(\triangle n=2)} (2^{2p} s_{2p+1}(\gamma) - 1) - (2^{2q}s_{2q+1} (\gamma) - 1)\, .
\end{eqnarray*}
We show that
\begin{equation*}
\beta_{-1/4}^{(\triangle n=2)} = \frac{2q(2q+1)}{2p(2p+1)} \ge \frac{2^{2q}s_{2q+1} (\gamma) - 1}{2^{2p} s_{2p+1}(\gamma) - 1},
\label{h1}
\end{equation*}
which is equivalent to
$$
\frac{2^{2p}s_{2p+1}(\gamma) - 1}{2p(2p+1)} \ge \frac{2^{2q} s_{2q+1} (\gamma) - 1}{2q(2q+1)}.
$$
This inequality follows from Lemma~\ref{decrease4}. Therefore $R(\gamma)$ has no roots $\gamma > -1/4$ for $\beta > \beta_{-1/4}^{(\triangle n=2)}$.
\end{proof}
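Again as a numerical illustration only: for $p = 2$, $q = 1$ one has $\beta_0^{(\triangle n=2)} = 1/5$ and $\beta_{-1/4}^{(\triangle n=2)} = 3/10$, and, with the recurrence \eq{rec:s} as above, $R(\gamma) = (\gamma + 1/4)\,[80\beta(\gamma + 3/4) - 12]$, which exhibits the permanent root at $\gamma = -1/4$ together with one further root:

```python
def s(m, g):                      # s_{m+1} = s_m + g*s_{m-1}, s_0 = 2, s_1 = 1
    a, b = 2.0, 1.0
    for _ in range(m):
        a, b = b, b + g * a
    return a

def R(g, beta):                   # dn = 2, p = 2, q = 1
    return beta * (16 * s(5, g) - 1) - (4 * s(3, g) - 1)

# gamma = -1/4 is a root for every beta (the symmetry-protected collision);
# the remaining root of R(g) = (g + 1/4)*(80*beta*(g + 3/4) - 12) is:
root = lambda beta: 3 / (20 * beta) - 3 / 4
for beta in (0.15, 0.25, 0.4):
    assert abs(R(-0.25, beta)) < 1e-12 and abs(R(root(beta), beta)) < 1e-12

print(root(0.15) > 0)             # True: beta < beta_0 = 1/5
print(-0.25 < root(0.25) < 0)     # True: 1/5 < beta < 3/10
print(root(0.4) <= -0.25)         # True: beta > beta_{-1/4} = 3/10
```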
\section*{Appendix}
\begin{lemma}
Let $\alpha > 0$. The function
$$
g(x) = \frac{x\alpha^x}{\alpha^x - 1}
$$
is increasing on $(0,\infty)$.
\label{simple}
\end{lemma}
\begin{proof}
The condition $g'(x) > 0$ is equivalent to
$\alpha^x = e^{x \ln \alpha} > 1 + x \ln \alpha$.
This follows directly from the strict inequality $e^t > 1 + t$ for $t \neq 0$.
\end{proof}
\begin{lemma}
Let $a > b > 0$. Define
$$
f (n) = f_{a,b}(n) = \frac{n^{a-b} - 1}{n^a-1}.
$$
We define $f(1) = \lim_{n \rightarrow 1} f(n) = (a-b)/a$.
Then $f(n)$ is a decreasing function on $[1,\infty)$.
\label{thetalemma}
\end{lemma}
\begin{proof}
The inequality $f'(n) < 0$ is equivalent to $a(n^b - 1) < b (n^a- 1)$, i.e.,
\begin{equation}
\frac{a}{b} < \frac{n^a-1}{n^b-1}\, .
\label{abeq}
\end{equation}
The estimate \eq{abeq} for $n > 1$ follows from the fact that the function
$$
h(n) = \frac{n^a - 1}{n^b-1}, \qquad a > b > 0,
$$
is increasing on $[1,\infty)$, where
$h(1) = \lim_{n\rightarrow 1} h(n) = a/b$.
The inequality $h'(n) > 0$ reduces to
$$
\frac{an^a}{n^a-1} > \frac{b n^b}{n^b - 1},
$$
which holds for $a > b > 0$ and $n > 1$ by Lemma~\ref{simple}.
Lemma~\ref{thetalemma} follows by continuity of $h(n)$ at $n=1$.
\end{proof}
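A quick numerical spot check of the lemma for one sample pair of exponents (illustration only; the exponents $a = 4$, $b = 2$ are an arbitrary choice):

```python
# f_{a,b}(n) = (n^(a-b) - 1)/(n^a - 1) with f(1) = (a-b)/a, sampled on [1, 20.9]
def f(n, a, b):
    return (a - b) / a if n == 1 else (n**(a - b) - 1) / (n**a - 1)

a, b = 4.0, 2.0
vals = [f(1 + 0.1 * k, a, b) for k in range(200)]
print(all(x > y for x, y in zip(vals, vals[1:])))   # True: strictly decreasing
print(f(1, a, b))                                   # 0.5 = (a - b)/a
```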
\begin{lemma}
Let $s_m(\gamma)$ be as above. Then
\begin{eqnarray}
s_m(\gamma) & \ge & 2^{-(m-1)}, \qquad \mbox{for all $\gamma \ge -1/4$ and $m \ge 0$,}
\label{sest1} \\
s_m(\gamma) & < & 1, \qquad \quad \quad \ \ \, \mbox{for all $\gamma \in [-1/4,0)$ and $m \ge 2$,}
\label{sest2a} \\
s_m (\gamma) & > & 1, \qquad \quad \quad\ \ \, \mbox{for all $\gamma >0$ and $m \ge 2$.}
\label{sest3a}
\end{eqnarray}
\label{slemma}
\end{lemma}
\vspace*{-0.3in}
\begin{proof}
\sloppypar First, for $\gamma\geq -1/4$,
$s_m(\gamma)$ is an increasing function of $\gamma$ since
$s_m'(\gamma) = (m/\sqrt{1+4\gamma}) \left( \psi_+^{m-1}(\gamma) - \psi_-^{m-1} (\gamma)\right) > 0$.
The inequality \eq{sest1} follows from this and $s_m(-1/4)=2^{1-m}$.
Equation \eq{sest2a} follows from the fact that $\psi_{\pm} \in (0,1)$ for $\gamma \in [-1/4, 0)$. Hence $s_{m+1}(\gamma) < s_m(\gamma)$ for all $m \ge 0$. Then $s_1(\gamma) = 1$ yields the claim.
Finally, we prove \eq{sest3a}. For $m=2$ and $m=3$, $s_2(\gamma) = 1 + 2\gamma > 1$, and $s_3(\gamma) = 1 + 3 \gamma > 1$ for $\gamma >0$. Then \eq{sest3a} follows directly from \eq{rec:s}.
\end{proof}
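The three estimates of the lemma can be spot-checked numerically on a grid (an illustration, not a proof), again using the recurrence \eq{rec:s} with $s_0 = 2$, $s_1 = 1$:

```python
def s(m, g):                      # s_{m+1} = s_m + g*s_{m-1}, s_0 = 2, s_1 = 1
    a, b = 2.0, 1.0
    for _ in range(m):
        a, b = b, b + g * a
    return a

gs = [k / 100 - 0.25 for k in range(151)]           # gamma in [-1/4, 1.25]
ok1 = all(s(m, g) >= 2.0**(1 - m) - 1e-12 for g in gs for m in range(13))
ok2 = all(s(m, g) < 1 for g in gs if g < 0 for m in range(2, 13))
ok3 = all(s(m, g) > 1 for g in gs if g > 0 for m in range(2, 13))
print(ok1, ok2, ok3)                                 # True True True
```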
\begin{lemma}
For all $m \ge 0$ and $\gamma \ge -1/4$,
\begin{eqnarray}
s_{m+2}(\gamma) & \ge& -\gamma s_m(\gamma),
\label{db} \\
s_{m+1}(\gamma) &\ge& s_{m}(\gamma)/2,
\label{db1} \\
s_{m+1}(\gamma) &\le& \left[1+m(1+4\gamma)\right]s_m (\gamma)/2.
\label{TP1}
\end{eqnarray}
\label{doubling}
\end{lemma}
\vspace*{-0.3in}
\begin{proof}
The inequality \eq{db} is equivalent to
$s_{m+2} - s_{m+1} + (s_{m+1} + \gamma s_m) \ge 0$. Using the recurrence relation \eq{rec:s}, it reduces to
$2s_{m+2} - s_{m+1}\ge 0$, i.e., $2s_{m+2} \ge s_{m+1}$, $m \ge 0$.
Thus \eq{db} and \eq{db1} are equivalent except for \eq{db1} with $m = 0$, which is trivially satisfied ($2s_1 = 2 = s_0$).
Also note that $s_m(\gamma) \ge 0$ for $m \ge 0$ and $\gamma \ge 0$ and \eq{db} is satisfied for $\gamma \ge 0$. In the rest of the proof of \eq{db}, we assume that $m \ge 1$ and $\gamma \in [-1/4,0)$.
We shift $m \rightarrow m+1$ in \eq{db1}, $m \ge 0$, which becomes
\begin{equation}
\left(\psi_+ - \frac{1}{2}\right) \psi_+^{m+1} +\left(\psi_- - \frac{1}{2}\right)\psi_-^{m+1} \ge 0 \, .
\label{phi2}
\end{equation}
Since $\psi_- = 1-\psi_+$ for $\gamma \in [-1/4,0)$, \eq{phi2} is equivalent to
$$
\left(\psi_+ - \frac{1}{2}\right) \left[ \psi_+^{m+1} - \psi_-^{m+1} \right] \ge 0 \, ,
$$
which is satisfied for $\gamma \in [-1/4,0)$ since $\psi_+\geq 1/2$ and $\psi_+>\psi_-$. This proves \eq{db1} and \eq{db}.
We turn to \eq{TP1}.
Note that \eq{TP1} holds for $m=0$.
For $m \ge 1$, first we consider $\gamma \ge 0$. Using \eq{rec:s},
$$
2(s_m + \gamma s_{m-1}) \le \left[m(1+4\gamma) + 1\right] s_m,
$$
i.e.,
\begin{equation}
2\gamma s_{m-1} \le \left[m(1+4\gamma) - 1\right] s_m = (m-1) s_m + 4m\gamma s_m.
\label{TP2}
\end{equation}
But $m\ge 1$ and $s_m \ge 0$. Therefore $(m-1)s_m \ge 0$ and \eq{TP2} follows from $2\gamma s_{m-1} \le 4m \gamma s_m$, i.e., $s_{m} \ge s_{m-1}/2m$, which holds, according to \eq{db1}.
Next, consider $\gamma \in [-1/4,0)$. We write \eq{TP1} as
$2s_{m+1} - s_m \le m (1+4\gamma) s_m$,
and use \eq{s:exp} to obtain
\begin{equation*}
\psi_+^m \left(\psi_+ - \frac{1}{2}\right) + \psi_-^m \left( \psi_--\frac{1}{2}\right) \le \frac{m (1+4\gamma)}{2} (\psi_+^m + \psi_-^m)\, .
\label{j1}
\end{equation*}
Using $\psi_+ + \psi_- = 1$,
$$
\left(\psi_+ - \frac{1}{2}\right) (\psi_+^m - \psi_-^m) \le \frac{m (1+4\gamma)}{2} (\psi_+^m + \psi_-^m)\, .
$$
Since
$$
\psi_+ - \frac{1}{2} = \frac{\sqrt{1+4\gamma}}{2},
$$
inequality \eq{TP1} is equivalent to
$$
(\psi_+^m - \psi_-^m) \le m \sqrt{1+4\gamma}(\psi_+^m + \psi_-^m),
$$
or
\begin{equation}
\psi_+^m \left( 1 - m \sqrt{1+4\gamma}\right) \leq \psi_-^m \left( 1 + m \sqrt{1+4\gamma}\right).
\label{j2}
\end{equation}
Both $\psi_+$ and $1 + m\sqrt{1+4\gamma}$ are positive, and
$$
\frac{\psi_-}{\psi_+} = \frac{1-\sqrt{1+4\gamma}}{1+\sqrt{1+4\gamma}} = \frac{1+2\gamma - \sqrt{1+4\gamma}}{-2\gamma}.
$$
It follows that proving
\eq{j2} is equivalent to proving
\begin{equation}
\frac{1-m\sqrt{1+4\gamma}}{1 +m\sqrt{1+4\gamma}} \le \left( \frac{1+2\gamma - \sqrt{1+4\gamma}}{-2\gamma}\right)^m \, .
\label{j3}
\end{equation}
We prove \eq{j3} by induction for $m \ge 0$. For $m = 0$, \eq{j3} is trivially satisfied. Assume that \eq{j3} holds for $m$. Using this, we have to show that \eq{j3} holds for $m+1$. This amounts to showing that
\begin{equation}
\frac{1-m\sqrt{1+4\gamma}}{1 +m\sqrt{1+4\gamma}} \, \frac{1+2\gamma - \sqrt{1+4\gamma}}{-2\gamma}\ge \frac{1-(m+1)\sqrt{1+4\gamma}}
{1 +(m+1)\sqrt{1+4\gamma}}.
\label{j4}
\end{equation}
Multiplying \eq{j4} by all (positive) denominators and simplifying yields an inequality that holds for all $\gamma \in [-1/4,0)$:
\begin{equation*}
m(m+1)(1+4\gamma)^{3/2}\left( 1 - \sqrt{1+4\gamma}\right) \ge 0.
\end{equation*}
\end{proof}
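Grid-based spot check of \eq{db}, \eq{db1} and \eq{TP1} (illustration only; the equality cases at $\gamma = -1/4$ are absorbed by the tolerance):

```python
def s(m, g):                      # s_{m+1} = s_m + g*s_{m-1}, s_0 = 2, s_1 = 1
    a, b = 2.0, 1.0
    for _ in range(m):
        a, b = b, b + g * a
    return a

eps = 1e-12
gs = [k / 100 - 0.25 for k in range(151)]           # gamma in [-1/4, 1.25]
for g in gs:
    for m in range(12):
        assert s(m + 2, g) >= -g * s(m, g) - eps                          # (db)
        assert s(m + 1, g) >= s(m, g) / 2 - eps                           # (db1)
        assert 2 * s(m + 1, g) <= (1 + m * (1 + 4 * g)) * s(m, g) + eps   # (TP1)
print("all three inequalities hold on the grid")
```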
\begin{lemma}
For all $m\geq 2$,
\begin{eqnarray}
-\gamma (2^m - 1)s_{m-1}(\gamma) + s_{m+1} (\gamma) &\ge& 1\, ,
\
\mbox{for $\gamma\in [-1/4,0]$.}
\label{gamest} \\
-\gamma (2^m - 1)s_{m-1}(\gamma) + s_{m+1} (\gamma) &\le& 1\, ,
\
\mbox{for $\gamma \ge 0$.}
\label{gamest2}
\end{eqnarray}
\label{gamlemma}
\end{lemma}
\begin{proof}
We prove \eq{gamest} using induction. For $m=2$ and $m=3$,
\begin{eqnarray*}
-\gamma (2^2 - 1) s_1(\gamma) + s_3(\gamma) = -3\gamma + 1 + 3\gamma & =& 1, \\
-\gamma(2^3 - 1)s_2(\gamma) + s_4(\gamma) = 1 - 3\gamma(1+4\gamma) & \ge & 1.
\end{eqnarray*}
Assume \eq{gamest} holds for some $m\ge 3$, i.e.,
\begin{equation}
-\gamma(2^m -1)s_{m-1} + s_{m+1} \ge 1.
\label{ind1}
\end{equation}
By Lemma~\ref{doubling}, $s_{m} + \gamma s_{m-2} \ge 0$. Using \eq{rec:s} this becomes $s_{m-1} + 2 \gamma s_{m-2} \ge 0$.
After multiplication by $2^m - 1 >0$, we obtain the equivalent form
$$
(2^m - 1) s_{m-1} + 2 \gamma (2^{m}-1) s_{m-2} =
(2^m - 1) s_{m-1} + \gamma (2^{m+1}-2) s_{m-2} \ge 0,
$$
which, using \eq{rec:s}, is rewritten as
\begin{equation}
2^{m} s_{m-1} + \gamma (2^{m+1} - 1) s_{m-2} - s_{m} \ge 0.
\label{ind2}
\end{equation}
Multiplying \eq{ind2} by $-\gamma \ge 0$ and adding \eq{ind1} gives
$$
-\gamma(2^{m+1} -1) (s_{m-1} + \gamma s_{m-2}) + ( s_{m+1} + \gamma s_m) \ge 1,
$$
which is rewritten as
$$
-\gamma(2^{m+1} -1) s_m +s_{m+2}\ge 1\, .
$$
This concludes the induction step and the proof of \eq{gamest}.
Next we prove \eq{gamest2}. The statement is true for $m = 2$ and $m=3$:
$$
-\gamma (2^2 - 1) s_1(\gamma) + s_3(\gamma) = 1\, , \qquad \quad
-\gamma(2^3 - 1)s_2(\gamma) + s_4(\gamma) = 1 - 3\gamma - 12 \gamma^2 \le 1.
$$
Assume \eq{gamest2} holds for some $m\ge 3$, i.e.,
\begin{equation}
-\gamma(2^m -1)s_{m-1} + s_{m+1} \le 1 .
\label{ind10}
\end{equation}
By Lemma~\ref{doubling}, $s_{m} + \gamma s_{m-2} \ge 0$ or equivalently $s_{m-1} + 2 \gamma s_{m-2} \ge 0$, so that
$$
(2^m - 1) s_{m-1} + 2 \gamma (2^{m}-1) s_{m-2} =
(2^m - 1) s_{m-1} + \gamma (2^{m+1}-2) s_{m-2} \ge 0.
$$
This is rewritten as
$$
2^{m} s_{m-1} + \gamma (2^{m+1} - 1) s_{m-2} - s_{m} \ge 0.
$$
We reverse this inequality by multiplying it by $-\gamma \le 0$, and add \eq{ind10} to it to obtain
$$
-\gamma(2^m -1)s_{m-1} + s_{m+1}
-\gamma 2^m s_{m-1} - \gamma^2 (2^{m+1}-1) s_{m-2} + \gamma s_{m} \le 1,
$$
which reduces to
$$
-\gamma(2^{m+1} -1) s_m +s_{m+2}\le 1\, .
$$
This concludes the induction step and the proof of \eq{gamest2}.
\end{proof}
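Both bounds of the lemma can be checked numerically on a grid (illustration only), with the recurrence \eq{rec:s}, $s_0 = 2$, $s_1 = 1$:

```python
def s(m, g):                      # s_{m+1} = s_m + g*s_{m-1}, s_0 = 2, s_1 = 1
    a, b = 2.0, 1.0
    for _ in range(m):
        a, b = b, b + g * a
    return a

def E(m, g):                      # -gamma*(2^m - 1)*s_{m-1} + s_{m+1}
    return -g * (2**m - 1) * s(m - 1, g) + s(m + 1, g)

eps = 1e-9
ge1 = all(E(m, -k / 100) >= 1 - eps for m in range(2, 11) for k in range(26))
le1 = all(E(m, k / 100) <= 1 + eps for m in range(2, 11) for k in range(101))
print(ge1, le1)                   # True True
```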
\begin{lemma}
The sequence
$$
\frac{2^m s_{m+1} (\gamma) - 1}{2^m -1}, \qquad \qquad m \ge 1,
$$
is non-increasing in $m$ for $\gamma \in [-1/4,0]$.
\label{decrease0}
\end{lemma}
\begin{proof}
We prove that for $m \ge 1$,
\begin{equation*}
\frac{2^m s_{m+1} - 1}{2^m - 1} \ge
\frac{2^{m+1} s_{m+2} - 1}{2^{m+1} - 1} \, .
\label{mono}
\end{equation*}
Using the recurrence relation~\eqref{rec:s}, this is equivalent to
$$
s_{m+1} \ge \gamma (2^{m+1} - 2) s_{m} + 1 ~~\iff~~s_{m+2} - \gamma (2^{m+1}-1) s_{m} \ge 1,
$$
which follows directly from Lemma~\ref{gamlemma}.
\end{proof}
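A grid check of the monotonicity claim (illustration only; note that the first two terms of the sequence coincide, so the tolerance absorbs the equality cases):

```python
def s(m, g):                      # s_{m+1} = s_m + g*s_{m-1}, s_0 = 2, s_1 = 1
    a, b = 2.0, 1.0
    for _ in range(m):
        a, b = b, b + g * a
    return a

def t(m, g):
    return (2**m * s(m + 1, g) - 1) / (2**m - 1)

eps = 1e-12
ok = all(t(m, -k / 100) >= t(m + 1, -k / 100) - eps
         for m in range(1, 12) for k in range(26))
print(ok)                         # True: non-increasing on gamma in [-1/4, 0]
```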
\begin{lemma}
The sequence
$$
\frac{2^m s_{m+1} (\gamma) - 1}{m(m+1)}, \qquad \qquad m \ge 1,
$$
is nondecreasing in $m$ for $\gamma \ge -1/4$.
\label{decrease4}
\end{lemma}
\begin{proof}
We use induction to show that for $m \ge 1$
\begin{equation*}
\frac{2^m s_{m+1} - 1}{m(m+1)} \le
\frac{2^{m+1} s_{m+2} - 1}{(m+1)(m+2)},
\label{mono20}
\end{equation*}
or equivalently, for $m\ge 1$,
\begin{equation}
(m+2)2^{m} s_{m+1} \le m 2^{m+1} s_{m+2} + 2\, .
\label{indas0}
\end{equation}
The inequality \eq{indas0} holds for $m= 1$
as $6s_2 = 6(1+2\gamma) = 4 (1+3\gamma) + 2 = 4s_3+2$.
Using \eq{rec:s} to expand $s_{m+2}$ in \eq{indas0} we obtain
$$
(m+2)2^{m} s_{m+1} \le m 2^{m+1} (s_{m+1} + \gamma s_{m}) + 2,
$$
and \eq{indas0} is equivalent to
\begin{equation*}
2^m s_{m+1} - \gamma m 2^{m+1} s_m \le (m-1) 2^m s_{m+1} + 2\, .
\label{eqform0}
\end{equation*}
It suffices to prove that
\begin{equation}
2^m s_{m+1} - \gamma m 2^{m+1} s_m \le (m+1)2^{m-1} s_m \, ,
\label{aux55}
\end{equation}
since the induction assumption \eq{indas0} for $m \rightarrow m-1$ implies
$(m+1)2^{m-1} s_m \le (m-1) 2^m s_{m+1} + 2$.
But \eq{aux55} follows directly from \eq{TP1} of Lemma~\ref{doubling} as it is equivalent to
$2s_{m+1} \le \left[1 + m(1+4\gamma)\right] s_m$.
\end{proof}
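A grid check of this lemma as well (illustration only; the base case $m = 1$ holds with equality, e.g. $v(1) = v(2) = (1+4\gamma)/2$ in the notation below):

```python
def s(m, g):                      # s_{m+1} = s_m + g*s_{m-1}, s_0 = 2, s_1 = 1
    a, b = 2.0, 1.0
    for _ in range(m):
        a, b = b, b + g * a
    return a

def v(m, g):
    return (2**m * s(m + 1, g) - 1) / (m * (m + 1))

eps = 1e-12
ok = all(v(m, k / 100 - 0.25) <= v(m + 1, k / 100 - 0.25) + eps
         for m in range(1, 12) for k in range(151))
print(ok)                         # True: nondecreasing for gamma >= -1/4
```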
Finally, we prove two lemmas that provide bounds on the growth of the sequence $\left\{s_{m}(\gamma)-1\right\}$.
\begin{lemma}
The sequence
$$
(s_{m} (\gamma) - 1)/m, \qquad \qquad m \ge 3,
$$
is non-decreasing in $m$ for $\gamma \in [-1/4,0)$.
\label{decrease2}
\end{lemma}
\begin{proof}
The statement is equivalent to $(m+1) s_{m} \le m s_{m+1} +1$, which we prove by induction.
First, for $m = 3$ we have
$4s_3 < 3s_4 + 1$, i.e.,
$4(1+3\gamma) < 3 (1+4\gamma + 2 \gamma^2)+1$ which holds for $\gamma \neq 0$.
Assume that the statement holds for $m \rightarrow m-1$, i.e., $ms_{m-1} \le (m-1) s_{m} +1$, which is equivalent to $s_m \le m(s_m - s_{m-1})+1$. Thus $s_m \le m\gamma s_{m-2} + 1$. However, for $\gamma \in [-1/4,0)$ and $m \ge 2$ one has $0 < s_{m-1} < s_{m-2}$ and thus $s_m \le m \gamma s_{m-1} + 1$. The claim follows by an application of \eq{rec:s} to $s_{m-1}$.
\end{proof}
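Grid check of the lemma (illustration only), with the recurrence \eq{rec:s}, $s_0 = 2$, $s_1 = 1$:

```python
def s(m, g):                      # s_{m+1} = s_m + g*s_{m-1}, s_0 = 2, s_1 = 1
    a, b = 2.0, 1.0
    for _ in range(m):
        a, b = b, b + g * a
    return a

eps = 1e-12
ok = all((s(m, g) - 1) / m <= (s(m + 1, g) - 1) / (m + 1) + eps
         for m in range(3, 13) for g in [-k / 100 for k in range(1, 26)])
print(ok)                         # True: (s_m - 1)/m nondecreasing for gamma in [-1/4, 0)
```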
\begin{lemma}
The sequence
$$
(s_{m+1} (\gamma) - 1)/(2^{-m}-1), \qquad \qquad m \ge 1,
$$
is (i) non-decreasing in $m$, for $\gamma \in [-1/4,0)$; (ii)
non-increasing in $m$, for $\gamma > 0$.
\label{increase2}
\end{lemma}
\begin{proof}
First, we prove (i), which is equivalent to $(2^{m+1}-2)s_{m+2} +1 \le (2^{m+1} - 1)s_{m+1}$.
Using \eq{rec:s} in the form $s_{m+2} = s_{m+1} + \gamma s_m$, this reduces to
$s_{m+1} - 2\gamma (2^m - 1)s_m \ge 1$. This follows directly from a combination of
$-\gamma (2^m -1) s_{m-1} + s_{m+1} \ge 1$, which holds for all $m \ge 2$, and $\gamma \in [-1/4,0)$ by Lemma~\ref{gamlemma} and
$s_{m-1} \le 2 s_m$ (see \eq{db1}).
Next we prove (ii) by an analogous argument. We have to show that
$(2^{m+1}-2)s_{m+2} +1 \ge (2^{m+1} - 1)s_{m+1}$,
which reduces (by \eq{rec:s} in the form $s_{m+2} = s_{m+1} + \gamma s_m$) to
$s_{m+1} - 2\gamma (2^m - 1)s_m \le 1$. This follows from
$-\gamma (2^m -1) s_{m-1} + s_{m+1} \le 1$ (by Lemma~\ref{gamlemma}) and $s_{m-1} \le 2 s_m$ (by \eq{db1}) for all $m \ge 2$.
\end{proof}
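Both monotonicity claims of the last lemma can also be spot-checked on a grid (illustration only):

```python
def s(m, g):                      # s_{m+1} = s_m + g*s_{m-1}, s_0 = 2, s_1 = 1
    a, b = 2.0, 1.0
    for _ in range(m):
        a, b = b, b + g * a
    return a

def u(m, g):
    return (s(m + 1, g) - 1) / (2.0**(-m) - 1)

eps = 1e-12
inc = all(u(m, -k / 100) <= u(m + 1, -k / 100) + eps
          for m in range(1, 12) for k in range(1, 26))    # gamma in [-1/4, 0)
dec = all(u(m, k / 100) >= u(m + 1, k / 100) - eps
          for m in range(1, 12) for k in range(1, 101))   # gamma in (0, 1]
print(inc, dec)                   # True True
```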
\section*{Acknowledgment}
This work was supported by the Slovak Research and Development Agency under the contract No.~APVV-14-0378, by the Scientific Grant Agency of the Slovak Republic under the grant 1/0755/19 (RK) and by the National Science Foundation under grant number NSF-DMS-1522677 (BD). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the funding sources.
The authors wish to thank Casa Matem{\'a}tica Oaxaca and the Erwin Schr{\"o}dinger Institute for their hospitality during the development of the ideas for this work. To appear in SIAM Journal on Mathematical Analysis. Published on arXiv with permission of the Society for Industrial and Applied Mathematics (SIAM).
| 29,642 |
\section{Introduction}
It has long been known that, according to our standard paradigm for the formation of cosmic structure, the clustering of dark matter haloes depends strongly on their mass \citep{kaiser_spatial_1984,efstathiou_gravitational_1988,mo_analytic_1996}.
At fixed mass, large simulations of $\Lambda$CDM universes have shown that halo clustering depends in addition on a host of other properties such as formation time, concentration, spin, shape, substructure fraction and internal velocity dispersion structure \citep{gao_age_2005,wechsler_dependence_2006,gao_assembly_2007,li_halo_2008,dalal_halo_2008,faltenbacher_assembly_2010}. This additional dependence is generically called `assembly bias'. It is sensitive to the specific definition of the property considered, and it varies with halo mass in different ways for different properties.
There is still no detailed theoretical understanding of its origin, and our inability to measure the structure of individual dark haloes directly has made it difficult
to identify observationally.
Until recently, attempts to detect an observational signal of assembly bias were
inconclusive \citep[e.g.][]{yang_observational_2006,tinker_correlated_2012,wang_detection_2013,hearin_dark_2014} and controversial \citep[e.g.][]{lin_detecting_2016}. A strong indication of assembly bias as a function of halo concentration was identified by \cite{miyatake_evidence_2016} in their study of weak gravitational lensing
by a large sample of clusters identified in the SDSS/DR-8 photometric data. Their result was confirmed at much higher signal-to-noise by \cite{more_detection_2016}, who cross-correlated this same cluster sample with individual SDSS galaxies. In both studies, the mean projected distance of individual cluster members from the cluster centre was adopted as a measure of concentration and used to split the sample into equal high- and low-concentration subsamples. Differences at large radius in the mean projected mass and galaxy number density profiles of these two subsamples then provided the evidence for surprisingly strong assembly bias, $b_{lo}/b_{hi}\sim 1.5$.
\citeauthor{more_detection_2016} also used their stacked galaxy number density profiles to search for splashback signals produced by the outer caustics defined by material that is just reaching apocentre after its first passage through the inner cluster. The caustic radius is sharply defined for spherical infall models \citep[e.g.][]{fillmore_self-similar_1984,bertschinger_self-similar_1985,lithwick_self-similar_2011,adhikari_splashback_2014,shi_outer_2016} but
is significantly blurred, even in self-similar models, by realistic deviations from spherical symmetry \citep[e.g.][]{vogelsberger_caustics_2009}. In a $\Lambda$CDM universe, these outer caustics give rise to a sudden steepening of the spherically averaged mean density profile before it flattens out at larger radii due to contributions from neighbouring haloes. This behaviour was studied in some detail by \cite{diemer_dependence_2014} who showed it to depend on halo mass, redshift and recent halo growth. Halo growth histories are intimately connected to their concentration, so \cite{diemer_dependence_2014} also looked for a systematic dependence of splashback signal on concentration. They found that the steepest slope
attained by the mean density profile should become shallower and the radius at which it is attained should become larger as halo concentration increases. When \cite{more_detection_2016} examined the profiles of their low- and high-concentration subsamples, however, they found the opposite ordering both in the minimum slope value and in the radius where it is attained. In addition, these radii were smaller than they expected given their estimates of cluster mass, particularly for the high-concentration subsample. Assuming that cluster galaxies trace the dark matter density profile of their host halo at these outer radii, this is in conflict with the simulation results.
The cluster sample analysed by \cite{miyatake_evidence_2016} and \cite{more_detection_2016} was based on application of the redMaPPer
algorithm \citep{rykoff_redmapper_2014} to the SDSS/DR8 photometric
galaxy catalogues. As its name implies, this cluster finder uses only the non-star-forming `red' galaxies in the catalogue. Clusters are assumed to be centred on their brightest red galaxy, and every red galaxy is assigned a probability of belonging to any particular cluster which depends on its projected distance and maximal possible redshift offset (based on the SDSS photometry) from the cluster central galaxy. This necessarily introduces a non-negligible uncertainty in the true redshift spread among cluster members. The effect of this uncertainty on cluster properties is one of the main focuses of the current paper. Another important element of redMaPPer is the introduction of an outer cluster radius that increases slowly with the number of cluster members and is used by the algorithm to define the cluster richness and to limit the projected region over which membership probabilities are non-zero. As we shall show below, this radius, in part because of its important role in the definition of cluster concentration used by \cite{miyatake_evidence_2016} and \cite{more_detection_2016}, has a significant influence on the apparent assembly bias and splashback signals identified by these authors.
This paper is organized in seven sections. Following this introduction, \autoref{sec:methodology} describes the publicly available simulation data we use, the simplified versions of the redMaPPer and concentration estimation procedures that we apply to them, and the global properties of the resulting cluster samples.
\autoref{sec:projection} begins by demonstrating that our simulated cluster samples reproduce quite well the projected mean mass and galaxy number density profiles obtained by \cite{miyatake_evidence_2016} and \cite{more_detection_2016}, including the strong apparent assembly bias signal and the surprising concentration-dependence of the apparent splashback signal. We then investigate how this apparent success is affected by the maximum offset in depth allowed for potential cluster members, our simplified representation of the effect of photometric redshift uncertainties. In \autoref{sec:three_dimensions}, we study how well the assembly bias and splashback features measured in projection correspond to their analogues inferred from the full three-dimensional mass and galaxy distributions. \autoref{sec:projection_effects} then looks in more detail at our stacked profiles to clarify the distribution in depth of the galaxies which give
rise to the differences in mean projected galaxy number profile between low- and high-concentration clusters, while \autoref{sec:rc_influence} examines how profile shapes are influenced by the radius used by redMaPPer as the effective
limit of clusters. Finally, \autoref{sec:conclusions} gives our principal conclusions.
While we were completing the analysis for this paper, \cite{zu_level_2016} published a preprint in which they repeat the lensing analysis of \cite{miyatake_evidence_2016}
but with the cluster sample split according to a modified definition of concentration which, as they demonstrate, is significantly less sensitive to projection effects.
With this new definition, low- and high-concentration clusters show no detectable large-scale assembly bias. \cite{zu_level_2016} conclude, as we do below, that the strong signal in the original analysis is a result of projection effects. Our own analysis (in \autoref{sec:projection_effects}) shows explicitly how this contamination of the low-concentration clusters is distributed in depth and explains why it produces an apparently constant assembly bias signal at large projected separations.
\section{Methodology}\label{sec:methodology}
Our goal in this paper is to see whether the assembly bias and splashback signals detected by \cite{miyatake_evidence_2016} and \cite{more_detection_2016} are consistent with current models for galaxy formation in a $\Lambda$CDM universe. In particular, we would like to understand the origin of the strong observed dependence of bias on cluster concentration, of the unexpectedly small scale of the detected splashback signal, and of the fact that this signal varies between high and low concentration clusters in the opposite sense to that expected both in strength and in radius. For this purpose, we need a realistic simulation of the formation and evolution of the galaxy population throughout a sufficiently large volume for our analogue of redMaPPer to identify a large sample of rich galaxy clusters.
\subsection{Data}\label{sec:data}
\subsubsection{Dark matter distribution}
Our analysis is based on the \emph{Millennium Simulation} described in \cite{springel_simulations_2005}. This followed structure development within a periodic box of side \SI{500}{\Mpch} assuming a flat $\Lambda$CDM cosmology with parameters from the first-year WMAP results. Although these parameters are not consistent with more recent data, the offsets are relatively small and are actually helpful for this paper since they enhance the abundance of rich clusters in the mass range of interest. The dynamical N-body simulation followed the collisionless dark matter only, representing it with $2160^{3} \sim 10^{10}$ particles of individual mass $8.6\times 10^8h^{-1}M_\odot$ and gravitational softening length \SI{5}{\kpch}.
Haloes and their self-bound subhaloes were identified in 64 stored outputs of this simulation using the \textsc{subfind} algorithm \citep{springel_populating_2001}, and these were linked across time to build subhalo trees which record the assembly history of every $z=0$ halo and its subhaloes. These trees are the basis for simulation (in post-processing) of the formation and evolution of the galaxy population. Galaxies are assumed to form as gas cools, condenses and turns into stars at the centre of every dark matter halo and are carried along as haloes grow by accretion and merging. Both the subhalo merger trees and the specific galaxy formation simulation used in this paper (and discussed next) are publicly available in the Millennium Database\footnote{\url{http://www.mpa-garching.mpg.de/Millennium/}} \citep{lemson_halo_2006}.
\subsubsection{The galaxies}\label{sec:the_galaxies}
The particular galaxy population used in this paper was created using the semianalytic model described in detail in \cite{guo_dwarf_2011}. These authors implemented their model simultaneously on the Millennium Simulation and on the 125 times higher resolution but smaller volume Millennium-II Simulation \citep{boylan-kolchin_resolving_2009}. This allowed them to tune its parameters in order to reproduce the $z=0$ galaxy population over a very wide mass range. In this paper we will only need to consider relatively bright galaxies, well above the limit to which results for the two simulations converge. As a result we will only use data from the larger volume simulation. We will analyse the simulation data from a single snapshot at $z=0.24$. This is the mean redshift of the clusters in the SDSS sample we compare with and is the closest snapshot to its median redshift of 0.25.
For all galaxies, the simulated galaxy catalogue provides positions, velocities and a range of intrinsic properties, including estimated magnitudes in the SDSS photometric bands. We restrict ourselves to galaxies with $i$-band absolute magnitude, $M_{i} < -19.43 + 5\log_{10}h$, which, for our adopted value $h=0.7$, gives $M_{i} < -20.20$. The chosen magnitude limit is very close to the one corresponding to the redMaPPer luminosity limit of $0.2L_*$ at $z=0.24$, i.e. $M_{i} = -20.25$ \citep[see][]{rykoff_robust_2012}. This selection criterion leaves us with 2,239,661 galaxies and matches that adopted by \cite{more_detection_2016} for their SDSS galaxies in order to achieve volume completeness over the redshift range, $0.1\leq z \leq 0.33$.
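The conversion of the magnitude limit from $h$-dependent to fixed-$h$ units is a one-line calculation; the following sketch (the function name is ours) simply verifies the number quoted above.

```python
import math

def absolute_mag_limit(limit_in_h_units=-19.43, h=0.7):
    """Convert a limit quoted as M_i < C + 5 log10(h) to a fixed value of h."""
    return limit_in_h_units + 5 * math.log10(h)

# For h = 0.7: -19.43 + 5 log10(0.7) = -20.20 (to two decimals)
print(round(absolute_mag_limit(), 2))  # -20.2
```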
The next step in mimicking redMaPPer procedures is to define a class of passive or `red' galaxies. For simplicity, we require the specific star formation rate (SSFR) of model galaxies to lie below \SI{1.5e-11}{\h\per\yr}. This avoids using model colour directly, which would introduce a dependence on the (uncertain) modelling of dust effects. However, the two methods produce very similar results in practice, so the choice has no significant effect on the analysis of this paper. 897,604 galaxies qualify as red by our criterion.
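In outline, the red-galaxy selection amounts to a single threshold on SFR$/M_*$. The following sketch uses illustrative values, not actual Millennium Database columns, and ignores the factor of $h$ in the units for simplicity.

```python
import numpy as np

SSFR_CUT = 1.5e-11  # the SSFR threshold quoted in the text (per yr)

# Illustrative star formation rates (M_sun/yr) and stellar masses (M_sun)
sfr = np.array([0.5, 3.0, 0.01, 0.0])
mstar = np.array([1e11, 5e10, 2e10, 8e10])

# 'red' = passive by this criterion
is_red = (sfr / mstar) < SSFR_CUT
print(is_red)  # [ True False  True  True]
```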
\subsection{Cluster Identification and Classification}
Given the galaxy data described above, we wish to identify clusters using a simplified version of the scheme applied to the SDSS photometric data to generate the catalogue analysed by \cite{miyatake_evidence_2016} and \cite{more_detection_2016}. We project the simulated galaxy and mass distributions along each of the three principal axes of the Millennium simulation to obtain three `sky' images, for each of which depth information is available for the galaxies either in real space or in redshift space. In the latter case, the line-of-sight peculiar velocities of galaxies are added to their Hubble velocities to produce
redshift space distortions (RSD). These are important when considering how the use of photometric redshifts affects the assignment of galaxies to clusters (see \autoref{sec:clus_algo}). The following describes our cluster identification scheme and explains how we split the clusters into equal high- and low-concentration subsamples.
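A minimal distant-observer sketch of this redshift-space mapping is given below; the cosmological parameters are the Millennium values, and the helper function is our own, not part of any published pipeline.

```python
import numpy as np

def redshift_space_depth(x_los, v_los, z=0.24, omega_m=0.25, box=500.0):
    """Map comoving depth x_los (Mpc/h) into redshift space: s = x + v/(a H(a)).

    v_los is the line-of-sight peculiar velocity in km/s; H(a) is evaluated
    in h km/s/Mpc so the shift comes out in comoving Mpc/h.  Positions are
    wrapped periodically in the simulation box.
    """
    a = 1.0 / (1.0 + z)
    Ha = 100.0 * np.sqrt(omega_m / a**3 + (1.0 - omega_m))  # flat LCDM
    return np.mod(x_los + v_los / (a * Ha), box)
```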
\subsubsection{Cluster identification algorithm}\label{sec:clus_algo}
Our cluster identification algorithm, inspired by redMaPPer, finds clusters in the projected distribution of red galaxies. Every red galaxy in each of our three projections is considered as the potential centre of a cluster. The algorithm grows clusters by adding new red galaxies
(defined as in \ref{sec:the_galaxies}) in order of increasing projected separation until the richness $\lambda$ and the cluster radius $R_c $ reach the largest values satisfying the relation given by \cite{rykoff_redmapper_2014},
\begin{equation}
R_c(\lambda)=1.0\left(\frac{\lambda}{100}\right)^{0.2}\si{\Mpch}\label{eqn:rc}
\end{equation}
in physical (rather than comoving) units. Initialising with $\lambda = 1$ and $R_c(1)$,
\begin{enumerate}
\item we consider as possible members the $N_g$ red galaxies which lie within $R_c$ and have a (redshift space) depth offset below $\Delta z_m$ ,
\item we calculate $\bar N$, the expected number of uncorrelated (`background') galaxies within $R_c$ and $\Delta z_m$,
\item we update $\lambda=N_g-\bar N$ and $R_c(\lambda)$,
\item we check whether the current central galaxy still has a higher stellar mass than any other cluster member, otherwise we delete it as a potential central and move to the next one,
\item we start the next iteration at (i) if $\lambda$ has increased, otherwise we stop.
\end{enumerate}
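The iteration above can be sketched as follows. This is a deliberately simplified toy: the background estimate is passed in as a constant surface density standing in for $\bar N$, and the central-galaxy check of step (iv) is omitted.

```python
import numpy as np

def r_c(lam):
    """Radius-richness relation of eq. (1), R_c in physical Mpc/h."""
    return 1.0 * (lam / 100.0) ** 0.2

def grow_cluster(r_proj, sigma_bg, lam0=1.0, max_iter=100):
    """Iterate richness and radius to convergence (simplified sketch).

    r_proj   : projected separations (Mpc/h) of red galaxies that already
               pass the depth cut |dz| < dz_m around this central galaxy.
    sigma_bg : assumed uniform background surface number density,
               per (Mpc/h)^2, standing in for the Nbar estimate.
    """
    lam = lam0
    for _ in range(max_iter):
        rc = r_c(lam)
        n_g = np.sum(r_proj < rc)          # step (i): members within R_c
        n_bar = sigma_bg * np.pi * rc**2   # step (ii): expected background
        lam_new = n_g - n_bar              # step (iii): updated richness
        if lam_new <= lam:                 # step (v): stop once growth halts
            break
        lam = lam_new
    return lam, r_c(lam)
```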
This process usually converges quickly and only in a few cases is it unsuccessful in finding a cluster. Note that we choose to require that the central galaxy should be the one with the highest stellar mass. Only in $\sim5$ per cent of the cases is it not simultaneously the brightest in the $i$-band, and we have checked that choosing to require instead that it should be the most luminous has a negligible effect on our results. In the following we will only consider clusters with $20\leq \lambda \leq 100$, again in accordance with \cite{more_detection_2016}.
We will consider three different values for the maximal redshift-space offset allowed for cluster members, $\Delta z_m = $ \SIlist{60;120;250}{\Mpch}; the largest of these is equivalent to
projecting through the full Millennium Simulation. For comparison, the $1\sigma$ uncertainty in the photometric redshift of a single SDSS red galaxy is estimated by \cite{rykoff_redmapper_2014} to be about \SI{90}{\Mpch} at the median redshift of the observed cluster sample. The total number of clusters found (summed over the three projections) is
given in \autoref{tab:cluster_p.pdf}.
\begin{table}
\caption{The size of simulated cluster samples for different maximal depth offsets, $\Delta z_m$.}
\centering
\label{tab:cluster_p.pdf}
\begin{tabular}{lcr}
\hline
\multirow{2}{*}{Sample Name} & $\Delta z_m$ & No. Members\\
& \si{\Mpch} & \\
\hline
CS60 & 60 & 9196 \\
CS120 & 120 & 9213 \\
CS250 & 250 & 8930 \\
\hline
\end{tabular}
\end{table}
These numbers are similar to the number of clusters (8,648) in the
observed sample we are comparing with. This is a coincidence since the volume of the Millennium Simulation is only about a tenth of that in the SDSS footprint over the redshift range $0.1 \leq z \leq 0.33$, but the abundance of rich clusters is enhanced by a factor of about three in the simulation because it assumes $\sigma_8 = 0.9$, significantly above current estimates\footnote{We checked the results of this paper using the public semianalytic catalogue of \cite{henriques_galaxy_2015} which is implemented on a version of the Millennium Simulation rescaled to the Planck 2013 cosmology \citep{planck_collaboration_planck_2014}. We find far fewer clusters: 2407, 2244 and 2307 for the equivalents of CS250, CS120, and CS60, respectively. This corresponds to 83.1\%, 77.5\% and 79.6\% of the expected number of clusters in three times the (rescaled) volume of the simulation. We decided to stay with the original cosmology since the larger number of clusters provides much better statistics.}.
There is, of course, a very substantial overlap between these three cluster samples, but it is not perfect. In \autoref{tab:cluster_overl.pdf} we give the fraction of clusters in a given sample that share their central galaxy (in the same projection) with a cluster in a comparison sample and pass the richness filter in both. We see that most clusters are indeed duplicated. Those that are not, fail because in one of the two samples either a more massive potential member is included or the richness falls outside the allowed range. Such differences are a first indication of sensitivity to projection effects, an issue that is discussed further in \autoref{sec:clus_subfind}.
\begin{table}
\caption{The fractional overlap between different cluster samples.}
\centering
\label{tab:cluster_overl.pdf}
\begin{tabular}{rccc}
\hline
\multirow{2}{*}{Base sample} & \multicolumn{3}{c}{Comparison sample}\\
\cline{2-4}
& CS60 & CS120 & CS250\\
\hline
CS60 & 1.0 & 0.876 & 0.736\\
CS120 & 0.874 & 1.0 & 0.783\\
CS250 & 0.758 & 0.808 & 1.0\\
\hline
\end{tabular}
\end{table}
Notice that the algorithm described above allows a given galaxy to be considered a member of more than one cluster. Although the majority of our simulated clusters do not have such overlaps, they are not negligible; the fraction of clusters which share at least one galaxy
with another cluster in the same projection is 18.8, 21.8 and 26.7 per cent for CS60, CS120 and CS250, respectively. The average number of galaxies in these overlaps is $\sim 14$, which should be compared with the mean number of galaxies per cluster, which ranges from 37 to 46. In order to check the importance of the overlaps, we have repeated our analysis using only the $\sim 80\text{--}75$ per cent of clusters which have no overlap. These are clearly a biased subset with respect to their surroundings, and as a result the stacked profiles change noticeably. However, the conclusions we draw below are not significantly affected, and for the rest of this paper we show only results based on the full cluster samples. We note that the redMaPPer algorithm also allows a given red galaxy to be considered part of more than one cluster, albeit by assigning probabilities to each potential membership based on the galaxy's photometric redshift, its projected separation from each cluster centre, and the richness of the clusters. The consistent use of such probabilities is the principal difference between the actual redMaPPer algorithm and the simplified version we use here.
\subsubsection{Cluster concentrations}\label{sec:cgal}
At the core of the following analysis is the separation of each cluster sample into two equal subsamples with identical richness distributions, but disjoint distributions of concentration $c_{\rm gal}$ as introduced by \cite{miyatake_evidence_2016}. This concentration is based on the mean projected distance from cluster
centre of red galaxy members, $c_{\rm gal} = R_c/\langle R_{\rm mem}\rangle$ where in our case
\begin{equation}
\left<R_{\mathrm{mem}}\right> = \frac{1}{N_{\mathrm{mem}}}\sum\limits_i^{N_\mathrm{mem}} R_{\mathrm{mem},i} .
\end{equation}
We classify a particular cluster as high or low concentration, depending on whether $c_{\rm gal}$ lies above or below the median for all
clusters of the same richness. For richness values with fewer than 200 clusters in a given sample, we bin together neighbouring richness bins to exceed this number before determining the median. For the observed clusters \cite{miyatake_evidence_2016} binned clusters by both richness and redshift before determining the median, but redshift binning is not necessary for the simulated samples since they are all taken from the same simulation output.
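A sketch of this classification is given below, with the richness-dependent median split approximated by grouping the richness-sorted clusters into consecutive bins of at least 200 members; the helper name and details are ours.

```python
import numpy as np

def split_by_concentration(richness, c_gal, min_per_bin=200):
    """True for clusters above the median c_gal of similarly rich clusters.

    Approximates the procedure in the text by sorting clusters by richness
    and taking medians over consecutive groups of at least `min_per_bin`.
    """
    order = np.argsort(richness, kind="stable")
    labels = np.zeros(len(richness), dtype=bool)
    start = 0
    while start < len(order):
        stop = start + min_per_bin
        # absorb a too-small final group into the current one
        if len(order) - stop < min_per_bin:
            stop = len(order)
        idx = order[start:stop]
        labels[idx] = c_gal[idx] > np.median(c_gal[idx])
        start = stop
    return labels
```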
\subsubsection{The cluster-halo correspondence}\label{sec:clus_subfind}
It is not straightforward to connect a galaxy cluster defined in projection with a specific three-dimensional cluster, in our case a specific \textsc{subfind} halo. The idealised model of a spherically
symmetric cluster centred on its most massive galaxy and containing all the cluster members identified in projection corresponds poorly to most of the clusters identified either in the simulation or, most likely, in the SDSS. In almost all cases, the galaxies identified as members in 2D reside in multiple 3D objects distributed along the line-of-sight. This makes the cross-identification between 2D and 3D ambiguous.
Here we consider two possibilities for defining the 3D counterpart
of each 2D cluster: the dark matter halo that hosts the central galaxy and the one that hosts the largest number of member galaxies. The former definition follows the logic of the cluster centring, while the latter ensures that the richness of the 3D counterpart corresponds most closely to that of the 2D system. It is interesting to see how often these definitions coincide, i.e., how often the central galaxy is actually part of the dominant galaxy aggregation along the line-of-sight. We give in \autoref{tab:pairing_stats} the fraction of clusters in each of our three samples for which both methods lead to the same FoF halo. These numbers show that the two definitions are generally in good agreement, and that this is better for smaller maximal depth offsets
and for more concentrated clusters. These trends reflect the projection effects discussed in detail in \autoref{sec:projection_effects}.
It is also interesting to see how many of the potential cluster members identified in 2D are, in fact, part of the same 3D object. For each of our clusters we find the maximal fraction of its members contained in a single 3D FoF halo. The third column of \autoref{tab:pairing_stats} then gives the average of these fractions. This can be compared with the average fraction
of its members contained in the FoF halo of its central galaxy (fourth column) and with the average expected as a result of our background correction,
$\langle\lambda/N_g\rangle$, given in the last column.
The values for $\langle F_{\rm biggest}\rangle$, $\langle F_{\rm central}\rangle$ and $\langle\lambda/N_g\rangle$ in \autoref{tab:pairing_stats} show that we consistently find more `foreign' galaxies in our clusters than we would expect from contamination by a uniform background. The more concentrated clusters have contamination ratios close to, though still a few per cent below, the expected ones. The low-concentration clusters have contamination fractions more than twice the expected values. We therefore conclude that the identified clusters are biased towards arrangements of multiple objects along the line of sight, especially in the low-$c_{\rm gal}$ case. Again, this is very much in line with our discussion of the preferential selection of aligned systems in \autoref{sec:projection_effects}.
\begin{table}
\caption{The fraction of clusters where the central galaxy resides in the FoF halo contributing the largest number of potential 2D members; the mean fraction of such members in this halo; the mean fraction of such members in the FoF halo of the central galaxy; the mean membership fraction predicted by `standard' background subtraction.}
\centering
\label{tab:pairing_stats}
\begin{tabular}{rrcccc}
\hline
\multirow{1}{0.65cm}{Subs.} & \multirow{1}{*}{Sample} & $F_{\rm centred}$ & $\langle F_{\rm biggest}\rangle$ & $\langle F_{\rm central}\rangle$ & $\langle\lambda/N_g\rangle$\\
\hline
\multirow{3}{0.65cm}{All} & CS60 & 0.93 & 0.826 & 0.803 & 0.922 \\
& CS120 & 0.903 & 0.755 & 0.726 & 0.856 \\
& CS250 & 0.848 & 0.635 & 0.595 & 0.743 \\
\hline
\multirow{3}{0.65cm}{high $c_{gal}$} & CS60 & 0.983 & 0.880 & 0.874 & 0.922 \\
& CS120 & 0.973 & 0.819 & 0.812 & 0.855 \\
& CS250 & 0.948 & 0.709 & 0.697 & 0.742 \\
\hline
\multirow{3}{0.65cm}{low $c_{gal}$} & CS60 & 0.876 & 0.772 & 0.732 & 0.923 \\
& CS120 & 0.833 & 0.69 & 0.64 & 0.857 \\
& CS250 & 0.749 & 0.561 & 0.494 & 0.744 \\
\hline
\end{tabular}
\end{table}
\section{Results In Projection}\label{sec:projection}
We are now in a position to investigate whether the assembly bias
and splashback features identified in SDSS data by \cite{miyatake_evidence_2016} and \cite{more_detection_2016} are reproduced when our simplified version of the redMaPPer algorithm is applied to the public Millennium Simulation data. We begin by comparing the observed mean galaxy and mass profiles to directly analogous profiles for CS250, finding that both the surprisingly strong assembly bias and the unexpected properties of the apparent splashback signal are reproduced well. Most differences can be ascribed to the finite size of the simulation or to the simplifications of our cluster identification scheme. We then use our three cluster catalogues to investigate the dependence of these successes on $\Delta z_m$, the maximal depth offset allowed for potential cluster members, finding that the assembly bias signal is sensitive to this parameter but the splashback features are not. Finally we look in more detail at the radial dependence of the ratio of the profiles of low- and high-concentration clusters. Later sections employ the full 3D information available for the simulation to explore the origin of the observed features, and vary our cluster identification scheme to demonstrate how its imprint on the measured profiles can confuse identification of the splashback signal.
\subsection{Comparison of profiles for SDSS and CS250}
We collect the main profile results for the CS250 sample in Figures \ref{fig:deep_gprof} to \ref{fig:deep_mprof}. Here and in the following, unless noted otherwise, the solid line represents the median value from 10000 bootstrap resamplings of the relevant cluster sample. The shaded regions denote the 68 per cent (darker) and 95 per cent (lighter) confidence intervals around this median.
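The error bands are straightforward to reproduce in outline. Below is a sketch assuming the per-cluster profiles have already been stacked into a 2-D array; the function is illustrative, not our production code.

```python
import numpy as np

def bootstrap_profile(profiles, n_boot=10000, rng=None):
    """Median and 68/95 per cent bands of a stacked profile via bootstrap.

    profiles : (n_clusters, n_radial_bins) array of per-cluster profiles.
    Resamples clusters with replacement, averages each resample, and
    returns percentiles of the resulting distribution of mean profiles.
    """
    rng = np.random.default_rng(rng)
    n = len(profiles)
    means = np.empty((n_boot, profiles.shape[1]))
    for b in range(n_boot):
        pick = rng.integers(0, n, size=n)   # resample clusters with replacement
        means[b] = profiles[pick].mean(axis=0)
    # rows: 2.5, 16, 50 (median), 84, 97.5 percentiles per radial bin
    return np.percentile(means, [2.5, 16, 50, 84, 97.5], axis=0)
```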
We calculate the mean galaxy surface number density profile for each cluster sample as
\begin{equation}
\Delta\Sigma_g(R)=\Sigma_g(R)-\bar\Sigma_g \label{eqn:dsg}
\end{equation}
where we use all galaxies brighter than $M_{i} = -19.43 + 5\log_{10}h$, not just the red ones, and we impose no maximal depth offset from the
cluster. $\Sigma_g(R)$ is then the mean over all clusters of the surface number density of such galaxies in an annular bin at projected distance $R$ from the central galaxy, and $\bar\Sigma_g $ is the mean surface density over the full projected area of the simulation.
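This measurement can be sketched as follows; the names are ours, and the projected separations are assumed to be pooled over all clusters before binning.

```python
import numpy as np

def delta_sigma_g(r_proj_all, r_edges, area_total, n_clusters, n_gal_total):
    """Mean galaxy surface overdensity profile, Sigma_g(R) minus its global mean.

    r_proj_all : projected cluster-centric separations (Mpc/h) of all
                 galaxies above the magnitude limit, pooled over clusters.
    r_edges    : annular bin edges; area_total : full projected area.
    """
    counts, _ = np.histogram(r_proj_all, bins=r_edges)
    ring_area = np.pi * (r_edges[1:]**2 - r_edges[:-1]**2)
    sigma_g = counts / (n_clusters * ring_area)  # mean surface density per annulus
    sigma_bar = n_gal_total / area_total         # global mean surface density
    return sigma_g - sigma_bar
```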
\autoref{fig:deep_gprof} shows that CS250 reproduces the findings of \cite{more_detection_2016} remarkably well. The deviation at large scales ($>20\si{\Mpch}$) is expected and reflects a lack of large-scale power in the Millennium Simulation due to its finite size. The offset between the high- and low-concentration subsamples at $R>3\si{\Mpch}$ shows that the simulation reproduces the strong assembly bias seen in the SDSS data. On small scales ($< 300\si{\kpch}$) the number density profile is slightly too steep for the high-concentration clusters, but shows otherwise very good agreement, while there is an offset of $0.1\text{ dex}$ for the low-concentration subsample inside $\SI{400}{\kpch}$. The most notable differences are on intermediate scales, especially in the range $\SI{1}{\Mpch}\leq R \leq \SI{3}{\Mpch}$ for the low-concentration case. For high-concentration clusters the agreement in this range is excellent and extends out to well beyond \SI{10}{\Mpch}. This is the radial range where splashback features are expected, but is also comparable to the radius, $R_c$, used operationally to define clusters. These differences are highlighted in the radial variations of the profile slope, which we look at next.
\begin{figure}
\centering
\includegraphics{gal_surf_dens_prof_250.pdf}
\caption{Mean surface number density profiles $\Delta\Sigma_g$ for galaxies with $M_{i} < -20.20$ surrounding clusters in the low- and high-concentration subsamples of CS250 are compared with observational results from \protect\cite{more_detection_2016}.}\label{fig:deep_gprof}
\end{figure}
In \autoref{fig:deep_gderiv} we plot the logarithmic derivative $\mathrm{d}\log\Delta\Sigma_g/\mathrm{d}\log R$ over a restricted radial range for these same two CS250 subsamples, comparing with the same quantity for SDSS clusters as plotted by \cite{more_detection_2016}. The simulated curves appear noisier than those observed. This is at least in part because of the more direct derivative estimator used here. Nevertheless, we reproduce the main features highlighted by \cite{more_detection_2016}, who identified the position of the minimum of these curves (i.e. the steepest profile slope) as their estimate of the splashback radius. The minima occur at similar radii in the observed and simulated data which, as \cite{more_detection_2016} pointed out, are smaller than expected given lensing estimates of cluster mass. Further, the minimum is deeper for the high-concentration sample and occurs at smaller radius, whereas the opposite is expected from earlier work on the dependence of splashback radius on halo accretion history (and hence concentration; see \citealt{diemer_dependence_2014}). In addition, there are clear differences between the observed and simulated curves. In particular, the profiles of simulated low-concentration clusters are clearly shallower than observed in the range $\SI{200}{\kpch} < R < \SI{1.5}{\Mpch}$.
We discuss these features in more detail in \autoref{sec:rc_influence}, showing them to result from the superposition of effects induced by the cluster selection algorithms on the true splashback signal.
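The direct derivative estimator referred to above amounts to finite differences in log-log space; a minimal sketch is given below. Real analyses often smooth the profile or fit splines first, which is one reason the observed curves look less noisy than this estimator.

```python
import numpy as np

def log_slope(r, sigma):
    """Estimate d log Sigma / d log R on a binned profile.

    Uses second-order finite differences in log-log space via np.gradient;
    no smoothing is applied, so the result is noisy for real data.
    """
    return np.gradient(np.log(sigma), np.log(r))
```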
\begin{figure}
\centering
\includegraphics{gal_surf_dens_deriv_250.pdf}
\caption{The logarithmic derivatives of the $\Delta\Sigma_g$ profiles for CS250 shown in \protect\autoref{fig:deep_gprof} are compared with those plotted for their SDSS clusters by \protect\cite{more_detection_2016}.}\label{fig:deep_gderiv}
\end{figure}
Mean mass density profiles can be computed much more straightforwardly for our simulated cluster samples than is possible observationally, where such profiles are obtained from the correlated orientation of background galaxies induced by gravitational lensing. In order to compare with the lensing results in \cite{miyatake_evidence_2016}, we bin up the projected mass distribution of the simulation around cluster central galaxies in exact analogy to the above procedure for creating galaxy number density profiles, and we manipulate the resulting profiles to obtain the directly observable quantity,
\begin{equation}
\ensuremath{\Delta\Sigma_m}(<R) = \Sigma_m(<R) -\Sigma_m(R) . \label{eqn:dsm}
\end{equation}
Here, $\Sigma_m(R)$ is the surface mass density profile analogous to
$\Delta\Sigma_g(R)$ above, while $\Sigma_m(<R)$ is the mean of this quantity over projected radii interior to $R$. Note that despite the similarity in notation (which we have inherited from earlier work)
$\ensuremath{\Delta\Sigma_m}(<R)$ is not directly analogous to $\Delta\Sigma_g(R)$ and will differ from it in shape even if the projected mass and galaxy number density profiles parallel each other exactly.
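The manipulation from a binned $\Sigma_m(R)$ to the lensing observable can be sketched with a cumulative trapezoidal integral. This helper is our own simplification; a real analysis would integrate the full 2-D mass map.

```python
import numpy as np

def delta_sigma_m(r, sigma_m):
    """Lensing observable: mean Sigma_m interior to R minus Sigma_m at R.

    Approximates Sigma(<R) = (2/R^2) * integral_0^R Sigma(R') R' dR' with a
    cumulative trapezoidal rule; assumes r starts very close to zero so that
    essentially all interior mass is captured.
    """
    integrand = sigma_m * r
    cum = np.concatenate(([0.0], np.cumsum(
        0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))))
    sigma_mean_in = 2.0 * cum / r**2
    # innermost bin: no interior information, so take Sigma(<R) ~ Sigma(R)
    sigma_mean_in[0] = sigma_m[0]
    return sigma_mean_in - sigma_m
```

For a constant surface density the observable vanishes, which provides a simple sanity check.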
In \autoref{fig:deep_mprof} we compare $\ensuremath{\Delta\Sigma_m}(<R)$ obtained in this way for the high- and low-concentration subsamples of CS250 to the profiles inferred by \citeauthor{miyatake_evidence_2016} from their SDSS lensing data. Whereas the observational data show
at most small differences between the high- and low-concentration
subsamples for $R < 10\si{\Mpch}$, our simulated profiles differ significantly in a way which is related to the differences seen in
\autoref{fig:deep_gprof}. Indeed, we have plotted the surface mass density profiles $\Sigma_m(R)$ directly, and find they are very similar in shape and relative amplitude to the simulated galaxy surface density profiles of \autoref{fig:deep_gprof}. We note that the disagreement between simulation and observation is limited to low-concentration clusters -- agreement is very good for the high-concentration systems on all scales below about 15$\si{\Mpch}$.
We have found no explanation for this discrepancy. The uncertainties on the $\ensuremath{\Delta\Sigma_m}(<R)$ inferred from lensing data are much larger than the purely statistical uncertainty in the simulation results, but below 1$\si{\Mpch}$
the simulation results for low-concentration clusters lie systematically below the observations, while beyond 3$\si{\Mpch}$ they tend to lie above
them. (Note that the coloured bands in \autoref{fig:deep_mprof} show the estimated $1\sigma$ uncertainties in the observations.) This disagreement is in line with the stronger differences between the projected galaxy profiles for the low-concentration subsample. Our findings for the differences in the inner part are close to the findings of \cite{dvornik_kids_2017} who recently investigated the mass profiles of galaxy groups. These less massive objects were identified with a different group finder (based on the FoF algorithm), but the same $c_{gal}$ projected concentration measure was used to divide the sample. While they found a similar split at small scales in the lensing profiles, they did not see a significant signal of assembly bias on large scales. This is expected around the masses of groups when splitting by concentration.
\cite{miyatake_evidence_2016} inferred almost equal mean total masses,
$M_{200m} \sim 2\times 10^{14}h^{-1}M_\odot$, for high- and low-concentration clusters from their measured $\ensuremath{\Delta\Sigma_m}(<R)$ profiles. Processed in the same way, our simulated profile for high-concentration clusters would give a very similar answer, whereas that for low-concentration clusters would give a lower value by a few tens of percent. (For $M_{200m} = 2\times 10^{14}h^{-1}M_\odot$, $R_{200m} = 1.5 \si{\Mpch}$, in the middle of the range where simulated and observed $\ensuremath{\Delta\Sigma_m}(<R)$ agree best.) Thus the overall mass-scale of the clusters identified in the \cite{guo_dwarf_2011} galaxy catalogues by our redMaPPer-like algorithm is close to that of the SDSS clusters studied by \cite{miyatake_evidence_2016} and \cite{more_detection_2016}.
\begin{figure}
\centering
\includegraphics{mat_surf_dens_prof_250.pdf}
\caption{The mean lensing observable \ensuremath{\Delta\Sigma_m} for high- and low-concentration clusters in the CS250 sample is compared to observational results for SDSS clusters from \protect\cite{miyatake_evidence_2016}.}\label{fig:deep_mprof}
\end{figure}
\subsection{The influence of cluster selection depth}
The simulation results shown in the last section all referred to CS250
for which any red galaxy projected within $R_c$ is considered a potential cluster member, regardless of its distance in depth (``redshift'') from the central galaxy. As noted previously, \cite{rykoff_redmapper_2014} estimate the $1\sigma$ uncertainty in the photometric redshift of an individual red SDSS member galaxy to increase with distance and to be $90h^{-1}$ Mpc at $z=0.25$, the median redshift of the SDSS cluster sample. Thus many of the observed clusters may be better localised in depth than in the CS250 catalogue. In this section we compare galaxy and mass profiles among our three simulated cluster catalogues, CS250, CS120 and CS60, for which the depth selection parameter $\Delta z_m = 250, 120$ and 60$\si{\Mpch}$, respectively. This allows us to assess how strongly the effective selection depth of clusters affects their apparent splashback and assembly bias signals. We find that effects are small on the former, but can be substantial on the latter.
\autoref{fig:gprof_rats} shows the overall shape of the projected galaxy number density profiles to be very similar in the three cluster catalogues. The high-concentration profiles differ from each other by at most 10 per cent within $R_c$ and remain within the same bound out to $\sim 20 \si{\Mpch}$. Beyond this point the uncertainties increase drastically, and the ratios of the profiles with smaller $\Delta z_m$ quickly depart from unity, but they remain within the 68 per cent confidence interval of the bootstrap distribution. The variation is somewhat smaller for low-concentration clusters: it is below 10 per cent within $R_c$ and below 25 per cent all the way out to $\sim 30 \si{\Mpch}$. Beyond $R_c$ the profile amplitude of low-concentration clusters decreases with decreasing $\Delta z_m$ at all separations where it is reliably determined.
This level of agreement is such that all three catalogues agree almost equally well with observation. In the profiles themselves, systematic differences only start to become noticeable outside $R_c$ and the largest effect is the shift in the large-scale amplitude of the profile for the low-concentration clusters, which, as we will see below (in \autoref{ssec:bias_ratios_2d}) is enough to affect the apparent level of assembly bias significantly. At the intermediate radii relevant for splashback detection, the profile shapes are sufficiently similar that curves like those of \autoref{fig:deep_gderiv} show almost no dependence on $\Delta z_m$.
The \ensuremath{\Delta\Sigma_m} profiles (shown in \autoref{fig:mprof_rats}) also vary only slightly as a function of effective cluster depth, $\Delta z_m$, with shifts of similar amplitude to those seen in the projected galaxy number density profiles. For high-concentration clusters these are even smaller than for the previous case, while for low-concentration clusters they are larger within $R_c$ and have the effect of increasingly smoothing the sudden changes in slope seen in the CS250 profile as $\Delta z_m$ decreases. For both cases the amplitude of the profiles on large scales is decreased
for smaller $\Delta z_m$, though by less than 25 per cent out to $\sim 50 \si{\Mpch}$.
\begin{figure*}
\begin{minipage}{.475\textwidth}
\centering
\includegraphics{gal_surf_dens_prof_rats.pdf}
\caption{Comparison of the \ensuremath{\Delta\Sigma_g} profiles for the high $c_{gal}$ and low $c_{gal}$ subsamples of our three simulated cluster catalogues (upper panel) and ratios of the profile amplitudes for CS120 and CS60 to that for CS250 (lower panel).}\label{fig:gprof_rats}
\end{minipage}
\hspace{0.04\textwidth}
\begin{minipage}{.475\textwidth}
\centering
\includegraphics{mat_surf_dens_prof_rats.pdf}
\caption{Comparison of the \ensuremath{\Delta\Sigma_m} profiles for the high $c_{gal}$ and low $c_{gal}$ subsamples of our three simulated cluster catalogues (upper panel) and ratios of the profile amplitudes for CS120 and CS60 to that for CS250 (lower panel).}\label{fig:mprof_rats}
\end{minipage}
\end{figure*}
\subsection{Profile ratios and assembly bias}\label{ssec:bias_ratios_2d}
By taking the ratio of the profiles discussed in the previous section we can obtain a measure of the relative bias of high- and low-concentration clusters at fixed cluster richness, hence of {\it assembly bias}. In \autoref{fig:surf_dens_rat_comp} we show this ratio for the $\Delta\Sigma_g$ profiles as a function of projected separation for our three catalogues of simulated clusters. In order to measure the large-scale bias, \cite{more_detection_2016} only plotted this ratio at $R \geq \SI{3}{\Mpch}$ (the orange points with error bars in \autoref{fig:surf_dens_rat_comp}). However, since they give the individual profiles for high- and low-concentration clusters, it is straightforward to reconstruct the observed ratio at smaller scales. We show this as a dashed orange line in \autoref{fig:surf_dens_rat_comp}.
The observed and the simulated ratios show similar behaviour which separates into three distinct radial regimes. At $R \geq \SI{3}{\Mpch}$, the relative bias varies little and the observed value of $1.48 \pm 0.07$ matches very well that for CS250 outside of $R=\SI{8}{\Mpch}$. CS120 gives a somewhat smaller value fitting the observations well between 3 and $\SI{10}{\Mpch}$, while at larger $R$ it is still within about $1\sigma$. CS60 has even weaker relative bias barely within $1\sigma$. Both these signals appear to decline with increasing $R$. The behaviour at smaller scales differs markedly on either side of a sharp peak which, for the simulated clusters, occurs almost exactly at $\langle R_c\rangle \sim \SI{1}{\Mpch}$, coinciding with that for the observed clusters. At smaller $R$, the ratio of the profiles increases smoothly and strongly with $R$, reflecting the requirement that the two cluster subsamples have similar richness but systematically different values of $\langle R_{\rm mem}\rangle$. This also enforces a ratio substantially above unity at $R=R_c$. At intermediate radii, $R_c<R<3\si{\Mpch}$, the ratio has to decline from the high value at the peak to the more modest value characteristic of the large-scale assembly bias.
In all three samples there is a noticeable change in slope just outside $2\si{\Mpch}$ which appears to reflect true splashback effects (see \autoref{sec:3Denvironment}).
These properties demonstrate that the operational definition of clusters has a substantial effect on the ratio of the profiles out to at least $3\si{\Mpch}$. These effects must therefore be present also in the individual profiles, and hence must affect their use for identifying splashback features. In addition, the variation of the ratios at large $R$ among our three cluster catalogues shows that the apparent assembly bias signal is significantly affected by projection effects.
The ratios of the \ensuremath{\Delta\Sigma_m} profiles for the high- and low-concentration subsamples of each of our three simulated cluster catalogues are shown in \autoref{fig:mat_rat_cgal} in exactly analogous format to \autoref{fig:surf_dens_rat_comp}. They are compared to observational results taken directly from \cite{miyatake_evidence_2016}. The difference in shape between the simulation curves in Figures~\ref{fig:mat_rat_cgal} and~\ref{fig:surf_dens_rat_comp} is due primarily to the conversion of $\Sigma_m(R)$ to $\Delta\Sigma_m(<R)$. A ratio plot constructed using $\Sigma_m(R)$ directly is quite similar to \autoref{fig:surf_dens_rat_comp}, although the peak at $\langle R_c\rangle$ is less sharply defined. The behaviour of the observational points in \autoref{fig:mat_rat_cgal} is quite erratic and looks rather implausible when compared with the smooth variation predicted by the simulation. Over the ranges $3\si{\Mpch}<R<14\si{\Mpch}$ and $R>15\si{\Mpch}$ the predicted assembly bias signal is almost constant, but over the first range it is much larger than, and apparently inconsistent with, that observed, whereas over the second it is smaller than, and again apparently inconsistent with, that observed. It is our impression that the uncertainties of these observational points are too large for secure interpretation to be possible.
The differences in large-scale assembly bias between our three simulated cluster catalogues are similar to those seen for the cluster number density profiles of \autoref{fig:surf_dens_rat_comp}, although pushed out to systematically larger radii. Again this is a consequence of the conversion from $\Sigma_m(R)$ to $\Delta\Sigma_m(<R)$. On small scales the simulation curves lie well below the observational points. This is a restatement of the fact that the simulated profiles in \autoref{fig:deep_mprof} differ much more at these radii than the observed profiles.
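The conversion invoked above is the standard excess surface density, $\Delta\Sigma(R) = \bar\Sigma(<R) - \Sigma(R)$, with $\bar\Sigma(<R) = (2/R^2)\int_0^R \Sigma(R')\,R'\,\mathrm{d}R'$. A hedged numerical sketch (the grid handling and the constant-core approximation below the innermost tabulated radius are our own simplifications):

```python
import numpy as np

def delta_sigma(R, sigma):
    """Excess surface density: mean Sigma inside R minus Sigma at R.

    R: increasing array of projected radii; sigma: Sigma(R) on that grid.
    The contribution inside the innermost tabulated radius is approximated
    by assuming Sigma is constant there.
    """
    # cumulative integral of 2*pi*R'*Sigma(R') via the trapezoid rule
    integrand = 2.0 * np.pi * R * sigma
    cum = np.concatenate(
        ([0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(R)))
    )
    cum += np.pi * R[0] ** 2 * sigma[0]  # approximate unresolved core
    mean_sigma = cum / (np.pi * R**2)
    return mean_sigma - sigma
```

For a constant $\Sigma$ this returns zero everywhere, and for $\Sigma \propto 1/R$ it returns $\Delta\Sigma(R) \approx \Sigma(R)$, two quick sanity checks on the implementation.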
\begin{figure*}
\hspace*{-0.02\textwidth}
\begin{tabular}{p{.475\textwidth} p{0.00\textwidth} p{.475\textwidth}}
\includegraphics{gal_surf_dens_rat_comp.pdf}
\caption[]{The ratio of the projected galaxy number density profiles of the low $c_{gal}$ and high $c_{gal}$ subsamples of our three simulated cluster catalogues (solid lines surrounded by their 68 per cent confidence regions). Points with error bars are observational data taken directly from \protect\cite{more_detection_2016}, while the continuation of these data to smaller scales (the dashed orange line) was calculated from the individual profiles in their paper. The dotted vertical line indicates $\langle R_c\rangle$ for the simulated clusters. The horizontal orange band is the observed assembly bias signal quoted by \protect\cite{more_detection_2016} with its 68 and 95 per cent confidence ranges.}\label{fig:surf_dens_rat_comp} & &
\includegraphics{mat_surf_dens_rat_comp.pdf}
\caption[]{Ratios of \ensuremath{\Delta\Sigma_m} for the high- and low-concentration subsamples of our three cluster catalogues (solid lines with their 68 per cent confidence ranges). Points with error bars are results derived from the gravitational lensing signal of SDSS clusters by \protect\cite{miyatake_evidence_2016}.}\label{fig:mat_rat_cgal}
\end{tabular}
\end{figure*}
\section{The 3D Perspective}\label{sec:three_dimensions}
\cite{miyatake_evidence_2016} and \cite{more_detection_2016} interpret their SDSS results under the implicit assumption that the features seen in the stacked 2D profiles correspond to similar features in the ``true'' 3D profiles.
In our simulations, it is possible to test the extent to which this is the case, so in this section we compute stacked 3D profiles of mass density and of galaxy number density around the central galaxies of our three cluster catalogues, splitting them into high- and low-concentration subsamples as before using the 2D values of $c_{\rm gal} = R_c(\lambda)/\langle R_{\rm mem}\rangle$. This allows us to make plots directly analogous to those discussed above, and so to check the 2D -- 3D correspondence. In this section all profiles are calculated in true position space rather than in redshift space. Note that we here use a standard definition of the spherically averaged mass density profile rather than some 3D analogue of \ensuremath{\Delta\Sigma_m}. Note also that since each central galaxy can appear in one to three different projections, we give it the corresponding weight when constructing the 3D profiles in order to keep as close a correspondence as possible to the 2D results discussed previously.
\subsection{Splashback Radius}\label{sec:3d_gal_profiles}
As was the case in 2D, we find that plots of the 3D profile slope,
analogous to those of \autoref{fig:deep_gderiv}, are very
similar for our three cluster catalogues. In Figures~\ref{fig:mat_dens3d_deriv_comp} and \ref{fig:gal_dens3d_deriv_comp} we therefore show results for CS250 only. Since recent theoretical work on splashback properties has concentrated on cluster mass profiles \citep[e.g.][hereafter DK14]{diemer_dependence_2014}, we start with a discussion of
\autoref{fig:mat_dens3d_deriv_comp}, which shows the logarithmic slope (referred to as $\gamma$ below) as a function of 3D radius $r$.
These slope profiles show relatively smooth behaviour
with well-defined minima at $r\sim 1.8\si{\Mpch}$. The mean $M_{200m}$ values in the two sub-samples correspond to $R_{200m}\sim 1.45\si{\Mpch}$ and $R_{200m}\sim 1.37\si{\Mpch}$, so these minima occur
at $1.2 R_{200m}$ and $1.3 R_{200m}$ for the high- and low-concentration samples, respectively. These values are very close to those predicted by \cite{more_splashback_2015} for the expected mass accretion rates at these masses and redshift. The slopes at minimum are significantly
shallower for our stacks ($\gamma \sim -2.8$) than DK14 found for halos of similar mass ($\gamma \sim -3.5$). As shown in the Appendix, this is because such profiles depend both on
the definition of the sample to be stacked and on the details of stack
construction. In particular, DK14 scale each individual
profile to its own $R_{200m}$ and then take the median density at each
$r/R_{200m}$, whereas we take the mean density at each radius directly.
The DK14 procedure typically produces deeper and
sharper minima, hence better defined splashback radii which occur at slightly smaller radii, but it is not easily implemented on observed samples. For example, the redMaPPer samples are defined to have similar (and known) values of $R_c$ but their individual values of $R_{200m}$ are unknown. In addition, weak lensing reconstructions of the mass distribution naturally produce mean rather than median mass profiles.
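The distinction drawn here between our mean stacks and the DK14-style median stacks can be made concrete with a short sketch of the slope measurement (our illustration; the finite-difference scheme and function names are assumptions, and the DK14 per-halo rescaling to $R_{200m}$ is deliberately omitted since it requires individual halo masses):

```python
import numpy as np

def log_slope(r, rho):
    """d ln(rho) / d ln(r) by finite differences on the tabulated grid."""
    return np.gradient(np.log(rho), np.log(r))

def stacked_slope(r, profiles, how="mean"):
    """Logarithmic slope of a stacked density profile.

    `profiles` has one row per cluster, sampled at the common radii `r`.
    how="mean" stacks by the mean density at each radius (as in this paper);
    how="median" takes the median density at each radius instead, mimicking
    the DK14-style stack without their per-halo radial rescaling.
    """
    stack = profiles.mean(axis=0) if how == "mean" else np.median(profiles, axis=0)
    return log_slope(r, stack)
```

For a set of pure power-law profiles both choices recover the common slope exactly; the two stacks differ only when the individual profiles have different shapes, which is precisely the situation near splashback.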
The two slope profiles of \autoref{fig:mat_dens3d_deriv_comp} differ
significantly in shape. In the inner regions ($r<R_c$) this reflects the
fact that the two samples are separated by galaxy concentration
(in practice, by $\langle R_{\rm mem}\rangle/R_c$) so that, by definition, the low-concentration clusters have shallower 2D galaxy density profiles within $R_c$ than the high-concentration clusters. \autoref{fig:gal_dens3d_deriv_comp} shows that this requirement carries over
to the 3D galaxy profiles, and it is still very visible in \autoref{fig:mat_dens3d_deriv_comp}. Similar effects are seen in Figure~14
of DK14 where they split their halo sample by
3D mass concentration. However, our results do not agree with the trend they
find for more concentrated clusters to have a shallower minimum slope and a larger splashback radius. We have checked that if we follow their scaling and median stacking procedures, our high-concentration clusters still have a steeper minimum slope and the same splashback radius as our low-concentration clusters. The discrepancy must reflect the difference between selecting halos by 3D mass and mass concentration and selecting clusters by 2D richness and galaxy concentration.
The shapes of the 3D slope profiles for the mass (\autoref{fig:mat_dens3d_deriv_comp}) and for the galaxies (\autoref{fig:gal_dens3d_deriv_comp}) are very similar, in particular,
beyond the splashback minimum. At smaller radii the features induced by
cluster selection are stronger in the galaxy profile, with a secondary minimum
just inside $\langle R_c\rangle$ which is just visible as a slight inflection in the mass profile. Overall, however, the features in the galaxy
profile are much less dramatic than in its 2D analogue, \autoref{fig:deep_gderiv}. This just reflects the fact that clusters were selected and their concentrations estimated using the 2D data.
\begin{figure*}
\begin{minipage}{.475\textwidth}
\includegraphics{3d_mat_dens_deriv_comp.pdf}
\caption[Log-Derivative of $\delta_m$]{Logarithmic derivative profiles of the 3D mass overdensity around the central galaxies of the high- and
low-concentration subsamples of CS250. Vertical lines mark the $R_{200m}$
values for the two samples calculated directly from their stacked mass profiles.}\label{fig:mat_dens3d_deriv_comp}
\end{minipage}
\hspace{0.04\textwidth}
\begin{minipage}{.475\textwidth}
\includegraphics{3d_gal_dens_deriv_comp.pdf}
\caption[Log-Derivative of $\delta_g$]{Logarithmic derivative profiles of the 3D galaxy number overdensity around the central galaxies of the high- and low-concentration subsamples of CS250 in identical format to \protect\autoref{fig:mat_dens3d_deriv_comp} except that a solid vertical line indicates $\langle R_c\rangle$ for the two samples.}\label{fig:gal_dens3d_deriv_comp}
\end{minipage}
\end{figure*}
\subsection{Large-scale environment}\label{sec:3Denvironment}
We now look at the ratios of stacked 3D mass overdensity profiles for our low- and high-concentration clusters, and at the corresponding ratios of their galaxy number overdensity profiles. These are directly analogous to the ratios of 2D galaxy number overdensity profiles shown in \autoref{fig:surf_dens_rat_comp}. As in that figure, we here compare results for the three samples, CS60, CS120 and CS250. Ratios as a function of $r$ are shown for mass overdensities in \autoref{fig:mat_dens_rat_comp} and for galaxy number overdensities in \autoref{fig:gal_dens_rat_comp}. The shapes of the curves and their relative positions for the three samples are very similar in these two
figures.
In the inner regions, $r < R_c$, all curves are rapidly and smoothly rising, showing that the difference in 2D galaxy profiles resulting from our classification by concentration carries over to the 3D galaxy and mass profiles. In this regime and in both plots the ratio for CS60 is slightly larger than that for CS120 and significantly larger than that for CS250. This behaviour mirrors that of the ratio of the fractions of 2D potential members which are part of the central galaxy's FoF group (see
\autoref{tab:pairing_stats}). Interestingly, this ranking of amplitudes for the three samples persists to much larger scales and is opposite to that
seen in 2D (\autoref{fig:surf_dens_rat_comp}). Clearly, with increasing $\Delta z_m$, projection effects contribute more strongly to low- than to high-concentration clusters not only at $R\sim R_c$ but also
at much larger projected separation.
In the range $R_c < r < 5\si{\Mpch}$, all curves continue to rise to a sharp peak before dropping again to a value which remains approximately constant over the interval $5\si{\Mpch}$ $< r < 30 \si{\Mpch}$. The peak corresponds to the crossing of the derivative curves for the low- and high-concentration subsamples in Figures~\ref{fig:mat_dens3d_deriv_comp} and~\ref{fig:gal_dens3d_deriv_comp}. It thus reflects differences in the way the splashback feature merges into larger scale structure in the two cases. As noted above, it appears to be visible as a sharp change in slope in the profiles of \autoref{fig:surf_dens_rat_comp} (see also \autoref{fig:rc_deriv_comp} below). Between $R_c$ and the peak, effects from sample definition clearly modulate galaxy overdensity profile ratios
more strongly than mass overdensity profile ratios but the difference is
quite small.
The constant profile ratios seen over the range $5\si{\Mpch}$ $< r < 30 \si{\Mpch}$ are a direct measurement of the 3D assembly bias for cluster samples split by 2D concentration. These values are significantly smaller than the 2D values inferred from \autoref{fig:surf_dens_rat_comp}. In addition, they rank in the
opposite sense with $\Delta z_m$, they are consistent between Figures~\ref{fig:mat_dens_rat_comp} and~\ref{fig:gal_dens_rat_comp}, and they are similar to the values expected from previous work on assembly bias for cluster mass haloes split by concentration \citep[e.g.][]{more_detection_2016}. As we will see in the next section, a clue to the origin of this difference between the 2D and 3D estimates of assembly bias comes from the largest $r$ bins in these figures where, although noisy, the ratios of the profiles rise to large values.
\begin{figure*}
\hspace*{-0.02\textwidth}
\begin{tabular}{p{.475\textwidth} p{0.00\textwidth} p{.475\textwidth}}
\includegraphics{3d_mat_dens_rat_comp.pdf}
\caption[$\delta_{m}$ Ratios]{Ratios of the 3D mass overdensity
profiles of low- and high-concentration clusters for each of our
three cluster samples. The vertical line indicates the mean cluster radius $\langle R_c\rangle$.}\label{fig:mat_dens_rat_comp} & &
\includegraphics{3d_gal_dens_rat_comp.pdf}
\caption[$\delta_{g}$ Ratios]{Ratios of the 3D galaxy number overdensity profiles of low- and high-concentration clusters for each of our three cluster samples with a vertical line indicating the mean cluster radius $\langle R_c\rangle$.}\label{fig:gal_dens_rat_comp}
\end{tabular}
\end{figure*}
\section{Projection contamination}\label{sec:projection_effects}
In the preceding sections we found a number of differences in the apparent splashback and assembly bias signals between the 2D and the 3D profiles of
our simulated galaxy clusters. These differences are present both in the mass and in the galaxy number density profiles, and they affect the low- and high-concentration subsamples to differing degrees. In this section we focus specifically on galaxy number density profiles, compiling them in the two dimensions of projected separation and line-of-sight depth so that we can compare results for the two subsamples and isolate the distribution in depth
of the galaxies which give rise to the difference in projected profiles.
Let $R$, as above, denote projected separation, and $q>0$ denote line-of-sight separation, measured either in configuration space ($q = |d|$) or in redshift
space ($q = |\pi|$). We define a set of cells of constant width in $\ln R$
and $\ln q$ and compile galaxy counts in these cells around the central
galaxies of the low- and high-concentration subsamples of each of our cluster
samples, $N_{lo}(R,q)$ and $N_{hi}(R,q)$ respectively.
In Figures~\ref{fig:diff_pair_counts_norsd} and~\ref{fig:diff_pair_counts} we show the quantity
\begin{equation}
\beta(R,q) = \frac{N_{lo}(R,q)-N_{hi}(R,q)}{\sum_q [N_{lo}(R,q)+N_{hi}(R,q) - N_{c}n_{gal}V(R,q)]},\label{eqn:betaRq}
\end{equation}
for the real-space and redshift space cases respectively. In this equation,
$N_c$ is the total number of clusters in the sample, $n_{gal}$ is the mean space density of galaxies, and $V(R,q)$ is the volume of the cell at $(R,q)$. Thus $2\sum_q\beta(R,q) = b_{lo}(R) - b_{hi}(R)$, where the assembly bias factors $b_{lo}$ and $b_{hi}$ are the ratios of the stacked 2D galaxy
number overdensity profiles of the low- and high-concentration subsamples to
that of the cluster sample as a whole. The distribution of $\beta$ over $q$
at fixed $R$ thus indicates the distribution in depth of the difference in galaxy counts which gives rise to the apparent 2D assembly bias signal.
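Given binned counts, the statistic above is a direct array operation. A minimal sketch (our illustration; the function name and the $[R, q]$ array layout are assumptions):

```python
import numpy as np

def beta(N_lo, N_hi, N_c, n_gal, V):
    """The quantity beta(R, q) defined in the text; arrays are indexed [R, q].

    N_lo, N_hi: stacked galaxy counts in (R, q) cells around the centrals of
    the low- and high-concentration subsamples; N_c: total number of
    clusters; n_gal: mean galaxy number density; V: cell volumes V(R, q).
    """
    # background-subtracted total counts, summed over depth q at each R
    denom = np.sum(N_lo + N_hi - N_c * n_gal * V, axis=1, keepdims=True)
    return (N_lo - N_hi) / denom
```

Summing the returned array over $q$ at fixed $R$ and multiplying by two recovers $b_{lo}(R) - b_{hi}(R)$, as stated above.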
In the inner regions ($R < 400\si{\kpch}$) the projected profile of high $c_{gal}$ clusters lies above that of low $c_{gal}$ clusters for all three samples (see \autoref{fig:surf_dens_rat_comp}). \autoref{fig:diff_pair_counts_norsd} shows that, as expected, the additional galaxies which produce this excess lie in the inner regions of the clusters, with a median depth offset from the central galaxy
of $150\si{\kpch}$ or less. In redshift space, the random motions within clusters move this excess out to $|\pi| \sim \SI{700}{\km\per\s}$, as shown in \autoref{fig:diff_pair_counts}.
Beyond $R = 400\si{\kpch}$ the behaviour switches and the projected profile of low $c_{gal}$ clusters lies above that of high $c_{gal}$ clusters (again see \autoref{fig:surf_dens_rat_comp}). The galaxies which produce this excess lie
in two different ranges of depth whose relative contribution varies both with $R$ and with $\Delta z_m$. At $R < 2\si{\Mpch}$, a 'local' component centred near $R \sim |d| \sim \langle R_c\rangle$ contributes most of the excess low $c_{gal}$ counts in CS60, about half of them in CS120, and a minority of them in CS250, producing much of the pronounced peak seen at these $R$ in the profile ratios of \autoref{fig:surf_dens_rat_comp}. A second component, distributed relatively uniformly over $\pm \Delta z_m$, the full allowed depth for potential cluster members, contributes excess counts to the low $c_{gal}$ cluster profiles at all $R>R_c$ and is responsible for most of the large-scale assembly bias. It also dominates the excess counts near $\langle R_c\rangle$ in CS250. The systematic change in the relative weight of these two components with increasing $R$ results in a shift in the median depth offset of the excess counts, indicated by the black solid lines in Figures~\ref{fig:diff_pair_counts_norsd} and~\ref{fig:diff_pair_counts}. The increasing strength of the second component from CS60 to CS120 to CS250 is the cause of the increase in 2D assembly bias with $\Delta z_m$. \autoref{fig:diff_pair_counts} shows that redshift space distortions significantly smear out these two components and make them more difficult to distinguish.
These results explain why strong assembly bias is seen in 2D for CS250 and CS120 (see \autoref{fig:surf_dens_rat_comp}) but only a much weaker signal is seen in 3D
(\autoref{fig:gal_dens_rat_comp}). Many of the low-concentration clusters in these samples have significant foreground/background groups projected on their outer regions. These groups are distributed over the full depth $\pm \Delta z_m$, and are visible in Figures~\ref{fig:diff_pair_counts_norsd} and~\ref{fig:diff_pair_counts} as an excess in bins at large $q$ and $R\sim R_c$. Galaxies correlated with these foreground/background groups then produce excess galaxy counts at similar $q$ for all $R$ values shown in the plot. Since the fall-off in these counts with $R$ at the $q$ of the background group is similar to that of galaxy counts at relatively small $q$ correlated with the primary cluster, the induced apparent assembly bias is almost independent of $R$. The rise in 3D assembly bias seen at the largest $r$ in \autoref{fig:gal_dens_rat_comp} is a result of beginning to pick up this additional correlated component in the counts around low-concentration clusters.
The strength of this effect clearly depends on the sensitivity of the cluster identification algorithm to projection effects at $R\sim R_c$. This in turn depends both on the effective $\Delta z_m$ and on the weight assigned to potential members near the cluster edge. Hence, the apparent bias may differ between the real redMaPPer sample and our simulated samples. Nevertheless, the strong similarity seen in previous sections between the behaviour of our CS250 and CS120 samples and the SDSS sample analysed by \cite{more_detection_2016} and \cite{miyatake_evidence_2016} suggests that the assembly bias signal they found has a similar origin to that in the simulation. In the next section we will explore further the dependence of apparent splashback features on cluster definition and argue that the unexpected properties of the features detected by \cite{more_detection_2016} are a result of confusion with features imposed by the cluster selection procedure.
\begin{figure*}
\hspace*{-0.02\textwidth}
\begin{tabular}{p{.475\textwidth} p{0.00\textwidth} p{.475\textwidth}}
\includegraphics[width=.475\textwidth]{asymmetry_norsd.pdf}
\caption[Differential Pair Counts]{The quantity $\beta(R,q)$ of \protect\autoref{eqn:betaRq} for the case $q= |d|$. This shows the distribution over depth $q$ of the fractional difference between the projected galaxy count profiles of the
low $c_{gal}$ and high $c_{gal}$ subsets of each of our three simulated cluster samples. The black curves give the median offset in depth of the excess counts as a function of $R$.}\label{fig:diff_pair_counts_norsd} & &
\includegraphics[width=.475\textwidth]{asymmetry_rsd.pdf}
\caption[Differential Pair Counts]{Identical to \protect\autoref{fig:diff_pair_counts_norsd} except for the redshift space case, $q=|\pi|$.}\label{fig:diff_pair_counts}
\end{tabular}
\end{figure*}
\section{Cluster definition affects profile shape}\label{sec:rc_influence}
We have argued above that the details of our redMaPPer-like algorithm leave an
imprint on the stacked profiles of our simulated clusters. Although this is most evident in the strong peak at $R_c$ in the profile ratios of \autoref{fig:surf_dens_rat_comp} and in the steep gradient interior to this radius induced by our separation of the two subsamples by concentration, $c_{\rm gal}$, it is also visible in the crossing at $R_c$ of the individual gradient profiles of \autoref{fig:deep_gderiv} and in their minima close to and on opposite sides of this radius. In this section we investigate these effects further by varying the value of $R_c$ used to define clusters. Specifically, we set
\begin{equation}
R_c = 1.0 \eta \left(\frac{\lambda}{\lambda_{n}(\eta)}\right)^{0.2}\si{\Mpch}\label{eq:varlam}
\end{equation}
and we change $\eta$.
The variable normalisation $\lambda_{n}\left(\eta\right)$ in \autoref{eq:varlam} accounts for the fact that a given cluster will contain more galaxies within a larger
projected radius. In the following we will consider $\eta = 2/3, 1$ (the value used in all earlier sections) and 3/2. Based on the mean galaxy number overdensity stacks of \autoref{ssec:bias_ratios_2d}, we take $\lambda_{n}\left(\eta=\frac{2}{3}\right)=74$, $\lambda_n(1) =100$, as before, and $\lambda_{n}\left(\eta=\frac{3}{2}\right)=130$. For each choice of $\eta$ we repeat the cluster selection and concentration calculation procedures of Sections~\ref{sec:clus_algo} and~\ref{sec:cgal}. Since changing $R_c$ changes the richness value $\lambda$ assigned to each cluster, we shift the richness range defining our samples ($20\leq \lambda \leq 100$ for $\eta = 1$) so that the total numbers of simulated clusters above the upper and lower limits remain unchanged. In the following we show results for $\Delta z_m = 250\si{\Mpch}$ only, since the two other cases behave very similarly.
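The scaling of \autoref{eq:varlam} with its $\eta$-dependent normalisation is simple to evaluate. A sketch using the three normalisations quoted above (the dictionary layout and function name are our own):

```python
# Normalisations lambda_n(eta) quoted in the text
LAMBDA_N = {2 / 3: 74.0, 1.0: 100.0, 3 / 2: 130.0}

def cluster_radius(richness, eta):
    """R_c in Mpc/h: R_c = eta * (lambda / lambda_n(eta))**0.2."""
    return eta * (richness / LAMBDA_N[eta]) ** 0.2
```

By construction a cluster of richness $\lambda = \lambda_n(\eta)$ has $R_c = \eta\,\si{\Mpch}$, so the three samples are normalised consistently.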
\autoref{fig:rc_bias_comp} repeats the observational and CS250 results from \autoref{fig:surf_dens_rat_comp} and compares them with predictions for $\eta = 2/3$ and 3/2. The peak of the profile ratio increases strongly with $\eta$ and shifts to match $\langle R_c\rangle$ in all three cases. Interestingly, the profile ratio for $\eta=2/3$ peaks at a value of 1.8 at a radius where it is 0.8 for $\eta=3/2$, and the ratio is unity for $\eta=2/3$ at a radius where it is only 0.6 for $\eta=3/2$. Thus, changing the limiting radius defining a cluster sample not only affects its stacked profiles in their outer parts, but also close to the centre. Beyond $R_c$, the secondary feature noted in \autoref{ssec:bias_ratios_2d} and apparently associated with true splashback effects is clearest for $\eta=2/3$ and is very weak for $\eta=3/2$. At large $R$, the strength of assembly bias increases noticeably with $\eta$. The stronger peak, the weaker splashback signal and the stronger large-scale assembly bias found with increasing $\eta$ are all consistent with the expectation that projection effects should increase in importance when clusters are identified within larger radii, hence at lower projected overdensities. Also as expected, overall the
SDSS results of \cite{more_detection_2016} behave most similarly to the $\eta=1$
curves in \autoref{fig:rc_bias_comp}. Nevertheless, at large scales the observed ratios agree equally well with those for $\eta=3/2$.
\begin{figure}
\includegraphics{gal_surf_dens_rat_Rc_comp.pdf}
\caption[]{The ratio of the projected galaxy number density profiles of the low $c_{gal}$ and high $c_{gal}$ subsamples of CS250, taken from \autoref{fig:surf_dens_rat_comp}, is compared with those found for cluster samples selected with the same value of $\Delta z_m$ but with $\eta = 2/3$ and 3/2 in \autoref{eq:varlam}, rather than $\eta=1$. Points with error bars and their continuation to smaller scales are
the same as in \protect\autoref{fig:surf_dens_rat_comp}. Vertical lines indicate $\langle R_c\rangle$ for the three samples.}\label{fig:rc_bias_comp}
\end{figure}
As shown in \autoref{fig:rc_deriv_comp}, the logarithmic derivative of $\Delta\Sigma_g$ shows a strong and complex response to $\eta$. The middle panel here is essentially a repeat of \autoref{fig:deep_gderiv}, while the upper and lower panels show similar plots for $\eta=2/3$ and $\eta=3/2$ respectively. A striking feature of these plots is that the slope profiles for the two subsamples always
cross around $R=\langle R_c\rangle$ and at a value of about $-1.4$. The crossing ``coincidence'' is mathematically equivalent to the fact that all the profile ratios have a maximum at $R\sim R_c$ in \autoref{fig:rc_bias_comp}, which itself is easily understood as a consequence of our creating subsamples with identical distributions of $\lambda$ but disjoint distributions of $c_{\rm gal}$, thus forcing the profile ratio to increase over the range $0<R<\langle R_c\rangle$. The uniform slope value at curve crossing reflects the fact that this value equals the slope for the sample as a whole, which is quite slowly varying and close to $-1.4$ at these projected radii.
Within the crossing point, the slope for low-concentration clusters rises rapidly to a maximum of about $\gamma=-0.5$ at $R\sim \langle R_c\rangle$, while the slope for the high-concentration clusters drops to a minimum at approximately the same radius but with a value which decreases strongly with increasing $\eta$. This behaviour is clearly a consequence of our definition of $c_{\rm gal}$ and our separation of clusters into subsamples according to its value. On larger scales, the slope profiles
appear independent of $\eta$ when $R$ exceeds twice the largest value of $\langle R_c\rangle$ for the samples being compared. However, the curves for high- and
low-concentration clusters differ both from each other and from those of \cite{more_detection_2016} in this regime. In the intermediate range, $\langle R_c\rangle < R< 2\langle R_c\rangle$, the shape of the curves is set by the need
to interpolate between these two different behaviours, causing a minimum at or just outside $\langle R_c\rangle$ and a maximum at slightly larger radius in the low- and high-concentration cases respectively.
In none of these panels are the simulated curves a good fit to the observed ones.
The results for high-$c_{\rm gal}$ clusters match quite well for $\eta = 3/2$, but the best fit
for the low-$c_{\rm gal}$ clusters is for $\eta = 1$, and even here the overall depth and the general shape of the features differ significantly. Given the strong sensitivity to the cluster identification algorithm and to the splitting by $c_{\rm gal}$, it is likely that these discrepancies reflect detailed differences between the real redMaPPer and concentration definition procedures and the simplified versions used here. It is clear that it will be very difficult to infer reliable information about
splashback signals from data of this kind without a complete understanding of these effects.
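The logarithmic-derivative profiles discussed above can be illustrated with a minimal numpy sketch (the function name and the plain finite-difference scheme on a log-spaced grid are our own assumptions; the actual analysis pipelines typically fit smooth functions to the stacked profiles before differentiating):

```python
import numpy as np

def log_slope(R, Sigma):
    """Logarithmic slope gamma(R) = dln(Sigma)/dln(R) of a stacked
    profile sampled at projected radii R > 0."""
    return np.gradient(np.log(Sigma), np.log(R))
```

For a pure power law $\Delta\Sigma_g \propto R^{-1.4}$ this returns $-1.4$ at every radius, the value at which the subsample slope profiles cross.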
\begin{figure}
\includegraphics{gal_surf_dens_deriv_Rc_comp.pdf}
\caption[$\Delta\Sigma_{g}$ Radial Log-Derivatives, $R_c$ Comparison]{The logarithmic derivatives of simulated and observed $\Delta\Sigma_g$ profiles from \protect\autoref{fig:deep_gderiv} are repeated in the middle panel and compared with results from simulated cluster catalogues with the same value of $\Delta z_m$ but $\eta=2/3$ and 3/2 (top and bottom panels respectively). A solid vertical line in each panel indicates the value of $\langle R_c\rangle$ for the relevant sample.}\label{fig:rc_deriv_comp}
\end{figure}
\section{Conclusions}\label{sec:conclusions}
In their analysis of a volume-limited sample of 8648 clusters selected by applying the redMaPPer algorithm to the SDSS/DR8 photometric data, \cite{more_detection_2016} detected strong assembly bias as a function of cluster concentration on projected scales $\SI{5}{\Mpch}< R <\SI{30}{\Mpch}$, and substantial variations in the slope of cluster projected galaxy number density profiles in the range $\SI{500}{\kpch} < R < \SI{5}{\Mpch}$ which they attributed to splashback effects. The assembly bias signal had previously been seen at lower signal-to-noise by \cite{miyatake_evidence_2016} in gravitational lensing data for the same cluster sample. By using a simplified version of the redMaPPer scheme on three orthogonal projections of publicly available galaxy catalogues from the Millennium Simulation, we have been able to identify up to 9196 clusters of similar richness, which we classify by concentration in a similar way to the SDSS studies. This allows us to carry out analyses directly analogous to those of \cite{more_detection_2016} and \cite{miyatake_evidence_2016} and to compare with
results obtained from the full 3D information available for the simulation. This
gives considerable insight into the features seen in the SDSS analysis.
The mean projected profiles of mass and galaxy number density which we find for the simulation are very similar to those found observationally, both for the
cluster sample as a whole and for its low- and high-concentration subsamples.
The apparent assembly bias on large scales agrees well with that observed, as does
the shape of the ratio of the low- and high-concentration profiles which rises with decreasing projected radius $R$ to a peak at the mean value of $R_c$, the limiting radius used to define clusters, before dropping precipitously towards smaller scales. The
variation with $R$ of the logarithmic slope of the mean galaxy number density
profiles shows a more complex structure than in SDSS, but reproduces the main
features pointed out by \cite{more_detection_2016}: the main minimum (the point where the profile is steepest) occurs at smaller radius than expected from the splashback studies of \cite{diemer_dependence_2014} and in addition the minima
for the low- and high-concentration subsamples rank oppositely to the splashback expectation both in depth and in radius.
The observed large-scale assembly bias is best reproduced when all red galaxies projected onto a cluster (hence within $\pm \SI{250}{\Mpch}$ in depth) are considered as potential members. The signal is slightly weaker if the maximal allowed depth offset is reduced to \SI{120}{\Mpch} and significantly weaker if it is reduced to \SI{60}{\Mpch}. Such changes have negligible effect on the logarithmic slope profiles of stacked galaxy counts. Hence projection over relatively large depths appears to be a significant factor in apparent assembly bias but not in apparent splashback features.
The above results, derived by stacking simulated clusters in projection, can be compared to results obtained from a directly analogous analysis of the full 3D data.
This shows some striking differences. The 3D assembly bias for separations between 3 and $30\si{\Mpch}$ is considerably smaller than that seen in 2D ($b\sim 1.15$ rather than $b\sim 1.5$) and varies in the opposite way with the maximum depth offset allowed for cluster members. The peak in the ratio of the galaxy number density profiles for low- and high-concentration clusters occurs at a substantially larger radius in 3D than in 2D ($r\sim 2.5\si{\Mpch}$ rather than $R\sim 800\si{\kpch}$). The logarithmic derivatives of the 3D mass and galaxy overdensity profiles vary more smoothly than in 2D, and show a single minimum which is at larger radius than in 2D and at the same position for the low- and high-concentration clusters. The ranking of the minima in depth remains opposite to that expected from splashback theory. (See the Appendix for a discussion of how cluster selection, scaling and stacking procedures can affect apparent splashback features).
The effects of projection and cluster definition on stacked cluster profiles can be clarified by examining them in the two-dimensional space of projected separation and line-of-sight depth. This allows identification of the depth ranges which give rise to the difference in projected counts around low- and high-concentration clusters. As expected, the galaxy excess at small projected radius which produces the high central surface density of high-concentration clusters is made up of objects which are close to the cluster centre also in 3D. In redshift space, these excess counts appear at offsets $\sim \SI{800}{\km\per\s}$, in the wings of the cluster velocity dispersion. At projected radii $500\si{\kpch}$ $<R<2\si{\Mpch}$, much of the projected count excess around low-concentration clusters comes from galaxies offset in depth by $\sim 1\si{\Mpch}$, apparently indicating that low-concentration clusters live in richer environments than their high-concentration analogues. At larger projected separation, the galaxies responsible for the strong assembly bias signal are distributed almost uniformly over the full depth accessible to potential cluster members, showing that they are correlated with background groups preferentially projected onto the low-concentration clusters, rather than with the clusters themselves. The overall effect of projection on 2D assembly bias clearly depends strongly both on the details of cluster and concentration definition and on the accuracy of the available photometric redshifts.
At projected radii $500\si{\kpch}<R <3\si{\Mpch}$ where splashback effects are expected to be present, distant foreground and background galaxies contribute negligibly to projected cluster profiles. These are, however, strongly affected by the specific algorithms used to identify clusters and to classify them according to concentration. We demonstrate this explicitly by changing the limiting radius $R_c$ within which red galaxies are counted as cluster members. Even though we take care to adjust parameters so that the abundance and typical mass of clusters are matched for different choices of limiting radius, we find that this radius is strongly imprinted on the mean projected profiles of the resulting samples. The effects are dramatic, both on the ratio of the profiles for low- and high-concentration clusters and on the shape of the logarithmic derivative profiles for the individual subsamples. It will be difficult to obtain reliable information about splashback without detailed understanding of such effects for the particular algorithms used to select an observed cluster sample.
\section*{Acknowledgements}
The authors thank Surhud More for useful discussions of this work.
\section{Introduction}
The study of fluctuating magnetic fields has found widespread application in a variety of disciplines. High-cadence (i.e., high effective sampling rate) magnetic field measurements have been used for magnetic-anomaly detection (MAD), which has numerous security applications, such as naval defense and unexploded ordnance detection \cite{sheinker:2009}. Additionally, geographically distributed magnetometers, operating at high sample rates, have been designed to study the global distribution of lightning strikes using the Schumann resonance \cite{Schlegel:2002}.\\
Low-frequency magnetic field measurements, typically corresponding to large length scales, provide important information relating to the nature of magnetic sources in the Earth's core, maps of near-surface fields, as well as crustal composition and structure models \cite{egbert:1997}. An international consortium of magnetometer arrays, known as SuperMag, comprises approximately 300 magnetometers operated by numerous organizations \cite{Gjerloev:2009,Gjerloev:2012}. The magnetic field data collected from these stations are uniformly processed and provided to users in a common coordinate system and timebase \cite{Gjerloev:2012}. Such data are important to global-positioning-system (GPS)-free navigation, radiation-hazard prediction \footnote{\url{http://www.nws.noaa.gov/os/assessments/pdfs/SWstorms_assessment.pdf}}, climate and weather modeling, and fundamental geophysics research. Additionally, measurements of the auroral magnetic field are necessary in testing models for space-weather prediction, which aims to mitigate hazards to satellite communications and GPS from solar storms \cite{Angelopoulos:2008,Peticolas:2008,Harris:2008}.\\
Magnetometry has additionally been applied in the search for earthquake precursors. Anomalous enhancements in ultralow frequency (ULF) magnetic fields were reported leading up to the October 17, 1989 Loma Prieta earthquake \cite{fraser-smith:1990}. Similar anomalous geomagnetic behavior was observed for the month preceding the 1999 Chi-Chi earthquake in Taiwan \cite{yen:2004}. Recently, attempts have been made to study precursors to the 2011 Tohoku earthquake in Japan \cite{hayakawa:2015}.\\
Despite its numerous applications, magnetometry is frequently limited by contamination from unrelated, and often unknown, sources. In their search for earthquake precursors, Fraser-Smith et al. (1978) found that magnetic noise from the Bay Area Rapid Transit (BART) system dominated their ULF sensors \cite{fraser-smith:1978}. This noise is evident in Fig. \ref{berkeley_garden}, which depicts the magnitude of the magnetic field recorded at the University of California (UC) Berkeley Botanical Garden. Despite the absence of obvious local permanent or time-varying sources of magnetic contamination, the recorded magnetic field is dominated by fluctuations well beyond typical geomagnetic values. These fluctuations diminish dramatically between approximately 1 AM and 5 AM local time, indicating that these are the same fluctuations that are attributed to the operation of BART \cite{fraser-smith:1978}.\\
\begin{figure*}[ht]
\centering
\includegraphics[width=0.8\textwidth]{img_botanical_garden}
\caption{The Earth's magnetic field, as recorded at the UC Berkeley Botanical Garden. The red trace represents measurements taken by an all-optical self-oscillating magnetometer designed and constructed at UC Berkeley in partnership with Southwest Sciences, Inc. \cite{Higbie:2006}. The green and blue traces are data taken with a commercially available cesium magnetometer from Geometrics, Inc. The gray trace represents the average field seen by a geomagnetic observatory located in Fresno, CA.}
\label{berkeley_garden}
\end{figure*}
Even during the magnetically quieter nighttime period, there remain fluctuations which exceed expected geophysical values. Certainly, some of the field variation can be attributed to variations in the ionospheric dynamo and other natural sources: similar trends appear in the magnetic record from the Botanical Garden as well as Intermagnet data from a magnetometer located in the nearby city of Fresno \cite{chapman:1962,kerridge:2001}. Nevertheless, the contributions of human activity to these nighttime fluctuations remain poorly understood.\\
Only recently have investigators been able to accumulate and study the increasing quantity of spatially and temporally granular data that characterize the evolving state of a city. These data include information from social networks (e.g., Twitter), financial transactions, transportation systems, environmental markers (pollution, temperature, etc.) and a wide range of other physical quantities. Passive observations of cities not only serve as a means of quantifying urban functioning for the purpose of characterizing cities as complex systems of study, but they can also yield tremendous benefit to city agencies and policy makers.\\
Recent work has shown that broadband visible observations at night can identify patterns of light that can be used to measure total energy consumption and public health effects of light pollution on circadian rhythms \cite{dobler:2015}. High frequency (${\sim}120$\,Hz) measurements of urban lighting allow for phase-change detection of fluorescent and incandescent flicker, which may be used as a predictor for power outages \cite{bianco:2016}. In addition, infrared hyperspectral observations can determine the molecular content of pollution plumes produced by building energy consumption, providing a powerful method for environmental monitoring of cities \cite{ghandehari:2017}.\\
Here we report on the development of a synchronized magnetometer array in Berkeley, California for the purpose of studying the signature of the urban magnetic field over a range of spatiotemporal scales. The array, currently consisting of four magnetometers operating at a 3960\,Hz sample rate, will make sustained and continued measurements of the urban field over years, observing the city's dynamic magnetic signature. Through systematically observing the magnetic signatures of a city, we hope to complement advances made elsewhere in urban informatics and applied physics to provide a deeper understanding of urban magnetic noise for researchers in geophysics and earth science.\\
In Sec.~\ref{development} we briefly describe the components and performance of the hardware and software implemented in our magnetometer array. We emphasize our preference for commercially available hardware and advanced timing algorithms and utilize techniques similar to those of the Global Network of Optical Magnetometers for Exotic physics searches (GNOME) project \cite{pustelny:2013}.\\
The analysis of our data is presented in Sec.~\ref{signatures}. In Sec.~\ref{singleobservations} we present the signatures of several common urban sources and observe the signature of a lightning strike. Station data are analyzed and compared in Sec.~\ref{multistation}. Clear variations between weekday and weekend, day and night, and distance from BART tracks are observed. Measurements are examined with wavelets in Sec.~\ref{correlation}. Initial results of correlating station data are presented. In Sec.~\ref{extraction}, we present an initial method to isolate the BART signal. Conclusions and directions for future research are presented in Sec.~\ref{discussion}.
\section{Instrumentation, Hardware, and Data Acquisition}
\label{development}
\subsection{Description of the system}
\begin{figure*}[ht]
\centering
\includegraphics[width=\textwidth]{img_map}
\caption{Map of Berkeley and location of the stations. The colored pins identify the locations of the magnetometers in our network. The black line shows the different paths of the BART trains. The shortest distance from each magnetometer station to the nearby BART line is represented by the dashed lines. Stations 1 to 4 are respectively located 1000, 130, 2000 and 360 meters from the closest BART line.}
\label{map}
\end{figure*}
Our magnetometer network consists of several spatially separated magnetometers with high-precision timing for correlation analysis. Figure \ref{map} shows a map of Berkeley with the sensor locations of the network. Each station consists of a commercially available fluxgate magnetometer, a general-purpose laptop computer, and an inexpensive GPS receiver. Our approach avoids bulky and expensive hardware, favoring consumer components wherever possible. As a result, we reduce the cost of the acquisition system and enable portability through battery operation. However, achieving the desired timing precision (${\sim}100$\,\uS{}) with affordable commercial hardware requires implementing a customized timing algorithm.\\
\begin{figure}[h]
\centering
\includegraphics[width=\columnwidth]{img_schematic}
\caption{A schematic of a magnetometer station. The computer (PC) retrieves the magnetic field vector value continuously from the data acquisition system (DAQ) that reads out the voltage on the sensor. The timing data are provided by a GPS receiver, with a dedicated 1 pulse-per-second (PPS) output through a Serial IO RS232-to-USB converter. The acquired data are uploaded to a shared Google Drive folder.}
\label{exp_setup}
\end{figure}
A schematic of a single magnetometer is presented in Fig. \ref{exp_setup}. Each setup is controlled by a computer (PC, ASUS X200M) running the Windows 10 operating system (OS). The PC acquires magnetic field data from the DAQ (Biomed eMains Universal Serial Bus, USB, 24\,bit) and timing data from the GPS receiver (Garmin 18x LVC). The DAQ continuously samples the fluxgate magnetometer (MAG, Biomed eFM3A) at a rate of 3960 sample/s. Absolute timing data are provided once per second by the GPS receiver, which is connected through a powered high-speed RS232-to-USB converter (SIO-U232-59). The GPS pulse-per-second signal, with 1\uS{} accuracy, is routed to the computer through the carrier detect (CD) pin of the RS232 converter. Data from the DAQ arrive in packets of 138 vector samples approximately every 35\,ms. As the data are received, they are recorded together with the GPS information and the computer system clock. Data are uploaded via wireless internet to a shared Google Drive folder.
\subsection{Time synchronization and filtering}
\label{timing}
Time intervals between the GPS updates are measured by the computer system clock (performance counter), which runs at 2533200\,Hz. A linear fit model is used to determine the absolute system time relative to the GPS. Only the GPS timing data from the last 120 seconds are used to determine linear fit parameters. When a magnetic-field data packet is received, the acquisition system immediately records the performance counter value. The packet time-tag is determined from interpolating the linear fit GPS time to the performance counter value. Typical jitter of the inferred time stamps is 120\uS{} and is limited by the USB latency \cite{Korver:2003}.\\
Some of the data packets cannot be processed immediately due to OS limitations. These packets are delayed by up to several milliseconds before being delivered to the data-acquisition software. The number of the delayed packets depends on the system load. During normal operation of a magnetometer, the average fraction of delayed packets is about $3\%$. The intervals between both GPS and packet arrival times are measured with the performance counter in order to identify the delayed packets. Any GPS data that arrive more than 50\uS{} late are discarded from the linear-fit model. When a magnetometer data packet arrives with a 200\uS{} or greater discrepancy from the expected time, its time stamp is replaced with the expected arrival time, which is inferred from the linear fit. Our time filtering algorithm and data acquisition software are publicly available on GitHub \footnote{\url{https://github.com/lenazh/UrbanMagnetometer}}.
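The filtering scheme described above can be summarized in a short sketch (the function names and constants below are our own illustration; the actual acquisition software is available at the GitHub link above):

```python
import numpy as np

COUNTER_HZ = 2_533_200   # nominal performance-counter rate from the text
PACKET_TOL_S = 200e-6    # packets off by >= 200 us get a model time stamp

def fit_gps_model(counter_ticks, gps_seconds):
    """Least-squares line mapping performance-counter ticks to absolute
    GPS time, using only fixes from the recent fit window."""
    slope, intercept = np.polyfit(counter_ticks, gps_seconds, 1)
    return slope, intercept

def stamp_packet(counter_tick, measured_time, model):
    """Time-stamp a data packet, falling back to the linear model when
    the measured arrival time is implausibly late (OS scheduling delay)."""
    slope, intercept = model
    expected = slope * counter_tick + intercept
    if abs(measured_time - expected) >= PACKET_TOL_S:
        return expected   # delayed packet: use the interpolated time
    return measured_time
```

In the real system the fit window slides (only the last 120 s of GPS fixes enter the fit), and late GPS pulses are likewise excluded before fitting.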
\subsection{Performance characterization}
\begin{figure*}[ht]
\centering
\includegraphics[width=\textwidth]{img_timing}
\caption{Characterization measurement of the time synchronization algorithm. The four magnetometers are subjected to the same square-wave-modulated magnetic field. The left column shows the measurement with the unprocessed time information. The right column shows the same data with the corrected time. (a) Time series of the four magnetometer traces. The inset shows a time discrepancy of magnetometer 4 at the zero crossing. (b) The difference of the mean zero crossing time of the square wave and the individual zero-crossing time for each magnetometer for two minutes of data. The data show a spread and delays of up to 10\,ms. (c) Histogram of the data in (b), with the red curve representing the best-fit Gaussian. While most zero-crossing events have the correct timing within 120\uS{} (standard deviation of the Gaussian), there are a significant number of outliers. (d), (e) and (f) show the same data after implementing the time synchronization algorithm.}
\label{Figure3}
\end{figure*}
In order to characterize the performance of the data acquisition system, we apply a reference signal simultaneously to the sensors. All four sensors are placed together into a single Helmholtz coil system driven by a pulse generator. The amplitude of the pulses is 2\,$\mu$T, the period is $200$\,ms, and the duty cycle is $50\%$ (Fig. \ref{Figure3}a). The top and bottom rows in Fig. \ref{Figure3} represent the data before and after application of the timing-correction algorithm. The inset in Fig. \ref{Figure3}a demonstrates how a delay in retrieving a data packet disrupts the timing of the field pulses. When a data packet is delayed, the magnetic-field samples are distributed over a slightly larger time period, which affects the estimated time of the field change. In Fig. \ref{Figure3}a, the interpolated time of the falling edge from sensor four is several milliseconds after the field changed, causing the zero crossing of the square wave to shift in time relative to the other sensors. Figure \ref{Figure3}b shows the time discrepancies between the interpolated and expected square-wave zero crossings. The zero crossings are recorded with 120\uS{} standard deviation from the expected interval; however, there are a large number of outliers with up to a 10\,ms discrepancy. Figure \ref{Figure3}c shows the histogram of the data from Fig. \ref{Figure3}b, where the red curve represents the best-fit Gaussian. After the timing-correction algorithm is applied to the data, time stamps associated with outlier packets are replaced with the expected arrival times (Fig. \ref{Figure3}d,e,f). The remaining jitter is Gaussian distributed with a standard deviation of 120\uS{}.
\subsection{Instrumental noise floor}
Figure \ref{noisefloor} shows the instrumental noise floors for each vector axis of a Biomed magnetometer. Data were obtained in a two-layer $\mu$-metal shield for approximately 35 minutes ($2^{23}$ samples at 3960 sample/s). Data were separated into an ensemble of 64 individual intervals of $2^{17}$ samples. Power spectral densities were calculated for each interval, with the noise floor taken as the ensemble average of the interval spectra. The noise floor varies between individual axes, with the most noise observed on the Z-axis. For all three directions, the noise floor is constant between $\approx$ 2\,Hz and $\approx$ 700\,Hz. Narrowband spectral features at 60\,Hz and its harmonics are easily observed in the data. For frequencies above 1\,Hz, the noise floor is uniformly below 0.1 $\textrm{nT}/\sqrt{\textrm{Hz}}$. The peak observed in the noise floor, predominantly in the Z and Y channels, between ${\sim}$300 and ${\sim}$1500\,Hz, is likely due to the operation of the fluxgate electronics.
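The ensemble-averaging procedure amounts to the following sketch (the function name and normalization conventions are ours; a library routine such as \texttt{scipy.signal.welch} with a boxcar window and no overlap should give an equivalent estimate):

```python
import numpy as np

FS = 3960      # sample rate in Hz
SEG = 2**17    # samples per interval, as in the text

def noise_floor(x, fs=FS, seg=SEG):
    """One-sided PSD, ensemble-averaged over non-overlapping intervals."""
    n_seg = len(x) // seg
    segs = x[:n_seg * seg].reshape(n_seg, seg)
    spec = np.fft.rfft(segs, axis=1)
    psd = np.abs(spec) ** 2 / (fs * seg)   # units of x^2 / Hz
    psd[:, 1:-1] *= 2                      # fold negative frequencies
    return np.fft.rfftfreq(seg, d=1 / fs), psd.mean(axis=0)
```

By Parseval's theorem the integral of the returned PSD over frequency recovers the mean power of the record, which is a convenient sanity check on the normalization.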
\begin{figure*}[ht]
\centering
\includegraphics[width=0.9\textwidth]{img_noise_floor}
\caption{Instrumental noise floors for each vector axis of a Biomed magnetometer.}
\label{noisefloor}
\end{figure*}
\section{Data Analysis}
\label{signatures}
\subsection{Observations of urban magnetic signatures}
\label{singleobservations}
The portability of our sensors has enabled the direct measurement of several urban field sources. Figure \ref{Figure4} shows the magnetic signatures of several field sources associated with transportation: traffic on a freeway, as well as both Amtrak and BART trains. Figure \ref{Figure5} shows a spike in the magnetic field due to a lightning strike recorded by three geographically distributed sensors. This lightning strike, observed before the implementation of the timing algorithm, highlights the need for time-corrected data: e.g. interpreting the time lag between sensors as a physical transit time between stations corresponds to unreasonable spatial separations of ${\sim}15,000$\,km. Unfortunately, no further lightning strikes have been captured since implementation of the timing algorithm; the synchronous occurrence of lightning in our network will eventually provide an important test of our timing algorithm.
\begin{figure*}[ht]
\centering
\includegraphics[width=\textwidth]{img_examples2}
\caption{Examples of urban magnetic sources. (a) Scalar magnitude field measurements from a sensor placed 5\,m from a highway. (b) Signature of a large truck passing on the highway [first peak in (a)]. (c) Magnetometer placed close to the tracks of a passing Amtrak train. The scalar magnitude field is shown in black with the vector components (DC removed) displayed in color. (d) Data taken at the El Cerrito BART station on the side of southbound trains.}
\label{Figure4}
\end{figure*}
\begin{figure*}[ht]
\centering
\includegraphics[width=\textwidth]{img_lightning}
\caption{Magnetic anomaly detection. Observations from three geographically distributed magnetometers of a single lightning strike. At the time of the strike, the timing issues described in Section \ref{timing} had not been resolved. Magnetometer traces from each station were shifted to align the spikes, demonstrating the need for a precision timing algorithm. Looking at the apparent frequency change in the 60\,Hz power-line signature of the yellow trace, we infer that timing errors occurred in this sensor just before the lightning strike. The blue trace was shifted up by 2\,$\mu$T for plotting purposes. Station 1 served as an engineering unit and was not part of the ongoing observations.}
\label{Figure5}
\end{figure*}
\subsection{Multi-station analysis of magnetic field data}
\label{multistation}
Fraser-Smith et al. (1978) report the presence of strong ULF magnetic fluctuations throughout the San Francisco Bay Area in the 0.022 to 5\,Hz range \cite{fraser-smith:1978}. The dependence of these fluctuations on proximity to the BART lines, and their correspondence with the train timetable, led the authors to attribute this ULF signal to currents in the BART rail system. Subsequent ULF measurements made in \cite{ho:1979} at a distance of ${\sim}100$\,m from BART suggested periodic bursts of magnetic field at roughly the periodicity of the BART trains. The location of our network stations (up to 2\,km from the BART line) and the length of the recorded intervals ensure that urban signatures, such as BART, will be present in our data.\\
The timing algorithm provides sub-millisecond resolution for MAD; however, many magnetic signatures related to anthropogenic activity occur at low frequencies where GPS alone provides adequate timing. To investigate urban magnetic fluctuations, we decimate the full 3960 sample/s data to a 1 sample/s cadence. Antialiasing is accomplished with a moving average (boxcar) filter. Low-cadence data provide adequate resolution for correlating multi-station observations, while simultaneously removing the 60\,Hz power signal. We only use data from Stations 2, 3 and 4 in multi-station comparisons. Station 1 served mainly as an engineering unit for characterization measurements and other testing.\\
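The decimation step amounts to block averaging; a minimal sketch (the function name is our own), assuming the averaging window divides evenly into the record:

```python
import numpy as np

def decimate_boxcar(x, fs_in=3960, fs_out=1):
    """Average non-overlapping windows of fs_in/fs_out samples
    (boxcar anti-alias filter followed by downsampling)."""
    n = fs_in // fs_out
    n_out = len(x) // n
    return x[:n_out * n].reshape(n_out, n).mean(axis=1)
```

Each one-second window contains an integer number of 60\,Hz cycles, so the power-line signal averages to zero exactly.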
\begin{figure*}[p]
\centering
\includegraphics[width=0.85\textwidth]{img_time_series}
\caption{Magnetic field magnitudes for three stations at 1 sample/s cadence on 03-20-2016. Panel (a) shows close agreement between the USGS station in Fresno and sensor 3, demonstrating our stations' sensitivity to geomagnetic fields. Panel (b) shows the 24-hour scalar magnitudes (DC values were adjusted for plotting purposes) for all three active sensors; the magnitude of fluctuations decreases with increasing distance from each sensor to the BART train line. The grey bar represents one hour of data from 10-11\,AM (PDT), a better visualization of that region can be seen in panel (c). The 24-hour scalar field averages have been removed from each time-series. The one-minute averaged USGS geomagnetic field data are shown in each plot.}
\label{Figure6}
\end{figure*}
Figure \ref{Figure6} shows the scalar magnitude fluctuations of three stations on Sunday, 03/20/2016 (PDT). One-day scalar average magnitudes are subtracted from the total magnitude. Each panel additionally shows geomagnetic field data (one-minute averaged) acquired from the United States Geological Survey (USGS) station in Fresno, CA. Though panel (a) demonstrates a general agreement between our record and the USGS data, consistent large fluctuations in excess of the geomagnetic field are observed in each sensor. Panel (b) shows that these non-geomagnetic fluctuations dominate the daytime magnetic field. Additionally, panel (c) shows a subset of the data from 10-11\,AM, revealing several synchronous spikes in each sensor.\\
A magnetically quiet period corresponding to BART non-operating hours \footnote{\url{https://www.bart.gov/}} is evident in the records of the three active sensors (stations 2-4). Figure \ref{Figure7} shows the distribution of magnitude fluctuations binned at $10^{-3}$\,nT. The record naturally separates into two intervals, corresponding to the hours when BART trains are running (roughly, from 7:55AM to 1:26AM) and hours when BART trains are inactive. We refer to these intervals as ``daytime'' and ``nighttime'', respectively. The magnetic measurements reveal characteristically different distribution functions in daytime and nighttime. For each sensor, the nighttime distribution functions appear as a superposition of several individual peaks. Standard deviations, $\sigma_i$, for the nighttime distribution functions of the three active sensors are given by $\sigma_{2}=0.072\,\mu$T, $\sigma_{3}=0.009\,\mu$T, and $\sigma_{4}=0.026\,\mu$T.\\
\begin{figure*}[ht]
\centering
\includegraphics[width=0.75\textwidth]{img_distribution}
\caption{Distributions for 24-hour, daytime, and nighttime magnetic field observations. The nighttime observations show several discrete peaks, while the daytime fluctuations follow broad distributions. The variance of the distributions increases as the distance from the BART train line decreases.}
\label{Figure7}
\end{figure*}
Figure \ref{Figure8} shows that the time-localized variance of magnetic field fluctuations, calculated in a 40 minute sliding-window, is significantly smaller than the variance calculated for the full nighttime interval. This indicates that the discrete peaks observed in the nighttime distribution functions are localized in time, while transitions in the DC field magnitude cause the appearance of several distinct peaks. These transitions in the DC field are evident as peaks in the sliding-window variance. The simultaneous occurrence of transitions in the DC field magnitude, evidenced by simultaneous peaks in the sliding window variance, suggests that these transitions are global phenomena, perhaps relating to nighttime maintenance on the BART line or activity in the ionosphere.\\
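The sliding-window variance can be computed efficiently with cumulative sums; a sketch under our own conventions (a window of 2400 samples corresponds to 40 minutes at 1 sample/s):

```python
import numpy as np

def sliding_variance(x, window=2400):
    """Variance in a sliding window (2400 samples = 40 min at 1 sample/s),
    via the identity var = E[x^2] - E[x]^2 over each window."""
    c1 = np.cumsum(np.insert(x, 0, 0.0))
    c2 = np.cumsum(np.insert(x**2, 0, 0.0))
    m1 = (c1[window:] - c1[:-window]) / window
    m2 = (c2[window:] - c2[:-window]) / window
    return m2 - m1**2
```

Because the scalar field $|B|$ has a large DC value (${\sim}50\,\mu$T) compared with its fluctuations, the record should be mean-subtracted before applying this identity, to avoid catastrophic cancellation in floating point.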
\begin{figure*}[ht]
\centering
\includegraphics[width=\textwidth]{img_variance}
\caption{Magnetic field variance calculated over a 40-minute sliding window for all three sensors. Vertical lines mark the night interval when BART is inactive. Strong, discontinuous, transitions in the DC field amplitude, observed as peaks in the sliding window variance, cause the variance calculated over the entire night interval to be significantly larger than the time-local variance.}
\label{Figure8}
\end{figure*}
The two vertical bars in Fig. \ref{Figure8} mark the period during which BART is inactive. Increased field variance, i.e. signal power, is clearly tied to the operation of BART. Figure \ref{Figure8} also highlights the relatively constant daytime variance at each station. Additionally, the ratios between the daytime variances (powers) observed by the sensors appear approximately constant. This suggests that the background noise level observed at each station is set by its distance to BART. Identifying other anthropogenic fields (for example, traffic) is complicated by the large BART signal. Accordingly, identification of further urban signals requires a thorough characterization of the magnetic background generated by BART.
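The time-localized variance described above can be computed efficiently with cumulative sums; a minimal numpy sketch (40 minutes = 2400 samples at the 1 sample/s cadence used here):

```python
import numpy as np

def sliding_variance(b, window):
    """Time-localized variance of b over a sliding window of `window` samples
    (2400 samples = 40 min at 1 sample/s), computed via cumulative sums so the
    cost is O(n) rather than O(n * window)."""
    c1 = np.cumsum(np.insert(b, 0, 0.0))        # running sum of b
    c2 = np.cumsum(np.insert(b**2, 0, 0.0))     # running sum of b^2
    m1 = (c1[window:] - c1[:-window]) / window  # windowed mean
    m2 = (c2[window:] - c2[:-window]) / window  # windowed mean square
    return m2 - m1**2
```

For a record containing a single discontinuous DC transition, the windowed variance is near zero away from the transition and peaks only where the window straddles it, while the variance over the whole record stays large; this is exactly the contrast shown in Fig. \ref{Figure8}.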
\subsection{Time-Frequency (Wavelet) Analysis}
\label{correlation}
Frequency-domain analysis is typically used to reveal the spectral composition of a magnetic time series. Localizing the distribution of spectral power in time requires simultaneous analysis in both time and frequency domains. We implement a continuous wavelet transform (CWT) using Morlet wavelets \cite{Torrence:1998} to investigate the time-frequency distribution of low-frequency fluctuations associated with BART. The unnormalized Morlet wavelet function $\psi(\tau)$ is a Gaussian-modulated complex exponential,
\begin{equation}
\psi (\tau)= \pi^{-1/4}e^{i\omega_0\tau}e^{-\tau^{2}/2},
\end{equation}
\noindent with non-dimensional time and frequency parameters $\tau$ and $\omega_0$. A value of $\omega_0=6$ meets the admissibility conditions prescribed in Ref.\ \cite{Farge:1992} and is commonly used across disciplines \cite{Podesta:2009,Torrence:1998}. At each time step, the CWT of the magnetic field $B$ is defined by the convolution of the time series record with a set of scaled wavelets,
\begin{equation}
W(s,t)=\sum_{i=0}^{N-1}B(t_i)\,\psi^{*}\!\left(\frac{t_i-t}{s}\right),
\end{equation}
which are normalized to maintain unit energy at each scale. The CWT provides a scale independent analysis of time-localized signals, and is additionally insensitive to time series with variable averages (non-stationary signals). These qualities provide some advantage over alternative time-frequency analysis techniques, such as the windowed Fourier transform, which calculates the spectral power density in a sliding window applied to the time series. Introducing a window imposes a preferred scale which can complicate analysis of a signal's spectral composition. For example, low-frequency components, with periods longer than the sliding window scale, are aliased into the range of frequencies allowed by the window, thereby degrading the estimate of spectral density.\\
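A minimal numpy sketch of this transform follows. It evaluates each scale by direct time-domain convolution (the even Gaussian envelope of the Morlet wavelet lets the correlation be written as a convolution); production implementations such as Torrence \& Compo's instead evaluate each scale with FFTs. The scale grid and sampling step are illustrative assumptions.

```python
import numpy as np

def morlet(tau, w0=6.0):
    """Morlet mother wavelet: pi^{-1/4} exp(i*w0*tau) exp(-tau^2/2)."""
    return np.pi**-0.25 * np.exp(1j * w0 * tau) * np.exp(-tau**2 / 2)

def cwt(b, scales, dt=1.0, w0=6.0):
    """W(s,t): per-scale convolution of b with the scaled wavelet,
    normalized to unit energy at each scale (factor sqrt(dt/s))."""
    n = len(b)
    t = (np.arange(n) - n // 2) * dt      # wavelet support centered at zero
    W = np.empty((len(scales), n), dtype=complex)
    for k, s in enumerate(scales):
        psi = np.sqrt(dt / s) * morlet(t / s, w0)
        W[k] = np.convolve(b, psi, mode='same')
    return W
```

For a pure sinusoid, the wavelet power $|W(s,t)|^2$ peaks near the scale $s \approx \omega_0/\omega$, which is how the 20-minute BART period appears as a band in the spectrograms below.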
Full-day wavelet power spectral densities $|W(s,t)|^2$ ($\mu\textrm{T}^2/\textrm{Hz}$), computed from 1 sample/s data for stations 3 and 4 on Sunday, 03/20/2016 (PDT), are displayed in Fig. \ref{Figure9}. These spectrograms prominently display the quiet nighttime period. Additionally, strong power is observed at several scales corresponding to a fundamental 20-minute period ($8.33 \times 10^{-4}\ \textrm{Hz}$) and the associated higher harmonics. This 20-minute period coincides with the Sunday BART timetable on the geographically closest BART line (Richmond-Fremont). The black lines display the region where boundary effects are likely\ --\ this region, known as the cone of influence (COI), corresponds to the $e$-folding time of the wavelet response to an impulse function. \\
\begin{figure*}[ht]
\centering
\includegraphics[width=0.85\textwidth]{img_wavelet_stations_bis}
\caption{Time-frequency analysis of scalar magnitude magnetometer data. Continuous wavelet transform power spectral densities ($\mu$T$^2$/Hz) for Magnetometer 3 (a) and Magnetometer 4 (b). The spectrograms reveal power in several bands between $8.3\times10^{-4}$ and $1\times10^{-2}$\,Hz common to both sensors. These frequencies correspond to a 20-minute-period signal and its harmonics. White dashed lines show the frequency range of a brick-wall filter applied to the data.}
\label{Figure9}
\end{figure*}
A brick-wall bandpass filter (i.e., unity gain in the passband, full attenuation in the stop band) is applied to each sensor in the frequency domain between $7 \times 10^{-4}$ and $1 \times 10^{-2}\ \textrm{Hz}$ in order to isolate the bands of power observed in the wavelet power spectra. The top panel of Fig. \ref{Figure10} shows the bandpassed time series for 10-11\,AM on 3/20/2016. The bandpassed time series are normalized to their maximum values for the purpose of visualization. This plot immediately suggests that stations 3 and 4 are highly correlated, while station 2 is anti-correlated with stations 3 and 4. Indeed, this is verified by the bottom panel of Fig. \ref{Figure10}, which shows the cross correlation coefficients $C_{ij}(\tau)$ calculated for the 24 hour time series:
\begin{equation}
C_{ij}(\tau)=\frac{\sum_{n=0}^{N-\tau-1}(B_i[n+\tau]-\bar{B_i})(B_j[n]-\bar{B_j})}{\sqrt{\left[{\sum_{n=0}^{N-1}(B_i[n]-\bar{B_i})^2}\right]\left[{\sum_{n=0}^{N-1} (B_j[n]-\bar{B_j})^2}\right]}},
\end{equation}
\noindent where $N$ is the record length, $\tau$ is a translation between time series, and $n$ is the sample index \cite{Bendat:1990}. It is clear that stations 3 and 4 are in phase, while station 2 is out of phase with the other two instruments. These phase relationships, which correspond to the geographical locations of the sensors on the east/west sides of the BART line (stations 3 and 4 are located east of the rails, while station 2 is located to the west, c.f. Fig. \ref{map}), suggest that the magnetic field generated by BART has a strong azimuthal symmetry around the BART rail. Future work will look to determine the multipole components (e.g. line current, dipole, and higher-order terms) corresponding to the BART field.\\
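Both processing steps of this subsection can be sketched in a few lines of numpy; this is an illustration under the stated band edges, not the authors' code.

```python
import numpy as np

def brickwall_bandpass(b, fs, f_lo=7e-4, f_hi=1e-2):
    """Brick-wall bandpass: unity gain in [f_lo, f_hi] Hz, zero elsewhere,
    applied by zeroing FFT bins outside the passband."""
    B = np.fft.rfft(b - b.mean())
    f = np.fft.rfftfreq(len(b), d=1.0 / fs)
    B[(f < f_lo) | (f > f_hi)] = 0.0
    return np.fft.irfft(B, n=len(b))

def cross_correlation(bi, bj, lags):
    """Normalized cross-correlation C_ij(tau) as in the equation above,
    evaluated at the given non-negative lags."""
    bi = bi - bi.mean()
    bj = bj - bj.mean()
    denom = np.sqrt(np.sum(bi**2) * np.sum(bj**2))
    N = len(bi)
    return np.array([np.sum(bi[tau:N] * bj[:N - tau]) / denom for tau in lags])
```

In-phase records give $C_{ij}(0)$ near $+1$ and anti-correlated records near $-1$, which is the diagnostic used to separate the east-side stations (3 and 4) from the west-side station (2).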
\begin{figure*}[ht]
\centering
\includegraphics[width=0.73\textwidth]{img_correlation}
\caption{(a) Brickwall-bandpassed ($7 \times 10^{-4}$ to $1 \times 10^{-2}$\,Hz) time series for all three sensors on 03-20-2016 between 10-11\,AM (PDT). (b) Correlation coefficients between pairs of sensors as a function of lag. Sensors 3 and 4 are highly correlated (in phase), while sensor 2 is anti-correlated (out of phase) with the others. There is a 20-minute periodicity to the data, consistent with the powerbands observed in the wavelet spectra. This 20-minute signal coincides with the published Sunday/Holiday BART schedule for the Fremont/Richmond train line.}
\label{Figure10}
\end{figure*}
Figure \ref{Figure11} shows full day wavelet spectral densities for station 2 on both Wednesday, 03/16/2016 and Sunday, 03/20/2016. The quiet BART night is much shorter on Wednesday; this corresponds with the different weekend and weekday timetables. Additionally, Fig. \ref{Figure11} demonstrates the absence of a strong 20-minute period in the Wednesday data, instead revealing more complex spectral signatures. The increase in complexity observed in the weekday spectrogram is most certainly due to increases in train frequency, the addition of another active BART train, and variability associated with commuter hours. In our future work we will fully explore the effect of train variability on the urban magnetic field using correlated observations from the entire network.\\
The measurements of \cite{ho:1979} made between 0.001 and 4\,Hz, at a range of several hundred meters from the BART rail, captured bursts of magnetic field corresponding approximately to the train schedule, but with an irregular variation in the waveform of the magnetic field. In contrast, our observations reveal a highly regular signature, with multiple spectral components, occurring at the BART train period. The repeatability of this signature, observed coherently in the three deployed sensors, enables identification and extraction of the periodic waveform associated with BART operation.
\begin{figure*}[ht]
\centering
\includegraphics[width=0.85\textwidth]{img_wavelet_days_bis}
\caption{BART night signatures. Continuous wavelet transform of 1 sample/s magnetic field magnitude data from Station 2 on (a) 3/16/2016 and (b) 3/20/2016. The nighttime signature is significantly shorter in the data taken on Wednesday; this corresponds with BART operating hours. Additionally, the strong powerbands observed on Sunday are not present in the Wednesday data.}
\label{Figure11}
\end{figure*}
\section{Extracting the BART signal}
\label{extraction}
\begin{figure*}[ht]
\centering
\includegraphics[width=0.73\textwidth]{img_bart_extraction2}
\caption{Extraction of BART signal. (a) 20-minute periodic signal of the BART extracted from an ensemble average of 46 intervals from station 2. (b) Comparison of the extracted average signal with an hour of observations from station 2 taken from 9-10\,AM (PDT). (c) Comparison of the extracted average signal with an interval containing a local magnetic anomaly. (d) Comparison of the extracted average signal with an interval containing a global magnetic anomaly likely due to variation in BART operation.}
\label{bartextract}
\end{figure*}
Liu \& Fraser-Smith \cite{liu:1998} attempted to extract features associated with BART using wavelets to identify transient features in a geomagnetic time series. Our observation of a periodic BART signature enables statistical averaging over the observed period in order to extract features related to BART. The periodic 20-minute signature observed in the Sunday 03/20/2016 time series from station 2 can be extracted using the technique known as superposed epoch analysis \cite{singh:2006}. We identified 46 sharp peaks in the magnetic field occurring with an approximately 20-minute period (e.g. Fig. \ref{Figure6}). From these 46 peaks, an ensemble $\{X_i(t)\}$ of intervals is constructed comprising the 3 minutes preceding and 17 minutes succeeding each individual peak. Averaging over the ensemble of intervals, $\bar{X}(t)=\frac{1}{46}\sum_i X_i(t),$ reveals a coherent signature with an approximate 20-minute period, Fig. \ref{bartextract}(a). The periodic signal observed in the data has the form of a sharp discrete peak of $\approx$1\,nT, followed by an oscillation with a period on the order of several minutes. A quantitative comparison is obtained by computing the Pearson correlation
\begin{equation}
\rho_i=\frac{\mathrm{cov}(X_i,\bar{X})}{\sigma_{X_i}\sigma_{\bar{X}}}
\end{equation}
of the extracted signal with each interval in the ensemble. The mean correlation between the extracted signal and the observed data is $\bar{\rho}=0.7$, with $\rho_i$ ranging from 0.1 to 0.85. We can interpret these values as the fraction of power in each interval derived from the average signature. Figure \ref{bartextract}(b) demonstrates the high correlation between the extracted average signal and an hour of observations taken from 9-10\,AM (PDT). By extracting periodic magnetic signatures of BART, we enable the identification of transient events associated with BART operation as well as other urban phenomena. Figure~\ref{bartextract}(c) shows a local magnetic anomaly occurring at 12\,PM (PDT) in Station 2; in this case $\rho=0.67.$ The traces from the other stations suggest that the event observed in station 2 is a local anomaly, not associated with BART. Additionally, Figure \ref{bartextract}(d) demonstrates an interval of data (with $\rho=0.17$) which includes a global transient feature, likely due to some variation in the BART system. Measurements from a single sensor allow us to identify events which deviate from the correlated periodic observations; our future work will employ the full network of magnetometers to identify correlated signals in both space and time, allowing for an extraction of the magnetic field local to each sensor from the global field dominated by the BART signal.
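The superposed epoch average and the per-interval Pearson correlation can be sketched as follows; the function names and the synthetic peak template in the usage note are illustrative assumptions.

```python
import numpy as np

def superposed_epoch(b, peaks, pre, post):
    """Ensemble of intervals around each detected peak, plus their average.

    pre/post are in samples (180 s before and 1020 s after each peak at
    1 sample/s, matching the 3 min / 17 min windows in the text)."""
    segs = [b[p - pre:p + post] for p in peaks
            if p - pre >= 0 and p + post <= len(b)]
    X = np.array(segs)
    return X, X.mean(axis=0)

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length intervals."""
    x = x - x.mean()
    y = y - y.mean()
    return np.sum(x * y) / np.sqrt(np.sum(x**2) * np.sum(y**2))
```

With a repeating pulse buried in noise, the ensemble average recovers the pulse shape, and `pearson(X[i], avg)` plays the role of $\rho_i$ above: intervals dominated by the periodic signature score high, while intervals containing anomalies score low.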
\section{Discussion}
\label{discussion}
An array of four magnetometers has been developed with a bandwidth of DC--kHz and sensitivity better than 0.1\,nT/$\sqrt{\textrm{Hz}}$. The array is currently deployed in the area surrounding Berkeley, CA, providing measurements of an urban magnetic field. This array is sensitive to both natural magnetic activity, such as lightning and the low-frequency variations of the Earth's geomagnetic field, as well as a variety of anthropogenic sources: currents associated with BART, traffic, and 60\,Hz powerlines. The operation of BART dominates the urban magnetic field, generating broadband noise. In addition to this broadband noise, the network has identified the presence of coherent narrowband spectral features originating from BART. Significant variation in the spectral features is observed between weekends and weekdays, corresponding to variations in the BART train schedule. During the hours in which BART is non-operational, the anthropogenically generated fields are significantly decreased and agreement with the USGS magnetic field measurements is observed. However, the nighttime field still contains a number of features not attributable to geophysical activity. Further study is required to determine the nature and sources of these features.\\
Cross-correlating the sensors at high frequencies requires a high-precision timing algorithm to combine the absolute time, acquired through GPS, with the high-precision computer performance clock local to each station. This algorithm additionally corrects for latency issues associated with the USB interface between the data-acquisition hardware and the laptop operating system. The timing algorithm has been tested using magnetic fields generated by Helmholtz coils. We intend to use the impulsive, globally observable fields generated by lightning to further test it. Our high-precision timing will allow for magnetic anomaly detection with timing resolution on the order of $\approx$100\,$\mu$s.\\
This paper presented a proof-of-concept deployment of what, to our knowledge, is the first synchronized network of magnetometers specifically designed for observing the effects of human activity on the magnetic field in an urban environment. Numerous potential applications and directions for future work have emerged. Algorithms to remove the BART signal (or any other dominant signal whose source has been identified) must be further developed. These algorithms may need to take advantage of other data sources (e.g., the realtime BART schedule) and machine learning techniques. The study of high-frequency response (60\,Hz and above) has not yet been pursued. We note that anthropogenic fields mask geophysical field fluctuations, and that study of the latter is facilitated by an understanding of anthropogenic noise. The magnetic fields due to humans may reflect identifiable aspects of urban dynamics (beyond BART), and these may have correlations to other measures of urban life (energy consumption being one of the first to consider). Studies of magnetic field correlations and anomalies may be used to identify and study local phenomena (traffic, elevators, etc.). One spin-off of this research may be improved identification and reduction of anthropogenic noise in geomagnetic measurements located in or near urban environments. The ultimate utility of the magnetometer array as an observational platform for urban systems will only become clear with further studies.
\section{Acknowledgements}
We are grateful to Brian Patton for his contributions in the early stages of the project. The views expressed in the publication are the authors' and do not imply endorsement by the Department of Defense or the National Geospatial-Intelligence Agency.
| 9,203 |
\section{Introduction}
Let $\Omega$ be a domain in ${{\rm I\hspace{-0.2em}R}}^{2}$ with locally Lipschitz boundary and ${\cal O}=(0,0)\in \partial\Omega$
such that $\partial\Omega\setminus \{ {\cal O} \}$ is a $C^{4}$ curve and $\Omega\subset B_{1}\left((0,1)\right),$
where $B_{\delta}\left({\cal N}\right)$ is the open ball in ${\rm I\hspace{-0.2em}R}^{2}$ of radius $\delta$ about ${\cal N}\in {{\rm I\hspace{-0.2em}R}}^{2}.$
Denote the unit exterior normal to $\Omega$ at $(x,y)\in\partial\Omega$ by $\nu(x,y)$ and let polar coordinates relative to ${\cal O}$
be denoted by $r$ and $\theta.$ We shall assume there exists a $\delta^{*}\in (0,2)$ and $\alpha \in \left(0,\frac{\pi}{2}\right)$
such that $\partial \Omega \cap B_{\delta^{*}}({\cal O})$ consists of the line segments
\[
{\partial}^{+}\Omega^{*} = \{(r\cos(\alpha),r\sin(\alpha)):0\le r\le \delta^{*}\}
\]
and
\[
{\partial}^{-}\Omega^{*} = \{(r\cos(-\alpha),r\sin(-\alpha)):0\le r\le \delta^{*}\}.
\]
Set $\Omega^{*} = \Omega \cap B_{\delta^{*}}({\cal O}).$
Let $\gamma:\partial\Omega\to [0,\pi]$ be given. Let $\left(x^{\pm}(s),y^{\pm}(s)\right)$ be arclength parametrizations of
$\partial^{\pm}\Omega$ with $\left(x^{+}(0),y^{+}(0)\right)=\left(x^{-}(0),y^{-}(0)\right)=(0,0)$ and set
$\gamma^{\pm}(s)=\gamma\left(x^{\pm}(s),y^{\pm}(s)\right).$
Consider the capillary problem of finding a function $f\in C^2(\Omega)\cap C^1(\overline{\Omega}\setminus\{{\cal O}\})$
satisfying
\begin{equation}
\label{CAP}
{\rm div}(Tf)=\frac{1}{2}f\ \ {\rm in}\ \Omega
\end{equation}
and
\begin{equation}
\label{CAPBC}
Tf\cdot\nu=\cos\left(\gamma\right)\ {\rm on}\ \partial\Omega\setminus\{{\cal O}\},
\end{equation}
where $Tf=\frac{\nabla f}{\sqrt{1+|\nabla f|^{2}}}.$
We are interested in the existence of the radial limits $Rf(\cdot)$ of a solution $f$ of (\ref{CAP})--(\ref{CAPBC}), where
\[
Rf(\theta)=\lim_{r \rightarrow 0^{+}} f(r\cos \theta, r\sin \theta), \quad -\alpha < \theta < \alpha
\]
and $Rf(\pm \alpha)=\lim_{\partial^{\pm}\Omega^{*}\ni {\bf x} \rightarrow {\cal O}} f({\bf x}), {\bf x} = (x,y)$,
which are the limits of the boundary values of $f$ on the two sides of the corner if these exist.
In \cite{CEL1}, the following is proven:
\begin{prop}
\label{ONE}
Let $f$ be a bounded solution to (\ref{CAP}) satisfying (\ref{CAPBC}) on $\partial^{\pm}\Omega^{*} \setminus \{{\cal O}\}$
which is discontinuous at ${\cal O}.$ If $\alpha > \pi/2$ then $Rf(\theta)$ exists for all $\theta \in (-\alpha,\alpha).$
If $\alpha \le \pi/2$ and there exist constants $\underline{\gamma}^{\, \pm},
\overline{\gamma}^{\, \pm}, 0 \le \underline{\gamma}^{\, \pm} \leq \overline{\gamma}^{\, \pm} \le \pi,$ satisfying
\[
\pi - 2\alpha < \underline{\gamma}^{+} + \underline{\gamma}^{-} \le
\overline{\gamma}^{\, +} + \overline{\gamma}^{\, -} < \; \pi + 2\alpha
\]
so that $\underline{\gamma}^{\pm}\leq \gamma^{\pm}(s) \leq \overline{\gamma}^{\, \pm}$
for all $s, 0<s<s_{0},$ for some $s_{0}$, then again $Rf(\theta)$ exists for
all $\theta \in (-\alpha, \alpha)$.
\end{prop}
\noindent In \cite{LS1}, Lancaster and Siegel proved this theorem with the additional restriction that $\gamma$ be bounded away from
$0$ and $\pi;$ Figure \ref{FigOne} illustrates these cases.
\begin{figure}[ht]
\centering
\includegraphics{Figure_CFrects.pdf}
\caption{The Concus-Finn rectangle (A \& C) with regions ${\cal R}$ (yellow), $D_{2}^{\pm}$ (blue) and $D_{1}^{\pm}$ (green); the
restrictions on $\gamma$ in \cite{LS1} (red region in B) and in \cite{CEL1} (red region in D). \label{FigOne} }
\end{figure}
\noindent In Theorem 3 of \cite{LS1}, Lancaster and Siegel also proved
\begin{prop}
\label{TWO}
Let $\Omega$ be the disk of radius $1$ centered at $(1,0).$
Then there exists a solution to ${\rm div}(Tf) = \frac{1}{2} f$ in $\Omega,$
$|f| \leq 2, f \in C^{2}(\Omega) \cap C^{1}( \overline{\Omega} \setminus \{{\cal O}\}),$
${\cal O} = (0,0),$ such that no radial limits $Rf(\theta)$ exist ($\theta \in [ -\pi/2,\pi/2 ]$).
\end{prop}
\noindent In this case, $\alpha=\frac{\pi}{2};$ if $\gamma$ is bounded away from $0$ and $\pi,$ then Proposition \ref{ONE}
would imply that $Rf(\theta)$ exists for each $\theta \in \left[-\frac{\pi}{2},\frac{\pi}{2}\right]$ and therefore
the contact angle $\gamma = \cos^{-1}\left(Tf\cdot\nu\right)$ in Proposition \ref{TWO} is not bounded away from $0$ and $\pi.$
In our case, the domain $\Omega$ has a convex corner of size $2\alpha$ at ${\cal O}$ and we wish to investigate the question
of whether an example like that in Proposition \ref{TWO} exists in this case when $\gamma$ is bounded away from $0$ and $\pi.$
In terms of the Concus-Finn rectangle, the question is whether, given $\epsilon>0,$ there is a
solution $f\in C^2(\Omega)\cap C^1(\overline{\Omega}\setminus\{ {\cal O} \} )$ of (\ref{CAP})--(\ref{CAPBC}) such that
no radial limits $Rf(\theta)$ exist ($\theta \in [-\alpha,\alpha]$) and $\vert\gamma-\frac{\pi}{2}\vert\le \alpha+\epsilon;$
this is illustrated in Figure \ref{FigTwo}.
\begin{figure}[ht]
\centering
\includegraphics{Figure_NoRadial.pdf}
\caption{The Concus-Finn rectangle. When $\gamma$ remains in red region in E, $Rf(\cdot)$ exists; $\gamma$ in Theorem \ref{THREE}
remains in the red region in F. \label{FigTwo} }
\end{figure}
\begin{thm}
\label{THREE}
For each $\epsilon>0$, there is a domain $\Omega$ as described above and a solution
$f\in C^2(\Omega)\cap C^1(\overline{\Omega}\setminus\{ {\cal O} \} )$ of (\ref{CAP}) such that
the contact angle $\gamma = \cos^{-1}\left(Tf \cdot \nu\right):\partial\Omega\setminus\{ {\cal O}\} \to [0,\pi]$
satisfies $\vert\gamma-\frac{\pi}{2}\vert\le \alpha+\epsilon$ and there exists a sequence $\{r_{j}\}$ in $(0,1)$
with $\lim_{j\to\infty} r_{j}=0$ such that
\[
(-1)^{j}f\left(r_{j},0\right)>1 \ \ \ \ {\rm for \ each \ } j\in{\rm I\hspace{-0.2em}N}.
\]
Assuming $\Omega$ and $\gamma$ are symmetric with respect to the line $\{(x,0):x\in{\rm I\hspace{-0.2em}R}\},$ this implies that
no radial limit
\begin{equation}
\label{Rads}
Rf(\theta) \myeq \lim_{r\downarrow 0} f(r\cos(\theta),r\sin(\theta))
\end{equation}
exists for any $\theta\in[-\alpha,\alpha].$
\end{thm}
\noindent We note that our Theorem is an extension of Theorem 3 of \cite{LS1} to contact angle data in a domain with a convex corner.
As in \cite{Lan:89,LS1}, we first state and prove a localization lemma; this is analogous to the Lemma in \cite{Lan:89} and
Lemma 2 of \cite{LS1}.
\begin{lem}
\label{LEM}
Let $\Omega\subseteq{{\rm I\hspace{-0.2em}R}}^{2}$ be as above, $\epsilon >0,$ $\eta>0$
and $\gamma_{0}:\partial\Omega\setminus\{ {\cal O}\} \to [0,\pi]$
such that $\vert\gamma_{0}-\frac{\pi}{2}\vert\le \alpha+\epsilon.$
For each $\delta\in(0,1)$ and $h\in C^2(\Omega)\cap C^1(\overline{\Omega}\setminus\{{\cal O}\})$ which satisfies (\ref{CAP}) and
(\ref{CAPBC}) with $\gamma=\gamma_{0},$
there exists a solution $g\in C^2(\Omega)\cap C^1(\overline{\Omega}\setminus\{{\cal O}\})$ of (\ref{CAP}) such that
$\lim_{\overline\Omega\ni (x,y)\to (0,0) } g(x,y)=+\infty,$
\begin{equation}
\label{SUP}
\sup_{\Omega_{\delta}} \vert g-h\vert<\eta \ \ \ \ {\rm and} \ \ \ \ \left\vert\gamma_{g}-\frac{\pi}{2}\right\vert\le \alpha+\epsilon,
\end{equation}
where $\Omega_{\delta} = \overline\Omega\setminus B_{\delta}\left({\cal O}\right)$ and
$\gamma_{g}=\cos^{-1}\left(Tg\cdot \nu\right):\partial\Omega\setminus\{{\cal O}\}\to [0,\pi]$ is the contact angle
which the graph of $g$ makes with $\partial\Omega\times{\rm I\hspace{-0.2em}R}.$
\end{lem}
\begin{proof}
Let $\epsilon, \eta, \delta,\Omega,h$ and $\gamma_0$ be given. For $\beta\in(0,\delta)$, let
$g_{\beta}\in C^{2}\left(\Omega)\cap C^{1}(\overline{\Omega}\setminus\{ {\cal O}\}\right)$ satisfy (\ref{CAP}) and (\ref{CAPBC})
with $\gamma=\gamma_{\beta},$ where
\[
\gamma_{\beta}= \left\{ \begin{array}{ccc}
\frac{\pi}{2}-\alpha-\epsilon & {\rm on} & \overline{B_{\beta}\left({\cal O}\right)}\\
\gamma_{0} & {\rm on} & \overline{\Omega}\setminus B_{\beta}\left({\cal O}\right).\\
\end{array}
\right.
\]
As in the proof of Theorem 3 of \cite{LS1}, $g_{\beta}$ converges to $h,$ pointwise and uniformly in the $C^{1}$ norm on
$\overline{\Omega_{\delta}}$ as $\beta$ tends to zero.
Fix $\beta>0$ small enough that $\sup_{\Omega_{\delta}} \vert g_{\beta}-h\vert<\eta.$
Set $\Sigma= \{(r\cos(\theta),r\sin(\theta)): r>0, -\alpha\le \theta \le \alpha \}.$
Now define $w:\Sigma\to {\rm I\hspace{-0.2em}R}$ by
\[
w(r\cos \theta,r\sin\theta) = \frac{\cos\theta-\sqrt{k^{2}-\sin^{2}\theta}}{k\kappa r},
\]
where $k=\sin\alpha \sec\left(\frac{\pi}{2}-\alpha-\epsilon\right) = \sin\alpha \csc(\alpha+\epsilon)$ and $\kappa=\frac{1}{2}$ is the capillary constant in (\ref{CAP}).
As in \cite{CF}, there exists a $\delta_{1}>0$ such that ${\rm div}(Tw)-\frac{1}{2}w\ge 0$ on $\Sigma\cap B_{\delta_{1}}({\cal O}),$
$Tw\cdot\nu=\cos\left(\frac{\pi}{2}-\alpha-\epsilon\right)$ on $\partial\Sigma \cap B_{\delta_{1}}({\cal O}),$ and
$\lim_{r\to 0^{+}} w(r\cos \theta,r\sin\theta) = \infty$ for each $\theta\in[-\alpha,\alpha].$
We may assume $\delta_{1} \le \delta^{*}.$ Let
\[
M=\sup_{\Omega\cap \partial B_{\delta_{1}}({\cal O})} |w-g_{\beta}| \ \ \ {\rm and} \ \ \ w_{\beta}=w-M.
\]
Since ${\rm div}(Tw_{\beta})-\frac{1}{2}w_{\beta}\ge \frac{M}{2}\ge 0={\rm div}(Tg_{\beta})-\frac{1}{2}g_{\beta}$
in $\Omega\cap B_{\delta_{1}}({\cal O}),$ $w_{\beta}\le g_{\beta}$ on $\Omega\cap\partial B_{\delta_{1}}({\cal O})$
and $Tg_{\beta}\cdot\nu\ge Tw_{\beta}\cdot\nu$ on $\partial\Omega\cap B_{\delta_{1}}({\cal O}),$
we see by the comparison principle that $g_{\beta}\ge w_{\beta}$ in $\Omega\cap B_{\delta_{1}}({\cal O}).$
Since $w_{\beta}\to\infty$ at ${\cal O},$ $g=g_{\beta}$ satisfies the conclusions of the lemma.
\end{proof}
\noindent We may now prove Theorem \ref{THREE}.
\begin{proof}
We shall construct a sequence $\{f_{n}\}$ of solutions of (\ref{CAP}) and a sequence $\{r_{n}\}$ of positive real numbers
such that $\lim_{n\to\infty} r_{n}=0,$ $f_{n}(x,y)$ is even in $y$ and
\[
(-1)^{j}f_{n}\left(r_{j},0\right)>1 \ \ \ \ {\rm for \ each \ } j=1,\dots,n.
\]
Let $\gamma_{0}=\frac{\pi}{2}$ and $f_{0}=0.$ Set $\eta_{1}=1$ and $\delta_{1}=\delta_{0}.$
From Lemma \ref{LEM}, there exists a $f_{1}\in C^2(\Omega)\cap C^1(\overline{\Omega}\setminus\{ {\cal O}\})$ which satisfies (\ref{CAP})
such that $\sup_{\Omega_{\delta_{1}}} \vert f_{1}-f_{0}\vert<\eta_{1},$ $\left\vert\gamma_{1}-\frac{\pi}{2}\right\vert\le \alpha+\epsilon$
and $\lim_{\Omega\ni (x,y)\to {\cal O}} f_{1}(x,y)=-\infty,$ where $\gamma_{1}=\cos^{-1}\left(Tf_{1}\cdot \nu \right).$
Then there exists $r_{1}\in \left(0,\delta_{1}\right)$ such that $f_{1}\left(r_{1},0\right)<-1.$
Now set $\eta_{2}=-\left(f_{1}\left(r_{1},0\right)+1\right)>0$ and $\delta_{2}=r_{1}.$
From Lemma \ref{LEM}, there exists a $f_{2}\in C^2(\Omega)\cap C^1(\overline{\Omega}\setminus\{ {\cal O}\})$ which satisfies (\ref{CAP})
such that $\sup_{\Omega_{\delta_{2}}} \vert f_{2}-f_{1}\vert<\eta_{2},$ $\left\vert\gamma_{2}-\frac{\pi}{2}\right\vert\le \alpha+\epsilon$
and $\lim_{\Omega\ni (x,y)\to {\cal O}} f_{2}(x,y)=\infty,$ where $\gamma_{2}=\cos^{-1}\left(Tf_{2}\cdot \nu \right).$
Then there exists $r_{2}\in \left(0,\delta_{2}\right)$ such that $f_{2}(r_{2},0)>1.$
Since $(r_{1},0)\in \Omega_{\delta_{2}},$
\[
f_{1}\left(r_{1},0\right)+1<f_{2}\left(r_{1},0\right)-f_{1}\left(r_{1},0\right)<-\left(f_{1}\left(r_{1},0\right)+1 \right)
\]
and so $f_{2}\left(r_{1},0\right)<-1.$
Next set $\eta_{3}=\min\left\{-\left(f_{2}\left(r_{1},0\right)+1\right), f_{2}\left(r_{2},0\right)-1\right\}>0$
and $\delta_{3}=r_{2}.$
From Lemma \ref{LEM}, there exists a $f_{3}\in C^2(\Omega)\cap C^1(\overline{\Omega}\setminus\{ {\cal O}\})$ which satisfies (\ref{CAP})
such that $\sup_{\Omega_{\delta_{3}}} \vert f_{3}-f_{2}\vert<\eta_{3},$ $\left\vert\gamma_{3}-\frac{\pi}{2}\right\vert\le \alpha+\epsilon$
and $\lim_{\Omega\ni (x,y)\to {\cal O}} f_{3}(x,y)=-\infty,$ where $\gamma_{3}=\cos^{-1}\left(Tf_{3}\cdot \nu \right).$
Then there exists $r_{3}\in \left(0,\delta_{3}\right)$ such that $f_{3}(r_{3},0)<-1.$
Since $(r_{1},0),(r_{2},0)\in \Omega_{\delta_{3}},$ we have
\[
f_{2}\left(r_{1},0\right)+1<f_{3}\left(r_{1},0\right)-f_{2}\left(r_{1},0\right)<-\left(f_{2}\left(r_{1},0\right)+1\right)
\]
and
\[
-\left(f_{2}\left(r_{2},0\right)-1\right)<f_{3}\left(r_{2},0\right)-f_{2}\left(r_{2},0\right)<f_{2}\left(r_{2},0\right)-1;
\]
hence $f_{3}\left(r_{1},0\right)<-1$ and $1<f_{3}\left(r_{2},0\right).$
Continuing to define $f_{n}$ and $r_{n}$ inductively, we set
\[
\eta_{n+1} = \min_{1\leq j\leq n}\vert f_n(r_j,0)-(-1)^j \vert \ \ \ {\rm and} \ \ \
\delta_{n+1}=\min\left\{r_{n},\frac{1}{n}\right\}.
\]
From Lemma \ref{LEM}, there exists $f_{n+1}\in C^2(\Omega)\cap C^1(\overline{\Omega}\setminus\{ {\cal O}\})$ which satisfies (\ref{CAP})
such that $\sup_{\Omega_{\delta_{n+1}}} \vert f_{n+1}-f_{n}\vert<\eta_{n+1},$
$\left\vert\gamma_{n+1}-\frac{\pi}{2}\right\vert\le \alpha+\epsilon$
and $\lim_{\Omega\ni (x,y)\to {\cal O}} f_{n+1}(x,y)=(-1)^{n+1}\infty,$ where $\gamma_{n+1}=\cos^{-1}\left(Tf_{n+1}\cdot \nu \right).$
Then there exists $r_{n+1}\in \left(0,\delta_{n+1}\right)$ such that $(-1)^{n+1}f_{n+1}(r_{n+1},0)>1.$
For each $j\in \{1,\dots,n\}$ which is an even number, we have
\[
-\left(f_{n}\left(r_{j},0\right)-1\right)<f_{n+1}\left(r_{j},0\right)-f_{n}\left(r_{j},0\right)<f_{n}\left(r_{j},0\right)-1
\]
and so $1<f_{n+1}\left(r_{j},0\right).$ For each $j\in \{1,\dots,n\}$ which is an odd number, we have
\[
f_{n}\left(r_{j},0\right)+1<f_{n+1}\left(r_{j},0\right)-f_{n}\left(r_{j},0\right)<-\left(f_{n}\left(r_{j},0\right)+1\right)
\]
and so $f_{n+1}\left(r_{j},0\right)<-1.$
As in \cite{LS1,Siegel}, there is a subsequence of $\{f_{n}\},$ still denoted $\{f_{n}\},$ which converges pointwise and
uniformly in the $C^{1}$ norm on $\overline{\Omega_{\delta}}$ for each $\delta>0$ as $n\to\infty$ to a solution
$f\in C^2(\Omega)\cap C^1\left(\overline\Omega\setminus \{{\cal O}\}\right)$ of (\ref{CAP}).
For each $j\in{\rm I\hspace{-0.2em}N}$ which is even, $f_{n}\left(r_{j},0\right)>1$ for each $n\ge j$ and so $f\left(r_{j},0\right)\ge 1.$
For each $j\in{\rm I\hspace{-0.2em}N}$ which is odd, $f_{n}\left(r_{j},0\right)<-1$ for each $n\ge j$ and so $f\left(r_{j},0\right)\le -1.$
Therefore
\[
\lim_{r\to 0^{+}} f(r,0) \ \ {\rm does \ not\ exist, \ even \ as\ an \ infinite\ limit},
\]
and so $Rf(0)$ does not exist.
Since $\Omega$ is symmetric with respect to the $x-$axis and $\gamma_{n}(x,y)$ is an even function of $y,$
$f(x,y)$ is an even function of $y.$
Now suppose that there exists $\theta_{0}\in [-\alpha,\alpha]$ such that $Rf(\theta_0)$ exists; then $\theta_{0}\neq 0.$
From the symmetry of $f,$ $Rf(-\theta_{0})$ must also exist and $Rf(-\theta_{0})=Rf(\theta_{0})$.
Set $\Omega'=\{(r\cos\theta,r\sin\theta): 0<r<\delta_{0},-\theta_{0}<\theta<\theta_{0}\}\subset \Omega.$
Since $f$ has continuous boundary values on $\partial\Omega',$ $f\in C^{0}\left(\overline{\Omega'}\right)$
and so $Rf(0)$ does exist, which is a contradiction. Thus $Rf(\theta)$ does not exist for any $\theta\in [-\alpha,\alpha].$
\end{proof}
| 7,076 |
\section{Introduction}\label{section:intro}
The field of \emph{compressed computation}---i.e. computation on compressed representations of the data without first fully decompressing it---is lately receiving much attention due to the ever-growing rate at which data is accumulating in archives such as the web or genomic databases. Being able to operate directly on the compressed data can make an enormous difference, considering that repetitive collections, such as sets of same-species genomes or software repositories, can be compressed at rates that often exceed 1000x. In such cases, this set of techniques makes it possible to perform most of the computation directly in primary memory and enables the possibility of manipulating huge datasets even on resource-limited machines.
Central to the field of compressed computation are \emph{compressed data structures}, such as compressed full-text indexes and compressed representations of geometric data (e.g. for 2D range search), trees, and graphs. The compression of these structures (in particular those designed for unstructured data) is based on an array of techniques which include entropy compression, Lempel-Ziv parsings~\cite{ziv1977universal,ziv1978compression} (LZ77/LZ78), grammar compression~\cite{charikar2005smallest}, and the Burrows-Wheeler transform~\cite{burrows1994block} (BWT). Grammar compression, run-length encoding of the BWT~\cite{siren2009run,siren2012compressed} (RLBWT), and LZ77 have been shown superior at compressing highly repetitive data and, as a consequence, much recent research focuses on these three techniques.
In this paper we address a central point in compressed computation: can we convert between different compressed representations of a text while using an amount of working space proportional to the input/output? Being able to perform such a task would, for instance, open the possibility of converting between compressed data structures (e.g. self-indexes) based on different compressors, all within compressed working space.
This is not the first time that this problem has been addressed. In~\cite{rytter2003application} the author shows how to convert the LZ77 encoding of a text into a grammar-based encoding, while in~\cite{bannai2012efficient,bannai2013converting} the opposite direction (though pointing to LZ78 instead of LZ77) is considered. In~\cite{tamakoshi2013run} the authors consider the conversions between LZ78 and run-length encoding of the text. Note that LZ77 and run-length encoding of the BWT are much more powerful than LZ78 and run-length encoding of the text, respectively, so methods addressing conversion between LZ77 and RLBWT would be of much higher interest.
In this work we show how to efficiently solve this problem in space proportional to the sizes of these two compressed representations. See Section~\ref{basics} for a formal definition of $RLBWT(T)$ and $LZ77(T)$ as a list of $r$ pairs and $z$ triples, respectively. Let $RLBWT(T)\rightarrow LZ77(T)$ denote the computation of the list $LZ77(T)$ using as input the list $RLBWT(T)$ (analogously for the opposite direction). We obtain the following results:
\begin{enumerate}
\item[(1)] We can compute $RLBWT(T)\rightarrow LZ77(T)$ in $\mathcal{O}(n\log r)$ time and $\mathcal{O}(r)$ words of working space
\item[(2)] We can compute $LZ77(T) \rightarrow RLBWT(T)$ in $\mathcal{O}\big(n(\log r + \log z)\big)$ time and $\mathcal{O}(r+z)$ words of working space
\end{enumerate}
Result (1) is based on our own recent work~\cite{policriti2016computing} and requires space proportional to the input \emph{only}, as the output is streamed to disk. Result (2) requires space proportional to the input \emph{plus} the output, since data structures based on both compressors are kept in main memory. In order to achieve result (2), we show how to (locally) decompress $LZ77(T)$ while incrementally building a run-length BWT data structure of the reversed text. Extracting text from LZ77 is a computationally expensive task, as it requires time proportional to the parse height $h$ per extracted character~\cite{kreft2013compressing} (with $h$ as large as $\sqrt n$ in the worst case). The key ingredient of our solution is to use the run-length BWT data structure itself to efficiently extract text from $LZ77(T)$.
\section{Basics}\label{basics}
Since we work with both LZ77~\cite{ziv1977universal} and the Burrows-Wheeler transform~\cite{burrows1994block} (see below for definitions), we assume that our text $T$ contains both \emph{LZ} and \emph{BWT terminator} characters. More specifically, let $T$ be of the form $T=\#T'\$ \in \Sigma^n$, with $T'\in(\Sigma \setminus \{\$,\#\})^{n-2}$, where $ \$ $ is the LZ77-terminator, and $\#$---lexicographically smaller than all elements in $\Sigma$---is the BWT-terminator.
Note that adding the two terminator characters to our text increases only by two the number of LZ77 factors and by at most four the number of BWT runs.
The \emph{Burrows-Wheeler Transform}~\cite{burrows1994block} $BWT(T)$ is a permutation of $T$ defined as follows. Sort all cyclic permutations of $T$ in a \emph{conceptual} matrix $M\in\Sigma^{n\times n}$. $BWT(T)$ is the last column of $M$. With $F$ and $L$ we will denote the first and last column of $M$, respectively, and we will say \emph{F-positions} and \emph{L-positions} to refer to positions on these two columns. On compressible texts, $BWT(T)$ exhibits some remarkable properties that boost compression. In particular, it can be shown~\cite{siren2012compressed} that repetitions in $T$ generate equal-letter runs in $BWT(T)$. We can efficiently represent this transform as the list of pairs
$$
RLBWT(T) = \langle \lambda_i, c_i \rangle_{i=1,\dots, r_T}
$$
where $\lambda_i>0$ is the length of the \emph{maximal} $i$-th $c_i$-run, $c_i\in\Sigma$. Equivalently, $RLBWT(T)$ is the \emph{shortest} list of pairs $\langle \lambda_i, c_i \rangle_{i=1,\dots, r_T}$ satisfying $BWT(T) = c_1^{\lambda_1}c_2^{\lambda_2}\dots c_{r_T}^{\lambda_{r_T}}$. Let $\overleftarrow T$ be the reverse of $T$. To simplify notation we define $r=\max\{r_T, r_{\overleftarrow T}\}$ (in practical cases $r_T \approx r_{\overleftarrow T}$ holds~\cite{belazzougui2015composite}, and this definition simplifies notation).
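As a concrete illustration (a naive one-pass sketch, not the data structure used later in the paper), the run-length encoding above can be computed by scanning $BWT(T)$ once:

```python
def rlbwt_pairs(bwt):
    """Compute RLBWT(T) as the shortest list of (lambda_i, c_i) pairs
    such that bwt == c_1^lambda_1 c_2^lambda_2 ... c_r^lambda_r."""
    pairs = []
    for c in bwt:
        if pairs and pairs[-1][1] == c:
            pairs[-1] = (pairs[-1][0] + 1, c)  # extend the current maximal run
        else:
            pairs.append((1, c))               # open a new run
    return pairs

# rlbwt_pairs("aaabbc") == [(3, 'a'), (2, 'b'), (1, 'c')]
```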
With $RLBWT^+(T)$ we denote a run-length encoded BWT \emph{data structure} on the text $T$, taking $\mathcal{O}(r)$ words of space and supporting \texttt{insert}, \texttt{rank}, \texttt{select}, and \texttt{access} operations on the BWT. Using these operations, the functions LF and FL (mapping L-positions to F-positions and \textit{vice versa}) and the function \texttt{extend} (turning $RLBWT^+(T)$ into $RLBWT^+(aT)$ for some $a\in\Sigma$) can be supported in $\mathcal{O}(\log r)$ time. We leave details concerning the particular implementation of this data structure to the next sections.
We recall that $BWT(\overleftarrow T)$ can be built online with an algorithm that reads $T$-characters left-to-right and inserts them in a dynamic string data structure~\cite{hon2007space,chan2007compressed}. Briefly, letting $a\in\Sigma$, the algorithm is based on the idea of backward-searching the extended reversed text $\overleftarrow{T\!a}$ in the BWT index for $\overleftarrow T$. This operation leads to the F-position $l$ where $\overleftarrow{T\!a}$ should appear among all sorted $\overleftarrow T$'s suffixes. At this point, it is sufficient to insert $\#$ at position $l$ in $BWT(\overleftarrow T)$ and replace the old $\#$ with $a$ to obtain $BWT(\overleftarrow{T\!a})$.
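A minimal sketch of this online construction, using a plain Python list in place of the dynamic run-length data structure (so each step costs $\mathcal{O}(n)$ rather than $\mathcal{O}(\log r)$); the insertion position $l$ is computed with the standard C-array-plus-rank formula, and the terminator \texttt{\#} is assumed smaller than every other character, as in the text:

```python
def extend(bwt, a):
    """Turn the BWT of reverse(X)+'#' into the BWT of reverse(Xa)+'#'."""
    p = bwt.index('#')                       # L-position of the old terminator
    bwt[p] = a                               # replace old '#' with the new char
    rank = bwt[:p].count(a)                  # occurrences of a before position p
    smaller = sum(1 for c in bwt if c < a)   # characters smaller than a
    l = 1 + smaller + rank                   # F-position ('#' counts once)
    bwt.insert(l, '#')                       # new terminator for the extended text
    return bwt

bwt = ['#']
for ch in "banana":        # reading T left-to-right builds the BWT of reverse(T)
    extend(bwt, ch)
# ''.join(bwt) == "bnn#aaa", the BWT of "ananab#"
```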
The \emph{LZ77 parsing}~\cite{ziv1977universal} (or \emph{factorization}) of a text $T$ is the sequence of \emph{phrases} (or \emph{factors})
$$LZ77(T) = \langle \pi_i,\lambda_i,c_i \rangle_{i=1,\dots,z}$$
where $ \pi_i\in \{0, \ldots, n-1\}\cup \{\bot\}$ and $ \bot $ stands for ``undefined'', $ \lambda_i \in \{0, \ldots, n-2\}$, $c_i\in\Sigma$, and:
\begin{enumerate}
\item $T = \omega_1c_1\ldots \omega_zc_z$, with $\omega_i=\epsilon$ if $\lambda_i=0$ and $\omega_i=T[\pi_i,\ldots ,\pi_i+\lambda_i-1]$ otherwise.
\item For any $i=1,\ldots ,z$, the string $\omega_i$ is the \emph{longest} occurring at least twice in $\omega_1c_1\ldots \omega_i$.
\end{enumerate}
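A naive quadratic-time parser matching this definition (self-referential sources are allowed, i.e. $\omega_i$ may overlap its own occurrence); this is an illustration only, far from the compressed-space algorithms of this paper, and it assumes the text ends with a terminator occurring nowhere else, as in our setting:

```python
def lz77(t):
    """LZ77 parse of t as triples (pi, lam, c); pi is None when lam == 0."""
    phrases, i = [], 0
    while i < len(t):
        pi, lam = None, 0
        # longest omega = t[i:i+l] with an earlier occurrence starting at j <= i-1
        for l in range(1, len(t) - i):
            j = t.find(t[i:i + l], 0, i + l - 1)  # end bound forces j <= i-1
            if j == -1:
                break
            pi, lam = j, l
        phrases.append((pi, lam, t[i + lam]))     # trailing character c_i
        i += lam + 1
    return phrases

# lz77("abab$") == [(None, 0, 'a'), (None, 0, 'b'), (0, 2, '$')]
```

Note how the self-referential case works: on `"aaaa$"` the second phrase is `(0, 3, '$')`, whose source overlaps the phrase itself.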
\section{From RLBWT to LZ77}
Our algorithm to compute $RLBWT(T) \rightarrow LZ77(T)$ is based on the result in~\cite{policriti2016computing}: an algorithm to compute---in $\mathcal{O}(r)$ words of working space and $\mathcal{O}(n\log r)$ time---$LZ77(T)$ using $T$ as input. The data structure at the core of this result is a dynamic run-length compressed string:
\begin{theorem}\label{th:dynamic RL}~\cite{makinen2010storage,policriti2016computing}
Let $S\in \Sigma^n$ and let $\barr$ be the number of equal-letter runs in $S$. There exists a data structure taking $\mathcal{O}(\barr)$ words of space and supporting \texttt{rank}, \texttt{select}, \texttt{access}, and \texttt{insert} operations on $S$ in $\mathcal{O}(\log\barr)$ time.
\end{theorem}
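A naive list-of-runs implementation conveys the interface of this data structure, though here every operation costs time linear in the number of runs rather than logarithmic (exposition only, not the tree-based structure of the theorem):

```python
class RLString:
    """Run-length string stored as its maximal equal-letter runs, with
    access/rank/select/insert. Naive: O(runs) per operation."""

    def __init__(self):
        self.runs = []  # list of (length, char) maximal runs

    def access(self, i):
        """Return the character at position i."""
        for l, c in self.runs:
            if i < l:
                return c
            i -= l
        raise IndexError("position out of range")

    def rank(self, c, i):
        """Count occurrences of c in positions [0, i)."""
        r = 0
        for l, ch in self.runs:
            take = min(i, l)
            if ch == c:
                r += take
            i -= take
            if i == 0:
                break
        return r

    def select(self, c, k):
        """Return the position of the (k+1)-th occurrence of c (k 0-based)."""
        pos = 0
        for l, ch in self.runs:
            if ch == c:
                if k < l:
                    return pos + k
                k -= l
            pos += l
        raise ValueError("not enough occurrences")

    def insert(self, c, i):
        """Insert c so that it ends up at position i, keeping runs maximal."""
        for idx, (l, ch) in enumerate(self.runs):
            if i < l or (i == l and ch == c):
                if ch == c:
                    self.runs[idx] = (l + 1, ch)      # extend this run
                elif i == 0:
                    self.runs.insert(idx, (1, c))     # new run before it
                else:
                    # split the run around the inserted character
                    self.runs[idx:idx + 1] = [(i, ch), (1, c), (l - i, ch)]
                return
            i -= l
        if self.runs and self.runs[-1][1] == c:
            self.runs[-1] = (self.runs[-1][0] + 1, c) # extend the last run
        else:
            self.runs.append((1, c))                  # append a new run
```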
The algorithm works in two steps. During the first, it builds $RLBWT^+(\overleftarrow T)$ by inserting $T$-characters left-to-right in a \emph{dynamic} $RLBWT$ represented with the data structure of Theorem \ref{th:dynamic RL}---using the procedure sketched in the previous section. In the second step, the procedure scans $T$ once more left-to-right while searching (reversed) LZ77 phrases in $RLBWT^+(\overleftarrow T)$. At the same time, a dynamic suffix array sampling is created by storing, for each BWT equal-letter run, the two most external (i.e. leftmost and rightmost in the run) text positions seen up to the current position; the key property proved in~\cite{policriti2016computing} is that this sparse suffix array sampling is sufficient to locate LZ77 phrase boundaries and sources. LZ77 phrases are output in text order, and can therefore be directly streamed to the output. The total size of the suffix array sampling never exceeds $2r$.
From Theorem \ref{th:dynamic RL}, all operations (\emph{insert}, \emph{LF-mapping}, \emph{access}) are supported in $\mathcal{O}(\log r)$ time and the structure takes $\mathcal{O}(r)$ words of space. The claimed space/time bounds of the algorithm easily follow.
Note that, using the algorithm described in~\cite{policriti2016computing}, we can only perform the conversion $RLBWT^+(\overleftarrow T) \rightarrow LZ77(T)$. Our full procedure to achieve conversion $RLBWT(T) \rightarrow LZ77(T)$ consists of the following three steps:
\begin{enumerate}
\item convert $RLBWT(T)$ to $RLBWT^+(T)$, i.e. we add support for \texttt{rank}/\texttt{select}/\texttt{access} queries on $RLBWT(T)$;
\item compute $RLBWT^+(\overleftarrow T)$ using $RLBWT^+(T)$;
\item run the algorithm described in~\cite{policriti2016computing} and compute $LZ77(T)$ using $RLBWT^+(\overleftarrow T)$.
\end{enumerate}
Let $RLBWT(T) = \langle \lambda_i, c_i \rangle_{i=1,\dots, r}$ (see the previous section). Step 1 can be performed by just inserting characters $c_1^{\lambda_1}c_2^{\lambda_2}\dots c_{r}^{\lambda_{r}}$ (in this order) in the dynamic run-length encoded string data structure of Theorem \ref{th:dynamic RL}.
Step 2 is performed by extracting characters $T[0], T[1], \dots, T[n-1]$ from $RLBWT^+(T)$ and inserting them (in this order) in a dynamic $RLBWT$ data structure with the BWT construction algorithm sketched in Section~\ref{basics}. Since this algorithm builds the $RLBWT$ of the \emph{reversed} text, the final result is $RLBWT^+(\overleftarrow T)$.
We can state our first result:
\begin{theorem}\label{th:rlbwt-lz77}
Conversion $RLBWT(T) \rightarrow LZ77(T)$ can be performed in $\mathcal{O}(n\log r)$ time and $\mathcal{O}(r)$ words of working space.
\end{theorem}
\begin{proof}
We use the dynamic RLBWT structure of Theorem \ref{th:dynamic RL} to implement components $RLBWT^+(T)$ and $RLBWT^+(\overleftarrow T)$. Step 1 requires $n$ \texttt{insert} operations in $RLBWT^+(T)$ and therefore terminates in $\mathcal{O}(n\log r)$ time. Since the string we are building contains $r_T$ runs, this step uses $\mathcal{O}(r)$ words of working space. Step 2 calls $n$ \texttt{extend} and \texttt{FL} queries on dynamic RLBWTs. \texttt{extend} requires a constant number of \texttt{rank} and \texttt{insert} operations~\cite{chan2007compressed}.
The FL function requires just an \texttt{access} and a \texttt{rank} on the F column and a \texttt{select} on the L column.
From Theorem \ref{th:dynamic RL}, all these operations are supported in $\mathcal{O}(\log r)$ time, so also step 2 terminates in $\mathcal{O}(n\log r)$ time. Recall that $r$ is defined to be the maximum between the number of runs in $BWT(T)$ and $BWT(\overleftarrow T)$. Since in this step we are building $RLBWT^+(\overleftarrow T)$ using $RLBWT^+(T)$, the overall space is bounded by $\mathcal{O}(r)$ words. Finally, step 3 terminates in $\mathcal{O}(n\log r)$ time while using $\mathcal{O}(r)$ words of space~\cite{policriti2016computing}. The claimed bounds for our algorithm to compute $RLBWT(T) \rightarrow LZ77(T)$ follow.
\end{proof}
\section{From LZ77 to RLBWT}\label{sec:lz77->rlbwt}
Our strategy to convert $LZ77(T)$ to $RLBWT(T)$ consists of the following steps:
\begin{enumerate}
\item extract $T[0], T[1], \dots, T[n-1]$ from $LZ77(T)$ and build $RLBWT^+(\overleftarrow T)$;
\item convert $RLBWT^+(\overleftarrow T)$ to $RLBWT^+(T)$;
\item extract equal-letter runs from $RLBWT^+(T)$ and stream $RLBWT(T)$ to the output.
\end{enumerate}
Step 2 is analogous to step 2 discussed in the previous section. Step 3 requires reading characters $RLBWT^+(T)[0]$, ..., $RLBWT^+(T)[n-1]$ (\texttt{access} queries on $RLBWT^+(T)$) while keeping in memory the current run's head character and a counter tracking the current run's length. Whenever a new run opens, we stream the previous run's head and length to the output.
The problematic step is the first. As mentioned in the introduction, extracting a character from $LZ77(T)$ requires following a chain of character copies. In the worst case, the length $h$ of this chain---also called the parse height (see~\cite{kreft2013compressing} for a formal definition)---can be as large as $\sqrt n$. Our observation is that, since we are building $RLBWT^+(\overleftarrow T)$, we can use this very component to efficiently extract text from $LZ77(T)$: while decoding factor $\langle \pi_v, \lambda_v, c_v \rangle$, we convert $\pi_v$ to a position on the RLBWT and extract $\lambda_v$ characters from it. The main challenge in achieving this goal efficiently is to convert text positions to RLBWT positions, taking into account that the RLBWT is dynamic and therefore changes in size and content.
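For reference, the three steps can be checked end-to-end with a naive pipeline that, unlike our algorithm, decompresses the text explicitly and builds the BWT by rotation sorting (the input below is a hypothetical parse of $T=\texttt{\#abab\$}$, chosen only for illustration):

```python
def lz77_to_rlbwt(phrases):
    """Naive LZ77(T) -> RLBWT(T): decompress, build BWT(T) by sorting all
    cyclic rotations, then run-length encode. Our algorithm never
    materialises T, nor the uncompressed BWT."""
    t = []
    for pi, lam, c in phrases:
        for k in range(lam):
            t.append(t[pi + k])   # char-by-char copy: overlapping sources work
        t.append(c)
    s = ''.join(t)
    bwt = ''.join(r[-1] for r in sorted(s[i:] + s[:i] for i in range(len(s))))
    pairs = []
    for c in bwt:
        if pairs and pairs[-1][1] == c:
            pairs[-1] = (pairs[-1][0] + 1, c)
        else:
            pairs.append((1, c))
    return pairs

# For T = "#abab$":
# lz77_to_rlbwt([(None, 0, '#'), (None, 0, 'a'), (None, 0, 'b'), (1, 2, '$')])
#   == [(1, '$'), (2, 'b'), (1, '#'), (2, 'a')]
```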
\subsection{Dynamic functions}
Considering that $RLBWT^+(\overleftarrow T)$ is built incrementally, we need a data structure encoding a function $\mathcal Z :\{\pi_1,...,\pi_z\} \rightarrow \{0,...,n-1\}$ that maps those text positions that are the source of some LZ77 phrase to their corresponding $RLBWT$ positions. Moreover, the data structure must be \emph{dynamic}, that is, it must support the following three operations (see below for a description of how these operations will be used):
\begin{itemize}
\item \texttt{map}: $\mathcal Z(i)$. Compute the image of $i$
\item \texttt{expand}: $\mathcal Z.expand(j)$. Set $\mathcal Z(i)$ to $\mathcal Z(i)+1$ for every $i$ such that $\mathcal Z(i)\geq j$
\item \texttt{assign}: $\mathcal Z(i) \leftarrow j$. Call $\mathcal Z.expand(j)$ and set $\mathcal Z(i)$ to $j$
\end{itemize}
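A direct, inefficient rendering of these three operations may help fix their semantics ($\mathcal{O}(z)$ per \texttt{expand} here, versus the logarithmic time achieved later with a gap-encoded bitvector and a red-black tree):

```python
class DynFun:
    """The dynamic function Z, naively: a dict from the defined domain
    points to their current images."""

    def __init__(self):
        self.f = {}

    def map(self, i):
        return self.f[i]            # assumes Z(i) has already been defined

    def expand(self, j):
        for i in self.f:
            if self.f[i] >= j:
                self.f[i] += 1      # shift every image >= j by one

    def assign(self, i, j):
        self.expand(j)              # make room at codomain position j
        self.f[i] = j

z = DynFun()
z.assign(0, 2)
z.assign(5, 1)       # shifts Z(0) from 2 to 3, then sets Z(5) = 1
z.expand(0)          # every image >= 0 is incremented
# z.map(0) == 4, z.map(5) == 2
```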
To keep the notation simple and light, we use the same symbol $\mathcal Z$ for the function as well as for the data structure representing it.
We say that $\mathcal Z(i)$ is \emph{defined} if, for some $j$, we executed an \texttt{assign} operation $\mathcal Z(i) \leftarrow j$ at some previous stage of the computation. For technical reasons that will become clear later, we restrict our attention to the case where \texttt{assign} operations $\mathcal Z(i) \leftarrow j$ are executed for increasing values of $i$: if $\mathcal Z(i_1) \leftarrow j_1, \dots, \mathcal Z(i_q) \leftarrow j_q$ is the sequence (in temporal order) of the calls to \texttt{assign} on $\mathcal Z$, then $i_1 < \dots < i_q$.
This restriction suffices for our purposes: in particular, $i_1, \dots, i_q$ will be the sorted non-null phrase sources $\pi_1,\dots, \pi_z$. Finally, we assume that $\mathcal Z(i)$ is queried only after $\mathcal Z(i)$ has been defined---again, this will be the case in our algorithm.
Intuitively, $\mathcal Z.expand(j)$ will be used when we insert $T[i]$ at position $j$ in the partial $RLBWT^+(\overleftarrow T)$ and $j$ is not associated with any phrase source (i.e. $i\neq \pi_v$ for all $v=1,\dots,z$). When we insert $T[i]$ at position $j$ in the partial $RLBWT^+(\overleftarrow T)$ and $i = \pi_v$ for some $v=1,\dots,z$ (possibly more than one), $\mathcal Z(i) \leftarrow j$ will be used.
\medskip
The existence and associated query-costs of the data structure $\mathcal Z$ are proved in the following lemma.
\begin{lemma}
Let $z$ be the number of phrases in the LZ77 parsing of $T$. There exists a data structure taking $\mathcal{O}(z)$ words of space and supporting \texttt{map}, \texttt{expand}, and \texttt{assign} operations on $\mathcal Z :\{\pi_1,...,\pi_z\} \rightarrow \{0,...,n-1\}$ in $\mathcal{O}(\log z)$ time.
\end{lemma}
\begin{proof}
First of all notice that, since $LZ77(T)$ is our input, we know beforehand the domain $\mathcal D = \{ \pi\ |\ \langle\pi, \lambda, c\rangle \in LZ77(T)\ \wedge \pi\neq \bot \}$ of $\mathcal Z$. We can therefore map the domain to rank space and restrict our attention to functions $\mathcal Z':\{0,...,d-1\} \rightarrow \{0,...,n-1\}$, with $d = |\mathcal D| \leq z$. To compute $\mathcal Z(i)$ we map $0\leq i < n$ to a rank $0\leq i' < d$ by binary-searching a precomputed array containing the sorted values of $\mathcal D$ and return $\mathcal Z'(i')$. Similarly, $\mathcal Z(i) \leftarrow j$ is implemented by executing $\mathcal Z'(i') \leftarrow j$ (with $i'$ defined as above), and $\mathcal Z.expand(j)$ simply as $\mathcal Z'.expand(j)$.
We use a dynamic gap-encoded bitvector $C$ marking (by setting a bit) those positions $j$ such that $j=\mathcal Z(i)$ for some $i$. A dynamic gap-encoded bitvector with $b$ bits set can easily be implemented using a red-black tree such that it takes $\mathcal{O}(b)$ words of space and supports \texttt{insert}, \texttt{rank}, \texttt{select}, and \texttt{access} operations in $\mathcal{O}(\log b)$ time; see~\cite{policriti2016computing} for such a reduction.
Upon initialization of $\mathcal Z$, $C$ is empty. Let $k$ be the number of bits set in $C$ at some step of the computation.
We can furthermore restrict our attention to \emph{surjective} functions $\mathcal Z'':\{0,...,d-1\} \rightarrow \{0,...,k-1\}$ as follows. $\mathcal Z'(i')$ (\texttt{map}) returns $C.select_1(\mathcal Z''(i'))$. The \texttt{assign} operation $\mathcal Z'(i') \leftarrow j$ requires the \texttt{insert} operation $C.insert(1,j)$ followed by the execution of $\mathcal Z''(i') \leftarrow C.rank_1(j)$. Operation $\mathcal Z'.expand(j)$ is implemented with $C.insert(0,j)$.
To conclude, since we restrict our attention to the case where---when calling $\mathcal Z(i) \leftarrow j$---argument $i$ is greater than all $i'$ such that $\mathcal Z(i')$ is defined, we will execute \texttt{assign} operations $\mathcal Z''(i') \leftarrow j''$ for increasing values of $i'=0,1,\dots,d-1$. In particular, at each \texttt{assign} $\mathcal Z''(i') \leftarrow j''$, $i'= k$ will be the current domain size. We therefore focus on a new operation, \texttt{append}, denoted as $\mathcal Z''.append(j'')$ and whose effect is $\mathcal Z''(k) \leftarrow j''$. We are left with the problem of finding a data structure for a \emph{dynamic permutation} $\mathcal Z'':\{0,...,k-1\} \rightarrow \{0,...,k-1\}$ with support for \texttt{map} and \texttt{append} operations. Note that both domain and codomain size ($k$) are incremented by one after every \texttt{append} operation.
\begin{example}
Let $k=5$ and $\mathcal Z''$ be the permutation $\langle 3,1,0,4,2 \rangle$. After $\mathcal Z''.append(2)$, $k$ increases to $6$ and $\mathcal Z''$ turns into the permutation $\langle 4,1,0,5,3,2\rangle$. Note that $\mathcal Z''.append(j'')$ has the following effect on the permutation: all numbers larger than or equal to $j''$ are incremented by one, and $j''$ is appended at the end of the permutation.
\end{example}
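The example generalises as follows; a one-line list-based \texttt{append} ($\mathcal{O}(k)$ per call, versus the $\mathcal{O}(\log k)$ obtained with the red-black tree described next) makes the semantics explicit:

```python
def perm_append(perm, j):
    """Dynamic-permutation append: every value >= j is bumped by one,
    then j becomes the image of the new last domain element."""
    return [v + 1 if v >= j else v for v in perm] + [j]

# perm_append([3, 1, 0, 4, 2], 2) == [4, 1, 0, 5, 3, 2]   (the example above)
```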
To implement the dynamic permutation $\mathcal Z''$, we use a red-black tree $\mathcal T$. We associate to each internal tree node $x$ a counter storing the number of leaves contained in the subtree rooted in $x$. Let $m$ be the size of the tree. The tree supports two operations:
\begin{itemize}
\item $\mathcal T.insert(j)$. Insert a new leaf at position $j$, i.e. the new leaf will be the $j$-th leaf to be visited in the in-order traversal of the tree. This operation can be implemented using subtree-size counters to guide the insertion. After the leaf has been inserted, we need to re-balance the tree (if necessary) and update at most $\mathcal{O}(\log m)$ subtree-size counters. The procedure returns (a pointer to) the tree leaf $x$ just inserted. Overall, $\mathcal T.insert(j)$ takes $\mathcal{O}(\log m)$ time
\item $\mathcal T.locate(x)$. Take as input a leaf in the red-black tree and return the (0-based) rank of the leaf among all leaves in the in-order traversal of the tree. $\mathcal T.locate(x)$ requires climbing the tree from $x$ to the root and use subtree-size counters to retrieve the desired value, and therefore runs in $\mathcal{O}(\log m)$ time.
\end{itemize}
At this point, the dynamic permutation $\mathcal Z''$ is implemented using the tree described above and a vector $N$ of red-black tree leaves supporting \texttt{append} operations (i.e. insert at the end of the vector). $N$ can be implemented with a simple vector of words with initial capacity 1. Every time we need to add an element beyond the capacity of $N$, we re-allocate $2|N|$ words for the array. $N$ therefore supports constant-time access and amortized constant-time append operations. Starting with empty $\mathcal T$ and $N$, we implement operations on $\mathcal Z''$ as follows:
\begin{itemize}
\item $\mathcal Z''.map(i)$ returns $\mathcal T.locate(N[i])$
\item $\mathcal Z''.append(j)$ is implemented by calling $N.append(\mathcal T.insert(j))$
\end{itemize}
Taking into account all components used to implement our original dynamic function $\mathcal Z$, we get the bounds of our lemma.
\end{proof}
\subsubsection{The algorithm}
The steps of our algorithm to compute $RLBWT^+(\overleftarrow T)$ from $LZ77(T)$ are the following:
\begin{enumerate}
\item sort $\mathcal D = \{ \pi\ |\ \langle\pi, \lambda, c\rangle \in LZ77(T)\ \wedge \pi\neq \bot \}$;
\item process $\langle \pi_v, \lambda_v, c_v \rangle_{v=1,...,z}$ from the first to last triple as follows. When processing $\langle \pi_v, \lambda_v, c_v \rangle$:
\begin{enumerate}
\item use our dynamic function $\mathcal Z$ to convert text position $\pi_v$ to RLBWT position $j'=\mathcal Z(\pi_v)$
\item extract $\lambda_v$ characters from RLBWT starting from position $j'$ by using the LF function; at the same time, extend RLBWT with the extracted characters.
\item when inserting a character at position $j$ of the RLBWT, if $j$ corresponds to some text position $i\in \mathcal D$, then update $\mathcal Z$ accordingly by setting $\mathcal Z(i)\leftarrow j$. If, instead, $j$ does not correspond to any text position in $\mathcal D$, execute $\mathcal Z.expand(j)$.
\end{enumerate}
\end{enumerate}
Our algorithm is outlined below as Algorithm 1. A detailed description of the pseudocode and a statement of its complexity follow.
\medskip
In Lines 1-5 we initialize all structures and variables. In order: we compute and sort the set $\mathcal D$ of phrase sources, we initialize the current text position $i$ ($i$ is the position of the character to be read), we initialize an empty RLBWT data structure (we will build $RLBWT^+(\overleftarrow T)$ online), and we create an empty dynamic function data structure $\mathcal Z$. In Line 6 we enter the main loop iterating over LZ77 factors. If the current phrase's source is not empty (i.e. if the phrase copies a previous portion of the text), we need to extract $\lambda_v$ characters from the RLBWT. First, in Line 8 we retrieve the RLBWT position $j'$ corresponding to text position $\pi_v$ with a \texttt{map} query on $\mathcal Z$. Note that, if $\pi_v\neq\bot$, then $i>\pi_v$ and therefore $\mathcal Z(\pi_v)$ is defined (see below). We are now ready to extract characters from the RLBWT. For $\lambda_v$ times, we repeat the following procedure (Lines 10-19). We read the $l$-th character from the source of the $v$-th phrase (Line 10) and insert it in the RLBWT (Line 11). Importantly, the \texttt{extend} operation at Line 11 returns the RLBWT position $j$ at which the new character is inserted; RLBWT position $j$ corresponds to text position $i$. We now have to check whether $i$ is the source of some LZ77 phrase. If this is the case (Line 12), then we link text position $i$ to RLBWT position $j$ by calling an \texttt{assign} query on $\mathcal Z$ (Line 13). If, on the other hand, $i$ is not the source of any phrase, then we call an \texttt{expand} query on $\mathcal Z$ on the codomain element $j$. Note that, after the \texttt{extend} query at Line 11, RLBWT positions after the $j$-th are shifted by one. If $j'$ is one of these positions, then we increment it (Line 17). Finally, we increment the text position $i$ (Line 19). At this point, we have finished copying characters from the $v$-th phrase's source (or we did not do anything if the $v$-th phrase consists of a single character). 
We therefore extend the RLBWT with the $v$-th trailing character (Line 20), and (as done before) associate text position $i$ to RLBWT position $j$ if $i$ is the source of some phrase (Lines 21-24). We conclude the main loop by incrementing the current position $i$ on the text (Line 25). Once all characters have been extracted from LZ77, RLBWT is a run-length BWT structure on $\overleftarrow T$. At Line 26 we convert it to $RLBWT^+(T)$ (see previous section) and return it as a series of pairs $\langle \lambda_v, c_v \rangle_{v=1,\dots, r}$.
\begin{figure}[h!]
\begin{center}
\includegraphics[trim=3cm 9cm 3cm 3.5cm, clip=true, width=1\textwidth]{alg}
\end{center}
\end{figure}
\begin{theorem}\label{th:lz77-rlbwt}
Algorithm 1 converts $LZ77(T) \rightarrow RLBWT(T)$ in $\mathcal{O}(n(\log r+\log z))$ time and $\mathcal{O}(r+z)$ words of working space
\end{theorem}
\begin{proof}
Sorting set $\mathcal D$ takes $\mathcal{O}(z\log z) \subseteq \mathcal{O}(n\log z)$ time. Overall, we perform $\mathcal{O}(z)$ \texttt{map}/\texttt{assign} and $n$ \texttt{expand} queries on $\mathcal Z$. All these operations take globally $\mathcal{O}(n\log z)$ time. We use the structure of Theorem \ref{th:dynamic RL} to implement $RLBWT^+(T)$ and $RLBWT^+(\overleftarrow T)$. We perform $n$ \texttt{access}, \texttt{extend}, and \texttt{LF} queries on $RLBWT^+(\overleftarrow T)$. This takes overall $\mathcal{O}(n\log r)$ time. Finally, inverting $RLBWT^+(\overleftarrow T)$ at Line 26 takes $\mathcal{O}(n\log r)$ time and $\mathcal{O}(r)$ words of space (see previous section). We keep in memory the following structures: $\mathcal D$, $\mathcal Z$, $RLBWT^+(\overleftarrow T)$, and $RLBWT^+(T)$. The bounds of our theorem easily follow.
\end{proof}
\section{Conclusions}
In this paper we presented space-efficient algorithms converting between two compressed file representations---the run-length Burrows-Wheeler transform (RLBWT) and the Lempel-Ziv 77 parsing (LZ77)---using a working space proportional to the input and the output. Both representations can be significantly (up to exponentially) smaller than the text; our solutions are therefore particularly useful in those cases in which the text does not fit in main memory but its compressed representation does. Another application of the results discussed in this paper is the optimal-space construction of compressed self-indexes based on these compression techniques (e.g.~\cite{belazzougui2015composite}) taking as input the RLBWT/LZ77 \emph{compressed} file.
We point out two possible developments of our ideas. First of all, our algorithms rely heavily on dynamic data structures. On the experimental side, it has been recently shown~\cite{prezza2017framework} that algorithms based on compressed dynamic strings can be hundreds of times slower than others not making use of dynamism (despite offering very similar theoretical guarantees). This is due to factors ranging from cache misses to memory fragmentation; dynamic structures inherently incur these problems as they need to perform a large number of memory allocations and de-allocations. A possible strategy for overcoming these difficulties is to build the RLBWT by merging two static RLBWTs while using a working space proportional to the output size. A second improvement over our results concerns theoretical running times. We note that our algorithms perform a number of steps proportional to the size $n$ of the text. Considering that the compressed file could be \emph{exponentially} smaller than the text, it is natural to ask whether it is possible to perform the same tasks in time proportional to $r+z$. This seems to be a much more difficult goal due to the intrinsic differences between the two compressors---one is based on suffix sorting, the other on the replacement of repetitions with pointers.
\bibliographystyle{splncs}
\section{Introduction}
The presence of a rigidly rotating magnetosphere in the early B-type stars HD\,23478 and HD\,345439
was recently discovered
in the Apache Point Observatory Galactic Evolution Experiment (APOGEE; \citealt{Eikenberry2014})
using high-resolution ($R\sim22\,500$) near-infrared H-band spectra. The authors detected in the
APOGEE bandpass prominent Brackett series emission lines with a characteristic double-horned profile. The type
of profile and peak separation is typical for the rigidly rotating magnetosphere (RRM, \citealt{Townsend2005})
feature previously
discovered in the fast rotating helium-strong star $\sigma$\,Ori\,E,
which possesses an extremely large magnetic field (e.g., \citealt{Landstreet1978,Oksala2015}).
Such stars are extremely rare: the discovery of HD\,23478 and HD\,345439 has increased the number
of known ``extreme'' rotators by 50\% \citep{Eikenberry2014}.
The authors reported that the optical spectra of HD\,345439 reveal strong \ion{He}{i} lines and very fast
rotation of $\sim270\pm20$\,km\,s$^{-1}$.
Subsequently, \citet{Wisn2015} analysed multi-epoch photometric observations
of this star from the Kilodegree Extremely Little Telescope, Wide Angle Search for Planets, and ASAS
surveys, revealing the presence of a $\sim$0.7701\,day rotation period in each data set. The authors suggest
that the He-strong star HD\,345439, of spectral type B2\,V,
is among the fastest known He-strong $\sigma$\,Ori\,E analogs, together with HR\,7355 ($P_{\rm rot}=0.52\,d$ -- \citealt{Rivinius2013}) and
HR\,5907 ($P_{\rm rot}=0.51\,d$ -- \citealt{Grunhut2012}).
\citet{Hubrig2015a} carried out a spectropolarimetric follow-up of HD\,345439 on one occasion, obtaining eight subexposures
over 88\,minutes in 2014 June with the
FOcal Reducer low dispersion
Spectrograph (FORS\,2; \citealt{Appenzeller1998}) mounted on the 8\,m Antu telescope of the VLT.
The authors reported that the mean longitudinal magnetic field was changing from about $+$500\,G measured in the first
pair of subexposures to about $-$1200\,G measured in the last pair of subexposures.
Multi-epoch FORS\,2 spectropolarimetric observations distributed over about two months were recently
obtained in service mode in the framework of our programme 097.D-0428.
In the following sections, we present the results of our magnetic field measurements and the search for
a magnetic field periodicity, and discuss the detected spectral variability with respect to the magnetic
field geometry.
\section{Observations and magnetic field measurements}
\label{sect:obs}
\begin{table*}
\caption{
Logbook of the FORS\,2 polarimetric observations of HD\,345439, including
the modified Julian date of mid-exposure followed by the
achieved signal-to-noise ratio in the Stokes~$I$ spectra around 5200\,\AA{},
and the measurements of the mean longitudinal magnetic field using the
Monte Carlo bootstrapping test, for the hydrogen lines and all lines.
In the last columns, we present the results of our measurements using the null spectra for the set
of all lines and the phases calculated
relative to the zero phase corresponding to a positive field extremum at MJD56925.5425.
All quoted errors are 1$\sigma$ uncertainties.
}
\label{tab:log_meas}
\centering
\begin{tabular}{lrr@{$\pm$}rr@{$\pm$}rr@{$\pm$}rr}
\hline
\hline
\multicolumn{1}{c}{MJD} &
\multicolumn{1}{c}{SNR$_{5200}$} &
\multicolumn{2}{c}{$\left< B_{\rm z}\right>_{\rm hydr}$} &
\multicolumn{2}{c}{$\left< B_{\rm z}\right>_{\rm all}$} &
\multicolumn{2}{c}{$\left< B_{\rm z}\right>_{\rm N}$} &
\multicolumn{1}{c}{Phase}\\
&
&
\multicolumn{2}{c}{[G]} &
\multicolumn{2}{c}{[G]} &
\multicolumn{2}{c}{[G]} &
\\
\hline
56810.2547& 413 & 414 & 282 & 436 & 212 & \multicolumn{2}{c}{} & 0.311 \\
56810.2745& 455 & 789 & 246 & 565 & 188 & \multicolumn{2}{c}{} & 0.336\\
56810.2860& 392 & $-$303 & 282 &$-$298 & 212 & \multicolumn{2}{c}{} & 0.352 \\
56810.3018& 420 & $-$840 & 262 &$-$689 & 198 & \multicolumn{2}{c}{} & 0.372 \\
57525.2797& 521 & 1202 & 552 & 1310 & 374 &$-$275 & 223& 0.697\\
57527.2841& 729 & 287 & 302 & 416 & 206 & $-$29 &166 & 0.300\\
57530.3230& 960 & 1714 & 245 & 1237 & 186 &$-$181 &122 & 0.246\\
57530.3753& 1086 & 514 & 185 & 518 & 141 & $-$5 &104 & 0.314\\
57531.2763& 756 & $-$829 & 408 & $-$475 & 222 &$-$274 &274 & 0.483\\
57531.3233& 811 & $-$103 & 371 & $-$576 & 203 & 76 &205 & 0.544 \\
57534.3069& 786 & $-$853 & 280 & $-$926 & 181 & 123 &194 & 0.418 \\
57560.1708& 881 & 3415 & 344 & 3044 & 235 & 113 &178 & 0.000 \\
57590.1750& 1127 & 2546 & 184 & 2551 & 121 & 130 & 97 & 0.957 \\
57590.2287& 1174 & 1905 & 200 & 2176 & 129 & $-$98 & 95 & 0.027 \\
57590.2812& 1056 & 2084 & 292 & 2156 & 169 & $-$45 &113 & 0.095 \\
57591.1437& 1053 & 1344 & 265 & 1280 & 173 & $-$6 &145 & 0.215 \\
57591.1997& 1178 & 826 & 199 & 583 & 137 & $-$21 & 91 & 0.288 \\
57591.2521& 1133 & $-$372 & 229 & $-$314 & 149 & $-$51 &115 & 0.356 \\
\hline
\end{tabular}
\end{table*}
Fourteen FORS\,2 spectropolarimetric observations of HD\,345439 were obtained
from 2016 May 17 to 2016 July 22.
The FORS\,2 multi-mode instrument is equipped with polarisation analysing optics
comprising super-achromatic half-wave and quarter-wave phase retarder plates,
and a Wollaston prism with a beam divergence of 22$\arcsec$ in standard
resolution mode.
We used the GRISM 600B and the narrowest available slit width
of 0$\farcs$4 to obtain a spectral resolving power of $R\sim2000$.
The observed spectral range from 3250 to 6215\,\AA{} includes all Balmer lines,
apart from H$\alpha$, and numerous helium lines.
For the observations, we used a non-standard readout mode with low
gain (200kHz,1$\times$1,low), which provides a broader dynamic range and hence
allowed us to reach a higher signal-to-noise ratio (SNR) in the individual spectra.
The exposure time for each subexposure was 7.8\,min.
A first description of the assessment of longitudinal magnetic field
measurements using FORS\,1/2 spectropolarimetric observations was presented
in our previous work (e.g.\ \citealt{Hubrig2004a,Hubrig2004b},
and references therein).
To minimize the cross-talk effect,
and to cancel errors from
different transmission properties of the two polarised beams,
a sequence of subexposures at the retarder
position angles
$-$45$^{\circ}$$+$45$^{\circ}$,
$+$45$^{\circ}$$-$45$^{\circ}$,
$-$45$^{\circ}$$+$45$^{\circ}$,
etc.\ is usually executed during the observations. Moreover, the reversal of the quarter wave
plate compensates for fixed errors in the relative wavelength calibrations of the two
polarised spectra.
According to the FORS User Manual, the $V/I$ spectrum is calculated using:
\begin{equation}
\frac{V}{I} = \frac{1}{2} \left\{
\left( \frac{f^{\rm o} - f^{\rm e}}{f^{\rm o} + f^{\rm e}} \right)_{-45^{\circ}} -
\left( \frac{f^{\rm o} - f^{\rm e}}{f^{\rm o} + f^{\rm e}} \right)_{+45^{\circ}} \right\}
\end{equation}
where $+45^{\circ}$ and $-45^{\circ}$ indicate the position angle of the
retarder waveplate and $f^{\rm o}$ and $f^{\rm e}$ are the ordinary and
extraordinary beams, respectively.
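The beam combination above can be sketched in a few lines; the function below is an illustrative reimplementation, not the FORS pipeline, and the flux values in the usage example are made up.

```python
def stokes_v_over_i(f_o_m45, f_e_m45, f_o_p45, f_e_p45):
    """Combine ordinary (f_o) and extraordinary (f_e) beam fluxes taken at
    retarder position angles -45 and +45 degrees into V/I, following the
    difference-ratio formula above."""
    r_m45 = (f_o_m45 - f_e_m45) / (f_o_m45 + f_e_m45)
    r_p45 = (f_o_p45 - f_e_p45) / (f_o_p45 + f_e_p45)
    return 0.5 * (r_m45 - r_p45)
```

With perfectly swapped beams, e.g.\ fluxes 1.01/0.99 at $-45^{\circ}$ and 0.99/1.01 at $+45^{\circ}$, this yields a 1\% circular polarization signal.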
Rectification of the $V/I$ spectra was
performed in the way described by \citet{Hubrig2014}.
Null profiles, $N$, are calculated as pairwise differences from all available
$V$ profiles. From these, 3$\sigma$-outliers are identified and used to clip
the $V$ profiles. This removes spurious signals, which mostly come from cosmic
rays, and also reduces the noise. A full description of the updated data
reduction and analysis will be presented in a separate paper (Sch\"oller et
al., in preparation, see also \citealt{Hubrig2014}).
The mean longitudinal magnetic field, $\left< B_{\rm z}\right>$, is
measured on the rectified and clipped spectra following the
method suggested by \citet{Angel1970}:
\begin{eqnarray}
\frac{V}{I} = -\frac{g_{\rm eff}\, e \,\lambda^2}{4\pi\,m_{\rm e}\,c^2}\,
\frac{1}{I}\,\frac{{\rm d}I}{{\rm d}\lambda} \left<B_{\rm z}\right>\, ,
\label{eqn:vi}
\end{eqnarray}
\noindent
where $V$ is the Stokes parameter that measures the circular polarization, $I$
is the intensity in the unpolarized spectrum, $g_{\rm eff}$ is the effective
Land\'e factor, $e$ is the electron charge, $\lambda$ is the wavelength,
$m_{\rm e}$ is the electron mass, $c$ is the speed of light,
${{\rm d}I/{\rm d}\lambda}$ is the wavelength derivative of Stokes~$I$, and
$\left<B_{\rm z}\right>$ is the mean longitudinal (line-of-sight) magnetic field.
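In practice, Eq.~(\ref{eqn:vi}) turns the field determination into a linear regression of $V/I$ against the derivative term. The sketch below is a deliberately simplified version (regression through the origin on synthetic data), not the actual reduction code; the numerical constant $e/(4\pi m_{\rm e} c^2) \approx 4.67\times10^{-13}$\,\AA$^{-1}$\,G$^{-1}$ is the standard CGS value for wavelengths in \AA.

```python
import numpy as np

# e/(4*pi*m_e*c^2) in CGS units, for wavelengths in Angstrom.
KAPPA = 4.67e-13  # A^-1 G^-1

def mean_longitudinal_field(wl, stokes_i, v_over_i, g_eff=1.0):
    """Least-squares slope of V/I versus the predictor
    x = -g_eff * KAPPA * wl^2 * (1/I) dI/dlambda; the slope is <Bz> in G.
    Regression through the origin is an illustrative simplification."""
    didl = np.gradient(stokes_i, wl)
    x = -g_eff * KAPPA * wl**2 * didl / stokes_i
    return np.sum(x * v_over_i) / np.sum(x * x)
```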
The longitudinal magnetic field was measured in two ways: using the entire spectrum,
i.e.\ all available lines excluding lines in emission, or using exclusively the hydrogen lines.
Furthermore, we have carried out Monte Carlo bootstrapping tests.
These are most often applied with the purpose of deriving robust estimates of standard errors.
The measurement uncertainties obtained before and after the Monte Carlo bootstrapping tests were found to be
in close agreement, indicating the absence of reduction flaws.
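A Monte Carlo bootstrap of the regression slope can be sketched as follows; this is a generic illustration of the technique, not the actual reduction code, and the number of resamplings is arbitrary.

```python
import numpy as np

def bootstrap_sigma(x, y, n_boot=2000, seed=1):
    """Monte Carlo bootstrap of the slope of y = b*x: resample (x, y)
    pairs with replacement and return the standard deviation of the
    refitted slopes as a robust 1-sigma estimate."""
    rng = np.random.default_rng(seed)
    n = len(x)
    slopes = np.empty(n_boot)
    for k in range(n_boot):
        idx = rng.integers(0, n, n)
        xs, ys = x[idx], y[idx]
        slopes[k] = np.sum(xs * ys) / np.sum(xs * xs)
    return slopes.std()
```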
Since $\beta$~Cep-like pulsations are frequently found in early B-type stars,
we also checked the stability
of the spectral lines along full sequences of sub-exposures. We have compared
the profiles of several spectral lines recorded in the parallel beam with the retarder waveplate
at $+45^{\circ}$. The same was done for spectral lines recorded in the perpendicular beam.
The line profiles looked identical within the noise.
The results of our magnetic field measurements, both for the entire spectrum
and for the hydrogen lines alone, are presented in
Table~\ref{tab:log_meas}, where we also include in the first four rows the information about the previous
magnetic field measurements presented by \citet{Hubrig2015a}. The authors obtained a non-detection when all
four consecutive observations, recorded as pairs of position angles
separated by 90$^{\circ}$, were combined. On the other hand, after splitting the observations into two
data sets, i.e.\ using the first two pairs and the second two pairs consisting
of observations at the retarder waveplate positions ($-45^{\circ}, +45^{\circ}, +45^{\circ}, -45^{\circ}$),
they obtained 3.0 to 3.8$\sigma$ detections, but with $\left< B_{\rm z} \right>$ values with opposite
sign, indicating a very fast rotation of HD\,345439. The measurements in Table~\ref{tab:log_meas} refer
to observations at just two position angles with a time lapse of 22\,min. In this case,
the null profile cannot be extracted.
The rotation phase presented in the last column of Table~\ref{tab:log_meas} was calculated assuming a period
of 0.77018\,d, which was determined from our period search described in Sect.~\ref{sect:mag}.
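The phases in the last column of Table~\ref{tab:log_meas} follow directly from this ephemeris; a minimal sketch, with the zero point and period as quoted in the text:

```python
def rotation_phase(mjd, mjd0=56925.5425, period=0.77018):
    """Rotation phase in [0, 1), counted from the positive-field
    extremum at MJD 56925.5425 with the 0.77018 d period."""
    return ((mjd - mjd0) / period) % 1.0
```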
\section{Period determination from the magnetic data}
\label{sect:mag}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{HD345439.freq.eps}
\caption{
Frequency periodogram (in d$^{-1}$) for the longitudinal magnetic field measurements of HD\,345439 using both
the entire spectrum and only the hydrogen lines. The window function is indicated by the red color.
}
\label{fig:period}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{HD345439.phase.eps}
\caption{Longitudinal magnetic field variation of HD\,345439 phased with the 0.77018\,d period.
Measurements of the magnetic field using the entire spectrum are presented by red circles,
and those using hydrogen lines by blue circles.
The solid line represents a fit to the data with a mean value for the magnetic field of
$\overline{\left<B_{\rm z}\right>} = 939\pm96$\,G and an amplitude of
$A_{\left<B_{\rm z}\right>} = 1607\pm99$\,G.
For the presented fit, we assume a zero phase corresponding to a positive field extremum at MJD56925.5425.
}
\label{fig:rot}
\end{figure}
Magnetic early B-type stars usually exhibit photometric, spectral, and
magnetic variability with the rotation period (e.g.\ \citealt{Landstreet1978}).
Using Kilodegree Extremely Little Telescope (KELT)
photometry, \citet{Wisn2015} detected a rotation period of $0.7701\pm0.0003$\,d with the zero point
of the ephemeris $JD=2454252.3432$ corresponding to the first minimum in the light curve (see their
Fig.~1). From archival photometric observations in the Wide Angle Search for Planets
(SuperWASP) survey and the All Sky Automated Survey (ASAS),
the authors derived $P_{\rm rot}=0.7695\pm0.0078$\,d and $P_{\rm rot}=0.7702\pm0.0001$\,d,
respectively, with the phase-folded light curves exhibiting similar complex morphology.
The result of our frequency analysis based on the longitudinal magnetic field measurements
presented in Table~\ref{tab:log_meas} and
performed using a non-linear least squares fit to the multiple harmonics utilizing the Levenberg-Marquardt
method \citep{Press1992} with an optional possibility of pre-whitening the trial harmonics is presented in
Fig.~\ref{fig:period}.
Since the results of the measurements using the whole spectrum or exclusively the hydrogen lines are rather similar,
the frequency analysis
was performed using both the measurements on the entire spectrum and those on the hydrogen lines.
To detect the most probable period, we calculated the frequency spectrum and for each trial
frequency we performed a statistical F-test of the null hypothesis for the absence of periodicity
\citep{Seber1977}. The resulting F-statistics can be thought of as the total sum, including covariances, of the ratios
of harmonic amplitudes to their standard deviations, i.e.\ a signal-to-noise ratio.
The highest peak in the periodogram not coinciding with the window function
is detected at a frequency of 1.298\,d$^{-1}$.
Using this value as an initial guess for a least-squares fit of the period,
we obtain a value of $0.77018\pm0.00002$\,d.
This period is in good agreement with
the results of the period search by \citet{Wisn2015} using photometric data.
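The essence of such a period search can be illustrated with a brute-force frequency scan, fitting a single sinusoid at each trial frequency and keeping the best $\chi^2$; this is a deliberately simplified stand-in for the multi-harmonic Levenberg-Marquardt analysis used here, not the actual code.

```python
import numpy as np

def best_frequency(t, y, sigma, freqs):
    """Scan trial frequencies, least-squares fitting
    y = a*cos(2*pi*f*t) + b*sin(2*pi*f*t) + c at each one, and return
    the frequency with the smallest chi^2."""
    best_f, best_chi2 = None, np.inf
    w = 1.0 / sigma**2
    for f in freqs:
        ph = 2 * np.pi * f * t
        A = np.column_stack([np.cos(ph), np.sin(ph), np.ones_like(t)])
        coef, *_ = np.linalg.lstsq(A * np.sqrt(w)[:, None],
                                   y * np.sqrt(w), rcond=None)
        chi2 = np.sum(w * (y - A @ coef) ** 2)
        if chi2 < best_chi2:
            best_f, best_chi2 = f, chi2
    return best_f
```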
In Fig.~\ref{fig:rot}, we present all measurements, those using the entire spectrum and those using only the
hydrogen lines, phased with the rotation period and the best sinusoidal fit calculated for these
measurements. The largest gap in the phase coverage occurs in the phase range between 0.70 and 0.95.
From the sinusoidal fit to our data, we obtain
a mean value for the variable longitudinal magnetic field
$\overline{\left< B_{\rm z}\right>}= 939\pm96$\,G, an amplitude of the field variation
$A_{\left< B_{\rm z}\right>}=1607\pm99$\,G, and a reduced $\chi^2$ value of 3.1. For the presented fit, we assume a zero
phase corresponding to a positive
field extremum at MJD56925.5425$\pm$0.0015.
The observed sinusoidal modulation indicates that the magnetic field structure
exhibits two poles and a symmetry axis, tilted with respect to the rotation axis.
The simplest model for this magnetic field geometry is based on the assumption that the studied stars
are oblique dipole rotators,
i.e., their magnetic field can be approximated by a dipole with its magnetic axis
inclined to the rotation axis.
In Fig.~\ref{fig:rot}, we observe noticeable deviations of our measurements from
the simple dipole model around rotational phase 0.4, which may
indicate a more complex topology of the magnetic field structure.
On the other hand, as we show later in Sect.~\ref{sect:var}, the largest dispersion in
the hydrogen equivalent width measurements appears around the same phase range and is most likely due to an
occultation by circumstellar gas clouds magnetically confined to the magnetic equatorial
plane (e.g.\ \citealt{Hubrig2015b}).
Using the estimate of the stellar radius $R= 4.3\pm 0.3\,R_\odot$ for a B2\,V type star \citep{Harmanez1988},
$v \sin i = 270\pm 20$\,km\,s$^{-1}$ \citep{Wisn2015}, and
the rotation period $P_{\rm rot} = 0.77018\pm0.00002$\,d,
we obtain $v_{\rm eq}=283\pm20$\,km\,s$^{-1}$ and an inclination angle of the stellar rotation axis to the line of
sight $i=73\pm19^{\circ}$.
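This is straightforward arithmetic, sketched below with the solar radius in km as the only extra input; the sketch reproduces the central values quoted above.

```python
import numpy as np

R_SUN_KM = 6.957e5   # nominal solar radius in km
DAY_S = 86400.0

def equatorial_velocity(r_rsun, period_d):
    """v_eq = 2*pi*R / P_rot in km/s."""
    return 2 * np.pi * r_rsun * R_SUN_KM / (period_d * DAY_S)

def inclination_deg(vsini, v_eq):
    """Inclination of the rotation axis from sin i = (v sin i) / v_eq."""
    return np.degrees(np.arcsin(vsini / v_eq))
```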
From the variation of the phase curve for the
field measurements with a mean of
$\overline{\left< B_{\rm z}\right>} = 939\pm 96$\,G and an amplitude of
$A_{\left< B_{\rm z}\right>} = 1607 \pm 99$\,G, we calculate
$\left< B_{\rm z} \right>^{\rm min}= -669\pm139$\,G and
$\left< B_{\rm z} \right>^{\rm max}=2545\pm139$\,G.
Using the definition by \citet{Preston1967}
\begin{equation}
r = \frac{\left< B_{\rm z}\right>^{\rm min}}{\left< B_{\rm z}\right>^{\rm max}}
= \frac{\cos \beta \cos i - \sin \beta \sin i}{\cos \beta \cos i + \sin \beta
\sin i},
\end{equation}
\noindent
we find
$r=-0.263\pm0.05$ and finally following
\begin{equation}
\beta = \arctan \left[ \left( \frac{1-r}{1+r} \right) \cot i \right],
\label{eqn:4}
\end{equation}
\noindent
we calculate the magnetic obliquity angle $\beta=28\pm28^{\circ}$.
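The two Preston relations can be evaluated directly; plugging in the field extrema quoted above reproduces $r\approx-0.26$ and $\beta\approx28^{\circ}$.

```python
import numpy as np

def obliquity_deg(bz_min, bz_max, incl_deg):
    """Preston (1967): r = Bz_min / Bz_max and
    beta = arctan[((1 - r)/(1 + r)) * cot i]."""
    r = bz_min / bz_max
    cot_i = 1.0 / np.tan(np.radians(incl_deg))
    beta = np.degrees(np.arctan((1 - r) / (1 + r) * cot_i))
    return r, beta
```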
%
We can estimate the dipole strength of HD\,345439 following
the model by \citet{Stibbs1950} as formulated by \citet{Preston1967}:
\begin{eqnarray}
B_{\rm d} & = & \left< B_{\rm z}\right>^{\rm max} \left( \frac{15+u}{20(3-u)} (\cos \beta \cos i + \sin \beta \sin i) \right)^{-1}\\
& \ge & \left< B_{\rm z}\right>^{\rm max} \left( \frac{15+u}{20(3-u)}\right)^{-1}.
\end{eqnarray}
Assuming a limb-darkening coefficient of 0.3, typical for the spectral type B2V \citep{Claret2011},
we can give a lower limit for the dipole strength of $B_{\rm d} \ge 8.98\pm0.49$\,kG.
Given the high inclination angle $i=73\pm19^{\circ}$ and the low inferred obliquity angle
$\beta=28\pm28^{\circ}$, both with rather large errors,
the estimation of the dipole strength is rather uncertain,
leading to $12.7^{+15.0}_{-3.7}$\,kG.
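A numerical sketch of the Stibbs/Preston estimate, reproducing both the geometry-independent lower limit ($\approx 9$\,kG) and the full central value ($\approx 12.7$\,kG) for $u=0.3$:

```python
import numpy as np

def dipole_strength_kg(bz_max, beta_deg, incl_deg, u=0.3):
    """Stibbs/Preston oblique-rotator dipole strength (kG) from the maximum
    longitudinal field, obliquity beta, inclination i, and limb-darkening
    coefficient u; also returns the geometry-independent lower limit."""
    geom = (np.cos(np.radians(beta_deg)) * np.cos(np.radians(incl_deg))
            + np.sin(np.radians(beta_deg)) * np.sin(np.radians(incl_deg)))
    factor = 20.0 * (3.0 - u) / (15.0 + u)
    lower = bz_max * factor / 1000.0   # kG
    return lower / geom, lower
```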
\section{Spectral variability}
\label{sect:var}
\citet{Wisn2015} studied the variability of several \ion{He}{i} lines, the H$\alpha$ line and two Brackett
hydrogen lines in the near-infrared (NIR). Although their optical and NIR spectroscopy
did not cover the full rotation cycle, the temporary changes in line profiles showed a clear correlation with
the rotational phase.
As FORS\,2 spectra have a much lower spectral resolution, we were able to carry out a variability study
using only the strongest lines belonging to three elements: hydrogen, helium, and silicon.
\begin{figure}
\centering
\includegraphics[width=0.47\textwidth]{HD345439_H_DynSp_LPV_ResSpGapCorr.eps}
\caption{
Variability of hydrogen lines in the FORS\,2 spectra of HD\,345439 over the rotational cycle.
The middle and lower panels show the overplotted profiles of the hydrogen lines
H$\delta$, H$\gamma$, and H$\beta$
and the differences between individual and the average line profiles. The upper panel presents the
temporal behaviour of these differences. The average profile is shown by the red line.
}
\label{fig:hydr}
\end{figure}
In Fig.~\ref{fig:hydr}, we present the overplotted profiles of the hydrogen lines
H$\delta$, H$\gamma$, and H$\beta$,
the differences between the individual and average profiles, and
the temporal behaviour of these differences in differential dynamic plots.
Significant emission in the wings of the hydrogen lines, best visible in the differential dynamic plot
of H$\beta$, appears at the rotational phase around zero,
which corresponds to the maximum of the positive magnetic field strength.
Notably, we observe a slight asymmetry in the H$\beta$ emission wings, i.e.\ the
blue-shifted emission is somewhat stronger and more extended than the red-shifted emission. This behaviour
of the H$\beta$ line differs from the behaviour of the H$\alpha$, Br$11$, and Br$\gamma$ lines presented by
\citet{Wisn2015} in the phase range 0.86--0.18, indicating a decrease of the blue-shifted emission with increasing
wavelength. The phase range 0.86--0.18 was calculated taking into account the difference
in the adopted zero points of ephemeris between the work of \citet{Wisn2015} and our work.
In Fig.~\ref{fig:ew_hydr}, we present the variability of the equivalent widths (EWs) of
hydrogen absorption lines showing a minimum at rotational phase 0.1--0.2, which is slightly offset from
the positive magnetic pole, and a secondary less pronounced minimum close to the negative magnetic pole.
The presence of intensity minima at these phases is likely related to the stronger hydrogen line profile fill-in
by the emission presumably originating in
the corotating magnetospheric clouds (e.g.\ \citealt{Sundqvist2012,Hubrig2015b}).
As already mentioned in Sect.~\ref{sect:mag}, a large dispersion of EW measurements is detected around the phase 0.4.
\begin{figure}
\centering
\includegraphics[width=0.47\textwidth]{HD345439_Hlines_EWrowCorr.eps}
\caption{
Variability of EWs of hydrogen lines in FORS\,2 spectra of HD\,345439 obtained at eighteen different rotational phases.
}
\label{fig:ew_hydr}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.47\textwidth]{HD345439_HeSi_DynSp_LPV_ResSp_g10corr.eps}
\caption{
Same as in Fig.~\ref{fig:hydr}, but for the helium lines \ion{He}{i}~4388,
\ion{He}{i}~4471, and the silicon line \ion{Si}{iii}~4553.
}
\label{fig:he}
\end{figure}
In Fig.~\ref{fig:he}, we present the overplotted profiles,
the differences between the individual and average profiles, and
the differential dynamic plots
for the helium lines \ion{He}{i}~4388 and
\ion{He}{i}~4471 and the only sufficiently strong silicon line
detected in the low-resolution FORS\,2 spectra, \ion{Si}{iii}~4553.
Distinct differences are detected in the behaviour between the two elements: He absorption lines
are red-shifted in the phase ranges from about 0.55 to 0.70, around phase 0, and from 0.1 to 0.2.
In the phase range 0.3--0.4, He lines and the silicon absorption line \ion{Si}{iii}~4553 are blue-shifted.
The offsets to the blue
and to the red are indicative of the presence of surface He and Si spots similar to the finding
of He and Si spots on the surface of $\sigma$\,Ori\,E \citep{Reiners2000}.
The results of the analysis of the variability of EWs of He and Si lines support the presumption of
the presence of an inhomogeneous He and Si distribution.
As is shown in Fig.~\ref{fig:ew_he}, the Si line strength increases
in the phase range from 0.5 to 0.7, while the intensity of the He lines decreases in the same phase range. For both
elements we do not detect any clear correlation with the location of the magnetic poles.
The error bars of all presented EW measurements are of the order of the symbol size or smaller.
\begin{figure}
\centering
\includegraphics[width=0.47\textwidth] {HD345439He_SiIIIlines_EWcorr.eps}
\caption{
Variability of EWs of He and Si lines in FORS\,2 spectra obtained at eighteen different rotational phases.
}
\label{fig:ew_he}
\end{figure}
\section{Discussion}
\label{sect:disc}
Our spectropolarimetric monitoring using FORS\,2 at the VLT of the rigidly rotating magnetosphere
star HD\,345439 revealed the presence of a strong magnetic field with a minimum polar
strength of 9\,kG reversing over the very short
rotation period of 0.77018\,d. Both the dipole strength and the very short
rotation period of this star are similar to those
discovered in two other stars, HR\,5907 and HR\,7355 with half-day rotation periods
\citep{Grunhut2012,Rivinius2013}, known to belong to
the group called the $\sigma$\,Ori\,E analogs (e.g.\ \citealt{Groote1997}). Apart from
HD\,345439, \citet{Eikenberry2014} identified another rigidly rotating magnetosphere
star, HD\,23478, rotating with a period of 1.04\,d \citep{Jerzykiewicz1993} and a strong
kG magnetic field \citep{Hubrig2015a,Sikora2015}. Among the four known
very fast rigidly rotating magnetosphere stars,
three of them, HD\,345439, HD\,23478, and HR\,5907, show
low obliquities of their magnetic axes. For these stars it is expected that the plasma clouds are located
close to the equatorial plane (e.g. \citealt{Townsend2005}).
Due to the presence of strong kG magnetic fields and fast rotation, such stars can serve as excellent
laboratories to study the magnetic field and element abundance distributions using Zeeman Doppler Imaging, as well as
the effect of the magnetic field configuration on the angular momentum loss and the
associated spin-down.
The study of the variability of the He and Si lines showed the presence of significant chemical
abundance variations across the stellar photosphere. However, no clear correlation with the position of the
magnetic poles is indicated in our data.
Future high-resolution high signal-to-noise spectropolarimetric observations will be worthwhile to determine
the locations of these abundance spots as well as the surface distribution of other elements.
Variable emission wings, most clearly detected in the H$\beta$ line, become stronger at the rotational phase
0, which corresponds to the best visibility of the positive magnetic pole.
The blue-shifted emission appears stronger and more extended than the red-shifted emission. This behaviour,
which differs from the behaviour of the near-IR lines in HD\,345439, was already observed in a few other
stars with magnetospheres (e.g., HR\,5907 -- \citealt{Grunhut2012}; HR\,7355 -- \citealt{Rivinius2013};
HD\,23478 -- \citealt{Sikora2015}; CPD\,$-$62$^{\circ}$\,2124 -- Hubrig et al., in preparation).
Due to the shortness of the rotation periods and the presence of very strong magnetic fields in the
atmospheres of the $\sigma$\,Ori\,E analogs,
these stars are the best candidates to carry out multiwavelength observations at different optical depths
to constrain their magnetospheres in more detail (e.g. \citealt{Carciofi2013}) and to study various atmospheric effects that
interact with a strong magnetic field.
\section*{Acknowledgments}
\label{sect:ackn}
We thank the anonymous referee for useful comments.
Based on observations obtained in the framework of the ESO Prg.\ 097.D-0428(A).
AK acknowledges financial support from RFBR grant 16-02-00604A.
We would like to thank J.~R.~Lomax for valuable comments on the manuscript.
| 8,164 |
\section{Introduction}
Abductive natural language inference ($\alpha$NLI) \citep{DBLP:conf/iclr/BhagavatulaBMSH20} is a newly established branch of natural language inference (NLI) and an interesting task in the area of commonsense reasoning based on natural language processing (NLP). Originating from NLI, which targets the semantic relationship between two sentences, $\alpha$NLI further estimates the abductive reasoning behind each sentence by explicitly deducing its cause. In the past years, $\alpha$NLI has attracted increasing attention as it makes NLP tools more explainable and comprehensible. As of today, typical applications of $\alpha$NLI include knowledge graph completion \citep{DBLP:conf/akbc/YuZSNS20,DBLP:conf/eacl/BauerB21}, question answering \citep{DBLP:conf/aaai/MaIFBNO21}, sentence in-filling \citep{DBLP:conf/acl/HuangZEC20}, knowledge integration \citep{DBLP:conf/iclr/ZhouLSL021}, and so on.
To better motivate this work, we show a comparison between NLI and $\alpha$NLI in Table \ref{table:task}. For NLI, the task is to judge the relationship between the premise statement $\rm{P}$ and the hypothetical sentence $\rm{H}$ based on the information given in $\rm{P}$. Options for the answer are entailment, neutrality, or contradiction. For $\alpha$NLI, a pair of observations ($\rm O_1$ and $\rm O_2$) and some hypotheses (e.g., two competing hypotheses $\rm H^1$ and $\rm H^2$ in the example) are given. The task of $\alpha$NLI is to deduce the more plausible reason between $\rm H^1$ and $\rm H^2$ that can explain the situational change from $\rm O_1$ to $\rm O_2$. In addition to constructing the $\alpha$NLI task, the authors of \cite{DBLP:conf/iclr/BhagavatulaBMSH20} have released a new challenge data set, called ART, and reported comprehensive baseline performance for $\alpha$NLI by directly employing and retraining a solution for NLI, i.e., ESIM+ELMo \citep{DBLP:conf/acl/ChenZLWJI17,DBLP:conf/naacl/PetersNIGCLZ18}. They also found that the pretrained language model can substantially influence the performance of an algorithm and demonstrated test results with recent language models such as GPT \citep{radford2018improving} and BERT \citep{DBLP:conf/naacl/DevlinCLT19}.
\begin{table}[htb]
\begin{center}
\caption{Comparison of NLI tasks and $\alpha$NLI tasks, where E, N, and C represent entailment, neutral and contradiction, respectively} \label{table:task}
\begin{tabular}{|l|l|c|}
\hline
\rowcolor[HTML]{D0CECE}
Task & Context & \multicolumn{1}{l|}{\cellcolor[HTML]{D0CECE}Answer} \\ \hline
& P: A man inspects the uniform of a figure in some East Asian country. & \\
& \quad H: The man is sleeping. & \multirow{-2}{*}{E , N or \textbf{C}} \\
& P: An older and younger man smiling. & \\
& \quad H: Two men are smiling and laughing at the cats playing on the floor. & \multirow{-2}{*}{E , \textbf{N} or C} \\
& P: A soccer game with multiple males playing. & \\
\multirow{-6}{*}{NLI} & \quad H: Some men are playing a sport. & \multirow{-2}{*}{\textbf{E} , N or {C}} \\ \hline
\multicolumn{1}{|c|}{} & $\rm O_1$: Dotty was being very grumpy. & \\
\multicolumn{1}{|c|}{} & \quad $\rm H^1$: Dotty ate something bad. & \\
\multicolumn{1}{|c|}{} & \quad $\rm H^2$: Dotty call some close friends to chat. & \\
\multicolumn{1}{|c|}{\multirow{-4}{*}{$\alpha$NLI}} & $\rm O_2$: She felt much better afterwards. & \multirow{-4}{*}{$\rm{H^1}$ or $ \mathbf {H^2}$} \\ \hline
\end{tabular}
\end{center}
\end{table}
We note that there is still a considerable gap between human performance and the class of baseline models in \cite{DBLP:conf/iclr/BhagavatulaBMSH20}. More recently, \cite{DBLP:conf/sigir/ZhuPLC20} argued that the former framework cannot measure the rationality of the hypotheses, and reformulated $\alpha$NLI as a learning-to-rank task for abductive reasoning. In their approach, RoBERTa \citep{DBLP:journals/corr/abs-1907-11692}, BERT \citep{DBLP:conf/naacl/DevlinCLT19}, and ESIM \citep{DBLP:conf/acl/ChenZLWJI17} were all tested as the pretrained language model. Under this new ranking-based framework, \cite{DBLP:conf/emnlp/PaulF20} introduced a novel multi-head knowledge attention model which learns to focus on multiple pieces of knowledge at the same time, and is capable of refining the input representation in a recursive manner for $\alpha$NLI.
Despite the performance improvement achieved by the ranking framework, there are still some weaknesses calling for further investigation. For instance, a practical training sample (e.g., two observations and four hypotheses) from ART is shown in Figure \ref{fig:method}. It is easy to conclude that both $\rm H^1$ and $\rm H^2$ are correct answers, while the other two ($\rm H^3$, $\rm H^4$) are false. However, in previous ranking-based $\alpha$NLI methods such as $\rm L2R^2$ (Learning to Rank for Reasoning) \citep{DBLP:conf/sigir/ZhuPLC20}, the four hypotheses are trained simultaneously by treating one of the two correct answers as more correct than the other. Similarly, one of the two wrong answers is treated as worse than the other. Meanwhile, although the ranked hypotheses are trained separately, the sum of their probabilities is fixed, e.g., the probability of the correct hypothesis $\rm H^2$ decreases when the probability of $\rm H^1$ increases.
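This probability coupling is a direct consequence of sharing one softmax across all hypotheses; a small numerical illustration (all scores are hypothetical):

```python
import numpy as np

def softmax(scores):
    """Numerically stable softmax over a score vector."""
    e = np.exp(scores - scores.max())
    return e / e.sum()

# Hypothetical scores for (H1, H2, H3, H4): two correct, two wrong.
p_before = softmax(np.array([2.0, 2.0, 0.0, 0.0]))
# Training pushes H1's score up; under a shared softmax the probability
# of the equally correct H2 must drop, even though H2 did not change.
p_after = softmax(np.array([3.0, 2.0, 0.0, 0.0]))
```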
\begin{figure}[htb]
\begin{center}
\includegraphics[width=140mm]{task.pdf}
\end{center}
\vspace{-0.2in}
\caption{Comparison of $\rm L2R^2$ method and IMSL method. Among them, $\rm O_1$, $\rm O_2$ represent observations, $\rm H^1$, $\rm H^2$ are correct answers, $\rm H^3$, $\rm H^4$ are wrong answers, S($\rm H^i$) represents the score of the i-th hypothesis correctness.} \label{fig:method}
\end{figure}
In this paper, we advocate a new approach for $\alpha$NLI as shown in Figure \ref{fig:method}. Our principle of abductive reasoning is constructed on the following two arguments: 1) a hypothesis is correct because its meaning explains the change between the observations. In practice, the causes of situational changes are often diverse, and therefore the answers are seldom unique. It follows that we do not need to intentionally distinguish or rank the correct answers. 2) a hypothesis is wrong because it cannot explain the cause of the event. Therefore, all wrong answers contribute the same, i.e., it is plausible to treat all wrong hypotheses equally when constructing our reasoning network. We argue that the proposed abductive reasoning principle is closer to human commonsense reasoning than previous ranking-based approaches.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=105mm]{model.pdf}
\end{center}
\caption{Comparison of the interaction between the traditional model and the IMSL model, where S ($\rm H^i$) represents the input sequence containing the i-th hypothesis} \label{fig:comparison}
\end{figure}
Based on the above reasoning, we propose a new abductive reasoning model called Interactive Model with Structural Loss (IMSL) for $\alpha$NLI as shown in Figure \ref{fig:comparison}.
The IMSL model mainly consists of two components: an interactive model and a structural loss. On the one hand, note that in the process of extracting the language features of an arbitrary hypothesis, its relationship to the other hypotheses should also be considered, because the hypotheses are often semantically related \citep{pearl1986evidential}. To this end, we design an information interaction layer to capture the relationship between different hypotheses and produce more discriminative language feature vectors. On the other hand, we construct a new loss function called ``joint softmax focal loss'', inspired by a recent work \citep{DBLP:conf/iccv/LinGGHD17}. It is essentially a structural softmax-based focal loss, formed by sequentially constructing a loss for each score group composed of one correct hypothesis and all wrong hypotheses. Compared with conventional models, we argue that IMSL is more powerful for the task of $\alpha$NLI as it jointly exploits the rich relations among competing hypotheses. The main technical contributions of this work can be summarized as follows.
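A minimal numpy sketch of this grouping idea: each correct hypothesis is scored against all wrong ones under its own softmax, and a focal term down-weights easy groups. This is our reading of the loss described above, not the released implementation, and the value of $\gamma$ is a placeholder.

```python
import numpy as np

def joint_softmax_focal_loss(scores, labels, gamma=2.0):
    """For each correct hypothesis, form a group with all wrong hypotheses,
    take a softmax over that group, and accumulate a focal loss
    -(1 - p)^gamma * log(p) on the correct one; the group losses are summed.
    Illustrative sketch of the loss described in the text."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, int)
    wrong = scores[labels == 0]
    loss = 0.0
    for s in scores[labels == 1]:
        group = np.concatenate(([s], wrong))
        e = np.exp(group - group.max())
        p = e[0] / e.sum()          # probability of the correct hypothesis
        loss += -(1.0 - p) ** gamma * np.log(p)
    return loss
```

Note that because each correct hypothesis gets its own group, raising one correct score no longer suppresses the probability of another correct hypothesis.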
1) For the $\alpha$NLI task, we claim that the correct hypotheses of a given observation pair are often diverse and there is no need to tell them apart, while the wrong hypotheses all contribute the same to the task. We therefore regroup, instead of rank, all hypotheses, as shown in Figure \ref{fig:method}.
2) Aiming at the problem of the incorrect probability distribution among correct hypotheses during training, a joint softmax focal loss is proposed. For the hypothesis groups formed in the regrouping process, we design a softmax-based focal loss for each group and combine them into a joint loss.
3) In view of the problem that traditional models cannot capture the linguistic relationship between different hypotheses, we add an information interaction layer between the hypothesis models. The information interaction layer increases the area under the receiver operating characteristic curve (AUC) by about 5\%.
4) Impressive abductive reasoning performance is achieved by IMSL when tested using RoBERTa as the pretrained language model. The best language model, DeBERTa \citep{DBLP:conf/iclr/HeLGC21}, is not tested due to the constraint of our limited GPU resources (four RTX 2080Ti cards). In our experiments, compared with all recent algorithms whose code has been made publicly available, the IMSL method achieves state-of-the-art results in ACC and AUC on both the validation set and the test set. Besides, on the public leaderboard\footnote{\url{https://leaderboard.allenai.org/anli/submissions/public}}, IMSL is the best non-DeBERTa-based algorithm and ranks 4/56 among all (both DeBERTa-based and non-DeBERTa-based) competing methods.
\section{Related Work}
The $\alpha$NLI task solves an abductive reasoning problem based on natural language inference (NLI). In the past years, there has been an explosion of NLI benchmarks since the Recognizing Textual Entailment (RTE) Challenges were introduced by \cite{DBLP:conf/mlcw/DaganGM05} in the early 2000s. Then, in order to find the most reasonable explanation of incomplete observations, \cite{DBLP:conf/iclr/BhagavatulaBMSH20} studied the feasibility of language-based abductive reasoning and proposed the task of $\alpha$NLI. It pays more attention to the information provided in the premise than the RTE task does. In traditional RTE, the main task is to judge the relationship between the premise sentence and the hypothetical sentence, whereas the main objective of $\alpha$NLI is to select the most plausible hypothesis among those given two observations.
$\alpha$NLI is the first language-based abductive reasoning study. This shift from logic-based to language-based reasoning draws inspiration from a significant body of work on language-based entailment \citep{DBLP:conf/emnlp/BowmanAPM15,DBLP:conf/naacl/WilliamsNB18}, language-based logic \citep{lakoff1970linguistics,DBLP:conf/acl/MacCartneyM07}, and language-based commonsense reasoning \citep{DBLP:conf/naacl/MostafazadehCHP16,DBLP:conf/emnlp/ZellersBSC18}. In addition to establishing $\alpha$NLI, \cite{DBLP:conf/iclr/BhagavatulaBMSH20} also released a new challenge dataset, ART, which can be visited through the first footnote in this paper. The authors also formulated the task as a multiple-choice task to support easy and reliable automatic evaluation: given a context, the task is to choose the more reliable explanation from a given pair of candidate hypotheses.
However, discriminating correct from wrong does not measure the plausibility of a hypothesis in $\alpha$NLI \citep{DBLP:conf/sigir/ZhuPLC20}. To fully model the plausibility of the hypotheses, Zhu et al. turned to the perspective of ranking and proposed a novel learning to rank for reasoning ($\rm L2R^2$) approach for the task. The authors rank the hypotheses based on the number of times they appear in the dataset, and use pairwise as well as listwise ranking losses: the pairwise losses include Ranking SVM \citep{herbrich2000large}, RankNet \citep{DBLP:conf/icml/BurgesSRLDHH05}, and LambdaRank \citep{DBLP:conf/nips/BurgesRL06}, while the listwise losses include ListNet \citep{DBLP:conf/icml/CaoQLTL07}, ListMLE \citep{DBLP:conf/icpr/LiL0R20}, and ApproxNDCG \citep{DBLP:journals/ir/QinLL10}. Experiments on the ART dataset show that reformulating $\alpha$NLI as a ranking task brings clear improvements. Subsequently, \cite{DBLP:conf/emnlp/PaulF20} proposed a novel multi-head knowledge attention model that encodes semi-structured commonsense inference rules and learns to incorporate them into a transformer-based reasoning cell. The authors further show that a model using counterfactual reasoning is useful for predicting abductive reasoning tasks. Accordingly, they established a new task called Counterfactual Invariance Prediction (CIP) and provided a new dataset for it.
In addition to abductive reasoning models, the pretrained language model plays an important role in the $\alpha$NLI task. Early approaches to language inference were built directly on simple statistical measures such as bag-of-words and word matching. Later, various neural network architectures were used to discover useful features in language, such as word2vec \citep{DBLP:journals/corr/abs-1301-3781} and GloVe \citep{DBLP:conf/emnlp/PenningtonSM14}. Recent works have developed contextual word representation models, e.g., Embeddings from Language Models (ELMo) by \cite{DBLP:conf/naacl/PetersNIGCLZ18} and Bidirectional Encoder Representations from Transformers (BERT) by \cite{DBLP:conf/naacl/DevlinCLT19}. The original implementation and architecture of BERT have been outperformed by several variants and other transformer-based models, such as RoBERTa, DeBERTa, and UNIMO. RoBERTa \citep{DBLP:journals/corr/abs-1907-11692} modifies the training procedure of BERT and uses larger batches and more data for training. DeBERTa \citep{DBLP:conf/iclr/HeLGC21} uses a disentangled attention mechanism and an enhanced mask decoder to improve on BERT and RoBERTa. To adapt effectively to both unimodal and multimodal understanding tasks, \cite{DBLP:conf/acl/LiGNXLL0020} proposed the UNIMO model. In this paper, however, restricted by our computing resources, RoBERTa is selected as our language model.
\section{Interactive Model with Structural Loss (IMSL) Method}
The IMSL model consists of two components: a context coding layer and an information interaction layer (the backbone network), together with a joint softmax focal loss (the objective function). The model architecture and the loss function are described in detail below.
\subsection{Information Interaction Model}
\begin{figure}[htb]
\begin{center}
\includegraphics[width=120mm]{ModelStructure.pdf}
\end{center}
\vspace{-0.1in}
\caption{The proposed IMSL model consists of a context coding layer (using a pre-trained model) and an information interaction layer (characterizing the relationship among different hypotheses).} \label{fig:model}
\vspace{-0.1in}
\end{figure}
\textbf{Model input:} Under the framework of IMSL, a training sample $X$ includes two given observations (i.e., $\rm O_1$ and $\rm O_2$) and a group of candidate hypotheses denoted by $\mathbf{H}=\left\{\rm{H^{j}}\right\}_{j=1}^{N}$ ($N$ is the number of candidate hypotheses). Binary labels $\textbf{y}=\left\{y_{j}\right\}_{j=1}^{N}$ are assigned by setting $y_{j}=1$ when $\rm{H}^j$ is correct and $y_{j}=0$ when $\rm{H}^j$ is wrong. The task of abductive reasoning can then be characterized by a mapping from $X$ to $\textbf{y}$.
For explicitly estimating the relation between each hypothesis and the two observations, we can construct a triad for each hypothesis as $x_{j}=\left[\mathrm{O}_{1} ; \rm{H^j} ; \mathrm{O}_{2}\right]\left(\rm{H^j} \in \mathbf{H}\right)$. This way, each sample in the training set $X$ can be represented by $[x_{1},x_{2},\cdots,x_{N}]\rightarrow [y_{1},y_{2},\cdots,y_{N}]$.
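As an illustration, the triad construction above can be sketched as follows (a minimal sketch; the helper name, separator token, and example sentences are ours, not from the paper):

```python
# Illustrative sketch of building the triads x_j = [O1; H^j; O2] for one
# alphaNLI sample with N candidate hypotheses, paired with binary labels y_j.

def build_triads(o1, o2, hypotheses, correct):
    """Return ([O1; H_j; O2] inputs, binary labels y_j) for all hypotheses."""
    xs = [f"{o1} [SEP] {h} [SEP] {o2}" for h in hypotheses]
    ys = [1 if j in correct else 0 for j in range(len(hypotheses))]
    return xs, ys

xs, ys = build_triads(
    "Dotty was being very grumpy.",          # observation O1
    "Dotty was happy the rest of the day.",  # observation O2
    ["She ate something bad.", "She bought a new hat."],  # candidates H^1, H^2
    correct={0},                             # index set of correct hypotheses
)
```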
\textbf{Context coding layer:} We use a pre-trained language model (RoBERTa-large in our experiments) to compute the contextual representation of the text. For each word in a single input $x_j$, an embedding vector with context information is generated; for the whole input $x_j$, this yields a sentence-level embedding matrix $v_{j}=\operatorname{encode}\left(x_{j}\right)$, where $\operatorname{encode(\cdot)}$ denotes the pre-trained encoder. We then sum over the word dimension of the feature matrix to obtain the feature vector $z_j$.
\textbf{Information interaction layer:} Traditional models only consider one single input $x_j$ during scoring as shown in Fig. \ref{fig:comparison}, which makes it difficult to capture the relationship between different inputs (e.g., $x_j$ and $x_{j+1}$). To exploit the dependency between two different inputs, we propose to construct a novel information interaction layer as follows. First, a pair of feature vectors $z_j$ and $z_{j+1}$ can be generated after $x_j$ and $x_{j+1}$ are passed through the context encoding layer. Second, we plug $z_j$ and $z_{j+1}$ into the information interaction layer and use BiLSTM to acquire the distributed feature representation $f_j$ and $f_{j+1}$. Finally, a fully connected module outputs the corresponding scores $s_j$ and $s_{j+1}$. A flowchart of the context coding and information interaction layers is shown in Figure \ref{fig:model}.
To efficiently use contextual information, we feed $z_j$ into a BiLSTM, which aims at exploiting the dependency relationships between the feature vectors. The BiLSTM runs a forward LSTM and a backward LSTM over each sequence to obtain two separate hidden states, $\overrightarrow{{h}_{j}}$ and $\overleftarrow{h}_{j}$. The key idea of the BiLSTM is to generate the final output by concatenating these two hidden states:
\vspace{-0.02in}
\begin{equation}
\begin{aligned}
{h}_{j}=\left[\overrightarrow{{h}_{j}}, \overleftarrow{{h}}_{j}\right].
\end{aligned}
\end{equation}
\vspace{-0.02in}
After the BiLSTM layer, we obtain a bidirectional hidden state vector $h_j$ and then use a fully connected layer to generate the final score $s_j$. For computational efficiency, a linear form is adopted for the prediction score:
\begin{equation}
\begin{aligned}
s_{j}=W_{j} \cdot h_{j}+b_{j},
\end{aligned}
\end{equation}
where $W_{j} \in \mathbb{R}^{2 d \times d}, b_{j} \in \mathbb{R}^{d}$.
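A minimal numpy sketch of the concatenation and linear scoring step above — the BiLSTM hidden states are random placeholders standing in for the actual recurrent outputs, and the weight shapes are illustrative (a $1 \times 2d$ map to a scalar score), not taken from the paper:

```python
# Sketch: concatenate forward/backward hidden states, then score linearly.
import numpy as np

rng = np.random.default_rng(0)
d = 4                                # hidden size (illustrative)
h_fwd = rng.normal(size=d)           # placeholder for forward LSTM state
h_bwd = rng.normal(size=d)           # placeholder for backward LSTM state

h = np.concatenate([h_fwd, h_bwd])   # h_j = [h_fwd, h_bwd], shape (2d,)
W = rng.normal(size=(1, 2 * d))      # fully connected layer weights
b = rng.normal(size=1)               # bias
s = (W @ h + b).item()               # scalar prediction score s_j
```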
\subsection{Joint Softmax Focal Loss Function}
Based on the output score layer of the IMSL model, we design a new structural loss function that follows the principle of abductive reasoning. Instead of a ranking-based approach, the proposed loss for each sample is formulated as a linear combination of softmax focal losses over several rearranged groups, which we call the joint softmax focal loss. The detailed procedure for generating the rearranged groups is shown in Figure \ref{fig:cross}. We note that it is unnecessary to compare the correct hypotheses with one another; exploring the relationship between each correct hypothesis and the wrong hypotheses is sufficient for score prediction. Therefore, we rearrange the set of $N$ prediction scores into several groups, each of which contains a single correct hypothesis. A toy example is given in Figure \ref{fig:cross}, where two hypotheses are correct and all others are wrong. In this example, the $N$ scores are divided into two groups, one for each correct hypothesis.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=100mm]{crosssoftmax.pdf}
\end{center}
\caption{An example of rearranged groups for the joint softmax focal loss.} \label{fig:cross}
\end{figure}
With the above construction, we can first apply the softmax operation to each rearranged group and then combine them into a joint loss function. In the first step, each prediction score is given by:
\begin{equation}
\hat y_n = \begin{cases}
\dfrac{e^{s_n^1}}{e^{s_n^1} + \sum\nolimits_i e^{s_i^0}}, & \text{if } y_n = 1,\\[3mm]
\displaystyle\sum\nolimits_j \frac{e^{s_n^0}}{K\left(e^{s_j^1} + \sum\nolimits_i e^{s_i^0}\right)}, & \text{if } y_n = 0,
\end{cases}
\end{equation}
where $y_n$ is the correct/wrong label, and $\hat{y}_{n}$ represents a predicted value. The normalization factor $K = \sum\nolimits_j {{y_j}}$ represents the number of correct hypotheses. Note that $s_{i}^{0}$ represents the scores of the wrong hypotheses, where $i$ is the position of the false label. Similarly, $s_{i}^{1}$ indicates the score of the correct hypotheses.
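The piecewise prediction rule above can be sketched in plain Python as follows (our own minimal implementation for illustration; variable names are ours): each rearranged group pairs one correct hypothesis with all wrong ones, and each wrong hypothesis averages its softmax mass over the $K$ groups it belongs to.

```python
import math

def rearranged_softmax(s1, s0):
    """s1: scores of correct hypotheses; s0: scores of wrong hypotheses."""
    K = len(s1)                            # number of correct hypotheses
    z0 = sum(math.exp(s) for s in s0)      # total mass of wrong hypotheses
    # y_hat for a correct hypothesis: softmax within its own group
    y_correct = [math.exp(s) / (math.exp(s) + z0) for s in s1]
    # y_hat for a wrong hypothesis: averaged over all K groups
    y_wrong = [sum(math.exp(s) / (K * (math.exp(sj) + z0)) for sj in s1)
               for s in s0]
    return y_correct, y_wrong

yc, yw = rearranged_softmax(s1=[2.0, 1.5], s0=[0.3, -0.2, 0.1])
```

With a single correct hypothesis ($K=1$) the predicted values of the group sum to one, as for an ordinary softmax.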
In addition to the softmax loss, we borrow the idea of focal loss from \cite{DBLP:conf/iccv/LinGGHD17} and introduce a balancing factor $a \in (0,1)$ to control the relative weight of the correct and wrong hypotheses. Here, $a$ weights the correct hypotheses and $1-a$ weights the wrong hypotheses, i.e.,
\begin{equation}
\begin{aligned}
\beta_{n}=y_{n} \cdot a+\left(1-y_{n}\right)(1-a).
\end{aligned}
\label{eq:bal}
\end{equation}
Putting things together, we can rewrite the joint softmax focal loss as
\begin{equation}
\begin{aligned}
{\cal L }=F_{l}(y, \hat{y})=-\sum_{n} \beta_{n} \cdot\left(1-p_{n}\right)^{\gamma} \cdot \log \left(p_{n}\right),
\end{aligned}
\label{eq:focal}
\end{equation}
where
\begin{equation}
\begin{aligned}
p_{n}=y_{n} \cdot \hat{y}_{n}+\left(1-y_{n}\right)\left(1-\hat{y}_{n}\right)+\varepsilon.
\end{aligned}
\end{equation}
Here, the parameter $\gamma$ regulates the model's attention to hard hypotheses during training; as suggested in \cite{DBLP:conf/iccv/LinGGHD17}, $\gamma \in [0.5,5]$. A small positive constant $\varepsilon = 10^{-8}$ is used to avoid numerical instability. In practice, both $a$ and $\gamma$ are hyper-parameters tuned by cross-validation.
For the example shown in Figure \ref{fig:cross}, by assuming that the softmax focal losses for the two groups are ${\cal L}_{group1} $ and ${\cal L}_{group2} $, we can obtain the overall loss by ${\cal L}_{sample} = {\cal L}_{group1} + {\cal L}_{group2} $. Furthermore, the total loss for all training samples can be estimated by the sum of the losses over individual samples.
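The loss computation described above — balancing factor $\beta_n$, focal term, and summation over hypotheses — can be sketched as follows (our own minimal implementation; the label/prediction values in the example are arbitrary):

```python
import math

def joint_softmax_focal_loss(y, y_hat, a=0.55, gamma=2.0, eps=1e-8):
    """y: binary labels; y_hat: predicted values from the rearranged softmax."""
    loss = 0.0
    for yn, yhn in zip(y, y_hat):
        beta = yn * a + (1 - yn) * (1 - a)         # balancing factor beta_n
        p = yn * yhn + (1 - yn) * (1 - yhn) + eps  # p_n as defined above
        loss += -beta * (1 - p) ** gamma * math.log(p)
    return loss

# confident correct predictions are down-weighted by the focal term
confident = joint_softmax_focal_loss([1, 0], [0.99, 0.01])
uncertain = joint_softmax_focal_loss([1, 0], [0.60, 0.40])
```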
\section{Experiment}
In this section, the experimental results on public data sets are presented to evaluate the method proposed in this paper.
In recent years, more and more pretrained models have been proposed, such as DeBERTa \citep{DBLP:conf/iclr/HeLGC21} and UNIMO \citep{DBLP:conf/acl/LiGNXLL0020}; they use more training data and have more parameters. Due to limited computational resources, we did not conduct comparative experiments with these high-performing but computationally demanding pretrained models.
\textbf{Evaluation indicators:} ACC and AUC are the most common evaluation indicators. Since ACC alone cannot properly evaluate a model under a skewed sample distribution, AUC is added as an additional index; AUC is statistically consistent and more discriminative than ACC.
\subsection{Experimental setup}
\textbf{Tasks and settings:} The $\alpha$NLI task uses the ART dataset, the first large-scale benchmark dataset for abductive reasoning in narrative texts. It consists of about 20,000 observations and about 200,000 pairs of hypotheses. The observations come from a collection of manually curated stories, and the hypotheses were collected through crowdsourcing. In addition, the candidate hypotheses for each narrative context in the test set are selected through an adversarial filtering algorithm with BERT-L(arge) as the adversary. The input and output formats are shown in Table \ref{table:form}.
\begin{table}[htb]
\caption{The format of data input and output in $\alpha$NLI task} \label{table:form}
\begin{center}
\begin{tabular}{c|c|c}
\hline
Task & Input Format & Output Format \\ \hline
$\alpha$NLI & {[}CLS{]} $\rm O_1$ {[}SEP{]} $\rm H^i$ {[}SEP{]} $\rm O_2$ {[}SEP{]} & $\rm H^1$ or $\rm H^2$ \\ \hline
\end{tabular}
\end{center}
\end{table}
\textbf{Hyperparameter details:} Because the amount of training data varies across settings, the focusing parameter and the number of training batches vary as well; for each setting we select the hyperparameters that produce the best performance on the test set. The learning rate is fixed at 1e-6 and the batch size at 1. Training uses the joint softmax focal loss. On the validation set, ACC and AUC are used for evaluation; test-set performance is evaluated using the results of five different random seeds.
\textbf{Baseline:} We have used the following four baseline models for comparison:
A) BERT \citep{DBLP:conf/naacl/DevlinCLT19} is a language model trained with masked language modeling and next sentence prediction objectives: it masks certain words in the input and trains the model to predict the masked words.
B) RoBERTa \citep{DBLP:journals/corr/abs-1907-11692} has the same structure as BERT but drops next sentence prediction (NSP). RoBERTa-B(ase) and RoBERTa-L(arge) use more data and larger batches for training.
C) Learning to Rank for Reasoning ($\rm L2R^2$) \citep{DBLP:conf/sigir/ZhuPLC20} reformulated the $\alpha$NLI task as a ranking problem, using a learning-to-rank framework that includes a score function and a loss function.
D) Multi-Head Knowledge Attention (MHKA) \citep{DBLP:conf/emnlp/PaulF20} proposed a new multi-head knowledge attention model and a novel knowledge integration technique.
\subsection{Experimental results}
Our experimental results on the $\alpha$NLI task are shown in Table \ref{table:results}. The baseline models are Majority, GPT, BERT-L, RoBERTa-L, $\rm L2R^2$, and MHKA. IMSL improves ACC by about 3.5\% and AUC by about 7.5\% over RoBERTa-L. The improvement in ACC is mainly attributed to the new IMSL loss function, while the improvement in AUC is mainly attributed to the exploitation of the relationships between hypotheses by the proposed information interaction layer.
\begin{table}[htb]
\caption{Results on the $\alpha$NLI task: The results are quoted from \cite{DBLP:conf/iclr/BhagavatulaBMSH20}, L=Large} \label{table:results}
\begin{center}
\begin{tabular}{llll}
\hline
Model & Dev(ACC\%) & Dev(AUC\%) & Test(ACC\%) \\ \hline
Human Perf & - & - & 91.40 \\ \hline
Majority & 50.80 & - & - \\
GPT & 62.70 & - & 62.30 \\
BERT-L & 69.10 & 69.03 & 68.90 \\
RoBERTa-L & 85.76 & 85.02 & 84.48 \\
$\rm L2R^2$ & 88.44 & 87.53 & 86.81 \\
MHKA & 87.85 & - & 87.34 \\
\rowcolor[HTML]{E7E6E6}
Ours & & & \\
RoBERTa-L+IMSL & \textbf{89.20} & \textbf{92.50} & \textbf{87.83} \\ \hline
\end{tabular}
\end{center}
\end{table}
\textbf{Low-resource setting}: Following the low-resource protocol of the MHKA model, we test robustness to sparse data on the $\alpha$NLI task by training with \{1,2,5,10,100\}\% of the training data. Figure \ref{fig:lowset} compares our model against MHKA, RoBERTa-Large, and $\rm L2R^2$. The results show that our model achieves better performance in low-resource settings. When using only 1\% of the training data, the improvement brought by IMSL is the most significant: about 4\% higher than $\rm L2R^2$ and MHKA. Overall, our method performs consistently better than the competing methods on low-resource data.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=105mm]{bar.pdf}
\end{center}
\vspace{-0.1in}
\caption{The accuracy of $\alpha$NLI under low resource settings}
\vspace{-0.1in}
\label{fig:lowset}
\end{figure}
\section{Detailed analysis via ablation study}
To show the contribution of each module more clearly, we conduct comparative experiments on both the information interaction layer and the hyperparameters.
\subsection{Ablation study of the information interaction layer}
First, we conduct an ablation study on the BiLSTM to investigate the role played by the information interaction layer. The focal loss hyperparameters are fixed to isolate the effect of the BiLSTM. The experiments show that adding the BiLSTM greatly improves AUC but has no significant impact on ACC. Figure \ref{fig:nointer} visualizes the results on the validation set: the abscissa is the score of hypothesis 1 and the ordinate is the score of hypothesis 2. The red points in the upper left corner correspond to a subset of correct hypotheses, as do the blue points in the lower right corner. The introduction of the information interaction layer pushes the points further apart, toward the four corners, so the margin between positive and negative samples is larger, implying improved discriminative power.
\begin{figure}[htb]
\begin{center}
\subfigure[using the information interaction layer ]{\includegraphics[width=55mm]{melt-IMSL.pdf}}
\quad
\subfigure[without the information interaction layer ]{\includegraphics[width=55mm]{melt-bilstm.pdf} }
\end{center}
\caption{Score distribution using and without using the information interaction layer} \label{fig:nointer}
\end{figure}
\subsection{Parameter comparison}
To select the focal loss parameters, we carried out several experiments on the two hyperparameters: the balancing factor and the focusing parameter.
The focusing parameter $\gamma$ in Eq. \eqref{eq:focal} automatically down-weights the contribution of easy examples during training and rapidly focuses the model on hard examples, while the balancing factor $0<a<1$ in Eq. \eqref{eq:bal} controls the tradeoff between correct and wrong hypotheses. Figure \ref{fig:heat} shows the ACC of the IMSL model for different focusing parameters and balancing factors. In this study, \{1, 2, 3\} is used as the candidate set for the focusing parameter and \{0.45, 0.5, 0.55\} for the balancing factor. The most effective parameter pair is $\gamma= 2$, $a = 0.55$.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=75mm]{heatmap.pdf}
\end{center}
\caption{The ACC results of adjusting the balancing factor $a$ and the focusing parameter $\gamma$.} \label{fig:heat}
\end{figure}
\section{Summary}
In this paper, the IMSL method is proposed for commonsense abductive reasoning. It includes an information interaction layer that captures the relationships between different hypotheses, and a joint loss built on our proposed grouping of correct and wrong hypotheses. Experimental results show that IMSL achieves better ACC and AUC on the $\alpha$NLI task; in low-resource settings in particular, it significantly improves accuracy.
\section{introduction}
In contrast to bulk isotropic superconductors, where the longitudinal
collective modes (i.e., plasmons) associated with the density fluctuation
response are of virtually no interest or significance in the
context of superconducting properties, there has been substantial
recent theoretical interest in the longitudinal collective mode
spectra of layered high-$T_c$ superconductors
\cite{fertig,hwang,cote,wu,for,das}. This interest arises
primarily from the highly anisotropic two dimensional layered
structure of these materials, which, in principle, allows for sub-gap
plasmon modes residing inside the superconducting gap in the low wave
vector regime. This gives rise to interesting collective mode behavior
\cite{fertig,hwang,cote,wu,for,das,kake,kado,dunm,buis}
in layered anisotropic superconductors which have no analogs in bulk
isotropic superconductors. In this paper we consider the effect of
having multilayer complex unit cells, as existing in YBCO and BISCO
high-$T_c$ superconductor materials, on the longitudinal electronic
collective mode spectrum. We find a number of collective modes arising
from the complex unit cell structure, and comment on their possible
experimental relevance. One of our goals is to critically assess
whether observable electronic collective mode behavior could shed some
light on the interesting and unusual mechanism producing high $T_c$
superconductivity in these materials. The other goal is to predict
novel collective mode behavior peculiar to layered superconductors
with no analogs in bulk systems.
The collective mode spectrum is characterized by the energy dispersion
($\hbar = 1$ throughout this paper)
relation $\omega \equiv \omega(k_{\|},k_z)$, which we calculate in
this paper, where $k_{\|} \equiv |{\bf k}_{\|}|$ is the two dimensional wave
vector in the so-called {\it a-b} plane (along the layer) and $k_z$ is the
wave vector along the $c$-axis, so that $k_z = |{\bf k}| \cos\theta$ and
$k_{\|} = |{\bf k}| \sin \theta$, with $\theta$ the angle between ${\bf k}$
and the $c$-axis. Because of the strong {\it a-b} plane versus {\it
c}-axis anisotropy in these materials, the dependence of the collective
mode frequency on $k_{\|}$ and $k_z$ is very different. [We ignore any
anisotropy, which is invariably rather small, in the {\it a-b} plane
and assume intralayer planar isotropy, i.e., $\omega({\bf k}_{\|},k_z) \equiv
\omega(k_{\|},k_z)$.] The structural model we employ considers the layered
superconductor to be a one dimensional superlattice along the $z$
direction ($c$-axis) composed of a periodic system of bilayer unit
cells with an intracell layer separation of $c$ and a superlattice
period of $d$ ($>c$). The two active layers separated by a distance
$c$ within each unit cell are taken to
be identical and are assumed to be planar two dimensional electron gas
(2D EG) systems of charge density $n_s$ per unit area and zero layer
thickness each. In most of our calculations presented in this paper
the intercell electron hopping (or tunneling) between neighboring unit
cells (separated by a distance $d$)
is neglected (i.e., we neglect any superlattice band width along
the $z$ direction), but we critically examine the effect of intracell
electron hopping between the two layers within each unit cell on the
collective mode dispersion. We comment upon the effect of a finite
{\it inter}cell hopping in the conclusion of this article. We include
in our theory the long range (intracell and intercell) Coulomb
interaction among all the layers. This long range Coulomb interaction,
which couples all the layers, is of great importance in determining
the collective mode spectrum. We also include in our theory of
collective mode dispersion the effect of the superconducting pairing
interaction, assumed in our model to be a short-range in-plane
attractive interaction of the BCS-Fermi liquid type, which is treated
in a fully gauge invariant Nambu-Gorkov formalism. Our work is thus a
generalization of the earlier work \cite{fertig,hwang} by Fertig and
Das Sarma, and by Hwang and Das Sarma (who considered
only the monolayer superconducting
superlattice situation with only a single layer per unit cell) to a
complex unit cell situation with two layers per unit cell. To keep the
situation simple we will consider only the $s$-wave gap symmetry
\cite{fertig},
which, according to ref. \onlinecite{hwang} gives a very good account
of the collective
mode dispersion even for the $d$-wave case except at very large wave
vectors. Following the work of Fertig and Das Sarma \cite{fertig}
there has been a
great deal of theoretical and experimental work
\cite{hwang,cote,wu,for,das,kake,kado,dunm,buis} on the electronic
collective mode properties in layered superconducting materials, but
the specific issue considered in this paper has not earlier been
discussed in the
literature for a multilayer superconducting system. It should also be
pointed out
that, while the focus of our work is the collective mode behavior in
layered high-$T_c$ cuprate superconductors (which are {\it intrinsic}
superlattice systems due to their highly anisotropic crystal structure
with $CuO$ layers), our results equally well describe {\it artificial}
superconducting superlattices made of multilayer metallic structures
provided $k_{\|}$ and $k_{z}$ are good wave vectors in the system.
The collective mode dispersion in bilayered superconducting
superlattices is quite complicated.
There are essentially two different branches of long wavelength
collective modes: in-phase ($\omega_+$) modes and out-of-phase
($\omega_-$) modes, depending on whether the electron density
fluctuations in the two layers are in-phase or
out-of-phase. Each of these collective modes disperses as a function
of wave vector, showing strong anisotropy in $k_{\|}$ and $k_z$
dispersion. In particular, the limits ($k_z=0$, $k_{\|} \rightarrow
0$) and ($k_z \rightarrow 0 $, $k_{\|} = 0$) are {\it not} equivalent
because the $k_z=0$ three dimensional limit is singular.
For $k_z=0$ the in-phase $\omega_+$ collective mode is a gapped three
dimensional plasma mode at long wavelengths ($k_z=0$, $k_{\|}
\rightarrow 0$) by virtue of the Higgs mechanism arising from the long
range Coulomb interaction coupling all the layers. This mode
characterizes the long wavelength in-phase charge fluctuations of all
the layers. For non-zero $k_z$ the $\omega_+$ mode vanishes at long
wavelengths ($k_{\|} \rightarrow 0$) because at finite $k_z$ the
system is essentially two dimensional. The out-of-phase $\omega_-$
collective mode branch arises purely from the bilayer character of the
system and indicates the out-of-phase density fluctuations in the two
layers. In the absence of any interlayer hopping (either intracell or
intercell) the $\omega_-$ mode is purely acoustic in nature vanishing
at long wavelengths ($k_{\|} \rightarrow 0 $) as $\omega_-(k_z,k_{\|}
\rightarrow 0) \sim O(k_{\|})$ independent of the value of $k_z$. For
finite interlayer tunneling $\omega_-$ exhibits a tunneling gap at
$k_{\|} = 0$. The Higgs gap for $\omega_+(k_z=0, k_{\|} \rightarrow
0)$ is not qualitatively affected by intracell interlayer tunneling
because the three dimensional plasma energy is usually substantially
larger than the tunneling energy.
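For orientation, it is useful to recall the well-known normal-state plasmon
dispersion of the simpler {\it monolayer} superlattice (one layer per unit
cell, period $d$, no tunneling; see, e.g., Ref. \onlinecite{fertig}), which
already exhibits the singular $k_z=0$ limit discussed above:
\begin{equation}
\omega^2(k_{\|},k_z) = \frac{2\pi n_s e^2}{\kappa m}\, k_{\|}\,
\frac{\sinh (k_{\|}d)}{\cosh (k_{\|}d) - \cos (k_z d)}.
\end{equation}
For $k_z=0$, $k_{\|} \rightarrow 0$ this reduces to the three dimensional
plasma frequency $\omega_p^2 = 4\pi (n_s/d) e^2/\kappa m$, whereas for any
finite $k_z$ it yields an acoustic mode $\omega \sim O(k_{\|})$, in
accordance with the limiting behavior described above.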
Note that, in the absence of any intracell and intercell tunneling,
both in-phase and out-of-phase collective mode branches lie below
the superconducting energy gap for small $k_{\|}$ [except for the
$\omega_+(k_z=0)$ mode which is pushed up to the three dimensional
plasma frequency]. This remains true even for weak intracell and
intercell tunnelings, and in this paper we concentrate mainly on this
long wavelength ``below gap'' regime where the phase fluctuation modes
could possibly lie in the superconducting gap. For simplicity we also
restrict ourselves to $s$-wave gap symmetry of the superconducting
order parameter. This approximation may at first sight appear to be
unusually restrictive as it seems to rule out the applicability of our
theory to bilayer high-$T_c$ materials (such as YBCO, BISCO) which are
now widely accepted to have $d$-wave ground state symmetry. This,
however, is not the case because at long wavelengths (small $k_{\|}$),
which is what we mostly concentrate on, the collective mode spectrum is
insensitive to the order parameter symmetry \cite{hwang}, and
therefore our results
apply equally well to high-$T_c$ bilayer materials. The modes we
predict and their dispersion should most easily be observable via the
resonant inelastic light scattering spectroscopy, but may also be
studied via frequency domain far infrared spectroscopy using a grating
coupler.
\section{theory, approximations, and results}
In our calculation we assume that the two layers in each unit cell can be
considered to be 2D EGs, and that all layers are identical,
having the same 2D charge density $n_s$ per unit area.
Two identical layers separated by a
distance $c$ in each unit cell are strongly coupled through the
interlayer intracell
electron tunneling. The interlayer tunneling is between the
well-defined CuO layers in high-$T_c$ materials.
The intercell tunneling between different unit cells separated by a
distance $d$ (in our model $d > c$) is
neglected at first (see section III for the effect of intercell
tunneling). Although we neglect the electron tunneling between
different unit cells, the electrons in {\it all} layers are coupled
via the intercell long
range Coulomb potential which we keep in our theory.
Since the long wavelength plasma modes are independent of the gap
function symmetry
\cite{hwang}, we work in the BCS approximation with $s$-wave pairing for
simplicity.
Then, in the Nambu representation\cite{namb}
the effective Hamiltonian
of a bilayered superconductor with 2D quasiparticle energy
$\varepsilon(k)$, a tight-binding coherent single-electron intracell
hopping $t(k)$, and an additional coherent intracell Josephson
coupling $T_J$ between two nearest layers is given by
\begin{equation}
H = H_0 - \mu N + H_{\rm int} + H_{T_J},
\label{ham}
\end{equation}
with
\begin{eqnarray}
H_0 - \mu N = & & \sum_{n,i} \sum_{\bf k} \tilde{\varepsilon}_{\bf k}
\Psi^{\dagger}_{{\bf k}, ni} \tau_3 \Psi_{{\bf k}, ni} \nonumber \\
&+& \sum_{n,i} \sum_{\bf k} t({\bf k})
\Psi^{\dagger}_{{\bf k}, ni} \tau_3 \Psi_{{\bf k}, n\bar{i}},
\label{h0}
\end{eqnarray}
\begin{equation}
H_{\rm int} = \frac{1}{2} \sum_{ni,mj} \sum_{\bf q}
\rho_{{\bf q},mi} \tilde{V}_{mi,nj}({\bf q}) \rho_{-{\bf q},nj} ,
\label{hint}
\end{equation}
\begin{equation}
H_{T_J} = \sum_{n,i}\sum_{{\bf k,k',q}}T_J \left ( \Psi^{\dagger}_{{\bf k+q},ni}\tau_3
\Psi_{{\bf k},n\bar{i}}\right ) \left ( \Psi^{\dagger}_{{\bf k'-q},ni}\tau_3
\Psi_{{\bf k'},n\bar{i}} \right ),
\label{htj}
\end{equation}
where $n$, $m$ are the unit cell indices and $i$, $j=1,2$ label the two
layers within a given unit cell ($\bar{i} = 3-i$).
Here, $\Psi_{{\bf k},ni}$ and $\Psi^{\dagger}_{{\bf k},ni}$ are the
column and row vectors
\begin{equation}
\Psi_{{\bf k},ni} \equiv \left ( \begin{array}{c}
c_{{\bf k},ni,\uparrow} \\
c^{\dagger}_{-{\bf k},ni,\downarrow}
\end{array}
\right ), \;\;\;\;\;
\Psi^{\dagger}_{{\bf k},ni} \equiv \left (c^{\dagger}_{{\bf
k},ni,\uparrow}, c_{-{\bf k},ni,\downarrow} \right ),
\label{psi}
\end{equation}
where $c^{\dagger}_{{\bf k},ni,\sigma}$ ($c_{{\bf k},ni,\sigma}$) creates
(annihilates) an
electron with wave vector ${\bf k}$ and spin
$\sigma$ in the $i$-th layer within the $n$th unit cell,
and $\rho_{{\bf q},ni}$ denotes the density operator defined by
\begin{equation}
\rho_{{\bf q},ni} = \sum_{\bf k}\Psi^{\dagger}_{{\bf k+q},ni} \tau_3
\Psi_{{\bf k},ni},
\end{equation}
where
$\tilde {\varepsilon}_{\bf k} =
{k^2}/{2m} -\mu$ ($\mu$ being the chemical potential), and $\tau_i$
($i$=1,2,3) are the Pauli matrices. In Eq. (\ref {hint}), the
effective interaction
$\tilde{V}_{ni,mj}({\bf q})$ contains the long range Coulomb
interaction coupling all the layers and the short range attractive
intralayer pairing interaction (giving rise to superconductivity in
the problem)
\begin{equation}
\tilde{V}_{ni,mj}({\bf q}) = V_{c}(q_{\|}) \exp[-q_{\|}|z_{ni}-z_{mj}|]
+V_0 \delta_{n,m}\delta_{i,j}
\end{equation}
where $V_{c}(q_{\|}) = 2\pi e^2/(\kappa q_{\|})$ is the
two dimensional Coulomb interaction and $\kappa$ is the
background dielectric constant of the system.
$V_0$ represents a weak, short-ranged
attractive intra-layer pairing interaction which produces
superconductivity, and is a model parameter in our theory.
We should comment on one unusual feature of our Hamiltonian defined in
Eqs. (\ref{ham})--(\ref{htj}). This is the existence of {\it both} a
coherent single-particle
hopping term, defined by the single-particle hopping amplitude $t(k)$
in Eq. (\ref{h0}), and a coherent Cooper pair Josephson tunneling
term, defined by $T_J$ in Eq. (\ref{htj}). Usually the existence of a
single-particle hopping $t$
automatically generates an equivalent Josephson coupling $T_J$ in the
superconducting system, and keeping both of them as we do, namely, $t$
in the single particle Hamiltonian $H_0$ [Eq. (\ref{h0})] and $T_J$ in the
two-particle Josephson coupling [Eq. (\ref{htj})], is redundant. We
do, however,
wish to investigate separately effects of both coherent single
particle hopping and Josephson coupling along the $c$-axis on the
collective mode spectra because of recent suggestions \cite{anderson}
of a novel
interlayer tunneling mechanism for superconductivity in cuprates which
explicitly postulates $t = 0$ (in the normal state) and $T_J \neq
0$ (in the superconducting state). Our model therefore uncritically
includes both $t$ and $T_J$ as distinct contributions, and one could
think of the interlayer Josephson coupling $T_J$ in our model
Hamiltonian arising from some interlayer pairing interaction not
included in our model pairing interaction $V_0$ which is exclusively
intralayer in nature. In the following we take $t$ and $T_J$ to be
independent parameters of our model without worrying about their
microscopic origins.
The collective modes of the system are given by the poles of the
reducible density response function $\chi({\bf k},\omega)$.
We apply the conserving gauge invariant ladder
diagram approximation\cite{fertig,namb} in calculating the density response
of the system including the effect of the pairing interaction induced
vertex and self-energy corrections. The density response function is
defined as
\begin{equation}
\chi({\bf k},\omega)=-i\int^\infty_0 dt e^{i\omega t} < \left [
\rho({\bf k},t), \rho(-{\bf k},0) \right ] >,
\end{equation}
where $\rho({\bf k},t)$ is the three dimensional Fourier transform of
the density operator in the Heisenberg representation. Here,
${\bf k}\equiv(k_{\|},k_z) $ is the 3D wave
vector, where $k_z$ measures the wave vector along the $z$-axis (i.e., the
$c$-direction ) and $k_{\|}$ is the 2D {\it x-y} plane (i.e., {\it a-b} plane)
wave vector. The density response may be written
in terms of an irreducible polarizability $\Pi({\bf k},\omega)$ as
shown in Fig. 1(a).
Following Anderson's arguments for the absence of
single particle tunneling \cite{anderson}
we first neglect inter-layer single
electron
tunneling effects ($t=0$) and only consider the
\begin{figure}
\epsfysize=6.0cm
\epsffile{fig1.eps}
\vspace{1.0cm}
\caption{
(a) Diagrammatic representation of the dielectric response in terms of the
irreducible polarizability $\Pi$. Here, $V_1$ and $V_2$ are given in
Eq. (\ref{v12}), and $\bar j = 3 - j$.
(b) Irreducible polarizability used
in this calculation.}
\label{fig1}
\end{figure}
\noindent
Josephson coupling
effect. The polarizability $\Pi$ is then diagonal in the unit cell
index and becomes the corresponding 2D polarizability matrix,
$\Pi( {\bf k},\omega) \equiv \Pi(k_{\|},\omega)$
\begin{equation}
\chi({\bf k},\omega) = \frac{
\Pi({k_{\|}},\omega)}{\epsilon({\bf k},\omega)},
\label{chi}
\end{equation}
where $\Pi(k_{\|},\omega)$ is the irreducible polarizability matrix
for a single isolated unit cell,
\begin{equation}
\Pi({k_{\|}},\omega) = \left ( \begin{array}{cc}
\Pi_{11}({k_{\|}},\omega) & \Pi_{12}({k_{\|}},\omega) \\
\Pi_{21}({k_{\|}},\omega) & \Pi_{22}({k_{\|}},\omega)
\end{array} \right ),
\end{equation}
where $\Pi_{11}$, $\Pi_{22}$ and $\Pi_{12}$, $\Pi_{21}$ indicate the
intra-layer and inter-layer
irreducible polarizability, respectively.
Within our approximation, the inter-layer polarizabilities vanish when
the single-particle tunneling is neglected. We will see that the
plasma gap of the
out-of-phase mode arises entirely from the non-vanishing inter-layer
irreducible polarizability. In Eq. (\ref{chi})
the effective dynamical dielectric function $\epsilon({\bf
k},\omega)$ is given by
\begin{equation}
\epsilon({\bf k},\omega) = {\bf 1} - \tilde{V}({\bf k})\Pi(k_{\|},\omega),
\label{eps}
\end{equation}
where {\bf 1} is a $2 \times 2$ unit matrix and $\tilde{V}({\bf k})$
is the effective
interaction which in our simple model is given by
\begin{equation}
\tilde{V}(\bf k) = \left ( \begin{array}{cc}
V_{1}(\bf k) & V_{2}(\bf k) \\
V_{2}(\bf k) & V_{1}(\bf k)
\end{array} \right ),
\end{equation}
where $V_1({\bf k})$ corresponds to the intralayer interaction ($i=j$)
and $V_2({\bf k})$ the interlayer interaction ($i\neq j$), which arises
entirely from the long-range Coulomb coupling in our model, and they
are given by
\begin{eqnarray}
V_{1}({\bf k})& = & V_{c}(k_{\|}) f({\bf k})+ V_0, \nonumber \\
V_{2}({\bf k})& = & V_{c}(k_{\|}) g({\bf k}),
\label{v12}
\end{eqnarray}
where $f({\bf k})$ and $g({\bf k})$,
the superlattice form factors which modify the 2D Coulomb interaction
due to Coulomb coupling between all the layers in our multilayer
superlattice system, are given by
\begin{equation}
f({\bf k}) = \frac{\sinh (k_{\|}d)}{\cosh (k_{\|}d) - \cos (k_zd)},
\label{fq}
\end{equation}
\begin{equation}
g({\bf k}) = \frac{\sinh [k_{\|}(d-c)] + e^{-i k_z d} \sinh
(k_{\|}c)} {\cosh (k_{\|}d) - \cos (k_zd)} e^{ik_zc}.
\label{gq}
\end{equation}
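For concreteness, the form factors of Eqs. (\ref{fq}) and (\ref{gq}) are easy to evaluate numerically. The following sketch (an illustrative helper, not part of the original calculation) checks two limits: for $k_{\|}d \gg 1$ the layers decouple ($f \rightarrow 1$, $|g| \rightarrow e^{-k_{\|}c}$), while for $k_zd = \pi$ and $k_{\|}d \ll 1$ one has $f \approx k_{\|}d/2$.

```python
import numpy as np

def form_factors(kpar, kz, d, c):
    """Superlattice form factors of Eqs. (fq) and (gq).
    kpar, kz: in-plane and c-axis wave vectors;
    d: superlattice period; c: intracell layer separation."""
    denom = np.cosh(kpar * d) - np.cos(kz * d)
    f = np.sinh(kpar * d) / denom
    g = (np.sinh(kpar * (d - c)) + np.exp(-1j * kz * d) * np.sinh(kpar * c)) \
        * np.exp(1j * kz * c) / denom
    return f, g

d, c = 12.0, 3.0                            # YBCO-like values (Angstrom)
f, g = form_factors(2.0, np.pi / d, d, c)   # kpar*d = 24 >> 1
# decoupled-layer limit: f -> 1 and |g| -> exp(-kpar*c)
```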
In order to obtain the collective mode spectrum, it is necessary to
construct a gauge invariant and number-conserving approximation for
$\Pi({\bf k},\omega)$. In the conserving gauge invariant ladder diagram
approximation\cite{fertig,namb} the irreducible polarizability
obeys the ladder integral equation
which is shown diagrammatically in Fig. 1(b).
It may be written in the form
\begin{eqnarray}
\Pi_{i,j}(k,\omega) = -i {\rm Tr}& &\int \frac{d\omega_1 d^2p_1}{(2\pi)^3}
\tau_3 G_{i}(p_1,\omega_1) \nonumber \\
& & \times \Gamma_{i,j}(p_1,k,\omega)
G_{i}(k-p_1,\omega-\omega_1),
\end{eqnarray}
where $G_{i}(k,\omega)$ is the $i$-th layer Green's function with
self-energy corrections (self-consistent
Hartree approximation in
the Coulomb interaction and self-consistent Hartree-Fock approximation
in the short-range pairing interaction)
and $\Gamma_{i,j}$ is a vertex function.
The vertex part satisfies the linear integral equation
\begin{eqnarray}
\Gamma_{ij}&(&p_1,k,\omega) = \tau_3 \delta_{ij} + i \sum_{l=1}^{2}
\int \frac{d^2q d\omega_1}{(2 \pi)^3}
\tau_3 G_l(q,\omega_1) \nonumber \\
& \times & \Gamma_{ij}(q,k,\omega)
G_l(q-k,\omega-\omega_1)\tau_3 \left [ V_0 \delta_{li} + T_J
\delta_{\bar{l} i} \right ],
\label{gaq1}
\end{eqnarray}
where $\bar{l} = 3-l$.
In order to solve this vertex function, we expand $\Gamma_{ij}$ in
Pauli matrices
\begin{equation}
\Gamma_{ij} = \sum_{l=0}^{3}\gamma_{ij,l}\tau_l.
\label{gaq2}
\end{equation}
Since our model assumes two identical 2D layers in the unit cell, we have
$\Gamma_{11} = \Gamma_{22} = \Gamma_a$ and $\Gamma_{12} = \Gamma_{21}
= \Gamma_b$.
By introducing the polarization function
\begin{eqnarray}
P_{i} & = & i \int \frac{d^2qd\omega_1}{(2\pi)^3} \tau_3 G(q,\omega)
\tau_i G(q-k,\omega_1-\omega)\tau_3 \nonumber \\
& = & \sum_{j=0}^{3} \bar{P}_{i,j}\tau_{j},
\label{pq}
\end{eqnarray}
the vertex function, Eq. (\ref{gaq1}), becomes
\begin{equation}
\left( \begin{array}{c}
\gamma_a \\ \gamma_b
\end{array} \right ) =
\left ( \begin{array}{c}
{\bf I}_3 \\ 0
\end{array} \right ) +
V_0 \left ( \begin{array}{c}
\underline{P} \gamma_a \\ \underline{P} \gamma_b
\end{array} \right ) +
T_J \left ( \begin{array}{c}
\underline{P} \gamma_b \\ \underline{P} \gamma_a
\end{array} \right ),
\end{equation}
where $\gamma$'s are column vectors, $I_3^{T} = (0,0,0,1)$, and
$\underline{P}$ is a $4 \times 4$
matrix whose components are given by $\bar{P}_{ij}$ in Eq. (\ref{pq}).
Then, the polarizability function $\Pi_{ij}$ becomes
\begin{eqnarray}
\Pi_{ij} & = & -{\rm Tr}\sum_{l=0}^{3} \bar{P}_{i,l}\tau_3 \gamma_{ij,l}
\nonumber \\
& = & -\sum_{l=0}^{3} \left [ P_i \gamma_{ij} \right ]_{3,l}.
\label{e21}
\end{eqnarray}
The poles of the vertex function or polarizability $\Pi$ give the
collective mode spectra for the neutral superconductor (i.e.,
neglecting the long range Coulomb coupling which couples all the
layers). In
the long wavelength limit we have two collective modes (``phasons'')
for the {\it neutral} system
\begin{equation}
\omega_{+}^2(k) = (v_0 k)^2 \left [ 1 + (V_0 + T_J)N_0/2 \right ],
\label{wpn}
\end{equation}
\begin{equation}
\omega_{-}^2(k) = \omega_0^2 + v_0^2 k^2
\left [ 1 + N_0 (V_0 - T_J)/2 \right ],
\label{wmn}
\end{equation}
where $v_0 = v_F/\sqrt{2}$ with $v_F$ as the Fermi velocity, $N_0 =
m/\pi$ is the 2D density of states at the Fermi surface, and $\omega_0^2 =
{16 T_J \Delta^2}/[N_0(V_0^2 - T_J^2)]$ is the tunneling gap induced
by the finite Josephson coupling
\begin{figure}
\epsfysize=22.cm
\epsffile{fig2.eps}
\caption{
(a) The plasmon mode ($\omega_{\pm}$) dispersions in the presence of
Josephson
tunneling for the neutral bilayered superconducting
superlattice as a function of $k_{\|}$ for fixed
$k_zd = \pi$. Here, $x=T_J/V_0$ indicates the Josephson tunneling
strength with respect to the intra-layer pairing interaction. Inset
shows the ratio of the oscillator strength of $\omega_+$ to that of
$\omega_-$.
(b) The plasmon mode dispersions ($\omega_{\pm}$) for the
charged system. Inset shows the ratio of the oscillator strength of
$\omega_-$ to that of $\omega_+$.
(c) The $\omega_-({\bf k})$ band in the superlattice for the charged
system as a function
of in-plane wave vector
($k_{\|}d$) in the presence of the tunneling. Inset shows the
$\omega_{\pm}$ band of the bilayer superconducting superlattice.
We use parameters roughly
corresponding to YBCO in these figures: the sheet density $n_s =
10^{14}$ cm$^{-2}$,
effective in-plane mass $m=5m_0$, lattice dielectric constant $\kappa = 4$,
$d = 12$ \AA, and $c = 3$ \AA.
}
\label{fig2}
\end{figure}
\noindent
($T_J \neq 0$).
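The qualitative behavior of Eqs. (\ref{wpn}) and (\ref{wmn}) is easy to verify numerically; in this sketch the parameter values are illustrative (units with $v_0 = \Delta = 1$), not the YBCO parameters used in the figures:

```python
# Illustrative values (units with v0 = Delta = 1); not fitted parameters.
N0, V0, Delta = 1.0, 0.5, 1.0
x = 0.1                    # Josephson tunneling strength x = T_J / V0
TJ = x * V0
w0sq = 16 * TJ * Delta**2 / (N0 * (V0**2 - TJ**2))   # tunneling-induced gap

def omega_plus_sq(k):
    """In-phase (Goldstone) phason, Eq. (wpn): acoustic at long wavelength."""
    return k**2 * (1 + (V0 + TJ) * N0 / 2)

def omega_minus_sq(k):
    """Out-of-phase phason, Eq. (wmn): gapped by the Josephson coupling."""
    return w0sq + k**2 * (1 + N0 * (V0 - TJ) / 2)
```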
The $\omega_{+}$ mode corresponds to the in-phase motion of the order
parameter, or, equivalently the 2-D Goldstone-Anderson-Bogoliubov
phase fluctuation mode
due to the spontaneously broken continuous gauge symmetry of the
superconducting state. The $\omega_{-}$ mode corresponds to the
out-of-phase mode first predicted for a two-band superconductor
\cite{legg}, which has recently been
calculated within the time-dependent Hartree-Fock-Gor'kov (mean-field)
approximation\cite{wu} for a two-layer superconductor system. In
Fig. \ref{fig2}(a) we show the calculated
collective mode dispersion for different Josephson tunneling
strengths with respect to the intra-layer pairing interaction, $x =
T_J/V_0$. When the
Josephson tunneling is absent, $x=0$, the two
phason modes $\omega_{\pm}$ are degenerate and have identical
dispersion (solid line).
But in the presence of finite Josephson tunneling between the nearest
layers, $x \neq 0$,
the out-of-phase mode ($\omega_-$) develops a plasma gap ($\omega_0$)
depending on the
tunneling strength. The in-phase mode
$\omega_+$ is not affected qualitatively by finite Josephson tunneling
and remains an acoustic Goldstone mode (i.e., $\omega_+ \sim O(k)$ for $k
\rightarrow 0$) although the velocity of the acoustic plasmon does
depend on $T_J$ (cf. Eq. (\ref{wpn})). In
Fig. \ref{fig2}(a) the inset shows the relative
oscillator strength of the two phason modes, the ratio of the spectral
weight of $\omega_-$ to
that of $\omega_+$. The ratio decreases as
tunneling amplitude increases. This is due to the approach of the
$\omega_-$ mode to the
pair-breaking excitation region ($\omega \approx 2\Delta$) at large
tunneling, which causes decay of the $\omega_-$ mode to single
particle excitations, and the
strength of the mode transfers to pair-breaking excitations.
These results apply to any bilayered {\it neutral} superconductors (which,
of course, do not exist in nature because the Coulomb interaction is
always present in real systems).
By looking for zeros of the dynamical dielectric function defined by
Eq. (\ref{eps}) we find the collective
modes of the charged superconducting superlattices.
Since the two layers within the cell are identical we have
$\Pi_{11}=\Pi_{22}$ and $\Pi_{12} = \Pi_{21}$, which gives rise to
distinct in-phase and
out-of-phase collective charge density fluctuations of the charged
superconductor. Coupling of the in-phase (out-of-phase) mode of the
neutral system via the long range Coulomb interaction
to the charge density fluctuation of the layers gives rise to the in-phase
(out-of-phase) collective mode of the charged bilayer system. The
dielectric function is a matrix, and
the zeros of the det[$\epsilon$], which define the collective mode
spectra, are given by
\begin{eqnarray}
{\rm det}[\epsilon] = & & \left [ 1 - (\Pi_{11} + \Pi_{12})(V_1
+ V_2) \right ] \nonumber \\
& & \times \left [ 1- (\Pi_{11} - \Pi_{12})(V_1 - V_2) \right ] =0.
\label{e0}
\end{eqnarray}
In the long wavelength limit Eq. (\ref{e0}) can be analytically
solved using Eqs. (\ref{v12}) -- (\ref{e21}), and we find two distinct
collective modes
corresponding to the relative phase of the charge density fluctuations
in the two layers within each unit cell:
\begin{equation}
\omega_{+}^2({\bf k}) = \omega_p^2 \frac{k_{\|}d}{4} \left [
f({\bf k}) + |g({\bf k})| \right ]_{k_{\|}\rightarrow 0},
\label{wp}
\end{equation}
\begin{equation}
\omega_-^2({\bf k}) = \frac{ (1 + \Delta V - \Delta V_0 )(\omega_0^2 +
v_0^2k^2/2)}{1 -
\omega_0^2(\Delta V - \Delta V_0 )/6},
\label{wmm}
\end{equation}
where $\omega_p=(4\pi n_Be^2/\kappa m)^{1/2}$ is a three dimensional
plasma frequency with the effective three-dimensional electron
density of the double-layered superlattice $n_B = 2 n_s/d$,
and $k^2 = k_{\|}^2 + k_z^2$ with ${\bf k} \equiv (k_{\|}, k_z)$;
$\Delta V = N_0(V_1 - V_2)$ and $\Delta V_0 = N_0(V_0 - T_J)/2$.
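A useful check on Eq. (\ref{wp}): at $k_z = 0$ the form factors give $f + |g| \rightarrow 4/(k_{\|}d)$ as $k_{\|} \rightarrow 0$, so $\omega_+ \rightarrow \omega_p$, the full 3D plasma frequency, while at $k_zd = \pi$ the in-phase mode remains acoustic. A sketch in units with $\omega_p = 1$, assuming the YBCO-like $d$ and $c$ quoted above:

```python
import numpy as np

d, c = 12.0, 3.0     # superlattice period and intracell spacing (Angstrom)
wp = 1.0             # units with omega_p = 1

def omega_plus_charged(kpar, kz):
    """Charged-system in-phase mode of Eq. (wp), long-wavelength form."""
    denom = np.cosh(kpar * d) - np.cos(kz * d)
    f = np.sinh(kpar * d) / denom
    g_abs = abs((np.sinh(kpar * (d - c))
                 + np.exp(-1j * kz * d) * np.sinh(kpar * c)) / denom)
    return np.sqrt(wp**2 * kpar * d / 4 * (f + g_abs))
```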
In Fig. \ref{fig2}(b) we show the calculated charge density mode
dispersion for fixed $k_zd = \pi$ as a function of
$k_{\|}d$. Tunneling has little effect on the in-phase mode (thin
solid line) but profoundly affects the out-of-phase mode (thick
lines) by introducing a gap at $\omega_-(k_{\|} = 0)$ similar to the
neutral case. Since in the
presence of tunneling the out-of-phase mode acquires a gap, the two
modes cross at the resonant frequency ($\omega_+ = \omega_-$), but the
symmetry (``parity'') associated with the two identical layers does not
allow any mode coupling or anti-crossing effect. If the two layers in
the unit cell are not identical then there is a mode coupling induced
anti-crossing around $\omega_+ \approx \omega_-$.
The inset shows the ratio of the oscillator strength of
the in-phase mode to that of the out-of-phase mode. In sharp contrast
to the neutral system, in
the long wavelength limit the
out-of-phase mode $\omega_-$ completely dominates the spectral weight
in the presence of interlayer tunneling.
In the absence of tunneling ($x = 0$), however, the in-phase mode
$\omega_+$ dominates the spectral weight.
Our results for the collective mode dispersion in the presence of
finite single-particle tunneling but vanishing Josephson coupling
(i.e., $t\neq 0$, $T_J =0$) are qualitatively identical to the
situation with $t=0$, $T_J \neq 0$, and are therefore not shown
separately. This is, of course, the expected result because $t$
automatically generates an effective Josephson tunneling, i.e., an
effective $T_J$, in the superconducting system, and therefore the
qualitative effect of having a finite $T_J$ or a finite $t$ in the
superconducting system is similar.
We also calculate the collective modes of the bilayered superconducting
system by including {\it both} the
single particle tunneling and the Josephson tunneling
between the nearest layers (i.e., $t, T_J \neq 0$). The two layers in
the unit cell
hybridized by the single particle tunneling matrix element, $t({\bf
k})$, would lead to a symmetric and an antisymmetric combination of the
quasiparticle states for each value of the wave vector ${\bf k}$ in
the plane.
By introducing the symmetric and antisymmetric single electron
operators with respect to
an interchanging of the two layers, $\alpha_{n,k,\sigma} =
\frac{1}{\sqrt2}(c_{n1,k\sigma} + c_{n2,k\sigma})$ and
$\beta_{n,k,\sigma} = \frac{1}{\sqrt2}(c_{n1,k\sigma} - c_{n2,k\sigma})$,
the total effective Hamiltonian can be written as
\begin{eqnarray}
H & & =
\sum_n\sum_{k\sigma}\left [ \alpha_{n,k\sigma}^{\dagger}{\varepsilon_1}(k)
\alpha_{n,k\sigma} + \beta_{n,k\sigma}^{\dagger}{\varepsilon_2}(k)
\beta_{n,k\sigma} \right ] \nonumber \\
&+ & \frac{1}{2}\sum_{nn'}\sum_{\bf q} \left [
\rho_{1,n{\bf q}}^{T}\bar U({\bf q})\rho_{1,n'-{\bf q}} +
\rho_{2,n{\bf q}}^{T}\bar V({\bf q})\rho_{2,n'-{\bf q}} \right ],
\label{ham1}
\end{eqnarray}
where $\varepsilon_1(k) = \varepsilon(k) + t(k)$ and $\varepsilon_2(k)
= \varepsilon(k) - t(k)$,
and
\begin{eqnarray}
\rho_{1,n{\bf q}} & = & \sum_{{\bf k}\sigma}
\left ( \begin{array}{c}
\alpha_{n,{\bf k}+{\bf q}\sigma}^{\dagger}\alpha_{n,{\bf k} \sigma} \\
\beta_{n,{\bf k}+{\bf q}\sigma}^{\dagger}\beta_{n,{\bf k} \sigma}
\end{array} \right ), \\
\rho_{2,n{\bf q}} & = & \sum_{{\bf k}\sigma}
\left ( \begin{array}{c}
\alpha_{n,{\bf k}+{\bf q}\sigma}^{\dagger}\beta_{n,{\bf k} \sigma} \\
\beta_{n,{\bf k}+{\bf q}\sigma}^{\dagger}\alpha_{n,{\bf k} \sigma}
\end{array} \right ),
\end{eqnarray}
and
\begin{equation}
\bar U({\bf q}) = \left ( \begin{array}{cc}
U_+ & U_- \\
U_- & U_+
\end{array} \right ), \;\;\;\;\;
\bar V({\bf q}) = \left ( \begin{array}{cc}
V_+ & V_- \\
V_- & V_+
\end{array} \right ),
\end{equation}
where $U_{\pm} = V_1 + V_2 \pm T_J$ and $V_{\pm} = V_1 - V_2 \pm T_J$.
This Hamiltonian is identical to that in the corresponding two subband
model, which is well
studied in semiconductor quantum well systems \cite{jain}. Since there
are no
off-diagonal elements of the interaction with respect to the subband
index we have well separated intra-subband and inter-subband
collective modes corresponding to the
in-phase and out-of-phase modes, respectively. Within our
gauge-invariant ladder diagram approximation we can easily calculate
the mode dispersions by following the procedure outlined in
Eqs. (\ref{psi}) -- (\ref{wmm}).
The in-phase mode for both
the neutral and the charged system is
insensitive to tunneling in the long wavelength limit, and has
essentially the same long
wavelength dispersion as in
Eq. (\ref{wpn}) and Eq. (\ref{wp}) respectively, up to second
order in $k$. The out-of-phase mode is, however, strongly affected by
both the coherent single
particle tunneling and the Josephson tunneling, and has a dispersion
\begin{equation}
\omega_{-}^2(k) = \omega_0^2 + \left [ (2t)^2 +
v_0^2 k^2 \right ]
\left [ 1 + \Delta V_0 \right ],
\label{wmc}
\end{equation}
for neutral superconductors, and
\begin{equation}
\omega_-^2({\bf k}) = \frac{\left ( 1 + \Delta V - \Delta V_0 \right )
\left [ \omega_0^2 + (2t)^2 + v_0^2k_{\|}^2 \right ]}{ 1 -
\frac{\omega_0^2}{6}\left ( \Delta V - \Delta V_0 \right )},
\end{equation}
for charged systems in the presence of finite tunneling.
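As a consistency check (using $\Delta V_0 = N_0(V_0 - T_J)/2$ as defined below Eq. (\ref{wmm})), setting $t = 0$ in Eq. (\ref{wmc}) recovers the pure-Josephson result of Eq. (\ref{wmn}). A numerical sketch with illustrative parameters (units with $v_0 = \Delta = 1$):

```python
# Illustrative parameters (units with v0 = Delta = 1); not fitted values.
N0, V0, TJ, Delta = 1.0, 0.5, 0.05, 1.0
w0sq = 16 * TJ * Delta**2 / (N0 * (V0**2 - TJ**2))   # Josephson gap
DV0 = N0 * (V0 - TJ) / 2                             # Delta V_0

def omega_minus_sq(k, t):
    """Neutral out-of-phase mode, Eq. (wmc), with both t and T_J finite."""
    return w0sq + ((2 * t)**2 + k**2) * (1 + DV0)
```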
In Fig. \ref{fig3}, we show the calculated mode dispersions as a
function of the in-plane wave vector $k_{\|}d$ for a fixed
\begin{figure}
\epsfysize=7.1cm
\epsffile{fig3.eps}
\vspace{0.5cm}
\caption{The dispersion
of the out-of-phase mode ($\omega_-$) in the charged system in
the presence of both the single particle
tunneling and Josephson tunneling as a function of $k_{\|}$ for a
fixed $k_z d = \pi$.
Here, $x = T_J/V_0$ and $t$ is the strength of the single particle
tunneling with respect to the superconducting energy gap. We
use the same parameters as Fig. \ref{fig2}.
}
\label{fig3}
\end{figure}
\noindent
$k_zd = \pi$.
As emphasized before, the collective mode dispersion is qualitatively
independent of the specific tunneling mechanism (i.e., $t$ or $T_J$),
and therefore experiments involving collective modes cannot
distinguish between the existing tunneling mechanisms in high-$T_c$
superconductors as has recently been emphasized \cite{das} in a
related context.
\section{Discussion and Conclusion}
We calculate in this paper the collective charge density fluctuation
excitation spectra of both the
neutral and the charged superconducting bilayered superlattices with
interlayer intra-cell single particle and Josephson tunneling.
We use the conserving gauge-invariant ladder diagram
approximation in the Nambu-Gorkov formalism. In general, there are
two types of density fluctuation modes: in-phase ($\omega_+$) and
out-of-phase ($\omega_-$) modes.
For neutral superconductors, the out-of-phase collective mode with
interlayer tunneling has a
plasma gap depending on the tunneling intensity, and the in-phase
mode, lying lower in energy, dominates
the oscillator strength for all wave vectors. However, for
charged superconductors the two phase modes couple to the long range
Coulomb interaction differently, and
the out-of-phase mode with tunneling dominates the oscillator strength
in the long wavelength limit ($k_{\|} \rightarrow 0$) and finite $k_z$.
Since we have used two identical 2D layers in each unit cell there is
no mode coupling effect in our theory between $\omega_{\pm}$ modes at
the resonant
frequency ($\omega_+ \sim \omega_-$).
If the two layers forming the unit cell are not identical then there
will be a resonant mode coupling effect (``anti-crossing'') between
the in-phase and the out-of-phase modes around $\omega_+ \approx
\omega_-$ resonance point -- the nature of this anti-crossing
phenomenon will be similar to what is seen in the corresponding
intrasubband-intersubband collective mode coupling in semiconductor
quantum well systems \cite{jain}. We have mostly concentrated on the long
wavelength regime ($k_{\|} \rightarrow 0$) -- at large wave vectors
there is significant
coupling between the collective modes and the pair-breaking
excitations, which has been extensively studied in the literature
\cite{fertig,hwang}. We
have also neglected the amplitude fluctuation modes because they
usually carry negligible spectral weights compared with the
$\omega_{\pm}$ phase fluctuation modes. We have also used an $s$-wave
ground state symmetry which should be a good approximation
\cite{hwang} even for $d$-wave cuprate systems as far as the long
wavelength collective mode properties are concerned. Our use of a
BCS--Fermi liquid model in our theory is more difficult to defend
except on empirical grounds \cite{das} and for reasons of simplicity.
Finally, we consider the effect of {\it intercell} tunneling on the
collective mode spectra, which we have so far neglected in our
consideration. (Our theory includes both intracell and intercell
Coulomb coupling between all the layers, and {\it intracell}
interlayer single electron and Josephson tunneling.) The neglect of
intercell tunneling is justified by the fact that $d \gg c$ (e.g., in
YBCO $d = 12$ \AA, $c = 3$ \AA). The general
effect of intercell tunneling becomes quite complicated theoretically
because one has far too many interlayer coupling terms in the
Hamiltonian in the presence of {\it both} intracell and
intercell interlayer tunneling involving {\it both} single particle and
Josephson tunneling. It is clear, however, that the main effect of a
weak intercell interlayer tunneling (either single particle or
Josephson type, or both) would be to cause a 2D to 3D transition in
the plasma mode by opening up a small gap in both $\omega_{\pm}$ modes
at long wavelengths (in the charged system). The size of this gap
(which is the effective 3D plasma frequency of the $k_z$-motion
of the system) will depend on the intercell tunneling strength.
This small gap is the 3D $c$-axis plasma frequency of the system,
which has been the subject of several recent studies in the literature
\cite{das,anderson,tama}.
The introduction of a weak {\it intercell} interlayer tunneling will
therefore modify our calculated results simply through a shift of the
energy/frequency origin in our calculated dispersion curves. The
origin of the ordinate (i.e., the energy/frequency axis) in our
results will shift from zero to $\omega_c$, where $\omega_c$ is the
c-axis plasma frequency arising from the {\it intercell} interlayer
hopping. For an effective single band tight binding intercell hopping
parameter $t_c$ (i.e., the single electron effective bandwidth in the
$c$-direction is $2t_c$), one obtains $\omega_c = \omega_p t_c d/v_F$,
where $\omega_p=[4\pi n_B e^2/(\kappa m)]^{1/2}$ is the effective 3D
plasma frequency with the 2D {\it a-b} plane band mass $m$ [see
Eq. (\ref{wp})] and $v_F$ is the Fermi velocity in the {\it a-b}
plane. Note that $\omega_c \ll \omega_p$ because $t_c$ is very small
by virtue of weak intercell coupling. Note also that if one defines an
effective ``3D'' $c$-axis plasma frequency $\omega_{pc}=[4\pi n_B
e^2/(\kappa m_c)]^{1/2}$ in analogy with $\omega_p$, where $m_c$ is
now the effective mass for electron dynamics along the $c$-axis, then
$\omega_c = \omega_{pc} [t_c/(2E_F)]^{1/2}$ due to the tight-binding nature
of the $c$-axis motion. We emphasize that in the presence of intercell hopping
$\omega_c$ sets the scale for the lowest energy that a collective
mode can have in the multilayer superconductor -- $\omega_c$ is
sometimes referred \cite{cote,kake,kado} to as a Josephson plasmon
\cite{anderson} in the literature. In general, it is difficult to
theoretically estimate $\omega_c$ in high-$T_c$ materials \cite{das}
because the effective $t_c$ (and other parameters) may not be known. It
is therefore important to emphasize \cite{das,anderson} that
$\omega_c$ can be measured directly from the $c$-axis plasma edge in
reflectivity experiments (we emphasize that the {\it a-b} plane plasma
edge gives $\omega_p$ and the $c$-axis plasma edge gives $\omega_c$
\cite{tama}), and such measurements \cite{tama} show that $\omega_c$
is below the superconducting gap in many high-$T_c$
materials \cite{das}.
This implies that the effective $c$-axis hopping, $t_c$, in high-$T_c$
materials (either due to single particle hopping or due to Josephson
coupling arising from coherent Cooper pair hopping) has to be very
small (much smaller than that given by direct band structure
calculations) in these systems for the Josephson plasma frequency
$\omega_c$ to be below the superconducting gap, a point first
emphasized by Anderson \cite{anderson}.
The collective mode situation in a bilayer system in the presence of
both intracell and intercell interlayer coupling is obviously quite
complex, and as emphasized in ref. \onlinecite{anderson}, there could
in general be several collective phase fluctuation modes depending on
the detailed nature of intracell and intercell interlayer hopping
matrix.
In the most general bilayer system intercell coupling will give rise
to two separate $\omega_+$ plasma bands arising from the two distinct
possible intercell interlayer couplings --- the two $\omega_+$ bands
lying lower in energy than the two $\omega_-$ bands in the charged
system as we show in this paper.
In the most general situation \cite{anderson}, there could be
two low energy Josephson plasma frequencies $\omega_{c1}$,
$\omega_{c2}$ ($>$$\omega_{c1}$), corresponding to the bottoms of the
two $\omega_+$ bands, arising respectively from the larger
and the smaller of the intercell interlayer hopping amplitudes. To
make things really complicated one of these modes ($\omega_{c1}$)
could be below the gap and the other ($\omega_{c2}$) above the
gap, (or, both could be below or above the gap). While each of these
scenarios is possible, $c$-axis optical response
experimental results on inter-bilayer charge dynamics in $YBCO$ have
been interpreted \cite{cooper} to exhibit only one $c$-axis plasma
edge in the superconducting state with the frequency $\omega_c$
between 60 cm$^{-1}$ and 200 cm$^{-1}$, depending on the oxygen
content. There are three possibilities: (1) The two plasma modes
($\omega_{c1} \approx \omega_{c2} \approx \omega_c$) are almost
degenerate because the corresponding intercell hopping amplitudes are
close in magnitudes; (2) $\omega_{c2}$ is much larger than $\omega_{c1}$
($\ll \omega_{c2}$) because the two intercell hopping amplitudes are
very different in magnitudes (we consider this to be an unlikely
scenario); (3) one of the two modes carries very little optical
spectral weight and is not showing up in $c$-axis reflectivity
measurements, leaving only the other one as the observed $c$-axis
plasma edge. There is, in principle, a fourth (very unlikely)
possibility: the observed plasma edge is really $\omega_{c2}$, and the
other mode $\omega_{c1}$ ($\ll \omega_{c2}$) is too low in energy to
show up in $c$-axis reflectivity measurements.
Within a {\it nearest-neighbor} $c$-axis interlayer coupling model, there is
only a {\it single} intercell hopping amplitude, giving rise to only a
single $c$-axis plasma edge $\omega_c$, which now defines the lowest
value that the in-phase collective mode $\omega_+$ can have, $\omega_c
\equiv \omega_{c+} \equiv \omega_+(k=0)$ --- $\omega_c$ is shifted up
from zero at long wavelengths due to finite $c$-axis intercell
hopping. The out-of-phase plasma edge, $\omega_{c-} \equiv
\omega_{-}(k=0)$, will obviously lie much higher in energy than
$\omega_{c+} \equiv \omega_c$ because the intracell interlayer hopping
is much stronger than the intercell interlayer hopping. In particular,
even though the $\omega_{c+}$ mode may lie in the superconducting gap
\cite{cooper,anderson}, we expect $\omega_{c-}$ to
lie much above the superconducting gap energy in $YBCO$. A crude
qualitative estimate can be made by assuming that the intra- and
intercell hopping amplitudes scale as inverse squares of lattice
parameters: $t_{\rm intra}/t_{\rm inter} \approx (d/c)^2 = 16$. This
then leads to the approximate formula $\omega_{c-} \approx 16^2 \;
\omega_{c+} = 256 \; \omega_c$, which, for $YBCO$, implies that the
long wavelength out-of-phase mode should lie between 2 eV and 6 eV,
depending on the oxygen content (assuming that the $c$-axis plasma edge
varies between 60 cm$^{-1}$ and 200 cm$^{-1}$, as reported in
ref. \onlinecite{cooper}, depending on the oxygen content). While
there is some minor observable structure in optical experiments at
high energies, we cannot find any compelling evidence in favor of the
existence of a high energy out-of-phase mode in the currently
available experimental data. We feel that a spectroscopic experiment,
using, for example, the inelastic electron energy loss spectroscopy
which could probe the mode dispersion (and which has been highly
successful in studying bulk plasmons in metal films) of the $\omega_-$
mode at high energy, may be required to unambiguously observe the
out-of-phase collective mode.
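The arithmetic behind this crude estimate is easy to reproduce, following the paper's own scaling assumption $\omega_{c-} \approx 16^2\,\omega_c$ and the standard conversion $1\;{\rm cm}^{-1} \approx 1.2398 \times 10^{-4}$ eV:

```python
CM1_TO_EV = 1.239842e-4            # 1 cm^-1 in eV
ratio = (12.0 / 3.0) ** 2          # t_intra / t_inter ~ (d/c)^2 = 16
scale = ratio ** 2                 # omega_{c-} ~ 16^2 * omega_c = 256 * omega_c

w_minus_low = 60.0 * scale * CM1_TO_EV     # lower plasma edge -> ~1.9 eV
w_minus_high = 200.0 * scale * CM1_TO_EV   # upper plasma edge -> ~6.3 eV
```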
What we have shown in this paper is that under suitable conditions
(finite $k$ and $k_z$) the $\omega_-$ out-of-phase mode carries
reasonable spectral weight and should be observable in principle ---
actual observation, however, awaits experimental investigations using
external probes which can study mode dispersion at finite wave vectors
(which optical experiments by definition cannot do; they are long
wavelength probes).
\section*{ACKNOWLEDGMENTS}
This work is supported by the U.S.-ARO and the U.S.-ONR.
\section{Simulating Tidal Interactions}
Why simulate interacting galaxies? First, simulations can {\it test
theoretical ideas\/}. Second, detailed simulations may help in
gaining {\it insight into real systems\/}. Third, simulations may
{\it constrain galaxy parameters\/} such as dark halo masses.
Simulation is not a straightforward business. A dynamical model
specifies the distribution function $f({\bf r}, {\bf v})$, which
depends on {\it six\/} variables. Observations, at best, yield $f(X,
Y, V_{\rm Z})$, a function of just three variables: two coordinates on
the plane of the sky, and a line of sight velocity. Thus simulations
are underdetermined; further constraints are needed to make progress.
In cosmology, one may stipulate that the observed structures grew from
a linear density field $\delta \rho({\bf r}) / \rho$ which depends on
three coordinates; this is how the ``least action'' method (Peebles
1994) can yield well-determined results. But in studying interacting
galaxies we want to understand the {\it stellar\/} distribution, and
the stars did not evolve from linear initial conditions!
So in simulating interacting galaxies, the practice has been to build
equilibrium models and drop them towards each other. This approach
seems to work provided that the galaxy models and their trajectories
are cosmologically plausible. One example is NGC~7252, which Hibbard
\& Mihos (1994) simulated successfully as the result of a {\it
direct\/} parabolic encounter of two disk galaxies; an earlier attempt
to reproduce this system with a {\it retrograde\/} encounter (Borne \&
Richstone 1991) required the galaxies to start on implausibly tight
circular orbits and proved inconsistent with subsequent HI
observations.
\subsection{Towards a model of the Antennae}
The Antennae galaxies (NGC~4038/9) are fast becoming the ``Rosetta
stone'' of interacting systems; detailed observations in almost every
waveband from $21 {\rm\, cm}$ to X-rays provide a remarkably complete
picture of the behavior of interstellar material and star formation in
the earlier stages of a galactic merger. These galaxies have also
long been a favorite of N-body experimenters. But until recently, the
available line-of-sight velocity data were not good enough to support
detailed simulations. New VLA observations (Hibbard, van der Hulst,
\& Barnes, in preparation) offer the chance to refine existing models.
Goals for an improved model of the Antennae include:
\begin{enumerate}
\item Matching the observed velocity field. The radial velocities of
the two galaxies differ by only $\sim 40 {\rm\,km\,sec^{-1}}$. To
produce this, the galaxies must either be near apocenter, or falling
together almost perpendicular to our line-of-sight.
\item Reconciling the adopted orbit with cosmological expectations.
Simulations by Toomre \& Toomre (1972) and Barnes (1988) adopted
elliptical ($e \simeq 0.6$) orbits; parabolic orbits seem more
plausible.
\item Reproducing the gas-rich ring in NGC~4038. This ring, clearly
seen in maps of HI as well as in mid-IR (Mirabel et al. 1998),
contains many luminous young star clusters (Whitmore \& Schweizer
1995).
\item Explaining the ``overlap region''. Recent ISO maps show this
dusty region is brighter than either disk in mid-IR wavebands (e.g.,
Mirabel et al. 1998).
\end{enumerate}
Goals one and two involve adjusting the orbit, the viewing angle, and
the orientations of the two disks. To rapidly explore this vast
parameter space, I run ``semi-consistent'' simulations in which each
galaxy is represented by a self-gravitating spheroid with a number of
embedded test particle disks; the two disks best matching the
observations are selected interactively after the calculation has run.
Starting with orbits as eccentric as $e = 0.8$, this technique yields
models which roughly reproduce the velocity field as well as the
crossed-tail morphology of NGC~4038/9. But still less than
satisfactory are the shapes of the gently curving tails and the
orientations of their parent disks; experiments are under way to
study these problems and make models with parabolic initial orbits.
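The test-particle ingredient of these ``semi-consistent'' runs can be sketched in a few lines: massless particles are advanced in the gravitational field of the spheroids. In the toy version below the spheroid is a fixed Plummer sphere and the orbit is integrated with a leapfrog scheme; the potential, its parameters, and the time step are illustrative stand-ins (the actual calculations use live, self-gravitating spheroids).

```python
import math

# Massless test particle in a fixed Plummer potential, advanced with a
# kick-drift-kick leapfrog. Units: G*M = 1; A is the Plummer scale length.
GM, A = 1.0, 0.3  # illustrative values, not from the text

def accel(x, y, z):
    s2 = x*x + y*y + z*z + A*A
    f = -GM / s2**1.5
    return f*x, f*y, f*z

def leapfrog(pos, vel, dt, nstep):
    x, y, z = pos
    vx, vy, vz = vel
    ax, ay, az = accel(x, y, z)
    for _ in range(nstep):
        vx += 0.5*dt*ax; vy += 0.5*dt*ay; vz += 0.5*dt*az   # half kick
        x += dt*vx; y += dt*vy; z += dt*vz                  # drift
        ax, ay, az = accel(x, y, z)
        vx += 0.5*dt*ax; vy += 0.5*dt*ay; vz += 0.5*dt*az   # half kick
    return (x, y, z), (vx, vy, vz)

# Circular orbit at r0 = 1: v_c^2 = GM r0^2 / (r0^2 + A^2)^(3/2)
r0 = 1.0
vc = math.sqrt(GM * r0*r0 / (r0*r0 + A*A)**1.5)
p, v = leapfrog((r0, 0.0, 0.0), (0.0, vc, 0.0), dt=0.01, nstep=700)
r_end = math.sqrt(p[0]**2 + p[1]**2 + p[2]**2)
```

A particle launched on a circular orbit stays on it to high accuracy, which is the basic sanity check for any such integrator.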
Goals three and four depend on gas dynamics. In high-resolution HI
maps, gas in the southern tail seems to join continuously onto the
ring in NGC~4038. Rings of similar size and morphology may arise as a
result of gas falling back along tidal tails; the formation of such a
ring is illustrated in Figure~\ref{ringform}. Simulations of the
Antennae reproducing this feature might shed some light on the
conditions of star-formation in this system. Perhaps more challenging
is to account for the IR-luminous overlap region. This seems to be
more than just the superposition of two disks; it is probably some
sort of bridge, perhaps extended along the line of sight.
\begin{figure}
\begin{center}
\epsfig{figure=fig1a.ps, height=2.5 true cm, angle=-90}
\epsfig{figure=fig1b.ps, height=2.5 true cm, angle=-90}
\epsfig{figure=fig1c.ps, height=2.5 true cm, angle=-90}
\epsfig{figure=fig1d.ps, height=2.5 true cm, angle=-90}
\epsfig{figure=fig1e.ps, height=2.5 true cm, angle=-90}
\end{center}
\caption{Formation of a gas ring by accretion of material from a tidal
tail (see Barnes \& Hernquist 1998, video segment 5, section 2).}
\label{ringform}
\end{figure}
\subsection{Halo mass and orbit decay}
Detailed simulations of systems like the Antennae may also provide
useful constraints on dark halos. To produce proper tidal tails, disk
material must escape to infinity. Very massive, quasi-isothermal
halos prevent interacting galaxies from forming long tails (Dubinski,
Mihos, \& Hernquist 1996); clearly, such halos are not present around
galaxies like NGC~4038/9 or NGC~7252 (Mihos, Dubinski, \& Hernquist
1998). But equally massive halos with density profiles falling off as
$\rho \propto r^{-3}$ at large $r$ are {\it not\/} excluded, as N-body
experiments explicitly demonstrate (Springel \& White 1998, Barnes
1999). In sum, tail length tells us something about the structure of
halos, but little about their total mass.
However, it seems unlikely that arbitrary amounts of dark mass can be
included in simulations of interacting systems. The orbital evolution
of a pair of galaxies is largely governed by the interaction of their
dark halos (White 1978, Barnes 1988). Too much or too little orbital
decay will hinder the construction of models which evolve from
plausible initial conditions to configurations matching the observed
morphologies and velocity fields of real systems. Possible indicators
of halo mass in interacting systems include:
\begin{enumerate}
\item Tail kinematics; the run of velocities along a tidal tail may
constrain the potential.
\item Tail fallback; if orbit decay is strong, returning tail material
may miss the disk by a wide margin.
\item Galaxy velocities; do the hulks preserve their original sense of
motion about each other?
\end{enumerate}
\noindent
The last of these, in particular, seems relevant to NGC~4038/9; the
galaxies must retain a good deal of their orbital angular momentum to
produce the crossed tails emblematic of this system.
\section{Dissipation and Thermodynamics}
Unlike stars, gas responds to pressure forces as well as gravity;
moreover, gas flows develop shocks whereas streams of stars freely
interpenetrate. Even without the complications of star formation, the
dynamics of gas in interacting galaxies is a difficult problem. But
dissipative dynamical systems generally possess {\it attractors\/}; in
the long run, most trajectories are captured by one attractor or
another. Consequently, gas in interacting galaxies tends to end up in
a few stereotypical structures.
The thermodynamic history of the gas is probably the factor which
determines its fate. To date, most simulations treat gas
thermodynamics rather crudely; the cooling function is cut off at
$T_{\rm c} = 10^4 {\rm\,K}$ to prevent the gas from ``curdling'', and
the resulting behavior is basically that of an isothermal fluid with
$T = T_{\rm c}$ (Barnes \& Hernquist 1996, hereafter BH96). Improving
on the present treatments may require including star formation and
feedback; one possible approach to this difficult problem is described
in \S~4. The rest of this section reviews results obtained with and
without cooling in an attempt to anticipate the results of more
realistic experiments.
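The crude treatment just described can be summarized in a few lines of pseudocode-like Python: gas that would cool past the cutoff is simply held at $T_{\rm c} = 10^4\,{\rm K}$, which is why the fluid behaves almost isothermally. The exponential cooling law and the cooling time below are illustrative placeholders; production codes use tabulated, density- and temperature-dependent cooling functions.

```python
import math

T_FLOOR = 1.0e4  # K; the cooling cutoff T_c described in the text

def cool_step(T, dt, t_cool=1.0):
    """One operator-split cooling update with a temperature floor.

    Gas at or below the floor is held there (the 'isothermal' regime);
    hotter gas decays exponentially on the (illustrative) cooling time.
    """
    if T <= T_FLOOR:
        return T_FLOOR
    return max(T_FLOOR, T * math.exp(-dt / t_cool))

T = 1.0e6  # start with shock-heated gas
for _ in range(200):
    T = cool_step(T, dt=0.1)
# after many cooling times the gas sits exactly at the 1e4 K floor
```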
Work by several investigators confirms that tidal perturbations of
gas-rich disk galaxies result in rapid gas inflows (Icke 1985, Noguchi
1988, Hernquist 1989, Barnes \& Hernquist 1991). The immediate
physical cause of these rapid inflows is a systematic transfer of
angular momentum from the gas to the disk stars; tidally perturbed
disks develop bars (or other non-axisymmetric structures) which exert
gravitational torques on the gas (Combes, Dupraz, \& Gerin 1990,
Barnes \& Hernquist 1991). Such inflows require strong shocks and
rapid cooling, which work together to drive the gas irreversibly
toward the center of the potential. As a perturbed disk settles down
the gas often converges onto kpc-scale closed orbits aligned with the
stellar bar (BH96).
Dissipative mergers between disk galaxies lead to further inflows,
with large amounts of gas collecting in $\sim 0.2 {\rm\,kpc}$-scale
clouds (Negroponte \& White 1983, Barnes \& Hernquist 1991). These
nuclear clouds contain material driven in toward the centers of
galaxies by earlier tidal interactions; during the final merger, the
gas again loses angular momentum to the surrounding material. It
seems likely that the same physical mechanism lies behind the inflows
in perturbed disks and in merger remnants; in both cases the entropy
of the system grows as gas falls inwards.
A different fate awaits the gas which does not suffer strong shocks
and subsequent cooling in the early stages of an encounter. This
material does not participate in rapid inflows, and retains much of
its initial angular momentum. Consequently, it tends to collect in
extended, rotationally supported rings or disks; one such example has
already been presented in Figure~\ref{ringform}. In merger remnants,
such disks may be strongly warped by gas falling back from tidal tails
(BH96). Early-type galaxies with warped gas disks include NGC~4753
(Steiman-Cameron, Kormendy, \& Durisen 1992) and NGC~5128 (van Gorkom
et al.~1990); though these disks are usually attributed to accretions
of gas-rich satellite galaxies, some may actually result from major
mergers.
The two outcomes just described -- nuclear clouds or extended disks --
seem to be the only real attractors available to dissipative gas in
merger simulations. However, if the gas fails to cool then another
outcome is likely -- a pressure-supported atmosphere about as extended
as the stellar distribution (BH96). Though most phases of the ISM
cool efficiently, initially hot gas ($T \ga 10^5 {\rm\,K}$, $n \la
10^{-3} {\rm\,cm^{-3}}$) could be shock-heated during a merger and
might produce envelopes of X-ray gas like those found around some
elliptical galaxies. On the other hand, X-ray observations of the
Antennae (Read, Ponman, \& Wolstencroft 1995) and Arp~220 (Heckman et
al.~1996) reveal apparent {\it outflows\/} of up to $10^9
{\rm\,M_\odot}$ of hot gas. The properties of these outflows are
inconsistent with shock-heating and seem to require significant
injections of mass and energy from merger-induced starbursts.
\section{Structure of Merger Remnants}
Much of this meeting has focused on possible ways in which nuclear
mass concentrations -- such as steep central cusps or black holes --
influence the global structure of elliptical galaxies. This
discussion is motivated by the apparent dichotomy (Kormendy, these
proceedings) between faint ellipticals (which have steep central
profiles, ``disky'' isophotes, and rapid rotation) and bright
ellipticals (which have shallow central profiles, ``boxy'' isophotes,
and slow rotation). But other factors besides central density profile
can influence galaxy structure.
\subsection{Unequal-mass mergers}
Rapidly-rotating systems may result when a large disk galaxy merges
with a smaller companion, as illustrated in a modest survey of
unequal-mass encounters (Barnes 1998). In these experiments, both
galaxies contained bulges, disks, and halos; the larger galaxy had $3$
times the mass of the smaller, and rotated $\sim 1.32$ times faster.
The galaxies were launched on initially parabolic orbits and went
through several passages before merging; remnants were evolved for
several more dynamical times before being analyzed.
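Incidentally, the quoted factor of $\sim 1.32$ between the two rotation speeds is what one obtains by scaling circular velocity with luminous mass as $v \propto M^{1/4}$, a Tully--Fisher-like relation; this reading is an inference, not stated explicitly above.

```python
# A 3:1 mass ratio with v ~ M^(1/4) (Tully-Fisher-like scaling) implies
# a rotation-speed ratio of 3**0.25 ~ 1.316, i.e. the quoted ~1.32.
mass_ratio = 3.0
v_ratio = mass_ratio ** 0.25
```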
\begin{figure}[b!]
\begin{center}
\epsfig{figure=fig2a.ps, height=3.0 true cm}
\epsfig{figure=fig2c.ps, height=3.0 true cm}
\end{center}
\begin{center}
\epsfig{figure=fig2b.ps, height=3.0 true cm}
\epsfig{figure=fig2d.ps, height=3.0 true cm}
\end{center}
\caption{Remnant produced by an unequal-mass merger. Top: edge-on
views. Bottom: line-of-sight velocities versus major axis position.
Left: natural grey scale. Right: cyclic grey scale.}
\label{remprof}
\end{figure}
Figure~\ref{remprof} shows edge-on views and velocity distributions
for an unequal-mass merger remnant. Unlike the products of equal-mass
mergers (Barnes 1992), this object is fairly oblate, with axial ratios
$b/a \simeq 0.9$, $c/a \simeq 0.6$. A good deal of ``fine structure''
is still present due to incomplete phase-mixing, but the edge-on views
show a distinctly disky morphology. The velocity profiles, which
mimic the result of placing a narrow slit along the remnant's major
axis, show that this object is rapidly rotating; in fact, $v/\sigma
\ga 2$ at larger radii. And as the cyclic version of the
velocity plot makes clear, the line profiles are asymmetric, rising
sharply on the leading side of the peak, but falling off gradually on
the trailing side.
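Axial ratios like the $b/a \simeq 0.9$, $c/a \simeq 0.6$ quoted above are typically measured from moments of the particle distribution. The toy sketch below recovers input axis ratios from second moments of a Gaussian cloud; for simplicity it assumes the principal axes lie along the coordinate axes, whereas a real analysis diagonalizes the (often iteratively computed) inertia tensor. All numbers are illustrative.

```python
import math, random

# Draw a triaxial Gaussian "remnant" with axis scales mimicking the
# measured b/a ~ 0.9 and c/a ~ 0.6, then recover the ratios from the
# second moments of the particle positions.
random.seed(2)
AX, BX, CX = 1.0, 0.9, 0.6  # input axis scales (illustrative)
pts = [(random.gauss(0, AX), random.gauss(0, BX), random.gauss(0, CX))
       for _ in range(40000)]

def axial_ratios(points):
    n = len(points)
    mxx = sum(p[0]*p[0] for p in points) / n
    myy = sum(p[1]*p[1] for p in points) / n
    mzz = sum(p[2]*p[2] for p in points) / n
    return math.sqrt(myy/mxx), math.sqrt(mzz/mxx)

b_a, c_a = axial_ratios(pts)
```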
The initial conditions used for this experiment were fairly generic;
the larger disk was inclined by $i = 71^\circ$ with respect to the
orbital plane, the smaller disk by $i = 109^\circ$. Between half and
three-quarters of a small sample of unequal-mass merger remnants
exhibit the rapid rotation and asymmetric line profiles seen in this
example (Bendo \& Barnes, in preparation). These objects clearly
don't resemble bright ellipticals, but do seem fairly similar to faint
ellipticals or S0 galaxies. The morphological and kinematic features
which support this classification are due to the incomplete scrambling
of galactic {\it disks\/} in unequal-mass mergers. If dissipative
collapse is responsible for the rapid rotation of faint ellipticals
(Kormendy 1989), a subsequent merger may still be needed to {\it
transform\/} the resulting object into something resembling an
early-type galaxy.
\subsection{Dissipative mergers}
By producing inflows, dissipation can dramatically deepen galactic
potential wells, and these deeper wells seem to influence the dynamics
of collisionless material (e.g., Katz \& Gunn 1991, Udry 1993, Dubinski
1994, BH96). But these studies mostly examined effects of dissipation
on dark halos; only the last one focused on disk-galaxy mergers, and
that work compared but one pair of carefully-matched simulations.
The two remnants compared by BH96 were produced by mergers of
equal-mass bulge/disk/halo galaxies. Both experiments started with
{\it exactly\/} the same initial conditions, using disk inclinations
of $0^\circ$ and $71^\circ$; both were evolved with the same spatial
resolution (a.k.a.~``force softening''). In the dissipative version,
a tenth of the disk mass was treated as gas with a cooling cut-off at
$T_{\rm c} = 10^4 {\rm\,K}$, while in the collisionless version
everything obeyed the collisionless Boltzmann equation.
\begin{figure}[b!]
\begin{center}
\epsfig{figure=fig3.ps, height=3.5 true cm}
\end{center}
\caption{Ellipticity profiles for collisionless (left) and dissipative
(right) versions of the same merger remnant. Open circles represent
$b/a$, filled circles $c/a$.}
\label{elliprof}
\end{figure}
Figure~\ref{elliprof} compares the ellipticity profiles of these two
remnants. Beyond their half-light radii ($r_{\rm hl} \simeq 0.18$
model units) both remnants are nearly oblate and rotate rapidly in
memory of the direct ($i = 0^\circ$) disks used in the initial
conditions. But inside $r_{\rm hl}$ the two remnants are quite
different; the collisionless version is a triaxial ellipsoid rapidly
tumbling about its minor axis, while the dissipative version is fairly
oblate and slowly rotating.
How does dissipation influence the shape of merger remnants? The
dissipative remnant has a deeper potential well as a result of its
central gas cloud, which contains $\sim 4.5$\% of the luminous mass,
or $\sim 0.9$\% of the total. But the finite resolution of the force
calculation spreads this central mass over a radius of $\sim 0.04 \,
r_{\rm hl}$; thus compared to a black hole or singular logarithmic
potential, this mass may be ineffective at scattering box orbits
(Valluri, these proceedings). Moreover, the oblate shape of the
remnant seems to be established at the moment of the merger itself
instead of developing progressively from the inside out (Ryden, these
proceedings).
Thinking that the shapes of these remnants might be constrained by the
scarcity of box orbits, I constructed a composite mass model with the
density profile of the dissipational remnant and the ellipticity
profile of its collisionless counterpart, and used its potential to
evaluate the phase-space volumes of the major orbit families (Barnes
1998). While this composite offered fewer boxes and more z-tubes
than the collisionless remnant, bona-fide box orbits were present at
all binding energies. Thus self-consistent equilibria as centrally
concentrated as the dissipational remnant and as flattened as the
collisionless remnant may exist. However, some finesse is probably
required to realize such equilibria. Merging sows stars far and wide
across phase space; not all physically consistent systems may be
constructed with such a blunt instrument.
All of this work is based on only one pair of simulations, and the two
remnants compared by BH96 may not be entirely typical. For example,
the pre-merger disks in these experiments developed bars, and the bars
in the dissipational version had significantly higher pattern speeds.
Thus when the disks merged, their bars had different orientations, and
this might influence remnant structure. Comparison of a larger sample
of collisionless and dissipative merger remnants is clearly warranted,
but sufficient computer power is hard to find. Meanwhile,
collisionless mergers between models of various central concentrations
may help expose the connection between density profile and remnant
shape (Fulton \& Barnes, in preparation).
\section{Simulations of Starburst Galaxies}
The crude treatment of gas thermodynamics in most work to date is
perhaps the greatest barrier to simulating star formation in
interacting galaxies. As described in \S~2, radiative cooling is
typically cut off at $10^4 {\rm\,K}$, and most of the gas remains
close to this temperature. Stars, on the other hand, form at much
lower temperatures; consequently, sites of star formation can't be
directly located in the simulated gas.
Within the framework of most simulations, gas density is the only
variable with an interesting range of values, so most treatments
assume the star formation rate is a function of the gas density. This
approach has some justification; studies of star formation in systems
ranging from quiescent disk galaxies to violent starbursts find that
star formation rates roughly follow a Schmidt (1959) law of the form
$\dot\Sigma_s \propto \Sigma_g^n$, where $\Sigma_s$ and $\Sigma_g$ are
the stellar and gaseous surface densities, respectively, and the index
$n \simeq 1.4 \pm 0.15$ (e.g., Kennicutt 1998). The usual approach is
thus to adopt a star formation law of the form $\dot\rho_s \propto
\rho_g^n$, where $\rho_s$ and $\rho_g$ are the stellar and gaseous
volume densities, respectively.
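As a concrete illustration (with an arbitrary normalization), the super-linear index $n \simeq 1.4$ is what converts modest interaction-driven inflows into strong starbursts:

```python
# Schmidt-law star formation: SFR density proportional to gas density^n,
# with n ~ 1.4. The normalization coefficient is arbitrary.
N_SCHMIDT = 1.4

def sfr_density(rho_gas, coeff=1.0, n=N_SCHMIDT):
    return coeff * rho_gas ** n

# Doubling the gas density boosts the star formation rate by 2**1.4 ~ 2.64.
boost = sfr_density(2.0) / sfr_density(1.0)
```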
The implementation of feedback effects due to stellar evolution and
supernovae is particularly difficult. Cooling is so rapid that the
otherwise plausible strategy of dumping thermal energy into the gas
proves ineffective; the energy is radiated away before anything else
can happen (Katz 1992, Summers 1993). Another trick is to impart some
outward momentum to gas particles surrounding sites of star formation
and/or supernovae; this seems more effective, but involves an
arbitrary efficiency factor (Navarro \& White 1993, Mihos \& Hernquist
1994). It's unlikely that feedback can be properly implemented as
long as the gas is effectively treated as a single-phase medium.
A promising alternative to density-driven star formation is now
available (Gerritsen \& Icke 1997). In this approach the gas is
allowed to cool below $10^4 {\rm\,K}$, and sites of star formation are
defined by a Jeans criterion. The stellar radiation field, calculated
in the optically thin limit, is used to heat the gas. Star formation
is thus a self-regulating process; negative feedback maintains the
system in a quasi-stable state while slowly converting gas to stars.
Competition between radiative heating and cooling creates a two-phase
medium with temperatures of $10^2 {\rm\,K}$ and $10^4 {\rm\,K}$; a
third phase at $10^6 {\rm\,K}$ appears when the effects of supernovae
are included. As a bonus, the resulting star formation obeys a
Schmidt law with index $n \simeq 1.3$.
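To illustrate why letting the gas cool below $10^4\,{\rm K}$ matters for a Jeans criterion, the sketch below evaluates one standard form of the Jeans mass, $M_J = (5k_BT/G\mu m_H)^{3/2}(3/4\pi\rho)^{1/2}$. The mean molecular weight and the two $(T, n)$ phases are illustrative values, not taken from Gerritsen \& Icke.

```python
import math

# SI constants; mu = 1.3 is an illustrative mean molecular weight.
K_B, G, M_H, MU, M_SUN = 1.380649e-23, 6.674e-11, 1.6726e-27, 1.3, 1.989e30

def jeans_mass_msun(T, n_cm3):
    """Jeans mass in solar masses for temperature T [K], density n [cm^-3]."""
    rho = MU * M_H * n_cm3 * 1e6  # kg m^-3
    return ((5.0*K_B*T / (G*MU*M_H))**1.5
            * math.sqrt(3.0 / (4.0*math.pi*rho)) / M_SUN)

m_warm = jeans_mass_msun(1.0e4, 1.0)    # warm diffuse phase: ~1e7-1e8 Msun
m_cold = jeans_mass_msun(1.0e2, 1.0e2)  # cold dense phase: ~1e3-1e4 Msun
# M_J scales as T^(3/2) n^(-1/2), so cooling from 1e4 K to 1e2 K while
# compressing by 100x lowers the Jeans mass by a factor of 1e4.
```

Only in the cold phase does the Jeans mass drop to star-cluster scales, so a Jeans criterion naturally selects cold, dense gas as the site of star formation.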
It may turn out that many of the desirable features of this approach
are simple consequences of combining radiative cooling and negative
feedback. Some details surely require modification; the treatment of
the radiation field seems particularly suspect since galactic disks,
edge-on, are not optically thin. But the general view of star
formation as a self-regulating process and the re-introduction of gas
temperature as a physically interesting variable are surely major
improvements on previous treatments.
Does the treatment of star formation make a real difference in the
outcome of simulations? In at least one respect, it does.
Simulations using the Schmidt law predict that interacting late-type
disk galaxies consume most of their gas shortly after their first
passage; merger-induced starbursts only result if the disks are
protected from bar formation by compact central bulges (Mihos \&
Hernquist 1996). In contrast, simulations using self-regulated star
formation predict that bulgeless disk galaxies retain enough gas to
fuel ultra-luminous starbursts during their final mergers; while star
formation rates increase after the first passage, radiative heating
delays violent star formation until the merger drives most of the gas
into a compact central cloud (Gerritsen 1997).
To date, outflows like those seen in interacting starburst galaxies
have not been reproduced with either treatment of merger-induced star
formation. This remains an important challenge for the future.
\section{Discussion}
Within the space of this brief review, it's impossible to do more than
touch on a few aspects of galaxy interactions; I've said nothing about
the tidal genesis of grand-design spirals, origins of ring galaxies,
multiple mergers and the formation of cD galaxies, or interactions in
groups and clusters, to name a few. The main points I have tried to
address are listed here:
1. Simulating interacting galaxies is an {\it art\/}; it can't be
reduced to a recipe. Picasso defined art as ``a lie which makes us
realize truth'', and this seems to be a good stance to adopt when
trying to reproduce real galaxies. Some features of the Antennae may
be clues leading to fundamental insights, while others may be due to
quirks of the pre-encounter disks. We don't always know which is
which; experience is the only guide.
2. In interacting galaxies, thermodynamics is the key to the fate of
the gas. Gas which encounters strong radiative shocks in the early
phases of a collision will diverge from the stars and fall into the
centers of interacting galaxies and merger remnants. Gas which does
not suffer such shocks until the later stages of a collision, on the
other hand, retains much of its initial angular momentum and builds up
extended disks.
3. Remnant structure is determined by many factors. Steep central
cusps (or nuclear black holes) may suppress strong triaxiality, but
this doesn't explain why galaxies with such profiles rotate rapidly.
More generally, sheer {\it existence\/} of self-consistent equilibria
is not enough to explain the properties of elliptical galaxies; the
details of formation play an important role.
4. Simulations including star formation are still in their early days;
a good deal of further work is needed to develop and test alternate
approaches. Plausible treatments of feedback from star formation and
evolution require abandoning the assumptions which effectively limit
the simulated gas to a single phase.
As noted in the abstract, galaxy mergers have deep connections to
galaxy formation. For example, the issues reviewed in \S~2 and~4
arise in cosmological simulations of disk galaxy formation; in
dissipative CDM simulations, gas inflows are so efficient that little
remains to build disks (Navarro \& Benz 1991, Navarro \& White 1994,
Navarro \& Steinmetz 1997). The resolution of this problem is
probably to implement strong feedback of the kind long assumed in
hierarchical models of galaxy formation (White \& Rees 1978, White \&
Frenk 1991, Kauffmann, White, \& Guiderdoni 1993, Navarro, Frenk, \&
White 1995). This may well be the same sort of feedback needed to
reproduce the outflows of hot gas in violently interacting starburst
galaxies.
\acknowledgements
I thank John Hibbard for allowing me to discuss our unpublished work
on NGC~4038/9 and Lars Hernquist for providing me with a copy of
TREESPH. I'm also grateful to Jun Makino and the University of Tokyo
for hospitality while I was writing this report. This research has
made use of NASA's Astrophysics Data System Abstract Service.
\section{Introduction}
In 1970, Kadomtsev and Petviashvili \cite{kadom} derived two equations as
generalizations of the Korteweg-de Vries (KdV) equation to two spatial
dimensions:
\renewcommand{\theequation}{\mbox{KP}}
\begin{equation}\la{kp}
\left(-4 u_t+6 u u_x+u_{xxx}\right)_x+3 \sigma^2 u_{yy}=0,
\end{equation}
\setcounter{equation}{0}
\renewcommand{\theequation}{\mbox{\arabic{equation}}}
\noindent where $\sigma^2=\pm 1$ and the subscripts denote differentiation.
Depending on the physical situation, one derives the equation
either with $\sigma^2=-1$ or $\sigma^2=+1$. The resulting partial differential
equations are referred to as (KP1) and (KP2) respectively. For real-valued
solutions, the two equations have different physical meaning and different
properties \cite{kadom}. Nevertheless, for some purposes the sign of $\sigma^2$
is irrelevant and we equate $\sigma \equiv 1$. In this case, we simply refer to
``the KP equation'' or to ``(KP)''. This amounts to working with (KP2). If
necessary, (KP1) is obtained through a complex scaling of the $y$-variable.
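As a consistency check on this normalization of (KP) with $\sigma = 1$, the standard one-line-soliton $u = 2\partial_x^2 \ln(1+{\rm e}^\theta)$, $\theta = kx+ly+\omega t$, satisfies the equation when $\omega = (k^4+3l^2)/(4k)$. The sketch below verifies this numerically with central finite differences; the parameter values and step size are illustrative.

```python
import math

# One-line-soliton of (KP) with sigma = 1: u = 2 d^2/dx^2 ln(1 + e^theta)
# = (k^2/2) sech^2(theta/2), theta = k x + l y + w t, with dispersion
# relation w = (k^4 + 3 l^2)/(4 k). Parameters are illustrative.
k, l = 1.0, 0.5
w = (k**4 + 3*l**2) / (4*k)

def u(x, y, t):
    th = k*x + l*y + w*t
    return 0.5 * k*k / math.cosh(0.5*th)**2

def kp_residual(x, y, t, h=0.02):
    """(-4 u_t + 6 u u_x + u_xxx)_x + 3 u_yy via central differences."""
    u_tx = (u(x+h,y,t+h) - u(x+h,y,t-h) - u(x-h,y,t+h) + u(x-h,y,t-h))/(4*h*h)
    u_x  = (u(x+h,y,t) - u(x-h,y,t)) / (2*h)
    u_xx = (u(x+h,y,t) - 2*u(x,y,t) + u(x-h,y,t)) / (h*h)
    u_x4 = (u(x+2*h,y,t) - 4*u(x+h,y,t) + 6*u(x,y,t)
            - 4*u(x-h,y,t) + u(x-2*h,y,t)) / h**4
    u_yy = (u(x,y+h,t) - 2*u(x,y,t) + u(x,y-h,t)) / (h*h)
    return -4*u_tx + 6*(u_x**2 + u(x,y,t)*u_xx) + u_x4 + 3*u_yy

res = kp_residual(0.3, 0.2, 0.1)  # small compared with the O(1) terms
```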
The KP equation can be written as the compatibility condition of two linear
equations for an auxiliary wave function $\Psi$ \cite{dryuma, shabat}:
\setcounter{saveeqn}{\value{equation}
\begin{eqnarray}\la{linear1}
\sigma \Psi_y&=&\Psi_{xx}+u \Psi,\\\la{linear2}
\Psi_t&=&\Psi_{xxx}+\frac{3}{2} u \Psi_x+\frac{3}{4} \left(u_x+w\right) \Psi.
\end{eqnarray}
\setcounter{equation}{\value{saveeqn}
Expressing the compatibility of \rf{linear1} and \rf{linear2},
$\Psi_{yt}\equiv \Psi_{ty}$, and eliminating $w$ results in \rf{kp}, if we
assume that a complete basis for the wave function $\Psi$ exists. Note that if
the KP solution is independent of $y$, the above Lax pair (\ref{linear1},
\ref{linear2}) reduces to the Lax pair for (KdV) \cite{ggkm} by simple
separation of variables of the wave function $\Psi$.
The two equations \rf{linear1}, \rf{linear2} are only two equations of an
infinite hierarchy of linear evolution equations for the wave function $\Psi$
with respect to {\em higher-order time variables} $t_n$ \cite{sato}. We refer
to this collection of linear flows as the {\em linear KP hierarchy}:
\begin{equation}\la{lkphier}
\Psi_{t_k}=A_k \Psi,~~\mbox{for~}k=1,2,3,\ldots
\end{equation}
\noindent with $A_k$ a linear differential operator in $x$ of order $k$ with
(usually) nonconstant coefficients. We refer to the $t_k$, $k=1,2,3,4, \ldots$
as higher-order time variables. A consequence of our definition of the KP
hierarchy given in Section 3 is that $t_1$ can be identified with $x$.
Furthermore, $y$ and $t$ are related to $t_2$ and $t_3$ respectively.
By expressing the compatibility of these different linear flows,
$\Psi_{t_{k_1}t_{k_2}}=\Psi_{t_{k_2}t_{k_1}}$, and assuming the existence of a
complete basis for $\Psi$, we obtain an infinite number of nonlinear partial
differential equations for the evolution of $u$, $w$ (and other functions
referred to as potentials) with respect to the $t_k$ \cite{sato}, called the
{\em KP hierarchy}:
\begin{equation}\la{kphier}
\pp{A_{k_1}}{t_{k_2}}-\pp{A_{k_2}}{t_{k_1}}=\left[A_{k_2},A_{k_1}\right],
\end{equation}
\noindent where $\left[A,B\right]\equiv AB-BA$. The linear KP hierarchy \rf{lkphier}
and the KP hierarchy \rf{kphier} are the fundamental ingredients for the
methods presented in this paper.
A large family of quasiperiodic solutions of the KP equation was found by
Krichever \cite{krich1, kricheverintegration}. Each of these solutions has a
finite number of phases. They are given by
\setcounter{saveeqn}{\value{equation}
\begin{eqnarray}\la{reconstruction}
u&=&\tilde{u}+2 \partial_x^2 \ln \Theta_g(\phi_1, \phi_2, \ldots, \phi_g |
\mbf{B}),\\\la{phases}
\phi_j&=&k_j x+l_j y+\omega_j t+\phi_{0j},~~\mbox{for~}j=1,2,\ldots, g,
\end{eqnarray}
\setcounter{equation}{\value{saveeqn}
\noindent for some constants $\tilde{u}, k_j, l_j, \omega_j$ and $\phi_{0j}$, $j=1,
2,\ldots, g$. $\Theta_g$ is a {\em Riemann theta function} of {\em genus} $g$,
parametrized by a $g \times g$ {\em Riemann matrix} $\mbf{B}$:
\begin{equation}\la{thetafunction} \Theta_g(\mbf{\phi}| \mbf{B})\equiv \sum_{\mbf{m}\in
\mbf{Z}^g} \exp\left(\frac{1}{2} \mbf{m} \cdot \mbf{B} \cdot \mbf{m}+i \mbf{m}
\cdot \mbf{\phi}\right). \end{equation}
\noindent Here $\mbf{\phi}\equiv(\phi_1, \ldots, \phi_g)$. The vector $\mbf{m}$ runs
over all $g$-dimensional vectors with integer components. The Riemann matrix
$\mbf{B}$ is a symmetric $g \times g$ matrix with negative definite real part.
Whenever the matrix $\mbf{B}$ is obtained from a compact, connected Riemann
surface in a standard way \cite{dub}, \rf{reconstruction}
defines a solution of the KP equation \cite{krich1, kricheverintegration}. In
what follows, the dependence of the theta function $\Theta_g$ on the Riemann
matrix $\mbf{B}$ and the index $g$ denoting the number of phases will be
suppressed for notational simplicity. By construction, $\Theta(\mbf{\phi})$ is
periodic in each component of $\mbf{\phi}$. Hence the restriction to the linear
winding \rf{phases} makes \rf{reconstruction} by definition a quasiperiodic
function in $x$ or $y$ or $t$. A solution of the form \rf{reconstruction} is
said to have genus $g$. In the terminology of Krichever and Novikov
\cite{kricheverrank, krichrank}, all solutions of the form
\rf{reconstruction} have {\em rank 1}.
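A brute-force truncation of \rf{thetafunction} makes this periodicity concrete. In the sketch below the genus-2 Riemann matrix and the truncation radius are illustrative; the negative definite real part of $\mbf{B}$ guarantees rapid convergence of the sum.

```python
import cmath

# Truncated genus-2 Riemann theta function: sum over m in Z^2 of
# exp(0.5 m.B.m + i m.phi), restricted to |m_j| <= N. The real symmetric
# matrix B below has eigenvalues -1.5 and -2.5 (negative definite), so
# the terms decay at least like exp(-0.75 |m|^2).
B = [[-2.0, 0.5],
     [0.5, -2.0]]
N = 8  # truncation radius (illustrative)

def theta(phi):
    total = 0j
    for m1 in range(-N, N + 1):
        for m2 in range(-N, N + 1):
            quad = 0.5 * (B[0][0]*m1*m1 + 2*B[0][1]*m1*m2 + B[1][1]*m2*m2)
            total += cmath.exp(quad + 1j*(m1*phi[0] + m2*phi[1]))
    return total

val = theta((0.3, 1.1))
shifted = theta((0.3 + 2*cmath.pi, 1.1))  # 2*pi-periodic in each phase
```

For real symmetric $\mbf{B}$ the $\mbf{m}$ and $-\mbf{m}$ terms are complex conjugates, so the truncated sum is real up to rounding, and the $2\pi$-periodicity in each phase holds term by term.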
The problem addressed in this paper is to find $u(x,y,t)$ such that:
\begin{equation}\la{problem}
\left\{
\begin{array}{l}
(-4 u_t+6 u u_x+u_{xxx})_x+3 \sigma^2 u_{yy}=0\\
u(x,y,0)=\mbox{rank 1, finite-genus solution of KP,}\\
\phantom{u(x,y,0)=}~\mbox{evaluated at~}t=0.
\end{array}
\right.
\end{equation}
\noindent The initial data are purposely not written in the form \rf{reconstruction}.
The problem is not only to determine the frequencies $\omega_j$, for
$j=1,2,\ldots, g$, but also to fix the other parameters $g, \mbf{B},
\tilde{u}, k_j, l_j, \phi_{0j}$, for $j=1,2,\ldots, g$: the acceptable class
of initial data consists of all finite-genus solutions of the KP equation
evaluated at a fixed time, with an unspecified but finite number of phases.
A solution to this problem was offered in \cite{ds1}, where a seven-step
algorithm was presented. This algorithm was a mixture of new ideas and
known work by Krichever \cite{kricheverchap, krich1,
kricheverintegration} and Previato \cite{previato}. The main idea of
the algorithm is to provide the ingredients required for Krichever's inverse
procedure for the reconstruction of a finite-genus solution of the KP equation
\cite{krich1, kricheverintegration}, {\em i.e.} a compact connected Riemann
surface and a divisor on this surface.
In the present paper, a different algorithm to solve the problem \rf{problem}
is presented. This algorithm shares some steps with that in \cite{ds1}.
However, in contrast to the first algorithm, the second algorithm does not
work towards Krichever's inverse procedure \cite{krich1,
kricheverintegration}. The main idea here is to examine the structure of a
set of ordinary differential equations obtained in step 5 of \cite{ds1}.
In this paper, we show the following:
\begin{itemize}
\item The rank 1, finite-genus solutions of the KP equation are governed by a
system of ordinary differential equations. This system is constructed
explicitly.
\item This system of ordinary differential equations is Lagrangian.
\item With some care, the Lagrangian equations are written as a Hamiltonian
system of ordinary differential equations in $x$.
\item This Hamiltonian system of ordinary differential equations is completely
integrable in the sense of Liouville \cite{arnold}. A complete set of
conserved quantities in involution under the Poisson bracket is explicitly
constructed.
\item From these conserved quantities, one easily constructs completely
integrable Hamiltonian systems of ordinary differential equations describing
the evolution of the given initial condition of \rf{problem} under any of the
higher-order time variables $t_k$, including $k=1,2,3$. This provides a
solution of \rf{problem}.
\end{itemize}
In the next section, it is shown how the information listed above is used in
an algorithm to solve problem \rf{problem}. As with the algorithm in
\cite{ds1}, most of the steps of the algorithm in this paper are due to
others. The work of Bogoyavlenskii and Novikov \cite{bogoyavlenskii} provided
the main inspiration: the algorithm presented here is a generalization to the
KP equation of their approach to solve problem \rf{problem} for the KdV
equation. The work by Gel'fand and Dikii \cite{gd1}, Veselov \cite{veselov},
Adler \cite{adler} and Dikii \cite{dickey} was used to execute some of the
steps of the algorithm. Although each of the above authors considered
parts of the problem treated here, to the best of our knowledge a complete
solution of problem \rf{problem} using ordinary differential equations to
solve the posed initial-value problem has not been given before.
The algorithm presented here offers an alternative approach to that in
\cite{ds1}. There are, however, some natural consequences of the new approach
that do not follow from the approach in \cite{ds1}. These include:
\begin{itemize}
\item {\bf Canonical variables} for the rank 1, finite-genus solutions of the
KP equation. Any rank 1, finite-genus solution of the KP equation satisfies a
Hamiltonian system of ordinary differential equations. The Poisson bracket on
the phase space of this Hamiltonian system is the canonical Poisson bracket,
resulting in a description of the rank 1, finite-genus solutions of the KP
equation in terms of canonical (Darboux) variables.
\item {\bf Conserved Quantities} for rank 1, finite-genus solutions
\rf{reconstruction} of the KP equation. The Hamiltonian system of ordinary
differential equations is completely integrable in the sense of Liouville. A
sufficient number of conserved quantities is constructed explicitly. These
conserved quantities are mutually in involution under the canonical Poisson
bracket.
\item {\bf Parameter count} of the theta-function solutions \rf{reconstruction}
of the KP equation. It is known \cite{dub} that a generic\footnote{for a
precise definition, see section \ref{sec:algorithm}} solution of genus $g$ of
the KP equation is characterized by $4g+1$ independent parameters. We reconfirm
this result, and extend it, by providing a parameter count for nongeneric
solutions of the KP equation as well. Furthermore, the parameters naturally
divide into two classes: parameters with ``dynamical significance'' and other
parameters. The dynamically significant parameters are the initial conditions
of the Hamiltonian system of ordinary differential equations describing the
solution. The other parameters foliate the phase space of the Hamiltonian
system.
\item {\bf Minimal characterization of the initial data}. The approach
presented here demonstrates that a finite-genus solution $u(x,y,t)$ of the KP
equation is completely determined by a one-dimensional slice of the initial
condition $u(x,y=0,t=0)$. In other words, it suffices to specify the initial
data of \rf{problem} at a single $y$-value, say at $y=0$.
\end{itemize}
Krichever \cite{krich3} proposed another method to solve an initial-value
problem for the KP2 equation with initial data that are spatially periodic (in
both $x$ and $y$). Krichever's method is not restricted to initial data of
finite genus, hence it is in that sense more general than the algorithm
presented here. On the other hand, the methods of this paper require no
restriction to periodic initial data.
\section{Overview of the algorithm}\la{sec:algorithm}
A solution for problem \rf{problem} is obtained using a seven-step algorithm.
In this section, an overview of this algorithm is given, along with references
to earlier work.
\begin{enumerate}
\item {\bf Determine the genus of the initial data} \cite{ds1}. Let us rewrite
\rf{phases} in the form
\begin{equation}\la{newphases}
\phi_j=\mbf{\kappa_j} \cdot \mbf{x}+\omega_j t+\phi_{0j}, ~~~~j=1,2,\ldots, g,
\end{equation}
\noindent with $\mbf{\kappa_j}=(k_j,l_j)$ and $\mbf{x}=(x,y)$. If all wave vectors
$\mbf{\kappa_j}$ are incommensurable, {\em i.e.,} if there is no relationship
\begin{equation}\la{commensurable}
\sum_{i=1}^g n_i \mbf{\kappa_i}=0
\end{equation}
for integers $n_i$ not all zero, then a two-dimensional Fourier transform of
the initial data resolves the vectors $\mbf{\kappa_j}$. Because the initial
data contain only a finite number of phases, the Fourier transform is
necessarily discrete; {\em i.e.,} it consists of isolated spikes. Since the
condition \rf{commensurable} almost never holds, we can almost always find the
genus of the initial condition by counting the number of spikes in the Fourier
transform, modulo harmonics.
If condition \rf{commensurable} holds, then the method described above finds
only a lower bound on the genus of the initial data. The method fails especially
dramatically in one important special case: if the initial data are spatially
periodic, then \rf{commensurable} holds automatically for any two wave vectors
$\mbf{\kappa_i}$ and $\mbf{\kappa_j}$. The lower bound obtained in this case
for the number of phases is 1. This problem was already pointed out in
\cite{ds1}. If the initial data are spatially periodic, it is most convenient
to require that the genus of the initial data be given as part of the
initial data.
This Fourier-transform method for determining the genus of the initial
condition has been used in \cite{hmss, currysegur}.
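As a toy illustration of this step (with made-up two-phase data, not taken from the paper), one can count spike pairs in a discrete two-dimensional Fourier transform; the wave vectors $(3,2)$ and $(5,7)$ below are illustrative choices only.

```python
import numpy as np

# synthetic "initial data" with two phases on a periodic grid
N = 64
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
u0 = 1.3 * np.cos(3 * X + 2 * Y) + 0.7 * np.cos(5 * X + 7 * Y)

spec = np.abs(np.fft.fft2(u0))
# the transform of finite-phase data consists of isolated spikes;
# each phase contributes a +/- pair of spikes (no harmonics here)
spikes = np.argwhere(spec > 1e-6 * spec.max())
genus_estimate = len(spikes) // 2
```

For real data one would in addition threshold against noise and discard harmonics, as discussed above.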
\item {\bf Determine two stationary flows of the KP hierarchy: find $(r,n)$}
Mulase \cite{mulase} and later Shiota \cite{shiota} showed that a rank 1,
finite-genus solution \rf{reconstruction} of the KP equation is a simultaneous
solution of all flows of the KP hierarchy, obtained by using \rf{reconstruction} with
\begin{equation}\la{phaseshier}
\phi_j=\sum_{i=1}^\infty k_{j,i} t_i,
\end{equation}
\noindent for $j=1,2,\ldots, g$, instead of \rf{phases}. Mulase and Shiota
demonstrated that the corresponding rank 1, finite-genus solutions are
stationary with respect to all but a finite number of the higher-order times
in the KP hierarchy. A rank 1, finite-genus solution of the KP
equation is said to be stationary with respect to $t_k$ if
\begin{equation}\la{stat}
\sum_{i=1}^k d_{i} \pp{u}{t_i}=0,
\end{equation}
\noindent with all the $d_{i}$ constant and $d_{k}=1$.
The algorithm presented here requires the knowledge of two independent
higher-order times of the KP hierarchy $t_r$ and $t_n$, such that $u$ is
stationary with respect to both $t_r$ and $t_n$. First, $r$ is the minimal $k$
for which \rf{stat} holds. For this $r$, $n$ corresponds to the
lowest-order higher-order time $t_n$ such that the $t_n$-flow is independent
of the $t_r$-flow and \rf{stat} holds for $k=n$.
In \cite{ds1}, a recipe was presented to find $(r,n)$, given the genus $g$ of
the initial data. Actually, a finite number of pairs $(r,n)$ is determined for
any given $g$. As we will see in step 4, each one of the resulting pairs
$(r,n)$ gives rise to a set of ordinary differential equations, one of which
the initial condition necessarily satisfies. The pairs $(r,n)$ for which the
initial condition does not satisfy the differential equations need to be
rejected. Hence, only at step 4 do we nail down a pair of stationary flows of
the KP hierarchy for the given initial data. Here, at step 2, the number of
pairs $(r,n)$ is reduced to a finite number.
For initial data with $g$ phases, the following constraints on $(r,n)$ are
known \cite{ds1}:
\begin{itemize}
\item All values of $r$ with $2 \leq r \leq g+1$ are allowed.
\item For each $r$, let $n_j(r)$ be the $j$-th integer greater than $r$ that is
coprime with $r$. The lowest $(g-r+2)$ of these integers are possible values of
$n$.
\item Exclude from the list of pairs $(r,n)$ obtained above the values of $n$
for which \linebreak $(r-1)(n-1)/2<g$.
\item The remaining pairs $(r,n)$ are all possible for genus $g$.
\end{itemize}
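The constraints above are easy to automate; the following sketch (our own illustration, not code from \cite{ds1}) enumerates the candidate pairs $(r,n)$ for a given genus $g$.

```python
from math import gcd

def stationary_pairs(g):
    """Candidate pairs (r, n) of stationary KP flows for initial data
    with g phases, following the constraints listed above."""
    pairs = []
    for r in range(2, g + 2):                 # all r with 2 <= r <= g+1
        # the lowest (g - r + 2) integers n > r that are coprime with r
        candidates, n = [], r + 1
        while len(candidates) < g - r + 2:
            if gcd(n, r) == 1:
                candidates.append(n)
            n += 1
        for n in candidates:
            if (r - 1) * (n - 1) / 2 >= g:    # exclude (r-1)(n-1)/2 < g
                pairs.append((r, n))
    return pairs
```

For example, $g=2$ yields the pairs $(2,5)$ and $(3,4)$, consistent with the remarks below: $r=2$ forces $n=2g+1$, and the generic pair is $(g+1,g+2)$.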
\newpage
\noindent {\bf Remarks}\la{remrem}
\begin{enumerate}
\item When $r=2$, the only possibility for $n$ is $n=2g+1$. This is
the case corresponding to one-dimensional solutions.
\item A solution of the KP equation of genus $g$ is called {\em generic} if
the first $g$ vectors $\mbf{k_i}=(k_{1,i},k_{2,i}, \ldots, k_{g,i})$ are
linearly independent. For a generic solution of genus $g$ of the KP equation,
$(r,n)=(g+1, g+2)$.
\item We have the following confusing situation: to find a generic genus $g$
KP solution, we need $(r,n)=(g+1,g+2)$. This choice leads to a Hamiltonian
system of ordinary differential equations (in step 5). However, a generic
solution of genus $g$ is not a typical solution of this Hamiltonian system
({\em i.e.,} it does not depend on a maximal number of parameters for this
system). Typical solutions of the Hamiltonian system are nongeneric solutions
of higher genus. Since these higher-genus solutions depend on more parameters,
one must search carefully to find the generic genus $g$ solutions among them.
This is further discussed in Sections \ref{sec:reductions} and
\ref{sec:parameters}.
\end{enumerate}
\item {\bf Impose the $r$-reduction}
For a given value of $r$, obtained from step 2, we impose on the KP hierarchy
\rf{kphier} the reduction that the KP solution is independent of $t_r$. Hence,
the coefficients of $A_k$, for all $k$, are independent of $t_r$. Following
Gel'fand and Dikii \cite{gd1}, Adler \cite{adler} and Strampp and Oevel
\cite{strampp}, this allows us to rewrite the {\em $r$-reduced KP hierarchy}
as an infinite ($r\times r$ matrix) hierarchy of partial differential
equations, each with one space dimension, and all of them mutually commuting.
In other words, \rf{kphier} is replaced by a hierarchy of the form
\begin{equation}\la{11hier}
\pp{B}{t_k}=\left[B_k,B\right], ~~~r~\mbox{does not divide}~k.
\end{equation}
The matrices $B$ and $B_k$ contain $r-1$ unknown functions. The higher-order
time variables of the hierarchy in \rf{11hier} are inherited from the KP
hierarchy \rf{kphier}. Only $t_r$ and the higher-order times of the form
$t_{(ir)}$, for integer $i$, no longer appear. In particular $t_1\equiv
x$. Each equation of the hierarchy \rf{11hier} is Hamiltonian, as is shown in
Section \ref{sec:rred}, where the details of the $r$-reduction are given.
\item {\bf Impose the $n$-reduction}
After imposing stationarity of the KP solution with respect to $t_r$, we now
impose stationarity of the KP solution with respect to $t_n$ as well. Imposing
the $n$-reduction in addition to the $r$-reduction leads to the {\em
$(r,n)$-reduced KP equation}. The $(r,n)$-reduced KP equation is a system of
$(r-1)$ ordinary differential equations in $x$ for $(r-1)$ unknown functions
$\mbf{u}$.
Again, following Gel'fand and Dikii \cite{gd1}, Adler \cite{adler} and Strampp
and Oevel \cite{strampp}, we write the $(r,n)$-reduced KP equation in
Lagrangian form:
\begin{equation}\la{lagode}
\dd{{\cal L}}{\mbf{u}}=\mbf{0},
\end{equation}
\noindent where $\delta{{\cal L}}/\delta{\mbf{u}}$ denotes the variational derivative
of ${\cal L}$ with respect to a certain vector function $\mbf{u}=(f_1,
f_2, \ldots, f_{r-1})$, which is explicitly determined in terms of the
solution of (KP):
\begin{equation}\la{vardervec} \dd{{\cal L}}{\mbf{u}}\equiv\left(\dd{{\cal L}}{f_1},
\dd{{\cal L}}{f_2}, \ldots, \dd{{\cal L}}{f_{r-1}}\right)^T \end{equation}
\noindent and for any function $f$, the {\em variational derivative} of ${\cal L}$
with respect to $f$ is defined as
\begin{equation}\la{varder}
\dd{}{f}{\cal L}(u,u_x, u_{xx}, \ldots)\equiv \sum_{k\geq 0} (-1)^k \ppn{k}{}{x}
\pp{{\cal L}}{f^{(k)}}.
\end{equation}
\noindent Here, $f^{(k)}$ denotes the $k$-th derivative of $f$ with respect to $x$.
Equations \rf{lagode} are a set of ordinary differential equations that the
initial condition needs to satisfy. This constitutes a test on the validity of
the pair $(r,n)$, chosen after step 2.
The details of imposing the $n$-reduction in addition to the $r$-reduction are
found in Section \ref{sec:nred}.
\item {\bf The Ostrogradskii transformation, canonical variables and the
Hamiltonian system}
In Section \ref{sec:ostro}, the Lagrangian system of ordinary differential
equations in $x$ is transformed to a Hamiltonian system of ordinary
differential equations in $x$ with canonical variables. Since the Lagrangian
${\cal L}$ depends on derivatives of $\mbf{u}$ of order higher than one, an
extension of the Legendre transformation is needed. This is the {\em
Ostrogradskii transformation} \cite{ostro, whittaker}, defined in Section
\ref{sec:ostro}. It defines {\em canonical variables} $\mbf{q}$ and $\mbf{p}$
in terms of the Lagrangian variables $\mbf{u}$:
\begin{equation}\la{ostrosimple}
\mbf{q}=\mbf{q}(\mbf{u}, \mbf{u}_x, \mbf{u}_{xx}, \ldots), ~~~
\mbf{p}=\mbf{p}(\mbf{u}, \mbf{u}_x, \mbf{u}_{xx}, \ldots).
\end{equation}
The Lagrangian ${\cal L}$ is called {\em nonsingular} \cite{krupkova, dfn2} if
the Ostrogradskii transformation is invertible, {\em i.e.,} if the
transformation \rf{ostrosimple} can be solved for the Lagrangian variables
$\mbf{u}$ and their derivatives in terms of the canonical variables $\mbf{q}$
and $\mbf{p}$. If the Lagrangian is nonsingular, the Euler-Lagrange equations
corresponding to the Lagrangian ${\cal L}$ are equivalent to the Hamiltonian
system
\begin{equation}\la{hamsyssimple}
\pp{\mbf{q}}{x}=\pp{H}{\mbf{p}},~~~ \pp{\mbf{p}}{x}=-\pp{H}{\mbf{q}}
\end{equation}
\noindent where the Hamiltonian $H$ is determined explicitly in terms of the
Lagrangian.
If both $r$ and $n$ are odd, the Lagrangian ${\cal L}$ is shown to be singular
in Section \ref{sec:sing}. Nevertheless, the dynamics in terms of the
Lagrangian variables is still well-posed, as shown by Veselov \cite{veselov}.
In Section \ref{sec:sing}, the singular Lagrangians are further investigated.
We indicate how one might be able to avoid dealing with singular Lagrangians:
a simple invertible transformation on the Lagrangian variables should be able
to transform the singular Lagrangian into a nonsingular one. Otherwise, one can
always resort to the more general methods of Krupkova \cite{krupkova} or to the
theory of constrained Hamiltonian systems \cite{dirac}.
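To make the construction concrete, here is a toy example of our own (not one of the $(r,n)$-reduced Lagrangians of this paper): for ${\cal L}=\frac{1}{2}u_{xx}^2$ the Ostrogradskii variables are $q_1=u$, $q_2=u_x$, $p_1=\partial{\cal L}/\partial u_x-\frac{d}{dx}(\partial{\cal L}/\partial u_{xx})=-u_{xxx}$ and $p_2=\partial{\cal L}/\partial u_{xx}=u_{xx}$, with Hamiltonian $H=p_1q_2+\frac{1}{2}p_2^2$. Hamilton's equations in $x$ then reproduce the Euler-Lagrange equation $u_{xxxx}=0$, as the sketch below checks against an exact cubic solution.

```python
def rhs(z):
    """Hamilton's equations in x for H = p1*q2 + p2**2/2."""
    q1, q2, p1, p2 = z
    return [q2,    # dq1/dx =  dH/dp1
            p2,    # dq2/dx =  dH/dp2
            0.0,   # dp1/dx = -dH/dq1
            -p1]   # dp2/dx = -dH/dq2

def rk4(z, h, steps):
    """Classical fourth-order Runge-Kutta integration in x."""
    for _ in range(steps):
        k1 = rhs(z)
        k2 = rhs([zi + 0.5 * h * ki for zi, ki in zip(z, k1)])
        k3 = rhs([zi + 0.5 * h * ki for zi, ki in zip(z, k2)])
        k4 = rhs([zi + h * ki for zi, ki in zip(z, k3)])
        z = [zi + h / 6.0 * (a + 2 * b + 2 * c + d)
             for zi, a, b, c, d in zip(z, k1, k2, k3, k4)]
    return z

# initial data matching the cubic u(x) = 1 + 2x + (3/2)x^2 + (2/3)x^3,
# i.e. (q1, q2, p1, p2) = (u, u_x, -u_xxx, u_xx) at x = 0
z0 = [1.0, 2.0, -4.0, 3.0]
zT = rk4(z0, 0.01, 100)   # evolve to x = 1
```

At $x=1$ the exact cubic gives $(u,u_x,-u_{xxx},u_{xx})=(31/6,\,7,\,-4,\,7)$, which the integration reproduces.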
\item {\bf Complete integrability of the Hamiltonian system}
The Hamiltonian system \rf{hamsyssimple} is shown to be {\em completely
integrable in the sense of Liouville} in Section \ref{sec:comp}. If the
dimension of the vectors $\mbf{q}$ and $\mbf{p}$ is $N$, the Hamiltonian system
is $2N$-dimensional. A set of $N$ functionally independent conserved quantities
$T_{k}$ is constructed. Generalizing the work of Bogoyavlenskii and Novikov
\cite{bogoyavlenskii}, these conserved quantities are
shown to be mutually {\em in involution}, {\em i.e.,}
\begin{equation}\la{involsimple}
\left\{T_{k},T_{l} \right\}\equiv 0,
\end{equation}
\noindent where $\{f,g\}$ denotes the {\em Poisson bracket} of the functions $f$
and $g$:
\begin{equation}\la{pb}
\left\{f,g\right\}\equiv \pp{f}{\mbf{q}} \pp{g}{\mbf{p}}-
\pp{f}{\mbf{p}} \pp{g}{\mbf{q}}.
\end{equation}
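For a single degree of freedom the bracket \rf{pb} is easy to check numerically; the sketch below is an illustration of the definition only (not of the $2N$-dimensional system itself), approximating the partial derivatives by central differences.

```python
def poisson(f, g, q, p, h=1e-5):
    """Canonical Poisson bracket {f,g} = f_q g_p - f_p g_q for one
    degree of freedom, with central-difference derivatives."""
    fq = (f(q + h, p) - f(q - h, p)) / (2.0 * h)
    fp = (f(q, p + h) - f(q, p - h)) / (2.0 * h)
    gq = (g(q + h, p) - g(q - h, p)) / (2.0 * h)
    gp = (g(q, p + h) - g(q, p - h)) / (2.0 * h)
    return fq * gp - fp * gq
```

In particular $\{q,p\}=1$ and the bracket is skew-symmetric, the two properties used repeatedly below.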
A consequence of proving the involutivity of the conserved quantities $T_{k}$
is that $T_k=-H_k$, where $H_k$ is the Hamiltonian describing the evolution of
the canonical variables along the higher-order time variable $t_k$:
\begin{equation}\la{hamsysksimple}
\pp{\mbf{q}}{t_k}=\pp{H_k}{\mbf{p}},~~~ \pp{\mbf{p}}{t_k}=-\pp{H_k}{\mbf{q}}.
\end{equation}
The canonical variables are related to the dependent variable $u$ of the KP
equation. Hence, we have constructed a set of ordinary differential Hamiltonian
systems, each one of which describes the evolution of a rank 1, finite-genus
solution of the KP equation according to a different higher-order time
variable. Since all these Hamiltonian systems are $2N$-dimensional and share a
common set of $N$ functionally independent conserved quantities $T_k$, mutually
in involution, they are all completely integrable in the sense of Liouville.
\item {\bf Solve the Hamiltonian system; reconstruct the solution of the KP
equation}
The final step of the algorithm is to integrate explicitly the Hamiltonian
systems obtained in the previous step. From Liouville's theorem \cite{arnold}
it is known that the Hamiltonian equations of motion can be solved in terms of
quadratures.
This last step is not executed in this paper. For the KdV equation
it can be found in \cite{dickey}. Some partial results for the KP equation are
also discussed there.
\end{enumerate}
\section{The KP hierarchy}\la{sec:hier}
In this section, the KP hierarchy is redefined, using the terminology of
Gel'fand and Dikii \cite{gd1}, Adler \cite{adler} and others. More
specifically, the notation of Strampp and Oevel \cite{strampp} is used.
Consider the pseudo-differential operator
\begin{equation}\la{pseudo1}
L=\partial+u_2\partial^{-1}+u_3 \partial^{-2}+u_4 \partial^{-3}+\ldots=\sum_{j=-\infty}^{1}u_{1-j}\partial^j,
\end{equation}
\noindent with $u_0\equiv 1$, $u_1\equiv 0$. We have used the notation $\partial=\partial_x$. The
coefficients $u_j$ can be functions of $x$. The $u_j$ are referred to as {\em
potentials}. This term is also used for any other set of functions, related to
the $u_j$ by an invertible transformation.
\vspace*{12pt}
\noindent {\bf Remark}\la{u1rem}
In order to compare with the results in \cite{ds1}, we need $u_1\neq 0$, but
constant, extending the definition of the pseudo-differential operator $L$.
Although this changes some of the formulas in this section, the added results for
the KP equation are minor. In \cite{dickey} and \cite{decthesis}, it is shown that
this amounts to assigning a fixed value to the constant $\tilde{u}$ in
\rf{reconstruction}. In the remainder of the paper, we assume $u_1\equiv 0$, unless
stated otherwise.
\vspace*{12pt}
The action of the operator $\partial^j$ is defined by the generalized Leibniz rule:
\begin{equation}\la{leibniz}
\partial^j f=\sum_{i=0}^\infty \binomial{j}{i} f^{(i)} \partial^{j-i},
\end{equation}
\noindent where $f$ is a function, $f^{(i)}$ is its $i$-th derivative with respect
to $x$, and the binomial coefficients are defined as
\begin{equation}\la{binomial}
\binomial{j}{i}=\frac{j(j-1)\cdots (j-i+1)}{i!}, ~~\mbox{for}~i>0,
~~\mbox{and}~\binomial{j}{0}=1.
\end{equation}
\noindent Note that this definition makes sense for negative integers $j$. For
non-negative integers $j$, \rf{leibniz} is a finite sum. Otherwise,
\rf{leibniz} results in an infinite series.
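The coefficients \rf{binomial} for negative $j$ are easily tabulated; the following sketch (an illustration, using exact integer arithmetic) implements the definition, exploiting the fact that a product of $i$ consecutive integers is always divisible by $i!$.

```python
from math import factorial

def binom(j, i):
    """Generalized binomial coefficient j(j-1)...(j-i+1)/i!, defined
    for any integer j and nonnegative integer i; the result is always
    an integer."""
    num = 1
    for m in range(i):
        num *= (j - m)
    return num // factorial(i)
```

For $0 \leq j < i$ the numerator contains a zero factor, which is why \rf{leibniz} terminates for non-negative $j$; for negative $j$ one finds, for example, $\binomial{-1}{i}=(-1)^i$, and the series is infinite.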
Next, consider positive integer powers of the pseudo-differential operator $L$:
\begin{eqnarray}\nonumber
L^r&=&\left(\partial+u_2\partial^{-1}+u_3\partial^{-2}+u_4 \partial^{-3}+\ldots\right)^r\\\la{def}
&=&\sum_{j=-\infty}^r \alpha_j(r)\partial^j=\sum_{j=-\infty}^r \partial^j \beta_j(r).
\end{eqnarray}
\noindent The last two equalities define the functions $\alpha_j(r)$ and
$\beta_j(r)$, for $j \leq r$, $r>0$. These are in general functions of $x$.
One has
\begin{equation}\la{initialization}
\alpha_1(1)=1, \alpha_0(1)=0, \alpha_j(1)=u_{1-j}, ~~~~\mbox{for}~~j=-1, -2, -3,
\ldots,
\end{equation}
\noindent and
\begin{equation} \alpha_r(r)=1, ~~\beta_r(r)=1, ~~~\mbox{and}~~~\alpha_{r-1}(r)=0,~~
\beta_{r-1}(r)=0. \end{equation}
\noindent Clearly the functions $\alpha_{j}(r)$ and $\beta_j(r)$ are related. Using
\rf{leibniz}, we get from \rf{def} that
\begin{equation}\la{triangular}
\alpha_j(r)=\sum_{k=0}^{r-j}\binomial{j+k}{k} \beta_{j+k}^{(k)}(r).
\end{equation}
\noindent This triangular system can be solved to obtain the functions $\beta_j(r)$ in
terms of the functions $\alpha_j(r)$, if so desired. Note in particular that
$\alpha_{-1}(r)=\beta_{-1}(r)$, since the binomial coefficient \rf{binomial}
vanishes for nonnegative $j$ less than $i$.
The functions $\alpha_j(r)$ can be determined explicitly in terms of the
potentials $(u_2, u_3, \ldots)$. A convenient way to do this is to use a
recursion relationship obtained from $L^r=LL^{r-1}$:
\begin{equation}\la{recursion}
\alpha_j(r)=\alpha_{j-1}(r-1)+\pp{}{x}\alpha_j(r-1)+u_{r-j}+\sum_{k=j-r+3}^{-1}
u_{1-k} \sum_{m=j}^{k+r-3}\binomial{k}{m-j}\alpha_{m-k}^{(m-j)}(r-1).
\end{equation}
\noindent It is possible to obtain an explicit formula which expresses $\alpha_j(r)$ in
terms of only the potentials $(u_2, u_3 \ldots)$ \cite{decthesis}, but such a
formula is not as practical as the recursion relationship \rf{recursion}.
However, the following result will be used. It extends a result of Date {\em et al.}
\cite{date}:
\begin{equation}\la{linearpart}
\alpha_j(r)=\sum_{k=1}^{r-j-1}\binomial{r}{k} \ppn{k-1}{}{x}
u_{r-j-k+1}+\hat{\alpha}_j(r),
\end{equation}
\noindent and $\hat{\alpha}_j(r)$ is a differential polynomial in $(u_2,u_3, \ldots,
u_{r-j-2})$ containing only nonlinear terms. This follows easily from
\rf{recursion}.
The differential part (including the purely multiplicative term) of the operator
$L^r$ is denoted by $L^r_+$:
\begin{equation}\la{positive}
L^r_+=\sum_{j=0}^r \alpha_j(r)\partial^j=\sum_{j=0}^r \partial^j \beta_j(r).
\end{equation}
\noindent Observe from \rf{triangular} that the purely differential
part of $L^r$ is independent of the representation \rf{def} used for $L^r$. This
is also true for
\begin{equation}\la{negative}
L^r_-=L^r-L^r_+=\sum_{j=-\infty}^{-1} \alpha_j(r)\partial^j=\sum_{j=-\infty}^{-1} \partial^j
\beta_j(r).
\end{equation}
Having introduced the above notation, the KP hierarchy is expressed quite easily.
Consider the linear evolution equation for the wave function $\Psi$
\begin{equation}\la{laxpsi}
\pp{\Psi}{t_r}=L^r_+ \Psi, ~~~~\mbox{for}~r=1,2,\ldots.
\end{equation}
\noindent This is the {\em linear KP hierarchy} ({\em cf.} \rf{lkphier}). For $r=1$,
this equation gives $\Psi_{t_1}=\Psi_x$, hence the identification $t_1\equiv x$.
Assuming completeness of states, the {\em KP hierarchy} is obtained from the
compatibility of the equations in \rf{laxpsi} ({\em cf.} \rf{kphier}):
\begin{equation}\la{laxkp}
\frac{\partial^2 \Psi}{\partial t_{r_1} \partial t_{r_2}}=
\frac{\partial^2 \Psi}{\partial t_{r_2} \partial t_{r_1}}\Rightarrow
\pp{L_+^{r_1}}{t_{r_2}}-\pp{L_+^{r_2}}{t_{r_1}}=\left[L_+^{r_2},L_+^{r_1}\right].
\end{equation}
\noindent These equations determine how the potentials depend on the higher-order
time variables $t_r$ so that the equations \rf{laxpsi} are compatible. Again
assuming completeness of states, equations \rf{laxkp} can also be obtained
from the compatibility of the following sequence of Lax-like equations
\begin{equation}\la{lax}
\pp{L}{t_r}=\left[L_+^r,L\right]=\left[L,L_-^r\right], ~~r\geq 1.
\end{equation}
\noindent The last equality is a consequence of $L^r=L^r_++L_-^r$.
Introducing the KP hierarchy as in \rf{lax} is equivalent to the approach used in
\cite{ds1}. Below, we use that \rf{laxpsi} is essentially equivalent to \rf{lax}.
Our approach consists mainly of rewriting \rf{lax} and its reductions.
Each time we increase $r$ by one, another potential appears in $L^r_+=L^r_+(u_2,
u_3, \ldots, u_r)$. Furthermore, $u_r$ appears only in $\alpha_0(r)$:
$\alpha_{0}(r)=r u_r+\tilde{\alpha}_0(r;u_2, u_3, \ldots, u_{r-1})$, as is seen
from \rf{recursion}. As a consequence, there is a one-to-one correspondence
between the potentials $u, w_1, w_2, \ldots$ appearing in the KP hierarchy as it
is defined in \cite{ds1} and the set of potentials $u_2, u_3, u_4,\ldots$
appearing in \rf{pseudo1}.
As an example, consider equations (\ref{linear1}, \ref{linear2}). These are
contained in the formulation of the KP hierarchy given here: writing out
\rf{laxpsi} for $r=2$ and $r=3$ and equating coefficients with \rf{linear1} and
\rf{linear2} respectively gives $u_2=u/2$ and $u_3=w/4-u_x/4$.
The explicit form of the Lax equations \rf{lax} is needed later on. We have
\cite{strampp}
\begin{equation}\la{laxexp}
\pp{u_i}{t_r}=\sum_{j=1}^{i} M_{i,j}\beta_{-j}(r), ~~~~i=0,1,2, \ldots,
\end{equation}
\noindent where the differential operator $M_{i,j}$ is given by
\begin{equation}\la{opera}
M_{i,j}=\sum_{k=0}^{i-j}\left(\binomial{1-j-k}{i-j-k}u_k
\partial^{i-j-k}-\binomial{-j}{i-j-k}\partial^{i-j-k}u_k\right).
\end{equation}
\noindent Here and in what follows, the contribution of a sum is assumed to be zero if
its upper limit is less than its lower limit, as happens in \rf{opera} when $i=0$.
Note that this immediately gives $\partial u_0/\partial t_r=0$ and $\partial u_1/\partial t_r=0$, for all
$r$, as expected. Furthermore, $M_{i,i}=0, M_{i,i-1}=\partial$. The differential
equations \rf{laxexp} determine a first-order system for the $t_r$ evolution of
the infinite-dimensional vector of potentials $(u_2, u_3, u_4, \ldots)$.
\section{Impose the $r$-reduction}\la{sec:rred}
Next we obtain a closed first-order system of partial differential equations for
finitely many of the potentials, by imposing an {\em $r$-reduction}. This is the
first reduction step in our scheme towards our goal of finding a set of ordinary
differential equations describing the rank 1, finite-genus solutions of (KP).
The $r$-reduction of the operator $L$ is obtained by imposing that the $r$-th power
of $L$ is purely differential:
\begin{equation}\la{rred}
L^r=L_+^r ~~\mbox{or}~~L^r_-=0~\Rightarrow~\beta_k(r)\equiv
0~\mbox{for}~k<0~\Rightarrow~\alpha_k(r)\equiv 0~\mbox{for}~k<0.
\end{equation}
\noindent Notice that the $r$-reduction implies immediately that all potentials are
independent of $t_r$, from \rf{laxexp} or \rf{lax}. The $r$-reduction determines
the potentials $u_{r+1}, u_{r+2}, u_{r+3}, \ldots$ as differential polynomials of
the potentials $u_2, u_3, \ldots, u_{r}$. This is a consequence of the triangular
structure of the system relating the potentials $(u_2, u_3, u_4, \ldots)$ to the
potentials $(\alpha_{r-2}(r), \alpha_{r-3}(r), \ldots)$.
\vspace*{12pt}
\noindent {\bf Remark}\la{p:rem2}
If we impose an $r$-reduction, for some positive integer $r$, then we have
automatically achieved an $rk$-reduction, for any positive integer $k$: if $L^r$
is purely differential, then so is $L^{rk}=(L^r)^k$.
\vspace*{12pt}
Under $r$-reduction, the infinite system of evolution equations in \rf{laxexp}
reduces to a finite number of equations for the independent potentials $(u_2,
u_3, \ldots, u_r)$. We write this finite system in Hamiltonian form. First,
we write the system in matrix-operator form. The matrices involved are now
finite-dimensional, as there are only $r-1$ independent potentials.
For a given $n$, define the $(r-1)$-dimensional vectors
\begin{equation}\la{capu}
\mbf{U}(r)=\left(
\begin{array}{c}
u_2\\u_3\\u_4\\\vdots\\u_{r}
\end{array}
\right), ~~~
\mbf{\beta}(r,n)=\left(
\begin{array}{c}
\beta_{-1}(n)\\\beta_{-2}(n)\\\beta_{-3}(n)\\\vdots\\\beta_{-r+1}(n)
\end{array}
\right),
\end{equation}
\noindent and the operator-valued matrix
\begin{equation}\la{opermat}
\mbf{M}(r)=\left(
\begin{array}{ccccc}
M_{2,1}&0&0&\cdots&0\\
M_{3,1}&M_{3,2}&0&\cdots&0\\
\vdots&\vdots&\vdots&\ddots&\vdots\\
M_{r-1,1}&M_{r-1,2}&M_{r-1,3}&\cdots&0\\
M_{r,1}&M_{r,2}&M_{r,3}&\cdots&M_{r,r-1}
\end{array}
\right),
\end{equation}
\noindent with the operators $M_{i,j}$ defined by \rf{opera}. Then under $r$-reduction
\rf{laxexp} can be written as
\begin{equation}\la{explicitlaxrred}
\pp{\mbf{U}(r)}{t_n}=\mbf{M}(r)\mbf{\beta}(r,n).
\end{equation}
In order to write the system of equations \rf{explicitlaxrred} in Hamiltonian
form, we first introduce new coordinates on the phase space of the system.
Define
\begin{equation}\la{capalpha}
\mbf{\alpha}(r)=\left(
\begin{array}{c}
\alpha_0(r)\\\alpha_{1}(r)\\\vdots\\\alpha_{r-2}(r)
\end{array}
\right).
\end{equation}
\noindent The Jacobian matrix of the transformation from the coordinates
$\mbf{U}(r)\rightarrow \mbf{\alpha}(r)$ is the Fr\'{e}chet derivative of the
transformation (which depends also on the spatial
derivatives of the original coordinates, see for instance \rf{recursion}). The
Jacobian for such a transformation is an operator-valued matrix, whose action
on an arbitrary vector of functions $\mbf{v}=(v_2, v_3, \ldots, v_r)^T$ is
given by
\begin{eqnarray}\nonumber
\mbf{D}(r) \mbf{v}&=&\pp{\mbf{\alpha}(r)}{\mbf{U}(r)} \mbf{v}\\\la{jacobian}
&=&\left.\pp{}{\epsilon}\left(
\begin{array}{c}
\alpha_0(r)(u_2+\epsilon v_2, u_3+\epsilon v_3, \ldots, u_r+\epsilon v_r)\\
\alpha_1(r)(u_2+\epsilon v_2, u_3+\epsilon v_3, \ldots, u_r+\epsilon v_r)\\
\vdots\\
\alpha_{r-2}(r)(u_2+\epsilon v_2, u_3+\epsilon v_3, \ldots, u_r+\epsilon v_r)\\
\end{array}
\right)\right|_{\epsilon=0}.
\end{eqnarray}
\noindent This Jacobian matrix is upper-triangular. This is a direct consequence of
the triangular structure of the system relating the potentials $u_j$,
$j=2,3,\ldots, r$, to the potentials $\alpha_0(r), \alpha_1(r), \ldots, \alpha_{r-2}(r)$.
We rewrite the equations \rf{explicitlaxrred} in terms of the coordinates
$\mbf{\alpha}(r)$:
\begin{equation}\la{first}
\pp{\mbf{\alpha}(r)}{t_n}=\pp{\mbf{\alpha}(r)}{\mbf{U}(r)}
\pp{\mbf{U}(r)}{t_n}=\mbf{D}(r) \mbf{M}(r)
\mbf{\beta}(r,n)=\mbf{J}(r) \mbf{\beta}(r,n),
\end{equation}
\noindent where we introduce the operator-valued matrix $\mbf{J}(r)=\mbf{D}(r)
\mbf{M}(r)$. Note that $\mbf{J}(r)$ is always upper-triangular. This follows
from the upper-triangular structure of $\mbf{D}(r)$ and the lower-triangular
structure of $\mbf{M}(r)$. Next we rewrite the vector $\mbf{\beta}(r,n)$. We
use an identity from the calculus of exterior derivatives for
pseudo-differential operators \cite{manin1}:
\begin{equation}\la{manin1}
d \beta_{-1}(r+n)=\frac{r+n}{r} \sum_{j=-1-n}^{r-2} \beta_{-1-j}(n) d
\alpha_{j}(r),
\end{equation}
\noindent which gives
\begin{equation}\la{manin2}
\beta_{-j}(n)=\frac{r}{r+n}\dd{\beta_{-1}(r+n)}{\alpha_{j-1}(r)},
\end{equation}
\noindent where $\delta/\delta \alpha_{j-1}(r)$ is the {\em variational derivative}
with respect to $\alpha_{j-1}(r)$. Hence \begin{equation}\la{maninfinal}
\mbf{\beta}(r,n)=\frac{r}{r+n} \dd{}{\mbf{\alpha}(r)} \beta_{-1}(r+n). \end{equation}
\noindent Equations \rf{first} become
\begin{equation}\la{second}
\pp{\mbf{\alpha}(r)}{t_n}=\frac{r}{r+n}\mbf{J}(r)
\dd{}{\mbf{\alpha}(r)} \beta_{-1}(r+n).
\end{equation}
\noindent This set of equations is Hamiltonian \cite{strampp}, with Hamiltonian
\begin{equation}\la{hamil}
H(r,n)=\frac{r}{r+n} \beta_{-1}(r+n)=\frac{r}{r+n} \alpha_{-1}(r+n).
\end{equation}
\noindent (We have used the observation that $\alpha_{-1}(r)=\beta_{-1}(r)$.) It
suffices to prove that the operator $\mbf{J}(r)$ is Hamiltonian \cite{anton},
$i.e.$, that the operator $\mbf{J}(r)$ defines a Poisson bracket. This Poisson
bracket is given by \cite{strampp}
\begin{equation}\la{pbpde}
\{S,T\}=\left(\dd{S}{\mbf{\alpha}(r)}\right)^T \mbf{J}(r)
\left(\dd{T}{\mbf{\alpha}(r)}\right).
\end{equation}
Denote by ${\cal H}$ the quotient space of all smooth functionals of the
potentials in $\mbf{\alpha}(r)$, modulo total derivatives with respect to $x$.
\noindent For \rf{pbpde} to define a Poisson bracket on ${\cal H}\times {\cal H}$, we
need three properties:
\begin{enumerate}
\item {\bf bilinearity:} This is obvious.
\item {\bf skew-symmetry:} This is less obvious. Notice that the functions
appearing in the bracket \rf{pbpde} appear only through their variational
derivatives. Hence, these functions are only defined up to total derivatives
with respect to $x$, $i.e.$, they are elements of $\cal H$. The
skew-symmetry of the Poisson bracket \rf{pbpde} operating on ${\cal H}\times {\cal
H} $ is then easily obtained by integration by parts.
\item {\bf Jacobi identity:} This is also not obvious. The proof can be found
in \cite{strampp}. There it is shown that the above bracket is essentially the
bracket Adler defines in \cite{adler}. The proof of the Jacobi identity then
reduces to Adler's proof.
\end{enumerate}
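The integration-by-parts argument behind skew-symmetry can be sketched in a few lines of sympy. As an illustrative assumption we take the scalar operator $J=2\partial$ (the KdV case); $f$ and $g$ stand for the two variational derivatives, and the symmetric part of the bracket integrand is exhibited as a total $x$-derivative, hence zero in ${\cal H}$:

```python
import sympy as sp

x = sp.symbols('x')
f, g = sp.Function('f')(x), sp.Function('g')(x)

# f and g play the role of the variational derivatives delta S/delta alpha
# and delta T/delta alpha; J = 2 d/dx is the scalar KdV-case operator
lhs = f*2*sp.diff(g, x) + g*2*sp.diff(f, x)   # integrand of {S,T} + {T,S}

# the symmetric part is the total derivative (2 f g)', so {S,T} = -{T,S} in H
assert sp.simplify(lhs - sp.diff(2*f*g, x)) == 0
```

The same computation, with matrix-valued $\mbf{J}(r)$, is what the integration by parts mentioned above carries out in general.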
The Hamiltonian system of PDEs \rf{second} describes a whole hierarchy of
Hamiltonian PDEs for a fixed $r$. All the members of the hierarchy have the
same Poisson structure with different Hamiltonians: for each $n$ coprime with
$r$, a different system of Hamiltonian partial differential equations is
obtained, describing the $t_n$-evolution of the potentials. Note that the
first member of every hierarchy is trivial. From the first flow of \rf{lax},
we get
\begin{equation}\la{trivial}
\pp{L}{t_1}=\left[L_+,L\right]=\left[\partial,L\right]=\partial L-L \partial=\pp{L}{x}+L \partial-L
\partial=\pp{L}{x}.
\end{equation}
\noindent This equation is the same for all $r$-reductions. Hence, the first member of every
hierarchy is $\partial \mbf{\alpha}(r)/\partial t_1=\partial \mbf{\alpha}(r)/\partial x$.
For example, choosing $r=2$ results in the KdV hierarchy with Poisson operator
$J(2)=2 \partial$. Choosing $r=3$ gives the Boussinesq hierarchy \cite{zak,mckean1} with
Poisson operator
\begin{equation}\la{operjbous}
J(3)=\left(\begin{array}{cc}0 & 3 \partial\\3 \partial & 0\end{array}\right).
\end{equation}
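With $J(2)=2\partial$ one can check mechanically that the bracket integrand of two KdV-type functionals is a total $x$-derivative, so the functionals commute in the quotient space ${\cal H}$. The densities below and their variational derivatives are illustrative stand-ins, not the exact $H(2,k)$ of \rf{hamil}:

```python
import sympy as sp

x = sp.symbols('x')
u = sp.Function('u')(x)

# illustrative KdV-type variational derivatives (not the paper's exact H(2,k)):
# delta/delta u of int u^2 dx is 2u; of int (u^3 + u_x^2) dx is 3u^2 - 2u_xx
dS = 2*u
dT = 3*u**2 - 2*u.diff(x, 2)

# bracket integrand dS . J(2) dT with J(2) = 2 d/dx
integrand = dS * 2*sp.diff(dT, x)

# it is a total x-derivative, so the two functionals commute in H
antideriv = 8*u**3 - 8*u*u.diff(x, 2) + 4*u.diff(x)**2
assert sp.simplify(integrand - sp.diff(antideriv, x)) == 0
```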
Some remarks are in order about the Hamiltonian PDEs
\begin{equation}\la{hamsys}
\pp{\mbf{\alpha}(r)}{t_n}=\mbf{J}(r) \dd{H(r,n)}{\mbf{\alpha}(r)}.
\end{equation}
\noindent {\bf Remarks}
\begin{description}
\item[(a)~] Note that in contrast with the Poisson brackets in \cite{gardner,
zak, mckean1}, the bracket \rf{pbpde} is local, $i.e.$, it does not involve
integrations. This is an immediate consequence of working in the quotient
space ${\cal H}$. The Poisson brackets \rf{pbpde} for $r=2$ and $r=3$ are the
integrands of the brackets introduced in \cite{gardner} and \cite{zak,
mckean1}, respectively.
\item[(b)~] Bogoyavlenskii and Novikov \cite{bogoyavlenskii} considered only the
Korteweg-deVries equation. As a consequence, none of the algebraic machinery of
this and the previous section was required for their approach. Their starting point was
the KdV equivalent of \rf{hamsys}. This same starting point is used if one
considers any other integrable partial differential equation which has only one
spatial dimension, such as the nonlinear Schr\"{o}dinger equation, the modified
Korteweg-deVries equation, etc. By starting with a one-dimensional partial
differential equation, the first step (imposing the $r$-reduction) is skipped. It
is for this step that the algebraic language of the previous two sections is
required.
\item[(c)~] An infinite number of conserved quantities exists for each member of the
hierarchy given by \rf{hamsys}. This is a necessary condition for the
integrability of these partial differential equations. Adler \cite{adler}
showed that the different members of the $r$-reduced hierarchy define mutually
commuting flows. The infinite set of Hamiltonians $\{H(r,n): n\geq
1\}$ is a set of conserved densities for every member of the hierarchy. That
different members of the hierarchy \rf{hamsys} commute is expressed as the
commutation of their respective Hamiltonians under the Poisson bracket \rf{pbpde}:
\begin{equation}\la{pbcommute}
\left\{H(r,k_1),H(r,k_2)\right\}=0,
\end{equation}
for a fixed $r$ and all $k_1, k_2$.
Denote the solution operator of the $n$-th member of the hierarchy \rf{hamsys}
by $\mbf{K}_n(t_n)$. In other words, given initial conditions
$\mbf{\alpha}(r)(x,t_n=0)$,
the solution $\mbf{\alpha}(r)(x,t_n)$ for any $t_n$ is written as
\begin{equation}
\mbf{\alpha}(r)(x,t_n)=\mbf{K}_n(t_n) \mbf{\alpha}(r)(x,0).
\end{equation}
\noindent Adler's statement \cite{adler} that different flows in the hierarchy
\rf{hamsys} commute is then expressed as
\begin{equation}
\mbf{K}_n(t_n) \mbf{K}_m(t_m)=\mbf{K}_m(t_m) \mbf{K}_n(t_n).
\end{equation}
\item[(d)~] The Hamiltonian operator $\mbf{J}(r)$ is usually degenerate,
$i.e.$, its kernel is nontrivial. Adler \cite{adler} showed that the
variational derivatives of the elements of the set \linebreak
$\{\alpha_{-1}(r+n): -r+1 \leq n\leq -1\}$ are all annihilated by
$\mbf{J}(r)$. In other words, these elements are Casimir functionals for the
flows generated by the $r$-reduction. It is easy to see from the triangular
form of $\mbf{J}(r)$ that the dimension of its kernel is exactly $r-1$. This
implies that the set of Casimir functionals found by Adler forms a complete
basis for the kernel of $\mbf{J}(r)$ (see also \cite{veselov}).
\end{description}
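For the Boussinesq operator $J(3)$ of \rf{operjbous}, the statement in remark (d) that the kernel has dimension $r-1=2$ can be checked directly: $J(3)$ annihilates exactly the constant vectors. A minimal sympy sketch:

```python
import sympy as sp

x = sp.symbols('x')
c1, c2 = sp.symbols('c1 c2')

def J3(w):
    # Boussinesq Poisson operator J(3) = [[0, 3 d/dx], [3 d/dx, 0]]
    return sp.Matrix([3*sp.diff(w[1], x), 3*sp.diff(w[0], x)])

# constant vectors span the kernel: dimension r - 1 = 2
assert J3(sp.Matrix([c1, c2])) == sp.zeros(2, 1)
# a non-constant vector is not annihilated
assert J3(sp.Matrix([x, x**2])) != sp.zeros(2, 1)
```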
\section{Impose the $n$-reduction}\la{sec:nred}
Next, we consider stationary solutions of the system \rf{hamsys}, for the $n$
value determined in step 2. Hence, from \rf{stat},
\begin{equation}\la{stathamsys}
\sum_{k=1}^n d_k \mbf{J}(r) \dd{H(r,k)}{\mbf{\alpha}(r)}=0
~\Rightarrow~\mbf{J}(r) \dd{}{\mbf{\alpha}(r)}\sum_{k=1}^n d_k H(r,k)=0,
\end{equation}
\noindent with $d_n=1$. Furthermore, without loss of generality, $d_{n-1}$ can be
equated to 0, as was shown in \cite{krich4, ds1}.
The following theorem was first proved for the KdV equation by Lax \cite{lax1}
and Novikov \cite{novikov}.
\begin{theo}\la{theo:statpoint}
The set of stationary solutions with respect to $t_n$ is invariant under the
action of any of the other higher-order flows.
\end{theo}
\noindent {\bf Proof} Consider the hierarchy of mutually commuting
Hamiltonian systems
\begin{equation}\la{hamsysgen}
\pp{\mbf{\alpha}(r)}{\tilde{t}_n}=\mbf{J}(r)\dd{}{\mbf{\alpha}(r)}\left(\sum_{k=1}^{n}
d_k H(r,k)\right).
\end{equation}
Denote the solution operator of the $n$-th member of this hierarchy as
$\tilde{\mbf{K}}_n(\tilde{t}_n)$. Clearly these solution operators commute with
the solution operators of the higher-order flows, $\mbf{K}_m(t_m)$:
\begin{equation}\la{comgen}
\tilde{\mbf{K}}_n(\tilde{t}_n) \mbf{K}_m(t_m)=
\mbf{K}_m(t_m) \tilde{\mbf{K}}_n(\tilde{t}_n).
\end{equation}
\noindent A stationary solution with respect to $t_n$ is a fixed point of the
operator $\tilde{\mbf{K}}_n(\tilde{t}_n)$:\linebreak
$\tilde{\mbf{K}}_n(\tilde{t}_n) \mbf{\alpha}(r)(x)=\mbf{\alpha}(r)(x)$. Hence
\begin{eqnarray}
\mbf{K}_m(t_m)\tilde{\mbf{K}}_n(\tilde{t}_n) \mbf{\alpha}(r)(x)&=&
\mbf{K}_m(t_m)\mbf{\alpha}(r)(x)\\
\Rightarrow~~\tilde{\mbf{K}}_n(\tilde{t}_n)\mbf{K}_m(t_m) \mbf{\alpha}(r)(x)&=&
\mbf{K}_m(t_m)\mbf{\alpha}(r)(x),
\end{eqnarray}
\noindent since the two operators commute. Hence $\mbf{K}_m(t_m)\mbf{\alpha}(r)(x)$ is
a fixed point of $\tilde{\mbf{K}}_n(\tilde{t}_n)$ and hence a stationary
solution with respect to $t_n$. \hspace*{\fill}$\rule{3mm}{3mm}$
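The fixed-point argument in the proof uses only the commutation of the two solution operators, and can be illustrated with a finite-dimensional toy: two rotations about the same axis commute, and the fixed-point set of one (the axis) is mapped to itself by the other. This is an analogy for $\mbf{K}_m$ and $\tilde{\mbf{K}}_n$ only, not a statement about the KP flows themselves:

```python
import sympy as sp

def Rz(th):
    # rotation about the z-axis: a one-parameter "flow" on R^3
    return sp.Matrix([[sp.cos(th), -sp.sin(th), 0],
                      [sp.sin(th),  sp.cos(th), 0],
                      [0,           0,          1]])

Kn, Km = Rz(sp.pi/3), Rz(sp.pi/5)      # two commuting "solution operators"
assert Kn*Km == Km*Kn

v = sp.Matrix([0, 0, 2])               # a fixed point of Kn (on the axis)
assert Kn*v == v
w = Km*v                               # its image under the commuting flow
assert Kn*w == w                       # still a fixed point of Kn
```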
Let us examine the structure of the equations \rf{stathamsys}, determining the
stationary solutions with respect to $t_n$. From \rf{stathamsys},
$\dd{}{\mbf{\alpha}(r)}\sum_{k=1}^n d_k H(r,k)$ is in the kernel of the Poisson
operator $\mbf{J}(r)$. Hence it is a linear combination of the Casimir
functionals:
\begin{equation}\la{cashamsys}
\dd{}{\mbf{\alpha}(r)}\sum_{k=1}^n d_k H(r,k)+\sum_{k=1}^{r-1}h_k \frac{r}{k}
\dd{\alpha_{-1}(k)}{\mbf{\alpha}(r)}=0;
\end{equation}
\noindent the coefficient of the Casimir functional $\alpha_{-1}(k)$ has been
written as $h_k r/k$ for convenience. Equation \rf{cashamsys} is a system of
Euler-Lagrange equations, with Lagrangian depending on the $r-1$ potentials
$\mbf{\alpha}(r)$ and their derivatives:
\begin{eqnarray}\nonumber
{\cal L}(r,n)&=&H(r,n)+\sum_{k=1}^{n-2}d_k H(r,k)+\sum_{k=1}^{r-1} h_k \frac{r}{k}
\alpha_{-1}(k)\\\la{lagrangian}
&=&H(r,n)+\sum_{k=1}^{n-2}d_k H(r,k)+\sum_{k=1}^{r-1} h_k H(r,k-r),
\end{eqnarray}
\noindent since $d_{n-1}\equiv 0$. The last term in this equation is a slight abuse
of notation. It is to be interpreted using \rf{hamil}.
The set of $r-1$ Euler-Lagrange equations \rf{cashamsys}
\begin{equation}\la{el}
\dd{{\cal L}(r,n)}{\mbf{\alpha}(r)}=0
\end{equation}
\noindent will be referred to as the {\em $(r,n)$-th (stationary) KP equation}. This
system of Euler-Lagrange equations is extremely important: it is a
finite-dimensional system of {\em ordinary differential equations} describing
how solutions of the $(r,n)$-th KP equation depend on $x$. At this point, the
study of rank 1, finite-genus solutions of (KP) is immensely simplified: it is
reduced to the study of a finite-dimensional set of ordinary differential
equations \rf{el} that are derived from one scalar quantity, the
Lagrangian \rf{lagrangian}. The remainder of this paper examines the special
structure of the set of equations \rf{el}.
\vspace*{12pt}
\noindent {\bf Remarks}
\begin{description}
\item[(a)~] As pointed out before, the first flow of the KP hierarchy defines
$t_1$ to be $x$. This first flow imposes no constraints on the $x$-dependence
of the potentials. After one imposes the $r$- and $n$-reductions this
$x$-dependence is determined by the Lagrangian system \rf{el}.
\item[(b)~] The Euler-Lagrange equations are a {\em minimal} set of
differential equations the potentials in $\mbf{\alpha}(r)$ have to satisfy to
make the $t_r$-flow and the $t_n$-flow stationary. In step 5 of \cite{ds1}, a
system of differential equations was proposed which the initial conditions of
the KP equation need to satisfy. Because of the way the $r$- and
$n$-reductions were performed, it was not clear in \cite{ds1} that these
differential equations are in fact {\em ordinary} differential equations.
Furthermore, the differential equations obtained in \cite{ds1} were not
necessarily functionally independent. The equations \rf{el} are functionally
independent, as was shown by Veselov \cite{veselov}.
\item[(c)~]
Since the order of imposing the $r$-reduction and the $n$-reduction can be
reversed (in \cite{ds1} they were executed simultaneously), the remark on
page~\pageref{p:rem2}
can be repeated with $n$ instead of $r$: if we impose an $n$-reduction, we
automatically achieve an $nk$-reduction for any positive integer $k$. Imposing
an $n$-reduction implies that $L^n$ is purely differential. In that case
$L^{nk}=(L^n)^k$ is also purely differential.
\item[(d)~] At this point, {\em Krichever's criterion \cite{krichcom}}
surfaces\footnote{There is some discussion in the literature about the
necessity of this criterion. See for instance \cite{kasman}.}: for a
finite-genus solution of rank 1 of the KP equation, $r$ and $n$ need to be
coprime. Non-coprime $r$ and $n$ result in higher-rank solutions. We show
next that to determine a rank 1, finite-genus solution completely, non-coprime
$r$ and $n$ are not allowed.
Imposing the $r$- and $n$-reductions amounts (up to including lower order flows)
to imposing that both $L^r$ and $L^n$ are purely differential operators. If $r$
and $n$ are not coprime, let $r=k\hat{r}$ and $n=k \hat{n}$, for integers
$k>1, \hat{r}$ and $\hat{n}$. So, $r$ and $n$ have the common factor $k$. If
$L^k=L^k_+$ ($i.e.$, $L^k$ is purely differential) then the solution is
stationary with respect to $t_k$. Since $L^r=(L^k)^{\hat{r}}$ and
$L^n=(L^k)^{\hat{n}}$ are purely differential, the solution is trivially
stationary with respect to $t_r$ and $t_n$. Thus, imposing stationarity with
respect to $t_k$ implies stationarity with respect to $t_r$ and $t_n$. Imposing
stationarity with respect to only one higher order flow $t_k$ however, does not
provide enough information for the determination of the solution using our
methods. Therefore, $r$ and $n$ are required to be coprime.
\end{description}
\section{The explicit dependence of the Lagrangian on the potentials and their
derivatives}\la{sec:exp}
We want to examine the explicit functional dependence of the Lagrangian ${\cal
L}(r,n)$ on the potentials $\mbf{\alpha}(r)=(\alpha_0(r), \alpha_1(r), \ldots,
\alpha_{r-2}(r))^T$ and their derivatives. We are especially interested in the
order of the highest derivative of $\alpha_i(r)$, for $i=0, 1, \ldots, r-2$.
This information is necessary in order to carry out the generalization of the
Legendre transformation in Section \ref{sec:ostro}.
\vspace*{12pt}
\noindent {\bf Definition:}
{\bf The weight of an operator $f(x)$, $W[f(x)]$}, is defined to be an
integer-valued functional with the following properties:
\begin{enumerate}
\item $W[f g]=W[f]+W[g]$, and $W[f^N]=N W[f]$ for integer $N$.
\item If $W[f]=W[g]$ then $W[f \pm g]=W[f]=W[g]$.
\item $W[L]=1$.
\end{enumerate}
\vspace*{12pt}
The usefulness of this definition is connected with the scaling symmetry of
the KP equation. This symmetry is shared by the whole KP hierarchy through its
definition using the operator $L$. Introducing the weight function turns the
algebra of operators used here into a so-called graded algebra \cite{dickey}.
Essentially, the weight function introduced here is identical with the `rank'
introduced by Miura, Gardner and Kruskal in \cite{mgk}. We use the name
`weight' because `rank' has a very different meaning in KP theory \cite{nmpz,
krichcom}.
Using the defining properties of the weight function, we calculate the weight
of some quantities we have used:
\vspace*{12pt}
\noindent {\bf Examples}
\begin{itemize}
\item Since $W[L]=1$, any term in $L$ has weight 1. In particular $W[\partial]=1$.
\item Hence $W[u_k \partial^{-k+1}]=1 \Rightarrow W[u_k]+W[\partial^{-k+1}]=1
\Rightarrow$ \newline
$W[u_k]+(-k+1)W[\partial]=1 \Rightarrow W[u_k]=k$.
\item $W[L^r]=r$, hence $W[\alpha_k(r) \partial^k]=r~\Rightarrow$~
$W[\alpha_k(r)]+W[\partial^k]=r~\Rightarrow~W[\alpha_k(r)]=r-k$. Analogously
$W[\beta_{k}(r)]=r-k$.
\item $W[\partial/\partial t_r]=W[L^r]=r$.
\item $W[H(r,n)]=r+n+1$, from \rf{hamil}. Then also, $W[{\cal L}(r,n)]=r+n+1$.
\item $W[d_k]=n-k$ and $W[h_k]=r+n-k$.
\end{itemize}
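This weight bookkeeping is easily mechanized. The sketch below encodes the rules listed above and confirms that every term of the Lagrangian \rf{lagrangian} carries the weight $r+n+1$, shown here for the sample values $r=4$, $n=5$ (an arbitrary choice):

```python
# weight rules from the definition: W[alpha_k(r)] = r - k, W[H(r,n)] = r+n+1,
# W[d_k] = n - k, W[h_k] = r + n - k (rational prefactors like r/k weigh 0)
def W_alpha(r, k):
    return r - k

def W_H(r, n):
    return r + n + 1

r, n = 4, 5                      # sample values (arbitrary choice)
target = W_H(r, n)               # every term of L(r,n) must carry this weight
# terms d_k H(r,k), k = 1, ..., n-2
assert all((n - k) + W_H(r, k) == target for k in range(1, n - 1))
# terms h_k (r/k) alpha_{-1}(k), k = 1, ..., r-1
assert all((r + n - k) + W_alpha(k, -1) == target for k in range(1, r))
```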
\vspace*{12pt}
We now use the weight function to calculate how the Lagrangian depends on the
$r-1$ potentials in $\mbf{\alpha}(r)$ and their derivatives. For $j=2, 3, \ldots,
r$, let us denote by $N_j$ the highest order of differentiation of the
potential $\alpha_{r-j}(r)$ in the Lagrangian \rf{lagrangian}. We say a term
in the Lagrangian is of degree $M$ if it contains $M$ factors (counting
multiplicities) that are linear in one of the potentials or in one of their
derivatives. The Lagrangian has linear terms ($i.e.$, terms of degree one),
quadratic terms ($i.e.$, terms of degree two), cubic terms ($i.e.$, terms of
degree three), terms of degree four, and so on. Clearly the linear terms cannot
depend on the derivatives of the potentials: such a term would be a total
derivative and would be disregarded.
All terms in the Lagrangian have the same weight, as a consequence of the
scaling invariance of the KP hierarchy. In order to find the highest derivative
of a potential, we need only consider the quadratic terms of the Lagrangian:
all terms of higher degree contain additional potential factors, each of which
consumes weight that is then unavailable for derivatives.
The nontrivial potential with the lowest weight is $\alpha_{r-2}(r)$:
$W[\alpha_{r-2}(r)]=2$. Every time a potential
appears in a term of the Lagrangian, the number of $x$-derivatives of the other
potentials in that term decreases by at least 2. Since the linear terms cannot
contain derivatives, it follows that the highest derivatives of the potentials
are found in the quadratic terms of the Lagrangian.
Similarly, every time one of the coefficients $h_k$, for $k=1, 2,
\ldots, r-1$ or $d_j$ for $j=1,2,\ldots, n-2$ appears in a term of the
Lagrangian, the number of $x$-derivatives in that term has to decrease by the
weight of the coefficient $h_k$ or $d_j$ involved. It follows that the highest
derivatives of the potentials are found in the quadratic terms of the
Lagrangian, not containing any of these coefficients, $i.e.$, in the
quadratic terms of $H(r,n)$.
We use these observations to find how the Lagrangian depends on the highest
derivatives of the potentials, $i.e.$, to find $N_j$, for $j=2,3,\ldots,
r$.
\begin{theo}\la{theo:lagdep}
The order of differentiation in the Lagrangian ${\cal L}(r,n)$ of
$\alpha_{r-2}(r)$ is $[(r+n-3)/2]$, $i.e.$, $N_2=[(r+n-3)/2]$. The order of
differentiation in the Lagrangian ${\cal L}(r,n)$ of any other potential
$\alpha_{r-i}(r)$ is $[(r+n-2i+2)/2]$, $i.e.$, $N_i=[(r+n-2i+2)/2]$ for
$i=3,4,\ldots, r$. The square brackets denote the integer part of the
expression inside.
\end{theo}
\noindent {\bf Proof}
The proof is a tedious check on all the possibilities of
combinations of derivatives of the potentials appearing in the Lagrangian
${\cal L}(r,n)$. We start with a few specific cases before examining the
general case.
\vspace*{0.5cm}
\noindent{\em Dependence of ${\cal L}(r,n)$ on $\alpha_{r-2}(r)$}\vspace{0.5cm}
We consider terms of the form $\aa{r-2}{k_1} \aa{r-j}{k_2}$. We want to
find the maximal allowable value for $k_1$, $i.e.$, for the highest
derivative of $\alpha_{r-2}(r)$ appearing in the Lagrangian. We have
\begin{displaymath}
W[\aa{r-2}{k_1} \aa{r-j}{k_2}]=k_1+k_2+2+j=r+n+1=W[{\cal L}(r,n)],
\end{displaymath}
\noindent hence
\begin{displaymath}
k_1+k_2=r+n-1-j.
\end{displaymath}
Only values of $k_1, k_2$ with $|k_1-k_2|\leq 1$ need to be considered in this
case. Other values are reduced to these cases using integration by parts. If
$j=2$, then necessarily $k_1=k_2$. Otherwise the term we are considering is a
total derivative, equivalent to $([\aa{r-2}{k_2}]^2/2)'$. In this case we find
\begin{displaymath}
k_1=\frac{r+n-3}{2}.
\end{displaymath}
\noindent This value is an integer only when $r+n-3$ is even, in which case it
is the maximal value for $k_1$. Otherwise, this term does not appear
in ${\cal L}(r,n)$, so we consider next $j=3$. If $k_1=k_2+1$, we find
$k_1=(r+n-5)/2$. If this is an integer, then so is $(r+n-3)/2$, hence this does
not raise the order of differentiation with which $\alpha_{r-2}(r)$ appears. On
the other hand, if $k_1=k_2$, we find
\begin{displaymath}
k_1=\frac{r+n-4}{2}.
\end{displaymath}
\noindent Either $(r+n-3)/2$ or $(r+n-4)/2$ is guaranteed to be an integer. This
integer is the maximal order of differentiation with which $\alpha_{r-2}(r)$
appears in the Lagrangian: hence
\begin{equation}\la{N2}
N_2=\left[\frac{r+n-3}{2}\right],
\end{equation}
\noindent where the square brackets denote the integer part of the
expression inside. If $r+n-3$ is even, this results in the first value we
obtained for $k_1$. If $r+n-3$ is odd, we find the second expression.
\vspace*{0.5cm}
\noindent{\em Dependence of ${\cal L}(r,n)$ on $\alpha_{r-3}(r)$}\vspace{0.5cm}
Next consider terms of the form $\aa{r-3}{k_1} \aa{r-j}{k_2}$. We want to
find the maximal allowable value for $k_1$, $i.e.$, $N_3$. We have
\begin{displaymath}
W[\aa{r-3}{k_1} \aa{r-j}{k_2}]=k_1+k_2+3+j=r+n+1=W[{\cal L}(r,n)],
\end{displaymath}
\noindent or
\begin{displaymath}
k_1+k_2=r+n-2-j.
\end{displaymath}
If $j=2$, then for the case $k_1=k_2$, we find
\begin{displaymath}
k_1=\frac{r+n-4}{2}.
\end{displaymath}
\noindent In the other case, $k_2=k_1+1$ (we can always write the Lagrangian such
that the potentials with the lowest weight have the higher order of
differentiation), we obtain
\begin{displaymath}
k_1=\frac{r+n-5}{2}.
\end{displaymath}
\noindent In this case, $k_2=(r+n-3)/2$, which corresponds to $N_2$ (if $r+n-3$ is
even), found above.
The analysis for $j>2$ does not increase the possible values of $k_1$. Either
$(r+n-4)$ or $(r+n-5)$ is guaranteed to be even, so
\begin{equation}\la{N3}
N_3=\left[\frac{r+n-4}{2}\right].
\end{equation}
\vspace*{0.5cm}
\noindent{\em Dependence of ${\cal L}(r,n)$ on $\alpha_{r-4}(r)$}\vspace{0.5cm}
Consider terms of the form $\aa{r-4}{k_1} \aa{r-j}{k_2}$. We want to
find the maximal allowable value for $k_1$, $i.e.$, $N_4$. We have
\begin{displaymath}
W[\aa{r-4}{k_1} \aa{r-j}{k_2}]=k_1+k_2+4+j=r+n+1=W[{\cal L}(r,n)],
\end{displaymath}
\noindent or
\begin{displaymath}
k_1+k_2=r+n-3-j.
\end{displaymath}
Consider the case when $j=2$. If $k_1=k_2$, then $k_1=(r+n-5)/2$ and
$k_2=(r+n-5)/2=(r+n-3)/2-1$. If $(r+n-5)/2$ is an integer, then so is $(r+n-3)/2$. In
this case we can use integration by parts to decrease $k_1$ by 1 and increase
$k_2$ by 1. Therefore this possibility is not allowed. If $k_2=k_1+1$, then we
obtain
\begin{displaymath}
k_1=\frac{r+n-6}{2} ~~\mbox{and}~~ k_2=\frac{r+n-4}{2}.
\end{displaymath}
\noindent This possibility is allowed. Also, if we let $k_2=k_1+2$, we find
\begin{displaymath}
k_1=\frac{r+n-7}{2} ~~\mbox{and}~~ k_2=\frac{r+n-3}{2}.
\end{displaymath}
\noindent This possibility is also allowed. Examining the other possibilities for $j,
k_2$ does not increase the allowed values for $k_1$, therefore
\begin{equation}\la{N4}
N_4=\left[\frac{r+n-6}{2}\right].
\end{equation}
\vspace*{0.5cm}
\noindent{\em Dependence of ${\cal L}(r,n)$ on $\alpha_{r-i}(r)$, $3\leq i
\leq r$}\vspace{0.5cm}
We now state the general case. Using arguments as above (we want potentials
with lower weight to have a higher order of differentiation in each term of the
Lagrangian), we have
\begin{displaymath}
W[\aa{r-i}{k_1} \aa{r-j}{k_2}]=k_1+k_2+i+j=r+n+1=W[{\cal L}(r,n)],
\end{displaymath}
\noindent or
\begin{displaymath}
k_1+k_2=r+n+1-i-j.
\end{displaymath}
Consider the case when $j=2$. Then if $k_2=k_1+m$,
\begin{displaymath}
k_1=\frac{r+n-1-i-m}{2} ~~\mbox{and}~~ k_2=\frac{r+n-1-i+m}{2}.
\end{displaymath}
\noindent Using the above argument, we obtain an allowed possibility if either
$k_2=(n+r-3)/2$ or $k_2=(n+r-4)/2$. This gives two possible values for $m$:
\begin{displaymath}
m=i-2 ~~ \mbox{or} ~~ m=i-3.
\end{displaymath}
\noindent These give, respectively,
\begin{displaymath}
k_1=\frac{r+n-2i+1}{2} ~~\mbox{or}~~ k_1=\frac{r+n-2i+2}{2}.
\end{displaymath}
The other possibilities for $j$ give no additional information, hence
\begin{equation}\la{Ni}
N_i=\left[\frac{n+r-2i+2}{2}\right].
\end{equation}
This formula is valid for all $i$ with $3\leq i\leq r$; the case $i=2$ is given by \rf{N2}.
\hspace*{\fill}$\rule{3mm}{3mm}$
Table \ref{table1} gives an overview of the possibilities.
\settowidth{\mylength}{$\left[\frac{r+n-3}{2}\right]$}
\settoheight{\myheight}{$\left[\frac{r+n-3}{2}\right]$}
\addtolength{\myheight}{8pt}
\begin{table}[htb]
\begin{center}
\caption{\bf The order of differentiation with which the potentials appear
in the Lagrangian \la{table1}}
\vspace*{0.2in}
\begin{tabular}{|c|c|}
\hline
$i$ & $N_i$ \\
\hline\hline
\pb{2} & $\left[\frac{r+n-3}{2}\right]$ \\
\hline
\pb{3} & $\left[\frac{r+n-4}{2}\right]$ \\
\hline
\pb{$\vdots$} & $\vdots$ \\
\hline
\pb{$j$} & $\left[\frac{r+n-2 j+2}{2}\right]$ \\
\hline
\pb{$\vdots$} & $\vdots$ \\
\hline
\pb{$r$} & $\left[\frac{n-r+2}{2}\right]$\\
\hline
\end{tabular}
\end{center}
\end{table}
\vspace*{12pt}
\noindent{\bf Remark}
In \cite{ds1}, it
was argued that a generic rank 1, genus $g$ solution of the KP equation
corresponds to a solution of the $(r,n)$-th KP equation with $r=g+1$ and
$n=g+2$. The dependence of the Lagrangian ${\cal L}(r,n)={\cal L}(g+1,g+2)$ on
the potentials for this case is found from Table \ref{table1} by using these
values for $r$ and $n$. It follows that in the generic case, the potential
$\alpha_{r-j}(r)$ appears with $g-j+2$ derivatives, for $j=2,3,\ldots, g+1$.
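The counting in the theorem above is easy to verify mechanically. The sketch below encodes $N_i$ as in Table \ref{table1} and checks the generic case $r=g+1$, $n=g+2$, in which $\alpha_{r-j}(r)$ should appear with $g-j+2$ derivatives:

```python
def N(i, r, n):
    # Theorem: N_2 = [(r+n-3)/2]; N_i = [(r+n-2i+2)/2] for 3 <= i <= r
    if i == 2:
        return (r + n - 3) // 2
    return (r + n - 2*i + 2) // 2

# generic case r = g+1, n = g+2: alpha_{r-j}(r) carries g-j+2 derivatives,
# for j = 2, 3, ..., g+1
for g in range(1, 20):
    r, n = g + 1, g + 2
    assert all(N(j, r, n) == g - j + 2 for j in range(2, g + 2))
```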
\section{The Ostrogradskii transformation, canonical variables and the
Hamiltonian system}\la{sec:ostro}
If we have a Lagrangian system, where the Lagrangian only depends on the
potentials and their first derivatives, then under certain conditions we can
use the Legendre transformation \cite{arnold} to write the Lagrangian system in
first-order form as a Hamiltonian system with canonical variables
$(\mbf{q},\mbf{p})$. Here the variables $\mbf{q}$ are the potentials appearing
in the Lagrangian and all of their derivatives, except the highest derivative.
Their {\em conjugate variables} $\mbf{p}$ are defined as the partial
derivatives of the Lagrangian with respect to the first derivatives of the
$\mbf{q}$ variables \cite{arnold}.
The Lagrangian system \rf{el} we constructed from the KP hierarchy, assuming
two of its flows are stationary, depends on more than just the first
derivatives of the potentials $\mbf{\alpha}(r) = (\alpha_{0}(r), \alpha_1(r),
\ldots, $ $\alpha_{r-2}(r))^T$. The Legendre transformation is generalized to
write the Lagrangian system \rf{el} in Hamiltonian form. This is achieved by
Ostrogradskii's theorem, given later in this section. Consider the
Ostrogradskii transformation (see \cite{dfn2, whittaker} for a simpler
version)
\begin{equation}\la{ostrotrans}
q_{ij}=\ppn{j-1}{}{x}\alpha_{r-i-1}(r),
~~p_{ij}=\dd{{\cal L}(r,n)}{\aa{r-i-1}{j}}
\end{equation}
\noindent for $i=1,2, \ldots, r-1$ and $j=1, 2, \ldots, N_{i+1}$.
Note that when all the $N_j=1$, for $j=2,3,\ldots, r$ ($i.e.$, when the
Lagrangian only depends on first derivatives), then the Ostrogradskii
transformation \rf{ostrotrans} reduces to the Legendre transformation.
Using the definition of the variational derivative, we establish the recurrence
relations
\begin{equation}\la{ostrorecur}
p_{ij}=\pp{{\cal L}(r,n)}{\aa{r-i-1}{j}}-\pp{p_{i(j+1)}}{x}, ~~\mbox{for} ~~
j=1,2,\ldots, N_{i+1}.
\end{equation}
Here we have defined $p_{i(N_{i+1}+1)}$ to be zero. These recurrence relations
will be useful later on. Furthermore, from the definition of the Ostrogradskii
transformation \rf{ostrotrans}, we obtain
\begin{equation}\la{ostroweight}
W[q_{ij}]=i+j~~\mbox{and}~~W[p_{ij}]=r+n-(i+j).
\end{equation}
\noindent Weight relationships such as these provide a quick check for the validity of
large expressions, such as the ones encountered below. Many typographical
errors are easily avoided by checking that only terms with the same weight are
added.
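To make the transformation \rf{ostrotrans} and the recurrence \rf{ostrorecur} concrete, consider a toy Lagrangian with a single potential $q(x)$ depending on derivatives up to second order, ${\cal L}=\frac{1}{2}(q'')^2$ (an illustrative example, not one of the ${\cal L}(r,n)$). The recurrence then yields $p_2=q''$ and $p_1=-q'''$:

```python
import sympy as sp

x = sp.symbols('x')
q = sp.Function('q')(x)

# toy higher-derivative Lagrangian: L = (q'')^2 / 2, so N = 2
L = sp.Rational(1, 2)*q.diff(x, 2)**2

def partial_wrt(expr, var):
    # partial derivative of expr with respect to the derivative `var`
    s = sp.Dummy('s')
    return expr.subs(var, s).diff(s).subs(s, var)

N = 2
p = {N + 1: sp.Integer(0)}             # p_{N+1} = 0 starts the recurrence
for j in range(N, 0, -1):              # p_j = dL/d q^(j) - (p_{j+1})'
    p[j] = partial_wrt(L, q.diff(x, j)) - sp.diff(p[j + 1], x)

assert p[2] == q.diff(x, 2)                   # p_2 = q''
assert (p[1] + q.diff(x, 3)).expand() == 0    # p_1 = -q'''
```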
The Lagrangian needs to fulfill a nonsingularity requirement for the
Ostrogradskii transformation to be invertible:
\vspace*{12pt}
\noindent {\bf Definition \cite{dfn2}:}
The Lagrangian ${\cal L}(r,n)$ is {\bf (strongly) nonsingular} if
the Ostrogradskii transformation \rf{ostrotrans} can be solved in the form
\begin{displaymath}
\aa{r-i}{j}=\aa{r-i}{j}(\mbf{q},\mbf{p}), ~~\mbox{for}~i=2,3,\ldots, r
~~\mbox{and}~j=1,2,\ldots, 2 N_i-1.
\end{displaymath}
\noindent Here $\mbf{q}$ and $\mbf{p}$ denote the vector of all variables $q_{ij}$
and $p_{ij}$ respectively. In other words, the Ostrogradskii transformation
\rf{ostrotrans} is invertible if and only if the Lagrangian is nonsingular.
Then the Euler-Lagrange equation \rf{el} can be written in first-order
form using the variables $\mbf{q}$ and $\mbf{p}$.
Define the vector $\mbf{X}=(\aa{r-2}{N_2}, \aa{r-3}{N_3}, \ldots,
\aa{0}{N_r})^T$. This is the vector containing the highest derivatives of the
potentials. We already know that the highest derivatives of the potentials
are found in the quadratic terms of the Lagrangian. The Lagrangian is
conveniently written in the following form:
\begin{equation}\la{quadraticform}
{\cal L}(r,n)=\frac{1}{2} \mbf{X}^T \mbf{{\cal G}}(r,n) \mbf{X}+\mbf{{\cal
A}}^T (r,n) \mbf{X}+ \tilde{\mbf{{\cal L}}}(r,n),
\end{equation}
\noindent with $\mbf{{\cal G}}(r,n)$, $\mbf{{\cal A}}(r,n)$ and $\tilde{\mbf{{\cal L}}}(r,n)$
independent of $\mbf{X}$. $\mbf{{\cal G}}(r,n)$ is a constant symmetric $(r-1)
\times (r-1)$ matrix and $\mbf{{\cal A}}(r,n)$ is an $(r-1)$-dimensional
vector. In the classical case of the Legendre transformation, $\mbf{{\cal
G}}(r,n)$ can be regarded as either a metric tensor or as the inverse of a
mass tensor \cite{arnold}.
The following theorem generalizes a well-known result for the Legendre
transformation.
\begin{theo}\la{prop:sing}
The Lagrangian ${\cal L}(r,n)$ is nonsingular if and only if
the matrix $\mbf{{\cal G}}(r,n)$ in \rf{quadraticform} is nonsingular.
\end{theo}
\noindent{\bf Proof}
The proof is an extension of the proof in the case when the Lagrangian depends
on only one potential \cite{dfn2}. We demonstrate that under the assumption
that $\mbf{{\cal G}}(r,n)$ is nonsingular, the Ostrogradskii transformation
\rf{ostrotrans} is invertible. Then by definition the Lagrangian
is nonsingular.
First note that the variables $\mbf{q}$ are expressed in terms of the
potentials and their derivatives, by their definition \rf{ostrotrans}.
Furthermore, the Lagrangian ${\cal L}(r,n)$ is a function of only $\mbf{q}$
and $\mbf{X}$: it follows from the Ostrogradskii transformation
\rf{ostrotrans} that all derivatives of the potentials in the Lagrangian are
$\mbf{q}$-variables, except the highest derivative of the potentials. These are
the components of $\mbf{X}$.
We now construct the vector
\begin{eqnarray*}
\mbf{P}&=&\left(
\begin{array}{cccc}
p_{1N_2},&
p_{2N_3},&
\cdots,&
p_{(r-1)N_r}
\end{array}
\right)^T\\
&=&
\left(
\dd{{\cal L}(r,n)}{\aa{r-2}{N_2}},
\dd{{\cal L}(r,n)}{\aa{r-3}{N_3}},
\cdots,
\dd{{\cal L}(r,n)}{\aa{0}{N_r}}
\right)^T\\&=&
\dd{{\cal L}(r,n)}{\mbf{X}}.
\end{eqnarray*}
\noindent Since $\mbf{X}$ contains the highest derivatives of the potentials, by
definition of the variational derivative we get
\begin{eqnarray}\la{labeliguess}
\left(
\begin{array}{c}
p_{1N_2}\\
p_{2N_3}\\
\vdots\\
p_{(r-1)N_r}
\end{array}
\right)=\pp{{\cal L}(r,n)}{\mbf{X}}=\mbf{{\cal G}}(r,n) \mbf{X}+
\mbf{{\cal A}}(r,n).
\end{eqnarray}
\noindent Now we solve \rf{labeliguess} for $\mbf{X}$, since by assumption
$\mbf{{\cal G}}(r,n)$ is nonsingular. We denote $\mbf{{\cal
G}}^{-1}(r,n)=\mbf{{\cal M}}(r,n)$. Since $\mbf{{\cal G}}(r,n)$ is symmetric,
so is $\mbf{{\cal M}}(r,n)$.
\begin{eqnarray}\nonumber
\mbf{X}&=&\left(
\begin{array}{c}
\aa{r-2}{N_2}\\
\aa{r-3}{N_3}\\
\vdots\\
\aa{0}{N_r}
\end{array}
\right)=\mbf{\cal M}(r,n)
\left(
\begin{array}{c}
p_{1N_2}\\
p_{2N_3}\\
\vdots\\
p_{(r-1)N_r}
\end{array}
\right)-\mbf{\cal M}(r,n)\mbf{\cal A}(r,n)\\\la{xintermsofp}
&=&\mbf{\cal M}(r,n)\left(\mbf{P}-\mbf{\cal A}(r,n)\right).
\end{eqnarray}
We have expressed $\mbf{X}$ in terms of the coordinates $\mbf{q}$ and
$\mbf{p}$. We want to do the same with its derivatives. Now consider the
following set of Ostrogradskii equations, using the recurrence relations
\rf{ostrorecur}
\begin{eqnarray*} \left( \begin{array}{c} p_{1(N_2-1)}\\ p_{2(N_3-1)}\\ \vdots\\
p_{(r-1)(N_r-1)} \end{array} \right)&=& \left( \begin{array}{c} \pp{{\cal
L}(r,n)}{\aa{r-2}{N_2-1}}-\pp{p_{1N_2}}{x}\\ \pp{{\cal
L}(r,n)}{\aa{r-3}{N_3-1}}-\pp{p_{2N_3}}{x}\\ \vdots\\ \pp{{\cal
L}(r,n)}{\aa{0}{N_r-1}}-\pp{p_{(r-1)N_r}}{x} \end{array} \right)\\&=& \left( \begin{array}{c}
\pp{{\cal L}(r,n)}{\aa{r-2}{N_2-1}}\\ \pp{{\cal L}(r,n)}{\aa{r-3}{N_3-1}}\\
\vdots\\ \pp{{\cal L}(r,n)}{\aa{0}{N_r-1}} \end{array} \right)-\pp{}{x} \left( \begin{array}{c}
p_{1N_2}\\ p_{2N_3}\\ \vdots\\ p_{(r-1)N_r} \end{array} \right). \end{eqnarray*}
\noindent Note that the first term depends only on $\mbf{q}$ and $\mbf{X}$. The last
term is the derivative of $\mbf{\cal G}(r,n) \mbf{X}+\mbf{\cal
A}(r,n)$. Since $\mbf{\cal A}(r,n)$ depends only on
$\mbf{q}$, the derivative of $\mbf{\cal A}(r,n)$ depends only on
$\mbf{q}$ and on $\mbf{X}$. Since $\mbf{\cal G}(r,n)$ is constant, we can
solve this relationship for $\mbf{X}'$:
\begin{displaymath} \mbf{X}'=\mbf{\cal M}(r,n) \left( \begin{array}{c} \pp{{\cal
L}(r,n)}{\aa{r-2}{N_2-1}}\\ \pp{{\cal L}(r,n)}{\aa{r-3}{N_3-1}}\\ \vdots\\
\pp{{\cal L}(r,n)}{\aa{0}{N_r-1}} \end{array} \right)-\mbf{\cal M}(r,n) \left( \begin{array}{c}
p_{1(N_2-1)}\\ p_{2(N_3-1)}\\ \vdots\\ p_{(r-1)(N_r-1)} \end{array} \right)-\mbf{\cal
M}(r,n)\pp{\mbf{\cal A}(r,n)}{x}, \end{displaymath}
\noindent and we have expressed $\mbf{X}'$ in terms of $\mbf{q}, \mbf{p}$ and
$\mbf{X}$. Since in the previous steps, $\mbf{X}$ was expressed in terms of
$\mbf{q}$ and $\mbf{p}$, this proves that $\mbf{X}'$ is expressible in terms of
$\mbf{q}$ and $\mbf{p}$. Continued use of the recursion relation
\rf{ostrorecur} allows us to do the same with higher derivatives of $\mbf{X}$,
which are needed as the Euler-Lagrange equations \rf{el} depend on twice
as many derivatives of the potentials as the Lagrangian. This proves that the
Ostrogradskii transformation \rf{ostrotrans} can be inverted. Hence the
Lagrangian is nonsingular if $\mbf{\cal G}(r,n)$ is nonsingular.
The converse statement is clearly also true: if the matrix $\mbf{\cal G}(r,n)$
is singular, then the Ostrogradskii transformation is not invertible (step 1 in
the proof fails, since \rf{labeliguess} cannot be solved for $\mbf{X}$). Hence
the Lagrangian is singular if the matrix $\mbf{\cal G}(r,n)$ is singular.
\hspace*{\fill}$\rule{3mm}{3mm}$
\vspace*{12pt}
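The inversion step \rf{xintermsofp} is an ordinary linear solve, which can be illustrated numerically. In the sketch below, the matrices are hypothetical numerical stand-ins for $\mbf{\cal G}(r,n)$, $\mbf{\cal A}(r,n)$ and $\mbf{X}$ (the actual objects are differential expressions in the potentials):

```python
import numpy as np

# Hypothetical stand-ins: G plays the role of the symmetric, nonsingular
# matrix cal G(r,n); A of cal A(r,n)(q); X_true of the highest derivatives.
G = np.array([[2.0, 1.0],
              [1.0, 3.0]])
A = np.array([0.5, -1.0])
X_true = np.array([1.0, 2.0])

# The Ostrogradskii momenta, as in (labeliguess): P = G X + A
P = G @ X_true + A

# Inversion, as in (xintermsofp): X = M (P - A), with M = G^{-1}
M = np.linalg.inv(G)
X = M @ (P - A)

assert np.allclose(X, X_true)
assert np.allclose(M, M.T)   # M inherits the symmetry of G
```

The assertion on symmetry mirrors the remark in the text that $\mbf{\cal M}(r,n)$ is symmetric because $\mbf{\cal G}(r,n)$ is.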
In sharp contrast to the Korteweg-de Vries hierarchy \cite{nmpz}, the KP
hierarchy contains both singular and nonsingular Lagrangians. This is an
indication that the set of potentials $\mbf{\alpha}(r)=(\alpha_{0}(r),
\alpha_1(r), \ldots, \alpha_{r-2}(r))^T$ is not a good set of variables to
describe the dynamics of the system. These points are further explained in
Section \ref{sec:sing}.
Define
\begin{equation}\la{count}
N=\sum_{i=2}^r N_i=N_2+N_3+\ldots+N_r.
\end{equation}
We have the following theorem:
\begin{theo}\la{ostrotheo}{\bf (Ostrogradskii \cite{dfn2, ostro})}
If the Lagrangian ${\cal L}(r,n)$ is nonsingular, then the first-order system
obtained by rewriting the Euler-Lagrange equations in terms of the new
variables $\mbf{q}$ and $\mbf{p}$ is Hamiltonian with Hamiltonian
\begin{equation}\la{ostrohamil}
H(\mbf{q},\mbf{p})=\sum_{i=1}^{r-1}\sum_{j=1}^{N_{i+1}} p_{ij} \aa{r-i-1}{j}-
{\cal L}(r,n).
\end{equation}
Here the inverse Ostrogradskii transformation is to be used in order to
express the Hamiltonian in terms of $\mbf{q}$ and $\mbf{p}$ only.
The Euler-Lagrange
equations are equivalent to the $N$-dimensional Hamiltonian system
\begin{equation}\la{ostrodynsys}
\pp{q_{ij}}{x}=\pp{H}{p_{ij}},~~~\pp{p_{ij}}{x}=-\pp{H}{q_{ij}},
\end{equation}
for $i=1,2,\ldots, r-1$ and $j=1,2,\ldots, N_{i+1}$.
\end{theo}
\noindent{\bf Proof} The proof is identical to the proof in \cite{dfn2}, except that
more than one potential appears in the Lagrangian. This slight modification does
not change the proof in any fundamental way.
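Although the systems considered here are treated analytically, the canonical structure \rf{ostrodynsys} is easy to illustrate with a generic numerical sketch. The Hamiltonian below is a toy harmonic oscillator, not the $H$ of \rf{ostrohamil}; the stepping function itself applies to any canonical system with given partial derivatives of $H$:

```python
import numpy as np

# One symplectic-Euler step for a canonical system of the form (ostrodynsys):
#   dq/dx = dH/dp,   dp/dx = -dH/dq.
def symplectic_euler_step(q, p, dHdq, dHdp, dx):
    """Advance (q, p) by one step of length dx in the 'x' variable."""
    p_new = p - dx * dHdq(q)      # update the momenta first
    q_new = q + dx * dHdp(p_new)  # then the positions, using the new momenta
    return q_new, p_new

# Toy test: harmonic oscillator H = (p^2 + q^2)/2; the energy drift of the
# symplectic scheme stays bounded of order dx.
q, p = np.array([1.0]), np.array([0.0])
H0 = 0.5 * (p @ p + q @ q)
for _ in range(1000):
    q, p = symplectic_euler_step(q, p, lambda q: q, lambda p: p, 1e-3)
H1 = 0.5 * (p @ p + q @ q)
assert abs(H1 - H0) < 1e-3
```

The bounded energy error reflects the fact that the Hamiltonian is a conserved quantity of the exact flow, as discussed after Theorem \ref{prop2}.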
\vspace*{12pt}
In other words, the variables $q_{ij}$ and $p_{ij}$ are canonically conjugate
variables under the classical symplectic structure where the Poisson bracket
is given by
\begin{equation}\la{cpb}
\{f,g\}=\sum_{i=1}^{r-1}\sum_{j=1}^{N_{i+1}}\left(\pp{f}{q_{ij}}\pp{g}{p_{ij}}-
\pp{f}{p_{ij}}\pp{g}{q_{ij}}\right)=(\mbf{\nabla} f)^T J (\mbf{\nabla} g),
\end{equation}
\noindent and
\begin{displaymath}\la{J}
\mbf{J}=\left(
\begin{array}{cc}
\mbf{0}_N&\mbf{I}_N\\
-\mbf{I}_N & \mbf{0}_N
\end{array}
\right),
\end{displaymath}
\noindent where $\mbf{0}_N$ is the $N \times N$-null matrix, $\mbf{I}_N$ is the
$N\times N$-identity matrix, and
\begin{eqnarray}\nonumber
\mbf{\nabla} f&=&\left(
\pp{f}{q_{11}},
\ldots,
\pp{f}{q_{1N_2}},
\pp{f}{q_{21}},
\ldots,
\pp{f}{q_{2N_3}},
\ldots,
\pp{f}{q_{(r-1)N_r}}\right.,\\\la{nabla}
&&\left.\pp{f}{p_{11}},
\ldots,
\pp{f}{p_{1N_2}},
\pp{f}{p_{21}},
\ldots,
\pp{f}{p_{2N_3}},
\ldots,
\pp{f}{p_{(r-1)N_r}}
\right)^T
\end{eqnarray}
\noindent is a $2N$-dimensional vector.
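As a sanity check, the matrix form of the bracket in \rf{cpb} agrees with its componentwise definition. A small symbolic verification for $N=2$, with two arbitrarily chosen functions $f$ and $g$ (hypothetical; any smooth functions would do):

```python
import sympy as sp

# Phase space variables, ordered (q1, q2, p1, p2) as in (nabla).
q1, q2, p1, p2 = sp.symbols('q1 q2 p1 p2')
z = [q1, q2, p1, p2]

f = q1**2 * p2 + q2 * p1
g = sp.sin(q1) + p1 * p2

# Componentwise definition of {f, g}
pb = sum(sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)
         for q, p in [(q1, p1), (q2, p2)])

# Matrix form (nabla f)^T J (nabla g), with J = [[0, I], [-I, 0]]
J = sp.Matrix([[0, 0, 1, 0],
               [0, 0, 0, 1],
               [-1, 0, 0, 0],
               [0, -1, 0, 0]])
grad = lambda h: sp.Matrix([sp.diff(h, v) for v in z])
pb_matrix = (grad(f).T * J * grad(g))[0, 0]

assert sp.simplify(pb - pb_matrix) == 0
```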
\begin{theo}
The Hamiltonian in \rf{ostrohamil} can be rewritten in the form
\begin{eqnarray}\nonumber
H(\mbf{q},\mbf{p})&=&
\frac{1}{2} \left(\mbf{P}-\mbf{{\cal A}}(r,n)(\mbf{q})\right)^T
\mbf{{\cal M}}(r,n)
\left(\mbf{P}-\mbf{{\cal A}}(r,n)(\mbf{q})\right)
+\\\la{goodform}
&&\sum_{i=1}^{r-1}\sum_{j=1}^{N_{i+1}-1}
p_{ij} q_{i(j+1)}-\tilde{\cal L}(r,n)(\mbf{q}),
\end{eqnarray}
\noindent with $\mbf{P}=(p_{1N_2}, p_{2N_3}, \ldots, p_{(r-1)N_r})^T$.
\end{theo}
\noindent{\bf Proof} (using \rf{ostrotrans}, \rf{quadraticform} and \rf{xintermsofp})
\begin{eqnarray*}
H(\mbf{q},\mbf{p})&\hspace*{-4pt}=\hspace*{-4pt}&\sum_{i=1}^{r-1}\sum_{j=1}^{N_{i+1}} p_{ij} \aa{r-i-1}{j}-
{\cal L}(r,n)\\
&\hspace*{-4pt}=\hspace*{-4pt}&\sum_{i=1}^{r-1}\sum_{j=1}^{N_{i+1}-1}p_{ij}
\aa{r-i-1}{j}+\sum_{i=1}^{r-1}
p_{iN_{i+1}}\aa{r-i-1}{N_{i+1}}-
\frac{1}{2}\mbf{X}^T \mbf{\cal G}(r,n) \mbf{X}-\mbf{\cal A}^T(r,n)(\mbf{q})
\mbf{X}-\\&&\tilde{\cal L}(r,n)(\mbf{q})\\
&\hspace*{-4pt}=\hspace*{-4pt}&\sum_{i=1}^{r-1}\sum_{j=1}^{N_{i+1}-1}p_{ij} q_{i(j+1)}+
\mbf{P}^T \mbf{X}
-\frac{1}{2}\mbf{X}^T \mbf{\cal G}(r,n) \mbf{X}-
\mbf{\cal A}^T(r,n)(\mbf{q}) \mbf{X}-\tilde{\cal L}(r,n)(\mbf{q})\\
&\hspace*{-4pt}=\hspace*{-4pt}&\sum_{i=1}^{r-1}\sum_{j=1}^{N_{i+1}-1}p_{ij} q_{i(j+1)}+
\mbf{P}^T \mbf{\cal M}(r,n)
\left(\mbf{P}-\mbf{\cal A}(r,n)(\mbf{q})\right)-\\
&&\frac{1}{2}\left(\mbf{P}-\mbf{\cal A}(r,n)(\mbf{q})\right)^T
\mbf{\cal M}^T(r,n)
\mbf{\cal G}(r,n)\mbf{\cal M}(r,n) \left(\mbf{P}-\mbf{\cal A}(r,n)(\mbf{q})
\right)-\\
&&\mbf{\cal A}^T(r,n)(\mbf{q}) \mbf{\cal M}(r,n) \left(\mbf{P}-
\mbf{\cal A}(r,n)(\mbf{q})\right)-
\tilde{\cal L}(r,n)(\mbf{q})\\
&\hspace*{-4pt}=\hspace*{-4pt}&\sum_{i=1}^{r-1}\sum_{j=1}^{N_{i+1}-1}p_{ij} q_{i(j+1)}+
\frac{1}{2}\left(\mbf{P}-\mbf{\cal A}(r,n)(\mbf{q})\right)^T
\mbf{\cal M}(r,n) \left(\mbf{P}-\mbf{\cal A}(r,n)(\mbf{q})\right)-\tilde{\cal
L}(r,n)(\mbf{q})\\
&\hspace*{-4pt}=\hspace*{-4pt}&\frac{1}{2} \left(\mbf{P}-\mbf{\cal A}(r,n)(\mbf{q})\right)^T
\mbf{\cal M}(r,n)
\left(\mbf{P}-\mbf{\cal A}(r,n)(\mbf{q})\right)
+\sum_{i=1}^{r-1}\sum_{j=1}^{N_{i+1}-1}
p_{ij} q_{i(j+1)}-\tilde{\cal L}(r,n)(\mbf{q}).
\end{eqnarray*}\hspace*{\fill}$\rule{3mm}{3mm}$
\vspace*{12pt}
\noindent Note that if the middle term were missing from \rf{goodform}, the
Hamiltonian would be of the form: Kinetic Energy plus Potential Energy. Such
Hamiltonians are called natural \cite{arnold}. (Note that $\tilde{\cal
L}(r,n)$ plays the role of minus the potential energy by its definition
\rf{quadraticform}.) However, the term natural is conventionally only used if
the mass tensor $\mbf{{\cal M}}(r,n)$ is positive definite. This is not the
case here, as the examples will illustrate.
The next theorem is well known \cite{dfn2} for a Lagrangian
depending on one potential.
\begin{theo}\la{prop2}
\begin{equation}\la{compare}
\pp{H}{x}=-\mbf{\alpha}'^T(r) \dd{{\cal L}(r,n)}{\mbf{\alpha}(r)}.
\end{equation}
\end{theo}
\noindent{\bf Proof}
The proof is by direct calculation.
\begin{eqnarray*}
\pp{H}{x}&=&\pp{}{x}\left(
\sum_{i=1}^{r-1}\sum_{j=1}^{N_{i+1}} p_{ij} \aa{r-i-1}{j}-{\cal L}(r,n)
\right)\\
&=&\sum_{i=1}^{r-1}\sum_{j=1}^{N_{i+1}}\left[
\pp{p_{ij}}{x}\aa{r-i-1}{j}+p_{ij}\pp{\aa{r-i-1}{j}}{x}
\right]-\pp{{\cal L}(r,n)}{x}\\
&=&\sum_{i=1}^{r-1}\pp{p_{i1}}{x}\pp{\alpha_{r-i-1}(r)}{x}+
\sum_{i=1}^{r-1}\sum_{j=2}^{N_{i+1}}\pp{p_{ij}}{x}\aa{r-i-1}{j}+\\&&
\sum_{i=1}^{r-1}\sum_{j=1}^{N_{i+1}}p_{ij} \aa{r-i-1}{j+1}-
\sum_{i=1}^{r-1}\sum_{j=0}^{N_{i+1}}\pp{{\cal L}(r,n)}{\aa{r-i-1}{j}}
\aa{r-i-1}{j+1}\\
&=&\sum_{i=1}^{r-1}\pp{p_{i1}}{x}\pp{\alpha_{r-i-1}(r)}{x}+
\sum_{i=1}^{r-1}\sum_{j=2}^{N_{i+1}}\pp{p_{ij}}{x}\aa{r-i-1}{j}+
\sum_{i=1}^{r-1}\sum_{j=1}^{N_{i+1}}p_{ij} \aa{r-i-1}{j+1}-\\&&
\sum_{i=1}^{r-1}\sum_{j=1}^{N_{i+1}}\pp{{\cal L}(r,n)}{\aa{r-i-1}{j}}
\aa{r-i-1}{j+1}-\sum_{i=1}^{r-1}\pp{{\cal L}(r,n)}{\alpha_{r-i-1}(r)}
\pp{\alpha_{r-i-1}(r)}{x}\\
&\since{ostrorecur}&\sum_{i=1}^{r-1}\left[
\pp{p_{i1}}{x}-\pp{{\cal L}(r,n)}{\alpha_{r-i-1}(r)}
\right] \pp{\alpha_{r-i-1}(r)}{x}+
\sum_{i=1}^{r-1}\sum_{j=2}^{N_{i+1}}\left[
\pp{{\cal L}(r,n)}{\aa{r-i-1}{j-1}}-p_{i(j-1)}
\right] \aa{r-i-1}{j}+\\
&&\sum_{i=1}^{r-1}\sum_{j=1}^{N_{i+1}}p_{ij} \aa{r-i-1}{j+1}-
\sum_{i=1}^{r-1}\sum_{j=1}^{N_{i+1}}\pp{{\cal L}(r,n)}{\aa{r-i-1}{j}}
\aa{r-i-1}{j+1}\\
&\since{ostrotrans}&-\sum_{i=1}^{r-1} \left[
\pp{{\cal L}(r,n)}{\alpha_{r-i-1}(r)}-\pp{}{x}\dd{{\cal
L}(r,n)}{\alpha_{r-i-1}'(r)}
\right]\pp{\alpha_{r-i-1}(r)}{x}+\sum_{i=1}^{r-1}\sum_{j=1}^{N_{i+1}-1}
\pp{{\cal L}(r,n)}{\aa{r-i-1}{j}} \aa{r-i-1}{j+1}
-\\&&\sum_{i=1}^{r-1}\sum_{j=1}^{N_{i+1}-1}p_{ij}\aa{r-i-1}{j+1}+
\sum_{i=1}^{r-1}
\sum_{j=1}^{N_{i+1}-1} p_{ij}
\aa{r-i-1}{j+1}+\sum_{i=1}^{r-1}p_{iN_{i+1}}\aa{r-i-1}{N_{i+1}+1}-\\
&&\sum_{i=1}^{r-1}\sum_{j=1}^{N_{i+1}-1}\pp{{\cal L}(r,n)}{\aa{r-i-1}{j}}
\aa{r-i-1}{j+1}-
\sum_{i=1}^{r-1}\pp{{\cal L}(r,n)}{\aa{r-i-1}{N_{i+1}}}
\aa{r-i-1}{N_{i+1}+1}\\
&=&-\sum_{i=1}^{r-1}\dd{{\cal L}(r,n)}{\alpha_{r-i-1}(r)}
\pp{\alpha_{r-i-1}(r)}{x}+\sum_{i=1}^{r-1}\left[
p_{iN_{i+1}}-\pp{{\cal L}(r,n)}{\aa{r-i-1}{N_{i+1}}}
\right]\aa{r-i-1}{N_{i+1}+1}\\
&=&-\mbf{\alpha}'^T(r) \dd{{\cal L}(r,n)}{\mbf{\alpha}(r)}.
\end{eqnarray*}\hspace*{\fill}$\rule{3mm}{3mm}$
The simplest consequence of Theorem \ref{prop2} is that the Hamiltonian is
conserved along trajectories of the system ($i.e.$, where the Euler-Lagrange
equations \rf{el} are satisfied). But this result is a direct
consequence of Ostrogradskii's theorem: since the resulting canonical
Hamiltonian system is autonomous, the Hamiltonian is conserved. However,
Theorem \ref{prop2} will be useful when we construct more conserved
quantities in the next section. It shows how the Hamiltonian fits in
with the other conserved quantities.
\section{Complete integrability of the Hamiltonian system}\la{sec:comp}
Denote
\begin{equation}
S(r,n)=\{H(r,k)| k=1,2, \ldots, \mbox{with} ~k~\mbox{not an integer multiple
of}~r~\mbox{or}~n\}.
\end{equation}
We know that in the quotient space where the Poisson bracket \rf{pb} is
defined, using \rf{pbcommute} we have
\begin{equation}\la{commute}
\left\{H(r,k_1), H(r,k_2)\right\}=\left(\dd{H(r,k_1)}{\mbf{\alpha}(r)}\right)^T
\mbf{J}(r)
\left(\dd{H(r,k_2)}{\mbf{\alpha}(r)}\right)=0.
\end{equation}
\noindent In particular, in the quotient space ($i.e.$, up to total derivatives)
\begin{equation}\la{commute2}
\left\{{\cal L}(r,n),H(r,k)\right\}=0,
\end{equation}
\noindent for $H(r,k)\in S(r,n)$. In other words, the Poisson bracket
of these two quantities is a total derivative,
\begin{equation}\la{ostroconsalmost}
\left\{{\cal L}(r,n),H(r,k)\right\}=\pp{T_k}{x}.
\end{equation}
Along the trajectories of the Euler-Lagrange equations, $i.e.$, along the
trajectories of the Hamiltonian system \rf{ostrodynsys}, the quantities $T_k$
are conserved. A list of conserved quantities for the system \rf{ostrodynsys}
is hence obtained from
\begin{equation}\la{ostrocons}
T_k(\mbf{q},\mbf{p})=\int \left(\dd{{\cal L}(r,n)}{\mbf{\alpha}(r)}\right)^T
\mbf{J}(r)
\left(\dd{H(r,k)}{\mbf{\alpha}(r)}\right) dx,
\end{equation}
\noindent where $k$ is not an integer multiple of $r$ or $n$. The inverse
Ostrogradskii transformation has to be used to express the right-hand side of
this equation in terms of $\mbf{q}$ and $\mbf{p}$. Note that \rf{ostrocons} is
analogous to the expression for the conserved quantities in
\cite{bogoyavlenskii}.
From the previous section, we know the Hamiltonian \rf{ostrohamil} is a
conserved quantity for the system \rf{ostrodynsys}. The theorem below shows
that up to a sign, the Hamiltonian is the first member of the newly
constructed list of conserved quantities, \rf{ostrocons}.
\begin{theo}
\begin{equation}\la{cons1}
H(\mbf{q},\mbf{p})=-T_1(\mbf{q},\mbf{p})
\end{equation}
\end{theo}
\noindent{\bf Proof} From Theorem \ref{prop2},
\begin{displaymath}
\pp{H}{x}=-\mbf{\alpha}'^T(r) \dd{{\cal L}(r,n)}{\mbf{\alpha}(r)}=
-\left(\dd{{\cal L}(r,n)}{\mbf{\alpha}(r)}\right)^T \pp{\mbf{\alpha}(r)}{x}.
\end{displaymath}
But the first flow of each hierarchy is the $x$-flow, therefore
\begin{eqnarray*}
\pp{H}{x}&=&-\left(\dd{{\cal L}(r,n)}{\mbf{\alpha}(r)}\right)^T \mbf{J}(r)
\pp{H(r,1)}{\mbf{\alpha}(r)}\\
&=&-\pp{T_1}{x},
\end{eqnarray*}
\noindent from which we obtain the theorem. \hspace*{\fill}$\rule{3mm}{3mm}$
\vspace*{12pt}
The $N$-dimensional Hamiltonian system \rf{ostrodynsys} and a list of conserved
quantities \rf{ostrocons} for it have been constructed. Complete integrability
in the sense of Liouville \cite{arnold} can be concluded under the following
conditions:
\begin{itemize}
\item In the phase space spanned by the independent coordinates $\mbf{q}$ and
$\mbf{p}$ there are $N$ nontrivial functionally independent conserved
quantities.
\item These $N$ conserved quantities are mutually in involution with respect
to the Poisson bracket \rf{cpb}, $i.e.$, their mutual Poisson brackets vanish.
\end{itemize}
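The involution condition can be made concrete on a standard toy example, unrelated to the KP quantities $T_k$: for the two-dimensional harmonic oscillator, the energy and the angular momentum are conserved quantities in involution with respect to the canonical bracket. A symbolic check:

```python
import sympy as sp

q1, q2, p1, p2 = sp.symbols('q1 q2 p1 p2')

# Canonical Poisson bracket, as in (cpb), for two degrees of freedom.
def pbracket(f, g):
    return sum(sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)
               for q, p in [(q1, p1), (q2, p2)])

H = (p1**2 + p2**2) / 2 + (q1**2 + q2**2) / 2   # energy
L = q1 * p2 - q2 * p1                           # angular momentum

assert sp.simplify(pbracket(H, L)) == 0   # H and L are in involution
```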
The list of conserved quantities \rf{ostrocons} for $k$ not an integer
multiple of $r$ or $n$ is infinite. It is clearly impossible for all of these
conserved quantities to be functionally independent: a dynamical system in a
$2 N$-dimensional phase space has at most $2N$ independent conserved
quantities. In particular, a $2N$-dimensional autonomous Hamiltonian system
has at most $N$ independent integrals of the motion that are mutually in
involution. Below it is shown that all the conserved quantities
$T_k(\mbf{q},\mbf{p})$ are mutually in involution; therefore at most $N$ of
them are functionally independent. We wait until Theorem
\ref{theo:atthispoint} to show that exactly $N$ different
$T_k(\mbf{q},\mbf{p})$ are nontrivial and functionally independent.
In the rest of this section, it is proved that all the $T_{k}(\mbf{q},\mbf{p})$
are in involution. This is not done by direct inspection of the mutual Poisson
brackets. Instead, we follow the approach of Bogoyavlenskii and Novikov
\cite{bogoyavlenskii}: we know from Adler \cite{adler} that all flows of the
hierarchy \rf{hamsys} commute. If we denote by $X_{t_k}$ the vector field that
evolves the phase space variables in the direction $t_k$, then the fact that
all the flows in \rf{hamsys} commute can be equivalently stated as the mutual
commutation of the vector fields $X_{t_k}$. Consider the Hamiltonian system
with canonical variables $(\mbf{q},\mbf{p})$, Poisson bracket \rf{cpb} and
Hamiltonian $H_k(\mbf{q},\mbf{p})=-T_k(\mbf{q},\mbf{p})$. We show below that
the vector field of this system, $X_{H_k}$, is the restriction of $X_{t_k}$ to
the phase space $(\mbf{q},\mbf{p})$ of the finite-dimensional Hamiltonian
system. So, the different $t_k$-flows commute, even when they are restricted to
the phase space consisting of the $(\mbf{q},\mbf{p})$ variables. In particular
the $t_k$- and the $t_1$-flow ($i.e.$, the $x$-flow) commute. Hence we have a
family of mutually commuting Hamiltonian systems, all with the same Poisson
bracket \rf{cpb}. In \cite{arnold}, it is proved that such a family of
Hamiltonians has mutually vanishing Poisson brackets. As a consequence, the
$T_k(\mbf{q},\mbf{p})$ are mutually in involution and the system
\rf{ostrodynsys} is completely integrable in the sense of Liouville, which is
what we set out to prove.
We remark, however, that this way of proving complete integrability also
provides us with $N$-dimensional Hamiltonian systems for the evolution of the
phase space variables, not only in $x$, but in all `time' variables $t_k$.
These different Hamiltonian systems have a common list of conserved quantities
in involution. Hence each is completely integrable in the sense of
Liouville. This will be spelt out in more detail once we finish proving that
all the $T_k(\mbf{q}, \mbf{p})$ are in involution.
The following lemma is used in the proof of the Bogoyavlenskii-Novikov theorem.
\begin{lemma}\la{lemmalemma}
\begin{eqnarray}\la{lemma1}
\frac{\partial^2 H}{\partial p_{ij} \partial q_{i(j+1)}}&=&1, ~~~~~\mbox{for}~j\leq
N_{i+1}-1\\\la{lemma2}
\frac{\partial^2 H}{\partial p_{ij} \partial q_{is}}&=&0, ~~~~~\mbox{for}~s\neq j+1, j\leq
N_{i+1}-1\\\la{lemma3}
\frac{\partial^2 H}{\partial p_{i_1 j_1} \partial q_{i_2 j_2}}&=&0,~~~~~\mbox{for}~i_1\neq i_2
\end{eqnarray}
\end{lemma}
\noindent{\bf Proof} We use the form \rf{goodform} of the Hamiltonian. We get
\begin{eqnarray*}
\pp{H}{p_{ij}}=q_{i(j+1)},~~~~~\mbox{for}~j\leq N_{i+1}-1,
\end{eqnarray*}
from which \rf{lemma1} and \rf{lemma2} easily follow. Also, \rf{lemma3}
follows from this if $j_1\leq N_{i_1+1}-1$ and ${j_2 \leq~N_{i_2+1}-1}$.
For other values of $j_1, j_2$, \rf{lemma3} follows immediately
from \rf{goodform}.\hspace*{\fill}$\rule{3mm}{3mm}$
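The lemma can be checked symbolically in the smallest nontrivial case. Below, $r=2$ and $N_2=2$, so the phase space is $(q_{11}, q_{12}, p_{11}, p_{12})$; the functions $A$ and $Lt$ are arbitrary placeholders for ${\cal A}(r,n)(\mbf{q})$ and $\tilde{\cal L}(r,n)(\mbf{q})$, so the check does not depend on their specific form:

```python
import sympy as sp

q11, q12, p11, p12, m = sp.symbols('q11 q12 p11 p12 m')
A = sp.Function('A')(q11, q12)    # placeholder for cal A(r,n)(q)
Lt = sp.Function('Lt')(q11, q12)  # placeholder for Ltilde(r,n)(q)

# Hamiltonian in the form (goodform); m plays the role of the 1x1 matrix M.
H = sp.Rational(1, 2) * m * (p12 - A)**2 + p11 * q12 - Lt

assert sp.diff(H, p11, q12) == 1   # (lemma1): j = 1 <= N_2 - 1
assert sp.diff(H, p11, q11) == 0   # (lemma2): s = 1 != j + 1 = 2
```

With $r=2$ there is a single index $i$, so \rf{lemma3} is vacuous in this case.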
The following theorem generalizes the fundamental idea of Bogoyavlenskii and
Novikov \cite{bogoyavlenskii}:
\begin{theo}{\bf (Bogoyavlenskii-Novikov)}
On the solution space of the $(r,n)$-th stationary KP equation, the action of
the $k$-th time variable $t_k$ can be written as an $N$-dimensional Hamiltonian
system with Hamiltonian $H_k=-T_k$ and the same canonical variables as
determined by Ostrogradskii's theorem for the $(r,n)$-th stationary KP
equation.
\end{theo}
\newpage
\noindent{\bf Proof} The proof consists of four steps:
\begin{enumerate}
\item Prove that for $i=1, 2, \ldots, r-1$
\begin{equation}\la{step1}
\pp{q_{i1}}{t_k}=\pp{H_k}{p_{i1}}.
\end{equation}
\item This is an induction step. Assuming the validity of step 1, prove that
for $i=1,2,\ldots, r-1$ and $j=1,2,\ldots, N_{i+1}-1$
\begin{equation}\la{step2}
\pp{q_{i(j+1)}}{t_k}=\pp{H_k}{p_{i(j+1)}}.
\end{equation}
\item Prove that for $i=1, 2, \ldots, r-1$
\begin{equation}\la{step3}
\pp{p_{iN_{i+1}}}{t_k}=-\pp{H_k}{q_{iN_{i+1}}}.
\end{equation}
\item The last step is a `backwards' induction step: assuming step 3 is valid,
show that for $i=1,2,\ldots, r-1$ and $j=N_{i+1}, N_{i+1}-1, \ldots, 2$
\begin{equation}\la{step4}
\pp{p_{i(j-1)}}{t_k}=-\pp{H_k}{q_{i(j-1)}}.
\end{equation}
\end{enumerate}
\noindent{\bf Proof of step 1:}
During the course of this step, the index $i$ can attain any value from
$1,2,\ldots, r-1$.
Using the definition of the variational derivative,
\begin{eqnarray*}
\dd{{\cal L}(r,n)}{\alpha_{r-i-1}(r)}&=&\pp{{\cal
L}(r,n)}{\alpha_{r-i-1}(r)}-\frac{\partial}{\partial x} \dd{{\cal
L}(r,n)}{\alpha_{r-i-1}'(r)}\\
&=&\pp{{\cal
L}(r,n)}{\alpha_{r-i-1}(r)}-\pp{p_{i1}}{x}.
\end{eqnarray*}
\noindent This shows that the Euler-Lagrange equations are essentially the equations
of motion for the variables $p_{i1}, ~i=1,2,\ldots, r-1$. The other equations
of motion in Ostrogradskii's theorem are merely a consequence of the way the
new variables in the Ostrogradskii transformation are introduced.
On the other hand, from the definition of the Hamiltonian \rf{ostrohamil},
\begin{eqnarray*}
\pp{H}{q_{i1}}&=&-\pp{{\cal L}(r,n)}{q_{i1}}\\
&=&-\pp{{\cal L}(r,n)}{\alpha_{r-i-1}(r)}.
\end{eqnarray*}
Combining the two results,
\begin{equation}\la{step11}
\dd{{\cal L}(r,n)}{\alpha_{r-i-1}(r)}=-\pp{H}{q_{i1}}-\pp{p_{i1}}{x}.
\end{equation}
We expand $\partial T_k/\partial x$ in two different ways:
\begin{eqnarray*}
\pp{T_k}{x}&=&\left(\dd{{\cal L}(r,n)}{\mbf{\alpha}(r)}\right)^T \mbf{J}(r)
\left(\dd{H(r,k)}{\mbf{\alpha}(r)}\right)\\
&=&\sum_{i=1}^{r-1}\sum_{j=1}^{r-1}\dd{{\cal L}(r,n)}{\alpha_{i-1}(r)}
J_{ij}(r)\dd{H(r,k)}{\alpha_{j-1}(r)}\\
&\since{step11}&-\sum_{i=1}^{r-1}\sum_{j=1}^{r-1}\left(\pp{H}{q_{(r-i)1}}+
\pp{p_{(r-i)1}}{x}\right)J_{ij}(r)\dd{H(r,k)}{\alpha_{j-1}(r)},
\end{eqnarray*}
\noindent and
\begin{eqnarray*}
\pp{T_k}{x}&=&\sum_{i=1}^{r-1} \sum_{j=1}^{N_{i+1}}\left(\pp{T_k}{q_{ij}}
\pp{q_{ij}}{x}+\pp{T_k}{p_{ij}}\pp{p_{ij}}{x}\right).
\end{eqnarray*}
As long as we do not impose the Euler-Lagrange equations, the derivatives of
the phase space variables are independent variations. Hence their coefficients
in both expressions for $\partial T_k/\partial x$ are equal. Expressing this equality for
the coefficient of $\partial p_{i1}/ \partial x$ gives
\begin{eqnarray*}
\pp{T_k}{p_{i1}}&=&-\sum_{j=1}^{r-1}\left(J_{(r-i)j}(r)
\dd{H(r,k)}{\alpha_{j-1}(r)}\right)\\
&\since{hamsys}&-\pp{\alpha_{r-i-1}(r)}{t_k}\\
&\since{ostrotrans}&-\pp{q_{i1}}{t_k},
\end{eqnarray*}
\noindent or
\begin{displaymath}
\pp{q_{i1}}{t_k}=\pp{H_k}{p_{i1}},
\end{displaymath}
\noindent which we needed to prove.\hspace*{\fill}$\rule{3mm}{3mm}$
\noindent{\bf Proof of step 2:}
Assume
\begin{displaymath}
\pp{q_{i\tilde{j}}}{t_k}=\pp{H_k}{p_{i\tilde{j}}},
\end{displaymath}
\noindent for $\tilde{j}=1,2,\ldots, j$. Then
\begin{eqnarray*}
\pp{q_{i(j+1)}}{t_k}&\since{ostrotrans}&\pp{\aa{r-i-1}{j}}{t_k}\\
&\since{ostrotrans}&\pp{}{x}\pp{q_{ij}}{t_k}\\
&=&\pp{}{x}\pp{H_k}{p_{ij}},
\end{eqnarray*}
\noindent since the $x$-flow and the $t_k$-flow commute. In the last step the
induction hypothesis is used. For any function of the variable $x$,
\begin{eqnarray}\nonumber
\pp{f}{x}&=&\sum_{i=1}^{r-1}\sum_{j=1}^{N_{i+1}}\left(\pp{f}{q_{ij}}
\pp{q_{ij}}{x}+\pp{f}{p_{ij}}\pp{p_{ij}}{x}\right)\\\nonumber
&=&\sum_{i=1}^{r-1}\sum_{j=1}^{N_{i+1}}\left(\pp{f}{q_{ij}}
\pp{H}{p_{ij}}-\pp{f}{p_{ij}}\pp{H}{q_{ij}}\right)\\\la{step21}
&=&\{f,H\},
\end{eqnarray}
\noindent by definition of the Poisson bracket. With this well-known result, the above
becomes
\begin{eqnarray*}
\pp{q_{i(j+1)}}{t_k}&=&\left\{\pp{H_k}{p_{ij}}, H\right\}\\
&\since{cpb}&\left\{\left\{q_{ij},H_k\right\},H\right\}\\
&=&-\left\{\left\{H_k,H\right\},q_{ij}\right\}-
\left\{\left\{H,q_{ij}\right\},H_k\right\}\\
&\since{step21}&-\left\{\pp{H_k}{x},q_{ij}\right\}+
\left\{\left\{q_{ij},H\right\},H_k\right\}\\
&=&\left\{\left\{q_{ij},H\right\},H_k\right\}\\
&\since{step21}&\left\{\pp{q_{ij}}{x},H_k\right\}\\
&\since{ostrotrans}&\left\{q_{i(j+1)},H_k\right\}\\
&\since{cpb}&\pp{H_k}{p_{i(j+1)}},
\end{eqnarray*}
\noindent where we have used the fact that the Poisson bracket \rf{cpb} satisfies the
Jacobi identity. This is what we needed to prove.\hspace*{\fill}$\rule{3mm}{3mm}$
\noindent{\bf Proof of step 3:}
From the commutativity of the $x$-flow and the $t_k$-flow,
\begin{eqnarray}\nonumber
\pp{}{x}\pp{q_{iN_{i+1}}}{t_k}&=&\pp{}{t_k}\pp{q_{iN_{i+1}}}{x}\\\la{step31}
\Rightarrow~~~~~~~~\pp{}{x}\pp{H_k}{p_{iN_{i+1}}}&=&\pp{}{t_k}
\pp{H}{p_{iN_{i+1}}},
\end{eqnarray}
\noindent using steps 1 and 2 of the proof. We examine the left-hand side of this
equation separately.
\begin{eqnarray*}
\mbox{Left-hand side}&=&\pp{}{x} \pp{H_k}{p_{iN_{i+1}}}\\
&\since{step21}&\left\{\pp{H_k}{p_{iN_{i+1}}},H\right\}\\
&\since{cpb}&-\left\{\left\{H_k,q_{iN_{i+1}}\right\},H\right\}\\
&=&\left\{\left\{q_{iN_{i+1}},H\right\},H_k\right\}+
\left\{\left\{H, H_k\right\},q_{iN_{i+1}}\right\}\\
&\since{step21}&\left\{\pp{q_{iN_{i+1}}}{x},H_k\right\}-\left\{
\pp{H_k}{x},q_{iN_{i+1}}\right\}\\
&=&\left\{\pp{H}{p_{iN_{i+1}}},H_k\right\},
\end{eqnarray*}
\noindent again using the Jacobi identity. The factor $\partial H/\partial p_{iN_{i+1}}$ now
appears both on the left and on the right of our equation. From \rf{goodform}
\begin{eqnarray*}
\hspace*{-12pt}&&\pp{H}{p_{iN_{i+1}}}\\
\hspace*{-12pt}&&=\pp{}{p_{iN_{i+1}}}\left(\frac{1}{2} \left(\mbf{P}-\mbf{\cal
A}(r,n)(\mbf{q})\right)^T \mbf{\cal M}(r,n) \left(\mbf{P}-\mbf{\cal
A}(r,n)(\mbf{q})\right)
\right)\\
\hspace*{-12pt}&&=\pp{}{p_{iN_{i+1}}}\left(\frac{1}{2}\sum_{l=1}^{r-1} \sum_{j=1}^{r-1}
{\cal M}_{lj}(r,n) \left(p_{lN_{l+1}}-{\cal A}_l(r,n)(\mbf{q})\right)
\left(p_{jN_{j+1}}-{\cal A}_j(r,n)(\mbf{q})\right)\right)\\
\hspace*{-12pt}&&=\sum_{j=1}^{r-1}{\cal M}_{ij}(r,n) \left(p_{jN_{j+1}}-{\cal
A}_j(r,n)(\mbf{q})
\right).
\end{eqnarray*}
Using this result in \rf{step31},
\begin{eqnarray*}
&&\pp{}{t_k}\sum_{j=1}^{r-1} {\cal M}_{ij}(r,n)\left(
p_{jN_{j+1}}-{\cal A}_j(r,n)(\mbf{q})
\right)\\
&&=\left\{\sum_{j=1}^{r-1} {\cal M}_{ij}(r,n)
\left(p_{jN_{j+1}}-{\cal A}_j(r,n)(\mbf{q})\right), H_{k}\right\}\\
&&=\sum_{j=1}^{r-1} {\cal M}_{ij}(r,n)\left\{\left(p_{jN_{j+1}}
-{\cal A}_j(r,n)(\mbf{q})\right), H_{k}\right\}\\
&&=-\sum_{j=1}^{r-1} {\cal M}_{ij}(r,n)\left(\pp{H_k}{q_{jN_{j+1}}}+
\left\{{\cal A}_j(r,n)(\mbf{q}), H_k\right\}
\right).
\end{eqnarray*}
\noindent Multiplying both sides of this equation by ${\cal G}_{si}(r,n)$ and
summing over $i$ from $1$ to $r-1$, this becomes
\begin{eqnarray*}
\pp{p_{sN_{s+1}}}{t_k}-\pp{{\cal A}_s(r,n)(\mbf{q})}{t_k}=
-\pp{H_k}{q_{sN_{s+1}}}-\left\{{\cal A}_s(r,n)(\mbf{q}), H_k\right\}.
\end{eqnarray*}
Since ${\cal A}_s(r,n)(\mbf{q})$ depends only on $\mbf{q}$, it follows from
step 2 that the second term on the left-hand side is equal to the second term
on the right-hand side. Hence
\begin{displaymath}
\pp{p_{sN_{s+1}}}{t_k}=-\pp{H_k}{q_{sN_{s+1}}},
\end{displaymath}
\noindent for $s=1,2,\ldots, r-1$. This
terminates the proof of step 3. \hspace*{\fill}$\rule{3mm}{3mm}$
Note that it is necessary for the Lagrangian ${\cal L}(r,n)$ to be nonsingular,
as we are using the matrix ${\cal M}(r,n)$.
\noindent{\bf Proof of step 4:}
Assume
\begin{displaymath}
\pp{p_{i\tilde{j}}}{t_k}=-\pp{H_k}{q_{i\tilde{j}}},
\end{displaymath}
\noindent for $\tilde{j}=N_{i+1}, N_{i+1}-1, \ldots, j$. We have
\begin{eqnarray*}
\pp{}{t_k}\pp{p_{ij}}{x}&=&\pp{}{x}\pp{p_{ij}}{t_k}\\
&\since{step21}&\left\{\pp{p_{ij}}{t_k},H\right\}\\
&=&-\left\{\pp{H_k}{q_{ij}},H\right\}\\
&\since{cpb}&\left\{\left\{p_{ij},H_k\right\},H\right\}\\
&=&-\left\{\left\{H_k, H\right\},p_{ij}\right\}-
\left\{\left\{H, p_{ij}\right\},H_k\right\}\\
&\since{step21}&-\left\{\pp{H_k}{x}, H\right\}+\left\{\pp{p_{ij}}{x},
H_k\right\}\\
&=&-\left\{\pp{H}{q_{ij}}, H_k\right\}\\
&\since{cpb}&
-\sum_{\gamma=1}^{r-1}\sum_{\delta=1}^{N_{\gamma+1}}\left( \frac{\partial^2 H}{\partial
q_{ij}\partial q_{\gamma \delta}}\pp{H_k}{p_{\gamma \delta}}-\frac{\partial^2 H}{\partial q_{ij}
\partial p_{\gamma \delta}}\pp{H_k}{q_{\gamma\delta}}\right)\\
&=&-\sum_{\gamma=1}^{r-1}\sum_{\delta=1}^{N_{\gamma+1}} \frac{\partial^2 H}{\partial
q_{ij}\partial q_{\gamma \delta}}\pp{H_k}{p_{\gamma \delta}}+
\sum_{\gamma=1}^{r-1}\sum_{\delta=1}^{N_{\gamma+1}}\frac{\partial^2 H}{\partial q_{ij}
\partial p_{\gamma \delta}}\pp{H_k}{q_{\gamma\delta}}\\
&\since{lemma3}&-\sum_{\gamma=1}^{r-1}\sum_{\delta=1}^{N_{\gamma+1}}
\frac{\partial^2 H}{\partial
q_{ij}\partial q_{\gamma \delta}}\pp{H_k}{p_{\gamma \delta}}+\sum_{\delta=1}^{N_{i+1}}
\frac{\partial^2 H}{\partial q_{ij} \partial p_{i \delta}} \pp{H_k}{q_{i\delta}}\\
&\since{lemma1}&-\sum_{\gamma=1}^{r-1}\sum_{\delta=1}^{N_{\gamma+1}}
\frac{\partial^2 H}{\partial
q_{ij}\partial q_{\gamma \delta}}\pp{H_k}{p_{\gamma \delta}}+\frac{\partial^2 H}{\partial q_{ij}
\partial p_{i(j-1)}} \pp{H_k}{q_{i(j-1)}}+\\&&
\frac{\partial^2 H}{\partial q_{ij} \partial p_{iN_{i+1}}}
\pp{H_k}{q_{iN_{i+1}}}\\
&\since{lemma2}&-\sum_{\gamma=1}^{r-1}\sum_{\delta=1}^{N_{\gamma+1}}
\frac{\partial^2 H}{\partial
q_{ij}\partial q_{\gamma \delta}}\pp{H_k}{p_{\gamma \delta}}+\pp{H_k}{q_{i(j-1)}}.
\end{eqnarray*}
The left-hand side of this equation can be expressed another way as well:
\begin{eqnarray*}
\pp{}{t_k}\pp{p_{ij}}{x}&=&\pp{}{t_k}\left(-\pp{H}{q_{ij}}\right)\\
&=&-\sum_{\gamma=1}^{r-1}\sum_{\delta=1}^{N_{\gamma+1}}\left(\frac{\partial^2 H}{\partial
q_{ij} \partial q_{\gamma \delta}}\pp{q_{\gamma \delta}}{t_k}+\frac{\partial^2 H}{\partial q_{ij}
\partial p_{\gamma \delta}} \pp{p_{\gamma \delta}}{t_k}\right)\\
&=&-\sum_{\gamma=1}^{r-1}\sum_{\delta=1}^{N_{\gamma+1}}\left(\frac{\partial^2 H}{\partial
q_{ij} \partial q_{\gamma \delta}}\pp{H_k}{p_{\gamma \delta}}
+\frac{\partial^2 H}{\partial q_{ij}
\partial p_{\gamma \delta}} \pp{p_{\gamma \delta}}{t_k}\right)\\
&=&-\sum_{\gamma=1}^{r-1}\sum_{\delta=1}^{N_{\gamma+1}}\frac{\partial^2 H}{\partial
q_{ij} \partial q_{\gamma \delta}}\pp{H_k}{p_{\gamma \delta}}-\pp{p_{i(j-1)}}{t_k},
\end{eqnarray*}
\noindent where the second term has been simplified using Lemma \ref{lemmalemma}, as
before. Comparing the two right-hand sides, the double-summed term drops out
and one finds
\begin{displaymath}
\pp{p_{i(j-1)}}{t_k}=-\pp{H_k}{q_{i(j-1)}}.
\end{displaymath}
This completes the proof of step 4 and hence of the Bogoyavlenskii-Novikov
theorem. \hspace*{\fill}$\rule{3mm}{3mm}$
\vspace*{12pt}
Let us recapitulate the most important results of the last few sections.
\begin{itemize}
\item The $(r,n)$-th stationary KP equation can be written as an $N$-dimensional
Hamiltonian system in $x$, given by Ostrogradskii's theorem \rf{ostrodynsys}.
\item This Hamiltonian system is completely integrable in the sense of
Liouville. $N$ independent conserved quantities in involution
$T_k(\mbf{q},\mbf{p})$ can be
constructed explicitly.
\item These conserved quantities can be interpreted as Hamiltonians: The
$t_k$-flow induces on the solution space of the $(r,n)$-th KP equation an
evolution which is Hamiltonian with Hamiltonian $H_k=-T_k$. This Hamiltonian
system shares its phase space variables, symplectic structure and conserved
quantities with the $x$-system. The $t_k$-evolution of the phase space
variables is given by
\begin{equation}\la{tkdynsys}
\pp{q_{ij}}{t_k}=\pp{H_k}{p_{ij}},~~~~\pp{p_{ij}}{t_k}=-\pp{H_k}{q_{ij}},
\end{equation}
\noindent for $i=1,2,\ldots, r-1$ and $j=1,2,\ldots, N_{i+1}$. The Hamiltonian is
given by
\begin{equation}\la{tkham}
H_k(\mbf{q},\mbf{p})=-T_{k}(\mbf{q},\mbf{p}).
\end{equation}
This gives nontrivial results only if $k$ is not an integer multiple of $r$ or
$n$. Strictly speaking, however, this is not required for the proof of the
Bogoyavlenskii-Novikov theorem.
\end{itemize}
At this point, we have established enough results
to argue that $N$ of the conserved quantities $T_k(r,n)$ are nontrivial and
functionally independent:
\begin{theo}\la{theo:atthispoint}
In the phase space spanned by the independent coordinates $\mbf{q}$ and
$\mbf{p}$ there are $N$ nontrivial functionally independent conserved
quantities.
\end{theo}
\noindent{\bf Proof}
In the previous theorem, we have shown that the conserved
quantity $T_k(\mbf{q},\mbf{p})$ is minus the Hamiltonian for the evolution of
the phase space variables $(\mbf{q},\mbf{p})$ in the $t_k$-direction. So, if
$T_k(\mbf{q},\mbf{p})$ is trivial ($i.e.$, $T_k(\mbf{q},\mbf{p})=0$ on the
entire phase space), then the dependence of the solution of the $(r,n)$-th KP
equation on $t_k$ is trivial as well, and conversely. A typical solution of the $(r,n)$-th
KP equation is a solution of genus $N$ of the KP equation (see \cite{ds1}) of
the form
\begin{equation}\la{compkpsol} u=u_0+2\partial_x^2 \ln \Theta(\sum_{j=1}^\infty
\mbf{K}_j t_j).
\end{equation}
\noindent Here all the $\mbf{K}_j$ are $N$-dimensional vectors. If the conserved
quantity $T_k(\mbf{q}, \mbf{p})$ is functionally dependent on any of the other
$T_j(\mbf{q}, \mbf{p})$, $j<k$, then the vector field $X_{H_k}$ is a linear
combination of the vector fields $X_{H_j}$, $j<k$. Hence the vector $\mbf{K}_k$
is a linear combination of the vectors $\mbf{K}_j$, $j<k$. If $\mbf{K}_k$ is
linearly dependent on the vectors with lower indices, we can use a linear
transformation to obtain a solution of the form \rf{compkpsol} which depends
on $t_1, t_2, \ldots, t_{k-1}$, but is independent of $t_k$ (for instance,
this is possible for $t_r$ and $t_n$). If $\mbf{K}_k$ is independent of the
vectors $\mbf{K}_j$, with $j<k$, then the solution depends on $t_k$ in a
nontrivial manner. In this case, the conserved quantity $T_k(\mbf{q},\mbf{p})$
has to be nontrivial and functionally independent of $T_j$, for $j<k$. A
linear space of dimension $N$ is spanned by $N$ linearly independent vectors.
Hence a typical solution of the $(r,n)$-th KP equation has $N$ nontrivial
functionally independent conserved quantities $T_k(\mbf{q},\mbf{p})$.
\hspace*{\fill}$\rule{3mm}{3mm}$
\vspace*{12pt}
\noindent{\bf Remark}
A convenient way to integrate an integrable Hamiltonian
system explicitly is analogous to the method of forward and inverse scattering,
but restricted to systems of ordinary differential equations \cite{wojo}. To
invoke this method, a Lax representation \cite{wojo} for the system of
Hamiltonian equations is required. Such a representation was obtained in step 5
of the algorithm presented in \cite{ds1}.
\vspace*{12pt}
In what follows, only the Hamiltonian system in $x$ is considered. Any
conclusions reached are however also valid for the Hamiltonian systems in any
of the `time' variables $t_k$.
\section{Examples}
In this section, the abstract formalism of the previous sections is
illustrated using concrete examples, by assigning concrete values to $r$ and
$n$. The simplest cases are discussed: $r=2$ and various values for $n$ (the
KdV hierarchy); $r=3$ and various values for $n$ (the Boussinesq hierarchy). A
special case of $r=3$ gives rise to stationary three-phase solutions of the KP
equation, namely for $n=4$. This case is illustrated in more detail than the
other examples. It is the easiest example not covered by the work of
Bogoyavlenskii and Novikov \cite{bogoyavlenskii}.
\vspace*{12pt}
\noindent {\bf (a) The KdV hierarchy: one-dimensional solutions of the KP equation}
\vspace*{12pt}
The KdV hierarchy is obtained from the KP hierarchy by imposing the reduction
$r=2$, hence
\begin{equation}\la{kdvrred}
L^2_+=L^2.
\end{equation}
\noindent Since $L^2_+=\partial^2+2 u_2$, $u_2$ is the only independent potential with the
other potentials determined in terms of it. From $L^2_-=0$
\begin{equation}\la{triangkdv}
u_3=-\frac{u_2'}{2},~u_4=\frac{u_2''}{4}-\frac{u_2^2}{2}, ~u_5=\frac{3}{2} u_2
u_2' -\frac{u_2'''}{8}, ~etc.
\end{equation}
\noindent Other ingredients needed for \rf{explicitlaxrred} are: $M(2)=M_{2,1}=\partial$
and $\beta(2,n)=\beta_{-1}(n)=\alpha_{-1}(n)$. Hence the KdV hierarchy has the
form \begin{equation}\la{kdvhiernonham} \pp{u_2}{t_n}=\pp{}{x} \alpha_{-1}(n). \end{equation}
\noindent In order to write this in Hamiltonian form, we use the potential
$\alpha_0(2)=2 u_2=u$. Then $D(2)=2$, by \rf{jacobian}. Hence the Poisson
structure of the KdV hierarchy is given by $J(2)=D(2)M(2)=2\partial$. Using
\rf{manin2} with $j=1$ and $r=2$,
\begin{equation}\la{kdvhiermanin}
\alpha_{-1}(n)=\beta_{-1}(n)=\frac{2}{2+n} \dd{\alpha_{-1}(2+n)}{u}
\end{equation}
we can recast the KdV hierarchy in its familiar Hamiltonian form \cite{gardner}:
\begin{equation}\la{kdvhierhamsys}
\pp{u}{t_n}=2 \pp{}{x}\dd{H(2,n)}{u},
\end{equation}
\noindent with $H(2,n)=2 \alpha_{-1}(2+n)/(2+n)$. If the factor 2 is absorbed into
the definition of the Hamiltonian, then this form of the KdV hierarchy is
identical to that introduced by Gardner \cite{gardner}. Note that all even
flows ({\em i.e.,} $t_{n}=t_{2k}$, for $k$ a positive integer) are immediately
trivial because $2+2k$ is not coprime with $r=2$, so
$H(2,n)=\alpha_{-1}(2+2k)/(1+k)\equiv 0$. We write out some nontrivial flows
explicitly:
\begin{description}
\item[(i)~~] $n=1$: $H(2,1)=u^2/4$ and
\begin{equation}
\pp{u}{t_1}=u_x,
\end{equation}
as expected.
\item[(ii)~] $n=3$: $H(2,3)=u^3/8-u_x^2/16$ and
\begin{equation}
\pp{u}{t_3}=\frac{1}{4}\left(6 u u_x+u_{xxx}\right),
\end{equation}
the KdV equation.
\item[(iii)] $n=5$: $H(2,5)=5 u^4/64-5 u u_x^2/32+u_{xx}^2/64$ and
\begin{equation}\la{kdv5}
\pp{u}{t_5}=\frac{1}{16}\left(30 u^2 u_x+20 u_x u_{xx}+10 u u_{xxx}+u_{5x}
\right),
\end{equation}
the 5th-order KdV equation.
\end{description}
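The Hamiltonian form \rf{kdvhierhamsys} can be spot-checked for $n=3$: the variational derivative of the density $u^3/8-u_x^2/16$ is $3u^2/8+u_{xx}/8$, and applying $2\partial_x$ reproduces the right-hand side of the KdV equation. The following sketch performs this check for a sample polynomial $u(x)$ with exact rational coefficients (the polynomial helpers are ours, not part of the text):

```python
from fractions import Fraction as F

# Polynomials in x as coefficient lists; index = power of x.
def d(p):                        # d/dx
    return [F(k) * c for k, c in enumerate(p)][1:] or [F(0)]

def mul(p, q):
    r = [F(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def add(*ps):
    out = [F(0)] * max(map(len, ps))
    for p in ps:
        for i, c in enumerate(p):
            out[i] += c
    return out

def scale(s, p):
    return [F(s) * c for c in p]

u = [F(1), F(-2), F(3), F(5)]    # sample u(x) = 1 - 2x + 3x^2 + 5x^3
ux, uxx, uxxx = d(u), d(d(u)), d(d(d(u)))

# delta H/delta u for the density u^3/8 - u_x^2/16:
#   dH/du - d/dx (dH/du_x) = 3u^2/8 + u_xx/8
var = add(scale(F(3, 8), mul(u, u)), scale(F(1, 8), uxx))

flow = scale(2, d(var))          # u_{t_3} = 2 d/dx (delta H/delta u)
kdv = scale(F(1, 4), add(scale(6, mul(u, ux)), uxxx))
assert flow == kdv               # reproduces (6 u u_x + u_xxx)/4
```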
There is only one Casimir functional for the KdV hierarchy, namely
$H(2,-1)=2 \alpha_{-1}(1)=2 u_2=u$. It is easily verified that this is indeed a
Casimir functional for the Poisson structure $J(2)=2 \partial$: $J(2)(\delta
H(2,-1)/\delta u)=2 \partial (\delta u/\delta u)=2 \partial (1)=0$.
Imposing the $n$-reduction, the Lagrangian ${\cal L}(2,n)$ has the form
\begin{equation}\la{kdvlag} {\cal L}(2,n)=H(2,n)+\sum_{k=1}^{n-2}d_{k} H(2,k)+h_1 u. \end{equation}
This Lagrangian was first obtained by Bogoyavlenskii and Novikov
\cite{bogoyavlenskii}. From Table \ref{table1}, $N_2=[(n-1)/2]=(n-1)/2$, since $n$ is
odd. Necessarily, the Lagrangian has the form \begin{equation}\la{kdvhierlagform} {\cal
L}(2,n)=\frac{1}{2} a \left(u^{((n-1)/2)}\right)^2+\hat{\cal L}(2,n), \end{equation} for
some nonzero constant $a$ and $\hat{\cal L}(2,n)$ independent of
$u^{((n-1)/2)}$. The Lagrangian is always nonsingular.
Because the case $r=2$ was covered by Bogoyavlenskii and Novikov, no examples
of the Ostrogradskii transformation and the resulting Hamiltonian system of
ordinary differential equations will be given. A typical solution of the
$(2,n)$-th KP equation, {\em i.e.,} a stationary solution of the $n$-KdV
equation, depends on $N=N_2=(n-1)/2$ phases. These solutions are also
one-dimensional since they are independent of $y=t_2$.
\vspace*{12pt}
\noindent {\bf (b) The Boussinesq hierarchy: stationary solutions of the KP equation}
\vspace*{12pt}
The Boussinesq hierarchy is obtained from the KP hierarchy by imposing the
reduction $r=3$, hence
\begin{equation}\la{bousrred}
L^3_+=L^3.
\end{equation}
\noindent Since $L^3_+=\partial^3+3 u_2\partial+3 u_2'+3 u_3$, $u_2$ and $u_3$ are the only
independent potentials with the other potentials determined in terms of
these two. From $L^3_-=0$
\begin{equation}\la{triangbous}
u_4=-u_3'-u_2^2-\frac{u_2''}{3}, ~u_5=-2 u_2 u_3+2 u_2 u_2'+\frac{2}{3}
u_3''+\frac{u_2'''}{3}, ~etc.
\end{equation}
\noindent Furthermore,
\begin{equation}
\mbf{U}(3)=\left(\begin{array}{c}u_2\\u_3\end{array}\right), ~~
\mbf{\beta}(3,n)=\left(\begin{array}{c}\beta_{-1}(n)\\\beta_{-2}(n)\end{array}\right), ~~
\mbf{M}(3)=\left(\begin{array}{cc}\partial & 0\\ -\partial^2 & \partial\end{array}\right),
\end{equation}
\noindent so that the Boussinesq hierarchy is
\begin{equation}\la{bousshiernonham}
\pp{\mbf{U}(3)}{t_n}=\mbf{M}(3)\mbf{\beta}(3,n) ~\Rightarrow~\left\{
\begin{array}{rcl}
\displaystyle{\pp{u_2}{t_n}}&\displaystyle{=}&\displaystyle{\pp{\beta_{-1}(n)}{x}}\\
\displaystyle{\pp{u_3}{t_n}}&\displaystyle{=}&\displaystyle{-\ppn{2}{\beta_{-1}(n)}{x}+\pp{\beta_{-2}(n)}{x}}
\end{array}
\right. .
\end{equation}
\noindent In order to write this in Hamiltonian form, we use the potentials
$\alpha_1(3)=3 u_2=u$ and $\alpha_{0}(3)=3 u_2'+3 u_3=v$, where $u$ and $v$
are introduced for notational simplicity. Then
\begin{equation}
\mbf{\alpha}(3)=\left(
\begin{array}{c}
v\\u
\end{array}
\right),
~
\mbf{D}(3)=\left(
\begin{array}{cc}
3\partial & 3\\3&0
\end{array}
\right),~
\mbf{\beta}(3,n)=\frac{3}{3+n}\dd{}{\mbf{\alpha}(3)}\alpha_{-1}(3+n).
\end{equation}
\noindent Hence the Poisson structure of the Boussinesq hierarchy is
\begin{equation}\la{poissonbous}
J(3)=D(3)M(3)=3 \left(\begin{array}{cc}0 & \partial\\
\partial & 0\end{array}\right).
\end{equation}
\noindent The Boussinesq hierarchy is written in Hamiltonian form as:
\begin{equation}\la{boushierhamsys}
\pp{}{t_n}\left(\begin{array}{c}v\\u\end{array}\right)=3 \left(\begin{array}{cc}0 & \partial\\
\partial & 0\end{array}\right) \left(\begin{array}{c}
\displaystyle{\dd{H(3,n)}{v}}\\
\displaystyle{\dd{H(3,n)}{u}}
\end{array}\right)=3 \pp{}{x}
\left(
\begin{array}{c}
\displaystyle{\dd{H(3,n)}{u}}\\
\displaystyle{\dd{H(3,n)}{v}}
\end{array}
\right)
,
\end{equation}
\noindent with $H(3,n)=3 \alpha_{-1}(3+n)/(3+n)$. Up to a factor 3, this form of the
Boussinesq hierarchy is identical to the one introduced by McKean
\cite{mckean1}. We write some flows explicitly:
\begin{description}
\item[(i)~~] $n=1$: $H(3,1)=uv/3$ and
\begin{equation}
\left\{
\begin{array}{rcl}
\displaystyle{v_{t_1}}&=&\displaystyle{v_x}\\
\displaystyle{u_{t_1}}&=&\displaystyle{u_x}
\end{array}
\right.,
\end{equation}
as expected.
\item[(ii)~] $n=2$: $H(3,2)=-u^3/27+u_x^2/9+v^2/3-v u_x/3$ and
\begin{equation}
\left\{
\begin{array}{rcl}
\displaystyle{v_{t_2}}&=&\displaystyle{-\frac{2}{3}u u_x-\frac{2}{3}u_{xxx}+v_{xx}}\\
\displaystyle{u_{t_2}}&=&\displaystyle{2 v_x-u_{xx}}
\end{array}
\right..
\end{equation}
Elimination of $v$ from these two equations gives the Boussinesq equation,
\begin{equation}\la{bouss}
u_{t_2t_2}+\frac{1}{3}u_{xxxx}+\frac{2}{3}\left(u^2\right)_{xx}=0.
\end{equation}
\item[(iii)] $n=4$: $H(3,4)=-u_{xx}^2/27+v_x u_{xx}/9-v_{x}^2/9+u u_x^2/9-2 u v
u_x/9-u^4/81+2 u v^2/9$ and
\begin{equation}\la{bouss4}
\left\{
\begin{array}{rcl}
\displaystyle{v_{t_4}}&=&\displaystyle{-\frac{4}{9}u^2 u_x+\frac{4}{3}v v_x-\frac{4}{3}u_x
u_{xx}-\frac{2}{3}u u_{xxx}+\frac{2}{3}u_x v_x+\frac{2}{3}u v_{xx}-\frac{2}{9}
u_{5x}+\frac{2}{3} v_{xxxx}}\\
&&\\
\displaystyle{u_{t_4}}&=&\displaystyle{-\frac{2}{3}u_x^2-\frac{2}{3}u u_{xx}+\frac{4}{3}v
u_x-\frac{2}{3}u_{xxxx}+\frac{2}{3}v_{xxx}}
\end{array}
\right..
\end{equation}
This is the next member of the Boussinesq hierarchy.
\end{description}
There are two Casimir functionals for the Boussinesq hierarchy, namely
$H(3,-1)=3 \alpha_{-1}(2)/2=3 (u_{2}'+u_3)/2=v/2$ and $H(3,-2)=3
\alpha_{-1}(1)=3 u_2=u$. For convenience $u$ and $v$ are used as Casimir
functionals below.
Imposing the $n$-reduction, the Lagrangian ${\cal L}(3,n)$ has the form
\begin{equation}\la{bouslag}
{\cal L}(3,n)=H(3,n)+\sum_{k=1}^{n-2}d_{k} H(3,k)+h_1 u+h_2 v .
\end{equation}
Theorem \ref{theo:sing}, given in the next section, shows that this Lagrangian
is always nonsingular. A typical solution of the $(3,n)$-th KP equation, {\em
i.e.,} a stationary solution of the $n$-Boussinesq equation, depends on
$N=N_2+N_3=n-1$ phases. These solutions are stationary solutions of the KP
equation, since they are independent of $t=t_3$.
\vspace*{12pt}
\noindent {\bf (c) Stationary 3-phase solutions of the KP equation}
\vspace*{12pt}
Consider the $r=3$, $n=4$ reduction. The Lagrangian is
\begin{eqnarray}\nonumber
{\cal L}(3,4)&=&-\frac{u_{xx}^2}{27}+\frac{v_x
u_{xx}}{9}-\frac{v_{x}^2}{9}+\frac{u u_x^2}{9}-\frac{2 u vu_x}{9}-
\frac{u^4}{81}+\frac{2 u v^2}{9}+\\\la{lag34}
&&d_2\left(-\frac{u^3}{27}+\frac{u_x^2}{9}+\frac{v^2}{3}-\frac{vu_x}{3}\right)+
d_1 \frac{uv}{3}+h_1 u+h_2 v.
\end{eqnarray}
The Ostrogradskii transformation \rf{ostrotrans} for this Lagrangian is
\begin{eqnarray}\nonumber
q_{11}=u,&&p_{11}=\dd{{\cal
L}(3,4)}{u_x}=\frac{u_{xxx}}{54}+\frac{uu_x}{9}+\frac{d_2
u_x}{18}+\frac{d_1u}{6}+\frac{h_2}{2},\\\la{ostrotrans34}
q_{12}=u_x,&&p_{12}=\dd{{\cal L}}{u_{xx}}=-\frac{2
u_{xx}}{27}+\frac{v_x}{9},\\\nonumber
q_{21}=v,&&p_{21}=\dd{{\cal L}(3,4)}{v_x}=\frac{u_{xx}}{9}-\frac{2v_x}{9}.
\end{eqnarray}
\noindent In the definition of $p_{11}$, the Euler-Lagrange equations have been used
to eliminate $v_{xx}$. The Ostrogradskii transformation can be inverted:
\begin{eqnarray}\nonumber
&u=q_{11},~~ u_x=q_{12},~~ v=q_{21},&\\\la{invostro34}
&u_{xx}=-54 p_{12}-27 p_{21},~~v_x=-27
p_{12}-18p_{21},&\\\nonumber
& u_{xxx}=54p_{11}-6 q_{11}q_{12}-3 d_2 q_{12}-9d_1q_{11}-27 h_2.&
\end{eqnarray}
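Since the momenta $p_{12}$ and $p_{21}$ in \rf{ostrotrans34} depend linearly on $u_{xx}$ and $v_x$, this part of the inversion amounts to inverting a $2\times 2$ matrix. A minimal check in exact rational arithmetic (helper names are ours):

```python
from fractions import Fraction as F

# Forward map: (p12, p21) in terms of (u_xx, v_x), from the Ostrogradskii
# transformation for (r, n) = (3, 4).
def forward(uxx, vx):
    return (-F(2, 27) * uxx + F(1, 9) * vx,   # p12
            F(1, 9) * uxx - F(2, 9) * vx)     # p21

# Claimed inverse: u_xx and v_x in terms of p12 and p21.
def inverse(p12, p21):
    return (-54 * p12 - 27 * p21,             # u_xx
            -27 * p12 - 18 * p21)             # v_x

for a in range(-3, 4):
    for b in range(-3, 4):
        assert inverse(*forward(F(a), F(b))) == (a, b)
```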
\noindent Using the inverse Ostrogradskii transformation, the Hamiltonian
corresponding to the Lagrangian \rf{lag34} is
\begin{eqnarray}\nonumber
H&=&-27p_{12}^2-27 p_{12} p_{21}-9 p_{21}^2+p_{11}
q_{12}+\frac{q_{11}^4}{81}-\frac{q_{11} q_{12}^2}{9}-\frac{2 q_{21}^2
q_{11}}{9}+\frac{2 q_{11} q_{12}
q_{21}}{9}+\\\la{ham34}
&&d_2\left(\frac{q_{11}^3}{27}-\frac{q_{12}^2}{9}-\frac{q_{21}^2}{3}+
\frac{q_{12}q_{21}}{3}\right)-d_1 \frac{q_{11} q_{21}}{3}-h_1 q_{11}-h_2 q_{21}.
\end{eqnarray}
\noindent For simplicity the constants $d_1, d_2, h_1$ and $h_2$ are equated to zero
in the remainder of this example. Using \rf{ostrocons}, three conserved
quantities are computed for the Hamiltonian system generated by \rf{ham34}:
\begin{eqnarray}\la{t134}
T_1&=&-H\\\nonumber
T_2&=&3 \int \left(\dd{{\cal L}(3,4)}{v}\pp{}{x}\dd{H(3,2)}{u}+
\dd{{\cal L}(3,4)}{u}\pp{}{x}\dd{H(3,2)}{v}\right) dx\\\nonumber
&=&\frac{4 q_{12}q_{11}^3}{81}-\frac{8 q_{21} q_{11}^3}{81}-2
p_{12}q_{12}q_{11}-\frac{4p_{21} q_{12} q_{11}}{3}+4 p_{12}q_{11}q_{21}+2 p_{21}
q_{21}q_{11}-\frac{q_{12}^3}{27}+\\\la{t234}
&&\frac{q_{21}^3}{27}-\frac{2q_{12}q_{21}^2}{9}+9
p_{11}p_{21}+\frac{4q_{12}^2q_{21}}{27}\\\nonumber
T_5&=&3 \int \left(\dd{{\cal L}(3,4)}{v}\pp{}{x}\dd{H(3,5)}{u}+
\dd{{\cal L}(3,4)}{u}\pp{}{x}\dd{H(3,5)}{v}\right) dx\\\nonumber
&=&-\frac{2 q_{11}^6}{729}+\frac{8p_{12}q_{11}^4}{27}+\frac{4 p_{21}
q_{11}^4}{27}-\frac{q_{12}^2 q_{11}^3}{243}+3 p_{11}p_{21}q_{21}+\frac{2q_{21}^2
q_{11}^3}{243}-\frac{2 q_{12} q_{21} q_{11}^3}{243}-9
p_{12}^2q_{11}^2-\\\nonumber
&&2
p_{21}^2q_{11}^2-9 p_{12} p_{21} q_{11}^2+\frac{p_{11} q_{12} q_{11}^2}{3}-3
p_{11}^2 q_{11}+\frac{p_{21} q_{12}^2 q_{11}}{9}+\frac{q_{21}^4}{27}+54
p_{12}^3-\frac{2 q_{12} q_{21}^3}{27}+\\\la{t534}
&&27 p_{12} p_{21}^2+\frac{4 q_{12}^2
q_{21}^2}{81}+81 p_{12}^2 p_{21}-3 p_{11}p_{12} q_{12}-3
p_{11}p_{21}q_{12}-\frac{q_{12}^3q_{21}}{81}-
\frac{2 p_{21} q_{12} q_{21} q_{11}}{9}.
\end{eqnarray}
\noindent Since $r=3$ and $n=4$, the conserved quantities $T_3$ and $T_4$ are trivial.
It is easy to check by direct computation that these three conserved quantities
are in involution. Furthermore, the dependence of the solution of the KP
equation on $t_2$ ({\em i.e.,} $y$) is governed by the Hamiltonian system
generated by $H_2=-T_2$. The same statement holds for $t_5$ and $H_5=-T_5$. We
will return to this example in Section \ref{sec:reductions}.
\vspace*{12pt}
\section{Singular and nonsingular Lagrangians}\la{sec:sing}
We have shown that the Hamiltonian system in $x$ \rf{ostrodynsys} is
completely integrable in the sense of Liouville when Ostrogradskii's theorem
applies, $i.e.$, when the Lagrangian ${\cal L}(r,n)$ is nonsingular. This has
immediate consequences for the structure of the solutions. On a compact
component of phase space, almost all solutions are quasi-periodic with $N$
phases. These phases move linearly on an $N$-dimensional torus \cite{arnold}.
Such a torus is topologically equivalent to a compact component of
\begin{equation}\la{torus}
\Lambda(r,n)=\left\{T_k(\mbf{q},\mbf{p})={\cal T}_k, k \in\Omega(r,n)\right\}
\end{equation}
\noindent where $\Omega(r,n)=\{\mbox{first $N$ values of}~k, \mbox{not integer
multiples of $r$ or $n$}\}$. The constants ${\cal T}_k$ are
determined by the initial conditions. The torus $\Lambda(r,n)$ is shared by all
$t_k$-flows. The only difference between these different flows from this point
of view is the linear motion on the torus. This linear motion determines the
$N$ frequencies with respect to the variable $t_k$.
We know from \cite{ds1} that a typical solution of the $(r,n)$-th KP equation
has $g_{max}=(r-1)(n-1)/2$ phases. In the proof of Theorem \ref{theo:sing},
it is shown that $N=g_{max}$ if one of $r$, $n$ is even and the other one is
odd. The case of both $r$ and $n$ even is not allowed, since $r$ and $n$ need
to be coprime. On the other hand, if both $r$ and $n$ are odd, $N=g_{max}$
only if $r=3$. Otherwise $N>g_{max}$ and the Ostrogradskii transformation
introduces more variables than are needed to span the phase space. The
transformation is then not invertible and the Lagrangian ${\cal L}(r,n)$ is
singular. This situation does not occur in the work of Bogoyavlenskii and
Novikov \cite{bogoyavlenskii}, because for the KdV hierarchy $r=2$. From the
results in this section it follows that some rank 1, finite-genus solutions
(namely $r>3$ and odd, $n>r$ and odd), give rise to interesting examples of
singular Lagrangians.
\begin{theo}\la{theo:sing}
The Lagrangian ${\cal L}(r,n)$ is singular if and only if $r$ and
$n$ are both odd and $r>3$. For all other cases of $(r,n)$, $N=g_{max}$.
\end{theo}
\noindent {\bf Proof}
First note that $\left[R/2\right]+\left[(R+1)/2\right]=R$, for all integers
$R$. Using Table \ref{table1} we can rewrite
\begin{eqnarray}\nonumber
N&=&\sum_{j=2}^{r}N_j\\\nonumber
&=&N_2+N_3+\sum_{j=4}^{r}N_j\\\la{rewrite}
&=&r+n-4+\sum_{j=4}^{r}\left[\frac{n+r-2 j+2}{2}\right].
\end{eqnarray}
In the calculations below, we use that $n$ can always be chosen to be greater
than $r$. The calculations are slightly different depending on whether $r$
and/or $n$ are even or odd. Since $r$ and $n$ cannot both be even, there are
three cases.
\begin{enumerate}
\item $r$ is even and $n$ is odd. We write $r=2R$, $n=2M+1$. Use \rf{rewrite},
\begin{eqnarray*}
N&=&r+n-4+\sum_{j=4}^{2R}\left(M+R-j+1\right)\\
&=&r+n-4+2RM-3M-2R+3\\
&=&\frac{(r-1)(n-1)}{2}\\
&=&g_{max}.
\end{eqnarray*}
\item $r$ is odd and $n$ is even. We write $r=2R+1$, $n=2M$. The calculations
are similar to those in the previous case.
\begin{eqnarray*}
N&=&r+n-4+\sum_{j=4}^{2R+1}(M+R-j+1)\\
&=&\frac{(r-1)(n-1)}{2}\\
&=&g_{max}.
\end{eqnarray*}
\item $r$ is odd and $n$ is odd. We write $r=2R+1$, $n=2M+1$. In this case, the
result is quite surprising.
\begin{eqnarray*}
N&=&r+n-4+\sum_{j=4}^{2R+1}\left(R+M-j+2\right)\\
&=&r+n-4+\frac{(R+M-2)(R+M-1)}{2}-\frac{(M-R+1)(M-R)}{2}\\
&=&\frac{(r-1)(n-1)}{2}+\frac{r-3}{2}\\
&=&g_{max}+\frac{r-3}{2}.
\end{eqnarray*}
\noindent Hence, in this case, $N\neq g_{max}$. So, if $r$ and $n$ are both odd and
$r > 3$, the dimension of the torus $\Lambda(r,n)$ in \rf{torus} is seemingly
greater than the maximal number of phases of a solution of the $(r,n)$-th KP
equation, according to \cite{ds1}. This dimension exceeds the maximal genus by
the amount of $(r-3)/2$, which is a positive integer when $r$ is odd and
greater than three. This is an indication that the assumptions necessary for
Ostrogradskii's theorem have been violated. Hence $(r,n ~\mbox{both odd},
r>3)$ is a sufficient condition for the Lagrangian ${\cal L}(r,n)$ to be
singular.
That this condition is necessary for ${\cal L}(r,n)$ to be singular follows
from the results of Veselov \cite{veselov}. There it is demonstrated that the
dimension of the phase space of the Euler-Lagrange equations \rf{el} is always
equal to $2 g_{max}$, which is the desired dimension of the phase space of the
Hamiltonian system \rf{hamsys}. In such a case the Lagrangian is only singular
if the Ostrogradskii transformation introduces more variables than the
dimension of the phase space \cite{krupkova}, which only happens when $r,n$ are
both odd and $r>3$. Hence this condition is also necessary for the singularity
of the Lagrangian.
\hspace*{\fill}$\rule{3mm}{3mm}$
\end{enumerate}
Note that in the generic case of \cite{ds1}, $r=g+1$, $n=g+2$, we always have
$g_{max}=N$, since this automatically puts one in case 1 or case 2.
\vspace*{12pt}
\noindent {\bf Example}
\vspace*{8pt}
The smallest odd value possible for $r$ is $r=3$. In this case, the count of
the number of phases is still correct. This corresponds to the Boussinesq
hierarchy. The smallest values of $r$ and $n$ for which the phase count fails
are $(r,n)=(5,7)$. This example is discussed below.
In this example, the Lagrangian ${\cal L}(5,7)$ expressed in the $4$ variables
$\mbf{\alpha}(5)=(\alpha_0(5),\alpha_1(5),\alpha_2(5), \alpha_3(5))^T$ is
singular. It is shown below how one deals with the singular Lagrangian case.
As it turns out, a relatively straightforward transformation reduces the
Lagrangian ${\cal L}(5,7)$ to a nonsingular Lagrangian, expressed in the
transformed variables. The Ostrogradskii transformation is applicable to this
Lagrangian and it results in a completely integrable system with $N=g_{max}$.
For simplicity, denote $u=\alpha_{3}(5)$, $v=\alpha_2(5)$, $w=\alpha_1(5)$ and
$z=\alpha_0(5)$. The Lagrangian is
\begin{eqnarray}\nonumber
{\cal L}(5,7)&=&-\frac{2u_{xxxx}}{25}\left(z-\frac{5}{2}w_x+\frac{9}{5} v_{xx}
\right)_{xx}
+\frac{w_{xx}v_{xxxx}}{5}-\frac{7 z_x
v_{xxxx}}{25}-\frac{7 z_{xx} w_{xx}}{25}+\\\la{lag57} &&\tilde{\cal
L}(u,u_x,u_{xx}, u_{xxx},v,v_x, v_{xx},v_{xxx}, w, w_x, w_{xx}, z, z_x),
\end{eqnarray}
\noindent where all dependence on the vector $\mbf{X}=(u_{xxxx}, v_{xxxx}, w_{xxx},
z_{xx})^T$ is explicit. We have
\begin{equation}\la{ganda}
\mbf{\cal G}(5,7)=\left(\begin{array}{cccc}
0&-18/125&1/5&-2/25\\
-18/125&0&0&0\\
1/5&0&0&0\\
-2/25&0&0&0
\end{array}
\right),~\mbf{\cal A}(5,7)=\left(\begin{array}{c}
0\\w_{xx}/5-7z_x/25\\0\\-7 w_{xx}/25
\end{array}
\right).
\end{equation}
\noindent Clearly $\mbf{\cal G}(5,7)$ is singular. If canonical variables $\mbf{q}$
and $\mbf{p}$ were introduced using the Ostrogradskii transformation on this
Lagrangian, this would lead to $2N=2(N_2+N_3+N_4+N_5)=26$ phase space
variables. The dimension of the phase space can be obtained from the
Euler-Lagrange equations \cite{veselov} corresponding to ${\cal L}(5,7)$ and
is 24. Since the corank of the matrix $\mbf{\cal G}(5,7)$ is 2, it follows
from the Ostrogradskii transformation \rf{ostrotrans} that two linear
relations among $p_{14}, p_{24}, p_{33}$ and $p_{42}$ exist. These are
also linear in $\mbf{q}$ because $\mbf{\cal A}(5,7)$ is linear in $\mbf{q}$:
\begin{equation}\la{lindep}
p_{24}=-\frac{2}{5}p_{33}+\frac{7}{25}q_{33},~~~p_{42}=-\frac{18}{25}p_{33}+
\frac{1}{5}q_{33}-\frac{7}{25}q_{42}.
\end{equation}
\noindent At this point the theory of constrained Hamiltonian systems can be used
\cite{dirac}. Another possibility is to use the general methods of Krupkova
\cite{krupkova} for singular Lagrangians. However, it is possible to
transform the potentials to a new set of potentials such that the Lagrangian
is nonsingular when expressed in the new potentials. Motivated by the form of
the Lagrangian, let
\begin{equation}\la{nonsingtrans}
\hat{u}=u,~~
\hat{v}=v,~~
\hat{w}=w,~~
\hat{z}=z-\frac{5}{2}w_x+\frac{9}{5}v_{xx}.
\end{equation}
\noindent This transformation is clearly invertible. In the new variables, the
Lagrangian is
\begin{eqnarray}\nonumber
{\cal L}(5,7)=
-\frac{2}{25}\hat{u}_{xxxx}\hat{z}_{xx}+\frac{1}{250}\hat{v}_{xxxx}\hat{w}_{xx}-
\frac{7}{25}\hat{v}_{xxxx}\hat{z}_x-\frac{7}{25}\hat{w}_{xx}\hat{z}_{xx}+\\
\la{lag57transformed}
\hat{\cal
L}(\hat{u},\hat{u}_x,\hat{u}_{xx}, \hat{u}_{xxx},\hat{v},\hat{v}_x,
\hat{v}_{xx},\hat{v}_{xxx}, \hat{w}, \hat{w}_x, \hat{w}_{xx}, \hat{z},
\hat{z}_x),
\end{eqnarray}
\noindent up to total derivatives. Define a new vector $\hat{\mbf{X}}=(\hat{u}_{xxxx},
\hat{v}_{xxxx}, \hat{w}_{xx}, \hat{z}_{xx})^T$. The Lagrangian is
\begin{equation}\la{newlag}
{\cal L}(5,7)=\frac{1}{2}\hat{\mbf{X}}^T \hat{\mbf{\cal G}}(5,7)
\hat{\mbf{X}}+\hat{\mbf{\cal A}}(5,7) \hat{\mbf{X}}+\hat{\cal L}(5,7),
\end{equation}
\noindent with
\begin{equation}\la{newg}
\hat{\mbf{\cal G}}(5,7)=\left(\begin{array}{cccc}
0&0&0&-2/25\\
0&0&1/250&0\\
0&1/250&0&-7/25\\
-2/25&0&-7/25&0
\end{array}
\right),
\end{equation}
\noindent which is nonsingular, hence by Theorem \ref{prop:sing} the Lagrangian is
nonsingular. The Ostrogradskii transformation acting on the transformed
Lagrangian \rf{newlag} introduces 24 canonical variables, as many variables as
the dimension of the phase space \cite{veselov}.
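Both rank claims can be verified with exact rational arithmetic. The sketch below (the elimination routine is our own helper, not part of the construction) computes the ranks of $\mbf{\cal G}(5,7)$ from \rf{ganda} and $\hat{\mbf{\cal G}}(5,7)$ from \rf{newg}:

```python
from fractions import Fraction as F

def rank(M):
    """Exact matrix rank by Gaussian elimination over the rationals."""
    M = [row[:] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [x - f * y for x, y in zip(M[i], M[r])]
        r += 1
    return r

# G(5,7): only the first row and column are nonzero, so its rank is 2.
G = [[F(0), F(-18, 125), F(1, 5), F(-2, 25)],
     [F(-18, 125), F(0), F(0), F(0)],
     [F(1, 5), F(0), F(0), F(0)],
     [F(-2, 25), F(0), F(0), F(0)]]

# G-hat(5,7) after the change of potentials.
Ghat = [[F(0), F(0), F(0), F(-2, 25)],
        [F(0), F(0), F(1, 250), F(0)],
        [F(0), F(1, 250), F(0), F(-7, 25)],
        [F(-2, 25), F(0), F(-7, 25), F(0)]]

assert rank(G) == 2      # corank 2: two relations among the momenta
assert rank(Ghat) == 4   # full rank: the transformed Lagrangian is nonsingular
```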
\vspace*{12pt}
It is not clear if the approach of this example always works, $i.e.$, for
other values of $r$ and $n$, both odd. There is no proof and the problem of
dealing with a singular Lagrangian for values of $(r,n)$ other than $(5,7)$
describing the $(r,n)$-th KP equation may require the general methods alluded
to above.
\section{Autonomous symmetry reductions of the Hamiltonian
system}\la{sec:reductions}
As discussed in the remarks on page \pageref{remrem}, not all solutions of the
$(r,n)$-th KP equation have the same number of phases. A generic genus $g$
solution of the KP equation is usually a very non-typical solution of the
$(r,n)$-th KP equation, with $r=g+1$ and $n=g+2$.
A solution of the $(r,n)$-th KP equation is completely determined by the
phase-space variables $\mbf{q}$ and $\mbf{p}$. In the $2N$-dimensional phase space
coordinatized by $\mbf{q}$ and $\mbf{p}$, for a given initial condition, the
solution evolves on the torus $\Lambda(r,n)$ determined by \rf{torus}. Usually
($i.e.$, for almost all initial data in the class of solutions of the
$(r,n)$-th KP equation), this torus is $N$-dimensional \cite{arnold}. However,
special solutions correspond to lower-dimensional tori. For example, suppose
that $N=2$. Then almost all solutions evolve on a two-dimensional torus: for
almost all values of ${\cal T}_1$ and ${\cal T}_2$, the surface $\Lambda$
determined by $T_1={\cal T}_1$ and $T_2={\cal T}_2$ is topologically
equivalent to a two-dimensional torus, like the one drawn in Figure
\ref{fig1}. For special values of ${\cal T}_1$ and ${\cal T}_2$, however,
this torus is degenerate and the resulting `surface' is only a one-dimensional
torus, $i.e.$, a circle $C$. This is drawn in Figure \ref{fig1}.
\begin{figure}[htb]
\centerline{\psfig{file=torus.eps,width=3in}}
\caption{\label{fig1} {\bf Usually solutions evolve on 2-D tori. Special
solutions correspond to lower-dimensional tori.}}
\end{figure}
To a certain degree this scenario is the typical one that occurs. For almost
all values of the constants $\{{\cal T}_k, ~k\in \Omega(r,n)\}$, the torus
$\Lambda(r,n)$ in \rf{torus} is $N$-dimensional. For a special set of values
$\{{\cal T}_k, ~k\in \Omega(r,n)\}$, the torus is $(N-1)$-dimensional,
corresponding to a solution with $(N-1)$ phases. For an even more limited
class of constants $\{{\cal T}_k, ~k\in \Omega(r,n)\}$, the torus
$\Lambda(r,n)$ is $(N-2)$-dimensional, corresponding to solutions with
$(N-2)$ phases, $etc$.
The conditions on the set of constants $\{{\cal T}_k, ~k\in \Omega(r,n)\}$ can
be expressed in terms of the variables $\mbf{q}$ and $\mbf{p}$. When these
variables satisfy certain constraints (to be derived below), the dimension of
the torus $\Lambda(r,n)$ decreases.
We have argued above that the conserved quantities $T_k$ for $k \in
\Omega(r,n)$ are functionally independent quantities if the $\mbf{q}$ and
$\mbf{p}$ variables are considered independent variables. The only way for the
torus $\Lambda(r,n)$ to be less than $N$-dimensional is if the conserved
quantities are $not$ functionally independent. This is only possible if there
are functional relationships between the variables $(\mbf{q},\mbf{p})$.
The constraints on the variables $\mbf{q}$ and $\mbf{p}$ are obtained as
follows:
\begin{enumerate}
\item Require that the conserved quantities $T_k$ for $k \in \Omega(r,n)$ be
functionally dependent. This is expressed as
\begin{equation}\la{funcdep}
\mbox{rank}\left(
\nabla T_{i_1}~\nabla T_{i_2}~\ldots~\nabla T_{i_N}
\right)=g<N,
\end{equation}
where the indices $i_k$ are the elements of $\Omega(r,n)$ and $\nabla T_{i_k}$
is defined in \rf{nabla}. In this case, $N-g$
of the conserved quantities are functionally dependent on the remaining $g$
functionally independent conserved quantities. Without loss of generality, we
assume that the first $g$ conserved quantities remain functionally independent,
whereas the last $N-g$ conserved quantities are functionally dependent:
\begin{equation}\la{firstg}
T_{i_k}=F_{i_k}(T_{i_1}, T_{i_2}, \ldots, T_{i_g}),
\end{equation}
\noindent for $k=g+1, g+2, \ldots, N$.
\item In this case, there are only $g$ functionally independent conserved
quantities $T_{i_k}$, $k=1,2,\ldots, g$. The manifold \rf{torus} reduces to
\begin{equation}\la{reducedtorus}
\Lambda_g(r,n)=\{T_{i_k}={\cal T}_{k}, ~k=1,2,\ldots,g\},
\end{equation}
which is topologically equivalent to a $g$-dimensional torus. This
$g$-dimensional torus is parametrizable by $g$ phase variables moving
linearly on the torus. In other words, if the evolution of a solution of the
$(r,n)$-th KP equation is restricted to this $g$-dimensional torus, it has
only $g$ phases. Since this solution is also a solution of the KP equation and
it has rank 1 and $g$ phases, it must be a genus $g$, rank 1 solution of the
KP equation.
\item Equation \rf{funcdep} results in a number of conditions on the
variables $\mbf{q}$ and $\mbf{p}$:
\begin{equation}\la{pqcon}
K_j(\mbf{q},\mbf{p})=0, ~~~~j=1, 2, \ldots, m,
\end{equation}
\noindent where $m$ is the number of conditions. If these conditions are satisfied
for the `initial' conditions $\mbf{q}(0)$ and $\mbf{p}(0)$ then they are
automatically satisfied for all other $x$-values, $i.e.$, the conditions on the
variables $\mbf{q}$ and $\mbf{p}$ are invariant under the $x$-flow. This is
easy to see: The conditions \rf{pqcon} are equivalent to the conditions
\rf{funcdep} which are equivalent to the conditions \rf{firstg}, which only
involve the conserved quantities. Clearly \rf{firstg} are invariant conditions.
The conditions on the variables $\mbf{q}$ and $\mbf{p}$ \rf{pqcon} are
polynomial in the $\mbf{q}$ and $\mbf{p}$ variables, since all the entries of
the matrix on the left-hand side of \rf{funcdep} are polynomial in these
variables. In practice, the conditions on the variables $\mbf{q}$ and $\mbf{p}$
\rf{funcdep} can be written as combinations of simpler conditions,
\begin{equation}\la{simplercon}
K_j=\sum_{k=1}^{m_g} Q_{j,k}(\mbf{q},\mbf{p}) P_k(\mbf{q},\mbf{p}),
~~~~j=1, 2, \ldots, m.
\end{equation}
\noindent Here both $Q_{j,k}(\mbf{q},\mbf{p})$ and $P_k(\mbf{q},\mbf{p})$ are
polynomials in $\mbf{q}$ and $\mbf{p}$. If $P_{k}(\mbf{q},\mbf{p})=0$, for $k=1,2, \ldots,
m_g$, then the conditions \rf{pqcon} are clearly satisfied. Note that the
decomposition \rf{simplercon} is not unique. The factors $P_k$ of a
given decomposition are not necessarily invariant under the $x$-flow. In order
to find a minimal ($i.e.$, smallest number of elements) set of conditions on
the $\mbf{q}$ and $\mbf{p}$ variables, the invariance of such factors needs to
be tested separately. Since the conditions \rf{funcdep} are invariant, as
argued above, such a minimal set of invariant factors is guaranteed to exist.
The existence of a {\em minimal} decomposition is essentially a restatement
of Hilbert's Basis theorem \cite{abhyankar}. Below, we prove that the number
of elements in this set is $m_g=2(N-g)$. Once this minimal set of conditions
has been found, \rf{pqcon} can be replaced by the conditions
\begin{equation}\la{pqmincon}
P_k(\mbf{q},\mbf{p})=0, ~~~k=1,2,\ldots, m_g.
\end{equation}
\noindent The invariance of the factors $P_k(\mbf{q},\mbf{p})$ is easily tested. It
is necessary and sufficient that \cite{olver}
\begin{equation}\la{inv}
\left\{P_k(\mbf{q},\mbf{p}),H\right\}=0,~~~~~~\mbox{for~} k=1,2,\ldots, m_g,
\end{equation}
\noindent on the solution manifold $P_k(\mbf{q},\mbf{p})=0$, for $k=1,2,\ldots, m_g$.
\item The conditions on the variables $\mbf{q}$ and $\mbf{p}$ \rf{pqmincon} are
autonomous, since the conditions do not depend explicitly on $x$. The
conditions \rf{pqmincon} determine {\em autonomous invariant symmetry
reductions} of the Hamiltonian system \rf{ostrodynsys}.
\end{enumerate}
\begin{theo}\cite{dullin}\la{dull}
In order for a solution of the $(r,n)$-th KP equation to have genus
$g$ instead of $N$, $2(N-g)$ conditions need to be imposed on the variables
$\mbf{q}$
and $\mbf{p}$, $i.e.$, $m_{g}=2(N-g)$.
\end{theo}
\noindent{\bf Proof} By Ostrogradskii's theorem, a typical solution of the $(r,n)$-th
KP equation resides in the $2N$-dimensional phase space with coordinates
$\mbf{q}$
and $\mbf{p}$. The existence of $N$ conserved quantities $T_{i_k}$, for
$k=1,2,\ldots, N$ guarantees that the motion starting from any initial
condition evolves on a torus determined by the $N$ conditions $T_{i_k}={\cal
T}_k$, for $k=1,2,\ldots, N$. Hence this torus is a hypersurface of codimension
$N$, or of dimension $2N-N=N$.
If we impose that the rank of the matrix $\left(\nabla T_{i_1}, \nabla T_{i_2},
\ldots, \nabla T_{i_N}\right)$ is $N-1$ in order to obtain genus $N-1$
solutions, then the motion of the solutions is restricted to an
$(N-1)$-dimensional torus. An $(N-1)$-dimensional hypersurface in a
$2N$-dimensional phase space is determined by $N+1$ equations. $N-1$ equations
are provided by the relations $T_{i_k}={\cal T}_k$, for $k=1,2,\ldots, N-1$.
Hence another two conditions are required on the coordinates $\mbf{q}$ and
$\mbf{p}$.
The proof of the theorem is now easily obtained by repeating this argument $N-g$
times. \hspace*{\fill}$\rule{3mm}{3mm}$
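The codimension count in the proof is elementary: a $g$-dimensional torus in the $2N$-dimensional phase space is cut out by $2N-g$ equations, $g$ of which are the level sets $T_{i_k}={\cal T}_k$; the rest are constraints on the coordinates themselves. A trivial sketch of this count:

```python
# A g-dimensional torus inside the 2N-dimensional (q, p) phase space is cut
# out by 2N - g equations; g of them are the level sets T_{i_k} = cal T_k,
# so the remaining constraints on (q, p) number (2N - g) - g = 2(N - g).
def extra_conditions(N, g):
    return (2 * N - g) - g

for N in range(1, 10):
    for g in range(N):
        assert extra_conditions(N, g) == 2 * (N - g)
```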
\vspace*{12pt}
\noindent {\bf Remarks}
\begin{description}
\item[(a)~]
The fact that $m_{N-1}=2$ is not easily seen from \rf{funcdep}. For
\rf{funcdep} to be satisfied with $g=N-1$, the determinants of all $N\times N$
minors need to be zero. This seemingly results in $\binomial{2N}{N}$
$N$-dimensional minors of which $2N-1$ are functionally independent. The
determinant of all these minors are decomposable in terms of two polynomials
$P_1(\mbf{q},\mbf{p})$ and $P_2(\mbf{q},\mbf{p})$, which are both invariant
under the $x$-flow. This is explicitly worked out in the example below, for
$N=3$ and $g=2$.
\item[(b)~] It should be mentioned that in \cite{ds1}, some ideas were given
to find conditions on nontypical solutions of the $(r,n)$-th KP equation.
Those ideas were correct, but seemingly hard to implement, as is seen in the
example discussed there. This same example is discussed below. The algorithm
offered in this section to find the determining conditions on the nontypical
solutions of the $(r,n)$-th KP equation is much more efficient.
\end{description}
\noindent {\bf Example: Generic genus 2 solutions of the KP equation}
\vspace*{6pt}
\noindent A generic solution of genus 2 of the KP equation is a solution of the
$(3,4)$-th KP equation. The Hamiltonian system for this case was discussed on
page \pageref{ham34}. The Hamiltonian system \rf{ostrodynsys} corresponding to
the Hamiltonian \rf{ham34} is three-dimensional, hence a typical solution of
this system executes linear motion on a 3-torus, topologically equivalent to
\begin{equation}\la{3torus} \Lambda(3,4)=\left\{T_1={\cal T}_1,~T_2={\cal T}_2, ~T_5={\cal
T}_5 ~\right\}. \end{equation}
\noindent Such a solution has three phases and is hence a rank 1, genus 3 solution
of the KP equation. Nevertheless, the generic rank 1, genus 2 solutions of the
KP equation are obtained as solutions of the $(3,4)$-th KP equation. These
solutions are special, nontypical solutions of the $(3,4)$-th KP equation.
They are obtained by imposing an autonomous invariant symmetry reduction on
the Hamiltonian system \rf{ostrodynsys}, as outlined in this section.
The condition \rf{funcdep} for a solution of the $(3,4)$-th KP equation to
have genus 2 is
\begin{equation}\la{g2con}
\mbox{rank}\left(
\begin{array}{cccccc}
\displaystyle{\pp{T_1}{q_{11}}} & \displaystyle{\pp{T_1}{q_{12}}} & \displaystyle{\pp{T_1}{q_{21}}} &
\displaystyle{\pp{T_1}{p_{11}}} & \displaystyle{\pp{T_1}{p_{12}}} & \displaystyle{\pp{T_1}{p_{21}}}\\
\vspace{-5pt}\\
\displaystyle{\pp{T_2}{q_{11}}} & \displaystyle{\pp{T_2}{q_{12}}} & \displaystyle{\pp{T_2}{q_{21}}}&
\displaystyle{\pp{T_2}{p_{11}}} & \displaystyle{\pp{T_2}{p_{12}}} & \displaystyle{\pp{T_2}{p_{21}}}\\
\vspace{-5pt}\\
\displaystyle{\pp{T_3}{q_{11}}} & \displaystyle{\pp{T_3}{q_{12}}} & \displaystyle{\pp{T_3}{q_{21}}}&
\displaystyle{\pp{T_3}{p_{11}}} & \displaystyle{\pp{T_3}{p_{12}}} & \displaystyle{\pp{T_3}{p_{21}}}\\
\end{array}
\right)=2.
\end{equation}
From Theorem \ref{dull}, it follows that \rf{g2con} is equivalent to two
invariant conditions on the variables $q_{11}$, $q_{12}$, $q_{21}$, $p_{11}$,
$p_{12}$ and $p_{21}$. These conditions are readily found in this specific
case \cite{decthesis}. The expressions for these conditions are quite long,
so we do not repeat them here. Let us denote them by
\begin{equation}\la{g2cond}
P_1(\mbf{q}, \mbf{p})=0,~~P_2(\mbf{q}, \mbf{p})=0.
\end{equation}
There is a more geometrical way to look at the conditions \rf{g2con}. Consider
the three-dimensional space spanned by the conserved quantities $T_1, T_2$ and
$T_5$. If the conditions \rf{g2cond} are satisfied, there is a functional
relationship between the three conserved quantities: \begin{equation}\la{funcrelcons}
\Omega: f(T_1,T_2,T_5)=0, \end{equation}
\noindent which represents a surface in the space spanned by $T_1, T_2$ and $T_5$.
By solving the conditions \rf{g2cond} for two of the phase space variables
(for instance, $p_{11}$ and $p_{12}$), and substituting the
result in the form of the conserved quantities \rf{t134}, \rf{t234} and
\rf{t534}, a parametric representation of the surface $\Omega$ is obtained:
\begin{equation}\la{g2parametric} \Omega:\left\{ \begin{array}{rcl} T_1&=&T_1(q_{11}, q_{12}, q_{21},
p_{21}),\\ T_2&=&T_2(q_{11}, q_{12}, q_{21}, p_{21}),\\ T_5&=&T_5(q_{11},
q_{12}, q_{21}, p_{21}). \end{array} \right. \end{equation}
\noindent This set of equations appears to contain two parameters too many,
since the parametric representation of a surface should contain only two
parameters. However, the existence of a functional relationship
\rf{funcrelcons} guarantees that these parameters appear only in two different
combinations such that there are essentially two parameters present in
\rf{g2parametric}. The most convenient way to plot the resulting surface is to
equate two of the `parameters' $q_{11}$, $q_{12}$, $q_{21}$ and $p_{21}$ to
zero, while letting the remaining two vary. In Figure \ref{fig2}, two
different views of the surface $\Omega$ are given.
\begin{figure}
\begin{tabular}{cc}
\psfig{file=surf1.ps,width=3in} &
\psfig{file=surf2.ps,width=3in}\\
(a) & (b)\\
\end{tabular}
\caption{\label{fig2} {\bf The surface $\Omega$ from two different view points.
The cusp of the surface is at the origin. Figure (b) shows the same surface as
Figure (a), but rotated around the $T_5$-axis by 180 degrees. This figure was
obtained using \rf{g2parametric} with $q_{21}=0=p_{21}$.}}
\end{figure}
Every point in the space spanned by $T_1$, $T_2$ and $T_5$ corresponds to a
three-dimensional torus, on which the motion of the solutions of the $(3,4)$-th
KP equation takes place. A point on the surface $\Omega$ corresponds to a
degenerate three-dimensional torus, on which there are in essence only two
independent directions, analogous to the idea demonstrated in Figure
\ref{fig1}. In other words, points on the surface $\Omega$ correspond to
two-dimensional tori and to genus two solutions of the KP equation.
These genus two solutions are the generic rank 1, genus 2 solutions of the KP
equation, as argued above.
Note that a more drastic reduction to genus one solutions is possible. These
solutions correspond to points on the singular curves on the surface $\Omega$.
As the genus one solutions are more easily obtained from the $r=2$ case, this
case is not essential.
\section{Parameter count}\la{sec:parameters}
The previous sections put us in a position to count easily the
parameters that determine a solution of the $(r,n)$-th KP equation.
The first column of Table \ref{table2} lists the parameters that determine a
solution of the $(r,n)$-th KP equation. They are the `initial' values
$\mbf{q}(0)$ and $\mbf{p}(0)$ of the variables (not only for the $x$-flow, but for any other
$t_k$-flow), the constants $h_k$ that are the coefficients of the Casimir
functionals in \rf{lagrangian}, the constants $d_k$ which are the coefficients
of the lower-order flows in \rf{lagrangian} and the constant $u_1$, discussed
in the remark on page \pageref{u1rem}.
\settowidth{\mylength}{$\displaystyle{N=\frac{(r-1)(n-1)}{2}}$}
\settoheight{\myheight}{\framebox{$\displaystyle{N=\frac{(r-1)(n-1)}{2}}$}}
\addtolength{\myheight}{8pt}
\begin{table}
\begin{center}
\caption{\bf The number of parameters determining the solutions of the
$(r,n)$-th KP
equation.\la{table2}}
\vspace*{0.2in}
\begin{tabular}{|c|c|c|c|}
\hline
& typical solution of & generic genus $g$ & non-typical solution of \\
&the $(r,n)$-th KP equation& solution of (KP)& the $(r,n)$-th KP equation\\
\hline\hline
\pb{$\mbf{q}(0)$} & $\displaystyle{N=\frac{(r-1)(n-1)}{2}}$ & $g$ & $g$\\
\hline
\pb{$\mbf{p}(0)$} & $\displaystyle{N=\frac{(r-1)(n-1)}{2}}$ & $g$ & $g$\\
\hline\hline
\pb{$h_k$} & $r-1$ & $g$ & $r-1$\\
\hline
\pb{$d_k$} & $n-2$ & $g$ & $n-2$\\
\hline
\pb{$u_1$} & 1 & 1 & 1\\
\hline\hline
\pb{total \#} & $rn-1$ & $4g+1$ & $2g+r+n-2$\\
\hline
\end{tabular}
\end{center}
\end{table}
How many of each of these parameters determine a typical solution of the
$(r,n)$-th KP equation is indicated in the second column. A typical solution of
the $(r,n)$-th KP equation has $N=(r-1)(n-1)/2$ variables $\mbf{q}$ and $N$
variables $\mbf{p}$. Each of these is determined by its initial conditions
$\mbf{q}(0)$ and $\mbf{p}(0)$. For a typical solution of the $(r,n)$-th KP
equation, $N$ is also the genus of the solution. Any solution of the $(r,n)$-th
KP equation is determined by $r-1$ Casimir functionals (see \rf{lagrangian}).
Also from \rf{lagrangian}, it follows that $n-2$ lower-order flows are
included, accounting for the coefficients $d_k$. With the addition of the
constant $u_1$, this results in a total number of $rn-1$ parameters that
determine a typical solution of the $(r,n)$-th KP equation.
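The totals in this count follow from simple bookkeeping. A minimal sketch (our own illustration, not part of the original work) checks the identities behind the three columns:

```python
def typical_count(r, n):
    """Parameters of a typical solution of the (r,n)-th KP equation:
    2N initial values q(0), p(0) with N = (r-1)(n-1)/2, plus r-1
    Casimir coefficients h_k, n-2 flow coefficients d_k, and u_1."""
    two_N = (r - 1) * (n - 1)            # q(0) and p(0) together
    return two_N + (r - 1) + (n - 2) + 1

def nontypical_count(g, r, n):
    """Nontypical solutions: g variables q and g variables p."""
    return 2 * g + (r - 1) + (n - 2) + 1

# The typical total collapses to rn - 1 for all r, n:
assert all(typical_count(r, n) == r * n - 1
           for r in range(2, 12) for n in range(2, 12))

# Generic genus g solutions have r = g + 1, n = g + 2, giving 4g + 1:
assert all(nontypical_count(g, g + 1, g + 2) == 4 * g + 1
           for g in range(1, 10))
```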
The third column expresses the number of parameters in terms of the genus of
the solution, for a generic genus $g$ solution of the KP equation. For such a
solution $r=g+1$ and $n=g+2$ \cite{ds1}. Furthermore, the
Hamiltonian system \rf{ostrodynsys} reduces significantly such that there are
exactly $g$ variables $\mbf{q}$ and $\mbf{p}$. The total number of parameters
adds up to $4g+1$. This is a known result, implied for instance in \cite{dub}.
Not every nontypical solution of the $(r,n)$-th KP equation is generic. For
such solutions, the number of variables $\mbf{q}$ is equal to the genus $g$ of
the solution, as is the number of variables $\mbf{p}$. These results are given
in the last column of Table \ref{table2}.
There is an important distinction between the different types of parameters in
Table \ref{table2}. The entries in the top two rows have dynamical
significance: they are initial values for the variables $\mbf{q}$ and
$\mbf{p}$. The Hamiltonian system \rf{ostrodynsys} is a dynamical system for
the determination of the variables $\mbf{q}$ and $\mbf{p}$. The other
parameters merely show up as parameters in this Hamiltonian system. This
distinction between two kinds of parameters, dynamical and nondynamical,
is to our knowledge new.
\section{Minimal Characterization of the initial data}
A rank 1, finite-genus solution of the KP equation is completely determined by
a solution of an $(r,n)$-th KP equation, for a certain $r$ and $n$. The
$(r,n)$-th KP equation is given by the Euler-Lagrange equation \rf{el}. This
is a set of $(r-1)$ ordinary differential equations in $x$. Various quantities
appear in this system of ordinary differential equations: $(r-1)$ potentials
$\mbf{\alpha}(r)$ and their derivatives with respect to $x$, $(r-1)$ constants
$h_k$, $(n-2)$ constants $d_k$ and one constant potential $u_1$.
Next we argue that the knowledge of the initial condition for KP, $u(x,y,t=0)$
along one direction (say the $x$-axis) is sufficient to determine the
corresponding rank 1, finite genus solution for all $x$, $y$ and $t$.
\begin{enumerate}
\item If the initial condition is specified along the $x$-axis for $y=0$, all
potentials and their derivatives can be found at any point on the $x$-axis
(Note that a rank 1, finite-genus solution is analytic in all its independent
variables). This is done in the following way: The Euler-Lagrange equations
and their derivatives with respect to $x$ determine {\it algebraic} conditions on
the potentials and their derivatives, as well as the unknown parameters $h_k$,
$d_k$ and $u_1$. In these conditions, $u$ and its derivatives with respect to
$x$ are known, by assumption. Hence taking more derivatives of the
Euler-Lagrange equations \rf{el} with respect to $x$ adds more conditions than
unknowns. Taking enough derivatives of the Euler-Lagrange equations, we
obtain a set of polynomial equations for the unknown potentials and their
derivatives, as well as the parameters $h_k$, $d_k$ and $u_1$.
\item We have already argued that the knowledge of the $x$-dependence
completely determines the dependence of the solution on any higher-order time
variable: the Hamiltonians determining the dependence of the solution on $t_k$
are conserved quantities for the $x$-evolution of the solution, $H_k=-T_k$.
Furthermore, the initial conditions $\mbf{q}(0)$, $\mbf{p}(0)$ for $t_k$ are
identical to the initial conditions for the $x$-evolution at $x=0$, $y=0$.
\end{enumerate}
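As a toy illustration of step 1 (entirely ours, and far simpler than the actual Euler-Lagrange system \rf{el}): for a stationary equation $u_{xx}=3u^2+h$ with one unknown constant $h$, knowledge of $u$ and $u_{xx}$ at a single point determines $h$ algebraically, and each further $x$-derivative ($u_{xxx}=6uu_x$, \dots) adds a condition without adding unknowns:

```python
def recover_h(u0, u_xx0):
    # solve the toy equation u'' = 3u^2 + h for the constant h
    return u_xx0 - 3.0 * u0 ** 2

def check_consistency(u0, u_x0, u_xxx0):
    # the differentiated equation u''' = 6 u u' must also hold
    return abs(u_xxx0 - 6.0 * u0 * u_x0) < 1e-12

assert recover_h(1.0, 5.0) == 2.0        # h = 5 - 3*1 = 2
assert check_consistency(2.0, 0.5, 6.0)  # 6*2*0.5 = 6
```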
This shows that it suffices to specify the rank 1, finite-genus initial
condition along one direction: $u(x, y=0, t=0)$. It is clear that the above
argument can be repeated if the initial condition is specified along any of the
higher-order flows. This is specifically relevant for $t_2$ ({\it i.e.}, $y$) and $t_3$ ({\it i.e.}, $t$).
\vspace*{12pt}
\noindent {\bf Remarks} The procedure for finding the parameters $h_k$, $d_k$ and
$u_1$ given above is not very practical. A large number of derivatives of the
potential are required and a large polynomial system needs to be solved to
find the values of the parameters.
\section*{Acknowledgements} The author acknowledges useful discussions with O.
I. Bogoyavlenskii and A. P. Veselov. H. Segur is thanked for his continued
support and the invaluable interactions with the author on this subject.
This work was carried out at the University of Colorado and
supported by NSF grant DMS-9731097.
\bibliographystyle{unsrt}
| 59,047 |
\section{Introduction}
Finding a viable model for the formation of large scale structure
(LSS) is an important problem in cosmology.
Models with a minimal number of free parameters, such as standard
cold dark matter (sCDM) or standard cold plus hot, mixed
dark matter (sMDM) only marginally match observational
data. Better agreement between predictions and observational
data can be achieved in models with a larger number of parameters
(CDM or MDM with baryons, tilt of primordial power spectrum,
3D curvature, cosmological constant, see, {\it e.g.},
\cite{vkn98} and refs. therein).
In view of the growing amount of observational data, the precise
quantitative differences between theory and observations have to be
discussed for the whole class of
available models by varying all the input parameters, such as
the tilt of primordial spectrum, $n$, the density of cold dark matter,
$\Omega_{CDM}$, hot dark matter,
$\Omega_{\nu}$, and baryons, $\Omega_b$, the vacuum energy or
cosmological constant, $\Omega_{\Lambda}$, and the
Hubble parameter $h$ ($h=H_0/100\;km/s/Mpc$), to find the values which
agree best with observations of large scale structure
(or even to exclude the whole family of models).
Publicly available fast codes to calculate the transfer function
and power spectrum of fluctuations in the cosmic microwave background
(CMB) (\cite{sz96}, CMBfast) are an essential ingredient in this process.
But even CMBfast is too bulky and too slow for an effective search
of cosmological parameters by means of a $\chi^2$-minimization, like that
of Marquardt (see \cite{nr92}). To solve this problem, analytic
approximations of the transfer function are of great value.
Recently, such an approximation has been proposed by \cite{eh3}
(this reference is denoted by EH2 in the sequel).
Previously, approximations
by \cite{hol89,pog95,ma96} have been used.
Holtzman's
approximation is very accurate but it is an approximation for fixed
cosmological parameters. Therefore it cannot be used
for the purpose mentioned above. The analytic approximation by
\cite{pog95} is valid in the 2-dimensional parameter space $(\Omega_{\nu},
h)$, and $z$ (the redshift). It has the correct asymptotic behavior at small
and large $k$, but the systematic error of the transfer function $T(k)$
is relatively large (10\%-15\%) in the
important range of scales $0.3\le k\le 10\;h/$Mpc. This error,
however, introduces discrepancies of 4\% to 10\% in $\sigma_R$, which
represents an integral over $k$.
Ma's analytic approximation is
slightly more accurate in this range, but has an incorrect asymptotic behavior
at large $k$, hence it cannot be used for the analysis of the
formation of small scale objects (QSO, damped $Ly_\alpha$ systems,
$Ly_\alpha$ clouds etc.).
Another weak point of these analytic
approximations is their lack of dependence on the baryon density.
Sugiyama's correction of the CDM transfer function in the
presence of baryons (\cite{bar86,sug95})
works well only for low baryonic content. Recent data on the high-redshift
deuterium abundance (\cite{tyt96}), on clustering at $100$Mpc$/h$
(\cite{eh4}) and
new theoretical interpretations of the $Ly_\alpha$ forest (\cite{wei97})
suggest that $\Omega_{b}$ may be higher than the standard
nucleosynthesis value.
Therefore pure CDM and MDM models have to be modified.
(Instead of raising $\Omega_b$, one can also look for other
solutions, {\it e.g.}, a cosmological constant; see below.)
For CDM this has been achieved by Eisenstein $\&$ Hu (1996,
1997a\footnote{This reference is denoted by EH1 in this paper.}) using an
analytical approach for the
description of small scale cosmological perturbations in the
photon-baryon-CDM system. Their analytic
approximation for the matter transfer function
in 2-dimensional parameter space
($\Omega_{M}h^2$, $\Omega_b/\Omega_{M}$)
reproduces acoustic oscillations, and is quite accurate for $z<30$ (the
residuals are smaller than 5\%) in the range $0.025\le \Omega_{M}h^{2}\le 0.25$,
$0\le \Omega_{b}/\Omega_{M}\le 0.5$,
where $\Omega_M$ is the matter density parameter.
In EH2 an analytic approximation of the matter transfer
function for MDM models is proposed for a wide range of parameters
($0.06\le \Omega_{M}h^{2}\le 0.4$, $\Omega_b/\Omega_{M}\le 0.3$,
$\Omega_{\nu}/\Omega_{M}\le 0.3$ and $z\le 30$). It is more accurate than
previous approximations by \cite{pog95,ma96} but not as precise
as the one for the CDM+baryon model. The baryon oscillations are
mimicked by a smooth function, therefore the approximation loses
accuracy in the important range $0.03\le k\le 0.5~h/$Mpc.
For the parameter choice $\Omega_{M}=1$, $\Omega_{\nu}=0.2$,
$\Omega_b=0.12$, $h=0.5$, {\it e.g.}, the systematic residuals are about
6\% on these scales. For higher $\Omega_{\nu}$ and $\Omega_{b}$ they
become even larger.
For models with cosmological constant, the motivation to go to high
values for $\Om_\nu$ and $\Om_b$ is lost, and the parameter space
investigated in EH2 is sufficient. Models without cosmological
constant, however, tend to require relatively high baryon or HDM
content. In this paper, our goal is thus to construct a very
precise analytic approximation for the redshift dependent transfer
function in the 4-dimensional space of spatially flat matter dominated
MDM models, $T_{MDM}(k;\Omega_{\nu},N_{\nu},\Omega_{b},h;z)$,
which is valid for $\Omega_M =1$ and allows for
high values of $\Om_\nu$ and $\Om_b$. In
order to keep the baryonic features, we will use the EH1
transfer function for the cold particles+baryon system,
$T_{CDM+b}(k;\Omega_{b},h)$, and then correct
it for the presence of HDM by a function
$D(k;\Omega_{\nu},N_{\nu},\Omega_{b},h;z)$, making use of the exact
asymptotic solutions. The resulting MDM transfer function is the
product $T_{MDM}(k)=T_{CDM+b}(k)D(k)$.
To compare our approximation with the numerical result, we use the
publicly available code 'CMBfast' by Seljak $\&$ Zaldarriaga 1996.
The paper is organized as follows: In Section 2 a short description of
the physical parameters which affect the shape of the MDM transfer
function is given. In Section 3 we derive the analytic approximation for
the function $D(k)$. The precision of our approximation for $T_{MDM}(k)$,
the parameter range where it is applicable, and a comparison with the
other results are discussed in Sections 4 and 5. In Section~6 we
present our conclusions.
\section{Physical scales which determine the form of MDM transfer function}
We assume the usual cosmological paradigm: scalar primordial density
perturbations which are generated in the early Universe, evolve in a
multicomponent medium of relativistic (photons and massless neutrinos) and
non-relativistic (baryons, massive neutrinos and CDM)
particles. Non-relativistic matter dominates the density today,
$\Omega_{M}=\Omega_b+\Omega_{\nu}+\Omega_{CDM}$. This model is usually
called 'mixed dark matter' (MDM). The total energy density may also include a
vacuum energy, so that $\Omega_0=\Omega_M+\Omega_\Lambda$. However,
for reasons mentioned in the introduction, here we investigate
the case of a matter-dominated flat Universe with $\Omega_M=1$ and
$\Omega_\Lambda=0$. Even though $\Omega_\Lambda\neq 0 $ seems to be
favored by some of the present data, our main point, allowing for high values
of $\Omega_b$, is not important in this case and the approximations by
EH2 can be used.
Models with hot dark matter or MDM have been described in the literature
by \cite{fan84,sha84,vb85,hol89,luk91}, Davis, Summers $\&$ Schlegel 1992,
\cite{sch92,van92},
Pogosyan $\&$ Starobinsky 1993, 1995, \cite{nov94}, Ma $\&$ Bertschinger 1994,
1995, \cite{sz96}, EH2, \cite{vkn98} and refs. therein. Below, we simply present
the physical parameters
which determine the shape of the MDM transfer function and which will be used
explicitly in the approximation which we derive here\footnote{Recall
the definitions and relationship
between the MDM and the partial transfer functions
$$T_{MDM}=\Omega_{CDM}T_{CDM}+\Omega_\nu T_\nu +\Omega_b T_b\;,$$
$$T(k)\equiv {\delta (k,z)\over \delta (0,z)}
{\delta (0,z_{in})\over \delta (k,z_{in})}\;\;,$$ where $\delta (k,z)$
is the density perturbations in a given component and $z_{in}$ is a very high
redshift at which all scales of interest are still super horizon.}.
\begin{figure}[th]
\epsfxsize=8truecm
\epsfbox{fig1.eps}
\caption{The transfer function of density perturbations of CDM, baryons
and HDM at $z=10$ (calculated numerically).}
\end{figure}
\begin{figure}[th]
\epsfxsize=8truecm
\epsfbox{fig2.eps}
\caption{The same as in Fig.1 but for z=0.}
\end{figure}
Since cosmological perturbations cannot grow significantly in a
radiation dominated universe, an important parameter
is the time of equality between the densities
of matter and radiation
$$z_{eq}=\frac{2.4\times 10^4}{1-N_{\nu}/7.4}h^{2}t_{\gamma}^{-4}-1,\eqno(1)$$
where $t_{\gamma}\equiv T_{\gamma}/2.726\,$K, $T_{\gamma}$ being the CMB temperature today,
$N_{\nu}$=1, 2 or 3 is the number of species of massive neutrinos
with equal mass (the number of massless neutrino species is then $3-N_{\nu}$).
The scale of the particle horizon at this epoch,
$$k_{eq}=4.7\times 10^{-4}\sqrt{1+z_{eq}}\;h/Mpc,\eqno(2)$$
is imprinted in the matter transfer function: perturbations
on smaller scales ($k>k_{eq}$) can only start growing after $z_{eq}$,
while those on larger scales ($k<k_{eq}$) keep growing at any time.
This leads to the suppression of the transfer
function at $k>k_{eq}$. After $z_{eq}$ the fluctuations in the CDM component
are gravitationally unstable on all scales.
The scale $k_{eq}$ is thus the single physical
parameter which determines the form of the CDM transfer function.
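Equations (1) and (2) are straightforward to evaluate. A small sketch (ours, with $t_{\gamma}=1$ by default) gives, for example, $z_{eq}\approx 6.9\times 10^3$ and $k_{eq}\approx 0.04\,h/$Mpc for $h=0.5$, $N_{\nu}=1$:

```python
def z_eq(h, N_nu, t_gamma=1.0):
    # Eq. (1): redshift of matter-radiation equality
    return 2.4e4 * h**2 / ((1.0 - N_nu / 7.4) * t_gamma**4) - 1.0

def k_eq(h, N_nu, t_gamma=1.0):
    # Eq. (2): particle-horizon scale at equality, in h/Mpc
    return 4.7e-4 * (1.0 + z_eq(h, N_nu, t_gamma)) ** 0.5

# sanity check for h = 0.5, N_nu = 1:
assert 6.9e3 < z_eq(0.5, 1) < 7.0e3
assert 0.035 < k_eq(0.5, 1) < 0.045
```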
The transfer function for HDM ($\nu$) is more complicated because
two more physical scales enter the problem. The time and horizon scale when
neutrino become non-relativistic ($m_\nu\simeq 3T_\nu$) are given by
$$z_{nr}=x_{\nu}(1+z_{eq})-1\;\;,$$
$$k_{nr}=3.3\times 10^{-4}\sqrt{x_{\nu}(1+x_{\nu})(1+z_{eq})}\;h/Mpc,\eqno(3)$$
where $x_{\nu}\equiv \Omega_{\nu}/\Omega_{\nu\;eq}\;$,
$\;\;\Omega_{\nu\;eq}\simeq N_\nu /(7.4-N_\nu )$
is the density parameter for a neutrino component becoming non-relativistic just
at $z_{eq}$.
The neutrino mass can be expressed in terms of $\Om_\nu$ and $N_\nu$ as
(\cite{pb93}) $m_{\nu}=94\Omega_{\nu}h^{2}N_{\nu}^{-1}t_{\gamma}^{-3}\;$eV.
The neutrino free-streaming (or Jeans\footnote{Formally the
Jeans scale is 22.5$\%$ less than the free-streaming
scale (Bond $\&$ Szalay 1983, Davis, Summers $\&$ Schlegel 1992),
however, $k_F$ is the relevant physical parameter
for collisionless neutrinos.}) scale at $z\le z_{nr}$ is
$$k_{F}(z)\simeq 59\sqrt{{1\over 1+z_{eq}}+{1\over 1+z}}\;
\Omega_{\nu}N_{\nu}^{-1}t_{\gamma}^{-4}\;h^3/Mpc,\eqno(4)$$
which corresponds to the distance a neutrino travels in one Hubble time,
with the characteristic velocity
$v_{\nu}\simeq {1\over x_{\nu}}{1+z\over 1+z_{eq}}.$
Obviously, $k_F\ge k_{nr}$, and $k_{nr}{}^{>}_{<}k_{eq}$ for
$\Omega_{\nu}{}^{>}_{<}\Om_{\nu\;eq}\;$.
The amplitude of $\nu$-density
perturbation on small scales ($k>k_{nr}$) is reduced in comparison
with large scales ($k<k_{nr}$). For scales larger than the free-streaming
scale ($k<k_F$) the amplitude of density perturbations
grows in all components like $(1+z)^{-1}$ after $z_{eq}$.
Perturbations on scales below the free-streaming scale ($k>k_F$)
are suppressed by free streaming which is imprinted in the transfer
function of HDM. Thus the latter should be parameterized by two ratios:
$k/k_{nr}$ and $k/k_F$.
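The HDM scales of Eqs.~(3) and (4), together with the neutrino mass relation above, can be collected into a small helper (our sketch; $t_{\gamma}=1$ by default):

```python
def hdm_scales(Omega_nu, N_nu, h, z, t_gamma=1.0):
    # z_eq from Eq. (1)
    zeq = 2.4e4 * h**2 / ((1.0 - N_nu / 7.4) * t_gamma**4) - 1.0
    Omega_nu_eq = N_nu / (7.4 - N_nu)
    x = Omega_nu / Omega_nu_eq
    z_nr = x * (1.0 + zeq) - 1.0                           # Eq. (3)
    k_nr = 3.3e-4 * (x * (1.0 + x) * (1.0 + zeq)) ** 0.5   # Eq. (3), h/Mpc
    # Eq. (4): free-streaming scale at redshift z <= z_nr
    k_F = 59.0 * (1.0 / (1.0 + zeq) + 1.0 / (1.0 + z)) ** 0.5 \
          * Omega_nu / (N_nu * t_gamma**4)
    m_nu = 94.0 * Omega_nu * h**2 / (N_nu * t_gamma**3)    # in eV
    return z_nr, k_nr, k_F, m_nu

z_nr, k_nr, k_F, m_nu = hdm_scales(0.2, 1, 0.5, 0.0)
assert k_F >= k_nr    # as stated in the text
```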
The transfer function of the baryon component is determined by the
sound horizon and the Silk damping scale at the time of recombination
(for details see EH1).
In reality the transfer function of each component is more complicated due to
interactions between them. At late time ($z<20$),
the baryonic transfer function is practically
equal to the one of CDM, for models with $\Omega_{b}<\Omega_{CDM}$
(see Figs.~1,2).
After $z_{eq}$, the free-streaming scale decreases with time (neutrino
momenta decay with the expansion of the Universe whereas the Hubble
time grows only as the square root of the scale factor, see Eq.~(4)), and
neutrino density perturbations at smaller and smaller scales become
gravitationally unstable and cluster with the CDM+baryon component.
Today the $\nu$ free-streaming scale may lie in the range of
galaxy to clusters scales depending on the $\nu$ mass.
On smaller scales the growing mode of perturbation is concentrated
in the CDM and baryon components. Matter density perturbations
on these scales grow like $\sim t^{\alpha}$, where
$\alpha=(\sqrt{25-24\Omega_{\nu}}-1)/6$ (\cite{dor80}).
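The exponent $\alpha$ decreases from the matter-dominated value $2/3$ at $\Omega_{\nu}=0$ as the neutrino fraction grows, quantifying the free-streaming suppression; a one-line check (ours):

```python
def alpha(Omega_nu):
    # Small-scale growth exponent, delta ~ t^alpha (Doroshkevich 1980)
    return ((25.0 - 24.0 * Omega_nu) ** 0.5 - 1.0) / 6.0

assert abs(alpha(0.0) - 2.0 / 3.0) < 1e-12   # pure CDM: t^(2/3)
assert alpha(0.3) < alpha(0.2) < alpha(0.0)  # growth slows with Omega_nu
```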
\section{An analytic approximation for the MDM transfer function}
To construct the MDM transfer function we use the analytic
approximation of EH1 for the transfer function of cold particles plus
baryons and correct it for the presence of a $\nu$-component like
\cite{pog95} and \cite{ma96}:
$$T_{MDM}(k)=T_{CDM+b}(k)D(k)~.\eqno(5)$$
The function $D(k)$ must have the following asymptotics:
$$D(k\ll k_{nr})=1,$$
$$D(k\gg k_F)=(1-\Omega_{\nu})\left({1+z\over
1+z_{eq}}\right)^{\beta},$$
where $\beta=1-1.5\alpha$.
After some numerical experimentation we arrive at
the following ansatz which satisfies these asymptotics
$$
D(k)=\left[{1+(1-\Omega_{\nu})^{1/\beta}{1+z\over 1+z_{eq}}
\left({k_F\over k_{nr}}\right)^{3} \sum_{i=1}^{3}
\alpha_{i}\left({k\over k_F}\right)^i
\over 1+(\alpha_{4}k/k_{nr})^{3}}\right]^{\beta}.\eqno(6)
$$
We minimize the
residuals in the intermediate region ($k_{nr}<k<k_F$) by determining
the $\alpha_{i}$ as best-fit coefficients
by comparison with the numerical results.
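A direct transcription of the ansatz (6) (our sketch, with placeholder coefficients $\alpha_i=1$ rather than the best-fit values of the Appendix) confirms the two asymptotics numerically; note that the large-$k$ limit reproduces the stated asymptote provided $\alpha_3=\alpha_4^3$, which holds trivially for unit coefficients:

```python
def D_ansatz(k, k_nr, k_F, z, z_eq, Omega_nu, a=(1.0, 1.0, 1.0, 1.0)):
    # Eq. (6) with alpha_i supplied by the caller
    alpha = ((25.0 - 24.0 * Omega_nu) ** 0.5 - 1.0) / 6.0
    beta = 1.0 - 1.5 * alpha
    poly = sum(a[i] * (k / k_F) ** (i + 1) for i in range(3))
    num = 1.0 + (1.0 - Omega_nu) ** (1.0 / beta) \
          * (1.0 + z) / (1.0 + z_eq) * (k_F / k_nr) ** 3 * poly
    den = 1.0 + (a[3] * k / k_nr) ** 3
    return (num / den) ** beta

# Asymptotics for Omega_nu = 0.2, z = 0, z_eq = 6936.5:
beta = 1.0 - 1.5 * ((25.0 - 24.0 * 0.2) ** 0.5 - 1.0) / 6.0
assert abs(D_ansatz(1e-8, 0.047, 11.8, 0.0, 6936.5, 0.2) - 1.0) < 1e-3
assert abs(D_ansatz(1e6, 0.047, 11.8, 0.0, 6936.5, 0.2)
           - 0.8 * (1.0 / 6937.5) ** beta) < 1e-3
```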
By $\chi^2$ minimization (\cite{nr92}) we first determine the
dependence of the coefficients $\alpha_{i}$ on $\Omega_{\nu}$
keeping all other
parameters fixed, to obtain an analytic approximation
$\alpha_{i}(\Omega_{\nu},z)$. The main dependence of
$T_{MDM}(k)$ on $\Omega_{b},\;N_{\nu},\;h$ and $z$ is taken care of by the
dependence of $T_{CDM+b}$, $k_{nr}$, $k_F$ and of the
asymptotic solution on these parameters. We then
correct $\alpha_{i}$ by minimization of the residuals to include the
slight dependence on these parameters.
Finally, the correction coefficients have the following form:
$$\alpha_{i}=a_{i}A_{i}(z)B_{i}(\Omega_b)C_{i}(h)D_{i}(N_{\nu}),$$
where $a_{i}=a_{i}(\Omega_{\nu})$,
$A_{i}(0)=B_{i}(0.06)=C_{i}(0.5)=D_{i}(1)=1$.
The functions $A_{i}$
depend also on $\Omega_{\nu}$.
For all our calculations we assume a CMB temperature of
$T_{\gamma}=2.726K$ (\cite{mat94,kog96}).
\subsection{Dependence on $\Omega_{\nu}$ and $z$.}
We first set $h=0.5$, $\Omega_{b}=0.06$, $N_{\nu}=1$
and determine $\alpha_{i}$ for
$\Omega_{\nu}=0.1,\;0.2,\;0.3,\;0.4,\;0.5$ and $z=0,\;10,\;20$.
We then approximate $D(k)$ by setting
$\alpha_{i}=a_{i}A_{i}(z)$, where
$A_i(z)=(1+z)^{b_i+c_i(1+z)}$. The dependences
of $a_i$, $b_i$ and $c_i$ on $\Omega_{\nu}$, as well as
$B_{i}(\Omega_b)$, $C_{i}(h)$ and $D_{i}(N_{\nu})$ are given in the Appendix.
The functions $D(k)$ for different $\Omega_{\nu}$ and its fractional
residuals are shown in Figs.~3 and~4.
\begin{figure}[t]
\epsfxsize=8truecm
\epsfbox{fig3.eps}
\caption{$D(k)=T_{MDM}(k)/T_{CDM+b}(k)$
as calculated numerically (solid line) and our analytic
approximation (dotted line) for different values of $\Omega_{\nu}$. The
numerical results and the approximations overlay perfectly.}
\end{figure}
\begin{figure}[th]
\epsfxsize=8truecm
\epsfbox{fig4.eps}
\caption{Fractional residuals of $D(k)$ given by
$(D(k)-D_{CMBfast}(k))/D_{CMBfast}(k)$ for different values of $\Omega_{\nu}$.}
\end{figure}
We now analyze the accuracy of our analytic approximation for the MDM
transfer function $T_{MDM}(k)=T_{CDM+b}D(k)$ which in addition to
the errors in $D(k)$ contains also those of $T_{CDM+b}(k)$ (EH1).
We define the fractional residuals for $T_{MDM}(k)$ by
$(T(k)-T_{CMBfast}(k))/T_{CMBfast}(k)$. In Fig.~5 the numerical result
for $T_{MDM}(k)$ (thick solid lines) and the
analytic approximations (dotted thin lines) are shown for different
$\Omega_{\nu}$. The fractional residuals for the latter are given in Fig.~6.
Our analytic approximation of $T_{MDM}(k)$ is sufficiently accurate for a
wide range of redshifts (see Fig.7). For $z\le 30$ the fractional residuals
do not change by more than 2\% and stay small.
\begin{figure}[th]
\epsfxsize=8truecm
\epsfbox{fig5.eps}
\caption{The numerical MDM transfer function
at $z=0$ (thick solid lines)
and the analytic approximations $T_{MDM}(k)=T_{CDM+b}D(k)$ (dotted thin
lines, which perfectly overlay with the numerical result).}
\end{figure}
\begin{figure}[th]
\epsfxsize=8truecm
\epsfbox{fig6.eps}
\caption{The fractional residuals of $T(k)$ given by $(T(k) -
T_{CMBfast}(k))/T_{CMBfast}(k)$ for different values of $\Omega_{\nu}$.}
\end{figure}
\begin{figure}[th]
\epsfxsize=8truecm
\epsfbox{fig7.eps}
\caption{Fractional residuals of the analytic approximation for the MDM
transfer function at different redshifts.}
\end{figure}
\subsection{Dependence on $\Omega_{b}$ and $h$.}
\begin{figure}[th]
\epsfxsize=8truecm
\epsfbox{fig8.eps}
\caption{Fractional residuals of the analytic approximation of the MDM
transfer function for different values of $\Omega_{b}$.}
\end{figure}
We now vary $\Omega_{b}$, fixing different values of $\Omega_{\nu}$
and setting the other parameters to $h=0.5$, $N_{\nu}=1$. We analyze
the ratio $D(k;\Omega_{\nu},\Omega_{b})/D(k;\Omega_{\nu},\Omega_{b}=0.06)$.
Since the dominant dependence of $T_{MDM}(k)$ on $\Omega_{b}$ is
already taken care of in
$T_{CDM+b}(k)$, $D(k)$ is only slightly corrected for this
parameter. The correction factors $B_i(\Omega_b)$ ($\sim 1$), modeled as second order
polynomials in $\Omega_b/0.06$ with best-fit coefficients, are
presented in the Appendix.
The fractional residuals of
$T_{MDM}(k)$ for different $\Omega_{b}$ are shown in Fig.~8.
The maximum of the residuals grows for higher baryon fractions.
This is due to the acoustic oscillations which become more prominent and
their analytic modeling in MDM models is more complicated.
\begin{figure}[th]
\epsfxsize=8truecm
\epsfbox{fig9.eps}
\caption{Fractional residuals of the analytic approximation of $D(k)$,
$T_{CDM+b}(k)$ and $T_{MDM}(k)$
transfer function for different values of the Hubble parameter, $h$.}
\end{figure}
A similar situation occurs also for the dependence of $T_{MDM}(k)$
on $h$. Since the $h$-dependence is included properly in $k_F$ and
$k_{nr}$, $D(k)$ does not require any correction in the asymptotic regimes.
Only a tiny correction of $D(k)$ in the intermediate range
$0.01<k<1$ is necessary to minimize the residuals.
By numerical experiments we find that this can be achieved by
multiplying $\alpha_{1},...\alpha_{4}$
by the factors $C_i(h)$, which are approximated by second order polynomials
in $h/0.5$ with coefficients determined by $\chi^2$
minimization (see Appendix). The fractional residuals of
$D(k)$ for different $h$ are shown in Fig.~9 (top panel), they remain
stable in the range $0.3\le h\le 0.7$. The fractional residuals of
$T_{MDM}(k)$, however, grow slightly (to about 2-3\%; Fig.~9, bottom panel) in the range
$0.1\le k \le 1$ when $h$ grows from 0.3 to 0.7. This is caused by
the fractional residuals of $T_{CDM+b}(k)$ (see the middle panel).
\subsection{Dependence on $N_{\nu}$.}
\begin{figure}[th]
\epsfxsize=8truecm
\epsfbox{fig10.eps}
\caption{
Fractional residuals for the analytic approximation of the MDM
transfer function of density perturbations for different numbers of
massive neutrino species, $N_{\nu}$, in models with different values
of $\Omega_{\nu}$, $\Omega_{b}$ and $h$.}
\end{figure}
The dependence of $D(k)$ on the number of massive neutrino species, $N_{\nu}$,
is taken into account in our analytic approximation by the
corresponding dependence of the physical
parameters $k_{nr}$ and $k_F$ (see Eq.(6)). It has the correct asymptotic
behaviour on small and large scales but rather large residuals in
the intermediate region $0.01<k<10$.
Therefore, the coefficients $\alpha_{i}$
($i=1,...,4$) must be corrected for $N_{\nu}$. To achieve this,
we multiply each
$\alpha_{i}$ by a factor $D_{i}(N_{\nu})$ ($\sim 1$) which we
determine by $\chi^2$ minimization. These factors depend on
$N_{\nu}$ as second order polynomials. They are given in the Appendix. In
Fig.~10 we show the fractional residuals of $T_{MDM}(k)$ for different
numbers of massive neutrino species, $N_{\nu}$, and several values of
the parameters $\Omega_{\nu}$, $\Omega_b$, $h$ and $z$. The
performance for $N_{\nu}=2,3$ is approximately the same as for $N_{\nu}=1$.
\section{Performance}
\begin{figure}[th]
\epsfxsize=8truecm
\epsfbox{fig11.eps}
\caption{The variance of density perturbations
smoothed by a top-hat filter of radius $R$ for different MDM models.
Numerical transfer functions $T_{MDM}$ are shown as thick
lines; the analytic approximations $T_{MDM}(k)=T_{CDM+b}D(k)$ are shown as thin
lines superimposed on the numerical results (the residuals
are not visible on this scale). }
\end{figure}
\begin{figure}[th]
\epsfxsize=8truecm
\epsfbox{fig12.eps}
\caption{The fractional residuals of $\sigma(R)$ defined
by $(\sigma(R)-\sigma_{CMBfast}(R))/\sigma_{CMBfast}(R)$
for different values of $\Omega_{\nu}$ with all other parameters fixed.}
\end{figure}
The analytic approximation of $D(k)$ proposed here has maximal fractional
residuals of less than $5\%$ in the range $0.01\le k \le 1$. It
oscillates around the exact numerical result (see Fig.~4), which
essentially reduces the fractional residuals of integral quantities
like $\sigma(R)$. Indeed, the mean square density perturbation
smoothed by a top-hat filter of radius $R$,
$$\sigma^{2}(R)={1\over 2\pi^{2}}\int_{0}^{\infty}k^{2}P(k)W^{2}(kR)dk,$$
where $W(x)=3(\sin x-x \cos x)/x^3$ and $P(k)=AkT_{MDM}^{2}(k)$ (Fig.~11),
has fractional residuals which are only about half those
of the transfer function (Fig.~12).
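The quadrature above is straightforward to reproduce numerically. The sketch below is illustrative only: the power spectrum \verb|P| is a toy stand-in for $AkT_{MDM}^2(k)$, and the integration limits and grid size are arbitrary numerical choices, not values from this paper.

```python
import math

def window_tophat(x):
    """Top-hat window W(x) = 3 (sin x - x cos x) / x^3."""
    if x < 1e-4:                       # series expansion avoids 0/0 as x -> 0
        return 1.0 - x * x / 10.0
    return 3.0 * (math.sin(x) - x * math.cos(x)) / x**3

def sigma(R, power, kmin=1e-4, kmax=1e2, n=20000):
    """sigma(R) from sigma^2 = 1/(2 pi^2) Int k^2 P(k) W^2(kR) dk,
    evaluated by the trapezoidal rule on a logarithmic k grid."""
    lk0, lk1 = math.log(kmin), math.log(kmax)
    h = (lk1 - lk0) / n
    total = 0.0
    for i in range(n + 1):
        k = math.exp(lk0 + i * h)
        f = k**3 * power(k) * window_tophat(k * R) ** 2  # extra k: dk = k dln k
        total += 0.5 * f if i in (0, n) else f
    return math.sqrt(total * h / (2.0 * math.pi**2))

# toy spectrum P(k) = k exp(-k), standing in for A k T_MDM^2(k)
P = lambda k: k * math.exp(-k)
print(sigma(8.0, P), sigma(1.0, P))
```

As expected, $\sigma(R)$ decreases with the smoothing radius, since the window suppresses power at $k\gtrsim 1/R$.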
To normalize the power spectrum to the 4-year COBE data we have
used the fitting formula by \cite{bun97}.
The accuracy of $\sigma(R)$ obtained by
our analytic approximation is better than $2\%$ for a wide range of
$\Omega_{\nu}$ for $\Omega_{b}=0.06$ and $h=0.5$. Increasing
$\Omega_{b}$ slightly degrades the approximation for $N_{\nu}> 1$,
but even for a baryon content as high as $\Omega_{b} \sim 0.2$,
the fractional residuals of
$\sigma(R)$ do not exceed $5\%$. Varying $h$ in the range $0.3-0.7$
and $N_{\nu}$ from 1 to 3 also does not degrade the accuracy of $\sigma(R)$
beyond this limit.
\begin{figure}[th]
\epsfxsize=8truecm
\epsfbox{fig13.eps}
\caption{Deviation of $\sigma_{8}$ of our analytic
approximation for $T_{MDM}(k)$ from the numerical result
for different values of $\Omega_{\nu}$, $\Omega_{b}$ and $h$ ($N_{\nu}=1$). }
\end{figure}
\begin{figure}[th]
\epsfxsize=8truecm
\epsfbox{fig14.eps}
\caption{Same as Fig.13, but for $N_{\nu}=2$.}
\end{figure}
\begin{figure}[th]
\epsfxsize=8truecm
\epsfbox{fig15.eps}
\caption{Same as Fig.13, but for $N_{\nu}=3$.}
\end{figure}
We now evaluate the quality of our approximation in the four-dimensional
parameter space.
We see in Fig.12 that the largest errors of our approximation for
$\sigma(R)$ come from scales of $\sim 5-10h^{-1}Mpc$\footnote{Actually, the
oscillations in the error of $T(k)$ are somewhat misleading: they
are mainly due to
baryonic oscillations in the numerical $T(k)$ entering the denominator for
the error estimate, so that a slight shift of the phase enhances the
error artificially. This is why we concentrate on the error of
$\sigma (R)$ (otherwise the error estimate of T(k) should be
averaged, see {\it e.g.} EH2).}.
Since these scales are used for the evaluation of
the density perturbation amplitude on galaxy cluster scale,
it is important to know how
accurately we reproduce them. The quantity $\sigma_{8}\equiv
\sigma(8h^{-1}Mpc)$ is the value most often used to test
models. We calculate it for the set of
parameters $0.05\le \Omega_{\nu}\le 0.5$,
$0.06\le \Omega_{b} \le 0.3$, $0.3\le h \le 0.7$ and $N_{\nu}=1,\;2,\;3$
by means of our analytic approximation and numerically.
The relative deviations of $\sigma_{8}$ calculated with our $T_{MDM}(k)$
from the numerical results are shown in Fig.13-15.
As one can see from Fig.~13, for $0.3\le h\le 0.7$ and
$\Omega_{b}h^{2}\le 0.15$ the largest error in $\sigma_8$
for models with one sort of massive neutrinos $N_{\nu}=1$ does
not exceed $4.5\%$ for $\Omega_{\nu}\le 0.5$.
Thus, for values of $h$ favored by direct measurements of the
Hubble constant, the range of $\Omega_{b}h^{2}$ in which
the analytic approximation is very accurate for $\Omega_{\nu}\le 0.5$
is six times as wide as the range given by nucleosynthesis constraints
($\Omega_{b}h^{2}\le 0.024$, \cite{tyt96}). This is important if one
wants to determine cosmological parameters by the minimization
of the difference between the observed and predicted characteristics of the
large scale structure of the Universe.
For models with more than one species of massive neutrinos of equal mass
($N_{\nu}=2,3$), the accuracy of our analytic approximation is
slightly worse (Fig.~14,~15). But even for extremely high values of parameters
$\Omega_{b}=0.3$, $h=0.7$, $N_{\nu}=3$ the error in
$\sigma_{8}$ does not exceed $6\%$.
In redshift space the accuracy of our analytic approximation is stable and
quite high for redshifts of up to $z= 30$.
\section{Comparison with other analytic approximations}
\begin{figure}[th]
\epsfxsize=8truecm
\epsfbox{fig16.eps}
\caption{Fractional residuals of $\sigma(R)$ calculated by the
analytic approximation of EH2 for the same parameters
as in Fig.~12.}
\end{figure}
\begin{figure}[th]
\epsfxsize=8truecm
\epsfbox{fig17.eps}
\caption{
Fractional residuals of the analytic approximation by EH2
of the MDM transfer function for the same parameters
as in Fig.16. For comparison see Fig.6.}
\end{figure}
\begin{figure}[th]
\epsfxsize=8truecm
\epsfbox{fig18.eps}
\caption{Deviation of $\sigma_{8}$ as obtained by the fitting formula
of EH2 from numerical results
for different values of $\Omega_{\nu}$, $\Omega_{b}$ and $h$
($N_{\nu}=1$). See Fig.~13 for comparison.}
\end{figure}
We now compare the accuracy of our analytic approximation with those
cited in the introduction. For comparison with Fig.~12 the
fractional residuals of $\sigma(R)$ calculated with the
analytic approximation of $T_{MDM}(k)$ by EH2
are presented in Fig.~16. Their approximation is only slightly less
accurate ($\sim 3\%$) at scales
$\ge 10 Mpc/h$. In Fig.~17 the fractional residuals of the
EH2 approximation of $T_{MDM}(k)$ are
shown for the same cosmological parameters as in Fig.~16.
For $\Omega_{\nu}=0.5$ (which is not shown) the deviation from
the numerical result is $\ge 50\%$ at $k\ge 1\,h\,$Mpc$^{-1}$, and the
EH2 approximation completely
breaks down in this region of parameter space.
The analog to Fig.~13 ($\sigma_8$) for the fitting formula of
EH2 is shown in Fig.~18 for different values of
$\Omega_{b}$, $\Omega_{\nu}$ and $h$.
Our analytic approximation of $T_{MDM}(k)$ is
more accurate than EH2
in the range $0.3\le \Omega_{\nu}\le 0.5$ for all $\Omega_{b}$ ($\le 0.3$).
For $\Omega_{\nu}\le 0.3$ the accuracies of $\sigma_{8}$
are comparable.
\begin{figure}[th]
\epsfxsize=8truecm
\epsfbox{fig19.eps}
\caption{
Fractional residuals of different analytic approximations for the
MDM transfer function at $z=0$ for one flavor of massive neutrinos.}
\end{figure}
\begin{figure}[th]
\epsfxsize=8truecm
\epsfbox{fig20.eps}
\caption{Fractional residuals of $\sigma(R)$ calculated with the
same analytic approximations as Fig.~19.}
\end{figure}
To compare the accuracy of the analytic approximations for $T_{MDM}(k)$
given by \cite{hol89}, \cite{pog95}, \cite{ma96} and EH2
with the one presented here, we
determine the transfer functions for the fixed set of parameters
($\Omega_{\nu}=0.3$, $\Omega_{b}=0.1$, $N_{\nu}=1$, $h=0.5$)
for which all of them are reasonably accurate. Their deviations (in \%) from
the numerical transfer function are shown in
Fig.~19. The deviation of the variance of density fluctuations
for different smoothing scales from the numerical result
is shown in Fig.~20. Clearly, our analytic approximation
of $T_{MDM}(k)$ makes it possible to determine the spectrum and
its moments more accurately over a wider range of scales and parameters.
\section{Conclusions}
We propose an analytic approximation for the linear power spectrum of density
perturbations in MDM models based on a correction of the approximation
by EH1 for CDM plus baryons. Our formula
is more accurate than previous ones (\cite{pog95,ma96}, EH2) for matter
dominated Universes ($\Omega_{M}=1$) in a wide range of parameters:
$0\le \Omega_{\nu}\le 0.5$, $0\le \Omega_{b}\le 0.3$, $0.3\le h\le
0.7$ and $N_{\nu}\le 3$. For models with one, two or three flavors of massive
neutrinos ($N_{\nu}=1,\;2,\;3$) it is significantly more accurate than the
approximation by EH2 and has a relative error
$\le 6\%$ in a wider range for $\Omega_{\nu}$ (see Figs.~13,~18).
The analytic formula given in this paper provides an essential tool
for testing a broad class of MDM models by comparison with
different observations like the galaxy power spectrum, cluster
abundances and evolution, clustering properties of Ly-$\alpha$ lines
etc. Results of such an analysis are presented elsewhere.
Our analytic approximation for $T_{MDM}(k)$ is available in the form
of a FORTRAN code and can be requested at
{\bf novos@astro.franko.lviv.ua} or copied from
{\bf http://mykonos.unige.ch/$\boldmath{\sim}$durrer/}.
\medskip
{\it Acknowledgments} This work is part of a project supported by the
Swiss National Science Foundation (grant NSF 7IP050163).
B.N. is also grateful to DAAD
for financial support (Ref. 325) and AIP for hospitality, where the
bulk of the numerical calculations were performed. V.L. acknowledges a
partial support of the Russian Foundation for Basic Research (96-02-16689a).
\onecolumn{
\begin{appendix}{\bf Appendix}
The best fit coefficients $a_i(\Omega_{\nu})$, $b_i(\Omega_{\nu})$,
$c_i(\Omega_{\nu})$, $B_i(\Omega_b)$, $C_{i}(h)$ and $D_i(N_\nu)$:
$$a_{1}=1.24198-3.88787\Omega_{\nu}+28.33592\Omega_{\nu}^2-70.9063\Omega_{\nu}^3
+84.15833\Omega_{\nu}^4-41.16667\Omega_{\nu}^5\;,$$
$$a_{2}=0.7295-3.6176\Omega_{\nu}+21.45834\Omega_{\nu}^2-54.63036\Omega_{\nu}^3+70.80274
\Omega_{\nu}^4-35.20905\Omega_{\nu}^5\;,$$
$$a_{3}=0.28283-0.53987\Omega_{\nu}+5.80084\Omega_{\nu}^2-14.18221\Omega_{\nu}^3
+16.85506\Omega_{\nu}^4-8.77643\Omega_{\nu}^5\;,$$
$$a_{4}=0.48431+1.89092\Omega_{\nu}-4.04224\Omega_{\nu}^2+8.09669\Omega_{\nu}^3
-10.05315\Omega_{\nu}^4+5.34405\Omega_{\nu}^5~.$$
\medskip
$$b_{1}=0.2667-1.67\Omega_{\nu}+3.56\Omega_{\nu}^2-3.1\Omega_{\nu}^3\;,$$
$$c_{1}=0.008-0.055\Omega_{\nu}+0.135\Omega_{\nu}^2-0.124\Omega_{\nu}^3\;,$$
$$b_{2}=0.226-0.47\Omega_{\nu}+1.27\Omega_{\nu}^2-1.39\Omega_{\nu}^3\;,$$
$$c_{2}=0.004-0.026\Omega_{\nu}+0.053\Omega_{\nu}^2-0.039\Omega_{\nu}^3\;,$$
$$b_{3}=0.076-0.258\Omega_{\nu}+0.215\Omega_{\nu}^2\;,$$
$$c_{3}=0.0026-0.0205\Omega_{\nu}+0.055\Omega_{\nu}^2-0.051\Omega_{\nu}^3\;,$$
$$b_{4}=0.0158-0.055\Omega_{\nu}+0.0228\Omega_{\nu}^2\;,$$
$$c_{4}=0.00094-0.0072\Omega_{\nu}+0.018\Omega_{\nu}^2-0.016\Omega_{\nu}^3~.$$
\medskip
$$B_{1}(\Omega_b)= 1.202-0.2065(\Omega_{b}/0.06)+0.005(\Omega_{b}/0.06)^2\;,$$
$$B_{2}(\Omega_b)=1.033-0.036(\Omega_{b}/0.06)+0.003(\Omega_{b}/0.06)^2\;,$$
$$B_{3}(\Omega_b)=1.166-0.17(\Omega_{b}/0.06)+0.005(\Omega_{b}/0.06)^2\;,$$
$$B_{4}(\Omega_b)=0.97985+0.01525(\Omega_{b}/0.06)+0.00626(\Omega_{b}/0.06)^2~.$$
\medskip
$$C_{1}(h)=1.09-0.09(h/0.5)\;,$$
$$C_{2}(h)=1.65-0.88(h/0.5)+0.23(h/0.5)^2\;,$$
$$C_{3}(h)=1.055-0.055(h/0.5)\;,$$
$$C_{4}(h)=0.981+0.03(h/0.5)-0.012(h/0.5)^2~.$$
\medskip
$$D_{1}(N_{\nu})=1.315-0.431N_{\nu}+0.116N_{\nu}^2\;,$$
$$D_{2}(N_{\nu})=1.108-0.225N_{\nu}+0.117N_{\nu}^2\;,$$
$$D_{3}(N_{\nu})=1.316-0.432N_{\nu}+0.116N_{\nu}^2\;,$$
$$D_{4}(N_{\nu})=1.256-0.302N_{\nu}+0.046N_{\nu}^2~.$$
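For readers implementing these fits, the correction factors are plain polynomial evaluations. The sketch below encodes the $D_i(N_{\nu})$ coefficients listed above; note that all four factors come out to unity at $N_{\nu}=1$, consistent with their role as corrections to the baseline case.

```python
# Second-order polynomial coefficients of D_i(N_nu) as listed above
# (constant, linear, quadratic terms), i = 1..4.
D_COEFFS = [
    (1.315, -0.431, 0.116),
    (1.108, -0.225, 0.117),
    (1.316, -0.432, 0.116),
    (1.256, -0.302, 0.046),
]

def correction_D(n_nu):
    """Factors D_i(N_nu) multiplying the coefficients alpha_1..alpha_4."""
    return [c0 + c1 * n_nu + c2 * n_nu ** 2 for c0, c1, c2 in D_COEFFS]

print(correction_D(1))   # all four factors equal 1 (up to rounding) at N_nu = 1
print(correction_D(3))
```

The $B_i(\Omega_b/0.06)$ and $C_i(h/0.5)$ factors are evaluated identically, with the argument rescaled as indicated.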
\end{appendix}
}
\clearpage
\twocolumn
| 12,074 |
\section{INTRODUCTION}
The response of nuclei to high excitations or temperatures has been a
subject of intense study both theoretically and experimentally for the
last several years. From theoretical
investigations of hot nuclear matter \cite{kup,jaq,ban}
and also of finite nuclei \cite{bon}, it has been suggested that
the nuclear system may undergo a liquid-gas phase transition at high
temperatures. Recent experimental measurements
of the nuclear caloric curve by the ALADIN Group \cite{poc}
in $Au+Au$ collisions at 600 $AMeV$ tentatively support
such a conjecture. The key element in
such a surmise is the extracted
nuclear temperature, which was observed to
be nearly constant in the
excitation energy range of $\sim$ 3 $-$ 10
$MeV$ per nucleon, beyond which the caloric curve
rises almost linearly with a slope close
to that of a classical gas. Experimental data from the EOS collaboration
\cite{hau,elli} are also suggestive of critical behavior in nuclei;
here too, exact determination of the nuclear temperature plays
the most essential role.
The temperatures of hot fragmenting systems are generally measured
from the double ratios of isotope multiplicities employing the prescription
proposed by Albergo {\it et al} \cite{alb}
based on the statistical model of prompt multifragmentation (PM) \cite{ran}.
In arriving at the prescription, several simplifying assumptions
are made, namely,
(i) the fragments are non-interacting, (ii) the fragments
follow a Maxwell-Boltzmann distribution, (iii) they
are formed in their ground states and
(iv) all their kinetic energies are in the thermal mode,
i.e. collective flow energy is absent. The effects of
the interaction have later been simulated through an effective
excluded volume interaction \cite{gul}; to our knowledge, the
effect of fragment-fragment interaction on the isotope ratio
temperature ($T_r$) within the freeze-out configuration has however
not been taken into account. Though it is expected that
at high temperatures and low densities the quantum system would
behave like a classical Maxwell-Boltzmann system,
the importance of invoking quantum statistics in multifragmentation
has been emphasised by several authors \cite{gul,sub,maj}. Since the qualitative
effect of quantum statistics is to increase the number of bosons
relative to fermions at low temperatures and high densities, the
isotope ratio, and hence the extracted temperature
$T_r$, might be sensitive to the choice of statistics.
The assumption of the formation of the fragments
in their ground states is an oversimplification. In general,
the fragments are expected to be formed in various
excited states which are not too short-lived.
These excited fragments subsequently decay either by
particle or $\gamma$-ray emission. These side-feeding effects
are shown \cite{gul,kol,bon1,hua} to have an important bearing
on the observed multiplicities and hence
on the
deduced nuclear temperature. The hot fragmenting nuclear
complex formed in nuclear collisions may be compressed,
depending on the collision geometry; it subsequently
decompresses to the freeze-out configuration, generating a
significant amount of collective nuclear flow energy. The
important role played by collective
flow on the fragmentation pattern has been shown
earlier \cite{pal,des}. Its effect on the nuclear
temperature has only been qualitatively studied by
Shlomo {\it et al} \cite{shl} and found to be
nonnegligible. In a systematic step by step approach,
we explore in this paper the effects of the four approximations
listed earlier on the isotopic temperatures by considering
different isotope double ratios and examine whether
they can be considered as good pointers to the thermodynamic
temperature of the fragmenting system.
The physics of the nuclear multifragmentation is not yet
fully established beyond question. The one-step prompt
break-up (PM) looks a very plausible scenario at high excitations
and the sequential binary decay (SBD) model \cite{swi,pal1}
may provide a better description of the reaction mechanism at lower excitation.
Both these processes are thermal in nature. From the inclusive
mass or charge distributions or even the scaling of the multiplicities of
the intermediate mass fragments (IMF),
it is, however, difficult \cite{pal2} to assess the
relative merits of these two competing
models. If the SBD model is the more viable model,
say, for the yield of nuclear fragments in nuclear
collisions, then the Albergo prescription of extracting
nuclear temperature from the double
isotope ratios is called into
question. One notes that in the SBD
model, there is no unique temperature
but a succession of temperatures till
the nuclear fragments are produced in their particle stable
configurations. It would still be interesting to know what values of
temperatures one extracts from double ratios in the SBD model
and whether they can offer some added insight
into the nature of nuclear disassembly.
The paper is organised as follows. In Sec. II, we
briefly outline the PM and SBD models.
In Sec. III, temperatures calculated from both models
are presented and discussed in the context
of experimental data. The conclusions are drawn in Sec. IV.
\section{THEORETICAL FRAMEWORK}
The multiplicities of fragments produced in
nuclear collisions are experimentally measured quantities;
the nuclear temperature is a derived entity. In the following,
we outline the models for fragment production
and relate the nuclear temperature to the fragment yield.
\subsection{Prompt multifragmentation}
A hot nuclear system with $N_0$ neutrons and $Z_0$ protons
may be formed in nuclear collisions at a temperature
$T_0$ with excitation energy $\epsilon^*$ per particle. It may be
initially compressed in a volume smaller than its normal
volume. The compressed matter decompresses
and develops a collective radial flow in addition to thermal excitation.
We still assume that the system evolves in thermodynamic equilibrium and
undergoes multifragmentation after reaching the
'freeze-out' volume at a temperature $T$ different from $T_0$.
If the time scale of the expansion
is large compared to the equilibration time in the
expanding complex (i.e. the expansion is quasi-static),
this assumption is not unjustified. We further assume
that at the freeze-out volume, the system reaches
chemical equilibrium.
The expansion of the compressed system may be simulated through a
negative external pressure \cite{pal}. If there was no flow,
at the freeze-out, the kinetic contribution of the thermal pressure
is generally assumed to be cancelled by
interaction contributions, i.e. the system is
at equilibrium under zero external pressure.
A positive pressure corresponds to compression of the system; similarly
a negative pressure would cause decompression. If $P_i$ is the
internal partial pressure exerted by the radially outflowing
fragments of the $i$th species at the surface, the total external
pressure $P$ is then given by $P=-\sum_i
P_i$. The total thermodynamic potential of the system
at the freeze-out volume is given by \cite{pal,ma}
\begin{equation}
G=E-TS-\sum_{i=1}^{N_s} \mu_i\omega_i + P\Omega,
\end{equation}
where $E$ and $S$ are the internal energy and entropy of the system,
$\Omega= V - V_0$ with $V$ as the freeze-out volume
and $V_0$ the normal nuclear volume of the fragmenting system,
$N_s$ the total number of fragment species, $\mu_i$ the chemical
potential and $\omega_i$ the multiplicity. The occupancy
of the fragments is obtained
by minimising the total thermodynamic potential
$G$ and is given by
\begin{equation}
n_i (p_i) = \frac{1}{exp\{(e_i-\mu_i)/T\}\pm 1}
\end{equation}
where the ($\pm$) sign refers to the fermionic or bosonic
nature of the fragments. The single particle energy $e_i$ is \cite{pal,de}
\begin{equation}
\label{ei}
e_i=\frac{p_i^2}{2m_i}-B_i+{\cal V}_i -\frac{P_i}{\rho_i}.
\end{equation}
Here $B_i$ refers to
the binding energy, $\rho_i$ is the density of the $i$th fragment
species obtained from the momentum integration of the distribution
function given by eq.(2) and ${\cal V}_i$ corresponds to the
single particle potential, evaluated in the
complementary fragment approximation \cite{gro,pal3}.
It is given by
\begin{equation}
{\cal V}_i=\frac
{\int exp[-U_i(R)/T] U_i(R) d^3 R}
{\int exp[-U_i(R)/T] d^3R}
\end{equation}
where $U_i(R)$ is the interaction energy of the fragment with
its complementary at a separation $R$ and
the integration is over the whole freeze-out volume with
the exclusion of the volume of the complementary
fragment.
Under chemical equilibrium, the chemical potential of
the $i$th fragment species is
\begin{equation}
\mu_i=\mu_n N_i +\mu_p Z_i.
\end{equation}
The neutron and proton chemical potentials
$\mu_n$ and $\mu_p$ are obtained from the
conservation of baryon and charge number, $N_i$ and
$Z_i$ being the number of neutrons and protons in the fragment.
The fragment yield is obtained from the phase-space integration
of the occupancy function and for fermions it is given by
\begin{equation}
\omega_i = \frac{2}{\sqrt{\pi}}\Omega \lambda_i^{-3}
J_{1/2}^{(+)}(\eta_i)\phi_i(T).
\label{f-mul}
\end{equation}
For bosons, the corresponding multiplicity is given by
\begin{equation}
\omega_i=g_0 [e^{-\eta_i}-1]^{-1}
+\frac{2}{\sqrt{\pi}}\Omega \lambda_i^{-3}
J_{1/2}^{(-)}(\eta_i)\phi_i(T).
\label{b-mul}
\end{equation}
In eqs. (\ref{f-mul}) and (\ref{b-mul}), $\eta_i$ is the
fugacity defined as
\begin{equation}
\eta_i = \frac{\mu_i+ B_i -{\cal V}_i + P_i/\rho_i}{T},
\end{equation}
$\lambda_i= h/\sqrt{2\pi m_i T}$ is the thermal wavelength with
$m_i$ as the mass of the $i$th fragment species and $J_{1/2}^{(\pm)}$
are the Fermi and Bose integrals \cite{path} given by
\begin{equation}
J_{1/2}^{(\pm)} (\eta) = \int_0^\infty \frac{x^{1/2}dx}
{exp\{(x-\eta)\}\pm 1}.
\end{equation}
The first term on the right hand side of eq. (\ref{b-mul}) gives
the number of condensed bosons, $g_0$ being their ground state
spin degeneracy.
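The Fermi and Bose integrals above have no closed form but are easy to evaluate numerically, and doing so makes the classical limit used below explicit. A minimal sketch (the cutoff \verb|xmax| and grid size are arbitrary numerical choices):

```python
import math

def j_half(eta, sign, xmax=60.0, n=100000):
    """Fermi (sign=+1) / Bose (sign=-1) integral
    J_{1/2}(eta) = Int_0^inf sqrt(x) dx / (exp(x - eta) + sign),
    by the trapezoidal rule on [0, xmax] (the integrand decays exponentially)."""
    h = xmax / n
    total = 0.0
    for i in range(1, n + 1):              # integrand vanishes at x = 0
        x = i * h
        f = math.sqrt(x) / (math.exp(x - eta) + sign)
        total += 0.5 * f if i == n else f
    return total * h

# Classical limit: for eta << 0 both statistics approach (sqrt(pi)/2) e^eta,
# which is how the Maxwell-Boltzmann yield formula is recovered below.
eta = -8.0
mb = 0.5 * math.sqrt(math.pi) * math.exp(eta)
print(j_half(eta, +1) / mb, j_half(eta, -1) / mb)   # both ratios close to 1
```

For strongly negative fugacity the two statistics coincide; at higher $\eta$ the Bose integral exceeds the Fermi one, which is the quantum-statistics effect on the yields discussed in the introduction.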
The quantity $\phi_i(T)$ is the
internal partition function of the fragments
and is defined as
\begin{equation}
\phi_i(T)=\sum_s g_s e^{-\epsilon_s^*(i)/T}
\end{equation}
where $g_s$ is the spin degeneracy of the
excited state $s$ of the cluster with excitation energy $\epsilon_s^*(i)$.
The flow pressure $P_i$ is shown
to be related \cite{pal} to the flow energy $E_i$
of the $i$th species in the form
\begin{equation}
\frac{P_i}{\rho_i} = C(v_{fi},T) E_i
\end{equation}
where $C$ is dependent on the fragment species, the
temperature and also on the flow velocity of
the fragments $v_{fi}$. It is found to be
close to 4 except for very light fragments.
In the limit $\eta_i \ll 0$ (which is
true when the density is very low), $J_{1/2}^{(+)}(\eta) \rightarrow
\frac{\sqrt\pi}{2}e^\eta$, and then from eq. (\ref{f-mul}) the
yield of the fermionic fragments reduces
to
\begin{equation}
\omega_i = \Omega \lambda_T^{-3} A_i^{3/2} e^{\eta_i} \phi_i(T)
\label{c-mul}
\end{equation}
where $\lambda_T = h/\sqrt{2\pi m T}$ is the nucleon thermal
wavelength with $m$ as the nucleon mass and $A_i$ the mass number
of the $i$th fragment species. In the same limit,
eq. (\ref{b-mul}) for the boson yield also reduces to eq.
(\ref{c-mul}). This is also the result
obtained from the classical Maxwell-Boltzmann distribution.
If one chooses two sets of fragment pairs $(A_1,Z_1)$,
$(A_1',Z_1')$ and $(A_2,Z_2)$, $(A_2',Z_2')$ such
that $Z_1' = Z_1 +p$, $Z_2' = Z_2+ p$,
$N_1' = N_1 + n$, $N_2'=N_2+n$ where $n$ and $ p$
are integers, then from eq. (\ref{c-mul}) it follows that the measured
double ratio $R_2$ of the fragment yields can be
used to determine the temperature of the fragmenting system:
\begin{eqnarray}
R_2 &=& \frac{\omega(A_1',Z_1')/\omega(A_1,Z_1)}
{\omega(A_2',Z_2')/\omega(A_2,Z_2)}\nonumber\\
&=& \left ( \frac{A_1' A_2}{A_1 A_2'}\right )^{3/2}
\frac{\phi(A_1',Z_1',T)\phi(A_2,Z_2,T)}{\phi(A_1,Z_1,T)\phi(A_2',Z_2',T)}
e^{(\Delta B/T)}e^{-(\Delta {\cal V}/T)} e^{(\Delta F/T)}
\end{eqnarray}
where
\begin{eqnarray}
\Delta B &=& B(A_1',Z_1')-B(A_1,Z_1)+B(A_2,Z_2)-B(A_2',Z_2')\nonumber\\
\Delta {\cal V} &=& {\cal V}(A_1',Z_1')-{\cal V}(A_1,Z_1)+
{\cal V}(A_2,Z_2)-{\cal V}(A_2',Z_2')\nonumber\\
\Delta F &=& C[E(A_1',Z_1')-E(A_1,Z_1)+E(A_2,Z_2)-E(A_2',Z_2')].
\end{eqnarray}
In the limit of low density, the nuclear part
of single-particle potential becomes relatively unimportant; further
choosing $p=0$ and $n=1$, the Coulomb
contribution to $\Delta{\cal V} $ practically vanishes.
Albergo {\it et al} \cite{alb} further assumed the fragments to be
formed in their ground states and they did not consider
any collective flow.
Then with $\Delta F = 0$
and $\Delta {\cal V} = 0$ the
temperature is easily determined from
\begin{equation}
R_2= \left ( \frac{A_1' A_2}{A_1 A_2'}\right )^{3/2}
\frac{g_0(A_1',Z_1',T)g_0(A_2,Z_2,T)}{g_0(A_1,Z_1,T)g_0(A_2',Z_2',T)}
e^{(\Delta B/T)}
\end{equation}
since the ground state degeneracy $g_0(A,Z)$ and the binding
energies are known a priori.
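Eq. (15) can be inverted in closed form once the mass and degeneracy prefactor is collected into a single constant $a$: $T=\Delta B/\ln(R_2/a)$. A minimal sketch for the $(He-t)$ combination, using the standard ground-state binding energies and spin degeneracies of the light nuclei involved (the value of $R_2$ is an arbitrary example, not a measurement):

```python
import math

def ratio_temperature(r2, delta_b, a):
    """Invert R2 = a * exp(Delta_B / T):  T = Delta_B / ln(R2 / a)."""
    return delta_b / math.log(r2 / a)

# (4He/3He)/(t/d) combination of eq. (15): binding energies (MeV)
# B(d)=2.224, B(t)=8.482, B(3He)=7.718, B(4He)=28.296 and ground-state
# degeneracies g = 2J+1: g(d)=3, g(t)=2, g(3He)=2, g(4He)=1.
delta_b = 28.296 - 7.718 + 2.224 - 8.482       # Delta B ~ 14.32 MeV
a = (4 * 2 / (3 * 3)) ** 1.5 * (1 * 3) / (2 * 2)
T = ratio_temperature(20.0, delta_b, a)        # R2 = 20 as an illustration
print(round(T, 2), "MeV")
```

With these numbers $1/a\simeq 1.59$, reproducing the familiar form $T=14.3/\ln(1.59\,R)$ of the He-$dt$ thermometer.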
If prompt multifragmentation is the real physical
mechanism for fragment production, eq. (15) then
provides an approximate but simple
way to determine the thermodynamic temperature
of the disassembling system. Influences
from other effects as already mentioned are however
embedded in the experimental data for isotope
yield ratios. One cannot obtain
information on the perturbations caused by these
effects on the double-ratio thermometer simply from
the experimental isotopic yields without the
help of further model calculations. If there were
no other effects except side-feeding through
$\gamma$-decay, the experimental data could
be exploited to delineate side-feeding effects by
using eq. (13) with $\Delta{\cal V} = 0$ and $\Delta F=0$ with
the choice of the internal partition function
from eq. (10). Effects from particle decay \cite{kol} or those
coming from the inclusion of Coulomb force for yield ratios
involving isotopes differing by proton number \cite{kolo} could also
be approximately reconstructed from the
experimental fragment multiplicities. Influence
of nuclear interaction, quantum statistics or collective
expansion cannot, however, be singled out
without recourse to models. We have therefore
done calculations in the prompt multifragmentation
model with the barest scenario (classical statistics,
no interaction, no side-feeding and no nuclear flow) and then
included the said effects step by step to generate
fragment multiplicities. The multiplicities
so generated under different approximations are used
to extract double-ratio temperatures using eq. (15)
to delineate the role of various effects on the temperatures.
\subsection{Sequential binary decay}
Fragmentation may also proceed via a sequence
of binary fission-like events, particularly at
relatively lower excitation energies. We employ the transition-state
model of Swiatecki \cite{swi} to find the decay probability of a
hot nucleus with mass $A$, charge $Z$ and excitation energy $E^*$ into two
fragments of mass and charge $(A_1,Z_1) $ and
$(A- A_1, Z - Z_1)$ respectively. At the saddle point, the binary
fragmentation probability is given by
\begin{equation}
P(A,Z,E^*; A_1, Z_1)\propto exp\left [ 2\sqrt{a (E^*-V_B-K)}-2\sqrt{a E^*}\right ]
\end{equation}
where $a$ is the level density parameter taken as
$A/10$ $MeV^{-1}$, $K$ is the relative kinetic energy at
the saddle point and $V_B$ the barrier height, which depends on
the saddle-point temperature $T_s$; the latter is different from the
temperature $T_0=\sqrt{E^*/a}$ of the parent nucleus.
The barrier height is determined in the two sphere approximation
as
\begin{equation}
V_B(T_s) = V_c + V_N + E_{sep}(T_0, T_s)
\label{vb}
\end{equation}
where $E_{sep}$ is the separation energy. It is
evaluated as
\begin{equation}
E_{sep}(T_0,T_s)= B(T_0)-B_1(T_s)-B_2(T_s).
\end{equation}
The binding energies are taken to be temperature dependent \cite{pal3}.
The saddle-point temperature which is also the
temperature of the fragmented daughter nuclei is given as
\begin{equation}
T_s = \sqrt{(E^* - V_B - K)/a}.
\label{ts}
\end{equation}
The evaluation of $T_s$ from eq. (\ref{ts}) requires a
knowledge of the relative kinetic energy $K$.
We assume it to follow a thermal distribution
$P(K) \propto \sqrt{K} e^{-K/T_s}$.
The complicated interrelationship between $V_B$, $K$ and $T_s$
renders evaluation of $T_s$ difficult; to simplify the problem,
$K$ in eq. (\ref{ts}) is replaced by its average value $\frac{3}{2}T_s$
and then $T_s$ is evaluated in an iterative procedure with
$T_0$ as the starting value. This is expected
to be a good approximation since the dispersion in kinetic
energy is of order $T_s$ and $(E^*-V_B)$ is generally much greater than
$T_s$. The value of $T_s$ so extracted is used only to evaluate the
barrier $V_B$ from eq. (\ref{vb}), the decay probability
and the thermal distribution. In eq. (\ref{vb}), $V_c$ is the
Coulomb interaction taken to be that between two uniformly charged
spheres and $V_N$ is the interfragment nuclear interaction \cite{pal3}.
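The iterative scheme described above is a simple fixed-point iteration. A minimal sketch, using a hypothetical barrier with a weak linear $T_s$ dependence and purely illustrative numbers (the real $V_B(T_s)$ comes from the barrier formula above):

```python
import math

def saddle_temperature(e_star, a, barrier, tol=1e-8, itmax=200):
    """Fixed-point iteration of T_s = sqrt((E* - V_B(T_s) - <K>)/a),
    with <K> = (3/2) T_s and T_0 = sqrt(E*/a) as the starting value."""
    t = math.sqrt(e_star / a)
    for _ in range(itmax):
        t_new = math.sqrt((e_star - barrier(t) - 1.5 * t) / a)
        if abs(t_new - t) < tol:
            return t_new
        t = t_new
    return t

# hypothetical barrier and illustrative numbers, not fitted values
vb = lambda t: 20.0 - 0.5 * t
a = 150 / 10.0                  # level density parameter a = A/10 for A = 150
t0 = math.sqrt(300.0 / a)       # parent temperature T_0 for E* = 300 MeV
ts = saddle_temperature(300.0, a, vb)
print(t0, ts)                   # T_s comes out below T_0, as it must
```

Because the map contracts strongly (the update depends only weakly on $T_s$ through $V_B$ and $\langle K\rangle$), convergence is reached in a few iterations.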
The relative kinetic energy $K$ of the two fragmented nuclei lying in the
range $0\le K \le (E^*-V_B)$ is generated by a Monte Carlo
method obeying the thermal distribution mentioned above. To ensure energy conservation,
this kinetic energy is plugged into eq. (\ref{ts}) to evaluate
the temperature of the daughter nuclei for further dynamical evolution. The fragment
kinetic energy and hence their velocities are obtained from momentum
conservation.
The trajectories of the fragments are calculated
under the influence of Coulomb interaction in the
overall centre of mass frame. If the fragments have sufficient excitation energy,
they decay in flight. The integration of the trajectories
is continued till the asymptotic region is reached, where the interaction
energy is very small ($\sim$ 1 $MeV$) and the excitation energies of the
fragments are below the particle emission threshold.
\section{RESULTS AND DISCUSSIONS}
In this section we present the results of the
calculations for temperatures extracted from double ratios
of different isotope yields obtained from nuclear
multifragmentation. These calculations are performed under
different approximations mentioned in the introduction in the
PM model. For this purpose we have taken
$^{150}Sm$ as a representative case for the fragmenting system. We also
obtained the double ratio temperatures assuming that the fragmentation
proceeds via sequential binary decay.
\subsection{Prompt multifragmentation}
The initial temperature $T_0$ of the hot system formed
is different from the kinetic temperature $T$
(also referred to as the thermodynamic temperature) of the fragments
at the freeze-out. What remains constant is the
total energy $E$ of the system or equivalently its
excitation energy $E^*= E+B_0$ where $B_0$ is the
binding energy of the system. The total energy of the fragmented system
may be written as
\begin{equation}
E=\frac{3}{2}T (M-1) - \sum_{i=1}^{N_s}
\omega_i B_i - \frac{1}{2} \sum
\omega_i {\cal V}_i + \sum \omega_i<\epsilon^*(i)>,
\end{equation}
where $M=\sum_i\omega_i$ is the total number of the
fragments produced in the grand canonical
model for PM and ${\cal V}_i$ the single-particle potential.
The quantity $<\epsilon^*(i)>$ is the average excitation energy
of the $i$th fragment species given by
\begin{equation}
<\epsilon^*(i)> =\frac{ \int \epsilon \rho_i(\epsilon)e^{-\epsilon/T}
d\epsilon}{\int \rho_i(\epsilon)e^{-\epsilon/T}d\epsilon}
\end{equation}
where the integrations extend up to the particle
emission threshold and $\rho_i$ is the level density
obtained from the Bethe ansatz \cite{pal1}.
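The thermal average above reduces to the ratio of two one-dimensional quadratures. A sketch with a Bethe-like level density $\rho(\epsilon)\propto\exp(2\sqrt{a\epsilon})$, whose normalisation cancels in the ratio; the cutoff and parameter values are illustrative only:

```python
import math

def mean_excitation(T, a, eps_max, n=20000):
    """<eps*> = Int eps rho e^{-eps/T} deps / Int rho e^{-eps/T} deps,
    with rho(eps) ~ exp(2 sqrt(a eps)); the normalisation of rho
    cancels.  Summed up to the particle-emission threshold eps_max."""
    h = eps_max / n
    num = den = 0.0
    for i in range(1, n + 1):
        eps = i * h
        w = math.exp(2.0 * math.sqrt(a * eps) - eps / T)
        num += eps * w
        den += w
    return num / den

# illustrative parameters: T in MeV, a in MeV^-1, eps_max in MeV
print(mean_excitation(2.0, 2.0, 20.0), mean_excitation(4.0, 2.0, 20.0))
```

As expected, the average excitation carried by a fragment grows with the freeze-out temperature, which is how side-feeding enters the energy balance of eq. (20).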
To compare the temperatures $T$ and $T_0$, the latter taken as $T_0=
\sqrt{E^*/a}$, we plot in fig. 1 these
temperatures as a function of $\epsilon^* = E^*/A$, the
excitation energy per particle. The dashed line
corresponds to the temperature $T_0$ and the solid
and dot-dash lines correspond to the thermodynamic temperatures
evaluated at the freeze-out volumes 6$V_0$ and 10$V_0$
respectively, $V_0$ being the normal volume of the fragmenting system.
The curve for $T_0$ is parabolic but it is
interesting to note that the caloric curves corresponding to
the different freeze-out volumes mentioned show plateaux
in the excitation energy. In the
canonical model of multifragmentation with multiplicity-dependent
freeze-out volume, Bondorf {\it et al} \cite{bon} first reported
such a plateau, reminiscent of the onset of a phase
transition in nuclei. With increase in freeze-out volume,
we find in our calculation that the temperature decreases and the
plateau gets extended in the excitation energy.
Such a dependence of caloric curve on the freeze-out volume was
also observed in a self-consistent Thomas-Fermi
calculation \cite{de1}.
In figures 2-7, we display the isotope double
ratio temperatures $T_r$ from the prompt break-up
of $^{150}Sm$ with different choices of isotope combinations
fixing the freeze-out volume at $6V_0$. The combinations are:
$(^{4}He/^{3}He)/(d/p)$, $(^{4}He/^{3}He)/(t/d)$,
$(^{7}Li/^{6}Li)/(d/p)$,
$(^{7}Li/^{6}Li)/(^{4}He/^{3}He)$,
$(^{10}Be/^{9}Be)/(^{4}He/^{3}He)$ and
$(^{13}C/^{12}C)/(^{7}Li/^{6}Li)$. They would be referred to as
$(He-d)$, $(He-t)$, $ (Li-d)$, $(Li-He)$, $(Be-He)$
and $(C-Li)$ respectively.
In all these figures, the dotted lines correspond to the
temperatures obtained from the multiplicities generated in
the barest Albergo prescription as mentioned earlier.
It is obvious that the thermodynamic temperature and the
double-ratio temperatures are identical in this case.
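For reference, a double-ratio thermometer of the Albergo type has the generic form $T=\Delta B/\ln(aR)$, where $R$ is the double yield ratio, $\Delta B$ the corresponding combination of binding energies, and $a$ a statistical (spin and mass) factor. The sketch below uses the standard constants of the $(He-t)$ combination and a hypothetical measured ratio; it is a sketch of the generic relation, not the paper's eq. (15) itself.

```python
import math

def albergo_T(R, delta_B, a):
    """Apparent temperature (MeV) from a double yield ratio R:
    T = Delta_B / ln(a * R)."""
    return delta_B / math.log(a * R)

# (He-t) thermometer: R = [Y(4He)/Y(3He)] * [Y(d)/Y(t)],
# Delta_B = B(4He) - B(3He) + B(d) - B(t) = 14.3 MeV, a ~ 1.59
R = 5.0  # hypothetical measured double ratio
print(albergo_T(R, 14.3, 1.59))  # ~6.9 MeV
```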
The dashed lines
($V_{int}$) refer to the temperatures calculated from
eq.(15) but with the inclusion of
final state interaction (nuclear+Coulomb) over the
barest scenario for the fragment generation. In all the
cases investigated, it is found that the inclusion of
fragment-fragment interaction (${\cal V}$) shifts the temperature by
nearly a constant amount at all excitations; the
amount of shift or its sign depends on the particular isotope combination
chosen. The shift is found to be negligible for double ratios
($He-d$),$(Li-He)$
and $(Be-He)$. The dot-dash lines (QS) in the
figures refer to calculations done with further inclusion of quantum statistics.
As comparison of the dashed and dot-dash curves shows,
no appreciable quantum
effects are evident except in the case of
the temperature obtained from the
double ratios $(Li-d)$. In this particular case,
it is further seen that the difference between the quantum and classical
(Maxwell-Boltzmann) calculations widens with excitation energy or with
temperature. It is normally expected that at
low density and high temperature \cite{maj}, quantum
effects would not be discernible; more precisely, as explained
earlier, it depends on whether the fugacity satisfies $\eta \ll 0$. It is seen that
the densities of the fragment species
or alternatively their fugacity $\eta$ vary in a complex
way with the temperature. When the temperature
is low, the density is extremely low and hence the value of $\eta$
is relatively large and negative; with increasing temperature
and density, the value of $\eta$ first increases and then
decreases again for the complex fragments. However for nucleons
$\eta$ increases monotonically in the
energy regime that we consider. This complex
variation of $\eta$ is reflected in the temperatures
extracted from the double ratio of yields obtained with quantum statistics.
In order to take into account effects due to side-feeding, we next
assume that the fragments are produced in particle-stable excited
states so that the ground state population from the
$\gamma$-decaying states has to be considered. Side-feeding
from particle decay is thus ignored. Kolomiets {\it et al}
\cite{kol} have shown that particle-decay effects are rather
negligible; furthermore, there is uncertainty about the cut-off
limit to the particle decay width $\Gamma$ that one should
take, which is intimately coupled with the time scale for
prompt multifragmentation. Side-feeding effects are studied
after generating the fragment yield by using eqs.(6),(7) and
(8) with flow pressure $P=0$. In these equations, $\phi$ is
the internal partition function that includes a sum extending
over the ground and $\gamma$-decaying excited states.
For the fragments considered, isotopes up to $^4He$ were taken as
billiard balls with no internal excitation, as they have no low-lying
$\gamma$-decaying states.
Similarly for $^9Be$, only the ground state was considered.
For the rest, the excited states considered are 3.563 $MeV$
for $^6Li$, 0.478 $MeV$ for $^7Li$, 3.37, 5.958,
5.960, 6.018 and 6.26 $MeV$ for $^{10}Be$, 4.439 $MeV$ for $^{12}C$,
and 3.089, 3.685 and 3.854 $MeV$ for
$^{13}C$. For other heavier nuclei, the continuum approximation is used for the
single-particle levels and the internal partition function is taken as
$\phi=\int \rho(\epsilon)e^{-\epsilon/T} d\epsilon$, where the integration
extends up to the particle emission threshold.
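For the light fragments the internal partition function reduces to a discrete sum over the ground and listed $\gamma$-decaying states, $\phi=\sum_k g_k e^{-E_k/T}$. The sketch below illustrates this for $^6Li$ and $^7Li$; the spin degeneracies $g_k=2J_k+1$ are added here for illustration and are not quoted in the text.

```python
import math

def phi_discrete(levels, T):
    """Internal partition function from the ground state plus
    gamma-decaying excited states: phi = sum_k g_k * exp(-E_k / T),
    with levels = [(E_k in MeV, g_k)]. The degeneracies g_k = 2J_k + 1
    are assumptions added for illustration."""
    return sum(g * math.exp(-E / T) for E, g in levels)

# ^6Li: ground state J=1, one excited state at 3.563 MeV (J=0)
li6 = [(0.0, 3), (3.563, 1)]
# ^7Li: ground state J=3/2, excited state at 0.478 MeV (J=1/2)
li7 = [(0.0, 4), (0.478, 2)]

T = 5.0  # MeV
print(phi_discrete(li6, T), phi_discrete(li7, T))
```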
Over and above the quantum statistical
effects, when we consider effects due to $\gamma$-feeding, it is
found from figs. 4-7 (by comparing the dot-dash and the full lines)
that these effects are very sizeable.
The $(He-d)$ and
$(He-t)$ thermometers show no side-feeding
effects (figs. 2 and 3) as these fragments are taken to have
no excited states. A dramatic effect is seen for the
$(Be-He)$ thermometer (displayed in fig. 6),
where the sharply upward going full line refers to the temperature
$T_r$ obtained this way. Bondorf {\it et al} \cite{bon} found a similar
behaviour for the $Be-He$ thermometer.
In central or near-central collisions between medium-heavy
or heavy nuclei at intermediate or higher energies, compression
and eventual decompression of the nuclear matter manifests itself
in nuclear collective flow energy which might be a significant part of
the total excitation. Collective flow influences the
multifragmentation pattern to a significant
extent \cite{pal,des}. The double-ratio isotope
thermometer may then need to be recalibrated a great deal due
to the nuclear flow. This is manifest from the
figures 2 - 7 where the full line with crosses
correspond to calculated temperatures with inclusion of flow
above the effects induced by fragment-fragment interaction, quantum
statistics and whereever applicable, $\gamma$-feeding.
The flow energy is taken to be
$25\%$ of the total excitation energy. Comparison of the full
line with the line with crosses shows that at a given
excitation energy, the temperature is always
lower or, for the same temperature, the
excitation energy is always higher. In fig. 8, all the
double-ratio isotope thermometers except for Be-He are displayed
for comparison. Except for the
flow effects, other effects are included here.
The behaviour of the temperature profiles
with excitation energy looks nearly the same,
but their magnitudes differ depending on the choice of
the thermometers.
At lower excitations, an uncertainty of $\sim 2.0$ $MeV$ in $T_r$
involving the $(Li-d)$ and $(Li-He)$ thermometers
is found which increases progressively with
excitation energy. The uncertainty involving $(He-t)$,
$(He-d)$ and $(C-Li)$ thermometers however decreases
with excitation energy, all three temperatures converging
at the highest excitation we study.
In fig. 9,
the isotope temperature corresponding to $(Li-He)$
is shown with inclusion of different magnitudes of flow.
The full and dashed curves refer to cases when half ($50\%$) and
one fourth ($25\%$) of the total excitation
have gone to the flow energy; the dotted
curve corresponds to no flow. As an illustration,
data from the ALADIN \cite{poc} and EOS \cite{hau} experiments
are displayed in the figure, which use the $(Li-He)$ and
$(He-t)$ thermometers respectively. To make contact with
the EOS data, we also display the calculated temperature
from the $(He-t)$ thermometer with 50$\%$ flow
energy (dot-dash curve). In an analysis
of the same data in Ref \cite{sam}, it was
pointed out that the data could be better explained
invoking progressive increase of the percentage of flow
energy with increasing total excitation; comparison of the
present calculations with the experimental data validates this
observation.
\subsection{Sequential binary decay}
Hot nuclear systems may release energy through binary fission-like
decay; the decay chain continues until there is no further energy for binary division.
At the end of such decay process, fragments of different species are produced
in ground states and in $\gamma$-decaying excited states,
the multiplicity depending on the initial system and excitation energy.
It has been noted earlier \cite{lop} that the frequency distribution of the
fragments follows almost a power-law distribution
and that it is not too different from the
one obtained from prompt multifragmentation at the same
excitation energy. Our calculations done at different excitation
energies also show that the inclusive
mass or charge distributions obtained from both PM
and SBD models are roughly the same.
The isotopic distributions are however seen to have significant differences.
In the SBD model, the hot nucleus prepared initially at an
excitation energy or temperature goes through a succession of decays,
the temperature of the produced fragments (assuming equilibration
before each decay) therefore also decreases as time proceeds.
In fig. 10, we display the average temperature $T_{av}$
of the produced fragments as a function of time when the
initial system $^{150}Sm$ has been prepared at
three different excitation energies, namely, $\epsilon^*
=13.5,$ 10.0 and 6.0 $MeV$. The temperature of the
fragments is calculated from $T_{av}=(10 <\epsilon^*>)^{1/2}$
where $<\epsilon^*>$ is the ensemble averaged excitation
energy per particle of the fragments
at any particular instant of time.
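The cooling curve of fig. 10 is built from the relation quoted below; a minimal sketch:

```python
import math

def T_av(eps_star):
    """Average fragment temperature (MeV) from the ensemble-averaged
    excitation energy per particle <eps*> (MeV):
    T_av = (10 * <eps*>)^(1/2), i.e. a Fermi-gas relation with
    level-density parameter a = A/10."""
    return math.sqrt(10.0 * eps_star)

# e.g. fragments with <eps*> = 2.5 MeV per nucleon
print(T_av(2.5))  # 5.0 MeV
```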
It is found that the higher the initial excitation energy of the system,
the faster is the cooling rate, as expected. An experimentalist
does not know a priori whether multifragmentation is a
one-step process (PM) or the outcome
of a sequence of binary decays. If one takes the fragmentation yields from
the SBD model as the 'experimental data',
it would be interesting to see the results
for the double ratio temperatures calculated with
the Albergo prescription as given by eq. (15).
The double ratio temperatures so calculated for the combinations
$(He-d)$, $(He-t)$,
$(Li-d)$ and $(Li-He)$
are
displayed in fig. 11. One finds that except for $(Li-d)$,
the temperatures are very weakly dependent on the initial excitation
energy and are very low ($\sim 3\> MeV$) even at the
highest excitation energy we study.
Such apparent temperatures were obtained by Ma {\it et al}
\cite{ma1} in their Albergo-type analysis of the experimental
data in $^{36}Ar+^{58}Ni$ collisions at 95$A\> MeV$.
For the $(Li-d)$ thermometer, the temperature
however rises steadily with initial excitation.
Thus the functional dependence of the temperature $T_r$ with
excitation energy obtained from the SBD and PM
models are very different; the thermometers in the SBD model also
register too low a temperature compared to the PM model.
The kinetic energy distribution of the fragments at the
end of the decay process would reflect the overall kinetic temperature
of the system. In the SBD model,
since the system proceeds through a sequence of temperatures,
the kinetic energy distribution reflects an
apparent temperature. In fig. 12,
this apparent temperature $T_{kin}$ is shown as a function of initial
excitation energy from the slope of the final energy
distributions of $p$, $d$, $^3He$ and $^4He$ produced
from $^{150}Sm$. The temperatures extracted from
the four distributions are not very different.
Closer inspection however shows that except
for the one for $^4He$, the caloric curves show broad
plateaux mimicking a liquid-gas phase transition. This arises
possibly from the changing temperature scenario and a
complicated energy dependence of the fragment partial
widths for decay in the SBD model.
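A slope temperature of the kind shown in fig. 12 can be illustrated as follows: assuming a Maxwell-Boltzmann form $dN/dE\propto\sqrt{E}\,e^{-E/T}$, a linear fit of $\ln[(dN/dE)/\sqrt{E}]$ against $E$ returns a slope $-1/T$. This is a sketch of the generic procedure, not necessarily the exact fitting prescription used in the text.

```python
import math

def slope_temperature(E, dNdE):
    """Apparent temperature from the slope of a kinetic-energy spectrum,
    assuming dN/dE ~ sqrt(E) * exp(-E/T): a linear least-squares fit of
    ln(dN/dE / sqrt(E)) against E gives slope = -1/T."""
    y = [math.log(n / math.sqrt(e)) for e, n in zip(E, dNdE)]
    n = len(E)
    mx = sum(E) / n
    my = sum(y) / n
    slope = sum((e - mx) * (yi - my) for e, yi in zip(E, y)) / \
            sum((e - mx) ** 2 for e in E)
    return -1.0 / slope

# A synthetic spectrum generated at T = 4 MeV is recovered exactly
T_true = 4.0
E = [5.0 + 2.0 * i for i in range(10)]
spec = [math.sqrt(e) * math.exp(-e / T_true) for e in E]
print(slope_temperature(E, spec))  # 4.0
```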
\section{CONCLUSIONS}
We have calculated the apparent temperatures from
several combinations of double ratios of isotope yields
in two different physical scenarios: the
one-step prompt multifragmentation and the sequential
binary decay. In the PM model, the
inclusion of final state interaction gives rise to a nearly constant
shift, relative to the Albergo prescription, in the temperature $T_r$
calculated as a function of excitation energy, the shift being different for
different isotope combinations. The
effect of quantum statistics on the apparent temperatures
is found to be nominal; the effect
of $\gamma$-feeding is very substantial and is found to be
rather dramatic for the $(Be-He)$ thermometer.
The presence of collective
flow reduces the apparent temperature $T_r$ for a given total excitation
energy. Moreover, the soft plateau generally
seen in the caloric curves obtained for
the double-ratio temperatures becomes extended with inclusion of flow
energy. The import of our calculations
is that better contact with the experimental data can be achieved
if one assumes that the excitation
energy has a collective flow component in it.
One cannot rule out sequential binary decay as a possible reaction
mechanism for the fragment yields, particularly
at not too high excitation. This prompted us to study the
caloric curves where the apparent temperatures
$T_r$ are calculated from the fragment yields in the SBD model, both from
the double-ratios and slopes of the energy
distributions of the fragments. The double
ratio temperatures generally show extended plateaux but
no subsequent rise at higher excitations; on the
other hand the caloric curves calculated from the
slopes of the energy distributions display broad shoulders
with subsequent rise at higher excitations, mimicking a first-order phase
transition. Since caloric curves obtained in both
the PM model and the SBD model show apparent signatures
of a phase transition, conclusions
regarding a phase transition in nuclear collisions require utmost
caution, and a search for additional signatures is called for.
\section{Introduction}
\label{introduction}
\setcounter{equation}{0}
The gauge structure of the Standard Model (SM) has two important
consequences in agreement with the experimental data: the proton
stability is ensured by a discrete baryon number symmetry, and there
are no tree-level flavor changing neutral currents (FCNC). On the
other hand, a mass term for the Higgs doublet is not prevented by any
symmetry in the Standard Model, leading to the well known hierarchy
and naturalness problems.
The Minimal Supersymmetric Standard Model (MSSM) solves the
naturalness problem (because supersymmetry [SUSY] ensures the
cancellation of the quadratic divergences in the Higgs self-energy),
but does not solve the hierarchy problem ({\it i.e.}, it does not
explain the large hierarchy between the $\mu$-term or the soft
supersymmetry breaking parameters and the Planck scale). Moreover, the
MSSM does not have the attractive features of the Standard Model
mentioned above: the gauge structure allows both proton decay
operators and FCNCs.
The resolution to these issues may be provided by physics beyond the
MSSM. For example, the exponential hierarchy between the soft
breaking parameters and the Planck scale is naturally produced if
supersymmetry is dynamically broken. The tree level FCNCs are
eliminated if there is a global $R$-symmetry, while radiative FCNCs
can be kept sufficiently small if supersymmetry breaking is
communicated to the MSSM fields by generation-independent gauge
interactions. The proton decay operators can be avoided by invoking a
discrete baryon number symmetry, and the $\mu$-term can be kept small
compared with the Planck scale by a discrete symmetry whose breaking
is triggered by the supersymmetry breaking. Likewise, some discrete
symmetries may be used to eliminate other unacceptable operators
associated with the new physics beyond the MSSM, such as large mass
terms for the gauge singlets required by many gauge mediated
supersymmetry breaking models.
At present, all viable supersymmetric extensions of the Standard Model
rely on the existence of some discrete symmetries which are not known
to be associated with gauge symmetries. This situation is rather
unfortunate given that currently it is not known whether the global
symmetries are preserved by quantum gravitational effects. In fact
there are some arguments that support the idea that any global
symmetry is badly violated in the presence of nonperturbative
gravitational effects \cite{wormhole}: the global charges may leak
through a wormhole, or they may disappear into a black hole which may
evaporate. In the low energy effective theory, these effects give
rise to gauge-invariant operators which break explicitly the global
symmetries. Generically, one expects these operators to have
coefficients of order one times the appropriate power of the Planck
scale. This results in a $\mu$ term which is too large by 16 orders
of magnitude, dimension-four baryon number violating operators which
have coefficients 22 orders of magnitude larger than the upper bound
set by the proton decay measurements, and other disastrous effects.
However, in certain cases where general relativity is modified at
energies significantly below the Planck scale \cite{relativ}, it is
possible to suppress the coefficients of the symmetry violating
operators. In any case, the extent of global symmetry violation
appears to be highly sensitive to the underlying theory of quantum
gravity, which is not known yet.
Hence, it would be useful to show that the global symmetries required
in the MSSM are remnants of some spontaneously broken gauge
symmetries. In string theory and M-theory there are situations where
discrete symmetries in the low energy theory are remnants of gauge
groups spontaneously broken by the string dynamics \cite{string}.
However, it is by no means clear that once the appropriate vacuum of a
viable string theory is found, the necessary discrete symmetries of
the MSSM would be preserved. Therefore, it has been often attempted
to extend the SM gauge group so that the harmful operators allowed in
the MSSM are no longer gauge invariant. The simplest extension is to
include a spontaneously broken $U(1)$ gauge symmetry, and it has been
used to avoid baryon number violating operators \cite{Weinberg} or a
large $\mu$-term \cite{mu}. Nevertheless, no chiral ({\it i.e.}
without gauge invariant mass terms) and generic ({\it i.e.} without
unnaturally small dimensionless couplings) supersymmetric model has
been constructed yet.
In a previous paper \cite{CDM} we showed that a new $U(1)$ gauge
symmetry, in conjunction with supersymmetry and the standard $SU(3)_C
\times SU(2)_W \times U(1)_Y$ gauge group, is sufficient to prevent
{\it any} mass terms (including the $\mu$-term), so that the only
fundamental dimensional parameter is the Planck scale. Although this
is a chiral supersymmetric model, it relies as much as the MSSM on
discrete symmetries to eliminate the proton decay operators. Given
that our goal is to construct a self-consistent theory which does not
invoke arbitrary assumptions about quantum gravity, we must use a
gauge symmetry to eliminate the proton decay operators, as well as
any other dimension-four and higher operators forbidden by
phenomenology.
In this paper we show that the gauge group introduced in \cite{CDM} is
in fact sufficient to replace any discrete symmetry required by the
phenomenological constraints, provided the charge assignments under
the new $U(1)$ gauge symmetry are chosen carefully. We find several
classes of phenomenologically viable models of this type. These are
chiral and generic supersymmetric models to the extent that we do not
attempt to explain the quark and lepton masses, so that we allow
Yukawa couplings as small as $\sim \lambda_e \sim 10^{-5}$.
An interesting feature of our models is that the new $U(1)$
communicates supersymmetry breaking from a dynamical supersymmetry
breaking (DSB) sector to the MSSM fields. Furthermore, unlike the
previous models in which a spontaneously broken $U(1)$ mediates
supersymmetry breaking \cite{Mohapatra, bdob}, the existence of a DSB
sector and of a sector responsible for gaugino masses are required by
the gauge anomaly cancellation conditions. As a consequence, the
superpartner spectrum is quite distinctive. We discuss the resulting
phenomenology and find some interesting cases with unexpected
experimental signatures.
The plan of the paper is as follows. In Section \ref{framework} we
discuss the theoretical and phenomenological constraints, and use them
to find a fairly exhaustive class of viable models. In Section
\ref{phenomenology} we study the phenomenology of this class of
models. We describe their low-energy spectrum and discuss the
experimental search strategy in each of the typical scenarios, by
singling out the most promising channels to look for in the upcoming
Tevatron runs. The implications of relaxing some of the
phenomenological constraints are considered in Section
\ref{conclusions}, where we also draw our conclusions.
\section{Framework and Constraints}
\label{framework}
\setcounter{equation}{0}
If the gauge group acting on the MSSM chiral superfields is $SU(3)_C
\times SU(2)_W \times U(1)_Y \times U(1)_\mu$, then the $H_u H_d$
term in the superpotential is forbidden provided the $U(1)_\mu$
charges of the two Higgs superfields satisfy $z_{H_u} + z_{H_d} \neq
0$. In order to produce a Higgsino mass, we introduce an $S H_u H_d$
term in the superpotential, where the Standard Model singlet $S$ has
$U(1)_\mu$ charge $z_S = - z_{H_u} - z_{H_d}$, and its scalar
component acquires a vev.
In order to have quark and lepton masses and mixings (we allow lepton
mixings in compliance with the recent Super-Kamiokande results
\cite{superK}), the most general Yukawa couplings of the Higgs
doublets to quarks and leptons require the $U(1)_\mu$ charges of the
quark and lepton superfields, $Q_i,\bar{U}_i, \bar{D}_i, L_i,
\bar{E}_i, {\nu_R}_i$ ($i = 1, 2, 3$ is a generational index), to be
family-independent and to satisfy \begin{eqnarray} z_Q &=& -z_{H_u} -
z_{\bar{U}} = -z_{H_d} - z_{\bar{D}} ~, \nonumber \\ z_L &=& -z_{H_u}
- z_{\nu} = -z_{H_d} -z_{\bar{E}}. \end{eqnarray} These conditions can be
relaxed if the quark and lepton mass matrices have textures produced
by a non-standard mechanism, such as Froggatt-Nielsen~\cite{FN}, but we
will not study this possibility here.
For $U(1)_\mu$ to be anomaly free, additional chiral superfields have
to be included. The $[SU(3)_C]^2 \times U(1)_\mu$, $[SU(2)_W]^2 \times
U(1)_\mu$, $[U(1)_Y]^2 \times U(1)_\mu$ anomalies from the MSSM
fields are\footnote{We use the normalization ${\rm tr}(T^c \{T^a,
T^b\})$ for the anomalies, so the $[U(1)_Y]^2 \times U(1)_\mu$
anomaly from a field with $U(1)_Y \times U(1)_\mu$ charges $(y, \, z)$
is $2y^2 z$.} \begin{eqnarray}
\label{A3}
(A3) & \equiv & \left[SU(3)_C\right]^2 \times U(1)_\mu :\quad 3\left(2
z_Q + z_{\bar{U}} + z_{\bar{D}}\right) = 3 z_S , \nonumber \\
\label{A2}
(A2) &\equiv &\left[SU(2)_W\right]^2 \times U(1)_\mu : \quad
9 z_Q+3z_L-z_S , \nonumber \\
\label{A1}
(A1) &\equiv &\left[U(1)_Y\right]^2 \times U(1)_\mu :\quad
-9 z_Q-3z_L+7z_S . \end{eqnarray} They have to be cancelled by fields which
carry both SM and $U(1)_\mu$ quantum numbers. In order not to
introduce anomalies to the SM gauge group, and to be able to decouple
at low energies after $U(1)_\mu$ is broken, these fields should be
vector-like under the SM gauge group. As a result, they can naturally
be identified with the messengers of gauge mediated supersymmetry
breaking.
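The collapse of the MSSM anomalies to the combinations quoted in (A3)-(A1) follows directly from the Yukawa charge relations; a quick numerical check with arbitrary test charges is sketched below (the charge values are illustrative, not model inputs).

```python
# Numerical check of the anomaly expressions (A3)-(A1): with the Yukawa
# charge relations, the MSSM anomalies collapse to combinations of
# z_Q, z_L and z_S. The inputs z_Q, z_L, z_Hu, z_Hd are free test values.
def mssm_anomalies(zQ, zL, zHu, zHd):
    zS = -zHu - zHd                  # from the S H_u H_d coupling
    zU = -zHu - zQ                   # Yukawa charge relations
    zD = -zHd - zQ
    zE = -zHd - zL
    # [SU(3)_C]^2 x U(1)_mu: 3 generations of (2 z_Q + z_U + z_D)
    A3 = 3 * (2 * zQ + zU + zD)
    # [SU(2)_W]^2 x U(1)_mu: doublets weighted by color multiplicity
    A2 = 9 * zQ + 3 * zL + zHu + zHd
    # [U(1)_Y]^2 x U(1)_mu: 2 y^2 z summed over all Weyl fermions
    A1 = 2 * (18 * (1/6)**2 * zQ + 9 * (2/3)**2 * zU + 9 * (1/3)**2 * zD
              + 6 * (1/2)**2 * zL + 3 * zE
              + 2 * (1/2)**2 * (zHu + zHd))
    return A3, A2, A1, zS

zQ, zL = 0.7, -1.3
A3, A2, A1, zS = mssm_anomalies(zQ, zL, 2.0, 0.4)
assert abs(A3 - 3 * zS) < 1e-9
assert abs(A2 - (9 * zQ + 3 * zL - zS)) < 1e-9
assert abs(A1 - (-9 * zQ - 3 * zL + 7 * zS)) < 1e-9
print("anomaly identities verified")
```

Since the identities are algebraic, they hold for any choice of the four input charges.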
The masses of the messengers are induced by a $X \phi \bar{\phi}$ term
in the superpotential, where $\phi$, $\bar{\phi}$ represent the
messenger fields, $X$ is a SM singlet, and their $U(1)_\mu$ charges
are related by $z_\phi +z_{\bar{\phi}}=-z_X$.
In order to generate the soft supersymmetry breaking masses for the
MSSM fields through gauge mediation, the $X$ superfield should have
both scalar and $F$-type vevs with $\VEV{F_X}/\VEV{X} \sim 10^4-10^5$
GeV, and hence cannot be identified with the $S$ field (otherwise it
would give too large a $B$ term for the Higgs sector). The simplest way
to have a (local) minimum in which $S$ and $X$ obtain the desired vevs
is to have only one $X$ field which couples to all messengers, and
introducing another SM singlet $N$, with the superpotential in
Ref.~\cite{CDM}, \be
\label{WSXN}
W= f X \phi \bar{\phi} + \frac{\lambda}{2} X N^2 - \frac{\epsilon}{2}
S N^2 + \kappa S H_u H_d \, . \end{equation} Phenomenological constraints require
$\lambda^{3/2} < \epsilon \ll \lambda \ll f \sim 1$ \cite{CDM}. For
$\kappa > \sqrt{\lambda^2 +\epsilon^2}$, there is a desired minimum in
which all $S$, $X$, and $N$ fields obtain vevs, after they receive
negative masses squared of their scalar components from the DSB sector
\cite{CDM}. This choice of superpotential imposes the following
relation between the $U(1)_\mu$ charges of $S,\, X,$ and $N$ fields \be
\label{ZSXN}
z_S =z_X=-2 z_N. \end{equation} There are two other possible terms in the
superpotential, which are allowed by the gauge symmetries, $f' S \phi
\bar{\phi}$ and $\kappa' X H_u H_d$. The minimum will not be affected
if the $\kappa'$ coupling is small. The first term contributes to the
messenger masses and the second term gives extra contribution to the
$B$ term of the Higgs sector. We assume that the couplings $f'$ and
$\kappa'$, if they exist, are small so that the messenger masses
receive dominant contributions from $X$ and the desired minimum is not
destabilized.
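The charge relation of eq.~(\ref{ZSXN}) just expresses $U(1)_\mu$ neutrality of each term in the superpotential; a minimal check with $z_N$ and $z_{H_u}$ as free inputs:

```python
# Each term of the superpotential W must be neutral under U(1)_mu.
# The X N^2 and S N^2 couplings then force z_S = z_X = -2 z_N, and
# S H_u H_d fixes z_Hd in terms of z_S and z_Hu.
def solve_charges(zN, zHu):
    zX = -2 * zN               # X N^2 neutral
    zS = -2 * zN               # S N^2 neutral
    zHd = -zS - zHu            # S H_u H_d neutral
    return {'S': zS, 'X': zX, 'N': zN, 'Hu': zHu, 'Hd': zHd}

z = solve_charges(1.0, 0.5)
assert z['X'] + 2 * z['N'] == 0
assert z['S'] + 2 * z['N'] == 0
assert z['S'] + z['Hu'] + z['Hd'] == 0
assert z['S'] == z['X'] == -2.0
print(z)
```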
For a vector-like pair of messengers $\phi, \bar{\phi}$ with $SU(3)_C$
index $T_{3\phi}$ $(=T_{3\bar{\phi}})$ (normalized to 1/2 for the
fundamental representation), $SU(2)_W$ index $T_{2\phi}$, and $U(1)_Y$
charges $\pm y_\phi$, the contributions to the anomalies (A3)--(A1)
are \begin{eqnarray} &(A3)& \quad 2T_{3\phi}(z_\phi+z_{\bar{\phi}}) = -2T_{3\phi}
z_X, \nonumber \\ &(A2)& \quad 2T_{2\phi}(z_\phi+z_{\bar{\phi}}) =
-2T_{2\phi} z_X, \nonumber \\ &(A1)& \quad
2y_\phi^2(z_\phi+z_{\bar{\phi}}) = -2y_\phi^2 z_X . \end{eqnarray} A messenger
field, $a$, which is real under SM with $U(1)_\mu$ charge $-z_X/2$ can
obtain its mass from the vev of $X$ without its conjugate partner. In
this case, its contributions to (A3)--(A1) are $-T_{3a} z_X$, $-T_{2a}
z_X$, and $-y_a^2 z_X$, respectively.
To cancel the anomalies coming from the MSSM sector [eq.~(\ref{A1})],
the messengers have to satisfy \begin{eqnarray}
\label{cond3}
3z_S - \sum_{r_3} T_{3r_3} z_S =0, \\
\label{cond2}
9z_Q +3z_L-z_S-\sum_{r_2} T_{2r_2} z_S =0, \\
\label{cond1}
-9z_Q-3z_L+7z_S-\sum_{r_1} y_{r_1}^2 z_S =0, \end{eqnarray} where $r_i$ runs
over all messenger representations (counting the SM vector-like pair
separately) under $SU(3)_C$, $SU(2)_W$, and $U(1)_Y$ respectively.
The gauge mediated contributions to the soft masses of the MSSM fields
transforming under $SU(3)_C$, $SU(2)_W$, and $U(1)_Y$ are
proportional to the messenger multiplicity factors \be \Delta \beta_3
\equiv \sum_{r_3} T_{3r_3}, \quad \Delta \beta_2 \equiv \sum_{r_2}
T_{2r_2}, \quad \Delta \beta_1 \equiv \sum_{r_1} y_{r_1}^2, \end{equation} which
are just the changes of the one-loop $\beta$-function coefficients of
the corresponding gauge groups due to the messenger fields. From
eq.~(\ref{cond3}) we see that \be
\label{b3}
\Delta \beta_3 = \sum_{r_3} T_{3r_3}=3 , \end{equation} which means the messenger
sector should either contain three pairs of {\bf 3} and $\bf \bar{3}$,
or one {\bf 8} under $SU(3)_C$. Combining eqs.~(\ref{cond2}) and
(\ref{cond1}) we obtain another constraint on the messenger sector, \be
\label{b2plusb1}
\Delta \beta_2+\Delta \beta_1 = \sum_{r_2} T_{2r_2} +\sum_{r_1}
y_{r_1}^2 =6, \end{equation} which limits $\Delta \beta_2$ and $\Delta \beta_1$
to a discrete set of choices.
The only possible messengers which can satisfy eqs.~(\ref{b3}) and
(\ref{b2plusb1}) (and do not cause the SM gauge couplings to blow up
below the Planck scale) are the ones transforming under $SU(3)_C
\times SU(2)_W$ as ({\bf 3, 2}), ({\bf 3,1}), ({\bf 8,1}), ({\bf
1,2}), ({\bf 1,3}), ({\bf 1,1}) and their conjugates. If they have
arbitrary hypercharges, then in general they cannot decay into MSSM
fields. They will be stable and form bound states with fractional
electric charges, which may cause cosmological problems unless a late
period of inflation is incorporated. To avoid that, the hypercharges
of the messenger fields are fixed up to additive integers by the
hypercharges of the MSSM fields. Imposing the conditions (\ref{b3})
and (\ref{b2plusb1}), we find that the messenger sector can only
consist of fields among $q=({\bf 3, 2}, +1/6)$, $\bar{u}=({\bf
\bar{3}, 1}, -2/3)$, $\bar{d}= ({\bf \bar{3}, 1}, +1/3)$, $a=({\bf
8,1}, 0)$, $l=({\bf 1, 2}, -1/2)$, $w=({\bf 1, 3}, 0)$,
$\bar{e}=({\bf 1, 1}, +1)$, and their conjugates. There are 16
possible combinations with four different sets of $(\Delta \beta_3,\;
\Delta \beta_2,\; \Delta \beta_1)$, which are shown in
Table~\ref{TableMess}.
\begin{table}[t]
\centering \renewcommand{\arraystretch}{1.5}
\begin{tabular}{|c| |c|c|c|c|c|c|c| |c|c|c|}\hline
Model & $d\overline{d}$ & $u\overline{u}$ & $q\overline{q}$ & $a$ &
$l\overline{l}$ & $w$ & $e\overline{e}$ & $\Delta\beta_3$ &
$\Delta\beta_2$ & $\frac{3}{5}\Delta\beta_1$ \\ \hline \hline 1a &3
&-- &-- &-- &1 &-- &1 &3 &1 &3 \\ \hline 1b &2 &1 &-- &-- &1 &--
&-- &3 &1 &3 \\ \hline 1c &-- &-- &-- &1 &1 &-- &2 &3 &1 &3 \\
\hline \hline 2a &3 &-- &-- &-- &2 &-- &-- &3 &2 &2.4 \\ \hline 2b &3
&-- &-- &-- &-- &1 &1 &3 &2 &2.4 \\ \hline 2c &2 &1 &-- &-- &--
&1 &-- &3 &2 &2.4 \\ \hline 2d &-- &-- &-- &1 &2 &-- &1 &3 &2
&2.4 \\ \hline 2e &-- &-- &-- &1 &-- &1 &2 &3 &2 &2.4 \\
\hline\hline 3a &3 &-- &-- &-- &1 &1 &-- &3 &3 &1.8 \\ \hline 3b
&1 &-- &1 &-- &-- &-- &1 &3 &3 &1.8 \\ \hline 3c &-- &1 &1 &--
&-- &-- &-- &3 &3 &1.8 \\ \hline 3d &-- &-- &-- &1 &3 &-- &-- &3
&3 &1.8 \\ \hline\hline 4a &3 &-- &-- &-- &-- &2 &-- &3 &4 &1.2
\\ \hline 4b &1 &-- &1 &-- &1 &-- &-- &3 &4 &1.2 \\ \hline 4c &--
&-- &-- &1 &2 &1 &-- &3 &4 &1.2 \\ \hline 4d &-- &-- &-- &1 &--
&2 &1 &3 &4 &1.2 \\ \hline
\end{tabular}
\parbox{5.5in}{
\caption{ Possible number of messenger representations, and the
corresponding contributions to the gauge coupling beta functions. The
factor of 3/5 in front of $\Delta\beta_1$ corresponds to the $SU(5)$
normalization of the hypercharge.
\label{TableMess}}}
\end{table}
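The entries of Table 1 can be checked against the constraints $\Delta\beta_3=3$ and $\Delta\beta_2+\Delta\beta_1=6$ by summing the per-messenger contributions to the one-loop $\beta$-function coefficients. The sketch below spells these contributions out; it verifies individual table rows and makes no claim about the additional phenomenological conditions that single out the 16 models.

```python
from fractions import Fraction as F

# Per-messenger contributions (Delta_beta3, Delta_beta2, Delta_beta1);
# vector-like pairs are counted as a pair, a and w are real irreps.
MESS = {
    'dd': (F(1), F(0), F(2, 3)),   # (3,1,+1/3) + conjugate
    'uu': (F(1), F(0), F(8, 3)),   # (3,1,-2/3) + conjugate
    'qq': (F(2), F(3), F(1, 3)),   # (3,2,+1/6) + conjugate
    'a':  (F(3), F(0), F(0)),      # (8,1,0)
    'll': (F(0), F(1), F(1)),      # (1,2,-1/2) + conjugate
    'w':  (F(0), F(2), F(0)),      # (1,3,0)
    'ee': (F(0), F(0), F(2)),      # (1,1,+1) + conjugate
}

def betas(content):
    """Sum the beta-function shifts for a messenger content given as
    {name: multiplicity}; the constraints require Delta_beta3 = 3 and
    Delta_beta2 + Delta_beta1 = 6."""
    b3 = b2 = b1 = F(0)
    for name, n in content.items():
        d3, d2, d1 = MESS[name]
        b3 += n * d3; b2 += n * d2; b1 += n * d1
    return b3, b2, b1

# Model 1a of Table 1: 3 x dd + 1 x ll + 1 x ee
b3, b2, b1 = betas({'dd': 3, 'll': 1, 'ee': 1})
assert (b3, b2, b1) == (3, 1, 5) and b2 + b1 == 6
# Model 3c: 1 x uu + 1 x qq
b3, b2, b1 = betas({'uu': 1, 'qq': 1})
assert (b3, b2, b1) == (3, 3, 3)
print("Table 1 entries consistent with the two constraints")
```

Note that the $\Delta\beta_1$ values here are in hypercharge units; the table quotes $(3/5)\Delta\beta_1$ in the $SU(5)$ normalization.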
Already from the above simple constraints, we can see that there are
only four possible combinations of the gauge mediated contributions to
the soft masses of the MSSM fields. In particular for the SM gaugino
masses, which only receive masses from gauge mediation, their ratios
are fixed to these four cases, independent of the assumption that
there are no states with fractional electric charges. If the
$U(1)_\mu$ $D$ term and the other contributions are small compared to
the gauge mediated contributions, then the complete sparticle spectrum
is determined to a large extent in these four cases. For larger
$U(1)_\mu$ $D$ term contributions, we also need to know the specific
$U(1)_\mu$ charges of the MSSM fields, in order to predict the scalar
superpartner spectrum.
In addition to (\ref{cond3})--(\ref{cond1}), the $U(1)_\mu$ charges
also have to satisfy the $U(1)_Y\times [U(1)_\mu]^2$, $U(1)_\mu$, and
$[U(1)_\mu]^3$ anomaly cancellation conditions. In general, the
latter two anomalies are not cancelled by the combination of the MSSM
and messenger sector. Therefore, some fields from the DSB sector have
to carry $U(1)_\mu$ charges, so that $U(1)_\mu$ can communicate
supersymmetry breaking to both the messenger sector and to the MSSM
chiral superfields. It is remarkable that the existence of the three
sectors (MSSM, messenger and DSB) is required by the mathematical
consistency of the theory (namely the anomaly cancellation
conditions). We consider this situation an improvement compared with
the original gauge-mediated models \cite{DNS,dnns} in which the three
different sectors are introduced only for phenomenological reasons.
If the DSB sector dynamics does not break $U(1)_\mu$, then its
contributions to the $U(1)_\mu$ and $[U(1)_\mu]^3$ anomalies can be
represented by low energy effective composite degrees of freedom {\it
\`a la} 't~Hooft~\cite{tHooft}. The simplest example is the 3-2 model
\cite{ADS,DNS}, where after $SU(3)$ becomes strong and breaks
supersymmetry, there is one light field charged under the unbroken
``messenger'' $U(1)$. Other DSB models have a different number of
light composite fields with various $U(1)_\mu$ charge ratios. For
simplicity, in searching for solutions, we restrict ourselves to the
cases with no more than 2 extra such SM neutral and $U(1)_\mu$ charged
composite fields from the DSB sector. A renormalizable and calculable
example of a DSB model which gives rise to two light $U(1)_\mu$
charged composite fields is the $SU(4)\times SU(3)$ model
\cite{PST,AMM,CDM}. A brief description of the model and its
$U(1)_\mu$ charge assignments is presented in Appendix~A.
There are several additional constraints we impose when we search for
models. We allow the right-handed neutrinos to acquire Majorana
masses, so the $U(1)_\mu$ charges of the right-handed neutrinos have
to be $z_\nu=-z_S/2$ or $-z_N/2$ if they receive masses from $S$ or
$N$ vevs. Note that we avoid $z_\nu=0$ because in that case the field
content would not be chiral: the right-handed neutrinos would be gauge
singlets, and a Planck scale mass for $L_i$ and $H_u$ would be
potentially induced. For $z_\nu =-z_S/2 (=z_N)$, the operators $N L_i
H_u$ are gauge invariant, and give rise to the bilinear $R$ parity
violating terms after $N$ acquires a vev. The phenomenological
constraints on these bilinear terms (e.g., from flavor changing
neutral currents) require the couplings of the $N L_i H_u$
interactions to be very small. We will therefore only concentrate on
the case $z_\nu =-z_N/2$. In this case we will find that $R$ parity
conservation is an automatic consequence of the gauge symmetry.
We are free to choose $z_S > 0$ (note that $z_S \neq 0$ to avoid a
large $\mu$ term), which implies that the $U(1)_\mu$ $D$ term is
positive. We will require the $U(1)_\mu$ charges for ordinary quarks
and leptons to be non-negative, so that they do not receive negative
masses squared from the $U(1)_\mu$ $D$ term. This may not be
necessary if the positive contributions from gauge mediation are
larger than the negative $D$ term contributions. However, the squarks
and sleptons receive $D$ term masses at a higher scale, so the SM
gauge group may be broken before the gauge mediated contributions can
turn on. Therefore, we do not search for models with negative quark
or lepton charges.
Finally, if the messenger fields do not couple to the MSSM fields,
they are stable. For typical values of the messenger masses, they will
overclose the universe \cite{DGP}, unless diluted by a late period of
inflation. We therefore require that the $U(1)_\mu$ charges allow the
messenger fields to couple to the MSSM fields so that the messenger
fields can decay into MSSM fields before nucleosynthesis. This
requires the relevant matter-messenger couplings to be suppressed by
no more than one power of the Planck mass \cite{DGP}. At the same
time, the matter-messenger interactions which can induce too fast
proton decays should be forbidden (including the lepton number
conserving decays to gravitinos~\cite{Choi}).
The $U(1)_\mu$ charges of the MSSM fields can be expressed in terms of
the 4 charges, $z_Q$, $z_L$, $z_{H_u}$, and $z_S$, from the
requirements of the MSSM superpotential interactions. The Majorana
masses of the right-handed neutrinos impose a relation among $z_L$,
$z_{H_u}$, and $z_S$ ($-z_L-z_{H_u}=z_\nu=z_S/4$). Among the anomaly
conditions (\ref{cond3})--(\ref{cond1}), only 2 combinations have been
used. The other one, which can be taken as (\ref{cond2}), gives
another constraint among $z_Q$, $z_L$, and $z_S$ for each choice of
$\Delta \beta_2$, \be 9z_Q +3z_L -(1+\Delta\beta_2)z_S=0\, . \end{equation} We
choose the overall charge normalization by fixing $z_S$. The
$U(1)_\mu$ charges of the MSSM fields then depend only on one
independent charge, for example $z_Q$, and its range is limited by the
requirement that the quark and lepton $U(1)_\mu$ charges are
non-negative. The $U(1)_\mu$ charges of the MSSM fields as a function
of $z_Q$, and the allowed range for $z_Q$ for each case of
$\Delta\beta_2$ are shown in Tables~\ref{MSSMcharges} and
\ref{zQrange}, respectively.
\begin{table}[h]
\renewcommand{\arraystretch}{1.8} \centering
\begin{tabular}{|c|c|}\hline
$Q_i$ & $z_Q$ \\ \hline $\bar{U}_i$ & $-4z_Q+5 +4
\left(\Delta\beta_2-2\right)/3$ \\ \hline $\bar{D}_i$ & $ 2z_Q-1 -4
\left(\Delta\beta_2-2\right)/3$ \\ \hline $L_i$ & $-3z_Q+4 +4
\left(\Delta\beta_2-2\right)/3$ \\ \hline $\bar{E}_i$ & $ 6z_Q-5 -8
\left(\Delta\beta_2-2\right)/3$ \\ \hline $\nu_i$ & $1$ \\ \hline
$H_u$ & $ 3z_Q-5 -4 \left(\Delta\beta_2-2\right)/3$ \\ \hline $H_d$ &
$-3z_Q+1 +4 \left(\Delta\beta_2-2\right)/3$ \\ \hline
\end{tabular}
\caption{$U(1)_\mu$ charges of the MSSM fields in terms of $z_Q$, with
the normalization $z_S=4$.}
\label{MSSMcharges}
\end{table}
\begin{table}[ht]
\centering \renewcommand{\arraystretch}{1.8}
\begin{tabular}{|c|ccccc|} \hline
$\Delta\beta_2=1$ & $14/36$ & $\leq$ & $z_Q$ & $\leq$ & $32/36$ \\
\hline $\Delta\beta_2=2$ & $30/36$ & $\leq$ & $z_Q$ & $\leq$ & $45/36$
\\ \hline $\Delta\beta_2=3$ & $46/36$ & $\leq$ & $z_Q$ & $\leq$ &
$57/36$ \\ \hline $\Delta\beta_2=4$ & $66/36$ & $\leq$ & $z_Q$ &
$\leq$ & $69/36$ \\ \hline
\end{tabular}
\caption{The range of $z_Q$ for all MSSM quark and lepton charges
being non-negative, normalizing to $z_S=4$.}
\label{zQrange}
\end{table}
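As an illustrative cross-check (not part of the derivation), the allowed $z_Q$ intervals of Table~\ref{zQrange} follow directly from the charge formulas of Table~\ref{MSSMcharges}. A minimal sketch in exact rational arithmetic, assuming only those formulas:

```python
from fractions import Fraction as F

def charges(zQ, db2):
    # MSSM U(1)_mu charges from Table "MSSMcharges", normalization z_S = 4
    d = F(4, 3) * (db2 - 2)
    return {"Q": zQ, "Ubar": -4*zQ + 5 + d, "Dbar": 2*zQ - 1 - d,
            "L": -3*zQ + 4 + d, "Ebar": 6*zQ - 5 - 2*d,
            "nu": F(1), "Hu": 3*zQ - 5 - d, "Hd": -3*zQ + 1 + d}

# The anomaly relation 9 z_Q + 3 z_L - (1 + Delta beta_2) z_S = 0
# holds identically for any z_Q
for db2 in (1, 2, 3, 4):
    for zQ in (F(1, 2), F(1), F(3, 2)):
        z = charges(zQ, db2)
        assert 9*z["Q"] + 3*z["L"] - (1 + db2)*4 == 0

# Reproduce Table "zQrange": the z_Q interval where all quark and
# lepton charges are non-negative
for db2, lo, hi in [(1, F(14, 36), F(32, 36)), (2, F(30, 36), F(45, 36)),
                    (3, F(46, 36), F(57, 36)), (4, F(66, 36), F(69, 36))]:
    d = F(4, 3) * (db2 - 2)
    lower = max(F(0), (1 + d)/2, (5 + 2*d)/6)   # Dbar >= 0, Ebar >= 0
    upper = min((5 + d)/4, (4 + d)/3)           # Ubar >= 0, L >= 0
    assert (lower, upper) == (lo, hi)
```

The lower bounds come from $\bar{D}$ and $\bar{E}$, the upper bounds from $\bar{U}$ and $L$.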
For the cases in Table~\ref{TableMess} and ``reasonably simple''
$U(1)_\mu$ charges in the corresponding allowed range, we search
numerically for the messenger and (DSB sector composite) singlet
charges which satisfy the rest of the anomaly constraints, allow
messengers to decay fast enough, and forbid too rapid proton
decay. Some of the solutions satisfying all the constraints are listed
in Table~\ref{Tablesolutions}.
\begin{table}[tp]
\centering \renewcommand{\arraystretch}{1.5}
\begin{tabular}{||c||c|r|r|r|r||} \hline\hline
Fields & \multicolumn{5}{c||}{Models} \\ \cline{2-6} &
1a(i) & 2a(i) & 2a(ii)& 2a(iii)& 2a(iv)\\ \hline\hline $Q$
& $2$ & $8$ & $ 1$ & $ 10$ & $ 11$ \\ \hline
$\bar{U}$ & $3$ & $13$ & $ 1$ & $ 5$ & $ 1$ \\
\hline $\bar{D}$ & $5$ & $7$ & $ 1$ & $ 11$ & $
13$ \\ \hline $L$ & $2$ & $12$ & $ 1$ & $ 6$
& $ 3$ \\ \hline $\bar{E}$ & $5$ & $3$ & $ 1$ & $
15$ & $ 21$ \\ \hline $\nu_R$ & $3$ & $ 9$ & $ 1$
& $ 9$ & $ 9$ \\ \hline $H_u$ & $-5$ & $-21$ &
$-2$ & $ -15$ & $-12$ \\ \hline $H_d$ & $-7$ &
$-15$ & $-2$ & $ -21$ & $-24$ \\ \hline \hline $S,X$ &
$12$ & $36$ & $ 4$ & $ 36$ & $ 36$ \\ \hline $N$ &
$-6$ & $-18$ & $-2$ & $ -18$ & $-18$ \\ \hline
$\bar{d}_i$ & $-4$ & $ -2$ & $ 0$ & $ 2$ & $ 4$ \\
\hline $d_i$ & $-8$ & $-34$ & $-4$ & $ -38$ &
$-40$ \\ \hline $l_i$ & $-1$ & $12$ & $ 1$ & $ 6$
& $ 3$ \\ \hline $\bar{l}_i$ & $-11$ & $-48$ & $-5$ & $
-42$ & $-39$ \\ \hline $\bar{e}$ & $-4$ & -- & -- &
-- & -- \\ \hline $e$ & $-8$ & -- & --
& -- & -- \\ \hline $b_1$ & $-12$ & $-39$ &
$-4$ & $ -36$ & $-36$ \\ \hline $b_2$ & $18$ & $90$
& $10$ & $ 90$ & $ 90$ \\ \hline\hline & $QL\bar{d},\,
\bar{U}\bar{E}d, $ & \multicolumn{4}{c||}{$\bar{E} H_d
l,\; \nu_R H_u l,$} \\ Messenger & $ \bar{D}\nu_R d, \,
LL\bar{e},$ & \multicolumn{4}{c||}{$X L \bar{l},$} \\
decay & $\bar{E}\nu_R e,\, NQ\bar{D}l, $ &
\multicolumn{4}{c||}{$N Q L \bar{d},$} \\ operators &
$NL\bar{E}l,\, \bar{E} \nu_R H_d l,$ &
\multicolumn{4}{c||}{$N \bar{U} \bar{D} \bar{d},$} \\ & $
XN H_u l,\, \nu_R \nu_R H_u l$ & \multicolumn{4}{c||}{$Q
\nu_R H_d \bar{d}$} \\ \hline
\end{tabular}
\caption{Solutions for the $U(1)_\mu$ charges (normalized to
integers), which satisfy all the constraints. In models 1a(i) and
2a(ii) we find many other possible solutions with different messenger
charges, including different charges for the different $d_i$'s and
$l_i$'s. Here we only list one example for each case.}
\label{Tablesolutions}
\end{table}
The fields $b_{1,2}$ are the light composite superfields from the DSB
sector which carry $U(1)_\mu$ charges. Note that mass terms involving
$b_{1,2}$ and $S$ or $X$ can be generated only by higher dimensional
operators involving the fundamental fields from the DSB, and therefore
are Planck-scale suppressed. We find solutions in only a few out of
the 16 cases because of the restriction that there are no more than
two $b_i$ superfields. If we relax this simplifying assumption and
allow more singlets, there could be solutions in other cases as well.
The low energy MSSM spectrum and phenomenology depend mainly on
$\Delta\beta_{1, 2, 3}$ and the $U(1)_\mu$ charges of the MSSM fields.
They have little dependence on the exact compositions and charge
assignments of the messenger and DSB sectors as long as the mixings
between the MSSM fields and messenger fields are small. We will
discuss the phenomenology in the next section.
\section{Phenomenology}
\label{phenomenology}
\setcounter{equation}{0}
\subsection{Particle spectrum}
First we shall briefly review the parameter space of this class of
models\footnote{For more details, we refer the reader to
Ref.~\cite{CDM}.} and discuss the possible particle spectra arising
in each case. For the rest of this section, we shall use the
$U(1)_\mu$ charge normalization $z_S=4$ and rescale the charges in
Table~\ref{Tablesolutions} correspondingly.
The desired minimum of the potential is at $\langle H_u \rangle =
\langle H_d \rangle = 0$ (at the scale of $U(1)_\mu$ breaking) and \be
\langle N^2 \rangle ={24{\tilde m}^2\over\lambda^2+\epsilon^2}, \quad \langle
X \rangle = {\epsilon\over\lambda}\langle S \rangle, \quad \langle S^2
\rangle = {\lambda^2 \over \lambda^2+\epsilon^2} \left( {\xi^2 \over4}
+{ {\tilde m}^2\over g_\mu^2} +{12{\tilde m}^2\over\lambda^2+\epsilon^2} \right) ~.
\label{vevs}
\end{equation} The corresponding SUSY-breaking $F$ and $D$-terms are induced at
the $U(1)_\mu$ breaking scale \be M_\mu\equiv g_\mu \langle N \rangle
\simeq 2\sqrt{6} {g_\mu\over \lambda}\tilde m \quad (\gg {\tilde m}),
\label{Mmu}
\end{equation} where $g_\mu$ is the $U(1)_\mu$ gauge coupling, and are given by
\be \langle F_N \rangle =0, \quad \langle F_X \rangle =
\frac{\lambda}{2} \langle N^2 \rangle \simeq \sqrt{6} {\tilde m} \langle N
\rangle, \quad \langle F_S \rangle = -\frac{\epsilon}{2} \langle N^2
\rangle, \quad g_\mu^2 \langle D \rangle = 4 {\tilde m}^2.
\label{F}
\end{equation}
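The relations (\ref{Mmu}) and (\ref{F}) can be verified numerically from the vevs (\ref{vevs}); the parameter point below is an arbitrary illustration (the values of $\lambda$, $\epsilon$, $g_\mu$ and $\tilde m$ are assumptions, not taken from the text):

```python
import math

# Illustrative parameter point (assumed values, in GeV for mt = m-tilde)
lam, eps, g_mu, mt = 0.1, 0.001, 0.5, 100.0

# vev of N from eq. (vevs)
N = math.sqrt(24 * mt**2 / (lam**2 + eps**2))

# U(1)_mu breaking scale, eq. (Mmu): M_mu = g_mu <N> ~ 2 sqrt(6) (g_mu/lam) mt
M_mu = g_mu * N
assert abs(M_mu - 2*math.sqrt(6)*(g_mu/lam)*mt) / M_mu < (eps/lam)**2
assert M_mu > 10 * mt                  # indeed M_mu >> m-tilde for eps << lam

# F-terms, eq. (F): <F_X> = (lam/2) <N^2> ~ sqrt(6) mt <N>
F_X = 0.5 * lam * N**2
assert abs(F_X - math.sqrt(6)*mt*N) / F_X < (eps/lam)**2
F_S = -0.5 * eps * N**2                # suppressed by eps/lam relative to F_X
assert abs(abs(F_S/F_X) - eps/lam) < 1e-12
```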
The $\langle X \rangle$ and $\langle F_X \rangle$ vevs provide the
SUSY preserving and breaking masses for the messenger fields $\phi$
and $\bar{\phi}$. The gauge singlets $X$, $S$ and $N$ also get
masses. Their fermionic components mix with the $U(1)_\mu$ gaugino to
form two Dirac fermions, with masses $\sim 2\sqrt{6}(g_\mu/\lambda)\tilde m$
and $\sim4\tilde m$, respectively. The scalar components of the
singlets also mix, and the resulting mass spectrum consists of a
massless Nambu-Goldstone boson, eaten by the $U(1)_\mu$ gauge boson; a
scalar of mass $2\sqrt{6}(g_\mu/\lambda)\tilde m$, which becomes part of the
heavy gauge supermultiplet; and four light scalars with masses
$2\sqrt{6}\tilde m, 2\sqrt{6}\tilde m, 2\sqrt{3}\tilde m$ and
$2\sqrt{2}\tilde m$, correspondingly~\cite{CDM}.
Assuming $\kappa'=0$ for the moment, $\langle S \rangle$ and $\langle
F_S \rangle$ provide the $\mu$ and $B$ terms for the Higgs sector: \be
\mu (M_\mu)= \kappa \langle S \rangle \simeq 2\sqrt{3} {\kappa \over
\lambda}\tilde m \quad (\gtrsim \tilde m),
\label{mu}
\end{equation} \be B (M_\mu)= {\langle F_S\rangle \over \langle S \rangle }
\simeq -2\sqrt{3} {\epsilon\over\lambda} \tilde m \quad(|B|\ll \tilde
m).
\label{B}
\end{equation}
Below the messenger scale \be M\ \equiv\ f \langle X\rangle \simeq
2\sqrt{3}{\epsilon f\over\lambda^2}\tilde m \quad (\gg {\tilde m}),
\label{Mql}
\end{equation} the messengers are integrated out, giving rise to the usual
one-loop gauge mediation contributions to the gaugino masses: \be
M_n(M) = \Delta\beta_n{\alpha_n\over4\pi}\Lambda g\left(\Lambda/
M\right), \end{equation} where $n = 1,2,3$ corresponds to $U(1)_Y$, $SU(2)_W$ and
$SU(3)_C$, $g(x)$ is the threshold function from \cite{Martin} and \be
\Lambda \equiv {\langle F_X \rangle \over \langle X\rangle} \simeq
2\sqrt{3}{\lambda\over\epsilon}\tilde m ~.
\label{Lambda}
\end{equation} The scalar squared masses receive a $U(1)_\mu$ $D$-term
contribution and a negative contribution from the $U(1)_\mu$
mediation: \be m_{\tilde f}^2(M_\mu) \ =\ z_{f} (4-z_{f}) \tilde m^2,
\label{scalarmass}
\end{equation} in addition to the usual two-loop SM gauge mediation
contributions: \be m^2_{\tilde f}(M)={2\Lambda^2\over(4\pi)^2} \left(
\Delta\beta_3 C_3^{f}\alpha_3^2 +\Delta\beta_2 C_2^{f}\alpha_2^2
+\frac{5}{3}\Delta\beta_1 C_1^{f}\alpha_1^2 \right)
f\left(\Lambda/M\right) ~,
\label{msq}
\end{equation} where the coefficients $C_i^{f}$ are zero for gauge singlet
sfermions $\tilde f$, and $4/3$, $3/4$ and $y^2$ for fundamental
representations of $SU(3)_C$, $SU(2)_W$ and $U(1)_Y$,
correspondingly. The threshold function $f(x)$ can be found in
Ref.~\cite{Martin}.
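As a rough numerical illustration of the gaugino mass formula above, one can evaluate $M_n$ for a type-1 model. The sketch below assumes $(\Delta\beta_1,\Delta\beta_2,\Delta\beta_3)=(3,1,3)$ and representative weak-scale couplings, and approximates the threshold function $g(\Lambda/M)\simeq 1$ for $\Lambda\ll M$; these inputs are illustrative assumptions, not values fixed by the text:

```python
from math import pi

# Representative weak-scale couplings (assumed; alpha_1 is SU(5)-normalized)
alpha = {1: 0.017, 2: 0.0335, 3: 0.09}
dbeta = {1: 3, 2: 1, 3: 3}     # assumed Delta beta_n for a type-1 model
Lam = 50e3                     # Lambda = 50 TeV, cf. model 1a(i) in Table "spectra"

# One-loop gauge-mediated gaugino masses with g(Lambda/M) ~ 1
M = {n: dbeta[n] * alpha[n] / (4*pi) * Lam for n in (1, 2, 3)}

# Reversed hierarchy M_2 < M_1 < M_3, roughly M_3 : M_1 : M_2 ~ 9 : 1.5 : 1
assert M[2] < M[1] < M[3]
assert 1.3 < M[1]/M[2] < 1.7
assert 7 < M[3]/M[2] < 11
```

With these inputs one finds $M_2\simeq 130$ GeV and $M_1\simeq 200$ GeV, roughly consistent with the lightest neutralinos in the 1a(i) column of Table~\ref{spectra}.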
After imposing electroweak symmetry breaking, the parameter space of
this class of models is spanned by $\{\Lambda, M, M_\mu, \tan\beta,
{\rm sign}(\mu)\}$. However, if we allow a small coupling $\kappa' X
H_u H_d$, the conditions (\ref{mu}) and (\ref{B}) can be relaxed: \be
\mu (M_\mu)= \kappa \langle S \rangle + \kappa' \langle X \rangle
\simeq 2\sqrt{3} \left({\kappa \over \lambda}+{\kappa'\epsilon \over
\lambda^2} \right) \tilde m \quad (\gtrsim \tilde m),
\label{mu'}
\end{equation} \be B (M_\mu)= {\kappa \langle F_S\rangle + \kappa' \langle
F_X\rangle \over \kappa \langle S \rangle + \kappa' \langle X \rangle
} \simeq -2\sqrt{3}
\left({\epsilon\over\lambda}+{\kappa'\over\kappa}\right) \tilde m
\quad(|B|\lesssim \tilde m),
\label{B'}
\end{equation} so that $\tilde m$ can be traded for $\kappa'/\lambda$ and treated
as an additional free parameter. This is particularly relevant for
models with $z_{H_d} < z_{H_u}$, where it is rather difficult to
obtain proper electroweak symmetry breaking at large values of
$\tan\beta$, which are suggested by (\ref{B}). This can be easily
understood as follows. Minimization of the tree-level potential leads
to the approximate relation \be m^2_{H_d}(M_Z)-m^2_{H_u}(M_Z) \simeq
m_A^2(M_Z) \end{equation} which implies that $m^2_{H_d}(M_Z) > m^2_{H_u}(M_Z)$.
{}From eq.~(\ref{scalarmass}), however, one finds \be
m^2_{H_d}(M_\mu)-m^2_{H_u}(M_\mu) = 8 (z_{H_d}- z_{H_u}) \tilde m^2,
\end{equation} so that at the $U(1)_\mu$-breaking scale we already have
$m^2_{H_d}(M_\mu)<m^2_{H_u}(M_\mu)$. In addition, at large $\tan\beta$
the bottom and tau Yukawa couplings are enhanced and tend to further
reduce $m^2_{H_d}(M_Z)$.
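The relation $m^2_{H_d}-m^2_{H_u}=8(z_{H_d}-z_{H_u})\tilde m^2$ is an algebraic consequence of eq.~(\ref{scalarmass}) together with the identity $z_{H_u}+z_{H_d}=-4$ implied by Table~\ref{MSSMcharges}; a quick exact-arithmetic check:

```python
from fractions import Fraction as F

def msq(z):
    # U(1)_mu D-term contribution, eq. (scalarmass), in units of m-tilde^2
    return z * (4 - z)

# Higgs charges from Table "MSSMcharges" (z_S = 4); note z_Hu + z_Hd = -4
for db2 in (1, 2, 3, 4):
    d = F(4, 3) * (db2 - 2)
    for zQ in (F(1, 2), F(1), F(3, 2)):
        zHu, zHd = 3*zQ - 5 - d, -3*zQ + 1 + d
        assert zHu + zHd == -4
        # (zHd - zHu)(4 - zHd - zHu) = 8 (zHd - zHu)
        assert msq(zHd) - msq(zHu) == 8 * (zHd - zHu)
```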
The collider phenomenology of this class of models depends on the
nature and lifetime of the next-to-lightest supersymmetric particle
(NLSP). Note that our models have automatic conservation of
$R$-parity, which can be defined as (recall that we are using the
normalization $z_S=4$) \be R\ =\ (-1)^{3[z-6y(z_Q-1)]+2s}, \end{equation}
where $y$ and $z$ stand for the hypercharge and $U(1)_\mu$ charge of a
particle, and $s$ is its spin. Therefore, the NLSP can only decay to
its superpartner plus a gravitino $\tilde G$.
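One can check this $R$-parity assignment explicitly. The sketch below rescales the integer charges of model 2a(i) from Table~\ref{Tablesolutions} (where $z_S=36$) to the normalization $z_S=4$ and verifies that all SM particles are $R$-even and all superpartners $R$-odd:

```python
from fractions import Fraction as F

# Model 2a(i) charges from Table "Tablesolutions", rescaled to z_S = 4
scale = F(4, 36)
z = {f: scale*c for f, c in
     {"Q": 8, "Ubar": 13, "Dbar": 7, "L": 12, "Ebar": 3, "nu": 9,
      "Hu": -21, "Hd": -15}.items()}
y = {"Q": F(1, 6), "Ubar": F(-2, 3), "Dbar": F(1, 3), "L": F(-1, 2),
     "Ebar": F(1), "nu": F(0), "Hu": F(1, 2), "Hd": F(-1, 2)}

def R(f, s2):
    # R = (-1)^{3[z - 6y(z_Q - 1)] + 2s}, with s2 = 2s
    e = 3*(z[f] - 6*y[f]*(z["Q"] - 1)) + s2
    assert e.denominator == 1      # the exponent must be an integer
    return (-1)**(int(e) % 2)

for f in ("Q", "Ubar", "Dbar", "L", "Ebar", "nu"):
    assert R(f, 1) == +1 and R(f, 0) == -1   # SM fermions even, sfermions odd
for f in ("Hu", "Hd"):
    assert R(f, 0) == +1 and R(f, 1) == -1   # Higgs even, higgsinos odd
```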
First we discuss the mass spectrum, in order to determine which
particles are potential NLSP candidates. Below the scale $M_\mu$ there
are 6 neutralinos, for which we choose the basis $\{ \tilde B$
,$\tilde W_3$, $\tilde H_d$, $\tilde H_u$,
$\tilde\Sigma\equiv\cos\theta\tilde N+\sin\theta\tilde S'$, $\tilde
X'\}$, where $\cos^2\theta\approx 2/3$ and
$$
\tilde S'\ = \ {\lambda \tilde S + \epsilon \tilde X \over
\sqrt{\lambda^2+\epsilon^2}}, \qquad\qquad \tilde X'\ = \ {\lambda
\tilde X - \epsilon \tilde S \over \sqrt{\lambda^2+\epsilon^2}}.
$$
Here $\tilde N$ ($\tilde X$, $\tilde S$) denotes the fermionic
component of the SM singlet superfield $N$ ($X$, $S$). The neutralino
mass matrix is given by \be {\cal M}_{\tilde \chi^0}\ =\ \left(
\ba{cccccc} M_1 & 0 & -{1\over2}g'v_d & {1\over2}g'v_u & 0 & 0
\\ [2mm] 0 & M_2 & {1\over2}gv_d &-{1\over2}gv_u & 0 & 0 \\
[2mm] -{1\over2}g'v_d & {1\over2}gv_d & 0 &-\mu
&-{1\over\sqrt{6}}\kappa v_u &-{1\over\sqrt{2}}\kappa' v_u \\
[2mm] {1\over2}g'v_u & -{1\over2}gv_u &-\mu& 0
&-{1\over\sqrt{6}}\kappa v_d &-{1\over\sqrt{2}}\kappa' v_d \\
[2mm] 0 & 0 &-{1\over\sqrt{6}}\kappa v_u &-{1\over\sqrt{6}}\kappa
v_d & 0 & 4\tilde m \\ [2mm] 0 & 0
&-{1\over\sqrt{2}}\kappa' v_u &-{1\over\sqrt{2}}\kappa' v_d & 4\tilde
m & 0 \end{array} \right), \end{equation} where $v_{u, d}=\sqrt{2}\langle
H_{u,d}\rangle$. This situation resembles the next-to-minimal
supersymmetric standard model (NMSSM) \cite{NMSSM}, except that now we
have not one, but two singlet states, which are degenerate to lowest
order.
The neutral Higgs masses are the same as in the MSSM, with the
addition of two new CP-even singlet states with masses
$2\sqrt{6}\tilde m$ and $2\sqrt{2}\tilde m$, and two new CP-odd
singlet states with masses $2\sqrt{6}\tilde m$ and $2\sqrt{3}\tilde
m$. The mixing between these new states and the Higgses of the MSSM
($h^0$, $H^0$ and $A^0$) is suppressed by the small Yukawa couplings
$\kappa$ or $\kappa'$.
In Table~\ref{spectra} we list sample particle spectra for model
points in each of the cases represented in
Table~\ref{Tablesolutions}. In addition to the values of the model
parameters, for completeness we also give the corresponding ratios of
the fundamental parameters in the Lagrangian (coupling constants).
\begin{table}[h!p]
\centering \renewcommand{\arraystretch}{1.3}
\begin{tabular}{||c||c|c|c|c|c||} \hline\hline
Particle & \multicolumn{5}{c||}{Models} \\
\cline{2-6} & 1a(i) & 2a(i) & 2a(ii)& 2a(iii)&
2a(iv) \\ \hline\hline $\tilde\chi^0_1$ & 130.8 &
164 & 81 & 120 & 120 \\ \hline
$\tilde\chi^0_2$ & 202 & 268 & 134 & 120 &
120 \\ \hline $\tilde\chi^0_3$ & 400 & 724 &
507 & 126.5 & 161 \\ \hline $\tilde\chi^0_4$ &
400 & 724 & 509 & 201 & 258 \\ \hline
$\tilde\chi^0_5$ & 575 & 793 & 544 & 383 &
451 \\ \hline $\tilde\chi^0_6$ & 580 & 797 &
544 & 401 & 465 \\ \hline\hline
$\tilde\chi^+_1$ & 131.0 & 268 & 134 & 200 &
258 \\ \hline $\tilde\chi^+_2$ & 581 & 798 &
513 & 401 & 466 \\ \hline\hline $\tilde e_R $
& 253 & 262 & 248 & 131 & 155 \\ \hline
$\tilde e_L $ & 247 & 427 & 272 & 217 &
266 \\ \hline
$\tilde\tau_1 $ & 166 & 147 & 216 & 125.2 & 125 \\ \hline
$\tilde\tau_2 $ & 312 & 478 & 300 & 220 & 277 \\
\hline\hline
$\tilde g $ & 1126 & 1141 & 615 & 924 & 1134 \\ \hline
$\tilde t_1 $ & 984 & 1045 & 589 & 795 & 979 \\ \hline
$\tilde u_R $ & 1074 & 1112 & 610 & 866 & 1061 \\
\hline\hline $h^0 $ & 114 & 113 & 109 & 111 & 114
\\ \hline $H^0 $ & 379 & 487 & 177 & 339 & 454 \\
\hline\hline
$M [{\rm TeV}] $ & 500 & 200 & 100 & 200 & 200 \\ \hline
$\Lambda [{\rm TeV}] $ & 50 & 50 & 25 & 40 & 50 \\
\hline $M_\mu [{\rm TeV}]$ & 10,000 & 1,000 &10,000 & 5,000 & 2,000
\\ \hline $\tan\beta $ & 35 & 60 & 25 & 10 & 25 \\
\hline $\mu(M_\mu) $ & 602 & 862 &$-537$ & 387 & 460 \\
\hline $\tilde m $ & 100 & 182 & 156 & 30 & 30 \\
\hline\hline ${\kappa/\lambda} $ & 1.74 & 1.37 & 1.14 & 3.72 & 4.43
\\ \hline ${\epsilon/\lambda} $ & 0.0069 & 0.0126& 0.0188& 0.0026 & 0.002
\\ \hline ${\kappa'/\lambda} $ & 0.0796 & --- & --- & 1.545 & 0.713
\\ \hline\hline
\end{tabular}
\caption{ Sample particle spectra for the models in
Table~\ref{Tablesolutions}.}
\label{spectra}
\end{table}
A few comments are in order at this point. As we mentioned earlier in
this Section, models with $z_{H_d}<z_{H_u}$ (1a(i), 2a(iii) and
2a(iv)) typically require the presence of the additional coupling
$\kappa'$, in which case $\tilde m$ is an input. Otherwise, $\tilde
m$ is computed from eqs.~(\ref{B}) and (\ref{Lambda}): \be \tilde
m\ =\ \sqrt{{|B\Lambda|\over12}}. \end{equation} If $\tilde m$ is large, the
usual hierarchy between the left-handed and the right-handed sleptons
may be affected, due to the $U(1)_\mu$ contributions in
eq.~(\ref{scalarmass}). For example, in model 1a(i), where $z_E>z_L$
and $\tilde m$ is sizable, we find $m_{\tilde e_R}>m_{\tilde e_L}$,
contrary to the prediction of the minimal models \cite{dnns,DNS}. In
principle, this inverse slepton mass hierarchy is also possible for
models 2a(iii) and 2a(iv). This contribution, however, is not
important for the squarks, where the SM gauge-mediated contributions
dominate. We also find that the $\mu$ parameter is typically larger
than in the minimal gauge-mediated models, due to the negative
$U(1)_\mu$ contributions to $m_{H_u}^2$. Note the presence of the two
extra degenerate neutralinos in the spectrum. However, because of
their very small couplings, their impact on phenomenology is
negligible, unless one of them is the NLSP -- see the examples for
models 2a(iii) and 2a(iv).
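The identity $B\Lambda=-12\tilde m^2$ implicit in eqs.~(\ref{B}) and (\ref{Lambda}) can be checked at an arbitrary (assumed) parameter point:

```python
import math

lam, eps, mt = 0.1, 0.001, 100.0         # illustrative values, mt in GeV
B   = -2*math.sqrt(3) * (eps/lam) * mt   # eq. (B)
Lam =  2*math.sqrt(3) * (lam/eps) * mt   # eq. (Lambda)
# B * Lambda = -12 m-tilde^2, so m-tilde is recovered independently of eps/lam
assert abs(math.sqrt(abs(B*Lam)/12) - mt) < 1e-9
```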
Table~\ref{spectra} rather nicely illustrates all potential NLSP
candidates in our models:
\begin{enumerate}
\item
The lightest neutralino, which is mostly wino-like:
$\tilde\chi^0_1\sim \tilde W_3^0$. This situation may arise in any
one of the models with $\Delta\beta_2=1$, where at the {\em weak}
scale we find the reversed gaugino mass hierarchy
$M_3:M_1:M_2\sim9:1.5:1$. Since $M_2$ is also the soft mass of the
wino-like chargino, one faces the dilemma of deciding which one is
actually the NLSP: the chargino or the neutralino. (Quite recently,
the case of $M_2<M_1$ was discussed in the framework of
supergravity-mediated (SUGRA) models, where the soft masses arise
through the super-conformal anomaly \cite{Randall,GLMR}.) At
tree-level, one can expand the lightest chargino and neutralino mass
eigenvalues in terms of $1/|\mu|$: \begin{eqnarray} m_{\tilde\chi^+_1}&=&M_2
-{M_W^2\over\mu }s_{2\beta} -{M_W^2\over\mu^2}M_2 + {\cal
O}({1\over\mu^3}), \\ m_{\tilde\chi^0_1}&=&M_2 -{M_W^2\over\mu
}s_{2\beta} -{M_W^2\over\mu^2}M_2
-{M_W^2\over\mu^2}{M_W^2\over M_1-M_2}t_W^2s_{2\beta}^2 + {\cal
O}({1\over\mu^3}), \end{eqnarray} where $t_W\equiv\tan\theta_W$ and
$s_{2\beta}\equiv\sin 2\beta$.\footnote{Our result for both the
chargino and neutralino differs from that of
Refs.~\cite{Randall,GLMR}.} We find that the mass splitting occurs
only at order $1/|\mu^2|$ and the chargino is always heavier at
tree-level: \be \Delta m_\chi\ \equiv \
m_{\tilde\chi^+_1}-m_{\tilde\chi^0_1} \ = \ {M_W^2\over \mu^2}
{M_W^2\over M_1-M_2}t^2_W s^2_{2\beta} + {\cal O}({1\over\mu^3}).
\label{delta_mchi}
\end{equation} Notice the additional suppression at large $\tan\beta$ due to the
factor $\sin^22\beta \sim 4/\tan^2\beta$, in which case the next order
terms may be numerically important as well. Typical values of the
parameters result in a mass splitting $\Delta m_\chi$ in the MeV
range. In any case, we see that in order to correctly determine the
nature of the NLSP, it is necessary to account for the one-loop
gaugino mass corrections \cite{inomasses}. Including the full
one-loop mass corrections to the chargino and neutralino matrices
\cite{BMPZ}, we find that the neutralino is indeed the NLSP, and the
mass splitting is in fact much larger than predicted by
eq.~(\ref{delta_mchi}). We illustrate this result in
Fig.~\ref{delta_m}.
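Evaluating eq.~(\ref{delta_mchi}) at an illustrative parameter point (the inputs below are assumptions, loosely inspired by model 1a(i)) confirms the MeV-scale tree-level splitting:

```python
import math

# Assumed weak-scale inputs, in GeV where dimensionful
MW, mu, M1, M2, tanb = 80.4, 600.0, 200.0, 133.0, 35.0
sW2 = 0.231
tW2 = sW2 / (1 - sW2)                    # tan^2(theta_W)
s2b = 2*tanb / (1 + tanb**2)             # sin(2 beta)

# Tree-level splitting, eq. (delta_mchi), in GeV
dm = (MW**2/mu**2) * (MW**2/(M1 - M2)) * tW2 * s2b**2
assert 1e-4 < dm < 1e-2                  # i.e. a few MeV at tree level
```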
\begin{figure}[ht]
\epsfysize=3.5in \epsffile[-40 220 320 600]{approx.ps}
\begin{center}
\parbox{5.5in}{
\caption[] {\small The mass splitting $\Delta m_\chi\equiv
m_{\tilde\chi^+_1}-m_{\tilde\chi^0_1}$ at tree-level (dotted) and
one-loop (solid), versus the chargino mass $m_{\tilde\chi^+_1}$, which
is varied by varying $\Lambda$. The dot-dashed line represents the
exact one-loop correction $\delta\Delta m_\chi$, and the dashed line
is the result from the approximation in eq.~(\ref{approximation}).
\label{delta_m}}}
\end{center}
\end{figure}
Even though the chargino and neutralino mass corrections themselves
are dominated by the squark and Higgs loops, we have checked that the
renormalization of the {\em mass splitting} is due almost entirely to
the gauge boson loops. For small chargino or neutralino mixing, and
keeping only the gauge boson contributions, we can derive the
following approximate formula for the one-loop correction to $\Delta
m_\chi$: \begin{eqnarray} \delta\Delta m_\chi &\equiv& \Delta
m_\chi^{1-loop}-\Delta m_\chi^{tree} \nonumber \\
&=&{g^2\over8\pi^2}
\biggl[2c_W^2B_0(M_2,M_2,M_Z)+2s_W^2B_0(M_2,M_2,0)-2B_0(M_2,M_2,M_W)
\nonumber \\
&-&c_W^2B_1(M_2,M_2,M_Z)-s_W^2B_1(M_2,M_2,0)+B_1(M_2,M_2,M_W)
\biggr]M_2,
\label{approximation}
\end{eqnarray} with the functions $B_0$ and $B_1$ defined as in Appendix B of
Ref.~\cite{BMPZ}. Notice that this correction is purely finite and
cannot be accounted for in a leading-log decoupling scheme. Since the
dominant effect is from the gauge boson loops only, the result
(\ref{approximation}) is quite model-independent and will apply for
the supergravity-mediated models discussed in
Refs.~\cite{Randall,GLMR} as well.
Since the lightest chargino and neutralino are so degenerate, the
decay length $L_{\tilde\chi}$ for the decay
$\tilde\chi^+_1\rightarrow\tilde\chi^0_1+X$ could be macroscopic
\cite{CheDreGun}: \be L_{\tilde\chi}\ =\ \left({1\ {\rm GeV}\over\Delta
m_\chi}\right)^5 \left( {E^2\over m_{\tilde\chi}^2}-1\right)^{1/2}
\times 100\, \mu m. \end{equation}
For typical mass splittings $\Delta m_\chi\sim 200$ MeV (see
Fig.~\ref{delta_m}), $L_{\tilde\chi}$ is on the order of tens of
centimeters. In that case, the lightest chargino and neutralino may
act as co-NLSP's, if the decays to gravitinos are faster.
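For instance, with $\Delta m_\chi=200$ MeV and a mildly boosted chargino, the formula above indeed gives a decay length of a few tens of centimeters (the boost factor below is an assumption):

```python
import math

# Delta m = 200 MeV; assume E = 1.2 m for the boost factor sqrt(E^2/m^2 - 1)
dm_GeV, boost = 0.2, math.sqrt(1.2**2 - 1)
L = (1.0/dm_GeV)**5 * boost * 100e-6     # decay length in meters
assert 0.1 < L < 1.0                     # tens of centimeters
```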
\item In any one of our models, the limit $\tilde m\rightarrow 0$
gives rise to a neutralino NLSP, which is a mixture of $\tilde\Sigma$
and $\tilde X'$. We see such examples in Table~\ref{spectra} for
models 2a(iii) and 2a(iv), but we find that small $\tilde m$ is
possible for all other models as well.
\item In all models with $\Delta\beta_2=2,3$ or 4 we find that
$M_1<M_2$, so that the lightest neutralino is mostly $\tilde B$, as in
the conventional SUGRA or minimal gauge-mediated models. For either
moderate values of $\tan\beta$ or rather large values of $\tilde m$,
it also turns out to be the NLSP -- see e.g. model 2a(ii) in
Table~\ref{spectra}. The phenomenology of similar gauge-mediated
models, albeit with a somewhat different gaugino mass splitting, has
been extensively discussed in the literature \cite{BinoNLSP}.
\item The lightest tau slepton $\tilde\tau_1$ can be the NLSP if
$\tan\beta$ is significant and $\tilde m$ is not too large, e.g. in
model 2a(i) of Table~\ref{spectra}. This case is not much different
from the minimal gauge-mediated models with a stau NLSP and has been
studied previously \cite{FM,stauNLSP} for both stable and promptly
decaying staus.
\end{enumerate}
The other important factor in the discussion of the typical collider
signatures of our models is the value of the intrinsic SUSY breaking
scale $E_{\rm vac}$, which determines the decay length $L_{\rm NLSP}$
of the corresponding NLSP: \be L_{\rm NLSP}\ \sim\ 130 \left( {100\
{\rm GeV}\over m_{\rm NLSP}} \right)^5 \left( {E_{\rm vac}\over 100\
{\rm TeV} } \right)^4 \mu m, \end{equation} with $E_{\rm vac}^4$ being the vacuum
energy density. The value of $E_{\rm vac}$ in our models is given
by~\cite{CDM} \be E_{\rm vac}\ \gtrsim\ {\cal O}(1)\ \left({4\pi\over
g_\mu} \right) \sqrt{F_X}\ \gtrsim\ {\cal O}(1)\times 200\ {\rm TeV}.
\end{equation} We see that for $E_{\rm vac}$ close to the lower limit ($\sim
10^5$ GeV), $L_{\rm NLSP}$ could be microscopic and unlike most known
models of gauge-mediated SUSY breaking, prompt decays of the NLSP are
possible.
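A quick numerical illustration of this formula (for an assumed $m_{\rm NLSP}=100$ GeV):

```python
m_NLSP = 100.0                           # GeV (assumed)

def L_NLSP(E_vac_TeV):
    # NLSP decay length in micrometers, from the formula above
    return 130 * (100.0/m_NLSP)**5 * (E_vac_TeV/100.0)**4

assert L_NLSP(100) == 130                # ~0.1 mm: essentially prompt decays
assert L_NLSP(200) == 2080               # ~2 mm: displaced vertices
```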
In the rest of this section we shall concentrate on the first two NLSP
options, since they are unique features of our models. {\em Prompt}
decays of the $\tilde W$-like chargino and $\tilde W_3\,$-like
neutralino co-NLSP's in the models with $\Delta\beta_2=1$ lead to
signatures which have never before been discussed as possible SUSY
discovery modes, so we devote the next subsection \ref{subsec:su2} to
this case. Later in subsection~\ref{subsec:singletino} we discuss the
phenomenology of the singletino NLSP scenario, which resembles
somewhat that of a gauge-mediated NMSSM. Finally, we conclude this
Section with comments on the more standard cases of $\tilde B$-like
neutralino or stau NLSP.
\subsection{$SU(2)$-neutralino NLSP}
\label{subsec:su2}
Type 1 models (see Table~\ref{TableMess}) have the generic prediction
$M_2<M_1<M_3$ and the lightest neutralino is mostly $\tilde W_3$. As
shown in the previous subsection, the lightest neutralino and the
lightest chargino in this case are degenerate enough so that they can
act as co-NLSP's. The typical experimental signatures therefore
depend on which chargino-neutralino combinations are mostly being
produced. At the Tevatron, the dominant production processes are
$p\bar{p}\rightarrow\tilde\chi^+_1\tilde\chi^-_1$ and
$p\bar{p}\rightarrow\tilde\chi^\pm_1\tilde\chi^0_1$, which are roughly
of the same order, while
$p\bar{p}\rightarrow\tilde\chi^0_1\tilde\chi^0_1$ is much smaller. In
the rest of this subsection, we shall therefore only consider the fate
of a $\tilde\chi^+_1\tilde\chi^-_1$ or a
$\tilde\chi^\pm_1\tilde\chi^0_1$ gaugino pair.
If the SUSY breaking scale $E_{\rm vac}$ is high, the decays of both
the chargino and the neutralino to gravitinos will happen outside the
detector and the signatures are similar to those discussed in
Ref.~\cite{CheDreGun,Randall,GLMR} for supergravity-mediated models.
In this case the chargino will have time to decay to a neutralino
first. However, it is rather unlikely that the chargino will make it
out to the muon chambers -- we saw that the one-loop corrections tend
to increase the $\tilde\chi^\pm_1-\tilde\chi^0_1$ mass splitting and
the chargino decay will probably occur within a meter or so from the
primary vertex, thus evading existing limits from heavy charged stable
particle searches \cite{CDF-heavy}. It will therefore look like a tau
and will be rather difficult to identify \cite{Randall}. Because of
the small chargino-neutralino mass splitting, the lepton from the
$\tilde\chi^\pm_1\rightarrow\tilde\chi^0_1 l^\pm \nu$ decay will be
very soft and cannot be used to tag the chargino decay. Note also
that this mass degeneracy renders the current LEP limits on the
chargino mass inapplicable.
As in any model with a rather low SUSY breaking scale, decays of the
NLSP to $\tilde G$ provide information about the hidden (or messenger)
sector via $L_{\rm NLSP}$. If it is finite ($\gtrsim 1$ mm) and the
NLSP's ($\tilde\chi^\pm_1$ or $\tilde\chi^0_1$) decay to gravitinos
inside the detector, this will give rise to events with displaced
vertices (kinks in the charged tracks), photons with finite impact
parameters or originating from the outer hadronic calorimeter
\cite{CheGun}. A recent CDF search for long-lived $Z$-parents
\cite{CDF-Zparents} is not sensitive enough to place a limit on the
neutralino mass in this case. Because of the phase space suppression,
the branching ratio $BR(\tilde\chi^0_1\rightarrow Z\tilde G)$ begins
to dominate over $BR(\tilde\chi^0_1\rightarrow \gamma\tilde G)$ only
for neutralino masses $m_{\tilde\chi^0_1}\gtrsim 130$ GeV, where the
production cross-section falls below the Run I sensitivity.
Finally, if the SUSY breaking scale $E_{\rm vac} \sim 10^5$ GeV, the
chargino and neutralino co-NLSP's may decay promptly to gravitinos,
creating events with real $W$'s, $Z$'s or photons and missing
(transverse) energy. Since the signatures for
$\tilde\chi^+_1\tilde\chi^-_1$ and $\tilde\chi^\pm_1\tilde\chi^0_1$
production are different, we shall discuss each case in turn.
For chargino pair production with subsequent prompt decays to
gravitinos, the possible final state signatures are
$l^+l^-\!\not\!\!E_T$, $ljj\!\not\!\!E_T$ and $jjjj\!\not\!\!E_T$,
with branching ratios 6\%, 38\% and 56\%, respectively. The two
leptonic signatures suffer from large irreducible $W$-pair and
$t$-$\bar{t}$ backgrounds, although the latter one may be somewhat
suppressed via a $b$-jet veto. These two channels have been previously
considered as possible Standard Model Higgs search modes at both the
Tevatron and LHC \cite{Herbi,Han,Andre}, since for $m_h>140$ GeV the
branching ratio $BR(h\rightarrow W^+W^-)$ starts to dominate. The
result is that this signal will be rather difficult to observe at the
Tevatron, and a $3\sigma$ discovery is only possible with Run III
integrated luminosities $L_{\rm int}\sim 30\, {\rm fb}^{-1}$
\cite{Andre}. For a certain range of chargino masses, we can
immediately adapt this result to our case. For Higgs masses in the
range $140-180$ GeV, the cross-section for $W$-pair production via
single Higgs is $\sigma_{h}(gg\rightarrow h^0\rightarrow WW)\sim
0.2-0.4$ pb. For chargino masses in the range 130-150 GeV, the signal
cross-section $\sigma_{\tilde\chi}(p\bar{p}\rightarrow \chi^+\chi^-
+X)$ is of the same order, so we conclude that only Run III at the
Tevatron may possibly have some sensitivity beyond LEP-II in those two
channels. For smaller chargino masses, the Tevatron reach is better
and a signal may be observed in the very early stages of Run III. In
the most optimistic scenario, where the chargino mass is just beyond
the projected LEP-II limit ($m_{\tilde\chi^+_1}\sim 100$ GeV),
$\sigma_{\tilde\chi}\sim 1.2$ pb and can be observed even in Run II.
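As a sanity check on the quoted branching fractions, one consistent reading is that ``lepton'' here counts $e/\mu$, including those produced in leptonic tau decays. The short sketch below is an illustration under that assumption, with round PDG-like input values that are not taken from the paper; it reproduces the $6\%/38\%/56\%$ split to within about a percent:

```python
# Hedged arithmetic check of the quoted W-pair branching fractions
# (6% / 38% / 56% for dileptons, lepton + jets, and four jets).
# Assumption: a "lepton" is an e or mu, including those from tau decays.
# The inputs below are illustrative round numbers, not paper values.
BR_W_ENU_MUNU = 0.2166   # W -> e nu  plus  W -> mu nu
BR_W_TAUNU    = 0.1138   # W -> tau nu
BR_TAU_LEP    = 0.3521   # tau -> e/mu + neutrinos

p_lep = BR_W_ENU_MUNU + BR_W_TAUNU * BR_TAU_LEP  # one W yields an e/mu
p_jet = 1.0 - p_lep                              # everything else -> jets

br_ll = p_lep ** 2           # l+ l- missing-ET
br_lj = 2 * p_lep * p_jet    # l + 2 jets
br_jj = p_jet ** 2           # 4 jets

print(f"ll: {br_ll:.1%}, l+jets: {br_lj:.1%}, 4j: {br_jj:.1%}")
```

With these inputs the three fractions come out near $6.6\%$, $38\%$ and $55\%$, consistent with the numbers quoted in the text.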
The other possible signal of $\tilde\chi^+_1\tilde\chi^-_1\rightarrow
W^+W^-\tilde G\tilde G$ is the multijet channel, which has rather
small SM physical backgrounds (the $t$-$\bar{t}$ background can be
suppressed with a lepton veto). The single Higgs production analogy
no longer works, because of the $\not\!\!\!E_T$ requirement. The
dominant background is from QCD multijet production and jet energy
mismeasurement, which is why a detailed Monte Carlo study with a very
realistic detector simulation is necessary in order to estimate the
reach in this channel. In addition to a hard $\not\!\!E_T$ cut, one
may also make use of the fact that two different jet pairs should
reconstruct close to the $W$ mass.
We now turn to the signatures arising in the
$\tilde\chi^\pm_1\tilde\chi^0_1$ case, where we have to factor in the
branching ratios of the neutralino to a $Z$ or a photon. For
relatively light neutralinos, it is best to study signatures where the
neutralino decays to a photon and a gravitino. First, for
$m_{\tilde\chi^0_1}\lesssim M_Z$, this is the dominant decay mode ($\sim
100\%$) anyway. Second, even when $m_{\tilde\chi^0_1} > M_Z$ and the
decay to $Z$ dominates, the $BR(\tilde\chi^0_1\rightarrow \gamma
\tilde G)$ is never below $\sim 20\%$, which is still better than the
leptonic branchings of the $Z$'s (the channels with hadronic $Z$'s
have larger backgrounds). We conclude that the most promising clean
signature in this case is $l^\pm\gamma\not\!\!\!E_T$. The only
physical background process is $W\gamma$, which is rather rare, so the
typical backgrounds will involve photon/lepton misidentification
and/or $E_T$ mismeasurement. Note that in contrast to the minimal
gauge-mediated models, our type 1 models are not associated with any
{\em di-photon} signatures \cite{diphoton}, because the neutralino
pair-production cross-sections are suppressed, while the chargino
decay does not yield a photon.
Finally, there is a variety of possible signatures, if we consider
prompt neutralino decays to $Z$'s. We shall concentrate on the
following channels: $l^+l^- l^\pm\not\!\!\!E_T$; $l^+l^- jj
\not\!\!\!E_T$; $l^\pm jj \not\!\!\!E_T$ and $jjjj \not\!\!\!E_T$,
since $l^\pm \not\!\!\!E_T$ and $jj \not\!\!\!E_T$ have too large a
background to be even considered.
The clean trilepton signature has irreducible background from $WZ$ and
in addition one takes a hit from the $Z$ branching ratio of the
neutralino, so it is rather unlikely that an excess of such events
will be seen in any of the future Tevatron runs. Unlike the classic
SUSY trilepton signature \cite{3L}, one cannot use an invariant
dilepton mass cut to beat down the $WZ$ background. The case of the
$l^\pm jj\not\!\!\!E_T$ is even worse: it has large irreducible
backgrounds from both $WZ$ and $t\bar{t}$.
The dilepton plus jets signature $l^+l^-jj\not\!\!E_T$ looks somewhat
promising. It was used to search for cascade decays of gluinos and/or
squarks \cite{gluinosquark}. The difference now is that the leptons
are coming from a $Z$-decay, so the invariant dilepton mass cut is
exactly the opposite of what is used in the conventional SUSY search.
The dominant physical backgrounds then would be $Zjj\rightarrow
\tau^+\tau^- jj\rightarrow l^+l^- jj\not\!\!E_T$, and to some extent
$t$-$\bar{t}$. Both of them can be significantly reduced by requiring
that the jet pair reconstructs the $W$ mass.
The 4-jet plus $\not\!\!E_T$ signature was already discussed above for
the case of hadronically decaying $W$'s in chargino pair-production;
the difference now is that the two jet pairs should reconstruct
$W$ and $Z$ mass, correspondingly, so that one should use a more
relaxed cut, e.g., $70\ {\rm GeV}<m_{jj}<100$ GeV.
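The pairing logic described above (both jet pairs falling inside one relaxed mass window covering $m_W$ and $m_Z$) can be sketched as follows; the function names, the toy four-vectors, and the brute-force pairing strategy are illustrative assumptions, not part of any analysis in the text:

```python
from math import sqrt

def inv_mass(p1, p2):
    """Invariant mass (GeV) of two jets given as (E, px, py, pz) tuples."""
    E = p1[0] + p2[0]
    px, py, pz = (p1[i] + p2[i] for i in (1, 2, 3))
    return sqrt(max(E * E - px * px - py * py - pz * pz, 0.0))

def passes_wz_window(jets, lo=70.0, hi=100.0):
    """True if some split of the four jets into two pairs puts BOTH
    pair masses inside the relaxed window [lo, hi] GeV, as suggested
    in the text for the 4j + missing-E_T channel."""
    j0, rest = jets[0], jets[1:]
    for partner in rest:                     # 3 distinct pairings of 4 jets
        others = [j for j in rest if j is not partner]
        m1 = inv_mass(j0, partner)
        m2 = inv_mass(others[0], others[1])
        if lo < m1 < hi and lo < m2 < hi:
            return True
    return False
```

For example, two back-to-back pairs with pair masses of 80 and 91 GeV pass the window, while four nearly collinear jets (all pair masses far from $m_W$, $m_Z$) fail it.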
\subsection{Singletino NLSP}
\label{subsec:singletino}
In the limit of small $\tilde m$ the two lightest neutralinos will be
rather degenerate and have significant ``singletino'' components from
$\tilde \Sigma$ and $\tilde X'$. Since their masses are of order
$4\tilde m$, while the mass of the lightest scalar singlet $H_S$ is
only $2\sqrt{2}\tilde m$, the ``singletino''-like NLSP will always
decay as $\tilde\chi^0_1\rightarrow H_S\tilde G$. $H_S$ will
subsequently decay to $b$-$\bar{b}$, due to the small
$S$-$\{H_u,H_d\}$ mixing. If the singletino decays some distance away
from the primary vertex, this will give rise to rather spectacular
signatures with displaced $b$-jets. The case when the singletinos
decay promptly resembles that of the minimal gauge-mediated models
with a short-lived higgsino NLSP \cite{KTM}, heavier than the light
Higgs $h^0$. The difference now is that the jet pairs should
reconstruct the mass of the singlet Higgs $H_S$ rather than
$h^0$. Note that the LEP limits on the Higgs mass do not directly
apply to $H_S$.
If the singletinos decay outside the detector, the typical signatures
depend on the nature of the next-to-next-to-lightest supersymmetric
particle\footnote{We do not count the second singletino.} (NNLSP).
Because of the small couplings of the `singletinos', all
supersymmetric particles will first decay to the NNLSP. For the models
from Table~\ref{spectra}, the NNLSP is typically $\tilde\tau_1$,
which can be understood as follows. The singletino NLSP case arises
for small values of $\tilde m$, when the $U(1)_\mu$ contributions to
the scalar masses are also small. Then, the supersymmetric mass
spectrum in any of our models resembles that of a minimal
gauge-mediated model, with the corresponding number and type of
messenger representations. Thus we can immediately adapt the NLSP
analysis in the minimal gauge mediated models to the question of the
NNLSP in our models. The balance between the masses of the two main
NNLSP candidates, the stau and the $\tilde B$-like neutralino, is for the most
part determined by the value of the messenger multiplicity factor
$\Delta\beta_1$, since $m_{\tilde\tau_1}\sim \sqrt{\Delta\beta_1}$,
while $m_{\tilde B}\sim \Delta\beta_1$. In models of type 1 and 2,
$\Delta\beta_1$ is large, and the stau is lighter than the bino
throughout most of the parameter space. One should keep in mind
though that in models 1 the stau mass should be compared to the
$\tilde W_3$-like neutralino mass instead, so that cases with
$m_{\tilde\chi^0_1}<m_{\tilde W_3}<m_{\tilde \tau_1}<m_{\tilde B}$ are
certainly possible. Note that at low enough values of $\tan\beta$ and
$\Lambda$ one can reach a situation where
$m_{\tilde\chi^0_3}-m_{\tilde \tau_1}<m_{\tau}$, so that the stau and
the bino are in fact co-NNLSP's. Such an example is shown in
Table~\ref{spectra} for model 2a(iii). Next, for $\Delta\beta_1=2$
(models of type 4), one typically finds $m_{\tilde\chi^0_3}<m_{\tilde
\tau_1}$. Finally, for $\Delta\beta_1=3$ (models of type 3), one
finds cases with either stau or bino NNLSP.
Turning now to the collider phenomenology of models with stable
singletino NLSP, we first discuss the stau NNLSP case. In principle,
each SUSY event will contain at least two taus from the
$\tilde\tau_1\rightarrow \tilde\chi^0_1 \tau $ decays. Their $p_T$
spectrum is determined by the mass difference
$m_{\tilde\tau_1}-m_{\tilde\chi^0_1}$, and may be quite soft -- see
the 2a(iii) example in Table~\ref{spectra}. To make matters worse,
the tau jets and especially the leptons from the tau decays will be
even softer, presenting serious triggering and identification problems.
The distinctive collider signature in case of a neutralino co-NNLSP
depends on which is the dominant decay of $\tilde\chi^0_3$ to the
singletino NLSP. There are three possibilities:
\begin{figure}[t!]
\epsfysize=4.0in \epsffile[-50 200 420 690]{diagrams.ps}
\begin{center}
\parbox{5.5in}{
\caption[] {\small Sample diagrams for the possible decay modes of
$\tilde\chi^0_3$ to the singletino NLSP $\tilde\chi^0_1$.
\label{diagrams}}}
\end{center}
\end{figure}
\begin{enumerate}
\item The two-body decay $\tilde\chi^0_3\rightarrow\tilde\chi^0_1 H_S$
may be open for $\tilde m \lesssim M_1/6.8$. This decay proceeds via the
diagram shown in Fig.~\ref{diagrams}(a) and one can see that the rate
is suppressed by four powers of $\kappa$ or $\kappa'$, as well as the
gaugino-higgsino mixing.
\item For values of $\tilde m \gtrsim M_1/6.8$, the tree-level two-body
decays of $\tilde\chi^0_3$ are closed and the three-body decays via
the diagrams in Fig.~\ref{diagrams}(b)-(d) are possible. They are
typically suppressed by only two powers of $\kappa$ or $\kappa'$, in
addition to the gaugino-higgsino mixing.
\item The radiative decay $\tilde\chi^0_3\rightarrow\tilde\chi^0_1
\gamma$ (Fig.~\ref{diagrams}(e)) is also possible. It becomes
important when the $\tilde B$ and $\tilde \Sigma$ ($\tilde X'$) masses
are very close and the three-body decays are suppressed. Unlike the
previous decays, this mode has no gaugino-higgsino mixing suppression.
\end{enumerate}
The relative importance of these three modes will depend on the
particular values of the model parameters \cite{singletino}. A more
quantitative analysis will have to take into account the correct
singletino-gaugino-higgsino mixing as well as the singlet-Higgs mixing.
We conclude this Section with some comments on the more conventional
cases of $\tilde B$ or stau NLSP. For the most part, they are very
similar to the corresponding minimal gauge-mediated models, and the
results from previous phenomenological analyses hold
\cite{BinoNLSP,FM,stauNLSP}. However, there are two
differences. First, the predicted gaugino mass ratios are different.
This is important e.g. in the case of a `stable' Bino-NLSP, since the
$p_T$ distributions of the $\tilde\chi^+_1$ and $\tilde\chi^0_2$ decay
products will be affected. For a given $\tilde\chi^+_1$ mass
(i.e. signal cross-section), we would expect softer (harder) $p_T$
spectra for models 2 (3-4), which will have an impact on the cuts
optimization. Second, in the minimal gauge-mediated models, for
$\mu>0$, large\footnote{The exact numerical bound depends on $\Lambda$
and the number of messenger pairs.} values of $\tan\beta$ are
typically excluded because the light stau is below the experimental
limit. In our models, with the possibility of the stau mass to
receive additional positive contributions from the $U(1)_\mu$ D-term,
we find that the large $\tan\beta$ part of the parameter space for
$\mu>0$ can be extended up to $\tan\beta\sim 70$, where either
$m_A^2<0$ or the tau Yukawa coupling diverges below the Planck scale
(the bottom Yukawa coupling is less of a problem, since for $\mu>0$ it
is reduced by the SUSY threshold corrections).
\section{Discussion and Conclusions}
\label{conclusions}
\setcounter{equation}{0}
In this section we discuss how robust our model selection assumptions
are, we summarize the phenomenological signatures, and we comment on
the general features of the models. We start with a list of the most
notable constraints on model-building and we comment on their
necessity:
\begin{itemize}
\item
Viability of the models even if any global symmetry (which is not an
accidental result of the gauge structure) is badly violated by Planck
scale physics.
To this end, the models have to be chiral ({\it i.e.} there are no
gauge invariant mass terms), and generic ({\it i.e.} there are no
gauge invariant and supersymmetric dimension-four operators with
exceedingly small coefficients in the Lagrangian; in practice we may
allow dimensionless couplings as small as the Yukawa coupling of the
electron). Hence, the $\mu$-term is induced only after a gauge
symmetry is spontaneously broken, while baryon number conservation is
a consequence of the gauge symmetry.
This constraint is a major motivation for the model-building effort
presented in Section~2. So far there is no rigorous proof that the
global symmetries of the MSSM are violated by Planck scale physics if
they are not protected by gauge invariance. However, this may be
the case, and therefore it is important to search for extensions of
the MSSM which remain viable independent of the quantum gravitational
effects.
\item
The minimality of the gauge group: SM $\times U(1)_\mu \times$ DSB.
The gauge group has to include the standard $SU(3)_C \times SU(2)_W
\times U(1)_Y$ and some DSB gauge group responsible for breaking
supersymmetry. It is remarkable that the addition of the $U(1)_\mu$
gauge group is sufficient to prevent the potentially dangerous Planck
scale effects and to communicate supersymmetry breaking to the MSSM
fields. In principle, the $U(1)_\mu$ may be replaced by a larger
gauge group, but in that case it would be harder to cancel the mixed
gauge anomalies.
\item
The cancellation of the mixed SM $\times U(1)_\mu$ anomalies of the
MSSM fields by the messenger sector, and of the remnant $U(1)_\mu$
and $U(1)_\mu^3$ anomalies by the DSB sector.
These are nice features of our models because the existence of the
three sectors (MSSM, messenger and DSB) is required by the
mathematical consistency of the theory. This is to be contrasted
with the original gauge mediated supersymmetry breaking models
\cite{dnns,DNS} where the three sectors are introduced ad-hoc, for
phenomenological reasons.
\item
The quark and lepton masses are generated by the Yukawa couplings to
the Higgs vevs.
This assumption is convenient but does not help in explaining the
pattern of observed quark and lepton masses. If one allows only some
of the fermions to couple to the Higgs doublets, while inducing the
other quark and lepton masses using a Frogatt-Nielsen sector, higher
dimensional operators, or other mechanism, then the $U(1)_\mu$ charge
assignment can be more general so that completely different models may
be constructed. We will not elaborate further on this possibility.
\item
The neutrinos have masses and the mixings involve all three
generations.
As suggested by the solar, atmospheric, and accelerator neutrino
experiments, we have allowed the most general Yukawa couplings of the
neutrinos to the Higgs. This constraint can be relaxed, for example if
there are enough sterile neutrinos. In that case the lepton $U(1)_\mu$
charges no longer need to be generation independent. We also assume
that the Majorana masses for the right-handed neutrinos come from
$\langle N \rangle$, which results in automatic $R$-parity
conservation. If right-handed neutrinos obtain their masses from
$\langle S \rangle$, $R$-parity violating operators which violate
lepton number will exist and their couplings have to be quite small.
Of course, even a small $R$-parity violating coupling can allow the
NLSP to decay to jets and/or leptons instead, thus changing the
typical collider signatures accordingly.
\item
The $U(1)_\mu$ charges of the quarks and leptons are positive.
This constraint is sufficient to ensure that the squarks and sleptons
do not acquire vevs, but is not necessary. There could be regions in
the parameter space where the positive contributions to the squark
and slepton squared-masses from standard gauge mediation dominate
over the $U(1)_\mu$ $D$ term contribution. In that case negative
$U(1)_\mu$ charges for the quarks and leptons may be allowed. Squark
and gluino masses are insensitive to this contribution, but it may
affect the slepton spectrum and the question of NLSP. However, even
restricting ourselves to models with positive $U(1)_\mu$ charges for
quarks and leptons, we have found examples which exhaust all possible
NLSP candidates, so considering negative charges will not give us
anything new as far as phenomenology is concerned.
\item
The set of SM singlet superfields from the messenger sector is
minimal.
It is possible to find various ways of extending the messenger
sector. For example, there can be more $X$ fields, with non-zero vevs
for the scalar and $F$-components, which would result in a more
general squark and slepton spectrum. However, with more singlets, it
is harder to find a viable minimum.
\item
The $U(1)_\mu$ charges are reasonably simple.
This assumption is necessary only if one wants to be able to embed
$U(1)_\mu$ in a (``reasonably simple'') non-Abelian gauge group.
\item
There are no fields with fractional electric charge.
Such fields would be stable and produced in large numbers in the Early
Universe, which is in disagreement with a wide range of
experiments. This constraint can be avoided if the number of particles
with fractional electric charge has been dramatically diluted during
a period of inflation that ended at a temperature below their masses.
\item
The messenger fields can decay via dimension-4 operators.
Otherwise the lightest messenger is long lived and its presence at
the time of nucleosynthesis is ruled out by cosmological observations.
Again, this constraint can be relaxed if the Universe suffered a
period of late inflation. Without this assumption, we find solutions
for other classes of models as well.
\item
The DSB sector does not give rise to more than two composite chiral
fermions charged under $U(1)_\mu$.
This assumption was made only for simplicity.
\end{itemize}
We point out that the phenomenology of these models is rather
insensitive to some of the extensions listed above. For example, the
last three assumptions itemized do not affect some of the novel
phenomenological features discussed in Section~3:
\begin{enumerate}
\item Non-standard (yet predictable) gaugino mass ratios.
\item Light singlet fermion and/or scalar states may sometimes be in
the spectrum.
\item The models allow for the intrinsic SUSY breaking scale $E_{\rm
vac}$ to be quite low, on the order of a few times $10^5$ GeV, thus
allowing prompt decays of the NLSP. Note that other models with the
SUSY breaking scale below $10^6$ GeV are known \cite{LSSB}, but their
viability relies on assumptions about noncalculable strong dynamics.
\item In certain cases we find new NLSP candidates: $\tilde W$-like
chargino, $\tilde W_3\,$-like neutralino or $\tilde S$-like
neutralino (``singletino'').
\end{enumerate}
It is worth emphasizing that the new SUSY discovery signatures of
$WW\not\!\!\!E_T$, $W\gamma\not\!\!\!E_T$ and $WZ\not\!\!\!E_T$ depend
only on two assumptions: $M_2<M_1,M_3$ and a low SUSY breaking scale.
Therefore, the importance of these signatures, which have been
overlooked until now, transcends the models introduced in this paper.
Even though we only discussed the phenomenological signatures of our
models for the case of the Tevatron, it is clear that the LHC, where
statistics is not an issue, will be able to definitively explore these
models via the clean signatures considered in
Section~\ref{phenomenology}.
In conclusion, we have constructed several classes of gauge-mediated
models which provide a rather complete answer to the question of
SUSY-breaking and communication to the visible sector. The models
allow acceptable neutrino masses, and at the same time avoid the $\mu$
problem and the difficulties with FCNC, baryon number violation and
messenger dark matter.
In retrospect, our models still leave several unsolved puzzles. Most
importantly, we have not attempted to explain the pattern of quark and
lepton masses. Some relatively small Yukawa couplings are still needed
for them and also for the $U(1)_\mu$ breaking sector. In addition, we
have not addressed the related strong CP problem, whose solution in
this approach should also follow from some gauge symmetry. Otherwise,
it would be highly sensitive to Planck scale physics too, as is, for
example, the Peccei-Quinn solution~\cite{strongcp}. Another open
question is whether the gauge couplings and gauge groups may unify at
some high scale. Finally, the vacuum in our model is metastable
(with a lifetime longer than the age of the
universe~\cite{DDR}), and this raises the question of why it was chosen by the
early universe.
\vspace*{1.5cm}
{\it Acknowledgements:} We would like to thank M.~Luty and
S.~Willenbrock for discussions. Fermilab is
operated by the URA under DOE contract DE-AC02-76CH03000.
\section*{Appendix A: \ The 4-3 model}
\label{43model}
\renewcommand{\theequation}{A.\arabic{equation}}
\setcounter{equation}{0}
The detailed discussion of SUSY breaking in the $SU(4)\times SU(3)$
model can be found in Refs.~\cite{PST,AMM,CDM}. Here we just present
the model with the $U(1)_\mu$ charge assignment, and a brief
description of the essential results. The field content and the
$U(1)_\mu$ charges are shown in Table~\ref{tab:dsb}.
\begin{table}[ht]
\centering \renewcommand{\arraystretch}{1.4}
\begin{tabular}{|c||c|c||c|}\hline
Fields & $SU(4)$ & $SU(3)$ & $U(1)_\mu$\\ \hline \hline ${\cal Q}$ &
4 & 3 & $-(z_{b_1}+z_{b_2}) / 12$ \\ \hline ${\cal L}_1$ &
${\overline 4}$ & 1 & $(3z_{b_1}-z_{b_2})/ 4$ \\ \hline
${\cal L}_2$ & ${\overline 4}$ & 1 & $(3z_{b_2}-z_{b_1})/
4$ \\ \hline ${\cal L}_3$ & ${\overline 4}$ & 1 &
$-(z_{b_1}+z_{b_2})/ 4$ \\ \hline ${\cal R}_1$ & 1 &
${\overline 3}$ & $(-2z_{b_1}+z_{b_2})/ 3$ \\ \hline ${\cal
R}_2$ & 1 & ${\overline 3}$ & $(z_{b_1}-2z_{b_2})/ 3$ \\
\hline ${\cal R}_3, {\cal R}_4$ & 1 & ${\overline 3}$ &
$(z_{b_1}+z_{b_2})/ 3$ \\ \hline
\end{tabular}
\parbox{5.5in}{
\caption{Particle content and charge assignments in the DSB sector.
\label{tab:dsb}}}
\end{table}
The superpotential of the DSB sector is given by
\be W_{DSB}\ =\ \lambda_1{\cal L}_1{\cal Q}{\cal R}_1 + \lambda_2{\cal L}_2{\cal Q}{\cal R}_2 +
\lambda_3{\cal L}_3{\cal Q}{\cal R}_3 + {\alpha\over3!}{\cal R}_1{\cal R}_2{\cal R}_4 . \end{equation}
We assume that $\alpha \ll \lambda_1, \; \lambda_2, \; \lambda_3 \;
\sim 1$, so that the vacuum lies in the weakly coupled regime and
hence is calculable. The low-energy degrees of freedom can be described
by the baryons $b_i$, where
\be b_i = \frac{1}{3!} \epsilon_{ijkl} {\cal R}_j {\cal R}_k {\cal R}_l , \end{equation}
with $U(1)_\mu$ charges $z_{b_1},\, z_{b_2}, \, 0,\, 0$, respectively.
The $b_3$ and $b_4$ fields get vevs of the order
$(\alpha^{-\frac{4}{9}} \Lambda_D)^3$, where $\Lambda_D$ represents the
$SU(4)$ scale. The energy density at the minimum and the masses of the
scalar components of $b_1, \; b_2$ are
\begin{eqnarray} E^4_{\rm vac} &\sim& \alpha^{\frac{2}{9}} \Lambda_D^4 ,\\
m_{b_{1,2}}^2 \equiv m_b^2 &\sim& \alpha^{\frac{10}{9}} \Lambda_D^2 .
\label{mb}
\end{eqnarray}
At one loop, the $b_1$ and $b_2$ fields will generate a
Fayet-Iliopoulos $D$ term for the $U(1)_\mu$ gauge group,
\be -\xi^2 = - \sum_{j=1,2} \frac{g_\mu^2}{16\pi^2} z_{b_j} m_{b_j}^2
\ln \frac{M_V^2}{p^2} ,
\label{D-term}
\end{equation}
where $M_V$ represents the mass scale of the heavy fields in the DSB
sector, and the lower cutoff scale $p^2$ is the larger of the
$U(1)_\mu$ breaking scale $M_\mu^2$ and $m_b^2$. They also
generate a negative contribution to the mass squared of each scalar
field charged under $U(1)_\mu$ at two loops, proportional to the
field's charge squared,
\be \frac{m_i^2}{z_i^2} \equiv -{\tilde m}^2 = -\sum_{j=1,2} 4 \left(
\frac{g_\mu^2}{16\pi^2}\right)^2 \left(z_{b_j}\right)^2 m_{b_j}^2 \ln
\frac{M_V^2}{p^2} .
\label{2loopmass}
\end{equation}
Note that the formulae (\ref{D-term}), (\ref{2loopmass}) only apply
when $p^2 < M_V^2$. If the $U(1)_\mu$ breaking scale ($p^2=M_\mu^2$)
is higher than $M_V^2$, the results will be suppressed by a factor
$M_V^2/M_\mu^2$.
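A rough numerical sketch of the scales appearing in the formulae above may be helpful. All inputs below ($\alpha$, $\Lambda_D$, $g_\mu$, the charges $z_{b_j}$ and the logarithm) are placeholder assumptions chosen only to illustrate orders of magnitude; they are not values fixed by the model:

```python
from math import pi, log, sqrt

# Placeholder inputs (assumptions, not values from the paper):
alpha    = 1e-4         # small R1 R2 R4 coupling
Lambda_D = 1e7          # SU(4) dynamical scale, in GeV
g_mu     = 0.5          # U(1)_mu gauge coupling
z_b      = [1.0, 1.0]   # U(1)_mu charges z_{b_1}, z_{b_2}
log_fac  = log(1e2)     # ln(M_V^2 / p^2), placeholder

# E_vac^4 ~ alpha^{2/9} Lambda_D^4  and  m_b^2 ~ alpha^{10/9} Lambda_D^2
E_vac = alpha ** (1.0 / 18.0) * Lambda_D
m_b2  = alpha ** (10.0 / 9.0) * Lambda_D ** 2

loop = g_mu ** 2 / (16.0 * pi ** 2)
# One-loop FI term (magnitude) and two-loop scalar mass parameter:
xi2     = sum(loop * z * m_b2 * log_fac for z in z_b)
mtilde2 = sum(4.0 * loop ** 2 * z ** 2 * m_b2 * log_fac for z in z_b)

print(f"E_vac ~ {E_vac:.2e} GeV, m_b ~ {sqrt(m_b2):.2e} GeV, "
      f"m_tilde ~ {sqrt(mtilde2):.2e} GeV")
```

The hierarchy $\tilde m \ll m_b \ll E_{\rm vac}$ emerges automatically from the loop factors, whatever the placeholder inputs.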
| 27,750 |
\section{Introduction}
\subsection{Historical Remarks}
Binary systems are well known sources of periodic gravitational waves.
Indirect proof of the existence of gravitational waves emitted by binary pulsars
was given by Taylor \cite{1}. However, the direct observation of gravitational waves
remains an unsolved problem of experimental gravitational physics.
The expected spectrum of gravitational waves
extends from $\sim 10^4$~Hz to $10^{-18}$~Hz \cite{2}, \cite{3}.
Within that range, the
spectrum of periodic waves from known binary systems extends from
about $10^{-3}$~Hz, the frequency of gravitational radiation from a
contact white-dwarf binary \cite{4}, through the $10^{-4}$ to
$10^{-6}$~Hz range of radiation from main-sequence binaries \cite{5},
to the $10^{-7}$ to $10^{-9}$~Hz frequencies
emitted by binary supermassive black holes postulated to lie in
galactic nuclei \cite{6}. The dimensionless strain
of these waves at the Earth, $h$, may be as great as $10^{-21}$ at the
highest frequencies, and as great as $3\times 10^{-15}$ at the lowest
frequencies in this range.
Sazhin \cite{7} first suggested detection of gravitational waves
from a binary system using timing observations of a pulsar, the line of sight to which
passes near the binary. It was shown that
the integrated time delay for
propagation of the electromagnetic pulse near the binary is proportional to $1/d^2$
where $d$ is the impact parameter of the unperturbed trajectory of the signal.
More recently,
Sazhin \& Saphonova \cite{8} made estimates of the probability of
observations of this effect, for pulsars in globular clusters, and
showed that the probability can be high, reaching 0.97 for one cluster. We note
however that the mathematical technique worked out in these papers allows
rigorous treatment only of the effects of the
near-zone, quasi-static quadrupolar part of the
gravitational field and is not sufficient to draw any
conclusion about the actual observability of gravitational waves emitted by a
binary system.
Wahlquist \cite{9} took another approach to the detection of periodic
gravitational waves, based on Doppler tracking of spacecraft traveling
in deep space. His approach is restricted to the plane gravitational wave
approximation developed earlier by Estabrook \& Wahlquist \cite{10}.
Tinto (\cite{11}, and references therein) made the most recent theoretical
contribution in this area. The Doppler tracking technique has been
used in space missions, by seeking the characteristic triple
signature, the presence of which would reveal the influence of a
gravitational wave crossing the line of sight from spacecraft to
observer \cite{12}.
Quite recently, Braginsky {\it et al.} \cite{13}
(see also \cite{14}) have raised the question of using
astrometry as a detector of stochastic gravitational waves. This idea
has also been investigated by Kaiser \& Jaffe \cite{15} and, in
particular, by Pyne {\it et al.} \cite{16} and Gwinn {\it et al.} \cite{17}
who
showed that the overall effect is proportional to the strain of metric
perturbation caused by the plane gravitational wave
and set observational limits on the energy
density of ultra-long gravitational waves present in the early universe. Montanari
\cite{18} studied polarization perturbations of free electromagnetic radiation in
the field of a plane gravitational wave and found that the effects are
exceedingly small.
Fakir (\cite{19}, and references therein) has suggested the possibility
of using astrometry to detect periodic variations in
apparent angular separations of appropriate light sources, caused by
gravitational waves emitted from isolated sources of gravitational
radiation. He was not able to develop a self-consistent approach to tackle the
problem with the necessary completeness and rigor. For this reason his estimate of
the effect is erroneous. Another attempt to work out a more consistent approach
to the calculation of the deflection angle in the field of an arbitrary source of
gravitational waves has been undertaken by Durrer \cite{20}. However, the
calculations have been done only in the plane wave approximation and the result
obtained was extrapolated to the case of
a localized source of gravitational waves without justification. For this
reason the deflection angle was overestimated. The same
misinterpretation of
the effect can be found in the paper by Labeyrie \cite{21} who studied a
photometric modulation of background sources by gravitational waves
emitted by fast binary stars. Because of this, the expected detection of the
gravitational waves from the
observations of the radio source GPS QSO 2022+171 suggested by Pogrebenko
{\it et al.} \cite{22} was not based on firm theoretical ground.
Damour \& Esposito-Far\`ese \cite{23} have studied the deflection of light and
integrated
time delay caused by the time-dependent gravitational field generated by a
localized material source lying close to the line of sight. They explicitly
took into account the full, retarded gravitational field in
the near, intermediate, and wave zones.
Contrary to the claims of Fakir \cite{19} and Durrer
\cite{20} and in agreement with Sazhin's \cite{7}
calculations, they found that the
deflections due to both the wave-zone gravitational wave and the
intermediate-zone retarded fields vanish exactly. The leading total
time-dependent deflection is given only by the quasi-static, near-zone
quadrupolar piece of the gravitational field.
In the present paper we work out an even more systematic approach to the problem.
While Damour \& Esposito-Far\`ese \cite{23} considered both the light source and
the observer to be located at infinity, and performed their calculations in
terms of the
spacetime Fourier transform, we do not need these assumptions. Our approach is
much more general and applicable for any location of the source of light and
observer in space with respect to the source of gravitational radiation. The
integration technique which we use for finding the solution of equations of
propagation of light rays was partially employed in \cite{24} and does
not require any implementation of the spacetime Fourier transform.
Section 2 of the present paper discusses equations of propagation of
electromagnetic waves in the
geometric optics approximation. The metric tensor and
coordinate systems involved in our calculations are described in section 3
along with gauge conditions imposed on the metric tensor. The method of
integration of the equations of motion with emphasis on specific details of
calculations of particular integrals is given in section 4. The exact solution of
the equations of light propagation and the form of the relativistic perturbations
of the light trajectory are obtained in section 5. Section 6 is devoted to the
derivation of the basic observable relativistic effects: the integrated time delay
and the deflection angle. We clarify the meaning of the quite general
formulae obtained in the previous section by discussing in section 7 several
limiting cases in the
relative configuration of the source of light, the observer, and the source of
gravitational waves. Section 8 contains concluding remarks. Appendix A compares
results of our calculations with those
by Damour \& Esposito-Far\`ese \cite{23} and proves their gauge invariance.
Appendix B gives more details on the derivation of the ADM-harmonic coordinate
system used in the present paper for interpretation of observed relativistic
effects.
\subsection{Observational Capabilities}
Calculations of the effects of gravitational waves are of most
interest if they indicate that these effects can be detected with present
techniques or foreseeable improvements. Astrometric precision and
accuracy have evolved rapidly in the last decades, and can be expected
to continue to improve. In principle, the accuracy attainable with a
given instrument is approximately the angular resolution of the
instrument, divided by the signal-to-noise ratio. In practice, the
time span of the observations and the angular separation of the source
from reference sources critically affect the attainable accuracy.
Very-long baseline interferometry, or VLBI, attains the highest
angular resolution available on an operational basis. It achieves
angular resolution set by the diffraction limit, of
$\Delta\theta\approx \lambda/B$, where $B$ is the separation of the
interferometer elements (the baseline), and $\lambda$ is the observing
wavelength. Practical baselines may be about as long as an Earth
radius, $B\sim 6400$~km; a typical observing wavelength is
$\lambda=3$~cm, yielding angular resolution of 1~milliarcsecond.
Observations of a moderately strong ($\sim 1$~Jy)
extragalactic source, such as a
quasar, can reach a signal-to-noise ratio of several hundred in 5 or 10
minutes, offering potential angular accuracy of microarcseconds.
In principle, a day of integration
with the US Very Long Baseline Array (VLBA)
could yield angular accuracy of about 0.1~microarcseconds.
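As an illustrative numerical sketch (not part of the original text), the following short script reproduces the diffraction-limit and signal-to-noise estimates quoted above; the signal-to-noise ratio of 300 used below is an assumed representative value from the range "several hundred" mentioned in the text.

```python
import math

# Check of the angular-resolution estimates quoted above.
RAD_TO_MAS = math.degrees(1.0) * 3600.0 * 1e3   # radians -> milliarcseconds

B = 6.4e6            # baseline, m (about one Earth radius)
lam = 0.03           # observing wavelength, m (3 cm)

delta_theta = lam / B                    # diffraction limit, rad
print(delta_theta * RAD_TO_MAS)          # ~1 milliarcsecond

snr = 300.0                              # assumed signal-to-noise ratio
accuracy_mas = delta_theta * RAD_TO_MAS / snr
print(accuracy_mas * 1e3)                # ~3 microarcseconds
```

The diffraction-limited resolution indeed comes out to about 1 milliarcsecond, and dividing by the signal-to-noise ratio gives a potential accuracy of a few microarcseconds, consistent with the estimates in the text.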
Observations using the largest radiotelescopes can increase the signal-to-noise
ratio by a factor of $\sim 10$.
In practice, a host of geodetic and propagation
effects limit the reproducibility of VLBI astrometry. These factors
must either be measured during the observations, or calculated from
models. At present, atmospheric stability and changes in source
structure limit reproducibility of measured angles between sources to
about 1~milliarcsecond, over periods of months.
Observations of pairs of radio sources, with separations of
$\sim 0.5^{\circ}$,
can yield angular accuracy of about 50~microarcseconds,
reproducible over periods of years,
when effects of source structure are included \cite{25}.
Astrophysical H$_2$O masers have extremely high flux densities, of up
to $10^6$~Jy at $\lambda=1.3$~cm.
In principle, a day of observation of masers with the VLBA could
yield angular accuracy of a few picoarcseconds.
Observations of masers have attained
reproducibility of better than 10~microarcseconds over several months,
between individual maser spots in a Galactic maser cluster, with
separations of a few arcsec \cite{26}. Astrometric
observations of extragalactic masers have attained accuracies of
better than 1~microarcsecond, for maser spots separated by
less than 1 arcsec \cite{27}.
Atmospheric variations probably dominate the error budget.
Shorter wavelengths offer potentially higher diffraction-limited
angular resolution, but practical obstacles are severe. Atmospheric
effects present greater phase changes, on shorter time scales; and
photon shot noise becomes a limiting factor for fainter sources and at
shorter wavelengths. Optical interferometers in space will probably
equal and exceed the accuracy of VLBI. For example, the Space
Interferometry Mission (SIM), and the proposed
European mission GAIA seek to attain
angular accuracy of about 1~microarcsecond in several hours of integration
\cite{28}--\cite{30}.
Astrometric observations to seek effects of gravitational waves could
attain higher accuracy, at least on shorter timescales. The periods
of the waves, and of the expected deflection, could be
short enough to avoid some atmospheric and other
propagation effects. For known binary systems, the wave period, and
perhaps the phase, are known accurately, permitting search
for deflections at this period.
Such a ``synchronous'' search would eliminate many noise sources,
allow detection of short-period motions with the sensitivity
resulting from long integrations,
and perhaps allow astrometric accuracy to approach
the signal-to-noise ratio limit.
\section{Equations of propagation of electromagnetic waves}
We assume the approximation of geometric optics, as the
wavelength of electromagnetic waves used for
astrometric observations is usually
much smaller than the wavelength of gravitational waves emitted by isolated
astronomical systems like binary stars or supernova
explosions \cite{2}.
This allows us to use the
relativistic equation of geodesic motion of a massless particle
(such as a photon) for description of the process of
propagation of an electromagnetic signal from the source of light to the observer at
the Earth. We also assume that space-time is asymptotically flat. This
assumption does not hold for cosmological distances. However, if we neglect
all terms depending on the rate of cosmological expansion and make a
rescaling of time and space coordinates with the cosmological scale
factor $a(t)$, our formalism will be still valid for application in cosmology.
We denote spatial coordinates by $x^i={\bf x}
=(x^1,x^2,x^3)$ and time
coordinate $x^0=ct$, where $c$ is the speed of light and $t$ is coordinate
time.
Let the motion of a photon be defined by fixing the mixed
initial-boundary conditions
introduced and extensively used by Brumberg \cite{31}
\begin{equation}
{\bf x}(t_{0})={\bf x}_{0}, \quad\quad\quad\quad
{\displaystyle {d{\bf x}(-\infty ) \over dt}}%
={\bf k},
\label{1}
\end{equation}
where ${\bf k}^{2}=1$ and the spatial components of
vectors are denoted by bold letters. These conditions define the coordinates
${\bf x}%
_{0}$ of the photon at the moment of emission $t_{0}$ and its velocity at
the infinite past and the infinite distance from the origin of the spatial
coordinates (that is, at past null infinity).
In what follows we put $c=1$ for convenience.
The equation of propagation of photons in a weak gravitational field
is given in
the first post-Minkowskian approximation by the
formula \cite{31,32}:
\begin{eqnarray}\label{2}
\ddot{x}^{i}(t)&=&\frac{1}{2}g_{00,i}-g_{0i,t}-\frac{1}{2}g_{00,t}\dot{x}^i-
g_{ik,t}\dot{x}^k-\left(g_{0i,k}-g_{0k,i}\right)\dot{x}^k-\\\nonumber
\\ \nonumber&&\mbox{}
g_{00,k}\dot{x}^k\dot{x}^i-\left(g_{ik,j}-\frac{1}{2}g_{kj,i}\right)
\dot{x}^k\dot{x}^j+
\left(\frac{1}{2}g_{kj,t}-g_{0k,j}\right)\dot{x}^k\dot{x}^j\dot{x}^i,
\end{eqnarray}
where $g_{00}$, $g_{0i}$, $g_{ij}$ are the components of the metric tensor,
fully determined by
the given distribution and motion of mass inside the source of
gravitational field, dots over vectors denote
the total derivative with respect to
time, and commas indicate partial derivatives with respect to
spatial or time coordinates; that is, for any function $f_{,i}=
{\partial}f/{\partial} x^i$, $f_{,t}={\partial}f/{\partial} t$.
Hereafter repeated Latin indices imply summation from $1$
to $3$. This equation is valid in arbitrary coordinates (gauges) and
represents the ordinary second order differential equation for light
propagation.
The right-hand side of equation (\ref{2}) includes terms which
depend on the coordinate velocity $\dot{x}^{i}$ of the photon,
which in the weak-field approximation is approximately equal to the speed of light $c$.
We restrict ourselves to finding a
solution of equation (\ref{2}) only in the first linear approximation with
respect to the universal gravitational constant $G$. For this reason, when solving
equation (\ref{2}), only one
iteration is enough and it is admissible to make the
replacement $\dot{x}^{i}=k^{i}$ in
the right-hand side of the equation. The result of this approach is:
\begin{eqnarray}\label{pol}
\ddot{x}^{i}(t)&=&\frac{1}{2}g_{00,i}-g_{0i,t}-\frac{1}{2}g_{00,t}k^i-
g_{ij,t}k^j-\left(g_{0i,j}-g_{0j,i}\right)k^j-\\\nonumber
\\ \nonumber &&\mbox{}
g_{00,j}k^j k^i-\left(g_{ip,j}-\frac{1}{2}g_{pj,i}\right)
k^p k^j+
\left(\frac{1}{2}g_{pj,t}-g_{0p,j}\right)k^p k^j k^i\;.
\end{eqnarray}
This equation must be solved to obtain a perturbed trajectory of
the photon propagating through the gravitational field of an isolated
astronomical system emitting gravitational waves.
To accomplish this task one needs a mathematical expression for the
metric tensor.
\section{Metric Tensor and Coordinate Systems}
Let us choose the origin of the asymptotically flat coordinate frame at the
center of mass-energy of the isolated astronomical system and impose the
de Donder (harmonic) gauge conditions on the components of the
``canonical" metric tensor. We assume that the gravitational field is weak and
the metric of spacetime $g_{\alpha\beta}$ is written as a sum of
the Minkowski metric $\eta_{\alpha\beta}={\rm diag}(-1,1,1,1)$ plus a
small perturbation $h_{\alpha\beta}$:
\vspace{0.3 cm}
\begin{eqnarray}
\label{rew}
g_{\alpha\beta}&=&\eta_{\alpha\beta}+h_{\alpha\beta},
\end{eqnarray}
where the Greek indices run from 0 to 3.
The most general expression for
the linearized
metric tensor, generated by a system emitting gravitational waves,
in terms of its symmetric and trace-free (STF) mass and spin multipole moments
is given
by Thorne \cite{33} (see also \cite{34,35}). It can be written as
\vspace{0.3 cm}
\begin{eqnarray}\label{metric}
h_{\alpha\beta}&=&h_{\alpha\beta}^{can.}+\nabla_{\beta}w_{\alpha}+
\nabla_{\alpha}w_{\beta}\;,
\end{eqnarray}
where $\nabla_{\alpha}=\partial/\partial x^{\alpha}$. The ``canonical'' form of
the metric tensor perturbations in harmonic gauge reads as follows \cite{36}
\vspace{0.3 cm}
\begin{eqnarray}\label{4}
h_{00}^{can.}&=&\frac{2{\cal M}}{r}+2\displaystyle
{\sum_{l=2}^{\infty}}\frac{(-1)^l}{l!}
\left[\frac{{\cal I}_{A_l}(t-r)}{r}\right]_{,A_l}\;,\\
\nonumber\\
h_{0i}^{can.}&=&-\frac{2\epsilon_{ipq}{\cal S}_p N_q}{r^2} -\\\nonumber \\
\nonumber
& &\mbox{}4\displaystyle{\sum_{l=2}^{\infty}}\frac{(-1)^l l}{(l+1)!}
\left[\frac{\epsilon_{ipq}{\cal S}_{pA_{l-1}}(t-r)}{r}\right]_{,qA_{l-1}}+
4\displaystyle{\sum_{l=2}^{\infty}}\frac{(-1)^l }{l!}
\left[\frac{\dot{\cal {I}}_{iA_{l-1}}(t-r)}{r}\right]_{,A_{l-1}}\;,\\
\nonumber\\\label{6}
h_{ij}^{can.}&=&\delta_{ij}\biggl\{\frac{2{\cal M}}{r}+2\displaystyle{\sum_{l=2}^{\infty}}
\frac{(-1)^l}{l!}
\left[\frac{{\cal I}_{A_l}(t-r)}{r}\right]_{,A_l} \biggr\}+ \\
\nonumber\\ \nonumber
& &\mbox{} 4\displaystyle{\sum_{l=2}^{\infty}}\frac{(-1)^l }{l!}
\left[\frac{\ddot{\cal {I}}_{ijA_{l-2}}(t-r)}{r}\right]_{,A_{l-2}}-
8\displaystyle{\sum_{l=2}^{\infty}}\frac{l(-1)^l }{(l+1)!}
\left[\frac{\epsilon_{pq(i}\dot{\cal {S}}_{j)pA_{l-2}}(t-r)}{r}
\right]_{,qA_{l-2}}\;,
\end{eqnarray}
where $N^i=x^i/r$, and the round brackets around indices in equation
(\ref{6}) denote symmetrization; that is, for any two indices
$T_{(ij)}=\frac{1}{2}(T_{ij}+T_{ji})$.
In the pure harmonic gauge the functions $w^0$, $w^i$ are solutions of the
homogeneous d'Alembert equation and are
given by the expressions
\vspace{0.3 cm}
\begin{eqnarray}\label{poh}
w^0&=&\displaystyle{\sum_{l=0}^{\infty}}
\left[\frac{{\cal W}_{A_l}(t-r)}{r}\right]_{,A_l}\;,\\
\nonumber\\\nonumber\\\label{boh}
w^i&=&\displaystyle{\sum_{l=0}^{\infty}}
\left[\frac{{\cal X}_{A_l}(t-r)}{r}\right]_{,iA_l}+
\displaystyle{\sum_{l=1}^{\infty}}\biggl\{
\left[\frac{{\cal Y}_{iA_{l-1}}(t-r)}{r}\right]_{,A_{l-1}}+
\left[\epsilon_{ipq}\frac{{\cal
Z}_{qA_{l-1}}(t-r)}{r}\right]_{,pA_{l-1}}\biggr\}\;,
\end{eqnarray}
where ${\cal W}_{A_l}$, ${\cal X}_{A_l}$, ${\cal Y}_{iA_{l-1}}$, and ${\cal
Z}_{qA_{l-1}}$ are arbitrary functions of time. Their specific choice will be
made later on in the discussion regarding the interpretation of observable
effects.
In equations (\ref{4})-(\ref{boh}), we adopt the notation:
$A_l=a_1a_2...a_l$ is a polyindex, ${\cal M}$ is the total (Tolman or ADM) mass
of the system, ${\cal I}_{A_l}$ and ${\cal{S}}_{A_l}$ are the STF mass and spin
gravitational multipoles, and ${\cal W}_{A_l}$, ${\cal X}_{A_l}$,
${\cal Y}_{A_l}$,
${\cal Z}_{A_l}$ are multipoles which reflect the freedom of coordinate
transformations. These multipoles can be eliminated from the metric using
the transformation
\begin{eqnarray}\label{coortr}
x'^{\alpha}=x^{\alpha}-w^{\alpha}\;,
\end{eqnarray}
relating an original harmonic coordinate system $x^{\alpha}$ to
another harmonic one $x'^{\alpha}$, in
which only the ``canonical" part of the
metric is present.
However, we would like to emphasize that, in general, equation (\ref{metric})
holds in an arbitrary gauge.
Particular examples of functions $w^0$ and $w^i$ in harmonic gauge are given in
equations (\ref{poh})-(\ref{boh}). Other expressions for $w^0$ and $w^i$ in
the ADM (Arnowitt-Deser-Misner)
gauge \cite{37} are given in Appendix B wherein we also
prove that it is possible to
choose functions $w^0$ and $w^i$ in such a way that ADM and harmonic gauge
conditions will be satisfied simultaneously. This means that the
classes of harmonic
and ADM coordinates overlap. The discussion of different gauges is
helpful for giving a unique interpretation of observable effects by
properly fixing the coordinate degrees of freedom in corresponding
quantities \cite{38}.
The STF cartesian tensor
has
a special algebraic structure which eliminates all reducible parts of the
tensor and leaves only the irreducible part having the highest rank
\cite{33,40}. In other words, contraction over any two indices
of an STF tensor gives identically zero. It is worth noting
the absence of
the dipole mass multipole ${\cal{I}}_i$ in equations (\ref{4})-(\ref{6})
which is identically
zero, due to the choice of the origin of coordinate system
at the center of mass of the gravitating system. We also
stress that the multipoles in the linearized metric (\ref{4})-(\ref{boh})
depend on the ``retarded time'' $t-r$. At first sight
this dependence
seems to make subsequent calculations more difficult. However, just the
opposite
happens and the dependence of the multipoles on the retarded time makes
the calculations simpler.
In what follows we consider the concrete case of a localized deflector
emitting gravitational waves. In this section we restrict ourselves to
considering the influence on the propagation of electromagnetic signals of
that part of the deflector's gravitational field which is produced by its
total constant mass ${\cal M}$, spin ${\cal S}$, and time-dependent
quadrupole moment ${\cal I}_{ij}(t-r)$ only. This simplifies the expressions
(\ref{4})-(\ref{6}) for the metric tensor, which reduce to
\vspace{0.3 cm}
\begin{eqnarray}
h_{00}^{can.}&=&\frac{2{\cal M}}{r}+\nabla_p\nabla_q
\left[\frac{{\cal I}_{pq}(t-r)}{r}\right]\;,
\label{7}\\
\nonumber\\\nonumber\\
h_{0i}^{can.}&=&-\frac{2\epsilon_{ipq}{\cal S}_p N_q}{r^2}+
2\nabla_j\left[\frac{\dot{\cal {I}}_{ij}(t-r)}{r}\right]\;,
\label{8}\\
\nonumber\\\nonumber\\
h_{ij}^{can.}&=&\delta_{ij}h_{00}^{can.}+q_{ij}^{can.}\;,
\label{9}\\
\nonumber
\end{eqnarray}
where\vspace{0.3 cm}
\begin{eqnarray}\label{qij}
q_{ij}^{can.}&=&\frac{2}{r}\ddot{\cal{I}}_{ij}(t-r)\;.\\\nonumber
\end{eqnarray}
Here the terms depending on ${\cal M}$ and ${\cal S}_i$ are static and produce
well-known effects in the propagation of light rays. Retarded terms that are
inversely
proportional to the distance $r$ from the gravitating system describe the pure
gravitational-wave part of the metric.
Let us stress that in the harmonic coordinate system the
gravitational-wave part of the metric tensor is present in all of its components
and is expressed through the second time derivative of
the quadrupole moment \cite{33}.
If we choose the ADM gauge \cite{36}
it is possible to eliminate the
gravitational wave terms from the $h_{00}^{can.}$ and $h_{0i}^{can.}$
components of the metric
tensor and to bring all of them to $h_{ij}^{adm}$ \cite{41}.
Then $h_{00}^{adm}$ and $h_{0i}^{adm}$
depend only on the ``instantaneous" time $t$
and not on the retarded time $t-r$ (see Appendix B). In combining the ADM gauge
with the harmonic gauge an even simpler representation is possible where
$h_{00}$ and $h_{0i}$ do not depend on time at all. However, the
transformation from the canonical form of metric (\ref{7})-(\ref{qij}) to the
ADM-harmonic form includes integration of the quadrupole moment
with respect to time.
Appendix B gives a more detailed study of this procedure.
One might ask whether the ADM or the harmonic coordinate system
is preferable for
an adequate physical treatment of the relativistic time delay and
deflection of light rays in the field of gravitational waves emitted by
a localized source. Our point of view is that the coordinate system should be
chosen in such a way that it is simultaneously both ADM and harmonic.
The
reason for this
is that an observer who is at rest with respect to the ADM coordinate
system does not feel the gravitational force caused by gravitational waves.
This
means that if the instantaneous
gravitational field of the localized source can be
neglected, the observer fixed with respect to the ADM system can be considered
to be in free fall. Hence, no artificial
force need be
applied to the observer in order to
keep him at rest at the fixed coordinate point. The motion
of such an observer is
described by the extremely simple
equation $x^i={\rm const}$ and there is no need
to account for kinematic effects associated with the observer's motion.
All these advantages are lost
in the ``canonical" harmonic gauge. An observer fixed with respect to that
coordinate system must be kept at a fixed coordinate
point by some external
force to prevent his motion under the influence of gravitational waves.
The existence of such
a force is unnatural from physical and astronomical points of view.
On the other hand, the ``canonical" harmonic gauge has the advantage
of
a much simpler integration of the equations of light propagation than the
``canonical" ADM gauge.
One can see that the ``canonical" ADM metric coefficients
(\ref{a1a})-(\ref{a2a})
contain functions which depend on time $t$ only. As will be clear from the
procedure of integration of equations of light propagation described in the
next section such ``instantaneous" functions of time do not permit explicit
integration of each specific term (only after summing all
terms is the explicit integration possible).
Fortunately, the classes of ADM and harmonic
coordinate systems overlap and, for this reason, we can substantially benefit
by
choosing a coordinate system that is simultaneously both ADM and harmonic. This
allows us to proceed in the following way. First we integrate equations of
light
propagation in the harmonic gauge and then apply coordinate transformations
(\ref{ttt})-(\ref{kkk}) which transform the
pure harmonic coordinate system to
the ADM one without violation of the
harmonic gauge conditions. This simplifies
the treatment of observable effects drastically.
\section{Method of Integration of the Equations of Motion}
\subsection{Useful Relationships}
We introduce astronomical coordinates ${\bf x}\equiv x^i=(x^1,x^2,x^3)$
corresponding to the plane of the sky of the
observer and based on
a triad of the unit vectors $({\bf I}_0,{\bf J}_0,{\bf K}_0)$. The vector ${\bf
K}_0$ points from the observer toward the deflector, and the vectors ${\bf I}_0$
and ${\bf J}_0$ lie in the plane of the sky, being orthogonal
to vector ${\bf K}_0$. The vector ${\bf I}_0$ is directed to the east, and
${\bf J}_0$ points towards the north celestial pole. The origin of the
coordinate system is chosen to lie
at the barycenter of the deflector which emits
gravitational waves (see Figure \ref{bundle}).
Another reference frame based on
a triad of the unit vectors $({\bf I},{\bf J},{\bf K})$
rotated with respect to vectors
$({\bf I}_0,{\bf J}_0,{\bf K}_0)$
is useful as well.
The vector ${\bf K}$ points from the observer toward the source of light, and
the vectors ${\bf I}$ and ${\bf J}$ lie in the plane of the sky
orthogonal to vector ${\bf K}$; this plane is different from the plane of the
sky orthogonal to vector ${\bf K}_0$, because the ``plane of the sky" is
actually a sphere, and the vectors ${\bf K}$ and ${\bf K}_0$ point in different
directions.
The mutual orientation of the two triads is
determined by the following equations
\begin{eqnarray}
\label{rotation}
{\bf I}_0&=&\hspace{0.3 cm}{\bf I}\cos\Omega+{\bf J}\sin\Omega\;,\\
{\bf J}_0&=&-{\bf I}\cos\theta\sin\Omega+{\bf J}\cos\theta\cos\Omega
+{\bf K}\sin\theta\;,\\
{\bf K}_0&=&\hspace{0.3 cm}{\bf I}\sin\theta\sin\Omega-{\bf J}\sin\theta\cos\Omega+{\bf
K}\cos\theta\;,
\end{eqnarray}
where rotational angles $\Omega$ and $\theta$ are constant.
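As an illustrative check (an addition, not part of the original derivation), the transformation above can be verified to be a pure rotation for arbitrary angles: writing its rows as the components of $({\bf I}_0,{\bf J}_0,{\bf K}_0)$ in the $({\bf I},{\bf J},{\bf K})$ basis, the matrix is orthogonal with unit determinant. The angle values below are arbitrary test inputs.

```python
import numpy as np

# Rows express (I0, J0, K0) in the (I, J, K) basis, following the
# rotation equations in the text; Omega and theta are arbitrary test angles.
Omega, theta = 0.7, 1.2
cO, sO = np.cos(Omega), np.sin(Omega)
ct, st = np.cos(theta), np.sin(theta)

R = np.array([[cO,       sO,      0.0],
              [-ct * sO, ct * cO, st ],
              [st * sO, -st * cO, ct ]])

# A rotation matrix satisfies R R^T = identity and det R = +1,
# so the rotated triad (I0, J0, K0) remains orthonormal.
assert np.allclose(R @ R.T, np.eye(3))
assert np.isclose(np.linalg.det(R), 1.0)
```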
To integrate the equations of propagation of electromagnetic waves in curved
space-time we must resort to an approximation method. In the Newtonian
approximation, the unperturbed trajectory of the light ray is a straight line:
\begin{equation}
x^i(t)=x^i_N(t)=x^i_0+k^i\left(t-t_0\right),
\label{15}
\end{equation}
where $t_0$ is the instant of time of the photon emission from the point with
spatial
coordinates $x^i_0$, and $k^i={\bf k}$ is a constant unit vector tangent
to the unperturbed trajectory and directed from the point of emission to the
point of observation of photon (the vector ${\bf k}\approx -{\bf K}$).
In the Newtonian approximation, the coordinate
speed of the photon is $\dot{x}^i=k^i$ and is considered constant.
It is convenient to introduce a new independent parameter $\tau$ along the
photon's trajectory according to the rule \cite{24}
\begin{equation}
\label{16}
\tau \equiv {\bf k}\cdot{\bf x}=t-t_0+{\bf k}\cdot{\bf x}_0,
\end{equation}
where the dot symbol between two vectors denotes the Euclidean dot product of
two vectors. The moment $t_0$ of the signal's emission corresponds to the
numerical value of the parameter $\tau_0={\bf k}\cdot{\bf x}_0$, and
the moment $t^{\ast}$
of the closest approach of the
unperturbed trajectory of the photon to the origin of the coordinate system
corresponds to the value $\tau=0$
(note that $\tau_0 < 0$ if the source of light is behind the localized source
of gravitational waves). Thus, we find
\vspace{0.3 cm}
\begin{equation}
\label{mom}
\tau=t-t^{\ast},\hspace{2 cm}\tau_0=t_0-t^{\ast}.
\end{equation}
The variable $\tau$ is negative from the point of emission up to the point of the
closest approach, and is positive otherwise.
The differential identity $dt=d\tau$ holds, and for this reason
the integration along the ray's path
with respect to time $t$ can be replaced by integration with respect to
the parameter $\tau$.
Using the parameter $\tau$, the equation of the unperturbed trajectory of the light ray can
be represented as
\begin{equation}
x^i(\tau)=x^i_N(\tau)=k^i \tau+\xi^i,
\label{17a}
\end{equation}
and the distance, $r$, of the photon from the origin of coordinate system is
given by
\begin{equation}
\label{17}
r=r_N(\tau)=\sqrt{\tau^2+d^2},
\end{equation}
where the length of the constant (for a chosen light ray)
transverse vector ${\bm{\xi}}={\bf k}\times ({\bf x}_0
\times {\bf k})={\bf k}\times ({\bf x}\times {\bf k})$
is called the impact parameter of the unperturbed trajectory of
the light ray,
$d=|{\bm{\xi}}|$, and the symbol $``\times"$ between two
vectors denotes the Euclidean cross product. It is worth
emphasizing that the vector $\xi^i$ is directed from the origin of the coordinate
system toward the point of the closest approach of the unperturbed path of
light ray to that origin.
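The defining properties of the impact-parameter vector can be checked numerically. The following illustrative script (an addition to the text; the values of ${\bf k}$ and ${\bf x}_0$ are arbitrary test data) verifies that ${\bm\xi}={\bf k}\times({\bf x}\times{\bf k})$ is the same at every point of the straight ray, is orthogonal to ${\bf k}$, and has constant length $d$.

```python
import numpy as np

k = np.array([2.0, -1.0, 2.0]) / 3.0        # unit tangent vector, |k| = 1
x0 = np.array([5.0, 1.0, -4.0])             # emission point (test data)

def xi_of(x):
    # k x (x x k) = x - k (k.x): the part of x transverse to k
    return np.cross(k, np.cross(x, k))

xi0 = xi_of(x0)
for dt in (0.0, 1.0, 7.5, 100.0):
    x = x0 + k * dt                         # points along the unperturbed ray
    assert np.allclose(xi_of(x), xi0)       # xi is constant along the ray

assert np.isclose(np.dot(xi0, k), 0.0)      # xi lies in the plane orthogonal to k
d = np.linalg.norm(xi0)                     # impact parameter d = |xi|
# At tau = k.x = 0 the photon is at x = xi, i.e. at distance d from the
# origin -- the point of closest approach.
```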
The relations
\begin{equation}
\label{19}
r+\tau=\frac{d^2}{r-\tau},\hspace{2 cm}r_0+\tau_0=\frac{d^2}{r_0-\tau_0},
\end{equation}
also hold, and they are useful for presenting the
results of integration of the light ray
equations in different forms. In particular, if we
assume the strong inequalities $d\ll r$, and $d\ll r_0$
to hold, then
\begin{equation}
\label{19a}
\tau=r-\frac{d^2}{2r}+...,\hspace{2 cm}\tau_0=-r_0+\frac{d^2}{2r_0}+...,
\end{equation}
which clearly shows that at the moment of light reception $\tau$
is positive and at that of light emission $\tau_0$ is negative.
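Both the exact relation (\ref{19}) and the expansion (\ref{19a}) are easy to confirm numerically; the following short script (illustrative only, with arbitrary test values of $d$ and $\tau$) does so.

```python
import math

# Relation (19): r + tau = d**2 / (r - tau) is an exact identity
# for r = sqrt(tau**2 + d**2), since (r + tau)(r - tau) = r**2 - tau**2 = d**2.
d = 1.0
for tau in (-50.0, -3.0, 0.5, 40.0):
    r = math.sqrt(tau**2 + d**2)
    assert math.isclose(r + tau, d**2 / (r - tau))

# Expansion (19a): tau ~ r - d**2/(2 r) when d << r (reception side, tau > 0)
tau = 1.0e4
r = math.sqrt(tau**2 + d**2)
assert abs(tau - (r - d**2 / (2.0 * r))) < 1.0e-12 * r
```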
Let us consider a set of curves $x^i(\tau)=k^i\tau+\xi^i$ with different
values of
vectors $k^i$ and $\xi^i$. The vector field $k^i$, defined along the curve
$x^i(\tau)$, describes the direction of a bundle of light rays along the curve,
and introduces a natural ``2+1"
splitting of 3-dimensional space. The vector $\xi^i$ marks the point of
intersection of a given ray with the plane
orthogonal to the bundle of light rays (see Figure \ref{bundle}).
This vector does not depend on $\tau$ and
can be defined, as in equation (\ref{17a}),
by the relationship
\vspace{0.3 cm}
\begin{eqnarray}
\label{addi}
\xi^i&=&P^i_{\mbox{}j} x^j\hspace {0.5 cm},
\end{eqnarray}
\vspace{0.3 cm}
where
\vspace{0.3 cm}
\begin{equation}
P_{ij}=\delta_{ij}-k_{i}k_{j}\hspace{0.5 cm},
\label{19aa}
\end{equation}
is the projection operator onto the plane orthogonal to the vector $%
k^{i}$. The operator has only two algebraically independent components
and satisfies the
relationship
\vspace{0.3 cm}
\begin{eqnarray}
\label{ghu}
P^i_{\mbox{}k} P^k_{\mbox{}j}&=&P^i_{\mbox{}j}\hspace{0.5 cm}.
\end{eqnarray}
Because of this property we can recast equation (\ref{addi}) into the
form
\vspace{0.3 cm}
\begin{eqnarray}
\xi^i&=&P^i_{\mbox{}j} \xi^j\hspace{0.5 cm},
\end{eqnarray}
which shows explicitly that the vector $\xi^i$ is constrained to lie in a
2-dimensional plane.
Thus, we immediately have for the operation of partial differentiation in this plane
\begin{equation}
\label{18}
\frac{\partial\xi^i}{\partial\xi^j}=P_j^i=P^{ij}=P_{ij}.
\end{equation}
It is worth noting that the projection operator can be used to raise and
lower indices of any geometrical object lying in the plane orthogonal to
vector $k^i$.
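The algebraic properties of the projection operator stated above can be verified directly; the following script is an illustrative addition (the unit vector ${\bf k}$ is arbitrary test data).

```python
import numpy as np

# Properties of the projection operator P_ij = delta_ij - k_i k_j:
# idempotency (equation for P^i_k P^k_j), annihilation of k, and
# trace 2 (only two algebraically independent components).
k = np.array([1.0, 2.0, -2.0]) / 3.0        # arbitrary unit vector
P = np.eye(3) - np.outer(k, k)

assert np.allclose(P @ P, P)                # P^i_k P^k_j = P^i_j
assert np.allclose(P @ k, 0.0)              # P projects out the k-direction
assert np.isclose(np.trace(P), 2.0)         # rank 2: plane orthogonal to k
```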
In what follows, it is convenient to consider the spatial components
of coordinates $\xi^i$ as formally independent with subsequent projection onto
the plane when doing differentiation with respect to $\xi^i$. Therefore
we always use the operator of differentiation with respect to $\xi^i$ in
combination with the projection operator $P^i_j$. For example, before the
projection we treat
\begin{equation}\nonumber
\frac{\partial\xi^i}{\partial\xi^j}=\delta^i_j\;,
\end{equation}
and for the same expression with subsequent projection
\begin{equation}
\nonumber
P^q_j\frac{\partial\xi^i}{\partial\xi^q}=P^i_j\;,
\end{equation}
which agrees with equations (\ref{ghu}) and (\ref{18}).
Moreover, the following rule of differentiation
for an arbitrary smooth function $F(t,{\bf x})$ holds
\begin{equation}
\label{20}
\left[\left(\frac{\partial}{\partial x^i}+k_i\frac{\partial}{\partial t}\right)
F\left(t,{\bf x}\right)\right]_{{\bf x}={\bf x}_0+{\bf k}(t-t_0)}=
\left(P^j_i\frac{\partial}{\partial \xi^j}+k_i\frac{\partial}{\partial \tau}\right)
F\left[\tau,{\bm{\xi}}+{\bf k}\tau\right].
\end{equation}
Equation (\ref{20}) is a generalization of the corresponding
formula
introduced by Kopeikin (\cite{24}, equation (20)) for
functions which do not depend explicitly on time $t$. It is worth noting that
in the left-hand side of formula (\ref{20}) one has first to
differentiate the function $F(t,{\bf x})$
with respect to time $t$ and spatial coordinates $x^i$ and, then, to make
the substitution
${\bf x}={\bf x}_0+{\bf k}(t-t_0)$. However, one makes
corresponding substitutions in the right-hand side of the formula (\ref{20})
first and only afterwards takes derivatives.
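The order of operations described above can be checked by finite differences. The script below is an illustrative addition: $F$ is an arbitrary smooth test function, and ${\bf k}$, ${\bf x}_0$, $t_0$, and the evaluation time are test data.

```python
import numpy as np

# Finite-difference check of the differentiation rule (20):
# [ (d/dx^i + k_i d/dt) F ]_{x = x0 + k (t - t0)}
#     = ( P^j_i d/dxi^j + k_i d/dtau ) F[tau, xi + k tau].
k = np.array([2.0, 2.0, 1.0]) / 3.0         # unit tangent vector
x0 = np.array([1.0, -2.0, 0.5])
t0 = 0.0
P = np.eye(3) - np.outer(k, k)

def F(t, x):                                # arbitrary smooth test function
    return np.sin(t - np.linalg.norm(x)) / np.linalg.norm(x)

h = 1.0e-6
t = 3.0
x = x0 + k * (t - t0)                       # point on the unperturbed ray

# Left-hand side: differentiate first, then substitute x = x0 + k (t - t0).
grad = np.array([(F(t, x + h * e) - F(t, x - h * e)) / (2 * h)
                 for e in np.eye(3)])
lhs = grad + k * (F(t + h, x) - F(t - h, x)) / (2 * h)

# Right-hand side: substitute x = xi + k tau first, then differentiate.
tstar = t0 - np.dot(k, x0)                  # time of closest approach
xi = P @ x0                                 # constant impact-parameter vector
tau = t - tstar

def G(tau_, xi_):
    return F(tstar + tau_, xi_ + k * tau_)

grad_xi = np.array([(G(tau, xi + h * e) - G(tau, xi - h * e)) / (2 * h)
                    for e in np.eye(3)])
rhs = P @ grad_xi + k * (G(tau + h, xi) - G(tau - h, xi)) / (2 * h)

assert np.allclose(lhs, rhs, atol=1.0e-5)
```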
It is useful to stress again that because the coordinates $\xi^i$ lie
in the plane
orthogonal to the vector $k^i$,
only two of the three components
$\xi^1,\xi^2,\xi^3$ are, in fact, independent. We also stress that the
new variables $\xi^i$ and $\tau$ are independent as well. For this reason,
the integration of any function, which can be represented as a
time derivative with respect to the parameter $\tau$, is always
quite straightforward
\begin{equation}
\label{22}
{\int}\frac{\partial}{\partial \tau}F(\tau,{\bm{\xi}})d\tau=
F(\tau,{\bm{\xi}})+C({\bm{\xi}}),
\end{equation}
where $C({\bm{\xi}})$ is an arbitrary function of the constant impact
parameter. Moreover, as the vector $\xi^i$ does not depend on time $\tau$,
the partial derivatives with respect to $\xi^i$ can be
removed from within the time integrals when calculating them along the photon's
trajectory, that is
\begin{equation}
\label{22aa}
{\int}\frac{\partial}{\partial\xi^i}F(\tau,{\bm{\xi}})d\tau=
\frac{\partial}{\partial\xi^i}{\int}F(\tau,{\bm{\xi}})d\tau.
\end{equation}
Because of these advantages the new independent
coordinates
$\tau$ and $\xi^i$ are quite useful in calculations. The usefulness of the variables
$\tau$ and $\xi^i$ has been also recognized by
Damour \& Esposito-Far\`{e}se \cite{23}.
The equations of motion of light rays (\ref{pol}) in terms of parameters ${\bm
\xi}$ and $\tau$ are simpler, and after accounting for a freedom in gauge
transformations and implementation of relationship
(\ref{20}) assume the
form \cite{42}
\begin{eqnarray}
\label{eqnm}
\ddot{x}^{i}(\tau)&=&\frac{1}{2}{\hat{\partial}}_i h_{00}^{can.}
-{\hat{\partial}}_{\tau}h_{0i}^{can.}-
\frac{1}{2}k^i {\hat{\partial}}_{\tau}h_{00}^{can.}-k^j
{\hat{\partial}}_{\tau}h_{ij}^{can.}+
k^j{\hat{\partial}}_{i}h_{0j}^{can.}+\\\nonumber\mbox{}&&
\frac{1}{2}\left({\hat{\partial}}_i+k^i{\hat{\partial}}_{\tau}\right)
k^p k^q h_{pq}^{can.}
-{\hat{\partial}}_{\tau\tau}\left(w^i\;-\;k^i\; w^0\right)\;,\\\nonumber
\end{eqnarray}
where the following notations are used:
$\hat{\partial}_i \equiv P_{ij}\partial/\partial\xi^j\;$,
$\hat{\partial}_{\tau} \equiv
\partial/\partial\tau$. Let us
emphasize once again that the representation of equation (\ref{eqnm}) is valid in
an arbitrary coordinate system
and all metric coefficients are taken
along
the unperturbed trajectory of propagation of the light ray;
that is,
$h_{\alpha\beta}(t,{\bf x})=h_{\alpha\beta}(\tau, {\bm {\xi}}+{\bf k}\tau)$.
We also remark that the right-hand side of equation (\ref{eqnm}) contains
only spatial partial
derivatives with the same index ``$i$'' as does
the left-hand side of the equation.
This contrasts with equation (\ref{pol}) where the
indices of spatial derivatives are
mixed.
Equation (\ref{eqnm}) will be used in sections 5 and 6
for a general
treatment of gravitational perturbations of the photon's trajectory
and discussion of relativistic time delay and angle of light deflection.
Another useful form of equation (\ref{eqnm}) may be obtained if one introduces
the four-vector $k^{\alpha}=(1,k^i)$. Then we find
\begin{eqnarray}
\label{neweq}
\ddot{x}^{i}(\tau)&=&\frac{1}{2}k^{\alpha}k^{\beta}
{\hat{\partial}}_i h_{\alpha\beta}^{can.}-{\hat{\partial}}_{\tau}\left(
k^{\alpha}h_{i\alpha}^{can.}-\frac{1}{2}k^ik^j k^p q_{jp}^{can.}\right)
-{\hat{\partial}}_{\tau\tau}\left(w^i\;-\;k^i\; w^0\right)\;.
\end{eqnarray}
This form of the
equation clearly shows that only the first term on the right-hand
side contributes to the deflection of light,
if observer and source of light are
at infinity. Indeed, one integration of (\ref{neweq}) with respect to time
from $-\infty$ to $+\infty$ brings all first and second time
derivatives to zero,
due to the asymptotic flatness of the metric tensor. This makes
a connection between the formalism of the present paper and that
of Damour \& Esposito-Far\`ese \cite{23} (see also Appendix A).
\subsection{Calculation of Integrals from the Static Part of the Gravitational
Field}
The static part of the gravitational field of the deflector contributes
perturbations to the trajectory of the light ray that are defined by the
following
indefinite integrals \cite{24}
\begin{equation}
\label{31}
A(\tau,{\bm{\xi}})\equiv{\int}\frac{d\tau}{r}
={\int}
\frac{d\tau}{\sqrt{d^2+\tau^2}}=-\ln\left(%
\sqrt{d^2+\tau^2}-\tau\right),
\end{equation}
\begin{equation}
\label{32}
B(\tau,{\bm{\xi}})\equiv{\int}A(\tau,{\bm{\xi}})d\tau=
-\tau \ln\left(\sqrt{d^2+\tau^2}-\tau\right)-\sqrt{d^2+\tau^2},
\end{equation}
where we have omitted constants of integration which are absorbed by
re-definition of constants of integration of unperturbed light trajectory
(\ref{15}).
Integrals (\ref{31}), (\ref{32}) are formally divergent at the lower limit.
However, this divergence is not dangerous
for setting the second of the boundary conditions (\ref{1})
because
only derivatives of the integral (\ref{31}) appear
in the result of the first time integration of the equations of motion of
light rays,
eliminating the divergent part of the
integral \cite{43}. With this in mind,
it is easy to prove that integrals (\ref{31}), (\ref{32}) are in agreement with
the boundary conditions (\ref{1}).
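The closed forms (\ref{31}), (\ref{32}) admit an elementary cross-check: the $\tau$-derivative of $A$ must return the integrand $1/r$, and the $\tau$-derivative of $B$ must return $A$. A minimal numerical sketch in Python (the value of $d$ is an arbitrary test choice, not taken from the text):

```python
import math

d = 2.3  # impact parameter; arbitrary positive test value

def A(tau):  # eq. (31): A = -ln(sqrt(d^2 + tau^2) - tau)
    return -math.log(math.sqrt(d*d + tau*tau) - tau)

def B(tau):  # eq. (32): B = -tau*ln(sqrt(d^2 + tau^2) - tau) - sqrt(d^2 + tau^2)
    r = math.sqrt(d*d + tau*tau)
    return -tau*math.log(r - tau) - r

h = 1e-6
for tau in (-5.0, -0.7, 0.4, 3.0):
    r = math.sqrt(d*d + tau*tau)
    dA = (A(tau + h) - A(tau - h))/(2*h)   # dA/dtau must equal 1/r
    dB = (B(tau + h) - B(tau - h))/(2*h)   # dB/dtau must equal A
    assert abs(dA - 1.0/r) < 1e-6
    assert abs(dB - A(tau)) < 1e-6
```

Only derivatives enter the physical results, so the omitted integration constants play no role in this check.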
\subsection{Calculation of Integrals from Time Dependent Part of Gravitational
Field}
There are two ways of calculating the integrals that arise in finding the path
of
propagation of light in the gravitational field of
a localized source emitting
gravitational waves. The first method relies upon the use of the Fourier transform
(\ref{fur}) and allows one, at least in principle, to calculate all integrals
explicitly
if one knows the specific structure of the Fourier image of the
quadrupole moment of the deflector \cite{44}. The second method exploits
the fact that one deals with a metric depending on retarded time only.
This allows one to make a special
transformation of variables within the integral
which excludes any dependence of the
integrands on the
impact parameter,
and transfers it to the limits of the integrals. Thus, partial
derivatives of the integrals can be calculated explicitly without
assumptions about the
structure of the quadrupole moment of the deflector.
Of course, both
methods give the same results. However, the second method is
more general.
\subsubsection{First Method of Integration}
Let us assume the most general aperiodic form for the time variation
of the deflector. In the linear approximation the
total mass and spin of the deflector are conserved quantities \cite{45}
so that they do not depend on time at all,
and we can consider them as
contributing only to the
static part of the gravitational field of the deflector \cite{45a}.
The quadrupole moment is not
static. It may be represented through a Fourier transform as
\begin{equation}
{\cal{I}}_{ij}(t-r)=(2\pi)^{-1/2}\displaystyle{\int_{-\infty}^{+\infty}}
\tilde{\cal{I}}_{ij}(\omega)e^{i\omega (t-r)}d\omega,
\label{fur}
\end{equation}
where $\tilde{\cal{I}}_{ij}(\omega)$ is the (complex)
Fourier image of the quadrupole moment of the deflector
which must be specified for any particular source of
gravitational waves. Here, we need not know the specific structure of
$\tilde{\cal{I}}_{ij}(\omega)$ since, as will be shown later,
it is irrelevant for subsequent calculations.
Taking time derivatives of the quadrupole moment yields
\begin{equation}
\dot{\cal I}^{ij}=(2\pi)^{-1/2}\displaystyle{\int_{-\infty}^{+\infty}}
(i \omega)\tilde{\cal{I}}_{ij}(\omega)e^{i\omega (t-r)}d\omega,
\label{firs}
\end{equation}
\begin{equation}
\ddot{\cal I}^{ij}=(2\pi)^{-1/2}\displaystyle{\int_{-\infty}^{+\infty}}
(-\omega^2)\tilde{\cal{I}}_{ij}(\omega)e^{i\omega (t-r)}d\omega.
\label{seco}
\end{equation}
Generally speaking, an arbitrary aperiodic source of gravitational waves has an
infinite spectrum. However, it is possible to choose the frequency band
which gives the largest contribution to the spectrum. The mean frequency
$\Omega$ of
this band defines the size of the far (wave) zone of the source, as
being roughly equal
to the wavelength of emitted gravitational waves $\lambda=2\pi c/\Omega$.
For example, if the
deflector of light rays is a binary system, then the strongest
emission of
gravitational waves takes place at twice the mean orbital frequency of the
system. For making estimates we can use the following approximations for
components of the quadrupole moment
\vspace{0.3 cm}
\begin{equation}
\label{etc}
|\dot{\cal I}^{ij}|\simeq \left({\cal M}a\;e\;c\right)\frac{a}{\lambda}\;
,\hspace{1.5 cm}
|\ddot{\cal I}^{ij}|\simeq \left({\cal M}\;e\;c^2\right)
\frac{a^2}{\lambda^2}\;,\hspace{1.5 cm}
etc.\;,
\end{equation}
where $a$ is a characteristic size of the source of gravitational waves and $e$
is its oblateness, quantifying the deviation of the
density distribution
from spherical symmetry.
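The wave-zone scales above are easy to put in numbers. The sketch below uses hypothetical binary parameters (orbital period and separation chosen purely for illustration) to check that the expansion parameter $a/\lambda$ is indeed small:

```python
import math

c = 2.998e8                        # speed of light, m/s
P_orb = 8.0*3600.0                 # assumed orbital period of a binary deflector, s
Omega = 2*(2*math.pi/P_orb)        # strongest emission at twice the orbital frequency
lam = 2*math.pi*c/Omega            # wavelength of emitted gravitational waves
a = 2.0e9                          # assumed characteristic size (separation), m

assert abs(lam - c*P_orb/2) < 1.0  # lambda = 2*pi*c/Omega reduces to c*P/2 here
assert a/lam < 1.0                 # a/lambda is a small expansion parameter
```

With these illustrative numbers $\lambda\sim 4\times10^{12}\,$m, so $a/\lambda\sim 5\times 10^{-4}$, consistent with treating the estimates (\ref{etc}) as a hierarchy in powers of $a/\lambda$.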
When integrating the equations of light propagation using
the metric with Fourier
transform (\ref{fur}) for the quadrupole moment one meets the following
integrals:
\begin{equation}
\label{23}
I_1(\tau,{\bm{\xi}},\omega)={\int_{-\infty}^{\tau}}\frac{\cos[
\omega(\tau-\sqrt{d^2+\tau^2})]}{\sqrt{d^2+\tau^2}}d\tau,
\end{equation}
\begin{equation}
\label{24}
I_2(\tau,{\bm{\xi}},\omega)={\int_{-\infty}^{\tau}}\frac{\sin[
\omega(\tau-\sqrt{d^2+\tau^2})]}
{\sqrt{d^2+\tau^2}}d\tau.
\end{equation}
In order to evaluate the integrals (\ref{23})-(\ref{24}) it is useful to change
the time argument, $\tau$, to the argument $y$, by the transformation
\begin{equation}
\label{25}
y=\tau-\sqrt{d^2+\tau^2},
\end{equation}
which yields
\begin{equation}
\label{26}
\tau=\frac{y^2-d^2}{2y}, \hspace{1 cm}
\sqrt{d^2+\tau^2}=-\frac{1}{2}\frac{d^2+y^2}{y},
\hspace{1 cm}d\tau=\frac{1}{2}\frac{d^2+y^2}{y^2}dy.
\end{equation}
While the parameter $\tau$ runs from $-\infty$ to $+\infty$, the new
parameter $y$ runs from $-\infty$ to 0; that is, $y$ is always negative.
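The inverse relations (\ref{26}) can be confirmed directly; a small numerical sketch (values of $d$ and $\tau$ are arbitrary test choices):

```python
import math

d = 1.7  # arbitrary positive impact parameter
for tau in (-4.0, -0.5, 0.0, 2.5, 10.0):
    r = math.sqrt(d*d + tau*tau)
    y = tau - r                                  # eq. (25)
    assert y < 0                                 # y is always negative
    assert abs(tau - (y*y - d*d)/(2*y)) < 1e-9   # first relation of eq. (26)
    assert abs(r + 0.5*(d*d + y*y)/y) < 1e-9     # r = -(d^2 + y^2)/(2y)
```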
After transforming time
arguments, the integrals $I_1$ and $I_2$ are reduced to the cosine and
sine integrals, respectively (\cite{46}, formula {\bf 8.230}):
\begin{equation}
\label{27}
I_1(\tau,{\bm{\xi}},\omega)=-{\bf Ci}(\omega y),
\end{equation}
\begin{equation}
\label{28}
I_2(\tau,{\bm{\xi}},\omega)=-{\bf Si}(\omega y),
\end{equation}
where constants of integration have been omitted.
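Result (\ref{28}) can be verified without reference tables: since $dy/d\tau=-y/r$, the $\tau$-derivative of $-{\bf Si}(\omega y)$ must reproduce the integrand of (\ref{24}). A sketch with a quadrature-based sine integral (all parameter values are arbitrary):

```python
import math

def Si(x, n=2000):  # sine integral via composite Simpson on sin(t)/t
    if x == 0.0:
        return 0.0
    h = x/n
    s = 1.0 + math.sin(x)/x        # endpoint values of the sinc function
    for k in range(1, n):
        t = k*h
        s += (4 if k % 2 else 2)*math.sin(t)/t
    return s*h/3.0

d, omega = 1.3, 0.8                # arbitrary test values

def y_of(tau):                     # eq. (25)
    return tau - math.sqrt(d*d + tau*tau)

h = 1e-5
for tau in (-3.0, 0.2, 4.0):
    r = math.sqrt(d*d + tau*tau)
    y = y_of(tau)
    # d/dtau of I_2 = -Si(omega*y) must equal sin(omega*y)/r
    dI2 = -(Si(omega*y_of(tau + h)) - Si(omega*y_of(tau - h)))/(2*h)
    assert abs(dI2 - math.sin(omega*y)/r) < 1e-5
```

The analogous check for (\ref{27}) uses ${\bf Ci}'(x)=\cos(x)/x$ in the same way.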
A second integration of the integrals (\ref{27})-(\ref{28}) along the light
trajectory is required as well. Using the transformations (\ref{25})-(\ref{26})
we obtain
\begin{equation}
\label{30}
J_1(\tau,{\bm{\xi}},\omega)\equiv{\int_{-\infty}^{\tau}}I_1(\tau, {\bm{\xi}},
\omega)d\tau=
-\tau\; {\bf Ci}(\omega y)+\frac{1}{2}\;d^2\left[\omega\,{\bf Si}(\omega y)+
\frac{\cos(\omega y)}{y}\right]+
\frac{\sin(\omega y)}{2\omega},
\end{equation}
\begin{equation}
\label{29}
J_2(\tau,{\bm{\xi}},\omega)\equiv{\int_{-\infty}^{\tau}}I_2(\tau, {\bm{\xi}},
\omega)d\tau=
-\tau\; {\bf Si}(\omega y)-\frac{1}{2}\;d^2\left[\omega\,{\bf Ci}(\omega y)-
\frac{\sin(\omega y)}{y}\right]-
\frac{\cos(\omega y)}{2\omega},
\end{equation}
where constants of integration have again been omitted.
Using the Fourier transform of the quadrupole moment (\ref{fur}) and formulae
(\ref{23}), (\ref{24}), (\ref{27}), (\ref{28}) one calculates the important
integrals
\vspace{0.3 cm}
\begin{eqnarray}
\label{bb}
B_{ij}(\tau,{\bm{\xi}})&\equiv &\displaystyle{\int_{-\infty}^{\tau}}\frac{{\cal I}_{ij}(t-r)}{r}dt
=(2\pi)^{-1/2}
\displaystyle{\int_{-\infty}^{+\infty}}
\tilde{\cal{I}}_{ij}(\omega)e^{i\omega t^{\ast}}\left[
I_1(\tau,{\bm{\xi}},\omega)+i I_2(\tau,{\bm{\xi}},\omega)\right]d\omega\;,
\end{eqnarray}
\vspace{0.3 cm}
\begin{eqnarray}
\label{cc}
C_{ij}(\tau,{\bm{\xi}})&\equiv&\displaystyle{\int_{-\infty}^{\tau}}\frac{\dot{\cal I}_{ij}(t-r)}{r}dt=(2\pi)^{-1/2}
\displaystyle{\int_{-\infty}^{+\infty}}\omega
\tilde{\cal {I}}_{ij}(\omega)e^{i\omega t^{\ast}}\left[
-I_2(\tau,{\bm{\xi}},\omega)+i I_1(\tau,{\bm{\xi}},\omega)\right]d\omega\;,
\end{eqnarray}
\vspace{0.3 cm}
\begin{eqnarray}
\label{dd}
D_{ij}(\tau,{\bm{\xi}})&\equiv&\displaystyle{\int_{-\infty}^{\tau}}B_{ij}(\tau,{\bm{\xi}})dt=
(2\pi)^{-1/2}
\displaystyle{\int_{-\infty}^{+\infty}}
\tilde{\cal{I}}_{ij}(\omega)e^{i\omega t^{\ast}}\left[
J_1(\tau,{\bm{\xi}},\omega)+i J_2(\tau,{\bm{\xi}},\omega)\right]d\omega\;,
\end{eqnarray}
\vspace{0.3 cm}
\begin{eqnarray}
\label{ee}
E_{ij}(\tau,{\bm{\xi}})&\equiv&\displaystyle{\int_{-\infty}^{\tau}}C_{ij}(\tau,{\bm{\xi}})dt
=(2\pi)^{-1/2}
\displaystyle{\int_{-\infty}^{+\infty}}\omega
\tilde{\cal{I}}_{ij}(\omega)e^{i\omega t^{\ast}}\left[
-J_2(\tau,{\bm{\xi}},\omega)+i
J_1(\tau,{\bm{\xi}},\omega)\right]d\omega\;,\\\nonumber
\end{eqnarray}
where $t^{\ast}$ is the moment of closest
approach of the photon to the origin of the coordinate system.
In what follows, we need only
partial derivatives with respect to the impact parameter
of the integrals (\ref{bb}) - (\ref{ee}). These can be
calculated rather easily. We have, for example,
\vspace{0.3 cm}
\begin{equation}
\label{iju}
{\hat{\partial}}_i I_1(\tau,{\bm{\xi}},\omega)=
\left(y r\right)^{-1}\cos\left(\omega
y\right) \xi^i,\hspace{1.5 cm}{\hat{\partial}}_i I_2(\tau,{\bm{\xi}},\omega)=
\left(y r\right)^{-1}\sin\left(\omega y\right) \xi^i\;,\\\nonumber
\end{equation}
and so on. Thus, making use of the inverse Fourier transform we obtain
\vspace{0.3 cm}
\begin{eqnarray}
\label{cvt}
{\hat{\partial}}_k B_{ij}(\tau,{\bm{\xi}})&=&
\left(y r\right)^{-1}{\cal I}_{ij}(t-r)
\xi^k,
\end{eqnarray}
\begin{eqnarray}
\label{cds}
{\hat{\partial}}_{\tau} B_{ij}(\tau,{\bm{\xi}})&=&\left(1-\frac{\tau}{r}\right)
\frac{{\cal I}_{ij}(t-r)}{y},
\end{eqnarray}
\begin{eqnarray}
\label{ctui}
{\hat{\partial}}_k C_{ij}(\tau,{\bm{\xi}})&=&\left(y r\right)^{-1}
\dot{\cal I}_{ij}(t-r) \xi^k,
\end{eqnarray}
\begin{eqnarray}
\label{cdo}
{\hat{\partial}}_{\tau} C_{ij}(\tau,{\bm{\xi}})&=&\left(1-\frac{\tau}{r}\right)
\frac{\dot{\cal I}_{ij}(t-r)}{y}.
\end{eqnarray}
Calculation of partial derivatives from integrals $D_{ij}(\tau,{\bm{\xi}})$ and
$E_{ij}(\tau,{\bm{\xi}})$ may be done without difficulty
in a similar fashion using equations (\ref{30})-(\ref{29}).
\subsubsection{Second Method of Integration}
The second method also uses the substitutions (\ref{25}), (\ref{26}).
The integrals (\ref{bb}) - (\ref{cc}) are brought into the form
\vspace{0.3 cm}
\begin{eqnarray}
\label{bbq}
B_{ij}(\tau,{\bm{\xi}})&\equiv& \displaystyle{\int_{-\infty}^{\tau}}
\frac{{\cal I}_{ij}(t-r)}{r}dt
=-\displaystyle{\int_{-\infty}^{y}}\frac{{\cal
I}_{ij}(t^{\ast}+\zeta)}{\zeta}d\zeta\;,
\end{eqnarray}
\vspace{0.3 cm}
\begin{eqnarray}
\label{ccq}
C_{ij}(\tau,{\bm{\xi}})&\equiv&\displaystyle{\int_{-\infty}^{\tau}}
\frac{\dot{\cal I}_{ij}(t-r)}{r}dt=
-\displaystyle{\int_{-\infty}^{y}}\frac{\dot{\cal
I}_{ij}(t^{\ast}+\zeta)}{\zeta}d\zeta\;.\\\nonumber
\end{eqnarray}
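The equality (\ref{bbq}) follows from $d\tau/r=-d\zeta/\zeta$ under the substitution (\ref{25})-(\ref{26}), and is easy to confirm numerically for a model pulse profile. The sketch below (Gaussian profile, $t^{\ast}=0$, all parameters arbitrary illustrative values) compares both sides by quadrature:

```python
import math

d, t_star = 1.5, 0.0

def pulse(s):                       # model quadrupole component: a Gaussian burst
    return math.exp(-(s + 3.0)**2)

def y_of(tau):                      # eq. (25)
    return tau - math.sqrt(d*d + tau*tau)

def simpson(f, a, b, n=4000):       # composite Simpson quadrature
    h = (b - a)/n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2)*f(a + k*h)
    return s*h/3.0

tau = 0.7
y = y_of(tau)
# left side of eq. (bbq): integration over tau, with t - r = t* + y(tau)
lhs = simpson(lambda u: pulse(t_star + y_of(u))/math.sqrt(d*d + u*u), -40.0, tau)
# right side: integration over zeta; the integrand no longer depends on d
rhs = -simpson(lambda z: pulse(t_star + z)/z, -40.0, y)
assert abs(lhs - rhs) < 1e-5
```

The lower limit $-40$ stands in for $-\infty$; the Gaussian pulse makes the truncation error negligible.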
One sees that the integrands of the integrals do not depend on the
parameters $d$ and $\tau$ at all. They are present only in the upper
limit of integration. Hence, the integrals (\ref{bbq}), (\ref{ccq}) are functions
of the variable $y$ only, that is $B_{ij}(\tau,{\bm{\xi}})=B_{ij}(y)$ and
$C_{ij}(\tau,{\bm{\xi}})=C_{ij}(y)$. Making use of the transformations
(\ref{25}), (\ref{26}),
the integrals (\ref{dd}), (\ref{ee}) are reduced to the
expressions
\vspace{0.3 cm}
\begin{equation}
\label{ddq}
D_{ij}(\tau,{\bm{\xi}})\equiv
\displaystyle{\int_{-\infty}^{\tau}}B_{ij}(\tau,{\bm{\xi}})dt=
\frac{1}{2}\displaystyle{\int_{-\infty}^{y}}B_{ij}(\zeta)d\zeta+
\frac{d^2}{2}\displaystyle{\int_{-\infty}^{y}}\frac{B_{ij}(\zeta)}
{\zeta^2}d\zeta,
\end{equation}
\vspace{0.3 cm}
\begin{equation}
\label{eeq}
E_{ij}(\tau,{\bm{\xi}})
\equiv\displaystyle{\int_{-\infty}^{\tau}}C_{ij}(\tau,{\bm{\xi}})dt
=\frac{1}{2}\displaystyle{\int_{-\infty}^{y}}C_{ij}(\zeta)d\zeta+
\frac{d^2}{2}\displaystyle{\int_{-\infty}^{y}}\frac{C_{ij}(\zeta)}
{\zeta^2}d\zeta\;.\\\nonumber
\end{equation}
Hence, the integrals $D_{ij}(\tau,{\bm{\xi}})$, $E_{ij}(\tau,{\bm{\xi}})$ are
also functions of the variable $y$ only.
We stress once again that
our formalism holds true for arbitrary dependence of the quadrupole moment
of the localized source on time,
and includes the case of sources which produce
bursts of gravitational radiation, such as
supernova explosions or coalescence of
binary systems, as well as periodic systems.
Indeed, suppose that the burst starts at the
moment $t_1$ and terminates at the moment $t_2$. We assume for simplicity
that before and after the burst the quadrupole moment of the source
is identically zero.
During the burst,
the tensor function ${\cal F}_{ij}(t)$
describes the time dependence of the quadrupole moment.
Then all formulae derived in this paper hold, if we
describe the quadrupole moment of the source as a product of two Heaviside step
functions with the tensor function ${\cal F}_{ij}(t)$.
Thus, for any
moment of time we write
\begin{eqnarray}
\label{hev}
{\cal I}_{ij}(t)&=&H(t-t_1)H(t_2-t){\cal F}_{ij}(t)\;,
\end{eqnarray}
where the Heaviside step function is defined as follows
\begin{equation}
\label{det}
H(t-T)=
\cases{
1\quad\quad\text{if $t>T$,}\cr
0\quad\quad\text{otherwise.}\cr
}
\end{equation}
Time derivatives of the quadrupole moment are calculated
taking into account that
$\dot{H}(t-T)=\delta(t-T)$
is the Dirac delta-function,
and $\delta(t-t_1){\cal F}_{ij}(t_1)=
\delta(t-t_2){\cal F}_{ij}(t_2)=0$. This yields
\begin{equation}
\label{differ}
\dot{\cal I}_{ij}(t)=H(t-t_1)H(t_2-t)\dot{\cal F}_{ij}(t)\;,\quad\quad\quad
\ddot{\cal I}_{ij}(t)=H(t-t_1)H(t_2-t)\ddot{\cal F}_{ij}(t)\;,
\end{equation}
and similar formulae for higher derivatives.
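For a burst modelled by (\ref{hev}), the window and the endpoint conditions ${\cal F}_{ij}(t_1)={\cal F}_{ij}(t_2)=0$ are straightforward to encode; a toy scalar example (all numbers arbitrary):

```python
def heaviside(x):                  # eq. (det); the value at x = 0 is immaterial here
    return 1.0 if x > 0 else 0.0

t1, t2 = 2.0, 5.0                  # start and end of the burst (arbitrary)

def F(t):                          # smooth profile vanishing at t1 and t2
    return (t - t1)**2 * (t2 - t)**2

def I_burst(t):                    # eq. (hev): windowed quadrupole component
    return heaviside(t - t1)*heaviside(t2 - t)*F(t)

assert I_burst(1.0) == 0.0 and I_burst(6.0) == 0.0   # zero outside the burst
assert I_burst(3.5) == F(3.5)                        # unchanged inside the burst
assert F(t1) == 0.0 and F(t2) == 0.0                 # delta-function terms vanish
```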
It is evident from the structure of integrals (\ref{bbq})-(\ref{eeq})
that taking partial derivatives of any of the foregoing
integrals is reduced to taking the partial derivative with respect to
$y$. In particular, we obtain
\vspace{0.3 cm}
\begin{eqnarray}
\label{pzk}
{\hat{\partial}}_j B_{pq}(\tau,{\bm{\xi}})&=&-\frac{{\cal
I}_{pq}(t^{\ast}+y)}{y}{\hat{\partial}}_j y=
\left(y r\right)^{-1}{\cal I}_{pq}(t-r) \xi^j\;,\\\nonumber
\end{eqnarray}
which exactly coincides with the result (\ref{cvt}) derived above
using the inverse Fourier transform method.
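The chain-rule result (\ref{pzk}) can likewise be verified by finite differences, treating $B$ as the function of $y$ alone given by (\ref{bbq}). In the sketch below ${\bm\xi}=(\xi_1,\xi_2)$ spans the plane orthogonal to ${\bf k}$, $t^{\ast}=0$, and all numbers are arbitrary test values:

```python
import math

def pulse(s):                      # model quadrupole component
    return math.exp(-(s + 3.0)**2)

def simpson(f, a, b, n=4000):      # composite Simpson quadrature
    h = (b - a)/n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2)*f(a + k*h)
    return s*h/3.0

def B_of(xi1, xi2, tau):           # eq. (bbq) as a function of y only
    y = tau - math.sqrt(xi1*xi1 + xi2*xi2 + tau*tau)
    return -simpson(lambda z: pulse(z)/z, -40.0, y)

xi1, xi2, tau = 1.2, 0.9, 0.5
r = math.sqrt(xi1*xi1 + xi2*xi2 + tau*tau)
y = tau - r
h = 1e-5
num = (B_of(xi1 + h, xi2, tau) - B_of(xi1 - h, xi2, tau))/(2*h)
ana = pulse(y)*xi1/(y*r)           # eq. (pzk): (y r)^{-1} I(t - r) xi^j
assert abs(num - ana) < 1e-5
```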
Second and third partial derivatives of the function $B_{ij}(\tau,{\bm{\xi}})$
with respect to the impact parameter will be useful subsequently. They are
calculated making use of formula (\ref{pzk}). This yields
\vspace{0.3 cm}
\begin{eqnarray}
\label{pzka}
{\hat{\partial}}_{jk} B_{pq}(\tau,{\bm{\xi}})&=&\left(y
r\right)^{-1}\left[P_{jk}+\frac{\xi_j \xi_k}{y r}-
\frac{\xi_j \xi_k}{r^2}\right]{\cal I}_{pq}(t-r)-
\frac{\xi_j \xi_k}{y r^2}\dot{\cal I}_{pq}(t-r)\;,
\end{eqnarray}
\vspace{0.3 cm}
and
\vspace{0.3 cm}
\begin{eqnarray}
\label{pzkab}
{\hat{\partial}}_{ijk} B_{pq}(\tau,{\bm{\xi}})&=&\left(y
r\right)^{-1}\left[\frac{\xi_i P_{jk}}{y r}+\frac{2\xi_k P_{ij}}{y r}+
\frac{2\xi_i \xi_j \xi_k}{y^2 r^2}\right.\nonumber\\ \\\nonumber&&\mbox{}-
\left.\frac{\xi_i P_{jk}}{r^2}-
\frac{2\xi_k P_{ij}}{r^2}-\frac{3\xi_i \xi_j \xi_k}{y r^3}+
\frac{3\xi_i \xi_j \xi_k}{r^4}\right]{\cal I}_{pq}(t-r)\nonumber\\\nonumber
\\\nonumber&&\mbox{}-\left(y
r\right)^{-1}\left[\frac{\xi_i P_{jk}}{r}+\frac{2\xi_k P_{ij}}{ r}+
\frac{2\xi_i \xi_j \xi_k}{y r^2}-
\frac{3\xi_i \xi_j \xi_k}{r^3}\right]\dot{\cal I}_{pq}(t-r)
\nonumber\\\nonumber
\\\nonumber&&\mbox{}+\frac{\xi_i \xi_j \xi_k}{y r^3}\ddot{\cal
I}_{pq}(t-r)\;.\\\nonumber
\end{eqnarray}
We note that the formulae of partial differentiation of
$C_{ij}(\tau,{\bm{\xi}})$ look the same as for $B_{ij}(\tau,{\bm{\xi}})$
after taking into account the fact that the
integral (\ref{ccq}) depends on the
first time derivative of the quadrupole moment. The derivatives of the
functionals $E_{ij}(\tau,{\bm{\xi}})$ and $D_{ij}(\tau,{\bm{\xi}})$ can be
obtained using relationships (\ref{ddq})-(\ref{eeq}) and derivatives of
$B_{ij}(\tau,{\bm{\xi}})$ and $C_{ij}(\tau,{\bm{\xi}})$. For example,
\begin{eqnarray}
\label{diffd1}
{\hat{\partial}}_{j} D_{pq}(\tau,{\bm{\xi}})&=&\xi^j\left[
\frac{B_{pq}(\tau,{\bm{\xi}})}{y}
+\displaystyle{\int_{-\infty}^{y}}\frac{B_{pq}(\zeta)}
{\zeta^2}d\zeta\right]\;,\\\nonumber\\\label{diffd2}
{\hat{\partial}}_{jk}
D_{pq}(\tau,{\bm{\xi}})&=&
\frac{\xi^j}{y}{\hat{\partial}}_k
B_{pq}(\tau,{\bm{\xi}})+P^{jk}\left[
\frac{B_{pq}(\tau,{\bm{\xi}})}{y}
+\displaystyle{\int_{-\infty}^{y}}\frac{B_{pq}(\zeta)}
{\zeta^2}d\zeta\right]\;,\\\nonumber\\\label{diffd3}
{\hat{\partial}}_{ijk}
D_{pq}(\tau,{\bm{\xi}})&=&\frac{1}{y}\left[\left(P^{ij}+\frac{\xi^i\xi^j}{y
r}\right){\hat{\partial}}_k
B_{pq}(\tau,{\bm{\xi}})+P^{jk}{\hat{\partial}}_i
B_{pq}(\tau,{\bm{\xi}})+\xi^j {\hat{\partial}}_{ik}
B_{pq}(\tau,{\bm{\xi}})\right]\;.\\\nonumber
\end{eqnarray}
It is worth emphasizing that the third partial derivative of
$D_{pq}(\tau,{\bm{\xi}})$
does not include the integral
$B_{pq}(\tau,{\bm{\xi}})$ by itself, as
might be expected, but only its first and second derivatives.
Therefore, the third partial derivative of
$D_{pq}(\tau,{\bm{\xi}})$ does not depend on the past history of propagation of
the light ray (see
formulae (\ref{pzk}) and (\ref{pzka})).
Now, after making these remarks, we are ready to discuss the
relativistic perturbations of the photon's
trajectory in the radiative gravitational field of a localized source
deflecting light rays.
\section{Perturbations of Photon's Trajectory}
We first note that in terms of the new variables $\tau$ and $\xi^i$ the
components of the ``canonical''
metric tensor (\ref{7})-(\ref{9}) taken at an
arbitrary point on the light ray
can be re-written as follows \cite{47}:
\begin{eqnarray}\label{4aa}
h_{00}^{can.}(\tau,{\bm{\xi}})&=
&\frac{2{\cal M}}{r}+\left({\hat{\partial}}_{ij}+2k_i\hat{\partial}_{j\tau}+k_i
k_j{\hat{\partial}}_{\tau\tau}\right)\left[\frac{{\cal
I}_{ij}(t-r)}{r}\right]-\\\nonumber \\ \nonumber
& &\mbox{}2\left(k_i {\hat{\partial}}_{j}+k_i k_j{\hat{\partial}}_{\tau}\right)
\left[\frac{ \dot{\cal{I}}_{ij}(t-r)}{r}\right]+k_i k_j
\frac{\ddot{\cal {I}}_{ij}(t-r)}{r}\;,\\\nonumber \\\label{5aa}
h_{0i}^{can.}(\tau,{\bm{\xi}})&=&-\frac{2\epsilon_{ipq}{\cal S}^p x^q_N}{r^3}+
2\left({\hat{\partial}}_j+k_j {\hat{\partial}}_{\tau}\right)
\left[\frac{\dot{\cal {I}}_{ij}(t-r)}{r}\right]-
2k_j \frac{\ddot{\cal {I}}_{ij}(t-r)}{r}\;,\\\nonumber \\\label{6aa}
h_{ij}^{can.}(\tau,{\bm{\xi}})&=&\delta_{ij}
h_{00}^{can.}(\tau,{\bm{\xi}})+\frac{2}{r}\ddot{\cal {I}}_{ij}(t-r),
\end{eqnarray}
where on the right-hand side of all formulae it is implicitly assumed that the
variables $t$, $x^i$ are replaced by $\tau$ and $\xi^i$, and
$\hat{\partial}_i \equiv P_i^j\partial/\partial\xi^j$,
$\hat{\partial}_{\tau} \equiv \partial/\partial\tau$. In addition,
note
that the dot over the quadrupole moment ${\cal I}_{ij}$
takes the usual meaning of
differentiation with respect to time, which must be carried out first,
before the substitution of $\tau$ and $\xi^i$ for $t$ and $x^i$, and before
taking any other derivative.
The metric tensor (\ref{4aa}) - (\ref{6aa}) is used in the equations of motion of
light rays (\ref{eqnm})
which are reduced with the help of formula (\ref{20}) to the
expression:
\vspace{0.3 cm}
\begin{eqnarray}
\label{zoya}
\ddot{x}^i(\tau)&=&\left[2{\cal M}\left({\hat{\partial}}_{i}-k_i\hat{\partial}_{\tau}\right)-
2{\cal S}^p\left(\epsilon_{ipq}{\hat{\partial}}_{q\tau}+
k_q\epsilon_{ipq}{\hat{\partial}}_{\tau\tau}-k_j\epsilon_{jpq}{\hat{\partial}}_{iq}
\right)\right]\biggl\{\frac{1}{r}\biggr\}+\\\nonumber \\ \nonumber
& &\mbox{}\left({\hat{\partial}}_{ipq}-k_i{\hat{\partial}}_{pq\tau}+
2k_p{\hat{\partial}}_{iq\tau}+k_p k_q{\hat{\partial}}_{i\tau\tau}-
2k_i k_p{\hat{\partial}}_{q\tau\tau}-k_i k_p
k_q{\hat{\partial}}_{\tau\tau\tau} \right)
\biggl\{\frac{{\cal {I}}_{pq}(t-r)}{r}\biggl\}+\nonumber \\ \nonumber \\
\nonumber & &\mbox{}
2\left(k_i k_p{\hat{\partial}}_{q\tau}-\delta_{ip}{\hat{\partial}}_{q\tau}-
\delta_{ip}k_q{\hat{\partial}}_{\tau\tau}+
k_i k_p k_q{\hat{\partial}}_{\tau\tau}
\right)\biggl\{\frac{\dot{\cal {I}}_{pq}(t-r)}{r}\biggl\}-
{\hat{\partial}}_{\tau\tau}\left(w^i\;-k^i\;w^0\right)\; ,
\end{eqnarray}
where $w^i$ and $w^0$ are functions given by relationships
(\ref{poh})-(\ref{boh}). Remarkably, no terms depending on the
second time derivatives of the quadrupole moment appear
in the
equations of
motion of light rays (\ref{zoya}), because of mutual cancellation. This fact
explicitly demonstrates that gravitational waves emitted by localized sources
are much harder to detect through angular deflection
than other authors
suggest. It is worth
noting that the disappearance of the terms with second derivatives
of the quadrupole moment is a local phenomenon and is not a result of
integration of equation (\ref{zoya}). This is a characteristic feature of
General Relativity. Alternative theories of gravity do not
possess such a local cancellation of gravitational wave terms.
This cancellation may be used
for conducting new tests of General Relativity in the
weak, radiative
gravitational-field limit.
Let us
simplify the equations of motion (\ref{zoya}) in order to avoid writing down
cumbersome expressions. We introduce two functions
$\varphi^i$ and $\varphi^0$ which generate in (\ref{zoya})
the time derivatives of second and higher orders. These functions are
defined as follows:
\vspace{0.3 cm}
\begin{eqnarray}\label{w0}
\varphi^0 &=&-\;2k_p \nabla_q\biggl\{
\frac{{\cal {I}}_{pq}(t-r)}{r}\biggl\}+
k_p k_q\biggl\{\frac{\dot{\cal {I}}_{pq}(t-r)}{r}\biggl\} \;,\\\nonumber\\\label{wi}
\varphi^i &=& 2{\cal S}^p\;k_q\;\epsilon_{ipq}\biggl\{\frac{1}{r}\biggr\}-
k_p\;k_q\nabla_i\biggl\{\frac{{\cal {I}}_{pq}(t-r)}{r}\biggl\}+
2 k_q \biggl\{\frac{\dot{\cal {I}}_{iq}(t-r)}{r}\biggl\}\;,
\end{eqnarray}
where the differential operator $\nabla_i\equiv\partial/\partial x^i$
must be applied
before the substitution of the unperturbed trajectory of light rays.
It can be easily confirmed by straightforward use of formula (\ref{20}) that the
expressions (\ref{w0})-(\ref{wi}) generate terms with second and
third derivatives with respect to $\tau$ in (\ref{zoya}). The equations
for the path of the light
ray now assume the form:
\begin{eqnarray}\label{zoya1}
\ddot{x}^i(\tau)&=&\left[2{\cal M}\left({\hat{\partial}}_{i}-
k_i\hat{\partial}_{\tau}\right)-
2{\cal S}^p\left(\epsilon_{ipq}{\hat{\partial}}_{q\tau}
-k_j\epsilon_{jpq}{\hat{\partial}}_{iq}
\right)\right]\biggl\{\frac{1}{r}\biggr\}+\\\nonumber \\ \nonumber
& &\mbox{}\left({\hat{\partial}}_{ipq}-k_i{\hat{\partial}}_{pq\tau}+
2k_p{\hat{\partial}}_{iq\tau}\right)
\biggl\{\frac{{\cal {I}}_{pq}(t-r)}{r}\biggl\}-
2P_{ij}{\hat{\partial}}_{q\tau}\biggl\{\frac{\dot{\cal {I}}_{jq}(t-r)}{r}\biggl\}-
\\\nonumber \\ \nonumber\mbox{}&&
{\hat{\partial}}_{\tau\tau}\left[w^i+\varphi^i
-k^i\;\left(w^0+\varphi^0\right)\right]\;.\\\nonumber
\end{eqnarray}
We note that the terms
$\varphi^0$ and $\varphi^i$ are gauge-dependent and can be, in principle,
eliminated from the equations of motion (\ref{zoya}) by choosing appropriate
gauge functions $w^0$ and $w^i$. However, such a procedure
will introduce a reference system with a coordinate grid very
sensitive to the direction to a specific source of light rays;
that is, to the
vector $k^i$. The coordinate system obtained in this way would be of little
practical use. For this reason we do not recommend the elimination of
functions $\varphi^0$ and $\varphi^i$ from (\ref{zoya}) and give preference
to the ADM-harmonic coordinate system, which admits a much simpler and
unique
treatment of observable effects. Thus, we leave the functions
$\varphi^0$ and $\varphi^i$ in the equations of motion of light rays,
where gauge
functions $w^0$ and $w^i$ are defined by formulae (\ref{ttt})-(\ref{kkk}).
Proceeding further in this way
and integrating equations (\ref{zoya}) one obtains
\vspace{0.3 cm}
\begin{eqnarray}
\label{jja}
\dot{x}^i(\tau)&=&k^i+\dot{\Xi}^i(\tau)\\\nonumber\\
\label{epr}
x^i(\tau)&=&x^i_N(\tau)+\Xi^i(\tau)-\Xi^i(\tau_0)\;,
\end{eqnarray}
where the unperturbed trajectory of light ray $x^i_N(\tau)$ is determined by
the expression (\ref{17a}). The
relativistic perturbations to the trajectory are:
\vspace{0.5 cm}
\begin{eqnarray}
\label{aop}
\dot{\Xi}^i(\tau) &=&
\left(2{\cal M}{\hat{\partial}}_{i}+2{\cal
S}^pk_j\epsilon_{jpq}{\hat{\partial}}_{iq}\right)A(\tau,{\bm{\xi}})+
{\hat{\partial}}_{ipq}B_{pq}(\tau,{\bm{\xi}})-
\\ \nonumber \\&&\mbox{}
\left(2{\cal M}k_i + 2{\cal S}^p\epsilon_{ipq}{\hat{\partial}}_q\right)
\biggl\{\frac{1}{r}\biggr\}-
\left(k_i{\hat{\partial}}_{pq}-
2k_p{\hat{\partial}}_{iq}\right)
\biggl\{\frac{{\cal {I}}_{pq}(t-r)}{r}\biggl\}-
2P_{ij}{\hat{\partial}}_{q}\biggl\{\frac{\dot{\cal {I}}_{jq}(t-r)}{r}\biggl\}-
\nonumber\\ \nonumber \\\nonumber&&\mbox{}
{\hat{\partial}}_{\tau}\left[w^i+\varphi^i
-k^i\;\left(w^0+\varphi^0\right)\right]\;,\\ \nonumber \\\nonumber\\
\label{jjj}
\Xi^i(\tau)&=&\left(2{\cal M}{\hat{\partial}}_{i}+2{\cal
S}^pk_j\epsilon_{jpq}{\hat{\partial}}_{iq}\right)B(\tau,{\bm{\xi}})-
\left(2{\cal M}k_i-2{\cal S}^p\epsilon_{ipq}{\hat{\partial}}_q
\right)A(\tau,{\bm{\xi}})
+\\\nonumber
\\&&\mbox{}{\hat{\partial}}_{ipq}D_{pq}(\tau,{\bm{\xi}})-
\left(k_i{\hat{\partial}}_{pq}-2k_p{\hat{\partial}}_{iq}
\right)B_{pq}(\tau,{\bm{\xi}})
-2P_{ij}{\hat{\partial}}_{q}C_{jq}(\tau,{\bm{\xi}})-
\nonumber\\\nonumber
\\&&\mbox{}
-w^i(\tau,{\bm{\xi}})-\varphi^i(\tau,{\bm{\xi}})
+k^i\;\left[w^0(\tau,{\bm{\xi}})+\varphi^0(\tau,{\bm{\xi}})\right]\;.
\nonumber\\ \nonumber
\end{eqnarray}
We emphasize that before differentiation with respect to time $\tau$
or impact parameter $\xi^i$,
one has to differentiate the quadrupole moment with
respect to time $t$ and make the substitutions: $t\mapsto \tau$,
$r\mapsto \sqrt{d^2+\tau^2}$, $r_0\mapsto \sqrt{d^2+\tau_0^2}$. We also wish
to underline that the only integrals which need be calculated
explicitly in expressions (\ref{aop})-(\ref{jjj}) are $A(\tau,{\bm{\xi}})$ and
$B(\tau,{\bm{\xi}})$. All other
integrals are acted upon by
partial derivatives, which reduce them to ordinary
functions as explained in the
previous section. This remarkable fact allows
considerable simplification of
the calculations.
This simplification results from the fact that the
integrands can be formed from retarded potentials independent
of impact parameter, after using the
transformation of variables (\ref{25}).
This would be impossible if the metric tensor were
not a function of retarded time
$t-r$. Thus, retardation simplifies the calculations in the case of
time-dependent gravitational fields.
In the case of a static or stationary
gravitational field, the calculation of
propagation of light can be done using the same technique since one
can always consider a constant multipole also as a (constant) function of
retarded time. For this reason, more involved calculations of light propagation
(e.g. see \cite{24} and \cite{32}) can be simplified as well.
The functions $w^i$ and $w^0$, which describe
freedom in choosing coordinate systems, are
taken from formulae (\ref{ttt})-(\ref{kkk}) of Appendix B. Consequently, the
integrals of equations of light propagation (\ref{zoya})
expressed
in the ADM-harmonic coordinate gauge possess a simple interpretation of
observable effects, as discussed in the following section.
It is convenient to obtain an
expression for the unit vector $k^i$ written in terms
of spatial coordinates of the
points of emission, ${\bf x_0}$, and observation,
${\bf x}$, of the light ray. From formula (\ref{epr}) one has
\begin{eqnarray}
\label{uuu}
k^i &=&-
K^i-\frac{P_j^i\left[\Xi^j
(\tau,{\bm{\xi}})-\Xi^j(\tau_0,{\bm{\xi}})\right]}{|{\bf x}-{\bf
x}_0|}\;,\\\nonumber
\end{eqnarray}
or more explicitly
\begin{eqnarray}
\label{expli}
k^i
&=&-K^i-\beta^i(\tau,{\bm{\xi}})+\beta^i(\tau_0,{\bm{\xi}})\;,
\\\nonumber\\
\beta^i(\tau,{\bm{\xi}})&=&\beta^i_M(\tau,{\bm{\xi}})+
\beta^i_S(\tau,{\bm{\xi}})+\beta^i_Q(\tau,{\bm{\xi}})\;,\\\nonumber
\end{eqnarray}
where the relativistic corrections $\beta^i(\tau,{\bm{\xi}})$
to the vector $K^i$ are defined as follows:\vspace{0.5
cm}
\begin{eqnarray}
\label{corr}
\beta^i_M(\tau,{\bm{\xi}})&=&\frac{
2{\cal M}{\hat{\partial}}_{i}B(\tau,{\bm{\xi}})}{|{\bf x}-{\bf x}_0|}\;,
\\\nonumber\\
\label{correc}
\beta^i_S(\tau,{\bm{\xi}})&=&\frac{2{\cal
S}^pk_j\epsilon_{jpq}{\hat{\partial}}_{iq}B(\tau,{\bm{\xi}})+
2P^{ij}{\cal S}^p\epsilon_{jpq}{\hat{\partial}}_q A(\tau,{\bm{\xi}})}
{|{\bf x}-{\bf x}_0|}\;,\\\nonumber\\
\label{coper}
\beta^i_Q(\tau,{\bm{\xi}})&=&\frac{
{\hat{\partial}}_{ipq}D_{pq}(\tau,{\bm{\xi}})+2k_p{\hat{\partial}}_{iq}
B_{pq}(\tau,{\bm{\xi}})
-2P^{ij}{\hat{\partial}}_{q}C_{jq}(\tau,{\bm{\xi}})
-P^i_j\left[w^j(\tau,{\bm{\xi}})+\varphi^j(\tau,{\bm{\xi}})\right]}
{|{\bf x}-{\bf x}_0|}
\;.
\\ \nonumber
\end{eqnarray}
The relativistic corrections $\beta^i(\tau_0,{\bm{\xi}})$ are obtained
by replacing the parameter $\tau$ in the numerators
of expressions (\ref{corr})-(\ref{coper}) by $\tau_0$. One notes that in
equation (\ref{expli})
the unit Euclidean vector
\begin{eqnarray}
\label{unitv}
K^i=-\frac{x^i-x_0^i}{|{\bf x}-{\bf x}_0|}\;
\end{eqnarray}
defines the direction from the observer towards the source of light and may be
interpreted as a
direction in asymptotically flat space-time \cite{48}.
Relationship (\ref{uuu}) allows
us
to apply the results of integration of equation of light propagation
to the boundary value problem as well.
The boundary value problem is formulated
in terms of initial ${\bf x}_0$ and final ${\bf x}$ positions of the photon
\begin{equation}\label{bvp}
{\bf x}(t)={\bf x}\;,\quad\quad {\bf x}(t_0)={\bf x}_0\;,
\end{equation}
whilst the initial-boundary value problem (\ref{1}) is formulated by means
of assignment of the initial position ${\bf x}_0$ and the velocity of the photon
at past null infinity.
The relativistic correction to the vector
$K^i$ contains in its denominator the
large distance
between observer and source of light. However, the difference $\Xi^j
(\tau,{\bm{\xi}})-\Xi^j(\tau_0,{\bm{\xi}})$ in the numerator of (\ref{uuu}) may be
of the same order as $|{\bf x}-{\bf x}_0|$ itself.
For this reason the relativistic correction in question
must be
taken into account, in general, for
calculation of light deflection in the cases of
finite distances of
observer or source of light from the localized source of gravitational
waves. Only in the case where observer and source of light reside
at large distances on
opposite
sides of the source of gravitational waves,
as was assumed in the paper by Damour \&
Esposito-Far\`ese (1998), can the relativistic correction $\beta^i$
be neglected.
\section{Basic Observable Relativistic Effects}
\subsection{Time Delay}
The gravitational time delay is derived from equations (\ref{epr}), (\ref{jjj}). In
order to obtain the expression for the time delay we take the square of equation
(\ref{epr}) and then find the
difference $t-t_0$ by taking the
square root and
expanding with respect to the small relativistic parameters. This yields:
\vspace{0.3 cm}
\begin{eqnarray}
t-t_0&=&|{\bf x}-{\bf x}_0|-{\bf{k}}\cdot{\bm{\Xi}}(\tau)+
{\bf{k}}\cdot{\bm{\Xi}}(\tau_0)\;,\\\nonumber
\end{eqnarray}
or\vspace{0.3 cm}
\begin{eqnarray}
\label{qer}
t-t_0&=&|{\bf x}-{\bf x}_0|+\Delta_M(t,t_0)+\Delta_S(t,t_0)+
\Delta_Q(t,t_0),
\end{eqnarray}
where $|{\bf x}-{\bf x}_0|$ is the usual Euclidean distance
\cite{49}
between the points
of emission, ${\bf x}_0$, and reception, ${\bf x}$, of the photon, $\Delta_M$
is the classical
Shapiro delay produced by the (constant) spherically symmetric part of
the gravitational
field of the deflector, $\Delta_S$ is the Lense-Thirring or Kerr delay due to
the (constant) spin of the localized source of gravitational waves,
and $\Delta_Q$ describes an additional
delay caused by the time dependent quadrupole moment of the source.
Specifically we obtain:
\vspace{0.3 cm}
\begin{eqnarray}
\label{mass1}
\Delta_M=2{\cal M} \ln\left[\frac{r+\tau}{r_0+\tau_0}\right]
\end{eqnarray}
\vspace{0.3 cm}
\begin{eqnarray}
\label{spin1}
\Delta_S&=&-2\epsilon_{ijk}k^j{\cal S}^k {\hat{\partial}}_i
\ln\left[\frac{r+\tau}{r_0+\tau_0}\right]
\end{eqnarray}
\vspace{0.3 cm}
\begin{eqnarray}
\label{quad1}
\Delta_Q&=&{\hat{\partial}}_{ij}
\left[B_{ij}(\tau,{\bm{\xi}})-B_{ij}(\tau_0,{\bm{\xi}})\right]+
\delta_Q(\tau,{\bm{\xi}})-\delta_Q(\tau_0,{\bm{\xi}})\;,
\end{eqnarray}
where
\begin{eqnarray}
\label{delta}
\delta_Q(\tau,{\bm{\xi}}) &=& k^i\left(w^i+\varphi^i\right)-
w^0-\varphi^0=\\\nonumber\\\nonumber\mbox{}&&
\frac{1}{2}{\hat{\partial}}_{\tau}\left[\nabla_p\nabla_q\biggl\{
\frac{^{(-2)}{\cal{I}}_{pq}(t-r)}{r}\biggr\}\right]-
\nabla_p\nabla_q\biggl\{
\frac{^{(-1)}{\cal{I}}_{pq}(t-r)}{r}\biggr\}-
\\\nonumber\\\nonumber\\\nonumber\mbox{}&&
k_p k_q{\hat{\partial}}_{\tau}\biggl\{\frac{{\cal{I}}_{pq}(t-r)}{r}\biggr\}+
2k_p k_q\biggl\{\frac{\dot{\cal{I}}_{pq}(t-r)}{r}\biggr\}\;.\\\nonumber
\end{eqnarray}
The functions $^{(-1)}{\cal{I}}_{pq}(t-r)$ and $^{(-2)}{\cal{I}}_{pq}(t-r)$
are defined by formula (\ref{integr}) of Appendix B.
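Assuming, as formula (\ref{integr}) suggests, that $^{(-n)}{\cal{I}}_{pq}$ denotes the $n$-fold time antiderivative of the quadrupole moment, such functions can be generated numerically from any model history of the source. A minimal sketch for a hypothetical harmonic component ${\cal I}(t)=\cos\Omega t$, whose first antiderivative (vanishing at $t=0$) is $\sin\Omega t/\Omega$:

```python
import numpy as np

# Hypothetical model: one component of the quadrupole, I(t) = cos(Omega*t).
Omega = 2.0
t = np.linspace(0.0, 10.0, 200001)
I = np.cos(Omega * t)

# Trapezoidal running integral: a numerical stand-in for ^(-1)I(t).
dt = t[1] - t[0]
Im1 = np.concatenate(([0.0], np.cumsum(0.5 * (I[1:] + I[:-1]) * dt)))

err = np.max(np.abs(Im1 - np.sin(Omega * t) / Omega))
print(err)  # the running integral tracks the analytic antiderivative
```

Applying the same running integral twice gives a stand-in for $^{(-2)}{\cal{I}}_{pq}$; the integration constants correspond to the choice of initial epoch.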
The expression for the second derivative of function $B_{ij}(\tau,{\bm{\xi}})$ has
been given in equation (\ref{pzka}).
The other derivatives appearing in $\Delta_Q$ are
as follows
\vspace{0.3 cm}
\begin{eqnarray}
\label{dertau}
{\hat{\partial}}_{\tau}\biggl\{\frac{{\cal I}_{ij}(t-r)}{r}\biggr\}&=&
-\frac{y}{r}\frac{\dot{\cal I}_{ij}(t-r)}{r}-
\frac{\tau}{r}\frac{{\cal I}_{ij}(t-r)}{r^2}\;,\\\nonumber\\\nonumber\\
\label{oij}
\nabla_p\nabla_q\biggl\{
\frac{^{(-1)}{\cal{I}}_{pq}(t-r)}{r}\biggr\} &=&
\left[\dot{\cal I}_{pq}(t-r)+
3\frac{{\cal{I}}_{pq}(t-r)}{r}+3\frac{^{(-1)}{\cal{I}}_{pq}(t-r)}{r^2}
\right]\frac{x^p_N x^q_N}{r^3}\;,\\\nonumber\\\nonumber\\
\label{mnb}
\nabla_p\nabla_q\biggl\{
\frac{^{(-2)}{\cal{I}}_{pq}(t-r)}{r}\biggr\} &=&
\left[{\cal I}_{pq}(t-r)+
3\frac{^{(-1)}{\cal{I}}_{pq}(t-r)}{r}+3\frac{^{(-2)}{\cal{I}}_{pq}(t-r)}{r^2}
\right]\frac{x^p_N x^q_N}{r^3}\;,\\\nonumber\\\nonumber\\
\label{dertxx}
{\hat{\partial}}_{\tau}\biggl\{\nabla_p\nabla_q
\frac{^{(-2)}{\cal{I}}_{pq}(t-r)}{r}\biggr\} &=&
2\left[{\cal I}_{pq}(t-r)+
3\frac{^{(-1)}{\cal{I}}_{pq}(t-r)}{r}+3\frac{^{(-2)}{\cal{I}}_{pq}(t-r)}{r^2}
\right]\frac{x^q_N k^p }{r^3}\\\nonumber\\\nonumber
\mbox{}&&
-3\frac{\tau}{r}\left[{\cal I}_{pq}(t-r)+4\frac{
^{(-1)}{\cal{I}}_{pq}(t-r)}{r}+5\frac{^{(-2)}{\cal{I}}_{pq}(t-r)}{r^2}\right]
\frac{x^p_N x^q_N}{r^4}\\\nonumber\\\nonumber
\mbox{}&&
-\frac{y}{r}\left[\dot{\cal I}_{pq}(t-r)+
3\frac{{\cal{I}}_{pq}(t-r)}{r}+3\frac{^{(-1)}{\cal{I}}_{pq}(t-r)}{r^2}
\right]\frac{x^p_N x^q_N}{r^3}\;.
\end{eqnarray}
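Formula (\ref{mnb}) can be spot-checked by numerical differentiation. The sketch below assumes that $\nabla_p$ is the spatial gradient taken at fixed time $t$ and that $r=|{\bf x}_N|$ with $x^i_N$ the field point, and uses a hypothetical traceless model ${\cal I}_{pq}(s)=C_{pq}\cos s$, so that $^{(-1)}{\cal I}_{pq}(s)=C_{pq}\sin s$ and $^{(-2)}{\cal I}_{pq}(s)=-C_{pq}\cos s$:

```python
import numpy as np

# Hypothetical traceless symmetric amplitude C_pq; model I_pq(s) = C_pq cos s.
C = np.array([[1.0, 0.3, -0.2],
              [0.3, -0.5, 0.4],
              [-0.2, 0.4, -0.5]])      # trace = 1 - 0.5 - 0.5 = 0
t = 10.0
x = np.array([1.0, 2.0, 2.0])
r = np.linalg.norm(x)                  # r = 3

def F(x, p, q):
    """Scalar field ^(-2)I_pq(t - r)/r for the model quadrupole."""
    rr = np.linalg.norm(x)
    return -C[p, q] * np.cos(t - rr) / rr

# Left-hand side: sum over p, q of d^2F/dx_p dx_q via a central 4-point stencil.
h = 1e-4
lhs = 0.0
for p in range(3):
    for q in range(3):
        ep = np.zeros(3); ep[p] = h
        eq = np.zeros(3); eq[q] = h
        lhs += (F(x + ep + eq, p, q) - F(x + ep - eq, p, q)
                - F(x - ep + eq, p, q) + F(x - ep - eq, p, q)) / (4 * h * h)

# Right-hand side of (mnb): [I + 3 ^(-1)I/r + 3 ^(-2)I/r^2] x^p x^q / r^3.
bracket = (C * np.cos(t - r) + 3 * C * np.sin(t - r) / r
           - 3 * C * np.cos(t - r) / r**2)
rhs = x @ bracket @ x / r**3
print(lhs, rhs)  # agree to finite-difference accuracy
```

The $\delta_{pq}$ terms generated by the differentiation drop out here because the model quadrupole is traceless.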
The relationship (\ref{qer}) for the time delay has been
derived with respect to
coordinate time $t$. In order to convert this relationship
to observable proper time, we
assume for simplicity that the observer is in a state of free fall
and that his velocity is negligibly small at the point of
observation, with spatial
coordinate ${\bf x}$.
If the observer's velocity is not small
an additional
Lorentz transformation of time must be applied. Transformation from the
ADM-harmonic coordinate time $t$ to proper time $T$ is made with
the help of the formula
(e.g. see \cite{50})
\vspace{0.3 cm}
\begin{eqnarray}
\label{prop}
dT&=&dt\sqrt{- g_{00}(t,{\bf x})}=dt\left(1-\frac{1}{2}h_{00}\right)\;.
\end{eqnarray}
Implementation of formula (\ref{adm1}) for $h_{00}$ and subsequent
integration of (\ref{prop}) with respect to time yields
\vspace{0.3 cm}
\begin{eqnarray}
\label{tra}
T&=&\left(1-\frac{{\cal M}}{r}\right)\left(t-t_i\right)\;,
\end{eqnarray}
where $t_i$ is the initial epoch of observation and all
velocity-dependent terms are assumed small, as
argued above, and are therefore omitted.
We also stress that under usual
circumstances the distance $r$ is so large that the difference between
the observer's proper
time
and coordinate time can be neglected. Thus, we are allowed to treat
coordinate time $t$ as proper time.
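For orientation, the fractional rate difference between $T$ and $t$ in (\ref{tra}) is ${\cal M}/r=GM/(c^2 r)$. A sketch with hypothetical values (a solar-mass deflector observed from a distance of 1 kpc):

```python
G = 6.674e-11          # m^3 kg^-1 s^-2
c = 2.998e8            # m/s
M_sun = 1.989e30       # kg
kpc = 3.086e19         # m

# Dimensionless rate difference M/r for the assumed configuration.
frac = G * M_sun / (c**2 * kpc)
print(frac)  # ~ 5e-17: negligible at realistic distances to the deflector
```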
We note that the time delay in the propagation of light
depends not only on instantaneous functions of
retarded time but also on the integrals of time $^{(-1)}{\cal{I}}_{pq}(t-r)$ and
$^{(-2)}{\cal{I}}_{pq}(t-r)$.
These integrals describe the whole past history of the source
of gravitational waves up to the moment of observation. Under usual
circumstances, the influence of
such integrals on the time delay is expected to
be small. However, this question deserves more detailed discussion and will be
studied elsewhere.
For example, these terms may be revealed
in observations as the ``kinematic resonance effect'' predicted by
Braginsky \& Grishchuk \cite{14}. These terms may be also important for detection
of the solar $g$-mode tidal oscillations by the LISA gravitational-wave antenna in
space \cite{51}.
\subsection{Deflection of Light}
The coordinate direction to the source of light measured at the point
of observation ${\bf x}$ is
defined by the four-vector $p^\alpha=(1,p^i)$ where $p^i=-\dot{x}^i$, or
\vspace{0.3 cm}
\begin{eqnarray}
\label{coor}
p^i&=&-k^i-\dot{\Xi}^i\;,
\end{eqnarray}
where the minus sign directs the vector $p^i$
from the observer to the source of light. However, the coordinate
direction $p^i$ is not a directly observable quantity. A real observable vector
towards the source of light, $s^\alpha=(1,s^i)$, is defined with respect to the local
inertial frame of the observer.
In this frame $s^i=-dX^i/dT$, where $T$ is the
observer's proper time and $X^i$ are spatial coordinates of the local inertial
frame. We shall assume for simplicity that the observer is at
rest \cite{52} with respect to
the (global) ADM-harmonic coordinate system $(t,x^i)$. Then the infinitesimal
transformation from the global ADM-harmonic coordinates $(t,x^i)$ to the local
coordinates $(T,X^i)$ is given by the formulas
\vspace{0.3 cm}
\begin{eqnarray}
\label{trans}
dT=\Lambda^0_0\; dt+\Lambda^0_j\; dx^j\;\;\;&,&\;\;
dX^i=\Lambda^i_0\; dt+\Lambda^i_j\; dx^j\;\;\;,
\end{eqnarray}
where the matrix of transformation $\Lambda^{\alpha}_{\beta}$ is defined by the
requirements of orthonormality
\vspace{0.3 cm}
\begin{eqnarray}
\label{ort}
g_{\alpha\beta}&=&\eta_{\mu\nu}\Lambda^{\mu}_{\alpha}\Lambda^{\nu}_{\beta}\;.
\end{eqnarray}
In particular, the orthonormality condition (\ref{ort}) assumes that spatial
angles and lengths at the point of observations are measured with the
Euclidean metric $\delta_{ij}$. Because the vector $s^\alpha$ is
isotropic, we conclude that the Euclidean length $|{\bf s}|$ of
the vector $s^i$
is equal to 1. Indeed, one has
\begin{eqnarray}
\label{unity}
\eta_{\alpha\beta}s^\alpha s^\beta&=&-1+{\bf s}^2=0\;.
\end{eqnarray}
Hence, $|{\bf s}|=1$.
In the linear approximation with respect to $G$,
the matrix of the transformation is
as follows \cite{31}
\vspace{0.3 cm}
\begin{eqnarray}
\label{lambda}
\Lambda^0_0&=&1-\frac{1}{2}h_{00}(t,{\bf x})\;,\nonumber\\
\mbox{} \Lambda^0_i&=&-h_{0i}(t,{\bf x})\;,\nonumber\\
\mbox{} \Lambda^i_0&=&0\;,\nonumber\\\mbox{}
\Lambda^i_j&=&\left[1+\frac{1}{2}h_{00}(t,{\bf x})\right]\delta_{ij}+
\frac{1}{2}h^{TT}_{ij}(t,{\bf x})\;.
\end{eqnarray}
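The matrix (\ref{lambda}) can be checked numerically against the orthonormality condition (\ref{ort}). The sketch below assumes the linearized metric components consistent with (\ref{lambda}) to first order, $g_{00}=-1+h_{00}$, $g_{0i}=h_{0i}$, $g_{ij}=(1+h_{00})\delta_{ij}+h^{TT}_{ij}$, and random small perturbations:

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 1e-6
h00 = eps * rng.normal()
h0 = eps * rng.normal(size=3)
A = eps * rng.normal(size=(3, 3))
hTT = A + A.T - (2.0 / 3.0) * np.trace(A) * np.eye(3)  # symmetric, traceless

# Metric g = eta + h in the linearized form assumed above.
eta = np.diag([-1.0, 1.0, 1.0, 1.0])
g = eta.copy()
g[0, 0] += h00
g[0, 1:] += h0
g[1:, 0] += h0
g[1:, 1:] += h00 * np.eye(3) + hTT

# Transformation matrix Lambda^mu_alpha from equation (lambda); L[mu, alpha].
L = np.zeros((4, 4))
L[0, 0] = 1 - 0.5 * h00
L[0, 1:] = -h0
L[1:, 1:] = (1 + 0.5 * h00) * np.eye(3) + 0.5 * hTT

resid = np.max(np.abs(L.T @ eta @ L - g))
print(resid)  # residual is O(h^2): condition (ort) holds to linear order
```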
Using the transformation (\ref{trans}) we obtain the relationship between the
observable vector $s^i$ and the coordinate direction $p^i$
\vspace{0.3 cm}
\begin{eqnarray}
\label{rls}
s^i&=&-\frac{\Lambda^i_0-\Lambda^i_j\; p^j}{\Lambda^0_0-\Lambda^0_j\; p^j}\;.
\end{eqnarray}
In the linear approximation this takes the form
\vspace{0.3 cm}
\begin{eqnarray}
\label{form}
s^i&=&
\left(1+h_{00}-h_{0j}p^j\right)p^i+\frac{1}{2}h^{TT}_{ij}p^j\;.
\end{eqnarray}
Remembering that $|{\bf s}|=1$,
we find the Euclidean norm of the
vector $p^i$ from the relationship
\vspace{0.3 cm}
\begin{eqnarray}
\label{norma}
|{\bf p}|&=&1-h_{00}+h_{0j}p^j-\frac{1}{2}h^{TT}_{ij}p^i p^j\;,
\end{eqnarray}
which brings equation (\ref{form}) to the form
\begin{eqnarray}
\label{bnm}
s^i&=&m^i+\frac{1}{2}P^{ij}m^q h^{TT}_{jq}(t,{\bf x})\;,
\end{eqnarray}
where the Euclidean unit vector $m^i=p^i/|{\bf p}|$.
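Equations (\ref{form}) and (\ref{norma}) can be cross-checked numerically: if $|{\bf p}|$ satisfies (\ref{norma}), then the vector (\ref{form}) has unit Euclidean norm to first order in the metric perturbations. A sketch with randomly chosen (hypothetical) perturbations:

```python
import numpy as np

rng = np.random.default_rng(1)
eps = 1e-6
h00 = eps * rng.normal()
h0 = eps * rng.normal(size=3)
A = eps * rng.normal(size=(3, 3))
hTT = A + A.T - (2.0 / 3.0) * np.trace(A) * np.eye(3)  # symmetric, traceless

u = rng.normal(size=3)
u /= np.linalg.norm(u)                    # direction of propagation

# Equation (norma), evaluated with p ~ u; the error made is second order in h.
pnorm = 1 - h00 + h0 @ u - 0.5 * (u @ hTT @ u)
p = pnorm * u

# Equation (form) for the observed direction.
s = (1 + h00 - h0 @ p) * p + 0.5 * (hTT @ p)

print(abs(np.linalg.norm(s) - 1.0))  # O(h^2): |s| = 1 to linear order
```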
Let us now denote
by $\alpha^i$ the dimensionless vector describing the total angle of
deflection of the light ray measured at the point of observation,
and calculated
with respect to
vector $k^i$ given at past null infinity. It is defined according
to the relationship \cite{32}
\vspace{0.3 cm}
\begin{eqnarray}
\alpha^i(\tau,{\bm{\xi}})&=&k^i [{\bf k}\cdot
\dot{\bm{\Xi}}(\tau,{\bm{\xi}})]-\dot{\Xi}^i(\tau,{\bm{\xi}})\;,
\end{eqnarray}
or
\begin{eqnarray}
\label{ang}
\alpha^i(\tau,{\bm{\xi}})&=&-\;P^i_j\;\dot{\Xi}^j(\tau,{\bm{\xi}})\;.
\end{eqnarray}
As a consequence of the definitions (\ref{coor}) and (\ref{ang}) we conclude
that
\begin{eqnarray}
\label{uio}
m^i&=&-k^i+\alpha^i(\tau,{\bm{\xi}})\;.
\end{eqnarray}
Accounting for expressions (\ref{bnm}), (\ref{uio}), and (\ref{expli}) we
obtain for the observed direction to the source of light
\begin{eqnarray}
\label{dop}
s^i(\tau,{\bm{\xi}})&=&K^i+\alpha^i(\tau,{\bm{\xi}})+\beta^i(\tau,{\bm{\xi}})-
\beta^i(\tau_0,{\bm{\xi}})+\gamma^i(\tau,{\bm{\xi}})\;,
\end{eqnarray}
where relativistic corrections $\beta^i$ are defined by equations
(\ref{corr})-(\ref{coper}) and the perturbation
\vspace{0.3 cm}
\begin{eqnarray}
\label{gamma}
\gamma^i(\tau,{\bm{\xi}})&=&-\frac{1}{2}P^{ij}k^q h^{TT}_{jq}(t,{\bf x})\;.
\end{eqnarray}
If two sources of light (quasars) are observed along the directions $s_1^i$ and
$s_2^i$, the measured angle $\psi$ between them
in the local inertial frame is:
\vspace{0.3 cm}
\begin{eqnarray}
\label{lkj}
\cos\psi&=&{\bf s}_1\cdot{\bf s}_2\;,
\end{eqnarray}
where the dot denotes the
usual Euclidean scalar product. It is worth emphasizing
that the observed direction to the source of light includes the relativistic
deflection of the light ray. This depends not only on quantities
at the point of observation but also on
$\beta^i(\tau_0,{\bm{\xi}})$, evaluated at the point of emission of light. This
remark reveals that according to relation (\ref{lkj}) a single
gravitational wave signal may cause
different angular displacements for
different sources of light located at different distances from the source
of gravitational waves.
Without going into further
details of the observational procedure we give an explicit
expression for the angle $\alpha^i$. We have
\vspace{0.3 cm}
\begin{eqnarray}
\label{brr}
\alpha^i(\tau,{\bm{\xi}})&=&\alpha_M^i(\tau,{\bm{\xi}})+
\alpha_S^i(\tau,{\bm{\xi}})+\alpha_Q^i(\tau,{\bm{\xi}})\;,\\\nonumber
\end{eqnarray}
where
\vspace{0.3 cm}
\begin{eqnarray}
\label{sdr}
\alpha_M^i(\tau,{\bm{\xi}})&=&
-\;2{\cal M}{\hat{\partial}}_{i}A(\tau,{\bm{\xi}})\;,\\\nonumber\\
\label{dfg}
\alpha_S^i(\tau,{\bm{\xi}})&=&-\;2{\cal
S}^pk_j\epsilon_{jpq}{\hat{\partial}}_{iq}A(\tau,{\bm{\xi}})+
2{\cal S}^p
\left(
P^{ij}\epsilon_{jpq}{\hat{\partial}}_q
+k_q\epsilon_{ipq}{\hat{\partial}}_{\tau}\right)
\biggl\{\frac{1}{r}\biggr\}\;,\\\nonumber\\
\label{dyw}
\alpha_Q^i(\tau,{\bm{\xi}})&=&-\;{\hat{\partial}}_{ipq}B_{pq}(\tau,{\bm{\xi}})-
P_{ij}\left(
2k_p{\hat{\partial}}_{jq}
+k_p k_q{\hat{\partial}}_{j\tau}+2k_p\delta_{jq}{\hat{\partial}}_{\tau\tau}+
2\delta_{jq}{\hat{\partial}}_{p\tau}\right)\times\\
\nonumber\\\nonumber\mbox{}&&
\times\biggl\{\frac{{\cal {I}}_{pq}(t-r)}{r}\biggr\}
+2P_{ij}\left({\hat{\partial}}_{q}
+2k_q{\hat{\partial}}_{\tau}
\right)
\biggl\{\frac{\dot{\cal {I}}_{jq}(t-r)}{r}\biggr\}+
\frac{1}{2}{\hat{\partial}}_{i\tau}\left[\nabla_p\nabla_q\biggl\{
\frac{^{(-2)}{\cal {I}}_{pq}(t-r)}{r}\biggr\}\right]\;.
\end{eqnarray}
The expression for the third spatial derivative of function $B_{pq}(\tau,{\bm{\xi}})$
has been given in equation (\ref{pzkab}). The other relevant
derivatives are:
\vspace{0.3 cm}
\begin{eqnarray}
\label{dotxi}
{\hat{\partial}}_j\biggl\{\frac{\dot{\cal I}_{ij}(t-r)}{r}\biggr\}&=&-\xi^j\left[
\frac{\ddot{\cal I}_{ij}(t-r)}{r^2}+\frac{\dot{\cal I}_{ij}(t-r)}{r^3}\right]\;,
\vspace{0.3 cm}\\\nonumber\\\nonumber\\
\label{dottau}
{\hat{\partial}}_{\tau}\biggl\{\frac{\dot{\cal I}_{ij}(t-r)}{r}\biggr\}&=&
-\frac{y}{r}\frac{\ddot{\cal I}_{ij}(t-r)}{r}-
\frac{\tau}{r}\frac{\dot{\cal I}_{ij}(t-r)}{r^2}\;,
\vspace{0.3 cm}\\\nonumber\\\nonumber\\
\label{dram}
{\hat{\partial}}_{iq}\biggl\{\frac{{\cal
I}_{pq}(t-r)}{r}\biggr\}&=&-P_{iq}\left[\frac{\dot{\cal I}_{pq}(t-r)}{r^2}+
\frac{{\cal I}_{pq}(t-r)}{r^3}\right]+\\\nonumber \\\nonumber&&\mbox{}
\xi_i \xi_q \left[
\frac{\ddot{\cal I}_{pq}(t-r)}{r^3}+\frac{3\dot{\cal I}_{pq}(t-r)}{r^4} +
\frac{3{\cal I}_{pq}(t-r)}{r^5}\right],
\vspace{0.3 cm}\\\nonumber\\\nonumber\\
\label{eris}
{\hat{\partial}}_{i\tau}\biggl\{\frac{{\cal
I}_{pq}(t-r)}{r}\biggr\}&=&
\frac{y}{r}
\left[\frac{\ddot{\cal I}_{pq}(t-r)}{r^2}+
\frac{\dot{\cal I}_{pq}(t-r)}{r^3}\right]\xi_i+
\\\nonumber\\\nonumber\mbox{}&&
\frac{\tau}{r}\left[2\frac{\dot{\cal I}_{pq}(t-r)}{r^3}+
3\frac{{\cal I}_{pq}(t-r)}{r^4}\right]\xi_i\;,\\\nonumber\\\nonumber\\
\label{lls}
{\hat{\partial}}_{\tau\tau}\biggl\{\frac{{\cal
I}_{pq}(t-r)}{r}\biggr\}&=&
\frac{y^2}{r^2}\frac{\ddot{\cal I}_{pq}(t-r)}{r}+
\left(\frac{2y\tau}{r^2}-1\right)\frac{\dot{\cal I}_{pq}(t-r)}{r^2}+
\left(\frac{3\tau^2}{r^2}-1\right)\frac{{\cal I}_{pq}(t-r)}{r^3}\;.
\\\nonumber
\end{eqnarray}
Straightforward but tedious calculation
of the last term in equation (\ref{dyw})
yields\vspace{0.5 cm}
\begin{eqnarray}
\label{tblo}
\hspace{-1 cm}{\hat{\partial}}_{i}\biggl\{\nabla_p\nabla_q
\frac{^{(-2)}{\cal I}_{pq}(t-r)}{r}\biggr\}&=&
2\left[{\cal I}_{pq}(t-r)+
3\frac{^{(-1)}{\cal{I}}_{pq}(t-r)}{r}+3\frac{^{(-2)}{\cal{I}}_{pq}(t-r)}{r^2}
\right]\frac{x^p_N P_{iq} }{r^3}
\\\nonumber\\\nonumber\mbox{}&&\hspace{-2 cm}
-\left[\dot{\cal I}_{pq}(t-r)+
6\frac{{\cal{I}}_{pq}(t-r)}{r}+
15\frac{^{(-1)}{\cal{I}}_{pq}(t-r)}{r^2}+
15\frac{^{(-2)}{\cal{I}}_{pq}(t-r)}{r^3}\right]
\frac{x^p_N x^q_N \xi^i}{r^4}\;.\\\nonumber
\end{eqnarray}
and\vspace{0.5 cm}
\begin{eqnarray}
\label{ted}
\hspace{-1 cm}{\hat{\partial}}_{i\tau}\biggl\{\nabla_p\nabla_q
\frac{^{(-2)}{\cal I}_{pq}(t-r)}{r}\biggr\}&=&
2\left[{\cal I}_{pq}(t-r)+
3\frac{^{(-1)}{\cal{I}}_{pq}(t-r)}{r}+3\frac{^{(-2)}{\cal{I}}_{pq}(t-r)}{r^2}
\right]\frac{k^p P_{iq} }{r^3}\\\nonumber\\\nonumber\mbox{}&&\hspace{-0.5 cm}
-6\frac{\tau}{r}\left[{\cal I}_{pq}(t-r)+4\frac{
^{(-1)}{\cal{I}}_{pq}(t-r)}{r}+5\frac{^{(-2)}{\cal{I}}_{pq}(t-r)}{r^2}\right]
\frac{P_{iq} x^p_N}{r^4}\\\nonumber\\\nonumber\mbox{}&&
-2\frac{y}{r}\left[\dot{\cal I}_{pq}(t-r)+
3\frac{{\cal{I}}_{pq}(t-r)}{r}+3\frac{^{(-1)}{\cal{I}}_{pq}(t-r)}{r^2}
\right]\frac{k^p P_{iq} }{r^3}\\\nonumber\\\nonumber\mbox{}&&\hspace{-2 cm}
-2\left[\dot{\cal I}_{pq}(t-r)+
6\frac{{\cal{I}}_{pq}(t-r)}{r}+
15\frac{^{(-1)}{\cal{I}}_{pq}(t-r)}{r^2}+
15\frac{^{(-2)}{\cal{I}}_{pq}(t-r)}{r^3}\right]
\frac{\xi^i k^p x^q_N }{r^4}\\\nonumber\\\nonumber\mbox{}&&\hspace{-3 cm}
+4\frac{\tau}{r}\left[\dot{\cal I}_{pq}(t-r)+
\frac{15}{2}\frac{{\cal{I}}_{pq}(t-r)}{r}+
\frac{45}{2}\frac{^{(-1)}{\cal{I}}_{pq}(t-r)}{r^2}+
\frac{45}{2}\frac{^{(-2)}{\cal{I}}_{pq}(t-r)}{r^3}\right]
\frac{\xi^i x^p_N x^q_N }{r^5}\\\nonumber\\\nonumber\mbox{}&&\hspace{-2 cm}
+\frac{y}{r}\left[\ddot{\cal I}_{pq}(t-r)+
6\frac{\dot{\cal{I}}_{pq}(t-r)}{r}+
15\frac{{\cal{I}}_{pq}(t-r)}{r^2}+
15\frac{^{(-1)}{\cal{I}}_{pq}(t-r)}{r^3}\right]\frac{\xi^i x^p_N x^q_N
}{r^4}\;.\\\nonumber
\end{eqnarray}
We note that the angular displacement in astrometric positions of sources of light
in the sky depends not only on
quantities that are instantaneous functions of
retarded time, but also on integrals over
time $^{(-1)}{\cal{I}}_{pq}(t-r)$ and
$^{(-2)}{\cal{I}}_{pq}(t-r)$, which describe the
whole past history of the source
of gravitational waves up to the moment of observation. Under usual
circumstances the influence of such integrals on the deflection of light is expected to
be small. However, this question deserves more detailed discussion and will be
addressed elsewhere.
\section{Discussion}
It is remarkable that among all the integrals
required for calculation of the
trajectory of the light ray, only
$B_{ij}(\tau, {\bm{\xi}})$ enters
the expressions
(\ref{quad1}), (\ref{dyw}) for time delay and deflection angle.
Furthermore, it is remarkable that
we need not know this integral explicitly, but only its
second and third
derivatives with respect to
the impact parameter.
These are given in equations (\ref{pzka})
and (\ref{pzkab}). With the knowledge of these derivatives, and
derivatives of
other functions given in the previous section, we have
complete
information about the functional structure of
the relativistic time delay and the angle of light deflection
produced by any localized gravitating system possessing a
time-dependent quadrupole moment ${\cal I}_{ij}(t)$.
This structure indicates that the
explicit time dependence of the quadrupole moment completely
determines the results of astrometric and timing observations.
We shall not consider
this problem in the present paper, leaving it for future exploration.
Our concern in this section is the simplification of the
general formalism
developed in the foregoing text. In order to do this
we consider
three limiting cases:
\begin{enumerate}
\item The impact parameter $d$
is much smaller than the distance from the localized source of
gravitational waves to both the observer, $r$, and
to the source of light, $r_0$.
The source of light is behind the source of gravitational waves
(see Figure \ref{smallimp1});
\item The impact parameter $d$
is much smaller than the distance from the localized source of
gravitational waves to the observer, $r$, and
to the source of light, $r_0$.
The source of light is on the same side of the
source of gravitational waves as the observer (see Figure \ref{smallimp2});
\item The distance $R$ from the source of light rays to the
observer is much
smaller than distances from the observer or from the
source of light to the
localized
source of gravitational waves. The impact parameter $d$ may be
comparable with the
distance from the deflector to observer or the source of light
(see Figure \ref{largeimp}).
\end{enumerate}
We will conventionally refer to cases 1 and 2 as those of small impact
parameter, characterized by $\tau_0<0$ and $\tau_0>0$, respectively.
Case 3 is that of large impact parameter, although
small values of the impact parameter
are also covered by its formalism, as will become clear in
section 7.3 below.
\subsection {Case 1. Small Impact Parameter ($\tau_0<0$)}
\subsubsection{Asymptotic expansions of independent variables}
We shall assume in this section that the condition $d\ll{\rm min}[r,r_0]$
holds. Let $L={\rm min}[r,r_0]$ and recall that
$\tau=\sqrt{r^2-d^2}$ and $\tau_0=-\sqrt{r_0^2-d^2}<0$ (see Figure
\ref{smallimp1}). This yields
\vspace{0.3 cm}
\begin{equation}
\label{gfr}
y=\sqrt{r^2-d^2}-r=-\frac{d^2}{2r}-\frac{d^4}{8r^3}+...,
\end{equation}
\vspace{0.3 cm}
and
\vspace{0.3 cm}
\begin{equation}
\label{pxl}
y_0=-\sqrt{r_0^2-d^2}-r_0=-2r_0+\frac{d^2}{2r_0}+\frac{d^4}{8r_0^3}+...\;,
\\\nonumber
\end{equation}
where dots denote terms of higher order, $r$ is the constant
distance from the deflector to observer, and $r_0$ is the constant distance
from the deflector to the point of emission of light. Using these expansions we
find:
\vspace{0.3 cm}
\begin{equation}
\label{tnx}
t=t^{\ast}+r-\frac{d^2}{2r}+...,\hspace{1.5 cm}
t_0=t^{\ast}-r_0+\frac{d^2}{2r_0}+...\;.
\end{equation}
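Expansions (\ref{gfr}), (\ref{pxl}), and (\ref{tnx}) are easily verified numerically; a minimal sketch with hypothetical values $d\ll r, r_0$, recalling that $\tau$ and $\tau_0$ measure coordinate time from the moment $t^{\ast}$ of closest approach:

```python
import numpy as np

# Hypothetical values with d << r, r0.
d, r, r0 = 1e-2, 5.0, 7.0
tau = np.sqrt(r**2 - d**2)
tau0 = -np.sqrt(r0**2 - d**2)

# Equations (gfr) and (pxl): y = tau - r, y0 = tau0 - r0.
err_y = abs((tau - r) - (-d**2 / (2 * r) - d**4 / (8 * r**3)))
err_y0 = abs((tau0 - r0) - (-2 * r0 + d**2 / (2 * r0) + d**4 / (8 * r0**3)))

# Equation (tnx): t = t* + tau and t0 = t* + tau0.
tstar = 0.0
err_t = abs((tstar + tau) - (tstar + r - d**2 / (2 * r)))
err_t0 = abs((tstar + tau0) - (tstar - r0 + d**2 / (2 * r0)))

print(err_y, err_y0, err_t, err_t0)  # all far below the retained terms
```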
These can be used for Taylor expansion of functions
about the time $t^\ast$,
the moment
of the closest approach of light ray to the deflector.
Specifically, if we assume convergence of this Taylor series
we find:
\vspace{0.3 cm}
\begin{equation}
\label{tfl}
{\cal I}_{ij}(t-r)={\cal I}_{ij}(t^{\ast})-\frac{d^2}{2r}
\dot{\cal I}_{ij}(t^{\ast})+...\;,
\end{equation}
\vspace{0.3 cm}
\begin{equation}
\label{tfz}
{\cal I}_{ij}(t_0-r_0)={\cal I}_{ij}(t^{\ast}-2r_0)+\frac{d^2}{2r_0}
\dot{\cal I}_{ij}(t^{\ast}-2r_0)+...\;,
\end{equation}
where dots again denote terms of higher order. Convergence
of the
Taylor series given above requires:
\begin{equation}\label{requir}
\frac{\omega d^2}{c\;r}\ll 1\;, \quad\quad\mbox{and}\quad\quad\frac{\omega d^2}
{c\;r_0}\ll 1\;,
\end{equation}
where $\omega$ is the highest frequency of gravitational waves emitted
by the deflector. If the source of light rays and observer are at
infinite distances from the deflector then the requirements (\ref{requir}) are
satisfied automatically, irrespective of the structure of the
Fourier spectrum of the
quadrupole moment of the deflector. In
practical situations such an assumption may not always be satisfied. For this
reason, it will be more natural to avoid the Taylor expansions of the
quadrupole moment with respect to retarded time.
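For orientation, condition (\ref{requir}) is comfortably satisfied in a typical configuration. A sketch with hypothetical values: a deflector radiating at $\omega\sim 2\times 10^{-3}$ rad s$^{-1}$, impact parameter $d\sim 10^{13}$ m, and distances $r\sim 10^{19}$ m, $r_0\sim 5\times 10^{18}$ m:

```python
c = 2.998e8             # m/s
omega = 2e-3            # rad/s, hypothetical gravitational-wave frequency
d = 1.0e13              # m, hypothetical impact parameter
r, r0 = 1.0e19, 5.0e18  # m, hypothetical distances to observer and light source

q1 = omega * d**2 / (c * r)
q2 = omega * d**2 / (c * r0)
print(q1, q2)  # both << 1, so (requir) holds for these values
```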
It is also worth noting that in the case of small impact parameter we have
\vspace{0.3 cm}
\begin{equation}
\label{gfrs}
\left(yr\right)^{-1}=-\frac{2}{d^2}+\frac{1}{2r^2}+\frac{d^2}{8r^4}+...,
\end{equation}
\vspace{0.3 cm}
and
\vspace{0.3 cm}
\begin{equation}
\label{pxls}
\left(y_0 r_0\right)^{-1}=-\frac{1}{2r_0^2}-\frac{d^2}{8r_0^4}+...\hspace{0.5
cm}.
\end{equation}
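The inverse expansions (\ref{gfrs}) and (\ref{pxls}) can be spot-checked in the same way (same hypothetical values):

```python
import numpy as np

# Same hypothetical values as before: d << r, r0.
d, r, r0 = 1e-2, 5.0, 7.0
y = np.sqrt(r**2 - d**2) - r
y0 = -np.sqrt(r0**2 - d**2) - r0

inv = 1.0 / (y * r)
inv_approx = -2 / d**2 + 1 / (2 * r**2) + d**2 / (8 * r**4)   # eq. (gfrs)

inv0 = 1.0 / (y0 * r0)
inv0_approx = -1 / (2 * r0**2) - d**2 / (8 * r0**4)           # eq. (pxls)

print(inv - inv_approx, inv0 - inv0_approx)  # tiny next to the leading terms
```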
\vspace{0.3 cm}
The foregoing expansions then yield\vspace{0.3 cm}
\begin{eqnarray}
\label{qqq}
{\hat{\partial}}_{j}B_{pq}(\tau, {\bm {\xi}})&=&\left(
-2{\hat{\partial}}_{j}\ln d+\frac{\xi^j}{2r^2}\right){\cal I}_{pq}(t-r)+...\;,\\
\nonumber\\\label{qq1}
{\hat{\partial}}_{j}B_{pq}(\tau_0, {\bm {\xi}})&=&-\frac{\xi^j}{2r_0^2}
{\cal I}_{pq}(t_0-r_0)+...\;,\\\nonumber\\
\label{acu}
{\hat{\partial}}_{jk}B_{pq}(\tau, {\bm {\xi}})&=&
-2{\hat{\partial}}_{jk}\ln d\;{\cal I}_{pq}(t-r)+\frac{2}{r}n_j n_k
\dot{\cal I}_{pq}(t-r)+...\;,\\\nonumber\\
\label{acp}
{\hat{\partial}}_{jk}B_{pq}(\tau_0, {\bm {\xi}})&=&-\frac{1}{2r_0^2}P_{jk}
{\cal I}_{pq}(t_0-r_0)+...\;,\\\nonumber\\\label{asm}
{\hat{\partial}}_{ijk}B_{pq}(\tau, {\bm {\xi}})&=&
-2\left[{\cal I}_{pq}(t-r)+\frac{d^2}{2r}\dot{\cal I}_{pq}(t-r)\right]
{\hat{\partial}}_{ijk}\ln d+...\;,\\\nonumber\\\label{asop}
{\hat{\partial}}_{ijk}B_{pq}(\tau_0, {\bm
{\xi}})&=&O\left(\frac{1}{r_0^3}\right)\;,\\\nonumber
\end{eqnarray}
and
\begin{eqnarray}
\label{derd}
{\hat{\partial}}_{ijk}D_{pq}(\tau, {\bm {\xi}})&=&
-2r{\cal I}_{pq}(t-r){\hat{\partial}}_{ijk}\ln d-\frac{4n^i n^j n^k}{d}
{\dot{\cal I}}_{pq}(t-r)+...\;,\\\nonumber\\
\label{derivd}
{\hat{\partial}}_{ijk}D_{pq}(\tau_0, {\bm {\xi}})&=&
O\left(\frac{1}{r_0^3}\right)\;.\\\nonumber
\end{eqnarray}
In addition we have
\begin{eqnarray}
\label{uit}
\delta_Q(\tau,{\bm{\xi}}) &=&
\frac{1}{r}k^pk^q\dot{\cal{I}}_{pq}(t-r)+...\;,\\
\nonumber\\\label{mas}
\delta_Q(\tau_0,{\bm{\xi}}) &=&
O\left(\frac{1}{r_0^2}\right)\;.
\end{eqnarray}
We note that the leading terms of the expansions decay much faster
(at least as $1/r_0^2$) at
the point of emission of light than those at the point of
observation. This indicates that the main contribution to the effects of time
delay and deflection of light arises along the path of the
light ray from the
localized
source of gravitational waves to the observer. We discuss this question in more
detail in the following section.
The asymptotic expansions of integrals (\ref{31}) - (\ref{32}) describing
propagation of light rays in the static part of gravitational field of the
deflector are:
\vspace{0.3 cm}
\begin{equation}
\label{alt}
A(\tau,{\bm{\xi}})=-2\ln d+\ln 2r-\frac{d^2}{4r^2}+
...\hspace{0.5 cm},
\end{equation}
\vspace{0.3 cm}
\begin{equation}
\label{altov}
A(\tau_0,{\bm{\xi}})=-\ln 2r_0+\frac{d^2}{4r_0^2}+...\hspace{0.5 cm},
\end{equation}
\vspace{0.3 cm}
\begin{eqnarray}
\label{blat}
B(\tau,{\bm{\xi}})&=&-r-2r \ln d+r \ln 2r-\frac{d^2}{2r}\left[\frac{1}{2}-
\ln\left(\frac{d^2}{2r}
\right)\right]+...\;,\\\nonumber\\\nonumber\\
\label{gnus}
B(\tau_0,{\bm{\xi}})&=&-r_0+r_0 \ln 2r_0-\frac{d^2}{2r_0}
\left(\frac{1}{2}+\ln 2 r_0\right)+...\;.\\\nonumber
\end{eqnarray}
These expansions are used for calculation of asymptotic expressions
for time delay and the angle of deflection of light rays.
\subsubsection{Asymptotic expressions for time delay and the angle of
light deflection}
The static part of time delay and deflection angle are:
\vspace{0.3 cm}
\begin{eqnarray}
\label{mass}
\Delta_M&=&-4{\cal M} \ln d+2{\cal M} \ln (4r r_0)+...\hspace{0.5
cm},\\\nonumber\\
\label{spin}
\Delta_S&=&-4\epsilon_{jip}k^j{\cal S}^p {\hat{\partial}}_i
\left[\ln d-\frac{1}{2}\ln (4r r_0)\right]+...\hspace{0.5 cm},\\\nonumber\\
\label{ma}
\alpha_M^i(\tau,{\bm{\xi}})&=&
4{\cal M}{\hat{\partial}}_i\left[ \ln d-\frac{1}{2}\ln (4r r_0)\right]
+...\hspace{0.5 cm},\\\nonumber\\
\label{sp}
\alpha_S^i(\tau,{\bm{\xi}})
&=&4\epsilon_{jpq}k^p{\cal S}^q {\hat{\partial}}_{ij}
\left[\ln d-\frac{1}{2}\ln (4r r_0)\right]+...\hspace{0.5 cm},\\\nonumber\\
\label{mam}
\beta_M^i(\tau,{\bm{\xi}})&=&-\frac{r}{R}\alpha_M^i(\tau,{\bm{\xi}})+...\;,
\\\nonumber\\
\label{spm}
\beta_S^i(\tau,{\bm{\xi}})&=&-\frac{r}{R}\alpha_S^i(\tau,{\bm{\xi}})-
\frac{4}{R}P^{ij}{\cal S}^k\epsilon_{jkq}{\hat{\partial}}_{q}\ln d+...
\;,
\\\nonumber
\end{eqnarray}
where we have neglected the angle $\gamma^i(\tau,{\bm{\xi}})$
because it is small
(recall that $\gamma^i(\tau,{\bm{\xi}})\simeq P^{ij}k^q
h^{TT}_{jq}$).
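As a consistency check, the leading terms of (\ref{mass}) can be compared numerically with the exact expression (\ref{mass1}); a sketch with hypothetical values ${\cal M}=1$ and $d\ll r, r_0$:

```python
import numpy as np

M = 1.0
d, r, r0 = 1e-3, 10.0, 20.0
tau = np.sqrt(r**2 - d**2)
tau0 = -np.sqrt(r0**2 - d**2)

exact = 2 * M * np.log((r + tau) / (r0 + tau0))       # eq. (mass1)
asympt = -4 * M * np.log(d) + 2 * M * np.log(4 * r * r0)  # leading terms of (mass)
print(exact, asympt)  # agree to O(d^2/r^2)
```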
Asymptotic expressions for the time delay and angle of deflection
caused by the quadrupole moment are:
\vspace{0.3 cm}
\begin{eqnarray}
\label{quad}
\Delta_Q&=&-2{\cal I}_{ij}(t-r){\hat{\partial}}_{ij}\ln d+
\frac{1}{r}\left(2n_i n_j+k_i k_j\right)\dot{\cal I}_{ij}(t-r)
+...\;,\\\nonumber
\end{eqnarray}
and
\vspace{0.3 cm}
\begin{eqnarray}
\label{angle}
\alpha_Q^i(\tau,{\bm{\xi}})&=&
2\left[{\cal I}_{jk}(t-r)+\frac{d^2}{2r}\dot{\cal I}_{jk}(t-r)\right]
{\hat{\partial}}_{ijk}\ln d
+...\hspace{0.5 cm},\\\nonumber\\
\label{betanagle}
\beta_Q^i(\tau,{\bm{\xi}})&=&-\frac{r}{R}\alpha_Q^i(\tau,{\bm{\xi}})
-\frac{4}{R}\left[k^j{\cal I}_{jk}(t-r){\hat{\partial}}_{ik}\ln d+\frac{1}{2}
\xi^i {\dot{\cal I}}_{jk}(t-r){\hat{\partial}}_{jk}\ln d\right]+...\;,
\end{eqnarray}
where $n^i=\xi^i/d$ is the unit vector directed along the impact parameter,
$R=|{\bf x}-{\bf x}_0|$, and dots denote
terms of higher order \cite{53}.
The angle $\beta^i(\tau_0,{\bm{\xi}})$ at the point of emission of light is
negligibly small and, for this reason, its exact expression has not been shown.
Our calculations show that the time dependent part of the time delay and
light deflection by the quadrupole moment of a
localized source of gravitational field fall off in the first
approximation
as the \underline{\it inverse square} and \underline{\it inverse cube} of
the impact parameter $d$ respectively. For this reason there is no
magnification of the gravitational wave signal in astrometric
or pulsar timing
observations as some authors have suggested \cite{19}--\cite{21}.
In particular,
terms proportional to
$1/d$, or even to $1/d^2$, appear only in higher-order terms
of the expansion (\ref{angle}) and are always multiplied by
the factor $1/r$ to some power.
The first term of formula (\ref{quad}) was first derived by Sazhin
\cite{7} for the special case of
a binary system with a specific orientation of its
orbital plane.
Our derivation of formula (\ref{angle})
improves and gives independent confirmation
of the result established previously by Damour \& Esposito-Far\`{e}se \cite{23}
using another mathematical technique based on application of
Fourier transform and pure harmonic coordinates.
For completeness we have repeated the calculations of
Damour \& Esposito-Far\`{e}se \cite{23} for
the effect of deflection of light rays by
localized sources of gravitational waves in ADM rather than
harmonic coordinates (see Appendix A).
The result
coincides completely with that of Damour \& Esposito-Far\`{e}se \cite{23}
and clearly demonstrates the gauge invariance of the result.
However, our technique is more general and powerful.
Our formalism is valid
for any relative position of observer,
source of light, and source of gravitational waves,
and with finite or infinite separations.
The method developed by Damour \& Esposito-Far\`{e}se \cite{23} is
valid only for infinite separations and for
small values of impact parameter.
In particular, we note that while
Damour \& Esposito-Far\`{e}se \cite{23} find that the deflection depends on the
time $t^{\ast}$
of the closest approach of light to
the deflector,
our calculation shows that it depends on the retarded
time $t-r$. This difference is insignificant for extremely large separation of
the
light source and observer from the deflector,
and small impact parameter, but it
can be important in the cases of finite distances
or large impact parameter.
It is important to realize that in the case of a small impact parameter,
the basic time-dependent contribution to the bending of light and time
delay by the gravitational field of a localized source of gravitational waves
comes from the static part of the near-zone gravitational field of
the source taken at the retarded time (cf. formulae (50)-(53)
from \cite{24}).
In this respect it is worth emphasizing that the formula for
the bending of light given in paper \cite{23} as well as in Appendix A
is valid under two assumptions: 1) the impact parameter $d$ is small
compared with the distance to the observer $r$, 2) the velocity of matter
inside the source of gravitational radiation is much smaller than the speed of
light (the slow-motion approximation).
The first assumption is rather trivial, since the impact parameter $d$ is the
only finite distance when the source of light and observer are at infinity.
The second assumption appears because paper \cite{23} uses the
Taylor
expansion of the Fourier image of the tensor of energy-momentum of matter with
respect to wave vector ${\bf k}$ (see equations (3.3) and (3.4) of paper
\cite{23}). This expansion is
mathematically equivalent to the use of a slow-motion approximation \cite{53a}
which, in particular, restricts the nature of the source of gravitational
waves so that its Fourier spectrum is not allowed to include too high
frequencies.
In contrast, the general formalism given in the present paper produces results
(\ref{quad}) and (\ref{angle}) applicable to arbitrary sources of
gravitational waves, including gravitational radiation bursts with internal
velocity of matter comparable to the speed of light \cite{53b}.
Moreover, we do not assume positions of observer and the source of light to be
at infinity.
If we introduce the notion of the transverse-traceless (TT) and longitudinal
(L) tensors \cite{24}, \cite{33} with respect to the direction of propagation of light rays
\begin{eqnarray}\label{TTT}
{\cal I}_{ij}^{TT}&=&{\cal I}_{ij}+
\frac{1}{2}\left(\delta_{ij}+k_i k_j\right)k_p k_q\; {\cal
I}_{pq}-\left(\delta_{ip}k_j k_q+\delta_{jp}k_i k_q\right)\;{\cal
I}_{pq}\;,\\\nonumber\\\label{long}
{\cal I}_{ij}^{L}&=&k_i k_p{\cal I}_{jp}+k_j k_p{\cal I}_{ip}-k_i k_j\left(k_p
k_q {\cal I}_{pq}\right)\;,\\\nonumber
\end{eqnarray}
the expressions (\ref{quad})-(\ref{angle}) are reduced to the form
\begin{eqnarray}
\label{quadTT}
\Delta_Q&=&-2{\cal I}_{ij}^{TT}(t-r)\;{\hat{\partial}}_{ij}\ln d+
\frac{2}{r}n^i n^j\;\dot{\cal I}_{ij}^{TT}(t-r)
+...\;,\\\nonumber\\ \label{angleTT}
\alpha_Q^i(\tau,{\bm{\xi}})&=&
2\left[{\cal I}_{jk}^{TT}(t-r)+\frac{d^2}{2r}\;\dot{\cal I}_{jk}^{TT}(t-r)\right]
{\hat{\partial}}_{ijk}\ln d
+...\hspace{0.5 cm},\\\nonumber\\
\label{b}
\beta_Q^i(\tau,{\bm{\xi}})&=&-\frac{r}{R}\alpha_Q^i(\tau,{\bm{\xi}})
-\frac{4}{R}\left[k^j{\cal I}_{jk}^{L}(t-r){\hat{\partial}}_{ik}\ln
d+\frac{1}{2}\;
\xi^i {\dot{\cal I}}_{jk}^{TT}(t-r){\hat{\partial}}_{jk}\ln d\right]+...\;,
\\\nonumber
\end{eqnarray}
which reveals explicitly that only the transverse-traceless part of the
quadrupole
moment of the localized source of gravitational waves contributes
to the leading terms. However, terms of higher
order can depend on the longitudinal
part of the quadrupole moment as
well.
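Definitions (\ref{TTT}) and (\ref{long}) are easily verified numerically: for a symmetric traceless ${\cal I}_{ij}$ and a unit vector $k^i$, the tensor ${\cal I}^{TT}_{ij}$ is transverse to $k^i$ and traceless. A sketch with random (hypothetical) values:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(3, 3))
I = A + A.T
I -= np.trace(I) / 3 * np.eye(3)          # symmetric, traceless quadrupole
k = rng.normal(size=3)
k /= np.linalg.norm(k)                    # unit direction of light propagation

kIk = k @ I @ k
Ik = I @ k
ITT = (I + 0.5 * (np.eye(3) + np.outer(k, k)) * kIk
       - np.outer(Ik, k) - np.outer(k, Ik))                    # eq. (TTT)
IL = np.outer(k, Ik) + np.outer(Ik, k) - np.outer(k, k) * kIk  # eq. (long)

print(np.max(np.abs(k @ ITT)), abs(np.trace(ITT)))  # both vanish numerically
```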
It is interesting to see that if we apply the expansions (\ref{tfl})-(\ref{tfz}),
use the approximation of a
gravitational lens, and omit all terms depending on
time derivatives of the quadrupole moment,
the expressions for the time delay and the angle of light deflection can be
reduced to the formulae \cite{54}
\begin{eqnarray}
\label{timed}
t-t_0&=&|{\bf x}-{\bf x}_0|-
4\psi+2{\cal M}\ln(4r r_0)\;,\hspace{2 cm}\alpha_i=4{\hat{\partial}}_i\psi\;,\\\nonumber
\end{eqnarray}
where $\psi$ is the gravitational lens potential \cite{55}
having the form
\vspace{0.3 cm}
\begin{eqnarray}
\label{damour}
\psi&=&\left[{\cal M}+\epsilon_{jpq} k^p{\cal S}^q{\hat{\partial}}_j+
\frac{1}{2}\;{\cal I}_{pq}^{TT}(t^{\ast})\;{\hat{\partial}}_{pq}
\right]\ln d\;.
\end{eqnarray}
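The gradient property $\alpha_i=4{\hat{\partial}}_i\psi$ in (\ref{timed}) is easy to confirm numerically for the mass term of (\ref{damour}), interpreting ${\hat{\partial}}_i$ as the derivative with respect to the impact-parameter vector ${\bm{\xi}}$ (the values below are hypothetical):

```python
import numpy as np

M = 1.0
xi = np.array([0.3, -0.4])                 # impact-parameter vector, hypothetical
d = np.linalg.norm(xi)

def psi_mass(v):
    return M * np.log(np.linalg.norm(v))   # mass term of the lens potential

# Numerical gradient of psi with respect to the impact parameter.
h = 1e-7
grad = np.array([(psi_mass(xi + h * e) - psi_mass(xi - h * e)) / (2 * h)
                 for e in np.eye(2)])
alpha = 4 * grad
print(alpha, 4 * M * xi / d**2)  # alpha_i = 4 M xi_i / d^2, as expected
```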
Scrutiny of the multipole structure
of $\psi$ in cosmological gravitational lenses
may reveal the
presence of dark matter in the lens and identify the position of its center of
mass, its velocity, and its density distribution.
Expression (\ref{damour}) includes explicit dependence on mass, spin, and
quadrupole moment of the deflector and generalizes that given by Damour \&
Esposito-Far\`{e}se \cite{23} by
accounting for the spin multipole. A similar result
for the gravitational lens
potential was obtained independently by Kopeikin \cite{24} in the case of
a
stationary gravitational field for the deflector.
The fact that the deflection angle can be represented as a gradient of the
gravitational lens potential $\psi$ explicitly indicates that
the so-called frame-dragging effect in gravitational
lenses \cite{56} can give
a noticeable contribution to
the deflection angle.
Frame-dragging also produces a
small displacement of the image of the background
source from the plane formed by the two vectors
directed from the observer
toward the image of the light source and toward the
gravitational lens. This torsional displacement of the image is
produced only by the component of spin of the deflector directed along the light
ray (see the second term in equation (\ref{spm}). The overall effect of the torsion is of
order $d/r$ smaller than the main terms in the expression
(\ref{damour}). These remarks
dispel a seemingly common opinion that rotation of the
deflector prevents representation of the deflection angle as a gradient of a
gravitational lens potential.
Similar conclusions can be derived from
\cite{24} and \cite{32}.
Ib\'a\~nez \& Martin \cite{57} and Ib\'a\~nez \cite{58} give a formula
for effects of frame-dragging equivalent to the spin-dependent term
in (\ref{damour}), although they do not calculate all necessary
integrals or estimate residual terms.
Taking into account formula (\ref{dop}) and expressions for $\alpha^i$,
$\beta^i$, and $\gamma$ we obtain the
vector equation for a gravitational lens
\begin{eqnarray}
\label{lens}
s^i
&=&K^i+\frac{r_0}{R}\;\alpha^i\;,
\\\nonumber
\end{eqnarray}
where $\alpha^i$ is given by relationships (\ref{timed}), (\ref{damour})
and we have taken into account that in the case under consideration $R\simeq
r+r_0$. One recognizes that when distances are finite the deflection angle
with respect to vector $K^i$ is not simply $\alpha^i$ but the product of
$r_0/R$ and $\alpha^i$. In the limit when $K^i\rightarrow k^i$, which is
equivalent to $\beta^i\rightarrow 0$, or $r={\rm const.}$, $r_0\rightarrow
\infty$, the observed
angle of deflection approaches the total angle of deflection
$\alpha^i$, as it must in this limiting case.
\subsection {Case 2. Small Impact Parameter ($\tau_0>0$)}
\subsubsection{Asymptotic expansions of independent variables}
We shall again assume in this section that the condition
$d\ll{\rm min}[r,r_0]$ holds and that
$\tau=\sqrt{r^2-d^2}$ and $\tau_0=\sqrt{r_0^2-d^2}>0$ (see Figure
\ref{smallimp2}). This yields
\vspace{0.3 cm}
\begin{equation}
\label{gfr1}
y=\sqrt{r^2-d^2}-r=-\frac{d^2}{2r}-\frac{d^4}{8r^3}+...,
\end{equation}
\vspace{0.3 cm}
and
\vspace{0.3 cm}
\begin{equation}
\label{pxl1}
y_0=\sqrt{r_0^2-d^2}-r_0=-\frac{d^2}{2r_0}-\frac{d^4}{8r^3_0}+...\;,
\\\nonumber
\end{equation}
where dots denote terms of higher order, $r$ is the constant
distance from the deflector to observer, and $r_0$ is the constant distance
from the deflector to the point of emission of light. Using these expansions we
obtain the following decompositions
\vspace{0.3 cm}
\begin{equation}
\label{tnx1}
t=t^{\ast}+r-\frac{d^2}{2r}+...,\hspace{1.5 cm}
t_0=t^{\ast}+r_0-\frac{d^2}{2r_0}+...\;.
\end{equation}
These can be used for
Taylor expansion of functions that depend on
retarded time about the time $t^\ast$.
In this case $t^{\ast}$ is
the moment
of closest approach of the light ray
trajectory extrapolated backward
to the deflector (see Figure \ref{smallimp2}).
If we assume convergence of this Taylor series
we find:
\vspace{0.3 cm}
\begin{equation}
\label{tfl1}
{\cal I}_{ij}(t-r)={\cal I}_{ij}(t^{\ast})-\frac{d^2}{2r}
\dot{\cal I}_{ij}(t^{\ast})+...\;,
\end{equation}
\vspace{0.3 cm}
\begin{equation}
\label{tfz1}
{\cal I}_{ij}(t_0-r_0)={\cal I}_{ij}(t^{\ast})-\frac{d^2}{2r_0}
\dot{\cal I}_{ij}(t^{\ast})+...\;,
\end{equation}
where dots again denote terms of higher order.
We also have
\vspace{0.3 cm}
\begin{equation}
\label{gfrs1}
\left(yr\right)^{-1}=-\frac{2}{d^2}+\frac{1}{2r^2}+\frac{d^2}{8r^4}+...,
\end{equation}
\vspace{0.3 cm}
and
\vspace{0.3 cm}
\begin{equation}
\label{pxls1}
\left(y_0 r_0\right)^{-1}=-\frac{2}{d^2}+\frac{1}{2r_0^2}+\frac{d^2}{8r_0^4}+...\hspace{0.5
cm}.
\end{equation}
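These expansions are straightforward to check symbolically. The following sketch (our own illustration, not part of the original text) verifies (\ref{gfr1}) and (\ref{gfrs1}) with sympy:

```python
import sympy as sp

d, r = sp.symbols('d r', positive=True)

# y = sqrt(r^2 - d^2) - r, expanded for small impact parameter d, eq. (gfr1)
y = sp.sqrt(r**2 - d**2) - r
y_series = sp.series(y, d, 0, 6).removeO()
assert sp.simplify(y_series + d**2/(2*r) + d**4/(8*r**3)) == 0

# (y r)^{-1}, a Laurent expansion for small d, eq. (gfrs1)
inv_yr = sp.series(1/(y*r), d, 0, 4).removeO()
assert sp.simplify(inv_yr + 2/d**2 - 1/(2*r**2) - d**2/(8*r**4)) == 0
```

The expansions at the point of emission, (\ref{pxl1}) and (\ref{pxls1}), follow by the replacement $r\to r_0$.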
\vspace{0.3 cm}
The foregoing expansions yield\vspace{0.3 cm}
\begin{eqnarray}
\label{qqq1}
{\hat{\partial}}_{j}B_{pq}(\tau, {\bm {\xi}})&=&\left(
-2{\hat{\partial}}_{j}\ln d+\frac{\xi^j}{2r^2}\right){\cal I}_{pq}(t-r)+...\;,\\
\nonumber\\\label{qq11}
{\hat{\partial}}_{j}B_{pq}(\tau_0, {\bm {\xi}})&=&\left(
-2{\hat{\partial}}_{j}\ln d+\frac{\xi^j}{2r_0^2}\right){\cal I}_{pq}(t_0-r_0)
+...\;,\\\nonumber\\
\label{acu1}
{\hat{\partial}}_{jk}B_{jk}(\tau, {\bm {\xi}})&=&
-2{\hat{\partial}}_{jk}\ln d\;{\cal I}_{jk}(t-r)+\frac{2}{r}n_j n_k
\dot{\cal I}_{jk}(t-r)+...\;,\\\nonumber\\
\label{acp1}
{\hat{\partial}}_{jk}B_{jk}(\tau_0, {\bm {\xi}})&=&
-2{\hat{\partial}}_{jk}\ln d\;{\cal I}_{jk}(t_0-r_0)+\frac{2}{r_0}n_j n_k
\dot{\cal I}_{jk}(t_0-r_0)+...\;,\\\nonumber\\
\label{asm1}
{\hat{\partial}}_{ijk}B_{jk}(\tau, {\bm {\xi}})&=&
-2\left[{\cal I}_{jk}(t-r)+\frac{d^2}{2r}\dot{\cal I}_{jk}(t-r)\right]
{\hat{\partial}}_{ijk}\ln d+...\;,\\\nonumber\\
\label{asop1}
{\hat{\partial}}_{ijk}B_{jk}(\tau_0, {\bm
{\xi}})&=&
-2\left[{\cal I}_{jk}(t_0-r_0)+\frac{d^2}{2r_0}\dot{\cal I}_{jk}(t_0-r_0)\right]
{\hat{\partial}}_{ijk}\ln d+...
\;.\\\nonumber
\end{eqnarray}
and
\begin{eqnarray}
\label{derd1}
{\hat{\partial}}_{ijk}D_{pq}(\tau, {\bm {\xi}})&=&
-2r{\cal I}_{pq}(t-r){\hat{\partial}}_{ijk}\ln d-\frac{4n^i n^j n^k}{d}
{\dot{\cal I}}_{pq}(t-r)+...\;,\\\nonumber\\
\label{derivd1}
{\hat{\partial}}_{ijk}D_{pq}(\tau_0, {\bm {\xi}})&=&
-2r_0{\cal I}_{pq}(t_0-r_0){\hat{\partial}}_{ijk}\ln d-\frac{4n^i n^j n^k}{d}
{\dot{\cal I}}_{pq}(t_0-r_0)+...\;.\\\nonumber
\end{eqnarray}
In addition we have
\begin{eqnarray}
\label{uit1}
\delta_Q(\tau,{\bm{\xi}}) &=&
\frac{1}{r}k^pk^q\dot{\cal{I}}_{pq}(t-r)+...\;,\\
\nonumber\\
\label{mas1}
\delta_Q(\tau_0,{\bm{\xi}}) &=&
\frac{1}{r_0}k^pk^q\dot{\cal{I}}_{pq}(t_0-r_0)+...\;.
\end{eqnarray}
We note that the leading terms of the expansions now have the same dependence
on the distance of the point of emission of light
and of the point of
observation from the source of gravitational waves.
If the source of light is closer to the source of gravitational waves
than the observer,
the term evaluated at the point of emission makes the largest contribution to the effects of time
delay and deflection of light.
The asymptotic expansions of integrals (\ref{31}) - (\ref{32}) describing
propagation of light rays in the static part of the gravitational field of the
deflector are:
\vspace{0.3 cm}
\begin{equation}
\label{alt1}
A(\tau,{\bm{\xi}})=-2\ln d+\ln (2r)-\frac{d^2}{4r^2}+
...\hspace{0.5 cm},
\end{equation}
\vspace{0.3 cm}
\begin{equation}
\label{altov1}
A(\tau_0,{\bm{\xi}})=-2\ln d+\ln (2r_0)-\frac{d^2}{4r_0^2}+...\hspace{0.5 cm},
\end{equation}
\vspace{0.3 cm}
\begin{eqnarray}
\label{blat1}
B(\tau,{\bm{\xi}})&=&-r-2r \ln d+r \ln (2r)-\frac{d^2}{2r}\left[\frac{1}{2}-
\ln\left(\frac{d^2}{2r}
\right)\right]+...\;,\\\nonumber\\\nonumber\\
\label{gnus1}
B(\tau_0,{\bm{\xi}})&=&-r_0-2r_0 \ln d+r_0 \ln (2r_0)-\frac{d^2}{2r_0}
\left[\frac{1}{2}-
\ln\left(\frac{d^2}{2r_0}\right)\right]+...\;.\\\nonumber
\end{eqnarray}
These expansions are used for calculation of asymptotic expressions
for time delay and the angle of deflection of light rays.
\subsubsection{Asymptotic expressions for time delay and the angle of
light deflection}
The static part of time delay and deflection angle are given by:
\vspace{0.3 cm}
\begin{eqnarray}
\label{mass11}
\Delta_M=2{\cal M} \left[\ln \left(\frac{r}{r_0}\right)+
\frac{d^2}{4}\left(\frac{1}{r_0^2}-\frac{1}{r^2}\right)\right]
+...\hspace{0.5 cm},
\end{eqnarray}
\vspace{0.3 cm}
\begin{eqnarray}
\label{spin11}
\Delta_S&=&\epsilon_{ijp}k^j{\cal S}^p\xi^i\left(
\frac{1}{r_0^2}-\frac{1}{r^2}\right) +...\;.\\\nonumber
\end{eqnarray}
Expressions for $\alpha_M^i$, $\alpha_S^i$, and $\alpha_Q^i$ will be the same
as in equations (\ref{ma}), (\ref{sp}), and (\ref{angle}) because they are taken
at the point of observation only. Expressions for $\beta_M^i$, $\beta_S^i$, and
$\beta_Q^i$ are given at the point of observation by equations
(\ref{mam}), (\ref{spm}), and (\ref{betanagle}).
Expressions for $\beta_M^i$, $\beta_S^i$, and
$\beta_Q^i$ at the point of emission of light are given by the same equations
(\ref{mam}), (\ref{spm}), and (\ref{betanagle}) after
substituting $r_0$ for $r$. The relativistic
perturbation $\gamma^i$ is calculated
in
equation (\ref{gamma}).
The asymptotic expression for the time delay
caused by the quadrupole moment is:
\vspace{0.3 cm}
\begin{eqnarray}
\label{quad11}
\Delta_Q&=&-2\left[{\cal I}_{ij}(t-r)-{\cal I}_{ij}(t_0-r_0)\right]
{\hat{\partial}}_{ij}\ln d\\\nonumber\\\mbox{}&&
+
\left(2n_i n_j+k_i k_j\right)\left[\frac{\dot{\cal I}_{ij}(t-r)}{r}-
\frac{\dot{\cal I}_{ij}(t_0-r_0)}{r_0}\right]+...\;.
\end{eqnarray}
One might think that the effect of retardation is again inversely
proportional
to the
square of impact parameter $d$. However, this is actually true
only for sources of gravitational waves with rapidly varying quadrupole moment.
If motion of matter inside the localized source of gravitational waves is slow, then
conditions (\ref{requir}) apply. In this
case, the real amplitude of the effect
becomes extremely small, being
proportional to $1/r^2$ and $1/r^2_0$.
The asymptotic
expression for the observed direction $s^i$ to the source of light is
derived from the basic formula (\ref{dop}) and is:
\begin{eqnarray}
\label{tauzer}
s^i&=&K^i-\frac{2r_0}{R}\left[{\cal I}_{jk}(t-r)-
{\cal I}_{jk}(t_0-r_0)\right]{\hat{\partial}}_{ijk}\ln d\\\nonumber\mbox{}&&
-\frac{4k^j}{R}\left[{\cal I}_{jk}(t-r)-{\cal I}_{jk}(t_0-r_0)\right]
{\hat{\partial}}_{ik}\ln d\\\nonumber\mbox{}&&
-\frac{2\xi^i}{R}\left[\dot{\cal I}_{jk}(t-r)-
\dot{\cal I}_{jk}(t_0-r_0)\right]{\hat{\partial}}_{jk}\ln
d+\gamma^i(\tau,{\bm{\xi}})+...\;,\\\nonumber
\end{eqnarray}
where we have accounted for the approximate equality
$R\simeq r-r_0$ valid in the case of $\tau_0>0$. One can see that the deflection
angle in this expression is small. Moreover, if
we again assume that motion of the matter is slow,
then the observed deflection is
even smaller, being proportional to $1/(rR)$ and $1/(r_0R)$.
\subsection{Case 3. Large Impact Parameter}
\subsubsection{Asymptotic expansions of independent variables}
In this limiting case we assume that the distance $R$
between observer and source
of light is much smaller than $r$ and $r_0$, their respective distances from
the deflector (see Figure
\ref{largeimp}). Then we have
\begin{eqnarray}\label{larg}
r_0^2&=&r^2-2r R
\cos\theta+R^2=r^2\left(1-\frac{2R}{r}\cos\theta+\frac{R^2}{r^2}\right)\;,
\end{eqnarray}
which leads to the expansions\vspace{0.3 cm}
\begin{eqnarray}
\label{expan}
r_0&=&r-R\cos\theta+...\;,\\\nonumber\\\label{q1}
\frac{1}{r_0}&=&\frac{1}{r}\left(1+\frac{R}{r}\cos\theta\right)+...\;.
\end{eqnarray}
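A short symbolic check of the expansions (\ref{expan}) and (\ref{q1}) (our own sketch, with $c$ standing in for $\cos\theta$):

```python
import sympy as sp

r, R, c = sp.symbols('r R c', positive=True)  # c plays the role of cos(theta)

# r_0 = sqrt(r^2 - 2 r R cos(theta) + R^2), eq. (larg)
r0 = sp.sqrt(r**2 - 2*r*R*c + R**2)

# r_0 = r - R cos(theta) + O(R^2), eq. (expan)
assert sp.simplify(sp.series(r0, R, 0, 2).removeO() - (r - R*c)) == 0

# 1/r_0 = (1/r)(1 + (R/r) cos(theta)) + O(R^2), eq. (q1)
assert sp.simplify(sp.series(1/r0, R, 0, 2).removeO() - (1 + R*c/r)/r) == 0
```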
The time parameters are
\begin{equation}
\label{free}
\tau=r\cos\theta\;,\quad\quad\mbox{and}\quad\quad \tau_0=\tau-R\;.
\end{equation}
Their numerical values
may be positive or negative.
The following exact equalities hold:
\begin{eqnarray}\label{q2}
d&=&r\sin\theta\;,\\
y&=&\tau-r=r(\cos\theta-1)
\;,\\
\left(yr\right)^{-1}&=&\frac{1}{r^2(\cos\theta-1)}\;.
\end{eqnarray}
In addition, we have asymptotic expansions
\begin{eqnarray}
\label{q3}
y_0&=&\tau_0-r_0=y\left(1+\frac{R}{r}\right)+...\;,\\
\left(y_0r_0\right)^{-1}&=&\frac{1}{yr}+\frac{R}{r^3}+...\;,\\
t_0-r_0&=&t-r+R(\cos\theta-1)+...\;.
\end{eqnarray}
Thus, we can decompose any function of the time argument $t_0-r_0$ in a
Taylor series
with respect to the retarded time $t-r$ if convergence is
assumed \cite{59}. For example,
\begin{eqnarray}\label{texp}
{\cal{I}}_{ij}(t_0-r_0)&=&{\cal{I}}_{ij}(t-r)+R(\cos\theta-1)\;
\dot{\cal{I}}_{ij}(t-r)+...\;.
\end{eqnarray}
Finally, we note that the vector $\xi^i$ corresponding to impact parameter $d$
can be
represented as
\begin{eqnarray}\label{vecxi}
\xi^i&=&r\left(N^i-k^i\cos\theta\right)\;,
\end{eqnarray}
where $N^i=-K_0^i=x^i/r$, $|{\bf N}|=1$,
and $k^i$ is the unit vector in the direction from the source of light to
observer \cite{60}.
\subsubsection{Asymptotic expressions for time delay and the angle of
light deflection}
In this section all asymptotic expressions for relativistic effects
are given only up to leading terms
of order $1/r$ and $1/r_0$. For this reason all residual terms of order $1/r^2$
and $1/r^2_0$
are omitted in subsequent formulae without note.
Using asymptotic expansions of functions from the previous section and reducing
similar terms we obtain
\vspace{0.3 cm}
\begin{equation}
\label{pas}
\Delta_Q=\frac{1}{1-\cos\theta}\left[k^i k^j-2k^i N^j\cos\theta+
\frac{1}{2}\left(1+\cos^2\theta\right)N^iN^j\right]
\biggl\{\frac{\dot{\cal I}_{ij}(t-r)}{r}-
\frac{\dot{\cal I}_{ij}(t_0-r_0)}{r_0}\biggr\}\;,\\\nonumber
\end{equation}
where $\cos\theta={\bf k}\cdot{\bf N}={\bf K}\cdot{\bf K}_0$ (see Figures
\ref{bundle} and \ref{largeimp}).
We note that the expression for time delay given above can be further
simplified if the definition of ``transverse-traceless" tensor
with respect to the direction $N^i$ is applied \cite{24},
\cite{33}:
\begin{eqnarray}\label{TT}
{\cal I}_{ij}^{TT}&=&{\cal I}_{ij}+
\frac{1}{2}\left(\delta_{ij}+N_i N_j\right)N_p N_q\; {\cal
I}_{pq}-\left(\delta_{ip}N_j N_q+\delta_{jp}N_i N_q\right)\;{\cal I}_{pq}\;,
\end{eqnarray}
where the projection is onto the plane orthogonal to unit vector $N^i$.
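As a concrete illustration (ours, not from the original text), definition (\ref{TT}) can be checked numerically: applied to a symmetric trace-free tensor, it yields a tensor that is transverse and traceless with respect to $N^i$.

```python
import numpy as np

rng = np.random.default_rng(42)

# random symmetric trace-free 3x3 tensor, standing in for the quadrupole I_ij
A = rng.standard_normal((3, 3))
I = (A + A.T) / 2.0
I -= np.eye(3) * np.trace(I) / 3.0

# random unit vector N^i
N = rng.standard_normal(3)
N /= np.linalg.norm(N)

NNI = N @ I @ N            # N_p N_q I_pq
IN = I @ N                 # N_q I_iq

# equation (TT):
# I^TT_ij = I_ij + (1/2)(delta_ij + N_i N_j) N_p N_q I_pq
#           - (delta_ip N_j N_q + delta_jp N_i N_q) I_pq
I_TT = (I
        + 0.5 * (np.eye(3) + np.outer(N, N)) * NNI
        - np.outer(IN, N)
        - np.outer(N, IN))

assert np.allclose(N @ I_TT, 0.0)        # transverse: N_i I^TT_ij = 0
assert abs(np.trace(I_TT)) < 1e-12       # traceless
```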
Formula (\ref{pas}) for time delay now assumes the form
\begin{eqnarray}\label{timedel}
\Delta_Q&=&\frac{k^i k^j}{1-\cos\theta}\left[
\frac{\dot{\cal I}_{ij}^{TT}(t-r)}{r}-
\frac{\dot{\cal I}_{ij}^{TT}(t_0-r_0)}{r_0}\right]\;.
\end{eqnarray}
Differentiation of $\Delta_Q$ with respect to time gives the frequency shift
due to a remote localized source of gravitational waves
\begin{eqnarray}\label{freq}
z_g(t,t_0)&=&1-\frac{dt}{dt_0}=-\frac{1}{2}\;
\frac{k^i k^j}{1-{\bf k}\cdot{\bf N}}
\left[h_{ij}^{TT}(t-r)-h_{ij}^{TT}(t_0-r_0)\right]\;,
\end{eqnarray}
where the metric $h_{ij}^{TT}$ is defined by the equation (\ref{adm4}) and
taken in the leading order approximation with respect to $1/r$. We
recognize that the expression (\ref{freq}) is a generalization of the analogous
formula for $z_g$ obtained previously by Mashhoon \& Seitz \cite{61}
in the case of a plane
gravitational wave. This exact coincidence demonstrates the power of our
formalism,
which both reproduces well-known results and yields new
observational
predictions for relativistic effects in the propagation of light
rays in the field of an
arbitrary source of gravitational waves \cite{62}.
Repeating the calculations for the angle of light deflection under the assumption that
the
wavelength, $\lambda$, of gravitational waves emitted by the localized source
is smaller than the distance $R$ between source of light and observer,
we come to the following result:
\begin{eqnarray}
\label{asdr}
\alpha_Q^i&=&\frac{1}{1-\cos\theta}\left[\left(\cos\theta-2\right)
\left(k^ik^pk^q+2 k^ik^pN^q\cos\theta\right)+
\left(\cos^2\theta-2\cos\theta-1\right)\times\right.
\\\nonumber\\\nonumber\mbox{}&&\left.\times
\left(\frac{1}{2}k^iN^pN^q\cos\theta-N^iN^pN^q\right)+
N^ik^pk^q-2N^iN^pk^q\right]
\biggl\{\frac{\ddot{\cal I}_{pq}(t-r)}{r}\biggr\}
\\\nonumber\\\nonumber\mbox{}&&
+2\left(k^p- N^p\cos\theta\right)
\biggl\{\frac{\ddot{\cal I}_{ip}(t-r)}{r}\biggr\}\;.\\\nonumber
\end{eqnarray}
Transformation of this result
using relationship (\ref{TT}) and expression (\ref{adm4}) for
$h_{ij}^{TT}$, where only leading terms of order
$1/r$ are retained, reveals that\vspace{0.3 cm}
\begin{eqnarray}
\label{klon}
\alpha_Q^i&=&\frac{1}{2}\;
\frac{k^p k^q }{1-{\bf k}\cdot{\bf N}}\left[
\left({\bf k}\cdot{\bf N}-2\right)k^i+N^i\right]h_{pq}^{TT}(t-r)
+k^p h_{ip}^{TT}(t-r)\;,\\\nonumber
\end{eqnarray}
and, because the vector $\beta^i$ is small,\vspace{0.5 cm}
\begin{eqnarray}
\label{ssa}
s^i&=&K^i+\alpha^i_Q+\gamma^i+...\;,\\\nonumber
\end{eqnarray}
where the ellipsis designates
unimportant terms of higher order with
respect to $1/r$ \cite{63}, and we
have neglected the constant deflection caused by mass-monopole and spin-dipole
dependent terms.
One sees again that only the transverse-traceless component $h_{ij}^{TT}$
of the metric tensor appears in the final expression.
It is worthwhile to stress that the observed optical direction to the source of
light given by the formula (\ref{ssa}) coincides with that which can be obtained
by means of VLBI observations. Indeed, it is easy to confirm that equation
(\ref{ssa}) can be re-written as follows \cite{64}
\begin{eqnarray}
\label{direc}
s^i&=&K^i+\frac{1}{2}\;
\frac{K^i+N^i}{1+{\bf K}\cdot{\bf N}}\;K^p K^q h_{pq}^{TT}(t-r)
-\frac{1}{2}K^p h_{ip}^{TT}(t-r)\;.\\\nonumber
\end{eqnarray}
The direction to the source of electromagnetic waves measured by VLBI is determined
as the difference between the times of arrival of the wave at the first and second
antennas. Taking into account equations (\ref{qer}) and (\ref{timedel}) for the
first and second observing sites, and assuming that the time difference
$t_2-t_1$ in observation of the radio signal at the observatories is small
compared to the period of gravitational waves, we find
\vspace{0.3 cm}
\begin{eqnarray}
\label{timedif}
t_2-t_1&=&-\left({\bf K}+\frac{1}{2}\;
\frac{{\bf K}+{\bf N}}{1+{\bf K}\cdot{\bf N}}\;K^p K^q h_{pq}^{TT}(t-r)
\right)
\cdot({\bf x}_2-{\bf x}_1)\;.
\end{eqnarray}
If the baseline vector measured in the local inertial frame is denoted by ${\bf
b}$ and the transformation (\ref{trans}) is taken into account, then
\vspace{0.3 cm}
\begin{eqnarray}
\label{end}
x^i_2-x^i_1&=&b^i-\frac{1}{2}\;h^{TT}_{ij}(t-r)b^j+O({\bf b}^2)\;.
\end{eqnarray}
We confirm that
\vspace{0.3 cm}
\begin{eqnarray}
\label{finish}
t_2-t_1&=&-{\bf s}\cdot{\bf b}\;,
\end{eqnarray}
where the vector $s^i$ is given by formula (\ref{direc}), which proves our
statement. It is worth emphasizing that equation (\ref{direc}) was obtained
independently by Pyne {\it et al.} (\cite{16}, see formula (47)). Their formalism, however,
has a limited region of application. Extending the formalism of
Pyne {\it et al.} \cite{16} was one of the motivations for the present work.
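The first-order consistency of equations (\ref{timedif})--(\ref{finish}) with (\ref{direc}) can also be verified numerically. The sketch below is our own illustration: the geometry and the small stand-in amplitude for $h_{ij}^{TT}$ are arbitrary choices, and the check is that the two routes to $t_2-t_1$ agree up to terms quadratic in the wave amplitude.

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical geometry: unit vectors K (to the light source) and N (radial)
K = np.array([0.0, 0.0, 1.0])
theta = 0.3
N = np.array([np.sin(theta), 0.0, np.cos(theta)])

eps = 1e-6                           # stand-in amplitude for h_ij^TT
A = rng.standard_normal((3, 3))
h = eps * (A + A.T) / 2.0
h -= np.eye(3) * np.trace(h) / 3.0   # symmetric and trace-free

b = rng.standard_normal(3)           # baseline measured in the local frame

KKh = K @ h @ K
g = 0.5 * (K + N) / (1.0 + K @ N) * KKh   # gravitational term in (timedif)/(direc)

s = K + g - 0.5 * (h @ K)            # apparent direction, eq. (direc)
dx = b - 0.5 * (h @ b)               # coordinate baseline, eq. (end)

t21_direct = -(K + g) @ dx           # eq. (timedif)
t21_from_s = -(s @ b)                # eq. (finish)

# the two expressions differ only at O(h^2)
assert abs(t21_direct - t21_from_s) < 1e-9
```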
\section{Conclusions}
The most accurate astrometric measurements are differential: they
measure the angle between two sources. The highest accuracy is
attainable when the sources are close to each other in the sky. In contrast,
angular deflection by gravitational waves varies only over large angles in the
general case of large impact parameter. Specifically, in such a case
the bending angle depends only on the metric in the
neighborhood of the observer and its first derivatives, as in
equations (\ref{klon}), (\ref{ssa}).
It thus can vary over the sky only as a quadrupole and the
derivative of a quadrupole. Similarly, equations (\ref{dyw}),
(\ref{angle}) and (\ref{tauzer}) depend on the mass quadrupole moment
${\cal {I}}_{ij}$ and its first and second derivatives. Note that
the angle of light deflection (\ref{dyw})
involves the time integrals of ${\cal {I}}_{ij}(t-r)$
which may be interpreted as the presence of the ``kinematic resonance effect''
\cite{14}; however, this term is small, as
discussed above. In the context of a purely locally-determined
deflection angle, it is not unexpected that lines of sight that pass
close to the deflector show almost purely the static effect, as was shown in
section 7.
The magnitudes of the leading terms in the limiting forms for the
deflection angle $\alpha_Q$, in equations (\ref{dyw}), (\ref{angle}) and
(\ref{tauzer}) are $\alpha_Q\sim \Omega^2 G M a^2/c^4 r$, where $M$ is
the mass of the deflector, and $a$ is its dimension. The frequency of
the gravitational waves is $\Omega$. For a gravitationally bound binary
system with a circular orbit,
$\Omega$ is twice the orbital frequency \cite{65}. We can use Kepler's third law to express
this in the form $\alpha_Q\sim \Omega^{2/3}(GM)^{5/3}/c^4 r$, or
alternatively, $\alpha_Q\sim 2.4\times 10^{-14}(M/M_{\odot})^{5/3}
P_{\rm sec}^{-2/3} (r_{\rm kpc})^{-1}\;{\rm arcsec}$, where $P_{\rm sec}$ is
the orbital period of the binary system in seconds and $r_{\rm kpc}$ is the
distance in kiloparsecs. For a contact
white-dwarf binary at 200~pc, the expected deflection is about
$7\times 10^{-13}$~{\rm arcsec}, with a period of about 1000~sec. For
a supermassive black-hole binary, with mass $10^6~M_{\odot}$ and
period 10~yr at a distance of 1~Mpc, the expected deflection is about
$5\times 10^{-11}$~{\rm arcsec}.
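The quoted coefficient can be checked, at the order-of-magnitude level, directly from the scaling $\alpha_Q\sim \Omega^{2/3}(GM)^{5/3}/c^4 r$ with $\Omega=4\pi/P$. The sketch below (ours, with standard SI constants) reproduces it to within a factor of order unity; the deflections for specific binaries then depend on the assumed masses.

```python
import math

G = 6.674e-11            # m^3 kg^-1 s^-2
c = 2.998e8              # m/s
M_SUN = 1.989e30         # kg
KPC = 3.086e19           # m
RAD_TO_ARCSEC = 206265.0

def alpha_Q_arcsec(M_solar, P_sec, r_kpc):
    """alpha_Q ~ Omega^(2/3) (G M)^(5/3) / (c^4 r), with Omega = 4*pi/P,
    twice the orbital frequency of a circular binary (Kepler's third law
    has been used to eliminate the orbital radius a)."""
    Omega = 4.0 * math.pi / P_sec
    alpha = (Omega**(2.0 / 3.0) * (G * M_solar * M_SUN)**(5.0 / 3.0)
             / (c**4 * r_kpc * KPC))
    return alpha * RAD_TO_ARCSEC

# the quoted prefactor 2.4e-14 arcsec (per solar mass, second, kpc)
# is recovered to within an order-unity factor:
coeff = alpha_Q_arcsec(1.0, 1.0, 1.0)
assert 0.3 < coeff / 2.4e-14 < 3.0
```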
Because the effect varies smoothly over the sky, detecting it requires
wide-angle astrometry, where presently available
accuracies are a few microarcseconds. Higher accuracies
are attainable only over smaller angles. Very-long baseline
interferometry of a suite of radio sources attains
microarcsecond accuracy, over
periods of days to years. Specially-designed observations sensitive
to source motions on timescales of minutes or hours might attain higher accuracy,
perhaps as much as an order of magnitude better. Clearly, detection
of deflection of light rays by gravitational waves
from nearby localized sources
is not a goal for the near future because of its smallness.
However, the background gravitational-wave noise may perhaps be measurable.
The near-perfect cancellation of the effect in General Relativity
suggests that deflection of light by gravitational waves could be a
test of that theory in the radiative regime \cite{66a}. In a theory that does not
possess the
symmetries that cause the deflection to vanish, we can only guess the
resulting deflection. Such a guess might multiply the
general-relativistic $\alpha^i_Q$ by three factors. The first factor, of
$r/d$, reflects the amplitude of the gravitational wave at the point of
closest approach, rather than at the observer. The second factor,
some function of the distance to the source measured in
gravitational-wave wavelengths, perhaps $\ln (r/\lambda)$ \cite{67}, reflects
the cumulative effect of bending along the line of sight. The final
factor, unknown, reflects the coupling of the non-general-relativistic
part of the wave to the source and its effect on the light ray. For a
source an arcsecond from the deflectors described above, the first two
factors can increase the effect by several orders of magnitude. Moreover,
if the effect is not local to the observer, differential astrometry
across small angles can detect it, so that greater accuracy is
attainable. Given sufficiently strong departures
from General Relativity, the effect might be detectable.
\acknowledgments
{We are grateful to V.B. Braginsky, M.V. Sazhin, and P. Schneider for
valuable and stimulating discussions.
S.M. Kopeikin is pleased to acknowledge the hospitality
of G. Neugebauer and G. Sch\"afer and
other members of the Institute for Theoretical Physics of the Friedrich
Schiller University of Jena.
The US National Science Foundation supported parts of this work (AST-9731584).
This work was partially
supported by the Th\"uringer Ministerium f\"ur Wissenschaft, Forschung und
Kultur grant No B501-96060 and by the Max-Planck-Gesellschaft grant No
02160-361-TG74.}
| 48,769 |
\section{Introduction}
The impressive experimental programme for the study of $B$ decays
carried out in recent years has improved our knowledge of the
Cabibbo-Kobayashi-Maskawa
(CKM) matrix and CP violation. In the next few years still more abundant
data is to come, especially from the dedicated B-factories Belle \cite{belle}
and BaBar \cite{babar}.
One of the most important goals in these investigations will be a more precise
determination of the CKM matrix element $V_{ub}$ and, to this end,
both exclusive and inclusive $b \to u$ semileptonic transitions will be used.
The two methods have their own
uncertainties. Using the inclusive reaction implies the need to use perturbative QCD
methods in the region near the end-point of the lepton spectrum, where many
resonances are present and perturbative methods are less reliable.
This difficulty can be avoided by considering exclusive channels and
summing them up or taking them separately; however the
use of the exclusive channels forces one to evaluate the hadronic matrix elements
by non-perturbative methods that are either approximate or model-dependent.
Examples of these approximations are given by non-perturbative methods
derived from Quantum Chromodynamics (QCD) first principles, i.e.
Lattice QCD and QCD Sum rules. The drawback of
these methods is the difficulty to improve the precision and
to evaluate reliably the theoretical error, which follows from the nature of the
approximations, i.e. the quenched approximation in Lattice QCD and a truncated
Operator Product Expansion in QCD sum rules. Although less fundamental,
other approaches can be reliably used to give estimates of the hadronic
matrix elements that appear in exclusive $b\to u$ transitions and we refer here to
constituent quark models. At the present stage of our understanding of the hadronic
interactions from first principles, they offer in our opinion a viable alternative, and
the model dependence, which is obvious in this approach, can be used to estimate the
overall theoretical uncertainty. In this paper we shall follow this route
and use a Constituent Quark Meson (CQM) model, introduced and defined
in \cite{gatto} (hereafter referred to
as I) to compute two semileptonic exclusive heavy to light decays, viz.
$B$ to the light
vector mesons $\rho$ and $a_1$. The first decay has been recently observed by the CLEO
collaboration \cite{cleo} (see also \cite{pdg}) that has measured the branching
ratio of the semileptonic decay $B \to \rho \ell \nu$:
\begin{equation}
{\cal B}(B^0 \to \rho^- \ell^+ \nu)=(2.5 \pm 0.4^{+0.5}_{-0.7}\pm0.5) \;
\times \; 10^{-4} \;\;\;. \label{cleo}
\end{equation}
This decay will be used as a test of the
model of Ref.\cite{gatto}, since we do not introduce here any new parameter.
On the other hand, $B \to a_1 \ell \nu$ is a prediction of this model yet to
be tested by the experiment.
The CQM model developed in I tries to combine the simplicity of an effective
lagrangian encoding the symmetries of the problem with some dynamical
information coming from the observed facts of confinement and
chiral symmetry breaking. In spite of the simple way in which the
dynamics is implemented, the results found in I are encouraging. We
discussed there the following issues: leptonic constants for heavy mesons,
Isgur-Wise form factors for heavy mesons of both negative and positive
parity, strong coupling constants of heavy mesons and pions, radiative
decays of $D^*,~B^*$; the comparison with data was
satisfactory whenever it was available.
The plan of the present paper is as follows. The model is briefly reviewed
in Section 2. In Section 3 we compute the direct contribution to the form
factors for
the $B \to \rho \ell \nu$, $B \to a_1 \ell \nu$ decays, i.e. the contribution
arising from diagrams where the weak current directly
interacts with the quarks belonging to heavy and light mesons.
In Section~4 we compute the strong coupling constants of $\rho$ and $ a_1$
to heavy mesons: these couplings are relevant for the calculation of
the polar diagrams, i.e. the diagrams where the weak current
couples to $B$ and $\rho$ (or $a_1$) {\it via} an intermediate
particle. These contributions (called polar contributions)
are described in Section 5. In Section 6 we present our results
and compare them with other approaches and with available data.
In Section 7 we draw our conclusions and in the Appendix we collect
formulas and integrals used in the calculations.
\section{The CQM model}
We begin with a short summary of the CQM model; for a more detailed treatment see I. The model
is an effective field theory containing a quark-meson lagrangian:
\begin{equation}
{\cal L} ~=~{\cal L}_{\ell \ell}~+~{\cal L}_{h \ell}~.\label{lagra}
\end{equation}
The first term involves only the light degrees of freedom (a constituent quark
model for light quarks and mesons was originally suggested by Georgi and
Manohar \cite{manohar}). To the fields considered
in I, i.e. the light quark field $\chi$ and the pseudo-scalar
$SU(3)$ octet of mesons $\pi$, we add the vector meson and axial
vector meson octets $\rho_\mu$ and
$a_\mu$. Considering only the kinetic part of the light quarks and mesons as well as the
quark-meson interactions at the lowest order, we have for ${\cal L}_{\ell \ell} $:
\begin{eqnarray}
{\cal L}_{\ell \ell}&=&{f_{\pi}^2\over 8} \partial_{\mu} \Sigma^{\dagger} \partial^{\mu}
\Sigma +\frac{1}{2 g_V^2} tr[{\cal F(\rho)}_{\mu \nu} {\cal F(\rho)}^{\mu \nu}]
+\frac{1}{2 g_A^2} tr[{\cal F}(a)_{\mu \nu} {\cal F}(a)^{\mu \nu}]
\nonumber\\
&+&
{\bar \chi} (i D^\mu \gamma_\mu -m) \chi \nonumber\\
&+& {\bar \chi} (h_\pi
{\cal A}^\mu \gamma_\mu \gamma_5- i h_\rho \rho^{\mu} \gamma_\mu - i h_a a^{\mu}
\gamma_\mu \gamma_5) \chi.\label{ll}
\end{eqnarray}
Let us discuss the various terms in this equation. The first three terms refer to pions,
light vector mesons and axial-vector mesons, respectively. We have $\xi=\exp(i\pi/f_\pi )$,
$\Sigma=\xi^2$, $f_\pi=132$ MeV; the $\rho$ and $a_1$ field strengths are given by
\begin{equation}
{\cal F}(x)_{\mu \nu} = \partial_\mu x_\nu - \partial_\nu x_\mu +
[x_\mu,x_\nu]~,
\end{equation} where, consistently with the notations of \cite{rep} (see also \cite{bando}), we
write
\begin{equation}
\rho_\mu = i \frac{g_V}{\sqrt{2}} {\hat \rho}_\mu , ~~~~~~~~~~~~g_V =
\frac{m_\rho}{f_\pi} \simeq 5.8~.
\end{equation}
By analogy we also write ($m_a\simeq 1.26$ GeV is the axial-vector meson mass):
\begin{equation}
a_\mu = i \frac{g_A}{\sqrt{2}} {\hat a}_\mu,
~~~~~~~~~~~~g_A = \frac{m_a}{f_\pi} \simeq 9.5~. \label{GA}
\end{equation}
Here $\hat \rho$, $\hat a$ are hermitian $3\times 3$ matrices
of the negative and positive parity light vector mesons.
The fourth term in Eq.~(\ref{ll}) contains
the light quarks, with $D_\mu = \partial_\mu-i {\cal V}_\mu$ and
\begin{equation}
{\cal V}^\mu = {1\over 2} (\xi^\dagger \partial^\mu \xi +\xi \partial^\mu
\xi^\dagger)~.
\end{equation}
Therefore it gives both the kinetic term of the light quark and its coupling
to an even number of pions. For $m$, in I we took the value $m=300$ MeV
(for non-strange quarks).
The last three terms describe further interactions between light quarks and
light mesons. The coupling of the light quark to an odd number of pions is
mediated by
\begin{equation}
{\cal A}^\mu = {i\over 2} (\xi^\dagger \partial^\mu \xi -\xi \partial^\mu
\xi^\dagger)~. \label{av}
\end{equation}
Moreover, consistently with a low energy theorem for pions,
we put $h_\pi=1$. Concerning the interactions of vector particles, we put
$\displaystyle{h_\rho=\frac{\sqrt{2}
m^2_\rho}{g_V f_\rho},~h_a=\frac{\sqrt{2} m^2_a}{g_A f_a}}$,
where $f_{\rho}$ and
$f_a$ are the leptonic constants. For the $\rho$ leptonic constant
we use $f_{\rho}=0.152~{\mathrm GeV}^2$, as given by $\rho^0$, $\omega$ decay into
$e^+ e^-$. For $f_a$ a phenomenological determination using
$\tau \to \nu_\tau \pi \pi \pi$ was obtained in \cite{reader}, i.e.
$f_a= 0.25 \pm 0.02~{\mathrm GeV}^2$, a result which agrees with the one found by
QCD sum rules \cite{shu}. On the other hand from lattice QCD one obtains
$f_a=0.30 \pm 0.03 {\mathrm GeV}^2$ \cite{fa}. Since $1/f_a$ multiplies all the amplitudes
involving the $a_1$ meson, the uncertainty on $f_a$ will induce a normalization
uncertainty on all the amplitudes involving the light axial-vector meson.
We note that our choice for $h_\rho$ and $h_a$ implements the
hypothesis of the Vector and Axial-Vector Meson Dominance. Numerically we find:
\begin{equation}
h_\rho\simeq h_a\simeq 0.95~.
\end{equation}
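These numbers follow directly from the inputs quoted above; the short sketch below (ours) reproduces them, with the $\rho$ mass $m_\rho\simeq 0.77$ GeV as our assumed input, since the text quotes only $g_V$:

```python
import math

# inputs as given in the text (GeV units); m_rho is our assumed PDG-like value
f_pi = 0.132        # pion decay constant
m_rho = 0.770       # rho meson mass (assumption; the text quotes only g_V)
m_a = 1.26          # a_1 meson mass
f_rho = 0.152       # rho leptonic constant, GeV^2
f_a = 0.25          # a_1 leptonic constant, GeV^2 (value of Ref. [reader])

g_V = m_rho / f_pi                              # ~ 5.8
g_A = m_a / f_pi                                # ~ 9.5
h_rho = math.sqrt(2.0) * m_rho**2 / (g_V * f_rho)
h_a = math.sqrt(2.0) * m_a**2 / (g_A * f_a)

assert abs(g_V - 5.8) < 0.1
assert abs(g_A - 9.5) < 0.1
assert abs(h_rho - 0.95) < 0.02
assert abs(h_a - 0.95) < 0.02
```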
We also observe that our choice for the normalization of the light axial
vector meson field Eq.~(\ref{GA}) is conventional since $g_A$ disappears
from the physical quantities (in \cite{bando} $g_A=g_V$ is assumed). We also
differ from the phenomenological analyses of Ref. \cite{bando} since we
do not assume the current algebra relations $m_a^2= 2 m_\rho^2$ and
$f_a=f_\rho$ that seem to have substantial violations.
Let us now discuss ${\cal L}_{h \ell}$, i.e. the part of the lagrangian that
contains both light and heavy degrees of freedom, in particular
the heavy quark ($Q$) and mesons ($Q\bar q$). According
to the Heavy Quark Effective Theory (HQET) \cite{neurev}, in the limit
$m_Q \to \infty$, these mesons can be organized in spin-parity multiplets.
We shall consider here
the negative parity spin doublet $(P,P^*)$ (e.g. $B$ and $B^*$)
and its extension to $P$-waves, i.e. the doublet
containing the $0^+$ and $1^+$ degenerate states $(P_0,P_1^{*\prime})$.
Incidentally,
we note that HQET predicts another low-lying multiplet, comprising two
degenerate states
with $1^+$ and
$2^+$ \cite{falk}, which is of no interest here.
In matrix notations these fields can be represented by two $4 \times 4$ Dirac
matrices $H$ and $S$, with one spinor index for the heavy quark
and the other for the light degrees of freedom.
An explicit matrix representation is, for negative parity states:
\begin{equation}
H = \frac{(1+\slash v)}{2}\; [P_{\mu}^*\gamma^\mu - P \gamma_5 ]
\end{equation}
$({\bar H} = \gamma_0 H^{\dagger} \gamma_0)$, whereas, for positive
parity states:
\begin{equation}
S={{1+\slash v}\over 2}[P_{1\mu}^{*\prime} \gamma^\mu\gamma_5-P_0]~.
\end{equation}
In these equations $v$ is the heavy meson velocity, $v^\mu P^*_{\mu}=v^\mu P_{1
\mu}^{*\prime}= 0$; $P^{*\mu}$, $P$, $P_{1\mu}^{*\prime}$ and $P_0$ are annihilation
operators normalized as follows:
\begin{eqnarray}
\langle 0|P| Q{\bar q} (0^-)\rangle & =&\sqrt{M_H}\\
\langle 0|{P^*}^\mu| Q{\bar q} (1^-)\rangle & = & \epsilon^{\mu}\sqrt{M_H}~~,
\end{eqnarray}
with similar equations for the positive parity states ($M_H=M_P=M_{P^*}$
is the common mass in the $H$ multiplet). With these notations
the heavy-light interaction lagrangian is written as follows:
\begin{eqnarray}
{\cal L}_{h \ell}&=&{\bar Q}_v i v\cdot \partial Q_v
-\left( {\bar \chi}({\bar H}+{\bar S})Q_v +h.c.\right)\nonumber \\
&+&\frac{1}{2 G_3} {\mathrm {Tr}}[({\bar H}+{\bar S})(H-S)]~,
\label{qh1}
\end{eqnarray}
where $Q_v$ is the effective heavy quark field of HQET and we have assumed
that the fields $H$ and $S$ have the same coupling to
the quarks, which is a dynamical assumption based on a simplicity criterion
(in I we
justify it on the basis of a four quark Nambu-Jona Lasinio interaction by
partial
bosonization \cite{ebert}).
After renormalization of the heavy fields $H$ and $S$
\cite{gatto} one obtains the kinetic part of the heavy meson
lagrangian in a form that is standard for heavy meson effective chiral theories
\cite{rep}:
\begin{eqnarray}
{\cal L}_{h \ell}&=& {\mathrm {Tr}} {\bar {\hat H}}(i v\cdot \partial
-\Delta_H){\hat H}
+{\mathrm {Tr}} {\bar {\hat S}}(i v\cdot\partial -\Delta_S) {\hat S}~.
\end{eqnarray}
Here $\Delta_H$ and $\Delta_S$ are the mass differences between the meson and the heavy
quark at the lowest order; typical values considered in I are
$\Delta_H=0.3-0.5$ GeV. $\Delta_H$ and $\Delta_S$ are related:
for example, for $\Delta_H = 0.4$ GeV one obtains the value
$\Delta_S = 0.590$ GeV \cite{gatto}.
These values correspond to a value for the $S$-multiplet mass $m=2165 \pm 50$
MeV; these states, called in the literature (for the charmed case)
$D_0,D^{*\prime}_1$, have not been
observed yet, since they are expected to be rather broad.
${\hat H}$ and ${\hat S}$ are the renormalized fields
and are given in terms of the bare
fields $H,S$ by
\begin{eqnarray}
{\hat H} &=& \frac{H}{\sqrt {Z_H}} \\
{\hat S} &=& \frac{S}{\sqrt {Z_S}}.
\end{eqnarray}
$Z_H,~Z_S$ are renormalization constants that have been computed in
\cite{gatto}
with the results (the integral $I_3$ can be found in the Appendix):
\begin{eqnarray}
Z^{-1}_H&=& \left[ (\Delta_H +m ) \frac{\partial I_3(\Delta_H)}
{\partial \Delta_H}+
I_3(\Delta_H) \right] \\
Z^{-1}_S&=&\left[ (\Delta_S -m ) \frac{\partial I_3(\Delta_S)}
{\partial \Delta_S}+
I_3(\Delta_S) \right]~,
\end{eqnarray}
where $m$ is the constituent light quark mass.
Let us finally discuss the way to compute the quark-meson
loops arising from the previous lagrangian. As we have seen,
the CQM model describes the interactions in terms of effective
vertices between a
light quark, a heavy quark and a heavy meson (Eq.~(\ref{qh1})).
We describe the heavy
quarks and heavy mesons consistently with HQET; for example
the heavy quark propagator is given by
\begin{equation}
{i\over {v\cdot k + \Delta}}~,
\end{equation}
where $\Delta$ is the difference between the heavy meson and heavy quark mass and $k$
is the residual momentum arising from the interaction with
the light degrees of freedom.
The light quark momentum is equal to the integrated
loop momentum. It is therefore natural to assume an
ultraviolet cut-off on the loop momentum of the order of the scale at which the
chiral symmetry is broken, i.e. $\Lambda \simeq 1$ GeV (in I we assumed the
value $\Lambda=1.25$ GeV). Since the residual momentum of the heavy quark
does not exceed a few units of $\Lambda_{QCD}$ in the effective theory,
imposing such a cut-off
does not cut any significant range of ultraviolet frequencies. We also observe
that the value of the ultraviolet cut-off $\Lambda$ is independent of
the heavy quark mass, since it does not appear in the effective lagrangian.
Concerning the infrared behavior, the model is not confining and thus
its range of validity cannot be extended below energies of the order of
$\Lambda_{QCD}$. In order to drop the unknown confinement part of the quark
interaction one introduces an infrared cut-off $\mu$. These parameters
appear in the regularized amplitudes; as discussed in
\cite{gatto} (see also \cite{ebert}) we have chosen a proper
time regularization; the regularized form for the light quark propagator
(including integration over momenta) is
\begin{equation}
\int d^4 k_E \frac{1}{k_E^2+m^2} \to \int d^4 k_E \int_{1/
\Lambda^2}^{1/\mu^2} ds\; e^{-s (k_E^2+m^2)}~, \label{cutoff}
\end{equation}
where $\mu$ and $\Lambda$ are
the infrared and ultraviolet cut-offs. For $\mu$ in I we assumed the value:
$\mu=300$ MeV. For a different choice of the
cut-off prescription in related models see \cite{holdom}, \cite{bardeen}.
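As a numerical illustration of the prescription in Eq.~(\ref{cutoff}): performing the Gaussian integration $\int d^4 k_E\, e^{-s k_E^2}=\pi^2/s^2$ first reduces the regularized tadpole to a one-dimensional proper-time integral, manifestly finite for $0<\mu<\Lambda$. A sketch with the parameter values quoted in the text (the trapezoidal quadrature is our own stdlib stand-in):

```python
import math

# Parameters used in the text: m = 0.3 GeV, Lambda = 1.25 GeV, mu = 0.3 GeV.
m, Lam, mu = 0.30, 1.25, 0.30

def regularized_tadpole(n=20000):
    """int d^4k_E /(k_E^2+m^2) -> pi^2 int_{1/Lam^2}^{1/mu^2} ds s^-2 e^{-s m^2},
    after the Euclidean Gaussian integration int d^4k_E e^{-s k_E^2} = pi^2/s^2."""
    a, b = 1.0 / Lam**2, 1.0 / mu**2
    h = (b - a) / n
    f = lambda s: math.pi**2 / s**2 * math.exp(-s * m**2)
    # composite trapezoidal rule over the proper-time variable s
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

print(regularized_tadpole())  # finite: both the UV and IR divergences are cut off
```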
\section{ $B \to \rho$ and $B \to a_1$ form factors: evaluation of the direct
contributions.}
The form factors for the semileptonic decays $B\to\rho \ell\nu$
and $B\to a_1 \ell\nu$ can be written as follows ($q=p-p^\prime$):
\begin{eqnarray}
<\rho^+(\epsilon(\lambda),p^\prime)|\overline{u}\gamma_\mu
(1-\gamma_5)b|\bar{B^0}(p)>
& = & \frac{2 V(q^2)}{m_B+m_{\rho}}\epsilon_{\mu\nu\alpha\beta}
\epsilon^{*\nu}p^\alpha p^{\prime\beta}\nonumber\\
&-& i \epsilon^{*}_{\mu}(m_{B} + m_{\rho}) A_{1}(q^{2})
\nonumber\\
&+& i (\epsilon^{*}\cdot q)
\frac{(p + p^\prime)_{\mu}}{m_B + m_{\rho}} A_{2}(q^{2})
\\
&+& i (\epsilon^{*}\cdot q)
\frac{2 m_{\rho}}{q^{2}} q_{\mu} [A_{3}(q^{2}) - A_{0}(q^{2})]
\nonumber\;\; ,
\end{eqnarray}
where
\begin{equation}
A_{3}(q^{2}) = \frac{m_{B} + m_{\rho}}{2 m_{\rho}} A_{1}(q^{2})
- \frac{m_{B} - m_{\rho}}{2 m_{\rho}} A_{2}(q^{2}) \;\; ,
\end{equation}
and
\begin{eqnarray}
<a^{+}_1(\epsilon(\lambda),p^\prime)|\overline{q^\prime}
\gamma_\mu(1-\gamma_5)b|B(p)>
& = & \frac{2 A(q^2)}{m_B+m_a}\epsilon_{\mu\nu\alpha\beta}
\epsilon^{*\nu}p^\alpha p^{\prime\beta}\nonumber\\
&-& i \epsilon^{*}_{\mu}(m_{B} + m_{a}) V_{1}(q^{2})
\nonumber\\
&+& i (\epsilon^{*}\cdot q)
\frac{(p + p^\prime)_{\mu}}{m_B + m_a} V_{2}(q^{2})
\\
&+& i (\epsilon^{*}\cdot q)
\frac{2 m_a}{q^{2}} q_{\mu} [V_{3}(q^{2}) - V_{0}(q^{2})]
\nonumber\;\; ,
\end{eqnarray}
where $m_a $ is the $a_1$ mass and
\begin{equation}
V_{3}(q^{2}) = \frac{m_{B} + m_{a}}{2 m_{a}} V_{1}(q^{2})
- \frac{m_{B} - m_{a}}{2 m_{a}} V_{2}(q^{2}) \;\; .
\end{equation}
We note that, having used this parameterization for
the weak matrix elements \cite{bsw}, at $q^2=0$ the following conditions hold:
\begin{eqnarray}
A_{3}(0) &=& A_{0}(0)\label{A3}\\
V_{3}(0) &=& V_{0}(0)\label{V3}\;.
\end{eqnarray}
\par
The contribution we consider in this section arises from
diagrams where the weak current couples directly to the quarks belonging to
the light and heavy mesons (see Fig. 1).
These diagrams are computed using the rules described in the previous Section.
The results of a straightforward, but lengthy calculation are as follows:
\begin{eqnarray}
V^{D}(q^{2}) &=& -\frac{m_{\rho}^2}{f_{\rho}} \sqrt{\frac{Z_H} {m_B}}
\left( \Omega_1 - m Z \right) (m_B + m_{\rho})\\
A^{D}_1 (q^{2}) &=& \frac{2 m_{\rho}^2}{f_{\rho}} \sqrt{Z_H m_B}
\frac{1}{m_B + m_{\rho}}
\nonumber \\
&& \left[ (m^2 + m m_{\rho} {\bar{\omega}}) Z -{\bar{\omega}}
m_{\rho}\Omega_1 - m_\rho
\Omega_2 -2 \Omega_3 -\nonumber \right.\\
&& \left. \Omega_4 -\Omega_5 -2 {\bar{\omega}} \Omega_6 \right]\\
A^{D}_2(q^{2}) &=& \frac{m_{\rho}^2}{f_\rho }\sqrt{\frac{Z_H}{m_B}}
\left( m Z -\Omega_1 - 2 \frac{\Omega_6}{m_\rho} \right) (m_B + m_{\rho})\\
A^{D}_0 (q^{2}) &=& -\frac{m_\rho }{f_\rho} \sqrt{Z_H m_B}
\left[\Omega_1 \left( m_\rho {\bar{\omega}} +2 m \frac{q^2}{m_B^2} -
\frac{r_1}{m_B}\right) + m_\rho \Omega_2 + \nonumber \right.\\
&&\left. 2\Omega_3 + \Omega_4 (1-
2 \frac{q^2}{m_B^2}) + \Omega_5 + 2\Omega_6 \left( \bar{\omega}- \frac
{r_1}{m_B m_\rho} \right)-\nonumber\right.\\
&&\left. Z(m^2 - m \frac{r_1}{m_B} + m m_\rho {\bar{\omega}})
\right]
\end{eqnarray}
where
\begin{equation}
{\bar{\omega}}=\frac{m_B^2+m_\rho^2-q^2}{2 m_B m_\rho}~,
\end{equation}
and
\begin{equation}
r_1=\frac{m_B^2-q^2-m^2_\rho}{2}\label{r2}
\end{equation}
and the functions $Z$, $\Omega_j$ are given by the formulae of the Appendix
with $\Delta_1=\Delta_H$, $\Delta_2=\Delta_1 -m_\rho {\bar{\omega}}$,
$x=m_\rho$; $m$ is the constituent light quark mass.
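The kinematic variables defined above are easily tabulated; note that ${\bar\omega}$ is the product of the $B$ and $\rho$ four-velocities, so it equals $1$ at zero recoil, $q^2=(m_B-m_\rho)^2$. A sketch assuming PDG-like masses $m_B=5.279$ GeV, $m_\rho=0.770$ GeV (not quoted in this section):

```python
m_B, m_rho = 5.279, 0.770  # GeV (assumed PDG-like values)

def omega_bar(q2):
    # bar-omega = (m_B^2 + m_rho^2 - q^2) / (2 m_B m_rho)
    return (m_B**2 + m_rho**2 - q2) / (2.0 * m_B * m_rho)

def r1(q2):
    # r_1 = (m_B^2 - q^2 - m_rho^2) / 2
    return (m_B**2 - q2 - m_rho**2) / 2.0

print(omega_bar(0.0), r1(0.0))               # values at q^2 = 0
print(omega_bar((m_B - m_rho)**2))           # 1.0 at zero recoil
```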
\newpage
The calculation for the $ B \to a_1$ transition is similar. The results are:
\begin{eqnarray}
A^{D}(q^2) &=&- \frac{m_{a}^2}{f_{a}} \sqrt{\frac{Z_H} {m_B}}
\left( \Omega_1 - m Z -\frac{2m}{m_a} \Omega_2 \right) (m_B + m_a) \\
V^{D}_1 (q^{2})&=&\frac{2 m_{a}^2}{f_a} \sqrt{Z_H m_B}
\frac{1}{m_B + m_a}
\nonumber \\
&&\left[(-m^2 + m m_a \bar{\omega})Z + 2m\Omega_1 - \bar{\omega}m_a\Omega_1+
\nonumber \right.\\
&& \left.+(2m\bar{\omega}
-m_a )\Omega_2-2\Omega_3 -\Omega_4-\Omega_5
-2\bar{\omega}\Omega_6 \right]\\
V^{D}_2(q^{2}) &=& \frac{m_a^2}{f_a}\sqrt{\frac{Z_H}{m_B}}
\left( m Z -\Omega_1 - 2 \frac{\Omega_6}{m_a}+2\frac{m}{m_a}\Omega_2
\right) (m_B + m_a)\\
V^{D}_0 (q^{2}) &=& -\frac{m_a }{f_a} \sqrt{Z_H m_B}
\left[\Omega_1 \left( m_a \bar{\omega}+2m \frac{q^2}{m^{2}_B}-
\frac{r_1^{\prime}}{m_B}
-2m\right) + \nonumber \right.\\
&& \left. \Omega_2 \left( m_a + 2m \frac{ r_1^\prime}{m_B m_a} -
2 m \bar{\omega}\right)+
2\Omega_3 + \Omega_4 \left(1- 2\frac{q^2}{m^{2}_B} \right)+\Omega_5+
\nonumber
\right.\\
&&\left. 2\Omega_6 \left(\bar{\omega} - \frac{r_1^\prime}{m_B m_a} \right)+
Z\left(m^2+m\frac{r_1^\prime}{m_B}- m m_a \bar{\omega} \right)\right] ~,
\end{eqnarray}
where now:
\begin{eqnarray}
{\bar{\omega}}&=&\frac{m_B^2+m_a^2-q^2}{2 m_B m_a}\\
r_1^\prime & = &\frac{m_B^2-q^2-m^2_a}{2}~.
\label{r2p}
\end{eqnarray}
The previous results for the form factors can be used directly for the
numerical analysis. In order to allow an easier way of using our results
we have fitted these formulas by the simple parameterization:
\begin{equation}
F^D(q^2)=\frac{F^D(0)}{1~-~a_F \left(\displaystyle\frac{q^2}{m_B^2}\right) +~b_F
\left(\displaystyle\frac{q^2}{m_B^2}\right)^2}
\label{16b}
\end{equation}
\noindent
for a generic form factor $F^D(q^2)$; $a_F~,b_F~$ have been fitted by a
numerical analysis performed up to $q^2=16~{\mathrm GeV}^2$, both for $\rho$
and $a_1$ mesons. We have collected the fitted values in Table \ref{t:tab1}.
We note explicitly that the results for $B \to a_1$ form factors at $q^2=0$
are proportional to the factor
$(0.25~{\mathrm GeV}^2/f_a)$. Besides the normalization
uncertainty due to $f_a$, we estimate a theoretical error of $15\%$ on these
parameters.
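The fitted parameterization (\ref{16b}) is straightforward to evaluate; a minimal sketch follows. The numerical inputs below are placeholders for illustration only: the actual fitted values of $F^D(0)$, $a_F$, $b_F$ are those of Table \ref{t:tab1} and are not reproduced here.

```python
def F_direct(q2, F0, a_F, b_F, m_B=5.279):
    """Fitted direct contribution, Eq. (16b):
    F^D(q^2) = F^D(0) / (1 - a_F x + b_F x^2),  x = q^2 / m_B^2."""
    x = q2 / m_B**2
    return F0 / (1.0 - a_F * x + b_F * x**2)

# Placeholder parameters only (hypothetical, not from the paper's tables).
print(F_direct(0.0, F0=0.30, a_F=1.0, b_F=0.5))   # equals F^D(0) at q^2 = 0
print(F_direct(16.0, F0=0.30, a_F=1.0, b_F=0.5))  # edge of the fitted q^2 range
```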
\section{Strong coupling constants}
In this Section we compute the strong couplings $HH\rho$, $H S\rho$, $HH a_1$,
$H S a_1$. As discussed in the introduction they are relevant for the
calculation of the polar contribution to the form factors.
We parameterize these couplings by considering
the following effective Lagrangians (we follow the notations introduced in
\cite{rep}):
\begin{eqnarray}
{\cal L}_{H H \rho}&=&i\lambda {\mathrm {Tr}}(\overline{H} H \sigma^{\mu \nu}
{\cal F(\rho)}_{\mu \nu}) -i \beta {\mathrm {Tr}}(\overline{H} H \gamma^\mu
\rho_\mu)
\\ \label{strong1}
{\cal L}_{H S \rho}&=&-i \zeta {\mathrm {Tr}}(\overline{S}
H \gamma^\mu \rho_\mu)+
i\mu {\mathrm {Tr}}(\overline{S} H \sigma^{\mu \nu}
{\cal F(\rho)}_{\mu \nu})
\\ \label{strong2}
{\cal L}_{H H a_1}&=& -i \zeta_A {\mathrm {Tr}}(\overline{H} H
\gamma^\mu a_\mu )+
i\mu_A {\mathrm {Tr}}(\overline{H} H \sigma^{\mu \nu}
{\cal F}(a)_{\mu \nu} )
\\ \label{strong3}
{\cal L}_{H S a_1}&=&i\lambda_A {\mathrm {Tr}}(\overline{S} H \sigma^{\mu \nu}
{\cal F}(a)_{\mu \nu} ) -i \beta_A {\mathrm {Tr}}(\overline{S} H \gamma^\mu
a_\mu )~.
\label{strong4}
\end{eqnarray}
The strong couplings can be computed by matching the effective meson
lagrangian of Eqs.(\ref{strong1})-(\ref{strong4}) with the quark-meson
lagrangian (\ref{qh1}), i.e. considering triangular quark loops
with external legs representing light and heavy mesons. The calculation
is similar to the one of the previous section. The results are
as follows:
\begin{eqnarray}
\lambda &=&\frac{m^2_{\rho}}{ \sqrt{2} g_V f_{\rho}} Z_H (-\Omega_1+ m Z)\\
\beta &=&\sqrt{2}\frac{m^2_{\rho}}{ g_V f_{\rho}} Z_H [2 m \Omega_1+
m_\rho \Omega_2 + 2 \Omega_3 - \Omega_4 + \Omega_5-
m^2 Z ]~.
\end{eqnarray}
Here the functions $Z$, $\Omega_j$ are given by the formulae of the Appendix
with $\Delta_1=\Delta_H$, $x=m_\rho$, $\omega=m_\rho/(2m_B)$ (we keep here
the first $1/m_Q$ correction). Moreover
\begin{eqnarray}
\mu &=& \frac{m^2_\rho}{\sqrt{2} g_V f_{\rho}} \sqrt{Z_H Z_S}( -\Omega_1-
2 \frac{\Omega_6}{m_\rho}+ m Z) \\
\zeta &=& \frac{\sqrt{2} m^2_\rho}{g_V f_{\rho}} \sqrt{Z_H Z_S}
\left(m_\rho \Omega_2 +2 \Omega_3 +\Omega_4 +\Omega_5 -m^2 Z \right)~.
\end{eqnarray}
Here the functions $Z$, $\Omega_j$ are given by the formulae of the Appendix
with $\Delta_1=\Delta_H$, $\Delta_2=\Delta_S$, $x=m_\rho$ and
$\omega=(\Delta_1-\Delta_2)/{m_\rho}$.
For the axial-vector $a_1$ couplings to $H$ and $S$ states we find
\begin{eqnarray}
\lambda_A &= & \frac{m^2_a}{\sqrt{2} g_A f_a} \sqrt{Z_H Z_S}
( -\Omega_1 +2\Omega_2\frac{m}{m_a} + m Z)\\
\beta_A & = & \sqrt{2}\frac{m^2_a}{ g_A f_a} \sqrt{Z_H Z_S}
(m_a \Omega_2 +2\Omega_3 -\Omega_4 + \Omega_5
+ m^2 Z)~,
\end{eqnarray}
where $Z$, $\Omega_j$ are given by the formulae of the Appendix
with $\Delta_1=\Delta_H$, $\Delta_2=\Delta_S$, $x=m_a$ and
$\omega=(\Delta_1-\Delta_2)/{m_a}$. Moreover
\begin{eqnarray}
\mu_A & = & \frac{m^2_a}{\sqrt{2} g_A f_a} Z_H \left( m \left(Z+
2\frac{\Omega_2}{m_a}\right)
- \Omega_1 -2\frac{\Omega_6}{m_a}\right) \\
\zeta_A & = & \frac{\sqrt{2} m^2_a}{g_A f_a} Z_H
( -2m\Omega_1+ m_a\Omega_2 + 2\Omega_3+\Omega_4+\Omega_5+m^2 Z) ~,
\end{eqnarray}
where $\Delta_1=\Delta_H$, $x=m_a$, $\omega=m_a/(2m_B)$.
Numerically we get the following results:
$$
\begin{array}{cclcccl}
\lambda &=& 0.60~{\mathrm {GeV}}^{-1}
&\hspace{0.5truecm}& \lambda_A &=& 0.85 \times
(0.25~ {\mathrm GeV}^2/f_a)~{\mathrm {GeV}}^{-1}\nonumber \\
\beta &=& -0.86
& & \beta_A &=& -0.81 \times (0.25~ {\mathrm GeV}^2/f_a) \nonumber \\
\mu &=& 0.16~{\mathrm {GeV}}^{-1}
& & \mu_A &=& 0.23 \times (0.25~ {\mathrm GeV}^2/f_a)~{\mathrm {GeV}}^{-1} \\
\zeta &=& 0.01
& &\zeta_A &=& 0.15 \times (0.25~ {\mathrm GeV}^2/f_a)~.
\end{array}
$$
A discussion about the theoretical uncertainties of these results is in order.
We have explicitly written down the dependence on $f_a$ of the strong
coupling constants involving the light axial-vector meson, since as noted
before, this is a major source of theoretical uncertainty for these constants.
Another source of spreading in the reported values is the variation of
$\Delta_H$ in the range $\Delta_H=0.3-0.5$ GeV (we use $\Delta_H=0.4$ GeV
in the calculation). This produces a significant uncertainty
only for $\zeta,\beta_A,\zeta_A$ since we obtain $\zeta = 0.01\pm 0.19$,
$\beta_A = -0.81^{+0.45}_{-0.24}$ and $\zeta_A = 0.15^{+0.16}_{-0.14}$ while
in the other cases only a few percent variation is observed.
For the other constants: $\lambda$, $\mu$, $\lambda_A$, $\mu_A$, a
theoretical uncertainty of $\pm 15 \%$ can be guessed. This estimate follows
for example from a different evaluation of the $\lambda$ parameter performed
in I. For other determinations of the coupling constant $\lambda$ see
\cite{aliev} (QCD sum rules and light cone sum rules) and \cite{defazio2}
(this paper uses data from $D^*$ decays together with vector meson dominance).
\section{ $B \to \rho$ and $B \to a_1$ form factors: evaluation of
the polar contributions.}
The polar contributions are given by diagrams where the weak
current is coupled to $B$ and to the light vector or axial vector meson
by an intermediate heavy meson state (see Fig. 2). These diagrams,
because of the heavy
meson propagator, produce a typical polar behavior of the form
factors, in the form
\begin{equation}
F^P(q^2)=\frac{F^P(0)}{1~-~ \frac{q^2}{m_P^2}}~.
\label{fp}
\end{equation}
\noindent
This behaviour is certainly valid near the pole; we assume its validity for
the whole $q^2$ range, which can be justified on the basis of the minor
numerical role of this polar
contribution, as compared to the direct one, in the region of low $q^2$,
at least for the form factors $A_1^P, A_2^P$ (see the
numerical results at the end of this Section). The assumption
(\ref{fp}) cannot be made
for the form factors $A_0^P(q^2)$ and
$V_0^P(q^2)$, as we discuss below, and is also less reliable for
$A^P(q^2)$ and $V^P(q^2)$ (see Table \ref{t:tabp}).
The values at $q^2=0$ in Eq.~(\ref{fp}) can be easily computed in terms of
the strong coupling constants defined in the previous Section and using
the leptonic decay constants ${\hat F}$ and ${\hat F}^+$ that give the
coupling of the intermediate states to the currents.
Neglecting logarithmic corrections, ${\hat F}$ and ${\hat F}^+$ are related,
in the infinite heavy quark mass limit, to the
leptonic decay constant $f_B$ and $f^+$ defined by
\begin{eqnarray}
\langle 0|{\bar q} \gamma^\mu \gamma_5 b |B(p)\rangle
&=& i p^\mu f_B \\
\langle 0|{\bar q} \gamma^\mu b |B_0(p)\rangle
&=& p^\mu f^+ ~,
\end{eqnarray}
by the relations $f_B={\hat F}/\sqrt{m_B}$
and $f^+={\hat F}^+/\sqrt{m_{B_0}}$ ($B_0$ is the $S$ state with $J^P=0^+$ and
$b\bar q$ content).
These couplings have been computed in \cite{gatto} with the results
given in Table \ref{t:tabf} for different values of the
parameters.
For the values $F^P(0)$ we obtain the following results for the $B\to \rho$
transition.
\begin{eqnarray}
V^P (0)&=& -\sqrt{2} g_V \lambda {\hat F} \frac{m_B +m_\rho}{ m_B^{3/2}}\\
A^P_1 (0) &=& \frac{\sqrt{2 m_B}g_V {\hat F}^+}{m_{B_0} (m_B+m_{\rho})}
(\zeta-2\mu {\bar \omega} m_{\rho})\\
A^P_2 (0) &=& -\sqrt{2} g_V \mu {\hat F}^+ \frac{\sqrt{m_B} (m_B+m_\rho)}
{m_{B_0}^2}~,
\end{eqnarray}
where ${\bar \omega}=m_B/(2 m_\rho)$.
For $A^P_0 (q^2)$, we have to implement the condition contained in
Eq.~(\ref{A3}); for instance a possible choice is
\begin{equation}
A^P_0 (q^2)= A^P_3(0) + g_V \beta {\hat F} \frac{1}{m_\rho\sqrt{2 m_B}}
\frac{q^2}{m^2_B-q^2}~.
\label{AP0}
\end{equation}
For the $B\to a_1$ transition we have:
\begin{eqnarray}
A^P (0)&=& -\sqrt{2} g_A \lambda_A {\hat F}^+ \frac{m_B +m_a}{ m_B^{3/2}}\\
V^P_1 (0) &=& \frac{\sqrt{2 m_B}g_A {\hat F}}{m_B (m_B+m_a)}
(\zeta_A-2\mu_A {\bar \omega} m_a) \\
V^P_2 (0) &=& -\sqrt{2} g_A \mu_A {\hat F} \frac{\sqrt{m_B} (m_B+m_a)}
{m_B^2}~,
\end{eqnarray}
where ${\bar \omega}=m_B/(2 m_a)$. Similarly to the previous discussion
for $A^P_0 (q^2)$ we can put for instance:
\begin{equation}
V^P_0 (q^2)= V^P_3(0) + g_A \beta_A {\hat F}^+ \frac{1}{m_a\sqrt{2 m_B}}
\frac{q^2}{m^2_{B_0}-q^2} ~.
\label{VP0}
\end{equation}
We note that Eqs.~(\ref{AP0}) and (\ref{VP0}) have been written down only
as an example of
a possible behaviour of these form factors satisfying the given constraints.
For massless
leptons they do not contribute to the semileptonic width and can be neglected.
Numerically we obtain the results in Table \ref{t:tabp} where we have also
reported the values of the pole masses for all the form factors except
$A^P_0 (q^2)$ and $V^P_0 (q^2)$ because of Eqs.~(\ref{AP0}) and (\ref{VP0}).
Similarly to the previous analyses an overall
uncertainty of $\pm 15\%$ can be assumed. In Fig. 3 we plot the form factors
$A_1$ and $A_2$ for the semileptonic decay $B \to \rho$. In Fig. 4 we show
the form factors $A$, $V_1$ and $V_2$ for the semileptonic decay $B \to a_1$.
Since the behaviour in Eqs.~(\ref{AP0}) and (\ref{VP0}) is only guessed,
we have
not included the form factors $A^P_0 (q^2)$ and $V^P_0 (q^2)$ in Figs. 3 and 4;
in addition $V(q^2)$ is not reported in Fig. 3 since our prediction is
affected by a large error (see the discussion in the next section).
Note that the theoretical error is not included in Figs. 3 and 4; one should
refer to the numbers in Tables \ref{t:tab1} and \ref{t:tabp}.
\section{Branching ratios and widths}
In this Section we compute the branching ratios and widths for semileptonic
decays using the numerical results for form factors reported in Tables I and II.
Let us first compare our results for the $B \to \rho$ form factors with
those obtained by other methods (see Table IV). These form factors (as well
as those concerning the transition $B \to a_1$) are obtained by adding
the direct and polar contributions:
\begin{equation}
F(q^2)=F^D(q^2) + F^P(q^2),
\end{equation}
where $F^D(q^2)$ was introduced in Section III and $F^P(q^2)$ in Section V.
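The combination of the two pieces can be sketched as follows. All numerical inputs below are placeholders: the actual values of $F^D(0)$, $a_F$, $b_F$, $F^P(0)$ and the pole masses are those of Tables \ref{t:tab1} and \ref{t:tabp}, not reproduced here.

```python
def F_total(q2, FD0, a_F, b_F, FP0, m_P, m_B=5.279):
    """Total form factor F = F^D + F^P: the fitted direct piece (Eq. 16b)
    plus the single-pole polar piece (Eq. fp). All inputs are placeholders;
    the paper's values are listed in its tables."""
    x = q2 / m_B**2
    FD = FD0 / (1.0 - a_F * x + b_F * x**2)   # direct contribution
    FP = FP0 / (1.0 - q2 / m_P**2)            # polar contribution
    return FD + FP

print(F_total(0.0, 0.2, 1.0, 0.5, 0.1, 5.3))  # F^D(0) + F^P(0) = 0.3
```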
Our result for the vector form factor $V^{\rho}(q^2)$
is affected by a large error since it arises from
the sum of two terms opposite in sign and almost equal in absolute value.
Apart from this large uncertainty, our results are in relatively good
agreement with the results of QCD sum rules, but they are in general
higher than those obtained by other approaches.
For the $B\to \rho \ell\nu$ decay width and branching ratio
we obtain (using $V_{ub}=0.0032$,
$\tau_B=1.56 \times 10^{-12}$ s):
\begin{eqnarray}
{\cal B}(\bar B^0 \to \rho^+ \ell \nu) &=& (2.5 \pm 0.8) \times 10^{-4} \nonumber \\
\Gamma_0(\bar B^0 \to \rho^+ \ell \nu) &=& (4.4 \pm 1.3) \times 10^{7} \; s^{-1}
\nonumber \\
\Gamma_+ (\bar B^0 \to \rho^+ \ell \nu)&=& (7.1 \pm 4.5) \times 10^{7} \;
s^{-1} \nonumber \\
\Gamma_- (\bar B^0 \to \rho^+ \ell \nu)&=& (5.5 \pm 3.7) \times 10^{7} \;
s^{-1} \nonumber \\
(\Gamma_+ + \Gamma_-) (\bar B^0 \to \rho^+ \ell \nu)&=& (1.26 \pm 0.38) \times
10^8 \; s^{-1}
\end{eqnarray}
where $\Gamma_0$, $\Gamma_+$, $\Gamma_-$ refer to the $\rho$ helicities.
The branching ratio for $B\to \rho \ell\nu$ is in agreement with the
experimental result quoted in the introduction, Eq.~(\ref{cleo}).
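As a quick arithmetic consistency check, the quoted branching ratio should equal $\tau_B$ times the sum of the three helicity widths; with the central values quoted above:

```python
tau_B = 1.56e-12                                 # s, B lifetime used in the text
Gamma_0, Gamma_p, Gamma_m = 4.4e7, 7.1e7, 5.5e7  # s^-1, central values quoted above

BR = tau_B * (Gamma_0 + Gamma_p + Gamma_m)
print(BR)  # about 2.65e-4, compatible with the quoted (2.5 +- 0.8) x 10^-4
```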
Let us now discuss the theoretical uncertainty of these results. The large error
of $V^\rho (0)$ affects significantly the values of $\Gamma_+$ and $\Gamma_-$,
whose errors are correlated;
it has however no effect on $\Gamma_0$ and a small effect
on the branching ratio, which increases at most by $8\%$. The theoretical
uncertainties on $A_1^\rho (0)$ and $A_2^\rho (0)$ are likely to be related.
To get the theoretical error on the widths we have added in quadrature the
error induced by $V^\rho (0)$ and a common $\pm 15\%$ error on
$A_1^\rho (0)$ and $A_2^\rho (0)$.
Having used the decay $B\to \rho \ell\nu$ as a test of the CQM model, we can
now consider the $B\to a_1 \ell\nu $ semileptonic decay.
The results obtained are:
\begin{eqnarray}
{\cal B}(\bar B^0 \to a_1^+ \ell \nu) &=& (8.4 \pm 1.6) \times 10^{-4}
\nonumber \\
\Gamma_0 (\bar B^0 \to a_1^+ \ell \nu)&=& (4.0 \pm 0.7) \times 10^{8} \;
s^{-1} \nonumber \\
\Gamma_+ (\bar B^0 \to a_1^+ \ell \nu)&=& (4.6 \pm 0.9) \times 10^{7} \;
s^{-1} \nonumber \\
\Gamma_- (\bar B^0 \to a_1^+ \ell \nu)&=& (0.98 \pm 0.18) \times 10^{8} \;
s^{-1} \label{ba1}
\end{eqnarray}
where $\Gamma_0$, $\Gamma_+$, $\Gamma_-$ refer to the $a_1$ helicities.
We have included in the determination of these decay widths only the
normalization uncertainty arising from $f_a$; the lower values correspond to
$f_a=0.30~{\mathrm GeV}^2$ while the higher values to
$f_a=0.25~{\mathrm GeV}^2$. One should also take into account the
theoretical errors arising from the values of the form factors at $q^2=0$;
they are more difficult to estimate reliably and are not included here.
In any case the overall theoretical uncertainty is larger (presumably
by a factor of two) than the one reported in the previous formula.
\section{Conclusions}
The main conclusion of this paper can be read from Eqs.~(\ref{ba1}).
We predict
a branching ratio for the decay $\bar B^0 \to a_1^+ \ell \nu$ significantly
larger than the branching ratio for $\bar B^0 \to \rho^+ \ell \nu$;
in spite of the theoretical uncertainties inherent to the CQM model,
which we have discussed in the previous Sections, this is a remarkable outcome.
A consequence of this result
is that the $ B \to a_1 \ell \nu$ decay channel
might account for around $50\%$
of the semileptonic $ B \to X_u \ell \nu$ decay channel (evaluated,
for example, by the parton model), whereas the $ B \to \rho \ell \nu$
decay channel adds another $15\%$; given the relevance of these results for the
determination of $V_{ub}$, it would be interesting to test these predictions
in the future by other theoretical methods and, hopefully, by experimental data.
\section{Appendix}
In the paper we have introduced several integrals and parameters that we list
in this Appendix.
\begin{eqnarray}
I_0(\Delta)&=& \frac{iN_c}{16\pi^4} \int^{\mathrm {reg}}
\frac{d^4k}{(v\cdot k + \Delta + i\epsilon)} \nonumber \\
&=&{N_c \over {16\,{{\pi }^{{3/2}}}}}
\int_{1/{{{\Lambda}^2}}}^{1/{{{\mu }^2}}} {ds \over {s^{3/2}}}
\; e^{- s( {m^2} - {{\Delta }^2} ) }
\left( {3\over {2\,s}} + {m^2} - {{{\Delta}}^2} \right)
[1+{\mathrm {erf}}(\Delta\sqrt{s})]\nonumber \\
&-& \Delta {{N_c m^2}\over {16 \pi^2}} \Gamma(-1,{{{m^2}}
\over {{{\Lambda}^2}}},{{{m^2}}\over {{{\mu }^2}}})
\\
I_1&=&\frac{iN_c}{16\pi^4} \int^{reg} \frac{d^4k}{(k^2 - m^2)}
={{N_c m^2}\over {16 \pi^2}} \Gamma(-1,{{{m^2}}
\over {{{\Lambda}^2}}},{{{m^2}}\over {{{\mu }^2}}})
\\
I_1^{\prime}&=&\frac{iN_c}{16\pi^4} \int^{\mathrm {reg}} d^4
k\frac{k^2}{(k^2 - m^2)}
={{N_c m^4}\over {8 \pi^2}} \Gamma(-2,{{{m^2}}
\over {{{\Lambda}^2}}},{{{m^2}}\over {{{\mu }^2}}})\\
I_2&=&-\frac{iN_c}{16\pi^4} \int^{\mathrm {reg}} \frac{d^4k}{(k^2 - m^2)^2}=
\frac{N_c}{16\pi^2} \Gamma(0,\frac{m^2}{\Lambda^2}, \frac{m^2}{\mu^2})\\
I_3(\Delta) &=& - \frac{iN_c}{16\pi^4} \int^{\mathrm {reg}}
\frac{d^4k}{(k^2-m^2)(v\cdot k + \Delta + i\epsilon)}\nonumber \\
&=&{N_c \over {16\,{{\pi }^{{3/2}}}}}
\int_{1/{{\Lambda}^2}}^{1/{{\mu }^2}} {ds \over {s^{3/2}}}
\; e^{- s( {m^2} - {{\Delta }^2} ) }\;
\left( 1 + {\mathrm {erf}} (\Delta\sqrt{s}) \right)\\
I_4(\Delta)&=&\frac{iN_c}{16\pi^4}\int^{\mathrm {reg}}
\frac{d^4k}{(k^2-m^2)^2 (v\cdot k + \Delta + i\epsilon)} \nonumber\\
&=&\frac{N_c}{16\pi^{3/2}} \int_{1/\Lambda^2}^{1/\mu^2} \frac{ds}{s^{1/2}}
\; e^{-s(m^2-\Delta^2)} \; [1+{\mathrm {erf}}(\Delta\sqrt{s})]~,
\end{eqnarray}
where $\Gamma$ is the generalized incomplete gamma function and erf
is the error function. Moreover, having defined:
\begin{equation}
\sigma(x,\Delta_1,\Delta_2,\omega)={{{\Delta_1}\,\left( 1 - x \right) +
{\Delta_2}\,x}\over {{\sqrt{1 + 2\,\left(\omega -1 \right) \,x +
2\,\left(1-\omega\right) \,{x^2}}}}}~,
\end{equation}
we have:
\begin{eqnarray}
I_5(\Delta_1,\Delta_2, \omega) & = & \frac{iN_c}{16\pi^4} \int^{\mathrm {reg}}
\frac{d^4k}{(k^2-m^2)(v\cdot k + \Delta_1 + i
\epsilon )
(v'\cdot k + \Delta_2 + i\epsilon )} \nonumber \\
& = & \int_{0}^{1} dx \frac{1}{1+2x^2 (1-\omega)+2x
(\omega-1)}\times\nonumber\\
&&\Big[ \frac{6}{16\pi^{3/2}}\int_{1/\Lambda^2}^{1/\mu^2} ds~\sigma
\; e^{-s(m^2-\sigma^2)} \; s^{-1/2}\; (1+ {\mathrm {erf}}
(\sigma\sqrt{s})) +\nonumber\\
&&\frac{6}{16\pi^2}\int_{1/\Lambda^2}^{1/\mu^2}
ds \; e^{-s(m^2-2\sigma^2)}\; s^{-1}\Big]~.
\end{eqnarray}
We also define, if $q^\mu= x v^{\prime\mu}$, $\omega=v\cdot v^{\prime}$,
$\Delta_2=\Delta_1- x ~ \omega$, the formula:
\begin{eqnarray}
Z &=& \frac{iN_c}{16\pi^4} \int^{\mathrm {reg}}
\frac{d^4k}{(k^2-m^2)[(k+q)^2-m^2](v\cdot k + \Delta_1 + i\epsilon)}\nonumber \\
&=&\frac{I_5(\Delta_1, x/2,\omega)-I_5(\Delta_2,- x/2,\omega)}{2 x}~.
\end{eqnarray}
We use in the text the following combinations of the previous integrals:
\begin{eqnarray}
K_1&=&m^2 Z -I_3(\Delta_2)\nonumber \\
K_2&=&\Delta_1^2 Z -\frac{I_3(x/2)-
I_3(-x/2)}{4 x}[\omega ~ x + 2 \Delta_1]
\nonumber \\
K_3&=&\frac{x^2}{4} Z +\frac{I_3(\Delta_1)-3
I_3(\Delta_2)}{4}+\frac{\omega}{4}[\Delta_1 I_3(\Delta_1)-
\Delta_2 I_3(\Delta_2)]
\nonumber \\
K_4&=&\frac{x \Delta_1}{2} Z +\frac{\Delta_1[I_3(\Delta_1)-
I_3(\Delta_2)]}{2 x}+\frac{I_3(x/2)-I_3(-x/2)}{4} \nonumber \\
\Omega_1&=&\frac{ I_3(-x/2)-I_3(x/2)+\omega[I_3(\Delta_1)-
I_3(\Delta_2)]}{2 x (1-\omega^2)} - \frac{[\Delta_1-\omega x/2]Z}
{1-\omega^2}
\nonumber \\
\Omega_2&=&\frac{ -I_3(\Delta_1)+I_3(\Delta_2)-\omega[I_3(-x/2)-I_3(x/2)]}
{2 x (1-\omega^2)} - \frac{[x/2- \Delta_1\omega ]Z}{1-\omega^2}
\nonumber \\
\Omega_3&=&\frac{K_1}{2}+\frac{2 \omega K_4-K_2-K_3}{2(1-\omega^2)}
\nonumber \\
\Omega_4&=&\frac{-K_1}{2(1-\omega^2) }+\frac{3 K_2-6 \omega K_4+K_3
(2\omega^2+1)}{2(1-\omega^2)^2}
\nonumber \\
\Omega_5&=&\frac{-K_1}{2(1-\omega^2) }+\frac{3 K_3-6 \omega K_4+K_2
(2\omega^2+1)}{2(1-\omega^2)^2}
\nonumber \\
\Omega_6&=&\frac{K_1\omega}{2(1-\omega^2) }+\frac{2 K_4(2\omega^2+1)-
3\omega( K_2+K_3)
}{2(1-\omega^2)^2}~.
\nonumber \\
\end{eqnarray}
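The proper-time integrals above are straightforward to evaluate numerically. A minimal sketch for $I_3(\Delta)$ and the renormalization constant $Z_H$ of Section II, using the parameter values quoted in the text ($m=0.3$ GeV, $\Lambda=1.25$ GeV, $\mu=0.3$ GeV); the trapezoidal helper and the numerical $\Delta$-derivative are our own stdlib stand-ins:

```python
import math

N_c = 3                        # number of colors
m, Lam, mu = 0.30, 1.25, 0.30  # GeV: light quark mass, UV and IR cut-offs

def _trap(f, a, b, n=4000):
    # composite trapezoidal rule
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

def I3(Delta):
    """Proper-time representation of I_3(Delta) from the Appendix."""
    f = lambda s: s**-1.5 * math.exp(-s * (m**2 - Delta**2)) \
                  * (1.0 + math.erf(Delta * math.sqrt(s)))
    return N_c / (16.0 * math.pi**1.5) * _trap(f, 1.0 / Lam**2, 1.0 / mu**2)

def ZH_inv(Delta, eps=1e-5):
    """Z_H^{-1} = (Delta + m) dI_3/dDelta + I_3, numerical derivative."""
    dI3 = (I3(Delta + eps) - I3(Delta - eps)) / (2.0 * eps)
    return (Delta + m) * dI3 + I3(Delta)

print(I3(0.4), ZH_inv(0.4))  # finite for Delta_H = 0.4 GeV
```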
\twocolumn
\par
\noindent
\vspace*{1cm}
\par
\noindent
{\bf Acknowledgments}
\par
\noindent
A.D. acknowledges the support of a ``Marie Curie'' TMR research fellowship
of the European Commission under contract ERBFMBICT960965 in the first stage
of this work. He was also supported by the EC-TMR (European Community
Training and Mobility of Researchers) Program on ``Hadronic Physics with
High Energy Electromagnetic Probes''. A.D.P. acknowledges support from
I.N.F.N. Italy. This work has also been carried out in part under the
EC program Human Capital and Mobility, contract UE
ERBCHRXCT940579 and OFES 950200.
\eqn\part{
\nabla_{\ell}(z)=z^{b_1+1} (a_1 + a_2 z^2+ \dots + a_d z^{2(d-1)}),}
where $z=t^{1/2}-t^{-1/2}$ and $b_1$ is the number of components of the link.
On the other hand, the torsion of $Y$ can be easily computed in terms of the
multivariable Conway function of the link, when the linking
matrix of the surgery presentation is null. The relation
between them is (see \turaev, section 4.3.4, Remark 2):
\eqn\spal{
\tau (Y; t_1, \dots, t_{b_1})= \biggl( \prod_{i=1}^{b_1}
(t_i^{1/2} - t_i^{-1/2}) \biggr)^{-1} \nabla_{\ell} (t_1, \dots, t_{b_1} ).
}
As a consequence of \spal, notice that, if the manifold
$Y$ is obtained by $0$-surgery on a knot
$K \subset {\bf S}^3$, the Alexander polynomial
of the manifold $Y$ is the Alexander polynomial of the knot $K$.
As an example of \spal\ for links, consider the Borromean link in \borromean.
This link has the multivariable Conway polynomial
$\nabla_{\ell} (t_1, t_2, t_3) = \prod_{i=1}^3(t_i^{1/2}-t_i^{-1/2})$,
therefore the torsion of ${\bf T}^3$ is $\tau({\bf T}^3)=1$.
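The cancellation in \spal\ for the Borromean example is immediate, since the prefactor cancels the Conway polynomial factor by factor; a trivial numerical sketch (the sample values of the $t_i$ are arbitrary):

```python
import math

half = lambda t: math.sqrt(t) - 1.0 / math.sqrt(t)  # t^{1/2} - t^{-1/2}

def torsion_T3(t1, t2, t3):
    # Conway polynomial of the Borromean link: prod_i (t_i^{1/2} - t_i^{-1/2});
    # the prefactor in the torsion formula cancels it exactly, leaving 1.
    nabla = half(t1) * half(t2) * half(t3)
    return nabla / (half(t1) * half(t2) * half(t3))

print(torsion_T3(2.0, 3.0, 5.0))  # 1.0
```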
Returning to the general case, and using
now \rel, \multial, \part\ and \spal, we obtain for $b_1 (Y)>1$
\eqn\relation{
\Delta_Y (1, \dots, 1) = |H_1 (R, \IZ)| a_1.}
The right hand side of this expression is precisely
Lescop's extension of the Casson invariant
for manifolds with $b_1(Y)>1$ (see \lescop, 5.1.7), therefore we have
\eqn\finalone{
Z_{DW}(Y\times {\bf S}^1) = 4\lambda_{CWL} (Y).}
The factor of four arises as follows. The monopole and
dyon cusps contribute equally. The other factor of
2 comes from the center of the gauge group $SU(2)$.
\subsec{Relation to Floer homology}
As we mentioned in the introduction, one of our motivations to analyze the
partition function of
Donaldson-Witten theory for manifolds with the structure $Y \times {\bf S}^1$
was to obtain a relation with the Euler characteristic of the Floer
cohomology of $Y$. One can check these expectations by considering
the three-manifolds $Y_g= \Sigma_g \times {\bf S}^1$, where $\Sigma_g$ is a
Riemann surface of genus $g$. The first Betti number for this class of
manifolds
is $b_1(Y_g)= 1+2g$. The Floer cohomology of these manifolds is
computed by turning on a non zero flux on $\Sigma_g$, {\it i.e.}
$w_2 (E)=[{\bf S}^1]$. In this case, the expressions \simple\ and \finalone\
remain valid since, by \mst, the basic classes on $Y_g$ are two-dimensional
classes on $\Sigma_g$ and therefore have no intersection with $w_2(E)$.
The ring structure of the Floer cohomology for these manifolds is
known \munozfloer\ and in particular $\chi (HF (Y_g))=0$ except for
$g=1$, where one has $\chi (HF (({\bf S}^1)^3))=1$. This is in perfect
agreement with the behavior of the Casson invariant, which vanishes for
$b_1(Y)>3$, and
has $\lambda_{CWL} (({\bf S}^1)^3)=1$ \lescop. We then see that the
partition function should be related to the Euler characteristic of
the Floer cohomology, for manifolds with $b_1(Y)>1$, as
\eqn\relfloer{
Z_{DW}(Y\times {\bf S}^1) = 4 \chi (HF(Y)).}
\subsec{Extension to higher rank}
When $b_1 (Y)>1$, the partition function of Donaldson-Witten theory
for gauge group $SU(N)$ can be easily computed using the results of \mmtwo.
In this paper, a simple expression for the $SU(N)$ Donaldson-Witten function
on manifolds with $b_2^+>1$ and of simple type was derived using the $u$-plane
approach of \mw.
This expression is given in equation (9.17) of \mmtwo. For the partition
function,
one obtains the following equation, which generalizes \wittens:
\eqn\simple{
\eqalign{
Z_{DW}^{SU(N)}(X) &= N {\widetilde \alpha}_N^\chi
{\widetilde \beta}_N^\sigma \sum_{k=0}^{N-1} \omega^{k[(N^2-1)
\delta + N \vec \lambda_0\cdot \vec \lambda_0]} \cr
& \,\,\,\,\,\,\,\, \cdot \sum_{\lambda^I} {\rm e}^{ 2\pi
i (\lambda^I, \lambda_0^I) } \Bigl(\prod_{I=1}^{N-1}
SW(\lambda^I)\Bigr)\prod_{1\le I< J \le r}
q_{IJ}^{-(\lambda^I, \lambda^J)}.\cr}
}
In this equation, $\widetilde \alpha_N$, $\widetilde \beta_N$ are
constants, $\omega =
\exp [ i \pi /N]$, $\vec \lambda_0$ is an integral lifting of the generalized
Stiefel-Whitney class of the gauge bundle (see \mmtwo\ for details),
$r=N-1$ is the rank of the gauge group, and $\delta =(\chi + \sigma)/4$.
The terms
$q_{IJ}$ are the leading terms of the off-diagonal couplings. We have
also included an overall $N$ factor corresponding to the order of the center
of the gauge group. Finally, the sum over
$k$ is a sum over the ${\cal N}=1$ vacua.
If we consider a manifold $X=Y \times {\bf S}^1$, with $b_1(Y)>1$, and we
choose
$\vec \lambda_0=0$, the above
expression factorizes completely, as the exponents of the nondiagonal couplings
$q_{IJ}$ are zero. We then find
\eqn\cassun{
Z_{DW}^{SU(N)}(Y \times {\bf S}^1) = N^2 \bigl( \lambda_{CWL}(Y) \bigr)
^{N-1},}
which generalizes \finalone\ to $SU(N)$. It would
be very interesting to compare \cassun\ with the generalizations of the Casson
invariant that can be obtained using Rozansky-Witten invariants. \foot{
Investigating generalizations of the Casson invariant for other gauge groups
using
Rozansky-Witten theory has also been recently proposed by
G. Thompson in \gthompson.}
These generalizations might be more nontrivial for
the case $b_1(Y)\leq 1$.
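As an elementary cross-check of \cassun, a short symbolic sketch (not part of the derivation) confirms that setting $N=2$ recovers \finalone:

```python
import sympy as sp

N, lam = sp.symbols('N lambda_CWL')
Z = N**2 * lam**(N - 1)  # \cassun: Z_DW^{SU(N)} = N^2 (lambda_CWL)^{N-1}
assert Z.subs(N, 2) == 4 * lam  # \finalone: Z_DW = 4 lambda_CWL
```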
\newsec{Three-manifolds with $b_1(Y)=1$}
\subsec{The Donaldson-Witten partition function}
When the three-manifold $Y$ has $b_1(Y)=1$, the four-manifold $X=Y \times
{\bf S}^1$ has
$b_2^+ (X)=1$. This means that we have to take into account both the SW
and the
$u$-plane contribution, as explained in \mw. The $u$-plane contribution for
nonsimply connected manifolds has been analyzed in detail in \ns. We also
have to take into account that
the Donaldson-Witten function depends now on the period point of the metric,
and in general
our answers will be metric-dependent. We will consider in particular
the chambers corresponding to a small or big radius for the circle ${\bf S}^1$.
When studying the relation between Donaldson invariants and
three-dimensional invariants, it
is important to take into account the torsion in $H^2 (X, \IZ)$. The
inclusion of torsion in the $u$-plane integral can be done in the
following way: the partition function
of the photon includes a sum over topological sectors, {\it i.e.}
over topological classes of line
bundles. This means that we have to sum over torsion classes as
well in $H^2 (X, \IZ)$. But the
photon partition function
depends on the topology of the
gauge (line) bundle only through the
curvature $2$-form $F_A$ and therefore
is only sensitive to the torsion-free part of $H^2 (X, \IZ)$. This means
that, when summing over
all the topological sectors, we
will have a sum over the classes in $H^2 (X, \IZ)/
{\rm Tor}(H^2(X, \IZ))$ and then include
a global factor $|{\rm Tor}(H^2(X, \IZ))|$ multiplying the $u$-plane integral.
\foot{One could define other topological theories
by introducing a nontrivial character of the
group ${\rm Tor}(H^2(X, \IZ))$ into the path integral.
This would be an analog for Donaldson-Witten theory
of ``discrete torsion'' of a string theory orbifold. We will
not investigate this possibility further here.}
In particular, the wall-crossing formula will have this factor in the
presence of torsion,
as noticed in \kots. When matching to SW wall-crossing as in \mw\ns, this
factor will be
present as well on the SW side, as the wall-crossing of the SW invariants
only depends on the
torsion-free part of the ${\rm Spin}^c$ structure \okonek\liliu.
We can now compute the $u$-plane contribution to the Donaldson-Witten
function of manifolds
$X=Y \times {\bf S}^1$. Let's first analyze the metric dependence. A
generic period point has the structure,
\eqn\period{
\omega={1 \over {\sqrt 2}} ({\rm e}^{\theta} [S_1] + {\rm e}^{-\theta}
[S_2] ).}
The limit of a small radius for ${\bf S}^1$ corresponds to $\theta
\rightarrow \infty$, as the volume
of the $S_1$ is
\eqn\volume{
\int_{S_1} \omega = {1 \over {\sqrt 2}} {\rm e}^{-\theta}.}
The other limit, $\theta \rightarrow - \infty$, corresponds to a large
radius for the circle.
It is helpful to keep the following example in mind.
Suppose $Y$ is a circle bundle over $\relax\hbox{$\inbar\kern-.3em{\rm C}$} P^1$, and
$\alpha_Y$ is the volume form of a metric
$ds^2$ on $\relax\hbox{$\inbar\kern-.3em{\rm C}$} P^1$ normalized
to unit volume. Then we could consider a metric on
$X$ given by:
\eqn\helpful{
R^2 (d \varphi)^2 + (d \psi)^2 + { 1 \over R^2} ds^2
}
where $\psi$ is a coordinate on the fiber. Thus
we can identify $R$, the radius of the circle
parametrized by $0 \leq \varphi \leq 1$, with \volume.
We first analyze the Donaldson-Witten partition function in the limit
$R\rightarrow 0$, and
with no magnetic fluxes, so we put $w_2(E)=0$. In such a situation, the
$u$-plane integral can be computed
directly, as in section 8 of \mw. For $R\rightarrow 0$, the right
choice of the reduction vector $z$ is in this case
\eqn\zetared{
z=[S_1], \,\,\,\,\,\ z_+^2={1 \over 2} {\rm e}^{-2\theta} \ll 1.}
Again, if $Y={\bf S}^2 \times {\bf S}^1$, $X=\relax\hbox{$\inbar\kern-.3em{\rm C}$} P^1 \times {\bf T}^2$,
this is the chamber where the volume of the torus is very small. As
our manifold is non-simply connected, we have to use the expressions
of \ns. These involve a
choice of cohomology class $\Sigma$ and a
modified two-observable $I(\tilde S)$. In our case
the cohomology class $\Sigma$ is given in \susigma\ and
the modified two-observable is obtained from
\eqn\stilde{
(\tilde S, z) = (S,z) - { \sqrt 2 \over 16} { d\tau \over du} \Omega,}
where $\Omega$ is the volume element of the torus
${ \bf T}^2= H^1(X, \relax{\rm I\kern-.18em R})/H^1(X, \IZ)$ \ns. The holomorphic function $f$
introduced in
\mw\ns\ is
\eqn\holof{
f= { {\sqrt 2} \over 8 \pi i } {du \over d \tau} \exp \biggl[ {\sqrt 2
\over 32} a {d\tau \over da}
(S, \Sigma) \Omega + S^2 T(u) \biggr] , }
where $du/d\tau$, $a$ and $T(u)$ are certain modular
forms described in \mw\ns. The $u$-plane integral in this chamber is given by:
\eqn\uplane{
Z_u = -4 {\sqrt 2} \pi i \cdot 2^{9 b_1/4} i^{b_1/2}
|{\rm Tor}(H_1(Y, \IZ))| \biggl\{ \sum_I \biggl[ { f_I h_I \over
1-{\rm e} ^{-i (\tilde S, z)/ h_I } } \biggr] _{q^0} \biggr\} ,}
where $f_I, h_I$ are some more modular forms defined
in \mw\ns.
The sum is over four regions at infinity of the
$u$-plane, labelled $I=(\infty, 0)$, $(\infty, 1)$, $(\infty, 2)$,
$(\infty, 3)$,
and the monopole and dyon regions of
the $u$-plane, labelled $I=M,D$. These regions are each a copy
of a fundamental domain for ${\rm SL}(2, \IZ)$. Together the
six regions form a fundamental domain for $\Gamma^0 (4)$.
This domain has three cusps: the cusp at infinity
(corresponding to the four regions $I=(\infty,0), \dots, (\infty,3)$)
and the regions near $\tau=0$ and $\tau=2$ (corresponding to $I=M,D$,
respectively.) The numerical prefactor involving $b_1$ in \uplane\
comes from the measure for the one-forms and was determined in \ns\
by comparing to known topological results.
Using the K\"unneth theorem and the universal coefficient
theorem we have
${\rm Tor} (H^2 (X, \IZ)) \cong {\rm Tor}(H_1(Y, \IZ))$
so the prefactor for the torsion classes can be
written as
$|{\rm Tor} (H^2 (X, \IZ))|= |{\rm Tor}(H_1(Y, \IZ))|$. As
$f$ in \holof\ involves the volume element $\Omega$,
we have to expand the functions appearing in \uplane\ in
$\Omega$ and then integrate. The computation is easy, and
one finds that each region $I$ contributes
$-2 |{\rm Tor}(H_1(Y, \IZ))|/12$. In conclusion, we find
\eqn\total{
Z_u (Y \times {\bf S}^1) =-|{\rm Tor} H_1 (Y, \IZ)|.}
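The bookkeeping behind \total\ is simple enough to check mechanically: six cusp regions, each contributing $-2|{\rm Tor}(H_1(Y, \IZ))|/12$. A sketch of this arithmetic with exact rationals (the per-region value is the one found above; $|{\rm Tor}|$ is factored out):

```python
from fractions import Fraction as F

per_region = F(-2, 12)   # each cusp region contributes -2|Tor|/12 (in units of |Tor|)
regions = 4 + 1 + 1      # four semiclassical cusps, plus the monopole and dyon cusps
Z_u_over_tor = regions * per_region
assert Z_u_over_tor == -1          # \total: Z_u = -|Tor(H_1(Y,Z))|
assert 2 * per_region == F(-1, 3)  # combined monopole + dyon share
assert 4 * per_region == F(-2, 3)  # combined share of the four cusps at infinity
```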
Let's now consider the SW contribution. The SW invariants,
computed for a small perturbation, do not depend on the metric.
To obtain the Donaldson-Witten partition function, we have
to sum over all SW invariants, as in \simple.
Using \alexander\ and \zeroper, we find
\eqn\sumspin{
\sum_{c} SW(Y,c) = \sum_{\ell=1}^r \ell^2 a_\ell = {1 \over 2} \Delta_Y''(1),}
where $\Delta_Y$ is the Alexander polynomial. Taking into account \torsum,
we can write the
partition function of Donaldson-Witten theory,
$Z_{DW}=Z_u + Z_{SW}$, in terms of the Alexander polynomial of $Y$ as
follows:
\eqn\casiles{
Z_{DW}(Y \times {\bf S}^1)= 2 \Delta_Y''(1) -\Delta_Y (1).}
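The identity used in \sumspin\ is a two-line calculus exercise: for a symmetric polynomial $\Delta(t)=a_0+\sum_\ell a_\ell(t^\ell+t^{-\ell})$ one has $\Delta''(1)=2\sum_\ell \ell^2 a_\ell$. A symbolic sketch (the trefoil and figure-eight coefficients are standard examples, not taken from the text):

```python
import sympy as sp

t = sp.symbols('t', positive=True)

def check_sumspin(coeffs):
    """coeffs maps ell -> a_ell for Delta(t) = a_0 + sum a_ell (t**ell + t**-ell);
    verifies sum ell^2 a_ell == Delta''(1)/2, as in the identity behind \\sumspin."""
    Delta = coeffs.get(0, 0) + sum(a * (t**l + t**-l)
                                   for l, a in coeffs.items() if l != 0)
    lhs = sum(l**2 * a for l, a in coeffs.items())
    return sp.simplify(lhs - sp.diff(Delta, t, 2).subs(t, 1) / 2) == 0

assert check_sumspin({0: -1, 1: 1})        # trefoil: Delta = t - 1 + 1/t
assert check_sumspin({0: 3, 1: -1})        # figure-eight: Delta = -t + 3 - 1/t
assert check_sumspin({0: 5, 1: -3, 2: 1})  # a generic symmetric example
```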
It is interesting to compare this result with the
Casson invariant as extended by Lescop \lescop.
For manifolds with
$b_1(Y)=1$ it is given by (see \lescop, 5.1):
\eqn\boneles{
\lambda_{CWL} (Y)= {1 \over 2} \Delta_{Y}''(1) - {|{\rm Tor}(H_1(Y, \IZ))|
\over 12}.}
We therefore arrive at one of the key
results of this paper:
\eqn\comp{
Z_{DW} (Y \times {\bf S}^1)=4 \lambda_{CWL} (Y) - {4 \over 6}|{\rm
Tor}(H_1(Y, \IZ))| .}
Note that, even after accounting for a factor of $4$, as
in \finalone, the invariants do not agree. It is important to
notice that the result \casiles\ is obviously an integer,
while Lescop's extension of the Casson invariant for manifolds
with $b_1(Y)=1$ takes values in $\IZ /12$. For instance,
for $Y={\bf S}^2 \times {\bf S}^1$ (which has $\Delta_Y(t)=1$),
one has $\lambda_{CWL} (Y)=-1/12$, but $Z_{DW}
(Y \times {\bf S}^1) = -1$. We will comment on this
disagreement below, as well as on the relation of
\casiles\ to the results of \rw.
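The $Y={\bf S}^2\times{\bf S}^1$ numbers quoted above, and the mutual consistency of \casiles, \boneles\ and \comp, can be checked with exact rationals. A sketch whose only input is $\Delta_Y(t)=1$:

```python
from fractions import Fraction as F

# Y = S^2 x S^1: Delta_Y(t) = 1, so Delta_Y''(1) = 0 and |Tor H_1| = Delta_Y(1) = 1
ddelta, tor = F(0), F(1)

Z_DW = 2 * ddelta - tor      # \casiles
lam = ddelta / 2 - tor / 12  # \boneles (Lescop)
assert Z_DW == -1
assert lam == F(-1, 12)
# \comp: Z_DW = 4 lambda_CWL - (4/6)|Tor|
assert Z_DW == 4 * lam - F(4, 6) * tor
```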
The fact that our result is an integer suggests that it is
related to the Euler characteristic of the Floer homology
of $Y$. Strictly speaking, we should expect to recover the
Euler characteristic of the Hilbert space in the chamber
$R \rightarrow \infty$ (the ``long neck" chamber). However,
one can easily check that, in this chamber, one also has
\casiles\ for the partition function. This is easily seen
by using the wall-crossing formulae derived in \ns\ for the
Donaldson invariants: there is no wall-crossing for the
partition function. This interpretation of \casiles\ as
an Euler characteristic is not easy to check from known
mathematical results, however, as on manifolds with
$b_1 (Y)>0$ the Floer homology has only been constructed
when there is a nontrivial magnetic flux on $Y$, in order
to avoid reducible flat connections \floerb\ (see \ffloer\
for a nice review). In order to interpret our result \casiles,
it is illuminating to compute the partition function when $w_2 (E)$
has the integral lift $\alpha_Y$. We can do the computation
in two different ways. When one uses the lattice reduction
and unfolding, the inclusion of the flux $w_2 (E)= \alpha_Y$
has the following effect: the contribution of the monopole
and the dyon cusps is the same as before, but for the cusps
at infinity, one has to change
\eqn\change{
{1 \over
1-{\rm e} ^{-i (\tilde S, z)/ h_I } } \rightarrow {1 \over 2i}
\csc \biggl( {(\tilde S, z)\over 2 h_I} \biggr).}
After doing this, the contribution from the monopole and the
dyon cancel the contribution from the four semiclassical
regions. Therefore, the $u$-plane integral vanishes.
Alternatively, one can consider the chamber
$R\rightarrow \infty$, where the vanishing theorem
for the $u$-plane integral holds. As there is no
wall-crossing for the partition function, we find
again $Z_u=0$. Therefore,
\eqn\wflux{
Z_{DW}^{w_2=\alpha_Y} (Y \times {\bf S}^1)=2 \Delta_Y''(1).}
We see that the inclusion of a nonzero flux, which gets rid
of the reducibles, kills the $u$-plane
contribution, as expected. The term $-\Delta_Y (1)$ should
be understood as the contribution of the reducible flat
connections on $Y$ to the partition function.
\subsec{A relation to Reidemeister-Milnor torsion}
In the above computations, we have not included any
observable in the generating function. One can try to
include an appropriate 2-observable in such a way that
the Donaldson-Witten function has the structure of a
formal series related to the Meng-Taubes series. When
a 2-observable is included, the SW contribution for
$Y \times {\bf S}^1$ has the structure \monopole\ns
\eqn\swobs{
Z_{SW}({\rm e}^{I(S)})= 2 \sum_{\lambda}
SW(\lambda)\biggl( {\rm e}^{(S,x) + S^2/2} +
{\rm e}^{-i(S,x) - S^2/2}\biggr),}
where we have put $w_2 (E)=0$, the sum is over ${\rm Spin}^c$
structures and $x= 2k \alpha_Y$. If we consider the 2-homology class
\eqn\observ{
S={1\over 2} t \, S_2,
}
where $t$ has to be understood as a formal parameter,
we see that the dependence in $t$ has the form
\eqn\formalt{
{\rm e}^{(S,x)}= ({\rm e}^t)^k,}
for the monopole contribution. Therefore, the sums
of SW invariants corresponding to the ${\rm Spin}^c$
structures with $x=2k \alpha_Y$ are the coefficients of
a polynomial in ${\rm e}^t$ (for the monopole contribution)
and in ${\rm e}^{-it}$ (for the dyon contribution),
very much as in \cpseries, and \swobs\ becomes
\eqn\formal{
\sum_{x \in H} \Biggl( \sum_{c|\bar c_1(c)=x}
SW(Y,c) \Biggr) \bigl( ({\rm e}^t) ^k + ({\rm e}^{-it})^k \bigr).}
Notice that the SW invariants considered here
are computed using a small perturbation.
The surprise comes when one computes the
$u$-plane contribution. We have to expand
\eqn\expansion{
{1 \over 1-{\rm e} ^{-i { (\tilde S, z)/ h}}} = {
1 \over 1-{\rm e} ^{-i { ( S, z) /h}} } +
{ {\rm e} ^{-i { ( S, z) / h}} \over
\bigl( 1-{\rm e} ^{-i { ( S, z)/ h}} \bigr)^2}
{ i {\sqrt 2} \over 16} {d \tau \over da} \Omega + \dots,}
and only the second term survives after integrating over
the $2$-torus of flat connections. We have to extract
the $q^0$ power of the expansions at the different
cusps. The monopole and dyon cusp contributions are
regular at $q=0$, while the semiclassical cusp gives
a power series in $h_{\infty}(q)$,
where $h_{\infty}$ is a modular form given in \mw.
The final result is
\eqn\surprise{
\eqalign{
Z_{DW} ({\rm e}^{I(S)}) &= 2 { |{\rm Tor}(H_1(Y, \IZ))|
\over (({\rm e}^t)^{1/2} -({\rm e}^t)^{-1/2})^2} +
2 \sum_{x \in H} \bigl( \sum_{c|\bar c_1(c)=x} SW(Y,c) \bigr)
({\rm e}^t)^k \cr
& + 2 { |{\rm Tor}(H_1(Y, \IZ))|
\over (({\rm e}^{-it})^{1/2} -({\rm e}^{-it})^{-1/2})^2} +
2\sum_{x \in H} \bigl( \sum_{c|\bar c_1(c)=x} SW(Y,c) \bigr)
({\rm e}^{-it})^k\cr
&- \biggl[ { 2 |{\rm Tor}(H_1(Y, \IZ))|
\over \bigl( \sin (t/4 h_{\infty}) \bigr)^2} \biggr]_{q^0}.\cr} }
This expression is regular when $t=0$, as the
poles cancel between the monopole and dyon cusps. We can
write it in a more compact form using \finwc\ and \mtone:
\eqn\fform{
Z_{DW} ({\rm e}^{I(S)}) = 2 \tau ( Y; {\rm e}^t)
+ 2 \tau (Y; {\rm e}^{-it}) -
\biggl[ { 2 |{\rm Tor}(H_1(Y, \IZ))| \over
\bigl( \sin (t/4 h_{\infty}) \bigr)^2} \biggr]_{q^0}.}
We see that the infinite series associated
to the stable SW invariants can be
reinterpreted as the $u$-plane contribution from the monopole
or dyon cusps (in the chamber $R \rightarrow 0$) to the
generating function associated to the observable \observ. In addition,
we have found a relation between the Reidemeister-Milnor torsion and
a generating function in Donaldson-Witten theory. It is interesting
to notice that
Donaldson-Witten functions involving
${\rm e}^{t I(S)}$ appear in a natural way in the context of
Fukaya-Floer homology (see for instance \ffloer.)
We should point out that the generalization of the results obtained
here for $b_1(Y)=1$ to the
higher rank case is not an easy task, since the computation of the integral
over the Coulomb branch cannot be done using the unfolding technique.
\newsec{On the perils of compactification}
We now return to the key result \comp\ and
investigate its meaning.
\subsec{Review of the relation of three- and four-dimensional
SYM}
In this section we review some results of
Seiberg and Witten \threesw\ and of Rozansky and Witten
\rw.
In \threesw\ Seiberg and Witten studied
the low-energy effective action of ${\cal N}=2$ super
Yang-Mills compactified on $\relax{\rm I\kern-.18em R}^3 \times {\bf S}^1 $ where
the $S^1$ factor has radius $R$.
They argued that in the limits $R\rightarrow \infty$
and $R \rightarrow 0$ one recovers the pure 4d and 3d
theories, respectively, and therefore that the two
different limits are connected through an interpolating
theory that depends on $R$. The low-energy description
of the compactified theory is a three-dimensional
${\cal N}=4$ sigma model
whose target space is a hyperk\"ahler manifold
${\cal M}_R$. As a complex manifold ${\cal M}_R$ can be
identified with the total space of the elliptic fibration
over the $u$-plane defined by the SW curve.
The metric on ${\cal M}_R$ depends on the
compactification radius $R$ and has not
been determined explicitly for general values of
$R$. As $R \rightarrow 0$ the metric has a well-defined limit on the complement
of a section of the elliptic fibration, and the
limiting metric turns out to be
the Atiyah-Hitchin metric.
The derivation of the sigma model
with target ${\cal M}_R$ can be approached in
two ways: One can first work out a low energy
theory in 4 dimensions and then compactify,
or one can compactify and then work out the
quantum corrections. The first approach is
better at large $R$ and the second is better at
small $R$. We elaborate on this briefly.
The first method uses the compactification
of the low-energy SW effective theory
of a $U(1)$ vectormultiplet
\threesw.
In this point of view we first work in
four dimensions and go to the infrared.
To write the low energy lagrangian we
{\it choose} a
duality frame, i.e., we use $SL(2,\IZ)$ to make
a choice of weakly coupled
$U(1)$ vectormultiplet $(a(u), A, \lambda) $.
We next carry out dimensional
reduction. Then we use 3-dimensional
dualization to go from the 3D vector field
$A_\mu$ to a compact
scalar $\sigma$. The result is the tree level sigma model:
\eqn\swtreelev{
\int_Y 2 \pi R g_{u \bar u} du \wedge * d \bar u
+ {1 \over 8 \pi^2 R {\rm Im} \tau(u) } \vert d \sigma - \tau d b\vert^2
}
where $0 \leq \sigma ,b \leq 2 \pi$.
\foot{There is one important difference relative to
\threesw. In our case the threefold $Y$ is
compact, with a volume growing like
${\rm vol}(Y) \sim R^{-1}$. } Thus,
the sigma model has
as target the total space of the elliptic fibration over
the $u$-plane. The metric in \swtreelev\
is only an approximation. However, the
underlying complex manifold is exactly
determined.
As a complex manifold the total space over
the $u$-plane is the surface
\eqn\surface{
z y^2 = x^3 - z x^2 u + z^2 x
}
for $\bigl( (x:y:z) ; u\bigr) \in \relax\hbox{$\inbar\kern-.3em{\rm C}$} P^2 \times \relax\hbox{$\inbar\kern-.3em{\rm C}$}$.
As shown in \threesw, after removing
a section of the fibration one may identify
this surface with the Atiyah-Hitchin manifold
in one of its complex structures.
Unfortunately, there are important quantum
corrections and Kaluza-Klein
corrections which are hard to control in
this approach.
Working instead at small $R$
one can make a compactification
of the underlying UV $SU(2)$ ${\cal N}=2,d=4$
theory, and then work out the
quantum dynamics. In the
limit $R \rightarrow 0$ we expect that
we can use the dimensional reduction
of the UV theory
to obtain $SU(2)$ ${\cal N}=4, d=3$ SYM.
In this theory one can study quantum corrections.
Denoting the scalar field vevs in the Cartan subalgebra
by $\vec \phi$, and working at
large $\vert \vec \phi \vert$ and to one-loop
approximation
one finds a Taub-NUT metric
\seibtd\threesw:
\eqn\tnmetdef{
ds^2 = V^{-1} (d \sigma + \vec \omega \cdot d \vec \phi)^2 +
V (d \vec \phi)^2
}
for the target space of the 3D sigma model. Here
$\sigma$ is the dualized photon and, as usual,
$\nabla \times \vec \omega = \nabla V$ \egh.
Moreover, in this case the
Taub-NUT potential has negative mass:
\eqn\tnmet{
V = {2 \pi \over e_3^2} - { 1 \over 2 \pi \vert \vec \phi\vert}
}
Furthermore, studies of 3D instanton effects
reveal the leading $e^{-r}$ corrections corresponding
to the Atiyah-Hitchin metric \khoze.
Motivated by
the non-perturbative results in
supersymmetric gauge theory in three dimensions
of \threesw, Rozansky and Witten constructed a new
topological field theory in three dimensions \rw.
\foot{A new paper on the subject, with
some relation to the present paper, was recently
posted on the
e-print archives \gthompson.}
The RW theory is
based on the twisting of an ${\cal N}=4$
sigma model with target space a hyperk\"ahler manifold ${\cal M} $.
It has been known for some time that the partition function of the twisted
${\cal N}=4$ Yang-Mills theory in three
dimensions (which is the dimensional reduction of
Donaldson-Witten theory) {\it formally}
computes the Casson invariant,
\topc\aj\bt\btft. In \rw, Rozansky and
Witten used the low-energy description of this theory to
show that this is not just formally true,
but really true in the case of homology
spheres, while for three manifolds with $b_1 (Y)>0$, the Rozansky-Witten
partition function is precisely Lescop's
extension of the Casson invariant:
\eqn\rwcwl{
Z_{RW}(Y; {\cal M}_0) = \lambda_{CWL}(Y) .
}
More generally, we may use the interpolating
hyperk\"ahler manifold ${\cal M}_R$ of the SW 3D
sigma model to obtain:
\eqn\rwcwlr{
Z_{RW}(Y; {\cal M}_R) =-{1\over 2}b_\theta({\cal M}_R) \lambda_{CWL}(Y),
}
where \rw\
\eqn\bthe{
b_{\theta} ({\cal M}_R) =\int_{{\cal M}_R}
{1 \over 8 \pi^2} {\rm Tr} [ \CR \wedge \CR] ,}
and $\CR$ is the curvature two-form
associated to the hyperk\"ahler metric
on ${\cal M}_R$. For $R=0$, ${\cal M}_R$ is the
Atiyah-Hitchin manifold, and the integral is $-2$.
\subsec{ Donaldson-Witten $\not=$ Rozansky-Witten }
We have considered the partition function of
Donaldson-Witten theory on the four-manifold
$X= Y \times {\bf S}^1$ in the limit when the
radius of the circle goes to zero. For manifolds
with $b_1 (Y)=1$ our result does not agree with Lescop's
extension of the Casson invariant, and therefore
does not agree with the Rozansky-Witten partition
function. However, the results are not totally
unrelated and we can be more precise about the
relation between the different quantities.
In general, the Donaldson-Witten function has
the structure
\eqn\usw{
Z_{DW} = Z_u + Z_{SW}
}
but there is no canonical decomposition into
a ``$u$-plane part'' and an ``SW part.''
As we change the metric the relative contributions
change due to SW wall-crossing. However,
when there is no SW wall-crossing, as in the present
case, the decomposition of the Donaldson-Witten function
in terms of SW contributions and the $u$-plane integral
is canonical, since they do not mix.
Moreover, when
we perform the computation of $Z_u$
in a chamber such as
$R \rightarrow 0$ by lattice reduction
and unfolding, the contributions from the different regions
on the $u$-plane are well distinguished: the contribution of
monopole and dyon cusps correspond to finite regions in
the $u$-plane centered around the monopole and dyon
singularities, respectively, while the four
semiclassical cusps correspond to regions that extend to
infinity in the $u$-plane. Thus, given a chamber, we have
the decomposition
\eqn\udecomp{
Z_u = Z_{u,M} + Z_{u,D} + Z_{u,\infty} .
}
It is important to stress that, in general,
the decomposition of the $u$-plane
integral into contributions from different cusps is not canonical
and depends on the chamber under consideration (when the integral
is computed using lattice reduction, this decomposition depends on
the chamber through the choice of a lattice vector $z$, as explained in
section 8 of \mw). However,
in the present case, for {\it both} $R \rightarrow 0$ and
$R \rightarrow \infty$ we find
\eqn\zeeyou{
Z_{u } = -
\vert {\rm Tor}(H_1(Y, \IZ)) \vert}
because there is neither Donaldson nor SW
wall-crossing. More surprisingly, we find
for $R \rightarrow 0,\infty$ the same decomposition:
\foot{
There is a very interesting further subtlety here. If we also
``regularize'' by including a two-observable $I(S)$ as in
\surprise\ then we find that $Z_u(e^{I(S)})$ is different
in the chambers $R \rightarrow 0$ and
$R \rightarrow \infty$. Still when we subsequently
take the limit $S \rightarrow 0$ we obtain the same result:
$\lim_{S \rightarrow 0} Z_u(e^{I(S)})= -
\vert {\rm Tor}(H_1(Y, \IZ)) \vert$ in both chambers. Nevertheless,
if we first take $R \rightarrow \infty$ and then let
$S \rightarrow 0$ we find a different decomposition
of the $u$-plane integral:
$Z_{u,\infty} = -
\vert {\rm Tor}(H_1(Y, \IZ)) \vert$, $Z_{u,M}=0$,
$Z_{u,D}=0$. }
\eqn\udecompii{
\eqalign{
Z_{u,M} &
= Z_{u,D} = -{1 \over 6} \vert {\rm Tor}(H_1(Y, \IZ)) \vert\cr
Z_{u,\infty}
& = -{4 \over 6} \vert {\rm Tor}(H_1(Y, \IZ)) \vert .\cr}
}
Combining these two decompositions we
can write a decomposition of the Donaldson-Witten
function for the chamber $R \rightarrow 0$
\eqn\mdi{
Z_{DW} = Z_M + Z_D + Z_{\infty}.
}
For example, the contribution of
the monopole cusp
is given by:
\eqn\moncon{
Z_M = 2\biggl( {1 \over 2} \Delta_Y ''(1) - { |{\rm Tor}(H_1(Y, \IZ))|
\over 12} \biggr)= 2 \lambda_{CWL} (Y).
}
Here the first term comes
from the SW invariants at $u=1$, and the second term comes
from the contribution of the monopole cusp in the $u$-plane integral.
The same result holds for $Z_D$. Therefore, this ``truncated" topological
invariant
agrees with the Rozansky-Witten invariant (after including in
the latter the factor of $2$ due to the center of the gauge group),
and therefore with Lescop's extension of the Casson invariant.
In comparing the theories we therefore have:
\eqn\comparsion{
\eqalign{
Z_{M}(Y \times {\bf S}^1 ) & =2 Z_{RW}(Y) \cr
Z_{D}(Y \times {\bf S}^1 ) & =2 Z_{RW}(Y) \cr
Z_{\infty}(Y \times {\bf S}^1 ) & =
-{4 \over 6} \vert {\rm Tor}(H_1(Y, \IZ)) \vert\ \not\propto 2
Z_{RW}(Y) \cr}
}
If we include the two-observable \observ\ and use
\surprise\ we find that $Z_M$ is given by
\eqn\obs{
Z_M ({\rm e}^{I(S)}) =2 \tau (Y;{\rm e}^t).}
It is interesting to notice that the second term in \moncon\
can be interpreted as the $\zeta$-regularization of \obs\ as
$t \rightarrow 0$. The infinite series one obtains in this limit is
precisely the infinite series of wall-crossings \univ:
\eqn\regul{
\vert {\rm Tor}(H_1(Y, \IZ)) \vert(1+2+ \dots) = \zeta (-1) |{\rm
Tor}(H_1(Y, \IZ))| = -{ |{\rm Tor}(H_1(Y, \IZ))|
\over 12} .}
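The regularization in \regul\ uses the standard value $\zeta(-1)=-1/12$, where $\zeta(s)=\sum_n n^{-s}$ for ${\rm Re}\,s>1$ is analytically continued to $s=-1$. A one-line symbolic sketch:

```python
import sympy as sp

# zeta-regularized divergent sum 1 + 2 + 3 + ... used in \regul:
# zeta(-1) via the analytic continuation of sum n^{-s}
assert sp.zeta(-1) == sp.Rational(-1, 12)
```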
A glance at \surprise\ shows that the cusp at infinity
does not contribute anything like the torsion.
It remains to understand more clearly {\it why} there
is a discrepancy between three-dimensional and
four-dimensional theories. Note that this subtlety does
not enter for $b_1(Y)>1$. In this case there is
no $u$-plane contribution and the 4D and 3D theories are
related in the expected way. Therefore, we begin
by revisiting the $u$-plane integral.
\subsec{A closer look at the $u$-plane measure}
Let us now examine more closely the $u$-plane
integral:
\eqn\copy{
\eqalign{
Z_u(Y \times {\bf S}^1) =
\half \vert {\rm Tor}(H_1(Y, \IZ)) & \vert
\int_{{\Gamma}^0(4) \backslash {\cal H} }
{dx dy \over y^{1/2}} \cdot
\cr
\sum_{n,m\in \IZ } \Biggl\{
\biggl({\rm e}^{\theta}
(n^2 {\rm e}^{-2\theta} - m^2 {\rm e}^{2 \theta}) - {\rm e}^{\theta} {1
\over 2 \pi y}
\biggr) &
\cdot \exp \biggl[ -\pi y (n^2 {\rm e}^{-2\theta} + m^2 {\rm e}^{2
\theta}) - 2\pi i m x \biggr] \Biggr\} \cr}}
The integral is over a fundamental domain for the
congruence subgroup $\Gamma^0(4)$. We denote
$\tau = x+i y$.
The sum is over line bundles for the $U(1)$
gauge theory with
\eqn\latvec{
\lambda= n[S_1] + m [S_2] \qquad n,m\in \IZ .}
Recall that the metric defines the period point $*\omega= \omega$ with
\eqn\per{
\omega = {1 \over {\sqrt 2}}({\rm e}^{\theta} [S_1] + {\rm e}^{-\theta}
[S_2]).}
The first term in the sum in
\copy\ comes from bringing down the
term $\sim { d \tau \over da} F \psi \psi $ in the action and soaking up
fermion $\psi$-zeromodes associated with $b_1(X) = 2$.
The second term in the sum in
\copy\ is a contact term.
Referring to the definitions of the classes
$[S_1],[S_2]$ and to
\volume\helpful\ we see that the limit of shrinking
Kaluza-Klein circle $R \sim e^{- \theta} \rightarrow 0$ corresponds to
$\theta \rightarrow \infty$.
Let us now consider the behavior of
$Z_u$ in this limit.
The first thing to note is
that the terms in the integrand with $m=0$
actually blow up in this limit! The reason for this is that
such line bundles have ``instanton'' connections
with no dependence in the
Kaluza-Klein circle direction $\varphi$.
However, since the overall
volume is fixed, the volume of $Y$ goes like
$e^{+\theta}\rightarrow + \infty$. Thus, new zeromodes,
related to the decompactification of $Y$ develop,
causing a term by term divergence in the
integrand of \copy.
Interestingly, there is in fact a {\it cancellation} between
the positive contribution of the fermion zeromode
term and the negative contribution of the contact
term. This can be seen mathematically as follows.
First, note that the two terms in the
sum in \copy\ combine as a total derivative in
$\theta$:
\eqn\totalderv{
{e^{2 \theta} \over 2 \pi y } {d \over d\theta}
\Biggl[
e^{-\theta} \sum_{n,m} \exp \biggl\{
-\pi y (n^2 {\rm e}^{-2\theta} + m^2 {\rm e}^{2 \theta}) - 2\pi i m x \biggr\}
\Biggr] .
}
Now we see that the divergence from the
sum on $n$ (at $m=0$) can be offset by the vanishing
of $e^{-\theta}$. To see which dominates we use
the Poisson summation formula to write:
\eqn\poi{
\sum_{n,m}
\exp \biggl[ -\pi y (n^2 {\rm e}^{-2\theta} + m^2 {\rm e}^{2 \theta}) -
2\pi i m x \biggr] =
{ {\rm e}^{\theta} \over y^{1/2} } \sum_{\hat n, m} \exp \biggl[ -{\pi
\over y} |\hat n + m \tau|^2 {\rm e}^{2\theta} \biggr].}
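The resummation in \poi\ rests on the classical theta-function identity $\sum_n {\rm e}^{-\pi a n^2}=a^{-1/2}\sum_{\hat n} {\rm e}^{-\pi \hat n^2/a}$, which is easy to confirm numerically. A sketch (the truncation $|n|\le 60$ and the sample values of $a$ are arbitrary choices):

```python
import math

def theta(a, N=60):
    """Truncated theta sum: sum_{|n|<=N} exp(-pi a n^2)."""
    return sum(math.exp(-math.pi * a * n * n) for n in range(-N, N + 1))

# Poisson summation: theta(a) = a^{-1/2} theta(1/a)
for a in (0.3, 1.0, 2.7):
    assert abs(theta(a) - theta(1 / a) / math.sqrt(a)) < 1e-12
```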
Combining the two terms one sees that \copy\ becomes
\eqn\lastint{Z_u =
- {1 \over 2} \vert {\rm Tor}(H_1(Y, \IZ))
\vert \int_{{\Gamma}^0(4) \backslash {\cal H} }
{dx dy \over y^{3}} {\rm e}^{4 \theta} \sum_{\hat n, m} |\hat n + m\tau|^2
\exp \biggl[ -{\pi \over y} |\hat n + m \tau|^2 {\rm e}^{2\theta}
\biggr] .}
We can learn two things from \lastint. First, we now
note that not only have the divergences of the $m=0$
terms cancelled, but the remaining integrand actually
{\it vanishes}
exponentially fast as $\theta \rightarrow + \infty$:
\eqn\limintgnd{
\lim_{\theta \rightarrow + \infty}
{\rm e}^{4 \theta} \sum_{\hat n, m} |\hat n + m\tau|^2
\exp \biggl[ -{\pi \over y} |\hat n + m \tau|^2 {\rm e}^{2\theta} \biggr] =0 .
}
The second thing we learn from \lastint\ is that
the measure is in fact $SL(2,\IZ)$ invariant.
This is a surprise since in general the $u$-plane
measure
is only ${\Gamma}^0(4)$ invariant.
Thus, each of the six copies of the
fundamental domain $\CF$ of $SL(2,\IZ)$
contributes equally to $Z_u$. Moreover,
the integrand is in the standard form for
which one can apply the unfolding technique.
Combining these two observations we get:
\eqn\unfold{
Z_u = - 3 \vert {\rm Tor}(H_1(Y, \IZ))
\vert \int _0^{\infty}{ dy \over y^{3}} {\rm e}^{4 \theta}
\sum_{\hat n} \hat n^2 \exp \biggl[ -\pi \hat n^2 {e^{2 \theta} \over y}
\biggr] .}
We can gain some further insight into the nature of
the measure from the expression \unfold.
Note first that we can explicitly eliminate all
$\theta$-dependence by a change of variables to
\eqn\chgvrbl{
\xi \equiv {e^{2 \theta } \over y} = {1 \over 2 R^2 y} .
}
Thus the integral \unfold\ is in fact $R$-independent and
simply given by $- \vert {\rm Tor}(H_1(Y, \IZ)) \vert $.
As we have observed, and as is even more
obvious from \unfold, at {\it fixed} value of
$y$, as $R \rightarrow 0 $ the integrand vanishes.
On the other hand the integral is $R$-independent
and nonzero. Thus the
integrand is becoming delta function supported.
We can see this rather explicitly by letting
$w=1/y$ and noting that:
\eqn\limdelts{
\eqalign{
\lim_{R \rightarrow 0} \biggl( \sum_{\hat n \in \IZ} \hat n^2 {w \over
R^4} e^{-\pi \hat n^2 w/R^2}
\biggr) dw & =
\sum_{\hat n \not=0 } \lim_{R \rightarrow 0 }
\hat n^2 {w \over R^4} e^{-\pi \hat n^2 w/R^2} dw \cr
& =
\sum_{\hat n \not=0 } {1 \over \pi^2 \hat n^2} \delta(w) dw\cr
& = {1 \over 3} \delta(w) dw \cr}
}
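The coefficient $1/3$ here is $\sum_{\hat n \neq 0} 1/(\pi^2 \hat n^2) = 2\zeta(2)/\pi^2$, and it is also what converts the prefactor $-3\,\vert {\rm Tor} \vert$ of \unfold\ into the value $-\vert {\rm Tor} \vert$ quoted above. A quick numerical confirmation (ours, for illustration):

```python
import math

# Each n-hat term of \limdelts integrates to
#   n^2 * int_0^infty w * exp(-pi n^2 w) dw = n^2 / (pi n^2)^2 = 1/(pi^2 n^2)
# (the R dependence scales out), so the delta-function weight is the sum
# over nonzero n of 1/(pi^2 n^2).
total = sum(2.0 / (math.pi ** 2 * n * n) for n in range(1, 1_000_000))
# total -> 1/3; the truncation error of this partial sum is ~ 2/(pi^2 * 10^6).
```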
Thus, as $R \rightarrow 0$, the measure in each cusp
region is becoming $\delta$-function supported at
$y=\infty$.\foot{This is similar to the source of the holomorphic
anomaly in certain one-loop expressions in string
theory \bcov.}
Now, this discussion can be carried out in each of
the three cusp regions of the fundamental
domain of ${\Gamma}^0(4)$. In each cusp region
we have a different $q$ expansion $q = e^{2 \pi i \tau} $,
$\tau = x + i y$. Denoting the relevant $q$-parameters
by $q_M, q_D, q_{\infty}$ the $R \rightarrow 0$ limit
of the $u$-plane measure is
\eqn\arrzeromes{
-\vert {\rm Tor}(H_1(Y, \IZ)) \vert \bigg\{
{1 \over 6} \delta^{(2)}(q_M) d^2 q_M +
{1 \over 6} \delta^{(2)}(q_D) d^2 q_D +
{4 \over 6} \delta^{(2)}(q_\infty) d^2 q_\infty
\biggr\} .
}
\subsec{An interpretation of the result}
Given the facts reviewed in section 6.1, the
discrepancy between
$Z_{DW}(Y \times {\bf S}^1)$ and $Z_{RW}(Y)$
is somewhat surprising. In this section we
will discuss some of the physics behind this
discrepancy and suggest an interpretation
of the result. We thank N. Seiberg and E. Witten
for important remarks that helped us to this
picture.
Let us first dispose of a red-herring. Nonintegral values of
the Witten index are often associated with the
presence of
noncompact field spaces, and the mishandling of
a ``bulk'' or a ``boundary'' contribution.
We stress that this is {\it not} what is going on
here since $Z_{DW}(Y \times {\bf S}^1)$ has no wall crossing.
Our interpretation of \comparsion\ is that the
Donaldson-Witten theory on
$X=Y \times {\bf S}^1$ for small $R$ is simultaneously a
three-dimensional and a four-dimensional theory.
By this we mean the following:
We must integrate over moduli space to get the physical
partition function. There is very different
physics in the different regimes of moduli space.
Some of it is three-dimensional and some of
it is four-dimensional.
For small $R$ the measure for the cusp at $\infty$ is
concentrated in the region
\eqn\infreg{
{\rm Im} \tau_{\infty}(u) \gsim 1/R^2
}
where $\tau_{\infty}$ is the $\tau$ parameter
selected by the semiclassical cusp. Because
of asymptotic freedom, at small $R$ we can use the semiclassical one-loop
answer and the
measure is concentrated in
the region
\eqn\inftyregii{
\log \vert u \vert \gsim {\pi \over 2 R^2}
}
In this region of the $u$-plane physics is
effectively four-dimensional. The infrared
4-dimensional SW description becomes applicable
at length scales
\eqn\stoprun{
\ell \sim {1 \over \sqrt{u}} \sim \exp[ - { \pi \over 4 R^2} ]
\ll R
}
At such length scales the compactification on ${\bf S}^1_R$ is
completely irrelevant. Because of asymptotic freedom
this becomes better and better as $R \rightarrow 0$.
Let us now consider the monopole cusp. The $u$-plane
measure is concentrated in the region
\eqn\infregM{
{\rm Im} \tau_{M}(u) \gsim 1/R^2
}
where $\tau_M = -1/\tau_{\infty}$ defines the weak-coupling
frame near the monopole cusp $u=1$. In particular
$\tau_M(u) \cong { 1\over i \pi} \log (u-1) $
so the relevant region of the $u$-plane is:
\eqn\monreg{
\vert u-1 \vert \lsim e^{- \pi /R^2}
}
Consequently, the monopoles are very light.
However, the effective theory of monopole
hypermultiplets and dual $U(1)$ vectormultiplets is
IR free and UV unstable: it is not defined as a
four-dimensional theory at distance scales $\ell \ll R$.
Indeed, the infrared SW description is only applicable
at length scales
\eqn\stoprunii{
\ell \gsim e^{+ \pi/R^2} \gg R
}
For this region of moduli space we must first
compactify and then solve for the dynamics.
\subsec{Comments on one-loop corrections}
When one combines standard one-loop expressions
with some of the above remarks one can be led to
paradoxes which have troubled the authors
of this paper not a little. In this section we
mention some of these confusions, and suggest a
resolution.
In classical dimensional reduction
the gauge couplings
$e_3^2$ and $g_4^2$ in three and four
dimensions, respectively,
are related by $g_4^2 = e_3^2 R$.
In 4D gauge theory, when integrating out massive
charged vectormultiplets and hypermultiplets
of mass $m_i$ and charge $Q_i$ in
a weakly coupled theory, the threshold
correction relating the coupling $g_{4,UV}^2$ of the
underlying theory and $g_{4,IR}^2$ of the low
energy effective theory is:
\eqn\fdren{
{ 8 \pi \over g_{4,IR}^2} =
{ 8 \pi \over g_{4,UV}^2} - 16 \pi
\sum_i (-1)^{\epsilon_i} Q_i^2 \int {d^4 p \over (2 \pi)^4}
{1 \over (p^2 + m_i^2 )^2}
}
where $\epsilon_i = 0$ for VM's and $\epsilon_i =1$ for HM's.
The integral in \fdren\ is log divergent and a regularization
\eqn\regular{
{1 \over (p^2 + m_i^2)^2 } \rightarrow { \Lambda^{2 \alpha -4 } \over
(p^2 + m_i^2)^\alpha}
}
with $\alpha = 2 + \epsilon$, $\epsilon \rightarrow 0^+$
is understood here and below.
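The logarithmic divergence of \fdren\ is elementary to exhibit: with $d^4p = 2\pi^2 p^3\,dp$, the cutoff integral grows as $\log\Lambda/(8\pi^2)$. A numerical illustration (ours; the cutoff values are arbitrary):

```python
import math

def threshold_integral(cutoff, m=1.0, steps=400_000):
    # Midpoint quadrature of int d^4p/(2 pi)^4 1/(p^2 + m^2)^2 over |p| < cutoff,
    # using the 4D measure d^4p = 2 pi^2 p^3 dp.
    h = cutoff / steps
    s = 0.0
    for i in range(steps):
        p = (i + 0.5) * h
        s += p ** 3 / (p * p + m * m) ** 2
    return s * h * 2.0 * math.pi ** 2 / (2.0 * math.pi) ** 4

# threshold_integral(1000) - threshold_integral(100) is close to
# log(10)/(8 pi^2): the cutoff dependence is purely logarithmic, which is
# exactly what the regularization \regular\ has to tame.
```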
When we compactify $\relax{\rm I\kern-.18em R}^4 \rightarrow \relax{\rm I\kern-.18em R}^3 \times {\bf S}^1$
the integral in \fdren\ becomes
\eqn\intren{
{1 \over R}\sum_{n=-\infty}^{\infty}
\int {d^3 \vec p \over (2 \pi)^3}
{1 \over (\vec p^2 + (A_4 + n/R)^2 + m^2 )^2}
}
where $A_4$ is a background Wilson loop.
The expression \intren\ interpolates nicely between
the renormalizations in 3D and 4D.
\foot{We elaborate
here on remarks in \sh\ssh. }
Indeed, performing the integral on
$\vec p$ we get:
\eqn\vcpint{
{\pi^{3/2} \over R} {\Gamma(\alpha-3/2) \over \Gamma(\alpha)}
\sum_{n=-\infty}^{\infty}
{ \Lambda^{2 \alpha -4 } \over ((A_4 + n/R)^2 + m^2)^{\alpha-3/2} } .
}
At small values of $R$ we have
\eqn\vcpintii{
{\pi^2 \over \epsilon} + { \pi^2 \over R} { 1 \over \sqrt{A_4^2 + m^2}}
+ F(RA_4 , R m)
}
where $F(x,y)$ is an analytic series vanishing
as $x,y \rightarrow 0$.
On the other hand, at large values of $R$ we find
for the same integral:
\eqn\largearr{
{\pi^2 \over \epsilon} - \pi^2 \log {m^2 \over \Lambda^2}
+ 2 \pi^2 \sum_{n\not=0} e^{ 2 \pi i n (A_4 R)}
K_0(2 \pi \vert n \vert m R)
}
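The Bessel functions appearing here control the nonzero-mode corrections; their exponential suppression follows from the large-argument behavior $K_0(z)\simeq\sqrt{\pi/2z}\,e^{-z}$. A numerical sanity check (ours, using the standard integral representation $K_0(z)=\int_0^\infty e^{-z\cosh t}\,dt$, which is not part of the text):

```python
import math

def k0(z, tmax=8.0, steps=16000):
    # Composite Simpson quadrature of K_0(z) = int_0^infty exp(-z cosh t) dt;
    # the tail beyond t = tmax is utterly negligible for z of order 1 or more.
    h = tmax / steps
    s = math.exp(-z) + math.exp(-z * math.cosh(tmax))
    for i in range(1, steps):
        s += (4.0 if i % 2 else 2.0) * math.exp(-z * math.cosh(i * h))
    return s * h / 3.0

z = 10.0
asym = math.sqrt(math.pi / (2.0 * z)) * math.exp(-z)
# asym overestimates k0(z) by only about 1/(8z) ~ 1% here, confirming the
# quoted e^{-2 pi |n| m R} decay of the instanton contributions.
```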
The physics of \largearr\ is
clear: The log term is the 4d 1-loop
effect of integrating out heavy charged
particles in the
low energy effective abelian theory.
The Bessel functions from the nonzero modes
decay as $\sim
{1 \over 2 \sqrt{\vert n \vert m R}} e^{-2 \pi \vert n \vert m R} $
and can be understood, from the 3D perspective, as
instantons from particles of mass $m$ running in the
${\bf S}^1$ loop. As pointed out in
\threesw, such quantum corrections are expected
to renormalize the metric to a smooth hyperk\"ahler metric on the moduli
space. The expression
\vcpintii\ can then be understood as a modification
\eqn\tnmodfy{
{8 \pi \over e_{3,IR}^2 } = {8 \pi \over e_{3,UV}^2 } -
16 \pi
\sum_i (-1)^{\epsilon_i} Q_i^2
{ \pi^2 \over \sqrt{A_4^2 + m^2}}
}
Identifying
$ A_4^2 + m^2$ with $\vec \phi^2$
we reproduce the 3D 1-loop result. Dualization
of the photon then leads to a Taub-NUT metric
with
\eqn\tnmassdef{
V \propto 1 + { M_{TN} \over r}
}
Here $M_{TN}$ is the ``Taub-NUT mass,''
and $r$ is the radial Euclidean distance in the standard
representation of the TN space as a circle fibration
over $\relax{\rm I\kern-.18em R}^3$ \egh.
Comparing \tnmodfy\ and \tnmassdef\ we see that
there is a direct connection between the sign of
the TN mass and the sign of the coefficient
of the 4D $\beta$-function. A {\it negative}
mass $M_{TN}$
in 3D corresponds to an asymptotically free
beta function in 4D. This leads to an apparent
paradox: The SW 3D target space is the
Atiyah-Hitchin manifold
${\cal M}_0$ as $R \rightarrow 0$. The latter
is well-known to be approximately a negative
mass TN space for $r \rightarrow \infty$. How
is this consistent with the concentration of the $u$-plane
measure at $u=\pm 1$ for $R \rightarrow 0$ ?
Indeed, when one examines the detailed map between
$u$-plane coordinates \surface\ and coordinates
$\sigma, \vec \phi$ on the Atiyah-Hitchin manifold
(such as those used in \tnmetdef) one finds a
complicated relation between ``regions at infinity.''
In particular, regions of large
$\vert \vec \phi \vert$ can sit over finite points on
the $u$-plane. Since the effective theory
near $u=\pm 1$ is IR free one might expect to
see a {\it positive} mass TN metric. How is this
possible?!
The way out of this confusion is to note that the
4D one-loop analysis in this regime is not very
meaningful. In particular, the monopoles are very
light. From \monreg\ we see that in the relevant
portion of the $u$-plane they have a mass of
order $\vert a_D(u) \vert \lsim e^{-\pi/R^2}$,
and hence expansions such as \largearr\
do not converge.
Clearly, there is much more to understand here,
but we leave it at that, for the moment.
\subsec{Possible implications for string duality}
The low energy dynamics of D-branes and M-branes
gives a novel and powerful approach to investigating
supersymmetric Yang-Mills theory
\giveonkutasov. Conversely, results on supersymmetric
gauge theory will probably teach us important things
about branes. Here we make a preliminary remark on
a possible implication of the present results for
brane physics.
In the gauge-theory-from-D/M-brane framework, the naive
equivalence of Donaldson-Witten/Seiberg-Witten theory on
$Y \times {\bf S}^1$ to Rozansky-Witten/Seiberg-Witten theory on
$Y$ can be easily proved using standard string
dualities. To do this one begins with the description
of $d=4, {\cal N}=2$ theory as the low energy theory of
an $M5$ brane with two dimensions wrapped on
the Seiberg-Witten curve \wittfdsol.
In the IIA limit the configuration is described by
parallel solitonic 5branes connected by $D4$-branes
as in the Hanany-Witten setup \hanwit.
If the solitonic 5branes wrap $Y \times {\bf S}^1$ we can
apply $T$-duality along the ${\bf S}^1$,
and then $S$-duality to obtain an
effective 3D theory whose low energy dynamics is
described by monopole moduli spaces, such as
${\cal M}_0$. Our computation shows that, at least for
some quantities, like the partition function with
supersymmetry preserving boundary conditions,
duality should be applied with care.
\newsec{Conclusions}
In this paper we investigated
the Donaldson-Witten partition function
$Z_{DW} $ on $Y \times {\bf S}^1$
for $b_1(Y)>0$. We have found some interesting relations
to the torsion of $Y$, reinterpreting the result of
Meng and Taubes from the physical point of view,
and gaining some information on Floer homology.
Some very interesting questions remain open.
One important circle of questions is related to
the rational homology spheres with $b_1(Y)=0$.
These present new challenges since, in evaluating
$Z_{DW}$ one must integrate over the $u$-plane
with a density involving one-loop determinants.
Ironically, the actual $u$-plane integral turns out
to be trivial and is just the volume of the
fundamental domain of $\Gamma^0(4)$.
However, this is more than compensated by
the subtleties of the required one-loop graphs.
We defer a discussion of this
subject to another occasion.
We have also discussed some interesting
subtleties in dimensional compactification of
SYM. It would be nice to understand more
deeply than we have done here the origin of
the discrepancy between $Z_{DW}(Y \times {\bf S}^1)$
and $Z_{RW}(Y)$ for manifolds of
$b_1(Y)=1$. A good understanding of the
hyperk\"ahler metric on ${\cal M}_R$ and the relation
between regions at infinity in the $u$-plane and
Atiyah-Hitchin descriptions would be very
helpful.
Finally, our discussion
has some potential applications in string duality,
as mentioned in section 6.4.
\bigskip
\centerline{\bf Acknowledgements}\nobreak
\bigskip
We would like to thank P. Kronheimer, T. Li, M. Marcolli,
G. Meng, V. Mu\~noz, N. Seiberg and B.L. Wang
for very useful discussions and correspondence. We are especially
indebted to E. Witten for many useful discussions and for his
observations on a preliminary version of this paper.
This work is supported by
DOE grant DE-FG02-92ER40704.
\listrefs
\bye
\section{Introduction}
In the past few years there has been substantial progress in
understanding the origin of angular momentum transport in astrophysical
accretion disks (see the reviews by Papaloizou \& Lin 1995 and Balbus
\& Hawley 1998). In particular, the nature of transport by
magnetohydrodynamic (MHD) turbulence has been greatly clarified.
Magnetized disks are linearly unstable to the weak field
magnetorotational instability (Balbus \& Hawley 1991). However, in
regions surrounding the solar nebula and in protostellar disks more
generally, temperatures and densities suggest very small ionization
fractions, leading to magnetic decoupling. The presence of turbulence
in such a disk is problematic. For this reason, hydrodynamical studies
of disk turbulence remain of great interest.
Before the importance of magnetic fields in disk dynamics was clear,
disk turbulence was viewed as a hydrodynamical problem. Adverse
entropy gradients (vertical stratification) and adverse angular
momentum gradients (radial stratification) each brought with them a
legacy of local instability, turbulence and enhanced transport.
Moreover, simple shear layers break down into turbulence via nonlinear
processes, even in the absence of rotation, and Couette flow experiments
show nonlinear breakdown of some Rayleigh-stable velocity profiles
(Coles 1965; Drazin \& Reid 1981). Even if details were a bit vague,
enhanced turbulent transport via {\it some\/} hydrodynamic process seemed
more than plausible.
Convective models of disk transport are now two decades old (Cameron
1978, Lin \& Papaloizou 1980). But convective turbulence does not, by
itself, guarantee outward angular momentum transport (Prinn 1990);
indeed, recent investigations suggest the opposite. Ryu \& Goodman
(1992) analyzed the linear stages of convective instability, and
pointed out that it produces inward, not outward transport. Kley,
Papaloizou \& Lin (1993) found inward transport in an axisymmetric disk
convection simulation. Stone \& Balbus (1996) conducted a full
three-dimensional (3D) numerical simulation of the compressible Euler
equations in a local patch of Keplerian disk, found small inward
transport, and gave arguments as to why this might be
expected. Despite the presence of vigorous convection in these
simulations, what little net transport was present was directed
radially inwards. The time-averaged amplitude of the stress was very
small, some three to four orders of magnitude below typical values
produced by MHD turbulence in comparable simulations. Similar results
were found by Cabot (1996) in a 3D local Navier-Stokes calculation,
when the assumed viscosity was sufficiently small.
Shear instabilities have the virtue that any resulting transport will
certainly be directed outwards, since the outwardly decreasing angular
velocity gradient would be the source of the turbulence. An even older
idea than convection (Crawford \& Kraft 1956), high Reynolds number
shear turbulence as a source of angular momentum transport predates
modern accretion disk theory, and is explicitly invoked in Shakura \&
Sunyaev (1973). Unfortunately, its validity has never been
demonstrated. Unlike convective instability, which occurs when a
well-understood linear stability criterion is violated, differentially
rotating flows are linearly stable by the Rayleigh criterion. The
oft-made conjecture is that, despite this, Keplerian disks are
nonlinearly unstable, as evinced by some Rayleigh-stable (but
decidedly non-Keplerian) Couette flows.
In principle, the nonlinear stability question of hydrodynamical
Keplerian disks could be settled by direct numerical simulation. But
nonlinear shear instability is a 3D problem, and the critical Reynolds
number for the onset of turbulence was thought to be too high to be
attainable with a 3D numerical code. This, however, is not so (Balbus,
Hawley, \& Stone 1996; hereafter BHS). Working with the inviscid Euler
equations, BHS evolved numerical models at a variety of resolutions,
and for a range of angular momentum distributions. A Rayleigh-unstable
model produced rapid growth of the perturbation energy, as expected.
Simple Cartesian shear flow also produced unstable growth, due to a
nonlinear instability. A constant angular momentum distribution also
proved to be nonlinearly unstable: this profile is marginally stable
to linear perturbations, and BHS used simple symmetry arguments
to show that in its stability properties the system is formally
equivalent to (unstable) Cartesian shear flow. Thus, 3D simulations
{\it can\/} reproduce the onset of nonlinear shearing instabilities where
they are known to be present.
BHS found that simple shear and constant angular momentum flows were
the {\it only\/} (unmagnetized) Rayleigh-stable systems to exhibit any
dynamical instability. Keplerian disk simulations, in particular,
were completely stable. BHS argued that the crucial difference between
Keplerian flow and simple shear is the presence of Coriolis forces.
The epicyclic oscillations produced by those forces are strongly
stabilizing for both linear {\it and\/} nonlinear disturbances.
Epicyclic oscillations are not present in shear layers or in constant
angular momentum rotation profiles, which were found to be the only
nonlinearly unstable flows. If the velocity profile of a disk has even
the most gentle rise in specific angular momentum with radius, its
behavior is qualitatively different from the constant angular momentum
(or simple shear) case.
At a minimum, the findings of BHS do not augur well for the existence
of hydrodynamic turbulent transport in differentially rotating disks.
The unfavorable results of the disk convection simulations, combined
with the finding that high Reynolds number shear instabilities are
easily simulated (when present), but disappear the moment rotational
effects are smoothly introduced, suggest that only MHD turbulence
offers a viable basis for Shakura-Sunyaev (1973) $\alpha$-disk models.
If hydrodynamic turbulence is present, it must be driven by some source
other than differential rotation, and generally will not transport
angular momentum (e.g., Balbus \& Hawley 1998).
In this paper we return to the local hydrodynamic simulations and
consider the hydrodynamic stability problem from several new
perspectives. We extend the body of simulations beyond what was done
in BHS with higher resolution, and with algorithms which differ in
their diffusive properties. In \S2 we briefly review the moment
equations developed by BHS; these form the basis for interpreting the
results of local numerical simulations. In \S3 we review numerical
procedures used for the local simulations. In \S4 we investigate a
number of issues: Is there any significant effect due to numerical
resolution on BHS's conclusions regarding the stability of Keplerian
flows? BHS speculated that the boundary between nonlinear stability
and instability (e.g., near constant angular momentum distributions)
should not be sharp, and we confirm this expectation. Nonlinearly
unstable, but Rayleigh-stable, laboratory Couette flows are precisely
analogous to flows which find themselves at this boundary. We next
consider the decay of the applied turbulence in the Keplerian system,
at a number of resolutions, and with two distinct numerical schemes.
Finally we compare the Reynolds and Maxwell stresses in a series of MHD
simulations, which span a full range of background angular momentum
distributions. In \S5 we present our conclusions.
\section{Hydrodynamic Fluctuations}
We begin with a brief review of basic disk equations and the formalism
of BHS on the nature of hydrodynamic turbulence in disk flows.
Nonadvective transport in a hydrodynamical accretion disk is determined by
the Reynolds stress tensor,
\begin{equation}\label{one}
T_{R\phi}\equiv \langle \rho u_R u_\phi\rangle
\end{equation}
where $\rho$ is the mass density, and ${\bf u}$ is the noncircular
component of the velocity ${\bf v}$, i.e., ${\bf v} = R\Omega
\bb{\hat\phi} + {\bf u}$. The average in
equation (\ref{one}) is spatial: we assume that a volume can be found
over which the small scale variations average out, leaving $T_{R\phi}$
a smoothly varying quantity. The phenomenological $\alpha$ prescription
of Shakura \& Sunyaev (1973) is $T_{R\phi}=\alpha P$, where $P$ is the
dynamical pressure (possibly including radiation).
The stress $T_{R\phi}$ has several roles. First, and most familiar,
it is the agent of angular momentum and energy transport.
We are particularly interested in the radial dependence of
$T_{R\phi}$. Consider the average radial angular momentum flux,
\begin{equation}\label{momflux}
\langle R \rho v_\phi u_R\rangle \equiv R^2\Omega \langle \rho
u_R\rangle
+ R T_{R\phi},
\end{equation}
and the radial energy flux
\begin{equation}\label{enflux}
\left\langle \left({1\over2} v_\phi^2 + \Phi_c \right) \rho u_R \right\rangle =
-{1\over 2} R^2\Omega^2 \langle\rho u_R \rangle + R \Omega T_{R\phi},
\end{equation}
where $\Phi_c$ is the central (external) gravitational potential. Note
that in both equations (\ref{momflux}) and (\ref{enflux}) the first
terms represent advected flux; all nonadvective flux is in the
$T_{R\phi}$ contribution of the second terms. Outward transport
corresponds to $T_{R\phi} > 0$. The nonadvective contributions differ
from one another only by a factor of $\Omega$ in the energy flux. Each
of the above (net) fluxes is a simple linear combination of $\langle
u_R\rangle$ and $T_{R\phi}$ only. The fact that no other flow
quantities appear is crucial to the formulation of classical
steady-state $\alpha$
disk theories, for it allows for a well-defined luminosity--accretion
rate relationship.
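To see how equation (\ref{enflux}) decomposes into advected and stress pieces, expand $v_\phi = R\Omega + u_\phi$ and use $\Phi_c = -R^2\Omega^2$ for a point-mass (Keplerian) potential in centrifugal balance (a sketch of ours, keeping terms through second order in the fluctuations):

```latex
\left({1\over2} v_\phi^2 + \Phi_c\right)\rho u_R
  = \left(-{1\over2}R^2\Omega^2 + R\Omega\,u_\phi
          + {1\over2}u_\phi^2\right)\rho u_R ,
\qquad\hbox{so}\qquad
\left\langle\left({1\over2} v_\phi^2 + \Phi_c\right)\rho u_R\right\rangle
  = -{1\over2}R^2\Omega^2\langle\rho u_R\rangle + R\Omega\,T_{R\phi}
    + {1\over2}\langle\rho u_R u_\phi^2\rangle ,
```

and the last correlation, being third order in the fluctuations, is dropped.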
The turbulent stress must do more than regulate angular momentum
transport, however. It is also the conduit by which free
energy is tapped to maintain the fluctuations, which produce
$T_{R\phi}$ in the first place. This crucially important role is
not a part of standard $\alpha$ disk theory. It is a consequence of
{\it fluctuation\/} dynamics, not mean flow dynamics. This may be
seen by inspecting the diagonal moments of the radial and azimuthal
$u$ equations of motion (BHS):
\begin{equation}\label{balbusr}
{\partial\ \over\partial t} \left\langle {\rho u_R^2\over2}\right\rangle
+ {\nabla}{\cdot}\left\langle {1\over2}\rho u_R^2 {\bf u}\right\rangle =
2\Omega T_{R\phi}
-\left\langle {u_R}{\partial P\over\partial R} \right\rangle -{\rm losses}
\end{equation}
and
\begin{equation}\label{balbusaz}
{\partial\ \over\partial t} \left\langle {\rho u_\phi^2\over2}\right\rangle
+ {\nabla}{\cdot}\left\langle {1\over2}\rho u_\phi^2 {\bf u}\right\rangle =
-{\kappa^2\over2\Omega}T_{R\phi}
- \left\langle {u_\phi\over R}{\partial P\over\partial\phi} \right\rangle
-{\rm losses}
\end{equation}
where ``losses'' refer to viscous losses. In disks the stress tensor
couples both to the Coriolis force and to the background shear, and the
former is bigger than the latter.
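The bookkeeping behind this statement can be made explicit (our sketch, using the standard epicyclic relation $\kappa^2 = 2(2-q)\Omega^2$ for $\Omega\propto R^{-q}$, which the text uses implicitly). Adding the $T_{R\phi}$ source terms of equations (\ref{balbusr}) and (\ref{balbusaz}) gives

```latex
2\Omega\,T_{R\phi} - {\kappa^2\over 2\Omega}\,T_{R\phi}
  = {4\Omega^2 - \kappa^2\over 2\Omega}\,T_{R\phi}
  = q\,\Omega\,T_{R\phi}
  = -{d\Omega\over d\ln R}\,T_{R\phi} ,
```

so the net free-energy input is just the shear couple; the Coriolis terms only shuffle fluctuation energy between $u_R$ and $u_\phi$, and for a Keplerian profile ($q=3/2$, $\kappa^2=\Omega^2$) the azimuthal term is a strong sink whenever $T_{R\phi}>0$.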
Contrast this with simple shear flows. Here,
only the shear couple is present; the stabilizing Coriolis force is
absent. Reverting to Cartesian coordinates with background flow
velocity $V(x) {\bf {\hat e}_y }$, the dynamical $u$-moment equations
for shear flow are
\begin{equation}\label{X}
{\partial\ \over\partial t} \left\langle {\rho u_X^2\over2}\right\rangle
+ {\nabla}{\cdot}\left\langle {1\over2}\rho u_X^2 {\bf u}\right\rangle =
- \left\langle {u_X}{\partial P\over\partial x} \right\rangle - {\rm losses}
\end{equation}
and
\begin{equation}\label{Y}
{\partial\ \over\partial t} \left\langle {\rho u_Y^2\over2}\right\rangle
+ {\nabla}{\cdot}\left\langle {1\over2}\rho u_Y^2 {\bf u}\right\rangle =
-{dV\over dx} T_{XY} - \left\langle {u_Y}{\partial P\over\partial y}
\right\rangle
-{\rm losses}
\end{equation}
where $T_{XY}$ is the obvious analogue to $T_{R\phi}$.
In both disk and shear flow, the shear is the source of free energy
which maintains the kinetic energy of the fluctuations. But the
dynamical content of (\ref{X}) and (\ref{Y}), as compared
with (\ref{balbusr}) and (\ref{balbusaz}) is evidently very different.
The disk is faced with grave difficulties in keeping up both outward
transport ($T_{R\phi} > 0)$ and significant levels of $\langle
u^2\rangle$. Whereas $2\Omega T_{R\phi}$ is a source term for $\langle
\rho u_R^2/2\rangle$ if $T_{R\phi} >0$, the $-\kappa^2/2\Omega$ term in
equation (\ref{balbusaz}) is a sink for $\langle \rho
u_\phi^2/2\rangle$. The presence of both a source and a sink coupled
to $T_{R\phi}$ means that the $u_R$ and $u_\phi$ fluctuations cannot
grow simultaneously: one would grow only at the expense of the other,
and the implicit correlation embodied in $T_{R\phi}$ could not be
self-consistently maintained. One could appeal to the pressure term
in equation (\ref{balbusaz}) for help, and one needs to do so in
{\it any\/} hydrodynamical disk flow where there is outward
transport. This leads not to turbulence, whose physical origin is
vorticity entrainment in large scale shear (Tennekes \& Lumley 1972),
but to transport by spiral waves.
In shear flow there is no $T_{XY}$
sink in the corresponding equation (\ref{Y}), and hence no barrier to
maintaining both transport and fluctuation amplitude. The nonlinear
instability (at sufficiently high Reynolds numbers) and resulting
turbulence of simple shear
flow is a matter of common experience. The behavior of disks could
not be more different.
\section{Numerical Procedure}
Numerical simulations demonstrate the behaviors of disk and shear flows
unambiguously. It is sufficient to work in the local shearing-periodic
box system (Hawley, Gammie \& Balbus 1995). The background angular
velocity of the disk is taken to be a power law: $\Omega \propto
R^{-q}$. We construct a set of local coordinates corotating with the
fluid at a fiducial radius $R_\circ$. Equations are expanded to first
order about $R_{\circ}$, using locally Cartesian coordinates ${\bf x} =
(x,y,z) = (R-R_{\circ}, R_{\circ}(\phi-\Omega t), z)$. ($\Omega$ is
evaluated at $R=R_\circ$ in the expression for $y$.) Although the
local geometry is Cartesian, Coriolis and tidal forces ensure the local
dynamics is not.
The resulting hydrodynamic equations are
\begin{equation}\label{continuity}
{\partial\rho\over{\partial t}} + \nabla \cdot (\rho {\bf v}) = 0,
\end{equation}
\begin{equation}\label{euler}
{\partial {\bf v}\over{\partial t}} + {\bf v}\cdot \nabla {\bf v}
= - {1\over\rho}\nabla P
- 2 {\bf\Omega} \times {\bf v}
+ 2 q \Omega^2 x {\hat{\bf x}},
\end{equation}
\begin{equation}\label{energy}
{\partial \rho \epsilon\over{\partial t}} + \nabla\cdot(\rho\epsilon {\bf v})
+ P \nabla \cdot {\bf v} = 0,
\end{equation}
where the pressure $P$ is given by the equation of state
\begin{equation}\label{eos}
P = \rho \epsilon(\gamma - 1),
\end{equation}
and the remainder of the terms have their usual meaning. For
simplicity, the vertical component of gravity is not included. The
shearing box is defined to be a cube with length $L\ (=1)$ on a side, and
the initial equilibrium solution is $\rho= 1$, $P = L\Omega^2$, and
${\bf v} = -q \Omega x \hat y$.
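That this state is an exact equilibrium of equation (\ref{euler}) amounts to the cancellation of the Coriolis and tidal accelerations, since the pressure is uniform and ${\bf v}\cdot\nabla{\bf v}$ vanishes for a flow that depends only on $x$. A minimal check (ours; the sample values are arbitrary):

```python
def equilibrium_residual(x, q=1.5, Omega=1.0):
    # Residual acceleration -2 Omega x v + 2 q Omega^2 x x-hat for the
    # shearing-box equilibrium v = -q Omega x y-hat.
    def cross(a, b):
        return (a[1]*b[2] - a[2]*b[1],
                a[2]*b[0] - a[0]*b[2],
                a[0]*b[1] - a[1]*b[0])
    v = (0.0, -q * Omega * x, 0.0)
    coriolis = tuple(-2.0 * c for c in cross((0.0, 0.0, Omega), v))
    tidal = (2.0 * q * Omega**2 * x, 0.0, 0.0)
    return tuple(c + t for c, t in zip(coriolis, tidal))

# equilibrium_residual(x) is (0, 0, 0) for any x, q, and Omega.
```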
The boundary conditions in the angular ($y$) and vertical ($z$)
directions are strictly periodic. The radial ($x$) direction, however,
is ``shearing periodic.'' This means that the radial faces are joined
together in such a way that they are periodic at $t = 0$ but
subsequently shear with respect to one another. Thus, when a fluid
element moves off the outer radial boundary, it reappears at the inner
radial boundary at its appropriate sheared position, with its
angular velocity compensated for the uniform mean shear across the
box. See Hawley, Gammie, \& Balbus (1995) for a detailed description
of these boundary conditions.
To begin a simulation, a background angular velocity gradient
($q$ value) is chosen, and
initial velocity perturbations are introduced into the flow. Stability
is determined by whether or not these fluctuations grow in amplitude.
The simulations of BHS began with random perturbations in pressure and
velocity applied as white noise down to the grid scale. However, such
initial conditions have the disadvantage of varying with resolution;
the initial perturbation spectrum will never be fully resolved. For
the models computed in this paper, we use a specific initial
perturbation rather than random noise. The initial conditions consist
of well-defined products of sine-wave perturbations of $v_y$ in all three spatial
directions, with wavelengths $L$, $L/2$, $L/3$ and $L/4$. A linear
combination of sine waves is constructed for each direction, e.g.,
if
\begin{equation}\label{sines}
f(x) = [\sin (2\pi x +\phi_1)+\sin(4\pi x+\phi_2)+
\sin(6\pi x+\phi_3)+\sin(8\pi x+\phi_4)]
\end{equation}
where the $\phi$ terms are fixed phase differences, then the
perturbation is applied to $v_y$ as
\begin{equation}\label{perturb}
\delta v_y = A L\Omega f(x) f(y) f(z)
\end{equation}
The amplitude $A$ of the perturbation is set to some fraction of the
shearing velocity $L\Omega$, typically 10\%. This procedure
ensures that the initial conditions will be the same for all
simulations within a comparison group, regardless of grid resolution,
and that they will be adequately resolved, even on the $32^3$ zone
grid.
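For concreteness, the construction of equations (\ref{sines}) and (\ref{perturb}) can be sketched as follows (our illustration; the phase values are arbitrary stand-ins, since the text only states that they are fixed):

```python
import math

def f(s, phases=(0.0, 0.9, 1.7, 2.6)):
    # Equation (\ref{sines}): sine waves of wavelength L, L/2, L/3, L/4
    # (L = 1) with fixed relative phases; the phase values here are arbitrary.
    return sum(math.sin(2.0 * math.pi * (k + 1) * s + p)
               for k, p in enumerate(phases))

def perturbation(n, A=0.1, L=1.0, Omega=1.0):
    # Equation (\ref{perturb}): delta v_y = A L Omega f(x) f(y) f(z), sampled
    # at the n^3 zone centers; the underlying smooth field is the same at
    # every resolution, unlike white noise applied down to the grid scale.
    coords = [(i + 0.5) * L / n for i in range(n)]
    fvals = [f(c) for c in coords]
    amp = A * L * Omega
    return [[[amp * fx * fy * fz for fz in fvals]
             for fy in fvals] for fx in fvals]

dvy = perturbation(32)   # the 10% amplitude case on a 32^3 zone grid
```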
Most of the simulations described in \S 4 were computed with the same
code used in BHS. This is an implementation of the hydrodynamic
portion of the ZEUS algorithm (Stone \& Norman 1992).
To address the possibility of
numerical artifacts affecting our findings, it has proven
useful to compare the results obtained using two very different
numerical algorithms. To this end, we have
adapted the Virginia Hydrodynamics-1 (VH1) Piecewise Parabolic Method
(PPM) code to the three-dimensional shearing box problem. The PPM
algorithm was developed by Colella \& Woodward (1984), and it is a
well-known, widely-used, and well-tested numerical technique for
compressible hydrodynamics. Like ZEUS, PPM employs directional
splitting, but differs fundamentally from ZEUS in its use of a
nonlinear Riemann solver rather than finite differences to obtain
the source terms in the Euler equations. PPM also uses piecewise
parabolic representations (third order in truncation error) for the
fundamental variables rather than the piecewise linear
functions used in ZEUS (second order). Both schemes employ
monotonicity filters to minimize zone to zone oscillations. VH1 uses a
Lagrangian-remap approach in which each one-dimensional sweep through the
grid is evolved using Lagrangian equations of motion, after which the
results are remapped back onto the original grid using parabolic
interpolations. Further information about the VH1 implementation of
PPM is currently available at http://wonka.physics.ncsu.edu/pub/VH-1,
and at http://www.pbm.com/~lindahl/VH-1.html.
\section{Results}
\subsection{Flows Marginally Stable by the Rayleigh Criterion}
A constant angular momentum distribution ($q=2$) is marginally stable
to linear perturbations by the Rayleigh criterion. BHS showed that
such a flow, which has a vanishing epicyclic frequency, is formally
equivalent in its stability properties to simple Cartesian shear. When
$\kappa=0$, equations (\ref{balbusr}) and (\ref{balbusaz}) have the same
form as (\ref{X}) and (\ref{Y}). This equivalence implies that
constant angular momentum flows should be subject to the same nonlinear
instabilities that disrupt shear flows. The simulations of BHS
demonstrate this unequivocally.
It is possible to explore deeper consequences of the symmetry. Not
only should a $q=2$ flow be formally analogous to a shear layer, a
``$q=2-\epsilon$'' Rayleigh-stable flow should be formally analogous to
a shear layer with a little bit of rotation: $d\Omega/d\ln R \gg
2\Omega$. This can be inferred from the $R\leftrightarrow\phi$
symmetry of equations (\ref{balbusr}) and (\ref{balbusaz}). (From the
standpoint of a source of free energy there is no problem; differential
rotation serves this role. This is somewhat unusual, since normally the
source of free energy disappears with the onset of linear stability.)
At large Reynolds numbers, only the ratio of the coefficients of
$T_{R\phi}$ matters, and where stability is concerned, reciprocal flows
(those whose coefficient ratios are reciprocals of one another) should
have the same stability properties. The $q=2-\epsilon$ case is
important, because some Couette flows in which the outer cylinder
dominates the rotation are found to be nonlinearly unstable, with the
onset of instability occurring near the inner cylinder (Coles 1965;
Drazin \& Reid 1981). This breakdown is the basis of ``subcritical''
behavior, which is occasionally cited as evidence for nonlinear
instability in {\it Keplerian\/} disks (e.g., Zahn 1991). From the
symmetry reasons stated above, however, we believe that subcritical
behavior is evidence that disks with $q=2-\epsilon$ are nonlinearly
unstable, not $q=1.5$ disks. This is a very testable hypothesis.
We examine this conjecture by computing models at $64^3$ and $32^3$
resolution, with $1.94\le q\le 2$ in intervals of 0.01, for two
different amplitudes of initial perturbations:
$\delta v_y = 0.1 (L \Omega)$ and $\delta v_y = 0.01 (L\Omega)$. The
value of $q$ determines the strength of the linear stabilization, the
initial perturbation amplitude sets the strength of the initial
nonlinear interactions, and the grid resolution influences the amount
of stabilizing numerical damping present [``losses'' in (\ref{balbusr})
and (\ref{balbusaz})]. Together these effects will determine when the
perturbations grow and when they do not.
Figure 1 displays some of the results. Figure 1a shows the perturbed
kinetic energy in units of $L\Omega$ for the $32^3$ resolution, large
amplitude ($\delta v_y = 0.1L\Omega$) perturbation models. The
different $q$ models begin with the same initial perturbations of the
form (\ref{perturb}). The kinetic energy decreases during the first
orbit, and the curves promptly separate according to angular momentum
distribution, with the smallest $q$ model decaying the most rapidly.
Only the flows with $q=2$ and 1.99 show any subsequent growth. The
$q=1.98$, 1.97, 1.96 and 1.95 models die away; the smaller the value of
$q$, the lower the remaining kinetic energy. Figure 1b depicts models
with the same range of $q$ and the same initial perturbations, but
computed with $64^3$ grid zones. Again there is a short initial period
of rapid decline in perturbed kinetic energy, with the curves
separating according to $q$. However, this decrease is smaller than
that seen in the $32^3$ zone simulations. After about one orbit in
time, the kinetic energies grow for all but the $q=1.96$, 1.95, and
1.94 models. The $q=1.96$ and 1.95 models vary with time around an
average that remains close to the initial value; only the $q=1.94$
model experiences a clear decline in perturbed kinetic energy with time.
The sensitivity of the nonlinear instability to initial perturbation
amplitudes is demonstrated with a third group of $64^3$ grid zone
models. These are begun with an initial perturbation amplitude of
only $\delta v_y = 0.01 L\Omega$ (Fig. 1c). The perturbation kinetic
energy increases slightly during the first orbit, and again the curves
separate according to $q$ value. In this case, however, only the
$q=2.0$ model shows growth; all the others die away.
Because the instability is truly nonlinear, the details of how a flow
develops depend upon the amplitude of the initial disturbance, and, to
a far greater degree than for a linear instability, the numerical
resolution. When $\kappa^2 = 2(2-q)\Omega^2 = 0$, the linear forces on the
system sum to zero, and nonlinear dynamics determine the fate of the
flow. The simple shear flow shows that in the absence of a restoring
force the nonlinear dynamics are destabilizing. As $\kappa^2$ slowly
increases from zero, the linear restoring force returns; the separation
of the curves by $q$ value illustrates the stabilizing effect of the
Coriolis force. Whether or not the linear restoring force can ensure
stability in a given system depends on its strength compared to that of
the nonlinear dynamics, which, in turn, depend on the amplitude of the
initial perturbations. The larger the perturbations, the greater the
nonlinear forces.
The subcritical behavior of $2-\epsilon$ flows seems to have its roots
in epicyclic excursions. The mechanism of instability in planar
Couette flow is believed to be vorticity stretching in the shear
(Tennekes \& Lumley 1972). The presence of epicyclic motion in general
is incompatible with this process. Nearby perturbed elements execute
bound (pressure-modified) epicyclic orbits around a common angular
momentum center. There is no indefinite stretching of local vortices,
or at least the process is far less efficient. But the aspect ratio of
the elliptical epicycle becomes extreme as $q\rightarrow2$; in the absence
of pressure, the minor to major (radial to azimuthal) axis ratio for
displacements in the disk plane is $(1-q/2)^{1/2}$. At some point, it
is all but impossible to distinguish the epicycle from the shearing
background, and of course the epicyclic frequency is then tiny compared
with the shearing rate. This latter rate is the time scale for vortex
stretching, and so we expect this mechanism for turbulence to be viable
under these conditions. The fact that the formal linear epicyclic excursion
may be bound is inconsequential. Vortex stretching and the feeding of
turbulence will proceed if there is ample time before the epicycle
closes. For $q=1.95$, the approximate threshold for nonlinear
instability found in the simulations, $\kappa=0.2|d\Omega/d\ln R|$, and
the aspect ratio quoted above is $0.16$, i.e., about a 6 to 1
major-to-minor axis ratio. These are sensible values in the scenario we
have suggested: if $\kappa$ is within an order of magnitude of the
shearing rate, or the aspect ratio is at least 0.1 to
0.2, then the flow is strongly stabilized by Coriolis-induced epicyclic
motion. In this case, vortices are not stretched efficiently by the
background shear: rather than monotonically increasing the distortion,
the epicyclic orbit relaxes the vortex stretching over half of
the period.
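The figures quoted here follow directly from $\kappa^2 = 2(2-q)\Omega^2$ and the axis ratio $(1-q/2)^{1/2}$, and can be checked with a short fragment (ours, in units with $\Omega=1$):

```python
import math

def epicyclic_ratios(q, Omega=1.0):
    """For rotation law Omega ~ R^-q, return (kappa / |dOmega/dln R|, axis ratio).
    kappa^2 = 2(2-q) Omega^2; the shear rate is |dOmega/dln R| = q Omega;
    the radial-to-azimuthal epicyclic axis ratio is (1 - q/2)^(1/2)."""
    kappa = math.sqrt(2.0 * (2.0 - q)) * Omega
    shear = q * Omega
    aspect = math.sqrt(1.0 - q / 2.0)
    return kappa / shear, aspect
```

For $q=1.95$ this gives an axis ratio of 0.158 (the 0.16, roughly 6:1, quoted above), with $\kappa$ about 0.16 of the shearing rate; for Keplerian $q=1.5$ the epicycle is much rounder, with axis ratio 0.5 and $\kappa = \Omega$, two thirds of the shear rate.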
Numerical diffusion error represents a loss term from (\ref{balbusr})
and (\ref{balbusaz}), and this adds to the stabilizing effect by
reducing the amplitude of the perturbations. At a given resolution,
however, numerical effects should be nearly the same from one $q$ value
to the next. Hence a series of models that differ by only $q$ isolates
the physical effects from the numerical. Any differences observed in
these simulations are dynamical, not numerical, in origin.
To conclude, we have observed that the growth or decay of applied
velocity perturbations depends on the resolution and the initial
perturbation amplitude for flows near the critical $\kappa^2 = 0$
limit. This hypersensitivity, however, is shown only over a tiny range
of $q$. Below $q=1.95$ all indications of instability are gone. These
results are precisely what one would expect from the presence of a
nonlinear instability, and they are consistent with the observed
presence of such instabilities in shear dominated, but formally
Rayleigh-stable Couette experiments.
\subsection{The Influence of Resolution and Algorithm}
A concern in any numerical study, particularly one whose goal is to
search for a physical instability, is the effect of finite numerical
resolution. In \S4.1 we demonstrated how various flows
could be stabilized by increasing the epicyclic frequency (through a
decrease in $q$ from the critical value of 2.0). In some of these
cases, a transition from stability to instability occurred when
resolution was increased. Clearly, numerical resolution does play its
anticipated role: numerical diffusion error has a stabilizing effect.
But is numerical diffusion error decisive as $q$ becomes smaller?
BHS argued that the stability of Keplerian flow to finite perturbations
was due to physical, not numerical effects, and gave support to that
position through simulations done at three resolutions, all of which
produced similar results. In this section we describe a series of
simulations that improve upon those previous resolution experiments.
We have evolved a series of Keplerian flows with a range of numerical
resolutions. We begin with $32^3$ grid zones, and then increase the
number of grid zones by a factor of two in each of the three dimensions
in subsequent simulations, up to $256^3$ grid zones. Each of these
Keplerian flows is perturbed with angular velocity fluctuations of the
form (\ref{perturb}), with a maximum initial amplitude of $\delta v_y
= 0.1 L\Omega$.
Figure 2 shows the evolution of the kinetic energy of the radial and
angular velocity perturbations, $(\rho v_x^2)/2$ (Fig. 2a) and $(\rho
\delta v_y^2)/2$ (Fig. 2b). The initial perturbations produce radial
kinetic energies which rapidly (0.2 orbits) increase to a maximum value comparable
with the azimuthal component. Beyond this point, the
perturbed kinetic energies drop off with time; the higher the
resolution, the less rapid the decline, although each doubling in
resolution creates a diminishing change. All resolutions show rapid
decrease in $(\rho \delta v_y^2)/2$ from the initial value. One
intriguing difference between the models is that the higher the
resolution, the {\it lower} the value of $(\rho\delta v_y^2)/2$ after
about 2 orbits. Thus, far from promoting greater instability, higher
resolution is {\it reducing} the amplitude of the angular momentum
perturbations.
Why should an increase in resolution lead to more rapid damping? This is
clearly at variance with the expected behavior if numerical diffusion
were the sole sink of perturbed kinetic energy. As we have
emphasized, there is also a significant dynamical sink. Equation
(\ref{balbusaz}) shows that the Reynolds stress is a loss term for
$\langle \delta v_y^2 \rangle$.
All simulations begin with a positive Reynolds stress,
and the higher resolution simulations maintain larger values during the
initial orbit. At each resolution,
the Reynolds stress can be integrated over time
and space to give a measure of its strength:
$\int {\kappa^2 \over 2\Omega}\, {\langle \rho v_x\,v_y\rangle}\, dt$.
These values {\it increase} monotonically with resolution, from 0.0033,
to 0.0059, 0.0078, and finally to 0.0092 for the $256^3$ model. (For
reference, the initial value of $\langle{{1\over 2} \rho v_y^2 }\rangle$
is 0.04.)
Further evidence for the damping effect of the Reynolds stress can be
seen in the low resolution run. In Figure 3 we plot $(\rho \delta
v_y^2)/2$ (Fig. 3a) and the Reynolds stress (Fig. 3b) as a function of
time for the first orbit in the $32^3$ grid zone model. This low
resolution simulation is of special interest because at orbit 0.25 the
averaged Reynolds stress becomes negative. At the same time, the rate
of decline of $\langle \rho \delta v_y^2 \rangle$ decreases, as one
would anticipate from (\ref{balbusaz}). Hence, although at low
resolution grid scale numerical diffusion is the largest loss term in
the angular velocity fluctuation equation, the sink due to the Reynolds
stress is large enough to observe directly. Improved numerical
resolution increases the dynamical Reynolds sink by a greater factor
than it reduces the numerical diffusion!
We next turn to simulations of Keplerian flows at $32^3$, $64^3$ and
$128^3$ grid zone resolutions using the VH1 PPM code. We ran the same
problem with the same initial perturbations as above. Figure 4 shows
the time-history of the perturbed radial and azimuthal kinetic
energies. This plot should be compared with Figure 2 and for reference
we include the $32^3$ and the $128^3$ ZEUS simulation results as
dashed lines. Figure 5 is the Reynolds stress during the first orbit
for all the resolutions and for both algorithms; the PPM runs are the
bold lines.
The PPM results are completely consistent with the ZEUS simulations.
Most striking is the close similarity between a given PPM evolution and
the ZEUS simulation run at twice its resolution. For example, the
history curve of the Reynolds stress in the PPM $32^3$ run lies almost
on top of the ZEUS $64^3$ curve (Fig. 5) through 0.2 orbits in time.
The Reynolds stresses in the $64^3$ and $128^3$ PPM simulations peak at
the same level as the $256^3$ ZEUS simulation, then decline at slightly
different rates beyond 0.2 orbits. The $128^3$ PPM simulation
apparently has less numerical diffusion than the $256^3$ ZEUS model.
Regardless of the relative merits of the two schemes, the succession of
curves with increasing resolution showing the same outcome, done with
two completely independent algorithms, indicates convergence to a
solution near that of the maximum resolution models. In other words,
Keplerian disks would prove stable even if computed at arbitrarily high
resolution.
\subsection{Nonlinear Decay in the Keplerian System}
In simulations of Keplerian differential rotation, the kinetic energy
of the perturbations declines at a rate which itself decreases with
time. Why should there be any decay at all in a stable inviscid
system? Is this decay entirely a numerical artifact?
These simulations begin with perturbations of the form
(\ref{perturb}). The initial power spectrum for the perturbed kinetic
energy thus contains power in the first four wavenumbers only. Once
the evolution begins, nonlinear interactions cause a cascade of
available perturbed kinetic energy into higher wavenumbers.
Dissipation occurs rapidly at the highest wavenumber. Although this
dissipation is numerical, it mimics the behavior of physical
dissipation at the viscous scale. The rate at which energy cascades to
larger wavenumbers, and hence the rate at which the perturbed kinetic
energy declines, should again be a function of numerical resolution and
perturbation amplitude. In this section we investigate these effects,
explore the reasons for the decay of the turbulence, and examine the
properties of the velocity fluctuations that remain at late time.
A study of the Fourier power spectrum of the perturbed kinetic energy
yields important information. Because the background velocity has
shear, we must transform the data into coordinates in which the
shearing box system is strictly periodic, take the Fourier transform,
and then remap the wavenumbers onto the fixed Eulerian system. This
procedure is described in Hawley et al.\ (1995). Figure 6 shows
one-dimensional power spectra, $| \delta {\bf v}(k)|^2$ in $k_x$,
$k_y$, and $k_z$, for the $64^3$ and $128^3$ grid zone Keplerian PPM
simulations discussed in \S4.2. The spectra are shown for orbits 1, 2
and 3, with the dashed lines corresponding to the $64^3$ run and the
solid lines to the $128^3$ model. The initial perturbation spectrum is
constant across the first four wavenumbers (seen in Figure 6 as a
horizontal line).
Immediately after the evolution begins, energy cascades into higher
wavenumbers. Within one third of an orbit, a relatively smooth power
law distribution has been established. As time goes by the energy at
all wavenumbers steadily declines. The power spectrum across the
smallest wavenumbers remains relatively flat but has dropped steadily
from $t=0$. Beyond $k\sim 10$ the spectra drop off as steep power
laws, with the $k_y$ distribution the steepest of all three
directions. Because of the background shearing velocity, transport in
the $y$ direction produces the largest numerical diffusion. The $k_x$
function has the smallest slope. In this case, the background shear
causes low $k_x$ waves to be wrapped into higher wavenumbers, i.e.,
$k_x(t) = k_x (0) - tm d\Omega/dR$, where $m$ is an azimuthal
wavenumber. The higher resolution curves in Figure 6 have
larger energies compared to the low resolution curve. Aside from this,
the additional grid zones extend the curves out to higher
wavenumber without much significant qualitative difference.
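The wavenumber drift $k_x(t) = k_x(0) - t\,m\,d\Omega/dR$ reduces, in the local limit where $m\,d\Omega/dR \to -q\Omega k_y$, to a one-line helper (ours, in code units):

```python
def kx_sheared(kx0, ky, t, q=1.5, Omega=1.0):
    """Eulerian radial wavenumber of a shearing wave at time t:
    k_x(t) = k_x(0) - t * m * dOmega/dR, which for a local shearing box
    becomes k_x(0) + q * Omega * t * k_y."""
    return kx0 + q * Omega * t * ky
```

The drift rate $q\Omega k_y$ shows why power at low $k_x$ migrates steadily toward high radial wavenumbers, flattening the $k_x$ spectrum relative to the other two directions.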
Next we consider the effect of the initial perturbation amplitude on
the rate of decay of the turbulence and the properties of the system at
late times. Experience suggests that if a system is vulnerable to
nonlinear instabilities, large initial perturbation amplitudes will
promote the onset of turbulence. Indeed, this is what was observed in
the marginally Rayleigh-stable runs described in \S4.1. Here we run a
series of low resolution $32^3$ Keplerian simulations that have initial
velocity perturbations with maximum values equal to $\delta v_y/L\Omega
= 1.0$, 0.1, 0.01, and 0.001. The time evolution of the perturbed
kinetic energies in these four runs is shown in Figure 7. All the runs
show rapid decay; contrary to naive expectations, however, the higher
the initial perturbation amplitude the {\it larger} the initial decay
rate of the perturbed kinetic energy.
Figure 8 illustrates the reason: it shows the 1D Fourier power
spectrum for the largest and smallest initial perturbation runs after
0.25 orbits. The importance of nonlinear
interactions is increased by larger initial amplitudes. This is why in
\S4.1 nonlinear effects were able (in some cases) to overcome the
stabilizing influence of the Coriolis force when the initial
perturbation amplitudes were increased. Here, however, the main
nonlinear effect promoted by increased perturbation amplitude is to
create a more rapid cascade of energy to high wavenumbers. In
contrast, the $\delta v_y = 0.001L\Omega$ case is dominated by linear
and numerical effects. Energy has been carried into higher $k_x$
wavenumbers by linear shear, and lost from $k_y$ by numerical diffusion
error. The spectrum in $k_z$ is completely flat through the first four
wavenumbers (those initially powered) before dropping precipitously.
Evidence for a strong nonlinear cascade in the largest initial
perturbation runs is also found in the rapid increase in entropy at the
start of the evolution due to thermalization of the initial kinetic
energy in those runs. By orbit 5, the decay rates have been reduced to
a much lower level comparable to that seen in the small amplitude
perturbation runs. The ratio of the kinetic energy at orbit 5 to the
initial value in each of these runs is 0.00042, 0.0042, 0.014, and
0.058. Eventually the fluctuation energy in all these simulations
levels off at a finite, small value. What remains at these late times
are pressure and epicyclic waves, whose amplitude is determined by the
strength of the initial perturbation. The very slow decay of these
long-wavelength linear waves is due to numerical dissipation.
The Reynolds stress oscillates around zero in all runs, with an
amplitude similar to the late-time kinetic energy.
We have argued that the residual kinetic energy represents nothing more
than linear waves left over from the initial perturbations. Their
presence does not imply that Keplerian disks are somehow still
``slightly unstable''; stability certainly does not require that
velocity perturbations die out. Indeed, a complete decay to zero
amplitude would have been puzzling; the motivation of this section,
after all, was to give an account of why there was {\it any\/} decay.
This understood, even low levels of velocity fluctuations might
be of interest in a disk, if they could be sustained indefinitely. Can
one convincingly rule out the possibility that these lingering
fluctuations are somehow feeding off the differential rotation? An
experiment to test this conjecture is to chart the evolution of
perturbations in a $q=0$, constant $\Omega$ flow. In a uniformly
rotating disk, Coriolis forces are present without any background shear
at all. Such a system is rigorously stable; without background shear
there is no source of energy to feed velocity perturbations. At late
times, the noise in a uniformly rotating disk must reflect residual
energy from the initial conditions, not ongoing excitation. Further,
the absence of shear flow will reduce the effective numerical
viscosity; the perturbations will not be advected by the shear flow,
nor will radial wavenumber modes be sheared out to higher values.
The $q=0$ case has been run at two resolutions, $64^3$ and $32^3$, for
comparison with equivalently resolved Keplerian systems. The initial
perturbations have a maximum value $\delta v_y = 0.1 L\Omega$. The
time histories of the perturbed kinetic energy for both the $q=0$ and
the $q=1.5$ $64^3$ simulations are shown in Figure 9. Both angular
velocity distributions show rapid declines in kinetic energy, although
after 10 orbits the rate of decline is greatly reduced. The $32^3$
resolution simulations look similar, except that they have less energy
at late time. The residual energy is due to stable waves that have not
yet been damped by numerical diffusion. Compared to a similarly resolved
simulation with a Keplerian rotation profile ($q=1.5$), the $q=0$ models
level out at {\it higher\/} energies. Without advection through the
grid, there is less numerical diffusion, and higher residual wave
amplitudes are preserved. The case for driven residual weak Keplerian
turbulence becomes untenable, if the ``turbulence'' is stronger in a
rigorously stable uniformly rotating disk!
\subsection{The Contrast with Magnetic Turbulence}
Although Keplerian flows have proven to be stable to the local
development of hydrodynamic turbulence, the inclusion of a magnetic
field changes everything, even if the field is weak (subthermal).
Hydrodynamic stability in a differentially rotating system is assured
so long as the Rayleigh criterion $dL/dR > 0$ is satisfied. Magnetic
differentially rotating systems quite generally require $d\Omega/dR >
0$ for stability (Balbus 1995), a condition not satisfied in accretion
disks. With a magnetic field present the stress tensor acquires a
magnetic component proportional to $B_R B_\phi$,
\begin{equation}\label{magstress}
T_{R\phi}= \langle \rho u_R u_\phi - \rho u_{AR}u_{A\phi}\rangle,
\end{equation}
where
\begin{equation}
{\bf u_A} = { {\bf B}\over \sqrt{4\pi\rho}}.
\end{equation}
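Equation (\ref{magstress}) can be evaluated directly from sampled fields; a minimal sketch (the function name is ours) in Gaussian units:

```python
import numpy as np

def stresses(rho, uR, uphi, BR, Bphi):
    """Volume-averaged Reynolds and Maxwell parts of
    T_{R phi} = <rho uR uphi - rho uAR uAphi>, with u_A = B / sqrt(4 pi rho)."""
    rho, uR, uphi, BR, Bphi = map(np.asarray, (rho, uR, uphi, BR, Bphi))
    uAR = BR / np.sqrt(4.0 * np.pi * rho)
    uAphi = Bphi / np.sqrt(4.0 * np.pi * rho)
    reynolds = float(np.mean(rho * uR * uphi))
    maxwell = float(np.mean(-rho * uAR * uAphi))   # = -<B_R B_phi> / 4 pi
    return reynolds, maxwell
```

With outward angular momentum transport, $\langle\rho u_R u_\phi\rangle > 0$ and $\langle B_R B_\phi\rangle < 0$, so both contributions to $T_{R\phi}$ are positive.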
Most importantly, the way the stress tensor couples to the
fluctuations changes. With the new expression for $T_{R\phi}$
the mean flow equations (\ref{momflux}) and (\ref{enflux}) are
unchanged, but the fluctuation equations become
\begin{equation} \label{magenr}
{1\over2}{\partial\ \over\partial t}\langle \rho (u_R^2 +u_{A\, R}^2)\rangle
+\nabla {\cdot}\langle\quad \rangle=
2\Omega\langle\rho u_R u_\phi \rangle -
\langle u_R {\partial P_{tot} \over \partial R} \rangle - {\rm losses,}
\end{equation}
\begin{equation} \label{magenaz}
{1\over2}{\partial\ \over\partial t}\langle \rho (u_\phi^2 +u_{A\,
\phi}^2)\rangle
+\nabla{\cdot} \langle\quad \rangle =
- 2\Omega\langle\rho u_R u_\phi \rangle
- T_{R\phi}\,{d\Omega\over d\ln R}
- \langle {u_\phi\over R} {\partial P_{tot} \over \partial
\phi}\rangle - {\rm losses}.
\end{equation}
(Terms proportional to $\nabla{\cdot} {\bf u}$ have been dropped,
the fluxes are not shown explicitly, and
$ P_{tot} = P + {B^2/8\pi}$.)
Now the stress tensor no longer works at cross purposes to itself.
There is still Coriolis stabilization in equation (\ref{magenaz}), but
it is not sufficient to overcome the stress--gradient coupling term.
One consequence of this is the now well-understood linear instability
of weak magnetic fields in disks (Balbus \& Hawley 1991; see reviews by
Papaloizou \& Lin 1995, and Balbus \& Hawley 1998). Another is that
outward transport of angular momentum maintains the turbulence
self-consistently by directly tapping into the free energy of
differential rotation.
The different couplings of the Maxwell (magnetic) and Reynolds stresses
can be demonstrated in simulations. Abramowicz, Brandenburg, \& Lasota
(1996) carried out a series of simulations with different values of
background $q$. They found an increase in angular momentum transport
levels roughly in proportion to the background shear to vorticity
ratio, i.e., $q/(2-q)$. This result is best understood by rewriting
the right hand side of (\ref{magenaz}) to obtain
\begin{equation}\label{qstress}
{1\over R}{dR^2\Omega\over dR}\langle\rho u_R u_\phi\rangle
- {d\Omega\over d\ln R} \langle\rho u_{AR} u_{A\phi}\rangle.
\end{equation}
Thus the Reynolds (kinetic) stress couples directly to the vorticity
[$=(2-q)\Omega$], and the Maxwell (magnetic) stress couples to the shear
($q\Omega$). In other words, vorticity limits turbulence whereas shear
promotes it.
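The couplings in (\ref{qstress}) reduce to two coefficients whose $q$ dependence a small helper (ours) makes explicit:

```python
def coupling_coefficients(q, Omega=1.0):
    """Eq. (qstress): the Reynolds stress couples to the vorticity,
    (2-q)*Omega, while the Maxwell stress couples to the shear, q*Omega."""
    return (2.0 - q) * Omega, q * Omega

def maxwell_threshold(q):
    """Minimum Maxwell/Reynolds stress ratio, (2-q)/q (vorticity/shear),
    for turbulence to be fed by differential rotation."""
    return (2.0 - q) / q
```

For Keplerian $q=1.5$ the vorticity is $0.5\Omega$, the shear is $1.5\Omega$, and the threshold Maxwell-to-Reynolds ratio is $1/3$; as $q\rightarrow 2$ the vorticity, and with it the threshold, vanishes.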
Here we expand the study of Abramowicz et al.\ by examining a full range of
$q$ values between 0 and 2 in intervals of 0.1 in a local shearing
box. The simulations are of the same type as some of those presented
in Hawley, Gammie \& Balbus (1996). The initial magnetic field is $B_z
\propto \sin(2\pi x/L_x)$ with a maximum corresponding to $\beta =
P/P_{mag} =400$. The box size is $L_x = L_z = 1$, and $L_y = 2\pi$,
and the grid resolution is $32\times 64\times 32$.
Figure 10 shows time-averaged Reynolds and Maxwell stresses as a
function of $q$ for the full range of simulations. The magnetic
instability is present for all $q>0$. Equation (\ref{qstress})
provides no direct limit on the Maxwell stress; it acquires whatever
level the nonlinear saturation of the instability can support.
However, if the turbulence is to be sustained from the differential
rotation, not pressure forces, the Maxwell stress must in general
exceed the Reynolds stress by more than a factor of $(2-q)/q$, the ratio of the
vorticity to the shear. In practice the ratio of the Maxwell stress to
Reynolds stress is significantly greater than this, particularly in the
range $0<q<1$. In this regime the vorticity is so strongly stabilizing
that the Reynolds stress is kept to a minimum even when fluid
turbulence is created and maintained by the magnetic instability. When
$q>1$, however, the shear and vorticity become comparable; the Reynolds
and Maxwell stresses both climb with increasing $q$. As $q\rightarrow
2$, the vorticity goes to zero and there are no constraints on the
Reynolds stress from (\ref{qstress}). The total stress increases
dramatically as the flow enters the domain of the nonlinear
hydrodynamical instability. When $q>2$, of course, the flow is
Rayleigh unstable.
\section{Discussion}
In this paper we have carried out a series of numerical simulations to
explore further the local hydrodynamical stability properties of
Keplerian disks, and the role that the Reynolds stress plays in
determining that stability. The key conceptual points are embodied in
the moment equations (\ref{balbusr}) and (\ref{balbusaz}) for
hydrodynamics, and (\ref{magenr}) and (\ref{magenaz}) for
magnetohydrodynamics. The differences in those equations are clearly
manifest in simulations, both hydrodynamic and MHD. The Maxwell stress
couples to the shear, the Reynolds stress to the vorticity. While the
former maintains turbulence, the latter works against it. Thus, while
magnetized disks are unstable, and naturally create and sustain
turbulence, a nonmagnetized Keplerian flow possesses only the Reynolds
stress, and that cannot by itself sustain turbulence. The accumulating
evidence, both numerical and analytic, from this paper and earlier
works (BHS; Stone \& Balbus 1996), points clearly to the conclusion
that Keplerian flows are locally hydrodynamically stable, linearly and
nonlinearly.
It has been traditional to point to the nonlinear instabilities
observed in simple shear flows to support the conjecture that Keplerian
disks behave similarly. Such reasoning, however, neglects the critical
difference between such flows, namely the dynamical stabilization due
to the Coriolis force. Linear stabilization is measured by the
epicyclic frequency, $\kappa^2 = 2(2-q)\Omega^2$. As
$q\rightarrow 2$, $\kappa^2 \rightarrow 0$, and dynamical stabilization
becomes weaker and weaker. At $q=2$ it vanishes completely; the flow
becomes equivalent to a simple Cartesian shear and subject to the
nonlinear instabilities to which simple shear flows are prone. Viewed
in this light, the nonlinear instability of a Cartesian shear flow is
less a generic property than a singular case lying between the linearly
unstable and linearly stable regimes. The nonlinear instability exists
not because nonlinear forces can generally overcome linear restoring
forces, but because those linear forces vanish at the delimiting
boundary between
Rayleigh stability ($q<2$) and instability ($q>2$).
This is highlighted by our study of the transition between stability
and instability. By varying $q$ to values near to but slightly less
than 2, we can explore the dynamics of systems close to the marginal
stability limit. We find that when stabilization from the Coriolis
term is very weak, both the amplitude of the initial perturbations and
the size of the numerical diffusion error (grid resolution) can
determine whether the velocity perturbations amplify or decay. This is
entirely consistent with the experimental configurations that are
linearly stable but which nevertheless become unstable. Such
nonlinearly unstable systems are precisely those where a large shear
dominates over other factors (e.g., a rapidly rotating outer cylinder
in a Couette experiment). In these fluid experiments the transition to
turbulence depends on the amplitude of the perturbing noise and the
Reynolds number of the flow. When we reduce the strength of the
Coriolis force by varying $q$ just below the marginally stable value
$q=2$, we produce a similar dominance of shear and again find an
instability that depends on the initial perturbation amplitude and the
(numerical) viscosity. We have understood this in terms of epicyclic
orbits, which are highly distorted near $q =2$, almost
indistinguishable from background shear. Once $q$ is sufficiently
below $q=2$, however, Coriolis stabilization is powerful, epicycles are
rounder, and perturbation growth is no longer possible.
This conclusion is greatly strengthened by experiments in which the
Keplerian system is evolved with different initial perturbations and
different grid resolutions. First we explored the impact of finite
resolution. Recall that the effect of numerical diffusion error on
flow structure (the turbulent perturbations) will be as an additional
loss term in (\ref{balbusr}) and (\ref{balbusaz}). Even if we were to
assume an ideal scheme with no numerical losses, however, the sink due
to the Coriolis term in (\ref{balbusaz}) would remain unchanged. The
simulations with various $q$ values near but just below 2 provide a
quantitative measure of just how big that term needs to be to stabilize
the flow, and an estimate of the importance of numerical viscosity as a
loss term. Although we find that increasing the effective Reynolds
number (i.e., by increasing the resolution and thus reducing
numerical diffusion) can convert a marginally stable flow into a
marginally unstable one, one should not conclude that further increases
will have a similar effect on strongly stable Keplerian flows. Vortex
stretching can ``sneak'' into a highly elongated epicycle, but it
cannot do so in a rounded, strongly stable Keplerian disturbance.
We have investigated the possibility of diffusive numerical
stabilization with a series of resolution experiments run with two
completely different algorithms. Keplerian simulations were run at 4
resolutions from $32^3$ up to $256^3$ using the ZEUS hydrodynamics
scheme, and 3 resolutions from $32^3$ up to $128^3$ using the PPM
algorithm. The results from all these simulations were very similar.
No hint of instability was seen in any of these simulations, nor was
there any trend observed which could conceivably suggest instability in
an extrapolation to arbitrarily high resolution. Furthermore, not just
the decaying trends but the detailed numerical behavior was reproduced
by two distinct codes with very different numerical diffusion properties. The
case that numerical diffusion is dominating and stabilizing these runs
is untenable.
Next, a series of experiments explored a range of initial perturbation
amplitudes. The largest had initial fluctuations that were comparable
to the background rotation velocity $L\Omega$. We found that the {\it
larger} the initial perturbation, the more rapid the decay of the
resulting turbulence. Far from promoting instability, stronger initial
perturbations actually increase the rate of decay of the perturbed
kinetic energy. When finite amplitude perturbations are added to the
Keplerian system they rapidly establish a nonlinear cascade of energy
to higher wavenumbers. This energy is then thermalized (or lost,
depending upon the numerical scheme and the equation of state) at the
high wavenumber end. Linear amplitude perturbations do not decay via
such a cascade, and damp at much lower rates.
Turbulence decays in homogeneous systems lacking an external energy
source; a uniformly rotating disk is one such system. A Keplerian
system is more interesting because decay is observed
despite the presence of free energy in the differential rotation which
could, in principle, sustain the turbulence. This does not happen,
however, because the coupling of the Reynolds stress to the background
vorticity simply makes it impossible to power simultaneously both the
radial and azimuthal velocity fluctuations that make up the Reynolds
stress. Residual levels of fluctuations were even lower in the
Keplerian disk than they were in the uniformly rotating disk.
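This coupling can be made explicit. In schematic form (our paraphrase
of the mean fluctuation kinetic energy equations of BHS, with loss
terms omitted), the energy budget reads
\begin{displaymath}
\frac{\partial}{\partial t}\left\langle \frac{\rho u_R^2}{2}\right\rangle
 = 2\Omega\,\langle \rho u_R u_\phi\rangle + \ldots, \qquad
\frac{\partial}{\partial t}\left\langle \frac{\rho u_\phi^2}{2}\right\rangle
 = -(2-q)\,\Omega\,\langle \rho u_R u_\phi\rangle + \ldots,
\end{displaymath}
so for $q<2$ a positive Reynolds stress
$T_{R\phi}=\langle \rho u_R u_\phi\rangle$ feeds the radial
fluctuations while draining the azimuthal ones: both components of the
stress cannot be powered simultaneously.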
This behavior stands in contrast to the MHD system. The magnetic
instability produces not just turbulent fluctuations, but the {\em
right kind\/} of fluctuations: positive correlations in $u_R$ and
$u_\phi$, and in $B_R$ and $B_\phi$. It is because the
magnetorotational instability is driven by differential rotation that
the critical $R$--$\phi$ correlations exist. Unless $T_{R\phi}$ were
positive, energy would not flow from the mean flow into the
fluctuations. Hydrodynamical Cartesian shear flows maintain the
correlation physically by ensnaring vortices (a nonlinear process);
magnetic fields do this by acting like springs attached to the fluid
elements (a linear process). Sources of turbulence other than the
differential rotation (or simple shear) do not force a correlation
between $u_R$ and $u_\phi$, and generally do not lead to enhanced
outward transport.
Magnetic fields, then, are uniquely suited to be the agents responsible
for the behavior of $\alpha$ disks. While this conclusion has
important implications for fully ionized disks, its implications for
protostellar disks are yet more profound. If such disks are unable to
sustain adequate levels of magnetic coupling, or unable to sustain such
coupling throughout their radial extent, angular momentum transport may
well be tiny or nonexistent. Angular momentum transport, when it
occurs, will have to be accomplished through global nonaxisymmetric
waves, driven, for example, by self-gravitational instabilities. Even
if triggered by, say, convective instability, turbulence would likely
prove to be transient: it cannot be sustained from the only source of
energy available, namely the differential rotation. More generally,
nonmagnetic disks will not be describable in terms of the usual
$\alpha$ model.
Phenomenologically, much less is known of MHD turbulence than of
hydrodynamical turbulence. There is very little laboratory work to draw
upon, in contrast to the rich literature of hydrodynamical Couette
flow. The observational complexity of many disk systems suggests the
presence of a versatile and eruptive source of turbulence; magnetic
fields seem an obvious candidate for producing such behavior. The
physics behind magnetic reconnection, large scale field topology,
ion-neutral interactions, magnetic Prandtl number, and global dynamos
is likely to prove at least as rich and complex as the behavior of
astrophysical disks.
This work is supported in part by NASA grants NAG5-3058, NAG5-7500, and
NSF grant AST-9423187. Simulations were carried out with support from
NSF Metacenter computing grants at the Pittsburgh Supercomputing Center
and at NPACI in San Diego.
\clearpage
\begin{center}
{\bf References}
\end{center}
\refindent Abramowicz, M., Brandenburg, A., \& Lasota, J.-P. 1996,
MNRAS, 281, L21
\refindent Balbus, S.~A. 1995, ApJ, 453, 380
\refindent Balbus, S.~A., \& Hawley, J.~F. 1991, ApJ, 376, 214
\refindent Balbus, S.~A., \& Hawley, J.~F. 1998, Rev Mod Phys, 70, 1
\refindent Balbus, S.~A., Hawley, J.~F., \& Stone, J.~M. 1996, ApJ, 467,
76 (BHS)
\refindent Cabot, W. 1996, ApJ, 465, 874
\refindent Cameron, A.~G.~W. 1978, Moon and Planets, 18, 5
\refindent Colella, P., \& Woodward, P.~R. 1984, J. Comput. Phys., 54, 174
\refindent Coles, D. 1965, J. Fluid Mech, 21, 385
\refindent Crawford, J.~A., \& Kraft, R.~P. 1956, ApJ, 123, 44
\refindent Cuzzi, J.~N., Dobrovolskis, A.~R., \& Hogan, R.~C. 1996, in
Chondrules and the Protoplanetary Disk, ed. R.~H. Hewins, R.~H.
Jones, \& E.~R.~D. Scott (Cambridge: Cambridge Univ. Press), 35
\refindent Drazin, P.~G., \& Reid, W.~H. 1981, Hydrodynamic Stability
(Cambridge: Cambridge University Press)
\refindent Hawley, J.~F., Gammie, C.~F., \& Balbus, S.~A. 1995, ApJ, 440,
742
\refindent Hawley, J.~F., Gammie, C.~F., \& Balbus, S.~A. 1996, ApJ, 464,
690
\refindent Kley, W., Papaloizou, J.~C.~B., \& Lin, D.~N.~C. 1993, ApJ,
416, 679
\refindent Lin, D.~N.~C., \& Papaloizou, J.~C.~B. 1980, MNRAS, 191, 37
\refindent Papaloizou, J.~C.~B., \& Lin, D.~N.~C. 1995, ARAA, 33, 505
\refindent Prinn, R.~G. 1990, ApJ, 348, 725
\refindent Ryu, D., \& Goodman, J. 1992, ApJ, 388, 438
\refindent Shakura, N.~I., \& Sunyaev, R.~A. 1973, A\&A, 24, 337
\refindent Stone, J.~M., \& Balbus, S.~A. 1996, ApJ, 464, 364
\refindent Stone, J.~M., \& Norman, M.~L. 1992, ApJS, 80, 753
\refindent Tennekes, H., \& Lumley, J.~L. 1972, A First Course in
Turbulence (Cambridge: MIT Press)
\refindent Zahn, J.-P. 1991, in Structure and Emission Properties of
Accretion Disks, ed. C. Bertout, S. Collin-Souffrin, J.-P. Lasota, \&
J. Tran Thanh Van (Gif-sur-Yvette: Editions Fronti\`eres)
\newpage
\begin{figure}
\plotone{hbw1.ps}
\caption{Evolution of kinetic energy of velocity perturbations for
background rotation laws $\Omega \propto R^{-q}$ near the marginally
stable constant angular momentum distribution $q=2$. Selected curves
are labeled by their $q$ value. Top: Low resolution $32^3$ grid zone
simulations with initial maximum perturbation $\delta v_y =
0.1L\Omega$. Only the upper two curves ($q=2$ and $q=1.99$) show any
perturbation amplification. Middle: Simulations with $64^3$ grid zone
resolution and initial perturbation amplitude $\delta v_y =
0.1L\Omega$. The 6 curves correspond to $q=2.0$ to $q=1.94$ in
increments of 0.01. The $q=1.95$ curve remains level while the
$q=1.94$ curve declines with time. Bottom: Simulations with $64^3$ grid
zones and initial perturbation amplitude $\delta v_y = 0.01L\Omega$.
The 5 curves range from $q=2.0$ to $q=1.96$ in increments
of 0.01. Only the $q=2.0$ curve shows growth.
}
\end{figure}
\begin{figure}
\plotone{hbw2.ps}
\caption{
Evolution of $v_x$ (top) and $v_y$ (bottom) fluctuation
kinetic energy for simulations with resolutions of $32^3$, $64^3$,
$128^3$, and $256^3$ grid zones. Curves are labeled by resolution.
}
\end{figure}
\begin{figure}
\plotone{hbw3.ps}
\caption{
Time evolution of perturbed angular velocity kinetic energy
and volume-averaged Reynolds stress for the $32^3$ simulation. The
abrupt change of slope in $\rho\delta v_y^2/2$ (dashed line added for
reference) that occurs at $t=0.24$ (indicated by vertical line)
corresponds to the point in time when the Reynolds stress becomes
negative. A negative Reynolds stress provides a source for angular
velocity fluctuation energy; a positive Reynolds stress is a sink.
}
\end{figure}
\begin{figure}
\plotone{hbw4.ps}
\caption{
Evolution of $v_x$ (top) and $v_y$ (bottom) fluctuation
kinetic energy for 3 simulations using the PPM algorithm with $32^3$,
$64^3$, and $128^3$ grid zones (bold curves). The $32^3$ and $128^3$
grid zone simulations from Figure 2 (dashed curves) are included for
reference.
}
\end{figure}
\begin{figure}
\plotone{hbw5.ps}
\caption{
Time evolution of the Reynolds stress over the first orbit
in the Keplerian simulations for a range of resolutions and for both
the ZEUS and PPM (bold lines) numerical algorithms. The peak in the
stress is labeled by the corresponding resolution and algorithm.
}
\end{figure}
\begin{figure}
\plotone{hbw6.ps}
\caption{
One dimensional power spectrum $|\delta v(k)|^2$
for the $128^3$ (solid line) and $64^3$ (dashed line) PPM Keplerian
simulation at 1, 2 and 3 orbits. The horizontal line extending out to
$k/2\pi = 4$ is the power spectrum of the initial perturbation.
}
\end{figure}
\begin{figure}
\plotone{hbw7.ps}
\caption{
Time evolution of the perturbation kinetic energy in four
$32^3$ grid zone simulations of Keplerian shearing systems. The curves
are labeled by the maximum amplitude of the initial perturbations.
Larger initial perturbations show a larger rate of decay of the
perturbed kinetic energy.
}
\end{figure}
\begin{figure}
\plotone{hbw8.ps}
\caption{
One dimensional power spectrum $|\delta v(k)|^2$ at 0.25
orbits for a $32^3$ large amplitude perturbation simulation (solid line)
and a $32^3$ small amplitude perturbation simulation (dashed line).
The curves are labeled by their initial maximum perturbation amplitude.
}
\end{figure}
\begin{figure}\plotone{hbw9.ps}
\caption{
Time evolution of the perturbed kinetic energy in a
constant $\Omega$ simulation, labeled $q=0$, and a
Keplerian simulation, labeled $q=1.5$.
}
\end{figure}
\begin{figure}
\plotone{hbw10.ps}
\caption{
Reynolds stress (stars) and Maxwell stress (diamonds) for a
series of MHD shearing box simulations with different background
angular velocity distributions $q$. Stress values are time-averaged
over the entire simulation. Error bars correspond to one standard
deviation in the stress values.
}
\end{figure}
\end{document}
\section{Introduction}
LQ Hya (HD 82558) is a rapidly rotating
($v \sin i$ = 25 km s$^{-1}$), single K2 dwarf, classified as a
BY Dra variable (Fekel et al.~1986a; Fekel, Moffett \& Henry 1986b;
Strassmeier \& Hall 1988),
with a photometric rotational period of 1.600881 days (Strassmeier et al. 1997).
Its high lithium abundance (Fekel et al.~1986a)
suggests LQ Hya has an age $t < 7.5\times 10^{7}$ years (at least as young
as the youngest Pleiades star); Vilhu, Gustafsson \& Walter (1991) even
suggest that it may be a pre-main sequence object.
Saar, Piskunov \& Tuominen (1992, 1994), Strassmeier et al.~(1993)
and Rice \& Strassmeier (1998)
found variable spot distribution in
this star using Doppler-imaging, and
widespread magnetic fields have been detected by
Saar et al.~(1992, 1994), Basri \& Marcy (1994), Donati et al.~(1997), and Donati (1998).
LQ Hya is also (not surprisingly) a very active star, as indicated by emission
in several chromospheric lines, including
Ca~{\sc ii} H \& K (Fekel et al.~1986a,b; Strassmeier et al.~1990),
Ca~{\sc ii} $\lambda$8542 (Basri \& Marcy 1994), and
H$\alpha$ (Vilhu et al.~1991), which can also appear as
partly (Strassmeier et al.~1993) or completely filled-in absorption
(Fekel et al.~1986a). A filled-in He~{\sc i} D$_{3}$
line is reported by Vilhu et al.~(1991) and Saar et al.~(1997).
Strong UV chromospheric and transition region emission lines have also been
found by Simon \& Fekel (1987), and the star has
been detected by ROSAT (Pye et al.~1995) and EUVE (Bowyer et al.~1996).
Flares are believed to result from the release of magnetic free energy
stored in the corona through reconnection
(see reviews by Mirzoyan 1984; Haisch, Strong \& Rodon\`{o} 1991).
Many types of cool stars flare (Pettersen 1989), sometimes at levels
several orders of magnitude more energetic than
their solar counterparts.
In the dMe stars (or UV Cet type stars) optical flares are a common
phenomenon; in more luminous stars, however, flares are usually only
detected through UV or X-ray observations (e.g., Landini et al.~1986;
H\"{u}nsch \& Reimers 1995; Ayres et al.~1994); optical
flares are rare (Catalano 1990;
Saar, Nordstr\"om \& Andersen 1990; Henry \& Newsom 1996).
Ambruster \& Fekel (1990) detected a strong ultraviolet flare on LQ Hya in
four continuum bands between 1250 and 1850~\AA, while no
enhancement was evident in any of the chromospheric lines.
Recently, HST GHRS observations by Saar \& Bookbinder (1998) showed that
many low-level flares are present in the transition region lines of this
star. But consistent with its K2 spectral type and correspondingly
increased optical continuum, strong optical flares are rare on LQ Hya
(Montes et al.~1998b); Henry \& Newsom (1996), for example, saw none.
In this paper, we report one of these rare events:
the detection of an unusually strong optical flare in
LQ Hya through simultaneous observations of several optical chromospheric
activity indicators:
H$\alpha$, H$\beta$,
Na~{\sc i} D$_{1}$, D$_{2}$, He~{\sc i} D$_{3}$, Mg~{\sc i} b triplet
lines, and several UV chromospheric and transition region lines.
In Sect. 2 we give the details of our observations and data reduction.
In Sect. 3 we describe the different aspects of the flare
deduced from our echelle spectra, such as
the continuum enhancement, the response of chromospheric lines
to the flare, the variation of other photospheric lines,
the energy released in various emission features as a function of time
during the flare, and the line asymmetries.
Finally, Sect. 4 gives the conclusions.
\begin{table}
\caption[]{Observing log WHT/UES (1993 December 22)
\label{tab:obslogues93}}
\begin{flushleft}
\scriptsize
\begin{tabular}{ccccl}
\noalign{\smallskip}
\hline
\noalign{\smallskip}
UT & JD & {$\varphi$} &
S/N & Description \\
\noalign{\smallskip}
(h:m:s) & (2449343.0+) & &
& \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
01:14:26 & 0.55 & 0.444 & 52 & Quiescent \\
02:41:52 & 0.61 & 0.482 & 108 & Quiescent \\
04:35:48 & 0.69 & 0.532 & 145 & Impulsive \\
05:31:18 & 0.73 & 0.557 & 118 & Flare maximum \\
06:00:43 & 0.75 & 0.569 & 125 & Gradual \\
06:07:52 & 0.76 & 0.573 & 134 & " \\
06:14:20 & 0.76 & 0.576 & 132 & " \\
06:21:04 & 0.76 & 0.578 & 129 & " \\
06:29:06 & 0.77 & 0.582 & 123 & " \\
06:35:06 & 0.77 & 0.585 & 124 & " \\
06:41:03 & 0.78 & 0.587 & 129 & " \\
06:47:01 & 0.78 & 0.590 & 130 & " \\
06:53:01 & 0.79 & 0.592 & 122 & " \\
06:59:08 & 0.79 & 0.595 & 123 & " \\
07:05:06 & 0.80 & 0.598 & 122 & " \\
07:11:04 & 0.80 & 0.600 & 128 & " \\
07:17:08 & 0.80 & 0.603 & 128 & " \\
07:23:07 & 0.81 & 0.605 & 124 & " \\
07:29:05 & 0.81 & 0.608 & 129 & " \\
\noalign{\smallskip}
\hline
\end{tabular}
\end{flushleft}
\end{table}
\begin{table}
\caption[]{Continuum variation$^*$ during the flare
\label{tab:continuum}}
\begin{flushleft}
\begin{tabular}{lcccc}
\hline
\noalign{\smallskip}
{Obs.} & $\lambda$4866 & $\lambda$5175
& $\lambda$5868 & $\lambda$6540 \\
(UT) & (\%) & (\%) & (\%) & (\%) \\
\hline
\noalign{\smallskip}
02:42$^1$ & - & - & - & - \\
04:36 & 34 & 32 & 26 & 23 \\
05:31$^2$ & 10 & 8 & 7 & 8 \\
06:01 & 5 & 5 & 4 & 6 \\
06:29 & 6 & 7 & 5 & 4 \\
07:29 & 9 & 9 & 7 & 2 \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\multicolumn{5}{l}{$^*$ near H$\beta$, Mg {\sc i} b, He {\sc i} D$_{3}$,
and H$\alpha$, respectively} \\
\multicolumn{5}{l}{$^1$Quiescent spectrum; $^2$Flare maximum }
\end{tabular}
\end{flushleft}
\end{table}
\section{Observations and Data Reduction}
Echelle spectroscopic observations
of LQ Hya were obtained
with the 4.2m William Herschel Telescope (WHT) and the
Utrecht Echelle Spectrograph (UES) on 1993 December 22,
covering several optical chromospheric activity indicators.
These WHT/UES spectra were obtained with echelle 31
(31.6 grooves per mm) and a 1024$\times$1024 pixel TEK2 CCD as detector.
The central wavelength is 5808~\AA$\ $ covering a wavelength range
from 4842 to 7725 \AA$\ $ in a total of 44 echelle orders.
The reciprocal dispersion ranged from 0.048 to 0.076~\AA/pixel.
In Table~\ref{tab:obslogues93} we give the observing log.
For each echelle spectrum we list
the universal time (UT), the Julian Date (JD),
the rotational phase ($\varphi$),
and signal to noise ratio (S/N) obtained in the H$\alpha$ line region.
The rotational phase ($\varphi$) was calculated with the
ephemeris recently given by Strassmeier et al.~(1997)
(T$_{0}$~=~2445275.0, P$_{\rm phtm}$~=~1.600881);
for alternative period determinations see also
Strassmeier et al.~(1993) and Jetsu (1993).
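For concreteness, the phase computation reduces to the fractional part
of the elapsed time measured in rotation periods. A minimal sketch
(the function name is ours, not from any cited reduction code):

```python
def rotational_phase(jd, t0=2445275.0, period=1.600881):
    """Rotational phase from the Strassmeier et al. (1997) ephemeris:
    phase = frac((JD - T0) / P_phtm)."""
    return ((jd - t0) / period) % 1.0

# e.g., the first spectrum of Table 1 (JD 2449343.55) falls at phase ~0.444
```

Applied to the Julian Dates of Table~\ref{tab:obslogues93}, this
reproduces the $\varphi$ column.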
The spectra have been extracted using the standard IRAF
reduction procedures
\footnote{IRAF is distributed by the National Optical Observatory,
which is operated by the Association of Universities for Research in
Astronomy, Inc., under contract with the NSF.}
(bias subtraction,
flat-field division, and optimal extraction of the spectra).
The wavelength calibration was obtained using
spectra of a Th-Ar lamp.
Finally, the spectra were normalized by
a polynomial fit to the observed continuum.
Frequent IUE SWP and LWP spectra of LQ Hya were taken between 15 and 24
Dec 1993. The data were reduced with standard IUE software and fluxes
above background determined by simple integration (Table \ref{tab:uv_fluxes}).
Here we analyze the NEWSIPS calibrated data, resulting
in some small changes in the measured
fluxes relative to our initial results (Montes et al.~1998b).
\section{Description of the Flare}
We detected a strong flare during the echelle observations of LQ Hya
on 1993 December 22.
The temporal evolution of the flare consists of an initial
impulsive phase which started between 2:42 UT (last
quiescent optical spectrum)
and 4:07 UT (end of the first IUE exposure with enhanced emission).
By the next optical spectrum (4:36 UT)
strong increases in the chromospheric lines and continuum are seen.
The optical chromospheric lines reached maximum intensity at 5:31 UT,
by which time the continuum enhancement had already strongly decreased.
After this, the lines slowly decreased in a gradual phase that
lasted at least until the end of the observation (07:29 UT),
i.e., $>$ 2 hours.
In the following we describe in detail the various aspects of the flare
deduced from our spectra, first exploring the time variation of the
continuum, and then the response of the lines.
Line emission is seen both in
the ``typical" chromospheric diagnostics (H~{\sc i} Balmer, and
He~{\sc i} D$_{3}$)
and, after the subtraction of the quiescent spectrum,
also in other He~{\sc i} lines
($\lambda$4921.9, 5015.7, and 6678.2~\AA)
and other strong lines such as the Na~{\sc i} D$_{1}$ and D$_{2}$,
the Mg~{\sc i} b triplet and several Fe~{\sc i} and Fe~{\sc ii} lines.
Finally, we calculate the energy release during the flare
and we analyse the broad component and the asymmetry exhibited by
some of the emission lines.
\subsection{The variation of the continuum}
Our echelle spectra show a change in depth of all the
photospheric lines due to continuum enhancement during the flare.
We have determined the contribution of the flare to the
total observed continuum by calculating what fraction of a
linear continuum must be added to
the quiescent spectrum in order to reproduce the corresponding flare spectrum.
In Table \ref{tab:continuum} we give this contribution in
representative spectral orders for several spectra
during the flare.
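The fraction acts like a veiling term: adding a flat continuum $c$ to a
normalized quiescent spectrum $S_q$ gives $(S_q+c)/(1+c)$, so every
line depth shrinks by $1/(1+c)$. A minimal sketch of recovering $c$
from the depth change of a single photospheric line (illustrative
only; the actual fits used whole spectral orders):

```python
def flare_continuum_fraction(depth_quiescent, depth_flare):
    """Fraction c of extra (flat) continuum needed so that a quiescent
    line of depth d_q appears with depth d_f = d_q / (1 + c)."""
    return depth_quiescent / depth_flare - 1.0

# a line whose depth drops from 0.50 to 0.50/1.34 implies c = 0.34 (34%)
```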
The maximum contribution of the flare to the continuum is reached at the
beginning of the event (the impulsive phase) and decreases thereafter. It
clearly depends on wavelength:
in the H$\beta$ line region the maximum contribution is 34 \%,
in the He~{\sc i} D$_{3}$ line region is
26 \%, and in the H$\alpha$ line region is 23 \%, giving a power law
index $F_{\rm cont} \propto \lambda^{-1.35}$ and an approximate blackbody
temperature of $T \approx$ 7500 K.
The continuum behaviour is thus in agreement with photometry
of other stellar flares, showing that the flare is initially dominated
by strong continuum radiation, strongest at short wavelengths.
The temperature indicated suggests that the plasma has already
cooled somewhat and thus our
``impulsive" spectrum (4:36 UT) may come somewhat late in the impulsive phase.
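The quoted index can be checked directly against the maxima of
Table~\ref{tab:continuum} with a least-squares fit of
$\log F_{\rm cont}$ against $\log\lambda$; a sketch in pure Python
(wavelengths are the representative order centres from the table
header):

```python
import math

# maximum flare continuum contribution (per cent) vs. wavelength (Angstrom)
wavelengths = [4866.0, 5175.0, 5868.0, 6540.0]
fractions = [34.0, 32.0, 26.0, 23.0]

def powerlaw_index(lams, fracs):
    """Least-squares slope of log(F) vs log(lambda), i.e. the index n
    in F_cont proportional to lambda**n."""
    xs = [math.log(l) for l in lams]
    ys = [math.log(f) for f in fracs]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# the slope comes out near -1.4, consistent with the
# F_cont ~ lambda^-1.35 quoted in the text
```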
\subsection{The response of the optical Chromospheric lines to the flare}
To analyse the behavior
of the various optical chromospheric lines during the flare,
we first subtracted the quiescent spectrum (UT:02:42),
as is normally done in the analysis of flare stars.
However, since the star is active, this procedure
underestimates the total
chromospheric contribution in these features, and ignores any
variation (due to e.g., rotational modulation or evolution) of the ``quiescent"
state. To obtain total chromospheric contribution,
we applied the spectral subtraction technique
(i.e., subtraction of the rotationally broadened, radial-velocity
shifted spectrum of a inactive star
chosen to match the spectral type and luminosity class of LQ Hya;
see Montes et al.~1995a,~b,~c).
We used HD 10476 (K1V) as the inactive reference star, taken from the
spectral library of Montes \& Mart\'{\i}n (1998).
In the case of the H$\alpha$ and H$\beta$ lines
we have computed, for all the flare spectra, both
the observed - quiescent (O-Q) and observed - reference (O-R) profiles
(see Figures~\ref{fig:uesha} and ~\ref{fig:ueshb}).
For the rest of the lines, not affected by chromospheric activity in the
quiescent spectrum,
we studied only the (O-Q) spectra
(see Figures~\ref{fig:ueshed3}, \ref{fig:ueshe6678}, ~\ref{fig:uesna},
~\ref{fig:uesmg}).
In all cases, before the standard spectrum
(quiescent or reference) is subtracted we take into account
the contribution of the flare to the continuum (\S 3.1)
in each spectral region, for each flare spectrum.
\subsubsection{The H$\alpha$ line}
In Fig.~\ref{fig:uesha} we have plotted the quiescent spectrum,
the reference star spectrum, and
representative spectra during the development of the flare in
the H$\alpha$ region.
In the left panel we plot the observed spectra,
in the central panel the (O-Q) profiles,
and in the right panel the (O-R) profiles.
This figure clearly shows the conspicuous H$\alpha$ emission
enhancement, from a weak emission
above the continuum (quiescent spectrum), to a very strong
and broad emission at the maximum of the flare.
The excess H$\alpha$ emission equivalent width in the (O-R)
spectra increases by a factor
of $\approx$2.7 in an interval of 2.8 hours.
After the maximum the emission decreases slowly; if modeled with an
exponential decay EW $\propto$ exp($-t/\beta$), the e-folding time
$\beta \sim$2.5 hours in the first hour, slowing even further to
$\beta \sim$11 hours in the second hour.
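With only two epochs, the e-folding time follows directly from
$\beta = \Delta t / \ln({\rm EW}_1/{\rm EW}_2)$. A sketch, applied to
the flare-only excess (the O$-$R equivalent widths of
Table~\ref{tab:measuresha} minus the 1.163~\AA\ quiescent value; this
is our reading of how the quoted $\beta$ values arise, not a procedure
stated explicitly in the text):

```python
import math

def efolding_time(dt_hours, ew1, ew2):
    """e-folding time beta (hours) assuming EW(t) = EW_0 * exp(-t/beta)."""
    return dt_hours / math.log(ew1 / ew2)

# flare-only excess = EW_T(O-R) minus the quiescent EW_T (1.163 A):
beta_1 = efolding_time(58.0 / 60.0, 3.090 - 1.163, 2.501 - 1.163)  # 05:31->06:29
beta_2 = efolding_time(1.0, 2.501 - 1.163, 2.400 - 1.163)          # 06:29->07:29
# beta_1 comes out near 2.6 h and beta_2 near 13 h, of the same order
# as the ~2.5 h and ~11 h quoted above
```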
\begin{figure*}
{\psfig{figure=MY690_f1.ps,bbllx=20pt,bblly=166pt,bburx=705pt,bbury=678pt,height=16.0cm,width=17.5cm,clip=}}
\caption[ ]{H$\alpha$ observed spectra (left panel),
after the subtraction of the quiescent spectrum (central panel) and
after the spectral subtraction (right panel)
\label{fig:uesha} }
\end{figure*}
\begin{figure*}
{\psfig{figure=MY690_f2.ps,bbllx=20pt,bblly=166pt,bburx=705pt,bbury=678pt,height=16.0cm,width=17.5cm,clip=}}
\caption[ ]{H$\beta$ observed spectra (left panel),
after the subtraction of the quiescent spectrum (central panel) and
after the spectral subtraction (right panel)
\label{fig:ueshb} }
\end{figure*}
One significant aspect of these spectra is a broad emission component
in both the (O-Q) and (O-R) profiles, which
makes the total emission poorly matched by a single-Gaussian fit.
To study these profiles, we have
therefore fitted them using
two Gaussian components: a narrow component (N) having a FWHM of
56-69 km~s$^{-1}$ and a broad component (B) with
190 $\leq$ FWHM $\leq$ 293 km~s$^{-1}$.
In Table~\ref{tab:measuresha} we list the parameters (I, FWHM, EW)
of the broad and narrow components.
As can be seen in this table, the contribution of the B component
to the total EW and FWHM of the line is a maximum in the
impulsive phase. The line profiles are also asymmetric,
and the two Gaussian fit is optimized when the broad component
is blue- or red-shifted with respect to the narrow component
(see the $\Delta \lambda$ = $\lambda$$_{\rm N}$ - $\lambda$$_{\rm B}$
value in Table~\ref{tab:measuresha}). We discuss this further in \S 3.5.
In Table~\ref{tab:measuresha} we also give,
for the total subtracted spectra,
the peak emission intensity (I),
the excess H$\alpha$ emission equivalent width (EW(H$\alpha$)), and
absolute fluxes at the stellar surface, logF$_{\rm S}$(H$\alpha$),
in erg cm$^{-2}$ s$^{-1}$
obtained with the calibration of Hall (1996).
The time evolution of the EW(H$\alpha$) during the flare is
shown in Fig.~\ref{fig:lqhya_ews}.
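Since the Gaussian fits are made in wavelength while the widths are
quoted in velocity, the conversion is $\Delta\lambda =
\lambda_0\,({\rm FWHM}/c)$, with ${\rm FWHM} = 2\sqrt{2\ln 2}\,\sigma$
for a Gaussian. Two small helpers (a convenience sketch, not code from
the paper):

```python
import math

C_KMS = 299792.458  # speed of light, km/s

def fwhm_kms_to_angstrom(fwhm_kms, lam0_angstrom):
    """Convert a Doppler FWHM in km/s to a wavelength FWHM in Angstrom."""
    return lam0_angstrom * fwhm_kms / C_KMS

def fwhm_to_sigma(fwhm):
    """Gaussian FWHM -> standard deviation: FWHM = 2*sqrt(2 ln 2)*sigma."""
    return fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))

# at H-alpha (6563 A) the 56 km/s narrow component corresponds to ~1.23 A,
# matching the narrow-component FWHM values in Table 3
```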
\begin{table*}
\caption[]{H$\alpha$ parameters during the flare in the
subtracted profiles for the two Gaussian component fits and
for the total emission
\label{tab:measuresha}}
\begin{flushleft}
\begin{tabular}{lcccccccccccccc}
\hline
&
\multicolumn{4}{c}{H$\alpha$ broad component} & &
\multicolumn{4}{c}{H$\alpha$ narrow component} \\
\cline{2-5}\cline{7-10}
\noalign{\smallskip}
{Obs.} &
{I} & {\scriptsize FWHM} & EW$_{\rm B}$ & $_{\rm B}$/$_{\rm T}$ & &
{I} & {\scriptsize FWHM} & EW$_{\rm N}$ & $_{\rm N}$/$_{\rm T}$ &
$\Delta \lambda$ & I$_{\rm T}$ & EW$_{\rm T}$ & $\log {\rm F}_{\rm T}$ \\
(UT) &
& {\scriptsize (\AA)} & {\scriptsize (\AA)} & (\%) & &
& {\scriptsize (\AA)} & {\scriptsize (\AA)} & (\%) &
($\lambda$$_{\rm N}$ - $\lambda$$_{\rm B}$) & & {\scriptsize (\AA)} & \\
\hline
{\bf Obs. - Quiescent} \\
\hline
\noalign{\smallskip}
02:42 (Quiescent) & - & - & - & - &
& - & - & - & - & - & 0.000 & 0.000 & 0.00 \\
04:36 & 0.137 & 7.709 & 1.098 & 78.4 &
& 0.117 & 2.414 & 0.304 & 45.2 & +0.34 & 0.254 & 1.401 & 6.70 \\
05:31 (Maximum) & 0.192 & 6.766 & 1.371 & 65.4 &
& 0.281 & 2.427 & 0.725 & 34.6 & -0.06 & 0.471 & 2.096 & 6.88 \\
06:01 & 0.174 & 6.021 & 1.113 & 66.1 &
& 0.240 & 2.227 & 0.570 & 33.9 & -0.36 & 0.417 & 1.683 & 6.78 \\
06:29 & 0.140 & 6.414 & 0.948 & 66.7 &
& 0.199 & 2.238 & 0.473 & 33.3 & -0.42 & 0.331 & 1.421 & 6.71 \\
07:29 & 0.121 & 6.567 & 0.837 & 64.3 &
& 0.193 & 2.267 & 0.465 & 35.7 & -0.66 & 0.315 & 1.302 & 6.67 \\
\noalign{\smallskip}
\hline
{\bf Obs. - Reference} \\
\hline
\noalign{\smallskip}
02:42 (Quiescent)&0.053 & 4.168 & 0.236 & 20.3 &
& 0.714 & 1.221 & 0.927 & 79.7 & -0.12 & 0.753 & 1.163 & 6.62 \\
04:36 & 0.194 & 6.414 & 1.316 & 59.4 &
& 0.615 & 1.371 & 0.898 & 40.6 & +0.21 & 0.802 & 2.214 & 6.90 \\
05:31 (Maximum) & 0.307 & 5.532 & 1.809 & 58.5 &
& 0.835 & 1.442 & 1.281 & 41.5 & +0.07 & 1.132 & 3.090 & 7.04 \\
06:01 & 0.270 & 5.153 & 1.481 & 54.4 &
& 0.834 & 1.397 & 1.240 & 45.6 & -0.11 & 1.085 & 2.721 & 6.99 \\
06:29 & 0.226 & 5.326 & 1.280 & 51.2 &
& 0.835 & 1.221 & 1.221 & 48.8 & -0.17 & 1.043 & 2.501 & 6.95 \\
07:29 & 0.201 & 5.449 & 1.167 & 48.6 &
& 0.822 & 1.397 & 1.233 & 51.4 & -0.09 & 1.006 & 2.400 & 6.93 \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\end{tabular}
\end{flushleft}
\end{table*}
\begin{table*}
\caption[]{H$\beta$ parameters during the flare in the
subtracted profiles for the two Gaussian component fits and
for the total emission
\label{tab:measureshb}}
\begin{flushleft}
\begin{tabular}{lcccccccccccccc}
\noalign{\smallskip}
\hline
&
\multicolumn{4}{c}{H$\beta$ broad component} & &
\multicolumn{4}{c}{H$\beta$ narrow component} \\
\cline{2-5}\cline{7-10}
\noalign{\smallskip}
{Obs.} &
{I} & {\scriptsize FWHM} & EW$_{\rm B}$ & $_{\rm B}$/$_{\rm T}$ & &
{I} & {\scriptsize FWHM} & EW$_{\rm N}$ & $_{\rm N}$/$_{\rm T}$ &
$\Delta \lambda$ & I$_{\rm T}$ & EW$_{\rm T}$ & H$\alpha$/H$\beta$ \\
(UT) &
& {\scriptsize (\AA)} & {\scriptsize (\AA)} & (\%) & &
& {\scriptsize (\AA)} & {\scriptsize (\AA)} & (\%) &
($\lambda$$_{\rm N}$ - $\lambda$$_{\rm B}$) & & {\scriptsize (\AA)} & \\
\hline
{\bf Obs. - Quiescent} \\
\hline
\noalign{\smallskip}
02:42 (Quiescent) & - & - & - & - &
& - & - & - & - & - & 0.000 & 0.000 & 0.00 \\
04:36 & 0.208 & 5.185 & 1.142 & 86.3 &
& 0.117 & 1.450 & 0.181 & 13.7 & +0.03 & 0.326 & 1.323 & 1.143\\
05:31 (Maximum) & 0.279 & 4.440 & 1.142 & 69.7 &
& 0.357 & 1.507 & 0.572 & 30.3 & -0.08 & 0.644 & 1.888 & 1.498\\
06:01 & 0.203 & 4.429 & 0.957 & 65.5 &
& 0.328 & 1.439 & 0.503 & 34.5 & -0.15 & 0.532 & 1.450 & 1.253\\
06:29 & 0.178 & 4.599 & 0.870 & 70.7 &
& 0.248 & 1.361 & 0.359 & 29.3 & -0.20 & 0.429 & 1.230 & 1.246\\
07:29 & 0.158 & 4.569 & 0.767 & 73.9 &
& 0.178 & 1.428 & 0.271 & 26.1 & -0.30 & 0.366 & 1.038 & 1.353\\
\noalign{\smallskip}
\hline
{\bf Obs. - Reference} \\
\hline
\noalign{\smallskip}
02:42 (Quiescent)&- & - & - & - &
& - & - & - & - & - & 0.437 & 0.398 & 3.152\\
04:36 & 0.232 & 5.102 & 1.253 & 76.5 &
& 0.359 & 1.004 & 0.384 & 23.5 & -0.09 & 0.610 & 1.636 & 1.460\\
05:31 (Maximum) & 0.327 & 4.308 & 1.501 & 65.5 &
& 0.663 & 1.118 & 0.789 & 34.5 & -0.13 & 1.003 & 2.290 & 1.455\\
06:01 & 0.250 & 4.343 & 1.156 & 60.4 &
& 0.661 & 1.076 & 0.757 & 39.6 & -0.23 & 0.920 & 1.913 & 1.534\\
06:29 & 0.211 & 4.627 & 1.036 & 61.4 &
& 0.594 & 1.028 & 0.650 & 38.6 & -0.30 & 0.814 & 1.686 & 1.600\\
07:29 & 0.191 & 4.713 & 0.955 & 63.5 &
& 0.514 & 1.000 & 0.548 & 36.5 & -0.28 & 0.718 & 1.503 & 1.723\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\end{tabular}
\end{flushleft}
\end{table*}
\subsubsection{The H$\beta$ line}
Fig.~\ref{fig:ueshb} is as Fig.~\ref{fig:uesha} for the H$\beta$ line.
In this case the line changes from mostly filled-in absorption
(quiescent), to a strong and broad emission line at flare maximum.
The excess H$\beta$ emission EW in (O-R)
spectra increases by a factor of 5.8 from the quiescent level to the maximum,
a considerably larger enhancement than in the H$\alpha$ line.
However, during the gradual phase the emission
declines slightly more rapidly than the H$\alpha$ emission
(see Fig. \ref{fig:lqhya_ews}), with $\beta = 2.3$ hours in the first hour of
decay, slowing to $\beta = 5.9$ hours in the second. The more rapid decay
of H$\beta$ has also been observed in other solar and stellar flares
(Johns-Krull et al.~1997).
Like H$\alpha$, very broad wings are visible in the
(O-R) H$\beta$ profiles, and the results of double Gaussian fits are
given in Table~\ref{tab:measureshb}.
The contribution and FWHM of the broad component again reach a
maximum in the impulsive phase and
the profiles show an asymmetry similar to that seen in H$\alpha$.
\subsubsection{The H$\alpha$/H$\beta$ Balmer decrement }
The Balmer decrement
can be an important indication of physical parameters
such as electron density (Kunkel 1970).
It is commonly tabulated for stellar flares with respect
to the H$\gamma$ line (Hawley \& Pettersen 1991).
Lacking H$\gamma$ data in our spectra,
we have instead calculated the Balmer decrement
from the EW(H$\alpha$)/EW(H$\beta$) ratio,
assuming that the LQ Hya continua at H$\alpha$ and H$\beta$ have a ratio
appropriate to a blackbody at T$_{eff}$ = 4900 K
(F$_{\lambda 6563}$/F$_{\lambda 4861}$ = 1.08). The values obtained
are given in Table~\ref{tab:measureshb}.
We found that the H$\alpha$/H$\beta$ Balmer decrement
is shallower during the flare, changing from 3.15 at the quiescent state
to 1.46 at the impulsive and maximum phases of the flare,
indicating (unsurprisingly) a significant change in the properties of the
hydrogen-emitting regions during the flare.
In all the flares on dMe stars tabulated by Hawley \& Pettersen (1991),
the decrement for H$\beta$ and higher Balmer lines
was also shallower during the flare.
The H$\alpha$/H$\beta$ ratio can fall
to $<1$ during large stellar flares (Kunkel 1970) and in
the largest solar flares (Zirin \& Ferland 1980).
However, in other stellar flares steeper decrements (2.0)
are reported (Worden et al.~1984) and in the solar flares observed
by Johns-Krull et al.~(1997) this ratio only changed from 2.5 to 1.45.
\subsubsection{The He~{\sc i} D$_{3}$ and other He~{\sc i} lines}
In Fig.~\ref{fig:ueshed3} we show the quiescent spectrum and
representative flare spectra in the
He~{\sc i} D$_{3}$ region; the
left panel displays the observed spectra and
in the right panel, the (O-Q) profiles are plotted.
The He~{\sc i} D$_{3}$ line, completely filled-in in quiescence, goes
into emission during the flare,
reaching a maximum intensity (0.346) at the same time as the Balmer lines.
The subtracted profiles again show broad wings; an analysis
like that used with H$\alpha$ and H$\beta$ yields
the parameters in Table~\ref{tab:measuresheid3}.
This line is a well known diagnostic of flares in the Sun and in other stars.
In the Sun the D$_{3}$ line appears as absorption
in plage and weak flares and as emission in strong flares (Zirin 1988).
The He~{\sc i} D$_{3}$ feature is typically in emission in dMe stars
(e.g., Pettersen, Evans \& Coleman 1984);
in flare-like events in UV Ceti flare stars
(Kunkel 1970; Bopp \& Moffett 1973);
in strong flares in more luminous, active stars such as
the RS CVn systems II Peg
(Huenemoerder \& Ramsey 1987; Montes et al.~1997;
Berdyugina, Ilyin \& Tuominen 1998)
and UX Ari (Montes et al.~1996b; 1997),
the weak-lined T Tauri star V410 Tau (Welty \& Ramsey 1998), and in
the active G5V $\kappa$ Cet (Robinson \& Bopp 1987).
Thus, the detection of prominent D$_{3}$
emission indicates that we are observing a very strong flare in LQ Hya.
\begin{figure}
{\psfig{figure=MY690_f3.ps,bbllx=40pt,bblly=166pt,bburx=578pt,bbury=680pt,height=12.0cm,width=8.4cm,clip=}}
\caption[ ]{He~{\sc i} D$_{3}$ observed spectra (left panel) and
after the subtraction of the quiescent spectrum (right panel)
\label{fig:ueshed3} }
\end{figure}
Other He~{\sc i} lines have also been reported in stellar flares
(Bopp \& Moffett 1973; Hawley \& Pettersen 1991;
Abdul-Aziz et al. 1995; Abranin et al. 1998).
In our spectra, after the subtraction of the quiescent spectra,
we have also found an excess emission in other He~{\sc i} lines
at 4921.93, 5015.68 and 6678.15~\AA.
In particular, He {\sc i} $\lambda$6678.15 appears superimposed on the
Fe {\sc i} $\lambda$6677.99 absorption line
and the excess emission can be seen
even in the observed spectra (see Fig.~\ref{fig:ueshe6678} left panel).
When the quiescent spectrum is subtracted, excess He~{\sc i} emission
is clearly seen (see Fig.~\ref{fig:ueshe6678} right panel),
with a maximum peak intensity of 0.17.
The profiles have again been fitted with two Gaussians;
the corresponding parameters are given in Table~\ref{tab:measureshei6678}.
The temporal evolution of this He~{\sc i} line during the flare
is similar to the D$_{3}$ line.
\begin{figure}
{\psfig{figure=MY690_f4.ps,bbllx=40pt,bblly=166pt,bburx=578pt,bbury=680pt,height=12.0cm,width=8.4cm,clip=}}
\caption[ ]{He~{\sc i} $\lambda$6678 observed spectra (left panel) and
after the subtraction of the quiescent spectrum (right panel)
\label{fig:ueshe6678} }
\end{figure}
\begin{figure}
{\psfig{figure=MY690_f5.ps,bbllx=40pt,bblly=166pt,bburx=578pt,bbury=680pt,height=12.0cm,width=8.4cm,clip=}}
\caption[ ]{Spectra near the Na~{\sc i} D$_{1}$ and D$_{2}$ lines before
(left panel) and
after subtracting the quiescent spectrum (right panel)
\label{fig:uesna} }
\end{figure}
\begin{figure}
{\psfig{figure=MY690_f6.ps,bbllx=40pt,bblly=166pt,bburx=578pt,bbury=680pt,height=12.0cm,width=8.4cm,clip=}}
\caption[ ]{Spectra near the Mg~{\sc i} b triplet lines before
(left panel) and
after the subtraction of the quiescent spectrum (right panel)
\label{fig:uesmg} }
\end{figure}
\subsubsection{The Na~{\sc i} D$_{1}$ and D$_{2}$ lines}
The Na~{\sc i} D$_{1}$ and D$_{2}$ resonance lines at
5895.9~\AA\ and 5890.0~\AA\ are collisionally-controlled in the
atmospheres of late-type stars and thus provide information about
chromospheric activity (see Montes et al.~1996b, 1997, 1998a). Recent
models of these lines for M dwarfs have been made
by Andretta, Doyle, \& Byrne (1997).
Using model solar flares, Falchi, Falciani \& Smaldone (1990) found that the
Na~{\sc i} D$_{2}$ line shows modifications only in the core of its profile.
Subtraction of the modified reference star from the quiescent
spectrum reveals that excess emission
in the Na~{\sc i} D$_{1}$ and D$_{2}$ lines is very small;
we have therefore analysed only the (O-Q) spectra to study the
flare in these lines.
As can be seen in Fig.~\ref{fig:uesna}, the lines show a clear
filling-in of their cores, reaching a maximum intensity (0.13)
at the same time as the other chromospheric lines.
The peak intensity (I), FWHM, and EW for both lines are given in
Table~\ref{tab:measuresna}.
\subsubsection{The Mg~{\sc i} b triplet lines}
The strong Mg~{\sc i} b triplet lines $\lambda$$\lambda$5167, 5172, 5183
are formed in the lower chromosphere and the
temperature-minimum region and are good diagnostics of
activity (Basri, Wilcots \& Stout 1989;
Gunn \& Doyle 1997; Gunn, Doyle \& Houdebine 1997).
In some stellar flares these lines exhibit a central reversal
(Bopp \& Moffett 1973; Mochnacki \& Schommer 1979;
Abdul-Aziz et al. 1995; Abranin et al. 1998).
In the quiescent spectrum very strong absorption lines are observed, without
evidence of filling-in by activity. In the
flare spectra, however, a small reversal is observed
in line cores (Fig.~\ref{fig:uesmg} left panel).
After the subtraction of the quiescent spectrum, excess emission is clearly
visible, both in the three Mg~{\sc i} b lines (Fig.~\ref{fig:uesmg} right
panel), and (even more intensely) in the Fe~{\sc ii} feature at 5169.0~\AA.
The measured parameters for these lines are given in
Table~\ref{tab:measuresmg}.
\subsubsection{The variation of other photospheric lines}
The upper photosphere of late-type stars is also affected by
chromospheric activity, as seen in the filling-in of
strong photospheric lines
with relatively low excitation potentials in active dwarfs
(Basri et al.~1989) and pre-main sequence stars (Finkenzeller \& Basri 1987).
During moderately strong solar flares a large number of
photospheric lines are filled-in when the quiescent spectrum is subtracted
(Acampa et al.~1982; Mauas 1990; Johns-Krull et al.~1997).
Filling-in of some metal lines is also reported in earlier
observations of flares
of the UV Cet-type stars by Mochnacki \& Schommer (1979;
and references therein), Hawley \& Pettersen (1991),
and in recent observations by
Abdul-Aziz et al. (1995) and Abranin et al. (1998).
We have observed a slight filling in of many
photospheric absorption lines in our spectra during the flare.
These include all the lines reported to be activity-sensitive
by Basri et al.~(1989) and Finkenzeller \& Basri (1987),
many of the lines appearing in
the high resolution spectra of a moderate solar flare (Johns-Krull et al.~1997),
all the lines identified in the spectra of a more energetic
solar flare (Acampa et al.~1982),
and some of the lines reported in stellar flares
(Mochnacki \& Schommer 1979;
Abdul-Aziz et al. 1995; Abranin et al. 1998).
The lines with the largest filling in are those of multiplet 42 of
Fe {\sc ii} (4943.9, 5018.4, 5169.0~\AA).
Other lines with significant filling are those of
Fe {\sc ii} multiplets 48 (5316.8, 5362.9~\AA) and
49 (5197.6, 5234.6, 5276.0, 5316.6~\AA).
\begin{table*}
\caption[]{He~{\sc i} D$_{3}$ parameters during the flare in the
(Observed - Quiescent) profiles for the two Gaussian component
fits and for the total emission
\label{tab:measuresheid3}}
\begin{flushleft}
\begin{tabular}{lcccccccccccccc}
\hline
&
\multicolumn{4}{c}{He~{\sc i} D$_{3}$ broad component} & &
\multicolumn{4}{c}{He~{\sc i} D$_{3}$ narrow component} \\
\cline{2-5}\cline{7-10}
\noalign{\smallskip}
{Obs.} &
{I} & {\scriptsize FWHM} & EW$_{\rm B}$ & $_{\rm B}$/$_{\rm T}$ & &
{I} & {\scriptsize FWHM} & EW$_{\rm N}$ & $_{\rm N}$/$_{\rm T}$ &
$\Delta \lambda$ & I$_{\rm T}$ & EW$_{\rm T}$ \\
(UT) &
& {\scriptsize (\AA)} & {\scriptsize (\AA)} & (\%) & &
& {\scriptsize (\AA)} & {\scriptsize (\AA)} & (\%) &
($\lambda$$_{\rm N}$ - $\lambda$$_{\rm B}$) & & {\scriptsize (\AA)} \\
\hline
\noalign{\smallskip}
02:42 (Quiescent) & - & - & - & - &
& - & - & - & - & - & 0.000 & 0.000 \\
04:36 & 0.033 & 3.400 & 0.117 & 43.0 &
& 0.160 & 0.912 & 0.155 & 57.0 & +0.32 & 0.198 & 0.272 \\
05:31 (Maximum) & 0.051 & 2.981 & 0.161 & 46.5 &
& 0.226 & 0.772 & 0.185 & 53.5 & +0.03 & 0.272 & 0.346 \\
06:01 & 0.049 & 2.450 & 0.128 & 53.3 &
& 0.142 & 0.745 & 0.112 & 46.7 & -0.13 & 0.190 & 0.241 \\
06:29 & 0.030 & 2.802 & 0.088 & 43.1 &
& 0.133 & 0.825 & 0.117 & 56.9 & -0.21 & 0.161 & 0.205 \\
07:29 & 0.034 & 2.554 & 0.093 & 42.0 &
& 0.145 & 0.826 & 0.128 & 58.0 & -0.11 & 0.184 & 0.220 \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\end{tabular}
\end{flushleft}
\end{table*}
\begin{table*}
\caption[]{He~{\sc i} $\lambda$6678 parameters during the flare in the
(O - Q) profiles for the two Gaussian component
fits and for the total emission
\label{tab:measureshei6678}}
\begin{flushleft}
\begin{tabular}{lcccccccccccccc}
\hline
&
\multicolumn{4}{c}{He~{\sc i} $\lambda$6678 broad component} & &
\multicolumn{4}{c}{He~{\sc i} $\lambda$6678 narrow component} \\
\cline{2-5}\cline{7-10}
\noalign{\smallskip}
{Obs.} &
{I} & {\scriptsize FWHM} & EW$_{\rm B}$ & $_{\rm B}$/$_{\rm T}$ & &
{I} & {\scriptsize FWHM} & EW$_{\rm N}$ & $_{\rm N}$/$_{\rm T}$ &
$\Delta \lambda$ & I$_{\rm T}$ & EW$_{\rm T}$ \\
(UT) &
& {\scriptsize (\AA)} & {\scriptsize (\AA)} & (\%) & &
& {\scriptsize (\AA)} & {\scriptsize (\AA)} & (\%) &
($\lambda$$_{\rm N}$ - $\lambda$$_{\rm B}$) & & {\scriptsize (\AA)} \\
\hline
\noalign{\smallskip}
02:42 (Quiescent) & - & - & - & - &
& - & - & - & - & - & 0.000 & 0.000 \\
04:36 & 0.094 & 0.911 & 0.091 & 95.4 &
& 0.017 & 0.249 & 0.004 & 04.6 & -0.08 & 0.106 & 0.095 \\
05:31 (Maximum) & 0.093 & 0.857 & 0.085 & 72.4 &
& 0.079 & 0.384 & 0.032 & 27.6 & -0.09 & 0.171 & 0.117 \\
06:01 & 0.080 & 0.797 & 0.067 & 84.5 &
& 0.041 & 0.285 & 0.012 & 15.5 & -0.08 & 0.120 & 0.080 \\
06:29 & 0.062 & 0.833 & 0.055 & 78.3 &
& 0.040 & 0.360 & 0.015 & 21.7 & -0.13 & 0.101 & 0.070 \\
07:29 & 0.047 & 1.060 & 0.053 & 68.6 &
& 0.060 & 0.381 & 0.024 & 31.4 & -0.18 & 0.111 & 0.078 \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\end{tabular}
\end{flushleft}
\end{table*}
\begin{table*}
\caption[]{Na~{\sc i} D line parameters
during the flare in the (O - Q) profile
\label{tab:measuresna}}
\begin{flushleft}
\begin{tabular}{lccccccc}
\hline
&
\multicolumn{3}{c}{Na~{\sc i} D$_{2}$} & &
\multicolumn{3}{c}{Na~{\sc i} D$_{1}$} \\
\cline{2-4}\cline{6-8}
\noalign{\smallskip}
{Obs.} &
{I} & {FWHM} & EW & &
{I} & {FWHM} & EW \\
(UT) &
& {\scriptsize (\AA)} & {\scriptsize (\AA)} & &
& {\scriptsize (\AA)} & {\scriptsize (\AA)} \\
\hline
\noalign{\smallskip}
02:42$^1$ &0.000& 0.000 & 0.000 &
& 0.000 & 0.000 & 0.000 \\
04:36 & 0.058 & 0.674 & 0.036 &
& 0.078 & 0.549 & 0.038 \\
05:31$^2$ & 0.106 & 0.584 & 0.059 &
& 0.127 & 0.487 & 0.065 \\
06:01 & 0.095 & 0.600 & 0.057 &
& 0.104 & 0.528 & 0.056 \\
06:29 & 0.074 & 0.718 & 0.048 &
& 0.092 & 0.657 & 0.052 \\
07:29 & 0.090 & 0.915 & 0.063 &
& 0.117 & 0.666 & 0.074 \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\multicolumn{8}{l}{$^1$Quiescent spectrum; $^2$Flare maximum }
\end{tabular}
\end{flushleft}
\end{table*}
\begin{table*}
\caption[]{Mg~{\sc i} b triplet and Fe~{\sc ii} $\lambda$5169
line parameters during the flare in the (O - Q) profile
\label{tab:measuresmg}}
\begin{flushleft}
\begin{tabular}{lccccccccccccccccccc}
\hline
&
\multicolumn{3}{c}{Mg~{\sc i} b$_{3}$} & &
\multicolumn{3}{c}{Mg~{\sc i} b$_{2}$} & &
\multicolumn{3}{c}{Mg~{\sc i} b$_{1}$} & &
\multicolumn{3}{c}{Fe~{\sc ii} $\lambda$5169 } \\
\cline{2-4}\cline{6-8}\cline{10-12}\cline{14-16}
\noalign{\smallskip}
{Obs.} &
{I} & {FWHM} & EW & &
{I} & {FWHM} & EW & &
{I} & {FWHM} & EW & &
{I} & {FWHM} & EW \\
(UT) &
& {\scriptsize (\AA)} & {\scriptsize (\AA)} & &
& {\scriptsize (\AA)} & {\scriptsize (\AA)} & &
& {\scriptsize (\AA)} & {\scriptsize (\AA)} & &
& {\scriptsize (\AA)} & {\scriptsize (\AA)} \\
\hline
\noalign{\smallskip}
02:42$^1$ &0.000& 0.000 & 0.000 &
& 0.000 & 0.000 & 0.000 &
& 0.000 & 0.000 & 0.000 &
& 0.000 & 0.000 & 0.000 \\
04:36 & 0.078 & 0.531 & 0.044 &
& 0.069 & 0.503 & 0.037 &
& 0.060 & 0.467 & 0.030 &
& 0.154 & 0.458 & 0.075 \\
05:31$^2$ & 0.137 & 0.392 & 0.057 &
& 0.122 & 0.379 & 0.049 &
& 0.111 & 0.350 & 0.041 &
& 0.275 & 0.401 & 0.118 \\
06:01 & 0.111 & 0.425 & 0.050 &
& 0.102 & 0.381 & 0.042 &
& 0.089 & 0.359 & 0.034 &
& 0.222 & 0.402 & 0.095 \\
06:29 & 0.093 & 0.342 & 0.034 &
& 0.082 & 0.349 & 0.030 &
& 0.069 & 0.337 & 0.025 &
& 0.177 & 0.416 & 0.078 \\
07:29 & 0.103 & 0.414 & 0.045 &
& 0.100 & 0.318 & 0.034 &
& 0.082 & 0.265 & 0.023 &
& 0.204 & 0.387 & 0.084 \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\multicolumn{8}{l}{$^1$Quiescent spectrum; $^2$Flare maximum }
\end{tabular}
\end{flushleft}
\end{table*}
\subsection{The flare in the UV}
Ultraviolet line and continuum fluxes are given in Table~\ref{tab:uv_fluxes}.
We coadd selected strong chromospheric (CHR) and transition region (TR)
line fluxes to improve statistics: $f_{\rm TR}$ is the
sum of N~{\sc v} ($T_{\rm form} \sim 1.25\times10^5$ K),
Si~{\sc iv} ($T_{\rm form} \sim 8\times10^4$ K) and C~{\sc iv} ($T_{\rm form}
\sim 10^{5}$ K), while $f_{\rm CHR}$ coadds O~{\sc i} and C~{\sc i}
($T_{\rm form} \sim
7\times10^3$ K), C~{\sc ii} ($T_{\rm form} \sim 2\times10^4$ K),
and Si~{\sc ii} ($T_{\rm form}\sim 1\times10^4$ K).
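The coadded indices are straight sums of the individual line fluxes; for example, using the $\langle$quiescent$\rangle$ row of Table~\ref{tab:uv_fluxes}:

```python
# Mean quiescent line fluxes at earth (10^-13 erg cm^-2 s^-1),
# <quiescent> row of Table uv_fluxes.
chr_lines = {"O I": 0.7, "C II": 1.5, "C I": 1.0, "Si II": 0.9}
tr_lines  = {"N V": 0.4, "Si IV": 1.1, "C IV": 3.1}

f_chr = sum(chr_lines.values())  # tabulated sum: 4.1
f_tr  = sum(tr_lines.values())   # tabulated sum: 4.6
```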
Figure~\ref{fig:uv_spec_s3} shows the quiescent (an average of 10
exposures), impulsive (JD$_{\rm start}$ = 2449343.568; 1:37 UT),
and just post-maximum (= ``UV peak";
JD$_{\rm start}$ = 2449343.783; 6:48 UT) UV spectra.
Figure~\ref{fig:lqmg2} depicts Mg~{\sc ii} spectra
between the SWP data in Figure~\ref{fig:uv_spec_s3}, plus an
average quiescent spectrum.
Figure~\ref{fig:lqhya_uv} shows the time variation of $f_{\rm TR}$
and $f_{\rm CHR}$ during the period of the optical flare. We note that
while the UV ``impulsive" spectrum ends before the first flare-affected
optical spectrum (4:36 UT),
it already shows noticeably enhanced (1.5$\times$quiescent) transition
region (TR) and continuum emission. Thus the true ``impulsive" phase
starts earlier than the optical data
indicates, with high $T_{\rm form}$ TR lines leading the evolution
(as is typical).
\begin{figure*}
{\psfig{figure=MY690_f7.ps,bbllx=43pt,bblly=400pt,bburx=554pt,bbury=680pt,height=8.4cm,width=17.5cm}}
\caption[ ]{
IUE SWP-lo spectra showing fluxes at earth (per \AA)
for the mean quiescent phase (average of 10 spectra; heavy solid), the
early impulsive phase (JD 2449343.568; thin solid) and
the near-maximum phase (JD 2449343.783; dotted). The spectra have been
smoothed by a 3 pixel running mean.
\label{fig:uv_spec_s3} }
\end{figure*}
\begin{figure}
{\psfig{figure=MY690_f8.ps,bbllx=102pt,bblly=374pt,bburx=548pt,bbury=719pt,height=7.4cm,width=8.4cm}}
\caption[ ]{IUE LWP-hi spectra of the Mg~{\sc ii} k line
showing fluxes at earth (per \AA)
for the mean quiescent phase (average of 8 spectra; solid), and
two spectra near flare peak (taken between the
two SWP spectra in Fig.~\ref{fig:uv_spec_s3}); the later spectrum
is offset by 10$^{-11}$ for clarity.
Spectra with the ISM absorption approximately removed (dashed)
and simple models combining ISM, enhanced quiescent, and broad
components (see text) are shown (dotted); blends are also
noted (thick vertical marks), as are the broad component peaks
(note the shift between the spectra).
\label{fig:lqmg2} }
\end{figure}
Besides the lines measured in Table \ref{tab:uv_fluxes},
numerous weak lines are also present (typically in UV peak
and/or quiescent spectra). The most certain of these are
C~{\sc i} 1261, 1277, 1280, 1560~\AA,
Si~{\sc iii} $\lambda$ 1295-98, 1892~\AA,
S~{\sc i} 1820~\AA,
Al~{\sc iii} 1855, 1863~\AA,
Fe~{\sc ii} 1611, 1633, 1671~\AA, and
O~{\sc iii} 1666~\AA.
The following are also detected with less certainty:
C~{\sc i} 1463, 1493~\AA,
S~{\sc i} 1473, 1483, 1487, 1900~\AA,
Si~{\sc i} 1258, 1267, 1565, 1574~\AA,
Si~{\sc ii} 1533~\AA, Fe~{\sc ii} 1372, 1588, 1598~\AA,
S~{\sc iv} 1417~\AA,
O~{\sc iv} 1407~\AA,
O~{\sc v} 1371~\AA, and
S~{\sc v} 1502~\AA.
Also of interest is the possible detection of the coronal
1354.1~\AA\ Fe~{\sc xxi} line ($T_{\rm form} \approx 10^7$ K) in the
UV peak and (perhaps) in the quiescent spectrum.
Possible contributions from other nearby lines
(e.g., C~{\sc i} 1354.3~\AA) make
this identification uncertain, though.
The strongest high $T_{\rm form}$ line, C~{\sc iv},
shows a strong blueward asymmetry in the impulsive spectrum
(its centroid is shifted by $\approx -250$ km s$^{-1}$),
perhaps foreshadowing the
blueshifts in the broad components of the optical Balmer and He~{\sc i} lines
seen later (see \S 3.5). Si~{\sc iv} and N~{\sc v}
(with similar $T_{\rm form}$) also appear to be blue-shifted,
though the shift is uncertain due
to probable blends and the weakness of these features.
Cooler strong lines (C~{\sc ii}, He~{\sc ii})
show no shift. The lack of H$_2$ rules out the 1547~\AA~line as responsible,
but we note that numerous
features near C~{\sc iv} are coincident with (normally weak) Si~{\sc i}
(1545, 1552, 1555-8, 1563~\AA) and C~{\sc i} (1542~\AA) lines.
Thus, the apparent C~{\sc iv} blueshift {\it may} be partly due to blends.
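For reference, a $\approx -250$ km s$^{-1}$ centroid shift corresponds, via the non-relativistic Doppler relation $\Delta\lambda = (v/c)\,\lambda$, to about $-1.3$~\AA\ at the C~{\sc iv} doublet:

```python
C_KM_S = 2.9979e5  # speed of light in km/s

def doppler_shift_A(v_km_s, lam_A):
    """Non-relativistic Doppler wavelength shift in Angstrom."""
    return v_km_s / C_KM_S * lam_A

# Mean wavelength of the C IV 1548+1551 doublet.
dlam = doppler_shift_A(-250.0, 1549.5)  # ~ -1.3 A
```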
There are no SWP data during optical flare maximum,
though two LWP spectra appear to show Mg~{\sc ii} peaking
at about this time (Fig.~\ref{fig:lqmg2}). The presence of ISM absorption,
emission self-reversal,
and nearby blends complicates the interpretation of these features.
We first constructed a ``quiescent" spectrum $F_{\rm Q}$
from the average of
eight LWP images which appeared uncontaminated by flares.
We then removed the ISM absorption from $F_{\rm Q}$ by adding
a 2 pixel Gaussian at the center of the absorption feature.
We modeled the Mg~{\sc ii} lines in the flare very simply, with
$F_{\rm flare} = A F'_{\rm Q} + B G_{\rm B} - C G_{\rm ISM} +D$,
where $F'_{\rm Q}$ is the ISM-corrected quiescent spectrum,
$G_{\rm B}$ and $G_{\rm ISM}$ are broad and narrow (2 pixel) Gaussians,
and $A$, $B$, $C$, and $D$ are adjustable constants.
The width and central $\lambda$ of $F'_{\rm Q}$ and $G_{\rm B}$
were also allowed to vary; results are shown in Fig.~\ref{fig:lqmg2}.
The advantage of using $F'_{\rm Q}$ as a template is
that we account for the intrinsic Mg~{\sc ii} shape and
(partially) remove blends.
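Once the Gaussian centres and widths are fixed, the model $F_{\rm flare} = A F'_{\rm Q} + B G_{\rm B} - C G_{\rm ISM} + D$ is linear in the four constants and can be solved by least squares. A schematic sketch on synthetic data (all wavelengths, widths, and amplitudes below are illustrative placeholders, not the fitted values):

```python
import numpy as np

lam = np.linspace(2790.0, 2802.0, 200)

def gauss(lam, centre, fwhm):
    sigma = fwhm / 2.3548  # FWHM -> Gaussian sigma
    return np.exp(-0.5 * ((lam - centre) / sigma) ** 2)

# Stand-in profiles: ISM-corrected quiescent template, broad component
# (~250 km/s FWHM near Mg II k), and narrow ISM absorption.
fq    = gauss(lam, 2796.35, 1.0)
g_b   = gauss(lam, 2796.0, 2.4)
g_ism = gauss(lam, 2796.35, 0.3)

# Build a fake flare spectrum with known constants, then recover them.
A, B, C, D = 2.5, 1.2, 0.6, 0.1
flare = A * fq + B * g_b - C * g_ism + D

design = np.column_stack([fq, g_b, -g_ism, np.ones_like(lam)])
coef, *_ = np.linalg.lstsq(design, flare, rcond=None)
```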
We find that the first Mg~{\sc ii} spectrum shows a 2.5$\times$
enhancement over quiescent in its narrow component, and a broad
($\approx$250 km s$^{-1}$ FWHM) component with a flux
of $f_{\rm B} = 3.3\times10^{-12}$ ergs cm$^{-2}$ s$^{-1}$
at earth in Mg~{\sc ii} k. The broad component in this spectrum
is Doppler shifted by $\approx -40$ km s$^{-1}$.
The continuum is also significantly enhanced (by at least 10$\times$), with
$\langle f_{\rm con} \rangle \approx 2\times10^{-12}$
ergs cm$^{-2}$ s$^{-1}$ \AA$^{-1}$ at 2800~\AA.
The second spectrum, taken about 1.5 hours later,
showed both broad and narrow components reduced by a factor of
$\approx 1.5$, with the broad component now basically unshifted but
with a similar width (Fig.~\ref{fig:lqmg2}).
Thus, the Mg~{\sc ii} lines respond to the flare
much like the optical chromospheric features, initially exhibiting
enhanced emission with broad, blue-shifted
components, which gradually drift to the red and weaken
as the flare evolves.
The next SWP spectrum (UV peak,
coinciding with the early gradual phase in the optical
data) reveals $> 20 \times$ enhancements in the TR and $> 4 \times$
enhancements in the CHR. Several high $T_{\rm form}$ lines
only weakly detected in the quiescent spectrum (C~{\sc iii} 1175~\AA,
N~{\sc v} 1240~\AA) are also greatly strengthened. Several lines not listed
in Table \ref{tab:uv_fluxes} appear in this spectrum; the more certain of these
include the C~{\sc i} complexes (1261~\AA, 1277~\AA, and 1280~\AA) and the
Si~{\sc iii} complex at 1295-1299~\AA
(combined fluxes $f \approx$ 1.3 and $0.9 \times 10^{-13}$ ergs
cm$^{-2}$ s$^{-1}$), O~{\sc iii} 1666~\AA, Fe~{\sc ii} 1671~\AA,
Al~{\sc iii} 1854+1862 \AA, and Si~{\sc iii} 1892~\AA\ (with $f \approx
0.8, 1.3$, $2.4$, and $1.3\times 10^{-13}$ ergs cm$^{-2}$ s$^{-1}$,
respectively).
\begin{figure}
{\psfig{figure=MY690_f9.ps,height=8.4cm,width=8.4cm}}
\caption[ ]{The change of EW of several optical chromospheric lines
during the flare
\label{fig:lqhya_ews} }
\end{figure}
The UV continuum is also noticeably enhanced in this spectrum.
Machado \& H\'enoux (1982) suggested that similar enhancements in solar flares
at $\lambda <$ 1524~\AA~and $\lambda <$ 1682~\AA~are due to bound-free
radiation from the ground and first excited levels of Si~{\sc i}
excited by UV line emission (primarily by C~{\sc ii} and
C~{\sc iv}, respectively). Exploring this idea,
Phillips, Bromage \& Doyle (1992)
studied the power per \AA~in 50 \AA~(relatively line free) bands centered at
1490~\AA~and 1595~\AA~in stellar flares, and found that they correlated
well with line power in C~{\sc ii} and C~{\sc iv}, respectively:
specifically, $\log P_{\rm cont}$(1490\AA) = 0.94 $\log P_{\rm C~{\sc ii}} +
0.3$ and $\log P_{\rm cont}$(1595\AA) = 1.04 $\log P_{\rm C~{\sc iv}} - 2.9$.
We have integrated the flux in similar bands ($f_{\rm cont}$ in
Table \ref{tab:uv_fluxes}); note that the small $f_{\rm cont}$ in the
non-flare spectra may be due to the sum of many weak lines.
We find good agreement for the 1490~\AA~continuum,
with $\log P_{\rm cont}$(1490~\AA) (= $f_{\rm cont}/50$\AA\ $\times 4 \pi d^2$,
where $d$ is the stellar distance) within an average 0.10 dex of the prediction
for the impulsive and UV max spectra.
Agreement is less good for $\log P_{\rm cont}$(1595~\AA), which is
overestimated by an average of 0.31 dex.
We see no Si~{\sc i} edges at 1524~\AA~or 1682~\AA,
but this is perhaps not surprising
given the low resolution of IUE (Phillips et al.~1992).
In general, though, the far UV flare
continuum of LQ Hya seems consistent with the Si~{\sc i} recombination model.
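The 0.10 dex agreement can be illustrated numerically for the UV-peak spectrum. The distance is not given in this section, so $d \approx 18$ pc is assumed below purely for illustration (an assumed input, not a value from the text):

```python
import math

PC_CM = 3.0857e18
d_cm = 18.0 * PC_CM             # assumed distance (illustrative only)
area = 4.0 * math.pi * d_cm ** 2

# UV-peak (JD 2449343.783) fluxes from Table uv_fluxes (erg cm^-2 s^-1).
f_cii  = 8.8e-13                # C II 1335 line flux
f_cont = 14.7e-13 / 50.0        # 1490 A band flux per Angstrom

log_p_cii  = math.log10(f_cii * area)
log_p_cont = math.log10(f_cont * area)

# Phillips, Bromage & Doyle (1992) relation for the 1490 A band.
predicted = 0.94 * log_p_cii + 0.3
```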
If LQ Hya were very young
and still had significant circumstellar material, one might expect X-ray
excitation of the gas in such a strong flare.
On the suggestion of the referee, we explored whether any of the weak features
in the impulsive or UV peak spectrum might be due to excitation of
circumstellar H$_2$. As there is no evidence in these spectra
for the four strongest H$_2$
features seen in T Tau (1446, 1490, 1505, 1562~\AA; Brown, de M. Ferraz \& Jordan~1984),
it seems unlikely
there is much nearby gas. This makes it less likely that LQ Hya is
pre-main sequence, as suggested by Vilhu et al. (1991).
The UV fluxes do not return to their
quiescent level until almost a day later (JD 2449344.744), though lack of
data for $\sim$0.8 day after the UV maximum makes the
interpretation ambiguous. The time
between UV maximum and quiescence covers over half a
rotation, making it difficult for the flare (if localised) to have
remained visible unless it was very near the pole, or in a very
extended ($>R_*$) loop. The sharp drop in flux from
344.589 to 344.744 could then be due in part to the flare's finally
disappearing over the limb. Perhaps more likely, the enhancement seen on JD
2449344 might be due to a second flare and the fast decay a
consequence of its lower energy ($E_{\rm flare} \propto t_{\rm decay}^2$;
Lee, Petrosian \& McTiernan 1993; Saar \& Bookbinder 1998).
\begin{figure}
{\psfig{figure=MY690_f10.ps,bbllx=64pt,bblly=369pt,bburx=546pt,bbury=706pt,height=8.4cm,width=8.4cm}}
\caption[ ]{Evolution of the combined IUE chromospheric
fluxes (at earth) of O~{\sc i} (1304~\AA), C~{\sc ii} (1335\AA),
C~{\sc i} (1657~\AA), and Si~{\sc ii} (1810~\AA)
(=$f_{\rm CHR}$, $\diamond$),
chromospheric Mg~{\sc ii}$/5$ (2800~\AA; $\ast$), the
combined transition region fluxes of N~{\sc v} (1240~\AA),
Si~{\sc iv} (1400\AA) and C~{\sc iv} (1550\AA) (=$f_{\rm TR}$, $\Box$),
and the UV continuum flux (average of the total $f_{\rm cont}$ in two
50~\AA~bands centered at 1490~\AA~and 1595~\AA; $\triangle$).
Horizontal solid lines through the data points indicate
the exposure durations. The first time points (plotted arbitrarily at 0.4)
show the mean quiescent fluxes and their errors (vertical line).
The span of the optical observations is also indicated, with the line type
indicating the spectrum's appearance (dashed=quiescent,
heavy solid=impulsive/flare peak, thin solid = gradual);
the optical maximum is marked with a tick.
\label{fig:lqhya_uv} }
\end{figure}
\begin{table*}
\caption[]{UV continuum fluxes, chromospheric (CHR) and transition region
(TR) lines fluxes
\label{tab:uv_fluxes}}
\begin{flushleft}
\begin{tabular}{lccccccccccccccc}
\hline
\noalign{\smallskip}
JD$_{\rm start}$ & t$_{\rm exp}$ &
\multicolumn{10}{c}{$f$~~~(10$^{-13}$ ergs cm$^{-2}$ s$^{-1}$ at earth)$^1$} &
$f_{\rm CHR}$ & $f_{\rm TR}$ & $f_{\rm cont}$ & $f_{\rm cont}$ \\
-2449000 & (s) & C {\sc iii} & N {\sc v} & O {\sc i} & C {\sc ii} &
Si {\sc iv} & C {\sc iv} & He {\sc ii} & C {\sc i} & Si {\sc ii} &
Mg {\sc ii}$^2$ & sum & sum & {\scriptsize (1490\AA)} &
{\scriptsize (1595\AA)} \\
\hline
\noalign{\smallskip}
342.742 & 2400 & ... & ... & ... & ... & ...
& ... & ... & ... & ... & 30/32 & ... & ... & ... & ... \\
342.781$^{3}$ & 7200 & 0.8:$^{4}$ & 0.2: & 0.8 & 1.6 & 1.4
& 3.3 & 2.0 & 1.1 & 1.5 & ... & 5.0 & 4.8 & 0.9: & 1.6: \\
343.568$^{5}$ & 9000 & $<$0.4: & 0.6: & 0.9 & 1.3 & 0.8:
& 5.8: & 1.2 & 1.3 & 1.2 & ... & 4.7 & 7.2: & 2.4: & 2.9: \\
343.684 & 2400 & ... & ... & ... & ... & ...
& ... & ... & ... & ... & 106:/120: & ... & ... & ... & ... \\
343.750 & 2400 & ... & ... & ... & ... & ...
& ... & ... & ... & ... & 78/83 & ... & ... & ... & ... \\
343.783 & 6300 & 13.7 & 5.4 & 2.8 & 8.8 & 12.2
& 50:$^6$ & 18.3 & 4.5 & 4.1 & ... &20.2 & 68: & 14.7 & 18.0 \\
344.589 & 9000 & 3.8: & 1.5: & 1.3: & 3.1 & 2.9
& 6.9 & 3.5 & 2.0 & 2.8 & ... &9.2 & 11.3 & 1.6: & 2.9: \\
344.744 & 7200 & 1.0: & 0.4: & 0.4: & 1.0 & 0.5
& 2.4 & 1.7 & 1.2 & 1.1 & ... &4.9 & 4.2 & 1.6: & 1.6: \\
344.833 & 2400 & ... & ... & ... & ... & ...
& ... & ... & ... & ... & 27/29 & ... & ... & ... & ... \\
\noalign{\smallskip}
\multicolumn{2}{l}{$<$quiescent$>$}
& 1.2: & 0.4 & 0.7 & 1.5 & 1.1
& 3.1 & 1.7 & 1.0 & 0.9 & 29/31 & 4.1 & 4.6 & 1.5 & 1.5 \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\end{tabular}
$^1$ lines at 1175\AA, 1239+43\AA, 1302+5+6\AA, 1335\AA, 1394+1403\AA,
1548+51\AA, 1641\AA, 1657\AA, 1808+17\AA, 2796+2803\AA, respectively.
$^{2}$ uncorrected/corrected for ISM absorption~~~~~$^{3}$ not shown in Fig.
10.~~~~~$^{4}$ colon = uncertain measurement~~~~~$^{5}$ noisy spectrum
$^{6}$ line saturated, $f > 38.0 \times 10^{-13}$ ergs cm$^{-2}$ s$^{-1}$,
flux estimated from Gaussian fit to line wings and shoulder.
\end{flushleft}
\end{table*}
\subsection{Estimate of energy released}
To estimate the flare energy released in the observed optical
chromospheric lines, we converted the EW into absolute
surface fluxes and luminosities.
Since we have not observed the
entire flare, and are missing important lines (e.g., the saturated
Ly $\alpha$; Mg~{\sc ii}; He~{\sc i} 10830~\AA)
our estimates are only lower limits to the total flare energy
in chromospheric lines.
We have used the calibration of Hall (1996) to obtain the
stellar continuum flux in the H$\alpha$ region
as a function of (B~-~V) and then convert
the EW into absolute surface flux.
For the other lines, we have used the continuum flux at
H$\alpha$ corrected for the difference in the continuum
F$_{\lambda 6563}$/F$_{\lambda}$, assuming F$_{\lambda}$ is given by a
blackbody at T$_{eff}$ = 4900 K. The corresponding
absolute fluxes at flare maximum
(erg cm$^{-2}$ s$^{-1}$), and total flux (ergs cm$^{-2}$) integrated over
the observation interval ($\sim$ 3~h) are given in Table~\ref{tab:energy}.
We converted these fluxes into luminosities using
the radius R = 0.76 R$_\odot$ (Strassmeier et al.~1993).
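As a concrete illustration of this conversion, the short Python sketch below scales an assumed H$\alpha$ continuum surface flux by a blackbody ratio at $T_{\rm eff}=4900$~K and turns the resulting surface flux into a luminosity with $R=0.76\,R_\odot$. The continuum flux value is a placeholder, not the Hall (1996) calibration, and the function names are ours.

```python
import math

# cgs constants
H_PLANCK, C_LIGHT, K_BOLTZ = 6.626e-27, 2.998e10, 1.381e-16
R_SUN = 6.96e10  # cm

def planck_lambda(wav_cm, temp):
    """Blackbody B_lambda (per unit wavelength), up to a constant solid-angle factor."""
    x = H_PLANCK * C_LIGHT / (wav_cm * K_BOLTZ * temp)
    return 2.0 * H_PLANCK * C_LIGHT ** 2 / wav_cm ** 5 / math.expm1(x)

def line_flux_and_luminosity(ew_angstrom, wav_angstrom, f_cont_halpha,
                             t_eff=4900.0, radius_rsun=0.76):
    """EW (A) -> surface flux (erg cm^-2 s^-1) and luminosity (erg s^-1).

    The continuum at wav_angstrom is the H-alpha continuum scaled by the
    blackbody ratio F_lambda / F_6563 at T_eff, as described in the text.
    """
    ratio = (planck_lambda(wav_angstrom * 1e-8, t_eff)
             / planck_lambda(6563e-8, t_eff))
    surface_flux = ew_angstrom * f_cont_halpha * ratio
    luminosity = surface_flux * 4.0 * math.pi * (radius_rsun * R_SUN) ** 2
    return surface_flux, luminosity

# Placeholder continuum flux (erg cm^-2 s^-1 A^-1), NOT the Hall (1996) value:
f_hb, l_hb = line_flux_and_luminosity(1.2, 4861.0, 3.0e6)
```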
We have not estimated the energy released in the UV chromospheric lines,
since, without the (saturated) Ly$\alpha$ and Mg~{\sc ii}
lines, the total $f_{\rm CHR}$ will be greatly underestimated.
We find that the estimated total energy released in optical chromospheric
lines is $E_{\rm CHR} \geq$~5.7~10$^{33}$ ergs, indicating
that this LQ Hya flare is more energetic in line emission
than an average flare on a dMe star, where
typically 10$^{28}$ ergs $\leq E_{\rm CHR} \leq$ 10$^{34}$ ergs
(see Hawley \& Pettersen 1991).
\begin{table*}
\caption[]{The flare energy released in the
chromospheric lines
\label{tab:energy}}
\begin{flushleft}
\begin{tabular}{lcccc}
\hline
\noalign{\smallskip}
{Line}
& F$_{\lambda}${\scriptsize (max)} & $\int$F$_{\lambda} dt$
& L$_{\lambda}${\scriptsize (max)} & $\int$L$_{\lambda} dt$ \\
& (10$^{6}$) & (10$^{10}$)
& (10$^{29}$) & (10$^{33}$) \\
& {\scriptsize (ergs cm$^{-2}$ s$^{-1}$)} & {\scriptsize (ergs cm$^{-2}$)}
& {\scriptsize (erg s$^{-1}$)} & {\scriptsize (erg)} \\
\hline
\noalign{\smallskip}
H$\beta$ & 6.269 & 5.876 & 2.204 & 2.066 \\
Mg~{\sc i} b$_{3}$ & 0.200 & 0.210 & 0.070 & 0.074 \\
Fe~{\sc ii} $\lambda$5169 & 0.414 & 0.406 & 0.145 & 0.143 \\
Mg~{\sc i} b$_{2}$ & 0.172 & 0.174 & 0.062 & 0.061 \\
Mg~{\sc i} b$_{1}$ & 0.144 & 0.137 & 0.051 & 0.048 \\
He~{\sc i} D$_{3}$ & 1.270 & 1.234 & 0.447 & 0.434 \\
Na~{\sc i} D$_{2}$ & 0.217 & 0.253 & 0.076 & 0.089 \\
Na~{\sc i} D$_{1}$ & 0.239 & 0.278 & 0.084 & 0.098 \\
H$\alpha$ & 7.510 & 7.242 & 2.641 & 2.546 \\
He~{\sc i} $\lambda$6678 & 0.415 & 0.412 & 0.146 & 0.145 \\
\noalign{\smallskip}
& & & & \\
Total lines & 16.85 & 16.23 & 5.926 & 5.704 \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\end{tabular}
\end{flushleft}
\end{table*}
\subsection{Line asymmetry}
Very broad wings have been found in Mg~{\sc ii} and in
the subtracted profiles
of H$\alpha$, H$\beta$, He~{\sc i} D$_{3}$ and He~{\sc i} $\lambda$6678 lines,
and double Gaussian (narrow=N and broad=B) fits were used to model them.
The contribution of the broad component
to the total EW and FWHM of these lines reaches a maximum in the
impulsive phase and then decreases.
The line profiles are also asymmetric,
with the B component often shifted relative to the N.
In the impulsive phase we found a blue asymmetry in the lines --
the B component appears blue-shifted with respect to the N.
At the maximum of the flare $\Delta \lambda \approx 0$,
and during the gradual phase a red asymmetry is present,
with $\Delta \lambda$ increasing with time (see Tables 3-6).
Similar broad Balmer emission wings have been seen in
dMe stars and chromospherically active binaries connected
with flare events, with similar blue or red asymmetries in some cases
(Eason et al.~1992; Phillips et al.~1988; Doyle et al.~1988;
Gunn et al.~1994a; Abdul-Aziz et al. 1995; Montes et al. 1996b, 1997;
Montes \& Ramsey 1998; Abranin et al. 1998; Berdyugina et al. 1998)
and without obvious line asymmetries in other cases
(Hawley \& Pettersen 1991).
These authors ruled out pressure (Stark) effects as the cause
and concluded that the profiles are best explained
if these broad components and asymmetries are attributed
to plasma turbulence or mass motions in the flare.
In solar flares, most frequently, a red asymmetry is observed
in chromospheric lines and interpreted as the result of
chromospheric downward condensations (CDC)
(Canfield et al. 1990 and references therein).
However, evidence of a blue asymmetry has been also
reported (Heinzel et al. 1994) and blue and red asymmetries are observed
simultaneously at different positions in a flaring
region (Canfield et al. 1990),
or at the same position but different times (Ji et al. 1994).
Recent line profile calculations
(Gan, Rieger \& Fang 1993; Heinzel et al. 1994; Ding \& Fang 1997)
show that a CDC can explain both the blue and red asymmetries.
On the other hand, in stellar flares evidence of mass motions
have also been reported,
in particular a large enhancement in the far blue wings
of Balmer lines during the impulsive phase of a stellar
flare was interpreted
as a high velocity mass ejection (Houdebine, Foing \& Rodon\'o 1990),
or high velocity chromospheric evaporation (Gunn et al. 1994b),
and red asymmetries in the wings of Balmer lines are reported
by Houdebine et al. (1993) as evidence of CDC.
Broadening and shifts in UV lines during stellar flares,
also interpreted as mass motions, have been
reported by Simon, Linsky \& Schiffer (1980), and Linsky et al. (1989).
Thus another possible explanation of the broad components and asymmetries we
observed is mass motions in the flare - perhaps an explosive ejection/eruption
(blueshift) followed by flows down the flaring loops (redshift).
Still, since
a CDC can also explain both the blue and red asymmetries observed in
this stellar flare, it remains a distinct possibility as well.
\section{Conclusions}
We have detected a strong flare on LQ Hya in high resolution
optical spectra (4800 to 7000~\AA) and IUE SWP observations.
Such a strong flare is unusual in an early K star like LQ Hya.
The flare started on 1993 December 22 between
2:42 UT (quiescent spectrum) and 4:07 UT (end of first enhanced UV spectrum).
UV data suggest the impulsive phase began well before the first
optical sign at 04:36 UT. The optical chromospheric lines
reached their maximum intensity $\approx$55 min later, by which time
the continuum enhancement had sharply decreased. Thereafter, the optical line
emission slowly decreased in a gradual phase that lasted at least until
the last observation (07:29 UT).
Quiescent C~{\sc iv} flux levels were not recovered after $\approx$4~h UT
on the following day (though a second flare or rotation of the flaring
region beyond the limb may have affected the results).
We detected an optical continuum enhancement that increased toward
the blue ($\propto \lambda^{-1.35}$) and reached a maximum (36\%)
during the impulsive phase. The UV continuum was
enhanced by at least $\approx 10\times$ at 1500~\AA~and 2800~\AA.
We analyse the lines by subtracting a quiescent LQ Hya spectrum or that of
a modified inactive reference star.
The excess H$\alpha$ and H$\beta$ emission equivalent widths,
in the observed - reference spectra, increase by a factor of 2.7 and 5.8
at maximum, respectively, over the quiescent level.
The H$\alpha$/H$\beta$ Balmer decrement is shallower during the flare,
changing from 3.15 at the quiescent state
to 1.46 at the impulsive and maximum phases of the flare.
We observe the He~{\sc i} D$_{3}$ line, a well known
diagnostic of flares, going into emission during the flare,
reaching an EW = 0.346~\AA\ at the maximum. We also observe excess emission
in He~{\sc i} lines at 4921.9, 5015.7, and 6678.1~\AA~in the (O-Q) spectra,
and in other metal lines such as the Na~{\sc i} D$_{1}$ and D$_{2}$, the
Mg~{\sc i} b triplet and several Fe~{\sc i} and Fe~{\sc ii} lines.
The more intense lines,
H$\alpha$, H$\beta$, He~{\sc i} D$_{3}$ and He~{\sc i} $\lambda$6678,
exhibit very broad wings in the subtracted profiles.
Their profiles are hence not well matched
by a single-component model;
we have used a two Gaussian fit (narrow and broad) to characterize them.
For all these lines the contribution of the broad component
to the total EW of the line and their FWHM reach a maximum at the
impulsive phase and then decrease.
Moreover, the line profiles are asymmetric: the broad component
appears blue-shifted in the impulsive phase
and red-shifted during the gradual phase, with
$\Delta \lambda$ between the components increasing with time.
Mg~{\sc ii} profiles respond similarly.
These broad components and asymmetries can be attributed
to plasma turbulence or to upward and downward mass motions in the flare.
Similar blueshifts may be seen in the C~{\sc iv} line
during the impulsive phase of the flare.
Ultraviolet TR lines are enhanced by a factor of $> 20$, chromospheric lines
by a factor of $>4$ and the far UV continuum by a factor of $>10$ over
background in our UV spectrum just after optical flare maximum. The
continua in our flare-affected UV spectra generally agree with a
Si~{\sc i} recombination model.
We estimate the energy released during the flare for all the
optical chromospheric lines at
$\sim$ 5.7 10$^{33}$ ergs, indicating
that this flare in LQ Hya is more energetic in line emission
than an average flare on a dMe star.
\section*{Acknowledgments}
This work has been partially supported by the Universidad Complutense de Madrid
and the Spanish Direcci\'{o}n General de Ense\~{n}anza Superior e Investigaci\'{o}n
Cient\'{\i}fica (DGESIC) under grant PB97-0259, by
NASA grants NAG5-1975 and NAGW-112, HST grant GO-5871.01-94A,
and NSF grant AST-9528563.
YCU is supported by the Austrian Science Foundation grant S7302.
We thank the referee J.G. Doyle for helpful comments, and C.M. Johns-Krull
for useful discussions.
\section{The Gamma Matrices}
\setcounter{equation}{0}
In this appendix we give our conventions for the gamma matrices. We
follow closely
the conventions of \cite{green}, however some relabeling of the
coordinates will be required.
The $32\times 32$ gamma matrices are in the Majorana representation
and are purely imaginary. They are
\begin{eqnarray}
\Gamma^0&=&\tau_2\times I_{16}\nonumber\\
\Gamma^I&=&i \tau_1\times \gamma^I, \;\;\;\;\;\;\;\; I=1,...8\nonumber\\
\Gamma^9&=&i \tau_3\times I_{16}\nonumber\\
\Gamma^{10}&=&i \tau_1\times \gamma^9
\end{eqnarray}
where $\tau_i$ are the Pauli matrices, $I_x$ are $x\times x$
identity matrices and the $16\times 16$ real
matrices $\gamma^I$
satisfy
\be
\{\gamma^{{I}},\gamma^{{J}}\}=2\delta^{{I}{J}},\; \;\;\;\;\;\;\;
\; {I},{J} =1,...8.
\end{equation}
and
\be
\gamma^9=\prod_{I=1}^{8}\gamma^{{I}}.
\end{equation}
This ensures that
\be
\{\Gamma^\mu,\Gamma^\nu\}=-2\eta^{\mu\nu}.
\end{equation}
We now construct the $spin(8)$ Clifford algebra.\footnote{
This construction is that presented in Appendix 5.B of Ref.\cite{green}}
The matrices $\gamma^{{I}}$ take the form
\begin{eqnarray}
\gamma^{\hat{I}}&=&\pmatrix{0& \tilde{\gamma}^{{\hat{I}}}\cr
-\tilde{\gamma}^{{\hat{I}}}&0\cr },\ {\hat{I}}=1,...7,\nonumber\\
\gamma^{8}&=&\pmatrix{I_{8}& 0\cr
0&-I_{8}\cr },
\end{eqnarray}
where the $8\times 8$ matrices $\tilde{\gamma}^{{\hat{I}}}$ are
antisymmetric and explicitly given by
\begin{eqnarray}
\tilde{\gamma}^1&=&-i \tau_2\times\tau_2\times\tau_2\nonumber\\
\tilde{\gamma}^2&=&i I_2\times\tau_1\times\tau_2\nonumber\\
\tilde{\gamma}^3&=&i I_2\times\tau_3\times\tau_2\nonumber\\
\tilde{\gamma}^4&=&i \tau_1\times\tau_2\times I_2\nonumber\\
\tilde{\gamma}^5&=&i \tau_3\times\tau_2\times I_2\nonumber\\
\tilde{\gamma}^6&=&i \tau_2\times I_2\times\tau_1\nonumber\\
\tilde{\gamma}^7&=&i \tau_2\times I_2\times\tau_3.
\end{eqnarray}
It follows that $\gamma^{9}$ is given by
\be
\gamma^{9}=\pmatrix{0&-I_{8}\cr
-I_{8}&0\cr }.
\end{equation}
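The construction above is explicit enough to check numerically. The following NumPy sketch (variable names are ours) builds the $\tilde{\gamma}$, $\gamma^I$ and $\Gamma^\mu$ matrices exactly as listed and verifies the Clifford relations, the explicit form of $\gamma^9$, and that the $\Gamma^\mu$ are purely imaginary.

```python
import numpy as np

I2 = np.eye(2)
t1 = np.array([[0, 1], [1, 0]], dtype=complex)
t2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
t3 = np.array([[1, 0], [0, -1]], dtype=complex)

def kron(*mats):
    """Repeated Kronecker product."""
    out = np.eye(1, dtype=complex)
    for m in mats:
        out = np.kron(out, m)
    return out

# The seven explicit 8x8 real antisymmetric matrices gamma-tilde^1..7
gtilde = [-1j * kron(t2, t2, t2),
          1j * kron(I2, t1, t2),
          1j * kron(I2, t3, t2),
          1j * kron(t1, t2, I2),
          1j * kron(t3, t2, I2),
          1j * kron(t2, I2, t1),
          1j * kron(t2, I2, t3)]

Z8, I8, I16 = np.zeros((8, 8)), np.eye(8), np.eye(16)

# 16x16 real gamma^1..8 in block form, and gamma^9 as their product
gamma = [np.block([[Z8, g], [-g, Z8]]) for g in gtilde]
gamma.append(np.block([[I8, Z8], [Z8, -I8]]))
gamma9 = np.linalg.multi_dot(gamma)

# 32x32 purely imaginary Majorana gamma matrices
Gamma = [kron(t2, I16)]                          # Gamma^0
Gamma += [1j * kron(t1, g) for g in gamma]       # Gamma^1..8
Gamma.append(1j * kron(t3, I16))                 # Gamma^9
Gamma.append(1j * kron(t1, gamma9))              # Gamma^10

eta = np.diag([-1.0] + [1.0] * 10)               # mostly-plus metric
```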
Furthermore
\be
\Gamma^+=\frac{1}{\sqrt{2}}\pmatrix{i & -i \cr i & -i \cr}\times
I_{16},\;\;\;\;\;\;
\Gamma^-=\frac{1}{\sqrt{2}}\pmatrix{-i & -i \cr i & i \cr}\times I_{16},
\end{equation}
such that
\be
(\Gamma^+)^2=(\Gamma^-)^2=0,\;\;\;\;\;\; \{ \Gamma^+,\Gamma^-\}=2.
\end{equation}
Then it is straightforward to show that the condition $\Gamma^+\theta=0$
leads to
\be
\theta=\pmatrix{S_1\cr S_2 \cr S_1 \cr S_2 \cr}.
\end{equation}
Moreover, it follows that
\begin{eqnarray}
&\bar{\theta}\Gamma^\mu\partial\theta=0&,\;\;\;\;\;\;\;\;\;\;\;\;
\mbox{unless}\;\;\mu=-\nonumber\\
&\bar{\theta}\Gamma^{\mu\nu}\partial\theta=0&,\;\;\;\;\;\;\;\;\;\;\;\;
\mbox{unless}\;\;\mu\nu=-M
\end{eqnarray}
where $\bar{\theta}=\theta^\dagger\Gamma_0=\theta^{T}\Gamma_0\;$ ($\theta$
is real). Finally notice that
\be
(\Gamma^\mu)^\dagger=\Gamma^0\Gamma^\mu\Gamma^0,\;\;\;\;\;\;\; \;
\Gamma^{11}=\prod_{\mu=0}^{9}\Gamma^{{\mu}}=i\Gamma^{10}.
\end{equation}
\newpage
\section{Introduction}
Much attention has been paid in recent years to the nature of the
foregrounds which obscure our view of the cosmic microwave
background (CMB). This attention
has resulted in discoveries about the nature of the foregrounds as
well as methods for estimating CMB anisotropy from
foreground-contaminated data. From studying these developments, we
have concluded that for planned large area, multi-frequency
experiments, such as MAP\footnote{MAP home page:
{\tt http://map.gsfc.nasa.gov}}
and Planck\footnote{Planck home page:\\
{\tt http://astro.estec.esa.nl/SA-general/\linebreak[0]Projects/Cobras/cobras.html}}, the foregrounds are unlikely to
be responsible for qualitative degradation of the primary cosmological
results.
This happy situation is due to a number of factors. First, there is a
window in frequency space, where, at high galactic latitude, CMB
fluctuations are the brightest diffuse source in the sky. Second, the
high-latitude galactic foregrounds are very smooth; they do not have
much small-scale fluctuation power. Third, foregrounds, unlike
instrument noise and some systematic error sources, are suppressed at
small angular scales by the finite
angular resolution of the telescope. Fourth, point source count
estimates suggest that only a small fraction of pixels in the MAP and
Planck maps will be affected---and these can be identified with
threshold cuts and removed. Finally, even if uncertainty in a
particular mode of the CMB map is dramatically increased by the
presence of foregrounds, uncertainty in the CMB power spectrum may not
be significantly affected. This is due to the fact that, at least in
the foreground free case, the dominant source of power spectrum
uncertainty (except at the smallest angular scales) comes from sample
variance, not instrument noise.
The primary cosmological results---determination of cosmological
parameters---depend mostly on how well the power spectrum is measured.
We thus focus on the impact of foregrounds on the determination of
this power spectrum. Our method for estimating this impact can be
considered to be a generalization of those based on Wiener filtering
by Bouchet, Gispert, and Puget (1995, hereafter ``BGP95'') and Tegmark
and Efstathiou (1996, hereafter ``TE96'') as well as that of White
(1998, hereafter ``W98''). All these approaches take the CMB and
foregrounds to be statistically isotropic, Gaussian-distributed
fields. Given this assumption, estimation of the power spectrum
errors is straightforward, as described below.
As is always the case with parameter estimation, how well the desired
parameters can be reconstructed depends on the assumed prior
information. The methods of TE96 and W98 essentially assume that the
foreground power spectra are known with infinite precision {\it a
priori}. The most important difference between our method and theirs
is that we only assume finite prior information about the foreground
power spectra.
Although the method of BGP95 was derived assuming Gaussianity, they
have tested it with non-Gaussian simulations of Planck Surveyor maps
(see, e.g., Gispert and Bouchet, 1996). Their results lend
credibility to the forecasts derived analytically under the Gaussian
assumption.
Below we first describe our methods for estimating the power spectrum
uncertainties given an experimental configuration and foreground
model. In section III we describe our model of the foregrounds, which
is based on that detailed in the Planck Phase A proposal (Bersanelli
{\frenchspacing\it et al.} ~1996), and the High Frequency Instrument (HFI) and Low Frequency
Instrument (LFI) proposals\footnote{These proposals are not
yet publicly available. Most of the work referred to here will soon
be available as Bouchet and Gispert (1998).}.
To date, foregrounds have been essentially ignored in estimates
of the cosmological parameter uncertainties\footnote{A notable exception
is the ``LDB'' case in Bond, Efstathiou \& Tegmark (1997) which was based on calculations by
the TopHat group of CMB pixel errors after pixel-by-pixel subtraction
of foregrounds in their Antarctic maps.}. We find that
they are unlikely to qualitatively change the results, although
for MAP this conclusion depends somewhat on the amplitude of
the contribution from the Sunyaev-Zeldovich effect, which is not yet
sufficiently well-determined.
Not only does the amplitude of the SZ power spectrum affect the
ability of MAP data to constrain the CMB power spectrum, but so does our
prior knowledge of it. This is fortunate, because while the amplitude is
completely out of our control, we {\it can} do something about
how well we know it. We emphasize that the prior information we
need is not of the actual SZ map, but of the statistics of the map.
The statistics can be calculated theoretically, or by actually
measuring the SZ map over only a few per cent of the sky.
Higher order moments of the probability distribution may also be of
interest if the CMB statistics are non-Gaussian, which they will be to
some degree even if the primordial fluctuations are Gaussian. Therefore,
we also estimate how well the amplitudes of individual
spherical harmonics can be determined. The uncertainty on these
amplitudes is much more strongly affected by the presence of
foregrounds than is the uncertainty on the power spectrum.
Effects due to contributions not in one's model of the data may be
detrimental, and our formalism does not address such a problem. Nevertheless,
we find it encouraging that for the quite general model we have chosen,
where the data are required to simultaneously constrain thousands
of foreground parameters, the results look very good.
\section{Methodology}
We assume that the experiment measures the full sky in $\nu =
1,...,n_{\rm ch}$ channels, and model the (beam-deconvolved,
spherical-harmonic transformed) map data as due
to the CMB, foregrounds and noise:
\begin{equation}
\label{eqn:datamodel}
\Delta_{\nu lm} = \sum_i g_{i\nu} a_{ilm} + n_{\nu lm}
\end{equation}
where $i$ runs over the components, ($i=0$ is CMB,
$i>0$ are the foregrounds) and $g_{i\nu}$ gives their frequency
dependence. In the following we usually suppress all indices
and use a notation in which Eq.~\ref{eqn:datamodel} becomes:
\begin{equation}
{\bf \Delta} = {\bf g}^\dagger {\bf a}+{\bf n}.
\end{equation}
Throughout we assume
that the noise is spatially uniform, Gaussian-distributed, and
uncorrelated from channel to channel. Therefore,
${\bf W} \equiv
<{\bf n} {\bf n}^\dagger>^{-1}$
is given by
\begin{equation}
W_{\nu l m,\nu^\prime l^\prime m^\prime} = w_\nu
\delta_{\nu \nu^\prime} \delta_{l l^\prime} \delta_{m m^\prime}
\end{equation}
where the weight-per-solid angle for map $\nu$, $w_\nu$, equals
$B_{\nu,l}^2/(\sigma_\nu^2\Omega_{\rm pix})$, $\sigma_\nu$ is the standard error in the temperature
determination of a map pixel with solid angle $\Omega_{\rm pix}$, and
$B_{\nu,l}$ is the beam profile---which for a Gaussian beam with
full-width at half-max $\sqrt{8\ln 2}\sigma$ is given by
$\exp(-(l\sigma)^2/2)$. The beam-damping of the weight matrix is due
to the fact we are describing the noise in the {\it beam-deconvolved}
maps.
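The weight-per-solid-angle defined above is easy to evaluate; the sketch below uses illustrative numbers, not the specifications of any MAP or Planck channel.

```python
import math

def beam_window(l, fwhm_arcmin):
    """Gaussian beam window B_l = exp(-(l sigma)^2 / 2),
    with sigma = FWHM / sqrt(8 ln 2), as in the text."""
    sigma = math.radians(fwhm_arcmin / 60.0) / math.sqrt(8.0 * math.log(2.0))
    return math.exp(-0.5 * (l * sigma) ** 2)

def weight(l, fwhm_arcmin, sigma_pix, pix_arcmin):
    """Weight-per-solid-angle w_nu = B_l^2 / (sigma^2 Omega_pix) for
    beam-deconvolved maps; it falls off at high l as the beam cuts off."""
    omega_pix = math.radians(pix_arcmin / 60.0) ** 2
    return beam_window(l, fwhm_arcmin) ** 2 / (sigma_pix ** 2 * omega_pix)

# Illustrative channel: 10' beam, 20 uK pixel noise on 10' pixels
w_low, w_high = weight(500, 10.0, 20.0, 10.0), weight(2000, 10.0, 20.0, 10.0)
```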
If we make specific assumptions about the statistics of the CMB and
foregrounds then we can determine how well we can measure the
parameters of those statistical distributions. For simplicity and
specificity we assume the CMB and foregrounds to all be statistically
isotropic and Gaussian-distributed. In this case a complete
statistical description of the data is given by the two-point
function:
\begin{eqnarray}
\label{eqn:Mnoindices}
{\bf M} & \equiv & \langle
{\bf \Delta} {\bf \Delta}^\dagger \rangle \nonumber\\
& = & {\bf g}^\dagger \langle {\bf a}{\bf a}^\dagger \rangle {\bf g}+
{\bf W}^{-1}.
\end{eqnarray}
If, in addition to statistical isotropy, we assume that each
of the foreground components are uncorrelated then we can write
\begin{equation}
\langle a_{lm}^i a_{l^\prime m^\prime}^{i^\prime} \rangle = C_{il}\delta_{ll^\prime}
\delta_{mm^\prime}\delta_{ii^\prime}
\end{equation}
and Eq.~\ref{eqn:Mnoindices} simplifies to (with indices restored):
\begin{eqnarray}
\label{eqn:Mindices}
M^{\nu \nu^\prime}_{lm,l^\prime m^\prime} = \left[\sum_ig_{i\nu}g_{i^\prime \nu^\prime}C_{il}
+{1\over w_\nu}\delta_{\nu \nu^\prime}\right]\delta_{ll^\prime}\delta_{mm^\prime}.
\end{eqnarray}
Given the data, we could write down and calculate the posterior
probability distribution of the parameters, $C_{il}$, or any
other parameterization, $a_p$, of ${\bf M}$. The posterior
is proportional to the product of the likelihood and the prior.
In the limit that the posterior distribution of $a_p$ is Gaussian,
the expectation value for the covariance matrix of the parameters is given
by the inverse of the ``posterior'' Fisher matrix,
\begin{eqnarray}
\label{eqn:fishmat}
F_{pp'} & \equiv &\langle {-\partial^2 \ln P_{posterior} \over \partial a_p
\partial a_{p'}} \rangle \nonumber\\
&=& {1 \over 2} {\rm Tr}\left({\bf M}^{-1}{\bf M}_{,p}{\bf M}^{-1}
{\bf M}_{,p^\prime}\right) +F^{\rm prior}_{p p^\prime}.
\end{eqnarray}
Note that the trace is a sum over $\ell$s, $m$s and $\nu$s. ${\bf M}$
is block-diagonal with block size $n_{ch}$ by $n_{ch}$ so its inversion
is readily feasible. The
matrix $F$, or rather its inverse, is exactly what we want, the
expectation value of the covariance matrix of the
parameters. We are interested in calculating this parameter
covariance matrix for various parameter choices -- in particular
the $C_{il}$ -- as well as assumptions about their prior distributions.
We parameterize the (diagonal) prior as zero for $i=0$ and
\begin{equation}
F^{\rm prior}_{il,il} =(\alpha/C_{il})^2
\end{equation}
for $i > 0$ where $C_{il}$ are the assumed actual power spectra, to be
discussed in the next section. Note that if we take the foreground
$C_{il}$s to be a priori perfectly known ($\alpha \rightarrow \infty$),
then Eq. \ref{eqn:fishmat} gives the Fisher matrix for the
Wiener filter method of foreground removal (TE96, BGP95),
an explicit expression for which is in W98. In the
absence of foregrounds it is equivalent to that in Bond, Efstathiou \&
Tegmark (1997, hereafter ``BET97'') and for a single channel
experiment it is equivalent to that given by Knox 1995 and by Jungman
{\frenchspacing\it et al.} ~(1996). Below we vary $\alpha$ to see quantitatively how the
strength of our prior assumptions determines the ability to measure
$C_{0l}$.
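At a single multipole the Fisher matrix above reduces to a small dense computation over channels. The following sketch implements Eq.~\ref{eqn:fishmat} with the diagonal prior just defined; the array shapes and names are our own conventions, and the inputs are illustrative.

```python
import numpy as np

def fisher_single_l(l, g, C, w, alpha=0.0):
    """Fisher matrix for the component powers C_il at one multipole l.

    g:     (n_comp, n_ch) frequency responses g_{i nu}
    C:     (n_comp,) assumed power spectra (component 0 = CMB)
    w:     (n_ch,) noise weights-per-solid-angle
    alpha: prior strength on the foreground spectra (i > 0)

    The factor (2l+1) counts the m modes summed in the trace.
    """
    g = np.asarray(g, dtype=float)
    C = np.asarray(C, dtype=float)
    M = g.T @ np.diag(C) @ g + np.diag(1.0 / np.asarray(w, dtype=float))
    Minv = np.linalg.inv(M)
    n = len(C)
    F = np.zeros((n, n))
    for p in range(n):
        dMp = np.outer(g[p], g[p])          # dM/dC_pl
        for q in range(n):
            dMq = np.outer(g[q], g[q])
            F[p, q] = 0.5 * (2 * l + 1) * np.trace(Minv @ dMp @ Minv @ dMq)
    for i in range(1, n):                   # prior on foregrounds only
        F[i, i] += (alpha / C[i]) ** 2
    return F
```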
It is straightforward to generalize the above to include polarization
information. Maps of the Q and U Stokes parameters can be decomposed
into two components, $a_{lm}^E$ and $a_{lm}^B$ (Kamionkowski {\frenchspacing\it et al.} ~1997;
Zaldarriaga \& Seljak 1997), which are now in addition to the temperature
component $a_{lm}^T$. In general, we can write the contribution from
each component as $a_{ilm}^b$ and the data in each channel as
$\Delta_{\nu,lm}^b$ where the superscript is either $T$, $E$ or $B$.
Then the covariance matrix for the data (Eq.~\ref{eqn:Mindices}) becomes
\begin{eqnarray}
\label{eqn:Mpol}
M^{b \nu,b^\prime\nu^\prime}_{lm,l^\prime m^\prime} =
\left[\sum_i g_{i\nu} g_{i^\prime\nu^\prime} C_{il}^{bb'}
+{1\over w^b_\nu}\delta_{bb^\prime} \delta_{\nu \nu^\prime}\right]\delta_{ll^\prime}\delta_{mm^\prime}
\end{eqnarray}
where $C_{il}^{bb'}$ equals
$C_{il}^T \equiv \langle a^T_{ilm} {a^T_{ilm}}^* \rangle$ for $b=b'=T$,
$C_{il}^E \equiv \langle a^E_{ilm} {a^E_{ilm}}^* \rangle$ for $b=b'=E$,
$C_{il}^B \equiv \langle a^B_{ilm} {a^B_{ilm}}^* \rangle$ for $b=b'=B$, and
$C_{il}^C \equiv \langle a^E_{ilm} {a^T_{ilm}}^* \rangle$ for $b=T$, $b'=E$.
All other elements vanish.
Thus, while the matrix of Eq. ~\ref{eqn:Mindices} is
block-diagonal in blocks of dimension $n_{ch}$, this matrix is
block-diagonal in blocks of dimension $3n_{ch}$. This approach
generalizes the multi-frequency Wiener filter error forecasting
of Bouchet {\frenchspacing\it et al.} ~(1998, hereafter ``BPS''), who generalized
the single-frequency, no foreground, treatment of Zaldarriaga {\frenchspacing\it et al.} ~(1997).
We may also be interested in how well an individual mode can be
measured.
The covariance matrix for the error in the minimum variance estimate
of ${\bf a}$ is
\begin{equation}
\label{eqn:fishmode}
\langle \delta {\bf a} \delta {\bf a}^\dagger \rangle =
\left( {\bf g}^\dagger {\bf W} {\bf g} +{\bf W}^{\rm prior}\right)^{-1}
\end{equation}
where we have assumed a prior probability for ${\bf a}$ that is
Gaussian-distributed with weight matrix ${\bf W}^{\rm prior}$. For
example, we may wish to assume that foreground $i$ has variance
$C_{il}\delta_{ll^\prime} \delta_{mm^\prime}$ in which case $W^{\rm
prior}_{ilm,i^\prime l^\prime m^\prime} = 1/C_{il}\delta_{ll^\prime} \delta_{mm^\prime}$. With
this prior, this is the variance given by the Wiener filter procedure.
Without the prior it is the variance given by the
pixel-by-pixel subtraction procedure of Dodelson
(1996) and also of Brandt {\frenchspacing\it et al.} ~(1994) (except for their non-linear
parameter dependences).
When there are more foregrounds than channels,
${\bf g}^\dagger {\bf W} {\bf g}$ is singular and therefore addition
of a prior is necessary to make $\langle \delta {\bf a}\delta {\bf
a}^\dagger \rangle$ finite. For more flexibility in the prior
choice later, we define $\beta$ so that $W^{\rm
prior}_{ilm,ilm} = \beta/C_{il}$. Note that Eq.~\ref{eqn:fishmode} does not
assume anything about the statistical properties of the foregrounds
and CMB---except through the prior, which we have explicitly assumed
to be Gaussian.
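Eq.~\ref{eqn:fishmode} is likewise a one-(l,m) matrix inverse. The sketch below (our own conventions: rows of g index components) also makes explicit the point that with more components than channels the unregularized matrix is singular.

```python
import numpy as np

def mode_covariance(g, w, C=None, beta=1.0):
    """Covariance of the minimum-variance estimate of the component
    amplitudes a_ilm at one (l, m): (g W g^T + W_prior)^{-1}.

    g: (n_comp, n_ch) responses; w: (n_ch,) noise weights;
    C: per-component prior power spectra, giving W_prior = beta / C_il.
    Without a prior (C=None), n_comp must not exceed n_ch, otherwise
    g W g^T is singular, as noted in the text."""
    g = np.asarray(g, dtype=float)
    A = g @ np.diag(np.asarray(w, dtype=float)) @ g.T
    if C is not None:
        A = A + np.diag(beta / np.asarray(C, dtype=float))
    return np.linalg.inv(A)
```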
\section{Foreground Models}
Our foreground model is based on that developed for the Planck Phase A
proposal (Bersanelli {\frenchspacing\it et al.} ~1996) and updated in the HFI and LFI
instrument proposals. We refer the interested reader to these
proposals and to Bouchet and Gispert (1998,
hereafter ``BG98''). Below we briefly describe our model, with an
emphasis on the modifications and additions we have made. In all
cases, these alterations make the model more pessimistic.
\subsection{Galactic}
Analyses of the DIRBE (Diffuse Infrared Background Explorer) and
IRAS (Infrared Astronomy Satellite) Sky Survey Atlas maps
have determined the shape of
the dust power spectrum to be $C_l \propto l^{-3}$ (Gautier {\frenchspacing\it et al.}
~1992; Wright 1997) or $C_l \propto l^{-2.5}$ (Schlegel {\frenchspacing\it et al.} ~1998).
We assume $C_l \propto l^{-2.5}$ since it is the
more pessimistic choice, given that we normalize at large angles.
We take the same $C_l$ shape for the free-free power spectrum because
both the dust intensity and free-free are expected to be from the same
warm interstellar medium. Indeed, there is strong observational
evidence for a correlation (Kogut {\frenchspacing\it et al.} 1996, Leitch {\frenchspacing\it et al.} 1997, de
Oliveira-Costa {\frenchspacing\it et al.} 1997, Jaffe {\frenchspacing\it et al.} 1998). Note, however, that we assume no
cross-correlation between free-free and dust, because any correlation
makes the foreground separation easier. The same shape is also taken
for synchrotron radiation.
We choose amplitudes and frequency dependences for the galactic
foregrounds consistent with the Kogut {\frenchspacing\it et al.} ~(1996) analysis of
DMR, DIRBE and Haslam maps. We take the antenna temperatures
of the free-free and synchrotron to vary with power-law indices
-2.9 and -2.16, respectively. For the dust we assume a $\nu^2$
emissivity dependence and a single component with $T=18K$.
Draine and Lazarian (1997) have proposed an alternative explanation
to the observed correlation between dust and 30~GHz to 90~GHz radiation.
They propose that the rotational emission from spinning dust
grains greatly increases the emission in the 10~GHz to 100~GHz range
above what one expects from the vibrational emission.
We have not included this component of dust emission in our model.
Instead, we include something worse -- a component with spectral
shape similar to the ``anomalous'' emission, but which has no correlations
with the dust. Again, this is more pessimistic than the strong
correlation expected in a realistic model.
\begin{figure}[bthp]
\plotone{fgnd_model2.eps}
\caption[foreground model]{\baselineskip=10pt
The frequency-dependent rms antenna temperature,
$g_{i\nu}\sqrt{l(l+1)C_{il}/(2\pi)}$
evaluated at $l=500$ (top panel) and $l=1500$ (lower panel)
for standard cdm CMB (black), dipole and thermal emission dust (both red),
free-free (green),
synchrotron (blue), SZ (cyan), and radio and FIR point sources (both magenta).
}
\label{fig:model}
\end{figure}
\subsection{Extragalactic}
Extragalactic contributions to the microwave sky include inverse
Compton scattering of CMB photons by hot gas in clusters (the thermal
Sunyaev-Zeldovich (SZ) effect), the Far Infrared Background (FIRB)
and radio point sources.
Following Tegmark and Efstathiou, we model the contribution from a
diffuse background of unresolved radio point sources as having an
intensity, $I(\nu) \propto \nu^{-\alpha}$ with a white noise angular
power spectrum ($C_\ell$ independent of $\ell$). Deviations from
white noise due to clustering are observed to be neglible at 1.5GHz
(TE96; Tofollati {\frenchspacing\it et al.} ~1998, herafter ``To98''). Below 200 GHz, we
take $\alpha=0$ but above 200 GHz we introduce a break to $\alpha=0.7$
as suggested by To98. We adopt this break in our spirit of
pessimism because, despite decreasing the brightness of this
contaminant, it actually makes determination of the CMB more
difficult. This is due to the fact that with the break, the spectral
shape more closely resembles that of the CMB.
We are actually considering the power spectrum of the sources which
remain after some cut is done of clearly contaminated pixels, e.g.,
those above a $5\sigma$ threshold where $\sigma^2$ is the variance in
the map. Thus the amplitude depends both on the number-count
distribution and on the level of flux cut that is used. Although this
flux cut will vary for maps at different frequencies and from
different instruments, we choose to fix it at 1 Jy. We view this as
quite conservative since the typical level for all the Planck maps is
about $\sigma$ = 0.1 Jy. This is according to Tegmark \& de Oliveira-Costa
(1998) who used the point-source model of To98 and included the effect
of reduction in $\sigma$ that one can achieve by applying a Wiener
filter. The values of $\sigma$ for the MAP maps should not differ by
more than a factor of 2 from those for the LFI.
For the amplitude of the FIRB we
rely on the estimates of BG98 which are derived from
the model of FIR point sources of Guiderdoni {\frenchspacing\it et al.} ~(1998).
This model has successfully predicted source counts in a
wide wavelength range, from 15 to 850 microns (see BG98
and references therein).
The mean frequency dependence of the model is shown in
Fig.~\ref{fig:model}. Bouchet and Gispert (1998) have shown that this
frequency dependence has only slight spatial variations, lending
credence to our modeling of it as a frequency dependence times a fixed
brightness spatial template. We assume clustering is unimportant and
therefore the spatial power spectrum has the same shape as we
have assumed for radio point sources: $C_l$ is a constant.
CMB photons moving through a hot gas of electrons have their frequency
shifted as they Compton scatter, leading to the generation of
anisotropy with a non-thermal spectrum. This Sunyaev-Zeldovich (SZ) effect
can also be treated as an additive foreground, with the
frequency-dependence of a Compton $y$ distortion. Calculations of the
power spectrum of this foreground component, assuming a
Press-Schechter distribution of clusters with masses greater than some
cutoff have been done (Aghanim {\frenchspacing\it et al.} ~1996, hereafter ``A96''; Atrio-Barandela
\& Mucket 1998). We use
the results of A96 for the $\Omega=1$ cosmology. Their power
spectrum is well-fit in the range of interest by $C_l = a(1+l_c/l)$
where $l_c=1190$ and $a$ is such that
$l(l+1)C_l/(2\pi) = 5.3 \mu{\rm K}^2$ at $l=1500$ in the Rayleigh-Jeans
(low frequency) limit (see Fig.~\ref{fig:model}). Modelling of this
contribution will soon be improved by replacing the
Press-Schechter approach with N-body/hydro simulations.
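As a concrete reading of this fit, the amplitude $a$ follows directly from the quoted band power. A minimal numerical sketch (the numbers are taken from the text; the code itself is purely illustrative and not from any analysis pipeline used here):

```python
import math

# Recover the amplitude `a` of the A96-style fit C_l = a * (1 + l_c / l)
# from the quoted normalization l(l+1) C_l / (2 pi) = 5.3 microK^2 at l = 1500.
l_c = 1190.0
l_norm = 1500.0
band_power = 5.3  # microK^2, Rayleigh-Jeans (low frequency) limit

a = band_power * 2.0 * math.pi / (l_norm * (l_norm + 1.0) * (1.0 + l_c / l_norm))

def C_l_sz(l):
    # Flat (white) at high l, rising as 1/l toward low l.
    return a * (1.0 + l_c / l)

# Reproduces the normalization point by construction.
check = l_norm * (l_norm + 1.0) * C_l_sz(l_norm) / (2.0 * math.pi)
```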
\subsection{Spectral Shape Uncertainty}
Implicit in our formalism is that the frequency dependence of the
foregrounds is known perfectly and has no spatial variation. However,
we can allow for some degree of spatial dependence of the spectrum as
follows. Consider foreground $i$ with mean power-law frequency
dependence, $\beta$, and deviation $\delta \beta_{lm}$. Then, the
signal contribution to the data, $\Delta_{\nu lm}$, from component $i$
is
\begin{equation}
a_{ilm}(\nu/\nu_0)^{\beta+\delta \beta_{lm}} \simeq
a_{ilm}(\nu/\nu_0)^{\beta} + a_{ilm}\delta \beta_{lm} (\nu/\nu_0)^{\beta}
\ln(\nu/\nu_0).
\end{equation}
Thus we can treat radiation from a component with spatially
varying spectral index as due to two components with amplitudes
$a_{ilm}$ and $a_{ilm} \delta \beta_{lm}$, which will, in general,
be correlated. For simplicity we have modeled these additional
components as uncorrelated with the parent component and taken
$\langle a_{ilm} \delta \beta_{lm}\, a_{il^\prime m^\prime}\delta\beta_{l^\prime m^\prime}
\rangle= C_{il} \langle \delta \beta^2 \rangle$. We have assumed
$\langle \delta \beta^2\rangle = 0.25$ for the rotating small dust
grains, dust, and synchrotron with the same prior as
used on other foregrounds. TE96 also considered using extra
components to model spatial dependence of the spectral shape. For
an alternative approach, see Tegmark (1997).
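The expansion above is just a first-order Taylor series in $\delta\beta_{lm}$; a quick numerical check of its accuracy for a small index deviation (the frequency ratio and index values below are illustrative, not taken from the foreground model):

```python
import math

# Check the linearization (nu/nu0)^(beta + dbeta)
#   ~= (nu/nu0)^beta * (1 + dbeta * ln(nu/nu0))
# that lets a spatially varying spectral index be modeled as a second
# component with amplitude a_ilm * dbeta_lm.
def exact(x, beta, dbeta):
    return x ** (beta + dbeta)

def linearized(x, beta, dbeta):
    return x ** beta * (1.0 + dbeta * math.log(x))

x = 90.0 / 30.0   # frequency ratio nu/nu0 (illustrative)
beta = -2.9       # a synchrotron-like mean index (illustrative)
dbeta = 0.1       # small spatial deviation
rel_err = abs(exact(x, beta, dbeta) - linearized(x, beta, dbeta)) / exact(x, beta, dbeta)
```

For a deviation of 0.1 over a factor-of-three frequency lever arm the linearized form is accurate to better than one per cent.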
\subsection{Foreground Polarization}
Precious little is known about the polarization of foregrounds.
For a review, see Keating {\frenchspacing\it et al.} ~(1998).
Extrapolations from observations at low frequencies ($\mathrel{\mathpalette\fun <} 1$ GHz) are
complicated by Faraday rotation along the line-of-sight, which is
negligible at higher frequencies. Measurements at higher frequencies
are in the galactic plane in dense star-forming regions (Hildebrand \&
Dragovan 1995) and are not expected to be representative of the
statistics at high latitude. We make the same assumptions about foreground
polarization as BPS. They neglect polarization in all foregrounds
except for synchrotron and dust. For the synchrotron, they
take $C_l^E = 0.2C_l^T$ and for the dust they take the
model of Prunet {\frenchspacing\it et al.} ~(1998, hereafter ``PSB'')
(see also Sethi {\frenchspacing\it et al.} ~(1998)).
It must be kept in mind that the PSB calculation
relies on indirect arguments and is therefore quite uncertain, as is
the synchrotron model, as the authors readily admit.
\section{Application to Planned Experiments}
\subsection{Temperature}
In Fig.~\ref{fig:dclt} one can see that MAP's ability to measure
the power spectrum is not significantly affected
by the foregrounds below $\ell \simeq 500$. Going to smaller
values of $\ell$ we have greater frequency coverage, and greater
ratio of signal to instrument noise.
The only thing that gets slightly
worse as $\ell$ decreases is the relative amplitude of the
galactic foreground power spectra, but this effect is overwhelmed
by the others. Of course going to higher $\ell$ we have
less frequency coverage and a smaller ratio of signal to
instrument noise. The galactic
foregrounds still do not become a problem though since their
relative power continues to decrease.
What does become a concern at higher $\ell$
are foregrounds with rising angular power spectra: radio point sources
and the thermal Sunyaev-Zeldovich effect from galaxy clusters.
These alone are responsible for the deviation of $\Delta C_l$ from the
no foreground case, visible in Fig. \ref{fig:dclt}.
The impact of the Sunyaev-Zeldovich component is worth exploring more.
It is quite possible that the actual amplitude is ten times larger
than in our baseline model. The A96 calculation ignores
the contribution from filaments, which may actually dominate that
from the clusters, and it also ignores the clustering of the
clusters. If we increase the power by a factor of 10, and relax the
prior on it to $\alpha = 0.1$ from $\alpha=1.1$, $\Delta C_l$ doubles
in the range from $l=400$ to $l=700$. On the other hand, if we
increase the power by a factor of 10, and do not relax the prior,
$\Delta C_l$ only increases by a few per cent. What we learn from
this is that having some constraints on the power spectrum of the SZ
component can be just as important as the actual amplitude.
The usefulness of prior knowledge of the SZ $C_l$ is encouraging.
It suggests that the analysis of MAP data can
profit significantly from accurate theoretical
predictions of the statistical properties of the SZ component.
It also suggests that measurements of the SZ component in much
smaller regions of the sky, which roughly constrain the power
spectrum, can be beneficial to the analysis of the full-sky MAP data.
Such analyses should be possible from combining MAP data with
datasets from higher frequency instruments such as TopHat\footnote{
TopHat home page:
{\tt http://\linebreak[1]topweb.gsfc.nasa.gov}}
and
BOOMERANG\footnote{
BOOMERANG home page:\\ {\tt http://\linebreak[1]astro.caltech.edu/\~{}mc/boom/boom.html}}, which by themselves will be extremely interesting CMB datasets.
Planck's ability to measure the power spectrum is not significantly
affected by the foregrounds below $\ell \simeq 1200$. At higher
$\ell$, the frequency coverage reduces, the noise in each channel
increases and the SZ, radio and FIRB components increase in amplitude.
Unlike for MAP, SZ is not important because in the HFI frequency
range, SZ is easily distinguished from CMB; there is even the null
at 217 GHz. However, the radio point sources and FIRB are a
concern. There is strong dependence on the prior. Even with moderate
prior information ($\alpha=1.1$ on these two components), $\Delta
C_{0l}$ is 3 times larger than the no foreground case. With an
infinite prior this reduces to a much less significant factor of about
1.2. The situation is greatly improved if the flux from the two
backgrounds of unresolved sources is a factor of 4 less in
amplitude (16 in $C_l$) than we have assumed. This is not unlikely
since our assumed flux cut of 1 Jy is about 20 times the level of
confusion noise, calculated by Tegmark \& de Oliveira-Costa (1998),
in the (post-Wiener filtering)
143 GHz, 217 GHz and 353 GHz HFI maps, and is therefore an extremely
conservative $20\sigma$ cut. Thus, we also show the results with our
input power spectrum for point sources, and the FIRB each reduced by a
factor of 16 as the dashed line in Fig.~\ref{fig:dclt}.
We see that with only the use of a moderate amount of prior
information, the errors on the $C_{0l}$s here are not qualitatively
different from the no-foreground results. The conclusions of those
forecasting cosmological parameter errors would not be qualitatively changed
by including the effect of the foregrounds as modelled here.
\begin{figure}[bthp]
\plotone{dclt2.eps}
\caption[powspec errors]{\baselineskip=10pt
MAP uncertainties for $\alpha = 0.1$ on all foreground components (top curve),
$\alpha=0.1$ on all foreground components except for
radio point sources and SZ, for which $\alpha=1.1$ (second to top
curve). The lowest
uncertainty curve is identical for $\alpha=\infty$ and no
foregrounds. Planck uncertainties for
$\alpha = 0.1$ on all foreground components except radio
point sources and the FIRB for which $\alpha=1.1$
(highest curve); $\alpha=\infty$ (middle solid curve);
the no foreground case (bottom curve); and
same as the top curve but with the FIRB and radio point source
power spectra reduced by a factor of sixteen (dashed curve).
With the FIRB and radio point source power spectra reduced by a factor
of 16, the $\alpha=\infty$ case is identical with the no foreground
case.
}
\label{fig:dclt}
\end{figure}
If galactic foregrounds are well-described by the model used
here, then they will not have significant impact on the
primary science goals of MAP and Planck. That is perhaps
the most robust conclusion to draw from the above. This
is not to say that these foregrounds do not have their impact on how well
the CMB can be measured. The left side of Fig.~\ref{fig:4panel}
shows how the foregrounds affect the uncertainties
in $a_{lm}^{\rm CMB}$. As long as $\delta a_{lm}^{\rm CMB}/\sqrt{C_l}
< 1$ then sample-variance dominates the errors in $C_\ell$. As
can be seen in the figure, this inequality holds out to at least
$l = 500$, except for MAP in the case of pixel-by-pixel subtraction
($\beta = 0$, or no use of prior information).
\begin{figure}[bthp]
\plotone{4panel.eps}
\caption[fourpanel]{\baselineskip=10pt
MAP (top) and Planck (bottom) component map uncertainties
expressed as
$\delta a_{ilm}/\sqrt{C_{il}}$ for the CMB (left) and the
three galactic foregrounds (right). No other
foregrounds were included in the calculation of the uncertainty.
On the left, the three cases are pixel by pixel subtraction
($\beta=0$, top curve), Wiener filtering ($\beta=1$, middle curve) and
no foregrounds (bottom curve). On the right the three cases
are dust (red), free-free (green) and synchrotron (blue) with
a $\beta=1$ prior
applied to all the components except for the CMB and the component in
question.
}
\label{fig:4panel}
\end{figure}
\subsection{Polarization}
The CMB is expected to be polarized at a level of about 10\% of the
anisotropy. The polarization foregrounds are nowhere near
as well-understood and explored as the temperature foregrounds.
However, taking some initial guesses at the polarization foregrounds
we find the outlook for CMB polarization measurement by MAP and
Planck to be fairly bright. The reason is that, once again,
there is a window in frequency space where the CMB is the dominant
contributor to spatial variations in polarization.
This window does not necessarily exist across the
entire polarization power spectrum, and in particular may disappear at
low $l$. This is unfortunate since the two most interesting features
in the polarization power spectra are the bump at $l \simeq
2\sqrt{z_{ri}}$ where $z_{ri}$ is the redshift of reionization and the
B-mode power spectrum due to tensor and vector modes (Kamionkowski {\frenchspacing\it et al.} 1997,
Seljak \& Zaldarriaga 1997) which also peaks at low $l$.
Here we focus on the reionization bump. To study sensitivity to it
we have not implemented Eq.~\ref{eqn:Mpol}. Instead we have
ignored cross-correlations between temperature and polarization
so that Eq.~\ref{eqn:Mindices} is applicable with appropriate
substitutions (e.g., $C_{il} \rightarrow C_{il}^E$). In general
the cross-correlations improve the constraints on the polarization
power spectrum (BPS) but that should not be the case here since
the reionization bump is a feature solely of the polarization maps
and does not show up in cross-correlation with temperature maps.
For standard CDM with an optical depth to Thomson scattering of
$\tau=0.1$, Planck measures the reionization feature with cosmic
variance precision (although HFI alone does not and neither does MAP).
At larger $\ell$, where the signal is large, the foregrounds, as
modelled here, have no significant impact on the ability of either of
the satellites to measure the CMB polarization power spectrum. Our
infinite foreground prior (or Wiener filter) results are in agreement
with the Wiener filter results of BPS.
\begin{figure}[bthp]
\plotone{3panel.eps}
\caption[powspec errors]{\baselineskip=10pt
$C_\ell^E$ vs.\ $\ell$ for standard CDM (thick black line) with optical depth
to Thomson scattering of $\tau=0.1$ and expected uncertainties under different
assumptions. The assumptions are, from top to bottom in each panel,
no foreground prior (green), infinite foreground prior (blue) and
no foregrounds (red).
}
\label{fig:dclp}
\end{figure}
\section{Discussion}
We have presented a method to calculate the sensitivity to
the CMB and its power spectrum given multi-resolution,
multi-wavelength observations of a sky that consists of
multiple foreground contributions\footnote{IDL programs implementing
this procedure are available from the author.}. The
applications to MAP and Planck have allowed for much greater freedom
in the behavior of the foregrounds than did previous analyses
(TE96, BG98). Despite this extra freedom, the conclusions
are similar---that foregrounds are not likely to qualitatively
affect the uncertainties that can be achieved on cosmological
parameters. Similar conclusions have been reached by
de Oliveira-Costa {\frenchspacing\it et al.} (1998).
Our approach has not fully taken into account the non-Gaussianity of
the foregrounds, spatial dependence of the spectrum of each component,
uncertainty in the spectral shapes, and unknown components (e.g., a
population of point sources whose spectra peak at 90 GHz). For these
reasons it is difficult to conclude with certainty that the
foregrounds will not qualitatively affect the determination of
cosmological parameters. However, a very important reason for our
rosy conclusions is a very simple fact: for most of the multipole
moments measured by a given experiment, the quality of the CMB map can
be highly degraded, without having any impact on the quality of the
power spectrum. Thus, any effect we have not included here has to
overcome this hurdle in order to be important.
Non-gaussianity is both a friend and a foe. We have already exploited
it here in assuming that the brightest point sources could be
identified with threshold cuts and removed. However, it can
present a challenge to the above sample-variance argument
if it resulted in the errors in each $a_{lm}$ being
strongly correlated with the errors in $a_{lm^\prime}$, in such a way
that they did not beat down with many averagings. One can think
of this as an effective reduction in the number of independent
modes in the foreground (Eisenstein 1998).
However, we expect the small-scale behavior in patches
of the sky sufficiently separated to be decorrelated. Hence we
do not expect the mode number reduction to be large,
though further investigation of effects of non-Gaussianity is
clearly warranted.
We have also neglected things that will improve estimation of the
CMB from MAP and Planck data, such as the use of maps at other
frequencies, e.g., DIRBE, IRAS and FIRST (which will fly with Planck).
Assumptions about the smoothness of foreground power spectra are
also reasonable and could significantly reduce our error forecasts
at high $l$, by extending the information gained at lower $l$
where there is greater frequency coverage.
It is clear though, that even if foregrounds do nothing more
than double the errors on cosmological parameters, the determination
of the exact size of the error bars will probably be dominated by
foreground considerations. Small patches of the sky will be analyzed
separately, with those appearing the cleanest given more weight.
Foreground model residuals will be aggressively sought. Thus the study
of foregrounds remains very important. We close by listing the
following improvements in our understanding of foregrounds which could
prove to be extremely beneficial:
\noindent $\bullet$ More accurate theoretical calculation of the statistics of the
SZ component. Our positive conclusions for MAP depend on the
amplitude of the SZ power spectrum
and on how well that power spectrum
can be determined {\it a priori}. We have shown that having
a prediction of $C_l^{\rm SZ}$ good to about a factor of 2
(which would justify our use of $\alpha=1.1$) is enough to
keep $\Delta C_l$ within about ten per cent of the no foreground case,
even if $C_l^{\rm SZ}$ is ten times larger than the A96 calculation.
\noindent $\bullet$ Higher frequency complements to MAP, such as are coming from
balloon flights (e.g., TopHat and \\
BOOMERANG). Even coverage
of just a few per cent of the sky can be used to characterize
the level of contamination in the rest of the sky.
\noindent $\bullet$ A point source survey near 90 GHz (see Gawiser {\frenchspacing\it et al.} ~1998).
\noindent $\bullet$ Further development of methods for removing non-Gaussian
foregrounds and understanding of resulting CMB uncertainties.
\noindent $\bullet$ A full-sky, high resolution H$\alpha$ survey, since this
is a tracer of free-free. Useful steps in this direction
are already being made with a survey in the North with one degree
resolution (Reynolds {\frenchspacing\it et al.} 1998) and one in the South with 0.1 degree
resolution (McCullough {\frenchspacing\it et al.} 1998) nearing completion.
\noindent $\bullet$ Measurements of high galactic latitude dust and synchrotron polarization.
\acknowledgments
I thank K. Ganga, A. Jaffe and J. Ruhl for useful conversations,
as well as the organizers and participants of the Sloan Summit on Microwave
Foregrounds, who have informed the above discussion. I acknowledge
the use of CMBFAST (Seljak \& Zaldarriaga 1996).
| 10,710 |
\section{Introduction}\label{S1}
The classical Hardy and Rellich inequalities are estimates for differential operators on the spaces $L_p({\bf R}^d\backslash\{0\})$, $p\in\langle1,\infty\rangle$, for which the optimal constants are known.
Our intention is to derive similar estimates on $\Omega={\bf R}^d\backslash K$ where $K$ is a closed convex subset of ${\bf R}^d$.
There are two stages in the analysis, first the existence of the inequalities and secondly the optimality of the corresponding constants.
Background information on both these aspects and references to the literature can be found in the recent monograph \cite{BEL}.
Our estimates depend on the Hausdorff--Minkowski dimension $d_H$ of the boundary $\Gamma=\partial\Omega$ of $\Omega$.
If the dimension $\dim (K)$ (of the affine closure $A_K$) of $K$
takes one of the values $0,1,\ldots,d-1$ then $ d_H=\dim(K)$ but if $\dim(K)=d$ then $d_H=d-1$, assuming
$K\neq{\bf R}^d$.
In addition, the general inequalities depend on
the Euclidean distance $d_\Gamma$ to the boundary, i.e.\
$ d_\Gamma(x)=\inf_{y\in \Omega^c}|x-y|$ for $x\in \Omega$.
We begin by establishing the existence of weighted Hardy inequalities with a weight function $c_\Omega=c\circ d_\Gamma$ where $c$ is a strictly positive function on $\langle0,\infty\rangle$ with different power behaviours at the origin and at infinity.
\begin{thm}\label{tcc1.1}
Let $\Omega={\bf R}^d\backslash K$ where $K$ is a closed convex subset of ${\bf R}^d$
and denote the Hausdorff dimension of the boundary $\Gamma$ of $ \Omega$ by $d_H$.
Further let $c_\Omega=c\circ d_\Gamma$ where $c(s)=s^\delta(1+s)^{\delta'-\delta}$ with $\delta,\delta'\geq0$.
If $d-d_H+(\delta\wedge\delta')-p>0$ with $p\in[1,\infty\rangle$ then
\begin{equation}
\int_\Omega c_\Omega\,|\nabla\varphi|^p\geq
\int_\Omega c_\Omega\,|(\nabla d_\Gamma).(\nabla\varphi)|^p\geq a_p^{\,p}\int_\Omega c_\Omega\,d_\Gamma^{\;-p}\,|\varphi|^p
\label{ecc1.1}
\end{equation}
for all $\varphi\in C_c^1(\Omega)$ with $a_p=(d-d_H+(\delta\wedge\delta')-p)/p$.
\end{thm}
Here and in the sequel all functions are real-valued.
Moreover, we use the standard notation $|\nabla\varphi|=(\sum^d_{k=1}|\partial_k\varphi|^2)^{1/2}$.
Then the left hand inequality in (\ref{ecc1.1}) follows since
$d_\Gamma$ is a Lipschitz function with $|\nabla d_\Gamma|\leq1$.
The choice of the weight $c$ is governed by the asymptotic properties $c(s)/s^\delta\to1$ as $s\to0$ and $c(s)/s^{\delta'}\to 1$ as $s\to\infty$.
Although this theorem and the subsequent one are stated for the particular coefficient $c$ the general conclusions are valid for a large
class of $c$ with similar asymptotic properties.
Note that if $\delta=\delta'$ then $c(s)=s^\delta$ which is the conventional weight function used in the discussion of Hardy inequalities.
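These asymptotics pin down the sign of the exponent in the weight: $c(s)=s^\delta(1+s)^{\delta'-\delta}$ behaves as $s^\delta$ at the origin and as $s^{\delta'}$ at infinity, with logarithmic derivative $s\,c'(s)/c(s)=(\delta+\delta's)/(1+s)$, the identity used later in the proof of the Hardy inequality. A small numerical sanity check (the values of $\delta,\delta'$ are arbitrary illustrative choices):

```python
# Check the asymptotics and logarithmic derivative of the weight
# c(s) = s^delta * (1+s)^(delta' - delta).
delta, delta_p = 1.0, 3.0

def c(s):
    return s ** delta * (1.0 + s) ** (delta_p - delta)

small, large = 1e-8, 1e8
ratio_origin = c(small) / small ** delta       # should tend to 1 as s -> 0
ratio_infinity = c(large) / large ** delta_p   # should tend to 1 as s -> infinity

# s*c'(s)/c(s) = (delta + delta'*s)/(1+s), via a centered difference.
s0, h = 2.0, 1e-6
log_deriv = s0 * (c(s0 + h) - c(s0 - h)) / (2.0 * h) / c(s0)
claimed = (delta + delta_p * s0) / (1.0 + s0)
```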
An important part of the proof of the theorem, which will be given in Section~\ref{S2}, is the observation that $d_\Gamma$ is a convex function on convex subsets of~$\Omega$.
This is the analogue of the statement that if $U$ is an open convex subset of ${\bf R}^d$ then $d_{\partial U}$ is a concave function (see
\cite{Hor8}, Corollary~2.1.26, or \cite{BFT}, Example~2).
The proofs of the two statements are very similar.
There is also a somewhat weaker analogue of Theorem~\ref{tcc1.1} for weighted operators on convex sets which we will establish at the end of Section~\ref{S2}.
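As a plausibility check of Theorem~\ref{tcc1.1} in its simplest instance, take $K=\{0\}$, $d=3$, $p=2$ and $c=1$ (so $d_H=0$, $\delta=\delta'=0$ and $a_2^{\,2}=1/4$); for a radial function $\varphi(r)$ the inequality reduces, after cancelling the angular factor $4\pi$, to $\int_0^\infty \varphi'(r)^2 r^2\,dr\geq \tfrac14\int_0^\infty \varphi(r)^2\,dr$. The test function below is an arbitrary smooth rapidly decaying choice, not taken from the paper:

```python
import math

# Riemann-sum check of the radial Hardy inequality in d = 3, p = 2
# with test function phi(r) = r * exp(-r) (illustrative choice).
n = 200000
dr = 40.0 / n
lhs = rhs = 0.0
for i in range(1, n):
    r = i * dr
    phi = r * math.exp(-r)
    dphi = (1.0 - r) * math.exp(-r)   # phi'(r)
    lhs += dphi * dphi * r * r * dr   # int phi'(r)^2 r^2 dr
    rhs += 0.25 * phi * phi * dr      # (1/4) int phi(r)^2 dr
```

For this choice the two sides evaluate analytically to $1/4$ and $1/16$, so the inequality holds with room to spare.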
In Section~\ref{S3} we consider the existence of weighted Rellich inequalities on $\Omega={\bf R}^d\backslash K$.
The classic Rellich inequalities were initially established for the Laplacian $\Delta=-\sum^d_{k=1}\partial_k^{\,2}$ on $L_2({\bf R}^d\backslash\{0\})$ but have subsequently been extended to all the spaces $L_p({\bf R}^d\backslash\{0\})$ with $p\in\langle1,\infty\rangle$
(see, for example, \cite{BEL} Sections~6.1--6.3 and in particular Corollary~6.3.5).
Our aim is to establish similar estimates for the weighted operators $H=-\sum^d_{k=1}\partial_k \,c_\Omega\,\partial_k=-\mathop{\rm div}(c_\Omega\nabla)$ on the spaces $L_p(\Omega)$.
The operators $H$ are defined on the universal domain $C_c^2(\Omega)$ and all estimates are on this domain.
\begin{thm}\label{tcc1.2}
Let $\Omega={\bf R}^d\backslash K$ where $K$ is a closed convex subset of ${\bf R}^d$
and denote the Hausdorff dimension of the boundary $\Gamma$ of $\Omega$ by $d_H$.
Further let $c_\Omega=c\circ d_\Gamma$ where $c(s)=s^\delta(1+s)^{\delta'-\delta}$ with $\delta,\delta'\in[0,2\rangle$.
If $d-d_H+p\,(\delta\wedge\delta')-2p\geq2p\,|\delta-\delta'|\,(2-\delta\vee\delta')^{-1}$
with $p\in\langle1,\infty\rangle$ then there is a $c_p\in\langle0,C_p]$, where $C_p=(p-1)\,(d-d_H)\,(d-d_H+p\,(\delta\wedge\delta')-2p)\,p^{-2}$,
such that
\begin{equation}
\int_\Omega|H\varphi|^p\geq c_p^{\,p}\int_\Omega |c_\Omega\,d_\Gamma^{\,-2}\varphi|^p
\label{ecc1.2}
\end{equation}
for all $\varphi\in C_c^2(\Omega)$.
Moreover, if $\delta=\delta'$ then $c_p=C_p$.
\end{thm}
The proof of the theorem allows one to identify $c_p$ as a function of $d_H, \delta$ and $\delta'$
but the result is significantly more complicated than the expression for $C_p$.
Although the condition $c_p>0$ requires the restriction $\delta,\delta'<2$ the existence of the Rellich inequalities should not depend on this latter condition.
In fact if $p=2$ then the weighted inequalities (\ref{ecc1.2}) follow for all $\delta,\delta'\geq 0$ from the arguments of \cite{Rob12}.
Theorems~\ref{tcc1.1} and \ref{tcc1.2} establish criteria for existence of the Hardy and Rellich inequalities (\ref{ecc1.1}) and
(\ref{ecc1.2}), respectively, but they give no information about optimality of the constants $a_p^{\,p}$ and $c_p^{\,p}$.
This problem, which appears more challenging, is tackled in Section~\ref{S4}.
We show, for example, that $a_p^{\,p}$ is optimal for the Hardy inequality if $K=\{0\}$.
It is also optimal if $k=\dim(K)\in\{1,\ldots,d-1\}$
and either $\delta\leq \delta'$ or the `dimension at infinity' $k_\infty$ of $K$ is equal to $k$.
Alternatively $C_p^{\,p}$ is the optimal constant for the Rellich inequality if $K=\{0\}$ and $\delta=\delta'\in[0,2\rangle$.
More generally it is optimal if $k\in\{1,\ldots,d-1\}$, $\delta=\delta'\in[0,2\rangle$ and $k_\infty=k$.
But these results leave room for improvement.
In particular if $p=2$ then it follows from \cite{Rob12} that $C_2^{\,2}$ is the optimal constant for all $\delta,\delta'\geq0$
such that $\delta+\delta'\leq4$ with no further restriction on $K$ or $\delta,\delta'$ other than $C_2>0$.
Finally note that if there is no weighting factor, i.e.\ if $c=1$, then $H=\Delta$, the Laplacian, and (\ref{ecc1.2}) states that
\[
\int_\Omega|\Delta\varphi|^p\geq C_p^{\,p}\int_\Omega |d_\Gamma^{\,-2}\varphi|^p
\]
for all $\varphi\in C_c^2(\Omega)$.
Moreover, the constant $C_p=(p-1)(d-d_H)(d-d_H-2p)/p^2$ is optimal.
In particular if $K=\{0\}$ then $d_H=0$ and this gives the classical $L_p$-Rellich inequality with the optimal constant.
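The reduction to the classical constant can be checked by direct arithmetic; a short sketch (the numerical values are simple consequences of the displayed formula, not additional results):

```python
def rellich_constant(d, d_H, p, delta_min=0.0):
    # C_p = (p-1)(d-d_H)(d-d_H + p*(delta /\ delta') - 2p)/p^2,
    # where delta_min stands for delta /\ delta'.
    return (p - 1.0) * (d - d_H) * (d - d_H + p * delta_min - 2.0 * p) / p ** 2

# K = {0} (d_H = 0), no weight (delta = delta' = 0), p = 2:
# the classical L_2-Rellich constant d(d-4)/4.
c2_d5 = rellich_constant(d=5, d_H=0, p=2)
c2_d6 = rellich_constant(d=6, d_H=0, p=2)
```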
But (\ref{ecc1.2}), which is a `weighted operator' version of the classical inequality, is not the only possible weighted generalization.
A second natural alternative would be the `weighted measure' version
\begin{equation}
\int_\Omega c_\Omega^{\;p}\,|\Delta\varphi|^p\geq b_p\int_\Omega c_\Omega^{\;p}\,|d_\Gamma^{\,-2}\varphi|^p
\label{ecc1.3}
\end{equation}
with $b_p>0$ for all $\varphi\in C_c^2(\Omega)$.
The relation between the existence and optimality of the two versions (\ref{ecc1.2}) and (\ref{ecc1.3}) of the Rellich inequalities is unclear.
\section{Hardy inequalities}\label{S2}
In this section we prove Theorem~\ref{tcc1.1}.
As a preliminary to the proof we need to establish local convexity of the distance function $d_\Gamma$
where $\Omega={\bf R}^d\backslash K$ with $K$ a closed convex subset of ${\bf R}^d$ and $K\neq{\bf R}^d$.
Since $K$ is the complement of $\Omega$
it follows from Motzkin's theorem (see, for example, \cite{Hor8}, Theorem~2.1.20, or \cite{BEL}, Theorem~2.2.9) that each point $x\in \Omega$ has a unique nearest point $n(x)\in K$, i.e.\ a unique $n(x)\in K$ with $d_\Gamma(x)=|x-n(x)|$.
Moreover, $d_\Gamma$ is differentiable at each point $x\in\Omega$ and $(\nabla d_\Gamma)(x)=(x-n(x))/|x-n(x)|$.
Thus $|\nabla d_\Gamma|=1$ and $(\nabla d_\Gamma^{\;2})(x)=2\,(x-n(x))$.
Note that in the degenerate case $K=\{0\}$ one has $d_\Gamma(x)=|x|$
and consequently $\nabla^2d_\Gamma^{\;2}=2\,d$.
In the non-degenerate case it is not, however, clear that $d_\Gamma$ is even twice-differentiable.
But this follows from local convexity.
\begin{prop}\label{pcc2.1}
The distance function $d_\Gamma$ is convex on all open convex subsets of $\Omega$.
In particular it is twice-differentiable almost everywhere in $\Omega$ and the corresponding Hessian
$(\partial_k\partial_ld_\Gamma)(x)$ is positive-definite for
almost all $x\in\Omega$.
\end{prop}
\proof\
First we prove the convexity in an open neighbourhood of an arbitrarily chosen point
of $\Omega$.
Let $n(x)\in \Gamma$ be the unique nearest point of $x\in\Omega$.
Then there is a unique tangent hyperplane $T_x$ at the point $n(x)$ which is orthogonal to $x-n(x)$.
The hyperplane separates ${\bf R}^d$ into two open half-spaces, $\Gamma^{(+)}_x\subset \Omega$ and
$\Gamma^{(-)}_x\supset \mathop{\rm Int}(\Omega^c)$.
Moreover, $\Omega=\bigcup_{x\in\Omega}\Gamma^{(+)}_x$ and $\mathop{\rm Int}(\Omega^c)=\bigcap_{x\in \Omega}\Gamma^{(-)}_x$.
Now fix a point $x_0\in\Omega$ and an $r>0$ such that the open Euclidean ball $B_{x_0}(r)$ with centre $x_0$ and radius $r$
is contained in $\Omega$.
Next choose $r$ sufficiently small that $B_{x_0}(r)\subset \bigcap_{x\in B_{x_0}(r)}\Gamma^{(+)}_x$.
This is possible since if $x_k\in \Omega$ converges to $x\in\Omega$ then $n(x_k)\to n(x)$ (see \cite{BEL}, Lemma~2.2.1).
Therefore the family of open subsets $s>0\mapsto \Lambda_{x_0}(s)= \bigcap_{x\in B_{x_0}(s)}\Gamma^{(+)}_x$
increases, as $s$ decreases to zero, to $\Gamma^{(+)}_{x_0}\supset B_{x_0}(r)$.
But the balls $B_{x_0}(s)$ decrease as $s\to0$.
Therefore there is an $r_0$ such that $B_{x_0}(r)\subset \bigcap_{x\in B_{x_0}(r_0)}\Gamma^{(+)}_x$ for all $r\in\langle0,r_0\rangle$.
Secondly, we argue that if $r<r_0$ then $d_\Gamma$ is convex on $B_{x_0}(r)$.
To this end choose three points $x, y,z\in B_{x_0}(r)$ such that $x=\lambda\,y+(1-\lambda)\,z$
with $\lambda\in\langle0,1\rangle$.
Since $r<r_0$ it follows that $B_{x_0}(r)\subset \Gamma^{(+)}_x$.
Thus the tangent plane $T_x$ separates $B_{x_0}(r)$ and $\Omega^c$.
Next let $\tilde x,\tilde y,\tilde z$ denote the orthogonal projections of $x,y,z$ onto $T_x$.
Then $\tilde x=n(x)$, by definition, and $d_\Gamma(x)=|x-\tilde x|$.
But
\[
|y-\tilde y|=\inf_{y_0\in \Gamma^{(-)}_x}|y-y_0|\leq \inf_{y_0\in \Omega^c}|y-y_0|=d_\Gamma(y)
\;.
\]
Similarly $|z-\tilde z|\leq d_\Gamma(z)$.
Moreover, $\tilde x=\lambda\,\tilde y +(1-\lambda)\,\tilde z$ and
\[
|x-\tilde x|=\lambda\,|y-\tilde y|+(1-\lambda)\,|z-\tilde z|
\;.
\]
Therefore $d_\Gamma(x)\leq \lambda\,d_\Gamma(y)+(1-\lambda)\,d_\Gamma(z)$.
Since this is valid for all choices of $x,y,z\in B_{x_0}(r)$ and $\lambda\in\langle0,1\rangle$ with $x=\lambda\, y+(1-\lambda)\,z$ it follows that $d_\Gamma$ is convex on
$B_{x_0}(r)$.
Thirdly, it follows from Motzkin's theorem that $d_\Gamma$ is once-differentiable at each $x\in\Omega$.
But since $d_\Gamma$ is convex on $B_{x_0}(r)$ it follows from Alexandrov's theorem (see \cite{EvG}, Section~6.4) that $d_\Gamma$ is
twice-differentiable almost-everywhere on $B_{x_0}(r)$.
Since this is valid for each $x_0\in\Omega$ for some $r>0$
it then follows that $d_\Gamma$ is twice-differentiable almost-everywhere on $\Omega$.
The Hessian of a convex function is automatically positive-definite.
Hence the Hessian of $d_\Gamma$ is positive-definite almost everywhere on $\Omega$.
Finally let $d_\Gamma^{\,(\varepsilon)}$, $\varepsilon>0$, denote a family of local mollifications/regularizations of $d_\Gamma$
(see \cite{EvG}, Section~4.2.1).
Then the $d_\Gamma^{\,(\varepsilon)}$ are $C^2$-functions and their Hessians are positive-definite.
In fact the proof of Alexandrov's theorem relies on proving the positive-definiteness of the regularizations.
Next it follows by a standard consequence of convexity (see \cite{bSim7}, Theorem~1.5) that $d_\Gamma^{\,(\varepsilon)}$ is convex on all open convex subsets suitably distant from the boundary.
But $d_\Gamma^{\,(\varepsilon)}\to d_\Gamma$ as $\varepsilon\to0$.
Therefore in the limit $d_\Gamma$ is convex on all open convex subsets of~$\Omega$. \hfill$\Box$
\bigskip
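The convexity statement can also be probed numerically. The sketch below samples the convexity inequality $d_\Gamma(\lambda y+(1-\lambda)z)\leq\lambda\,d_\Gamma(y)+(1-\lambda)\,d_\Gamma(z)$ for a hypothetical example set, $K$ the unit segment in ${\bf R}^2$, with sample points confined to the open upper half-plane so that the segments $[y,z]$ stay inside $\Omega$:

```python
import math
import random

# Distance from x to the segment K = {(t, 0): 0 <= t <= 1} in R^2.
def dist_to_segment(x):
    px, py = x
    t = max(0.0, min(1.0, px))      # projection onto the segment
    return math.hypot(px - t, py)

random.seed(0)
violations = 0
for _ in range(10000):
    # Points in the open upper half-plane, a convex subset of Omega.
    y = (random.uniform(-3, 4), random.uniform(0.1, 3))
    z = (random.uniform(-3, 4), random.uniform(0.1, 3))
    lam = random.random()
    x = (lam * y[0] + (1 - lam) * z[0], lam * y[1] + (1 - lam) * z[1])
    if dist_to_segment(x) > lam * dist_to_segment(y) + (1 - lam) * dist_to_segment(z) + 1e-12:
        violations += 1
```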
The subsequent proof of the Hardy inequalities of Theorem~\ref{tcc1.1} depends on control of the second derivatives of $d_\Gamma$.
\begin{cor}\label{ccc2.2}
If $\Omega={\bf R}^d\backslash K$ where $K$ is a closed convex subset
then $\nabla^2d_\Gamma^{\;2}\geq 2\,(d-d_H)$ where $d_H$ is the Hausdorff $($Minkowski$\,)$ dimension of $\Gamma$.
\end{cor}
\proof\
First if $K$ is a singleton then one can assume $K=\{0\}$.
Hence $d_\Gamma^{\;2}(x)=|x|^2$ and $\nabla^2d_\Gamma^{\;2}=2\,d$.
Secondly, if $\dim(K)=k$ with $k\in \{1,\ldots, d-1\}$ one can factor ${\bf R}^d$ as a direct product $ {\bf R}^k\times {\bf R}^{d-k}$
where ${\bf R}^k$ is identified with $A_K$, the affine hull of $K$.
Thus if $x=(y,z)\in {\bf R}^d$ with $y\in {\bf R}^k$ and $z\in {\bf R}^{d-k}$ one has $d_\Gamma^{\;2}(x)=d_K^{\;2}(y)+|z|^2$
where $d_K(y)=\inf_{y'\in K}|y-y'|$.
In particular if $y\in K$ then $d_\Gamma^{\;2}(x)=|z|^2$
and $\nabla^2d_\Gamma^{\;2}=\nabla_{\!z}^2d_\Gamma^{\;2}=2\,(d-k)=2\,(d-d_H)$
because $d_H=\dim(K)$.
But if $y\not\in K$ then $(\nabla_{\!x}^2d_\Gamma^{\;2})(x)=(\nabla_{\!y}^2d_K^{\;2})(y)+\nabla_{\!z}^2|z|^2> 2\,(d-k)$.
Hence one now has $\nabla^2d_\Gamma^{\;2}\geq 2\,(d-d_H)$ for all $k\in \{1,\ldots, d-1\}$.
Thirdly, if $\dim(K)=d$, and $K\neq {\bf R}^d$ then $\Gamma=\partial K$ and
the Hausdorff dimension $d_H$ of $\Gamma$ is $d-1$.
Then one can argue as in \cite{BEL}, Sections~3.4.2 and 3.4.3 that $\nabla^2d_\Gamma^{\;2}\geq 2$.
Specifically if $x\in\Omega$ one can choose coordinates $x=(y_1,z)$ with $y_1>0$, $z\in{\bf R}^{d-1}$
and such that the nearest point of $(y_1,0)$ is the origin.
Then
\[
(\nabla_{\!x}^2d_\Gamma^{\;2})(x)=\partial_{y_1}^{\,2}y_1^2+(\nabla_{\!z}^2d_\Gamma^{\;2})(x)\geq2
\]
since $(\nabla_{\!z}^2d_\Gamma^{\;2})(x)\geq0$ by Proposition~\ref{pcc2.1}.
In fact the lower bound is attained if $K$ has a proper face with dimension $d-1$.
\hfill$\Box$
\bigskip
At this point we are prepared to establish the weighted Hardy inequalities (\ref{ecc1.1}).
\smallskip
\noindent{\bf Proof of Theorem~\ref{tcc1.1}}$\;$
Let $\chi_p=c_\Omega\,d_\Gamma^{\;-p}\,(\nabla d_\Gamma^{\;2})$.
Further let $c'_\Omega=c'\circ d_\Gamma$.
Then
\begin{eqnarray*}
\mathop{\rm div}\chi_p&=&2\,(c'_\Omega \,d_\Gamma/c_\Omega-p)\,c_\Omega\,d_\Gamma^{\;-p}\,|\nabla d_\Gamma|^2
+c_\Omega\,d_\Gamma^{\;-p}\,(\nabla^2d_\Gamma^{\;2})\\[5pt]
&\geq& 2\,b_p\,c_\Omega\,d_\Gamma^{\;-p}
\end{eqnarray*}
with $b_p=(d-d_H+\delta\wedge\delta'-p)$ where we have used $|\nabla d_\Gamma|^2=1$,
the estimate $\nabla^2d_\Gamma^{\;2}\geq2\,(d-d_H)$ of Corollary~\ref{ccc2.2} and the observation that
$c'_\Omega \,d_\Gamma/c_\Omega\geq \delta\wedge\delta'$.
(The last estimate follows since $s\,c'(s)/c(s)=(\delta+s\,\delta')/(1+s)$.)
Next for $\varepsilon>0$ set $\varphi_\varepsilon=(\varphi^2+\varepsilon^2)^{1/2}-\varepsilon$.
Then $\varphi_\varepsilon\geq0$ is a regularized approximation to $|\varphi|$ with the same support as $\varphi$.
But $\varphi^2+\varepsilon^2=(\varphi_\varepsilon+\varepsilon)^2\geq \varphi_\varepsilon^2+\varepsilon^2$
so $\varphi_\varepsilon\leq |\varphi|$.
In addition $\nabla\varphi_\varepsilon=(\varphi/(\varphi^2+\varepsilon^2)^{1/2})\,\nabla\varphi$.
Now assume $p\in\langle1,\infty\rangle$ and
$b_p>0$.
Then
\begin{eqnarray*}
0<2\,b_p\int_\Omega c_\Omega\,d_\Gamma^{\;-p}\,\varphi_\varepsilon^{\,p}&\leq&\int_\Omega(\mathop{\rm div}\chi_p)\,\varphi_\varepsilon^{\,p}\\[5pt]
&=&-p\int_\Omega c_\Omega\,d_\Gamma^{\;-p}\,(\nabla d_\Gamma^{\;2}).(\nabla\varphi_\varepsilon)\,\varphi_\varepsilon^{\,p-1}\\[5pt]
&=&-2\,p\int_\Omega c_\Omega\,d_\Gamma^{\;-p+1}\,(\nabla d_\Gamma).(\nabla\varphi_\varepsilon)\,\varphi_\varepsilon^{\,p-1}\\[5pt]
&\leq&2\,p\,\Big(\int_\Omega(c_\Omega\,d_\Gamma^{\;-p+1})^p|(\nabla d_\Gamma).(\nabla\varphi)|^p\,\psi^p\Big)^{1/p}.
\Big(\int_\Omega\varphi_\varepsilon^{\,p}\,\psi^{-q}\Big)^{1/q}
\end{eqnarray*}
where $q$ is the conjugate of $p$ and $\psi$ is a strictly positive function.
The last step uses the H\"older inequality.
Choosing $\psi=c_\Omega^{-1/q}\,d_\Gamma^{\;p-1}$ one finds
\begin{eqnarray*}
0<b_p\int_\Omega c_\Omega\,d_\Gamma^{\;-p}\,\varphi_\varepsilon^{\,p}&\leq& p\,\Big(\int_\Omega c_\Omega\,|(\nabla d_\Gamma).(\nabla\varphi)|^p\Big)^{1/p}.
\Big(\int_\Omega c_\Omega\,d_\Gamma^{\;-p}\varphi_\varepsilon^{\,p}\Big)^{1/q}\;.
\end{eqnarray*}
Dividing by the last factor and raising the inequality to the power $p$ one obtains
\[
\int_\Omega c_\Omega\,|\nabla\varphi|^p\geq \int_\Omega c_\Omega\,|(\nabla d_\Gamma).(\nabla\varphi)|^p\geq a_p\int_\Omega c_\Omega\,d_\Gamma^{\;-p}\,\varphi_\varepsilon^{\,p}
\]
for all $\varphi\in C_c^1(\Omega)$, where $a_p=(b_p/p)^p$.
Then the Hardy inequality of the theorem follows in the limit $\varepsilon\to0$ by dominated convergence.
The proof for $p=1$ is similar but simpler.
The H\"older inequality is not necessary.
\hfill$\Box$
\bigskip
The existence of a weighted Hardy inequality of the form (\ref{ecc1.1}) in the case $\delta=\delta'$, with $d_H<d-1$, follows from Theorem~4.2 of \cite{LV}.
This paper also indicates a number of interesting directions to extend the current results.
\begin{remarkn}\label{rcc1}
The foregoing proof only uses some general features of the weight function $c$.
The estimates (\ref{ecc1.1}) follow for any strictly positive differentiable $c$ on $\langle0,\infty\rangle$ with
$c'(s)s/c(s)\geq \delta\wedge \delta'$.
If one makes the replacement $c(s)\to s^\delta(a+b\,s)^{\delta'-\delta}$ with $a, b>0$ then
$c'(s)s/c(s)=(a\,\delta+b\,\delta's)/(a+b\,s)\geq \delta\wedge \delta'$ and the theorem remains valid.
Moreover, the constant $a_p$ in the Hardy inequality (\ref{ecc1.1})
is unchanged but now $c(s)/s^\delta\to a^{\delta'-\delta}$ as $s\to0$ and $c(s)/s^{\delta'}\to b^{\delta'-\delta}$ as $s\to\infty$.
\end{remarkn}
\begin{remarkn}\label{rcc2} The condition $d-d_H+\delta\wedge\delta'-p>0$ in Theorem~\ref{tcc1.1}
restricts the result to sets whose boundary has large codimension.
For example if $\delta=0=\delta'$ it requires $d-d_H>p\geq1$.
In particular it does not apply if $d_H=d-1$.
If, however, $d_H$ is small it is useful and allows one to deduce Rellich inequalities on $L_2(\Omega)$ by the arguments
of \cite{Rob12} for all $\delta, \delta'\geq0$ (see Section~\ref{S3}).
\end{remarkn}
The foregoing arguments may also be used to obtain Hardy inequalities on convex subsets $\Omega$
but the conclusions are somewhat weaker.
The problem is that points in $\Omega$ can have multiple near points.
This causes complications since $|(\nabla d_\Gamma)(x)|=1$ if and only if $x$ has a unique near point (see \cite{BEL}, Section~2.2).
The set of points in $\Omega$ which have more than one near point is defined as the skeleton $S(\Omega)$ of the set.
Then $ |(\nabla d_\Gamma)(x)|=1$ on $G(\Omega)={\bf R}^d\backslash\overline{S(\Omega)}$.
The following result is in the spirit of Theorem 3.4.3 of \cite{BEL}.
\begin{prop}\label{p2}
Assume $\Omega$ is convex.
Again let $c_\Omega=c\circ d_\Gamma$ with $c(s)=s^\delta(1+s)^{\delta'-\delta}$ where $\delta,\delta'\geq0$.
If $p-1-\delta\vee\delta'>0$ then
\[
\int_\Omega c_\Omega\,|\nabla\varphi|^p\geq \int_\Omega c_\Omega\,|(\nabla d_\Gamma).(\nabla\varphi)|^p\geq a_p\int_\Omega c_\Omega\,d_\Gamma^{\;-p}\,|\varphi|^p
\]
for all $\varphi\in C_c^1(G(\Omega))$ with $a_p=((p-1-\delta\vee\delta')/p)^p$.
\end{prop}
\proof\
If $\Omega$ is convex then $d_\Gamma$ is concave (see, for example, \cite{BEL}, Theorem~2.3.2).
This is sufficient to deduce that $-\Delta d_\Gamma$ is a positive measure
(see \cite{EvG}, Chapter~6).
Therefore
\[
\int_\Omega\,(\nabla\psi).(\nabla d_\Gamma)=\int_\Omega\,d\mu_\Omega\,\psi\geq0
\]
for all positive $\psi\in C_c^1(\Omega)$ with $\mu_\Omega$ a positive Radon measure.
Again introduce the regularizations of $|\varphi|$ by
$\varphi_\varepsilon=(\varphi^2+\varepsilon^2)^{1/2}-\varepsilon$ with $\varepsilon>0$.
It then follows that
\[
\int_\Omega\,(\nabla (c_\Omega\,d_\Gamma^{\;-p+1}\,\varphi_\varepsilon^{\,p})).(\nabla d_\Gamma)\geq0
\;.
\]
Therefore
\[
\int_\Omega\,(\nabla (c_\Omega\,d_\Gamma^{\;-p+1})).(\nabla d_\Gamma)\,\varphi_\varepsilon^{\,p}
+p\int_\Omega c_\Omega\,d_\Gamma^{\;-p+1}\,(\nabla d_\Gamma).(\nabla\varphi_\varepsilon)\,\varphi_\varepsilon^{\,p-1}
\geq0
\;.
\]
Next it follows that
\begin{eqnarray*}
-(\nabla (c_\Omega\,d_\Gamma^{\;-p+1})).(\nabla d_\Gamma)&=&(p-1-c_\Omega'\,d_\Gamma/c_\Omega)\,c_\Omega\,d_\Gamma^{\;-p}
|\nabla d_\Gamma|^2\\[5pt]
&\geq&(p-1-(\delta\vee\delta'))\,c_\Omega\,d_\Gamma^{\;-p}|\nabla d_\Gamma|^2
\end{eqnarray*}
since $c_\Omega'\,d_\Gamma\leq (\delta\vee\delta')\,c_\Omega$.
Next if $\varphi\in C_c^1(G(\Omega))$ then $\mathop{\rm supp}\varphi_\varepsilon\subset G(\Omega)$ and consequently $|\nabla d_\Gamma|=1$
on the support of $\varphi_\varepsilon$.
Therefore, by combining the foregoing estimates, one obtains
\[
0<b_p\int_\Omega c_\Omega\,d_\Gamma^{\;-p}\,\varphi_\varepsilon^{\,p}\leq p\int_\Omega c_\Omega\,d_\Gamma^{\;-p+1}\,|(\nabla d_\Gamma).(\nabla\varphi)|\,\varphi_\varepsilon^{\,p-1}
\]
whenever $b_p=(p-1-(\delta\vee\delta'))>0$. Here we have again used $\nabla\varphi_\varepsilon=(\varphi/(\varphi^2+\varepsilon^2)^{1/2})\,\nabla\varphi$.
Therefore the H\"older inequality gives
\[
b_p\int_\Omega c_\Omega\,d_\Gamma^{\;-p}\,\varphi_\varepsilon^{\,p}
\leq p\,\Big(\int_\Omega(c_\Omega\,d_\Gamma^{\;-p+1})^p|(\nabla d_\Gamma).(\nabla\varphi)|^p\,\psi^p\Big)^{1/p}.
\Big(\int_\Omega\,\varphi_\varepsilon^{\,p}\,\psi^{-q}\Big)^{1/q}
\]
for all $\psi$ positive.
One can then proceed as previously and choose $\psi=c_\Omega^{-1/q}\,d_\Gamma^{\;p-1}$ to find
\begin{eqnarray*}
b_p\int_\Omega c_\Omega\,d_\Gamma^{\;-p}\,\varphi_\varepsilon^{\,p}&\leq &
p\,\Big(\int_\Omega c_\Omega\,|(\nabla d_\Gamma).(\nabla\varphi)|^p\Big)^{1/p}.
\Big(\int_\Omega c_\Omega\,d_\Gamma^{\;-p}\,\varphi_\varepsilon^{\,p}\Big)^{1/q}\;.
\end{eqnarray*}
Then since $b_p>0$ one can divide throughout by the last factor, raise the inequality to the $p$-th power
and take the limit $\varepsilon\to0$ to obtain the
second inequality in the proposition.
The first one then follows since $|\nabla d_\Gamma|=1$ on $G(\Omega)$.
\hfill$\Box$
\bigskip
For further results on the weighted and unweighted Hardy inequality on convex sets we refer to \cite{MMP}, \cite{Avk1} and references therein.
\section{Rellich inequalities}\label{S3}
In this section we establish the Rellich inequalities (\ref{ecc1.2}) of Theorem~\ref{tcc1.2}.
Our proof is based on an extension of Theorem~4 in the paper of Davies and Hinz \cite{DaH} (see Theorem~6.3.3 in \cite{BEL}) from the Laplacian to the weighted operator $H$.
\begin{prop}\label{pcc3.1}
Let $\Omega$ be a general domain in ${\bf R}^d$ and fix $p\in\langle1,\infty\rangle$.
Define the closable operator $H=-\sum^d_{k=1}\partial_k \,c_\Omega\,\partial_k$ on $D(H)=C_c^\infty(\Omega)$.
If there is a $\chi$ in the domain of the $L_p$-closure $\overline H$ of $H$ such that
$\chi>0$, $\overline{H}\chi>0$ and $\overline{H}\chi^{1+\gamma}\geq0$ for some $\gamma>0$ then
\begin{equation}
\int_\Omega|\overline{H}\chi|\,|\varphi|^p\leq p^{2p}(p+\gamma\,(p-1))^{-p} \int_\Omega \chi^p\,|\overline{H}\chi|^{-p+1}\,
|H\varphi|^p
\label{ecc3.1}
\end{equation}
for all $\varphi\in C_c^\infty(\Omega)$.
\end{prop}
This proposition differs superficially from that of Davies--Hinz since we define the Laplacian as $\Delta=-\nabla^2$ instead of $\nabla^2$.
Similarly we have introduced a minus sign in the definition of $H$.
Moreover, the parameter $\delta$ in \cite{DaH} is replaced by $1+\gamma$ and this changes slightly the form of the constant in (\ref{ecc3.1}).
The proof of Proposition~\ref{pcc3.1} closely follows the arguments of \cite{DaH}.
The introduction of the coefficient $c_\Omega$ makes no essential change.
In fact since the estimates are on $C_c^\infty(\Omega)$ it suffices that $c_\Omega$ is the operator
of multiplication by a locally $C^1$-function.
The Davies--Hinz result also extends to more general divergence-form operators but this is not relevant in the current context.
It suffices that it applies to the weight functions used in Theorem~\ref{tcc1.2}.
Since the proof of Theorem~4 in \cite{DaH} is relatively long and since its adaptation to the weighted operators
does not introduce any significant changes we omit further discussion of the proof of Proposition~\ref{pcc3.1}.
We do, however, give the details of its application to the proof of Theorem~\ref{tcc1.2}.
\medskip
\noindent{\bf Proof of Theorem~\ref{tcc1.2}}$\;$
Define $\chi$ on the open right half line by $\chi(s)=s^{-\alpha}(1+s)^{-\alpha'+\alpha}$ with $\alpha, \alpha'\geq0$.
Then set $\chi_\Omega=\chi\circ d_\Gamma$ and adopt the notation $\chi'_\Omega=\chi'\circ d_\Gamma$ etc.
Our aim is to derive conditions on $\alpha$ and $\alpha'$ such that $H\chi_\Omega>0$ with $H$ (the closure of) the weighted operator of
Theorem~\ref{tcc1.2}.
In fact one can obtain quite precise lower bounds on $H\chi_\Omega$.
\begin{lemma}\label{lcc3.1} Let
$b_\alpha=\left(d-d_H+(\delta\wedge\delta')\right)(\alpha\wedge\alpha')
-(\alpha\vee\alpha')\,(\alpha\vee\alpha'+2)$.
\smallskip
It follows that $H\chi_\Omega\geq b_\alpha\,d_\Gamma^{\;-2} \,c_\Omega\,\chi_\Omega$.
Hence if $b_\alpha>0$ then $H\chi_\Omega>0$.
\end{lemma}
\proof\
First one has
$\chi'(s)
=-s^{-1}\,\chi(s)\,(\alpha +\alpha' s)(1+s)^{-1}$.
Therefore
\[
-s^{-1}\,\chi(s)\,(\alpha\vee\alpha')\leq \chi'(s)\leq -s^{-1}\,\chi(s)\,(\alpha\wedge\alpha')
\;.
\]
In addition
\begin{eqnarray*}
\chi''(s)
&=&s^{-2}\,\chi(s)\,(1+s)^{-2}\,\Big(\alpha\,(\alpha+1)+2\,\alpha\,(\alpha'+1)\,s+\alpha'\,(\alpha'+1)\,s^2\Big)\\[5pt]
&\leq&s^{-2}\,\chi(s)\,(\alpha\vee\alpha')\,(\alpha\vee\alpha'+1)
\;.
\end{eqnarray*}
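The last bound holds since the numerator is a combination of $\alpha\,(\alpha+1)$, $\alpha\,(\alpha'+1)$ and $\alpha'\,(\alpha'+1)$, each at most $(\alpha\vee\alpha')\,(\alpha\vee\alpha'+1)$, with positive weights $1$, $2s$ and $s^2$ summing to $(1+s)^2$:
\[
\frac{\alpha\,(\alpha+1)+2\,\alpha\,(\alpha'+1)\,s+\alpha'\,(\alpha'+1)\,s^2}{(1+s)^2}
\leq(\alpha\vee\alpha')\,(\alpha\vee\alpha'+1)\,\frac{1+2\,s+s^2}{(1+s)^2}=(\alpha\vee\alpha')\,(\alpha\vee\alpha'+1)
\;.
\]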
Secondly, one calculates that
\begin{eqnarray}
H\chi_\Omega&=&-d_\Gamma^{\;-1}c_\Omega\,\chi'_\Omega \left(\nabla^2d_\Gamma^{\;2}\right)/2-\left(c'_\Omega\,\chi'_\Omega
-d_\Gamma^{\;-1}\,c_\Omega\,\chi'_\Omega+c_\Omega\,\chi''_\Omega\right)|\nabla d_\Gamma|^2
\;.\label{ecc3.10}
\end{eqnarray}
But $|\nabla d_\Gamma|=1$ by the discussion at the beginning of Section~\ref{S2} and $(\nabla^2d_\Gamma^{\;2})/2\geq d-d_H$ by
Corollary~\ref{ccc2.2}.
Then we use the bounds on $\chi'$ and $\chi''$ together with the lower bound $c'(s)\geq (\delta\wedge\delta')\,s^{-1}c(s)$ to estimate the four terms on the right hand side
of (\ref{ecc3.10}).
The first two terms give positive contributions but the other terms are negative.
One finds
\begin{eqnarray*}
H\chi_\Omega&\geq&
\left( (d-d_H)+(\delta\wedge\delta')\right)\,(\alpha\wedge\alpha') \left(d_\Gamma^{\;-2}\,c_\Omega\,\chi_\Omega\right)\\[4pt]
&&\hspace{4cm}{}-\left((\alpha\vee\alpha')+ (\alpha\vee\alpha')\,(\alpha\vee\alpha'+1)\right)\,\left(d_\Gamma^{\;-2}\,c_\Omega\,\chi_\Omega\right)\\[5pt]
&=&b_\alpha\, d_\Gamma^{\;-2}\,c_\Omega\,\chi_\Omega \;.
\end{eqnarray*}
Clearly $H\chi_\Omega>0$ if $\delta$, $\delta'$, $\alpha$ and $\alpha'$ are such that $b_\alpha>0$.
\hfill$\Box$
\bigskip
Now assuming that $\alpha$ and $\alpha'$ are chosen to ensure that $b_\alpha>0$ one can bound the product
$\chi_\Omega^{\;p}\,|H\chi_\Omega|^{-p+1}$ occurring on the right hand side of (\ref{ecc3.1}).
Explicitly one obtains
\[
\chi_\Omega^{\;p}\,|H\chi_\Omega|^{-p+1}\leq b_\alpha^{-p+1}\,d_\Gamma^{\;-\sigma}(1+d_\Gamma)^{-\tau}
\]
with
$\sigma=\alpha-(2-\delta)(p-1)$ and
$\tau=(\alpha'-\alpha)+(\delta'-\delta)(p-1)$.
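Explicitly, since $H\chi_\Omega\geq b_\alpha\,d_\Gamma^{\;-2}\,c_\Omega\,\chi_\Omega>0$ and $t\mapsto t^{-p+1}$ is decreasing on $\langle0,\infty\rangle$, one has
\begin{eqnarray*}
\chi_\Omega^{\;p}\,|H\chi_\Omega|^{-p+1}&\leq&b_\alpha^{-p+1}\,\chi_\Omega^{\;p}\,\big(d_\Gamma^{\;-2}\,c_\Omega\,\chi_\Omega\big)^{-p+1}
=b_\alpha^{-p+1}\,\chi_\Omega\,\big(d_\Gamma^{\;2-\delta}(1+d_\Gamma)^{\delta-\delta'}\big)^{p-1}\\[5pt]
&=&b_\alpha^{-p+1}\,d_\Gamma^{\;-\alpha+(2-\delta)(p-1)}\,(1+d_\Gamma)^{(\alpha-\alpha')+(\delta-\delta')(p-1)}
\end{eqnarray*}
which identifies the exponents $\sigma$ and $\tau$.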
Hence if one chooses $\alpha=\alpha_p=(2-\delta)(p-1)$ and $\alpha'=\alpha'_p=(2-\delta')(p-1)$
one obtains the uniform bound
\begin{equation}
\chi_\Omega^{\;p}\,|H\chi_\Omega|^{-p+1}\leq b_{\alpha_p}^{-p+1}
\label{ecc3.2}
\end{equation}
as long as
\[
b_{\alpha_p}=\left(d-d_H+(\delta\wedge\delta')\right)(\alpha_p\wedge\alpha'_p)
-(\alpha_p\vee\alpha_p')\,(\alpha_p\vee\alpha_p'+2)>0
\;.
\]
But this is a condition on $p, \delta$ and $ \delta'$.
\begin{lemma}\label{lcc3.2}
If $(d-d_H+p\,(\delta\wedge\delta')-2p)\geq 2p\,|\delta-\delta'|\,(2-\delta\vee\delta')^{-1}$ then $b_{\alpha_p}>0$.
\end{lemma}
\proof\
Substituting the values of $\alpha_p$ and $\alpha'_p$ in the definition of $b_\alpha$ one finds
\begin{eqnarray*}
b_{\alpha_p}
&=&\left(d-d_H+(\delta\wedge\delta')-(\alpha_p\vee\alpha_p'+2)\right)\,(\alpha_p\wedge\alpha_p')-|\alpha_p-\alpha_p'|\,((\alpha_p\vee\alpha_p')+2)\\[5pt]
&\geq&(p-1)\left((d-d_H+p\,(\delta\wedge\delta')-2\,p)(2-\delta\vee\delta')-2\,p\,|\delta-\delta'|\right)
\;.
\end{eqnarray*}
Since $p>1$ the statement of the lemma follows immediately.
\hfill$\Box$
\bigskip
Note that the condition of the lemma is the condition posited in Theorem~\ref{tcc1.2} for validity of the Rellich inequality.
The next lemma provides the last estimates necessary for the application of Proposition~\ref{pcc3.1} to derive the Rellich inequality.
\begin{lemma}\label{lcc3.3}
Let $\tilde\chi_\Omega=d_\Gamma^{\;-\alpha_p}(1+d_\Gamma)^{-\alpha'_p+\alpha_p}$. Assume $b_{\alpha_p}>0$.
Then
\[
\tilde\chi_\Omega^{\;p}\,|H\tilde\chi_\Omega|^{-p+1}\leq b_{\alpha_p}^{\;-p+1}
\quad\mbox{and}\quad H\tilde\chi_\Omega\geq b_{\alpha_p}\,(c_\Omega\,d_\Gamma^{\;-2})^p
\;.
\]
Moreover, $ H\tilde\chi_\Omega^{\;1+\gamma}\geq0$ for all $\gamma\in[0,\gamma_p\,]$ where
$\gamma_p=b_{\alpha_p}/(\alpha_p\vee\alpha'_p)^2$.
\end{lemma}
\proof\
The first estimate follows from Lemma~\ref{lcc3.1} and the choice of $\alpha_p$ and $\alpha'_p$ as discussed above.
The second estimate follows from another application of Lemma~\ref{lcc3.1} by noting that
\begin{eqnarray*}
H\tilde\chi_\Omega\geq b_{\alpha_p}\,d_\Gamma^{\;-2}\,c_\Omega\,\tilde\chi_\Omega
&=&b_{\alpha_p}\,d_\Gamma^{\;-2}\,d_\Gamma^{\;\delta}(1+d_\Gamma)^{\delta'-\delta}\,d_\Gamma^{\;-\alpha_p}(1+d_\Gamma)^{-(\alpha'_p-\alpha_p)}\nonumber\\[5pt]
&=&b_{\alpha_p}\,d_\Gamma^{\;-2p}\,d_\Gamma^{\;\delta p}(1+d_\Gamma)^{(\delta'-\delta)p}=b_{\alpha_p}\,(c_\Omega\,d_\Gamma^{\;-2})^p\end{eqnarray*}
where the second equality results from substituting the specific values of $\alpha_p$ and $\alpha_p'$.
The last statement of the lemma
follows by first noting that
\[
\tilde\chi_\Omega^{\;1+\gamma}=d_\Gamma^{\;-(1+\gamma)\alpha_p}(1+d_\Gamma)^{(1+\gamma)(-\alpha'_p+\alpha_p)}
\;.
\]
Therefore $ H\tilde\chi_\Omega^{\;1+\gamma}\geq0$ if $b_{(1+\gamma)\alpha_p}\geq0$ by a third application of Lemma~\ref{lcc3.1}.
But
\[
b_{(1+\gamma)\alpha_p}=(1+\gamma)\,(b_{\alpha_p}-\gamma\,(\alpha_p\vee\alpha_p')^2)
\]
by the definition of $b_{\alpha}$.
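Explicitly, replacing $\alpha_p, \alpha'_p$ by $(1+\gamma)\,\alpha_p, (1+\gamma)\,\alpha'_p$ and using the identity $(d-d_H+(\delta\wedge\delta'))\,(\alpha_p\wedge\alpha'_p)=b_{\alpha_p}+(\alpha_p\vee\alpha'_p)\,(\alpha_p\vee\alpha'_p+2)$ one computes
\begin{eqnarray*}
b_{(1+\gamma)\alpha_p}&=&(1+\gamma)\left(b_{\alpha_p}+(\alpha_p\vee\alpha'_p)^2+2\,(\alpha_p\vee\alpha'_p)\right)
-(1+\gamma)^2(\alpha_p\vee\alpha'_p)^2-2\,(1+\gamma)\,(\alpha_p\vee\alpha'_p)\\[5pt]
&=&(1+\gamma)\,\big(b_{\alpha_p}-\gamma\,(\alpha_p\vee\alpha'_p)^2\big)
\;.
\end{eqnarray*}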
Therefore $ b_{(1+\gamma)\alpha_p}\geq0$ whenever $0\leq\gamma\leq \gamma_p$.
\hfill$\Box$
\bigskip
At this point we have verified the conditions necessary for the application of Proposition~\ref{pcc3.1} to $H$ and $\tilde\chi_\Omega$
to obtain the Rellich inequalities of Theorem~\ref{tcc1.2}.
We now evaluate (\ref{ecc3.1}) with the foregoing estimates.
First we observe that $b_{\alpha_p}>0$ by Lemma~\ref{lcc3.2} and the assumption of the theorem.
Then it follows from the estimates of Lemma~\ref{lcc3.3} that
\begin{eqnarray*}
b_{\alpha_p}\int_\Omega |c_\Omega\,d_\Gamma^{\,-2}\varphi|^p&\leq&\int_\Omega |H\tilde\chi_\Omega|\,|\varphi|^p\\[5pt]
&\leq& p^{2p}(p+\gamma_p\,(p-1))^{-p} \int_\Omega \tilde\chi_\Omega^{\,p}\,|H\tilde\chi_\Omega|^{-p+1}\,|H\varphi|^p\\[5pt]
&\leq& p^{2p}(p+\gamma_p\,(p-1))^{-p}\,b_{\alpha_p}^{-p+1}\int_\Omega |H\varphi|^p
\;.
\end{eqnarray*}
Thus by rearrangement one obtains the Rellich inequality (\ref{ecc1.2}) with
\[
c_p=(p+\gamma_p\,(p-1))\,b_{\alpha_p}\,p^{-2}
\;.
\]
It follows from $b_{\alpha_p}, \gamma_p>0$ that $c_p>0$.
We next argue that $c_p\leq C_p$.
First one has
\begin{eqnarray*}
b_\alpha=\left(d-d_H+(\delta\wedge\delta')-(\alpha\vee\alpha'+2)\right) (\alpha\wedge\alpha')-a_\alpha
\end{eqnarray*}
with
\begin{eqnarray*}
a_\alpha&=&(\alpha\vee\alpha')\,(\alpha\vee\alpha'+2)-(\alpha\wedge\alpha')\,(\alpha\vee\alpha'+2)\\[5pt]
&=&|\alpha-\alpha'|\,((\alpha\vee\alpha')+2)\geq0
\;.
\end{eqnarray*}
Now set
\[
\tilde b_\alpha=(d-d_H+(\delta\wedge\delta')-(\alpha\vee\alpha'+2))\;.
\]
Then
\[
b_\alpha=(\alpha\wedge\alpha')\,\tilde b_\alpha-a_\alpha\leq (\alpha\wedge\alpha')\,\tilde b_\alpha
\;.
\]
Hence $b_{\alpha_p}\leq (\alpha_p\wedge\alpha'_p)\,\tilde b_{\alpha_p}$ with equality if and only if $\alpha_p=\alpha'_p$ or, equivalently,
$\delta=\delta'$.
Moreover, $\gamma_p=b_{\alpha_p}/(\alpha_p\vee\alpha'_p)^2\leq \tilde\gamma_p$, where $\tilde\gamma_p=\tilde b_{\alpha_p}/(\alpha_p\vee\alpha'_p)$, with equality if and only if $\delta=\delta'$.
Now
\begin{eqnarray*}
c_p&\leq&(p+\tilde\gamma_p\,(p-1))\,(\alpha_p\wedge\alpha'_p)\,\tilde b_{\alpha_p}\,p^{-2}\\[5pt]
&\leq&((\alpha_p\vee\alpha'_p)\,p+\tilde b_{\alpha_p}\,(p-1))\,\tilde b_{\alpha_p}\,p^{-2}
\;.
\end{eqnarray*}
But
\begin{eqnarray*}
\tilde b_{\alpha_p}&=&d-d_H+(\delta\wedge\delta')-\big((2-\delta\wedge\delta')\,(p-1)+2\big)\\[5pt]
&=&(d-d_H+p\,(\delta\wedge\delta')-2p)
\end{eqnarray*}
and
\begin{eqnarray*}
(\alpha_p\vee\alpha'_p)\,p+{\tilde b_{\alpha_p}}(p-1)&=&(2-(\delta\wedge\delta'))\,p(p-1)+{\tilde b_{\alpha_p}}(p-1)\\[5pt]
&=&(p-1)\,(d-d_H)\;.
\end{eqnarray*}
Combining these estimates one has $c_p\leq C_p$ where $C_p$ is defined in Theorem~\ref{tcc1.2}.
\hfill$\Box$
\bigskip
We have avoided calculating $c_p$ explicitly since the resulting expression is complicated and is not necessarily optimal.
It is, however, straightforward to identify it from the value of $b_{\alpha_p}$ given prior to Lemma~\ref{lcc3.3} and the definition of
$\gamma_p$.
Nevertheless $c_p$ does have some simple properties as a function of the degeneracy parameters $\delta$ and $\delta'$.
Set $c_p=c_p(\delta,\delta')$ to denote the dependence on $\delta$ and $\delta'$.
Then $c_p$ is a positive symmetric function and $\delta\in[0,2\rangle\mapsto c_p(\delta,\delta)$ is strictly increasing.
Moreover, if $c_p(\delta_0,0)\geq 0$ then $\delta\in[0,\delta_0]\mapsto c_p(\delta,0) $ is strictly decreasing.
In particular
\[
c_p(0,0)\geq c_p(\delta,0)\geq c_p(\delta_0,0)
\]
for all $\delta\in[0,\delta_0]$.
These inequalities follow because
\[
c_p(\delta,\delta)=(p-1)(d-d_H)(d-d_H+p\delta-2p)/p^2
\]
and
\[
c_p(\delta,0)=c_p(0,\delta)=(p-1)(d-d_H)((d-d_H)(1-\delta/2)-2p)(1-\delta/2)/p^2
\]
which are special cases of the general formula for $c_p$.
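For example, if $\delta=\delta'$ then $\alpha_p=\alpha'_p=(2-\delta)(p-1)$, $b_{\alpha_p}=\alpha_p\,\tilde b_{\alpha_p}$ with $\tilde b_{\alpha_p}=d-d_H+p\,\delta-2p$, and $\gamma_p=\tilde b_{\alpha_p}/\alpha_p$.
Therefore
\[
c_p(\delta,\delta)=\big(p\,\alpha_p+(p-1)\,\tilde b_{\alpha_p}\big)\,\tilde b_{\alpha_p}\,p^{-2}
=(p-1)\,(d-d_H)\,(d-d_H+p\,\delta-2p)/p^2
\]
since $p\,(2-\delta)+\tilde b_{\alpha_p}=d-d_H$.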
\section{Optimal constants}\label{S4}
In this section we consider the problem of deriving optimal constants in the Hardy and Rellich inequalities of Theorems~\ref{tcc1.1}
and \ref{tcc1.2}.
First we discuss whether the constant $a_p^{\,p}$ in Theorem~\ref{tcc1.1} is the largest possible for the Hardy inequality.
The maximal positive constant $\mu_p(\Omega)$ for which (\ref{ecc1.1}) is valid is given by
\begin{equation}
\mu_p(\Omega)=
\inf\Big\{\int_\Omega c_\Omega\,|\nabla\varphi|^p\Big/\!\int_\Omega c_\Omega\,d_\Gamma^{\;-p}\,|\varphi|^p:\; \varphi\in C_c^1(\Omega)\Big\}
\;.\label{ecc4.1}
\end{equation}
Clearly $\mu_p(\Omega)\geq a_p^{\,p}$ by Theorem~\ref{tcc1.1}.
Therefore optimality follows if the infimum in (\ref{ecc4.1}) is less than or equal to $a_p^{\,p}$.
Since $c_\Omega$ has a different asymptotic behaviour at the boundary $\Gamma$ from that at infinity, this variational problem has two distinct elements, one local and one global.
For orientation we begin with a brief discussion of the classical case $K=\{0\}$ (see, for example, \cite{BEL} Section~1.2).
If $\Omega={\bf R}^d\backslash\{0\}$ then the constant $a_p$ of Theorem~\ref{tcc1.1} is given by $a_p=(d+\delta\wedge\delta'-p)/p$.
Therefore if $a_p(\sigma)=((d+\sigma -p)/p)^p$ for all $\sigma\geq0$ then $a_p^{\,p}= a_p(\delta\wedge\delta')=a_p(\delta)\wedge a_p(\delta')$.
Thus to prove that $a_p^{\,p}=\mu_p(\Omega)$ it suffices to prove that $\mu_p(\Omega)\leq a_p(\delta)$ and $\mu_p(\Omega)\leq a_p(\delta')$.
This can be achieved by standard arguments (see, for example, \cite{BEL}, Chapter~1).
The first upper bound follows by estimating the infimum in (\ref{ecc4.1})
with a sequence of functions $\varphi_\alpha=d_\Gamma^{\;-\alpha}\xi$, $\alpha>0$, where $\xi$ has support in a small neighbourhood of the origin.
Since $d_\Gamma(x)=|x|$ it follows that $c_\Omega\,|\nabla\varphi_\alpha|^p$ and $c_\Omega\,d_\Gamma^{\;-p}\,|\varphi_\alpha|^p$ are integrable at the origin if $\alpha <(d+\delta-p)/p$,
in which case the leading term $d_\Gamma^{\;-\alpha}$ gives a bound proportional to $\alpha^p$ in the evaluation of (\ref{ecc4.1}).
Then by a suitable choice of localization functions $\xi$ and a limiting argument one concludes that
$\mu_p(\Omega)\leq ((d+\delta-p)/p)^p=a_p(\delta)$.
Here the property $\lim_{s\to0}c(s)\,s^{-\delta}=1$ is important.
The estimate at infinity is similar.
One now chooses $\varphi_\alpha$ with support in the complement of a large ball centred at the origin and again proportional to $d_\Gamma^{\,-\alpha}$.
Then, however, $c_\Omega\,|\nabla\varphi_\alpha|^p$ and $c_\Omega\,d_\Gamma^{\;-p}\,|\varphi_\alpha|^p$ are integrable at infinity if $\alpha>(d+\delta'-p)/p$.
Again the leading term gives a bound proportional to $\alpha^p$.
Then another approximation and limiting argument gives the
upper bound
$\mu_p(\Omega)\leq ((d+\delta'-p)/p)^p=a_p(\delta')$.
Here the property $\lim_{s\to\infty}c(s)\,s^{-\delta'}=1$ is crucial.
Thus one arrives at the following conclusion.
\begin{prop}\label{pcc4.1}
If $K=\{0\}$ then the optimal constant $\mu_p(\Omega)$ in the Hardy inequality $(\ref{ecc1.1})$ is given by $\mu_p(\Omega)=a_p^{\,p}=((d+\delta\wedge\delta'-p)/p)^p$.
\end{prop}
In the more general situation that $\dim(K)\geq 1$ the foregoing approach is complicated by the geometry.
Nevertheless one can obtain bounds by a similar two step process of local estimates and estimates at infinity.
The local estimates are obtained by the methods of Barbatis, Filippas and Tertikas \cite{BFT}
which are also developed in Section~5 of Ward's thesis \cite{War}.
The following theorem covers the cases with $\dim(K)<d$.
\begin{thm}\label{tcc4.1} Adopt the assumptions of Theorem~$\ref{tcc1.1}$.
Further assume that $ \dim(K)\in\{1,\dots, d-1\}$.
Then the optimal constant $\mu_p(\Omega)$ in $(\ref{ecc1.1})$ satisfies $\mu_p(\Omega)\leq ((d-d_H+\delta-p)/p)^p$.
In particular, if $\delta\leq\delta'$ then $\mu_p(\Omega)= ((d-d_H+\delta-p)/p)^p$.
\end{thm}
\proof\
The theorem follows by the proof of Theorem~5.2.1 in \cite{War} but with some modification to take into account the weighting factor $c_\Omega$.
We outline a variation of Ward's argument which is subsequently extended to give a local bound on the optimal constant in the
Rellich inequality (\ref{ecc1.2}).
First we give the proof for the special case $\delta=\delta'$ or, equivalently, $c(s)=s^\delta$.
Then, since the argument only involves functions with support in an arbitrarily small ball centred at a point of the boundary
the result can be extended to the general weighting factor $c_\Omega$.
The starting point of the proof is a modification of Ward's Lemma~5.1.1.
\begin{lemma}\label{lcc4.1} Assume $c(s)=s^\delta$ with $\delta\geq0$.
Then
\begin{eqnarray}
\mu_p(\Omega)\leq(1-\lambda)^{-(p-1)} |(\beta+\delta-p)/p|^p
+\lambda^{-(p-1)}
\Big(\int_\Omega \,d_\Gamma^{\;-\beta+p} \,|(\nabla\varphi)|^p\Big/\!
\int_\Omega \,d_\Gamma^{\;-\beta} \,|\varphi|^p\Big)\label{ecc4.11}
\end{eqnarray}
for all $\varphi\in W^{1,p}_0(\Omega)$, $\beta\geq0$, $p>1$ and $\lambda\in\langle0,1\rangle$.
\end{lemma}
\proof\ The proof follows that of Ward with $\varphi$ in (\ref{ecc4.1}) replaced by $\psi\varphi $
where $\psi=d_\Gamma^{\;-(\beta+\delta-p)/p}$.
Then one uses the Leibniz rule, the
$l_2$-triangle inequality
\[
|(\nabla\psi)\, \varphi+\psi\,(\nabla\varphi)|\leq |(\nabla\psi)|\, |\varphi|+|\psi|\,|(\nabla\varphi)|
\]
and the estimate
\begin{equation}
(s+t)^{p}\leq (1-\lambda)^{-(p-1)}s^p+\lambda^{-(p-1)}t^p
\label{ecc4.12}
\end{equation}
which is valid for all $s,t\geq0$, $\lambda\in\langle0,1\rangle$ and $p>1$.
(The latter inequality
was used by Secchi, Smets and Willem \cite{SSW} in their analysis of the Hardy inequality
on the complement of affine subsets.
It follows by minimization of the right hand side over $\lambda$.)
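For $s,t>0$ the minimum of the right hand side of (\ref{ecc4.12}) is attained at $\lambda=t/(s+t)$, and then
\[
(1-\lambda)^{-(p-1)}s^p+\lambda^{-(p-1)}t^p=\big((s+t)/s\big)^{p-1}s^p+\big((s+t)/t\big)^{p-1}t^p
=s\,(s+t)^{p-1}+t\,(s+t)^{p-1}=(s+t)^p
\;.
\]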
Now by combination of these observations one finds
\begin{eqnarray*}
\int_\Omega d_\Gamma^{\;\delta}\,|\nabla(\psi\varphi)|^p&\leq&
(1-\lambda)^{-(p-1)}\int_\Omega d_\Gamma^{\;\delta}\,|\nabla\psi|^p|\varphi|^p
+\lambda^{-(p-1)}\int_\Omega d_\Gamma^{\;\delta}\,|\psi|^p|\nabla\varphi|^p\\[5pt]
&=&(1-\lambda)^{-(p-1)}|(\beta+\delta-p)/p|^p
\int_\Omega d_\Gamma^{\;-\beta}\,|\varphi|^p+\lambda^{-(p-1)}\int_\Omega d_\Gamma^{\;-\beta+p}\,|\nabla\varphi|^p
\end{eqnarray*}
where we have used the explicit form of $\psi$.
Similarly
\[
\int_\Omega d_\Gamma^{\;\delta-p}\,|\psi\varphi|^p=\int_\Omega d_\Gamma^{\;-\beta}|\varphi|^p
\;.
\]
The statement of the lemma follows immediately.
\hfill$\Box$
\bigskip
The estimate for $\mu_p(\Omega)$ given in Theorem~\ref{tcc4.1} now follows by Ward's reasoning in the proof of his Theorem~5.2.1.
The idea is to construct a sequence of functions $\varphi_n$ such that the numerator in the last term in (\ref{ecc4.11}) is bounded uniformly in $n$ if $\beta=d-k$,
with $k=\dim(K)=\dim(\Gamma)=d_H$,
but the denominator diverges as $n\to\infty$.
This is particularly easy in the current context since we are assuming $k=\dim(K)\leq d-1$.
First let ${\bf R}^d={\bf R}^k\times {\bf R}^{d-k}$ where ${\bf R}^k$ is identified with the affine hull of $K$.
Therefore if one sets $x=(y,z)\in \Omega$ with $y\in {\bf R}^k$ and $z\in{\bf R}^{d-k}$ then
$d_{\Gamma}(x)=(d_K(y)^2+|z|^2)^{1/2}$ where $d_K(y)=\inf_{y'\in K}|y-y'|$.
Since $d_K(y)=0$ if $y\in K$ it follows that $d_\Gamma(y,z)=|z|$ if $y\in K$.
Secondly, define $\varphi\in C_c^\infty(\Omega)$ by setting $\varphi(y,z)=\eta(y)\chi(z)$ where $\eta\in C_c^\infty(K)$ and $\chi\in C_c^\infty({\bf R}^{d-k}\backslash\{0\})$.
Further assume $\chi$ is a radial function.
Then with $\beta=d-k$ one has
\begin{equation}
\int_\Omega\,d_\Gamma^{\,-\beta}|\varphi|^p=\int_Kdy\,|\eta(y)|^p\int_{{\bf R}^{d-k}}dz\,|z|^{-(d-k)}|\chi(z)|^p
=a_1\int_0^\infty dr\,r^{-1}|\chi(r)|^p
\label{ecc4.81}
\end{equation}
but
\begin{eqnarray}
\int_\Omega\,d_\Gamma^{\,-\beta+p}|\nabla\varphi|^p&=&
\int_Kdy\int_{{\bf R}^{d-k}}dz\,|z|^{\,-(d-k-p)}(|(\nabla\eta)(y)|\,|\chi(z)|+|\eta(y)|\,|(\nabla\chi)(z)|)^p\nonumber\\[5pt]
&\leq&a_2\int_0^\infty dr\,r^{p-1}|\chi(r)|^p+a_3\int_0^\infty dr\,r^{p-1}|\chi'(r)|^p
\label{ecc4.82}
\end{eqnarray}
with $a_1, a_2, a_3>0$.
The last estimate again uses (\ref{ecc4.12}).
Next consider the sequence of functions $\xi_n$ defined on $\langle0,\infty\rangle$ by $\xi_n(r)=0$ if $r\leq n^{-1}$,
$\xi_n(r)=\log(rn)/\log n$ if $n^{-1}\leq r\leq1$ and $\xi_n(r)=1$ if $r\geq1$.
Then $0\leq \xi_n\leq1$ and the $\xi_n$ converge monotonically upward to the constant function $1$ on $\langle0,\infty\rangle$.
Further let $\zeta$ be a $C^\infty$-function with $\zeta(r)=1$ if $r\leq1$, $\zeta(r)=0$ if $r\geq 2$
and $0\leq \zeta\leq1$.
Then set $\chi_n=\xi_n\zeta$.
It follows immediately that
\[
\lim_{n\to\infty}\int_0^\infty dr\,r^{-1}|\chi_n(r)|^p=\infty
\;.
\]
Moreover,
\[
\int_0^\infty dr\,r^{p-1}|\chi_n(r)|^p\leq \int_0^2 dr\,r^{p-1}|\xi_n(r)|^p\leq \int_0^2 dr\,r^{p-1}\leq 2^{\,p}p^{-1}
\]
for all $ n>1$.
But $\mathop{\rm supp}\chi_n\subseteq [0,2]$, $\chi'_n=\xi'_n$ on $\langle0,1]$ and $ \chi'_n=\zeta'$ on $[1,2]$.
Therefore
\[
\int_0^\infty dr\,r^{p-1}|\chi_n'(r)|^p= \int_0^1 dr\,r^{p-1}|\xi_n'(r)|^p+ \int_1^2 dr\,r^{p-1}|\zeta'(r)|^p
=(\log n)^{-(p-1)}+a
\]
where $a>0$ is the contribution of the second integral.
Since $p>1$ the bound is uniform for all $n>1$.
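The first term is computed explicitly: since $\xi'_n(r)=(r\,\log n)^{-1}$ for $n^{-1}\leq r\leq 1$ and $\xi'_n(r)=0$ otherwise, one has
\[
\int_0^1 dr\,r^{p-1}|\xi_n'(r)|^p=(\log n)^{-p}\int_{n^{-1}}^1 dr\,r^{-1}=(\log n)^{-(p-1)}
\;.
\]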
Hence if one sets $\varphi_n=\eta\,\chi_n$ one deduces from (\ref{ecc4.81}) and (\ref{ecc4.82}),
with $\chi$ replaced by $\chi_n$, that
\[
\limsup_{n\to\infty}\int_\Omega\,d_\Gamma^{\,-\beta+p}|\nabla\varphi_n|^p\Big/ \int_\Omega\,d_\Gamma^{\,-\beta}|\varphi_n|^p=0
\;.
\]
Therefore replacing $\varphi$ with $\varphi_n$ in (\ref{ecc4.11}) and setting $\beta=d-k$ one deduces that
\[
\mu_p(\Omega)\leq (1-\lambda)^{-(p-1)}\,((d-k+\delta-p)/p)^p
\]
for all $\lambda\in\langle0,1\rangle$.
Thus in the limit $\lambda\to0$ one has $\mu_p(\Omega)\leq ((d-k+\delta-p)/p)^p$.
This completes the proof of the upper bound for $c(s)=s^\delta$, i.e.\ for $\delta=\delta'$.
But it follows by construction that
\[
\mathop{\rm supp}\varphi_n\subseteq \{(y,z): y\in \mathop{\rm supp}\eta, |z|\leq2\}
\;.
\]
The choice of the value $2$ is, however, arbitrary and by rescaling the $\xi_n$ it can be replaced by any
$r>0$ without materially affecting the argument.
Then since $|z|^\delta(1+r)^{-|\delta-\delta'|}\leq c(|z|)\leq |z|^\delta(1+r)^{|\delta-\delta'|}$ for $|z|<r$ the case of general
$c_\Omega$ is reduced to the special case $\delta=\delta'$.
Finally if $\delta\leq \delta'$ it follows from Theorem~\ref{tcc1.1} that $\mu_p(\Omega)\geq ((d-k+\delta-p)/p)^p$.
Consequently one must have equality.
\hfill$\Box$
\bigskip
Next we investigate the derivation of the bounds $\mu_p(\Omega)\leq ((d-d_H+\delta'-p)/p)^p$
in the setting of Theorem~\ref{tcc4.1}.
These bounds require additional information on the global properties of $K$.
The dimension $k$ of the convex set is defined as the dimension of the affine hull $A_K$ of $K$
and is essentially a local concept.
It carries little information about the global character of the set.
For example, in two dimensions $K$ could be a disc, an infinitely extended strip or a quadrant.
But viewed from afar these sets would appear to have dimension $0$, $1$ and $2$, respectively.
This aspect of the sets is captured by the `dimension at infinity' $k_\infty$ which is defined by
\[
k_\infty=\liminf_{r\to\infty}\left(\log|K\cap B_r|/\log r\right)
\]
where $B_r=\{y\in {\bf R}^k:|y|<r\}$ and $|S|$ indicates the $k$-dimensional Lebesgue measure of
the set $S$.
The parameter $k_\infty$ of the convex set is integer valued with $0\leq k_\infty\leq k$.
In the two-dimensional examples it takes the values $0$, $1$ and $2$ as expected.
The equality $k_\infty=k$ of the global and local dimensions will be the key property in deriving the upper bounds on $\mu_p(\Omega)$.
\begin{lemma}\label{lcc4.0}
Assume $k_\infty=k$.
Then
\[
\inf_{\eta\in C_c^\infty(K)}\int_K|\nabla\eta|^p\Big/\!\!\int_K|\eta|^p=0
\;.
\]
\end{lemma}
\proof\
First let $\xi$ be a $C^\infty$-function with $0\leq\xi\leq 1$ such that
$\mathop{\rm supp}\xi\subseteq K$ and $\xi(y)=1$ if $d_K(y)\geq1$, with $d_K$ the Euclidean distance to the boundary $\partial K$.
Secondly, let $\zeta_n$ be a sequence of $C^\infty$-functions
with $0\leq\zeta_n\leq 1$, $\zeta_n(y)=1$ if $y\in B_n$ and $\zeta_n(y)=0$ if $y\in B^c_{n+1}$.
We may assume $\sup_n|\nabla\zeta_n|<\infty$.
Now set $\eta_n=\zeta_n\,\xi$.
Then $\eta_n\in C_c^\infty(K)$ and $\mathop{\rm supp}|\nabla\eta_n|$ has measure at most $b\,n^{k-1}$ for all $n\geq1$ with
$b>0$ independent of $n$.
But $\eta_n=1$ on a set of measure $c\,n^k$ with $c>0$, by the assumption $k_\infty=k$.
Therefore
\[
\int_K|\nabla\eta_n|^p\Big/\!\int_K|\eta_n|^p< a\,n^{-1}
\]
with $a>0$ independent of $n$.
The lemma follows immediately.
\hfill$\Box$
\bigskip
The following theorem establishes that $k_\infty=k$ is a sufficient condition for the expected global bounds but it is likely that it, or some variation of it, is also necessary.
\begin{thm}\label{tcc4.3}
Let $K$ be a closed convex subset of ${\bf R}^d$ with $k=\dim(K)\in \{1,\ldots, d-1\}$ and with $k_\infty=k$.
Then the optimal constant $\mu_p(\Omega)$ in the Hardy inequality $(\ref{ecc1.1})$
on $\Omega={\bf R}^d\backslash K$ is given by
$\mu_p(\Omega)=((d-k+\delta\wedge \delta'-p)/p)^p$.
\end{thm}
\proof\
First, $\mu_p(\Omega)\geq a_p^{\,p}$ with $a_p=(d-k+\delta\wedge \delta'-p)/p$ by Theorem~\ref{tcc1.1}.
Therefore it suffices to establish a matching upper bound.
But the local estimates of Theorem~\ref{tcc4.1} give the bound $\mu_p(\Omega)\leq ((d-k+\delta-p)/p)^p$.
Thus it remains to prove that $\mu_p(\Omega)\leq ((d-k+\delta'-p)/p)^p$.
Secondly, we again consider the decomposition ${\bf R}^d={\bf R}^k\times {\bf R}^{d-k}$ with $K\subseteq {\bf R}^k
$ and ${\bf R}^k=A_K$.
Then since $d_\Gamma(y,z)=|z|$ if $y\in K$ the weighted Hardy inequality (\ref{ecc1.1}) on $L_p(\Omega)$ takes the form
\begin{equation}
\int_{K}dy\int_{{\bf R}^{d-k}} dz\,c(|z|)|(\nabla\varphi)(y,z)|^p\geq
a_p^{\,p}\int_{K}dy\int_{{\bf R}^{d-k}} dz\, c(|z|)|z|^{-p}|\varphi(y,z)|^p
\label{ecc4.2}
\end{equation}
for all $\varphi\in C_c^1(\Omega)$ with $\mathop{\rm supp}\varphi\subseteq K\times{\bf R}^{d-k}$ where
$c(s)=s^\delta(1+s)^{\delta'-\delta}$.
Therefore the optimal constant satisfies
\[
\mu_p(\Omega)\leq
\bigg(\int_{{\bf R}^k}dy\int_{{\bf R}^{d-k}} dz\,c(|z|)|(\nabla\varphi)(y,z)|^p\Big/
\!\int_{{\bf R}^k}dy\int_{{\bf R}^{d-k}} dz\, c(|z|)|z|^{-p}|\varphi(y,z)|^p\bigg)
\;.
\]
Again let $\varphi$ be a product $\varphi(y,z)=\eta(y)\chi(z)$ with $\eta\in C_c^\infty(K)$ but $\chi\in C_c^\infty(O_{\!R})$ where $O_{\!R}=\{z\in {\bf R}^{d-k}:|z|>R\}$.
Then
\[
\mu_p(\Omega)\leq{{ \int_{O_{\!R}} dz\,c(|z|)\int_K dy\Big(|(\nabla\chi)(z)|\,|\eta(y)|+|\chi(z)|\,|(\nabla\eta)(y)|\Big)^{p}}\over{
\Big(\int_{O_{\!R}}dz\, c(|z|)|z|^{-p}|\chi(z)|^p\Big)\Big(\int_Kdy\,|\eta(y)|^p\Big)}}
\;.
\]
We can again use (\ref{ecc4.12}) to estimate the right hand side.
One immediately obtains
\begin{eqnarray}
\mu_p(\Omega)&\leq&(1-\lambda)^{-(p-1)}\Bigg({{\int_{O_{\!R}} dz\,c(|z|)|(\nabla\chi)(z)|^p}\over{ \int_{O_{\!R}}dz\,c(|z|)|z|^{-p}|\chi(z)|^p}}\Bigg)
\nonumber\\[5pt]
&&\hspace{3.3cm}+\lambda^{-(p-1)}\Bigg({{\int_{O_{\!R}} dz\,c(|z|)|\chi(z)|^p}\over{ \int_{O_{\!R}} dz\,c(|z|)|z|^{-p}|\chi(z)|^p}}\Bigg)
\Bigg({{\int_K dy\,|(\nabla\eta)(y)|^p}\over{ \int_K dy\,|\eta(y)|^p}}\Bigg)
\label{ecc4.14}
\end{eqnarray}
for all $\lambda\in\langle0,1\rangle$.
Therefore
taking the infimum over $\eta\in C_c^\infty(K)$
followed by the infimum over $\lambda\in\langle0,1\rangle$ one deduces from Lemma~\ref{lcc4.0} that
\begin{equation}
\mu_p(\Omega)\leq
{{\int_{O_{\!R}} dz\,c(|z|)|(\nabla\chi)(z)|^p}\over{ \int_{O_{\!R}}dz\,c(|z|)|z|^{-p}|\chi(z)|^p}}
\label{ecc4.13}
\end{equation}
for all $\chi\in C_c^\infty(O_{\!R})$ and all large $R$.
Finally the infimum of the right hand side of (\ref{ecc4.13}) over $\chi$ followed by the limit $R\to\infty$ gives
$\mu_p(\Omega)\leq ((d-k+\delta'-p)/p)^p$
by the global estimates for the Hardy inequality on ${\bf R}^{d-k}\backslash\{0\}$ sketched at the beginning of the
section.
The proof of the theorem now follows from this estimate combined with the observations in the first paragraph of the proof.
\hfill$\Box$
\bigskip
Theorem~\ref{tcc4.3} applies to the special case that $K$ is an affine set since the assumption $k_\infty=k$ is automatically fulfilled.
The corresponding statement is an extension of a result of \cite{SSW}.
Moreover, if $K$ is a general closed convex set and $A_K$ its affine hull then the theorem identifies the constant $a_p^{\,p}$ of Theorem~\ref{tcc1.1} as the optimal
constant $\mu_p({\bf R}^d\backslash A_K)$ of the Hardy inequality (\ref{ecc1.1}) on $L_p({\bf R}^d\backslash A_K)$.
Therefore one has the general conclusion that $\mu_p({\bf R}^d\backslash A_K)\leq \mu_p({\bf R}^d\backslash K)$
for convex sets with $\dim(K)=k\in\{1,\ldots,d-1\}$.
Moreover, $ \mu_p({\bf R}^d\backslash A_K)=\mu_p({\bf R}^d\backslash K)$ if $\delta\leq\delta'$ because the proof only requires a local estimate.
\medskip
Next we address the question of calculating the optimal constant in the Rellich inequality (\ref{ecc1.2}),
i.e.\ the value of
\begin{equation}
\nu_p(\Omega)=
\inf\Big\{\int_\Omega \,|H\varphi|^p\Big/\!\int_\Omega c_\Omega^{\;p}\,d_\Gamma^{\;-2p}|\varphi|^p:\; \varphi\in C_c^2(\Omega)\Big\}
\;.\label{ecc4.4}
\end{equation}
Theorem~\ref{tcc1.2} gives the lower bound $\nu_p(\Omega)\geq c_p^{\,p}$ but this is rather complicated and not likely to be
an efficient bound in general.
Therefore we consider the special case $\delta=\delta'$ with weighting factor $d_\Gamma^{\,\delta}$.
Then Theorem~\ref{tcc1.2} gives the simpler bound $\nu_p(\Omega)\geq C_p^{\,p}$ with
$C_p=(p-1)(d-d_H)(d-d_H+p\,\delta-2p)p^{-2}$.
Now we establish that $C_p^{\,p}$ is the optimal constant if $\delta=\delta'$ and $\dim(K)\in \{0,1,\ldots,d-1\}$.
First we consider the degenerate case.
\begin{prop}\label{pcc4.2}
If $K=\{0\}$ and $\delta=\delta'\in[0,2\rangle$ then the optimal constant in the Rellich inequality $(\ref{ecc1.2})$ is given by
\[
\nu_p(\Omega)=C_p^{\,p}=\left((p-1)\,d\,(d+p\,\delta-2p)\,p^{-2}\right)^p
\]
for all $p>1$ for which $d+p\,\delta-2p>0$.
\end{prop}
\proof\
It follows from Theorem~\ref{tcc1.2}, with $\delta=\delta'$, that the lower bound $\nu_p(\Omega)\geq C_p^{\,p}$ is valid.
Therefore it suffices to establish a matching upper bound.
This is well known if $\delta=0$ but the proof is almost identical for $\delta\neq0$.
First, since $K=\{0\}$ one has $d_\Gamma(x)=|x|$.
Then as $\delta=\delta'$ one can deduce an upper bound from (\ref{ecc4.4}) by a local estimate (see, for example, \cite{BEL}
Corollary~6.3.5 for the case of the Laplacian).
This is achieved by the elementary procedure used to estimate the upper bound on the Rellich constant
in the one-dimensional case.
One estimates with radial functions $\varphi(x)=|x|^{-\alpha}\,\chi(|x|)$
where $\alpha>0$ and $\chi$ is a $C^2$-function with compact support near the origin.
The integrability of $|H\varphi|^p$ at the origin
imposes the restriction
$d+p\,\delta-2p>0$.
Therefore one chooses $\alpha=(d+p\,\delta-2p+\varepsilon)/p$, with $\varepsilon>0$,
and estimates as in the one-dimensional case.
This leads to the upper bound $\nu_p(\Omega)\leq C_p^{\,p}$.
We omit the details.
\hfill$\Box$
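For orientation, the pointwise identity underlying the omitted radial estimate, with $c(s)=s^\delta$ and computed away from the support of the cutoff derivatives, is

```latex
\[
H\,|x|^{-\alpha}
  =-\mathop{\rm div}\big(|x|^{\delta}\,\nabla |x|^{-\alpha}\big)
  =\alpha\,(d+\delta-\alpha-2)\,|x|^{\delta-\alpha-2}\;,
\]
```
so the Rellich quotient is governed by $(\alpha(d+\delta-\alpha-2))^p$, and with the choice $\alpha=(d+p\,\delta-2p+\varepsilon)/p$ this tends to $C_p^{\,p}$ as $\varepsilon\to0^+$.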
\begin{remarkn}\label{rcc4.1} If $K=\{0\}$ and $\delta\neq\delta'$ then one can establish the upper bound
$\nu_p(\Omega)\leq \left((p-1)\,d\,(d+p\,(\delta\wedge\delta')-2p)\,p^{-2}\right)^p$.
This follows by a local estimate which gives the bound $\left((p-1)\,d\,(d+p\,\delta-2p)\,p^{-2}\right)^p$ followed
by a similar estimate at `infinity' which gives the bound $\left((p-1)\,d\,(d+p\,\delta'-2p)\,p^{-2}\right)^p$.
Then one takes the minimum of the two bounds.
Unfortunately Theorem~\ref{tcc1.2} only gives a matching lower bound if $\delta=\delta'$.
If, for example, $\delta'=0$ then the upper bound is equal to $c_p(0,0)^p=\left((p-1)\,d\,(d-2p)\,p^{-2}\right)^p$
where we have used the notation introduced at the end of Section~\ref{S3}.
But Theorem~\ref{tcc1.2} gives the lower bound $c_p(\delta,0)^p$ under the assumption that $c_p(\delta,0)>0$.
It follows, however, that $c_p(\delta,0)<c_p(0,0)$ if $\delta>0$ by the discussion in Section~\ref{S3}.
\end{remarkn}
Now we establish a similar conclusion for $\dim(K)\in\{1,\ldots,d-1\}$.
The following result corresponds to the Rellich analogue of Theorem~\ref{tcc4.1} and Theorem~\ref{tcc4.3}.
\begin{thm}\label{tcc4.31}
Let $K$ be a closed convex subset of ${\bf R}^d$ with $k=\dim(K)\in \{1,\ldots, d-1\}$.
Then the optimal constant in the Rellich inequality $(\ref{ecc1.2})$
satisfies the upper bound
\[
\nu_p(\Omega)\leq \left((p-1)(d-k)(d-k+p\,\delta-2p)\,p^{-2}\right)^p
\;.
\]
If, in addition, $k_\infty=k$ then
\[
\nu_p(\Omega)\leq \left((p-1)(d-k)(d-k+p\,(\delta\wedge\delta')-2p)\,p^{-2}\right)^p
\]
and for $\delta=\delta'$ one has equality.
\end{thm}
\proof\
The proof follows the earlier two step process of obtaining a local bound, dependent on $\delta$,
followed by a global bound, dependent on $\delta'$.
The local bound is independent of the assumption $k_\infty=k$.
\smallskip
\noindent{\bf Step 1}$\;$ The first statement of the theorem is established by a generalization of the
local estimates used to prove Theorem~\ref{tcc4.1}.
Since all the estimates in this first step are local we again assume initially that $c(s)=s^\delta$.
Following the earlier proof we choose coordinates $x=(y,z)\in \Omega$ with $y\in {\bf R}^k$ and $z\in{\bf R}^{d-k}$
where ${\bf R}^k$ is identified with the affine hull of $K$.
Then $d_\Gamma(y,z)=|z|$ if $y\in K$.
Again we define $\varphi\in C_c^\infty(\Omega)$ by setting $\varphi(y,z)=\eta(y)\chi(z)$ where $\eta\in C_c^\infty(K)$ and $\chi\in C_c^\infty({\bf R}^{d-k}\backslash\{0\})$ is a radial function.
Next for $\alpha\geq0$ we set $\varphi_\alpha=d_\Gamma^{\;-\alpha}\varphi=\eta\,\chi_\alpha$
where $\chi_\alpha(z)=|z|^{-\alpha}\chi(z)$.
Thus $\varphi_\alpha=d_0^{\;-\alpha}\varphi$ where $d_0$ is the operator of multiplication by $|z|$.
Then
\[
H\varphi_\alpha=(Hd_0^{\,-\alpha})\varphi+d_0^{\,-\alpha}(H\varphi)+2\,d_0^{\,\delta}(\nabla d_0^{\,-\alpha}).(\nabla\varphi)
\;.
\]
Therefore one calculates that
\[
|H\varphi_\alpha|\leq \alpha(d-k+\delta-\alpha-2)d_0^{\,-\alpha-2+\delta}|\varphi|+R_\alpha
\]
if $d-k+\delta-2>\alpha$ where
\[
R_\alpha=d_0^{\,-\alpha}|H\varphi|+2\,\alpha\, d_0^{\,-\alpha-1+\delta}|\nabla\varphi|
\;.
\]
Hence it follows as in the proof of Lemma~\ref{lcc4.1} that
\begin{equation}
\nu_p(\Omega)\leq (1-\lambda)^{-(p-1)}( \alpha(d-k+\delta-\alpha-2))^p
+\lambda^{-(p-1)}\Big(\int_\Omega|R_\alpha|^p\Big/\int_\Omega d_0^{\,p(\delta-2)}|\varphi_\alpha|^p\Big)
\label{recc4.0}
\end{equation}
for all $\lambda\in\langle0,1\rangle$.
Now we choose $\alpha=(d-k+p\,\delta-2p)/p$ and assume $\alpha>0$.
Then the constant appearing in the first term on the right is
$\left((p-1)(d-k)(d-k+p\,\delta-2p)\,p^{-2}\right)^p$.
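As a side check, not part of the proof, the algebra behind this constant is easily verified numerically:

```python
def coeff(d, k, p, delta):
    # alpha*(d - k + delta - alpha - 2) with the choice alpha = (d - k + p*delta - 2p)/p
    alpha = (d - k + p * delta - 2 * p) / p
    return alpha * (d - k + delta - alpha - 2)

def target(d, k, p, delta):
    # claimed value (p - 1)(d - k)(d - k + p*delta - 2p)/p^2
    return (p - 1) * (d - k) * (d - k + p * delta - 2 * p) / p**2

# sample parameter values, chosen arbitrarily for the check
for d, k, p, delta in [(5, 1, 2, 0.5), (7, 3, 3, 1.0), (9, 2, 1.5, 0.25)]:
    assert abs(coeff(d, k, p, delta) - target(d, k, p, delta)) < 1e-12
```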
So it remains to prove that the second term, with the specific choice of $\alpha$, can be made
insignificant by a suitable choice of a sequence of $\chi$.
First one has
\begin{equation}
\int_\Omega d_0^{\,p(\delta-2)}|\varphi_\alpha|^p=\int_Kdy\,|\eta(y)|^p\int_{{\bf R}^{d-k}}dz\,|z|^{-p(\alpha-\delta+2)}|\chi(z)|^p
=a_1\int^\infty_0dr\,r^{-1}|\chi(r)|^p
\label{recc4.1}
\end{equation}
with $a_1>0$.
Secondly,
\begin{eqnarray*}
|R_\alpha|^p&\leq& a\Big(d_0^{\,-p\alpha}|H\varphi|^p+d_0^{\,-p(\alpha-\delta+1)}|\nabla\varphi|^p\Big)\\[5pt]
&\leq &a'\Big(d_0^{\,-p(\alpha-\delta)}|\Delta\chi|^p\,|\eta|^p+d_0^{\,-p(\alpha-\delta+1)}(|\nabla\chi|^p\,|\eta|^p+|\chi|^p\,|\nabla\eta|^p)\Big)
\end{eqnarray*}
with $a, a'>0$.
Therefore one obtains a bound
\begin{equation}
\int_\Omega|R_\alpha|^p\leq a_2\int_0^\infty dr\,r^{p-1}|\chi(r)|^p+a_3\int_0^\infty dr\,r^{p-1}|\chi'(r)|^p+
a_4\int_0^\infty dr\,r^{2p-1}|\chi''(r)|^p
\label{recc4.2}
\end{equation}
with $a_2,a_3,a_4>0$.
This is very similar to the bounds occurring in the proof of Theorem~\ref{tcc4.1} with the exception of the last term which depends
on $\chi''$.
If this term were absent one could then replace $\chi$ by the sequence of functions $\chi_n$ used in the proof of the earlier proposition
to complete the argument that $\nu_p(\Omega)\leq \left((p-1)(d-k)(d-k+p\,\delta-2p)\,p^{-2}\right)^p$.
But the extra term complicates things.
In fact the $\chi_n$ used earlier are not even twice differentiable.
Therefore it is necessary to make a more sophisticated choice.
We now use an argument given in Section~4 of \cite{RSi3}.
Let $\chi_n$ be the sequence of functions on $\langle0,\infty\rangle$ used in the proof of Theorem~\ref{tcc4.1}.
The derivatives $\chi_n'$ are discontinuous at $n^{-1}$ and at $1$.
The functions $\xi_n=\chi_n^2$ have similar characteristics to the $\chi_n$ except their derivatives $\xi_n'$ are only discontinuous at $1$.
Therefore we now consider the $\xi_n$ and modify the derivative $\xi'_n$ by the addition of a linear function to remove the discontinuity.
The modifications $\eta_n$ of the derivatives are defined by $\eta_n(s)=0$ if $s\leq n^{-1}$ or $s\geq 1$ and
\[
\eta_n(s)=\xi_n'(s)-\xi'_n(1)(s-n^{-1})/(1-n^{-1})
\]
if $s\in[n^{-1},1]$.
Now $\eta_n$ is continuous and we set $\zeta_n(s)=\int_0^s\eta_n$ for $s\leq1$ and $\zeta_n(s)=\zeta_n(1)$ if $s\geq1$.
The resulting function $\zeta_n$ is twice-differentiable.
Finally setting $\rho_n=\zeta_n/\zeta_n(1)$ one verifies that $0\leq \rho_n\leq 1$, $\rho_n(s)=0$ if $s\leq n^{-1}$ and $\rho_n(s)=1$
if $s\geq1$.
Moreover, $\lim_{n\to\infty}\rho_n(s)=1$ for all $s>0$.
Finally set $\sigma_n=\rho_n\,\zeta$ where $\zeta$ is the cutoff function used in the proof of Theorem~\ref{tcc4.1}.
Now we consider the estimates (\ref{recc4.1}) and (\ref{recc4.2}) with $\chi$ replaced by the sequence $\sigma_n$.
First since $\sigma_n\to 1$ on $\langle0,1]$ as $n\to\infty$ it follows that $\int^\infty_0dr\,r^{-1}|\sigma_n(r)|^p\to\infty$ as $n\to\infty$
but $\int^\infty_0dr\,r^{p-1}|\sigma_n(r)|^p$ is uniformly bounded in $n$.
Moreover, $\sigma_n'=\zeta_n(1)^{-1}\eta_n\,\zeta+\rho_n\,\zeta'$ and it follows by the earlier calculation that $\int^\infty_0dr\,r^{p-1}|\sigma'_n(r)|^p$ is
also uniformly bounded in $n$.
Therefore it remains to consider the term in (\ref{recc4.2}) dependent on $\sigma_n''$.
But $\sigma''_n=\zeta_n(1)^{-1}(\eta_n'\,\zeta+2\,\eta_n\,\zeta')+\rho_n\,\zeta''$.
Therefore it follows from the definition of $\eta_n$ and the cutoff $\zeta$ that
\[
\int^\infty_0dr\,r^{2p-1}|\sigma_n''|^p\leq a+b\int^1_{n^{-1}}dr\,r^{2p-1}|\xi_n''(r)|^p
\]
with $a,b>0$ independent of $n$.
Now on $[n^{-1},1]$ one has
\[
\xi''_n(r)=2\,(\chi_n'(r))^2+2\,\chi_n(r)\chi_n''(r)=2\,r^{-2}(1-\log rn)/(\log n)^2
\;.
\]
Therefore
\[
\int^1_{n^{-1}}dr\,r^{2p-1}|\xi_n''(r)|^p=2^p(\log n)^{-2p}\int^1_{n^{-1}}dr\,r^{-1}|1-\log rn|^p\leq 2^{p-1}(\log n)^{-(p-1)}
\]
and this gives a bound uniform for $n>1$.
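The displayed formula for $\xi_n''$ is consistent with the standard logarithmic cutoff $\chi_n(r)=\log(rn)/\log n$ on $[n^{-1},1]$; assuming that explicit form (it is not written out in this section), the identity can be checked numerically:

```python
import math

def xipp_from_parts(r, n):
    # xi_n'' = 2*(chi_n')^2 + 2*chi_n*chi_n'' with chi_n(r) = log(r*n)/log(n)
    ln = math.log(n)
    chi = math.log(r * n) / ln
    chip = 1.0 / (r * ln)          # chi_n'
    chipp = -1.0 / (r * r * ln)    # chi_n''
    return 2.0 * chip**2 + 2.0 * chi * chipp

def xipp_closed(r, n):
    # claimed closed form 2 r^{-2} (1 - log(r*n)) / (log n)^2
    ln = math.log(n)
    return 2.0 * (1.0 - math.log(r * n)) / (r * r * ln * ln)

for n in (10, 100):
    for r in (0.2, 0.5, 0.9):      # sample points inside [1/n, 1]
        assert abs(xipp_from_parts(r, n) - xipp_closed(r, n)) < 1e-12
```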
One now deduces that if $\varphi_\alpha$ in the bound (\ref{recc4.0}) is replaced by $\varphi_{\alpha,n}=d_0^{\,-\alpha}\eta\,\sigma_n$
then in the limit $n\to\infty$ the second term tends to zero since the numerator is bounded uniformly for $n>1$ and the denominator converges to infinity.
Therefore one concludes that
\[
\nu_p(\Omega)\leq (1-\lambda)^{-(p-1)}\left((p-1)(d-k)(d-k+p\,\delta-2p)\,p^{-2}\right)^p
\]
for all $\lambda\in\langle0,1\rangle$.
Hence in the limit $\lambda\to0$ one obtains the first bound of the theorem.
This was, however, obtained with the assumption $c(s)=s^\delta$.
But again by rescaling one can arrange that the $\sigma_n$ are supported in a small interval $[0,r]$ and this allows one to reduce the general case to the special case.
There is one extra small complication which did not occur in the Hardy case and that arises since the weighting factor $c_\Omega$
is positioned centrally in the operator $H$ and is not a direct weighting of the measure.
But this causes no difficulty.
For example, if $\varphi$ has support within distance $r$ of the boundary then
\begin{eqnarray*}
|(Hd_0^{\,-\alpha})\varphi|&\leq&c_\Omega\,| \Delta d_0^{\,-\alpha}|\,|\varphi|
+|c'_\Omega|\,|(\nabla d_0).(\nabla d_0^{\,-\alpha})|\,|\varphi|\\[5pt]
&\leq& \left(d_0^{\,\delta}|\Delta d_0^{\,-\alpha}|
+d_0^{\,\delta-1}|\nabla d_0^{\,-\alpha}|\right)|\varphi|\,(1+r^{|\delta-\delta'|})
\;.
\end{eqnarray*}
Making these modifications one obtains the first bound of the theorem modulo an additional factor $(1+r^{|\delta-\delta'|})$
but since this is valid for all small $r>0$ one can then take the limit $r\to0$.
\medskip
\noindent{\bf Step 2}$\;$ Next we assume $k_\infty=k$ and establish the second bound in Theorem~\ref{tcc4.31}.
The proof is similar to that of Theorem~\ref{tcc4.3}.
We continue to use the factorization ${\bf R}^d={\bf R}^k\times {\bf R}^{d-k}$ and to set $x=(y,z)\in \Omega$ with
$y\in {\bf R}^k$ and $z\in{\bf R}^{d-k}$.
Then $ d_\Gamma(y,z)=|z|$ if $y\in K$
and the Rellich inequality (\ref{ecc1.2}) on $L_p(\Omega)$ takes the form
\begin{equation}
\int_{K}dy\int_{{\bf R}^{d-k}} dz\,|(H\varphi)(y,z)|^p\geq
c_p^{\,p}\int_{K}dy\int_{{\bf R}^{d-k}} dz\, c(|z|)^p|z|^{-2p}|\varphi(y,z)|^p
\label{ecc4.22}
\end{equation}
for all $\varphi\in C_c^2(K\times {\bf R}^{d-k})$.
Therefore the optimal constant satisfies
\[
\nu_p(\Omega)\leq
\bigg(\int_{K}dy\int_{{\bf R}^{d-k}} dz\,|(H\varphi)(y,z)|^p\Big/
\!\int_{K}dy\int_{{\bf R}^{d-k}} dz\, c(|z|)^p|z|^{-2p}\, |\varphi(y,z)|^p\bigg)
\]
for all $\varphi\in C_c^2(K\times {\bf R}^{d-k})$.
Again we set $\varphi=\eta\, \chi$ with $\chi\in C_c^\infty(O_{\!R})$, where $O_{\!R}=\{z\in {\bf R}^{d-k}:|z|>R\}$, and $\eta\in C_c^\infty(K)$.
But the action of $H$ on the product $ \chi\,\eta$ takes the Grushin form
\begin{eqnarray*}
(H\varphi)(y,z)&=&-\sum^k_{j=1}c(|z|)\chi(z)\,(\partial_j^{\,2}\eta)(y)-\sum^d_{j=k+1}(\partial_jc(|z|)\partial_j\chi)(z)\,\eta(y)\\[5pt]
&=&c(|z|) \chi(z)(\Delta\eta)(y)+(H\chi)(z)\eta(y)
\end{eqnarray*}
where the second line is a slight abuse of notation.
This identity replaces the Leibniz rule used in the proof of Theorem~\ref{tcc4.3}.
Then arguing as in the former proof one
obtains for all $\lambda\in\langle0,1\rangle$ the estimates
\begin{eqnarray*}
\nu_p(\Omega)&\leq&(1-\lambda)^{-(p-1)}\Bigg({{\int_{O_{\!R}} dz\,|(H\chi)(z)|^p}\over{ \int_{O_{\!R}}dz\,c(|z|)^p|z|^{-2p}|\chi(z)|^p}}\Bigg)
\nonumber\\[5pt]
&&\hspace{3.3cm}+\lambda^{-(p-1)}\Bigg({{\int_{O_{\!R}} dz\,c(|z|)^p|\chi(z)|^p}\over{ \int_{O_{\!R}} dz\,c(|z|)^p|z|^{-2p}|\chi(z)|^p}}\Bigg)
\Bigg({{\int_K dy\,|(\Delta\eta)(y)|^p}\over{ \int_K dy|\eta(y)|^p}}\Bigg)
\end{eqnarray*}
as a replacement for (\ref{ecc4.14}).
But since $k_\infty=k$ the infimum over $\eta$ of the second term on the right hand side is zero.
This is no longer a consequence of Lemma~\ref{lcc4.0} but it follows by identical reasoning.
Hence one can then take the limit $\lambda\to0$ to deduce that
\[
\nu_p(\Omega)\leq\Bigg({{\int_{O_{\!R}} dz\,|(H\chi)(z)|^p}\over{ \int_{O_{\!R}}dz\,c(|z|)^p|z|^{-2p}|\chi(z)|^p}}\Bigg)
\]
for all $\chi\in C_c^\infty(O_{\!R})$.
Thus the problem of estimating $\nu_p(\Omega)$ is reduced to a `large distance' estimate on the Rellich
constant $\nu_p({\bf R}^{d-k}\backslash\{0\})$.
This follows from the standard argument sketched in the proof of Proposition~\ref{pcc4.2}.
One obtains the bound
\[
\nu_p(\Omega)\leq \left((p-1)(d-k)(d-k+p\,\delta'-2p)\,p^{-2}\right)^p
\;.
\]
The second statement of the theorem then follows by minimizing this bound and the local bound obtained in Step~1 of the proof.
\smallskip
The proof of Theorem~\ref{tcc4.31} is completed by noting that if $\delta=\delta'$ the upper bound on $\nu_p(\Omega)$ coincides with
the lower bound given by Theorem~\ref{tcc1.2}.
Therefore one has equality between $\nu_p(\Omega)$ and the bound.
\hfill$\Box$
\bigskip
Although Proposition~\ref{pcc4.2} and Theorem~\ref{tcc4.31} do not provide compelling evidence that the optimal constant
in the Rellich inequality should be $C_p^{\,p}$, the arguments of \cite{Rob12} give some support to this conjecture in more general circumstances.
The following $L_2$-result applies on the complement of a general convex set and for all $\delta, \delta'\in[0,2\rangle$.
\begin{prop}\label{pcc4.3}
Adopt the assumptions of Theorem~$\ref{tcc1.2}$ but with
$p=2$.
It follows that if $d-d_H+2(\delta\wedge\delta')-4>0$ then the Rellich inequality $(\ref{ecc1.2})$ is valid with constant equal to $C_2^{\,2}$.
\end{prop}
\proof\
The proposition is essentially a corollary of Theorem~1.2 in \cite{Rob12}.
First Theorem~\ref{tcc1.1} of the current paper establishes the Hardy inequality
\[
\int_\Omega c_\Omega\,|\nabla\varphi|^2\geq \int_\Omega\,|\eta\,\varphi|^2
\]
with $\eta=a_2\,c_\Omega^{\,1/2}d_\Gamma^{\;-1}$ where $a_2=(d-d_H+(\delta\wedge\delta')-2)/2$.
Secondly, one has the pointwise bound
\[
c_\Omega\,|\nabla\eta|^2=a_2^{\,2}\,c_\Omega^{\,2}\,d_\Gamma^{\;-4}\,|1-c'_\Omega d_\Gamma/2c_\Omega|^2\leq (\nu /a_2^2)\,\eta^4
\]
where $\nu=\sup\{|1-t/2|^2:\delta\wedge\delta'\leq t\leq \delta\vee\delta'\}$.
In particular $\nu=(1-(\delta\wedge\delta')/2)^2$ if $\delta,\delta'\in[0,2\rangle$.
Theorem~1.2 in \cite{Rob12} asserts, however, that if $\nu /a_2^2<1$ then the Rellich inequality (\ref{ecc1.2}) is satisfied
with constant $\nu_2(\Omega)=a_2^4(1-\nu /a_2^2)^2=(a_2^2-\nu)^2$.
But the condition $\nu<a_2^2$ is equivalent to $d-d_H+2(\delta\wedge\delta')-4>0$ or, equivalently, to $C_2>0$.
Then one calculates that
\[
\nu_2(\Omega)=(a_2^2-\nu)^2=((d-d_H)(d-d_H+2(\delta\wedge\delta')-4)/4)^2
\;.
\]
But the right hand side is equal to $C_2^{\,2}$.
\hfill$\Box$
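The final identification $(a_2^{\,2}-\nu)^2=C_2^{\,2}$ is pure algebra; as a side check one can verify it numerically (sample values are arbitrary, and $m$ stands for $\delta\wedge\delta'$):

```python
def a2(d, dH, m):
    # a_2 = (d - d_H + min(delta, delta') - 2)/2 from Theorem tcc1.1 with p = 2
    return (d - dH + m - 2) / 2

def nu(m):
    # nu = sup{|1 - t/2|^2} = (1 - m/2)^2 for delta, delta' in [0, 2)
    return (1 - m / 2) ** 2

def C2(d, dH, m):
    # C_2 = (d - d_H)(d - d_H + 2m - 4)/4
    return (d - dH) * (d - dH + 2 * m - 4) / 4

for d, dH, m in [(6, 1, 0.5), (8, 2, 1.0), (10, 3, 1.5)]:
    assert abs(a2(d, dH, m) ** 2 - nu(m) - C2(d, dH, m)) < 1e-12
```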
\begin{remarkn}\label{rcc4.2}
The proof of the lower bound $\nu_2(\Omega)\geq C_2^{\,2}$ established in Proposition~\ref{pcc4.3}
readily extends to all $\delta,\delta'\geq0$ with $\delta+\delta'<4$.
Moreover, if $K=\{0\}$ then it also follows from Remark~\ref{rcc4.1}, with $p=2$, that $\nu_2(\Omega)\leq C_2^{\,2}$.
Therefore the conclusion $\nu_2(\Omega)= C_2^{\,2}$ of Proposition~\ref{pcc4.2} is valid for $\delta\neq\delta'$ if $p=2$.
\end{remarkn}
\section{Concluding remarks}\label{S5}
In conclusion we note that the $L_2$-Rellich inequalities established in \cite{Rob12} are much stronger than the corresponding $L_2$-statement of Theorem~\ref{tcc1.2}.
If $p=2$ the Hardy inequality (\ref{ecc1.2}) gives a lower bound on the quadratic form $h(\varphi)=\int_\Omega c_\Omega\,|\nabla\varphi|^2$ which is valid for all $\varphi\in C_c^1(\Omega)$.
But the form $h$ is closeable and the lower bound extends to the closure $\overline h$.
The latter is, however, a local Dirichlet form and it determines in a canonical manner a submarkovian operator $H_{\!F}$, the Friedrichs' extension
of the symmetric operator $H=-\mathop{\rm div}(c_\Omega\nabla)$ defined in the introduction on $C_c^2(\Omega)$.
But the domain of the closed form $\overline h$ is equal to $D(H_{\!F}^{\,1/2})$ and $\overline h(\varphi)=\|H_{\!F}^{\,1/2}\varphi\|_2^2$
for all $\varphi\in D(H_{\!F}^{\,1/2})$.
In particular the $L_2$-Hardy inequality of Theorem~\ref{tcc1.1} can be rephrased in a `weighted operator' form
\[
\|H_{\!F}^{\,1/2}\varphi\|_2^2=\int_\Omega |H_{\!F}^{\,1/2}\varphi|^2\geq a_2^{\,2}\int_\Omega c_\Omega|d_\Omega^{\,-1}\varphi|^2
=a_2^{\,2}\|c_\Omega^{\,1/2}d_\Omega^{\,-1}\varphi\|_2^2
\]
for all $\varphi\in D(H_{\!F}^{\,1/2})$.
It can be stated equivalently as
\[
H_{\!F}\geq a_2^{\,2}\,c_\Omega\,d_\Omega^{\;-2}
\]
in the sense of ordering of positive self-adjoint operators.
This form of the Hardy inequality is the starting point of Theorem~1.2 of \cite{Rob12}.
The conclusion of the latter theorem is the validity of the Rellich inequality
\begin{equation}
\|H_{\!F}\varphi\|_2^2=\int_\Omega |H_{\!F}\varphi|^2\geq c_2^{\,2}\int_\Omega c_\Omega^2|d_\Omega^{\,-2}\varphi|^2
=c_2^{\,2}\|c_\Omega\,d_\Omega^{\,-2}\varphi\|_2^2
\label{ecc5.1}
\end{equation}
for all $\varphi\in D(H_{\!F})$ or, in the sense of operator ordering,
\[
H_{\!F}^2\geq c_2^{\,2}\,c_\Omega^{\,2}\,d_\Omega^{\;-4}
\;.
\]
But the statement of Theorem~\ref{tcc1.2} in the introduction only gives a
statement comparable to (\ref{ecc5.1}) for $\varphi\in C_c^2(\Omega)$ or, by closure for all $\varphi\in D(\overline H)$ where $\overline H$ is the closure of $H$.
In particular it gives the operator statement
\[
H^*\overline H\geq c_2^{\,2}\,c_\Omega^{\,2}\,d_\Omega^{\;-4}
\;.
\]
Since $H_{\!F}\supseteq \overline H$ it follows that
$(H_{\!F})^2\leq H^*\overline H$ with equality if and only if $H$ is essentially self-adjoint,
i.e.\ if and only if $H^*=\overline H=H_{\!F}$.
Hence the $L_2$-Rellich inequalities of \cite{Rob12} are strictly stronger than those of the current paper
unless $H$ is essentially self-adjoint.
Another way of distinguishing between the two classes of symmetric operators is to consider the case that
$H^2$ is densely-defined as a positive symmetric operator.
Then $H^*\overline H$ corresponds to the Friedrichs' extension $(H^2)_{\!F}$ of $H^2$.
For example, if $H=\Delta$, the Laplacian defined on $C_c^2(\Omega)$, then $\Delta^2$ is a symmetric operator
on $C_c^4(\Omega)$.
But $\Delta_F$ is the self-adjoint extension of $\Delta$ with Dirichlet boundary conditions
and $\Delta^*\overline \Delta=(\Delta^2)_{\!F}$ is the biharmonic operator which is determined by
quite different boundary conditions to those of $(\Delta_F)^2$.
If one considers the classic case of $\Omega={\bf R}^d\backslash\{0\}$ it is well known that $\Delta$ has a unique
submarkovian extension if and only if $d>2$, which happens to be the condition that ensures the validity of the
Hardy inequality.
Moreover, $\Delta$ has a unique self-adjoint extension if and only if $d>4$, which is the condition which ensures
the validity of the Rellich inequality.
So in this simple case there is no ambiguity.
But for more general $\Omega$ and operators $H=-\mathop{\rm div}(c_\Omega\nabla)$ the situation is much more complicated.
Criteria for Markov uniqueness were obtained for quite general $\Omega$ in \cite{LR} and it would be of interest
to develop analogous criteria for self-adjointness.
\section*{Acknowledgements}
The author is again indebted to Juha Lehrb\"ack for continuing advice and information on Hardy and Rellich inequalities
and for a critical reading of a preliminary version of the paper.
\section{Introduction}
The Lugiato-Lefever (LL) equation \cite{LL} in one and two dimensions (1D\
and 2D) is a fundamental model governing the dynamics of optical fields in
pumped lossy laser cavities with the intrinsic Kerr nonlinearity, which may
have self-focusing or defocusing sign. This equation is well known as an
important tool for the analysis of pattern formation, with various
applications in nonlinear optics \cite{patterns-early, pixel}. The progress
in theoretical and experimental studies has recently drawn a great deal of
renewed interest to the use of the LL equation in diverse settings \cite%
{patterns-2}-\cite{NJP}. A natural extension of these studies is
incorporation of external potentials into the LL equation, which can be
easily fabricated in laser cavities as transverse landscapes of the
refractive-index inhomogeneity, and may be used as an efficient means for
the control of optical fields \cite{Kestas}.
One of the essential applications of the LL equation is its use for modeling
well-localized pixels (i.e., sharply bounded bright spots) in the cavity
\cite{pixel}. In most cases, pixels are considered as \textit{anti-dark
solitons}, i.e., bright objects created on top of a uniformly pumped
background field. In this work, we aim to demonstrate a possibility to
create completely localized robust pixels (i.e., bright solitons with zero
background), by adding to the model a confining potential corresponding to
an isotropic 2D harmonic oscillator. Furthermore, we demonstrate that the
same setting makes it possible to create stable \textit{vortex pixels}, by
applying a vortically structured pump. The consideration reported below
combines an analytical approach, chiefly based on the variational and
Thomas-Fermi approximations (VA and TFA), and systematic direct simulations,
in imaginary and real time alike, with the purpose to create confined modes
and test their stability.
The paper is organized as follows. The model, based on the 2D\ LL equation
with the harmonic-oscillator trapping potential, is formulated in Section
II. Analytical treatment, which makes use of the VA, power-balance equation,
and TFA, is presented in Section III. Numerical results for the existence
and stability of the fundamental (zero-vorticity) and vortical trapped modes
are reported in Sections IV and V, respectively. The latter section also
reports simple analytical results for the vortex states, obtained by means
of the TFA. The paper is concluded by Section VI.
\section{The model}
The 2D LL equation for the amplitude $\phi (x,y,t)$ of the electromagnetic
field in a pumped lossy laser cavity is (see, e.g., Ref. \cite{Kestas})
\begin{gather}
i\left( \gamma +\frac{\partial }{\partial t}\right) \phi =\left[ -\frac{1}{2}%
\left( \frac{\partial ^{2}}{\partial x^{2}}+\frac{\partial ^{2}}{\partial
y^{2}}\right) +\Delta \right. \notag \\
+\left. \frac{\Omega ^{2}}{2}(x^{2}+y^{2})+\sigma |\phi |^{2}\right] \phi
+E\;, \label{LuLe}
\end{gather}%
where $E$ is the pump field, $\gamma >0$ the dissipation rate, $\Delta
\gtrless 0$ detuning of the pump with respect to the cavity, and $\Omega
^{2} $ the strength of the confining potential, while $\sigma =-1$ and $+1$
correspond to the self-focusing and defocusing nonlinearity, respectively.
By means of rescaling, one may fix $\gamma =1$, although it may be convenient
to keep $\gamma $ as a free parameter, as shown below.
Stationary solutions to Eq. (\ref{LuLe}) have a simple asymptotic form at $%
r\equiv \sqrt{x^{2}+y^{2}}\rightarrow \infty $:%
\begin{equation}
\phi (r)\approx -\frac{2E}{\left( \Omega r\right) ^{2}}+\frac{4\left( \Delta
-i\gamma \right) E}{\left( \Omega r\right) ^{4}}. \label{infi}
\end{equation}%
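As a consistency check of this expansion (a side verification; parameter values are arbitrary), one can confirm numerically that the residual of the stationary equation, with the cubic term dropped since it is of higher order, decays like $r^{-4}$:

```python
def residual(r, E, Omega, Delta, gamma):
    # two-term asymptotics phi = A/r^2 + B/r^4 of Eq. (infi)
    A = -2 * E / Omega**2
    B = 4 * (Delta - 1j * gamma) * E / Omega**4
    phi = A / r**2 + B / r**4
    lap = 4 * A / r**4 + 16 * B / r**6  # radial Laplacian phi'' + phi'/r, by hand
    # stationary LL equation with the O(r^{-6}) cubic term dropped
    return 1j * gamma * phi - (-0.5 * lap + Delta * phi
                               + 0.5 * Omega**2 * r**2 * phi + E)

E, Omega, Delta, gamma = 1.3, 2.0, 0.7, 0.4
r1, r2 = 50.0, 100.0
ratio = abs(residual(r1, E, Omega, Delta, gamma)) / abs(residual(r2, E, Omega, Delta, gamma))
assert 15.5 < ratio < 16.5   # doubling r reduces the residual by about 2^4
assert abs(residual(r2, E, Omega, Delta, gamma)) < 1e-6
```
The orders $r^{0}$ and $r^{-2}$ cancel exactly, which is why the residual scales as $r^{-4}$.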
We also note that the following exact power-balance equation ensues from Eq.
(\ref{LuLe}):
\begin{equation}
\frac{dP}{dt}=-2\gamma P-2\int \int \mathrm{Im}\{E^{\ast }\phi (x,y,t)\}dxdy,
\label{PB}
\end{equation}%
where power $P$ (alias norm) of the solitary wave is defined as
\begin{equation}
P=\int \int |\phi (x,y,t)|^{2}dxdy. \label{Norm}
\end{equation}
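Equation (\ref{PB}) follows by multiplying Eq. (\ref{LuLe}) by $\phi^{\ast}$, taking imaginary parts and integrating; the underlying pointwise identity is

```latex
\[
\frac{1}{2}\frac{\partial }{\partial t}|\phi |^{2}
=-\gamma \,|\phi |^{2}-\mathrm{Im}\{E^{\ast }\phi \}
-\frac{1}{2}\,\mathrm{Im}\,\nabla \cdot \left( \phi ^{\ast }\nabla \phi \right)\;,
\]
```
where the divergence term integrates to zero for localized fields.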
The objective is to reduce the 2D LL equation (\ref{LuLe}) to a
quasi-zero-dimensional limit (a dynamical system for a pixel, similar to
those realized by theoretically predicted \cite{pixel} and experimentally
created \cite{NJP} spatial solitons) in the case of tight confinement,
represented by large $\Omega ^{2}$. First, we do it by means of the VA,
defining
\begin{equation}
\phi (x,y,t)\equiv \Phi (x,y,t)\exp \left( -\gamma t\right) ,
\end{equation}
and thus casting Eq. (\ref{LuLe}) in the form of
\begin{eqnarray}
i\frac{\partial }{\partial t}\Phi &=&\left[ -\frac{1}{2}\left( \frac{%
\partial ^{2}}{\partial x^{2}}+\frac{\partial ^{2}}{\partial y^{2}}\right)
+\Delta \right. \notag \\
&+&\left. \frac{\Omega ^{2}}{2}(x^{2}+y^{2})+\sigma e^{-2\gamma t}|\Phi |^{2}%
\right] \Phi +Ee^{\gamma t}. \label{PhiLL}
\end{eqnarray}%
Unlike the original LL equation (\ref{LuLe}), the transformed one (\ref%
{PhiLL}) can be directly derived from a real time-dependent Lagrangian,
\begin{eqnarray}
L &=&\int \int dxdy\left\{ \frac{i}{2}\left( \Phi _{t}^{\ast }\Phi -\Phi
^{\ast }\Phi _{t}\right) +\frac{1}{2}\left( |\Phi _{x}|^{2}+|\Phi
_{y}|^{2}\right) \right. \notag \\
&+&\left[ \Delta +\frac{\Omega ^{2}}{2}(x^{2}+y^{2})\right] |\Phi |^{2}+%
\frac{\sigma }{2}e^{-2\gamma t}\,|\Phi |^{4} \notag \\
&+&\left. e^{\gamma t}\left( E\Phi ^{\ast }+E^{\ast }\Phi \right) \right\}
\,. \label{LLL}
\end{eqnarray}
\section{Analytical considerations}
\subsection{The variational approximation}
For the 1D LL equation without trapping potentials, the VA was developed in
Ref. \cite{VA-LL}. To derive this approximation in a form appropriate for
the present model, we note that, in the lowest approximation, Eq. (\ref%
{PhiLL}) gives rise to the following asymptotic form of solutions at $%
r\rightarrow \infty $: $\Phi =-2E\left( \Omega r\right) ^{-2}e^{\gamma t}$,
cf. Eq. (\ref{infi}). This form suggests adopting an ansatz based on the
fractional expression, with real variables $f(t)$ and $\chi (t)$, which may
be combined into a complex one, $F(t)=f(t)\exp \left( i\chi (t)\right) $:%
\begin{eqnarray}
\Phi \left( x,y,t\right) &=&-\frac{2E}{\Omega ^{2}}e^{\gamma t}\frac{F(t)}{%
1+r^{2}F(t)}\equiv \epsilon \,e^{\gamma t}\frac{f(t)\,e^{i\chi (t)}}{%
1+r^{2}f(t)\,e^{i\chi (t)}}\;,\quad \label{ansatz} \\
\epsilon &\equiv &-\frac{2E}{\Omega ^{2}}\;. \label{epsilon}
\end{eqnarray}
\begin{figure}[tb]
\includegraphics[width=0.75\columnwidth]{NF1.eps}
\caption{(Color online) Lines in the parameter plane of ($\Delta $, $g$),
along which solutions of the overdetermined system (\protect\ref{n21}), (%
\protect\ref{n22}) exist. Here, solid red, dashed gray, and dotted black
lines correspond, respectively, to $\Omega =2$, $4$, and $6$. The inset
displays a zoom of the curve for $\Omega =10$.}
\label{ER}
\end{figure}
The insertion of ansatz (\ref{ansatz}) in Eq. (\ref{LLL}) and subsequent
integration gives rise to an effective Lagrangian,
\begin{eqnarray}
\frac{e^{-2\gamma t}}{\pi \epsilon ^{2}}L_{\mathrm{eff}} &=&\frac{1}{2}%
fq_{1}(\chi )\frac{d\chi }{dt}-\frac{1}{2}q_{2}(\chi )\sin \chi \frac{df}{dt}%
+f^{2}q_{2}(\chi ) \notag \\
&+&\Delta fq_{1}(\chi )+\frac{\sigma \epsilon ^{2}}{8}f^{3}q_{3}(\chi
)-\Omega ^{2}q_{1}(\chi )\cos \chi \notag \\
&-&\frac{\Omega ^{2}}{4}\int d\chi \lbrack q_{3}(\chi )\sin \chi ]\,,
\label{L}
\end{eqnarray}%
with $q_{1}(\chi )\equiv \chi /\sin \chi $, $q_{2}(\chi )\equiv \left( \sin
\chi -\chi \cos \chi \right) /\sin ^{3}\chi $, and $q_{3}(\chi )\equiv
\lbrack 2\chi -\sin \left( 2\chi \right) ]/\sin ^{3}\chi $. The last
term in Eq. (\ref{L}) is cast in the integral form as a result of
``renormalization'': the respective term in the original
Lagrangian formally diverges logarithmically at $R\rightarrow \infty $, but
the diverging part actually does not depend on $f$ and $\chi $, and it may
be cancelled by means of the differentiation with respect to $\chi $ and
subsequent integration, also with respect to $\chi $.
The Euler-Lagrange equations following from Lagrangian (\ref{L}) are (taking
into account that the Lagrangian must be substituted into the action, $\int
Ldt$, and then the action must be subjected to the variation; this makes it
necessary to apply the time differentiation to $e^{2\gamma t}$):
\begin{eqnarray}
&&\frac{1}{2}\left[ q_{2}(\chi )\cos \chi +q_{2}^{\prime }(\chi )\sin \chi
+q_{1}(\chi )\right] \frac{df}{dt} \notag \\
&+&\left( \gamma f-\Omega ^{2}\sin \chi \right) q_{1}(\chi )+\left( \Omega
^{2}\cos \chi -\Delta \,f\right) q_{1}^{\prime }(\chi ) \notag \\
&-&f^{2}\,q_{2}^{\prime }(\chi )-\frac{g}{8}f^{3}q_{3}^{\prime }(\chi )+%
\frac{\Omega ^{2}}{4}q_{3}(\chi )\sin \chi =0\;, \label{f0}
\end{eqnarray}%
\begin{eqnarray}
&&\Delta q_{1}(\chi )+2\,f\,q_{2}(\chi )+\frac{3g}{8}\,f^{2}\,q_{3}(\chi
)+\gamma q_{2}(\chi )\sin \chi \notag \\
&+&\frac{1}{2}\left[ q_{2}(\chi )\cos \chi +q_{2}^{\prime }(\chi )\sin \chi
+q_{1}(\chi )\right] \frac{d\chi }{dt}=0\;, \label{theta0}
\end{eqnarray}%
where the renormalized nonlinearity coefficient is [see Eq. (\ref{epsilon})]
\begin{equation}
g=\sigma \epsilon ^{2}\equiv 4\sigma E^{2}/\Omega ^{4}. \label{g}
\end{equation}%
Note that, although it may seem that Eqs. (\ref{f0}) and (\ref{theta0}) are
singular at $\chi =0$, in reality all the singularities cancel. A
singularity is instead possible at $\chi =\pi $.
We consider stationary (fixed-point) solutions of Eqs. (\ref{f0}) and (\ref%
{theta0}) by setting $df/dt=d\chi /dt=0$, which yields
\begin{eqnarray}
&&\left( \Omega ^{2}\sin \chi -\gamma f\right) q_{1}(\chi )+\left( \Delta
\,f-\Omega ^{2}\cos \chi \right) q_{1}^{\prime }(\chi ) \notag \\
&+&f^{2}\,q_{2}^{\prime }(\chi )+\frac{g}{8}f^{3}q_{3}^{\prime }(\chi )-%
\frac{\Omega ^{2}}{4}q_{3}(\chi )\sin \chi =0, \label{f}
\end{eqnarray}%
\begin{equation}
\Delta q_{1}(\chi )+2\,f\,q_{2}(\chi )+\frac{3g}{8}\,f^{2}\,q_{3}(\chi
)+\gamma q_{2}(\chi )\sin \chi =0. \label{theta}
\end{equation}%
Further, it is possible to find approximate solutions of Eqs. (\ref{f}) and (%
\ref{theta}), assuming that they have
\begin{equation}
|\chi |\ll \pi ~. \label{<<}
\end{equation}%
In this case, Eq. (\ref{theta}), in the first approximation, assumes the
form of
\begin{equation}
\Delta +\frac{2}{3}f+\frac{g}{2}f^{2}+\frac{\gamma }{3}\chi =0.
\label{new_theta}
\end{equation}%
Similarly, in the lowest approximation, Eq. (\ref{f}) yields an
expression for $\chi $:
\begin{equation}
\chi =\frac{30\gamma f}{10\Omega ^{2}+f\left[ 10\Delta +8f+3gf^{2}\right] }%
\;. \label{th}
\end{equation}%
The assumption (\ref{<<}) may then be secured by imposing strong
confinement, i.e., considering large values of $\Omega $. In this case,
Eqs. (\ref{new_theta}) and (\ref{th}) can be further simplified to
\begin{eqnarray}
f &\approx &\frac{-2\pm \sqrt{4-18g\Delta }}{3g}, \label{ff-simple} \\
\chi &\approx &\frac{\gamma \left( -2\pm \sqrt{4-18g\Delta }\right) }{%
g\Omega ^{2}}. \label{th-simple}
\end{eqnarray}%
Obviously, Eqs. (\ref{ff-simple}) and (\ref{th-simple}) produce a physically
relevant result under condition $g\Delta <2/9$.
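As a sanity check, the roots given by Eqs. (\ref{ff-simple}) and (\ref{th-simple}) can be substituted back into Eqs. (\ref{new_theta}) and (\ref{th}). The following minimal numerical sketch (parameter values are illustrative, taken from the defocusing case of Fig. \ref{F5}(a)) confirms the consistency at large $\Omega $:

```python
import math

# illustrative parameters, cf. the defocusing case of Fig. F5(a)
g, Delta, gamma, Omega = 1.0, -10.0, 1.0, 10.0

# simplified fixed point, Eqs. (ff-simple) and (th-simple), "+" branch
disc = math.sqrt(4.0 - 18.0 * g * Delta)        # requires g*Delta < 2/9
f = (-2.0 + disc) / (3.0 * g)
chi = gamma * (-2.0 + disc) / (g * Omega**2)

# residual of Eq. (new_theta): Delta + (2/3) f + (g/2) f^2 + (gamma/3) chi
res = Delta + 2.0 * f / 3.0 + 0.5 * g * f**2 + gamma * chi / 3.0

# chi recomputed from the unsimplified relation, Eq. (th)
chi_full = 30.0 * gamma * f / (
    10.0 * Omega**2 + f * (10.0 * Delta + 8.0 * f + 3.0 * g * f**2))

print(f, chi, res, chi_full)  # res is small; chi_full is close to chi
```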
One can construct another approximate solution for large detuning $\Delta $:
\begin{eqnarray}
f &\approx &\sqrt{-2\Delta /g}-2/\left( 3g\right) , \label{f_DLarge} \\
\chi &\approx &(15/2)\gamma /\Delta . \label{theta_DLarge}
\end{eqnarray}%
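The large-$|\Delta |$ asymptotics (\ref{f_DLarge}) and (\ref{theta_DLarge}) can be checked in the same way; note that, for $g>0$, reality of Eq. (\ref{f_DLarge}) requires $\Delta <0$. A minimal sketch with illustrative values:

```python
import math

g, gamma, Omega = 1.0, 1.0, 10.0
Delta = -1000.0      # large |Delta|; Delta/g < 0 keeps Eq. (f_DLarge) real

f = math.sqrt(-2.0 * Delta / g) - 2.0 / (3.0 * g)   # Eq. (f_DLarge)
chi = 7.5 * gamma / Delta                           # Eq. (theta_DLarge)

# relative residual of Eq. (new_theta)
res = (Delta + 2.0 * f / 3.0 + 0.5 * g * f**2 + gamma * chi / 3.0) / abs(Delta)
# chi recomputed from the unsimplified relation, Eq. (th)
chi_full = 30.0 * gamma * f / (
    10.0 * Omega**2 + f * (10.0 * Delta + 8.0 * f + 3.0 * g * f**2))
print(res, chi, chi_full)  # res is small; chi_full is close to chi
```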
In the general case, stationary solutions of Eqs. (\ref{f}) and (\ref{theta}%
), where, as said above, we may fix $\gamma =1$, depend on three parameters:
$\Delta \gtrless 0$, $g\gtrless 0$ [see Eq. (\ref{g})], and $\Omega ^{2}>0$.
\begin{figure}[tb]
\includegraphics[width=0.48\columnwidth]{NF2a.eps} \hfil
\includegraphics[width=0.48\columnwidth]{NF2b.eps} \hfil
\includegraphics[width=0.48\columnwidth]{NF2c.eps}
\caption{(Color online) Power $P$ versus pumping strength $E$. Variational
results for the defocusing case ($\protect\sigma =+1$), produced by
simplified equations (\protect\ref{ff-simple}) and (\protect\ref{th-simple}%
), are shown in (a), with $g=1$, by curves with circles (yellow), boxes
(red), and diamonds (green), for $\Delta =-1$, $-4$, and $-10$,
respectively. The self-focusing case ($\protect\sigma =-1$) is shown in (b),
with $g=-1$, by curves with hexagons (cyan), down triangles (magenta), and
up triangles (gray), for $\Delta =1$, $4$, and $10$, respectively. In (c) we
compare the analytical and numerical results for the defocusing case ($g=1$
and $\Delta =-10$), shown, respectively, by curves with diamonds (green) and
stars (orange), and analytical results for the self-focusing case [$g=-1$
and $\Delta =10$, shown by the curve with up triangles (gray)]. The
solutions numerically found in the case of the self-focusing are unstable.
In all the plots, $\Omega =10$ and $\protect\gamma =1$ are fixed.}
\label{F5}
\end{figure}
In addition to the consideration of the stationary solutions (fixed points),
the full dynamical version of the VA, based on Eqs. (\ref{f0}) and (\ref%
{theta0}), can be also used to analyze their stability, as well as evolution
of unstable solutions. However, in practical terms such a dynamical analysis
turns out to be quite cumbersome, direct numerical simulations being
actually more efficient, as shown below.
\subsection{The power-balance condition}
The substitution of ansatz (\ref{ansatz}) in the definition of power (\ref%
{Norm}) and power-balance equation (\ref{PB}) yields
\begin{eqnarray}
P &=&\frac{4\pi E^{2}f}{\Omega ^{4}}\frac{\chi }{\sin (\chi )}, \label{P_2}
\\
\frac{dP}{dt} &=&-\frac{8\pi \gamma E^{2}f}{\Omega ^{4}}\frac{\chi }{\sin
(\chi )}+\frac{4\pi E^{2}}{\Omega ^{2}}\chi , \label{dP_2}
\end{eqnarray}%
(in these expressions, $f>0$ is implied). Equation (\ref{dP_2}) predicts the
equilibrium condition, $dP/dt=0$, at
\begin{equation}
\sin (\chi )=\frac{2\gamma }{\Omega ^{2}}f. \label{equilibrium}
\end{equation}%
Note that $E$ drops out from Eq. (\ref{equilibrium}), and condition $\sin
(\chi )\leq 1$, following from Eq. (\ref{equilibrium}), imposes a
restriction on $f$,
\begin{equation}
f\leq \Omega ^{2}/(2\gamma ).
\end{equation}%
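The balance condition (\ref{equilibrium}), and the fact that $E$ drops out of it, are easy to verify numerically from Eq. (\ref{dP_2}); a minimal sketch:

```python
import math, random

def dPdt(chi, f, gamma, Omega, E):
    # right-hand side of Eq. (dP_2)
    return (-8.0 * math.pi * gamma * E**2 * f / Omega**4 * chi / math.sin(chi)
            + 4.0 * math.pi * E**2 / Omega**2 * chi)

# for any parameters with f <= Omega^2/(2 gamma), setting
# sin(chi) = 2 gamma f / Omega^2 should zero dP/dt, independently of E
random.seed(1)
for _ in range(5):
    gamma = random.uniform(0.5, 2.0)
    Omega = random.uniform(2.0, 10.0)
    E = random.uniform(1.0, 10.0)
    f = random.uniform(0.01, 0.99) * Omega**2 / (2.0 * gamma)
    chi = math.asin(2.0 * gamma * f / Omega**2)
    print(dPdt(chi, f, gamma, Omega, E))  # ~ 0 up to rounding
```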
Finally, for $|\chi |\ll \pi $ Eq. (\ref{equilibrium}) simplifies to $\chi
\approx \left( 2\gamma /\Omega ^{2}\right) f$. Using this to eliminate $f$
in favor of $\chi $, Eqs. (\ref{new_theta}) and (\ref{th}) give rise to the
following system of equations:
\begin{eqnarray}
\frac{g\Omega ^{4}\chi ^{3}}{80\gamma ^{3}}+\frac{\Omega ^{2}\chi ^{2}}{%
15\gamma ^{2}}+\frac{\Delta \chi }{6\gamma }-\frac{1}{6} &=&0, \label{n21}
\\
\frac{g\Omega ^{4}\chi ^{2}}{8\gamma ^{2}}+\frac{1}{3\gamma }\left( \Omega
^{2}+\gamma ^{2}\right) \chi +\Delta &=&0. \label{n22}
\end{eqnarray}%
Of course, the system of two equations (\ref{n21}) and (\ref{n22}) for the
single unknown $\chi $ is overdetermined, and a solution of this system may
exist only if a special restriction is imposed on parameters, as shown in
Fig. \ref{ER}, in the plane of ($\Delta $, $g$), for $\gamma =1$ and three
different fixed values of the confinement strength, $\Omega =2$, $\Omega =4$%
, and $\Omega =10$. Note that these curves do not depend on the pumping
strength, $E$. Indeed, this parameter is related only to the power of the
solution, see Eq. (\ref{P_2}) and Fig. \ref{F5}. The meaning of the
overdetermined system is that its solution, satisfying the VA and the
power-balance condition simultaneously, has a better chance of providing an
accurate approximation. This expectation is qualitatively corroborated
below, see Fig. \ref{NER} and the related text in the next section.
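The curves in Fig. \ref{ER} can be traced explicitly. Eliminating $g$ between Eqs. (\ref{n21}) and (\ref{n22}) (an elementary rearrangement, stated here without derivation) leaves the quadratic $(\Omega ^{2}-\gamma ^{2})\chi ^{2}+2\gamma \Delta \chi -5\gamma ^{2}=0$ for $\chi $, after which Eq. (\ref{n22}) fixes $g$. A minimal numerical sketch:

```python
import math

def existence_point(Delta, Omega, gamma=1.0):
    # Eliminating g between Eqs. (n21) and (n22) yields the quadratic
    # (Omega^2 - gamma^2) chi^2 + 2 gamma Delta chi - 5 gamma^2 = 0;
    # its positive root fixes chi, and Eq. (n22) then fixes g.
    a = Omega**2 - gamma**2
    chi = gamma * (-Delta + math.sqrt(Delta**2 + 5.0 * a)) / a
    g = (-8.0 * gamma**2 * (Delta + (Omega**2 + gamma**2) * chi / (3.0 * gamma))
         / (Omega**4 * chi**2))
    return chi, g

# a few points of the curve for Omega = 10, gamma = 1 (cf. the inset of Fig. ER)
Omega, gamma = 10.0, 1.0
for Delta in (-10.0, 0.0, 10.0):
    chi, g = existence_point(Delta, Omega, gamma)
    # residuals of Eqs. (n21) and (n22) with this (chi, g): both should vanish
    n21 = (g * Omega**4 * chi**3 / (80.0 * gamma**3)
           + Omega**2 * chi**2 / (15.0 * gamma**2)
           + Delta * chi / (6.0 * gamma) - 1.0 / 6.0)
    n22 = (g * Omega**4 * chi**2 / (8.0 * gamma**2)
           + (Omega**2 + gamma**2) * chi / (3.0 * gamma) + Delta)
    print(Delta, g, abs(n21), abs(n22))
```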
Generic properties of the modes predicted by ansatz (\ref{ansatz}) are
characterized by the corresponding dependence of power $P$ on the pumping
strength, $E$. Using, for this purpose, the simplified approximation given
by Eqs. (\ref{ff-simple}) and (\ref{th-simple}), we display the dependences
for the defocusing nonlinearity ($g=1$) in Fig. \ref{F5}(a), at fixed values
of the detuning, $\Delta =-1$, $-4$, and $-10$. Figure \ref{F5}(b) displays
the same dependences in the case of the self-focusing nonlinearity ($g=-1$),
for $\Delta =1$, $4$, and $10$. Note that power $P$ is not symmetric with
respect to the reversal of the signs of nonlinearity $g$ and detuning $%
\Delta $.
In Fig. \ref{F5}(c) we compare the VA results for the self-defocusing ($g=1$
and $\Delta =-10$) and focusing ($g=-1$ and $\Delta =10$) cases. In
addition, Fig. \ref{F5}(c) includes full numerical results (for details see
the next Section). It is seen that the simplified VA produces a qualitatively
correct prediction, which is not quite accurate quantitatively. Below, we
demonstrate that the VA is completely accurate only in small black regions
shown in Fig. \ref{NER}.
\subsection{The Thomas-Fermi approximation (TFA)}
In the case of the self-defocusing sign of the nonlinearity, and positive
mismatch, $\Delta >0$, the ground state, corresponding to a sufficiently
smooth stationary solution of Eq. (\ref{LuLe}), $\phi =\phi (r)$, may be
produced by the TFA, which neglects derivatives in the radial equation \cite%
{TFA}:%
\begin{equation}
\left( \Delta -i\gamma +\frac{\Omega ^{2}}{2}r^{2}+\sigma |\phi |^{2}\right)
\phi =-E\;. \label{TF}
\end{equation}%
In particular, the TFA is relevant if $\Delta $ is large enough.
The TFA-produced equation (\ref{TF}) for the ground-state's configuration
is not easy to solve analytically, as it is a cubic algebraic equation with
complex coefficients. The situation greatly simplifies in the limit case of
a very large mismatch, \textit{viz}., $\Delta \gg \gamma $ and $\Delta
^{3}\gg \sigma E^{2}$. Then, both the imaginary and nonlinear terms may be
neglected in Eq. (\ref{TF}), to yield%
\begin{equation}
\phi (r)\approx -E\left( \Delta +\frac{\Omega ^{2}}{2}r^{2}\right) ^{-1}.
\label{simple}
\end{equation}%
This simple approximation, which may be considered as a limit form of ansatz
(\ref{ansatz}), can be used to produce estimates for various characteristics of the
ground state (see, in particular, Fig. \ref{FN3} below). In fact, the TFA
will be the most relevant tool in Section V, as an analytical approximation
for trapped vortex modes, for which the use of the VA, even in its
stationary form, is too cumbersome.
The TFA cannot be applied to nonstationary solutions, hence it does not
provide direct predictions for stability of stationary modes. However, it
usually tends to produce ground states, thus predicting stable
solutions. This expectation is corroborated by results produced below.
\section{Numerical results for fundamental modes}
\begin{figure}[tb]
\includegraphics[width=0.48\columnwidth]{NF3a.eps} \hfil
\includegraphics[width=0.48\columnwidth]{NF3b.eps} \hfil
\includegraphics[width=0.48\columnwidth]{NF3c.eps}
\caption{(Color online) Profiles of fundamental trapped modes, $\protect%
\varrho (x)$, obtained via imaginary-time simulations of Eq. (\protect\ref%
{LuLe}), are shown by yellow circles. Black solid lines display counterparts
of the same profiles produced by the variational approximation based on
ansatz (\protect\ref{ansatz}). The parameters are (a) $\Delta =-1$, (b) $%
\Delta =-4$, (c) $\Delta =-10$, others fixed as $\Omega =10$, $\protect%
\gamma =1$, $E=10$, and $g=1$ (the self-defocusing nonlinearity). }
\label{FN1}
\end{figure}
\begin{figure}[tb]
\includegraphics[width=0.48\columnwidth]{NF4a.eps} \hfil
\includegraphics[width=0.48\columnwidth]{NF4b.eps} \hfil
\includegraphics[width=0.48\columnwidth]{NF4c.eps}
\caption{(Color online) The same as in Fig. \protect\ref{FN1}, for
parameters (a) $\Delta =1$, (b) $\Delta =4$, (c) $\Delta =10$, with $\Omega
=10$\text{, }$\protect\gamma =1$, $E=10$, and $g=-1$ (the self-focusing
nonlinearity).}
\label{FN2}
\end{figure}
\begin{figure}[tb]
\includegraphics[width=0.48\columnwidth]{N5a.eps} \hfil
\includegraphics[width=0.48\columnwidth]{N5b.eps}
\caption{(Color online) Following Figs. \protect\ref{FN1} and \protect\ref{FN2}, chains of
yellow circles depict profiles of fundamental trapped modes, $\protect%
\varrho (x)$, obtained via imaginary-time simulations of Eq. (\protect\ref%
{LuLe}), for the self-defocusing nonlinearity, $g=1$, and large positive
values of the mismatch: $\Delta =10$ in (a) and $\Delta =20$ in (b). Black
solid lines display the same profiles, as produced by the simplest version
of the Thomas-Fermi approximation, given by Eq. (\protect\ref{rhoTFA}).
Other parameters are $\Omega =10$, $\protect\gamma =1$, and $E=10$.}
\label{FN3}
\end{figure}
\subsection{Stationary trapped modes}
To obtain accurate results, and verify the validity of the VA predictions
which are presented in the previous section, we here report results obtained
as numerical solutions of Eq. (\ref{LuLe}). First, we aim to find
ground-state localized states by means of imaginary-time propagation. In the
framework of this method, one numerically integrates Eq. (\ref{LuLe}),
replacing $t$ by $-it$ and normalizing the solution at each step of the time
integration to maintain a fixed total power \cite{IT}. For testing stability
of stationary states, Eq. (\ref{LuLe}) was then simulated in real time, by
means of the fourth-order split-step method implemented in a \emph{GNU
Octave} program \cite{Eaton_Octave} (for details concerning the method
and its implementation in MATLAB, see Ref. \cite{Yang_10}).
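The imaginary-time procedure described above can be sketched compactly. The snippet below is a minimal illustration, not the production code: it assumes that Eq. (\ref{LuLe}) has the form $i\phi _{t}=-\frac{1}{2}\nabla ^{2}\phi +\left[ \Delta -i\gamma +\frac{\Omega ^{2}}{2}r^{2}+\sigma |\phi |^{2}\right] \phi +E$ [consistent with Eq. (\ref{TF})], drops the loss term in imaginary time, and uses arbitrary grid and step sizes:

```python
import numpy as np

# illustrative parameters (not tied to a specific figure)
N, L = 64, 4.0
Delta, gamma, Omega, sigma, E = -1.0, 1.0, 10.0, 1.0, 10.0

x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
K2 = np.add.outer(k**2, k**2)           # kx^2 + ky^2 on the grid

dtau, P0 = 1e-3, 1.0                    # imaginary-time step; fixed power
phi = np.ones((N, N), dtype=complex)    # arbitrary nonzero seed

for _ in range(500):
    # kinetic half-step, exp(-dtau K2 / 4), applied in Fourier space
    phi = np.fft.ifft2(np.exp(-0.25 * dtau * K2) * np.fft.fft2(phi))
    # local terms (detuning, trap, nonlinearity) plus the pump, stepped explicitly
    V = Delta + 0.5 * Omega**2 * (X**2 + Y**2) + sigma * np.abs(phi) ** 2
    phi = phi * np.exp(-dtau * V) - dtau * E
    phi = np.fft.ifft2(np.exp(-0.25 * dtau * K2) * np.fft.fft2(phi))
    # renormalize to the fixed total power, as described in the text
    P = np.sum(np.abs(phi) ** 2) * dx * dx
    phi *= np.sqrt(P0 / P)

print(np.sum(np.abs(phi) ** 2) * dx * dx)  # ~ P0 by construction
```

In the real-time (stability) runs, the same split-step factorization is applied with the real time variable.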
In Fig. \ref{FN1} we show 1D integrated intensity profiles $\varrho (x)$,
defined as
\begin{equation}
\varrho (x)\equiv \int_{-\infty }^{+\infty }|\phi (x,y)|^{2}dy, \label{rho}
\end{equation}%
and obtained from the imaginary-time solution of Eq. (\ref{LuLe}), along with their
analytical counterparts produced by the VA based on Eq. (\ref{ansatz}), for
three different values of detuning $\Delta $, \textit{viz}., (a) $\Delta =-1$%
, (b) $\Delta =-4$, and (c) $\Delta =-10$, for $g=1$ (the self-defocusing
nonlinearity) and $\Omega =10$ and $E=10$. We used Eqs. (\ref{ff-simple})
and (\ref{th-simple}) to produce values of the parameters $f$ and $\chi $ in
ansatz (\ref{ansatz}), which was then used as the initial guess in direct
numerical simulations.
In Fig. \ref{FN2} we display results similar to those shown in Fig. \ref{FN1}%
, but for the self-focusing nonlinearity ($g=-1$) and three different
(positive) values of $\Delta $, \textit{viz.}, (a) $\Delta =1$, (b) $\Delta
=4$, and (c) $\Delta =10$, with fixed $\Omega =10$ and $E=10$. In both cases
of $g=\pm 1$, the VA profiles show good match to the numerical ones,
although the accuracy slightly deteriorates with the increase of $|\Delta |$%
.
\begin{figure}[tb]
\centering
\includegraphics[width=0.48\columnwidth]{NF5a.eps} \hfil %
\includegraphics[width=0.48\columnwidth]{NF5b.eps} %
\includegraphics[width=0.48\columnwidth]{NF5c.eps} \hfil
\includegraphics[width=0.48\columnwidth]{NF5d.eps}
\caption{(Color online) The evolution of the norm [total power (\protect\ref%
{Norm})] of the solution starting from ansatz (\protect\ref{ansatz})
perturbed by $5\%$ random noise, as produced by real-time simulations of Eq.
(\protect\ref{LuLe}). Here the results are presented for the defocusing
nonlinearity, i.e., $g=1$, with (a) $\Delta =-1$ and (b) $\Delta =-10$, and
for the focusing nonlinearity, i.e., $g=-1$, with (c) $\Delta =1$ and (d) $%
\Delta =10$. Other parameters are $E=10$, $\Omega =10$, and $\protect\gamma %
=1$. The evolution of the norm at large times is shown in the insets.}
\label{fig_norm}
\end{figure}
\begin{figure}[tb]
\centering
\includegraphics[width=0.45\columnwidth]{NF6a.eps} \hfil
\includegraphics[width=0.45\columnwidth]{NF6b.eps} \hfil
\includegraphics[width=0.45\columnwidth]{NF6c.eps} \hfil
\includegraphics[width=0.45\columnwidth]{NF6d.eps}
\caption{(Color online) Density profile $|\protect\phi |^{2}$ obtained via
direct numerical simulations of Eq. (\protect\ref{LuLe}) in the case of the
self-defocusing nonlinearity ($g=1$). Inputs, represented by ansatz (\protect
\ref{ansatz}) with the addition of $5\%$ random noise, are displayed in
panels (a) for $\Delta =-1$ and (c) for $\Delta =-10$. The corresponding
profiles produced by the simulations at $t=1000$ are shown in (b) and (d),
respectively. Other parameters are the same as in Fig. \protect\ref{fig_norm}%
.}
\label{perf_def}
\end{figure}
Note that the results displayed in Fig. \ref{FN2}, for the situations to
which the TFA does not apply, as the nonlinearity is self-focusing in this
case, demonstrate the growth of the maximum value, $\varrho (x=0)$, with
the increase of mismatch $\Delta $. In the case of self-defocusing, it is
natural to expect decay of $\varrho _{\mathrm{TFA}}(x=0)$ with the
increase of $\Delta $. As shown in Fig. \ref{FN3}, this expectation is
confirmed by the numerical results and the TFA alike. In particular, for the
integrated intensity profile defined by Eq. (\ref{rho}), the simplest
version of the TFA, produced by Eq. (\ref{simple}), easily gives%
\begin{equation}
\varrho _{\mathrm{TFA}}(x)=\frac{2\pi E^{2}}{\Omega ^{4}}\left( \frac{%
2\Delta }{\Omega ^{2}}+x^{2}\right) ^{-3/2}. \label{rhoTFA}
\end{equation}%
Figure \ref{FN3} also corroborates that the TFA, even in its simplest form,
becomes quite accurate for sufficiently large values of $\Delta >0$.
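Equation (\ref{rhoTFA}) is easy to verify by direct quadrature of $|\phi |^{2}$ from Eq. (\ref{simple}); a minimal sketch with illustrative parameter values:

```python
import math

E, Omega, Delta, x = 10.0, 10.0, 10.0, 0.3   # illustrative values

def intensity(y):
    # |phi|^2 from Eq. (simple) at fixed x
    return (E / (Delta + 0.5 * Omega**2 * (x**2 + y**2))) ** 2

# midpoint rule over a wide y-interval (the integrand decays as y^-4)
n, ylim = 200_000, 50.0
h = 2.0 * ylim / n
rho_num = h * sum(intensity(-ylim + (i + 0.5) * h) for i in range(n))

rho_tfa = 2.0 * math.pi * E**2 / Omega**4 * (2.0 * Delta / Omega**2 + x**2) ** (-1.5)
print(rho_num, rho_tfa)  # the two values agree closely
```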
\subsection{Stability of the stationary modes}
The stability of the trapped configurations predicted by ansatz (\ref{ansatz}%
) was tested in real-time simulations of Eq. (\ref{LuLe}), adding $5\%$
random noise to the input. We display the results, showing the evolution of
the solution's norm (total power) in the case of the defocusing nonlinearity
($g=1$), for $\Delta =-1$ and $-10$, in Figs. \ref{fig_norm}(a) and (b),
respectively. The insets show the asymptotic behavior at large times. In the
case of the defocusing nonlinearity, the solution quickly relaxes to a
numerically exact stationary form, and remains \emph{completely stable} at $%
t>10$ (in fact, in this case real-time simulations quickly converge to
stable solutions at all values of the parameters). However, in the case of the
self-focusing with $\Omega =10$, the solutions are unstable, suffering rapid
fragmentation, as seen in Fig. \ref{perf_foc}. This behavior is also
exemplified in results shown in Figs. \ref{fig_norm}(c) and \ref{fig_norm}%
(d) for the temporal evolution of the solution's total power in the case of
the self-focusing nonlinearity ($g=-1$), for $\Delta =1$ and for $\Delta =10$%
, respectively. The instability of the fundamental modes in this case is a
natural manifestation of the modulational instability in the LL equation
\cite{MI}. Note that the large size of local amplitudes in small spots,
which is attained in the course of the development of the instability
observed in Fig. \ref{perf_foc}, implies the trend to the onset of the 2D
collapse driven by the self-focusing cubic nonlinearity \cite{collapse}.
\begin{figure}[tb]
\centering
\includegraphics[width=0.45\columnwidth]{NF7a.eps} \hfil
\includegraphics[width=0.45\columnwidth]{NF7b.eps} \hfil
\includegraphics[width=0.45\columnwidth]{NF7c.eps} \hfil
\includegraphics[width=0.45\columnwidth]{NF7d.eps}
\caption{(Color online) The same as in Fig. \protect\ref{perf_def}, but in
the case of the self-focusing nonlinearity ($g=-1$). Panels (a,b) and (c,d)
are drawn for $\Delta =1$ and $\Delta =10$, respectively. Here, the profiles
shown in (b) and (d) are outputs of the simulations obtained at $t=10$.}
\label{perf_foc}
\end{figure}
\begin{figure}[tb]
\centering
\includegraphics[width=0.6\columnwidth]{NF8a.eps} \\
\includegraphics[width=0.6\columnwidth]{NF8b.eps} \\
\includegraphics[width=0.6\columnwidth]{NF8c.eps}
\caption{The existence area of stable modes obtained by means of real-time
simulation of Eq. (\protect\ref{LuLe}). To generate the area, we used the
input in the form of ansatz (\protect\ref{ansatz}) with parameters predicted
by the VA, adding random noise at the $5\%$ amplitude level. Here we consider
(a) $\Omega =2$, (b) $\Omega =4$, and (c) $\Omega =10$. In all the cases,
the norm (total power) of the solution undergoes variations. For parameter
values corresponding to the gray boxes it stabilizes after a short
relaxation period. In the white boxes, the norm keeps oscillating, while the
solution maintains the localized profile, avoiding onset of instability. The
simulations in the region not covered by boxes feature instability
scenarios: in the case of $g\leq 0$ the solution suffers fragmentation, as
in Figs. \protect\ref{perf_foc}(b) and \protect\ref{perf_foc}(d), while in
the case of self-defocusing the solution is subject to fragmentation due to
the strong nonlinearity [e.g., at $g>5$ in (c)]. In the black region, the
output states are very close to the input. Other parameters are $E=10$, and $%
\protect\gamma =1$. The data for the linear system, corresponding to $g=0$,
are not included, as in the linear system all the stationary solutions are
obviously stable.}
\label{NER}
\end{figure}
\begin{figure}[tb]
\includegraphics[width=0.48\columnwidth]{NF9a.eps} \hfil
\includegraphics[width=0.48\columnwidth]{NF9b.eps} %
\includegraphics[width=0.48\columnwidth]{NF9c.eps} \hfil
\includegraphics[width=0.48\columnwidth]{NF9d.eps}
\caption{(Color online) The same as in Fig. \protect\ref{perf_foc}, but for $%
g=10$. Panels (a,b) and (c,d) pertain to $\Delta =-10$ and $\Delta =10$,
respectively. The unstable output profiles are displayed at $t=10$.}
\label{F9}
\end{figure}
In Figs. \ref{perf_def} and \ref{perf_foc} we display the time evolution of
density profiles $|\phi |^{2}$ produced by the simulations of Eq. (\ref{LuLe}%
) with the self-defocusing and focusing nonlinearity, respectively. The
input profiles are again taken as per the VA ansatz (\ref{ansatz}) with the
addition of $5\%$ random noise. In Figs. \ref{perf_def}(a) and \ref{perf_def}%
(c) we show the perturbed input profiles in the case of self-defocusing, for
$\Delta =-1$ and $\Delta =-10$, respectively, while the corresponding
profiles at $t=1000$ are displayed in Figs. \ref{perf_def}(b) and \ref%
{perf_def}(d). Note that the agreement between the variational and numerical
profiles tends to deteriorate with the increase of $|\Delta |$ [the same
trend as observed in Fig. \ref{F5}(c)].
Further, in Figs. \ref{perf_foc}(a) and \ref{perf_foc}(c) we display the
perturbed input profiles in the case of the self-focusing nonlinearity for $%
\Delta =1$ and $\Delta =10$, respectively, with the corresponding output
profiles at $t=10$ displayed in Figs. \ref{perf_foc}(b) and \ref{perf_foc}(d).
These results clearly confirm the instability of the perturbed solutions, as
suggested by the evolution of the total power depicted in Figs. \ref%
{fig_norm}(c) and \ref{fig_norm}(d). Strong instability is observed for all
values of $g<0$, which corresponds to the self-focusing.
The findings for the existence and stability of the localized pixels are
summarized by diagrams displayed in Fig. \ref{NER}. To produce them, we
analyzed the temporal evolution of total power (\ref{Norm}), parallel to
monitoring the spatial profile of each solution at large times ($t=100$ and $%
t=1000$). In Fig. \ref{NER}, we address three different values of strength $%
\Omega $ of the trapping potential: (a) $\Omega =2$, (b) $\Omega =4$, and
(c) $\Omega =10$. The stability area is represented by gray and white boxes,
which correspond, respectively, to robust static outputs and those which
feature small residual oscillations, while the parameter area not covered by
boxes corresponds to unstable solutions. This includes the area of $g\leq
0$ (self-focusing), where the modes suffer strong instability observed in
Figs. \ref{perf_foc}(b) and \ref{perf_foc}(d) at $\Omega =10$, but may be
stable at $\Omega =2$ and $4$ [in the latter case, the stability domain for $%
g<0$ is very small, as seen in Fig. \ref{NER}(b)]. On the other hand, at $%
g>5$ and $\Omega =10$, the solution undergoes fragmentation under the action
of the strong self-defocusing nonlinearity, for all values of $-10\leq
\Delta \leq +10$. An example of that is displayed in Fig. \ref{F9} for $g=10$
and two extreme values of the mismatch, $\Delta =-10$ and $+10$.
In the stability area, black spots highlight values of the parameters at
which the output profiles of the static solutions, observed at $t=1000$, are
very close to the respective input profiles, i.e., the VA provides very
accurate predictions. Generally, the shape of the stability area in the form
of the vertical stripe, observed in Fig. \ref{NER}(c), roughly follows the
vertical direction of the dotted black line in Fig. \ref{ER}, which pertains
to the same value of $\Omega =10$. On the other hand, the expansion of the
stability area in the horizontal direction for $\Omega =2$ and $\Omega =4$,
which is observed in Figs. \ref{NER}(a,b), qualitatively complies with the
strong change of the curves in Fig. \ref{ER} for the same values of $\Omega $%
. Looking at Fig. \ref{NER}, one can also conclude that large positive
values of $\Delta $ help to additionally expand the stability region.
We stress that the results shown in Fig. \ref{NER} are extremely robust:
real-time simulations lead to them, even starting with zero input. The input
provided by the VA ansatz (\ref{ansatz}) is used above to explore the
accuracy of the VA, which is relevant, as similar approximations can be
applied to related models incorporating the pump, linear loss, and Kerr
nonlinearity (self-defocusing or focusing).
\section{Vortex solitons}
\subsection{Analytical considerations: the Thomas-Fermi approximation}
\begin{figure}[tb]
\includegraphics[width=0.48\columnwidth]{NF10a.eps} %
\includegraphics[width=0.48\columnwidth]{NF10b.eps}
\caption{(Color online) The stability area in the plane of ($\Delta $, $%
\protect\sigma $) for vortex solutions numerically generated by real-time
simulations of Eq. (\protect\ref{LuLe}) with vortex pump (\protect\ref{lin}%
). Other parameters are $\Omega =2$, $\protect\gamma =1$, and (a) $E_{0}=1$
or (b) $E_{0}=2$. Simple stable vortices are found in the gray area, while
the yellow one represents stable modes with the spiral phase structure which
features a full turn, and a multi-ring radial structure, see a typical
example in Fig. \protect\ref{strong spiral}. No stable vortices were found
in the white area.}
\label{VF1}
\end{figure}
\begin{figure}[tb]
\centering \includegraphics[width=0.48\columnwidth]{NF11a.eps} %
\includegraphics[width=0.48\columnwidth]{NF11b.eps}
\caption{(Color online) Output profiles $|\protect\phi _{\mathrm{out}%
}(x,0)|^{2}$ of stable ring-shaped vortices, produced by real-time
integration of Eq. (\protect\ref{LuLe}) with pump profile (\protect\ref{lin}%
), for two different values of $|\protect\sigma |$ (the absolute value of
the nonlinearity coefficient). The radial shapes obtained with the
self-defocusing ($\protect\sigma =5$) and focusing ($\protect\sigma =-5$)
nonlinearities are displayed by solid black and dashed red lines,
respectively, for $\Delta =0$ (a) and $\Delta =10$ (b). Other parameters are
$\protect\gamma =1$, $\Omega =2$ and $E_{0}=1$.}
\label{VF2}
\end{figure}
In the previous sections, we considered uniform pump field $E$, which
generates fundamental modes without vorticity. Here we explore the confined
LL model with space-dependent pump carrying the vorticity. It is represented
by the driving term
\begin{equation}
E=E_{0}re^{i\theta } \label{lin}
\end{equation}%
in Eq. (\ref{LuLe}), where $\theta $ is the angular coordinate and $E_{0}=%
\mathrm{const}$. This term naturally corresponds to the pump supplied by a
vortex laser beam (with vorticity $1$) \cite{vortex beam}. In the case of
multiple vorticity $m>1$ (which will be considered elsewhere), Eq. (\ref{lin}%
) is replaced by $E=E_{0}r^{m}e^{im\theta }$.
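The vorticity carried by pump (\ref{lin}) and its generalization can be illustrated numerically; the `pump` helper below is a hypothetical sketch (not code from the paper) whose phase winding along a closed loop equals $m$:

```python
import cmath, math

def pump(x, y, E0=1.0, m=1):
    # hypothetical helper: E = E0 r^m exp(i m theta); Eq. (lin) for m = 1
    r, theta = math.hypot(x, y), math.atan2(y, x)
    return E0 * r**m * cmath.exp(1j * m * theta)

# phase winding of the pump along the unit circle equals the vorticity m
n = 400
pts = [pump(math.cos(2.0 * math.pi * k / n), math.sin(2.0 * math.pi * k / n))
       for k in range(n)]
winding = sum(cmath.phase(pts[(k + 1) % n] / pts[k]) for k in range(n)) / (2.0 * math.pi)
print(round(winding))  # -> 1
```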
Patterns supported by the vortex pump correspond to factorized solutions of
the stationary version of Eq. (\ref{LuLe}), taken as%
\begin{equation}
\phi \left( r,\theta \right) =e^{i\theta }A(r), \label{A}
\end{equation}%
with complex amplitude $A$ satisfying the following radial equation:
\begin{gather}
\left[ \frac{1}{2}\left( \frac{d^{2}}{dr^{2}}+\frac{1}{r}\frac{d}{dr}-\frac{1%
}{r^{2}}\right) -\Delta +i\gamma -\frac{\Omega ^{2}}{2}r^{2}-\sigma |A|^{2}%
\right] A= \notag \\
E_{0}r\;. \label{ODE}
\end{gather}%
As an analytical approximation, the TFA for vortex solitons may be applied
here, cf. Ref. \cite{TFA-vortex}. In the general case, the TFA implies
dropping the derivatives in the radial equation, which leads to a complex
cubic equation for $A$, cf. Eq. (\ref{TF}), under the conditions $\sigma >0$
(self-defocusing) and $\Delta >0$ (positive mismatch):%
\begin{equation}
\left[ \Delta -i\gamma +\frac{1}{2}\left( \frac{1}{r^{2}}+\Omega
^{2}r^{2}\right) +\sigma |A|^{2}\right] A=-E_{0}r\;. \label{TF-vortex}
\end{equation}
Equation (\ref{TF-vortex}), as well as its counterpart (\ref{TF}) for the
zero-vorticity states, strongly simplifies in the limit of large $\Delta >0$%
, when both the imaginary and nonlinear terms may be neglected:%
\begin{equation}
A(r)=-E_{0}r\left[ \Delta +\frac{1}{2}\left( \frac{1}{r^{2}}+\Omega
^{2}r^{2}\right) \right] ^{-1}. \label{A(r)}
\end{equation}%
In particular, the simplest approximation provided by Eq. (\ref{A(r)}) makes
it possible to easily predict the radial location of maximal intensity in
the ring-shaped vortex mode:%
\begin{equation}
r_{\max }^{2}=\left( \sqrt{\Delta ^{2}+3\Omega ^{2}}+\Delta \right) /\Omega
^{2}. \label{max}
\end{equation}%
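Prediction (\ref{max}) can be checked against a brute-force maximization of $|A(r)|^{2}$ from Eq. (\ref{A(r)}); a minimal sketch with the parameters of Fig. \ref{VF2}(b):

```python
import math

Delta, Omega, E0 = 10.0, 2.0, 1.0   # cf. Fig. VF2(b)

def A(r):
    # Eq. (A(r)), the large-Delta TFA profile
    return -E0 * r / (Delta + 0.5 * (1.0 / r**2 + Omega**2 * r**2))

# brute-force maximization of |A(r)|^2 on a fine radial grid
rs = [1e-3 + 1e-4 * i for i in range(100_000)]
r_num = max(rs, key=lambda r: abs(A(r)) ** 2)

r_an = math.sqrt((math.sqrt(Delta**2 + 3.0 * Omega**2) + Delta) / Omega**2)  # Eq. (max)
print(r_num, r_an)  # the grid maximum matches the analytical prediction
```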
Comparison of values given by Eq. (\ref{max}) with their counterparts
extracted from numerically found vortex-ring shapes, which are displayed
below in Figs. \ref{VF2}(a) and (b) for $\Delta \geq 0$, demonstrates that
the analytically predicted values are smaller than the numerical
counterparts by $11\%$ for $\Delta =0$, and by $6\%$ for $\Delta =10$.
Naturally, the TFA provides better accuracy for large $\Delta $, but even
for $\Delta =0$ the prediction is reasonable. Furthermore, Eq. (\ref{A(r)})
predicts a virtually exact largest intensity, $\left\vert A(r=r_{\max
})\right\vert ^{2}$, for the small-amplitude mode displayed in Fig. \ref{VF2}%
(b).
\subsection{Numerical results}
Equation (\ref{LuLe}) with vortex pump profile (\ref{lin}) was numerically
solved with zero input. This simulation scenario is appropriate, as vortex
states, when they are stable, are sufficiently strong attractors to draw
solutions developing from the zero input.
The results, produced by systematic real-time simulations, are summarized in
Fig. \ref{VF1} for $\Omega =2$ in Eq. (\ref{LuLe}) and $E_{0}=1$ or $2$ in
Eq. (\ref{lin}). The figure displays stability areas for the vortex modes in
the plane of free control parameters ($\Delta $, $\sigma $) (the mismatch
and nonlinearity strength). It is worth noting that the stability domain
for the self-focusing nonlinearity ($\sigma <0$) is essentially larger than
in the diagram for the fundamental (zero-vorticity) modes, which is
displayed, also for $\Omega =2$, in Fig. \ref{NER}. This finding may be
naturally explained by the fact that the vanishing of the vortex drive (\ref%
{lin}) at $r\rightarrow 0$, in combination with the intrinsic structure
of the vortex states, makes the central area of the pattern nearly
``empty", thus preventing the onset of the modulational
instability in it.
In the gray areas in Fig. \ref{VF1}, the stable vortex modes have a simple
ring-shaped structure, with typical radial profiles shown in Fig. \ref{VF2}.
In the case of zero mismatch, $\Delta =0$ [Fig. \ref{VF2}(a)], the vortex
state naturally acquires a higher amplitude under the action of the
self-focusing nonlinearity. On the other hand, in the case of large positive
mismatch [Fig. \ref{VF2}(b)], the small amplitude is virtually the same under
the action of the focusing and defocusing nonlinearities, which is explained,
as mentioned above, by the TFA that reduces to Eq. (\ref{A(r)}).
In the unstable (white) areas in Fig. \ref{VF1}, direct simulations lead to
quick fragmentation of the vortically driven patterns into small spots, which
tend to develop the above-mentioned critical collapse \cite{collapse}. A
typical example of the unstable evolution is displayed in Fig. \ref{VF3}.
\begin{figure}[tb]
\includegraphics[width=0.48\columnwidth]{NF12a.eps} %
\includegraphics[width=0.48\columnwidth]{NF12b.eps}
\caption{(Color online) (a) Local-intensity $|\protect\phi \left( x,y\right)
|^{2}$ and (b) phase profiles of an unstable pattern, produced by the
simulations of Eq. (\protect\ref{LuLe}) with vortex pump (\protect\ref{lin})
and the strong self-focusing ($\protect\sigma =-5$) at $t=20$. Other
parameters are $\Delta =-8$, $\protect\gamma =1$, $\Omega =2$, and $E_{0}=1$%
. }
\label{VF3}
\end{figure}
More sophisticated stable vortex profiles are observed in the yellow areas in
Fig. \ref{VF1}. They are characterized by a multi-ring radial structure and a
spiral shape of the vorticity-carrying phase distribution, as shown in
Fig. \ref{strong spiral}. The yellow areas are defined as those in which the
spiral phase field performs a full turn of $360$ degrees, as can be seen in
Fig. \ref{strong spiral}(b). Note that this area exists for both the focusing
and defocusing signs of the nonlinearity in Fig. \ref{VF1}(a), and solely for
zero nonlinearity in Fig. \ref{VF1}(b), which corresponds to the stronger
pump.
\begin{figure}[tb]
\includegraphics[width=0.48\columnwidth]{NF13a.eps}%
\includegraphics[width=0.48\columnwidth]{NF13b.eps} %
\includegraphics[width=0.48\columnwidth]{NF13c.eps}
\caption{(Color online) A stable multi-ring vortex with the spiral phase
field. Panels (a), (b), and (c) display, respectively, the 2D
local-intensity pattern, phase field, and the radial structure. Parameters
are the same as in Fig. \protect\ref{VF3}, except for a weaker self-focusing
strength, $\protect\sigma =-1$.}
\label{strong spiral}
\end{figure}
The spiral shape of the phase pattern is explained by the fact that radial
amplitude $A(r)$ in solution (\ref{A}) is a complex function, as is
explicitly shown, in particular, by Eqs. (\ref{infi}) and (\ref{TF-vortex}).
The spirality of vortices is a well-known feature of 2D complex
Ginzburg-Landau equations \cite{GL}. However, unlike the present situation,
the spirality is not usually related to a multi-ring radial structure.
Patterns with multi-ring shapes usually exist as excited states, on top of
stable ground states in the same models, being unstable to azimuthal
perturbations \cite{Javid}. For this reason, the stability of complex modes,
like the one displayed in Fig. \ref{strong spiral}, is a noteworthy finding.
Lastly, a typical example of a stable vortex at the boundary between the
simple (non-spiral) and complex (spiral-shaped) ones is presented in Fig. %
\ref{weak spiral}. It features emerging spirality in the phase field, but
the radial structure keeps the single-ring shape.
\begin{figure}[tb]
\includegraphics[width=0.48\columnwidth]{NF14a.eps} %
\includegraphics[width=0.48\columnwidth]{NF14b.eps} %
\includegraphics[width=0.48\columnwidth]{NF14c.eps}
\caption{(Color online) The same as in Fig. \protect\ref{strong spiral}, but with the
self-defocusing sign of the nonlinearity, $\protect\sigma =2$.}
\label{weak spiral}
\end{figure}
\section{Conclusion}
We have introduced the 2D model based on the LL\ (Lugiato-Lefever) equation
with confinement imposed by the harmonic-oscillator trap. In spite of the
action of the uniform pump, the confinement creates well-localized patterns,
which may be used for the creation of robust small-area pixels in
applications. The VA (variational approximation), based on a novel
fractional ansatz, as well as a simple TFA (Thomas-Fermi approximation),
were elaborated to describe the fundamental (zero-vorticity) confined modes.
The VA effectively reduces the 2D LL equation to the zero-dimensional
version. The VA is additionally enhanced by taking into account the balance
condition for the integral power. The comparison with the full numerical
analysis has demonstrated that the VA provides qualitatively accurate
predictions, which are also quantitatively accurate, in some areas of the
parameter space. The systematic numerical analysis has produced overall
stability areas for the confined pattern in the underlying parameter space,
which demonstrate that the patterns tend to be less stable under the
self-focusing nonlinearity and more stable under the defocusing one
(although very strong self-defocusing causes fragmentation of
the patterns). The increase of the confinement strength leads to shrinkage
of the stability area, although it does not make all the states unstable. On
the other hand, large positive values of the cavity's detuning tend to
expand the stability region in the parameter space.
We have also explored vortex solitons (which may be used to realize vortical
pixels in microcavities) supported by the pump with embedded vorticity. In
this case, the simple TFA provides a qualitatively correct description, and
systematically collected numerical results reveal a remarkably large
stability area in the parameter space, for both the self-defocusing and
focusing signs of the nonlinearity. In addition to simple vortices, stable
complex ones, featuring the multi-ring radial structure and the spiral phase
field, have been found too. As an extension of the present work, a
challenging issue is to look for confined states with multiple embedded
vorticity.
A summary of the authors' contributions to the work: the numerical part has been
carried out by W.B.C. Analytical considerations were chiefly performed by
B.A.M. and L.S. All the authors have contributed to drafting the text of the
paper.
\begin{acknowledgments}
WBC acknowledges the financial support from the Brazilian agencies CNPq
(grant \#458889/2014-8) and the National Institute of Science and Technology
(INCT) for Quantum Information. LS acknowledges partial support from the 2016 BIRD project
``Superfluid properties of Fermi gases in optical potentials''
of the University of Padova. The work of B.A.M. is supported, in part, by grant No. 2015616 from
the joint program in physics between NSF (US) and
Binational (US-Israel) Science Foundation.
\end{acknowledgments}
\section{Introduction}\label{sec:introduction}
\IEEEPARstart{D}{ynamic} mechanisms that drive nonlinear systems are universally complex and obscure. One can investigate the behavior of a system by discovering patterns in its observations. In practice, when we take observations from a real-world complex system, spatiotemporal data are among the most widely encountered forms: they are indexed by space and time and show the characteristics of time series. With the remarkable development of sensing technologies, tremendous amounts of spatiotemporal data are accessible for analysis, such as satellite data \cite{reynolds2002improved}, station-level weather observations \cite{Daymet}, and human mobility trajectories \cite{zheng2015trajectory}, to name but a few. In the literature, various kinds of spatiotemporal data have been characterized by time series models. Leveraging time series models not only allows one to analyze spatiotemporal data but also makes it possible to discover inherent spatial and temporal patterns from the data over space and time.
As a simple yet efficient and classical statistical model, vector autoregression (VAR) explicitly finds the linear relationship among a sequence of time series changing over time, which can also successfully describe the dynamic behavior of time series \cite{hamilton1994time, lutkepohl2005new}. Formally, if the given time series $\boldsymbol{s}_{1},\ldots,\boldsymbol{s}_{T}\in\mathbb{R}^{N}$ (i.e., a collection of observations with $N$ variables at consecutive times) are stationary, then the $d$th-order VAR takes a linear formulation as $\boldsymbol{s}_{t}=\sum_{k=1}^{d}\boldsymbol{A}_{k}\boldsymbol{s}_{t-k}+\boldsymbol{\epsilon}_{t},\forall t\in\{d+1,\ldots,T\}$, in which the coefficient matrices $\boldsymbol{A}_{1},\ldots,\boldsymbol{A}_{d}\in\mathbb{R}^{N\times N}$ are expected to capture the temporal correlations of the multivariate time series, and $\boldsymbol{\epsilon}_{t}$ is the residual at time $t$. A large body of VAR and its variants such as reduced-rank VAR \cite{izenman1975reduced,ahn1988nested,velu1998multivariate}, dynamic mode decomposition (DMD) \cite{schmid2010dynamic, tu2013dynamic, kutz2016dynamic}, high-dimensional VAR \cite{wang2021high}, and time-varying VAR with tensor factorization \cite{harris2021time} have been developed for analyzing real-world time series data. Essentially, the modeling intuition of these data-driven approaches is that the coefficient matrix in VAR is rank-deficient, or at least the dominant patterns underlying the coefficient matrix can be revealed by a low-rank structure.
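For concreteness, the least-squares fit of the $d$th-order VAR described above can be sketched as follows; the synthetic VAR(1) data and tolerance are illustrative assumptions, not part of the cited works.

```python
import numpy as np

def fit_var(S, d):
    """Least-squares estimate of a d-th order VAR from data S (N x T).

    Stacks z_t = (s_{t-1}^T, ..., s_{t-d}^T)^T for t = d+1, ..., T and
    returns the N x (d*N) matrix [A_1 | ... | A_d].
    """
    N, T = S.shape
    Z = np.vstack([S[:, d - k:T - k] for k in range(1, d + 1)])  # (dN, T-d)
    Y = S[:, d:]                                                 # (N, T-d)
    return Y @ np.linalg.pinv(Z)

# Recover the coefficient matrix of a synthetic stable VAR(1)
rng = np.random.default_rng(0)
A_true = 0.5 * np.eye(3) + 0.1 * rng.standard_normal((3, 3))
S = np.zeros((3, 5000))
S[:, 0] = rng.standard_normal(3)
for t in range(1, 5000):
    S[:, t] = A_true @ S[:, t - 1] + 0.1 * rng.standard_normal(3)
assert np.allclose(fit_var(S, d=1), A_true, atol=0.1)
```

The pseudo-inverse solves all $T-d$ regression equations jointly, which is exactly the stationary (non-time-varying) case discussed here.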
DMD models---classical variants of VAR---were initially developed for discovering spatiotemporal coherent patterns in fluid flows, and they are well suited to interpreting system behaviors and underlying data patterns, owing to their low-rank structure (i.e., a reduced-rank linear system of VAR) for dimensionality reduction \cite{schmid2010dynamic, tu2013dynamic,kutz2016dynamic}. However, VAR and its variants usually require stationary data when analyzing multivariate time series, whereas real-world data are usually nonstationary. Therefore, one great challenge of modeling time series with VAR is identifying time-varying system behaviors, which is often associated with the nonstationarity issue.
Although nonstationarity and time-varying system behaviors are straightforward to verify, discovering the underlying data patterns of time-varying systems remains challenging and demands further exploration. To characterize the underlying time-varying system behaviors hidden behind time series, a classical framework was proposed by utilizing dynamic linear models (e.g., time-varying vector autoregression) whose latent variables vary over time \cite{west1997bayesian, primiceri2005time, del2015time}.
Typically, a time-varying VAR model takes a sequence of VAR processes at different times and is thus capable of handling time-varying system behaviors. Nevertheless, the time-varying coefficients give rise to the over-parameterization issue: the number of parameters (e.g., coefficient matrices in time-varying VAR) inevitably exceeds the number of observations. The over-parameterized coefficients can be regularized by a certain smoothing term from a machine learning perspective, but this does not work in high-dimensional settings. Another efficient approach to the over-parameterization issue is to use low-rank models such as low-rank matrix/tensor factorization for model compression. Low-rank models have shown remarkable compression power on both matrix and tensor data and parameters in several contemporary studies \cite{donoho2006compressed, eldar2012compressed, kolda2009tensor, anandkumar2014tensor, wright2022high}.
Complementary to the above methods, recent work provides an online DMD \cite{zhang2019online} and a time-varying autoregression \cite{harris2021time} for modeling time-varying systems. The online DMD model allows one to compute DMD in real time and approximate a system's dynamics from time-varying streaming data. However, DMD models are sensitive to noise in the time series data, posing both methodological and practical challenges \cite{wu2021challenges}. Therefore, it is hard to use DMD to discover data patterns and system behaviors from time-varying and noisy time series. In \cite{harris2021time}, the time-varying autoregression model consists of a sequence of windowed linear VAR systems, motivated by the need for time-varying system identification. It captures pattern changes of a dynamical system through tensor factorization, which compresses the over-parameterized coefficients and identifies the time-varying system behaviors.
In this work, we revisit the classical spatiotemporal data analysis problem and introduce a novel method to discover spatial and temporal patterns from time-varying systems in a data-driven fashion. The cornerstone of this work is the time-varying VAR on nonstationary time series. Formally, we can formulate the first-order VAR with time-varying coefficients as $\boldsymbol{s}_{t}=\boldsymbol{A}_{t}\boldsymbol{s}_{t-1}+\boldsymbol{\epsilon}_{t},\forall t$, in which the system matrices $\{\boldsymbol{A}_{t}\}$ change over time. In this situation, the system matrices involve $\mathcal{O}(N^2(T-1))$ parameters, which far exceed the $\mathcal{O}(NT)$ observations. To address the over-parameterization issue, we utilize a low-rank tensor factorization structure---Tucker decomposition \cite{kolda2009tensor, golub2013matrix}---to achieve model compression. Meanwhile, the tensor factorization allows one to discover spatiotemporal patterns from the data.
The main contributions of this work are twofold:
\begin{itemize}
\item We propose a fully time-varying reduced-rank VAR model which allows one to discover interpretable dynamic modes from spatiotemporal data. In our model, we draw strong connections between time-varying VAR and tensor factorization (i.e., Tucker decomposition) that address the over-parameterization issue and reveal spatial and temporal patterns in the latent spaces.
\item We demonstrate the capability of the time-varying reduced-rank VAR for discovering interpretable patterns from a range of real-world spatiotemporal datasets, including fluid dynamics, sea surface temperature, USA surface temperature, and NYC taxi trips. The evaluation results show that the latent variables in the tensor factorization of our model can reveal both spatial and temporal patterns. Time-varying system behaviors underlying these spatiotemporal data can be clearly identified by our model.
\end{itemize}
Focusing on time-varying autoregression with low-rank structures, our model differs from the existing model proposed in \cite{harris2021time} on the technical side: i) the existing model takes a sequence of windowed (first-order) VAR processes, while our model is parameterized with fully time-varying coefficients in a higher-order VAR (see Definition~\ref{time_varying_var_definition} and Remark~\ref{windowed_model} for more details); ii) the existing model utilizes the CANDECOMP/PARAFAC (CP) decomposition (i.e., a special case of Tucker decomposition \cite{kolda2009tensor}) to circumvent the over-parameterized coefficients, but CP decomposition does not capture interactions between spatial and temporal modes; in contrast, our model applies the Tucker decomposition, which allows us to better characterize spatiotemporal patterns. Without loss of generality, the proposed model lays a foundation for modeling real-world spatiotemporal data with time-varying system behaviors and provides insights into dynamical system modeling.
The remainder of this work is organized as follows. Section~\ref{related_work} reviews the related literature. Section~\ref{methodology} introduces the time-varying reduced-rank VAR model and presents an alternating minimization algorithm for solving the involved optimization problem. Section~\ref{experiment} reports extensive experiments on real-world spatiotemporal datasets. We discuss this research and give a summary in Section~\ref{conclusion}. As supplementary material, Appendix~\ref{appendix_dmd} presents the spatial/temporal modes achieved by DMD on the fluid dynamics data.
\section{Related Work}\label{related_work}
\subsection{Low-Rank Autoregression}
In the past decades, substantial studies on time series modeling have sought solutions to the over-parameterized autoregression, mainly aiming at model compression on the coefficients. Essentially, the idea of low-rank regression is taking advantage of dimensionality reduction techniques. To solve the over-parameterization issue in the high-dimensional VAR, there are some low-rank matrix and tensor tools ranging from the linear structure to multilinear structure, e.g., matrix factorization \cite{koop2019bayesian}, Tucker decomposition \cite{wang2021high}, and tensor train decomposition \cite{oseledets2011tensor}, to compress the coefficient matrices in VAR. At an early stage, a general reduced-rank setup for VAR models was proposed by Ahn and Reinsel \cite{ahn1988nested, reinsel1992vector}. Very recently, some representative models such as the multivariate autoregressive index model \cite{carriero2016structural} and Bayesian compressed VAR model \cite{koop2019bayesian} integrated matrix factorization into the framework. Another idea is to use a low-rank tensor structure to characterize the higher-order VAR (i.e., $d>1$), e.g., Tucker decomposition formatted VAR model in \cite{wang2021high}. Bahadori \emph{et al.} proposed a unified low-rank tensor learning framework for multivariate spatiotemporal analysis \cite{bahadori2014fast}. In these high-dimensional VAR models with the higher order, tensor factorization can perform better than matrix factorization in terms of model compression on the coefficients. However, tensor factorization would inevitably involve high complexity and nonconvex optimization.
In the field of dynamical systems, DMD and higher-order DMD models can also be categorized as reduced-rank VAR models. They take the form of VAR and can handle high-dimensional time series because truncated singular value decomposition (SVD) and eigenvalue decomposition are adopted for preserving the most important dynamic modes of time series data \cite{schmid2010dynamic, tu2013dynamic, kutz2016dynamic, brunton2019data}. Therefore, matrix/tensor factorization in the low-rank autoregression not only addresses the over-parameterization issue, but also provides insights into pattern discovery.
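As an illustration (not the authors' implementation), a minimal exact-DMD sketch in the spirit of \cite{schmid2010dynamic, tu2013dynamic} can be written with a truncated SVD; the snapshot data and rank below are synthetic assumptions.

```python
import numpy as np

def dmd(S, r):
    """Exact DMD of a snapshot matrix S (N x T) with truncation rank r.

    Fits the linear map in S[:, 1:] ~= A @ S[:, :-1], projected onto the
    leading r POD modes of S[:, :-1]; returns eigenvalues and DMD modes.
    """
    S1, S2 = S[:, :-1], S[:, 1:]
    U, s, Vh = np.linalg.svd(S1, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    A_tilde = U.conj().T @ S2 @ Vh.conj().T / s   # r x r reduced operator
    evals, evecs = np.linalg.eig(A_tilde)
    modes = (S2 @ Vh.conj().T / s) @ evecs        # exact DMD modes (N x r)
    return evals, modes

# Synthetic check: 2-D linear dynamics with eigenvalues 0.9 and 0.5,
# lifted into a 6-D observation space
rng = np.random.default_rng(1)
C = rng.standard_normal((6, 2))
lam = np.array([0.9, 0.5])
X = np.array([lam**t for t in range(20)]).T   # x_t = diag(lam)^t @ [1, 1]
evals, modes = dmd(C @ X, r=2)
assert np.allclose(sorted(evals.real), [0.5, 0.9], atol=1e-8)
```

Because the snapshots here lie exactly on a rank-2 subspace, the reduced operator recovers the true eigenvalues; on noisy data this sensitivity is precisely the weakness noted in \cite{wu2021challenges}.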
\subsection{Time-Varying Autoregression}
In real-world applications, time series data are usually nonstationary. Recall that stationary time series possess a mean, variance, and autocorrelation that are independent of time \cite{hamilton1994time}, whereas nonstationary time series violate that principle. Therefore, nonstationarity poses great methodological and practical challenges for developing well-suited frameworks for such data. Instead of stationarizing the time series, existing studies presented autoregression models that characterize time-varying system behaviors. In the literature, a classical dynamic linear framework (e.g., time-varying VAR) utilizes time-varying latent variables to characterize the time-varying system behaviors of time series \cite{west1997bayesian,primiceri2005time,del2015time}. To reinforce such a framework, many counterparts of time-varying autoregression such as time-varying structural VAR \cite{primiceri2005time,del2015time}, online DMD \cite{zhang2019online}, and time-varying autoregression with low-rank tensors \cite{harris2021time} have been introduced to time series analysis. Among these works, Harris \emph{et al.} \cite{harris2021time} introduced a time-varying VAR and addressed the over-parameterization issue through low-rank tensor factorization, which achieves both model compression and pattern discovery. However, to the best of our knowledge, existing studies have not provided a systematic analysis showing how to discover dynamic patterns from spatiotemporal data via time-varying autoregression.
\section{Methodology}\label{methodology}
This section introduces a fully time-varying reduced-rank VAR model for the multivariate time series. In particular, the model utilizes low-rank tensor factorization to compress the over-parameterized coefficients. Meanwhile, tensor factorization in the model allows us to automatically discover interpretable dynamic modes (corresponding to spatiotemporal patterns) from spatiotemporal data.
\subsection{Model Description}
In this work, we propose a time-varying VAR model with interpretability regarding the underlying system's dynamic behaviors in a data-driven fashion. This model provides a flexible framework built on the past efforts such as DMD \cite{schmid2010dynamic, kutz2016dynamic} and time-varying autoregression with low-rank tensors \cite{harris2021time}. For any observed spatiotemporal data in the form of multivariate time series, our model considers a time-varying linear system as follows.
\begin{definition}[Fully time-varying VAR]\label{time_varying_var_definition}
Given any multivariate time series data $\boldsymbol{S}\in\mathbb{R}^{N\times T}$ with columns $\boldsymbol{s}_{1},\ldots,\boldsymbol{s}_{T}\in\mathbb{R}^{N}$, if a $d$th-order VAR takes time-varying coefficients at time $t\in\{d+1,\ldots,T\}$, then the optimization problem of the fully time-varying VAR can be defined as follows,
\begin{equation}\label{opt_tv_var}
\min_{\{\boldsymbol{A}_{t}\}}~\frac{1}{2}\sum_{t=d+1}^{T}\|\boldsymbol{y}_{t}-\boldsymbol{A}_{t}\boldsymbol{z}_{t}\|_{2}^{2},
\end{equation}
where $\boldsymbol{z}_{t}\triangleq(\boldsymbol{s}_{t-1}^\top,\cdots,\boldsymbol{s}_{t-d}^\top)^\top\in\mathbb{R}^{dN}$ is an augmented vector and $\boldsymbol{y}_{t}\triangleq\boldsymbol{s}_{t}$.
The data pair $\{\boldsymbol{y}_{t},\boldsymbol{z}_{t}\}$ are the inputs for learning the coefficient matrices $\{\boldsymbol{A}_{t}\in\mathbb{R}^{N\times (dN)}\}$. The notation $\|\cdot\|_{2}$ denotes the $\ell_{2}$-norm of a vector. Notably, these coefficient matrices express the time series through a sequence of time-indexed parameters.
\end{definition}
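To make the over-parameterization concrete: each time step supplies only one vector equation for the $N\times dN$ unknown matrix $\boldsymbol{A}_{t}$, so without further structure even the minimum-norm solution interpolates the data exactly. A hypothetical sketch (random data, arbitrary sizes):

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, d = 5, 40, 2
S = rng.standard_normal((N, T))

# Data pairs of Definition 1 (0-based index t = d, ..., T-1)
for t in range(d, T):
    y = S[:, t]                                                 # y_t = s_t
    z = np.concatenate([S[:, t - k] for k in range(1, d + 1)])  # z_t, (dN,)
    # One equation per step: A_t (N x dN unknowns) is underdetermined,
    # and the rank-one minimum-norm solution fits y_t with zero residual.
    A_t = np.outer(y, z) / (z @ z)
    assert np.allclose(A_t @ z, y)
    assert A_t.shape == (N, d * N)
```

This is why unconstrained time-varying coefficients carry no predictive structure, and why the low-rank factorization introduced next is essential.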
\begin{remark}\label{windowed_model}
In contrast to the fully time-varying VAR as mentioned in Definition~\ref{time_varying_var_definition}, Harris \emph{et al.} \cite{harris2021time} presented a windowed time-varying VAR with the first-order form (i.e., $d=1$), which is given by
\begin{equation}
\min_{\{\boldsymbol{A}_{t}\}}~\frac{1}{2}\sum_{t=1}^{M}\|\boldsymbol{Y}_{t}-\boldsymbol{A}_{t}\boldsymbol{Z}_{t}\|_{F}^2,
\end{equation}
where $\{\boldsymbol{A}_{t}\in\mathbb{R}^{N\times N}\}$ are the coefficient matrices. The time horizon $T=MK$ (with $M,K\in\mathbb{N}^{+}$) is comprised of $M$ windows, each of length $K$. For any window $t$, the data pair $\{\boldsymbol{Y}_{t},\boldsymbol{Z}_{t}\}$ are defined as augmented matrices:
\begin{equation*}
\begin{aligned}
\boldsymbol{Y}_{t}&=\begin{bmatrix}
\mid & & \mid \\
\boldsymbol{s}_{(t-1)K+2} & \cdots & \boldsymbol{s}_{tK} \\
\mid & & \mid \\
\end{bmatrix}\in\mathbb{R}^{N\times (K-1)}, \\
\boldsymbol{Z}_{t}&=\begin{bmatrix}
\mid & & \mid \\
\boldsymbol{s}_{(t-1)K+1} & \cdots & \boldsymbol{s}_{tK-1} \\
\mid & & \mid \\
\end{bmatrix}\in\mathbb{R}^{N\times (K-1)}. \\
\end{aligned}
\end{equation*}
In the limit of the shortest windows, the windowed time-varying VAR reduces to our fully time-varying VAR with $d=1$, showing the flexibility of the fully time-varying VAR over the windowed one.
\end{remark}
In Definition~\ref{time_varying_var_definition}, gathering $\{\boldsymbol{A}_{t}\}$ and stacking these matrices along an additional dimension yields a third-order tensor $\boldsymbol{\mathcal{A}}\in\mathbb{R}^{N\times (dN)\times(T-d)}$ that represents the coefficient matrices, in which $\boldsymbol{A}_{t}$ is the $t$th frontal slice of $\boldsymbol{\mathcal{A}}$. One great concern is that the model possesses $\mathcal{O}(dN^2(T-d))$ parameters, which vastly exceed the $\mathcal{O}(NT)$ observations in most cases, regardless of the order $d$. Therefore, due to the time-varying coefficients, the autoregression model suffers from the \emph{over-parameterization} issue. To address this issue, low-rank matrix/tensor factorization usually serves as a fundamental tool for compressing the over-parameterized coefficients, e.g., matrix factorization in reduced-rank regression/autoregression \cite{izenman1975reduced, ahn1988nested, reinsel1992vector, velu1998multivariate} and tensor factorization in high-dimensional VAR \cite{wang2021high}. In this work, we assume that the tensor $\boldsymbol{\mathcal{A}}$ has a reduced (i.e., deficient) rank and admits a multilinear rank-$(R,R,R)$ Tucker decomposition such that
\begin{equation}\label{tucker}
\boldsymbol{\mathcal{A}}=\boldsymbol{\mathcal{G}}\times_{1}\boldsymbol{W}\times_{2}\boldsymbol{V}\times_{3}\boldsymbol{X},
\end{equation}
where $\boldsymbol{\mathcal{G}}\in\mathbb{R}^{R\times R\times R}$ is the core tensor, and $\boldsymbol{W}\in\mathbb{R}^{N\times R},\boldsymbol{V}\in\mathbb{R}^{(dN)\times R},\boldsymbol{X}\in\mathbb{R}^{(T-d)\times R}$ are factor matrices. The notation $\times_{k}$ denotes the modal product between a tensor and a matrix along mode $k$ \cite{kolda2009tensor}. If the autoregressive process is not time-varying, i.e., in the case of static coefficients, then the model reduces to the classical reduced-rank VAR with matrix factorization \cite{ahn1988nested, reinsel1992vector, velu1998multivariate}. According to the properties of the Tucker decomposition and the Kronecker product \cite{kolda2009tensor, golub2013matrix}, we can connect Eqs.~\eqref{opt_tv_var} and \eqref{tucker} through
\begin{equation}
\boldsymbol{A}_{t}=\boldsymbol{\mathcal{G}}\times_1\boldsymbol{W}\times_2\boldsymbol{V}\times_3\boldsymbol{x}_{t}^\top\equiv\boldsymbol{W}\boldsymbol{G}(\boldsymbol{x}_{t}^\top\otimes\boldsymbol{V})^\top,
\end{equation}
where $\boldsymbol{x}_{t}\in\mathbb{R}^{R}$ is the $t$th row of $\boldsymbol{X}$, and $\boldsymbol{G}\triangleq\boldsymbol{\mathcal{G}}_{(1)}\in\mathbb{R}^{R\times R^2}$ is the mode-1 unfolding/matricization of the core tensor $\boldsymbol{\mathcal{G}}$. The symbol $\otimes$ denotes the Kronecker product.
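The identity above can be verified numerically. In the sketch below (arbitrary dimensions, not from the paper), each slice $\boldsymbol{A}_{t}$ is computed both via the mode products and via $\boldsymbol{W}\boldsymbol{G}(\boldsymbol{x}_{t}^\top\otimes\boldsymbol{V})^\top$:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, T, R = 4, 2, 12, 3
G_core = rng.standard_normal((R, R, R))
W = rng.standard_normal((N, R))
V = rng.standard_normal((d * N, R))
X = rng.standard_normal((T - d, R))

# All slices at once via the mode products G x1 W x2 V x3 X
A = np.einsum('abc,ia,jb,tc->tij', G_core, W, V, X)   # (T-d, N, dN)

# Slice by slice via the unfolded identity A_t = W @ G_(1) @ (x_t kron V)^T
G1 = G_core.transpose(0, 2, 1).reshape(R, R * R)      # mode-1 unfolding
for t in range(T - d):
    A_t = W @ G1 @ np.kron(X[t], V).T
    assert np.allclose(A_t, A[t])
```

Note that the mode-1 unfolding must follow the Kronecker ordering of $(\boldsymbol{x}_{t}^\top\otimes\boldsymbol{V})$, i.e., the mode-2 index varies fastest, which the `transpose(0, 2, 1)` above encodes.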
In this case, tensor factorization is used to compress the coefficients in the time-varying VAR. If one integrates the above-mentioned tensor factorization into time-varying VAR processes, then the optimization problem of the resultant time-varying reduced-rank VAR can be formulated as follows,
\begin{equation}\label{time_varying_model}
\min_{\boldsymbol{W},\,\boldsymbol{G},\,\boldsymbol{V},\,\boldsymbol{X}}~\frac{1}{2}\sum_{t=d+1}^{T}\|\boldsymbol{y}_{t}-\boldsymbol{W}\boldsymbol{G}(\boldsymbol{x}_{t}^\top\otimes\boldsymbol{V})^\top\boldsymbol{z}_{t}\|_{2}^{2}.
\end{equation}
As can be seen, we draw connections between Tucker decomposition and the time-varying VAR and provide an expression for decomposing the coefficients into a sequence of matrices. The model not only addresses the over-parameterization issue in the time-varying VAR, but also provides physically interpretable modes for real-world time series data. As demonstrated by \cite{harris2021time}, one significant benefit of tensor factorization in the model is revealing the underlying data patterns in the parameters. For instance, on spatiotemporal data, $\boldsymbol{W}$ and $\boldsymbol{V}$ can be interpreted as spatial modes, while $\boldsymbol{X}$ can be interpreted as temporal modes. The time-varying $\{\boldsymbol{x}_{t}\}$ track the system's behavior over time. Since our model takes time-varying coefficients into account, it offers a better understanding of the dynamics of spatiotemporal data with nonstationarity and time-varying system behaviors.
In contrast to our model, the time-varying autoregression with low-rank tensors builds a VAR for the time series in each window and uses CP tensor factorization for model compression and outlier detection \cite{harris2021time}, which makes it less parsimonious than our model in terms of interpretable structure.
\begin{lemma}\label{matrix_form_opt_prop}
The optimization problem in Eq.~\eqref{time_varying_model} is equivalent to the following optimization problem in the form of matrix:
\begin{equation} \label{eq: formulation}
\min_{\boldsymbol{W},\,\boldsymbol{G},\,\boldsymbol{V},\,\boldsymbol{X}}~\frac{1}{2}\|\boldsymbol{Y}-\boldsymbol{W}\boldsymbol{G}(\boldsymbol{X}\otimes\boldsymbol{V})^\top\tilde{\boldsymbol{Z}}\|_{F}^{2},
\end{equation}
where $\|\cdot\|_{F}$ denotes the Frobenius norm of matrix, and $\boldsymbol{Y},\tilde{\boldsymbol{Z}}$ are defined as follows,
\begin{equation*}
\boldsymbol{Y}\triangleq\begin{bmatrix}
\mid & \mid & & \mid \\
\boldsymbol{y}_{d+1} & \boldsymbol{y}_{d+2} & \cdots & \boldsymbol{y}_{T} \\
\mid & \mid & & \mid \\
\end{bmatrix}\in\mathbb{R}^{N\times(T-d)},
\end{equation*}
and
\begin{equation*}
\tilde{\boldsymbol{Z}}\triangleq\begin{bmatrix}
\boldsymbol{z}_{d+1} & \boldsymbol{0} & \cdots & \boldsymbol{0} \\
\boldsymbol{0} & \boldsymbol{z}_{d+2} & \cdots & \boldsymbol{0} \\
\vdots & \vdots & \ddots & \vdots \\
\boldsymbol{0} & \boldsymbol{0} & \cdots & \boldsymbol{z}_{T} \\
\end{bmatrix}\in\mathbb{R}^{(dN(T-d))\times (T-d)},
\end{equation*}
respectively.
\end{lemma}
\begin{remark}
This matrix-form optimization problem provides insights into the time-varying reduced-rank VAR. Imposing orthogonality on the factor matrices in the matrix-form optimization problem, our model can be written as a higher-order singular value decomposition (HOSVD, \cite{kolda2009tensor, golub2013matrix}) of the coefficient tensor. In this case, we give an unfolded form of the HOSVD as follows,
\begin{equation}
\begin{aligned}
\min_{\boldsymbol{W},\,\boldsymbol{G},\,\boldsymbol{V},\,\boldsymbol{X}}~&\frac{1}{2}\|\boldsymbol{\mathcal{A}}_{(1)}-\boldsymbol{W}\boldsymbol{G}(\boldsymbol{X}\otimes\boldsymbol{V})^\top\|_{F}^{2} \\
\text{s.t.}~&\begin{cases}\boldsymbol{W}^\top\boldsymbol{W}=\boldsymbol{I}_{R},\\
\boldsymbol{V}^\top\boldsymbol{V}=\boldsymbol{I}_{R},\\
\boldsymbol{X}^\top\boldsymbol{X}=\boldsymbol{I}_{R},
\end{cases}
\end{aligned}
\end{equation}
where $\boldsymbol{\mathcal{A}}_{(1)}\triangleq\boldsymbol{Y}\tilde{\boldsymbol{Z}}^\dagger\in\mathbb{R}^{N\times (dN(T-d))}$ is the mode-1 unfolding of $\boldsymbol{\mathcal{A}}$. Herein, $\cdot^\dagger$ denotes the Moore-Penrose pseudo-inverse. In the constraint, $\boldsymbol{I}_{R}$ is the $R$-by-$R$ identity matrix.
\end{remark}
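As a sketch of the HOSVD referenced above (assumptions: arbitrary sizes and a tensor of exactly low multilinear rank; not the paper's code), the factor matrices come from SVDs of the mode unfoldings and the core from projecting the tensor onto them:

```python
import numpy as np

def hosvd(A, ranks):
    """Truncated HOSVD of a third-order tensor A with multilinear ranks."""
    factors = []
    for n, r in enumerate(ranks):
        An = np.moveaxis(A, n, 0).reshape(A.shape[n], -1)  # mode-n unfolding
        U, _, _ = np.linalg.svd(An, full_matrices=False)
        factors.append(U[:, :r])
    # Core tensor: G = A x1 U1^T x2 U2^T x3 U3^T
    G = np.einsum('ijk,ia,jb,kc->abc', A, *factors)
    return G, factors

# Exact recovery when A truly has multilinear rank (R, R, R)
rng = np.random.default_rng(0)
R = 2
G0 = rng.standard_normal((R, R, R))
U1, U2, U3 = [rng.standard_normal((n, R)) for n in (5, 6, 7)]
A = np.einsum('abc,ia,jb,kc->ijk', G0, U1, U2, U3)
G, (W, V, X) = hosvd(A, (R, R, R))
A_rec = np.einsum('abc,ia,jb,kc->ijk', G, W, V, X)
assert np.allclose(A_rec, A)
assert np.allclose(W.T @ W, np.eye(R))   # orthonormal factors
```

Exact reconstruction holds here because the truncated singular vectors span the full column space of each unfolding; for a generic tensor, truncated HOSVD is only quasi-optimal.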
As can be seen, our model inherits the pattern-discovery capability of HOSVD, but in high-dimensional settings, computing with the large and sparse matrix $\tilde{\boldsymbol{Z}}$ is costly. In what follows, we let $f$ be the objective function of the optimization problem in Eq.~\eqref{time_varying_model}, and use the vector-form formulation to develop an alternating minimization algorithm for the time-varying reduced-rank VAR.
\subsection{Model Inference}
Recall that alternating minimization (e.g., alternating least squares) is a simple yet efficient method for a variety of nonconvex optimization problems arising in low-rank matrix/tensor factorization \cite{kolda2009tensor, golub2013matrix}. To solve the optimization problem of our model, we apply an alternating minimization algorithm consisting of a sequence of least squares subproblems with respect to $\boldsymbol{W},\,\boldsymbol{G},\,\boldsymbol{V},\,\boldsymbol{X}$, respectively. In the subproblem for each variable, we fix the remaining variables and write down the least squares solution or obtain a numerical approximation to it. In addition, as reported in \cite{harris2021time}, estimation based on alternating minimization has been shown to be effective for such optimization problems.
\subsubsection{Updating the Variable $\boldsymbol{W}$}
With respect to the variable $\boldsymbol{W}$, the estimation task is to derive a closed-form solution to
\begin{equation}
\boldsymbol{W}:=\argmin_{\boldsymbol{W}}~\frac{1}{2}\sum_{t=d+1}^{T}\|\boldsymbol{y}_{t}-\boldsymbol{W}\boldsymbol{G}(\boldsymbol{x}_{t}^\top\otimes\boldsymbol{V})^\top\boldsymbol{z}_{t}\|_{2}^{2},
\end{equation}
while $\{\boldsymbol{G},\boldsymbol{V},\boldsymbol{X}\}$ are fixed as known variables. Since this optimization problem is convex, we can first write down the partial derivative of $f$ with respect to $\boldsymbol{W}$ as follows,
\begin{equation} \label{eq: derivate of W}
\begin{aligned}
\frac{\partial f}{\partial\boldsymbol{W}}=&-\sum_{t=d+1}^{T}(\boldsymbol{y}_{t}-\boldsymbol{W}\boldsymbol{G}(\boldsymbol{x}_{t}^\top\otimes\boldsymbol{V})^\top\boldsymbol{z}_{t})\boldsymbol{z}_{t}^\top(\boldsymbol{x}_{t}^\top\otimes\boldsymbol{V})\boldsymbol{G}^\top \\
=&-\sum_{t=d+1}^{T}\boldsymbol{y}_{t}\boldsymbol{z}_{t}^\top(\boldsymbol{x}_{t}^\top\otimes\boldsymbol{V})\boldsymbol{G}^\top \\
&+\sum_{t=d+1}^{T}\boldsymbol{W}\boldsymbol{G}(\boldsymbol{x}_{t}^\top\otimes\boldsymbol{V})^\top\boldsymbol{z}_{t}\boldsymbol{z}_{t}^\top(\boldsymbol{x}_{t}^\top\otimes\boldsymbol{V})\boldsymbol{G}^\top. \\
\end{aligned}
\end{equation}
Setting $\frac{\partial f}{\partial\boldsymbol{W}}=\boldsymbol{0}$ yields the least squares solution for the variable $\boldsymbol{W}$:
\begin{equation}\label{least_square_w}
\begin{aligned}
\boldsymbol{W}=&\left(\sum_{t=d+1}^{T}\boldsymbol{y}_{t}\boldsymbol{z}_{t}^\top(\boldsymbol{x}_{t}^\top\otimes\boldsymbol{V})\boldsymbol{G}^\top\right) \\
&\cdot\left(\sum_{t=d+1}^{T}\boldsymbol{G}(\boldsymbol{x}_{t}^\top\otimes\boldsymbol{V})^\top\boldsymbol{z}_{t}\boldsymbol{z}_{t}^\top(\boldsymbol{x}_{t}^\top\otimes\boldsymbol{V})\boldsymbol{G}^\top\right)^{-1}.
\end{aligned}
\end{equation}
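To make this step concrete, the following is a minimal NumPy sketch of the update in Eq.~\eqref{least_square_w}; the function name and the array layout (snapshots $\boldsymbol{y}_{t}$, $\boldsymbol{z}_{t}$ stacked column-wise, rows of $\boldsymbol{X}$ as $\boldsymbol{x}_{t}$) are our own conventions, not those of the reference implementation.

```python
import numpy as np

def update_W(Y, Z, X, V, G):
    # Least squares update of W; assumed shapes (our convention):
    # Y: (N, T-d) with columns y_t, Z: (dN, T-d) with columns z_t,
    # X: (T-d, R) with rows x_t, V: (dN, R), G: (R, R*R).
    N, R = Y.shape[0], X.shape[1]
    A = np.zeros((N, R))   # accumulates sum_t y_t z_t^T (x_t^T kron V) G^T
    B = np.zeros((R, R))   # accumulates the R-by-R Gram matrix to invert
    for t in range(Y.shape[1]):
        K = np.kron(X[t].reshape(1, -1), V)   # x_t^T kron V, shape (dN, R^2)
        u = G @ (K.T @ Z[:, t])               # G (x_t^T kron V)^T z_t, shape (R,)
        A += np.outer(Y[:, t], u)
        B += np.outer(u, u)
    return A @ np.linalg.inv(B)
```

On data generated exactly from the model, this update recovers the true $\boldsymbol{W}$ whenever the accumulated Gram matrix is invertible.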
\subsubsection{Updating the Variable $\boldsymbol{G}$}
With respect to the variable $\boldsymbol{G}$, the goal is to find the optimal solution of
\begin{equation}
\boldsymbol{G}:=\argmin_{\boldsymbol{G}}~\frac{1}{2}\sum_{t=d+1}^{T}\|\boldsymbol{y}_{t}-\boldsymbol{W}\boldsymbol{G}(\boldsymbol{x}_{t}^\top\otimes\boldsymbol{V})^\top\boldsymbol{z}_{t}\|_{2}^{2},
\end{equation}
while $\{\boldsymbol{W},\boldsymbol{V},\boldsymbol{X}\}$ are fixed as known variables. To find the closed-form solution from this optimization problem, the partial derivative of $f$ with respect to $\boldsymbol{G}$ is given by
\begin{equation}
\begin{aligned}
\frac{\partial f}{\partial\boldsymbol{G}}=&-\sum_{t=d+1}^{T}\boldsymbol{W}^\top(\boldsymbol{y}_{t}-\boldsymbol{W}\boldsymbol{G}(\boldsymbol{x}_{t}^\top\otimes\boldsymbol{V})^\top\boldsymbol{z}_{t})\boldsymbol{z}_{t}^\top(\boldsymbol{x}_{t}^\top\otimes\boldsymbol{V}) \\
=&-\sum_{t=d+1}^{T}\boldsymbol{W}^\top\boldsymbol{y}_{t}\boldsymbol{z}_{t}^\top(\boldsymbol{x}_{t}^\top\otimes\boldsymbol{V}) \\
&+\sum_{t=d+1}^{T}\boldsymbol{W}^\top\boldsymbol{W}\boldsymbol{G}(\boldsymbol{x}_{t}^\top\otimes\boldsymbol{V})^\top\boldsymbol{z}_{t}\boldsymbol{z}_{t}^\top(\boldsymbol{x}_{t}^\top\otimes\boldsymbol{V}). \\
\end{aligned}
\end{equation}
Setting $\frac{\partial f}{\partial\boldsymbol{G}}=\boldsymbol{0}$ yields the least squares solution for the variable $\boldsymbol{G}$:
\begin{equation}\label{least_square_G}
\begin{aligned}
\boldsymbol{G}=&\boldsymbol{W}^\dagger\left(\sum_{t=d+1}^{T}\boldsymbol{y}_{t}\boldsymbol{z}_{t}^\top(\boldsymbol{x}_{t}^\top\otimes\boldsymbol{V})\right)\\
&\cdot\left(\sum_{t=d+1}^{T}(\boldsymbol{x}_{t}^\top\otimes\boldsymbol{V})^\top\boldsymbol{z}_{t}\boldsymbol{z}_{t}^\top(\boldsymbol{x}_{t}^\top\otimes\boldsymbol{V})\right)^{-1}. \\
\end{aligned}
\end{equation}
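Analogously, a minimal sketch of Eq.~\eqref{least_square_G} follows, with the same assumed shapes as in the $\boldsymbol{W}$-update sketch above:

```python
import numpy as np

def update_G(Y, Z, X, V, W):
    # Least squares update of G; assumed shapes (our convention):
    # Y: (N, T-d), Z: (dN, T-d), X: (T-d, R), V: (dN, R), W: (N, R).
    R = X.shape[1]
    S1 = np.zeros((Y.shape[0], R * R))   # sum_t y_t z_t^T (x_t^T kron V)
    S2 = np.zeros((R * R, R * R))        # sum_t (x_t^T kron V)^T z_t z_t^T (x_t^T kron V)
    for t in range(Y.shape[1]):
        v = np.kron(X[t].reshape(1, -1), V).T @ Z[:, t]   # (R^2,)
        S1 += np.outer(Y[:, t], v)
        S2 += np.outer(v, v)
    return np.linalg.pinv(W) @ S1 @ np.linalg.inv(S2)
```

With $\boldsymbol{W}$ of full column rank and at least $R^{2}$ informative samples, the update recovers the true $\boldsymbol{G}$ on exact model data.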
\subsubsection{Updating the Variable $\boldsymbol{V}$}
With respect to the variable $\boldsymbol{V}$, the optimization problem is given by
\begin{equation}
\boldsymbol{V}:=\argmin_{\boldsymbol{V}}~\frac{1}{2}\sum_{t=d+1}^{T}\|\boldsymbol{y}_{t}-\boldsymbol{W}\boldsymbol{G}(\boldsymbol{x}_{t}^\top\otimes\boldsymbol{V})^\top\boldsymbol{z}_{t}\|_{2}^{2},
\end{equation}
where $\{\boldsymbol{W},\boldsymbol{G},\boldsymbol{X}\}$ are fixed as known variables. To obtain a closed-form solution, we must reformulate the objective function because the involved Kronecker product complicates the formula. According to the property of the Kronecker product (see Lemma~\ref{lemma1}), we can rewrite the objective function as follows,
\begin{equation}
\begin{aligned}
f=&\frac{1}{2}\sum_{t=d+1}^{T}\|\boldsymbol{y}_{t}-\boldsymbol{W}\boldsymbol{G}(\boldsymbol{x}_{t}^\top\otimes\boldsymbol{V})^\top\boldsymbol{z}_{t}\|_{2}^{2} \\
=&\frac{1}{2}\sum_{t=d+1}^{T}\|\boldsymbol{y}_{t}-\boldsymbol{W}\boldsymbol{G}((\boldsymbol{x}_{t}\boldsymbol{z}_{t}^\top)\otimes\boldsymbol{I}_{R})\text{vec}(\boldsymbol{V}^\top)\|_{2}^{2}, \\
\end{aligned}
\end{equation}
where $\text{vec}(\cdot)$ denotes the vectorization operator.
\begin{lemma}\label{lemma1}
For any $\boldsymbol{x}\in\mathbb{R}^{n},\boldsymbol{Y}\in\mathbb{R}^{p\times q},\boldsymbol{z}\in\mathbb{R}^{p}$, the following identities hold:
\begin{equation}
\begin{aligned}
(\boldsymbol{x}^\top\otimes\boldsymbol{Y})^\top\boldsymbol{z}=&(\boldsymbol{x}\otimes\boldsymbol{Y}^\top)\boldsymbol{z} \\
=&\text{vec}(\boldsymbol{Y}^\top\boldsymbol{z}\boldsymbol{x}^\top) \\
=&\text{vec}(\boldsymbol{I}_{q}\boldsymbol{Y}^\top(\boldsymbol{z}\boldsymbol{x}^\top)) \\
=&((\boldsymbol{x}\boldsymbol{z}^\top)\otimes\boldsymbol{I}_{q})\text{vec}(\boldsymbol{Y}^\top).
\end{aligned}
\end{equation}
\end{lemma}
\begin{remark}
This lemma stems from a fundamental property of the Kronecker product, namely
\begin{equation}
\text{vec}(\boldsymbol{A}\boldsymbol{X}\boldsymbol{B})=(\boldsymbol{B}^\top\otimes\boldsymbol{A})\text{vec}(\boldsymbol{X}),
\end{equation}
for any matrices $\boldsymbol{A}\in\mathbb{R}^{p\times m}$, $\boldsymbol{X}\in\mathbb{R}^{m\times n}$, and $\boldsymbol{B}\in\mathbb{R}^{n\times q}$ that are conformable for multiplication in that order.
\end{remark}
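As a sanity check, the identities in Lemma~\ref{lemma1} can be verified numerically; here $\text{vec}(\cdot)$ is column-major vectorization, implemented with \texttt{flatten(order='F')}:

```python
import numpy as np

# Numerical check of Lemma 1 on random inputs; vec(.) is column-major.
rng = np.random.default_rng(0)
n, p, q = 3, 4, 5
x = rng.standard_normal(n)
Y = rng.standard_normal((p, q))
z = rng.standard_normal(p)

lhs = np.kron(x.reshape(1, -1), Y).T @ z                 # (x^T kron Y)^T z
alt = np.kron(x.reshape(-1, 1), Y.T) @ z                 # (x kron Y^T) z
mid = (Y.T @ np.outer(z, x)).flatten(order='F')          # vec(Y^T z x^T)
rhs = np.kron(np.outer(x, z), np.eye(q)) @ Y.T.flatten(order='F')

assert np.allclose(lhs, alt)
assert np.allclose(lhs, mid)
assert np.allclose(mid, rhs)
```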
Therefore, the partial derivative of $f$ with respect to the vectorized variable $\text{vec}(\boldsymbol{V}^\top)$ is given by
\begin{equation}
\begin{aligned}
\frac{\partial f}{\partial\text{vec}(\boldsymbol{V}^\top)}=&-\sum_{t=d+1}^{T}((\boldsymbol{z}_{t}\boldsymbol{x}_{t}^\top)\otimes\boldsymbol{I}_{R})\boldsymbol{G}^\top\boldsymbol{W}^\top\boldsymbol{y}_{t} \\
&+\sum_{t=d+1}^{T}((\boldsymbol{z}_{t}\boldsymbol{x}_{t}^\top)\otimes\boldsymbol{I}_{R})\boldsymbol{G}^\top\boldsymbol{W}^\top \\
&\cdot\boldsymbol{W}\boldsymbol{G}((\boldsymbol{x}_{t}\boldsymbol{z}_{t}^\top)\otimes\boldsymbol{I}_{R})\text{vec}(\boldsymbol{V}^\top). \\
\end{aligned}
\end{equation}
Letting $\frac{\partial f}{\partial\text{vec}(\boldsymbol{V}^\top)}=\boldsymbol{0}$ leads to the following system of linear equations:
\begin{equation}\label{linear_eq_vec_v}
\begin{aligned}
&\sum_{t=d+1}^{T}((\boldsymbol{z}_{t}\boldsymbol{x}_{t}^\top)\otimes\boldsymbol{I}_{R})\boldsymbol{G}^\top\boldsymbol{W}^\top\boldsymbol{W}\boldsymbol{G}((\boldsymbol{x}_{t}\boldsymbol{z}_{t}^\top)\otimes\boldsymbol{I}_{R})\text{vec}(\boldsymbol{V}^\top) \\
=&\sum_{t=d+1}^{T}((\boldsymbol{z}_{t}\boldsymbol{x}_{t}^\top)\otimes\boldsymbol{I}_{R})\boldsymbol{G}^\top\boldsymbol{W}^\top\boldsymbol{y}_{t}. \\
\end{aligned}
\end{equation}
This system of linear equations has a least squares solution. However, inverting the resulting large matrix would cost $\mathcal{O}((dNR)^3)$, which is computationally prohibitive in high dimensions. To convert this large-scale, sparse problem into an easier one, we utilize Lemma~\ref{lemma2} to establish an equivalent generalized Sylvester equation with respect to the variable $\boldsymbol{V}$.
\begin{lemma}\label{lemma2}
For any $\boldsymbol{x}\in\mathbb{R}^{n},\boldsymbol{Y}\in\mathbb{R}^{q\times n},\boldsymbol{z}\in\mathbb{R}^{p}$, the following identity holds:
\begin{equation}
((\boldsymbol{z}\boldsymbol{x}^\top)\otimes\boldsymbol{I}_{q})\text{vec}(\boldsymbol{Y})=\text{vec}(\boldsymbol{Y}\boldsymbol{x}\boldsymbol{z}^\top).
\end{equation}
\end{lemma}
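Lemma~\ref{lemma2} can likewise be verified numerically (again with column-major vectorization):

```python
import numpy as np

# Numerical check of Lemma 2 on random inputs (column-major vec).
rng = np.random.default_rng(1)
n, p, q = 3, 4, 5
x = rng.standard_normal(n)
Y = rng.standard_normal((q, n))
z = rng.standard_normal(p)

lhs = np.kron(np.outer(z, x), np.eye(q)) @ Y.flatten(order='F')
rhs = (Y @ np.outer(x, z)).flatten(order='F')
assert np.allclose(lhs, rhs)
```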
According to the property of Kronecker product as mentioned in Lemma~\ref{lemma2}, Eq.~\eqref{linear_eq_vec_v} is equivalent to the following generalized Sylvester equation:
\begin{equation}
\begin{aligned}
&\sum_{t=d+1}^{T}((\boldsymbol{z}_{t}\boldsymbol{x}_{t}^\top)\otimes\boldsymbol{I}_{R})\boldsymbol{G}^\top\boldsymbol{W}^\top\boldsymbol{W}\boldsymbol{G}(\boldsymbol{x}_{t}^\top\otimes\boldsymbol{V})^\top\boldsymbol{z}_{t} \\
=&\sum_{t=d+1}^{T}((\boldsymbol{z}_{t}\boldsymbol{x}_{t}^\top)\otimes\boldsymbol{I}_{R})\boldsymbol{G}^\top\boldsymbol{W}^\top\boldsymbol{y}_{t}. \\
\end{aligned}
\end{equation}
This yields a generalized Sylvester equation whose solution determines $\boldsymbol{V}^\top$:
\begin{equation}\label{matrix_eq_v_transpose}
\sum_{t=d+1}^{T}\boldsymbol{P}_{t}\boldsymbol{x}_{t}\boldsymbol{z}_{t}^\top=\sum_{t=d+1}^{T}\boldsymbol{Q}_{t}\boldsymbol{x}_{t}\boldsymbol{z}_{t}^\top,
\end{equation}
where we define two auxiliary variables $\boldsymbol{P}_{t},\boldsymbol{Q}_{t}\in\mathbb{R}^{R\times R}$ such that
\begin{equation*}
\begin{cases}
\text{vec}(\boldsymbol{P}_{t})\triangleq\boldsymbol{G}^\top\boldsymbol{W}^\top\boldsymbol{W}\boldsymbol{G}(\boldsymbol{x}_{t}^\top\otimes\boldsymbol{V})^\top\boldsymbol{z}_{t},\\
\text{vec}(\boldsymbol{Q}_{t})\triangleq\boldsymbol{G}^\top\boldsymbol{W}^\top\boldsymbol{y}_{t}.
\end{cases}
\end{equation*}
With respect to the variable $\boldsymbol{V}$, we transpose both the left-hand and right-hand sides of Eq.~\eqref{matrix_eq_v_transpose}, which gives
\begin{equation}\label{matrix_eq_v}
\sum_{t=d+1}^{T}\boldsymbol{z}_{t}\boldsymbol{x}_{t}^\top\boldsymbol{P}_{t}^\top=\sum_{t=d+1}^{T}\boldsymbol{z}_{t}\boldsymbol{x}_{t}^\top\boldsymbol{Q}_{t}^\top.
\end{equation}
In this case, we can use the conjugate gradient method, a classical and efficient numerical approximation algorithm \cite{golub2013matrix}, to solve the generalized Sylvester equation. To solve Eq.~\eqref{matrix_eq_v} through conjugate gradient, we define the following operator on the left-hand side of the equation (note that $\boldsymbol{P}_{t}$, and hence $\mathcal{L}_{v}$, depends linearly on $\boldsymbol{V}$):
\begin{equation}
\mathcal{L}_{v}(\boldsymbol{V})\triangleq\text{vec}\left(\sum_{t=d+1}^{T}\boldsymbol{z}_{t}\boldsymbol{x}_{t}^\top\boldsymbol{P}_{t}^\top\right)\in\mathbb{R}^{dNR}.
\end{equation}
Algorithm~\ref{cong_grad} summarizes the estimation procedure for approximating the solution to the variable $\boldsymbol{V}$ in Eq.~\eqref{matrix_eq_v}. The conjugate gradient method searches for an approximate solution to a system of linear equations within a relatively small number of iterations (e.g., 5 or 10). Admittedly, conjugate gradient with a small number of iterations cannot exactly match the least squares solution. Nevertheless, the numerically approximated solution via conjugate gradient is rather accurate \cite{golub2013matrix}.
\begin{algorithm}
\caption{Conjugate gradient for inferring $\boldsymbol{V}$}
\label{cong_grad}
\begin{algorithmic}[1]
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\REQUIRE Data pair $\{\boldsymbol{y}_{t},\boldsymbol{z}_{t}\}$, known variables $\{\boldsymbol{W},\boldsymbol{G},\boldsymbol{X}\}$, initialized $\boldsymbol{V}$, and the maximum iteration $\tilde{L}$ (e.g., the default value as $\tilde{L}=5$).
\ENSURE Estimated $\boldsymbol{V}$.
\STATE Let $\mathcal{R}_{v}=\boldsymbol{0}$.
\FOR {$t=d+1$ to $T$}
\STATE Compute $\boldsymbol{Q}_{t}$ with $\text{vec}(\boldsymbol{Q}_{t})\triangleq\boldsymbol{G}^\top\boldsymbol{W}^\top\boldsymbol{y}_{t}$.
\STATE Take $\mathcal{R}_{v}+=\boldsymbol{z}_{t}\boldsymbol{x}_{t}^\top\boldsymbol{Q}_{t}^\top$.
\ENDFOR
\STATE Initialize $\boldsymbol{v}_{0}$ by the vectorized $\boldsymbol{V}$.
\STATE Compute residual vector $\boldsymbol{r}_{0}=\text{vec}(\mathcal{R}_{v})-\mathcal{L}_v(\boldsymbol{V})$, and $\boldsymbol{q}_0=\boldsymbol{r}_0$.
\FOR {$\ell = 0$ to $\tilde{L}-1$}
\STATE Convert vector $\boldsymbol{q}_{\ell}$ into matrix $\boldsymbol{Q}_{\ell}$.
\STATE Compute $\alpha_{\ell}=\frac{\boldsymbol{r}_{\ell}^\top\boldsymbol{r}_{\ell}}{\boldsymbol{q}_{\ell}^\top\mathcal{L}_v(\boldsymbol{Q}_{\ell})}$.
\STATE Update $\boldsymbol{v}_{\ell+1}=\boldsymbol{v}_{\ell}+\alpha_{\ell}\boldsymbol{q}_{\ell}$.
\STATE Update $\boldsymbol{r}_{\ell+1}=\boldsymbol{r}_{\ell}-\alpha_{\ell}\mathcal{L}_v(\boldsymbol{Q}_{\ell})$.
\STATE Compute $\beta_{\ell}=\frac{\boldsymbol{r}_{\ell+1}^\top\boldsymbol{r}_{\ell+1}}{\boldsymbol{r}_{\ell}^\top\boldsymbol{r}_{\ell}}$.
\STATE Update $\boldsymbol{q}_{\ell+1}=\boldsymbol{r}_{\ell+1}+\beta_{\ell}\boldsymbol{q}_{\ell}$.
\ENDFOR
\STATE Convert vector $\boldsymbol{v}_{\tilde{L}}$ into matrix $\boldsymbol{V}$.
\end{algorithmic}
\end{algorithm}
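A minimal NumPy sketch of Algorithm~\ref{cong_grad} follows; the helper names and the dense materialization of $\boldsymbol{x}_{t}^\top\otimes\boldsymbol{V}$ are our own simplifications for toy-sized problems (a practical implementation would exploit the Kronecker structure instead of forming it explicitly), and an early-stopping check on the residual is added for numerical safety.

```python
import numpy as np

def cg_update_V(Y, Z, X, W, G, V0, n_iter=5, tol=1e-10):
    # Conjugate gradient for the generalized Sylvester equation in V
    # (sketch of Algorithm "cong_grad"); shapes: Y (N, T-d), Z (dN, T-d),
    # X (T-d, R) with rows x_t, W (N, R), G (R, R*R), V0 (dN, R).
    dN, R = V0.shape
    M = G.T @ (W.T @ W) @ G                      # (R^2, R^2), fixed in this step

    def L_op(V):                                 # L_v(V) = vec(sum_t z_t x_t^T P_t^T)
        out = np.zeros((dN, R))
        for t in range(Y.shape[1]):
            K = np.kron(X[t].reshape(1, -1), V)               # (dN, R^2)
            P = (M @ (K.T @ Z[:, t])).reshape(R, R, order='F')
            out += np.outer(Z[:, t], X[t]) @ P.T
        return out.flatten(order='F')

    rhs = np.zeros((dN, R))                      # sum_t z_t x_t^T Q_t^T
    for t in range(Y.shape[1]):
        Q = (G.T @ (W.T @ Y[:, t])).reshape(R, R, order='F')
        rhs += np.outer(Z[:, t], X[t]) @ Q.T
    rhs = rhs.flatten(order='F')

    v = V0.flatten(order='F')
    r = rhs - L_op(V0)
    q = r.copy()
    for _ in range(n_iter):
        if np.sqrt(r @ r) < tol:                 # already converged
            break
        Lq = L_op(q.reshape(dN, R, order='F'))
        alpha = (r @ r) / (q @ Lq)
        v = v + alpha * q
        r_new = r - alpha * Lq
        beta = (r_new @ r_new) / (r @ r)
        q = r_new + beta * q
        r = r_new
    return v.reshape(dN, R, order='F')
```

Since $\mathcal{L}_{v}$ corresponds to a symmetric positive semidefinite system (a permutation-conjugated version of the matrix in Eq.~\eqref{linear_eq_vec_v}), conjugate gradient is applicable; with enough iterations on consistent data it drives the model residual to near zero.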
\subsubsection{Updating the Variable $\{\boldsymbol{x}_{t}\}$}
According to the property of the Kronecker product, we can rewrite the objective function of the optimization problem in Eq.~\eqref{time_varying_model} as follows,
\begin{equation}
\begin{aligned}
f=&\frac{1}{2}\sum_{t=d+1}^{T}\|\boldsymbol{y}_{t}-\boldsymbol{W}\boldsymbol{G}(\boldsymbol{x}_{t}^\top\otimes\boldsymbol{V})^\top\boldsymbol{z}_{t}\|_{2}^{2}\\
=&\frac{1}{2}\sum_{t=d+1}^{T}\|\boldsymbol{y}_{t}-\boldsymbol{W}\boldsymbol{G}(\boldsymbol{I}_{R}\otimes(\boldsymbol{V}^\top\boldsymbol{z}_{t}))\boldsymbol{x}_{t}\|_{2}^{2}. \\
\end{aligned}
\end{equation}
Therefore, the optimization problem with respect to $\boldsymbol{x}_{t},\forall t$ now becomes
\begin{equation}
\boldsymbol{x}_{t}:=\argmin_{\boldsymbol{x}_{t}}~\frac{1}{2}\|\boldsymbol{y}_{t}-\boldsymbol{W}\boldsymbol{G}(\boldsymbol{I}_{R}\otimes(\boldsymbol{V}^\top\boldsymbol{z}_{t}))\boldsymbol{x}_{t}\|_{2}^{2},
\end{equation}
while $\{\boldsymbol{W},\boldsymbol{G},\boldsymbol{V}\}$ are fixed as known variables.
Then, we can obtain the partial derivative of $f$ with respect to the variable $\boldsymbol{x}_{t}$ (i.e., the $t$th row of the variable $\boldsymbol{X}$), which is given by
\begin{equation}
\begin{aligned}
\frac{\partial f}{\partial\boldsymbol{x}_{t}}=&(\boldsymbol{I}_{R}\otimes(\boldsymbol{V}^\top\boldsymbol{z}_{t}))^\top\boldsymbol{G}^\top\boldsymbol{W}^\top(\boldsymbol{W}\boldsymbol{G}(\boldsymbol{I}_{R}\otimes(\boldsymbol{V}^\top\boldsymbol{z}_{t}))\boldsymbol{x}_{t}-\boldsymbol{y}_{t}) \\
=&(\boldsymbol{I}_{R}\otimes(\boldsymbol{V}^\top\boldsymbol{z}_{t}))^\top\boldsymbol{G}^\top\boldsymbol{W}^\top\boldsymbol{W}\boldsymbol{G}(\boldsymbol{I}_{R}\otimes(\boldsymbol{V}^\top\boldsymbol{z}_{t}))\boldsymbol{x}_{t} \\
&-(\boldsymbol{I}_{R}\otimes(\boldsymbol{V}^\top\boldsymbol{z}_{t}))^\top\boldsymbol{G}^\top\boldsymbol{W}^\top\boldsymbol{y}_{t}. \\
\end{aligned}
\end{equation}
In this case, letting $\frac{\partial f}{\partial\boldsymbol{x}_{t}}=\boldsymbol{0}$ leads to a least squares solution to the variable $\boldsymbol{x}_{t}$:
\begin{equation}\label{least_square_x}
\boldsymbol{x}_{t}=\left(\boldsymbol{W}\boldsymbol{G}(\boldsymbol{I}_{R}\otimes(\boldsymbol{V}^\top\boldsymbol{z}_{t}))\right)^\dagger\boldsymbol{y}_{t}.
\end{equation}
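A minimal sketch of the pseudo-inverse update in Eq.~\eqref{least_square_x}, under the same assumed shapes as the sketches above:

```python
import numpy as np

def update_x_t(y_t, z_t, W, G, V):
    # Least squares update of x_t; assumed shapes:
    # y_t: (N,), z_t: (dN,), W: (N, R), G: (R, R*R), V: (dN, R).
    R = W.shape[1]
    # B = W G (I_R kron (V^T z_t)), shape (N, R)
    B = W @ G @ np.kron(np.eye(R), (V.T @ z_t).reshape(-1, 1))
    return np.linalg.pinv(B) @ y_t
```

On exact model data with $\boldsymbol{B}$ of full column rank, this recovers the true $\boldsymbol{x}_{t}$, and the two Kronecker parametrizations of the regression agree.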
\subsection{Solution Algorithm}
As mentioned above, the variables $\{\boldsymbol{W},\boldsymbol{G},\boldsymbol{X}\}$ can be computed by closed-form least squares solutions, while the solution to the variable $\boldsymbol{V}$ can be efficiently approximated by the conjugate gradient method. Starting from variables initialized via singular value decomposition, we update these variables in an iterative routine. In the iterative process, the basic idea of alternating minimization is to fix the remaining variables when updating each variable. Since each subproblem is convex and has a unique solution, the convergence of our algorithm through block coordinate minimization can be verified \cite{harris2021time}.
\begin{algorithm}
\caption{Time-varying reduced-rank VAR}
\label{trvar}
\begin{algorithmic}[1]
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\REQUIRE Data pair $\{\boldsymbol{y}_t,\boldsymbol{z}_{t}\}$ constructed by the time series data $\boldsymbol{S}\in\mathbb{R}^{N\times T}$, $d$ as the order of VAR, $R$ as the low rank ($R\leq\min\{N,T-d\}$), and $L$ as the maximum iteration.
\ENSURE $\{\boldsymbol{W},\,\boldsymbol{G},\,\boldsymbol{V},\,\boldsymbol{X}\}$.
\STATE Initialize factor matrices $\boldsymbol{W},\boldsymbol{V},\boldsymbol{X}$ by the $R$ left singular vectors of $\boldsymbol{Y},\boldsymbol{Z},\boldsymbol{S}^\top$, respectively.
\FOR {$\ell = 0$ to $L-1$}
\STATE Update $\boldsymbol{G}$ by Eq.~\eqref{least_square_G}.
\STATE Update $\boldsymbol{W}$ by Eq.~\eqref{least_square_w}.
\STATE Compute $\boldsymbol{V}$ from Eq.~\eqref{matrix_eq_v} with conjugate gradient (see Algorithm~\ref{cong_grad}).
\FOR {$t=d+1$ to $T$}
\STATE Update $\boldsymbol{x}_{t}$ by Eq.~\eqref{least_square_x}.
\ENDFOR
\ENDFOR
\end{algorithmic}
\end{algorithm}
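For completeness, a compact end-to-end sketch of Algorithm~\ref{trvar} on toy data is given below. For clarity it solves the small $\boldsymbol{V}$-subproblem directly via least squares on the system in Eq.~\eqref{linear_eq_vec_v} rather than by conjugate gradient (practical only when $dNR$ is small), and it initializes $\boldsymbol{X}$ from $\boldsymbol{Y}^\top$, a simplification of the SVD initialization in the pseudocode; all function and variable names are our own.

```python
import numpy as np

def fit_trvar(S, d=2, R=2, n_iter=5):
    # Toy-scale sketch of the alternating minimization routine.
    # S: (N, T) multivariate time series. Returns the factors and the
    # objective value after each sweep (non-increasing by construction).
    N, T = S.shape
    Y = S[:, d:]                                                   # (N, T-d)
    Z = np.vstack([S[:, d - 1 - k: T - 1 - k] for k in range(d)])  # (dN, T-d)
    dN = d * N

    def kron_t(X, V, t):
        return np.kron(X[t].reshape(1, -1), V)                     # (dN, R^2)

    def objective(W, G, V, X):
        r = Y - np.column_stack([W @ G @ (kron_t(X, V, t).T @ Z[:, t])
                                 for t in range(T - d)])
        return 0.5 * np.sum(r ** 2)

    # SVD-based initialization (X from Y^T here, for simplicity).
    W = np.linalg.svd(Y, full_matrices=False)[0][:, :R]
    V = np.linalg.svd(Z, full_matrices=False)[0][:, :R]
    X = np.linalg.svd(Y.T, full_matrices=False)[0][:, :R]

    hist = []
    for _ in range(n_iter):
        # G-step and W-step: closed-form least squares.
        vs = [kron_t(X, V, t).T @ Z[:, t] for t in range(T - d)]
        S2 = sum(np.outer(v, v) for v in vs)
        S1 = sum(np.outer(Y[:, t], v) for t, v in enumerate(vs))
        G = np.linalg.pinv(W) @ S1 @ np.linalg.inv(S2)
        us = [G @ v for v in vs]
        W = sum(np.outer(Y[:, t], u) for t, u in enumerate(us)) \
            @ np.linalg.inv(sum(np.outer(u, u) for u in us))
        # V-step: solve the (small) linear system in vec(V^T) directly.
        M = G.T @ (W.T @ W) @ G
        A = np.zeros((dN * R, dN * R))
        b = np.zeros(dN * R)
        for t in range(T - d):
            B_t = np.kron(np.outer(Z[:, t], X[t]), np.eye(R))      # (dNR, R^2)
            A += B_t @ M @ B_t.T
            b += B_t @ (G.T @ (W.T @ Y[:, t]))
        vecVT = np.linalg.lstsq(A, b, rcond=None)[0]
        V = vecVT.reshape(R, dN, order='F').T
        # X-step: row-wise least squares.
        for t in range(T - d):
            B = W @ G @ np.kron(np.eye(R), (V.T @ Z[:, t]).reshape(-1, 1))
            X[t] = np.linalg.pinv(B) @ Y[:, t]
        hist.append(objective(W, G, V, X))
    return W, G, V, X, hist
```

Because every step exactly minimizes the objective over one block with the others fixed, the recorded objective values are non-increasing across sweeps, matching the convergence argument above.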
\section{Experiments}\label{experiment}
In this section, we evaluate our model on an artificial fluid dynamics dataset and several real-world datasets, including sea surface temperature, USA surface temperature, and NYC taxi trips. We demonstrate the performance of our model in characterizing the underlying data patterns and system behaviors, highlighting in particular its modeling capability and interpretability on time-varying systems. All experimental results are reproducible at \url{https://github.com/xinychen/vars}.
\subsection{Fluid Dynamics}
\begin{figure*}[ht!]
\centering
\begin{subfigure}{0.32\linewidth}
\centering
\includegraphics[width = 0.95\textwidth]{fluid_flow_heatmap.png}
\caption{Fluid flow (original data).}
\label{fluid_flow_heatmap}
\end{subfigure}
\begin{subfigure}{0.32\linewidth}
\centering
\includegraphics[width = 0.95\textwidth]{fluid_mode_trvar.png}
\caption{Spatial modes.}
\label{fluid_flow_spatial_mode}
\end{subfigure}
\begin{subfigure}{0.32\linewidth}
\centering
\includegraphics[width = 0.95\textwidth]{fluid_temporal_mode.pdf}
\caption{Temporal modes.}
\label{fluid_temporal_mode}
\end{subfigure}
\caption{Fluid flow and spatial/temporal modes to demonstrate the model. (a) Heatmaps (snapshots) of the fluid flow at times $t=5,10,\ldots,40$. The snapshots at times $t=5$ and $t=35$ are essentially identical, as are those at times $t=10$ and $t=40$, allowing one to infer a period of 30 for the first 50 snapshots. (b) Mean vorticity field and spatial modes of the fluid flow. Spatial modes are plotted from the columns of $\boldsymbol{W}$, with seven panels corresponding to the rank $R=7$. Note that the colorbars of all modes are on the same scale. (c) Temporal modes of the fluid flow in $\boldsymbol{X}$. Seven panels correspond to the rank $R=7$.}
\end{figure*}
Investigating fluid dynamic systems is of great interest for uncovering large-scale spatiotemporal coherent structures because dominant patterns exist in the flow field. Data-driven models, such as proper orthogonal decomposition (POD) \cite{berkooz1993proper} and DMD \cite{tu2013dynamic, kutz2016dynamic, wu2021challenges}, have become an important paradigm. To analyze the underlying spatiotemporal patterns of fluid dynamics, we apply our data-driven model to the cylinder wake dataset, in which the flow exhibits a supercritical Hopf bifurcation. The dataset is collected from the fluid flow past a circular cylinder with laminar vortex shedding at Reynolds number $\mathrm{Re}=100$, above the critical Reynolds number, using direct numerical simulations of the Navier-Stokes equations.\footnote{\url{http://dmdbook.com/}} This is a representative dataset in fluid dynamics, consisting of a matrix-variate time series of vorticity field snapshots for the wake behind the cylinder. The dataset is of size $199\times 449\times 150$, representing 199-by-449 vorticity fields at 150 time snapshots (see examples in Fig.~\ref{fluid_flow_heatmap}).
We first build an artificial dataset from the fluid dynamics observations to test the proposed model. We reshape the data as a high-dimensional multivariate time series matrix of size $89351\times 150$. Then, to generate a multiresolution system in which the fluid flow exhibits time-varying behaviors, we concatenate two parts of the data with different frequencies: i) the first 50 snapshots (original frequency) and ii) 50 snapshots uniformly sampled from the last 100 snapshots (double frequency). As a consequence, the newly built fluid flow dataset has 100 snapshots in total but with a frequency transition in its system behavior, i.e., possessing different frequencies in the two phases. Multiresolution fluid dynamic data come with their own challenges, such as multiple frequencies, which the standard DMD model cannot handle effectively (see Appendix~\ref{appendix_dmd} for further information). Given these challenges, finding a spatiotemporal coherent structure from multiresolution data is a difficult task, but one of great significance. As demonstrated by \cite{kutz2016multiresolution}, uncovering such a multiscale system makes it possible to recognize and separate multiscale spatiotemporal features.
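The multiresolution construction above amounts to simple column indexing; the sketch below uses a small random stand-in for the reshaped $89351\times 150$ cylinder-wake matrix:

```python
import numpy as np

# Keep the first 50 snapshots at the original frequency, then take every
# second snapshot from the last 100 (double frequency). A small random
# matrix stands in for the real (89351, 150) cylinder-wake data here.
data = np.random.default_rng(0).standard_normal((1000, 150))
multires = np.concatenate([data[:, :50], data[:, 50::2]], axis=1)
assert multires.shape == (1000, 100)
```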
In our model, the rank is a key parameter, indicating the number of dominant spatial/temporal modes. In practice, a lower rank may reveal only a few low-frequency dominant modes, while a higher rank brings in more complicated and less interpretable modes, usually corresponding to high-frequency information \cite{hansen2006deblurring}. This is consistent with the pattern that nature typically follows---noise is usually dominant at high frequencies, while the system signal is more dominant at lower frequencies. On this dataset, we set the rank of our model as $R=7$. Fig.~\ref{fluid_flow_spatial_mode} shows the spatial modes of the fluid flow revealed by our model. The spatial mode 1 corresponds to a background mode that does not change over time, as it is consistent with the mean vorticity. The other dominant spatial modes essentially show the waves of the fluid flow, which look similar to both DMD and POD modes from the same analysis \cite{kutz2016dynamic}. As the rank increases, the spatial modes become more detailed. In this case, observing the harmonic frequencies of the temporal modes (see Fig.~\ref{fluid_temporal_mode}), the spatial modes 4 and 5 are more complicated than the spatial modes 2 and 3, while the spatial modes 6 and 7 are more complicated than the spatial modes 4 and 5. Beyond recognizing similar spatial modes as POD, our model also discovers the temporal modes that reveal how the system behavior evolves. Fig.~\ref{fluid_temporal_mode} shows the temporal modes of the fluid flow in $\boldsymbol{X}$. As can be seen, the frequency of all temporal modes in $\boldsymbol{X}$ changes at time $t=50$, and all temporal modes identify the time series with different frequencies of oscillation. The dynamics of the fluid flow essentially consist of two phases with different frequencies. Thus, we can emphasize the model's ability to identify time-evolving patterns from multiresolution fluid flow.
The temporal mode 1 is the most dominant pattern of the fluid flow, corresponding to the spatial mode 1. With higher rank, the frequency of harmonic cycles increases, implying that the importance of the latter modes is secondary and the last spatial/temporal modes represent high-frequency information. Therefore, we can tell that our model can discover both spatial and temporal modes from the spatiotemporal data with time-varying system behaviors.
\begin{figure*}[ht!]
\def\tabularxcolumn#1{m{#1}}
\begin{tabularx}{\textwidth}{@{}cXX@{}}\begin{tabular}{c}
\subfloat[Distribution of mean temperature.]{\includegraphics[width = 0.35\textwidth]{mean_temperature.pdf}\label{mean_temperature}}\\
\subfloat[Time series of mean temperature.]{\includegraphics[width = 0.45\textwidth]{mean_temperature_time_series.pdf}\label{mean_temperature_time_series}} \\
\subfloat[Temporal modes.]{\includegraphics[width = 0.45\textwidth]{temperature_temporal_mode.pdf}\label{temperature_temporal_mode}}\end{tabular}&
\subfloat[Spatial modes.]{\includegraphics[width = 0.5\textwidth]{temperature_mode_trvar.pdf}\label{temperature_mode_trvar}}
\end{tabularx}
\caption{Model application and results on SST data from 1990 to 2019. (a) Geographical distribution of long-term mean values of the SST over the 30-year period. (b) The time series of mean values of the SST over the 30-year period. The whole average temperature is 12.45$^\circ\text{C}$. (c) Temporal modes of the SST achieved by our model. From the top panel to the bottom panel, we have the temporal modes 1, \ldots, 6, respectively. (d) Geographical distribution of spatial modes of the SST achieved by our model. Spatial modes are plotted by the columns of $\boldsymbol{W}$ in which six panels correspond to the rank $R=6$.}
\end{figure*}
\subsection{Sea Surface Temperature (SST)}
The oceans play a very important role in the global climate system. Exploiting the SST data allows one to monitor climate change and understand the dynamical processes of energy exchange at the sea surface \cite{deser2010sea, kutz2016multiresolution}. Here, we consider the SST dataset that covers weekly mean temperatures on a spatial resolution of 1 degree latitude by 1 degree longitude, giving $180\times 360$ global grid cells (i.e., 64,800 cells) in total.\footnote{\url{https://psl.noaa.gov/data/gridded/data.noaa.oisst.v2.html}} The dataset spans the 30-year period from 1990 to 2019, and the time dimension is of length 1,565 (weeks). Therefore, the data can be represented as a high-dimensional matrix of size $64800\times 1565$. Fig.~\ref{mean_temperature} exhibits the long-term mean values of the SST dataset over the 30-year period, from which the most basic patterns of the geographical distribution of SST are apparent. Fig.~\ref{mean_temperature_time_series} shows the yearly cycle ($\approx52$ weeks) of the time series of the mean temperature.
Using the proposed model with rank $R=6$, we plot the temporal and spatial modes of the SST data in Fig.~\ref{temperature_temporal_mode} and \ref{temperature_mode_trvar}, respectively. Our model can identify both quasi-periodic and non-periodic behaviors and patterns from complicated and noisy signals. In contrast, DMD models have been empirically demonstrated to be sensitive to the data noise in SST \cite{kutz2016dynamic}. The temporal modes 1, 2, 3, and 4 take a yearly cycle/rhythm, showing dominant patterns of SST. Notably, the spatial modes revealed by our data-driven model are consistent with the previous literature \cite{kutz2016multiresolution}. Specifically, the spatial mode 1, as a background mode, is consistent with the long-term mean temperature. The other spatial modes explain the deviations from the long-term monthly means. It is worth noting that some special oceanic events can also be revealed in these spatial and temporal modes. For instance, the spatial/temporal modes 3 and 4 convey the two Southern Annular Modes in the Southern Ocean as well as the positive and negative phases of the North Atlantic Oscillation. In addition, the spatial/temporal modes 5 and 6 demonstrate the phenomenon of the El Ni\~{n}o Southern Oscillation (with both El Ni\~{n}o and La Ni\~{n}a), as well as the Pacific Decadal Oscillation. Combining with the temporal mode 5 in Fig.~\ref{temperature_temporal_mode}, we can directly identify the two strongest El Ni\~{n}o events on record, i.e., those of 1997--98 and 2014--16, corresponding to the spatial mode 5 as highlighted in Fig.~\ref{temperature_mode_trvar}. Similar results can be generated by the multiresolution DMD model \cite{kutz2016multiresolution}, whereas the standard DMD model is incapable of discovering such complex SST patterns.
\subsection{USA Surface Temperature Data}
The Daymet project provides long-term, continuous estimates of daily weather parameters, such as maximum and minimum daily temperature, for North America.\footnote{\url{https://daac.ornl.gov/DAYMET}} There are 5,380 stations over the United States Mainland. In this work, we use the daily maximum temperature data in the United States Mainland from 2010 to 2021 (i.e., 12 years or 4,380 days in total) for evaluation. The data can be represented as a matrix of size $5380\times 4380$. In particular, we apply the nonstationary temporal matrix factorization model \cite{chen2022nonstationary} to impute the 2.67\% missing values in the original data and conduct the following evaluation on the recovered data. Fig.~\ref{usa_temp_spatial_dist} shows the long-term mean values of the temperature dataset over the 12-year period, demonstrating the most basic patterns of the geographical distribution of the temperature data. Fig.~\ref{usa_temp_time_series} shows the mean temperature changing over time, with strong yearly rhythms.
\begin{figure}[ht]
\centering
\begin{subfigure}{1\linewidth}
\centering
\includegraphics[width = 0.65\textwidth]{usa_temp_spatial_dist.png}
\caption{Distribution of mean temperature.}
\label{usa_temp_spatial_dist}
\end{subfigure}
\begin{subfigure}{0.9\linewidth}
\centering
\includegraphics[width = 0.9\textwidth]{usa_temp_time_series.pdf}
\caption{Time series of mean temperature.}
\label{usa_temp_time_series}
\end{subfigure}
\caption{Mean temperature of the maximum daily temperature data in the United States Mainland from 2010 to 2021. (a) Geographical distribution of the long-term mean temperature over the 12-year period. (b) The time series of mean temperature over the 12-year period.}
\end{figure}
\begin{figure*}[ht!]
\centering
\begin{subfigure}{0.9\linewidth}
\centering
\includegraphics[width = 1\textwidth]{usa_temp_spatial_modes.png}
\caption{Spatial modes.}
\label{usa_temp_spatial_modes}
\end{subfigure}
\begin{subfigure}{0.45\linewidth}
\centering
\includegraphics[width = 1\textwidth]{usa_temp_temporal_modes.pdf}
\caption{Temporal modes over the 12-year period.}
\label{usa_temp_temporal_modes}
\end{subfigure}
\begin{subfigure}{0.45\linewidth}
\centering
\includegraphics[width = 1\textwidth]{usa_temp_temporal_modes_zoom_in.pdf}
\caption{Temporal modes over the two-year period.}
\label{usa_temp_temporal_modes_zoom_in}
\end{subfigure}
\caption{Model application and results on the daily temperature data in the United States Mainland. (a) Geographical distribution of four spatial modes of the temperature data achieved by our model. (b) Four temporal modes (over the 12-year period) of the temperature data achieved by our model. From the top panel to the bottom panel, we have the temporal modes 1-4, respectively. (c) Four temporal modes (during the two years (730 days) from 2010 to 2011) of the temperature data achieved by our model.}
\end{figure*}
Fig.~\ref{usa_temp_spatial_modes} shows the geographical distribution of the spatial modes of the temperature data revealed by $\boldsymbol{W}$ in our model, while Fig.~\ref{usa_temp_temporal_modes} and \ref{usa_temp_temporal_modes_zoom_in} visualize the temporal modes of the temperature data revealed by $\boldsymbol{X}$ in our model. The spatial mode 1 demonstrates the most dominant mode, which is consistent with the mean temperature shown in Fig.~\ref{usa_temp_spatial_dist}. Meanwhile, the temporal mode 1 shows strong yearly cycles/rhythms underlying the daily temperature. The temporal mode 2 shows a time series curve similar to the temporal mode 1, but their values are quite different. The spatial mode 2 highlights the relatively hot areas and the relatively cold areas. The temporal modes 3 and 4 are rather complicated, but the corresponding spatial modes 3 and 4 are intuitive for understanding the geographical distribution of these patterns. The spatial mode 3 roughly separates the eastern areas from the western areas, identifying different characteristics. The spatial mode 4 identifies three areas, in which the eastern and western areas share a similar pattern while the central areas exhibit another.
Recall that the rank is a key parameter of our model, which also equals the number of spatial/temporal modes. Thus, the prescribed rank determines how many dominant modes we discover. Fig.~\ref{usa_temp_spatial_modes_rank_6}, \ref{usa_temp_spatial_modes_rank_7}, and \ref{usa_temp_spatial_modes_rank_8} show the geographical distribution of spatial modes of the temperature data achieved by our model with ranks $R=6,7,8$, respectively. The first four spatial modes achieved by our model with ranks $R=6,7,8$ are the same as the spatial modes in Fig.~\ref{usa_temp_spatial_modes}. As shown in Fig.~\ref{usa_temp_spatial_modes_rank_6}, \ref{usa_temp_spatial_modes_rank_7}, and \ref{usa_temp_spatial_modes_rank_8}, the spatial modes achieved by our model with a relatively large rank (e.g., $R=8$) cover the spatial modes achieved with a relatively small rank (e.g., $R=6,7$). Furthermore, as the rank increases, the spatial modes achieved by our model tend to reveal higher-frequency information and more complicated patterns. These findings demonstrate that our model is capable of effectively discovering dominant spatial and temporal patterns from real-world data.
\begin{figure*}[ht!]
\centering
\begin{subfigure}{0.9\linewidth}
\centering
\includegraphics[width = 1\textwidth]{usa_temp_spatial_modes_rank_6.png}
\caption{Rank $R=6$.}
\label{usa_temp_spatial_modes_rank_6}
\end{subfigure}
\begin{subfigure}{0.9\linewidth}
\centering
\includegraphics[width = 1\textwidth]{usa_temp_spatial_modes_rank_7.png}
\caption{Rank $R=7$.}
\label{usa_temp_spatial_modes_rank_7}
\end{subfigure}
\begin{subfigure}{0.9\linewidth}
\centering
\includegraphics[width = 1\textwidth]{usa_temp_spatial_modes_rank_8.png}
\caption{Rank $R=8$.}
\label{usa_temp_spatial_modes_rank_8}
\end{subfigure}
\caption{Geographical distribution of spatial modes of the temperature data achieved by our model.}
\end{figure*}
\begin{figure*}[ht!]
\centering
\begin{subfigure}{0.9\linewidth}
\centering
\includegraphics[width = 1\textwidth]{usa_temp_spatial_modes_rank_4_d2.png}
\caption{Order $d=2$.}
\label{usa_temp_spatial_modes_rank_4_d2}
\end{subfigure}
\begin{subfigure}{0.9\linewidth}
\centering
\includegraphics[width = 1\textwidth]{usa_temp_spatial_modes_rank_4_d3.png}
\caption{Order $d=3$.}
\label{usa_temp_spatial_modes_rank_4_d3}
\end{subfigure}
\caption{Geographical distribution of four spatial modes of the temperature data achieved by our model with rank $R=4$.}
\end{figure*}
In the above experiments, we use the model with order $d=1$. In practice, we can also set a relatively large order for the time-varying VAR. Recall that we have two variables associated with the spatial dimension, i.e., $\boldsymbol{W}\in\mathbb{R}^{N\times R}$ and $\boldsymbol{V}\in\mathbb{R}^{(dN)\times R}$, in which we assume that the spatial modes revealed by $\boldsymbol{W}$ do not change as the order $d$ increases. Fig.~\ref{usa_temp_spatial_modes_rank_4_d2} and \ref{usa_temp_spatial_modes_rank_4_d3} show the spatial modes achieved by our model with rank $R=4$ and orders $d=2,3$. These results demonstrate that our model discovers similar spatial modes across different orders.
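As a concrete reading of the factor shapes above, the sketch below forms an order-$d$ time-varying coefficient matrix from the factors. The parameterization $A_{t}=\boldsymbol{W}\,\mathrm{diag}(\boldsymbol{x}_{t})\,\boldsymbol{V}^{\top}$ is an assumption inferred from the stated dimensions, not necessarily the exact implementation, and all sizes and variable names here are illustrative.

```python
import numpy as np

# Toy dimensions (assumed for illustration): N spatial locations, rank R,
# VAR order d, and T time steps.
N, R, d, T = 5, 3, 2, 10
rng = np.random.default_rng(1)
W = rng.normal(size=(N, R))        # spatial modes
V = rng.normal(size=(d * N, R))    # lag-side spatial factor
X = rng.normal(size=(R, T))        # temporal modes, one column per time step

def coefficients(t):
    # Assumed time-varying VAR coefficient matrix A_t = W diag(x_t) V^T,
    # of size N x (dN), one matrix per time step t.
    return W @ np.diag(X[:, t]) @ V.T

def predict(y_lags, t):
    # One-step VAR(d) prediction from the stacked lag vector in R^{dN}.
    return coefficients(t) @ y_lags

assert coefficients(3).shape == (N, d * N)
assert predict(rng.normal(size=d * N), 3).shape == (N,)
```

The point of the sketch is only that a rank-$R$ factorization replaces $T$ dense $N\times dN$ coefficient matrices by three small factors.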
\subsection{NYC Taxi Trips}
Human mobility data are typically associated with strong quasi-seasonality (e.g., weekly rhythms) and trends (e.g., the decline of trips due to unplanned events such as COVID-19), making it challenging to characterize these nonlinear and time-varying system behaviors. In this work, we use a NYC (yellow) taxi trip dataset.\footnote{\url{https://www1.nyc.gov/site/tlc/about/tlc-trip-record-data.page}} We use 69 zones in Manhattan as pickup/dropoff zones and aggregate the daily taxi trip volume of the data from 2012 to 2021. Therefore, the daily trip volume tensor is of size $69\times 69\times 3653$. In the following analysis, we aggregate the trip volume tensor into a pickup trip volume matrix and a dropoff trip volume matrix, both of size $69\times 3653$. The left panel of Fig.~\ref{pickup_trips} and \ref{dropoff_trips} shows the total pickup trips and total dropoff trips, respectively. As can be seen, most pickup/dropoff trips are created in the central urban areas of Manhattan.
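The tensor-to-matrix aggregation described above amounts to summing over the dropoff axis (for pickups) and over the pickup axis (for dropoffs). A minimal sketch with a synthetic trip tensor of the stated size (the Poisson counts are placeholders, not the real data):

```python
import numpy as np

# Hypothetical daily trip tensor: trips[i, j, t] counts taxi trips from
# pickup zone i to dropoff zone j on day t (69 zones, 3653 days).
rng = np.random.default_rng(0)
trips = rng.poisson(lam=2.0, size=(69, 69, 3653))

# Summing over the dropoff axis gives the pickup volume matrix, and summing
# over the pickup axis gives the dropoff volume matrix, both 69 x 3653.
pickup = trips.sum(axis=1)   # total departures from each zone per day
dropoff = trips.sum(axis=0)  # total arrivals at each zone per day

assert pickup.shape == (69, 3653)
assert dropoff.shape == (69, 3653)
# Every trip is counted once in each matrix, so the grand totals agree.
assert pickup.sum() == dropoff.sum()
```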
\begin{figure}[ht!]
\centering
\includegraphics[width = 0.2\textwidth]{taxi_dropoff_minus_pickup.pdf}
\caption{Total dropoff trips minus total pickup trips in the 69 zones of Manhattan.}
\label{taxi_dropoff_minus_pickup}
\end{figure}
\begin{figure*}[ht!]
\centering
\begin{subfigure}{0.9\linewidth}
\centering
\includegraphics[width = 1\textwidth]{taxi_spatial_modes_pickup.pdf}
\caption{Total pickup trips and spatial modes.}
\label{pickup_trips}
\end{subfigure}
\begin{subfigure}{0.3\linewidth}
\centering
\includegraphics[width = 1\textwidth]{taxi_temporal_mode_2_pickup.pdf}
\caption{Temporal mode 2 and taxi trips.}
\label{taxi_temporal_mode_2_pickup}
\end{subfigure}
\begin{subfigure}{0.3\linewidth}
\centering
\includegraphics[width = 1\textwidth]{taxi_temporal_mode_4_pickup.pdf}
\caption{Temporal mode 4 and taxi trips.}
\label{taxi_temporal_mode_4_pickup}
\end{subfigure}
\begin{subfigure}{0.3\linewidth}
\centering
\includegraphics[width = 1\textwidth]{taxi_temporal_mode_5_pickup.pdf}
\caption{Temporal mode 5 and taxi trips.}
\label{taxi_temporal_mode_5_pickup}
\end{subfigure}
\caption{NYC taxi pickup trips and their spatial and temporal modes achieved by our model. We zoom in the temporal modes in the first four months of 2020. These modes reveal the total traffic reduction due to the COVID-19 pandemic since March 2020. (a) Total trips and spatial modes revealed by $\boldsymbol{W}$. (b-d) refer to temporal mode 2, 4, 5, respectively; note that the bottom panels of these temporal modes uncover the trip time series of certain taxi zones.}
\end{figure*}
\begin{figure*}[ht!]
\centering
\begin{subfigure}{0.9\linewidth}
\centering
\includegraphics[width = 1\textwidth]{taxi_spatial_modes_dropoff.pdf}
\caption{Total dropoff trips and spatial modes.}
\label{dropoff_trips}
\end{subfigure}
\begin{subfigure}{0.3\linewidth}
\centering
\includegraphics[width = 1\textwidth]{taxi_temporal_mode_2_dropoff.pdf}
\caption{Temporal mode 2 and taxi trips.}
\label{taxi_temporal_mode_2_dropoff}
\end{subfigure}
\begin{subfigure}{0.3\linewidth}
\centering
\includegraphics[width = 1\textwidth]{taxi_temporal_mode_4_dropoff.pdf}
\caption{Temporal mode 4 and taxi trips.}
\label{taxi_temporal_mode_4_dropoff}
\end{subfigure}
\begin{subfigure}{0.3\linewidth}
\centering
\includegraphics[width = 1\textwidth]{taxi_temporal_mode_5_dropoff.pdf}
\caption{Temporal mode 5 and taxi trips.}
\label{taxi_temporal_mode_5_dropoff}
\end{subfigure}
\caption{NYC taxi dropoff trips and their spatial and temporal modes achieved by our model. We zoom in the temporal modes in the first four months of 2020. These modes reveal the total traffic reduction due to the COVID-19 pandemic since March 2020. (a) Total trips and spatial modes revealed by $\boldsymbol{W}$. (b-d) refer to temporal mode 2, 4, 5, respectively; note that the bottom panels of these temporal modes uncover the trip time series of certain taxi zones.}
\end{figure*}
Fig.~\ref{taxi_dropoff_minus_pickup} visualizes the long-term changes of human mobility via the dropoff trips minus the pickup trips. The dropoff trips exceed the pickup trips at the edges of the urban areas (see the red zones such as the Upper East Side of Manhattan). In contrast, the pickup trips exceed the dropoff trips in the central urban areas (see the blue zones). This demonstrates the spatial difference between the pickup trips and the dropoff trips.
As shown in Fig.~\ref{pickup_trips} and \ref{dropoff_trips}, the spatial and temporal modes can explain the long-term trip patterns of NYC taxi trips. The spatial mode 1 essentially demonstrates the long-term patterns, consistent with the total trips (see the left panel of Fig.~\ref{pickup_trips} and \ref{dropoff_trips}). The other spatial modes reveal the trip patterns of specific zones. In terms of pickup trips, Fig.~\ref{taxi_temporal_mode_2_pickup} shows both the temporal mode 2 and two trip time series curves of the highlighted zones (i.e., zones 161 and 237) in the spatial mode 2. The temporal mode 2 is informative before the COVID-19 pandemic and shows patterns consistent with the daily trips of zones 161 and 237.\footnote{The unique zone identifiers are generated by the NYC Taxi and Limousine Commission (TLC), and the taxi zone map is available at \url{https://www1.nyc.gov/assets/tlc/images/content/pages/about/taxi_zone_map_manhattan.jpg}.} Fig.~\ref{taxi_temporal_mode_4_pickup} shows both the temporal mode 4 and three trip time series curves of the highlighted zones (i.e., zones 107, 231, and 234) in the spatial mode 4. The temporal mode 4 reveals the patterns of trips before and during the COVID-19 pandemic, with a pronounced change in March 2020. Fig.~\ref{taxi_temporal_mode_5_pickup} shows both the temporal mode 5 and two trip time series curves of the highlighted zones (i.e., zones 170 and 186). The temporal mode 5 is consistent with the daily trips of zones 170 and 186, and it also reveals the change of trips in March 2020.
In terms of the dropoff trips, Figure~\ref{taxi_temporal_mode_2_dropoff} shows both the temporal mode 2 and three trip time series curves of the highlighted zones (i.e., zones 161, 236, and 237). The temporal mode 2 reveals the (weekly) seasonal patterns of trips before the COVID-19 crisis. It is consistent with the daily trips of these zones, showing strong quasi-seasonality before the COVID-19 pandemic, and the difference in trip patterns before and during the pandemic is clear. Figure~\ref{taxi_temporal_mode_4_dropoff} shows both the temporal mode 4 and three trip time series curves of the highlighted zones (i.e., zones 79, 234, and 249). The temporal mode 4 demonstrates trip patterns consistent with the daily trips of these zones. Figure~\ref{taxi_temporal_mode_5_dropoff} shows both the temporal mode 5 and the trip time series curve of the highlighted zone 246. The temporal mode 5 is consistent with the daily trips of zone 246 before the COVID-19 pandemic, and the pattern changes after March 2020. Remarkable traffic/trip reduction has been reported due to the travel restrictions during the COVID-19 pandemic.
\section{Conclusion}\label{conclusion}
Spatiotemporal data are increasingly accessible due to the remarkable development of sensing technologies and advanced information systems. These data provide unprecedented opportunities for discovering complex dynamic mechanisms and data patterns in nonlinear systems, and it is essential to understand them through data-driven approaches. Introducing reduced-rank VAR models such as DMD to spatiotemporal data has been demonstrated to be efficient for discovering spatiotemporal patterns \cite{tu2013dynamic,kutz2016dynamic}; however, these models are sensitive to the noise in practical data \cite{wu2021challenges} and incapable of characterizing the time-varying system behaviors of the data. Therefore, data-driven frameworks well-suited to spatiotemporal data are still in demand.
To this end, this work presents a time-varying reduced-rank VAR model for discovering interpretable modes from time series, providing insights into modeling real-world time-varying spatiotemporal systems. The model is built on the time-varying VAR and compresses the over-parameterized coefficients by low-rank tensor factorization. To evaluate the performance, we tested our model on several real-world spatiotemporal datasets, including fluid dynamics, SST, USA surface temperature, and NYC taxi trips. The experiments demonstrated that the model can reveal meaningful spatial and temporal patterns underlying the time series through interpretable modes: it discovers the underlying spatial patterns through the spatial modes and identifies the time-varying system behaviors through the temporal modes. Last but not least, it would be of interest to develop more general time-varying autoregression models for spatiotemporal data in the presence of high-dimensional time series.
\appendices
\section{DMD on Fluid Dynamics}\label{appendix_dmd}
To highlight the advantages of the proposed model for discovering interpretable modes from spatiotemporal data, we consider DMD \cite{tu2013dynamic, kutz2016dynamic} as an important baseline for comparison. Fig.~\ref{fluid_mode_dmd} and \ref{fluid_temporal_mode_dmd} show the spatial modes and the temporal modes achieved by the DMD model, respectively. In contrast to the temporal modes achieved by our model (see Fig.~\ref{fluid_temporal_mode}), the temporal modes achieved by DMD, as shown in Fig.~\ref{fluid_temporal_mode_dmd}, can hardly reveal the time-varying system behaviors, failing to capture the changing dynamics in this case. Compared to standard DMD, the multiresolution DMD proposed by \cite{kutz2016multiresolution} still requires a well-designed windowing scheme and assumes a fixed frequency within each window. In contrast, our model is fully time-varying, making it more flexible than the DMD models.
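For reference, the exact-DMD baseline can be sketched as follows. This is a generic textbook implementation (SVD truncation of the snapshot matrix followed by an eigendecomposition of the projected linear operator), not the code used for the experiments, and the synthetic two-mode system is purely illustrative.

```python
import numpy as np

def dmd(X, r):
    """Exact DMD of a snapshot matrix X (n x T), truncated at rank r.
    Returns the DMD eigenvalues and spatial modes."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    # Projected operator approximating the linear map x_{k+1} = A x_k
    A_tilde = U.conj().T @ X2 @ Vh.conj().T / s
    evals, W = np.linalg.eig(A_tilde)
    Phi = X2 @ Vh.conj().T / s @ W   # exact DMD modes
    return evals, Phi

# Synthetic two-mode linear system with known eigenvalues 0.9 and 0.5:
rng = np.random.default_rng(0)
u1, u2 = rng.normal(size=10), rng.normal(size=10)
X = np.array([u1 * 0.9**k + u2 * 0.5**k for k in range(30)]).T
evals, modes = dmd(X, r=2)
assert np.allclose(sorted(evals.real, reverse=True), [0.9, 0.5], atol=1e-6)
```

Because the recovered dynamics is a single constant-coefficient operator, such a decomposition cannot represent coefficients that change over time, which is the gap the time-varying model addresses.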
\begin{figure}[ht!]
\centering
\includegraphics[width=0.35\textwidth]{fluid_mode_dmd.png}
\caption{Mean vorticity field and spatial modes of the fluid flow achieved by DMD with the rank $R=7$. Note that the colorbars of all modes are on the same scale.}
\label{fluid_mode_dmd}
\end{figure}
\begin{figure}[ht!]
\centering
\includegraphics[width=0.35\textwidth]{fluid_temporal_mode_dmd.pdf}
\caption{Temporal modes of the fluid flow achieved by DMD. The seven panels correspond to rank $R=7$.}
\label{fluid_temporal_mode_dmd}
\end{figure}
\ifCLASSOPTIONcompsoc
\section*{Acknowledgments}
\else
\section*{Acknowledgment}
\fi
X. Chen and C. Zhang would like to thank the Institute for Data Valorisation (IVADO) and the Interuniversity Research Centre on Enterprise Networks, Logistics and Transportation (CIRRELT) for providing the PhD Excellence Scholarship to support this study. We also thank Dr. Wenshuo Wang for providing insightful suggestions for improving this work.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Introduction}
Kagome lattices are emerging as an exciting platform for rich emergent
physics, including magnetism, charge density wave (CDW) order, topology, and
superconductivity \cite%
{kagome-01,kagome-04,kagome-05,kagome-03,kagome-06,mag-02,mag-03,mag-11,FeSn-03,FeSn-04,mag-01,mag-04,mag-09,mag-10,mag-05,mag-06,mag-07,mag-08,135-0,135-01,135-02,135-03,135-05,135-06,135-08,135-09,135-10,135-11,135-13,135-14,135-add1,135-add3,135-add4,135-07,135-04,135-add2,135-add7,135-add8,135-12,135-add5,135-add6,CoSn-1,CoSn-2}%
. Three key features associated with the lattice geometry have been
identified in the electronic structure: a flat band derived from the
destructive phase interference of nearest-neighbour hopping, a topological
Dirac crossing at the K point of the Brillouin zone (BZ), and a pair of van Hove
singularities (vHSs) at the M point \cite%
{kagome-03,kagome-06,kagome-04,kagome-05}. When the large density of states from
the kagome flat bands is located near the Fermi level, strong electron
correlations can induce magnetic order \cite{kagome-04,kagome-05}. There are
several magnetic kagome materials, such as FeSn \cite%
{mag-02,mag-03,mag-11,FeSn-03,FeSn-04}, Fe$_{3}$Sn$_{2}$ \cite%
{mag-01,mag-04,mag-09,mag-10}, Mn$_{3}$Sn \cite{mag-05}, Co$_{3}$Sn$_{2}$S$%
_{2}$ \cite{mag-06} and AMn$_{6}$Sn$_{6}$ (A=Tb, Y) \cite{mag-07,mag-08},
which usually exhibit ferromagnetically ordered layers
that are stacked either ferromagnetically or antiferromagnetically.
Meanwhile, when the vHSs are located near the Fermi level, the interaction between
the saddle points and lattice instability can induce a symmetry-breaking CDW
order \cite{kagome-03,kagome-06}, as in the recently discovered class of
kagome materials AV$_{3}$Sb$_{5}$ (A=K, Rb, Cs) \cite%
{135-0,135-01,135-02,135-03,135-04,135-05,135-06,135-07,135-08,135-09,135-10,135-11,135-12,135-13,135-14,135-add1,135-add2,135-add3,135-add4,135-add5,135-add6,135-add7,135-add8}%
. Significant interest has focused on these materials since an unusual
competition between unconventional superconductivity and CDW order was
found \cite%
{135-0,135-01,135-02,135-03,135-04,135-05,135-06,135-07,135-08,135-09,135-10,135-11,135-12,135-13,135-14,135-add1,135-add2,135-add3,135-add4,135-add5,135-add6,135-add7,135-add8}%
. Note that in kagome systems, magnetic order and CDW order have not usually
been observed simultaneously within one material, probably because they
originate from the flat band and the vHSs respectively, which differ
substantially in energy and usually do not both appear near the
Fermi level \cite{2210.06653}.
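All three spectral features above appear already in the minimal nearest-neighbour tight-binding model on the kagome lattice. The sketch below (a textbook model with a generic hopping amplitude $t$, not a parameter fitted to any material discussed here) diagonalizes the standard $3\times 3$ Bloch Hamiltonian and checks the exactly flat band at $E=2t$.

```python
import numpy as np

def kagome_bands(kx, ky, t=1.0):
    """Band energies of the nearest-neighbour kagome tight-binding model
    H = -t sum_<ab> c_a^dag c_b at momentum (kx, ky), sorted ascending."""
    a1 = np.array([1.0, 0.0])
    a2 = np.array([0.5, np.sqrt(3) / 2])
    k = np.array([kx, ky])
    # Cosines over the three half-bond vectors of the three-site unit cell
    c1 = np.cos(k @ a1 / 2)
    c2 = np.cos(k @ a2 / 2)
    c3 = np.cos(k @ (a2 - a1) / 2)
    H = -2 * t * np.array([[0, c1, c2], [c1, 0, c3], [c2, c3, 0]])
    return np.linalg.eigvalsh(H)

# At the zone center the spectrum is (-4t, 2t, 2t): the flat band touches
# a quadratic band at Gamma.
assert np.allclose(kagome_bands(0.0, 0.0), [-4.0, 2.0, 2.0])

# The top band is exactly flat at E = 2t over the whole zone, realizing the
# destructive-interference flat band mentioned in the text.
ks = np.linspace(-np.pi, np.pi, 40)
flat = [kagome_bands(kx, ky)[2] for kx in ks for ky in ks]
assert np.allclose(flat, 2.0)
```

The same model also hosts the Dirac crossing of the two dispersive bands at the K point and vHSs of those bands at M; only the flatness is asserted here because it holds exactly at every momentum.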
Very recently, a CDW order has been discovered deep inside the magnetically
ordered state of the kagome metal FeGe, providing an opportunity for
understanding the interplay between CDW order and magnetism in a kagome lattice
\cite%
{teng2022discovery,yin2022discovery,2203.01930,2206.12033,2210.06359,2210.06653}%
. Isostructural to FeSn \cite{mag-02,mag-03,mag-11,FeSn-03,FeSn-04} and CoSn
\cite{CoSn-1,CoSn-2}, hexagonal FeGe\ consists of stacks of Fe kagome planes
with both in-plane and inter-plane Ge atoms \cite{FeGe-1963}. A sequence of
magnetic phase transitions was discussed in the 1970s-80s \cite%
{FeGe-1972,FeGe-1975,FeGe-1977,FeGe-1978,FeGe-1984,FeGe-1988}.\ Below $T_{N}$
= 410 K, FeGe exhibits collinear A-type antiferromagnetic (AFM) order with
moments aligned ferromagnetically (FM) within each plane and anti-aligned
between layers, and becomes a c-axis double cone AFM structure at a lower
temperature $T_{canting}$ = 60 K \cite{FeGe-1984,FeGe-1988}. Recent neutron
scattering, spectroscopy and transport measurements suggest a CDW in FeGe
that takes place at $T_{CDW}\approx 100$ K, providing the first example of a
CDW in a kagome magnet \cite{teng2022discovery,yin2022discovery}. The CDW in
FeGe enhances the AFM ordered moment and induces an emergent anomalous Hall
effect (AHE) possibly associated with a chiral flux phase similar to that in
AV$_{3}$Sb$_{5}$ \cite{135-07,135-04,135-add2}, suggesting an intimate
correlation between the spin, charge, and lattice degrees of freedom \cite%
{teng2022discovery}. Though AHE is not usually seen in antiferromagnets in
zero field, recent studies have shown that a breaking of combined
time-reversal and lattice symmetries in the antiferromagnetic state results
in the AHE \cite{AHC-01,AHC-02,AHC-03}. In kagome FeGe, the AHE associated
with the CDW order indicates that the combined symmetry breaking occurs via
a structural distortion or a magnetic structure transition below the CDW
temperature. The CDW in FeGe was then extensively studied experimentally and
theoretically \cite%
{teng2022discovery,yin2022discovery,2203.01930,2206.12033,2210.06359,2210.06653}%
, and the CDW wavevectors are identical to that of AV$_{3}$Sb$_{5}$ \cite%
{135-05,135-06,135-08,135-09,135-10,135-11}. However, sharply
different from AV$_{3}$Sb$_{5}$ \cite{135-12,135-add5,135-add6}, all the
theoretically calculated phonon frequencies in FeGe remain positive \cite%
{2206.12033,2210.06359,2210.06653}, and the structural distortion of the CDW
phase remains elusive. The space group was first suggested to be reduced to
$P622$ with a distortion of two non-equivalent Fe atoms \cite{teng2022discovery},
while later works propose that FeGe shares the same space group $P6/mmm$ as
the pristine phase \cite{2206.12033,2210.06359}. Based on
first-principles calculations and scanning tunneling microscopy, Shao $et$ $%
al.$ show that the CDW\ phase of FeGe exhibits a generalized Kekul\'{e}
distortion \cite{kekule1865studies} in the Ge honeycomb atomic layers \cite%
{2206.12033}. Meanwhile, using hard x-ray diffraction and spectroscopy, Miao
$et$ $al.$\ report an experimental discovery of charge dimerization that
coexists with the CDW phase in FeGe \cite{2210.06359}. Therefore,
understanding the magnetism in kagome FeGe, and the intertwined connection
between the complex magnetism and the structural distortion, is an urgent
issue, which we address in this work based on first-principles calculations
and symmetry analysis.
In this work, we systematically analyze the electronic and magnetic
properties of kagome FeGe. Our numerical results show that this material is
a magnetic metal exhibiting large magnetic splitting around 1.8 eV. Based on
combining magnetic force theorem and linear-response approach \cite%
{J-1987,wan2006,wan2009}, the magnetic exchange parameters have been
estimated. The results show that the nearest-neighbor $J_{1}$ is FM and
dominates over the others, while the magnetic interactions between nearest
kagome layers favors AFM, consequently resulting in the A-type AFM
ground-state configuration. Based on these spin exchange parameters, the
calculated N\'{e}el temperature and Curie-Weiss temperature also agree well
with the experiments. Using the method in Ref. \cite{force-1,force-2}, we
also calculate the magnetic anisotropy energy (MAE) to be around 0.066 meV
per Fe atom with easy axis being out of the kagome layers, which is in
reasonable agreement with the experimental results \cite{FeGe-1988}.
However, the double cone magnetic transition at $%
T_{canting}$ = 60 K cannot be reproduced by these reasonable magnetic
parameters. We find that Dzyaloshinskii-Moriya (DM) interactions \cite{DM-D,DM-M}\ are much more efficient than Heisenberg interactions at causing this canted spin
structure. Unfortunately, the space group $P6/mmm$ of the high-temperature phase of FeGe has
inversion symmetry and mirror symmetries, and all of them eliminate the net
contribution of DM interactions to
the double cone magnetic structure.
It is well known that DM interactions are very sensitive to atomic
displacements, while a small structural distortion usually has little effect
on Heisenberg interactions. Therefore, we explore the possible CDW distortions that can explain the low-temperature magnetic structure.
Group-theoretical analysis reveals that there are 68 distinct distortions, corresponding to subgroups of the parent $P6/mmm$\ phase with a $2\times
2\times 2$ supercell \cite%
{teng2022discovery,yin2022discovery,2206.12033,2210.06359}.
Based on this analysis, we find that only
four structures (space groups $P622$ and $P6_{3}22$) lack inversion and mirror symmetries and thus can host a double cone spin structure.
We further propose that these four CDW phases can be identified by Raman spectroscopy from their different
numbers of Raman-active peaks.
\begin{figure*}[!htb]
\centering\includegraphics[width=0.98\textwidth]{STRUCT.jpg}
\caption{Crystal and magnetic structures of FeGe. Yellow and purple spheres
represent Fe and Ge atoms respectively, while arrows denote magnetic moments
of Fe atoms. (a) Top view of FeGe. The exchange interactions $J_{i}\ $denote
the $i$th-nearest-neighbor interactions between Fe ions within kagome
layers. (b) The exchange interactions $J_{ci}$ denote the $i$%
th-nearest-neighbor interactions between Fe ions on the nearest kagome
layers. (c) The exchange interactions $J_{c^{\prime }i}$ denote the $i$%
th-nearest-neighbor interactions between Fe ions on the next nearest kagome
layers. }
\label{crystal}
\end{figure*}
\section{Method}
The first-principles calculations have been carried out using the full
potential linearized augmented plane-wave method as implemented in the
Wien2k package \cite{blaha2001wien2k}. Converged Monkhorst-Pack k-point
meshes are used for each structure. The self-consistent calculations are
considered converged when the difference in the total energy of the crystal
does not exceed $0.01$ mRy. We adopt the local spin-density approximation
(LSDA) \cite{vosko1980accurate} as the exchange-correlation potential, and
include the spin-orbit coupling (SOC) using the second-order variational
procedure \cite%
{koelling1977technique}.
The spin exchange interactions, including Heisenberg and DM\ interactions
\cite{DM-D,DM-M}, are calculated from first principles by combining the
magnetic force theorem and the linear-response approach \cite%
{J-1987,wan2006,wan2009}, which have been successfully applied to various
magnetic materials \cite{wan2006,wan2009,wan2011,wan2021,mywork-1}.
Monte Carlo (MC) simulations are performed with the Metropolis algorithm for
the Heisenberg model \cite{metropolis1949monte,MC-1,MC-2}. The simulation
cell contains $16\times 16\times 16$ unit cells with periodic boundary
conditions. At each temperature we carry out 400000 sweeps to equilibrate
the system, and sample averages are accumulated over 800000 sweeps.
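The Metropolis sweep described above can be sketched in a few lines. The toy below simulates a classical Heisenberg ferromagnet with a single coupling $J$ on a small simple-cubic lattice; it is an illustrative stand-in, not the FeGe spin Hamiltonian, and is far smaller than the cells and sweep counts used in this work.

```python
import numpy as np

def metropolis_heisenberg(T, L=4, J=1.0, sweeps=200, seed=0):
    """One Metropolis chain for a classical Heisenberg ferromagnet on an
    L x L x L simple-cubic lattice with periodic boundaries.
    Returns the magnetization magnitude per spin."""
    rng = np.random.default_rng(seed)
    S = np.zeros((L, L, L, 3))
    S[..., 2] = 1.0  # start from the fully polarized state along z

    def neighbor_sum(i, j, k):
        out = np.zeros(3)
        for d in (-1, 1):
            out += S[(i + d) % L, j, k] + S[i, (j + d) % L, k] + S[i, j, (k + d) % L]
        return out

    for _ in range(sweeps):
        for i in range(L):
            for j in range(L):
                for k in range(L):
                    new = rng.normal(size=3)
                    new /= np.linalg.norm(new)  # propose a random new direction
                    # E = -J sum_<ab> S_a . S_b, so the local energy change is
                    dE = -J * np.dot(new - S[i, j, k], neighbor_sum(i, j, k))
                    if dE <= 0 or rng.random() < np.exp(-dE / T):
                        S[i, j, k] = new
    return np.linalg.norm(S.mean(axis=(0, 1, 2)))

# Well below the ordering temperature the magnetization stays large;
# far above it the spins disorder.
assert metropolis_heisenberg(T=0.2) > 0.7
assert metropolis_heisenberg(T=10.0) < 0.4
```

In the actual calculations the same accept/reject rule is applied with the full set of computed exchange parameters, much larger cells, and long equilibration and sampling runs.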
\section{Results}
\subsection{The electronic and magnetic properties}
\begin{figure*}[tbph]
\centering\includegraphics[width=0.98\textwidth]{band.jpg}
\caption{(a) Band structure of nonmagnetic FeGe from LDA + SOC calculation.
(b),(c) Orbital-resolved band structure of Fe-$d_{xy}/d_{x^{2}-y^{2}}$ and
Fe-$d_{xz}/d_{yz}$ for A-type AFM configuration with spin orientations along
the (001) direction from LSDA + SOC calculation.}
\label{band}
\end{figure*}
The pristine phase of FeGe crystallizes in the hexagonal structure with
space group P6/mmm (No. 191) \cite{FeGe-1963}, where the coordinates of the
atoms are shown in table\ \ref{cdw} and Fig. \ref{crystal}. First, we
perform a nonmagnetic local-density approximation (LDA) + SOC calculation, and
show the band structures in Fig. \ref{band}(a). While Ge-2$p$ states are
mainly located between -6.0 and -2.0 eV, the main contribution around the
Fermi level comes from the 3$d$ orbitals of Fe ions, as shown in Fig. \ref%
{dos} of the Appendix. Consistent with previous first-principles calculations
\cite{2203.01930,2210.06653}, the kagome flat bands around the Fermi level
exhibit a large peak in the density of states, indicating a magnetic
instability. Therefore, LSDA + SOC calculations are performed based on the
A-type AFM configuration and the band structures are shown in Figs. \ref%
{band}(b) and \ref{band}(c). The magnetic moment of Fe ions is estimated to
be 1.55 $\mu _{B}$, which is in agreement with the previous experimental
value around 1.7 $\mu _{B}$\ \cite{FeGe-1975,FeGe-1978}.\ Note that each
kagome layer is FM and the key signatures of the kagome electronic structure
are retained. The magnetic splitting is around 1.8 eV (see Fig. \ref%
{dos} of the Appendix), so that the flat bands above and below the Fermi
level correspond to the spin-minority bands and spin-majority bands,
respectively. Meanwhile, the vHSs that are relatively far from the Fermi
level in the nonmagnetic state, are brought near the Fermi level by the spin
splitting, as shown in Figs. \ref{band}(b) and \ref{band}(c). We present
orbital-resolved band structures, and find that the vHSs near the Fermi
level, marked as vHS-1 and vHS-2 in Figs. \ref{band}(b) and \ref{band}%
(c), are mainly contributed by the $d_{xy}/d_{x^{2}-y^{2}}$ and $%
d_{xz}/d_{yz}$ orbitals respectively. These vHSs near the Fermi level are
suggested to induce symmetry-breaking CDW order in kagome metal FeGe \cite%
{2210.06653}.
\begin{table}[tbp]
\caption{Spin exchange parameters (in meV), including Heisenberg and DM
interactions, of FeGe evaluated from LSDA+SOC calculations. The
Fe-Fe distances and the corresponding number of neighbors NN are presented
in the second and third columns.}%
\begin{tabular}{c|cccc}
\hline\hline
& Distance($\mathring{\mathrm{A}}$) & NN & J & DM \\ \hline
$J_{1}$ & 2.50 & 4 & -41.97 & (0, 0, 0.03) \\
$J_{2}$ & 4.33 & 4 & 5.49 & (0, 0, -0.12) \\ \hline
$J_{c1}$ & 4.05 & 2 & 8.44 & (0, 0, 0) \\
$J_{c2}$ & 4.76 & 8 & -2.04 & (0.01, -0.02, -0.07) \\
$J_{c3}$ & 5.93 & 8 & 1.81 & (0.07, -0.04, -0.09) \\ \hline
$J_{c^{\prime }1}$ & 8.11 & 2 & -0.66 & (0, 0, 0) \\
$J_{c^{\prime }2}$ & 8.49 & 8 & 0.09 & (-0.04, -0.09, -0.03) \\ \hline\hline
\end{tabular}%
\label{JDM}
\end{table}
To quantitatively understand the rich magnetic phenomena in kagome\ FeGe, a
microscopic magnetic model with proper parameters is extremely important.
Based on the calculated electronic structures, we estimate the exchange
parameters including Heisenberg and DM interactions using the
linear-response approach \cite{J-1987,wan2006,wan2009} and summarize the
results in table \ref{JDM}. As shown in Fig. \ref{crystal}, we divide the
magnetic interactions considered into three types: the exchange interactions
$J_{i}$, $J_{ci}$ and $J_{c^{\prime }i}\ $represent the $i$%
th-nearest-neighbor interactions between Fe ions within kagome layers, on
the nearest kagome layers, and on the next nearest kagome layers
respectively. As shown in table \ref{JDM}, the in-plane nearest neighbor
coupling $J_{1}$\ favors FM order and is estimated to be -41.97 meV,
similar in magnitude to the one in kagome FeSn (around -50 meV) \cite%
{mag-03,mag-11,FeSn-03,FeSn-04}. Note that the Fe-Fe distance for $J_{1}$ is 2.5 \r{%
A}\ while the others are all greater than 4 \r{A}. Though there are also AFM
in-plane magnetic interactions such as in-plane next-nearest neighbor
coupling $J_{2}$, they are at least an order of magnitude smaller than $J_{1}
$, resulting in FM order within each kagome layer. The out-of-plane nearest neighbor
coupling $J_{c1}$ is estimated to be 8.44 meV, which stacks the magnetic moments
antiferromagnetically between kagome layers, consequently resulting in the
A-type AFM order in kagome\ FeGe, consistent with the experiment
\cite{FeGe-1972}. It is worth mentioning that SOC always exists and leads
to the DM interactions even in the centrosymmetric compound\ FeGe, since not
all Fe-Fe bonds have inversion symmetry. For the equivalent DM interactions
connected by the crystal symmetry (see table \ref{relation1}-\ref{relation3}
in\ Appendix), we only present one of them as a representative. As shown in
table \ref{JDM}, the in-plane nearest neighbor $\mathbf{D}_{1}$ has the form
of (0, 0, $D_{1}^{z}$) according to the crystal symmetry, and $D_{1}^{z}$ is
estimated to be 0.03 meV. Meanwhile, the in-plane next nearest neighbor $%
\mathbf{D}_{2}$ is estimated to be (0, 0, $-$0.12) meV. For the out-of-plane
nearest neighbor, $\mathbf{D}_{c1}$ is zero because its bond has an
inversion center. The other calculated DM interactions are also listed in
table \ref{JDM}, and most of them are small, on the order of 0.01 meV.
To explore the magnetic anisotropy in kagome FeGe, we consider the MAE with
the expression $E_{MAE}=K_{2}\sin ^{2}\theta +K_{4}\sin ^{4}\theta $ \cite%
{FeGe-1972,FeGe-1978,FeGe-1984,FeGe-1988}\ neglecting terms of order higher
than four, where $\theta $\ is the angle between the magnetic moment and the
z-axis. The values of $K_{2}$ and $K_{4}$\ are estimated to be 0.066 meV and
0.018 meV respectively based on the approach of Ref. \cite{force-1,force-2},
which are in reasonable agreement with the experimental values 0.021 meV
\cite{FeGe-1988} and 0.012 meV \cite{FeGe-1972}. Here $K_{2}$ and $K_{4}$\
are both positive, favoring out-of-plane magnetization, in contrast to the
easy-plane anisotropy in FeSn \cite{mag-11}. Note that
positive $K_{4}$ is the requirement for the stability of the double cone
magnetic structure, which will be discussed below.
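To make the role of $K_{4}$ explicit, minimizing the anisotropy energy alone
(i.e., before including DM interactions) gives
\begin{equation}
\frac{dE_{MAE}}{d\theta }=\sin 2\theta \left( K_{2}+2K_{4}\sin ^{2}\theta
\right) =0,
\end{equation}%
so the stationary points are $\theta =0$, $\theta =\pi /2$, and a cone angle
satisfying $\sin ^{2}\theta _{0}=-K_{2}/(2K_{4})$. With $K_{2},K_{4}>0$ the
minimum is $\theta =0$ (easy axis), and an intermediate cone angle $0<\theta
_{0}<\pi /2$ can only minimize the energy when the effective quadratic
anisotropy becomes negative while a positive $K_{4}$ keeps $E_{MAE}$ bounded
from below.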
According to the experiments \cite{FeGe-1972}, the Curie-Weiss temperature $%
\theta _{CW}$ in kagome\ FeGe is -200 K while the N\'{e}el temperature $%
T_{N} $ is 410 K. The relatively low value of the frustration index $|\theta
_{CW}|$/$T_{N}$ (smaller than 1) reveals the interplay of the FM and AFM
interactions \cite{baral2017synthesis}. As shown in table \ref{JDM}, our
calculated results of spin exchange couplings also verify the coexistence of
the FM and AFM interactions. Based on these calculated spin exchange
parameters, we calculate N\'{e}el temperature and Curie-Weiss temperature by
MC simulations \cite{metropolis1949monte,MC-1,MC-2}. The $\theta _{CW}$ and
$T_{N}$ are calculated to be -219 K and 370 K respectively, which agree
well with the experiment \cite{FeGe-1972}.
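The frustration-index comparison amounts to simple arithmetic, sketched here with the values quoted above:

```python
# (theta_CW, T_N) pairs in K: experimental and our MC-calculated values
experiment = (-200.0, 410.0)
calculated = (-219.0, 370.0)

def frustration_index(theta_cw, t_n):
    # |theta_CW| / T_N; values well above ~1 would indicate strong
    # frustration, while values below 1 point to coexisting FM and AFM
    # couplings rather than geometric frustration
    return abs(theta_cw) / t_n

indices = [frustration_index(*p) for p in (experiment, calculated)]
```

Both the experimental and the calculated index stay below 1, supporting the coexistence picture.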
Similar to the electronic structure of a kagome lattice, the spin wave for a
localized spin model with FM nearest-neighbor magnetic exchange also yields
a flat magnetic band and a Dirac magnon \cite{kagomemagnon}. Using the
calculated spin model parameters, one can obtain the magnon spectrum \cite%
{holstein1940field,wang2021determination}.\ The calculated spin-wave
dispersion along the high-symmetry axis is shown in Fig. \ref{spinwave},
which basically captures the key features of kagome lattice geometry.
Similar to the FeSn case \cite{mag-03,mag-11,FeSn-03,FeSn-04}, strongly
dispersive magnons in the xy-plane extend to about 260 meV, while the magnon
dispersion along the out-of-plane direction has a relatively small bandwidth
of less than 15 meV, reflecting the quasi-two-dimensional magnetic
properties in kagome FeGe. Meanwhile, the Dirac-like node appears at the K
point at about 107 meV, and we find that DM interactions introduce a gap
around 1 meV at the Dirac point, as shown in the inset of Fig. \ref{spinwave}%
. Furthermore, the single-ion anisotropy produces a spin gap of about 2 meV,
which could be verified in future inelastic neutron scattering experiments.
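These band features can be illustrated with the linear spin-wave spectrum of a minimal nearest-neighbor FM kagome Heisenberg model (a toy sketch, not the full parameter set of this work; DM and anisotropy terms are omitted, so the degeneracies remain ungapped):

```python
import numpy as np

def kagome_fm_magnon_bands(k, J=1.0, S=1.0):
    # Linear spin-wave magnon energies of a nearest-neighbor FM kagome
    # Heisenberg model at k = (k1, k2), given in reciprocal-basis
    # coordinates so that k.a_i = 2*pi*k_i.
    p1, p2 = np.pi * k[0], np.pi * k[1]
    # cosines of k against the three half-bond vectors a1/2, a2/2, (a2-a1)/2
    c23, c13, c12 = np.cos(p1), np.cos(p2), np.cos(p2 - p1)
    C = np.array([[0.0, c12, c13],
                  [c12, 0.0, c23],
                  [c13, c23, 0.0]])
    # h(k) = J*S*(4*I - 2*C(k)); C(k) has eigenvalue -1 for every k,
    # which yields the flat band at 6*J*S
    return np.sort(J * S * (4.0 - 2.0 * np.linalg.eigvalsh(C)))
```

At $\Gamma$ the bands are $\{0,\,6JS,\,6JS\}$, while at K $=(2/3,1/3)$ the two dispersive branches meet at the Dirac energy $3JS$ below the flat band; the DM and single-ion terms discussed above then gap these crossings.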
\begin{figure}[tbph]
\centering\includegraphics[width=0.48\textwidth]{spinwave.jpg}
\caption{Calculated spin-wave dispersion curves along the high-symmetry axis
for FeGe. The insets show the spin gap at $\Gamma$ point induced by
easy-axis anisotropy, and the gap located at about 107 meV of K point
induced by DM interactions.}
\label{spinwave}
\end{figure}
\subsection{The double cone magnetic structure}
\begin{table*}[tbp]
\caption{Four types of $2\times 2\times 2$ CDW phases which can lead to
non-zero DM contribution to double cone spin structure. The corresponding
Wyckoff positions and the coordinates of the atoms in the pristine phase and
these four CDW phases are summarized. }
\label{cdw}%
\begin{tabular}{ccccccccccccccc}
\hline\hline
\multicolumn{3}{c|}{Pristine phase(P6/mmm)} & \multicolumn{3}{c|}{$P622$%
(type \uppercase\expandafter{\romannumeral1})} & \multicolumn{3}{c|}{$P622$%
(type \uppercase\expandafter{\romannumeral2})} & \multicolumn{3}{c|}{$P6$$%
_{3}$$22$(type \uppercase\expandafter{\romannumeral1})} & \multicolumn{3}{c}{%
$P6$$_{3}$$22$(type \uppercase\expandafter{\romannumeral2})} \\ \hline
& WP & \multicolumn{1}{c|}{Coordinates} & & WP & \multicolumn{1}{c|}{
Coordinates} & & WP & \multicolumn{1}{c|}{Coordinates} & & WP &
\multicolumn{1}{c|}{Coordinates} & & WP & Coordinates \\ \hline
\multirow{4}{*}{Ge1} & \multirow{4}{*}{1a} & \multicolumn{1}{c|}{%
\multirow{4}{*}{(0, 0, 0)}} & Ge1 & 1a & \multicolumn{1}{c|}{(0, 0, 0)} & Ge1
& 2e & \multicolumn{1}{c|}{(0, 0, z$_{1}$)} & Ge1 & 2a & \multicolumn{1}{c|}{
(0, 0, 0)} & Ge1 & 2b & (0, 0, 1/4) \\
& & \multicolumn{1}{c|}{} & Ge2 & 1b & \multicolumn{1}{c|}{(0, 0, 1/2)} &
Ge2 & 6i & \multicolumn{1}{c|}{(1/2, 0, z$_{2}$)} & Ge2 & 6g &
\multicolumn{1}{c|}{(x$_{1}$, 0, 0)} & Ge2 & 6h & (x$_{1}$, 2x$_{1}$, 1/4)
\\
& & \multicolumn{1}{c|}{} & Ge3 & 3f & \multicolumn{1}{c|}{(0, 1/2, 0)} &
& & \multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & & & \\
& & \multicolumn{1}{c|}{} & Ge4 & 3g & \multicolumn{1}{c|}{(0, 1/2, 1/2)} &
& & \multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & & & \\ \hline
\multirow{4}{*}{Ge2} & \multirow{4}{*}{2d} & \multicolumn{1}{c|}{%
\multirow{4}{*}{(1/3, 2/3, 1/2)}} & Ge5 & 4h & \multicolumn{1}{c|}{(1/3,
2/3, z$_{1}$)} & Ge3 & 2c & \multicolumn{1}{c|}{(1/3, 2/3, 0)} & Ge3 & 2c &
\multicolumn{1}{c|}{(1/3, 2/3, 1/4)} & Ge3 & 4f & (1/3, 2/3, z$_{2}$) \\
& & \multicolumn{1}{c|}{} & Ge6 & 12n & \multicolumn{1}{c|}{(x$_{2}$, y$%
_{2} $, z$_{2}$)} & Ge4 & 2d & \multicolumn{1}{c|}{(1/3, 2/3,1/2)} & Ge4 & 2d
& \multicolumn{1}{c|}{(1/3, 2/3, 3/4)} & Ge4 & 12i & (x$_{3}$, y$_{3}$, z$%
_{3}$) \\
& & \multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & Ge5 & 6l &
\multicolumn{1}{c|}{(x$_{3}$, 2x$_{3}$, 0)} & Ge5 & 6h & \multicolumn{1}{c|}{
(x$_{2}$, 2x$_{2}$, 1/4)} & & & \\
& & \multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & Ge6 & 6m &
\multicolumn{1}{c|}{(x$_{4}$, 2x$_{4}$, 1/2)} & Ge6 & 6h &
\multicolumn{1}{c|}{(x$_{3}$, 2x$_{3}$, 1/4)} & & & \\ \hline
\multirow{4}{*}{Fe} & \multirow{4}{*}{3f} & \multicolumn{1}{c|}{%
\multirow{4}{*}{(1/2, 0, 0)}} & Fe1 & 6j & \multicolumn{1}{c|}{(x$_{3}$, 0,
0)} & Fe1 & 12n & \multicolumn{1}{c|}{(x$_{5}$, y$_{5}$, z$_{5}$)} & Fe1 & 6g
& \multicolumn{1}{c|}{(x$_{4}$, 0, 0)} & Fe1 & 6h & (x$_{4}$, 2x$_{4}$, 1/4)
\\
& & \multicolumn{1}{c|}{} & Fe2 & 6k & \multicolumn{1}{c|}{(x$_{4}$, 0, 1/2)
} & Fe2 & 12n & \multicolumn{1}{c|}{(x$_{6}$, y$_{6}$, z$_{6}$)} & Fe2 & 6g
& \multicolumn{1}{c|}{(x$_{5}$, 0, 0)} & Fe2 & 6h & (x$_{5}$, 2x$_{5}$, 1/4)
\\
& & \multicolumn{1}{c|}{} & Fe3 & 6l & \multicolumn{1}{c|}{(x$_{5}$, 2x$%
_{5} $, 0)} & & & \multicolumn{1}{c|}{} & Fe3 & 12i & \multicolumn{1}{c|}{
(x$_{6}$, y$_{6}$, z$_{6}$)} & Fe3 & 12i & (x$_{6}$, y$_{6}$, z$_{6}$) \\
& & \multicolumn{1}{c|}{} & Fe4 & 6m & \multicolumn{1}{c|}{(x$_{6}$, 2x$%
_{6} $, 1/2)} & & & \multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} &
& & \\ \hline\hline
\end{tabular}%
\end{table*}
At $T_{canting}$ = 60 K, the kagome lattice FeGe develops
a c-axis double cone AFM structure \cite%
{FeGe-1972,FeGe-1975,FeGe-1978,FeGe-1984,FeGe-1988} where the magnetic
ground state could be written as Eq. (\ref{cone}) in Appendix. Considering
the magnetic interactions and the MAE, the total energy of the double cone
spin structure could be written as Eq. (\ref{etot}) in Appendix. When DM
interactions are not considered, the extremum condition of the total energy
gives the equilibrium value of wave vector $\delta $ and the cone half angle
$\theta $ (i.e. Eq. (\ref{cosq}) and (\ref{sin}) in Appendix)
\begin{eqnarray}
\cos \delta &=&\frac{\sum_{i}N_{ci}J_{ci}}{4\sum_{i}N_{c^{\prime
}i}J_{c^{\prime }i}} \\
\sin ^{2}\theta &=&-\frac{K_{2}-\frac{1}{2N}\sum_{i}N_{c^{\prime
}i}J_{c^{\prime }i}\delta ^{4}}{2K_{4}} \label{mainsin}
\end{eqnarray}%
\ \
Note that the minimum of the total energy requires that the second
derivative of Eq. (\ref{etot}) in Appendix is positive, thus $K_{4}$ must be positive.
Hence $K_{2}-\frac{1}{2N}\sum_{i}N_{c^{\prime }i}J_{c^{\prime }i}\delta ^{4}$
(i.e. the numerator of Eq. (\ref{mainsin})) must be negative. However, our
calculated magnetic parameters cannot satisfy this condition. The value of
the wave vector $\delta $ is small in experimental
measurements (0.17 in Ref. \cite{FeGe-1972} and 0.25 in Ref. \cite{FeGe-1977}%
), thus $\delta ^{4}$ is around 0.001. Meanwhile, the value of $\frac{1}{2N}%
\sum_{i}N_{c^{\prime }i}J_{c^{\prime }i}$ is of the order of 1 meV, so the
numerator stays positive and these reasonable magnetic parameters obviously
cannot explain the double cone magnetic structure \cite{FeGe-1972}.
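The order-of-magnitude argument can be made explicit with a short numerical sketch (the 1 meV coupling sum is an assumed representative scale, not a fitted parameter):

```python
K2, K4 = 0.066, 0.018   # anisotropy constants (meV)
j_sum = 1.0             # assumed scale of (1/2N)*sum_i N_c'i*J_c'i (meV)

sin2_theta = []
for delta in (0.17, 0.25):                # experimental wave vectors
    numerator = K2 - j_sum * delta ** 4   # delta^4 ~ 1e-3 keeps this positive
    sin2_theta.append(-numerator / (2 * K4))
# sin^2(theta) comes out negative in both cases: Heisenberg couplings
# alone cannot open the cone.
```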
We thus consider the effect of DM interactions on double cone spin
structure. Since the exchange interactions between two next nearest neighbor
kagome layers are relatively small, we only consider the Heisenberg and DM
interactions between two nearest neighbor kagome layers, i.e. $J_{ci}$ and $%
\mathbf{D}_{ci}$. We find that the wave vector $\delta $ and the cone half angle
$\theta $ then take the following forms (i.e. Eq. (\ref{tanq}) and (\ref{sin2}) in
Appendix)%
\begin{eqnarray}
\tan \delta &=&\frac{\sum_{i,j}D_{ci,j}^{z}}{\sum_{i}N_{ci}J_{ci}} \\
\sin ^{2}\theta &=&-\frac{K_{2}-\frac{1}{2N}\sum_{i,j}D_{ci,j}^{z}\delta }{%
2K_{4}} \label{mainsin2}
\end{eqnarray}
It should be noted that, comparing Eq. (\ref{mainsin}) and (\ref{mainsin2}),
DM interactions are more efficient than Heisenberg interactions in causing the
double cone spin structure since $\delta $ is small. Though the space group $%
P6/mmm$ of high-temperature phase in FeGe has a global inversion center, not
all Fe-Fe bonds have inversion symmetry and DM interactions could exist.
However, according to the inversion symmetry of space group $P6/mmm$, the
total contribution of DM interactions to the energy of double cone magnetic
structure in Eq. (\ref{etot}) is absent, i.e. $\sum_{i,j}D_{ci,j}^{z}=0$
(see Appendix D). Meanwhile, mirror symmetries in space group $P6/mmm$ would
also eliminate the contribution of DM interactions based on the symmetry
analysis. Therefore, DM interactions have no net contribution to double cone
magnetic structure with the symmetry of high-temperature phase. For the CDW
phases with the space group of $P6/mmm$ suggested by Ref. \cite%
{2206.12033,2210.06359} (the first two structures of the table \ref{cdw1} in
Appendix), the total contribution of DM interactions is still absent and
cannot explain the magnetic ground state of double cone spin structure.
\subsection{The interplay of CDW and double cone structure}
As mentioned above, DM interactions play a more important role in the
double cone spin structure. Meanwhile, they are very sensitive to atomic
displacements. Therefore, in the following we explore the CDW phases with
symmetry-allowed DM contribution to double cone spin structure which may
explain the canted magnetic ground state.
The $2\times 2\times 2$ supercell structure of CDW phase (compared with the
nonmagnetic pristine phase) is suggested experimentally \cite%
{teng2022discovery,yin2022discovery,2206.12033,2210.06359}. Considering all
CDW phases whose associated point group is $D_{6h}$ itself or one of its
maximal subgroups, we find 68 different possible CDW phases which are the subgroups of the
parent $P6/mmm$ phase with $2\times 2\times 2$ supercell (see details in
Appendix D). The corresponding relations of atomic positions between the
pristine phase and these proposed CDW phases are all summarized in table \ref%
{cdw1}-\ref{cdw5} of Appendix.
Note that the inversion symmetry and mirror symmetries would all eliminate the net contribution of DM interactions as discussed
above. We find that among these 68 proposed CDW phases, only four distorted
structures break all these symmetries, and can lead to a non-zero DM
contribution in Eq. (\ref{etot}) of Appendix. We list the corresponding
Wyckoff positions (WP) and the coordinates of the atoms in the pristine
phase and these four CDW phases in table \ref{cdw}. They come from two
space groups $P622$ and $P6_{3}22$. It should be mentioned that there are
two different CDW phases for each of these two space groups, which are
labeled as (type I) and (type II) in table \ref{cdw}. Note that the CDW
phase with $P622$ space group is also suggested in Ref. \cite%
{teng2022discovery}.
Raman spectroscopy is a fast and usually non-destructive technique which can
be used to characterize the structural distortion of materials. Based on the
atomic coordinates in table \ref{cdw}, we predict the irreducible
representation of the Raman active modes of these four proposed CDW phases
using symmetry analysis \cite{Bradley}. For the $P622$ (type I) and (type II) CDW phases, the
Raman active modes are 8$\mathrm{{A_{1}}}$ + 26$\mathrm{{E_{1}}}$ + 22$%
\mathrm{{E_{2}}}$ and 10$\mathrm{{A_{1}}}$ + 26$\mathrm{{E_{1}}}$ + 22$%
\mathrm{{E_{2}}}$, respectively. Meanwhile, for $P6_{3}22$ (type I) and
(type II), the Raman active modes are 8$\mathrm{{A_{1}}}$ + 24$\mathrm{{E_{1}%
}}$ + 24$\mathrm{{E_{2}}}$ and 10$\mathrm{{A_{1}}}$ + 24$\mathrm{{E_{1}}}$ +
24$\mathrm{{E_{2}}}$, respectively. Note that even within the
same symmetry of space group $P622$, the different structural distortion of
CDW phases $P622$ (type I) and (type II) could result in the different
number of Raman active modes (56 and 58 respectively), which could be
identified by Raman spectroscopy.
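For reference, the mode counts quoted above can be tallied directly (each irreducible representation counted once, as in the text):

```python
# Raman-active mode content predicted for the four CDW candidates
raman_modes = {
    "P622-I":   {"A1": 8,  "E1": 26, "E2": 22},
    "P622-II":  {"A1": 10, "E1": 26, "E2": 22},
    "P6322-I":  {"A1": 8,  "E1": 24, "E2": 24},
    "P6322-II": {"A1": 10, "E1": 24, "E2": 24},
}
totals = {phase: sum(irreps.values()) for phase, irreps in raman_modes.items()}
# The type I / type II pairs differ by two A1 modes (56 vs 58 in total),
# a difference accessible to Raman spectroscopy.
```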
\section{CONCLUSION}
In conclusion, we systematically analyze the electronic and magnetic
properties of kagome FeGe. Our numerical results show that this material is
a magnetic metal exhibiting a large magnetic splitting of around 1.8 eV. The
magnetic splitting pushes the flat bands away from the Fermi level and brings two
vHSs near the Fermi level. We estimate the magnetic parameters, and find
that the ferromagnetic nearest-neighbor $J_{1}$ dominates over the others,
while the magnetic interactions between nearest kagome layers favor
antiferromagnetic alignment. Based on these spin exchange parameters, the calculated N%
\'{e}el temperature and Curie-Weiss temperature also agree well with the
experiments. Furthermore, the magnetic excitation spectra are calculated
using linear spin wave theory and a spin gap of about 2 meV is predicted. Note
that the double cone magnetic transition at a lower temperature cannot be
reproduced by these reasonable magnetic parameters. Meanwhile, due to the
inversion symmetry and mirror symmetries in the space group $P6/mmm$
of high-temperature phase, the total contribution of DM interactions to the
double cone magnetic structure is absent. Since DM interactions are very
sensitive to small atomic displacements and symmetry restrictions, and also much more
efficient than Heisenberg interactions in causing this canted spin
structure, we propose that the double cone spin structure may arise from the
structural distortion. We explore 68 possible CDW phases of kagome FeGe
which are subgroups of the pristine phase with a $2\times 2\times 2$
supercell, and propose four symmetry-allowed CDW structures that have a
non-zero DM contribution and may result in the double cone spin structure.
These four CDW phases belong to two space groups $P622$ and $P6_{3}22$, and
we further propose that they can be identified from their different numbers
of Raman active peaks. Therefore, we believe that symmetry analysis
plays an important role in exploring the possible structural
distortion in complex magnetic configurations.
\section{Acknowledgements}
This work was supported by the NSFC (No. 12188101, 11834006, 12004170,
11790311, 51721001), National Key R\&D Program of China (No.
2018YFA0305704), Natural Science Foundation of Jiangsu Province, China
(Grant No. BK20200326), and the excellent programme in Nanjing University.
Xiangang Wan also acknowledges the support from the Tencent Foundation
through the XPLORER PRIZE.
\section{Appendix}
\subsection{The density of states in kagome FeGe}
The partial density of states (DOS) of FeGe from LSDA + SOC
calculations is shown in Fig. \ref{dos}.
\begin{figure}[tbph]
\centering\includegraphics[width=0.48\textwidth]{dos.jpg}
\caption{Partial DOS of FeGe from LSDA + SOC
calculations. The Fermi energy is set to zero. (a) and (b) represent the
spin-up and spin-down channel of $d$ orbitals in Fe1 atom located at
(1/2,0,0), while (c) represents the DOS of Ge-$p$ orbitals.}
\label{dos}
\end{figure}
\subsection{The symmetry restrictions on the magnetic interactions}
\begin{table}[tbh]
\caption{The distances, the bond information and the symmetry restricted
interactions of corresponding Fe ions within xy-planes. Here $n$, $n^{\prime
}$ and $R_{l}$\ correspond to $\mathbf{J}_{\boldsymbol{\protect\tau }_{n},%
\boldsymbol{\protect\tau _{n^{\prime }}+}\mathbf{R}_{l}}$, where $R_{l}$ and
$\protect\tau _{n}$ represent the lattice translation vector and the
position of magnetic ions in the lattice basis.\ Three magnetic ions are
located at $\protect\tau _{1}$ (1/2, 0, 0), $\protect\tau _{2}$ (0, 1/2, 0),
and $\protect\tau _{3}$ (1/2, 1/2, 0). The equivalent $\mathbf{D}_{i}$'s are
labeled as the sub-index of $j$, i.e. the $\mathbf{D}_{i,j}$ in the table.}
\label{relation1}%
\begin{tabular}{c|ccc|cc}
\hline\hline
Distance($\mathring{\mathrm{A}}$) & $n$ & $n^{\prime }$ & $R_{l}$ & $J$ & DM
\\ \hline
2.50 & 3 & 1 & (0,1,0) & $J_{1}$ & $\mathbf{D}_{1,1}(0,0,D_{1}^{z})$ \\
& 1 & 2 & (0,-1,0) & $J_{1}$ & $\mathbf{D}_{1,2}(0,0,D_{1}^{z})$ \\
& 2 & 3 & (0,0,0) & $J_{1}$ & $\mathbf{D}_{1,3}(0,0,D_{1}^{z})$ \\
& 3 & 1 & (0,0,0) & $J_{1}$ & $\mathbf{D}_{1,4}(0,0,D_{1}^{z})$ \\
& 1 & 2 & (1,0,0) & $J_{1}$ & $\mathbf{D}_{1,5}(0,0,D_{1}^{z})$ \\
& 2 & 3 & (-1,0,0) & $J_{1}$ & $\mathbf{D}_{1,6}(0,0,D_{1}^{z})$ \\ \hline
4.33 & 1 & 2 & (1,-1,0) & $J_{2}$ & $\mathbf{D}_{2,1}(0,0,D_{2}^{z})$ \\
& 2 & 3 & (0,1,0) & $J_{2}$ & $\mathbf{D}_{2,2}(0,0,D_{2}^{z})$ \\
& 3 & 1 & (-1,0,0) & $J_{2}$ & $\mathbf{D}_{2,3}(0,0,D_{2}^{z})$ \\
& 1 & 2 & (0,0,0) & $J_{2}$ & $\mathbf{D}_{2,4}(0,0,D_{2}^{z})$ \\
& 2 & 3 & (-1,-1,0) & $J_{2}$ & $\mathbf{D}_{2,5}(0,0,D_{2}^{z})$ \\
& 3 & 1 & (1,1,0) & $J_{2}$ & $\mathbf{D}_{2,6}(0,0,D_{2}^{z})$ \\
\hline\hline
\end{tabular}%
\end{table}
\begin{table}[tbh!]
\caption{The distances, the bond information and the symmetry restricted
interactions of corresponding Fe ions between nearest-neighbor $(001)$%
-planes. Here $n$, $n^{\prime }$ and $R_{l}$\ correspond to $\mathbf{J}_{%
\boldsymbol{\protect\tau }_{n},\boldsymbol{\protect\tau _{n^{\prime }}+}%
\mathbf{R}_{l}}$, where $R_{l}$ and $\protect\tau _{n}$ represent the
lattice translation vector and the position of magnetic ions in the lattice
basis. Three magnetic ions are located at $\protect\tau _{1}$ (1/2, 0, 0), $%
\protect\tau _{2}$ (0, 1/2, 0), and $\protect\tau _{3}$ (1/2, 1/2, 0). The
equivalent $\mathbf{D}_{ci}$'s are labeled as the sub-index of $j$, i.e. the
$\mathbf{D}_{ci,j}$ in the table.}
\label{relation2}%
\begin{tabular}{c|ccc|cc}
\hline\hline
Distance($\mathring{\mathrm{A}}$) & $n$ & $n^{\prime }$ & $R_{l}$ & $J$ & DM
\\ \hline
4.05 & 1 & 1 & (0,0,1) & $J_{c1}$ & $\mathbf{D}_{c1,1}(0,0,0)$ \\
& 2 & 2 & (0,0,1) & $J_{c1}$ & $\mathbf{D}_{c1,2}(0,0,0)$ \\
& 3 & 3 & (0,0,1) & $J_{c1}$ & $\mathbf{D}_{c1,3}(0,0,0)$ \\ \hline
4.76 & 3 & 1 & (0,0,1) & $J_{c2}$ & $\mathbf{D}_{c2,1}(D_{c2}^{x},-\sqrt{3}%
D_{c2}^{x},D_{c2}^{z})$ \\
& 1 & 2 & (1,0,1) & $J_{c2}$ & $\mathbf{D}_{c2,2}(D_{c2}^{x},\sqrt{3}%
D_{c2}^{x},D_{c2}^{z})$ \\
& 2 & 3 & (-1,0,1) & $J_{c2}$ & $\mathbf{D}%
_{c2,3}(-2D_{c2}^{x},0,D_{c2}^{z}) $ \\
& 3 & 1 & (0,1,1) & $J_{c2}$ & $\mathbf{D}_{c2,4}(-D_{c2}^{x},\sqrt{3}%
D_{c2}^{x},D_{c2}^{z})$ \\
& 1 & 2 & (0,-1,1) & $J_{c2}$ & $\mathbf{D}_{c2,5}(-D_{c2}^{x},-\sqrt{3}%
D_{c2}^{x},D_{c2}^{z}) $ \\
& 2 & 3 & (0,0,1) & $J_{c2}$ & $\mathbf{D}_{c2,6}(2D_{c2}^{x},0,D_{c2}^{z})$
\\
& 1 & 3 & (0,-1,1) & $J_{c2}$ & $\mathbf{D}_{c2,7}(-D_{c2}^{x},\sqrt{3}%
D_{c2}^{x},-D_{c2}^{z}) $ \\
& 2 & 1 & (0,1,1) & $J_{c2}$ & $\mathbf{D}_{c2,8}(-D_{c2}^{x},-\sqrt{3}%
D_{c2}^{x},-D_{c2}^{z}) $ \\
& 3 & 2 & (0,0,1) & $J_{c2}$ & $\mathbf{D}_{c2,9}(2D_{c2}^{x},0,-D_{c2}^{z})$
\\
& 1 & 3 & (0,0,1) & $J_{c2}$ & $\mathbf{D}_{c2,10}(D_{c2}^{x},-\sqrt{3}%
D_{c2}^{x},-D_{c2}^{z})$ \\
& 2 & 1 & (-1,0,1) & $J_{c2}$ & $\mathbf{D}_{c2,11}(D_{c2}^{x},\sqrt{3}%
D_{c2}^{x},-D_{c2}^{z}) $ \\
& 3 & 2 & (1,0,1) & $J_{c2}$ & $\mathbf{D}%
_{c2,12}(-2D_{c2}^{x},0,-D_{c2}^{z})$ \\ \hline
5.93 & 2 & 1 & (-1,1,1) & $J_{c3}$ & $\mathbf{D}_{c3,1}(-\sqrt{3}%
D_{c3}^{y},D_{c3}^{y},-D_{c3}^{z})$ \\
& 3 & 2 & (0,-1,1) & $J_{c3}$ & $\mathbf{D}%
_{c3,2}(0,-2D_{c3}^{y},-D_{c3}^{z})$ \\
& 1 & 3 & (1,0,1) & $J_{c3}$ & $\mathbf{D}_{c3,3}(\sqrt{3}%
D_{c3}^{y},D_{c3}^{y},-D_{c3}^{z})$ \\
& 2 & 1 & (0,0,1) & $J_{c3}$ & $\mathbf{D}_{c3,4}(\sqrt{3}%
D_{c3}^{y},-D_{c3}^{y},-D_{c3}^{z})$ \\
& 3 & 2 & (1,1,1) & $J_{c3}$ & $\mathbf{D}_{c3,5}(0,2D_{c3}^{y},-D_{c3}^{z})$
\\
& 1 & 3 & (-1,-1,1) & $J_{c3}$ & $\mathbf{D}_{c3,6}(-\sqrt{3}%
D_{c3}^{y},-D_{c3}^{y},-D_{c3}^{z})$ \\
& 1 & 2 & (0,0,1) & $J_{c3}$ & $\mathbf{D}_{c3,7}(\sqrt{3}%
D_{c3}^{y},-D_{c3}^{y},D_{c3}^{z})$ \\
& 2 & 3 & (-1,-1,1) & $J_{c3}$ & $\mathbf{D}%
_{c3,8}(0,2D_{c3}^{y},D_{c3}^{z}) $ \\
& 3 & 1 & (1,1,1) & $J_{c3}$ & $\mathbf{D}_{c3,9}(-\sqrt{3}%
D_{c3}^{y},-D_{c3}^{y},D_{c3}^{z})$ \\
& 1 & 2 & (1,-1,1) & $J_{c3}$ & $\mathbf{D}_{c3,10}(-\sqrt{3}%
D_{c3}^{y},D_{c3}^{y},D_{c3}^{z})$ \\
& 2 & 3 & (0,1,1) & $J_{c3}$ & $\mathbf{D}%
_{c3,11}(0,-2D_{c3}^{y},D_{c3}^{z}) $ \\
& 3 & 1 & (-1,0,1) & $J_{c3}$ & $\mathbf{D}_{c3,12}(\sqrt{3}%
D_{c3}^{y},D_{c3}^{y},D_{c3}^{z})$ \\ \hline\hline
\end{tabular}%
\end{table}
\begin{table}[tbh]
\caption{The distances, the bond information and the symmetry restricted
interactions of corresponding Fe ions between next-nearest-neighbor $(001)$%
-planes. Here $n$, $n^{\prime }$ and $R_{l}$\ correspond to $\mathbf{J}_{%
\boldsymbol{\protect\tau }_{n},\boldsymbol{\protect\tau _{n^{\prime }}+}%
\mathbf{R}_{l}}$, where $R_{l}$ and $\protect\tau _{n}$ represent the
lattice translation vector and the position of magnetic ions in the lattice
basis.\ Three magnetic ions are located at $\protect\tau _{1}$ (1/2, 0, 0), $%
\protect\tau _{2}$ (0, 1/2, 0), and $\protect\tau _{3}$ (1/2, 1/2, 0). The
equivalent $\mathbf{D}_{c^{\prime }i}$'s are labeled as the sub-index of $j$%
, i.e. the $\mathbf{D}_{c^{\prime }i,j}$ in the table.}
\label{relation3}%
\begin{tabular}{c|ccc|cc}
\hline\hline
Distance($\mathring{\mathrm{A}}$) & $n$ & $n^{\prime }$ & $R_{l}$ & $J$ & DM
\\ \hline
8.11 & 1 & 1 & (0,0,2) & $J_{c^{\prime }1}$ & $\mathbf{D}_{c^{\prime
}1,1}(0,0,0)$ \\
& 2 & 2 & (0,0,2) & $J_{c^{\prime }1}$ & $\mathbf{D}_{c^{\prime }1,2}(0,0,0)$
\\
& 3 & 3 & (0,0,2) & $J_{c^{\prime }1}$ & $\mathbf{D}_{c^{\prime }1,3}(0,0,0)$
\\ \hline
8.49 & 2 & 1 & (0,1,2) & $J_{c^{\prime }2}$ & $\mathbf{D}_{c^{\prime
}2,1}(D_{c^{\prime }2}^{x},\sqrt{3}D_{c^{\prime }2}^{x},D_{c^{\prime
}2}^{z}) $ \\
& 3 & 2 & (0,0,2) & $J_{c^{\prime }2}$ & $\mathbf{D}_{c^{\prime
}2,2}(-2D_{c^{\prime }2}^{x},0,D_{c^{\prime }2}^{z})$ \\
& 1 & 3 & (0,-1,2) & $J_{c^{\prime }2}$ & $\mathbf{D}_{c^{\prime
}2,3}(D_{c^{\prime }2}^{x},-\sqrt{3}D_{c^{\prime }2}^{x},D_{c^{\prime
}2}^{z})$ \\
& 2 & 1 & (-1,0,2) & $J_{c^{\prime }2}$ & $\mathbf{D}_{c^{\prime
}2,4}(-D_{c^{\prime }2}^{x},-\sqrt{3}D_{c^{\prime }2}^{x},D_{c^{\prime
}2}^{z})$ \\
& 3 & 2 & (1,0,2) & $J_{c^{\prime }2}$ & $\mathbf{D}_{c^{\prime
}2,5}(2D_{c^{\prime }2}^{x},0,D_{c^{\prime }2}^{z})$ \\
& 1 & 3 & (0,0,2) & $J_{c^{\prime }2}$ & $\mathbf{D}_{c^{\prime
}2,6}(-D_{c^{\prime }2}^{x},\sqrt{3}D_{c^{\prime }2}^{x},D_{c^{\prime
}2}^{z})$ \\
& 1 & 2 & (1,0,2) & $J_{c^{\prime }2}$ & $\mathbf{D}_{c^{\prime
}2,7}(-D_{c^{\prime }2}^{x},-\sqrt{3}D_{c^{\prime }2}^{x},-D_{c^{\prime
}2}^{z})$ \\
& 2 & 3 & (-1,0,2) & $J_{c^{\prime }2}$ & $\mathbf{D}_{c^{\prime
}2,8}(2D_{c^{\prime }2}^{x},0,-D_{c^{\prime }2}^{z})$ \\
& 3 & 1 & (0,0,2) & $J_{c^{\prime }2}$ & $\mathbf{D}_{c^{\prime
}2,9}(-D_{c^{\prime }2}^{x},\sqrt{3}D_{c^{\prime }2}^{x},-D_{c^{\prime
}2}^{z})$ \\
& 1 & 2 & (0,-1,2) & $J_{c^{\prime }2}$ & $\mathbf{D}_{c^{\prime
}2,10}(D_{c^{\prime }2}^{x},\sqrt{3}D_{c^{\prime }2}^{x},-D_{c^{\prime
}2}^{z})$ \\
& 2 & 3 & (0,0,2) & $J_{c^{\prime }2}$ & $\mathbf{D}_{c^{\prime
}2,11}(-2D_{c^{\prime }2}^{x},0,-D_{c^{\prime }2}^{z})$ \\
& 3 & 1 & (0,1,2) & $J_{c^{\prime }2}$ & $\mathbf{D}_{c^{\prime
}2,12}(D_{c^{\prime }2}^{x},-\sqrt{3}D_{c^{\prime }2}^{x},-D_{c^{\prime
}2}^{z})$ \\ \hline\hline
\end{tabular}%
\end{table}
Here we consider a general pairwise spin model
\begin{equation}
H=\sum_{l,n,l^{\prime },n^{\prime }}\mathbf{S}_{ln}\mathbf{J}_{\mathbf{R}%
_{l}+\boldsymbol{\tau }_{n},\mathbf{R}_{l^{\prime }}+\boldsymbol{\tau
_{n^{\prime }}}}\mathbf{S}_{l^{\prime }n^{\prime }} \label{spinmodel}
\end{equation}
where $\mathbf{J}_{\mathbf{R}_{l}+\boldsymbol{\tau }_{n},\mathbf{R}%
_{l^{\prime }}+\boldsymbol{\tau _{n^{\prime }}}}$, a $3\times 3$ tensor,
represents the spin exchange parameters. $\mathbf{R}_{l}$ and $\boldsymbol{%
\tau} _{n}$ represent the lattice translation vector and the position of
magnetic ions in the lattice basis, and $\mathbf{S}_{ln}$ denotes the spin at
the site $\mathbf{R}_{l}+\boldsymbol{\tau }_{n}$. Translation symmetry
restricts $\mathbf{J}_{\mathbf{R}_{l}+\boldsymbol{\tau }_{n},\mathbf{R}%
_{l^{\prime }}+\boldsymbol{\tau _{n^{\prime }}}}$ to reduce to $%
\mathbf{J}_{\boldsymbol{\tau }_{n},\boldsymbol{\tau _{n^{\prime }}+}\mathbf{R%
}_{l^{\prime \prime }}}$ where $R_{l^{\prime \prime }}=R_{l^{\prime }}-R_{l}$%
, irrespective of the starting unit cell. Other spatial symmetries will also
give restrictions on the magnetic exchange interactions. We consider a
general space group element $\{\alpha |\mathbf{t}\}$, where the left part
represents the rotation and the right part means the lattice translation.
Supposing under this symmetry operator, $\mathbf{R}_{m}+\boldsymbol{\tau }%
_{p}$ and $\mathbf{R}_{m^{\prime }}+\boldsymbol{\tau _{p^{\prime }}}$
transfer to $\mathbf{R}_{l}+\boldsymbol{\tau }_{n}$ and $\mathbf{R}%
_{l^{\prime }}+\boldsymbol{\tau _{n^{\prime }}}$, respectively, meanwhile
the transformation of spin becomes $\mathbf{S}_{mp}=M(\alpha )\mathbf{S}%
_{ln} $, where $M(\alpha )$ is the representation matrix of the proper
rotation part of the operation $\alpha $ in the coordinate system, we get
the following expression:
\begin{eqnarray}
H &=&\sum_{l,n,l^{\prime },n^{\prime }}\mathbf{S}_{ln}\mathbf{J}_{\mathbf{R}%
_{l}+\boldsymbol{\tau }_{n},\mathbf{R}_{l^{\prime }}+\boldsymbol{\tau
_{n^{\prime }}}}\mathbf{S}_{l^{\prime }n^{\prime }} \notag \\
&=&\sum_{l,n,l^{\prime },n^{\prime }}\mathbf{S}_{ln}M^{\dag }(\alpha
)M(\alpha )\mathbf{J}_{\mathbf{R}_{l}+\boldsymbol{\tau }_{n},\mathbf{R}%
_{l^{\prime }}+\boldsymbol{\tau _{n^{\prime }}}}M^{\dag }(\alpha )M(\alpha )%
\mathbf{S}_{l^{\prime }n^{\prime }} \notag \\
&=&\sum_{m,p,m^{\prime },p^{\prime }}\mathbf{S}_{mp}\left[ M(\alpha )\mathbf{%
J}_{\mathbf{R}_{l}+\boldsymbol{\tau }_{n},\mathbf{R}_{l^{\prime }}+%
\boldsymbol{\tau _{n^{\prime }}}}M^{\dag }(\alpha )\right] \mathbf{S}%
_{m^{\prime }p^{\prime }}
\end{eqnarray}
Then the exchange interactions should satisfy the following condition:
\begin{equation}
\mathbf{J}_{\mathbf{R}_{m}+\boldsymbol{\tau }_{p},\mathbf{R}_{m^{\prime }}+%
\boldsymbol{\tau _{p^{\prime }}}}=M(\alpha )\mathbf{J}_{\mathbf{R}_{l}+%
\boldsymbol{\tau }_{n},\mathbf{R}_{l^{\prime }}+\boldsymbol{\tau _{n^{\prime
}}}}M^{\dag }(\alpha ) \label{Jrelation}
\end{equation}
After decomposing the $3\times 3$ tensor $\mathbf{J}$ into the scalar Heisenberg
term $J$ and the vector DM term $\mathbf{D}$ as in the main text, we obtain the
following results:
\begin{eqnarray}
J_{\mathbf{R}_{m}+\boldsymbol{\tau }_{p},\mathbf{R}_{m^{\prime }}+%
\boldsymbol{\tau _{p^{\prime }}}} &=&J_{\mathbf{R}_{l}+\boldsymbol{\tau }%
_{n},\mathbf{R}_{l^{\prime }}+\boldsymbol{\tau _{n^{\prime }}}} \notag \\
\mathbf{D}_{\mathbf{R}_{m}+\boldsymbol{\tau }_{p},\mathbf{R}_{m^{\prime }}+%
\boldsymbol{\tau _{p^{\prime }}}} &=&M(\alpha )\mathbf{D}_{\mathbf{R}_{l}+%
\boldsymbol{\tau }_{n},\mathbf{R}_{l^{\prime }}+\boldsymbol{\tau _{n^{\prime
}}}} \label{Jrelation-2}
\end{eqnarray}
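Eq. (\ref{Jrelation-2}) can be verified numerically: for a proper rotation $M$, the DM vector extracted from the antisymmetric part of $M\mathbf{J}M^{\dag }$ equals $M$ applied to the original DM vector. A sketch with a random tensor and rotation:

```python
import numpy as np

def dm_vector(J):
    # DM vector from the antisymmetric part A = (J - J^T)/2, with the
    # convention D = (A_yz, A_zx, A_xy) so that S^T A S' = D.(S x S')
    A = (J - J.T) / 2.0
    return np.array([A[1, 2], A[2, 0], A[0, 1]])

rng = np.random.default_rng(7)
J = rng.normal(size=(3, 3))          # arbitrary exchange tensor

# build a random proper rotation (orthogonal with det = +1)
M, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(M) < 0:
    M[:, 0] = -M[:, 0]

J_rot = M @ J @ M.T                  # Eq. (Jrelation): J' = M J M^T
# the isotropic (Heisenberg) part, proportional to the trace, is unchanged,
# while the DM vector rotates as dm_vector(J_rot) = M @ dm_vector(J)
```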
Meanwhile, it should be noted that under exchange of the two sites the
Heisenberg and DM interactions obey the following relations
\begin{eqnarray}
J_{\mathbf{R}_{l^{\prime }}+\boldsymbol{\tau _{n^{\prime }},}\mathbf{R}_{l}+%
\boldsymbol{\tau }_{n}} &=&J_{\mathbf{R}_{l}+\boldsymbol{\tau }_{n},\mathbf{R%
}_{l^{\prime }}+\boldsymbol{\tau _{n^{\prime }}}} \notag \\
\mathbf{D}_{\mathbf{R}_{l^{\prime }}+\boldsymbol{\tau _{n^{\prime }},}%
\mathbf{R}_{l}+\boldsymbol{\tau }_{n}} &=&-\mathbf{D}_{\mathbf{R}_{l}+%
\boldsymbol{\tau }_{n},\mathbf{R}_{l^{\prime }}+\boldsymbol{\tau _{n^{\prime
}}}} \label{Jrelation-3}
\end{eqnarray}
According to the above equations (i.e. Eq. (\ref{Jrelation-2}) and (\ref%
{Jrelation-3})), one can obtain the symmetry restricted magnetic
interactions for kagome FeGe with space group $P6/mmm$, as shown in table %
\ref{relation1}, \ref{relation2} and \ref{relation3}. Note that the
equivalent $\mathbf{D}_{i}$'s are labeled as the sub-index of $j$, i.e. the $%
\mathbf{D}_{i,j}$ in table \ref{relation1}, \ref{relation2} and \ref%
{relation3}.
\subsection{The details of double cone structure}
According to the experimental works \cite%
{FeGe-1972,FeGe-1978,FeGe-1984,FeGe-1988}, in hexagonal FeGe there is a
transition from a uniaxial spin system to a double cone spin structure at $%
T_{canting}$ = 60 K \cite{FeGe-1972}, which is expressed by the following
equations:
\begin{eqnarray}
\left\langle S^{x}\right\rangle &=&S\sin \theta \cos \left( \left( \pi \pm
\delta \right) \frac{z}{c}+\varphi \right) \notag \\
\left\langle S^{y}\right\rangle &=&S\sin \theta \sin \left( \left( \pi \pm
\delta \right) \frac{z}{c}+\varphi \right) \notag \\
\left\langle S^{z}\right\rangle &=&S\cos \theta \cos \left( \frac{\pi z}{c}%
\right) \label{cone}
\end{eqnarray}
where $\theta $ is the cone half angle, and $c$ represents the lattice
parameter. If $\delta =0$ there is a simple tilting of the spins, while a
small nonzero $\delta $ in Eq. (\ref{cone}) gives a double cone
spin structure. Following the previous works \cite%
{FeGe-1972,FeGe-1978,FeGe-1984,FeGe-1988}, here we consider the MAE with the
expression neglecting terms of order higher than four written as
\begin{equation}
E_{MAE}=K_{2}\sin ^{2}\theta +K_{4}\sin ^{4}\theta \label{mae}
\end{equation}
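Evaluated on the kagome layers $z=nc$, where $\cos (\pi z/c)=(-1)^{n}$, Eq. (\ref{cone}) preserves the spin length while the in-plane component advances by $\pi \pm \delta $ per layer; a small sketch with illustrative angles (not fitted parameters):

```python
import math

def cone_spin(n, S=1.0, theta=0.3, delta=0.2, phi=0.0):
    # Eq. (cone) on layer n, i.e. z = n*c, so cos(pi*z/c) = (-1)^n;
    # theta, delta, phi are illustrative values only
    arg = (math.pi + delta) * n + phi
    sx = S * math.sin(theta) * math.cos(arg)
    sy = S * math.sin(theta) * math.sin(arg)
    sz = S * math.cos(theta) * (-1) ** n
    return sx, sy, sz

spins = [cone_spin(n) for n in range(8)]
# the spin length is S on every layer, while S^z alternates in sign:
# a double cone around the +c and -c axes
```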
Therefore the total energy of Eq. (\ref{spinmodel}) and Eq. (\ref{mae}) in the
double cone spin structure per unit cell could be written as
\begin{eqnarray}
E(\delta ,\theta ) &=&\sum_{i}N_{ci}J_{ci}(-\sin ^{2}\theta \cos \delta
-\cos ^{2}\theta ) \notag \\
&&+\sum_{i}N_{c^{\prime }i}J_{c^{\prime }i}(\sin ^{2}\theta \cos 2\delta
+\cos ^{2}\theta ) \notag \\
&&-\sum_{i,j}D_{ci,j}^{z}(\sin ^{2}\theta \sin \delta ) \notag \\
&&-\sum_{i,j}D_{c^{\prime }i,j}^{z}(\sin ^{2}\theta \sin 2\delta ) \notag \\
&&+N(K_{2}\sin ^{2}\theta +K_{4}\sin ^{4}\theta ) \label{etot}
\end{eqnarray}
where $N_{ci}$ and $N_{c^{\prime }i}$\ are the corresponding number of
neighbors of $J_{ci}$ and $J_{c^{\prime }i}$, and $N$ represents the number
of magnetic ions in one unit cell. When DM interactions are not considered,
the extremum condition of the total energy gives the equilibrium value of the wave
vector $\delta $ through the following equation \cite{FeGe-1972,FeGe-1988}:
\begin{equation}
\cos \delta =\frac{\sum_{i}N_{ci}J_{ci}}{4\sum_{i}N_{c^{\prime
}i}J_{c^{\prime }i}} \label{cosq}
\end{equation}
Meanwhile, the cone half angle $\theta $ has the expression%
\begin{equation}
\sin ^{2}\theta =-\frac{K_{2}-\frac{1}{2N}\sum_{i}N_{c^{\prime
}i}J_{c^{\prime }i}\delta ^{4}}{2K_{4}} \label{sin}
\end{equation}%
\ \
A minimum in the total energy (see Eq. (\ref{etot})) will occur only if $%
K_{4}$ is positive, and Eq. (\ref{sin}) requires that $K_{2}-\frac{1}{2N}%
\sum_{i}N_{c^{\prime }i}J_{c^{\prime }i}\delta ^{4}$ must be negative.
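The extremum condition Eq. (\ref{cosq}) can be checked numerically against Eq. (\ref{etot}) without DM terms; a sketch with assumed coupling sums $A=\sum_{i}N_{ci}J_{ci}$ and $B=\sum_{i}N_{c^{\prime }i}J_{c^{\prime }i}$:

```python
import math

A, B = 1.0, 0.5            # assumed coupling sums (meV), |A/(4B)| <= 1
N, K2, K4 = 3, 0.066, 0.018

def energy(delta, theta):
    # Eq. (etot) without the DM terms, per unit cell
    s2 = math.sin(theta) ** 2
    c2 = math.cos(theta) ** 2
    return (A * (-s2 * math.cos(delta) - c2)
            + B * (s2 * math.cos(2.0 * delta) + c2)
            + N * (K2 * s2 + K4 * s2 ** 2))

delta_star = math.acos(A / (4.0 * B))   # Eq. (cosq)
theta0, h = 0.4, 1e-6
# central finite difference of dE/d(delta) at the analytic extremum
dE = (energy(delta_star + h, theta0) - energy(delta_star - h, theta0)) / (2 * h)
```

The derivative vanishes at $\delta^{\ast }=\arccos [A/(4B)]$, and the energy increases on both sides, confirming a minimum in $\delta $.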
When the magnetic interactions including Heisenberg and DM interactions
between two nearest neighbor xy-planes, i.e. $J_{ci}$ and $\mathbf{D}_{ci}$,
are considered, the equilibrium value of the wave vector $\delta $ is obtained
by minimizing the total energy, which gives%
\begin{equation}
\tan \delta =\frac{\sum_{i,j}D_{ci,j}^{z}}{\sum_{i}N_{ci}J_{ci}}
\label{tanq}
\end{equation}
where $j$ is the sub-index of the equivalent $\mathbf{D}_{ci}$'s. Meanwhile,
we find the following expression for $\theta $%
\begin{equation}
\sin ^{2}\theta =-\frac{K_{2}-\frac{1}{2N}\sum_{i,j}D_{ci,j}^{z}\delta }{%
2K_{4}} \label{sin2}
\end{equation}%
\ \
Note that in Eq. (\ref{sin2}), DM interactions enter at only first order in
$\delta $, and may therefore be much more efficient than $J_{c^{\prime
}i}$ in Eq. (\ref{sin}), since $\delta $ is small, around 0.2 \cite{FeGe-1972,FeGe-1977}. This implies
that DM interactions may be the origin of the double cone structure.
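The relative efficiency of the two mechanisms can be quantified with assumed order-of-magnitude couplings:

```python
delta = 0.2                   # experimental wave vector (dimensionless)
dm_coupling = 0.1             # assumed DM coupling scale (meV)
heisenberg_coupling = 1.0     # assumed J_c' coupling scale (meV)

dm_term = dm_coupling * delta                       # first order, Eq. (sin2)
heisenberg_term = heisenberg_coupling * delta ** 4  # fourth order, Eq. (sin)
ratio = dm_term / heisenberg_term
# even a DM coupling ten times weaker than J_c' contributes over an order
# of magnitude more at delta ~ 0.2
```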
\subsection{The symmetry analysis of CDW phases}
The high-temperature phase FeGe crystallizes in space group $P6/mmm$, which
has the generators \{3$_{001}^{+}$$|$0\}, \{2$_{001}^{{}}$$|$0\}, \{2$%
_{110}^{{}}$$|$0\} and \{-1$|$0\}, where the left part represents the
rotation and the right part means the lattice translation (here -1 denotes
the inversion symmetry). According to the inversion symmetry, the total
contribution of DM interactions to the energy of double cone magnetic
structure in Eq. (\ref{etot}) is absent, i.e. $\sum_{i,j}D_{ci,j}^{z}=0$,
which is easy to see from tables \ref{relation2}-\ref{relation3}.
Firstly, each kagome layer is still FM in the double cone magnetic state,
thus the in-plane DM interactions are ineffective. For interlayer DM
interactions with an inversion center such as $\mathbf{D}_{c1}$, the
inversion symmetry restricts it to be zero as shown in table \ref{relation2}%
. Meanwhile, for other interlayer DM interactions, the inversion symmetry
pairs up the equivalent DM interactions. For example, as shown in
table \ref{relation2}, $\mathbf{D}_{c2,1}$ and $\mathbf{D}_{c2,7}$ are
connected by the inversion symmetry and have opposite values. Therefore,
the summation over equivalent interlayer DM interactions is zero due to
the inversion symmetry. Note that not only inversion symmetry, but also mirror
symmetries such as \{$m_{001}^{{}}$$|$0\}, \{$m_{110}^{{}}$$|$0\}, \{$%
m_{100}^{{}}$$|$0\}, \{$m_{010}^{{}}$$|$0\}, \{$m_{1-10}^{{}}$$|$0\}, \{$%
m_{120}^{{}}$$|$0\} and \{$m_{210}^{{}}$$|$0\} in space group $P6/mmm$,
would also make the DM contribution to the canted magnetic ground state to
be\ zero based on the similar analysis above. Therefore, DM interactions
have no contribution to double cone magnetic structure with the symmetry of
high-temperature phase.
As mentioned in the main text, since a $2\times 2\times 2$ supercell
structure of the CDW phase (compared with the nonmagnetic pristine phase) is
suggested experimentally \cite%
{teng2022discovery,yin2022discovery,2206.12033,2210.06359}, we present the
possible CDW\ phases of kagome FeGe with a $2\times 2\times 2$ supercell. The $%
2\times 2\times 2$ supercell without distortion has the symmetry of space
group $P6/mmm$, the non-primitive translation operations $t_{x}$ \{1$|$%
1/2,0,0\}, $t_{y}$ \{1$|$0,1/2,0\}, $t_{z}$ \{1$|$0,0,1/2\}, and many
symmetry operations from their combinations. Being subgroups compatible
with the $2\times 2\times 2$ supercell of pristine FeGe, the structural
distortions of the CDW\ phases break the non-primitive translation
operations $t_{x}$, $t_{y}$ and $t_{z}$, and possibly break other symmetry
operations as well. Since the point group associated with the high-temperature
phase of FeGe ($P6/mmm$) is $D_{6h}$, we consider all CDW\ phases whose
associated point group is $D_{6h}$ itself or one of the maximal subgroups of $D_{6h}$
($D_{2h}$, $D_{6}$, $C_{6h}$, $C_{6v}$, $D_{3d}$, $D_{3h}$). In total we
find 68 different possible CDW phases, and list the corresponding relations
between the atomic positions in the high-temperature phase and all types of
proposed CDW phases in tables \ref{cdw1}-\ref{cdw5}. Note that the inversion
symmetry and the mirror symmetries in the parent group $P6/mmm$ would each
eliminate the contribution of the DM interactions, based on the symmetry
analysis above. Among these 68 proposed CDW phases, only four distorted
structures have neither the inversion symmetry nor any mirror symmetry;
these can lead to a non-zero DM contribution to the double cone spin
structure and may explain this magnetic ground state. They belong to the two
space groups $P622$ and $P6_{3}22$, and we list the corresponding Wyckoff
positions and the coordinates of the atoms in the pristine phase and in these
four CDW phases in table \ref{cdw} of the main text.
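The point-group filter described above can be sketched as a simple check; the (inversion, mirror) flags below encode the standard contents of each point group, and only $D_{6}$ (the point group of $P622$ and $P6_{3}22$) passes.

```python
# Sketch of the symmetry filter above: among D6h and its maximal subgroups,
# keep only point groups with neither inversion nor mirror planes, since
# either operation forces sum_{i,j} D^z_{ci,j} = 0.
# Flags are (has_inversion, has_mirror), from standard point-group tables.
groups = {
    "D6h": (True,  True),
    "D2h": (True,  True),
    "D6":  (False, False),  # chiral: rotations only
    "C6h": (True,  True),   # contains sigma_h
    "C6v": (False, True),   # vertical mirrors
    "D3d": (True,  True),
    "D3h": (False, True),   # contains sigma_h
}
allowed = [g for g, (inv, mir) in groups.items() if not (inv or mir)]
print(allowed)  # ['D6'], matching the P622 and P6_3 22 CDW phases
```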
\begin{table*}[htbp]
\caption{The corresponding Wyckoff positions and the coordinates of the
atoms in the pristine phase and CDW phases with different symmetries. (PART
\uppercase\expandafter{\romannumeral1}). }
\label{cdw1}%
\begin{tabular}{ccccccccccccccc}
\hline
\multicolumn{3}{c|}{Pristine phase(P6/mmm)} & \multicolumn{3}{c|}{
SG191-P6/mmm(type \uppercase\expandafter{\romannumeral1})} &
\multicolumn{3}{c|}{SG191-P6/mmm(type \uppercase\expandafter{\romannumeral2})
} & \multicolumn{3}{c|}{SG194-P6$_{3}$/mmc(type \uppercase%
\expandafter{\romannumeral1})} & \multicolumn{3}{c}{SG194-P6$_{3}$/mmc(type %
\uppercase\expandafter{\romannumeral2})} \\ \hline
& WP & \multicolumn{1}{c|}{Coordinates} & & WP & \multicolumn{1}{c|}{
Coordinates} & & WP & \multicolumn{1}{c|}{Coordinates} & & WP &
\multicolumn{1}{c|}{Coordinates} & & WP & Coordinates \\ \hline
\multirow{2}{*}{Ge1} & \multirow{2}{*}{1a} & \multicolumn{1}{c|}{%
\multirow{2}{*}{(0, 0, 0)}} & Ge1 & 1a & \multicolumn{1}{c|}{(0, 0, 0)} & Ge1
& 2e & \multicolumn{1}{c|}{(0, 0, z)} & Ge1 & 2a & \multicolumn{1}{c|}{(0,
0, 0)} & Ge1 & 2b & (0, 0, 1/4) \\
& & \multicolumn{1}{c|}{} & Ge2 & 1b & \multicolumn{1}{c|}{(0, 0, 1/2)} &
Ge2 & 6i & \multicolumn{1}{c|}{(1/2, 0, z)} & Ge2 & 6g & \multicolumn{1}{c|}{
(1/2, 0, 0)} & Ge2 & 6h & (x, 2x, 1/4) \\
\multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l|}{} & Ge3 &
3f & \multicolumn{1}{c|}{(1/2, 0, 0)} & & & \multicolumn{1}{c|}{} &
\multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l|}{} &
\multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} \\
\multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l|}{} & Ge4 &
3g & \multicolumn{1}{c|}{(1/2, 0, 1/2)} & & & \multicolumn{1}{c|}{} &
\multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l|}{} &
\multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} \\ \hline
Ge2 & 2d & \multicolumn{1}{c|}{(1/3, 2/3, 1/2)} & Ge5 & 4h &
\multicolumn{1}{c|}{(1/3, 2/3, z)} & Ge3 & 2c & \multicolumn{1}{c|}{(1/3,
2/3, 0)} & Ge3 & 2c & \multicolumn{1}{c|}{(1/3, 2/3, 1/4)} & Ge3 & 4f &
(1/3, 2/3, z) \\
& & \multicolumn{1}{c|}{} & Ge6 & 12o & \multicolumn{1}{c|}{(x, 2x, z)} &
Ge4 & 2d & \multicolumn{1}{c|}{(1/3, 2/3, 1/2)} & Ge4 & 2d &
\multicolumn{1}{c|}{(1/3, 2/3, 1/4)} & Ge4 & 12k & (x, 2x, z) \\
& & \multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & Ge5 & 6l &
\multicolumn{1}{c|}{(x, 2x, 0)} & Ge5 & 6h & \multicolumn{1}{c|}{(x, 2x, 1/4)
} & & & \\
& & \multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & Ge6 & 6m &
\multicolumn{1}{c|}{(x, 2x, 1/2)} & Ge6 & 6h & \multicolumn{1}{c|}{(x, 2x,
1/4)} & & & \\ \hline
\multirow{3}{*}{Fe} & \multirow{3}{*}{3f} & \multicolumn{1}{c|}{%
\multirow{3}{*}{(1/2, 0, 0)}} & Fe1 & 6j & \multicolumn{1}{c|}{(x, 0, 0)} &
Fe1 & 12n & \multicolumn{1}{c|}{(x, 2x, z)} & Fe1 & 12k &
\multicolumn{1}{c|}{(x, 0, 0)} & Fe1 & 6h & (x, 2x, 1/4) \\
& & \multicolumn{1}{c|}{} & Fe2 & 6k & \multicolumn{1}{c|}{(x, 0, 1/2)} &
Fe2 & 12o & \multicolumn{1}{c|}{(x, 0, z)} & Fe2 & 12k & \multicolumn{1}{c|}{
(x, 2x, z)} & Fe2 & 6h & (x, 2x, 1/4) \\
& & \multicolumn{1}{c|}{} & Fe3 & 6l & \multicolumn{1}{c|}{(x, 2x, 0)} & &
& \multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & Fe3 & 12j & (x, y,
1/4) \\
\multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l|}{} & Fe4 &
6m & \multicolumn{1}{c|}{(x, 2x, 1/2)} & & & \multicolumn{1}{c|}{} &
\multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l|}{} &
\multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} \\ \hline
& & & & & & & & & & & & & & \\ \hline
\multicolumn{3}{c|}{Pristine phase(P6/mmm)} & \multicolumn{3}{c|}{SG193-P6$%
_{3}$/mcm(type \uppercase\expandafter{\romannumeral1})} &
\multicolumn{3}{c|}{SG193-P6$_{3}$/mcm(type \uppercase\expandafter{%
\romannumeral2})} & \multicolumn{3}{c|}{SG192-P6/mcc(type \uppercase%
\expandafter{\romannumeral1})} & \multicolumn{3}{c}{SG192-P6/mcc(type %
\uppercase\expandafter{\romannumeral2})} \\ \hline
& WP & \multicolumn{1}{c|}{Coordinates} & & WP & \multicolumn{1}{c|}{
Coordinates} & & WP & \multicolumn{1}{c|}{Coordinates} & & &
\multicolumn{1}{c|}{} & & WP & Coordinates \\ \hline
\multirow{2}{*}{Ge1} & \multirow{2}{*}{1a} & \multicolumn{1}{c|}{%
\multirow{2}{*}{(0, 0, 0)}} & Ge1 & 2b & \multicolumn{1}{c|}{(0, 0, 0)} & Ge1
& 2a & \multicolumn{1}{c|}{(0, 0, 1/4)} & Ge1 & 2b & \multicolumn{1}{c|}{(0,
0, 0)} & Ge1 & 2b & (0, 0, 1/4) \\
& & \multicolumn{1}{c|}{} & Ge2 & 6f & \multicolumn{1}{c|}{(1/2, 0, 0)} &
Ge2 & 6g & \multicolumn{1}{c|}{(x, 0, 1/4)} & Ge2 & 6g & \multicolumn{1}{c|}{
(1/2, 0, 0)} & Ge2 & 6f & (1/2, 0, 1/4) \\ \hline
\multirow{2}{*}{Ge2} & \multirow{2}{*}{2d} & \multicolumn{1}{c|}{%
\multirow{2}{*}{(1/3, 2/3, 1/2)}} & Ge3 & 4c & \multicolumn{1}{c|}{(1/3,
2/3, 1/4)} & Ge3 & 4d & \multicolumn{1}{c|}{(1/3, 2/3, 0)} & Ge3 & 4c &
\multicolumn{1}{c|}{(1/3, 2/3, 1/4)} & Ge3 & 4d & (1/3, 2/3, z) \\
& & \multicolumn{1}{c|}{} & Ge4 & 12j & \multicolumn{1}{c|}{(x, y, 1/4)} &
Ge4 & 12i & \multicolumn{1}{c|}{(x, 2x, 0)} & Ge4 & 12k &
\multicolumn{1}{c|}{(x, 2x, 1/4)} & Ge4 & 12l & (x, y, 0) \\ \hline
\multirow{3}{*}{Fe} & \multirow{3}{*}{3f} & \multicolumn{1}{c|}{%
\multirow{3}{*}{(1/2, 0, 0)}} & Fe1 & 12i & \multicolumn{1}{c|}{(x, 0, z)}
& Fe1 & 6g & \multicolumn{1}{c|}{(x, 0, 1/4)} & Fe1 & 12l &
\multicolumn{1}{c|}{(x, y, 0)} & Fe1 & 12j & (x, 0, 1/4) \\
& & \multicolumn{1}{c|}{} & Fe2 & 12k & \multicolumn{1}{c|}{(x, 2x, 0)} & Fe2
& 6g & \multicolumn{1}{c|}{(x, 0, 1/4)} & Fe2 & 12l & \multicolumn{1}{c|}{
(x, y, 0)} & Fe2 & 12k & (x, 2x, 1/4) \\
& & \multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & Fe3 & 12j &
\multicolumn{1}{c|}{(x, y, 1/4)} & & & \multicolumn{1}{c|}{} & & & \\
\hline
& & & & & & & & & & & & & & \\ \hline
\multicolumn{3}{c|}{Pristine phase(P6/mmm)} & \multicolumn{3}{c|}{SG190-P$%
\overline{6}$2c(type \uppercase\expandafter{\romannumeral1})} &
\multicolumn{3}{c|}{SG190-P$\overline{6}$2c(type \uppercase%
\expandafter{\romannumeral2})} & \multicolumn{3}{c|}{SG189-P$\overline{6}$%
2m(type \uppercase\expandafter{\romannumeral1})} & \multicolumn{3}{c}{SG189-P%
$\overline{6}$2m(type \uppercase\expandafter{\romannumeral2})} \\ \hline
& WP & \multicolumn{1}{c|}{Coordinates} & \multicolumn{1}{c}{} &
\multicolumn{1}{c}{WP} & \multicolumn{1}{c|}{Coordinates} &
\multicolumn{1}{c}{} & \multicolumn{1}{c}{WP} & \multicolumn{1}{c|}{
Coordinates} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{WP} &
\multicolumn{1}{c|}{Coordinates} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{
WP} & \multicolumn{1}{c}{Coordinates} \\ \hline
\multirow{4}{*}{Ge1} & \multirow{4}{*}{1a} & \multicolumn{1}{c|}{%
\multirow{4}{*}{(0, 0, 0)}} & Ge1 & 2a & \multicolumn{1}{l|}{(0, 0, 0)} & Ge1
& 2b & \multicolumn{1}{l|}{(0, 0, 1/4)} & Ge1 & 1a & \multicolumn{1}{l|}{(0,
0, 0)} & Ge1 & 2e & (0, 0, z) \\
& & \multicolumn{1}{c|}{} & Ge2 & 6g & \multicolumn{1}{l|}{(x, 0, 0)} & Ge2
& 6h & \multicolumn{1}{l|}{(x, y, 1/4)} & Ge2 & 1b & \multicolumn{1}{l|}{(0,
0, 1/2)} & Ge2 & 6i & (x, 0, z) \\
& & \multicolumn{1}{c|}{} & & & \multicolumn{1}{l|}{} & & &
\multicolumn{1}{l|}{} & Ge3 & 3f & \multicolumn{1}{l|}{(x, 0, 0)} & & &
\\
& & \multicolumn{1}{c|}{} & & & \multicolumn{1}{l|}{} & & &
\multicolumn{1}{l|}{} & Ge4 & 3g & \multicolumn{1}{l|}{(x, 0, 1/2)} & & &
\\ \hline
\multirow{4}{*}{Ge2} & \multirow{4}{*}{2d} & \multicolumn{1}{c|}{%
\multirow{4}{*}{(1/3, 2/3, 1/2)}} & Ge3 & 2c & \multicolumn{1}{l|}{(1/3,
2/3, 1/4)} & Ge3 & 4f & \multicolumn{1}{l|}{(1/3, 2/3, z)} & Ge5 & 4h &
\multicolumn{1}{l|}{(1/3, 2/3, z)} & Ge3 & 2c & (1/3, 2/3, 0) \\
& & \multicolumn{1}{c|}{} & Ge4 & 2d & \multicolumn{1}{l|}{(1/3, 2/3, 3/4)}
& Ge4 & 12i & \multicolumn{1}{l|}{(x, y, z)} & Ge6 & 12l &
\multicolumn{1}{l|}{(x, y, z)} & Ge4 & 2d & (1/3, 2/3, 1/2) \\
& & \multicolumn{1}{c|}{} & Ge5 & 6h & \multicolumn{1}{l|}{(x, y, 1/4)} &
& & \multicolumn{1}{l|}{} & & & \multicolumn{1}{l|}{} & Ge5 & 6j & (x, y,
0) \\
& & \multicolumn{1}{c|}{} & Ge6 & 6h & \multicolumn{1}{l|}{(x, y, 1/4)} &
& & \multicolumn{1}{l|}{} & & & \multicolumn{1}{l|}{} & Ge6 & 6k & (x, y,
1/2) \\ \hline
\multirow{6}{*}{Fe} & \multirow{6}{*}{3f} & \multicolumn{1}{c|}{%
\multirow{6}{*}{(1/2, 0, 0)}} & Fe1 & 6g & \multicolumn{1}{l|}{(x, 0, 0)} &
Fe1 & 6h & \multicolumn{1}{l|}{(x, y, 1/4)} & Fe1 & 3f & \multicolumn{1}{l|}{
(x, 0, 0)} & Fe1 & 6i & (x, 0, z) \\
& & \multicolumn{1}{c|}{} & Fe2 & 6g & \multicolumn{1}{l|}{(x, 0, 0)} & Fe2
& 6h & \multicolumn{1}{l|}{(x, y, 1/4)} & Fe2 & 3f & \multicolumn{1}{l|}{(x,
0, 0)} & Fe2 & 6i & (x, 0, z) \\
& & \multicolumn{1}{c|}{} & Fe3 & 12i & \multicolumn{1}{l|}{(x, y, z)} & Fe3
& 6h & \multicolumn{1}{l|}{(x, y, 1/4)} & Fe3 & 3g & \multicolumn{1}{l|}{(x,
0, 1/2)} & Fe3 & 12l & (x, y, z) \\
& & \multicolumn{1}{c|}{} & & & \multicolumn{1}{l|}{} & Fe4 & 6h &
\multicolumn{1}{l|}{(x, y, 1/4)} & Fe4 & 3g & \multicolumn{1}{l|}{(x, 0, 1/2)
} & & & \\
& & \multicolumn{1}{c|}{} & & & \multicolumn{1}{l|}{} & & &
\multicolumn{1}{l|}{} & Fe5 & 6j & \multicolumn{1}{l|}{(x, y, 0)} & & &
\\
& & \multicolumn{1}{c|}{} & & & \multicolumn{1}{l|}{} & & &
\multicolumn{1}{l|}{} & Fe6 & 6k & \multicolumn{1}{l|}{(x, y, 1/2)} & & &
\\ \hline
\multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & & &
& & & & & & & & & \\ \hline
\multicolumn{3}{c|}{Pristine phase(P6/mmm)} & \multicolumn{3}{c|}{SG188-P$%
\overline{6}$c2(type \uppercase\expandafter{\romannumeral1})} &
\multicolumn{3}{c|}{SG188-P$\overline{6}$c2(type \uppercase%
\expandafter{\romannumeral2})} & \multicolumn{3}{c|}{SG187-P$\overline{6}$%
m2(type \uppercase\expandafter{\romannumeral1})} & \multicolumn{3}{c}{SG187-P%
$\overline{6}$m2(type \uppercase\expandafter{\romannumeral2})} \\ \hline
& WP & \multicolumn{1}{c|}{Coordinates} & \multicolumn{1}{c}{} &
\multicolumn{1}{c}{WP} & \multicolumn{1}{c|}{Coordinates} &
\multicolumn{1}{c}{} & \multicolumn{1}{c}{WP} & \multicolumn{1}{c|}{
Coordinates} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{WP} &
\multicolumn{1}{c|}{Coordinates} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{
WP} & \multicolumn{1}{c}{Coordinates} \\ \hline
\multicolumn{1}{l}{\multirow{4}{*}{Ge1}} & \multicolumn{1}{l}{%
\multirow{4}{*}{1a}} & \multicolumn{1}{l|}{\multirow{4}{*}{(0, 0, 0)}} & Ge1
& 2a & \multicolumn{1}{l|}{(0, 0, 0)} & Ge1 & 2d & \multicolumn{1}{l|}{(1/3,
2/3, 1/4)} & Ge1 & 1a & \multicolumn{1}{l|}{(0, 0, 0)} & Ge1 & 2h & (1/3,
2/3, z) \\
\multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l|}{} & Ge2 &
6j & \multicolumn{1}{l|}{(x, 2x, 0)} & Ge2 & 6k & \multicolumn{1}{l|}{(x, y,
1/4)} & Ge2 & 1b & \multicolumn{1}{l|}{(0, 0, 1/2)} & Ge2 & 6n & (x, 2x, z)
\\
\multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l|}{} & & &
\multicolumn{1}{l|}{} & & & \multicolumn{1}{l|}{} & Ge3 & 3j &
\multicolumn{1}{l|}{(x, 2x, 0)} & & & \\
\multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l|}{} & & &
\multicolumn{1}{l|}{} & & & \multicolumn{1}{l|}{} & Ge4 & 3k &
\multicolumn{1}{l|}{(x, 2x, 1/2)} & & & \\ \hline
\multicolumn{1}{l}{\multirow{8}{*}{Ge2}} & \multicolumn{1}{l}{%
\multirow{8}{*}{2d}} & \multicolumn{1}{l|}{\multirow{8}{*}{(1/3, 2/3, 1/2)}}
& Ge3 & 2d & \multicolumn{1}{l|}{(2/3, 1/3, 1/4)} & Ge3 & 2a &
\multicolumn{1}{l|}{(0, 0, 0)} & Ge5 & 2i & \multicolumn{1}{l|}{(2/3, 1/3, z)
} & Ge3 & 1a & (0, 0, 0) \\
\multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l|}{} & Ge4 &
2f & \multicolumn{1}{l|}{(1/3, 2/3, 1/4)} & Ge4 & 2e & \multicolumn{1}{l|}{
(2/3, 1/3, 0)} & Ge6 & 2h & \multicolumn{1}{l|}{(1/3, 2/3, z)} & Ge4 & 1b &
(0, 0, 1/2) \\
\multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l|}{} & Ge5 &
6k & \multicolumn{1}{l|}{(x, y, 1/4)} & Ge5 & 6j & \multicolumn{1}{l|}{(x,
2x, 0)} & Ge7 & 6n & \multicolumn{1}{l|}{(x, 2x, z)} & Ge5 & 1e & (2/3, 1/3,
0) \\
\multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l|}{} & Ge6 &
6k & \multicolumn{1}{l|}{(x, y, 1/4)} & Ge6 & 6j & \multicolumn{1}{l|}{(x,
2x, 1/2)} & Ge8 & 6n & \multicolumn{1}{l|}{(x, 2x, z)} & Ge6 & 1f & (2/3,
1/3, 1/2) \\
\multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l|}{} & & &
\multicolumn{1}{l|}{} & & & \multicolumn{1}{l|}{} & & &
\multicolumn{1}{l|}{} & Ge7 & 3j & (x, 2x, 0) \\
\multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l|}{} & & &
\multicolumn{1}{l|}{} & & & \multicolumn{1}{l|}{} & & &
\multicolumn{1}{l|}{} & Ge8 & 3j & (x, 2x, 0) \\
\multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l|}{} & & &
\multicolumn{1}{l|}{} & & & \multicolumn{1}{l|}{} & & &
\multicolumn{1}{l|}{} & Ge9 & 3k & (x, 2x, 1/2) \\
\multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l|}{} & & &
\multicolumn{1}{l|}{} & & & \multicolumn{1}{l|}{} & & &
\multicolumn{1}{l|}{} & Ge10 & 3k & (x, 2x, 1/2) \\ \hline
\multicolumn{1}{l}{\multirow{6}{*}{Fe}} & \multicolumn{1}{l}{%
\multirow{6}{*}{3f}} & \multicolumn{1}{l|}{\multirow{6}{*}{(1/2, 0, 0)}} &
Fe1 & 6j & \multicolumn{1}{l|}{(x, 2x, 0)} & Fe1 & 6k & \multicolumn{1}{l|}{
(x, y, 1/4)} & Fe1 & 3j & \multicolumn{1}{l|}{(x, 2x, 0)} & Fe1 & 6n & (x,
2x, z) \\
\multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l|}{} & Fe2 &
6j & \multicolumn{1}{l|}{(x, 2x, 0)} & Fe2 & 6k & \multicolumn{1}{l|}{(x, y,
1/4)} & Fe2 & 3j & \multicolumn{1}{l|}{(x, 2x, 0)} & Fe2 & 6n & (x, 2x ,z)
\\
\multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l|}{} & Fe3 &
12l & \multicolumn{1}{l|}{(x, y, z)} & Fe3 & 6k & \multicolumn{1}{l|}{(x, y,
1/4)} & Fe3 & 3k & \multicolumn{1}{l|}{(x, 2x, 1/2)} & Fe3 & 12o & (x, y, z)
\\
\multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l|}{} & & &
\multicolumn{1}{l|}{} & Fe4 & 6k & \multicolumn{1}{l|}{(x, y, 1/4)} & Fe4 &
3k & \multicolumn{1}{l|}{(x, 2x, 1/2)} & & & \\
\multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l|}{} & & &
\multicolumn{1}{l|}{} & & & \multicolumn{1}{l|}{} & Fe5 & 6l &
\multicolumn{1}{l|}{(x, y, 0)} & & & \\
\multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l|}{} & & &
\multicolumn{1}{l|}{} & & & \multicolumn{1}{l|}{} & Fe6 & 6m &
\multicolumn{1}{l|}{(x, y, 1/2)} & & & \\ \hline
\end{tabular}%
\end{table*}
\begin{table*}[htbp]
\caption{The corresponding Wyckoff positions and the coordinates of the
atoms in the pristine phase and CDW phases with different symmetries. (PART
\uppercase\expandafter{\romannumeral2}).}
\label{cdw2}%
\begin{tabular}{ccccccccccccccc}
\hline
\multicolumn{3}{c|}{Pristine phase(P6/mmm)} & \multicolumn{3}{c|}{SG186-P6$%
_{3}$mc(type \uppercase\expandafter{\romannumeral1})} &
\multicolumn{3}{c|}{SG185-P6$_{3}$cm(type \uppercase%
\expandafter{\romannumeral1})} & \multicolumn{3}{c|}{SG184-P6cc(type %
\uppercase\expandafter{\romannumeral1})} & \multicolumn{3}{c}{
SG183-P6mm(type \uppercase\expandafter{\romannumeral1})} \\ \hline
& WP & \multicolumn{1}{c|}{Coordinates} & & WP & \multicolumn{1}{c|}{
Coordinates} & & WP & \multicolumn{1}{c|}{Coordinates} & & WP &
\multicolumn{1}{c|}{Coordinates} & & WP & Coordinates \\ \hline
\multirow{4}{*}{Ge1} & \multirow{4}{*}{1a} & \multicolumn{1}{c|}{%
\multirow{4}{*}{(0, 0, 0)}} & Ge1 & 2a & \multicolumn{1}{c|}{(0, 0, z)} & Ge1
& 2a & \multicolumn{1}{c|}{(0, 0, z)} & Ge1 & 2a & \multicolumn{1}{c|}{(0,
0, z)} & Ge1 & 1a & (0, 0, z) \\
& & \multicolumn{1}{c|}{} & Ge2 & 6c & \multicolumn{1}{c|}{(x, 0, z)} & Ge2
& 6c & \multicolumn{1}{c|}{(x, 2x, z)} & Ge2 & 6c & \multicolumn{1}{c|}{
(1/2, 0, z)} & Ge2 & 1a & (0, 0, z) \\
& & \multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & & &
\multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & Ge3 & 3c & (1/2, 0, z)
\\
& & \multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & & &
\multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & Ge4 & 3c & (1/2, 0, z)
\\ \hline
\multirow{4}{*}{Ge2} & \multirow{4}{*}{2d} & \multicolumn{1}{c|}{%
\multirow{4}{*}{(1/3, 2/3, 1/2)}} & Ge3 & 4b & \multicolumn{1}{c|}{(1/3,
2/3, z)} & Ge3 & 2b & \multicolumn{1}{c|}{(1/3, 2/3, z)} & Ge3 & 4b &
\multicolumn{1}{c|}{(1/3, 2/3, z)} & Ge5 & 2b & (1/3, 2/3, z) \\
& & \multicolumn{1}{c|}{} & Ge4 & 12d & \multicolumn{1}{c|}{(x, y, z)} & Ge4
& 2b & \multicolumn{1}{c|}{(1/3, 2/3, z)} & Ge4 & 12d & \multicolumn{1}{c|}{
(x, y, z)} & Ge6 & 2b & (1/3, 2/3, z) \\
& & \multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & Ge5 & 6c &
\multicolumn{1}{c|}{(x, 2x, z)} & & & \multicolumn{1}{c|}{} & Ge7 & 6e &
(x, 2x, z) \\
& & \multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & Ge6 & 6c &
\multicolumn{1}{c|}{(x, 2x, z)} & & & \multicolumn{1}{c|}{} & Ge8 & 6e &
(x, 2x, z) \\ \hline
\multirow{4}{*}{Fe} & \multirow{4}{*}{3f} & \multicolumn{1}{c|}{%
\multirow{4}{*}{(1/2, 0, 0)}} & Fe1 & 6c & \multicolumn{1}{c|}{(x, 0, z)} &
Fe1 & 6c & \multicolumn{1}{c|}{(x, 2x, z)} & Fe1 & 12d & \multicolumn{1}{c|}{
(x, y, z)} & Fe1 & 6d & (x, 0, z) \\
& & \multicolumn{1}{c|}{} & Fe2 & 6c & \multicolumn{1}{c|}{(x, 0, z)} & Fe2
& 6c & \multicolumn{1}{c|}{(x, 2x, z)} & Fe2 & 12d & \multicolumn{1}{c|}{(x,
y, z)} & Fe2 & 6d & (x, 0, z) \\
& & \multicolumn{1}{c|}{} & Fe3 & 12d & \multicolumn{1}{c|}{(x, y, z)} & Fe3
& 12d & \multicolumn{1}{c|}{(x, y, z)} & & & \multicolumn{1}{c|}{} & Fe3 &
6d & (x, 2x, z) \\
& & \multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & & &
\multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & Fe4 & 6d & (x, 2x, z)
\\ \hline
& & & & & & & & & & & & & & \\ \hline
\multicolumn{3}{c|}{Pristine phase(P6/mmm)} & \multicolumn{3}{c|}{SG182-P6$%
_{3}$22(type \uppercase\expandafter{\romannumeral1})} & \multicolumn{3}{c|}{
SG182-P6$_{3}$22(type \uppercase\expandafter{\romannumeral2})} &
\multicolumn{3}{c|}{SG177-P622(type \uppercase\expandafter{\romannumeral1})}
& \multicolumn{3}{c}{SG177-P622(type \uppercase\expandafter{\romannumeral2})}
\\ \hline
& WP & \multicolumn{1}{c|}{Coordinates} & & WP & \multicolumn{1}{c|}{
Coordinates} & & WP & \multicolumn{1}{c|}{Coordinates} & & WP &
\multicolumn{1}{c|}{Coordinates} & & WP & Coordinates \\ \hline
\multirow{4}{*}{Ge1} & \multirow{4}{*}{1a} & \multicolumn{1}{c|}{%
\multirow{4}{*}{(0, 0, 0)}} & Ge1 & 2a & \multicolumn{1}{c|}{(0, 0, 0)} & Ge1
& 2b & \multicolumn{1}{c|}{(0, 0, 1/4)} & Ge1 & 1a & \multicolumn{1}{c|}{(0,
0, 0)} & Ge1 & 2e & (0, 0, z) \\
& & \multicolumn{1}{c|}{} & Ge2 & 6g & \multicolumn{1}{c|}{(x, 0, 0)} & Ge2
& 6h & \multicolumn{1}{c|}{(x, 2x, 1/4)} & Ge2 & 1b & \multicolumn{1}{c|}{
(0, 0, 1/2)} & Ge2 & 6i & (1/2, 0, z) \\
& & \multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & & &
\multicolumn{1}{c|}{} & Ge3 & 3f & \multicolumn{1}{c|}{(0, 1/2, 0)} & & &
\\
& & \multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & & &
\multicolumn{1}{c|}{} & Ge4 & 3g & \multicolumn{1}{c|}{(0, 1/2, 1/2)} & &
& \\ \hline
\multirow{4}{*}{Ge2} & \multirow{4}{*}{2d} & \multicolumn{1}{c|}{%
\multirow{4}{*}{(1/3, 2/3, 1/2)}} & Ge3 & 2c & \multicolumn{1}{c|}{(1/3,
2/3, 1/4)} & Ge3 & 4f & \multicolumn{1}{c|}{(1/3, 2/3, z)} & Ge5 & 4h &
\multicolumn{1}{c|}{(1/3, 2/3, z)} & Ge3 & 2c & (1/3, 2/3, 0) \\
& & \multicolumn{1}{c|}{} & Ge4 & 2d & \multicolumn{1}{c|}{(1/3, 2/3, 3/4)}
& Ge4 & 12i & \multicolumn{1}{c|}{(x, y, z)} & Ge6 & 12n &
\multicolumn{1}{c|}{(x, y, z)} & Ge4 & 2d & (1/3, 2/3, 1/2) \\
& & \multicolumn{1}{c|}{} & Ge5 & 6h & \multicolumn{1}{c|}{(x, 2x, 1/4)} &
& & \multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & Ge5 & 6l & (x,
2x, 0) \\
& & \multicolumn{1}{c|}{} & Ge6 & 6h & \multicolumn{1}{c|}{(x, 2x, 1/4)} &
& & \multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & Ge6 & 6m & (x,
2x, 1/2) \\ \hline
\multirow{4}{*}{Fe} & \multirow{4}{*}{3f} & \multicolumn{1}{c|}{%
\multirow{4}{*}{(1/2, 0, 0)}} & Fe1 & 6g & \multicolumn{1}{c|}{(x, 0, 0)} &
Fe1 & 6h & \multicolumn{1}{c|}{(x, 2x, 1/4)} & Fe1 & 6j &
\multicolumn{1}{c|}{(x, 0, 0)} & Fe1 & 12n & (x, y, z) \\
& & \multicolumn{1}{c|}{} & Fe2 & 6g & \multicolumn{1}{c|}{(x, 0, 0)} & Fe2
& 6h & \multicolumn{1}{c|}{(x, 2x, 1/4)} & Fe2 & 6k & \multicolumn{1}{c|}{
(x, 0, 1/2)} & Fe2 & 12n & (x, y, z) \\
& & \multicolumn{1}{c|}{} & Fe3 & 12i & \multicolumn{1}{c|}{(x, y, z)} & Fe3
& 12i & \multicolumn{1}{c|}{(x, y, z)} & Fe3 & 6l & \multicolumn{1}{c|}{(x,
2x, 0)} & & & \\
& & \multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & & &
\multicolumn{1}{c|}{} & Fe4 & 6m & \multicolumn{1}{c|}{(x, 2x, 1/2)} & & &
\\ \hline
& & & & & & & & & & & & & & \\ \hline
\multicolumn{3}{c|}{Pristine phase(P6/mmm)} & \multicolumn{3}{c|}{SG176-P6$%
_{3}$/m(type \uppercase\expandafter{\romannumeral1})} & \multicolumn{3}{c|}{
SG176-P6$_{3}$/m(type \uppercase\expandafter{\romannumeral2})} &
\multicolumn{3}{c|}{SG175-P6/m(type \uppercase\expandafter{\romannumeral1})}
& \multicolumn{3}{c}{SG175-P6/m(type \uppercase\expandafter{\romannumeral2})}
\\ \hline
& WP & \multicolumn{1}{c|}{Coordinates} & & WP & \multicolumn{1}{c|}{
Coordinates} & & WP & \multicolumn{1}{c|}{Coordinates} & & WP &
\multicolumn{1}{c|}{Coordinates} & & WP & Coordinates \\ \hline
\multirow{4}{*}{Ge1} & \multirow{4}{*}{1a} & \multicolumn{1}{c|}{%
\multirow{4}{*}{(0, 0, 0)}} & Ge1 & 2b & \multicolumn{1}{c|}{(0, 0, 0)} & Ge1
& 2a & \multicolumn{1}{c|}{(0, 0, 1/4)} & Ge1 & 1a & \multicolumn{1}{c|}{(0,
0, 0)} & Ge1 & 2e & (0, 1/2, z) \\
& & \multicolumn{1}{c|}{} & Ge2 & 6g & \multicolumn{1}{c|}{(1/2, 0, 0)} &
Ge2 & 6h & \multicolumn{1}{c|}{(x, y, 1/4)} & Ge2 & 1b & \multicolumn{1}{c|}{
(0, 0, 1/2)} & Ge2 & 6i & (0, 0, z) \\
& & \multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & & &
\multicolumn{1}{c|}{} & Ge3 & 3f & \multicolumn{1}{c|}{(1/2, 0, 0)} & & &
\\
& & \multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & & &
\multicolumn{1}{c|}{} & Ge4 & 3g & \multicolumn{1}{c|}{(1/2, 0, 1/2)} & &
& \\ \hline
\multirow{4}{*}{Ge2} & \multirow{4}{*}{2d} & \multicolumn{1}{c|}{%
\multirow{4}{*}{(1/3, 2/3, 1/2)}} & Ge3 & 2c & \multicolumn{1}{c|}{(1/3,
2/3, 1/4)} & Ge3 & 4f & \multicolumn{1}{c|}{(1/3, 2/3, z)} & Ge5 & 4h &
\multicolumn{1}{c|}{(1/3, 2/3, z)} & Ge3 & 2c & (1/3, 2/3, 0) \\
& & \multicolumn{1}{c|}{} & Ge4 & 2d & \multicolumn{1}{c|}{(1/3, 2/3, 3/4)}
& Ge4 & 12i & \multicolumn{1}{c|}{(x, y, z)} & Ge6 & 12l &
\multicolumn{1}{c|}{(x, y, z)} & Ge4 & 2d & (1/3, 2/3, 1/2) \\
& & \multicolumn{1}{c|}{} & Ge5 & 6h & \multicolumn{1}{c|}{(x, y, 1/4)} &
& & \multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & Ge5 & 6j & (x, y,
0) \\
& & \multicolumn{1}{c|}{} & Ge6 & 6h & \multicolumn{1}{c|}{(x, y, 1/4)} &
& & \multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & Ge6 & 6k & (x, y,
1/2) \\ \hline
\multirow{4}{*}{Fe} & \multirow{4}{*}{3f} & \multicolumn{1}{c|}{%
\multirow{4}{*}{(1/2, 0, 0)}} & Fe1 & 12i & \multicolumn{1}{c|}{(x, y, z)} &
Fe1 & 6h & \multicolumn{1}{c|}{(x, y, 1/4)} & Fe1 & 6j & \multicolumn{1}{c|}{
(x, y, 0)} & Fe1 & 12l & (x, y, z) \\
& & \multicolumn{1}{c|}{} & Fe2 & 12i & \multicolumn{1}{c|}{(x, y, z)} & Fe2
& 6h & \multicolumn{1}{c|}{(x, y, 1/4)} & Fe2 & 6j & \multicolumn{1}{c|}{(x,
y, 0)} & Fe2 & 12l & (x, y, z) \\
& & \multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & Fe3 & 6h &
\multicolumn{1}{c|}{(x, y, 1/4)} & Fe3 & 6k & \multicolumn{1}{c|}{(x, y, 1/2)
} & & & \\
& & \multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & Fe4 & 6h &
\multicolumn{1}{c|}{(x, y, 1/4)} & Fe4 & 6k & \multicolumn{1}{c|}{(x, y, 1/2)
} & & & \\ \hline
& & & & & & & & & & & & & & \\ \hline
\multicolumn{3}{c|}{Pristine phase(P6/mmm)} & \multicolumn{3}{c|}{SG165-P$%
\overline{3}$c1(type \uppercase\expandafter{\romannumeral1})} &
\multicolumn{3}{c|}{SG165-P$\overline{3}$c1(type \uppercase%
\expandafter{\romannumeral2})} & \multicolumn{3}{c|}{SG164-P$\overline{3}$%
m1(type \uppercase\expandafter{\romannumeral1})} & \multicolumn{3}{c}{SG164-P%
$\overline{3}$m1(type \uppercase\expandafter{\romannumeral2})} \\ \hline
& WP & \multicolumn{1}{c|}{Coordinates} & & WP & \multicolumn{1}{c|}{
Coordinates} & & WP & \multicolumn{1}{c|}{Coordinates} & & WP &
\multicolumn{1}{c|}{Coordinates} & & WP & Coordinates \\ \hline
\multirow{4}{*}{Ge1} & \multirow{4}{*}{1a} & \multicolumn{1}{c|}{%
\multirow{4}{*}{(0, 0, 0)}} & Ge1 & 2b & \multicolumn{1}{c|}{(0, 0, 0)} & Ge1
& 2a & \multicolumn{1}{c|}{(0, 0, 1/4)} & Ge1 & 1a & \multicolumn{1}{c|}{(0,
0, 0)} & Ge1 & 2c & (0, 0, z) \\
& & \multicolumn{1}{c|}{} & Ge2 & 6e & \multicolumn{1}{c|}{(1/2, 0, 0)} &
Ge2 & 6f & \multicolumn{1}{c|}{(x, 0, 1/4)} & Ge2 & 1b & \multicolumn{1}{c|}{
(0, 0, 1/2)} & Ge2 & 6i & (x, 2x, z) \\
& & \multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & & &
\multicolumn{1}{c|}{} & Ge3 & 3e & \multicolumn{1}{c|}{(0, 1/2, 0)} & & &
\\
& & \multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & & &
\multicolumn{1}{c|}{} & Ge4 & 3f & \multicolumn{1}{c|}{(0, 1/2, 1/2)} & &
& \\ \hline
\multirow{4}{*}{Ge2} & \multirow{4}{*}{2d} & \multicolumn{1}{c|}{%
\multirow{4}{*}{(1/3, 2/3, 1/2)}} & Ge3 & 4d & \multicolumn{1}{c|}{(1/3,
2/3, z)} & Ge3 & 4d & \multicolumn{1}{c|}{(1/3, 2/3, z)} & Ge5 & 2d &
\multicolumn{1}{c|}{(1/3, 2/3, z)} & Ge3 & 2d & (1/3, 2/3, z) \\
& & \multicolumn{1}{c|}{} & Ge4 & 12g & \multicolumn{1}{c|}{(x, y, z)} & Ge4
& 12g & \multicolumn{1}{c|}{(x, y, z)} & Ge6 & 2d & \multicolumn{1}{c|}{
(1/3, 2/3, z)} & Ge4 & 2d & (1/3, 2/3, z) \\
& & \multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & & &
\multicolumn{1}{c|}{} & Ge7 & 6i & \multicolumn{1}{c|}{(x, 2x, z)} & Ge5 & 6i
& (x, 2x, z) \\
& & \multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & & &
\multicolumn{1}{c|}{} & Ge8 & 6i & \multicolumn{1}{c|}{(x, 2x, z)} & Ge6 & 6i
& (x, 2x, z) \\ \hline
\multirow{3}{*}{Fe} & \multirow{3}{*}{3f} & \multicolumn{1}{c|}{%
\multirow{3}{*}{(1/2, 0, 0)}} & Fe1 & 12g & \multicolumn{1}{c|}{(x, y, z)} &
Fe1 & 6f & \multicolumn{1}{c|}{(x, 0, 1/4)} & Fe1 & 6i & \multicolumn{1}{c|}{
(x, 2x, z)} & Fe1 & 6i & (x, 2x, z) \\
& & \multicolumn{1}{c|}{} & Fe2 & 12g & \multicolumn{1}{c|}{(x, y, z)} & Fe2
& 6f & \multicolumn{1}{c|}{(x, 0, 1/4)} & Fe2 & 6i & \multicolumn{1}{c|}{(x,
2x, z)} & Fe2 & 6i & (x, 2x, z) \\
& & \multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & Fe3 & 12g &
\multicolumn{1}{c|}{(x, y, z)} & Fe3 & 12j & \multicolumn{1}{c|}{(x, y, z)}
& Fe3 & 12j & (x, y, z) \\ \hline
& & & & & & & & & & & & & &
\end{tabular}%
\end{table*}
\begin{table*}[htbp]
\caption{The corresponding Wyckoff positions and the coordinates of the
atoms in the pristine phase and CDW phases with different symmetries. (PART
\uppercase\expandafter{\romannumeral3}).}
\label{cdw3}%
\begin{tabular}{ccccccccccccccc}
\hline
\multicolumn{3}{c|}{Pristine phase(P6/mmm)} & \multicolumn{3}{c|}{SG163-P$%
\overline{3}$1c(type \uppercase\expandafter{\romannumeral1})} &
\multicolumn{3}{c|}{SG163-P$\overline{3}$1c(type \uppercase%
\expandafter{\romannumeral2})} & \multicolumn{3}{c|}{SG162-P$\overline{3}$%
1m(type \uppercase\expandafter{\romannumeral1})} & \multicolumn{3}{c}{SG162-P%
$\overline{3}$1m(type \uppercase\expandafter{\romannumeral2})} \\ \hline
& WP & \multicolumn{1}{c|}{Coordinates} & & WP & \multicolumn{1}{c|}{
Coordinates} & & WP & \multicolumn{1}{c|}{Coordinates} & & WP &
\multicolumn{1}{c|}{Coordinates} & & WP & Coordinates \\ \hline
\multirow{4}{*}{Ge1} & \multirow{4}{*}{1a} & \multicolumn{1}{c|}{%
\multirow{4}{*}{(0, 0, 0)}} & Ge1 & 2b & \multicolumn{1}{c|}{(0, 0, 0)} & Ge1
& 2a & \multicolumn{1}{c|}{(0, 0, 1/4)} & Ge1 & 1a & \multicolumn{1}{c|}{(0,
0, 0)} & Ge1 & 2e & (0, 0, z) \\
& & \multicolumn{1}{c|}{} & Ge2 & 6g & \multicolumn{1}{c|}{(0, 1/2, 0)} &
Ge2 & 6h & \multicolumn{1}{c|}{(x, 2x, 1/4)} & Ge2 & 1b &
\multicolumn{1}{c|}{(0, 0, 1/2)} & Ge2 & 6k & (x, 0, z) \\
& & \multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & & &
\multicolumn{1}{c|}{} & Ge3 & 3f & \multicolumn{1}{c|}{(1/2, 0, 0)} & & &
\\
& & \multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & & &
\multicolumn{1}{c|}{} & Ge4 & 3g & \multicolumn{1}{c|}{(1/2, 0, 1/2)} & &
& \\ \hline
\multirow{4}{*}{Ge2} & \multirow{4}{*}{2d} & \multicolumn{1}{c|}{%
\multirow{4}{*}{(1/3, 2/3, 1/2)}} & Ge3 & 2c & \multicolumn{1}{c|}{(1/3,
2/3, 1/4)} & Ge3 & 4f & \multicolumn{1}{c|}{(1/3, 2/3, z)} & Ge5 & 4h &
\multicolumn{1}{c|}{(1/3, 2/3, z)} & Ge3 & 2c & (1/3, 2/3, 0) \\
& & \multicolumn{1}{c|}{} & Ge4 & 2d & \multicolumn{1}{c|}{(1/3, 2/3, 3/4)}
& Ge4 & 12i & \multicolumn{1}{c|}{(x, y, z)} & Ge6 & 12l &
\multicolumn{1}{c|}{(x, y, z)} & Ge4 & 2d & (1/3, 2/3, 1/2) \\
& & \multicolumn{1}{c|}{} & Ge5 & 6h & \multicolumn{1}{c|}{(x, 2x, 1/4)} &
& & \multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & Ge5 & 6i & (x,
2x, 0) \\
& & \multicolumn{1}{c|}{} & Ge6 & 6h & \multicolumn{1}{c|}{(x, 2x, 1/4)} &
& & \multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & Ge6 & 6j & (x,
2x, 1/2) \\ \hline
\multirow{4}{*}{Fe} & \multirow{4}{*}{3f} & \multicolumn{1}{c|}{%
\multirow{4}{*}{(1/2, 0, 0)}} & Fe1 & 12i & \multicolumn{1}{c|}{(x, y, z)}
& Fe1 & 6h & \multicolumn{1}{c|}{(x, 2x, 1/4)} & Fe1 & 6i &
\multicolumn{1}{c|}{(x, 2x, 0)} & Fe1 & 6k & (x, 0, z) \\
& & \multicolumn{1}{c|}{} & Fe2 & 12i & \multicolumn{1}{c|}{(x, y, z)} & Fe2
& 6h & \multicolumn{1}{c|}{(x, 2x, 1/4)} & Fe2 & 6j & \multicolumn{1}{c|}{
(x, 2x, 1/2)} & Fe2 & 6k & (x, 0, z) \\
& & \multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & Fe3 & 12i &
\multicolumn{1}{c|}{(x, y, z)} & Fe3 & 6k & \multicolumn{1}{c|}{(x, 0, z)} &
Fe3 & 12i & (x, y, z) \\
& & \multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & & &
\multicolumn{1}{c|}{} & Fe4 & 6k & \multicolumn{1}{c|}{(x, 0, z)} & & &
\\ \hline
& & & & & & & & & & & & & & \\ \hline
\multicolumn{3}{c|}{Pristine phase(P6/mmm)} & \multicolumn{3}{c|}{
SG68-Ccce(type \uppercase\expandafter{\romannumeral1})} &
\multicolumn{3}{c|}{SG68-Ccce(type \uppercase\expandafter{\romannumeral2})}
& \multicolumn{3}{c|}{SG68-Ccce(type \uppercase\expandafter{\romannumeral3})}
& \multicolumn{3}{c}{SG68-Ccce(type \uppercase\expandafter{\romannumeral4})}
\\ \hline
& WP & \multicolumn{1}{c|}{Coordinates} & & WP & \multicolumn{1}{c|}{
Coordinates} & & WP & \multicolumn{1}{c|}{Coordinates} & & WP &
\multicolumn{1}{c|}{Coordinates} & & WP & Coordinates \\ \hline
\multirow{3}{*}{Ge1} & \multirow{3}{*}{1a} & \multicolumn{1}{c|}{%
\multirow{3}{*}{(0, 0, 0)}} & Ge1 & 8c & \multicolumn{1}{c|}{(1/4, 1/4, 0)}
& Ge1 & 8e & \multicolumn{1}{c|}{(x, 1/4, 1/4)} & Ge1 & 8g &
\multicolumn{1}{c|}{(0, 1/4, z)} & Ge1 & 4a & (0, 1/4, 1/4) \\
& & \multicolumn{1}{c|}{} & Ge2 & 8d & \multicolumn{1}{c|}{(0, 0, 0)} & Ge2
& 8f & \multicolumn{1}{c|}{(0, y, 1/4)} & Ge2 & 8h & \multicolumn{1}{c|}{
(1/4, 0, z)} & Ge2 & 4b & (0, 1/4, 3/4) \\
& & \multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & & &
\multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & Ge3 & 8h & (1/4, 0, z)
\\ \hline
Ge2 & 2d & \multicolumn{1}{c|}{(1/3, 2/3, 1/2)} & Ge3 & 8f &
\multicolumn{1}{c|}{(0, y, 1/4)} & Ge3 & 16i & \multicolumn{1}{c|}{(x, y, z)}
& Ge3 & 8f & \multicolumn{1}{c|}{(0, y, 1/4)} & Ge3 & 16i & (x, y, z) \\
& & \multicolumn{1}{c|}{} & Ge4 & 8f & \multicolumn{1}{c|}{(0, y, 1/4)} &
Ge4 & 16i & \multicolumn{1}{c|}{(x, y, z)} & Ge4 & 8f & \multicolumn{1}{c|}{
(0, y, 1/4)} & Ge4 & 16i & (x, y, z) \\
& & \multicolumn{1}{c|}{} & Ge5 & 16i & \multicolumn{1}{c|}{(x, y, z)} & &
& \multicolumn{1}{c|}{} & Ge5 & 16i & \multicolumn{1}{c|}{(x, y, z)} & & &
\\ \hline
\multirow{5}{*}{Fe} & \multirow{5}{*}{3f} & \multicolumn{1}{c|}{%
\multirow{5}{*}{(1/2, 0, 0)}} & Fe1 & 8g & \multicolumn{1}{c|}{(0, 1/4, z)}
& Fe1 & 4a & \multicolumn{1}{c|}{(0, 1/4, 1/4)} & Fe1 & 8c &
\multicolumn{1}{c|}{(1/4, 1/4, 0)} & Fe1 & 8e & (x, 1/4, 1/4) \\
& & \multicolumn{1}{c|}{} & Fe2 & 8h & \multicolumn{1}{c|}{(1/4, 0, z)} &
Fe2 & 4b & \multicolumn{1}{c|}{(0, 1/4, 3/4)} & Fe2 & 8d &
\multicolumn{1}{c|}{(0, 0, 1/2)} & Fe2 & 8f & (0, y, 1/4) \\
& & \multicolumn{1}{c|}{} & Fe3 & 16i & \multicolumn{1}{c|}{(x, y, z)} & Fe3
& 8h & \multicolumn{1}{c|}{(1/4, 0, z)} & Fe3 & 16i & \multicolumn{1}{c|}{
(x, y, z)} & Fe3 & 16i & (x, y, z) \\
& & \multicolumn{1}{c|}{} & Fe4 & 16i & \multicolumn{1}{c|}{(x, y, z)} & Fe4
& 16i & \multicolumn{1}{c|}{(x, y, z)} & Fe4 & 16i & \multicolumn{1}{c|}{(x,
y, z)} & Fe4 & 16i & (x, y, z) \\
& & \multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & Fe5 & 16i &
\multicolumn{1}{c|}{(x, y, z)} & & & \multicolumn{1}{c|}{} & & & \\
\hline
& & & & & & & & & & & & & & \\ \hline
\multicolumn{3}{c|}{Pristine phase(P6/mmm)} & \multicolumn{3}{c|}{
SG67-Cmme(type \uppercase\expandafter{\romannumeral1})} &
\multicolumn{3}{c|}{SG67-Cmme(type \uppercase\expandafter{\romannumeral2})}
& \multicolumn{3}{c|}{SG67-Cmme(type \uppercase\expandafter{\romannumeral3})}
& \multicolumn{3}{c}{SG67-Cmme(type \uppercase\expandafter{\romannumeral4})}
\\ \hline
& WP & \multicolumn{1}{c|}{Coordinates} & & WP & \multicolumn{1}{c|}{
Coordinates} & & WP & \multicolumn{1}{c|}{Coordinates} & & WP &
\multicolumn{1}{c|}{Coordinates} & & WP & Coordinates \\ \hline
\multirow{4}{*}{Ge1} & \multirow{4}{*}{1a} & \multicolumn{1}{c|}{%
\multirow{4}{*}{(0, 0, 0)}} & Ge1 & 4c & \multicolumn{1}{c|}{(0, 0, 0)} & Ge1
& 4a & \multicolumn{1}{c|}{(1/4, 0, 0)} & Ge1 & 4g & \multicolumn{1}{c|}{(0,
1/4, z)} & Ge1 & 8n & (x, 1/4, z) \\
& & \multicolumn{1}{c|}{} & Ge2 & 4d & \multicolumn{1}{c|}{(0, 0, 1/2)} &
Ge2 & 4b & \multicolumn{1}{c|}{(1/4, 0, 1/2)} & Ge2 & 4g &
\multicolumn{1}{c|}{(0, 1/4, z)} & Ge2 & 8m & (0, y, z) \\
& & \multicolumn{1}{c|}{} & Ge3 & 4e & \multicolumn{1}{c|}{(1/4, 1/4, 0)} &
Ge3 & 4g & \multicolumn{1}{c|}{(0, 1/4, z)} & Ge3 & 8l & \multicolumn{1}{c|}{
(1/4, 0, z)} & & & \\
& & \multicolumn{1}{c|}{} & Ge4 & 4f & \multicolumn{1}{c|}{(1/4, 1/4, 1/2)}
& Ge4 & 4g & \multicolumn{1}{c|}{(0, 1/4, z)} & & & \multicolumn{1}{c|}{}
& & & \\ \hline
\multirow{4}{*}{Ge2} & \multirow{4}{*}{2d} & \multicolumn{1}{c|}{%
\multirow{4}{*}{(1/3, 2/3, 1/2)}} & Ge5 & 8m & \multicolumn{1}{c|}{(0, y, z)}
& Ge5 & 8m & \multicolumn{1}{c|}{(0, y, z)} & Ge4 & 8j & \multicolumn{1}{c|}{
(1/4, y, 0)} & Ge3 & 8j & (1/4, y, 0) \\
& & \multicolumn{1}{c|}{} & Ge6 & 8m & \multicolumn{1}{c|}{(0, y, z)} & Ge6
& 8m & \multicolumn{1}{c|}{(0, y, z)} & Ge5 & 8k & \multicolumn{1}{c|}{(1/4,
y, 1/2)} & Ge4 & 8k & (1/4, y, 1/2) \\
& & \multicolumn{1}{c|}{} & Ge7 & 16o & \multicolumn{1}{c|}{(x, y, z)} & Ge7
& 16o & \multicolumn{1}{c|}{(x, y, z)} & Ge6 & 8m & \multicolumn{1}{c|}{(0,
y, z)} & Ge5 & 8m & (0, y, z) \\
& & \multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & & &
\multicolumn{1}{c|}{} & Ge7 & 8m & \multicolumn{1}{c|}{(0, y, z)} & Ge6 & 8m
& (0, y, z) \\ \hline
\multirow{6}{*}{Fe} & \multirow{6}{*}{3f} & \multicolumn{1}{c|}{%
\multirow{6}{*}{(1/2, 0, 0)}} & Fe1 & 4a & \multicolumn{1}{c|}{(1/4, 0, 0)}
& Fe1 & 4c & \multicolumn{1}{c|}{(0, 0, 0)} & Fe1 & 8n & \multicolumn{1}{c|}{
(x, 1/4, z)} & Fe1 & 4g & (0, 1/4, z) \\
& & \multicolumn{1}{c|}{} & Fe2 & 4b & \multicolumn{1}{c|}{(1/4, 0, 1/2)} &
Fe2 & 4d & \multicolumn{1}{c|}{(0, 0, 1/2)} & Fe2 & 8m & \multicolumn{1}{c|}{
(0, y, z)} & Fe2 & 4g & (0, 1/4, z) \\
& & \multicolumn{1}{c|}{} & Fe3 & 4g & \multicolumn{1}{c|}{(0, 1/4, z)} &
Fe3 & 4e & \multicolumn{1}{c|}{(1/4, 1/4, 0)} & Fe3 & 16o &
\multicolumn{1}{c|}{(x, y, z)} & Fe3 & 8l & (1/4, 0, z) \\
& & \multicolumn{1}{c|}{} & Fe4 & 4g & \multicolumn{1}{c|}{(0, 1/4, z)} &
Fe4 & 4f & \multicolumn{1}{c|}{(1/4, 1/4, 1/2)} & Fe4 & 16o &
\multicolumn{1}{c|}{(x, y, z)} & Fe4 & 16o & (x, y, z) \\
& & \multicolumn{1}{c|}{} & Fe5 & 16o & \multicolumn{1}{c|}{(x, y, z)} & Fe5
& 16o & \multicolumn{1}{c|}{(x, y, z)} & & & \multicolumn{1}{c|}{} & Fe5 &
16o & (x, y, z) \\
& & \multicolumn{1}{c|}{} & Fe6 & 16o & \multicolumn{1}{c|}{(x, y, z)} & Fe6
& 16o & \multicolumn{1}{c|}{(x, y, z)} & & & \multicolumn{1}{c|}{} & & &
\\ \hline
& & & & & & & & & & & & & & \\ \hline
\multicolumn{3}{c|}{Pristine phase(P6/mmm)} & \multicolumn{3}{c|}{
SG66-Cccm(type \uppercase\expandafter{\romannumeral1})} &
\multicolumn{3}{c|}{SG66-Cccm(type \uppercase\expandafter{\romannumeral2})}
& \multicolumn{3}{c|}{SG66-Cccm(type \uppercase\expandafter{\romannumeral3})}
& \multicolumn{3}{c}{SG66-Cccm(type \uppercase\expandafter{\romannumeral4})}
\\ \hline
& WP & \multicolumn{1}{c|}{Coordinates} & & WP & \multicolumn{1}{c|}{
Coordinates} & & WP & \multicolumn{1}{c|}{Coordinates} & & WP &
\multicolumn{1}{c|}{Coordinates} & & WP & Coordinates \\ \hline
\multirow{4}{*}{Ge1} & \multirow{4}{*}{1a} & \multicolumn{1}{c|}{%
\multirow{4}{*}{(0, 0, 0)}} & Ge1 & 4c & \multicolumn{1}{c|}{(0, 0, 0)} & Ge1
& 4a & \multicolumn{1}{c|}{(0, 0, 1/4)} & Ge1 & 8l & \multicolumn{1}{c|}{(x,
y, 0)} & Ge1 & 8g & (x, 0, 1/4) \\
& & \multicolumn{1}{c|}{} & Ge2 & 4d & \multicolumn{1}{c|}{(0, 0, 1/2)} &
Ge2 & 4b & \multicolumn{1}{c|}{(0, 1/2, 1/4)} & Ge2 & 8l &
\multicolumn{1}{c|}{(x, y, 0)} & Ge2 & 8h & (0, y, 1/4) \\
& & \multicolumn{1}{c|}{} & Ge3 & 4e & \multicolumn{1}{c|}{(1/4, 1/4, 0)} &
Ge3 & 8k & \multicolumn{1}{c|}{(1/4, 1/4, 1/4)} & & & \multicolumn{1}{c|}{}
& & & \\
& & \multicolumn{1}{c|}{} & Ge4 & 4f & \multicolumn{1}{c|}{(1/4, 1/4, 1/2)}
& & & \multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & & & \\
\hline
\multirow{4}{*}{Ge2} & \multirow{4}{*}{2d} & \multicolumn{1}{c|}{%
\multirow{4}{*}{(1/3, 2/3, 1/2)}} & Ge5 & 8h & \multicolumn{1}{c|}{(0, y,
1/4)} & Ge4 & 8l & \multicolumn{1}{c|}{(x, y, 0)} & Ge3 & 8h &
\multicolumn{1}{c|}{(0, y, 1/4)} & Ge3 & 8l & (x, y, 0) \\
& & \multicolumn{1}{c|}{} & Ge6 & 8h & \multicolumn{1}{c|}{(0, y, 1/4)} &
Ge5 & 8l & \multicolumn{1}{c|}{(x, y, 0)} & Ge4 & 8h & \multicolumn{1}{c|}{
(0, y, 1/4)} & Ge4 & 8l & (x, y, 0) \\
& & \multicolumn{1}{c|}{} & Ge7 & 16m & \multicolumn{1}{c|}{(x, y, z)} & Ge6
& 8l & \multicolumn{1}{c|}{(x, y, 0)} & Ge5 & 16m & \multicolumn{1}{c|}{(x,
y, z)} & Ge5 & 8l & (x, y, 0) \\
& & \multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & Ge7 & 8l &
\multicolumn{1}{c|}{(x, y, 0)} & & & \multicolumn{1}{c|}{} & Ge6 & 8l &
(x, y, 0) \\ \hline
\multirow{8}{*}{Fe} & \multirow{8}{*}{3f} & \multicolumn{1}{c|}{%
\multirow{8}{*}{(1/2, 0, 0)}} & Fe1 & 8l & \multicolumn{1}{c|}{(x, y, 0)} &
Fe1 & 8g & \multicolumn{1}{c|}{(x, 0, 1/4)} & Fe1 & 4c & \multicolumn{1}{c|}{
(0, 0, 0)} & Fe1 & 4a & (0, 0, 1/4) \\
& & \multicolumn{1}{c|}{} & Fe2 & 8l & \multicolumn{1}{c|}{(x, y, 0)} & Fe2
& 8h & \multicolumn{1}{c|}{(0, y, 1/4)} & Fe2 & 4d & \multicolumn{1}{c|}{(0,
0, 1/2)} & Fe2 & 4b & (0, 1/2, 1/4) \\
& & \multicolumn{1}{c|}{} & Fe3 & 8l & \multicolumn{1}{c|}{(x, y, 0)} & Fe3
& 16m & \multicolumn{1}{c|}{(x, y, z)} & Fe3 & 4e & \multicolumn{1}{c|}{
(1/4, 1/4, 0)} & Fe3 & 8k & (1/4, 1/4, 1/4) \\
& & \multicolumn{1}{c|}{} & Fe4 & 8l & \multicolumn{1}{c|}{(x, y, 0)} & Fe4
& 16m & \multicolumn{1}{c|}{(x, y, z)} & Fe4 & 4f & \multicolumn{1}{c|}{
(1/4, 1/4, 1/2)} & Fe4 & 16m & (x, y, z) \\
& & \multicolumn{1}{c|}{} & Fe5 & 8l & \multicolumn{1}{c|}{(x, y, 0)} & &
& \multicolumn{1}{c|}{} & Fe5 & 8l & \multicolumn{1}{c|}{(x, y, 0)} & Fe5 &
16m & (x, y, z) \\
& & \multicolumn{1}{c|}{} & Fe6 & 8l & \multicolumn{1}{c|}{(x, y, 0)} & &
& \multicolumn{1}{c|}{} & Fe6 & 8l & \multicolumn{1}{c|}{(x, y, 0)} & & &
\\
& & \multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & & &
\multicolumn{1}{c|}{} & Fe7 & 8l & \multicolumn{1}{c|}{(x, y, 0)} & & &
\\
& & \multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & & &
\multicolumn{1}{c|}{} & Fe8 & 8l & \multicolumn{1}{c|}{(x, y, 0)} & & &
\\ \hline
\end{tabular}%
\end{table*}
\begin{table*}[htbp]
\caption{The corresponding Wyckoff positions and the coordinates of the
atoms in the pristine phase and CDW phases with different symmetries. (PART
\uppercase\expandafter{\romannumeral4}).}
\label{cdw4}%
\begin{tabular}{ccccccccccccccc}
\hline
\multicolumn{3}{c|}{Pristine phase(P6/mmm)} & \multicolumn{3}{c|}{
SG65-Cmmm(type \uppercase\expandafter{\romannumeral1})} &
\multicolumn{3}{c|}{SG65-Cmmm(type \uppercase\expandafter{\romannumeral2})} & \multicolumn{3}{c|}{SG65-Cmmm(type %
\uppercase\expandafter{\romannumeral3})} & \multicolumn{3}{c}{SG65-Cmmm(type %
\uppercase\expandafter{\romannumeral4})} \\ \hline
& WP & \multicolumn{1}{c|}{Coordinates} & & WP & \multicolumn{1}{c|}{
Coordinates} & & WP & \multicolumn{1}{c|}{Coordinates} & & WP &
\multicolumn{1}{c|}{Coordinates} & & WP & Coordinates \\ \hline
\multirow{6}{*}{Ge1} & \multirow{6}{*}{1a} & \multicolumn{1}{c|}{%
\multirow{6}{*}{(0, 0, 0)}} & Ge1 & 2a & \multicolumn{1}{c|}{(0, 0, 0)} & Ge1
& 4i & \multicolumn{1}{c|}{(0, y, 0)} & Ge1 & 4k & \multicolumn{1}{c|}{(0,
0, z)} & Ge1 & 8o & (x, 0, z) \\
& & \multicolumn{1}{c|}{} & Ge2 & 2b & \multicolumn{1}{c|}{(0, 1/2, 0)} &
Ge2 & 4j & \multicolumn{1}{c|}{(0, y, 1/2)} & Ge2 & 4l & \multicolumn{1}{c|}{
(0, 1/2, z)} & Ge2 & 8n & (0, y, z) \\
& & \multicolumn{1}{c|}{} & Ge3 & 2c & \multicolumn{1}{c|}{(0, 1/2, 1/2)} &
Ge3 & 4g & \multicolumn{1}{c|}{(x, 0, 0)} & Ge3 & 8m & \multicolumn{1}{c|}{
(1/4, 1/4, z)} & & & \\
& & \multicolumn{1}{c|}{} & Ge4 & 2d & \multicolumn{1}{c|}{(0, 0, 1/2)} &
Ge4 & 4h & \multicolumn{1}{c|}{(x, 0, 1/2)} & & & \multicolumn{1}{c|}{} &
& & \\
& & \multicolumn{1}{c|}{} & Ge5 & 4e & \multicolumn{1}{c|}{(1/4, 1/4, 0)} &
& & \multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & & & \\
& & \multicolumn{1}{c|}{} & Ge6 & 4f & \multicolumn{1}{c|}{(1/4, 1/4, 1/2)}
& & & \multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & & & \\
\hline
\multirow{6}{*}{Ge2} & \multirow{6}{*}{2d} & \multicolumn{1}{c|}{%
\multirow{6}{*}{(1/3, 2/3, 1/2)}} & Ge7 & 8n & \multicolumn{1}{c|}{(0, y, z)}
& Ge5 & 8n & \multicolumn{1}{c|}{(0, y, z)} & Ge4 & 4i & \multicolumn{1}{c|}{
(0, y, 0)} & Ge3 & 4i & (0, y, 0) \\
& & \multicolumn{1}{c|}{} & Ge8 & 8n & \multicolumn{1}{c|}{(0, y, z)} & Ge6
& 8n & \multicolumn{1}{c|}{(0, y, z)} & Ge5 & 4i & \multicolumn{1}{c|}{(0,
y, 0)} & Ge4 & 4i & (0, y, 0) \\
& & \multicolumn{1}{c|}{} & Ge9 & 16r & \multicolumn{1}{c|}{(x, y, z)} & Ge7
& 16r & \multicolumn{1}{c|}{(x, y, z)} & Ge6 & 4j & \multicolumn{1}{c|}{(0,
y, 1/2)} & Ge5 & 4j & (0, y, 1/2) \\
& & \multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & & &
\multicolumn{1}{c|}{} & Ge7 & 4j & \multicolumn{1}{c|}{(0, y, 1/2)} & Ge6 &
4j & (0, y, 1/2) \\
& & \multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & & &
\multicolumn{1}{c|}{} & Ge8 & 8p & \multicolumn{1}{c|}{(x, y, 0)} & Ge7 & 8p
& (x, y, 0) \\
& & \multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & & &
\multicolumn{1}{c|}{} & Ge9 & 8q & \multicolumn{1}{c|}{(x, y, 1/2)} & Ge8 &
8q & (x, y, 1/2) \\ \hline
\multirow{10}{*}{Fe} & \multirow{10}{*}{3f} & \multicolumn{1}{c|}{%
\multirow{10}{*}{(1/2, 0, 0)}} & Fe1 & 4g & \multicolumn{1}{c|}{(x, 0, 0)} &
Fe1 & 2a & \multicolumn{1}{c|}{(0, 0, 0)} & Fe1 & 8o & \multicolumn{1}{c|}{
(x, 0, z)} & Fe1 & 4k & (0, 0, z) \\
& & \multicolumn{1}{c|}{} & Fe2 & 4h & \multicolumn{1}{c|}{(x, 0, 1/2)} &
Fe2 & 2b & \multicolumn{1}{c|}{(0, 1/2, 0)} & Fe2 & 8n & \multicolumn{1}{c|}{
(0, y, z)} & Fe2 & 4l & (0, 1/2, z) \\
& & \multicolumn{1}{c|}{} & Fe3 & 4i & \multicolumn{1}{c|}{(0, y, 0)} & Fe3
& 2c & \multicolumn{1}{c|}{(0, 1/2, 1/2)} & Fe3 & 16r & \multicolumn{1}{c|}{
(x, y, z)} & Fe3 & 8m & (1/4, 1/4, z) \\
& & \multicolumn{1}{c|}{} & Fe4 & 4j & \multicolumn{1}{c|}{(0, y, 1/2)} &
Fe4 & 2d & \multicolumn{1}{c|}{(0, 0, 1/2)} & Fe4 & 16r &
\multicolumn{1}{c|}{(x, y, z)} & Fe4 & 16r & (x, y, z) \\
& & \multicolumn{1}{c|}{} & Fe5 & 8p & \multicolumn{1}{c|}{(x, y, 0)} & Fe5
& 4e & \multicolumn{1}{c|}{(1/4, 1/4, 0)} & & & \multicolumn{1}{c|}{} & Fe5
& 16r & (x, y, z) \\
& & \multicolumn{1}{c|}{} & Fe6 & 8p & \multicolumn{1}{c|}{(x, y, 0)} & Fe6
& 4f & \multicolumn{1}{c|}{(1/4, 1/4, 1/2)} & & & \multicolumn{1}{c|}{} &
& & \\
& & \multicolumn{1}{c|}{} & Fe7 & 8q & \multicolumn{1}{c|}{(x, y, 1/2)} &
Fe7 & 8p & \multicolumn{1}{c|}{(x, y, 0)} & & & \multicolumn{1}{c|}{} & &
& \\
& & \multicolumn{1}{c|}{} & Fe8 & 8q & \multicolumn{1}{c|}{(x, y, 1/2)} &
Fe8 & 8p & \multicolumn{1}{c|}{(x, y, 0)} & & & \multicolumn{1}{c|}{} & &
& \\
& & \multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & Fe9 & 8q &
\multicolumn{1}{c|}{(x, y, 1/2)} & & & \multicolumn{1}{c|}{} & & & \\
& & \multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & Fe10 & 8q &
\multicolumn{1}{c|}{(x, y, 1/2)} & & & \multicolumn{1}{c|}{} & & & \\
\hline
& & & & & & & & & & & & & & \\ \hline
\multicolumn{3}{c|}{Pristine phase(P6/mmm)} & \multicolumn{3}{c|}{
SG64-Cmce(type \uppercase\expandafter{\romannumeral1})} &
\multicolumn{3}{c|}{SG64-Cmce(type \uppercase\expandafter{\romannumeral2})}
& \multicolumn{3}{c|}{SG64-Cmce(type \uppercase\expandafter{\romannumeral3})}
& \multicolumn{3}{c}{SG64-Cmce(type \uppercase\expandafter{\romannumeral4})}
\\ \hline
& WP & \multicolumn{1}{c|}{Coordinates} & & WP & \multicolumn{1}{c|}{
Coordinates} & & WP & \multicolumn{1}{c|}{Coordinates} & & WP &
\multicolumn{1}{c|}{Coordinates} & & WP & Coordinates \\ \hline
\multirow{3}{*}{Ge1} & \multirow{3}{*}{1a} & \multicolumn{1}{c|}{%
\multirow{3}{*}{(0, 0, 0)}} & Ge1 & 4a & \multicolumn{1}{c|}{(0, 0, 0)} & Ge1
& 4a & \multicolumn{1}{c|}{(0, 0, 0)} & Ge1 & 8e & \multicolumn{1}{c|}{(1/4,
y, 1/4)} & Ge1 & 8e & (1/4, y, 1/4) \\
& & \multicolumn{1}{c|}{} & Ge2 & 4b & \multicolumn{1}{c|}{(0, 0, 1/2)} &
Ge2 & 4b & \multicolumn{1}{c|}{(0, 0, 1/2)} & Ge2 & 8f & \multicolumn{1}{c|}{
(0, y, z)} & Ge2 & 8f & (0, y, z) \\
& & \multicolumn{1}{c|}{} & Ge3 & 8c & \multicolumn{1}{c|}{(1/4, 1/4, 0)} &
Ge3 & 8c & \multicolumn{1}{c|}{(1/4, 1/4, 0)} & & & \multicolumn{1}{c|}{}
& & & \\ \hline
\multirow{4}{*}{Ge2} & \multirow{4}{*}{2d} & \multicolumn{1}{c|}{%
\multirow{4}{*}{(1/3, 2/3, 1/2)}} & Ge4 & 16g & \multicolumn{1}{c|}{(x, y, z)
} & Ge4 & 8e & \multicolumn{1}{c|}{(1/4, y, 1/4)} & Ge3 & 8d &
\multicolumn{1}{c|}{(x, 0, 0)} & Ge3 & 8f & (0, y, z) \\
& & \multicolumn{1}{c|}{} & Ge5 & 16g & \multicolumn{1}{c|}{(x, y, z)} & Ge5
& 8e & \multicolumn{1}{c|}{(1/4, y, 1/4)} & Ge4 & 8d & \multicolumn{1}{c|}{
(x, 0, 0)} & Ge4 & 8f & (0, y, z) \\
& & \multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & Ge6 & 8f &
\multicolumn{1}{c|}{(0, y, z)} & Ge5 & 16g & \multicolumn{1}{c|}{(x, y, z)}
& Ge5 & 16g & (x, y, z) \\
& & \multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & Ge7 & 8f &
\multicolumn{1}{c|}{(0, y, z)} & & & \multicolumn{1}{c|}{} & & & \\
\hline
\multirow{5}{*}{Fe} & \multirow{5}{*}{3f} & \multicolumn{1}{c|}{%
\multirow{5}{*}{(1/2, 0, 0)}} & Fe1 & 8d & \multicolumn{1}{c|}{(x, 0, 0)} &
Fe1 & 8e & \multicolumn{1}{c|}{(1/4, y, 1/4)} & Fe1 & 8e &
\multicolumn{1}{c|}{(1/4, y, 1/4)} & Fe1 & 8e & (1/4, y, 1/4) \\
& & \multicolumn{1}{c|}{} & Fe2 & 8f & \multicolumn{1}{c|}{(0, y, z)} & Fe2
& 8f & \multicolumn{1}{c|}{(0, y, z)} & Fe2 & 8f & \multicolumn{1}{c|}{(0,
y, z)} & Fe2 & 8f & (0, y, z) \\
& & \multicolumn{1}{c|}{} & Fe3 & 16g & \multicolumn{1}{c|}{(x, y, z)} & Fe3
& 16g & \multicolumn{1}{c|}{(x, y, z)} & Fe3 & 16g & \multicolumn{1}{c|}{(x,
y, z)} & Fe3 & 16g & (x, y, z) \\
& & \multicolumn{1}{c|}{} & Fe4 & 16g & \multicolumn{1}{c|}{(x, y, z)} & Fe4
& 16g & \multicolumn{1}{c|}{(x, y, z)} & Fe4 & 16g & \multicolumn{1}{c|}{(x,
y, z)} & Fe4 & 16g & (x, y, z) \\
& & \multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & & &
\multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & & & \\ \hline
& & & & & & & & & & & & & & \\ \hline
\multicolumn{3}{c|}{Pristine phase(P6/mmm)} & \multicolumn{3}{c|}{
SG64-Cmce(type \uppercase\expandafter{\romannumeral5})} &
\multicolumn{3}{c|}{SG64-Cmce(type \uppercase\expandafter{\romannumeral6})}
& \multicolumn{3}{c|}{SG64-Cmce(type \uppercase\expandafter{\romannumeral7})}
& \multicolumn{3}{c}{SG64-Cmce(type \uppercase\expandafter{\romannumeral8})}
\\ \hline
& WP & \multicolumn{1}{c|}{Coordinates} & & WP & \multicolumn{1}{c|}{
Coordinates} & & WP & \multicolumn{1}{c|}{Coordinates} & & WP &
\multicolumn{1}{c|}{Coordinates} & & WP & Coordinates \\ \hline
\multirow{3}{*}{Ge1} & \multirow{3}{*}{1a} & \multicolumn{1}{c|}{%
\multirow{3}{*}{(0, 0, 0)}} & Ge1 & 8e & \multicolumn{1}{c|}{(1/4, y, 1/4)}
& Ge1 & 8e & \multicolumn{1}{c|}{(1/4, y, 1/4)} & Ge1 & 8d &
\multicolumn{1}{c|}{(x, 0, 0)} & Ge1 & 8d & (x, 0, 0) \\
& & \multicolumn{1}{c|}{} & Ge2 & 8f & \multicolumn{1}{c|}{(0, y, z)} & Ge2
& 8f & \multicolumn{1}{c|}{(0, y, z)} & Ge2 & 8f & \multicolumn{1}{c|}{(0,
y, z)} & Ge2 & 8f & (0, y, z) \\
& & \multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & & &
\multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & & & \\ \hline
\multirow{4}{*}{Ge2} & \multirow{4}{*}{2d} & \multicolumn{1}{c|}{%
\multirow{4}{*}{(1/3, 2/3, 1/2)}} & Ge3 & 8f & \multicolumn{1}{c|}{(0, y, z)}
& Ge3 & 8d & \multicolumn{1}{c|}{(x, 0, 0)} & Ge3 & 16g &
\multicolumn{1}{c|}{(x, y, z)} & Ge4 & 8e & (1/4, y, 1/4) \\
& & \multicolumn{1}{c|}{} & Ge4 & 8f & \multicolumn{1}{c|}{(0, y, z)} & Ge4
& 8d & \multicolumn{1}{c|}{(x, 0, 0)} & Ge4 & 16g & \multicolumn{1}{c|}{(x,
y, z)} & Ge5 & 8e & (1/4, y, 1/4) \\
& & \multicolumn{1}{c|}{} & Ge5 & 16g & \multicolumn{1}{c|}{(x, y, z)} & Ge5
& 16g & \multicolumn{1}{c|}{(x, y, z)} & & & \multicolumn{1}{c|}{} & Ge6 &
8f & (0, y, z) \\
& & \multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & & &
\multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & Ge7 & 8f & (0, y, z)
\\ \hline
Fe & 3f & \multicolumn{1}{c|}{(1/2, 0, 0)} & Fe1 & 8e & \multicolumn{1}{c|}{
(1/4, y, 1/4)} & Fe1 & 8e & \multicolumn{1}{c|}{(1/4, y, 1/4)} & Fe1 & 4a &
\multicolumn{1}{c|}{(0, 0, 0)} & Fe1 & 4a & (0, 0, 0) \\
\multirow{4}{*}{} & \multirow{4}{*}{} & \multicolumn{1}{c|}{\multirow{4}{*}{}
} & Fe2 & 8f & \multicolumn{1}{c|}{(0, y, z)} & Fe2 & 8f &
\multicolumn{1}{c|}{(0, y, z)} & Fe2 & 4b & \multicolumn{1}{c|}{(0, 0, 1/2)}
& Fe2 & 4b & (0, 0, 1/2) \\
& & \multicolumn{1}{c|}{} & Fe3 & 16g & \multicolumn{1}{c|}{(x, y, z)} & Fe3
& 16g & \multicolumn{1}{c|}{(x, y, z)} & Fe3 & 8c & \multicolumn{1}{c|}{
(1/4, 1/4, 0)} & Fe3 & 8c & (1/4, 1/4, 0) \\
& & \multicolumn{1}{c|}{} & Fe4 & 16g & \multicolumn{1}{c|}{(x, y, z)} & Fe4
& 16g & \multicolumn{1}{c|}{(x, y, z)} & Fe4 & 16g & \multicolumn{1}{c|}{(x,
y, z)} & Fe4 & 16g & (x, y, z) \\
& & \multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & & &
\multicolumn{1}{c|}{} & Fe5 & 16g & \multicolumn{1}{c|}{(x, y, z)} & Fe5 &
16g & (x, y, z) \\ \hline
\end{tabular}%
\end{table*}
\begin{table*}[htbp]
\caption{The corresponding Wyckoff positions and the coordinates of the
atoms in the pristine phase and CDW phases with different symmetries. (PART
\uppercase\expandafter{\romannumeral5}).}
\label{cdw5}%
\begin{tabular}{ccccccccccccccc}
\hline
\multicolumn{3}{c|}{Pristine phase(P6/mmm)} & \multicolumn{3}{c|}{
SG63-Cmcm(type \uppercase\expandafter{\romannumeral1})} & \multicolumn{3}{c|}{
SG63-Cmcm(type \uppercase\expandafter{\romannumeral2})} &
\multicolumn{3}{c|}{SG63-Cmcm(type \uppercase\expandafter{\romannumeral3})}
& \multicolumn{3}{c}{SG63-Cmcm(type \uppercase\expandafter{\romannumeral4})}
\\ \hline
& WP & \multicolumn{1}{c|}{Coordinates} & & WP & \multicolumn{1}{c|}{
Coordinates} & & WP & \multicolumn{1}{c|}{Coordinates} & & WP &
\multicolumn{1}{c|}{Coordinates} & & WP & Coordinates \\ \hline
\multirow{3}{*}{Ge1} & \multirow{3}{*}{1a} & \multicolumn{1}{c|}{%
\multirow{3}{*}{(0, 0, 0)}} & Ge1 & 4a & \multicolumn{1}{c|}{(0, 0, 0)} & Ge1
& 4a & \multicolumn{1}{c|}{(0, 0, 0)} & Ge1 & 4c & \multicolumn{1}{c|}{(0,
y, 1/4)} & Ge1 & 4c & (0, y, 1/4) \\
& & \multicolumn{1}{c|}{} & Ge2 & 4b & \multicolumn{1}{c|}{(0, 1/2, 0)} &
Ge2 & 4b & \multicolumn{1}{c|}{(0, 1/2, 0)} & Ge2 & 4c & \multicolumn{1}{c|}{
(0, y, 1/4)} & Ge2 & 4c & (0, y, 1/4) \\
& & \multicolumn{1}{c|}{} & Ge3 & 8d & \multicolumn{1}{c|}{(1/4, 1/4, 0)} &
Ge3 & 8d & \multicolumn{1}{c|}{(1/4, 1/4, 0)} & Ge3 & 8g &
\multicolumn{1}{c|}{(x, y, 1/4)} & Ge3 & 8g & (x, y, 1/4) \\ \hline
\multirow{6}{*}{Ge2} & \multirow{6}{*}{2d} & \multicolumn{1}{c|}{%
\multirow{6}{*}{(1/3, 2/3, 1/2)}} & Ge4 & 8g & \multicolumn{1}{c|}{(x, y,
1/4)} & Ge4 & 4c & \multicolumn{1}{c|}{(0, y, 1/4)} & Ge4 & 8e &
\multicolumn{1}{c|}{(x, 0, 0)} & Ge4 & 8e & (x, 0, 0) \\
& & \multicolumn{1}{c|}{} & Ge5 & 8g & \multicolumn{1}{c|}{(x, y, 1/4)} &
Ge5 & 4c & \multicolumn{1}{c|}{(0, y, 1/4)} & Ge5 & 8e & \multicolumn{1}{c|}{
(x, 0, 0)} & Ge5 & 8e & (x, 0, 0) \\
& & \multicolumn{1}{c|}{} & Ge6 & 8g & \multicolumn{1}{c|}{(x, y, 1/4)} &
Ge6 & 4c & \multicolumn{1}{c|}{(0, y, 1/4)} & Ge6 & 16h &
\multicolumn{1}{c|}{(x, y, z)} & Ge6 & 16h & (x, y, z) \\
& & \multicolumn{1}{c|}{} & Ge7 & 8g & \multicolumn{1}{c|}{(x, y, 1/4)} &
Ge7 & 4c & \multicolumn{1}{c|}{(0, y, 1/4)} & \multicolumn{1}{l}{} &
\multicolumn{1}{l}{} & \multicolumn{1}{l|}{} & \multicolumn{1}{l}{} &
\multicolumn{1}{l}{} & \multicolumn{1}{l}{} \\
& & \multicolumn{1}{c|}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} &
\multicolumn{1}{l|}{} & Ge8 & 8g & \multicolumn{1}{c|}{(x, y, 1/4)} &
\multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l|}{} &
\multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} \\
& & \multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & Ge9 & 8g &
\multicolumn{1}{c|}{(x, y, 1/4)} & & & \multicolumn{1}{c|}{} & & & \\
\hline
\multirow{7}{*}{Fe} & \multirow{7}{*}{3f} & \multicolumn{1}{c|}{%
\multirow{7}{*}{(1/2, 0, 0)}} & Fe1 & 8e & \multicolumn{1}{c|}{(x, 0, 0)} &
Fe1 & 8e & \multicolumn{1}{c|}{(x, 0, 0)} & Fe1 & 4c & \multicolumn{1}{c|}{
(0, y, 1/4)} & Fe1 & 4c & (0, y, 1/4) \\
& & \multicolumn{1}{c|}{} & Fe2 & 8f & \multicolumn{1}{c|}{(0, y, z)} & Fe2
& 8f & \multicolumn{1}{c|}{(0, y, z)} & Fe2 & 4c & \multicolumn{1}{c|}{(0,
y, 1/4)} & Fe2 & 4c & (0, y, 1/4) \\
& & \multicolumn{1}{c|}{} & Fe3 & 16h & \multicolumn{1}{c|}{(x, y, z)} & Fe3
& 16h & \multicolumn{1}{c|}{(x, y, z)} & Fe3 & 8g & \multicolumn{1}{c|}{(x,
y, 1/4)} & Fe3 & 8g & (x, y, 1/4) \\
& & \multicolumn{1}{c|}{} & Fe4 & 16h & \multicolumn{1}{c|}{(x, y, z)} & Fe4
& 16h & \multicolumn{1}{c|}{(x, y, z)} & Fe4 & 8g & \multicolumn{1}{c|}{(x,
y, 1/4)} & Fe4 & 8g & (x, y, 1/4) \\
& & \multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & & &
\multicolumn{1}{c|}{} & Fe5 & 8g & \multicolumn{1}{c|}{(x, y, 1/4)} & Fe5 &
8g & (x, y, 1/4) \\
& & \multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & & &
\multicolumn{1}{c|}{} & Fe6 & 8g & \multicolumn{1}{c|}{(x, y, 1/4)} & Fe6 &
8g & (x, y, 1/4) \\
& & \multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & & &
\multicolumn{1}{c|}{} & Fe7 & 8g & \multicolumn{1}{c|}{(x, y, 1/4)} & Fe7 &
8g & (x, y, 1/4) \\ \hline
& & & & & & & & & & & & & & \\ \hline
\multicolumn{3}{c|}{Pristine phase(P6/mmm)} & \multicolumn{3}{c|}{
SG63-Cmcm(type \uppercase\expandafter{\romannumeral5})} &
\multicolumn{3}{c|}{SG63-Cmcm(type \uppercase\expandafter{\romannumeral6})}
& \multicolumn{3}{c|}{SG63-Cmcm(type \uppercase\expandafter{\romannumeral7})}
& \multicolumn{3}{c}{SG63-Cmcm(type \uppercase\expandafter{\romannumeral8})}
\\ \hline
& WP & \multicolumn{1}{c|}{Coordinates} & & WP & \multicolumn{1}{c|}{
Coordinates} & & WP & \multicolumn{1}{c|}{Coordinates} & & WP &
\multicolumn{1}{c|}{Coordinates} & & WP & Coordinates \\ \hline
\multirow{3}{*}{Ge1} & \multirow{3}{*}{1a} & \multicolumn{1}{c|}{%
\multirow{3}{*}{(0, 0, 0)}} & Ge1 & 4c & \multicolumn{1}{c|}{(0, y, 1/4)} &
Ge1 & 4c & \multicolumn{1}{c|}{(0, y, 1/4)} & Ge1 & 8e & \multicolumn{1}{c|}{
(x, 0, 0)} & Ge1 & 8e & (x, 0, 0) \\
& & \multicolumn{1}{c|}{} & Ge2 & 4c & \multicolumn{1}{c|}{(0, y, 1/4)} &
Ge2 & 4c & \multicolumn{1}{c|}{(0, y, 1/4)} & Ge2 & 8f & \multicolumn{1}{c|}{
(0, y, z)} & Ge2 & 8f & (0, y, z) \\
& & \multicolumn{1}{c|}{} & Ge3 & 8g & \multicolumn{1}{c|}{(x, y, 1/4)} &
Ge3 & 8g & \multicolumn{1}{c|}{(x, y, 1/4)} & & & \multicolumn{1}{c|}{} &
& & \\ \hline
\multirow{6}{*}{Ge2} & \multirow{6}{*}{2d} & \multicolumn{1}{c|}{%
\multirow{6}{*}{(1/3, 2/3, 1/2)}} & Ge4 & 8f & \multicolumn{1}{c|}{(0, y, z)}
& Ge4 & 8f & \multicolumn{1}{c|}{(0, y, z)} & Ge3 & 8g & \multicolumn{1}{c|}{
(x, y, 1/4)} & Ge4 & 4c & (0, y, 1/4) \\
& & \multicolumn{1}{c|}{} & Ge5 & 8f & \multicolumn{1}{c|}{(0, y, z)} & Ge5
& 8f & \multicolumn{1}{c|}{(0, y, z)} & Ge4 & 8g & \multicolumn{1}{c|}{(x,
y, 1/4)} & Ge5 & 4c & (0, y, 1/4) \\
& & \multicolumn{1}{c|}{} & Ge6 & 16h & \multicolumn{1}{c|}{(x, y, z)} & Ge6
& 16h & \multicolumn{1}{c|}{(x, y, z)} & Ge5 & 8g & \multicolumn{1}{c|}{(x,
y, 1/4)} & Ge6 & 4c & (0, y, 1/4) \\
& & \multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & & &
\multicolumn{1}{c|}{} & Ge6 & 8g & \multicolumn{1}{c|}{(x, y, 1/4)} & Ge7 &
4c & (0, y, 1/4) \\
& & \multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & & &
\multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & Ge8 & 8g & (x, y, 1/4)
\\
& & \multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & & &
\multicolumn{1}{c|}{} & & & \multicolumn{1}{c|}{} & Ge9 & 8g & (x, y, 1/4)
\\ \hline
\multirow{7}{*}{Fe} & \multirow{7}{*}{3f} & \multicolumn{1}{c|}{%
\multirow{7}{*}{(1/2, 0, 0)}} & Fe1 & 4c & \multicolumn{1}{c|}{(0, y, 1/4)}
& Fe1 & 4c & \multicolumn{1}{c|}{(0, y, 1/4)} & Fe1 & 4a &
\multicolumn{1}{c|}{(0, 0, 0)} & Fe1 & 4a & (0, 0, 0) \\
& & \multicolumn{1}{c|}{} & Fe2 & 4c & \multicolumn{1}{c|}{(0, y, 1/4)} &
Fe2 & 4c & \multicolumn{1}{c|}{(0, y, 1/4)} & Fe2 & 4b & \multicolumn{1}{c|}{
(0, 1/2, 0)} & Fe2 & 4b & (0, 1/2, 0) \\
& & \multicolumn{1}{c|}{} & Fe3 & 8g & \multicolumn{1}{c|}{(x, y, 1/4)} &
Fe3 & 8g & \multicolumn{1}{c|}{(x, y, 1/4)} & Fe3 & 8d & \multicolumn{1}{c|}{
(1/4, 1/4, 0)} & Fe3 & 8d & (1/4, 1/4, 0) \\
& & \multicolumn{1}{c|}{} & Fe4 & 8g & \multicolumn{1}{c|}{(x, y, 1/4)} &
Fe4 & 8g & \multicolumn{1}{c|}{(x, y, 1/4)} & Fe4 & 16h &
\multicolumn{1}{c|}{(x, y, z)} & Fe4 & 16h & (x, y, z) \\
& & \multicolumn{1}{c|}{} & Fe5 & 8g & \multicolumn{1}{c|}{(x, y, 1/4)} &
Fe5 & 8g & \multicolumn{1}{c|}{(x, y, 1/4)} & Fe5 & 16h &
\multicolumn{1}{c|}{(x, y, z)} & Fe5 & 16h & (x, y, z) \\
& & \multicolumn{1}{c|}{} & Fe6 & 8g & \multicolumn{1}{c|}{(x, y, 1/4)} &
Fe6 & 8g & \multicolumn{1}{c|}{(x, y, 1/4)} & & & \multicolumn{1}{c|}{} &
& & \\
& & \multicolumn{1}{c|}{} & Fe7 & 8g & \multicolumn{1}{c|}{(x, y, 1/4)} &
Fe7 & 8g & \multicolumn{1}{c|}{(x, y, 1/4)} & & & \multicolumn{1}{c|}{} &
& & \\ \hline
\end{tabular}%
\end{table*}
\clearpage
\bibliographystyle{aps}
| 51,667 |
\section{Introduction}
This paper is concerned with the problem
\begin{equation}\label{leq}
\left\{
\begin{aligned}
&u_{xx}+{\lambda} f(u)=0, \quad x \in (-1,1) \setminus \{ 0 \},
\\
&u_x(-1)=u_x(1)=0,
\\
&u(-0)+au_x(-0)=u(+0)-au_x(+0),
\\
&u_x(-0)=u_x(+0),
\end{aligned}
\right.
\end{equation}
where ${\lambda}>0$ is a bifurcation parameter,
$a>0$ is a fixed constant
and $-0$ (resp. $+0$) stands for the left-hand limit (resp. the right-hand limit)
as $x$ approaches $0$.
Throughout the paper,
we assume that $f$ satisfies the following conditions:
\begin{equation}\label{basf}
\left\{
\begin{aligned}
&f \in C^2({\mathbb R}), \\
&f(-1)=f(0)=f(1)=0, \ f'(-1)<0, \ f'(0)>0, \ f'(1)<0, \\
&{\operatorname{sgn}}(u) f(u)>0 \quad \mbox{for} \ u \in (-1,1) \setminus \{0\}, \\
&f(u)=-f(-u) \quad \mbox{for} \ u \in (-1,1).
\end{aligned}
\right.
\end{equation}
Here ${\operatorname{sgn}}$ is the sign function.
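A prototypical nonlinearity satisfying \eqref{basf} is the cubic $f(u)=u-u^3$:
indeed,
\begin{equation*}
f'(u)=1-3u^2, \qquad f'(0)=1>0, \qquad f'(\pm 1)=-2<0,
\end{equation*}
and ${\operatorname{sgn}}(u)f(u)=|u|(1-u^2)>0$ for $u \in (-1,1) \setminus \{0\}$,
while $f(-u)=-f(u)$ holds for all $u$.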
Our interest is the structure of solutions of \eqref{leq}
in the bifurcation diagram.
\subsection{Background and known results}
The problem \eqref{leq} is related to the stationary problem
of the scalar reaction-diffusion equation
\begin{equation}\label{rde}
\left\{
\begin{aligned}
&u_t=\Delta u+{\lambda} f(u), && \tilde x \in {\Omega}, \\
&\partial_\nu u=0, && \tilde x \in \partial {\Omega},
\end{aligned}
\right.
\end{equation}
where ${\Omega}$ is a bounded domain in a Euclidean space
and $\partial_\nu$ stands for the outward normal derivative.
The existence of stable nonconstant stationary solutions
is one of the main interests in the study of \eqref{rde}.
In \cite{CH78,Ma79},
it was shown that
\eqref{rde} has no stable nonconstant stationary solutions if ${\Omega}$ is convex,
while in \cite{Ma79}, the existence of such solutions was proved
when ${\Omega}$ is a dumbbell-shaped domain.
Here, by dumbbell-shaped domain,
we mean a domain given by the union of two regions
and a thin tubular channel connecting them.
The structure of stable nonconstant stationary solutions was studied
by deriving a finite-dimensional limiting equation
as the thickness of the channel approaches zero.
The limiting equation was obtained and examined by \cite{V83},
and it was shown that a stable nonconstant stationary solution appears
through a secondary bifurcation if $\Omega$ is symmetric (see also \cite{HV84}).
In~\cite{Mo90},
the stability of nonconstant stationary solutions was investigated
by constructing an invariant manifold (see also \cite{F90,MEF91,MJ92}).
The problem \eqref{leq} is derived as another type of limiting equation.
For small ${\varepsilon}>0$,
let ${\Omega}_{\varepsilon}$ be a thin dumbbell-shaped domain shown in Figure~\ref{tdsd}.
Then we can expect that solutions of \eqref{rde}
are approximated by those of some equation in one space dimension,
since ${\Omega}_{\varepsilon}$ shrinks to the interval $(-1,1)$ as ${\varepsilon} \to 0$.
It is indeed shown in \cite{K} that stationary solutions of \eqref{rde} are approximated by
solutions of \eqref{leq} in the following sense:
for any nondegenerate solution $u$ of \eqref{leq},
there exists a corresponding stationary solution $u_{\varepsilon}$ of \eqref{rde} with ${\Omega}={\Omega}_{\varepsilon}$
such that $u_{\varepsilon}$ converges to $u$ in an appropriate sense as ${\varepsilon} \to 0$.
Therefore the analysis of \eqref{leq} provides
information on the structure of solutions of \eqref{rde}.
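As a rough numerical illustration, not part of the original analysis, solutions of \eqref{leq} can be sought by a shooting method: integrate from $x=-1$ with $u_x(-1)=0$, apply the two interface conditions at $x=0$, and adjust the initial value $u(-1)$ until $u_x(1)=0$. The choice $f(u)=u-u^3$ and the parameter values below are illustrative assumptions.

```python
def f(u):
    return u - u**3

def rk4_step(u, v, h, lam):
    """One RK4 step for the system u' = v, v' = -lam * f(u)."""
    def rhs(u, v):
        return v, -lam * f(u)
    k1u, k1v = rhs(u, v)
    k2u, k2v = rhs(u + 0.5 * h * k1u, v + 0.5 * h * k1v)
    k3u, k3v = rhs(u + 0.5 * h * k2u, v + 0.5 * h * k2v)
    k4u, k4v = rhs(u + h * k3u, v + h * k3v)
    return (u + h * (k1u + 2 * k2u + 2 * k3u + k4u) / 6,
            v + h * (k1v + 2 * k2v + 2 * k3v + k4v) / 6)

def shoot(s, lam, a, n=2000):
    """Integrate from x = -1 with u = s, u_x = 0; return u_x(1).

    At x = 0 the conditions u_x(-0) = u_x(+0) and
    u(-0) + a u_x(-0) = u(+0) - a u_x(+0) give u(+0) = u(-0) + 2a u_x(-0).
    A zero of s -> shoot(s, lam, a) corresponds to a solution of the problem.
    """
    h = 1.0 / n
    u, v = s, 0.0
    for _ in range(n):              # left half [-1, 0)
        u, v = rk4_step(u, v, h, lam)
    u += 2 * a * v                  # jump across the interface at x = 0
    for _ in range(n):              # right half (0, 1]
        u, v = rk4_step(u, v, h, lam)
    return v

# The constant solutions u = 0 and u = +-1 satisfy u_x(1) = 0 exactly.
print(shoot(0.0, 5.0, 0.5), shoot(1.0, 5.0, 0.5))
```

A root finder (e.g. bisection in $s$) applied to `shoot` then locates nonconstant solutions for a given $\lambda$ and $a$.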
\begin{figure}[htbp]
\centering
\includegraphics[keepaspectratio, scale=1.0]{tddom.pdf}
\caption{Thin dumbbell-shaped domain.}
\label{tdsd}
\end{figure}
Not only equations in higher space dimensions
but also other one-dimensional equations are related to \eqref{leq}.
Let $u$ be a solution of \eqref{leq} and let $\overline{u}$ be defined by
\begin{equation*}
\overline{u}(x):=\left\{
\begin{aligned}
&u(x+a) && (x \in [-1-a,-a)),
\\
&\frac{u_x(+0)+u_x(-0)}{2}x +\frac{u(+0)+u(-0)}{2} && (x \in [-a,a]),
\\
&u(x-a) && (x \in (a,1+a]).
\end{aligned}
\right.
\end{equation*}
Then, in a suitable sense, $\overline{u}$ satisfies the problem
\begin{equation*}
\left\{
\begin{aligned}
&(\overline{u})_{xx}+{\lambda} (1-\chi_{(-a,a)}(x)) f(\overline{u})=0, \quad x \in (-1-a,1+a),
\\
&(\overline{u})_x(-1-a)=(\overline{u})_x(1+a)=0,
\end{aligned}
\right.
\end{equation*}
where $\chi_A$ denotes the indicator function of a set $A$.
Therefore \eqref{leq} can be regarded as a boundary value problem
for an equation with variable coefficients.
In many such problems, secondary bifurcations are observed.
For instance, the Neumann problem for the equation $(c(x)u_x)_x+{\lambda} (u-u^3)=0$
with a piecewise constant function $c(x)$ was studied by \cite{HR85},
and the Dirichlet problem for the equation $u_{xx}+d({\lambda},x,u)=0$
with $d({\lambda},x,u)=|x|^{\lambda} u^p$, $|x|^{\lambda} e^u$, or $(1-\chi_{(-{\lambda},{\lambda})}(x))u^p$
was investigated by \cite{KST18,ST19,T13,T17}.
We remark that the last two conditions of \eqref{leq} also appear
when we consider the Schr\"{o}dinger operators with ${\delta}'$-interactions
(for recent studies see \cite{AN13,AN14,AG18,AG20,GO20}).
The aim of this paper is to find bifurcation points
on branches of solutions bifurcating from $u=0$,
and to determine the Morse index of solutions on the branches.
We first observe that each branch consists of odd solutions or even solutions.
Then, under additional assumptions on $f$,
we show that a secondary bifurcation occurs only on the branch of odd solutions.
Finally, we prove that the Morse index of an odd solution with $m$ zeros
changes from $m+1$ to $m$ through a secondary bifurcation.
In particular, we conclude that
the Morse index of a monotone odd solution becomes zero after the secondary bifurcation.
This fact is consistent with the result in \cite{V83}.
\subsection{Main result}
To state our main result,
we set up notation to be used throughout the paper.
Set
\begin{equation*}
X_0:=\{ u \in C^2([-1,1] \setminus \{0\});
u_{xx}|_{[-1,0)} \mbox{ and } u_{xx}|_{(0,1]} \mbox{ are uniformly continuous}\}.
\end{equation*}
If $u \in X_0$, then $u|_{[-1,0)}$ (resp. $u|_{(0,1]}$)
can be extended to a $C^2$ function on $[-1,0]$ (resp. $[0,1]$).
Hence we see that limits $u(\pm 0)$ and $u_x(\pm 0)$ exist for any $u \in X_0$
and that $X_0$ is a Banach space endowed with the norm
\begin{equation*}
\| u\|_{X_0}=\sup_{x \in [-1,1] \setminus \{0\}} |u(x)|
+\sum_{k=1}^2 \sup_{x \in [-1,1] \setminus \{0\}} \left| \frac{d^k u}{dx^k}(x)\right|.
\end{equation*}
We focus on solutions of \eqref{leq} in the set $X \subset X_0$ defined by
\begin{equation*}
X:=\{ u \in X_0; |u(x)|<1 \mbox{ for } x \in [-1,1] \setminus \{0\} \}.
\end{equation*}
Let ${{\mathcal S}}$ denote the set of all pairs
$({\lambda},u) \in (0,\infty) \times X$ satisfying \eqref{leq}:
\begin{equation*}
{{\mathcal S}}:=\bigcup_{{\lambda} \in (0,\infty)} \{ {\lambda}\} \times {{\mathcal S}}_{\lambda},
\qquad
{{\mathcal S}}_{\lambda} :=\{ u \in X; u \mbox{ satisfies } \eqref{leq}\}.
\end{equation*}
Note that ${{\mathcal S}}$ contains a trivial branch ${{\mathcal L}}:=\{ ({\lambda},0)\}_{{\lambda} \in (0,\infty)}$.
For ${{\mathcal A}} \subset (0,\infty) \times X$,
we write
\begin{equation*}
-{{\mathcal A}}:=\{ ({\lambda},-u); ({\lambda},u) \in {{\mathcal A}} \}.
\end{equation*}
Then we see from the last condition of \eqref{basf}
that $-{{\mathcal A}} \subset {{\mathcal S}}$ if ${{\mathcal A}} \subset {{\mathcal S}}$.
We define ${{\mathcal S}}^o \subset {{\mathcal S}}$ and ${{\mathcal S}}^e \subset {{\mathcal S}}$ by
\begin{gather*}
{{\mathcal S}}^o:=\bigcup_{{\lambda} \in (0,\infty)} \{ {\lambda}\} \times {{\mathcal S}}_{\lambda}^o,
\qquad
{{\mathcal S}}^e:=\bigcup_{{\lambda} \in (0,\infty)} \{ {\lambda}\} \times {{\mathcal S}}_{\lambda}^e,
\\
{{\mathcal S}}_{\lambda}^o:=\{ u \in {{\mathcal S}}_{\lambda} \setminus \{0\};
u(-x)=-u(x) \mbox{ for } x \in [-1,1] \setminus \{0\} \},
\\
{{\mathcal S}}_{\lambda}^e:=\{ u \in {{\mathcal S}}_{\lambda} \setminus \{0\};
u(-x)=u(x) \mbox{ for } x \in [-1,1] \setminus \{0\} \}.
\end{gather*}
For a solution $u \in {{\mathcal S}}_{\lambda}$,
we also discuss the linearized eigenvalue problem
\begin{equation}\label{llevp}
\left\{
\begin{aligned}
&\varphi_{xx}+{\lambda} f'(u)\varphi=\mu \varphi, \quad x \in (-1,1) \setminus \{ 0 \},
\\
&\varphi_x(-1)=\varphi_x(1)=0,
\\
&\varphi(-0)+a\varphi_x(-0)=\varphi(+0)-a\varphi_x(+0),
\\
&\varphi_x(-0)=\varphi_x(+0).
\end{aligned}
\right.
\end{equation}
It is shown that the set of eigenvalues of \eqref{llevp}
consists of a sequence of real numbers which diverges to $-\infty$
(see Lemma~\ref{lem:ipmi} in Section~\ref{sec:pre}).
We say that $u$ is nondegenerate if all the eigenvalues are nonzero.
The number of positive eigenvalues is called the Morse index
and is denoted by $i(u)$.
For $n \in {\mathbb N}$,
we define ${\lambda}_n \in (0,\infty)$ by
\begin{equation*}
{\lambda}_n:=\left\{
\begin{aligned}
&\frac{z_{k}^2}{f'(0)} && \mbox{if } n=2k-1, k \in {\mathbb N},
\\
&\frac{(k\pi)^2}{f'(0)} && \mbox{if } n=2k, k \in {\mathbb N},
\end{aligned}
\right.
\end{equation*}
where $z_k$ denotes the unique root of the equation $az\tan z=1$ in $((k-1)\pi,(k-1/2)\pi)$.
Note that ${\lambda}_n<{\lambda}_{n+1}$.
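For concreteness, the numbers ${\lambda}_n$ can be computed numerically. The following sketch uses the illustrative values $a=1/2$ and $f'(0)=1$ (neither is fixed by the paper) and finds $z_k$ by bisection, using that $z \mapsto az\tan z$ increases from $0$ to $+\infty$ on $((k-1)\pi,(k-1/2)\pi)$:

```python
import math

A = 0.5      # illustrative value of a (assumption, not from the paper)
FP0 = 1.0    # illustrative value of f'(0)

def z_root(k, a=A, tol=1e-12):
    """Unique root of a*z*tan(z) = 1 in ((k-1)*pi, (k-1/2)*pi);
    a*z*tan(z) increases from 0 to +infinity there, so bisection applies."""
    lo = (k - 1) * math.pi + 1e-9
    hi = (k - 0.5) * math.pi - 1e-9
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if a * mid * math.tan(mid) > 1.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def lam(n, a=A, fp0=FP0):
    """lambda_n = z_k^2/f'(0) if n = 2k-1, and (k*pi)^2/f'(0) if n = 2k."""
    k = (n + 1) // 2
    return z_root(k, a) ** 2 / fp0 if n % 2 == 1 else (k * math.pi) ** 2 / fp0

lams = [lam(n) for n in range(1, 7)]
```

Running it for $n=1,\dots,6$ reproduces the ordering ${\lambda}_n<{\lambda}_{n+1}$ noted above.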
We set
\begin{equation*}
F(u):=2\int_0^u f(s)ds,
\end{equation*}
and fix $u_0 \in (0,1)$.
The main result of this paper is stated as follows.
\begin{thm}\label{mthm}
In addition to \eqref{basf}, assume that the following conditions are satisfied:
\begin{subequations}\label{aasfs}
\begin{gather}
f'(u)\left\{
\begin{aligned}
&>0 && \mbox{if } u \in (-u_0,u_0),
\\
&<0 && \mbox{if } u \in (-1,-u_0) \cup (u_0,1),
\end{aligned}
\right.
\label{aasfs0}
\\
{\operatorname{sgn}} (u) \frac{f'(u)F(u)}{f(u)^2} \mbox{ is strictly decreasing in }
(-u_0,u_0) \setminus \{0\},
\label{aasfs1}
\\
{\operatorname{sgn}} (u) \frac{d}{du} \left( \frac{f'(u)F(u)^{\frac{3}{2}}}{f(u)^3}\right) \le 0
\quad \mbox{for } u \in (-1,-u_0) \cup (u_0,1).
\label{aasfs2}
\end{gather}
\end{subequations}
Then for every $n \in {\mathbb N}$,
there exists ${{\mathcal C}}_n \subset {{\mathcal S}}$
with the following properties.
\begin{enumerate}[(i)]
\item
${{\mathcal C}}_n$ is a $C^1$ curve in $(0,\infty) \times X_0$
parametrized by ${\lambda} \in ({\lambda}_n,\infty)$ and bifurcates from $({\lambda}_n,0)$.
Moreover,
\begin{equation*}
{{\mathcal S}}^o=\bigcup_{k=1}^\infty {{\mathcal C}}_{2k-1} \cup (-{{\mathcal C}}_{2k-1}),
\qquad
{{\mathcal S}}^e=\bigcup_{k=1}^\infty {{\mathcal C}}_{2k} \cup (-{{\mathcal C}}_{2k}).
\end{equation*}
\item
There is no bifurcation point on ${{\mathcal L}} \setminus \{ ({\lambda}_n,0)\}_{n \in {\mathbb N}}$ or on ${{\mathcal S}}^e$.
\item
For every odd number $n \in {\mathbb N}$,
the curve ${{\mathcal C}}_n$ (resp. $-{{\mathcal C}}_n$) has a unique bifurcation point
$({\lambda}^*_n,u^*_n)$ (resp. $({\lambda}^*_n,-u^*_n)$).
Furthermore, bifurcating solutions form a $C^1$ curve
in a neighborhood of each bifurcation point.
\item
Let $({\lambda},u) \in {{\mathcal C}}_n \cup (-{{\mathcal C}}_n)$.
If $n$ is even, then $u$ is nondegenerate and $i(u)=n$;
if $n$ is odd, then $u$ is nondegenerate unless ${\lambda}={\lambda}^*_n$ and
\begin{equation*}
i(u)=\left\{
\begin{aligned}
&n && ({\lambda}<{\lambda}^*_n),
\\
&n-1 && ({\lambda} \ge {\lambda}^*_n).
\end{aligned}
\right.
\end{equation*}
Here ${\lambda}^*_n$ is the number given in (iii).
\end{enumerate}
\end{thm}
The bifurcation diagram of \eqref{leq} is shown in Figure~\ref{bcdg}.
\begin{remk}
The functions $f(u)=u-u^3$ and $f(u)=\sin \pi u$
are typical examples satisfying \eqref{aasfs},
that is, all of the conditions \eqref{aasfs0}, \eqref{aasfs1} and \eqref{aasfs2}.
The assumption \eqref{aasfs} is used
to show the uniqueness of bifurcation points on ${{\mathcal C}}_{2k-1}$.
The assertions (i) and (ii) are verified under the weaker assumption
\begin{equation}\label{aasfw}
\frac{f'(u)u}{f(u)}<1 \quad \mbox{for } u \in (-1,1) \setminus \{0\}.
\end{equation}
The fact that \eqref{aasfs} implies \eqref{aasfw} is not difficult to check.
For the convenience of the reader,
we give the proof of this fact in Appendix~\ref{appendixA}.
Let $n$ be even and let $({\lambda},u) \in {{\mathcal C}}_n$.
Then we have $u_x(+0)=u_x(-0)=0$, since $u$ is even and satisfies $u_x(-0)=u_x(+0)$.
Combining this with \eqref{leq} shows that $u$ can be extended smoothly up to $x=0$.
Therefore $u$ coincides with a solution $\tilde u$
of the usual Neumann problem for the equation
$\tilde u_{xx}+{\lambda} f(\tilde u)=0$ in $(-1,1)$.
It is, however, not obvious that $u$ and $\tilde u$ have the same Morse index,
since the corresponding linearized eigenvalue problems are different.
\end{remk}
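The claims in the remark above can be sampled numerically. This is an illustration rather than a proof: for $f(u)=u-u^3$ we have $F(u)=u^2-u^4/2$ and $u_0=1/\sqrt{3}$, and the conditions \eqref{aasfs0}--\eqref{aasfs2} and \eqref{aasfw} are checked on finite grids:

```python
import math

f  = lambda u: u - u**3
fp = lambda u: 1.0 - 3.0 * u**2
F  = lambda u: u**2 - 0.5 * u**4        # F(u) = 2*int_0^u f(s) ds
u0 = 1.0 / math.sqrt(3.0)               # zero of f' in (0, 1)

def grid(lo, hi, n=200):
    return [lo + i * (hi - lo) / n for i in range(n + 1)]

g1 = lambda u: fp(u) * F(u) / f(u) ** 2         # quantity in (aasfs1), u > 0
g2 = lambda u: fp(u) * F(u) ** 1.5 / f(u) ** 3  # quantity inside d/du in (aasfs2), u > 0

inner = grid(0.01, u0 - 0.01)                   # sample points in (0, u0)
outer = grid(u0 + 0.01, 0.99)                   # sample points in (u0, 1)
aasfs0_ok = all(fp(u) > 0 for u in inner) and all(fp(u) < 0 for u in outer)
aasfs1_ok = all(g1(a) > g1(b) for a, b in zip(inner, inner[1:]))
aasfs2_ok = all(g2(a) >= g2(b) for a, b in zip(outer, outer[1:]))
aasfw_ok  = all(fp(u) * u / f(u) < 1.0 for u in inner + outer)
```

By oddness of the sampled quantities, checking $u>0$ suffices.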
\begin{figure}[h]
\includegraphics[scale=1.0]{thm_diag.pdf}
\caption{Bifurcation diagram of \eqref{leq}.}
\label{bcdg}
\end{figure}
The main task in the proof of Theorem~\ref{mthm}
is to investigate the behavior of eigenvalues of \eqref{llevp}.
For $({\lambda},u) \in {{\mathcal C}}_{2k-1}$ and $n \in {\mathbb N} \cup \{0\}$,
let $\mu_n ({\lambda})$ denote the $(n+1)$-th largest eigenvalue of \eqref{llevp}.
Then the condition $\mu_n ({\lambda}_0)=0$ is necessary
in order for a bifurcation to occur at $({\lambda}_0,u)$.
A theory of local bifurcations shows that $({\lambda}_0,u)$ is indeed a bifurcation point
if the additional condition $d\mu_n({\lambda}_0)/d{\lambda} \neq 0$ is satisfied.
The main difficulty arises from the verification of this condition.
We will show in Sections~\ref{sec:sb} and \ref{sec:pt} that the assumption \eqref{aasfs}
gives $d\mu_{2k-2}({\lambda}^*)/d{\lambda}<0$ for any ${\lambda}^*$ satisfying $\mu_{2k-2} ({\lambda}^*)=0$.
This enables us to conclude that a secondary bifurcation occurs on ${{\mathcal C}}_{2k-1}$ just once.
This paper is organized as follows.
Section~\ref{sec:pre} provides some preliminaries.
A main task is to reduce the problem \eqref{leq} to a finite dimensional problem.
In Section~\ref{sec:pb},
we study primary branches bifurcating from the trivial branch ${\mathcal L}$.
In Section~\ref{sec:sb},
we discuss the number of bifurcation points on the primary branches.
Section~\ref{sec:pt} is devoted to the evaluation of the Morse index
and the proof of Theorem~\ref{mthm}.
\section{Preliminaries}\label{sec:pre}
In this section,
we first introduce a function $G$ to be used throughout the study,
next convert \eqref{leq} into a finite dimensional problem by the shooting method,
then study general properties of eigenpairs of \eqref{llevp},
and finally give sufficient conditions for a bifurcation
by applying a bifurcation theorem developed in \cite{CR71}.
\subsection{Definition and properties of $G$}
Set
\begin{equation*}
{\beta}_0:=\sqrt{2\int_0^1 f(s)ds} \left( =\sqrt{F(1)}=\sqrt{F(-1)} \right),
\qquad
I:=(-{\beta}_0,{\beta}_0).
\end{equation*}
From \eqref{basf}, we see that the function
\begin{equation}\label{Ginv}
(-1,1) \ni u \mapsto {\operatorname{sgn}} (u) \sqrt{F(u)} \in I
\end{equation}
is strictly increasing.
We then define a function $G:I \to (-1,1)$ to be the inverse of this function,
that is, $G(v)$ is determined by the relation
\begin{equation*}
F(G(v))=v^2, \quad |G(v)|<1, \quad G(v) \left\{
\begin{aligned}
&<0 && \mbox{if } v \in (-{\beta}_0,0), \\
&>0 && \mbox{if } v \in (0,{\beta}_0).
\end{aligned}
\right.
\end{equation*}
Furthermore, we set
\begin{equation}\label{hdef}
h(v):=
1-\frac{G''(v)v}{G'(v)},
\qquad
H(v):=v^2 -\frac{G(v)v}{G'(v)}.
\end{equation}
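Since $G$ is defined only implicitly, a numerical sketch may help. For the illustrative choice $f(u)=u-u^3$ (so $F(u)=u^2-u^4/2$ and ${\beta}_0=\sqrt{F(1)}=1/\sqrt{2}$), $G(v)$ can be recovered by bisection, using that $F$ is strictly increasing on $[0,1]$:

```python
import math

F = lambda u: u**2 - 0.5 * u**4     # F for the illustrative choice f(u) = u - u^3
beta0 = math.sqrt(F(1.0))           # = 1/sqrt(2)

def G(v, tol=1e-13):
    """The unique u with F(u) = v^2, |u| < 1 and sgn(u) = sgn(v),
    found by bisection (F is strictly increasing on [0, 1])."""
    if v == 0.0:
        return 0.0
    t, lo, hi = v * v, 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if F(mid) < t:
            lo = mid
        else:
            hi = mid
    return math.copysign(0.5 * (lo + hi), v)
```

By construction $G$ is odd and increasing, matching the monotonicity of \eqref{Ginv}.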
In the following lemma,
we collect properties and formulas for $G$.
Although part of the lemma is shown in \cite{S90},
we prove all the assertions in Appendix~\ref{appendixB} for readers' convenience.
\begin{lem}\label{lem:Gpro}
The following hold.
\begin{enumerate}[(i)]
\item
$G(v) \in C^2(I) \cap C^3(I \setminus \{0\})$
and $G''(v)v \in C^1(I)$.
\item
\eqref{aasfs0}, \eqref{aasfs1} and \eqref{aasfs2}
are equivalent to the following conditions
\eqref{aashs0}, \eqref{aashs1} and \eqref{aashs2}, respectively:
\begin{subequations}\label{aashs}
\begin{gather}
h(v)\left\{
\begin{aligned}
&>0 && \mbox{if } v \in (-v_0,v_0),
\\
&<0 && \mbox{if } v \in (-{\beta}_0,-v_0) \cup (v_0,{\beta}_0),
\end{aligned}
\right.
\label{aashs0}
\\
{\operatorname{sgn}} (v) h(v) \mbox{ is strictly decreasing in } (-v_0,v_0) \setminus \{0\},
\label{aashs1}
\\
{\operatorname{sgn}} (v) \frac{d}{dv} (G'(v)h(v)) \le 0 \quad \mbox{for } v \in (-{\beta}_0,-v_0) \cup (v_0,{\beta}_0).
\label{aashs2}
\end{gather}
\end{subequations}
Here $v_0=G^{-1}(u_0)$.
Moreover, \eqref{aasfw} holds if and only if
\begin{equation}\label{aasGw}
{\operatorname{sgn}}(v) H'(v)>0 \quad \mbox{for } v \in I \setminus \{0\}.
\end{equation}
\item
There are constants $c>0$ and $C>0$ such that for all $v \in I$,
\begin{gather}
\frac{c}{\sqrt{{\beta}_0-|v|}} \le G'(v) \le \frac{C}{\sqrt{{\beta}_0-|v|}},
\label{Gpas} \\
{\operatorname{sgn}}(v) G''(v) \ge \frac{c}{({\beta}_0-|v|)^{3/2}}-C.
\label{Gdpas}
\end{gather}
\end{enumerate}
\end{lem}
The function $G$ was introduced in \cite{S90,W83,W86}
to obtain a simple expression for solutions of the equation $w_{xx}+f(w)=0$.
A solution $w$ satisfying $|w|<1$
corresponds to the closed orbit $w_x^2+F(w)=c_0$ in the $ww_x$-plane,
where $c_0$ is a nonnegative constant.
By the change of variables $w=G(\tilde w)$,
the orbit is transformed into the circle $w_x^2+\tilde w^2=c_0$ in the $\tilde ww_x$-plane.
Hence $w$ is written as $w(x)=G(\sqrt{c_0} \cos \rho (x))$
for some suitable function $\rho (x)$.
The details of this argument are given in the next subsection.
\subsection{Reduction to a finite dimensional problem}
For ${\beta}_1,{\beta}_2 \in I$ and ${\lambda} \in (0,\infty)$,
let $u_1$ and $u_2$ be the solutions of the initial value problems
\begin{equation}\label{u12ivp}
\left\{
\begin{aligned}
&(u_1)_{xx}+{\lambda} f(u_1)=0, \quad x \in {\mathbb R}, \\
&u_1(-1)=G({\beta}_1), \ (u_1)_x(-1)=0,
\end{aligned}
\right.
\qquad \qquad
\left\{
\begin{aligned}
&(u_2)_{xx}+{\lambda} f(u_2)=0, \quad x \in {\mathbb R}, \\
&u_2(1)=G({\beta}_2), \ (u_2)_x(1)=0.
\end{aligned}
\right.
\end{equation}
Then the function $u$ defined by
\begin{equation*}
u(x)=\left\{
\begin{aligned}
&u_1(x) && (x \in [-1,0)), \\
&u_2(x) && (x \in (0,1])
\end{aligned}
\right.
\end{equation*}
satisfies \eqref{leq} if and only if
\begin{equation}\label{smoeq}
\left\{
\begin{aligned}
u_1(0)+a(u_1)_x(0)&=u_2(0)-a(u_2)_x(0), \\
(u_1)_x(0)&=(u_2)_x(0).
\end{aligned}
\right.
\end{equation}
Now we introduce a solution $(U,V)=(U(y,{\beta}),V(y,{\beta}))$ of the initial value problem
\begin{equation}\label{UVeq}
\left\{
\begin{aligned}
&U_y=V, \quad V_y=-f(U), && y \in {\mathbb R}, \\
&(U(0,{\beta}),V(0,{\beta}))=(G({\beta}),0).
\end{aligned}
\right.
\end{equation}
Then we have
\begin{gather}
u_1(x)=U\left( \sqrt{\lambda} (x+1),{\beta}_1 \right),
\quad
u_2(x)=U\left( \sqrt{\lambda} (1-x),{\beta}_2 \right),
\label{u12U}
\\
(u_1)_x (x)=\sqrt{\lambda} V\left( \sqrt{\lambda} (x+1),{\beta}_1 \right),
\quad
(u_2)_x (x)=-\sqrt{\lambda} V\left( \sqrt{\lambda} (1-x),{\beta}_2 \right).
\nonumber
\end{gather}
Hence \eqref{smoeq} is equivalent to the equation
\begin{equation}\label{smteq}
(P({\lambda},{\beta}_1),Q({\lambda},{\beta}_1))=(P({\lambda},{\beta}_2),-Q({\lambda},{\beta}_2)),
\end{equation}
where
\begin{equation*}
P({\lambda},{\beta}):=U(\sqrt{\lambda},{\beta}) +a\sqrt{\lambda} V(\sqrt{\lambda},{\beta}),
\qquad
Q({\lambda},{\beta}):=V(\sqrt{\lambda},{\beta}).
\end{equation*}
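Solutions of \eqref{UVeq} conserve $V^2+F(U)$, which equals $F(G({\beta}))={\beta}^2$; this orbit relation underlies the formulas below. A short RK4 sketch for the illustrative choice $f(u)=u-u^3$, with a hypothetical initial value standing in for $G({\beta})$, makes the conservation visible:

```python
import math

f = lambda u: u - u**3              # illustrative nonlinearity
F = lambda u: u**2 - 0.5 * u**4

def flow(u, v, y_end, n=4000):
    """Classical RK4 for the system U_y = V, V_y = -f(U)."""
    h = y_end / n
    rhs = lambda u, v: (v, -f(u))
    for _ in range(n):
        k1u, k1v = rhs(u, v)
        k2u, k2v = rhs(u + 0.5 * h * k1u, v + 0.5 * h * k1v)
        k3u, k3v = rhs(u + 0.5 * h * k2u, v + 0.5 * h * k2v)
        k4u, k4v = rhs(u + h * k3u, v + h * k3v)
        u += h * (k1u + 2 * k2u + 2 * k3u + k4u) / 6.0
        v += h * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0
    return u, v

u_init = 0.6                        # hypothetical value standing in for G(beta)
u_end, v_end = flow(u_init, 0.0, 5.0)
```

Up to the integrator's error, $V^2+F(U)$ stays at its initial value $F(u_{\mathrm{init}})$.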
We remark that
\begin{gather}
U(y,-{\beta})=-U(y,{\beta}),
\qquad
V(y,-{\beta})=-V(y,{\beta}),
\label{UVsy}
\\
P({\lambda},-{\beta})=-P({\lambda},{\beta}),
\qquad
Q({\lambda},-{\beta})=-Q({\lambda},{\beta}),
\label{PQsy}
\end{gather}
which follow from the fact that $f$ and $G$ are odd functions.
Let us investigate the relation between \eqref{leq} and \eqref{smteq}.
The set of solutions of \eqref{smteq} is denoted by
\begin{gather*}
{{\mathcal T}}:=\bigcup_{{\lambda} \in (0,\infty)} \{ {\lambda}\} \times {{\mathcal T}}_{\lambda},
\\
\mathcal{T}_{\lambda} :=\{ ({\beta}_1,{\beta}_2) \in I \times I;
(P({\lambda},{\beta}_1),Q({\lambda},{\beta}_1))=(P({\lambda},{\beta}_2),-Q({\lambda},{\beta}_2)) \}.
\end{gather*}
We define $C^2$ mappings $\Phi :X \to I \times I$ and $\Psi_{\lambda} :I \times I \to X$ by
\begin{gather*}
\Phi (u):=(G^{-1}(u(-1)),G^{-1}(u(1))),
\\
\Psi_{\lambda} ({\beta}_1,{\beta}_2)(x) :=\left\{
\begin{aligned}
&U\left( \sqrt{\lambda} (x+1),{\beta}_1 \right) (=u_1(x)) && \mbox{for } x \in [-1,0), \\
&U\left( \sqrt{\lambda} (1-x),{\beta}_2 \right) (=u_2(x)) && \mbox{for } x \in (0,1].
\end{aligned}
\right.
\end{gather*}
We note that
\begin{equation}\label{Pssy}
\Psi_{\lambda} (-{\beta}_1,-{\beta}_2)(x)=-\Psi_{\lambda} ({\beta}_1,{\beta}_2)(x),
\end{equation}
which follows immediately from \eqref{UVsy}.
By the following lemma, we see that
\eqref{leq} is reduced to the finite dimensional problem \eqref{smteq}.
\begin{lem}\label{ppoo}
$\Phi|_{\mathcal{S}_{\lambda}}$ and $\Psi_{\lambda}|_{\mathcal{T}_{\lambda}}$
are one-to-one correspondences between $\mathcal{S}_{\lambda}$ and $\mathcal{T}_{\lambda}$.
More precisely,
$\Phi(\mathcal{S}_{\lambda})=\mathcal{T_{\lambda}}$, $\Psi_{\lambda}(\mathcal{T}_{\lambda})=\mathcal{S}_{\lambda}$,
$\Psi_{\lambda} \circ \Phi|_{\mathcal{S}_{\lambda}}=id_{\mathcal{S}_{\lambda}}$
and $\Phi \circ \Psi_{\lambda}|_{\mathcal{T}_{\lambda}}=id_{\mathcal{T}_{\lambda}}$.
\end{lem}
\begin{proof}
That $\Psi_{\lambda}(\mathcal{T}_{\lambda}) \subset \mathcal{S}_{\lambda}$
and $\Phi \circ \Psi_{\lambda}|_{\mathcal{T}_{\lambda}}=id_{\mathcal{T}_{\lambda}}$
follow from the definitions of $\Phi$ and $\Psi_{\lambda}$.
To see $\Phi(\mathcal{S}_{\lambda}) \subset \mathcal{T}_{\lambda}$ and
$\Psi_{\lambda} \circ \Phi|_{\mathcal{S}_{\lambda}}=id_{\mathcal{S}_{\lambda}}$,
let $u \in \mathcal{S}_{\lambda}$ and $({\beta}_1,{\beta}_2)=\Phi(u)$.
Then, by the first two conditions of \eqref{leq},
we see that $u_1:=u|_{[-1,0)}$ and $u_2:=u|_{(0,1]}$ satisfy \eqref{u12ivp}.
Hence $u_1$ and $u_2$ are given by \eqref{u12U},
which gives $(\Psi_{\lambda} \circ \Phi)(u)=u$.
Since the last two conditions of \eqref{leq} lead to \eqref{smoeq},
we deduce that $\Phi (u)=({\beta}_1,{\beta}_2) \in \mathcal{T}_{\lambda}$.
We have thus proved $\Phi(\mathcal{S}_{\lambda}) \subset \mathcal{T}_{\lambda}$ and
$\Psi_{\lambda} \circ \Phi|_{\mathcal{S}_{\lambda}}=id_{\mathcal{S}_{\lambda}}$.
This completes the proof.
\end{proof}
The nondegeneracy of a solution of \eqref{leq}
can also be determined by the corresponding solution of \eqref{smteq}.
We set
\begin{align*}
D({\lambda},{\beta}_1,{\beta}_2):=&\det
\begin{pmatrix}
P_{\beta} ({\lambda},{\beta}_1) & -P_{\beta} ({\lambda},{\beta}_2) \\
Q_{\beta} ({\lambda},{\beta}_1) & Q_{\beta} ({\lambda},{\beta}_2)
\end{pmatrix}
\\
=&P_{\beta} ({\lambda},{\beta}_1)Q_{\beta} ({\lambda},{\beta}_2) +Q_{\beta} ({\lambda},{\beta}_1)P_{\beta} ({\lambda},{\beta}_2),
\end{align*}
where $P_{\beta}$ and $Q_{\beta}$ stand for derivatives with respect to ${\beta}$.
\begin{lem}\label{lem:ndcD}
Let $u \in \mathcal{S}_{\lambda}$ and put $({\beta}_1,{\beta}_2)=\Phi (u) \in {\mathcal T}_{\lambda}$.
Then $u$ is nondegenerate if and only if $D({\lambda},{\beta}_1,{\beta}_2) \neq 0$.
\end{lem}
\begin{proof}
Let $u_1$ and $u_2$ be given by \eqref{u12U} and define
\begin{equation}\label{phjdef}
\phi_j:=\frac{1}{G'({\beta}_j)} \frac{\partial u_j}{\partial {\beta}_j},
\quad
j=1,2.
\end{equation}
Then, by the definitions of $P$ and $Q$, we have
\begin{equation}\label{PQphj}
P_{\beta} ({\lambda},{\beta}_j)=G'({\beta}_j) \left. \left\{ \phi_j +(-1)^{j-1} a(\phi_j)_x\right\} \right|_{x=0},
\quad
Q_{\beta} ({\lambda},{\beta}_j)=\frac{(-1)^{j-1}G'({\beta}_j)}{\sqrt{{\lambda}}} (\phi_j)_x |_{x=0}.
\end{equation}
Hence the condition $D({\lambda},{\beta}_1,{\beta}_2)=0$ holds if and only if
\begin{equation}\label{Ddcon}
\mbox{there is } ({\alpha}_1,{\alpha}_2) \neq (0,0) \mbox{ such that }
\left\{
\begin{aligned}
{\alpha}_1 \{ \phi_1+a(\phi_1)_x\}|_{x=0}&={\alpha}_2 \{ \phi_2-a(\phi_2)_x\}|_{x=0}, &&
\\
{\alpha}_1 (\phi_1)_x|_{x=0}&={\alpha}_2 (\phi_2)_x|_{x=0},
&&
\end{aligned}
\right.
\end{equation}
which means that $(P_{\beta}({\lambda},{\beta}_1), Q_{\beta}({\lambda},{\beta}_1))$
and $(-P_{\beta}({\lambda},{\beta}_2), Q_{\beta}({\lambda},{\beta}_2))$ are linearly dependent.
Suppose that $D({\lambda},{\beta}_1,{\beta}_2)=0$,
that is, the condition \eqref{Ddcon} holds.
By definition, we see that $\phi_j$ satisfies
\begin{equation}\label{duaeq}
(\phi_j)_{xx} +{\lambda} f'(u_j) \phi_j=0,
\qquad
\phi_j|_{x=(-1)^j}=1,
\qquad
(\phi_j)_x|_{x=(-1)^j}=0.
\end{equation}
We define $\varphi \in X_0 \setminus \{0\}$ by
\begin{equation*}
\varphi (x)=\left\{
\begin{aligned}
&{\alpha}_1 \phi_1(x) && (x \in [-1,0)), \\
&{\alpha}_2 \phi_2(x) && (x \in (0,1]).
\end{aligned}
\right.
\end{equation*}
Then we see from \eqref{Ddcon} and \eqref{duaeq} that
$\varphi$ satisfies \eqref{llevp} for $\mu=0$.
This means that $u$ is degenerate.
Suppose conversely that $u$ is degenerate, that is,
there is $\varphi \in X_0 \setminus \{0\}$ satisfying \eqref{llevp} for $\mu=0$.
If $\varphi$ vanished at $x=-1$ or $x=1$,
we would have $\varphi \equiv 0$
due to the uniqueness of solutions of initial value problems.
Hence both $\varphi (-1)$ and $\varphi (1)$ are nonzero.
Since $\phi_j$ and $\varphi$ satisfy the same differential equation,
we deduce that
\begin{equation}\label{pjvpj}
\phi_1 (x)=\frac{1}{\varphi (-1)}\varphi (x) \quad (x \in [-1,0)),
\quad
\phi_2 (x)=\frac{1}{\varphi (1)}\varphi (x) \quad (x \in (0,1]).
\end{equation}
This implies that \eqref{Ddcon} holds
for $({\alpha}_1,{\alpha}_2)=(\varphi (-1),\varphi (1)) \neq (0,0)$.
Thus $D({\lambda},{\beta}_1,{\beta}_2)=0$, and the lemma follows.
\end{proof}
Let us derive explicit formulas for $P$ and $Q$ by solving \eqref{UVeq}.
For $(y,{\beta}) \in {\mathbb R} \times I$,
let $\Theta \in {\mathbb R}$ be given implicitly by the relation
\begin{equation}\label{Thdef}
\int_0^\Theta G'({\beta} \cos \tau)d\tau =y.
\end{equation}
Since $G'$ is positive,
the left-hand side is strictly increasing with respect to $\Theta$
and diverges to $\pm \infty$ as $\Theta \to \pm \infty$.
Hence $\Theta=\Theta (y,{\beta})$
is uniquely determined for every $(y,{\beta}) \in {\mathbb R} \times I$.
Moreover, by the implicit function theorem,
we infer that $\Theta$ is of class $C^1$.
We put $\theta ({\lambda},{\beta}) :=\Theta (\sqrt{{\lambda}},{\beta})$,
that is, $\theta$ is determined by
\begin{equation}\label{tblr}
\int_0^\theta G'({\beta} \cos \tau)d\tau =\sqrt{\lambda}.
\end{equation}
We note that $\theta>0$.
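Differentiating \eqref{Thdef} in $y$ gives $\Theta_y=1/G'({\beta}\cos\Theta)$, which can be integrated numerically. In the sketch below (illustrative choice $f(u)=u-u^3$; step sizes and tolerances are arbitrary), $G'$ is computed from $F(G(v))=v^2$, which yields $G'(v)=v/f(G(v))$ for $v\neq 0$ and $G'(0)=1/\sqrt{f'(0)}=1$:

```python
import math

f = lambda u: u - u**3              # illustrative nonlinearity; f'(0) = 1
F = lambda u: u**2 - 0.5 * u**4

def G(v, tol=1e-10):
    """Inverse of u -> sgn(u)*sqrt(F(u)), by bisection on [0, 1]."""
    if v == 0.0:
        return 0.0
    t, lo, hi = v * v, 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if F(mid) < t:
            lo = mid
        else:
            hi = mid
    return math.copysign(0.5 * (lo + hi), v)

def Gp(v):
    """G'(v) = v / f(G(v)) for v != 0; G'(0) = 1/sqrt(f'(0)) = 1 here."""
    return 1.0 if abs(v) < 1e-6 else v / f(G(v))

def Theta(y, beta, n=4000):
    """Midpoint (RK2) scheme for Theta_y = 1/G'(beta*cos(Theta)), Theta(0) = 0."""
    th, h = 0.0, y / n
    for _ in range(n):
        k1 = 1.0 / Gp(beta * math.cos(th))
        k2 = 1.0 / Gp(beta * math.cos(th + 0.5 * h * k1))
        th += h * k2
    return th
```

Substituting the computed $\Theta$ back into \eqref{Thdef} by quadrature recovers $y$ up to discretization error.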
\begin{lem}
$P$ and $Q$ are written as
\begin{equation}\label{PQrep}
P=G({\beta} \cos \theta)-a\sqrt{\lambda} {\beta} \sin \theta,
\qquad
Q=-{\beta} \sin \theta.
\end{equation}
Furthermore,
\begin{equation}\label{pqdb}
\left\{
\begin{aligned}
P_{\beta}&=G'({\beta} \cos \theta) \cos \theta
-a\sin \theta \int_0^\theta G'({\beta} \cos \tau) d\tau
\\
&\quad +\left( {\beta} \sin \theta +a\frac{{\beta} \cos \theta}{G'({\beta} \cos \theta)}
\int_0^\theta G'({\beta} \cos \tau) d\tau \right)
\int_0^\theta G''({\beta} \cos \tau) \cos \tau d\tau,
\\
Q_{\beta}&=-\sin \theta+\frac{{\beta} \cos \theta}{G'({\beta} \cos \theta)}
\int_0^\theta G''({\beta} \cos \tau) \cos \tau d\tau.
\end{aligned}
\right.
\end{equation}
\end{lem}
\begin{proof}
Define
\begin{equation*}
(U,V):=(G({\beta} \cos \Theta),-{\beta} \sin \Theta).
\end{equation*}
Differentiating both sides of \eqref{Thdef}
yields $G'({\beta} \cos \Theta) \Theta_y=1$,
and hence
\begin{gather*}
U_y=G'({\beta} \cos \Theta) \cdot (-{\beta} \sin \Theta) \cdot \Theta_y=V,
\\
V_y=(-{\beta} \cos \Theta) \cdot \Theta_y
=-\frac{{\beta} \cos \Theta}{G'({\beta} \cos \Theta)}=-f(U),
\end{gather*}
where we have used \eqref{dGfo} in deriving the last equality.
Since $\Theta|_{y=0}=0$, we have $(U,V)|_{y=0}=(G({\beta}),0)$.
This shows that $(U,V)$ satisfies \eqref{UVeq},
and therefore we obtain \eqref{PQrep}.
By \eqref{PQrep},
we have
\begin{equation}\label{pqdb0}
\left\{
\begin{aligned}
&P_{\beta}=G'({\beta} \cos \theta)(\cos \theta -{\beta} \theta_{\beta} \sin \theta)
-a\sqrt{{\lambda}} (\sin \theta +{\beta} \theta_{\beta} \cos \theta),
\\
&Q_{\beta}=-\sin \theta -{\beta} \theta_{\beta} \cos \theta.
\end{aligned}
\right.
\end{equation}
Differentiating \eqref{tblr} gives
\begin{equation}\label{thdb}
\theta_{\beta} =-\frac{1}{G'({\beta} \cos \theta)} \int_0^\theta G''({\beta} \cos \tau) \cos \tau d\tau.
\end{equation}
Then \eqref{pqdb} is easily derived
by substituting \eqref{tblr} and \eqref{thdb} into \eqref{pqdb0}.
\end{proof}
\subsection{Properties of eigenpairs of \eqref{llevp}}
We recall known facts for the eigenvalue problem
\begin{equation}\label{gevp}
\left\{
\begin{aligned}
&\psi_{xx}+q(x) \psi=\nu w(x) \psi, && x \in (b,c), \\
&\psi_x(b)=\psi_x(c)=0,
\end{aligned}
\right.
\end{equation}
where $q$ and $w$ are given functions and $b,c \in {\mathbb R}$, $b<c$.
Since we deal with the case where $q$ and $w$ are not necessarily continuous,
we consider eigenfunctions in a generalized sense:
by an eigenfunction of \eqref{gevp}
we mean a function $\psi \in W^{2,1}(b,c) \setminus \{0\}$
satisfying the differential equation in \eqref{gevp} almost everywhere
and the boundary condition in the usual sense
(note that $W^{2,1}(b,c) \subset C^1([b,c])$).
\begin{thm}[\cite{A64}]\label{thm:gevp}
Suppose that $q$ and $w$ satisfy
\begin{equation}\label{assmp:gevp}
\left\{
\begin{aligned}
&q, w \in L^1(b,c),
\\
&w \ge 0 \quad \mbox{in } (b,c),
\\
&w>0 \quad \mbox{in } (b,b+{\delta}) \cup (c-{\delta},c) \mbox{ for some } {\delta}>0,
\\
&\{ w=0\} \subset \{ q=0\}.
\end{aligned}
\right.
\end{equation}
Then the following hold.
\begin{enumerate}[(i)]
\item
The set of eigenvalues of \eqref{gevp}
consists of real numbers $\{ \nu_n\}_{n=0}^\infty$ satisfying
\begin{equation*}
\nu_n>\nu_{n+1},
\qquad
\nu_n \to -\infty
\quad (n \to \infty).
\end{equation*}
Furthermore, each eigenvalue is simple, that is,
the eigenspace associated with $\nu_n$ is one-dimensional.
\item
Any eigenfunction corresponding to $\nu_n$ has exactly $n$ zeros in $(b,c)$.
\end{enumerate}
\end{thm}
For the proof of this theorem, see \cite[Theorems~8.4.5 and 8.4.6]{A64}.
In what follows,
we denote by ${\mathcal E}$ the set of all pairs $(q,w)$ satisfying \eqref{assmp:gevp},
and write $\nu_n(q)$ instead of $\nu_n$ if we emphasize the dependence on $q$.
\begin{lem}\label{lem:coev}
For $n \in {\mathbb N} \cup \{0\}$, the following hold.
\begin{enumerate}[(i)]
\item
Let $(q,w), (\tilde q,w) \in {{\mathcal E}}$.
Suppose that $q \ge \tilde q$ in $(b,c)$
and $q>\tilde q$ in some nonempty open subinterval of $(b,c)$.
Then $\nu_n(q)>\nu_n(\tilde q)$.
\item
For any ${\varepsilon}>0$ and $(q,w) \in {{\mathcal E}}$,
there exists ${\delta}>0$ such that $|\nu_n(q)-\nu_n(\tilde q)|<{\varepsilon}$
whenever $(\tilde q,w) \in {{\mathcal E}}$ and $\| q -\tilde q\|_{L^1(b,c)}<{\delta}$.
In other words, the mapping $q \mapsto \nu_n (q)$ is continuous.
\end{enumerate}
\end{lem}
In the case where $w$ is positive,
the above lemma is well-known
and can be shown by an argument based on Pr\"{u}fer's transformation.
It is not difficult to check that the same argument works
under the conditions stated in the lemma.
We omit the detailed proof.
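Part (i) of Lemma~\ref{lem:coev} can at least be observed numerically in the classical case $w\equiv 1$, using a finite-difference Neumann discretization. Everything below (grid size, potentials, the bump) is a hypothetical choice made for illustration:

```python
import numpy as np

def neumann_matrix(q, h):
    """Finite differences for psi'' + q*psi with Neumann BC (reflection)."""
    n = len(q)
    A = np.diag(-2.0 / h**2 + np.asarray(q, dtype=float))
    off = np.full(n - 1, 1.0 / h**2)
    A += np.diag(off, 1) + np.diag(off, -1)
    A[0, 0] += 1.0 / h**2        # reflection: psi_{-1} = psi_0
    A[-1, -1] += 1.0 / h**2      # reflection: psi_n = psi_{n-1}
    return A

n = 200
h = 2.0 / n
x = -1.0 + (np.arange(n) + 0.5) * h                       # cell centers on (-1, 1)
q_small = np.ones(n)                                      # plays the role of q-tilde
q_big = q_small + np.where(np.abs(x) < 0.3, 0.5, 0.0)     # q >= q-tilde, strict on |x| < 0.3
ev_small = np.linalg.eigvalsh(neumann_matrix(q_small, h))[::-1]   # descending order
ev_big = np.linalg.eigvalsh(neumann_matrix(q_big, h))[::-1]
```

Raising the potential on a subinterval strictly increases each of the leading eigenvalues, as stated in (i).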
Let us examine the properties of eigenvalues of \eqref{llevp}
using Theorem~\ref{thm:gevp} and Lemma~\ref{lem:coev}.
For $u \in X_0$,
we define a function $\overline{u}$ by
\begin{equation*}
\overline{u}(x):=\left\{
\begin{aligned}
&u(x+a) && (x \in [-1-a,-a)),
\\
&\frac{u_x(+0)+u_x(-0)}{2}x +\frac{u(+0)+u(-0)}{2} && (x \in [-a,a]),
\\
&u(x-a) && (x \in (a,1+a]).
\end{aligned}
\right.
\end{equation*}
Then it is straightforward to show that
if $u(-0)+au_x(-0)=u(+0)-au_x(+0)$ and $u_x(-0)=u_x(+0)$,
then $\overline{u} \in W^{2,\infty}(-1-a,1+a)$.
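The $C^1$ matching at $x=\pm a$ behind this claim can be checked directly: the middle linear piece of $\overline{u}$ takes the value $u(\mp 0)$ and the slope $u_x(\mp 0)$ at $x=\mp a$ whenever the two interface conditions hold. A minimal sketch with hypothetical interface data:

```python
# hypothetical interface data satisfying the two matching conditions
a  = 0.3
um, dm = 0.4, -0.7            # u(-0) and u_x(-0)
dp = dm                       # u_x(+0) = u_x(-0)
up = um + 2.0 * a * dm        # from u(-0) + a*u_x(-0) = u(+0) - a*u_x(+0)

slope     = 0.5 * (dp + dm)   # slope of the middle segment of u-bar
intercept = 0.5 * (up + um)   # value of the middle segment at x = 0
line = lambda x: slope * x + intercept

match_left  = abs(line(-a) - um) < 1e-12 and abs(slope - dm) < 1e-12
match_right = abs(line(a) - up) < 1e-12 and abs(slope - dp) < 1e-12
```

Hence $\overline{u}$ and $(\overline{u})_x$ are continuous at $x=\pm a$, and $(\overline{u})_{xx}$ is bounded, which gives $\overline{u} \in W^{2,\infty}(-1-a,1+a)$.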
We set
\begin{equation*}
w_0(x):=\left\{
\begin{aligned}
&1 && (x \in [-1-a,-a) \cup (a,1+a]),
\\
&0 && (x \in [-a,a]).
\end{aligned}
\right.
\end{equation*}
\begin{lem}\label{lem:ipmi}
Let $u \in X_0$.
Then the set of eigenvalues of \eqref{llevp}
consists of an infinite sequence of real numbers which diverges to $-\infty$.
Furthermore, each eigenvalue is simple and continuous
with respect to $({\lambda},u) \in (0,\infty) \times X_0$.
\end{lem}
\begin{proof}
Assume that $(\mu,\varphi)$ is an eigenpair of \eqref{llevp}.
Then $\overline{\varphi}$ satisfies $\overline{\varphi} \in W^{2,\infty}(-1-a,1+a)$,
$(\overline{\varphi})_x(-1-a)=(\overline{\varphi})_x(1+a)=0$ and
\begin{equation*}
(\overline{\varphi})_{xx} +{\lambda} f'(\overline{u}) \overline{\varphi}=\mu \overline{\varphi}
\quad \mbox{in } (-1-a,-a) \cup (a,1+a),
\qquad
(\overline{\varphi})_{xx}=0 \quad \mbox{in } (-a,a).
\end{equation*}
Hence we see that $(\mu,\overline{\varphi})$ is an eigenpair of \eqref{gevp} with
\begin{equation}\label{bcqw}
b=-1-a, \quad c=1+a, \quad q={\lambda} w_0 f'(\overline{u}), \quad w=w_0.
\end{equation}
Conversely, let $(\nu,\psi)$ be an eigenpair of \eqref{gevp}
for $b$, $c$, $q$, $w$ given above.
Then we have $\psi_{xx}=0$ in $(-a,a)$,
which shows that $\psi$ is a linear function in $(-a,a)$.
Therefore
\begin{equation}\label{pssr}
\psi_x (-a)=\psi_x (a)=\frac{\psi (a) -\psi (-a)}{2a}
\big( =(\mbox{the slope of the graph of } \psi \mbox{ in } (-a,a))\big).
\end{equation}
We put
\begin{equation*}
\underline{\psi} (x)=\left\{
\begin{aligned}
&\psi (x-a) && (x \in [-1,0)),
\\
&\psi (x+a) && (x \in (0,1]).
\end{aligned}
\right.
\end{equation*}
Since $q={\lambda} w_0 f'(\overline{u})$ and $w=w_0$
are uniformly continuous on $[-1-a,-a) \cup (a,1+a]$,
we see that $\psi_{xx}(=-q\psi +\nu w\psi)$ is also uniformly continuous on the same region.
Hence we have $\underline{\psi} \in X_0$.
Furthermore, \eqref{gevp} and \eqref{pssr} imply that
\eqref{llevp} is satisfied for $(\mu,\varphi)=(\nu,\underline{\psi})$.
We thus conclude that $(\nu,\underline{\psi})$ is an eigenpair of \eqref{llevp}.
The above argument shows that the set of eigenvalues of \eqref{llevp}
coincides with that of \eqref{gevp}.
It is straightforward to check that
$q={\lambda} w_0 f'(\overline{u})$ and $w=w_0$ satisfy \eqref{assmp:gevp},
and therefore we obtain the desired conclusion
by (i) of Theorem~\ref{thm:gevp} and (ii) of Lemma~\ref{lem:coev}.
\end{proof}
From now on, for $n \in {\mathbb N} \cup \{0\}$,
let $\mu_n(u)$ stand for the $(n+1)$-th largest eigenvalue of \eqref{llevp}.
\begin{lem}\label{lem:veef}
Let $\varphi$ be an eigenfunction of \eqref{llevp} corresponding to $\mu_n(u)$.
Then
\begin{equation*}
\varphi (-1)\varphi (1)
\left\{
\begin{aligned}
&>0 && \mbox{if } n \mbox{ is even},
\\
&<0 && \mbox{if } n \mbox{ is odd}.
\end{aligned}
\right.
\end{equation*}
\end{lem}
\begin{proof}
As shown in the proofs of Lemmas~\ref{lem:ndcD} and \ref{lem:ipmi},
we know that $\varphi (-1)\varphi (1)$ is nonzero
and that $(\mu_n(u),\overline{\varphi})$ is the $(n+1)$-th eigenpair of \eqref{gevp}
with $b,c,q,w$ given by \eqref{bcqw}.
By (ii) of Theorem~\ref{thm:gevp},
we see that $\overline{\varphi}$ has exactly $n$ zeros in $(-1-a,1+a)$.
Hence $\overline{\varphi}(-1-a)=\varphi (-1)$ and $\overline{\varphi}(1+a)=\varphi (1)$
have the same sign if $n$ is even and have opposite signs if $n$ is odd.
\end{proof}
We conclude this subsection with a lemma which provides basic estimates of the Morse index.
\begin{lem}\label{lem:Mies}
Suppose that \eqref{aasfw} holds and that $u \in {{\mathcal S}}_{\lambda} \setminus \{0\}$
vanishes at exactly $n$ points in $(-1,1) \setminus \{0\}$.
Then $\mu_{n+1}(u)<0$ if $u(-0)u(+0)<0$ and $\mu_{n}(u)<0$ if $u(-0)u(+0)>0$.
\end{lem}
\begin{proof}
Set
\begin{equation*}
q:=\left\{
\begin{aligned}
&{\lambda} w_0 \frac{f(\overline{u})}{\overline{u}} && \mbox{if } \ \overline{u} \neq 0,
\\
&{\lambda} w_0 f'(0) && \mbox{if } \ \overline{u}=0,
\end{aligned}
\right.
\qquad
\tilde q:={\lambda} w_0 f'(\overline{u}).
\end{equation*}
Then one can easily check that $(q,w_0), (\tilde q,w_0) \in {{\mathcal E}}$.
From the assumptions $u \not\equiv 0$ and \eqref{aasfw},
we see that $q \ge \tilde q$ in $(-1-a,1+a)$
and $q>\tilde q$ in a nonempty open subinterval of $(-1-a,1+a)$.
Hence (i) of Lemma~\ref{lem:coev} shows that $\nu_j (q)>\nu_j(\tilde q)$ for all $j$.
As shown in the proof of Lemma~\ref{lem:ipmi},
we know that $\mu_j(u)=\nu_j(\tilde q)$.
Therefore the lemma is proved if we show that
\begin{equation}\label{tu0ev}
\nu_{n+1} (q)=0 \ \mbox{ if } \ u(-0)u(+0)<0,
\qquad
\nu_n (q)=0 \ \mbox{ if } \ u(-0)u(+0)>0.
\end{equation}
The assumption $u \in {{\mathcal S}}_{\lambda} \setminus \{0\}$ shows that
$\overline{u} \in W^{2,\infty}(-1-a,1+a) \setminus \{0\}$ and
\begin{equation*}
\left\{
\begin{aligned}
&(\overline{u})_{xx}+{\lambda} w_0(x) f(\overline{u})=0, && x \in (-1-a,1+a),
\\
&(\overline{u})_x(-1-a)=(\overline{u})_x(1+a)=0.
\end{aligned}
\right.
\end{equation*}
Note that the above equation is written as
$(\overline{u})_{xx}+q(x) \overline{u}=0$.
Hence we infer that $\nu_m(q)=0$ for some $m$
and $\overline{u}$ is an eigenfunction corresponding to $\nu_m(q)=0$.
Since $u$ is assumed to vanish at exactly $n$ points in $(-1,1) \setminus \{0\}$,
we deduce that
\begin{equation*}
(\mbox{the number of zeros of } \overline{u} \mbox{ in } (-1-a,1+a))
=\left\{
\begin{aligned}
&n+1 && \mbox{if } u(-0)u(+0)<0,
\\
&n && \mbox{if } u(-0)u(+0)>0.
\end{aligned}
\right.
\end{equation*}
By (ii) of Theorem~\ref{thm:gevp},
we conclude that $m=n+1$ if $u(-0)u(+0)<0$ and $m=n$ if $u(-0)u(+0)>0$.
Thus \eqref{tu0ev} is verified, and the proof is complete.
\end{proof}
\subsection{Conditions for a bifurcation}
In this subsection,
we observe that sufficient conditions for a solution to be a bifurcation point
are described by means of $D({\lambda},{\beta}_1,{\beta}_2)$.
The precise statement is given as follows.
\begin{prop}\label{prop:bt}
Let $J \subset (0,\infty)$ be an open interval containing a point ${\lambda}_0$
and suppose that ${{\mathcal C}}=\{ ({\lambda},u(\cdot,{\lambda}))\}_{{\lambda} \in J} \subset {{\mathcal S}}$
is a $C^1$ curve in $(0,\infty) \times X_0$.
Set $({\beta}_1({\lambda}),{\beta}_2({\lambda})):=\Phi (u(\cdot,{\lambda}))$.
Then the following hold.
\begin{enumerate}[(i)]
\item
If $D({\lambda}_0,{\beta}_1({\lambda}_0),{\beta}_2({\lambda}_0)) \neq 0$, then
there is a neighborhood
${{\mathcal N}}$ of $({\lambda}_0,u(\cdot,{\lambda}_0))$ in $(0,\infty) \times X_0$
such that ${{\mathcal S}} \cap {{\mathcal N}}={{\mathcal C}} \cap {{\mathcal N}}$.
\item
Suppose that
\begin{equation}\label{assmp:bt}
D({\lambda}_0,{\beta}_1({\lambda}_0),{\beta}_2({\lambda}_0))=0,
\qquad
\left. \frac{d}{d{\lambda}} D({\lambda},{\beta}_1({\lambda}),{\beta}_2({\lambda})) \right|_{{\lambda}={\lambda}_0} \neq 0.
\end{equation}
Then there exists ${\tilde {\mathcal C}} \subset {{\mathcal S}}$
such that
\begin{gather*}
{\tilde {\mathcal C}} \mbox{ is a } C^1 \mbox{ curve in } (0,\infty) \times X_0
\mbox{ intersecting } {{\mathcal C}} \mbox{ transversally at } ({\lambda}_0,u(\cdot,{\lambda}_0)),
\\
{{\mathcal S}} \cap {{\mathcal N}}=({{\mathcal C}} \cup {\tilde {\mathcal C}}) \cap {{\mathcal N}}
\mbox{ for some neighborhood } {{\mathcal N}} \mbox{ of } ({\lambda}_0,u(\cdot,{\lambda}_0))
\mbox{ in } (0,\infty) \times X_0.
\end{gather*}
\item
Let \eqref{assmp:bt} hold,
and take $n \in {\mathbb N} \cup \{0\}$ satisfying $\mu_n (u(\cdot,{\lambda}_0))=0$.
Then $\mu_n (u(\cdot,{\lambda}))$ is continuously differentiable
in a neighborhood of ${\lambda}_0$ and
\begin{equation}\label{evdf}
{\operatorname{sgn}} \left( \left. \frac{d}{d{\lambda}} \mu_n (u(\cdot,{\lambda})) \right|_{{\lambda}={\lambda}_0} \right)
=(-1)^{n-1} {\operatorname{sgn}} \left(
\left. \frac{d}{d{\lambda}} D({\lambda},{\beta}_1({\lambda}),{\beta}_2({\lambda})) \right|_{{\lambda}={\lambda}_0} \right).
\end{equation}
\end{enumerate}
\end{prop}
\begin{remk}
The existence of the number $n$ appearing in (iii) is guaranteed by Lemma~\ref{lem:ndcD}.
\end{remk}
To show the above proposition,
we recall an abstract bifurcation theorem proved in \cite[Theorem~1.7]{CR71}.
In what follows,
we denote by $K(T)$ and $R(T)$ the kernel and range of a linear operator $T$, respectively.
\begin{thm}[\cite{CR71}]\label{thm:crbt}
Let $\mathcal{X}$ and $\mathcal{Y}$ be Banach spaces, $J$ an open interval
and $\mathcal{F}=\mathcal{F}({\lambda},w)$ a $C^1$ mapping
from $J \times \mathcal{X}$ to $\mathcal{Y}$.
Assume that
\begin{gather*}
D_{{\lambda} w}^2 \mathcal{F} \mbox{ and } D_{ww}^2 \mathcal{F}
\mbox{ exist and are continuous in } J \times \mathcal{X},
\\
\mathcal{F}({\lambda},0)=0 \quad \mbox{for all } {\lambda} \in J,
\\
\dim K(D_w \mathcal{F}({\lambda}_0,0))=\operatorname{codim} R(D_w \mathcal{F}({\lambda}_0,0))=1
\quad \mbox{for some } {\lambda}_0 \in J,
\\
D_{{\lambda} w}^2 \mathcal{F}({\lambda}_0,0) \varphi_0 \notin R(D_w \mathcal{F}({\lambda}_0,0)),
\quad \mbox{where } K(D_w \mathcal{F}({\lambda}_0,0))=\operatorname{span}\{ \varphi_0\}.
\end{gather*}
Then there exist an open interval $\tilde J$ containing $0$,
a $C^1$ curve $\{ ({\Lambda} (s),W(s))\}_{s \in \tilde J} \subset J \times \mathcal{X}$
and a neighborhood $\mathcal{N} \subset J \times \mathcal{X}$ of $({\lambda}_0,0)$
such that
\begin{equation}\label{lwpr}
\left\{
\begin{gathered}
({\Lambda} (0),W(0))=({\lambda}_0,0), \quad W_s(0)=\varphi_0,
\\
\mathcal{F}({\Lambda} (s),W(s))=0 \quad \mbox{for all } s \in \tilde J,
\\
\{ ({\lambda},w) \in \mathcal{N}; \mathcal{F}({\lambda},w)=0\}
=\left( \{ ({\lambda},0)\}_{{\lambda} \in J} \cup \{ ({\Lambda} (s),W(s))\}_{s \in \tilde J} \right) \cap {{\mathcal N}},
\end{gathered}
\right.
\end{equation}
where $W_s$ stands for the derivative of $W$ with respect to $s$.
\end{thm}
We will use the following lemma to ensure
the differentiability of an eigenpair of \eqref{llevp} with respect to ${\lambda}$.
For the proof,
we refer the reader to \cite[Proposition~I.7.2]{K12}.
\begin{lem}[\cite{K12}]\label{lem:devf}
In addition to the assumptions of Theorem~\ref{thm:crbt},
suppose that ${\mathcal X}$ is continuously embedded in ${\mathcal Y}$
and that $R(D_w \mathcal{F}({\lambda}_0,0))$ is a complement of
$K(D_w \mathcal{F}({\lambda}_0,0))=\operatorname{span}\{ \varphi_0\}$ in ${{\mathcal Y}}$:
\begin{equation*}
{{\mathcal Y}}=R(D_w \mathcal{F}({\lambda}_0,0)) \oplus \operatorname{span}\{ \varphi_0\}.
\end{equation*}
Then there exist an open interval $\tilde J$ containing ${\lambda}_0$ and $C^1$ mappings
$\tilde J \ni {\lambda} \mapsto \mu({\lambda}) \in {\mathbb R}$
and $\tilde J \ni {\lambda} \mapsto \varphi ({\lambda}) \in {{\mathcal X}}$ such that
\begin{equation*}
\mu({\lambda}_0)=0,
\qquad
\varphi ({\lambda}_0)=\varphi_0,
\qquad
D_w \mathcal{F}({\lambda},0) \varphi ({\lambda}) =\mu({\lambda}) \varphi ({\lambda}).
\end{equation*}
\end{lem}
\begin{remk}\label{rem:dmu}
We will apply Lemma~\ref{lem:devf} in the special case where
${\mathcal Y}$ is also continuously embedded in some Hilbert space ${\mathcal Z}$ and
\begin{equation*}
R(D_w \mathcal{F}({\lambda}_0,0)) =\operatorname{span}\{ \varphi_0\}^{\perp} \cap {{\mathcal Y}}
=\{ \varphi \in {{\mathcal Y}}; \langle \varphi,\varphi_0 \rangle=0 \}.
\end{equation*}
Here ${}^\perp$ and $\langle \cdot,\cdot \rangle$
stand for the orthogonal complement and the inner product in ${\mathcal Z}$, respectively.
Then by differentiating the equality
$\langle D_w \mathcal{F}({\lambda},0) \varphi ({\lambda}),\varphi_0 \rangle
=\mu({\lambda}) \langle \varphi ({\lambda}),\varphi_0 \rangle$,
we obtain the well-known formula
\begin{equation}\label{fode}
\frac{d\mu}{d{\lambda}}({\lambda}_0)
=\frac{\langle D_{{\lambda} w}^2 \mathcal{F}({\lambda}_0,0) \varphi_0,\varphi_0 \rangle}{
\langle \varphi_0,\varphi_0 \rangle}.
\end{equation}
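Indeed, differentiating the equality at ${\lambda}={\lambda}_0$ and using $\varphi ({\lambda}_0)=\varphi_0$ and $\mu({\lambda}_0)=0$, we get
\begin{equation*}
\langle D_{{\lambda} w}^2 \mathcal{F}({\lambda}_0,0) \varphi_0,\varphi_0 \rangle
+\left\langle D_w \mathcal{F}({\lambda}_0,0) \frac{d\varphi}{d{\lambda}}({\lambda}_0),\varphi_0 \right\rangle
=\frac{d\mu}{d{\lambda}}({\lambda}_0) \langle \varphi_0,\varphi_0 \rangle,
\end{equation*}
and the second term on the left vanishes, since
$D_w \mathcal{F}({\lambda}_0,0) (d\varphi/d{\lambda})({\lambda}_0)$ belongs to
$R(D_w \mathcal{F}({\lambda}_0,0))=\operatorname{span}\{ \varphi_0\}^{\perp} \cap {{\mathcal Y}}$.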
\end{remk}
To apply Theorem~\ref{thm:crbt} to our problem,
we set up function spaces ${{\mathcal X}}$, ${{\mathcal Y}}$ and ${{\mathcal Z}}$.
We choose
\begin{gather*}
{{\mathcal X}}:=\left\{ u \in X_0 ; \,
\begin{aligned}
&u_x(-1)=u_x(1)=0
\\
&u(-0) +au_x(-0)=u(+0)-au_x(+0)
\\
&u_x(-0)=u_x(+0)
\end{aligned}
\right\},
\\
{{\mathcal Y}}:=\{ u \in C([-1,1] \setminus \{0\});
u|_{[-1,0)} \mbox{ and } u|_{(0,1]} \mbox{ are uniformly continuous}\},
\\
{{\mathcal Z}}:=L^2(-1,1).
\end{gather*}
Then ${{\mathcal X}}$ is a closed linear subspace of $X_0$
and ${{\mathcal Y}}$ is a Banach space endowed with the uniform norm.
Let $\langle \cdot, \cdot \rangle$ denote the inner product on ${{\mathcal Z}}$:
\begin{equation*}
\langle u,v \rangle:=\int_{-1}^1 u(x)v(x) dx,
\quad u,v \in {{\mathcal Z}}.
\end{equation*}
We note that integrating by parts yields
\begin{align}
\langle \varphi_{xx}+q\varphi, \psi \rangle -\langle \varphi, \psi_{xx}+q\psi \rangle
=&\big[\varphi_x \psi -\varphi \psi_x\big]^{x=-0}_{x=-1}
+\big[\varphi_x \psi -\varphi \psi_x\big]_{x=+0}^{x=1}
\nonumber
\\
=&\big[\varphi_x \psi -\varphi \psi_x\big]_{x=-1}^{x=1}
+\big\{ \varphi_x (\psi +a\psi_x) -(\varphi +a\varphi_x) \psi_x \big\} \big|_{x=-0}
\nonumber
\\
&-\big\{ \varphi_x (\psi -a\psi_x) -(\varphi -a\varphi_x) \psi_x \big\} \big|_{x=+0}
\label{Tsym}
\end{align}
for all $\varphi,\psi \in X_0$ and $q \in {{\mathcal Y}}$.
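In particular, if $\varphi,\psi \in {{\mathcal X}}$, then the boundary conditions
$\varphi_x(\pm 1)=\psi_x(\pm 1)=0$ and the matching conditions at $x=\pm 0$
make every term on the right of \eqref{Tsym} vanish, so that
\begin{equation*}
\langle \varphi_{xx}+q\varphi, \psi \rangle =\langle \varphi, \psi_{xx}+q\psi \rangle
\quad \mbox{for all } \varphi,\psi \in {{\mathcal X}}, \ q \in {{\mathcal Y}}.
\end{equation*}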
We define a $C^2$ mapping ${{\mathcal G}}:(0,\infty) \times {{\mathcal X}} \to {{\mathcal Y}}$ by
\begin{equation*}
{{\mathcal G}} ({\lambda},u):=u_{xx} +{\lambda} f(u).
\end{equation*}
By definition, we have
${{\mathcal S}}=\{ ({\lambda},u) \in (0,\infty) \times {{\mathcal X}};{{\mathcal G}} ({\lambda},u)=0, |u|<1\}$.
The following lemma can be shown in a standard way.
We give a proof in Appendix~\ref{appendixB} for the reader's convenience.
\begin{lem}\label{lem:bt1}
Let $q \in {{\mathcal Y}}$ and define a linear operator $T:{{\mathcal X}} \to {{\mathcal Y}}$
by $T:=d^2/dx^2 +q$.
Then
\begin{equation}\label{ReKp}
R(T)=K(T)^\perp \cap {{\mathcal Y}}
=\{ \varphi \in {{\mathcal Y}}; \langle \varphi,\psi \rangle=0 \mbox{ for all } \psi \in K(T)\}.
\end{equation}
\end{lem}
Let us prove Proposition~\ref{prop:bt}.
\begin{proof}[Proof of Proposition~\ref{prop:bt}]
Set
\begin{equation*}
T_{\lambda}:=D_u \mathcal{G}({\lambda},u(\cdot,{\lambda}))=\frac{d^2}{dx^2}+{\lambda} f'(u(\cdot,{\lambda})).
\end{equation*}
From Lemma~\ref{lem:ndcD},
we see that the condition $D({\lambda}_0,{\beta}_1({\lambda}_0),{\beta}_2({\lambda}_0)) \neq 0$
gives $K(T_{{\lambda}_0})=\{0\}$.
This with Lemma~\ref{lem:bt1} shows that
the linear operator $T_{{\lambda}_0}:{{\mathcal X}} \to {{\mathcal Y}}$
is an isomorphism if $D({\lambda}_0,{\beta}_1({\lambda}_0),{\beta}_2({\lambda}_0)) \neq 0$.
Hence the assertion (i) follows from the implicit function theorem.
We prove (ii) and (iii).
In what follows, let \eqref{assmp:bt} hold.
We put
\begin{equation*}
\mathcal{F}({\lambda},w):=\mathcal{G}({\lambda},u(\cdot,{\lambda})+w),
\quad ({\lambda},w) \in J \times {{\mathcal X}},
\end{equation*}
and check that the conditions of Theorem~\ref{thm:crbt} and Lemma~\ref{lem:devf} hold.
We infer that ${{\mathcal F}}:J \times {{\mathcal X}} \to {{\mathcal Y}}$ is continuously differentiable
and has continuous derivatives $D^2_{{\lambda} w} {{\mathcal F}}$ and $D_{ww}^2 {{\mathcal F}}$,
since ${\mathcal G}$ is of class $C^2$ and
the mapping $J \ni {\lambda} \mapsto u(\cdot,{\lambda}) \in {{\mathcal X}}$ is of class $C^1$.
Moreover, by assumption, we have ${{\mathcal F}}({\lambda},0)=0$ for ${\lambda} \in J$.
Since the condition $D({\lambda}_0,{\beta}_1 ({\lambda}_0),{\beta}_2 ({\lambda}_0))=0$ is assumed,
we see from Lemmas~\ref{lem:ndcD} and \ref{lem:ipmi} that
for some $\varphi_0 \in {{\mathcal X}} \setminus \{0\}$,
\begin{equation}\label{ktlz}
K(T_{{\lambda}_0})=\operatorname{span}\{ \varphi_0\}.
\end{equation}
For $j=1,2$, let $\phi_j=\phi_j(x,{\lambda})$ be given by \eqref{phjdef} with ${\beta}_j={\beta}_j({\lambda})$.
We put ${\alpha}_1:=\varphi_0(-1)$, ${\alpha}_2:=\varphi_0(1)$ and
\begin{equation*}
\varphi (x,{\lambda}):=\left\{
\begin{aligned}
&{\alpha}_1 \phi_1(x,{\lambda}) && (x \in [-1,0)), \\
&{\alpha}_2 \phi_2(x,{\lambda}) && (x \in (0,1]).
\end{aligned}
\right.
\end{equation*}
As shown in the proof of Lemma~\ref{lem:ndcD} (see \eqref{pjvpj}),
both ${\alpha}_1$ and ${\alpha}_2$ are nonzero and
the assumption $D({\lambda}_0,{\beta}_1 ({\lambda}_0),{\beta}_2 ({\lambda}_0))=0$ gives
\begin{equation}\label{pseph}
\varphi (\cdot,{\lambda}_0)=\varphi_0.
\end{equation}
By \eqref{PQphj}, we have
\begin{equation*}
-\frac{{\alpha}_1 {\alpha}_2 \sqrt{{\lambda}}}{G'({\beta}_1({\lambda})) G'({\beta}_2({\lambda}))} D({\lambda},{\beta}_1({\lambda}),{\beta}_2({\lambda}))
=\det
\begin{pmatrix}
(\varphi +a\varphi_x)|_{x=-0} & (\varphi -a\varphi_x)|_{x=+0} \\
\varphi_x|_{x=-0} & \varphi_x|_{x=+0}
\end{pmatrix}.
\end{equation*}
Differentiating this with respect to ${\lambda}$,
we find that
\begin{align}
&-\frac{{\alpha}_1 {\alpha}_2 \sqrt{{\lambda}_0}}{G'({\beta}_1({\lambda}_0)) G'({\beta}_2({\lambda}_0))}
\left. \frac{d}{d{\lambda}} D({\lambda},{\beta}_1({\lambda}),{\beta}_2({\lambda})) \right|_{{\lambda}={\lambda}_0}
\nonumber
\\
&=\left. \det
\begin{pmatrix}
(\varphi_{\lambda} +a\varphi_{x{\lambda}})|_{x=-0} & (\varphi_{\lambda} -a\varphi_{x{\lambda}})|_{x=+0} \\
(\varphi_0)_x|_{x=-0} & (\varphi_0)_x|_{x=+0}
\end{pmatrix}
\right|_{{\lambda}={\lambda}_0}
\nonumber
\\
&\quad +\left. \det
\begin{pmatrix}
\left. \left\{ \varphi_0 +a(\varphi_0)_x \right\} \right|_{x=-0}
& \left. \left\{ \varphi_0 -a(\varphi_0)_x \right\} \right|_{x=+0} \\
\varphi_{x{\lambda}} |_{x=-0} & \varphi_{x{\lambda}} |_{x=+0}
\end{pmatrix}
\right|_{{\lambda}={\lambda}_0},
\label{dDps}
\end{align}
where we have used the assumption $D({\lambda}_0,{\beta}_1 ({\lambda}_0),{\beta}_2 ({\lambda}_0))=0$
and \eqref{pseph}.
Since $\phi_j$ satisfies \eqref{duaeq},
we see that $\varphi_{xx}+{\lambda} f'(u(\cdot,{\lambda}))\varphi =0$ in $(-1,1) \setminus \{0\}$
and $\varphi_x=0$ at $x=\pm 1$.
Hence, using \eqref{Tsym}
for $\varphi=\varphi_0$, $\psi=\varphi(\cdot,{\lambda})$ and $q={\lambda} f'(u(\cdot,{\lambda}))$,
we have
\begin{align*}
\langle T_{\lambda} \varphi_0, \varphi \rangle
=&\big[ (\varphi_0)_x (\varphi +a\varphi_x)
-\{ \varphi_0 +a(\varphi_0)_x\} \varphi_x \big] \big|_{x=-0}
\\
&-\big[ (\varphi_0)_x (\varphi -a\varphi_x)
-\{ \varphi_0 -a(\varphi_0)_x\} \varphi_x \big] \big|_{x=+0}.
\end{align*}
We differentiate this equality with respect to ${\lambda}$.
By \eqref{ktlz}, the derivative of the left-hand side at ${\lambda}={\lambda}_0$ is computed as
\begin{equation*}
\left. \frac{d}{d{\lambda}} \langle T_{\lambda} \varphi_0, \varphi \rangle \right|_{{\lambda}={\lambda}_0}
=\left. \frac{d}{d{\lambda}} \langle T_{\lambda} \varphi_0, \varphi_0 \rangle \right|_{{\lambda}={\lambda}_0}
+\left. \langle T_{{\lambda}_0} \varphi_0, \varphi_{\lambda} \rangle \right|_{{\lambda}={\lambda}_0}
=\langle D_{{\lambda} w}^2 \mathcal{F}({\lambda}_0,0) \varphi_0,\varphi_0 \rangle.
\end{equation*}
Therefore
\begin{align*}
\langle D_{{\lambda} w}^2 \mathcal{F}({\lambda}_0,0) \varphi_0,\varphi_0 \rangle
=&\big[ (\varphi_0)_x (\varphi_{\lambda} +a\varphi_{x{\lambda}})
-\{ \varphi_0 +a(\varphi_0)_x\} \varphi_{x{\lambda}} \big] \big|_{x=-0,{\lambda}={\lambda}_0}
\\
&-\big[ (\varphi_0)_x (\varphi_{\lambda} -a\varphi_{x{\lambda}})
-\{ \varphi_0 -a(\varphi_0)_x\} \varphi_{x{\lambda}} \big] \big|_{x=+0,{\lambda}={\lambda}_0}.
\end{align*}
Notice that the right-hand side of this equality coincides with that of \eqref{dDps},
since $\varphi_0$ satisfies
\begin{equation*}
\{ \varphi_0 +a(\varphi_0)_x\} |_{x=-0}=\{ \varphi_0 -a(\varphi_0)_x\} |_{x=+0},
\quad
(\varphi_0)_x|_{x=-0}=(\varphi_0)_x|_{x=+0}.
\end{equation*}
Thus
\begin{equation}\label{rncd}
\langle D_{{\lambda} w}^2 \mathcal{F}({\lambda}_0,0) \varphi_0,\varphi_0 \rangle
=-\frac{{\alpha}_1 {\alpha}_2 \sqrt{{\lambda}_0}}{G'({\beta}_1({\lambda}_0)) G'({\beta}_2({\lambda}_0))}
\left. \frac{d}{d{\lambda}} D({\lambda},{\beta}_1({\lambda}),{\beta}_2({\lambda})) \right|_{{\lambda}={\lambda}_0}.
\end{equation}
From \eqref{rncd} and the second condition of \eqref{assmp:bt},
we see that $\langle D_{{\lambda} w}^2 \mathcal{F}({\lambda}_0,0) \varphi_0,\varphi_0 \rangle \neq 0$.
Combining this with Lemma~\ref{lem:bt1} and \eqref{ktlz} gives
\begin{equation*}
D_{{\lambda} w}^2 \mathcal{F}({\lambda}_0,0) \varphi_0
\notin \operatorname{span}\{ \varphi_0\}^{\perp} \cap {{\mathcal Y}}
=R(T_{{\lambda}_0})=R(D_w \mathcal{F}({\lambda}_0,0)).
\end{equation*}
Hence all the conditions of Theorem~\ref{thm:crbt} and Lemma~\ref{lem:devf} are satisfied.
Applying Theorem~\ref{thm:crbt},
we obtain a $C^1$ curve
$\{ ({\Lambda}(s),W(\cdot,s))\}_{s \in \tilde J} \subset J \times {{\mathcal X}}$ satisfying \eqref{lwpr}.
Then one can directly check that
$\tilde {{\mathcal C}}:=\{ ({\Lambda}(s),u(\cdot,{\Lambda}(s))+W(\cdot,s))\}_{s \in \tilde J}$
has the desired properties stated in (ii).
It remains to derive \eqref{evdf}.
By Lemma~\ref{lem:devf},
we have an eigenvalue $\mu({\lambda})$ of $D_w {{\mathcal F}}({\lambda},0)=T_{\lambda}$
which is of class $C^1$ and satisfies $\mu({\lambda}_0)=0$.
It is easily seen that $\mu({\lambda})$ is an eigenvalue of \eqref{llevp} for $u=u(\cdot,{\lambda})$.
Hence $\mu ({\lambda})$ coincides with $\mu_n (u(\cdot,{\lambda}))$,
since Lemma~\ref{lem:ipmi} shows that
each eigenvalue $\mu_m (u(\cdot,{\lambda}))$, $m \in {\mathbb N} \cup \{0\}$,
is isolated and continuous with respect to ${\lambda}$.
In particular, $\mu_n (u(\cdot,{\lambda}))$ is continuously differentiable
in a neighborhood of ${\lambda}_0$.
As noted in Remark~\ref{rem:dmu},
we can compute the derivative of $\mu ({\lambda})=\mu_n (u(\cdot,{\lambda}))$
by the formula \eqref{fode}.
Therefore we see from \eqref{rncd} and Lemma~\ref{lem:veef} that
\begin{align*}
{\operatorname{sgn}} \left( \left. \frac{d}{d{\lambda}} \mu_n (u(\cdot,{\lambda})) \right|_{{\lambda}={\lambda}_0} \right)
&={\operatorname{sgn}} \left( -\varphi_0 (-1)\varphi_0 (1)
\left. \frac{d}{d{\lambda}} D({\lambda},{\beta}_1({\lambda}),{\beta}_2({\lambda})) \right|_{{\lambda}={\lambda}_0} \right)
\\
&=(-1)^{n-1} {\operatorname{sgn}} \left(
\left. \frac{d}{d{\lambda}} D({\lambda},{\beta}_1({\lambda}),{\beta}_2({\lambda})) \right|_{{\lambda}={\lambda}_0} \right).
\end{align*}
We thus obtain \eqref{evdf}, and the proof is complete.
\end{proof}
\section{Primary branches}\label{sec:pb}
In this section, we examine primary branches of solutions of \eqref{leq}
bifurcating from the trivial branch ${{\mathcal L}}=\{ ({\lambda},0)\}_{{\lambda} \in (0,\infty)}$.
To set up notation, we introduce the function
\begin{equation*}
g({\beta},\phi) :=\left\{
\begin{aligned}
&\frac{G({\beta} \cos \phi)}{{\beta}}-a\sin \phi \int_0^{\phi}G'({\beta} \cos \tau)d\tau
&& \mbox{if } {\beta} \neq 0,
\\
&G'(0)\cos \phi -aG'(0) \phi \sin \phi
&& \mbox{if } {\beta}=0.
\end{aligned}
\right.
\end{equation*}
In Lemma~\ref{lem:sgez} below, we will show that
for $k \in {\mathbb N}$, there exists $\phi_k=\phi_k({\beta}) \in C^1(I)$ satisfying
$\phi_k({\beta}) \in ((k-1)\pi,(k-1/2)\pi)$ and $g({\beta},\phi_k({\beta}))=0$.
Then we put
\begin{gather*}
{\lambda}_k^o({\beta}):=\left( \int_0^{\phi_k ({\beta})} G'({\beta} \cos \tau) d\tau \right)^2,
\qquad
{\lambda}_k^e({\beta}):=\left( \int_0^{k\pi} G'({\beta} \cos \tau) d\tau \right)^2,
\\
u^o_k(x,{\beta}):=\Psi_{{\lambda}_k^o({\beta})} ({\beta},-{\beta})(x),
\qquad
u^e_k(x,{\beta}):=\Psi_{{\lambda}_k^e({\beta})} ({\beta},{\beta})(x),
\\
{{\mathcal C}}_k^o:=\{ ({\lambda}_k^o ({\beta}),u^o_k(\cdot,{\beta}))\}_{{\beta} \in I},
\qquad
{{\mathcal C}}_k^e:=\{ ({\lambda}_k^e ({\beta}),u^e_k(\cdot,{\beta}))\}_{{\beta} \in I}.
\end{gather*}
By the definitions of $g$ and $z_k$, we have
\begin{equation}\label{phpr1}
\phi_k (0)=z_k.
\end{equation}
This together with the fact that $G'(0)=1/\sqrt{f'(0)}$ (see \eqref{Gdao} below) gives
\begin{gather*}
{\lambda}_k^o(0)=(\phi_k (0) G'(0))^2={\lambda}_{2k-1},
\qquad
{\lambda}_k^e(0)=(k\pi G'(0))^2={\lambda}_{2k}.
\end{gather*}
Therefore ${{\mathcal C}}_k^o$ (resp. ${{\mathcal C}}_k^e$) is a $C^1$ curve in $(0,\infty) \times X$
which intersects ${\mathcal L}$ at $({\lambda}_k^o(0),u^o_k(\cdot,0))=({\lambda}_{2k-1},0)$
(resp. $({\lambda}_k^e(0),u^e_k(\cdot,0))=({\lambda}_{2k},0)$).
We define ${{\mathcal C}}_{k,+}^o$, ${{\mathcal C}}_{k,-}^o$, ${{\mathcal C}}_{k,+}^e$ and ${{\mathcal C}}_{k,-}^e$ by
\begin{gather*}
{{\mathcal C}}_{k,+}^o:=\{ ({\lambda}_k^o ({\beta}),u^o_k(\cdot,{\beta}))\}_{{\beta} \in (0,{\beta}_0)},
\qquad
{{\mathcal C}}_{k,-}^o:=\{ ({\lambda}_k^o ({\beta}),u^o_k(\cdot,{\beta}))\}_{{\beta} \in (-{\beta}_0,0)},
\\
{{\mathcal C}}_{k,+}^e:=\{ ({\lambda}_k^e ({\beta}),u^e_k(\cdot,{\beta}))\}_{{\beta} \in (0,{\beta}_0)},
\qquad
{{\mathcal C}}_{k,-}^e:=\{ ({\lambda}_k^e ({\beta}),u^e_k(\cdot,{\beta}))\}_{{\beta} \in (-{\beta}_0,0)}.
\end{gather*}
We prove the following proposition.
\begin{prop}\label{prop:pb}
The following hold.
\begin{enumerate}[(i)]
\item
There hold
\begin{gather}
{{\mathcal C}}_{k,-}^o=-{{\mathcal C}}_{k,+}^o,
\qquad
{{\mathcal C}}_{k,-}^e=-{{\mathcal C}}_{k,+}^e,
\label{cksy}
\\
\ \bigcup_{k=1}^\infty {{\mathcal C}}_{k,+}^o \cup {{\mathcal C}}_{k,-}^o =\mathcal{S}^o,
\qquad
\bigcup_{k=1}^\infty {{\mathcal C}}_{k,+}^e \cup {{\mathcal C}}_{k,-}^e =\mathcal{S}^e.
\label{cues}
\end{gather}
Furthermore, for every ${\lambda} \in (0,\infty)$, there exists a neighborhood
${\mathcal N}$ of $({\lambda},0)$ in $(0,\infty) \times X_0$ such that
\begin{equation}\label{lucl}
{{\mathcal S}} \cap {{\mathcal N}}=\left\{
\begin{aligned}
&({{\mathcal C}}_k^o \cup {{\mathcal L}}) \cap {{\mathcal N}} && \mbox{if } {\lambda}={\lambda}_{2k-1},
\\
&({{\mathcal C}}_k^e \cup {{\mathcal L}}) \cap {{\mathcal N}} && \mbox{if } {\lambda}={\lambda}_{2k},
\\
&{{\mathcal L}} \cap {{\mathcal N}} && \mbox{if } {\lambda} \neq {\lambda}_{2k-1}, {\lambda}_{2k}.
\end{aligned}
\right.
\end{equation}
\item
Assume \eqref{aasfw}. Then
\begin{equation}\label{sdlk}
{\operatorname{sgn}} ({\beta}) \frac{d{\lambda}_k^o}{d{\beta}} ({\beta})>0,
\quad
{\operatorname{sgn}} ({\beta}) \frac{d{\lambda}_k^e}{d{\beta}} ({\beta})>0
\quad \mbox{for } {\beta} \in I \setminus \{0\}.
\end{equation}
In particular, ${{\mathcal C}}_{k,+}^o$ and ${{\mathcal C}}_{k,-}^o$ (resp. ${{\mathcal C}}_{k,+}^e$ and ${{\mathcal C}}_{k,-}^e$)
are parametrized by ${\lambda} \in ({\lambda}_{2k-1},\infty)$ (resp. ${\lambda} \in ({\lambda}_{2k},\infty)$).
\end{enumerate}
\end{prop}
We begin by solving the equation $g({\beta},\phi)=0$.
\begin{lem}\label{lem:sgez}
For $k \in {\mathbb N}$ and ${\beta} \in I$,
there exists $\phi_k ({\beta}) \in ((k-1)\pi,(k-1/2)\pi)$ with the following properties:
\begin{gather}
\{ \phi \in [0,\infty); g({\beta},\phi)=0\} =\{ \phi_k({\beta})\}_{k=1}^\infty,
\label{phpr3}
\\
\phi_k (-{\beta})=\phi_k ({\beta}),
\label{phpr4}
\\
\lim_{{\beta} \to \pm {\beta}_0} \phi_k ({\beta})=(k-1)\pi.
\label{phpr2}
\end{gather}
Furthermore, $\phi_k \in C^1(I)$ and
\begin{equation}\label{dphk}
\frac{d\phi_k}{d{\beta}}({\beta}) =\frac{J({\beta},\phi_k({\beta}))}{I({\beta},\phi_k({\beta}))},
\end{equation}
where
\begin{gather*}
I({\beta},\phi):={\beta} \left\{ \left( \frac{G'({\beta} \cos \phi) {\beta} \sin \phi}{G({\beta} \cos \phi)}
+\frac{\cos \phi}{\sin \phi} \right) \int_0^{\phi}G'({\beta} \cos \tau)d\tau
+G'({\beta} \cos \phi) \right\},
\\
J({\beta},\phi):=\left( \frac{G'({\beta} \cos \phi) {\beta} \cos \phi}{G({\beta} \cos \phi)} -1 \right)
\int_0^\phi G'({\beta} \cos \tau)d\tau
-{\beta} \int_0^\phi G''({\beta} \cos \tau) \cos \tau d\tau.
\end{gather*}
\end{lem}
\begin{proof}
For the moment suppose that $k$ is odd.
We note that $g \in C^1(I \times {\mathbb R})$,
since $g$ is written as
\begin{equation*}
g({\beta},\phi)=\cos \phi \int_0^1 G'(t{\beta} \cos \phi) dt
-a\sin \phi \int_0^{\phi}G'({\beta} \cos \tau)d\tau.
\end{equation*}
From the fact that $G'$ is positive, we have
\begin{equation*}
g({\beta},(k-1)\pi)>0,
\qquad
g({\beta},\phi)<0 \quad \mbox{for } \phi \in \left[ \left( k-\frac{1}{2}\right) \pi,k\pi \right].
\end{equation*}
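Indeed, writing $G({\beta} \cos \phi)/{\beta} =\cos \phi \int_0^1 G'(t{\beta} \cos \phi) dt$
(which also covers ${\beta}=0$) and noting that $\cos ((k-1)\pi)=1$ and $\sin ((k-1)\pi)=0$
for odd $k$, we have
\begin{equation*}
g({\beta},(k-1)\pi)=\int_0^1 G'(t{\beta}) dt>0,
\end{equation*}
while for $\phi \in [(k-1/2)\pi,k\pi]$ we have $\cos \phi \le 0$ and $\sin \phi \ge 0$,
so that both terms of $g({\beta},\phi)$ are nonpositive;
they cannot vanish simultaneously, since $\sin \phi>0$ for $\phi<k\pi$ and $\cos (k\pi)=-1$.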
Furthermore,
\begin{align*}
g_\phi ({\beta},\phi) &=-(1+a)G'({\beta} \cos \phi) \sin \phi
-a\cos \phi \int_0^{\phi}G'({\beta} \cos \tau)d\tau
\\
&<0 \quad \mbox{for } \phi \in \left( (k-1) \pi,\left( k-\frac{1}{2}\right)\pi \right).
\end{align*}
Therefore there exists $\phi_k({\beta}) \in ((k-1)\pi,(k-1/2) \pi)$
such that
\begin{equation*}
\{ \phi \in [(k-1)\pi,k\pi]; g({\beta},\phi)=0\}=\{ \phi_k({\beta})\},
\end{equation*}
and the implicit function theorem shows that $\phi_k \in C^1(I)$.
The case where $k$ is even can be dealt with in the same way.
We have thus shown the existence of $\phi_k({\beta})$ and \eqref{phpr3}.
The fact that $G$ is odd yields $g(-{\beta},\phi)=g({\beta},\phi)$,
and hence $g(-{\beta},\phi_k({\beta}))=g({\beta},\phi_k({\beta}))=0$.
From \eqref{phpr3}, we obtain \eqref{phpr4}.
We prove \eqref{phpr2}.
To obtain a contradiction,
suppose that $\phi_k ({\beta})$ does not converge to $(k-1)\pi$
as ${\beta} \to {\beta}_0$ or ${\beta} \to -{\beta}_0$.
Then we can take $\{ {\beta}_j\}_{j=1}^\infty \subset I$
and ${\delta} \in (0,\pi/2)$ such that
\begin{equation*}
|{\beta}_j|<|{\beta}_{j+1}|,
\qquad
|{\beta}_j| \to {\beta}_0 \quad (j \to \infty),
\qquad
\phi_k({\beta}_j) \ge (k-1)\pi +{\delta}.
\end{equation*}
By \eqref{Gpas} and the monotone convergence theorem,
\begin{align*}
\int_0^{\phi_k({\beta}_j)}G'({\beta}_j \cos \tau)d\tau
&\ge c\int_0^{(k-1)\pi +{\delta}} \frac{d\tau}{\sqrt{{\beta}_0-|{\beta}_j| \cos \tau}}
\\
&\to c\int_0^{(k-1)\pi +{\delta}} \frac{d\tau}{\sqrt{{\beta}_0-{\beta}_0 \cos \tau}}
=\infty \quad (j \to \infty).
\end{align*}
From this and the fact that $|\sin \phi_k({\beta}_j)| \ge |\sin {\delta}|$,
we have $|g({\beta}_j,\phi_k({\beta}_j))| \to \infty$ as $j \to \infty$.
This contradicts the equality $g({\beta}_j,\phi_k({\beta}_j))=0$,
and therefore \eqref{phpr2} holds.
It remains to prove \eqref{dphk}.
Differentiating $g({\beta},\phi_k({\beta}))=0$ yields
\begin{align}
\frac{d\phi_k}{d{\beta}}({\beta}) &=-\frac{g_{\beta} ({\beta},\phi_k({\beta}))}{g_\phi ({\beta},\phi_k({\beta}))}
\nonumber
\\
&=\left. \frac{G'({\beta} \cos \phi) \cos \phi -G({\beta} \cos \phi)/{\beta}
-a{\beta} \sin \phi \int_0^{\phi}G''({\beta} \cos \tau) \cos \tau d\tau}{
(1+a)G'({\beta} \cos \phi) {\beta} \sin \phi +a{\beta} \cos \phi \int_0^{\phi}G'({\beta} \cos \tau)d\tau}
\right|_{\phi=\phi_k({\beta})}.
\label{dphk0}
\end{align}
We note that
\begin{equation}\label{ge0e}
\frac{1}{a}=\left. \frac{{\beta} \sin \phi}{G({\beta} \cos \phi)}
\int_0^{\phi}G'({\beta} \cos \tau)d\tau \right|_{\phi=\phi_k({\beta})},
\end{equation}
which follows from $g({\beta},\phi_k({\beta}))=0$.
Plugging this into \eqref{dphk0},
we obtain \eqref{dphk}.
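For the reader's convenience, we record the intermediate identities:
dividing the numerator and the denominator of the right-hand side of \eqref{dphk0}
by $a$ and substituting \eqref{ge0e} yields, at $\phi=\phi_k({\beta})$,
\begin{gather*}
\frac{1}{a}\left( G'({\beta} \cos \phi) \cos \phi -\frac{G({\beta} \cos \phi)}{{\beta}} \right)
-{\beta} \sin \phi \int_0^{\phi}G''({\beta} \cos \tau) \cos \tau d\tau
=J({\beta},\phi) \sin \phi,
\\
\frac{1}{a} G'({\beta} \cos \phi) {\beta} \sin \phi
+G'({\beta} \cos \phi) {\beta} \sin \phi
+{\beta} \cos \phi \int_0^{\phi}G'({\beta} \cos \tau)d\tau
=I({\beta},\phi) \sin \phi.
\end{gather*}
Since $\phi_k({\beta}) \in ((k-1)\pi,(k-1/2)\pi)$,
the common factor $\sin \phi_k({\beta})$ is nonzero and cancels.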
Therefore the lemma follows.
\end{proof}
We prove a further property of $\phi_k$ to be used later.
\begin{lem}\label{lem:phli}
For each $k \in {\mathbb N}$,
\begin{equation*}
\liminf_{{\beta} \to \pm {\beta}_0} \frac{|\sin \phi_k ({\beta})|}{\sqrt{{\beta}_0-|{\beta}|}}>0.
\end{equation*}
\end{lem}
\begin{proof}
We use \eqref{Gpas} to obtain
\begin{equation*}
G'({\beta} \cos \tau) \le \frac{C}{\sqrt{{\beta}_0-|{\beta} \cos \tau|}}
\le \frac{C}{\sqrt{{\beta}_0-|{\beta}|}}.
\end{equation*}
Hence, by \eqref{ge0e}, we have
\begin{equation*}
\frac{1}{a} \le \frac{C|{\beta}| \phi_k({\beta})}{|G({\beta} \cos \phi_k({\beta}))|}
\cdot \frac{|\sin \phi_k ({\beta})|}{\sqrt{{\beta}_0-|{\beta}|}}.
\end{equation*}
The desired inequality is verified by combining this with \eqref{phpr2}.
\end{proof}
It is known (see \cite{O61,S90,SW81}) that the condition \eqref{aasfw} implies the inequality
\begin{equation*}
{\operatorname{sgn}}({\beta}) \int_0^{l\pi} G''({\beta} \cos \tau) \cos \tau d\tau>0
\quad ({\beta} \in I \setminus \{0\}, l \in {\mathbb N}),
\end{equation*}
which is equivalent to
\begin{equation*}
{\operatorname{sgn}}({\alpha}) \frac{d}{d{\alpha}} \int_0^{\alpha} \frac{ds}{\sqrt{F({\alpha})-F(s)}}>0
\quad ({\alpha} \in (-1,1) \setminus \{0\}).
\end{equation*}
In order to show \eqref{sdlk},
we generalize this inequality.
\begin{lem}\label{lem:gdpe}
Let \eqref{aasfw} hold.
Then for all ${\beta} \in I \setminus \{0\}$ and $\phi \in (0,\infty)$,
\begin{align*}
&{\beta} \int_0^\phi G''({\beta} \cos \tau) \cos \tau d\tau
\\
&>\left\{
\begin{aligned}
&-\left( G'({\beta} \cos \phi) \cos \phi
-\frac{G'({\beta})}{G({\beta})} G({\beta} \cos \phi) \right) \frac{1}{\sin \phi}
&& \mbox{if } \sin \phi \neq 0,
\\
&0 && \mbox{if } \sin \phi=0.
\end{aligned}
\right.
\end{align*}
\end{lem}
\begin{proof}
We first consider the case $\sin \phi \neq 0$.
Put
\begin{equation*}
\tilde H(v):=\left\{
\begin{aligned}
&\frac{G(v)}{v} && (v \neq 0),
\\
&G'(0) && (v=0).
\end{aligned}
\right.
\end{equation*}
From Lemma~\ref{lem:Gpro},
one can check that $\tilde H \in C^1(I)$ and $\tilde H'(v)=G'(v)H(v)/v^3$.
Then
\begin{align}
{\beta} \int_0^\phi G''({\beta} \cos \tau) \cos \tau d\tau
&={\beta} \int_0^\phi \left( G''({\beta} \cos \tau)
-\frac{G'({\beta}) {\beta}}{G({\beta})} \tilde H'({\beta} \cos \tau) \right) \cos \tau d\tau
\nonumber
\\
&\quad +\frac{G'({\beta}){\beta}^2}{G({\beta})} \int_0^\phi \tilde H'({\beta} \cos \tau) \cos \tau d\tau
\nonumber
\\
&=-\int_0^\phi \frac{\partial}{\partial \tau} \left( G'({\beta} \cos \tau)
-\frac{G'({\beta}) {\beta}}{G({\beta})} \tilde H({\beta} \cos \tau) \right)
\cdot \frac{\cos \tau}{\sin \tau} d\tau
\nonumber
\\
&\quad +\frac{G'({\beta})}{G({\beta}){\beta}}
\int_0^\phi \frac{H({\beta} \cos \tau)G'({\beta} \cos \tau)}{\cos^2 \tau} d\tau.
\label{gdpe1}
\end{align}
We apply integration by parts to obtain
\begin{align}
&\int_0^\phi \frac{\partial}{\partial \tau} \left( G'({\beta} \cos \tau)
-\frac{G'({\beta}) {\beta}}{G({\beta})} \tilde H({\beta} \cos \tau) \right)
\cdot \frac{\cos \tau}{\sin \tau} d\tau
\nonumber
\\
&=\left( G'({\beta} \cos \phi)
-\frac{G'({\beta}) {\beta}}{G({\beta})} \tilde H({\beta} \cos \phi) \right) \frac{\cos \phi}{\sin \phi}
\nonumber
\\
&\quad +\int_0^\phi \left( G'({\beta} \cos \tau)
-\frac{G'({\beta}) {\beta}}{G({\beta})} \tilde H({\beta} \cos \tau) \right) \frac{1}{\sin^2 \tau} d\tau.
\label{gdpe2}
\end{align}
This computation is valid, since
\begin{equation}\label{ghas}
G'({\beta} \cos \tau)
-\frac{G'({\beta}) {\beta}}{G({\beta})} \tilde H({\beta} \cos \tau) =O(1-|\cos \tau|)=O(\sin^2 \tau)
\quad
\mbox{as } \tau \to l\pi, \ l \in {\mathbb Z}.
\end{equation}
Note that the integrand of the second term on the right of \eqref{gdpe2}
is written as
\begin{align*}
G'({\beta} \cos \tau) -\frac{G'({\beta}) {\beta}}{G({\beta})} \tilde H({\beta} \cos \tau)
&=\frac{G'({\beta})G'({\beta} \cos \tau)}{G({\beta}) {\beta}}
\left( \frac{G({\beta}) {\beta}}{G'({\beta})} -
\frac{G({\beta} \cos \tau) {\beta}}{G'({\beta} \cos \tau) \cos \tau} \right)
\\
&=\frac{G'({\beta})G'({\beta} \cos \tau)}{G({\beta}) {\beta}}
\left( \frac{H({\beta} \cos \tau)}{\cos^2 \tau} -H({\beta}) \right).
\end{align*}
Therefore
\begin{align*}
&\int_0^\phi \frac{\partial}{\partial \tau} \left( G'({\beta} \cos \tau)
-\frac{G'({\beta}) {\beta}}{G({\beta})} \tilde H({\beta} \cos \tau) \right)
\cdot \frac{\cos \tau}{\sin \tau} d\tau
\\
&=\left( G'({\beta} \cos \phi) \cos \phi
-\frac{G'({\beta})}{G({\beta})} G({\beta} \cos \phi) \right) \frac{1}{\sin \phi}
\\
&\quad +\frac{G'({\beta})}{G({\beta}) {\beta}}
\int_0^\phi \left( \frac{H({\beta} \cos \tau)}{\cos^2 \tau} -H({\beta}) \right)
\frac{G'({\beta} \cos \tau)}{\sin^2 \tau} d\tau.
\end{align*}
Substituting this into \eqref{gdpe1},
we find that
\begin{align*}
{\beta} \int_0^\phi G''({\beta} \cos \tau) \cos \tau d\tau
&=-\left( G'({\beta} \cos \phi) \cos \phi
-\frac{G'({\beta})}{G({\beta})} G({\beta} \cos \phi) \right) \frac{1}{\sin \phi}
\\
&\quad +\frac{G'({\beta})}{G({\beta}){\beta}} \int_0^\phi
\frac{G'({\beta} \cos \tau)(H({\beta}) -H({\beta} \cos \tau))}{\sin^2 \tau} d\tau.
\end{align*}
Since the assumption \eqref{aasfw} implies \eqref{aasGw},
we deduce that the second term on the right of this equality is positive.
Thus we obtain the desired inequality.
In the case $\sin \phi =0$,
we see from \eqref{ghas} that the first term on the right of \eqref{gdpe2} vanishes,
and hence the same argument works.
\end{proof}
To obtain odd and even solutions of \eqref{leq},
we find solutions of \eqref{smteq} satisfying either ${\beta}_1=-{\beta}_2$ or ${\beta}_1={\beta}_2$.
\begin{lem}\label{lem:PQzs}
There hold
\begin{gather*}
\{ ({\lambda},{\beta}) \in (0,\infty) \times (I \setminus \{0\}); ({\lambda},{\beta},-{\beta}) \in {{\mathcal T}} \}
=\bigcup_{k=1}^\infty \{ ({\lambda}_k^o({\beta}),{\beta}) \}_{{\beta} \in I \setminus \{0\}},
\\
\{ ({\lambda},{\beta}) \in (0,\infty) \times (I \setminus \{0\}); ({\lambda},{\beta},{\beta}) \in {{\mathcal T}} \}
=\bigcup_{k=1}^\infty \{ ({\lambda}_k^e({\beta}),{\beta}) \}_{{\beta} \in I \setminus \{0\}}.
\end{gather*}
\end{lem}
\begin{proof}
We see from \eqref{PQsy} that
$({\lambda},{\beta},-{\beta}) \in {{\mathcal T}}$ if and only if $P({\lambda},{\beta})=0$,
and that $({\lambda},{\beta},{\beta}) \in {{\mathcal T}}$ if and only if $Q({\lambda},{\beta})=0$.
Hence
\begin{gather*}
\{ ({\lambda},{\beta}); ({\lambda},{\beta},-{\beta}) \in {{\mathcal T}}, {\beta} \neq 0\}
=\{ ({\lambda},{\beta}); P({\lambda},{\beta})=0, {\beta} \neq 0\},
\\
\{ ({\lambda},{\beta}); ({\lambda},{\beta},{\beta}) \in {{\mathcal T}}, {\beta} \neq 0\}
=\{ ({\lambda},{\beta}); Q({\lambda},{\beta})=0, {\beta} \neq 0\}.
\end{gather*}
We fix ${\beta} \in I \setminus \{0\}$.
By definition, we have $P({\lambda},{\beta})={\beta} g({\beta},\theta({\lambda},{\beta}))$.
This together with \eqref{phpr3} and \eqref{tblr} yields
\begin{equation*}
\{ {\lambda} ; P({\lambda},{\beta})=0\}
=\{ {\lambda} ; g({\beta},\theta({\lambda},{\beta}))=0\}
=\bigcup_{k=1}^\infty \{ {\lambda} ; \theta ({\lambda},{\beta})=\phi_k ({\beta})\}
=\bigcup_{k=1}^\infty \{ {\lambda}_k^o({\beta})\}.
\end{equation*}
Moreover, by \eqref{tblr},
\begin{equation*}
\{ {\lambda} ; Q({\lambda},{\beta})=0\}
=\{ {\lambda} ; \sin \theta ({\lambda},{\beta})=0\}
=\bigcup_{k=1}^\infty \{ {\lambda} ; \theta ({\lambda},{\beta})=k\pi \}
=\bigcup_{k=1}^\infty \{ {\lambda}_k^e({\beta})\}.
\end{equation*}
Therefore we obtain the desired conclusion.
\end{proof}
Put
\begin{equation*}
\tilde {{\mathcal S}}_{\lambda}^o:=\{ u \in {{\mathcal S}}_{\lambda} \setminus \{0\}; u(-1)=-u(1) \},
\qquad
\tilde {{\mathcal S}}_{\lambda}^e:=\{ u \in {{\mathcal S}}_{\lambda} \setminus \{0\}; u(-1)=u(1) \}.
\end{equation*}
\begin{lem}\label{lem:sose}
There hold ${{\mathcal S}}_{\lambda}^o=\tilde {{\mathcal S}}_{\lambda}^o$ and ${{\mathcal S}}_{\lambda}^e=\tilde {{\mathcal S}}_{\lambda}^e$.
\end{lem}
\begin{proof}
It is clear that ${{\mathcal S}}_{\lambda}^o \subset \tilde {{\mathcal S}}_{\lambda}^o$.
To show $\tilde {{\mathcal S}}_{\lambda}^o \subset {{\mathcal S}}_{\lambda}^o$,
we let $u \in \tilde {{\mathcal S}}_{\lambda}^o$.
Then $u_1:=u|_{[-1,0)}$ and $u_2:=u|_{(0,1]}$ satisfy \eqref{u12ivp}
for ${\beta}_1=G^{-1}(u(-1))$ and ${\beta}_2=G^{-1}(u(1))$.
Since the assumption $u \in \tilde {{\mathcal S}}_{\lambda}^o$ yields ${\beta}_1=-{\beta}_2$,
we see that $u_1(x)$ and $-u_2(-x)$ satisfy the same initial value problem.
Hence $u_1(x)=-u_2(-x)$, which gives $u \in {{\mathcal S}}_{\lambda}^o$.
We have thus proved ${{\mathcal S}}_{\lambda}^o=\tilde {{\mathcal S}}_{\lambda}^o$.
The equality ${{\mathcal S}}_{\lambda}^e=\tilde {{\mathcal S}}_{\lambda}^e$ can be shown in the same way.
\end{proof}
We are now in a position to prove Proposition~\ref{prop:pb}.
\begin{proof}[Proof of Proposition~\ref{prop:pb}]
By \eqref{Pssy}, \eqref{phpr4} and the fact that $G$ is odd,
we have
\begin{equation}\label{lksy}
({\lambda}_k^o (-{\beta}),u^o_k(\cdot,-{\beta}))=({\lambda}_k^o ({\beta}),-u^o_k(\cdot,{\beta})),
\quad
({\lambda}_k^e (-{\beta}),u^e_k(\cdot,-{\beta}))=({\lambda}_k^e ({\beta}),-u^e_k(\cdot,{\beta})).
\end{equation}
Hence \eqref{cksy} follows.
Lemmas~\ref{ppoo} and \ref{lem:sose} yield
\begin{gather*}
{{\mathcal S}}_{\lambda}^o=\tilde {{\mathcal S}}_{\lambda}^o=
\{ \Psi_{\lambda} ({\beta},-{\beta}); ({\beta},-{\beta}) \in \mathcal{T}_{\lambda}, {\beta} \neq 0 \},
\\
{{\mathcal S}}_{\lambda}^e=\tilde {{\mathcal S}}_{\lambda}^e=
\{ \Psi_{\lambda} ({\beta},{\beta}); ({\beta},{\beta}) \in \mathcal{T}_{\lambda}, {\beta} \neq 0 \}.
\end{gather*}
Combining these with Lemma~\ref{lem:PQzs} shows that
\begin{align*}
{{\mathcal S}}^o=\{ ({\lambda},\Psi_{\lambda} ({\beta},-{\beta})); ({\lambda},{\beta},-{\beta}) \in \mathcal{T}, {\beta} \neq 0\}
=\bigcup_{k=1}^\infty \{ ({\lambda}_k^o ({\beta}),u^o_k(\cdot,{\beta}))\}_{{\beta} \in I \setminus \{0\}},
\\
{{\mathcal S}}^e=\{ ({\lambda},\Psi_{\lambda} ({\beta},{\beta})); ({\lambda},{\beta},{\beta}) \in \mathcal{T}, {\beta} \neq 0\}
=\bigcup_{k=1}^\infty \{ ({\lambda}_k^e ({\beta}),u^e_k(\cdot,{\beta}))\}_{{\beta} \in I \setminus \{0\}}.
\end{align*}
Therefore we obtain \eqref{cues}.
By \eqref{tblr} and the fact that $G'(0)=1/\sqrt{f'(0)}$ (see \eqref{Gdao} below),
we deduce that $\theta({\lambda},0)=\sqrt{{\lambda}}/G'(0)=\sqrt{f'(0){\lambda}}$.
Hence it follows from \eqref{pqdb} that
\begin{align*}
D({\lambda},0,0)&=2P_{\beta}({\lambda},0)Q_{\beta} ({\lambda},0)
\\
&=-\frac{2}{\sqrt{f'(0)}} \left( \cos \sqrt{f'(0){\lambda}}
-a\sqrt{f'(0){\lambda}} \sin \sqrt{f'(0){\lambda}} \right) \sin \sqrt{f'(0){\lambda}}.
\end{align*}
This shows that $D({\lambda},0,0)=0$ if and only if
$\sqrt{f'(0){\lambda}}=z_k$ or $\sqrt{f'(0){\lambda}}=k\pi$ for some $k \in {\mathbb N}$.
Moreover,
\begin{gather*}
\left. \frac{d}{d{\lambda}} D({\lambda},0,0) \right|_{{\lambda}=z_k^2/f'(0)}
=\frac{\sqrt{f'(0)}}{z_k} \{ (1+a)\sin z_k +a z_k \cos z_k\} \sin z_k>0,
\\
\left. \frac{d}{d{\lambda}} D({\lambda},0,0) \right|_{{\lambda}=(k\pi)^2/f'(0)}
=-\frac{\sqrt{f'(0)}}{k\pi} <0.
\end{gather*}
Thus, using (i) and (ii) of Proposition~\ref{prop:bt} for ${{\mathcal C}}={{\mathcal L}}$,
we conclude that \eqref{lucl} holds.
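The two derivative formulas above can be checked symbolically. The following sketch (a hedged verification, not part of the proof) writes $D({\lambda},0,0)$ as a function of $s=\sqrt{f'(0){\lambda}}$ and assumes, as the factorization above suggests, that $z_k$ satisfies $\cos z_k = a z_k \sin z_k$; the symbols \texttt{fp0} and \texttt{z} stand for $f'(0)$ and $z_k$.

```python
import sympy as sp

a, fp0, s, z = sp.symbols('a fp0 s z', positive=True)
k = sp.symbols('k', integer=True, positive=True)

# D(lam, 0, 0) written in the variable s = sqrt(f'(0)*lam)
D = -2/sp.sqrt(fp0)*(sp.cos(s) - a*s*sp.sin(s))*sp.sin(s)

# chain rule: lam = s**2/f'(0), hence dD/dlam = (f'(0)/(2*s)) * dD/ds
dD_dlam = fp0/(2*s)*sp.diff(D, s)

# at s = z_k, assumed to satisfy cos z_k = a*z_k*sin z_k
at_zk = dD_dlam.subs(s, z).subs(sp.cos(z), a*z*sp.sin(z))
claimed_zk = (sp.sqrt(fp0)/z)*((1 + a)*sp.sin(z) + a*z*sp.cos(z))*sp.sin(z)
assert sp.simplify(at_zk - claimed_zk.subs(sp.cos(z), a*z*sp.sin(z))) == 0
# positivity: the value reduces to (sqrt(fp0)/z)*((1+a) + a**2 z**2)*sin(z)**2 > 0
assert sp.simplify(at_zk - (sp.sqrt(fp0)/z)*((1 + a) + a**2*z**2)*sp.sin(z)**2) == 0

# at s = k*pi the derivative equals -sqrt(f'(0))/(k*pi)
at_kpi = dD_dlam.subs(s, k*sp.pi)
assert sp.simplify(at_kpi + sp.sqrt(fp0)/(k*sp.pi)) == 0
```

The second assertion also explains the strict inequality in the first formula: after eliminating $\cos z_k$, the value is a positive multiple of $\sin^2 z_k$.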
It remains to prove (ii).
Assume \eqref{aasfw} and let ${\beta} \in I \setminus \{0\}$.
The estimate for ${\lambda}_k^e({\beta})$ is directly derived from Lemma~\ref{lem:gdpe}.
Indeed,
\begin{equation*}
{\beta} \frac{d}{d{\beta}} \sqrt{{\lambda}_k^e({\beta})}
={\beta} \int_0^{k\pi} G''({\beta} \cos \tau)\cos \tau d\tau>0.
\end{equation*}
Let us consider the estimate for ${\lambda}_k^o({\beta})$.
We use \eqref{dphk} to obtain
\begin{align*}
{\beta} \frac{d}{d{\beta}} \sqrt{{\lambda}_k^o({\beta})}
&={\beta} \left( \int_0^{\phi_k({\beta})} G''({\beta} \cos \tau) \cos \tau d\tau
+\frac{d\phi_k}{d{\beta}}({\beta}) G'({\beta} \cos \phi_k ({\beta})) \right) \\
&=\frac{{\beta}}{I({\beta},\phi)} \left. \left( I({\beta},\phi) \int_0^{\phi} G''({\beta} \cos \tau) \cos \tau d\tau
+J({\beta},\phi) G'({\beta}\cos \phi) \right) \right|_{\phi=\phi_k({\beta})}.
\end{align*}
Let $\phi \in ((k-1)\pi,(k-1/2)\pi)$.
Then
\begin{equation}\label{sgni}
\frac{I({\beta},\phi)}{{\beta}} =\left( \frac{G'({\beta} \cos \phi) {\beta} \sin \phi}{G({\beta} \cos \phi)}
+\frac{\cos \phi}{\sin \phi} \right) \int_0^{\phi}G'({\beta} \cos \tau)d\tau +G'({\beta} \cos \phi) >0.
\end{equation}
By a direct computation, we have
\begin{align}
&\left. \left( I({\beta},\phi) \int_0^{\phi} G''({\beta} \cos \tau) \cos \tau d\tau
+J({\beta},\phi) G'({\beta}\cos \phi) \right) \right/ \int_0^{\phi}G'({\beta} \cos \tau)d\tau
\nonumber
\\
&=\left( \frac{G'({\beta} \cos \phi) {\beta} \sin \phi}{G({\beta} \cos \phi)}
+\frac{\cos \phi}{\sin \phi} \right)
{\beta} \int_0^{\phi}G''({\beta} \cos \tau) \cos \tau d\tau
\nonumber
\\
&\quad +\left( \frac{G'({\beta} \cos \phi) {\beta} \cos \phi}{G({\beta} \cos \phi)} -1 \right)
G'({\beta} \cos \phi).
\label{igjg0}
\end{align}
Applying Lemma~\ref{lem:gdpe} shows that the right-hand side of this equality is bounded below by
\begin{align*}
&\left( \frac{G'({\beta} \cos \phi) {\beta} \sin \phi}{G({\beta} \cos \phi)}
+\frac{\cos \phi}{\sin \phi} \right) \cdot \left\{ -\left( G'({\beta} \cos \phi) \cos \phi
-\frac{G'({\beta})}{G({\beta})} G({\beta} \cos \phi) \right) \frac{1}{\sin \phi} \right\}
\\
&\quad +\left( \frac{G'({\beta} \cos \phi) {\beta} \cos \phi}{G({\beta} \cos \phi)} -1 \right)
G'({\beta} \cos \phi)
\\
&=\left( {\beta}^2 \sin^2 \phi +\frac{G({\beta} \cos \phi) {\beta} \cos \phi}{G'({\beta} \cos \phi)}
-\frac{G({\beta}) {\beta}}{G'({\beta})} \right) \frac{G'({\beta}) G'({\beta} \cos \phi)}{G({\beta}) {\beta} \sin^2 \phi}
\\
&=\frac{G'({\beta}) G'({\beta} \cos \phi)(H({\beta}) -H({\beta} \cos \phi))}{G({\beta}) {\beta} \sin^2 \phi}.
\end{align*}
This is positive, since the assumption \eqref{aasfw} implies \eqref{aasGw}.
Hence it follows that
\begin{equation}\label{igjg}
I({\beta},\phi) \int_0^{\phi} G''({\beta} \cos \tau) \cos \tau d\tau
+J({\beta},\phi) G'({\beta}\cos \phi) >0.
\end{equation}
Combining \eqref{sgni} and \eqref{igjg},
we obtain
\begin{equation*}
{\beta} \frac{d}{d{\beta}} \sqrt{{\lambda}_k^o({\beta})}>0.
\end{equation*}
Thus (ii) is verified, and the proof is complete.
\end{proof}
\section{Secondary bifurcations}\label{sec:sb}
In this section, we consider bifurcation points on ${{\mathcal S}}^o$ and ${{\mathcal S}}^e$.
\subsection{Nonexistence of bifurcation points on ${{\mathcal S}}^e$}
The following lemma shows that no bifurcation point exists on ${{\mathcal S}}^e$.
\begin{lem}\label{lem:ndse}
Assume \eqref{aasfw}.
Then for every $({\lambda},u) \in {{\mathcal S}}^e$,
there is a neighborhood ${\mathcal N}$ of $({\lambda},u)$ in $(0,\infty) \times X_0$
such that ${{\mathcal S}} \cap {{\mathcal N}} ={{\mathcal S}}^e \cap {{\mathcal N}}$.
\end{lem}
\begin{proof}
By the assumption \eqref{aasfw} and Proposition~\ref{prop:pb},
we see that ${{\mathcal S}}^e$ is the union of $C^1$ curves parametrized by ${\lambda}$.
Therefore, according to (i) of Proposition~\ref{prop:bt},
we only need to show that $D({\lambda}_k^e ({\beta}),{\beta},{\beta}) \neq 0$
for ${\beta} \in I \setminus \{0\}$.
Using \eqref{pqdb} and the fact that $\theta ({\lambda}_k^e ({\beta}),{\beta})=k\pi$ gives
\begin{align*}
D({\lambda}_k^e ({\beta}),{\beta},{\beta}) &=2P_{\beta} ({\lambda}_k^e ({\beta}),{\beta}) Q_{\beta} ({\lambda}_k^e ({\beta}),{\beta})
\\
&=2\left\{ G'({\beta}) +a\frac{{\beta}}{G'({\beta})}
\left( \int_0^{k\pi} G'({\beta} \cos \tau) d\tau \right)
\left( \int_0^{k\pi} G''({\beta} \cos \tau) \cos \tau d\tau \right) \right\}
\\
&\quad \times \frac{{\beta}}{G'({\beta})} \int_0^{k\pi} G''({\beta} \cos \tau) \cos \tau d\tau.
\end{align*}
Lemma~\ref{lem:gdpe} shows that this is positive,
and hence the lemma follows.
\end{proof}
\subsection{The number of bifurcation points on ${{\mathcal S}}^o$}
We show that ${{\mathcal C}}_{k,+}^o$ and ${{\mathcal C}}_{k,-}^o$ each have a unique bifurcation point,
provided that \eqref{aasfs} holds.
\begin{prop}\label{prop:bpco}
Assume \eqref{aasfs}.
Then for $k \in {\mathbb N}$,
there exists a $C^1$ curve $\tilde {{\mathcal C}}_{k,+}^o \subset {{\mathcal S}}$ such that
$\tilde {{\mathcal C}}_{k,+}^o$ intersects ${{\mathcal C}}_{k,+}^o$ transversally
at some point $({\lambda}^*_{k,+},u^*_{k,+})$.
Moreover, for each $({\lambda},u) \in {{\mathcal C}}_{k,+}^o$,
there is a neighborhood ${{\mathcal N}}$ of $({\lambda},u)$ such that
\begin{equation*}
{{\mathcal S}} \cap {\mathcal N} =\left\{
\begin{aligned}
&{{\mathcal C}}_{k,+}^o \cap {{\mathcal N}} && \mbox{if } ({\lambda},u) \neq ({\lambda}^*_{k,+},u^*_{k,+}),
\\
&({{\mathcal C}}_{k,+}^o \cup \tilde {{\mathcal C}}_{k,+}^o) \cap {{\mathcal N}}
&& \mbox{if } ({\lambda},u)=({\lambda}^*_{k,+},u^*_{k,+}).
\end{aligned}
\right.
\end{equation*}
The same assertion holds for ${{\mathcal C}}_{k,-}^o$ in place of ${{\mathcal C}}_{k,+}^o$.
\end{prop}
To prove this proposition, we examine the behavior of $D({\lambda}_k^o ({\beta}),{\beta},-{\beta})$.
First we consider $P_{\beta} ({\lambda}_k^o ({\beta}),{\beta})$.
\begin{lem}\label{lem:Pbnz}
If \eqref{aasfw} holds,
then $(-1)^{k-1} P_{\beta} ({\lambda}_k^o ({\beta}),{\beta})>0$
for all $k \in {\mathbb N}$ and ${\beta} \in I \setminus \{0\}$.
\end{lem}
\begin{proof}
By definition, we have
\begin{equation}\label{thph}
\theta({\lambda}_k^o ({\beta}),{\beta})=\phi_k({\beta}).
\end{equation}
This together with \eqref{pqdb} and \eqref{ge0e} yields
\begin{align*}
P_{\beta}({\lambda}_k^o ({\beta}),{\beta})
&=\left. \frac{G'({\beta} \cos \phi) {\beta} \cos \phi -G({\beta} \cos \phi)}{{\beta}}
\right|_{\phi=\phi_k({\beta})}
\\
&\quad \left. +\left( {\beta} \sin \phi
+\frac{G({\beta} \cos \phi) \cos \phi}{G'({\beta} \cos \phi) \sin \phi} \right)
\int_0^{\phi} G''({\beta} \cos \tau) \cos \tau d\tau \right|_{\phi=\phi_k({\beta})}.
\end{align*}
It follows from \eqref{igjg0} that
\begin{align*}
&\left. \frac{G'({\beta} \cos \phi) {\beta}}{G({\beta} \cos \phi)}
\int_0^{\phi}G'({\beta} \cos \tau)d\tau \right|_{\phi=\phi_k({\beta})} \cdot P_{\beta}({\lambda}_k^o ({\beta}),{\beta})
\\
&=\left. \left( I({\beta},\phi) \int_0^{\phi} G''({\beta} \cos \tau) \cos \tau d\tau
+J({\beta},\phi) G'({\beta}\cos \phi) \right) \right|_{\phi=\phi_k({\beta})}.
\end{align*}
Therefore the desired inequality follows from \eqref{igjg}.
\end{proof}
Next we consider $Q_{\beta}({\lambda}_k^o ({\beta}),{\beta})$.
We note that \eqref{pqdb} and \eqref{thph} yield
\begin{equation*}
Q_{\beta}({\lambda}_k^o ({\beta}),{\beta})=R({\beta},\phi_k({\beta})),
\quad
R({\beta},\phi):=-\sin \phi +\frac{{\beta} \cos \phi}{G'({\beta} \cos \phi)}
\int_0^\phi G''({\beta} \cos \tau) \cos \tau d\tau.
\end{equation*}
\begin{lem}\label{lem:qb0l}
For every $k \in {\mathbb N}$,
\begin{equation}\label{Qoas}
(-1)^{k-1} Q_{\beta}({\lambda}_k^o (0),0)<0,
\qquad
\lim_{{\beta} \to \pm {\beta}_0} (-1)^{k-1} Q_{\beta}({\lambda}_k^o ({\beta}),{\beta}) =\infty.
\end{equation}
\end{lem}
\begin{proof}
We see from \eqref{phpr1} that $Q_{\beta}({\lambda}_k^o (0),0)=R(0,z_k)=-\sin z_k$.
Hence the first inequality of \eqref{Qoas} holds.
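As a hedged numerical illustration, assuming (as the factorization of $D({\lambda},0,0)$ in the previous section suggests) that $z_k$ is the unique root of $\cos z = a z \sin z$ in $((k-1)\pi,(k-1/2)\pi)$, one can confirm the sign $(-1)^{k-1}(-\sin z_k)<0$ for sample parameters; the value $a=0.7$ below is an arbitrary choice.

```python
import math

def z_root(a, k, tol=1e-12):
    """Find z_k by bisection of F(z) = cos z - a*z*sin z on ((k-1)pi, (k-1/2)pi).

    On this interval |cos| decreases and |sin| increases, so F has exactly
    one sign change, with F((k-1)pi) = (-1)**(k-1) and F((k-1/2)pi) of the
    opposite sign.
    """
    F = lambda t: math.cos(t) - a*t*math.sin(t)
    lo, hi = (k - 1)*math.pi + 1e-9, (k - 0.5)*math.pi - 1e-9
    while hi - lo > tol:
        mid = 0.5*(lo + hi)
        if F(lo)*F(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5*(lo + hi)

a = 0.7
for k in range(1, 6):
    zk = z_root(a, k)
    assert (k - 1)*math.pi < zk < (k - 0.5)*math.pi
    # (-1)^(k-1) * Q_beta(lam_k^o(0), 0) = (-1)^(k-1) * (-sin z_k) < 0
    assert (-1)**(k - 1) * (-math.sin(zk)) < 0
```

Since $\sin z$ carries the sign $(-1)^{k-1}$ throughout $((k-1)\pi,(k-1/2)\pi)$, the asserted inequality holds for every such root, independently of $a>0$.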
We examine the limit of $Q_{\beta}({\lambda}_k^o ({\beta}),{\beta})$ as ${\beta} \to \pm {\beta}_0$.
By \eqref{Gdpas}, we have
\begin{align*}
{\beta} \int_0^{\phi_k({\beta})} G''({\beta} \cos \tau) \cos \tau d\tau
&\ge \int_0^{\phi_k({\beta})} |{\beta} \cos \tau|
\left\{ \frac{c}{({\beta}_0-|{\beta} \cos \tau|)^{3/2}} -C \right\} d\tau
\\
&\ge c\int_{(k-1)\pi}^{\phi_k({\beta})}
\frac{|{\beta} \cos \tau|}{({\beta}_0-|{\beta} \cos \tau|)^{3/2}} d\tau -C|{\beta}| \phi_k({\beta}).
\end{align*}
Since
\begin{equation*}
{\beta}_0-|{\beta} \cos \tau| ={\beta}_0 -|{\beta}| +|{\beta}| (1-|\cos \tau|)
\le {\beta}_0 -|{\beta}| +{\beta}_0 \sin^2 \tau,
\end{equation*}
we find that
\begin{align*}
\int_{(k-1)\pi}^{\phi_k({\beta})} \frac{|\cos \tau|}{({\beta}_0-|{\beta} \cos \tau|)^{3/2}}d\tau
&\ge \int_{(k-1)\pi}^{\phi_k({\beta})} \frac{|\cos \tau|}{({\beta}_0 -|{\beta}| +{\beta}_0 \sin^2 \tau)^{3/2}}d\tau
\\
&=\frac{1}{\sqrt{{\beta}_0}({\beta}_0-|{\beta}|)}
\int_0^{\frac{\sqrt{{\beta}_0}}{\sqrt{{\beta}_0-|{\beta}|}}|\sin \phi_k({\beta})|} \frac{d\eta}{(1+\eta^2)^{3/2}},
\end{align*}
where we have used the change of variables
$\eta=\sqrt{{\beta}_0}|\sin \tau|/\sqrt{{\beta}_0-|{\beta}|}$.
Lemma~\ref{lem:phli} implies that
the integral on the right is bounded below by some positive constant.
Therefore
\begin{equation}\label{Qoes2}
{\beta} \int_0^{\phi_k({\beta})} G''({\beta} \cos \tau) \cos \tau d\tau
\ge \frac{\tilde c}{{\beta}_0-|{\beta}|} -C|{\beta}| \phi_k({\beta}),
\end{equation}
where $\tilde c>0$ is a constant.
From \eqref{Gpas}, we see that
\begin{equation}\label{Qoes1}
G'({\beta} \cos \phi_k({\beta})) \le \frac{C}{\sqrt{{\beta}_0-|{\beta} \cos \phi_k({\beta})|}}
\le \frac{C}{\sqrt{{\beta}_0-|{\beta}|}}.
\end{equation}
Combining \eqref{Qoes2}, \eqref{Qoes1} and \eqref{phpr2} gives
\begin{align*}
&(-1)^{k-1} Q_{\beta}({\lambda}_k^o ({\beta}),{\beta})
\\
&\ge (-1)^k \sin \phi_k({\beta}) +(-1)^{k-1} \cos \phi_k({\beta})
\left( \frac{\tilde c}{\sqrt{{\beta}_0-|{\beta}|}} -C|{\beta}| \phi_k({\beta}) \sqrt{{\beta}_0-|{\beta}|}\right)
\\
&\to \infty \quad ({\beta} \to \pm {\beta}_0).
\end{align*}
We thus obtain \eqref{Qoas}.
\end{proof}
Lemma~\ref{lem:qb0l}, combined with the continuity of ${\beta} \mapsto Q_{\beta}({\lambda}_k^o ({\beta}),{\beta})$ and the intermediate value theorem, shows that
$Q_{\beta}({\lambda}_k^o ({\beta}^*),{\beta}^*)=0$ for some ${\beta}^* \in I \setminus\{0\}$.
In what follows, we fix one such ${\beta}^*$.
Set $\phi^*:=\phi_k({\beta}^*)$.
Since $Q_{\beta}({\lambda}_k^o ({\beta}),{\beta})=R({\beta},\phi_k ({\beta}))$,
we see that the condition $Q_{\beta}({\lambda}_k^o ({\beta}^*),{\beta}^*)=0$ is equivalent to
\begin{equation}\label{Qoez}
\frac{{\beta}^* \cos \phi^*}{G'({\beta}^* \cos \phi^*)}
\int_0^{\phi^*} G''({\beta}^* \cos \tau) \cos \tau d\tau =\sin \phi^*.
\end{equation}
We investigate the properties of $({\beta}^*,\phi^*)$.
\begin{lem}\label{lem:phes}
There holds
\begin{equation*}
1-\frac{{\beta}^* \sin \phi^*}{\cos \phi^*} \frac{d\phi_k}{d{\beta}}({\beta}^*)>0.
\end{equation*}
\end{lem}
\begin{proof}
A direct computation yields
\begin{align*}
&\frac{1}{{\beta}^*} I({\beta}^*,\phi^*) -\frac{\sin \phi^*}{\cos \phi^*}J({\beta}^*,\phi^*) \\
&=\frac{1}{\sin \phi^* \cos \phi^*} \int_0^{\phi^*} G'({\beta}^* \cos \tau)d\tau
+G'({\beta}^* \cos \phi^*) +\frac{{\beta}^* \sin \phi^*}{\cos \phi^*}
\int_0^{\phi^*} G''({\beta}^* \cos \tau) \cos \tau d\tau
\\
&=\frac{1}{\sin \phi^* \cos \phi^*} \int_0^{\phi^*} G'({\beta}^* \cos \tau)d\tau
+\frac{G'({\beta}^* \cos \phi^*)}{\cos^2 \phi^*}
\\
&>0,
\end{align*}
where we have used \eqref{Qoez} in deriving the second equality.
This together with \eqref{sgni} shows that
\begin{equation*}
1-\frac{{\beta}^* \sin \phi^*}{\cos \phi^*} \frac{d\phi_k}{d{\beta}}({\beta}^*)
=\frac{{\beta}^*}{I({\beta}^*,\phi^*)}
\left( \frac{1}{{\beta}^*} I({\beta}^*,\phi^*) -\frac{\sin \phi^*}{\cos \phi^*}J({\beta}^*,\phi^*) \right) >0,
\end{equation*}
as claimed.
\end{proof}
We recall that the function $h(v)$ is given by \eqref{hdef}.
\begin{lem}\label{lem:bclv}
If \eqref{aasfs0} and \eqref{aasfw} hold,
then $h({\beta}^* \cos \phi^*)>0$.
\end{lem}
\begin{proof}
To obtain a contradiction, we suppose that $h({\beta}^* \cos \phi^*) \le 0$.
By Lemma~\ref{lem:gdpe},
we have
\begin{equation}\label{bclv1}
{\beta}^* \int_0^{\phi^*} G''({\beta}^* \cos \tau) \cos \tau d\tau
\ge {\beta}^* \int_{(k-1)\pi}^{\phi^*} G''({\beta}^* \cos \tau) \cos \tau d\tau.
\end{equation}
According to (ii) of Lemma~\ref{lem:Gpro}, we can use \eqref{aashs0}.
Hence, from the assumption $h({\beta}^* \cos \phi^*) \le 0$,
we find that
\begin{equation*}
h({\beta}^* \cos \tau)<0 \quad \mbox{for all } \tau \in [(k-1)\pi,\phi^*).
\end{equation*}
This particularly implies that the function $G'({\beta}^* \cos \tau)/|\cos \tau|$
is decreasing on $[(k-1)\pi,\phi^*)$, since $(G'(v)/v)'=-G'(v)h(v)/v^2$.
Therefore
\begin{align*}
G''({\beta}^* \cos \tau) {\beta}^* \cos \tau
&=G'({\beta}^* \cos \tau)(1-h({\beta}^* \cos \tau))
\\
&>\frac{G'({\beta}^* \cos \phi^*)}{|\cos \phi^*|} |\cos \tau|
\quad \mbox{for all } \tau \in [(k-1)\pi,\phi^*).
\end{align*}
Plugging this into \eqref{bclv1},
we obtain
\begin{equation*}
{\beta}^* \int_0^{\phi^*} G''({\beta}^* \cos \tau) \cos \tau d\tau
>\frac{G'({\beta}^* \cos \phi^*)}{|\cos \phi^*|}
\int_{(k-1)\pi}^{\phi^*} |\cos \tau| d\tau
=\frac{G'({\beta}^* \cos \phi^*) \sin \phi^*}{\cos \phi^*}.
\end{equation*}
This contradicts \eqref{Qoez}, and therefore $h({\beta}^* \cos \phi^*)>0$.
\end{proof}
\begin{lem}\label{lem:rbes1}
Let \eqref{aasfs0} and \eqref{aasfs1} hold and assume that $h({\beta}^*)>0$.
Then
\begin{equation}\label{rbe1}
R_{\beta}({\beta}^*,\phi^*) {\beta}^* \sin \phi^* >0.
\end{equation}
\end{lem}
\begin{proof}
We note that the assumptions \eqref{aasfs0} and \eqref{aasfs1}
give \eqref{aashs0} and \eqref{aashs1}.
By \eqref{aashs0} and the assumption $h({\beta}^*)>0$,
we have
\begin{equation}\label{bces2}
|{\beta}^* \cos \tau | <v_0 \quad \mbox{for all } \tau \in [0,\phi^*].
\end{equation}
A direct computation yields
\begin{align*}
R_{\beta} ({\beta},\phi)&=-\frac{G''({\beta} \cos \phi) {\beta} \cos^2 \phi}{G'({\beta} \cos \phi)^2}
\int_0^\phi G''({\beta} \cos \tau) \cos \tau d\tau
\\
&\quad +\frac{\cos \phi}{G'({\beta} \cos \phi)}
\frac{\partial}{\partial {\beta}} \left( {\beta} \int_0^\phi G''({\beta} \cos \tau) \cos \tau d\tau \right).
\end{align*}
Since $(G''(v)v)' =-(G'(v)h(v))'+G''(v)$,
we have
\begin{align*}
\frac{\partial}{\partial {\beta}} \left( G''({\beta} \cos \tau) {\beta} \cos \tau \right)
&=-\frac{d}{dv} (G'(v) h(v))|_{v={\beta} \cos \tau} \cdot \cos \tau +G''({\beta} \cos \tau) \cos \tau
\\
&=\frac{\partial}{\partial \tau} \left( G'({\beta} \cos \tau) h({\beta} \cos \tau)\right)
\cdot \frac{\cos \tau}{{\beta} \sin \tau} +G''({\beta} \cos \tau) \cos \tau,
\end{align*}
and hence
\begin{align}
R_{\beta} ({\beta},\phi) &=\frac{h({\beta} \cos \phi)\cos \phi}{G'({\beta} \cos \phi)}
\int_0^\phi G''({\beta} \cos \tau) \cos \tau d\tau
\nonumber
\\
&\quad +\frac{\cos \phi}{G'({\beta} \cos \phi) {\beta}}
\int_0^\phi \frac{\partial}{\partial \tau} \left( G'({\beta} \cos \tau) h({\beta} \cos \tau)\right)
\cdot \frac{\cos \tau}{\sin \tau} d\tau.
\label{rbre}
\end{align}
Note that
\begin{align*}
&h({\beta} \cos \phi) \int_0^\phi G''({\beta} \cos \tau) \cos \tau d\tau
\\
&=(h({\beta} \cos \phi)-h({\beta})) \int_0^\phi G''({\beta} \cos \tau) \cos \tau d\tau
-\frac{h({\beta})}{{\beta}} \int_0^\phi \left( \frac{\partial}{\partial \tau} G'({\beta} \cos \tau)\right)
\cdot \frac{\cos \tau}{\sin \tau} d\tau.
\end{align*}
Substituting this into \eqref{rbre}, we obtain
\begin{align*}
R_{\beta}({\beta},\phi) &=\frac{(h({\beta} \cos \phi)-h({\beta})) \cos \phi}{G'({\beta} \cos \phi)}
\int_0^\phi G''({\beta} \cos \tau) \cos \tau d\tau
\\
&\quad +\frac{\cos \phi}{G'({\beta} \cos \phi) {\beta}}
\int_0^\phi \frac{\partial}{\partial \tau} \big\{ G'({\beta} \cos \tau) (h({\beta} \cos \tau) -h({\beta}))
\big\} \cdot \frac{\cos \tau}{\sin \tau} d\tau.
\end{align*}
We now apply integration by parts to the second term on the right.
Then, since
\begin{equation*}
h({\beta} \cos \tau) -h({\beta}) =O(1-|\cos \tau|)=O(\sin^2 \tau)
\quad
\mbox{as } \tau \to l\pi, \ l \in {\mathbb Z},
\end{equation*}
we have
\begin{align*}
&\int_0^\phi \frac{\partial}{\partial \tau} \big\{ G'({\beta} \cos \tau) (h({\beta} \cos \tau) -h({\beta}))
\big\} \cdot \frac{\cos \tau}{\sin \tau} d\tau
\\
&=\frac{G'({\beta} \cos \phi) (h({\beta} \cos \phi) -h({\beta})) \cos \phi}{\sin \phi}
+\int_0^\phi \frac{G'({\beta} \cos \tau)(h({\beta} \cos \tau) -h({\beta}))}{\sin^2 \tau} d\tau.
\end{align*}
This together with \eqref{Qoez} shows that
\begin{align*}
&R_{\beta} ({\beta}^*,\phi^*) {\beta}^* \sin \phi^*
\\
&=\left( \frac{{\beta}^* \sin \phi^* \cos \phi^*}{G'({\beta}^* \cos \phi^*)}
\int_0^{\phi^*} G''({\beta}^* \cos \tau) \cos \tau d\tau
+\cos^2 \phi^* \right) (h({\beta}^* \cos \phi^*)-h({\beta}^*))
\\
&\quad +\frac{\sin \phi^* \cos \phi^*}{G'({\beta}^* \cos \phi^*)}
\int_0^{\phi^*} \frac{G'({\beta}^* \cos \tau)(h({\beta}^* \cos \tau) -h({\beta}^*))}{\sin^2 \tau} d\tau
\\
&=h({\beta}^* \cos \phi^*)-h({\beta}^*)
+\frac{\sin \phi^* \cos \phi^*}{G'({\beta}^* \cos \phi^*)}
\int_0^{\phi^*} \frac{G'({\beta}^* \cos \tau)(h({\beta}^* \cos \tau) -h({\beta}^*))}{\sin^2 \tau} d\tau.
\end{align*}
We see from \eqref{aashs1} and \eqref{bces2} that the right-hand side is positive,
and \eqref{rbe1} is proved.
\end{proof}
\begin{lem}\label{lem:rbes2}
Let \eqref{aasfs0}, \eqref{aasfs2} and \eqref{aasfw} hold
and assume that $h({\beta}^*) \le 0$.
Then
\begin{equation}\label{rbe2}
R_{\beta} ({\beta}^*,\phi^*) {\beta}^* \sin \phi^* >h({\beta}^* \cos \phi^*).
\end{equation}
\end{lem}
\begin{proof}
We see from (ii) of Lemma~\ref{lem:Gpro} that
\eqref{aashs0} and \eqref{aashs2} are satisfied.
By \eqref{aashs0}, Lemma~\ref{lem:bclv} and the assumption $h({\beta}^*) \le 0$,
we can take $\tau_0 \in [0,\phi^*-(k-1)\pi)$ such that $|{\beta}^*| \cos \tau_0=v_0$.
Put
\begin{equation*}
{{\mathcal I}}:=((k-1)\pi+\tau_0,\phi^*] \cup \bigcup_{j=1}^{k-1} ((j-1)\pi+\tau_0, j\pi -\tau_0).
\end{equation*}
Then, from \eqref{aashs0}, we have
\begin{equation}\label{bces}
|{\beta}^* \cos \tau| \left\{
\begin{aligned}
&<v_0 && \mbox{if } \tau \in {{\mathcal I}},
\\
&\ge v_0 && \mbox{if } \tau \in [0,\phi^*] \setminus {{\mathcal I}},
\end{aligned}
\right.
\end{equation}
and
\begin{equation}\label{shbc}
h({\beta}^*\cos \tau) \left\{
\begin{aligned}
&>0 && \mbox{if } \tau \in {{\mathcal I}},
\\
&=0 && \mbox{if } \tau \in \partial {{\mathcal I}} \setminus \{ \phi^*\}.
\end{aligned}
\right.
\end{equation}
We estimate the second term on the right of \eqref{rbre}.
Using integration by parts and \eqref{shbc},
we have
\begin{align*}
&\int_{{\mathcal I}} \frac{\partial}{\partial \tau} \left( G'({\beta}^* \cos \tau) h({\beta}^* \cos \tau)\right)
\cdot \frac{\cos \tau}{\sin \tau} d\tau
\\
&=\frac{G'({\beta}^* \cos \phi^*) h({\beta}^* \cos \phi^*)\cos \phi^*}{\sin \phi^*}
+\int_{{\mathcal I}} \frac{G'({\beta}^* \cos \tau) h({\beta}^* \cos \tau)}{\sin^2 \tau} d\tau
\\
&>\frac{G'({\beta}^* \cos \phi^*) h({\beta}^* \cos \phi^*)\cos \phi^*}{\sin \phi^*}.
\end{align*}
Furthermore, \eqref{aashs2} and \eqref{bces} yield
\begin{align*}
&\int_{[0,\phi^*] \setminus {{\mathcal I}}}
\frac{\partial}{\partial \tau} \left( G'({\beta}^* \cos \tau) h({\beta}^* \cos \tau)\right)
\cdot \frac{\cos \tau}{\sin \tau} d\tau
\\
&=-\int_{[0,\phi^*] \setminus {{\mathcal I}}}
\left. \left\{ v \frac{d}{dv} (G'(v) h(v)) \right\} \right|_{v={\beta}^* \cos \tau} d\tau
\\
&\ge 0.
\end{align*}
Plugging these inequalities into \eqref{rbre} and using \eqref{Qoez},
we obtain
\begin{align*}
&R_{\beta} ({\beta}^*,\phi^*) {\beta}^* \sin \phi^*
\\
&>\frac{h({\beta}^* \cos \phi^*) {\beta}^* \sin \phi^* \cos \phi^*}{G'({\beta}^* \cos \phi^*)}
\int_0^{\phi^*} G''({\beta}^* \cos \tau) \cos \tau d\tau
+h({\beta}^* \cos \phi^*) \cos^2 \phi^*
\\
&=h({\beta}^* \cos \phi^*),
\end{align*}
which proves the lemma.
\end{proof}
\begin{lem}\label{ddbp}
Assume \eqref{aasfs}.
Then
\begin{equation}\label{dQnz}
(-1)^{k-1} {\operatorname{sgn}} ({\beta}^*) \left. \frac{d}{d{\beta}}Q_{\beta}({\lambda}_k^o ({\beta}),{\beta}) \right|_{{\beta}={\beta}^*}>0.
\end{equation}
\end{lem}
\begin{proof}
Since $Q_{\beta}({\lambda}_k^o ({\beta}),{\beta})=R({\beta},\phi_k({\beta}))$, we have
\begin{equation*}
\left. \frac{d}{d{\beta}}Q_{\beta}({\lambda}_k^o ({\beta}),{\beta}) \right|_{{\beta}={\beta}^*}
=R_{\beta} ({\beta}^*,\phi^*) +R_\phi ({\beta}^*,\phi^*) \frac{d\phi_k}{d{\beta}}({\beta}^*).
\end{equation*}
To estimate this, we compute $R_\phi ({\beta}^*,\phi^*)$.
A direct computation gives
\begin{align*}
R_\phi ({\beta},\phi)&=-\cos \phi
-\frac{(G'({\beta} \cos \phi)-G''({\beta} \cos \phi) {\beta} \cos \phi) {\beta} \sin \phi}{G'({\beta} \cos \phi)^2}
\int_0^\phi G''({\beta} \cos \tau) \cos \tau d\tau
\\
&\quad +\frac{{\beta} \cos \phi}{G'({\beta} \cos \phi)} \cdot G''({\beta} \cos \phi) \cos \phi
\\
&=-\left( \cos \phi +\frac{{\beta} \sin \phi}{G'({\beta} \cos \phi)}
\int_0^\phi G''({\beta} \cos \tau) \cos \tau d\tau \right) h({\beta} \cos \phi).
\end{align*}
From \eqref{Qoez}, we find that
\begin{equation}\label{rphe}
R_\phi ({\beta}^*,\phi^*)
=-\frac{h({\beta}^* \cos \phi^*)}{\cos \phi^*}.
\end{equation}
We consider the two cases: $h({\beta}^*)>0$ and $h({\beta}^*) \le 0$.
We first consider the latter case.
Lemma~\ref{lem:rbaf} shows that \eqref{aasfw} is satisfied,
and hence we can apply Lemmas~\ref{lem:bclv} and \ref{lem:rbes2}.
From \eqref{rbe2} and \eqref{rphe},
we have
\begin{align*}
{\beta}^* \sin \phi^* \cdot \left. \frac{d}{d{\beta}}Q_{\beta}({\lambda}_k^o ({\beta}),{\beta}) \right|_{{\beta}={\beta}^*}
>h({\beta}^* \cos \phi^*)
\left( 1 -\frac{{\beta}^* \sin \phi^*}{\cos \phi^*} \frac{d\phi_k}{d{\beta}}({\beta}^*) \right).
\end{align*}
Lemmas~\ref{lem:phes} and \ref{lem:bclv} show that the right-hand side is positive.
Since the sign of ${\beta}^* \sin \phi^*$ coincides with that of $(-1)^{k-1} {\operatorname{sgn}} ({\beta}^*)$,
we obtain \eqref{dQnz}.
Let us consider the other case $h({\beta}^*)>0$.
Then we have $(-1)^{k-1} {\operatorname{sgn}} ({\beta}^*) R_{\beta} ({\beta}^*,\phi^*)>0$ by Lemma~\ref{lem:rbes1}.
Moreover, we see from Lemma~\ref{lem:bclv} and \eqref{rphe} that
$(-1)^{k-1}R_\phi ({\beta}^*,\phi^*)<0$.
Therefore it is sufficient to show that
\begin{equation*}
{\operatorname{sgn}} ({\beta}^*) \frac{d\phi_k}{d{\beta}}({\beta}^*) \le 0.
\end{equation*}
To prove this, we define
\begin{equation*}
\tilde h(v):=\left\{
\begin{aligned}
&\frac{G'(v)v}{G(v)} && (v \neq 0),
\\
&1 && (v=0).
\end{aligned}
\right.
\end{equation*}
By (i) of Lemma~\ref{lem:Gpro},
we see that $\tilde h \in C(I) \cap C^2 (I \setminus \{0\})$ and $G\tilde h \in C^1(I)$.
Notice that
\begin{equation*}
\frac{d}{dv} \left( \frac{G(v)^2 \tilde h'(v)}{G'(v)} \right) =-G(v)h'(v)
\quad \mbox{for } v \in I \setminus \{0\}.
\end{equation*}
Inequality \eqref{aashs1} shows that the right-hand side of this equality is nonnegative if $v \in (-v_0,v_0)$,
and hence
\begin{equation*}
{\operatorname{sgn}}(v) \frac{G(v)^2 \tilde h'(v)}{G'(v)}
\ge \lim_{v \to 0} {\operatorname{sgn}}(v) \frac{G(v)^2 \tilde h'(v)}{G'(v)}
=0 \quad \mbox{for } v \in (-v_0,v_0) \setminus \{0\}.
\end{equation*}
From this we have
\begin{equation}\label{edth}
{\operatorname{sgn}}(v) \tilde h(v) \mbox{ is nondecreasing in } (-v_0,v_0) \setminus \{0\}.
\end{equation}
Since $G'(v)+G''(v)v=(G(v)\tilde h(v))'$, we see that $J({\beta},\phi)$ is written as
\begin{align*}
J({\beta},\phi)
&=\tilde h({\beta} \cos \phi) \int_0^\phi G'({\beta} \cos \tau)d\tau
+\int_0^\phi \frac{\partial}{\partial \tau}
\big( G({\beta} \cos \tau) \tilde h({\beta} \cos \tau) \big) \cdot \frac{1}{{\beta} \sin \tau} d\tau
\\
&=(\tilde h({\beta} \cos \phi) -\tilde h({\beta})) \int_0^\phi G'({\beta} \cos \tau)d\tau
\\
&\quad +\int_0^\phi \frac{\partial}{\partial \tau} \big\{ G({\beta} \cos \tau) (\tilde h({\beta} \cos \tau)
-\tilde h({\beta})) \big\} \cdot \frac{1}{{\beta} \sin \tau} d\tau.
\end{align*}
Integrating by parts, we see that
the second term on the right equals
\begin{align*}
&\int_0^\phi \frac{\partial}{\partial \tau}
\big\{ G({\beta} \cos \tau) (\tilde h({\beta} \cos \tau) -\tilde h({\beta})) \big\} \cdot \frac{1}{{\beta} \sin \tau} d\tau
\\
&=(\tilde h({\beta} \cos \phi) -\tilde h({\beta})) \frac{G({\beta} \cos \phi)}{{\beta} \sin \phi}
+\int_0^\phi (\tilde h({\beta} \cos \tau) -\tilde h({\beta}))
\frac{G({\beta} \cos \tau) \cos \tau}{{\beta} \sin^2 \tau} d\tau.
\end{align*}
Therefore
\begin{align}
J({\beta}^*,\phi^*) &=(\tilde h({\beta}^* \cos \phi^*) -\tilde h({\beta}^*))
\left( \int_0^{\phi^*} G'({\beta}^* \cos \tau)d\tau
+\frac{G({\beta}^* \cos \phi^*)}{{\beta}^* \sin \phi^*}\right)
\nonumber
\\
&\quad +\int_0^{\phi^*} (\tilde h({\beta}^* \cos \tau) -\tilde h({\beta}^*))
\frac{G({\beta}^* \cos \tau) \cos \tau}{{\beta}^* \sin^2 \tau} d\tau.
\label{Jbpa}
\end{align}
By \eqref{aashs0} and the assumption $h({\beta}^*)>0$,
we have $|{\beta}^* \cos \tau|<v_0$ for all $\tau \in [0,\phi^*]$.
It follows from \eqref{edth} that the right-hand side of \eqref{Jbpa} is nonpositive,
and hence $J({\beta}^*,\phi^*) \le 0$.
This together with \eqref{sgni} shows that
\begin{equation*}
{\operatorname{sgn}} ({\beta}^*) \frac{d\phi_k}{d{\beta}}({\beta}^*)
=\frac{{\operatorname{sgn}} ({\beta}^*)}{I({\beta}^*,\phi^*)} \cdot J({\beta}^*,\phi^*) \le 0.
\end{equation*}
This completes the proof.
\end{proof}
We can now prove Proposition~\ref{prop:bpco}.
\begin{proof}[Proof of Proposition~\ref{prop:bpco}]
By \eqref{PQsy}, we have
\begin{equation*}
D({\lambda}_k^o ({\beta}),{\beta},-{\beta})=2P_{\beta} ({\lambda}_k^o ({\beta}),{\beta})Q_{\beta} ({\lambda}_k^o ({\beta}),{\beta}).
\end{equation*}
In particular, Lemma~\ref{lem:Pbnz} yields
\begin{equation*}
{{\mathcal D}}:=\{ {\beta} \in I \setminus \{0\}; D({\lambda}_k^o ({\beta}),{\beta},-{\beta})=0\}
=\{ {\beta} \in I \setminus \{0\}; Q_{\beta} ({\lambda}_k^o ({\beta}),{\beta})=0\}.
\end{equation*}
We see from Lemmas~\ref{lem:Pbnz} and \ref{lem:qb0l} that
\begin{equation*}
D({\lambda}_k^o ({\beta}),{\beta},-{\beta}) \left\{
\begin{aligned}
&<0 && \mbox{if } |{\beta}| \mbox{ is small},
\\
&>0 && \mbox{if } |{\beta}| \mbox{ is close to } {\beta}_0.
\end{aligned}
\right.
\end{equation*}
Furthermore, Lemmas~\ref{lem:Pbnz} and \ref{ddbp} show that
for any ${\beta}^* \in {{\mathcal D}}$,
\begin{equation}\label{dDba}
{\operatorname{sgn}} ({\beta}^*) \left. \frac{d}{d{\beta}} D({\lambda}_k^o ({\beta}),{\beta},-{\beta}) \right|_{{\beta}={\beta}^*}
=2{\operatorname{sgn}} ({\beta}^*) P_{\beta} ({\lambda}_k^o ({\beta}^*),{\beta}^*)
\cdot \left. \frac{d}{d{\beta}} Q_{\beta} ({\lambda}_k^o ({\beta}),{\beta}) \right|_{{\beta}={\beta}^*} >0.
\end{equation}
Therefore there exist ${\beta}_+^* \in (0,{\beta}_0)$ and ${\beta}_-^* \in (-{\beta}_0,0)$ such that
\begin{equation}\label{cDtp}
{{\mathcal D}}=\{ {\beta}_+^*,{\beta}_-^*\}.
\end{equation}
From Lemma~\ref{lem:rbaf},
we know that \eqref{aasfw} is satisfied under the assumption \eqref{aasfs}.
Hence (ii) of Proposition~\ref{prop:pb} shows that
${{\mathcal C}}_{k,+}^o$ and ${{\mathcal C}}_{k,-}^o$ are written as
\begin{equation}\label{copl}
{{\mathcal C}}_{k,+}^o=\{ ({\lambda},u^o_k(\cdot,{\beta}_+^o({\lambda})))\}_{{\lambda} \in ({\lambda}_{2k-1},\infty)},
\qquad
{{\mathcal C}}_{k,-}^o=\{ ({\lambda},u^o_k(\cdot,{\beta}_-^o({\lambda})))\}_{{\lambda} \in ({\lambda}_{2k-1},\infty)},
\end{equation}
where ${\beta}_+^o({\lambda})$ (resp. ${\beta}_-^o({\lambda})$)
is the inverse of the function $(0,{\beta}_0) \ni {\beta} \mapsto {\lambda}_k^o({\beta})$
(resp. $(-{\beta}_0,0) \ni {\beta} \mapsto {\lambda}_k^o({\beta})$).
Then, by \eqref{dDba}, \eqref{cDtp} and \eqref{sdlk}, we have
\begin{equation}\label{dDab}
\left\{
\begin{gathered}
D({\lambda},{\beta}_\pm^o({\lambda}),-{\beta}_\pm^o({\lambda}))=0
\mbox{ if and only if } {\lambda}={\lambda}_k^o({\beta}_\pm^*),
\\
\left. \frac{d}{d{\lambda}} D({\lambda},{\beta}_\pm^o({\lambda}),-{\beta}_\pm^o({\lambda}))
\right|_{{\lambda}={\lambda}_k^o({\beta}_\pm^*)}
=\left( \frac{d{\lambda}_k^o}{d{\beta}}({\beta}_\pm^*) \right)^{-1} \cdot
\left. \frac{d}{d{\beta}} D({\lambda}_k^o ({\beta}),{\beta},-{\beta}) \right|_{{\beta}={\beta}_\pm^*}>0.
\end{gathered}
\right.
\end{equation}
Thus we obtain the desired conclusion by applying (i) and (ii) of Proposition~\ref{prop:bt}.
\end{proof}
\begin{remk}\label{rem:sybp}
Since \eqref{PQsy} and \eqref{lksy} give
$Q_{\beta} ({\lambda}_k^o (-{\beta}),-{\beta})=-Q_{\beta} ({\lambda}_k^o ({\beta}),{\beta})$,
we infer that ${\beta}_-^*=-{\beta}_+^*$.
Hence the bifurcation points
$({\lambda}_k^o({\beta}_+^*),u^o_k(\cdot,{\beta}_+^*)) \in {{\mathcal C}}_{k,+}^o$
and $({\lambda}_k^o({\beta}_-^*),u^o_k(\cdot,{\beta}_-^*)) \in {{\mathcal C}}_{k,-}^o$
obtained in Proposition~\ref{prop:bpco} satisfy
\begin{equation*}
({\lambda}_k^o({\beta}_-^*),u^o_k(\cdot,{\beta}_-^*))
=({\lambda}_k^o({\beta}_+^*),-u^o_k(\cdot,{\beta}_+^*)).
\end{equation*}
\end{remk}
\subsection{Remark on the assumption \eqref{aasfs}}
At the end of this section,
we observe that Proposition~\ref{prop:bpco} is still true for $k=1$
if we drop the assumption \eqref{aasfs1}.
\begin{prop}
Under the assumptions \eqref{aasfs0}, \eqref{aasfs2} and \eqref{aasfw},
Proposition~\ref{prop:bpco} holds for $k=1$.
\end{prop}
\begin{proof}
Let ${\beta}^* \in I \setminus \{0\}$ satisfy $Q({\lambda}_1^o ({\beta}^*),{\beta}^*)=0$.
If \eqref{dQnz} is satisfied for $k=1$,
then the proposition can be proved in the same way as Proposition~\ref{prop:bpco}.
Therefore we only need to show that
\begin{equation}\label{hban}
h({\beta}^*) \le 0,
\end{equation}
since this enables us to apply Lemma~\ref{lem:rbes2} to obtain \eqref{dQnz}.
Contrary to \eqref{hban}, suppose that $h({\beta}^*)>0$.
Then \eqref{aashs0} yields
\begin{equation*}
h({\beta}^* \cos \tau)>0 \quad \mbox{for all } \tau \in [0,\phi^*],
\end{equation*}
where $\phi^*:=\phi_1({\beta}^*) \in (0,\pi/2)$.
From this and the fact that $(G'(v)/v)'=-G'(v)h(v)/v^2$,
we see that the function $G'({\beta}^* \cos \tau)/\cos \tau$
is increasing on $[0,\phi^*]$.
Hence
\begin{align*}
{\beta}^* \int_0^{\phi^*} G''({\beta}^* \cos \tau) \cos \tau d\tau
&=\int_0^{\phi^*} G'({\beta}^* \cos \tau) (1-h({\beta}^* \cos \tau)) d\tau
\\
&<\int_0^{\phi^*} \frac{G'({\beta}^* \cos \phi^*)}{\cos \phi^*} \cos \tau d\tau
\\
&=\frac{G'({\beta}^* \cos \phi^*) \sin \phi^*}{\cos \phi^*}.
\end{align*}
This gives $Q({\lambda}_1^o ({\beta}^*),{\beta}^*)<0$, a contradiction.
Therefore we obtain \eqref{hban}, and the proof is complete.
\end{proof}
\section{Proof of Theorem~\ref{mthm}}\label{sec:pt}
To prove Theorem~\ref{mthm},
we compute the Morse index of solutions on ${{\mathcal S}}^e$ and ${{\mathcal S}}^o$.
We write ${\lambda}^*$ for the number ${\lambda}_{k,+}^*$ obtained in Proposition~\ref{prop:bpco}.
\begin{prop}\label{prop:mi}
For $k \in {\mathbb N}$,
the following hold.
\begin{enumerate}[(i)]
\item
Let \eqref{aasfw} hold and let $({\lambda},u) \in {{\mathcal C}}_k^e$.
Then $u$ is nondegenerate and $i(u)=2k$.
\item
Let \eqref{aasfs} hold and let $({\lambda},u) \in {{\mathcal C}}_k^o$.
Then $u$ is nondegenerate if ${\lambda} \neq {\lambda}^*$ and
\begin{equation*}
i(u)=\left\{
\begin{aligned}
&2k-1 && ({\lambda}<{\lambda}^*),
\\
&2k-2 && ({\lambda} \ge {\lambda}^*).
\end{aligned}
\right.
\end{equation*}
\end{enumerate}
\end{prop}
In what follows, we fix $k \in {\mathbb N}$.
For $n \in {\mathbb N} \cup \{0\}$,
let $\mu_n^o({\beta})$ (resp. $\mu_n^e({\beta})$)
denote the $(n+1)$-th largest eigenvalue of \eqref{llevp}
for $({\lambda},u)=({\lambda}_k^o({\beta}),u^o_k(\cdot,{\beta})) \in {{\mathcal C}}_k^o$
(resp. $({\lambda},u)=({\lambda}_k^e({\beta}),u^e_k(\cdot,{\beta})) \in {{\mathcal C}}_k^e$).
We see from Lemma~\ref{lem:ipmi} that
$\mu_n^o({\beta})$ and $\mu_n^e({\beta})$ are continuous with respect to ${\beta}$.
In the following two lemmas,
we give basic estimates of $\mu_n^o({\beta})$ and $\mu_n^e({\beta})$.
\begin{lem}\label{lem:evaz}
There hold $\mu_{2k-2}^o(0)>0$ and $\mu_{2k-1}^e(0)>0$.
\end{lem}
\begin{proof}
It is elementary to show that
the $(n+1)$-th eigenvalue $\mu_n$ of \eqref{llevp} for $u=0$
is given by
\begin{equation*}
\mu_{2k-2}={\lambda} f'(0)-\{(k-1)\pi \}^2,
\qquad
\mu_{2k-1}={\lambda} f'(0)-z_k^2
\quad
(k \in {\mathbb N}).
\end{equation*}
Hence
\begin{equation*}
\mu_{2k-2}^o(0)>\mu_{2k-1}^o(0)={\lambda}_{2k-1} f'(0)-z_k^2=0,
\qquad
\mu_{2k-1}^e(0)>\mu_{2k}^e(0)={\lambda}_{2k} f'(0)-(k\pi)^2=0,
\end{equation*}
as desired.
\end{proof}
\begin{lem}\label{lem:eves}
Assume that \eqref{aasfw} holds.
Then $\mu_{2k-1}^o({\beta})<0$ and $\mu_{2k}^e({\beta})<0$
for all ${\beta} \in I \setminus \{0\}$.
\end{lem}
\begin{proof}
Let $Z(w)$ denote the number of zeros of a function $w$ in $(-1,1) \setminus \{0\}$.
By Lemma~\ref{lem:Mies},
it suffices to show that
\begin{gather*}
Z(u^o_k(\cdot,{\beta}))=2k-2,
\quad
Z(u^e_k(\cdot,{\beta}))=2k,
\\
u^o_k(-0,{\beta})u^o_k(+0,{\beta})<0,
\quad
u^e_k(-0,{\beta})u^e_k(+0,{\beta})>0
\end{gather*}
for ${\beta} \in I \setminus \{0\}$.
To derive these,
we recall that any $u \in {{\mathcal S}}_{\lambda}$ is written as
\begin{equation*}
u(x)=\left\{
\begin{aligned}
&U\left( \sqrt{\lambda} (x+1),{\beta}_1 \right)
=G\left( {\beta}_1 \cos \Theta \left( \sqrt{\lambda} (x+1),{\beta}_1\right) \right)
&& \mbox{for } x \in [-1,0), \\
&U\left( \sqrt{\lambda} (1-x),{\beta}_2 \right)
=G\left( {\beta}_2 \cos \Theta \left( \sqrt{\lambda} (1-x),{\beta}_2\right) \right)
&& \mbox{for } x \in (0,1],
\end{aligned}
\right.
\end{equation*}
where ${\beta}_1=G(u(-1))$ and ${\beta}_2=G(u(1))$.
This implies that if ${\beta}_1 {\beta}_2 \neq 0$ and
\begin{equation*}
\left( m_j-\frac{1}{2}\right) \pi< \theta ({\lambda},{\beta}_j)
=\Theta (\sqrt{{\lambda}},{\beta}_j)
\le \left( m_j+\frac{1}{2}\right) \pi \quad \mbox{for some } m_j \in {\mathbb N} \cup \{0\},
\end{equation*}
then $Z(u)=m_1+m_2$ and ${\operatorname{sgn}}(u(-0)u(+0))={\operatorname{sgn}} ((-1)^{m_1+m_2} {\beta}_1 {\beta}_2)$.
Since we know that $\theta({\lambda}_k^o ({\beta}),{\beta})=\phi_k({\beta}) \in ((k-1)\pi,(k-1/2)\pi)$
and $\theta({\lambda}_k^e ({\beta}),{\beta})=k\pi$,
we have
\begin{gather*}
Z(u^o_k(\cdot,{\beta}))=(k-1)+(k-1)=2k-2,
\quad
Z(u^e_k(\cdot,{\beta}))=k+k=2k,
\\
{\operatorname{sgn}}(u^o_k(-0,{\beta})u^o_k(+0,{\beta}))={\operatorname{sgn}} \left( (-1)^{(k-1)+(k-1)} \cdot (-{\beta}^2)\right)
={\operatorname{sgn}}(-{\beta}^2)<0,
\\
{\operatorname{sgn}}(u^e_k(-0,{\beta})u^e_k(+0,{\beta}))={\operatorname{sgn}} \left( (-1)^{k+k} \cdot {\beta}^2\right)
={\operatorname{sgn}}( {\beta}^2)>0
\end{gather*}
for ${\beta} \in I \setminus \{0\}$.
Therefore the lemma follows.
\end{proof}
Let us show Proposition~\ref{prop:mi}.
\begin{proof}[Proof of Proposition~\ref{prop:mi}]
First we prove (i).
Lemma \ref{lem:evaz} shows that $\mu_{2k-1}^e({\beta})$ is positive if $|{\beta}|$ is small enough.
As shown in the proof of Lemma~\ref{lem:ndse},
we know that $D({\lambda}_k^e({\beta}),{\beta},{\beta}) \neq 0$ for ${\beta} \in I \setminus \{0\}$.
From this and Lemma~\ref{lem:ndcD}, we see that $\mu_{2k-1}^e({\beta})$ never vanishes.
Therefore $\mu_{2k-1}^e({\beta})>0$ for all ${\beta} \in I$.
Thus (i) is verified by combining this with Lemma~\ref{lem:eves}.
Next we prove (ii).
We recall that \eqref{cDtp} holds.
Hence Lemma~\ref{lem:ndcD} gives
\begin{gather}
\mu_n^o({\beta}) \neq 0 \mbox{ for all } n \in {\mathbb N} \cup \{0\}
\mbox{ and } {\beta} \in I \setminus \{0,{\beta}_+^*,{\beta}_-^*\},
\label{ednv}
\\
\mu_{n_+}^o({\beta}_+^*)=\mu_{n_-}^o({\beta}_-^*)=0
\mbox{ for some } n_+,n_- \in {\mathbb N} \cup \{0\}.
\nonumber
\end{gather}
Moreover, Lemma~\ref{lem:evaz} shows that
\begin{equation*}
\mu_n^o({\beta}) \mbox{ is positive if } |{\beta}| \mbox{ is small and } n \le 2k-2.
\end{equation*}
Combining these with Lemma~\ref{lem:eves},
we deduce that
\begin{gather}
\mu_{2k-2}^o({\beta}_+^*)=\mu_{2k-2}^o({\beta}_-^*)=0,
\label{cevv}
\\
\mu_{2k-3}^o({\beta}) >0 \quad \mbox{for all } {\beta} \in I,
\mbox{ provided } k \ge 2.
\label{revp}
\end{gather}
To investigate the behavior of $\mu_{2k-2}^o({\beta})$,
we apply (iii) of Proposition~\ref{prop:bt}.
For this purpose,
we use the parametrization of ${{\mathcal C}}_{k,+}^o$ and ${{\mathcal C}}_{k,-}^o$ as in \eqref{copl}.
Then $\mu_n^o({\beta}_\pm^o ({\lambda}))=\mu_n(u^o_k(\cdot,{\beta}_\pm^o ({\lambda})))$.
By \eqref{cevv}, we can apply \eqref{evdf}
for $n=2k-2$, $u(\cdot,{\lambda})=u^o_k(\cdot,{\beta}_\pm^o ({\lambda}))$,
${\beta}_1({\lambda})={\beta}_\pm^o ({\lambda})$ and ${\beta}_2({\lambda})=-{\beta}_\pm^o ({\lambda})$ to obtain
\begin{equation*}
{\operatorname{sgn}} \left( \left. \frac{d}{d{\lambda}} \mu_{2k-2}^o({\beta}_\pm^o ({\lambda}))
\right|_{{\lambda}={\lambda}_k^o({\beta}_\pm^*)} \right) =-{\operatorname{sgn}} \left( \left. \frac{d}{d{\lambda}}
D({\lambda},{\beta}_{\pm}^o({\lambda}),-{\beta}_{\pm}^o({\lambda})) \right|_{{\lambda}={\lambda}_k^o({\beta}_\pm^*)} \right).
\end{equation*}
According to \eqref{dDab}, we know that the right-hand side is negative.
Therefore it follows from \eqref{ednv} that
\begin{equation*}
\mu_{2k-2}^o({\beta}_\pm^o ({\lambda})) \left\{
\begin{aligned}
&>0 && \mbox{if } {\lambda}<{\lambda}_k^o({\beta}_\pm^*),
\\
&<0 && \mbox{if } {\lambda}>{\lambda}_k^o({\beta}_\pm^*).
\end{aligned}
\right.
\end{equation*}
Combining this with Lemma~\ref{lem:eves} and \eqref{revp},
we conclude that
\begin{equation*}
i(u^o_k(\cdot,{\beta}_\pm^o ({\lambda})))= \left\{
\begin{aligned}
&2k-1 && \mbox{if } {\lambda}<{\lambda}_k^o({\beta}_\pm^*),
\\
&2k-2 && \mbox{if } {\lambda} \ge {\lambda}_k^o({\beta}_\pm^*).
\end{aligned}
\right.
\end{equation*}
As noted in Remark~\ref{rem:sybp},
we know that ${\lambda}_k^o({\beta}_+^*)={\lambda}_k^o({\beta}_-^*)$.
Consequently, (ii) is proved.
\end{proof}
We are now in a position to prove Theorem~\ref{mthm}.
\begin{proof}[Proof of Theorem~\ref{mthm}]
We note that by the assumption \eqref{aasfs}, the condition \eqref{aasfw} is satisfied.
We put ${{\mathcal C}}_{2k-1}={{\mathcal C}}_{k,+}^o$, ${{\mathcal C}}_{2k}={{\mathcal C}}_{k,+}^e$.
Then (i) and (ii) follow immediately from
Proposition~\ref{prop:pb} and Lemma~\ref{lem:ndse}.
Moreover, (iii) and (iv) are direct consequences of
Propositions~\ref{prop:bpco} and \ref{prop:mi} and Remark~\ref{rem:sybp}.
Therefore the proof is complete.
\end{proof}
\section*{Acknowledgements}
The author would like to thank Professors Satoshi Tanaka and Masahito Ohta
for calling his attention to the references
\cite{AN13,AG18,S90}.
This work is supported in part by the Grant-in-Aid for Early-Career Scientists 19K14574,
Japan Society for the Promotion of Science.
\section{Introduction}
Noise-induced escapes from potential wells, as well as noise-driven transitions over potential barriers, have been studied for a long time.
The pioneering works of Farkas \cite{farkas1927} and Kramers \cite{kramers1940} related the escape rate to the height of the potential barrier and the system temperature.
Further developments in the theory of stochastic processes and Brownian motion \cite{borodin2002} allow for detailed studies of noise-driven systems at theoretical \cite{mcnamara1989,hanggi1990,gammaitoni1998,doering1992} and experimental \cite{russell1999use,evstigneev2005,schmitt2006} levels.
The presence of noise can facilitate escape kinetics, resulting in optimal input/output synchronization (stochastic resonance \cite{mcnamara1989}), the fastest escape kinetics (stochastic resonant activation \cite{doering1992}) and directed transport (Brownian motors \cite{magnasco1993,reimann2002}), to name a few.
These effects turned out to be relevant in practical applications \cite{simonotto1997visual,priplata2002noise} and biological realms \cite{goelrichter1974,hanggi2002}.
Properties of stochastic systems can be significantly affected by stochastic resetting \cite{evans2011diffusion,pal2019first,evans2020stochastic}, which at selected time instants brings a particle back to a given point.
The restarting captures real-life phenomena like returning to the home range, starting from scratch or making a new attempt.
\bdt{
The noise induced barrier crossing is a process underlying many physical phenomena.
The efficiency of the barrier crossing can be characterized by the mean first passage time (MFPT), which measures the average time needed to cross the barrier for the first time.
The MFPT can be optimized in various manners but only some of them are applicable in the context of rock climbing, which we intend to model here.
For example, in stochastic resonant activation the simultaneous action of noise and barrier modulation can expedite the escape kinetics \cite{doering1992}.
A similar role is played by stochastic resetting, which eliminates suboptimal trajectories \cite{evans2020stochastic}.
The MFPT can also be optimized by shaping \cite{li2017transports,li2016levy,li2020transition,innerbichler2020enhancing,chupeau2020optimizing}, e.g., corrugating the potential barrier, under action of Gaussian \cite{innerbichler2020enhancing,chupeau2020optimizing} and non-Gaussian noises \cite{li2017transports,li2016levy,li2020transition}.
In these setups the system is typically one dimensional and the motion is defined by setting the initial and final points.
The optimization of the MFPT in 1D does not affect the route, in the sense that all intermediate points belong to the interval $(x_i,x_f)$.
On the one hand, rock climbing and hiking can be performed along existing routes, which are almost 1D.
On the other hand, in order to expedite the time needed to reach the final point, one can relax the assumption that the trajectory is fixed by allowing the hiker/climber to select not only grips but also intermediate points within a large range.
Such a scenario moves the optimization problem into graph theory \cite{west2001introduction,bollobas2013modern}, which deals, among others, with finding shortest paths \cite{beardwood1959shortest,frank1969shortest,fu1998expected}, the average path length \cite{fronczak2004average} and the traveling salesman problem \cite{applegate2011traveling,laporte1990selective}.
Obviously, a more general framework gives more space for the optimization as it provides the possibility to avoid more difficult parts of the route.
Nevertheless, it can violate the rules of the ``climbing game'', because it could excessively reduce climbing difficulties.
For the sake of the climber's ethics, within the studied model, we assume that the climbing route is marked out with such precision that it leaves only minimal freedom: the choice of climbing grips.
}
Rock climbing is a process that bears many similarities to surmounting a potential barrier in noise-driven systems with stochastic resetting.
Following these links, we reinterpret the problem of optimal rope length \bdt{for motions along fixed paths}.
We assume that the climber acts as a noise driven particle while the rope assures that during the climbing a climber cannot come down below the beginning of a pitch.
Therefore, the beginning of each pitch serves as a reflecting boundary or a point to which a climber can be ``reset'' after a mistake.
\bdt{This is consistent with the free climbing ethics, which requires a return to the starting point of a pitch after any strain on the belay system (i.e., falling off, hanging on the rope, catching a hook or a loop).}
The whole climbing route is divided into multiple pitches, and each of them needs to be surmounted.
In such a scenario one can ask several questions: Is there an optimal rope length which assures the minimal climbing time? Does the optimal rope length depend on the climber's skills?
Using the concept of the mean first passage time, we optimize the total climbing time with respect to the rope length.
We demonstrate that typically there exists an optimal rope length, which is an increasing function of the climber's skills.
Consequently, more experienced climbers can use longer ropes and divide the whole route into a smaller number of longer pitches.
\bdt{
Hiking and rock climbing are inevitably connected with estimating the time needed to complete the route.
For hiking this time can be predicted using, for example, Naismith's rule \cite{naismith1892excursions}, which allots 12 minutes per 1 km (on the map) plus an additional 10 minutes for every 100 m of ascent.
It is used, along with its extensions, in mountain guidebooks.
Naismith's rule not only gives a reasonable estimate of the hiking time but also shows that, besides the distance, the slope matters.
Naismith's rule can be further modified \cite{langmuir1969mountain,irmischer2018measuring}.
For longer routes one needs to incorporate the fatigue factor, which is introduced by Tranter's correction.
Therefore, for sufficiently long routes, especially on inclined surfaces, the time is no longer proportional to the distance.
The fatigue factor is also observed in other activities, e.g., running, as the time to finish a marathon is not equal to twice the time to complete a half-marathon.
Typically this ratio is a couple of percent larger than two.
An estimate is provided by Riegel's formula \cite{riegel1997time,blythe2016prediction}, which states that the time to complete a marathon is $2^{1.06}\approx 2.085$ times the time to finish a half-marathon \footnote{Using data for the 18. PZU Cracovia Marathon (https://wyniki.datasport.pl/results2888/), on average, we get 2.09.}.
Consequently, already for movement along flat comfortable surfaces the time to complete the route does not need to be a linear function of the distance.
The situation gets more difficult on horizontal surfaces, which are not fully flat.
In that context, we can think for example about walking through an ice field that may be crisscrossed by crevasses.
The climber does not enter the crevasses (thus potential energy is constant), but must jump over them, overcome them with the help of a ladder or even bypass them.
Consequently, the motion is significantly disturbed and the mountaineer does not have to move at a constant speed.
The fatigue factor can be quite large and increases in time.
Thus, the time of motion does not need to scale linearly with the distance.
}
\bdt{
In contrast to hiking, which actually is a long, brisk walk, climbing is an activity in which hands, feet and other parts of the body are used to ascend steep objects.
During hiking the difficulty comes from the distance and the ascent. During climbing, in addition to length and ascent, the difficulty (grade) of the climbing route is an important factor.
The difficulty is further amplified by tiredness, difficulties with proper regeneration \cite{heyman2009effects} and energy needs \cite{norman2004running}.
Consequently, the relation between time and distance gets more complex.
Moreover, in hiking, chances of failure are smaller, and after a failure hikers typically do not make a second attempt right away, while re-attempting is common among climbers.
Therefore, overall, we consider Brownian motion with restarting as a proxy for describing climbing, especially on multi-pitch routes.
}
Considerations similar to those described in the context of rock climbing can be carried out for Himalayan mountaineering, especially in winter.
The risk of having to turn back between camps due to random factors or components independent of the climber (such as the weather) increases with the distance between camps and decreases with the climber's experience.
In the event of a withdrawal between bivouacs, the climber returns to the previous camp. Having more experience allows climbers to reduce the number of bivouacs by increasing the distance between them.
There is an optimal distance between the camps that allows climbers to overcome the wall in the shortest time.
Therefore, the described model should be treated as a universal model of climbing.
Finally, in the context of rock climbing and mountaineering one should mention Marian Smoluchowski for his contribution not only to the theory of Brownian motion \cite{smoluchowski1906b2,smoluchowski1916} and stochastic processes \cite{ulam1957marian} but also for his achievements in mountain climbing \cite{fulinski1998apb}.
\section{Model}
The free climbing model is based on the overdamped Langevin equation \cite{risken1996fokker,gardiner2009}
\begin{equation}
\frac{dx}{dt} = -V'(x) + \sigma \xi(t),
\label{eq:langevin}
\end{equation}
which describes the 1D noise driven motion in the external potential $V(x)$. In Eq.~(\ref{eq:langevin}), $\xi(t)$ stands for the Gaussian white noise satisfying $\langle \xi(t) \rangle=0$ and $\langle \xi(t) \xi(s) \rangle=\delta(t-s)$.
For a stochastic process one can study the first passage time problems \cite{gardiner2009}.
For a particle starting at $x=x_0$, the mean exit time (mean first passage time) $\mathcal{T}(x_0)$ from the bounded domain restricted by the reflecting boundary at $x=A$ and absorbing boundary at $x=B$ ($A<B$) \cite{gardiner2009} reads
\begin{equation}
\mathcal{T}(x_0 \to B) = \frac{2}{\sigma^2}\int_{x_0}^B dy \exp\left[ \frac{2V(y)}{\sigma^2} \right]
\int_{A}^{y} dz \exp\left[ -\frac{2V(z)}{\sigma^2} \right].
\label{eq:mfpt}
\end{equation}
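For concreteness, Eq.~(\ref{eq:mfpt}) can be evaluated numerically for an arbitrary smooth potential. The following sketch (plain Python with a midpoint rule; the function names and grid size are our own illustrative choices, not part of the model) reproduces, for $V(x)=0$ and $A=0$, the closed form $(B-x_0)(B+x_0)/\sigma^2$ derived below.

```python
import math

def mfpt(V, x0, A, B, sigma, n=400):
    """Mean first passage time of Eq. (eq:mfpt): reflecting boundary at A,
    absorbing boundary at B, start at x0; nested midpoint quadrature."""
    s2 = sigma ** 2

    def inner(y):
        # integral of exp(-2 V(z)/sigma^2) over [A, y]
        h = (y - A) / n
        return h * sum(math.exp(-2.0 * V(A + (i + 0.5) * h) / s2) for i in range(n))

    h = (B - x0) / n
    outer = sum(math.exp(2.0 * V(x0 + (i + 0.5) * h) / s2) * inner(x0 + (i + 0.5) * h)
                for i in range(n))
    return 2.0 / s2 * h * outer

# Free motion, V(x) = 0: the quadrature matches (B - x0)(B + x0)/sigma^2
flat = mfpt(lambda x: 0.0, x0=0.0, A=0.0, B=1.0, sigma=1.0)  # close to 1.0
```

For a tilted potential the same routine applies unchanged; it also makes visible that the MFPT decreases with growing $\sigma$.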
The problem of optimization of climbing time will be explored by analyzing the mean first passage time in systems with \bdt{moving} boundaries.
In other words, if the climber finishes a pitch, reflecting and absorbing boundaries are moved to new positions.
\bdt{The reflecting boundary is placed at the end of the completed pitch, while the absorbing boundary is moved to the most distant (uphill) point which can be reached with the rope in use.}
The heart of the theoretical framework is provided by Eq.~(\ref{eq:mfpt}) as it allows us to calculate the average time to make a pitch.
\begin{figure}[!h]%
\centering
\begin{tabular}{c}
\includegraphics[angle=0,width=0.7\columnwidth]{potential.pdf} \\
\end{tabular}
\caption{A sample potential profile $V(x)$ representing the climbing route. $x_i$ and $x_f$ stand for starting (reflecting) and final (absorbing) positions.
The path is characterized by the width (distance between boundaries) $\Delta=x_f-x_i$, height difference $H=V(x_f)-V(x_i)$ and the total length $L=\int_{x_i}^{x_f}\sqrt{1+[V'(x)]^2}dx$.
During climbing the whole path is divided into pitches of a fixed length (\bdt{measured along the slope}) determined by the rope length $l$.
}
\label{fig:potential}
\end{figure}
For the fixed potential $V(x)$, using initial $x_i$ and final $x_f$ positions, see Fig.~\ref{fig:potential}, from Eq.~(\ref{eq:mfpt}) one can calculate the mean time which is required to climb any point, e.g., the top (maximum of the potential).
The path is characterized by its total length $L=\int_{x_i}^{x_f} \sqrt{1+[V'(x)]^2}dx$, width $\Delta = x_f-x_i$ and height difference $H=V(x_f)-V(x_i)$.
In real situations, $V(x)$ is typically a non-decreasing function on $[x_i,x_f]$, i.e., $V'(x) \geqslant 0$ for $x\in [x_i,x_f]$.
\bdt{Nevertheless, it might also happen that $V(x)$ decreases locally.
In such a case the climber can abseil, descend free solo or downclimb.
Among these options abseiling is the fastest, while the two other scenarios are disproportionately long relative to the MFPT predictions.
Consequently, a locally decreasing $V(x)$ cannot be easily incorporated into the studied model and we leave this issue for further studies.
}
During climbing the whole path is divided into multiple pitches, which are assumed to be of a fixed (\bdt{measured along the slope}) length determined by the rope length $l$.
Only the last pitch can be shorter than the remaining ones.
The process of surmounting the potential in the presence of reflecting and absorbing boundaries resembles climbing.
The first reflecting boundary ($x_i$) is the base for attacking the wall, while the ultimate absorbing boundary is the point to be reached ($x_f$).
The MFPT calculated directly from Eq.~(\ref{eq:mfpt}) assumes that the whole potential barrier is traversed at once.
Nevertheless, for each trajectory, (multiple) returns to the base point $x_i$ and \bdt{revisits of other points} are allowed, \bdt{as stochastic trajectories meander between the absorbing and reflecting boundaries}.
In free climbing, climbers use climbing equipment, e.g., ropes, to protect against injury during falls and to limit the length of segments (pitches).
Ends of successfully passed segments (determined by the full rope length $l$) are used as new starting points, which can be considered as further reflecting boundaries.
In the course of time, the reflecting boundary shifts upwards in discrete steps; consequently, after an unsuccessful attempt the climber starts not from the base point $x_i$ but from the final point of the previous segment.
This effect is an analog of stochastic resetting \cite{evans2011diffusion,evans2011diffusion-jpa,evans2020stochastic}, but the restart (\bdt{to the end of the last completed pitch}) takes place after each mistake.
The mechanism of \bdt{moving} reflecting and absorbing boundaries significantly changes the barrier surmounting process.
In this context, one can ask the question what is the optimal rope length $l$ which gives the smallest time to reach the final point, e.g., the top of the mountain $x_f$, see Fig.~\ref{fig:potential}.
The optimal length $l$ minimizes the full time needed to reach the top of the wall, which consists of the climbing time (including the time needed to secure the rope) and a constant time needed to prepare the starting point of every pitch.
The whole route is divided into many segments (pitches).
Each segment starts with a reflecting boundary ($A$) and ends with the absorbing one ($B$).
We assume that time needed to pass each segment is the sum of the climbing time given by Eq.~(\ref{eq:mfpt}) and time to prepare every pitch $\kappa$.
For simplicity, it is assumed that $\kappa$ is a constant and independent of the rope length $l$.
Nevertheless, we have also checked other options, which have not changed the results qualitatively, confirming the generality of the drawn conclusions.
Finally, the MFPT depends on the system temperature, i.e., $\sigma=\sqrt{2T}$.
The $\sigma$ parameter in Eq.~(\ref{eq:langevin}) measures the climber's skills.
The larger $\sigma$, the better the climber, i.e., the MFPT decays with increasing $\sigma$, see Eq.~(\ref{eq:mfpt}).
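The model can also be explored by direct simulation. The sketch below (plain Python, Euler--Maruyama discretization of Eq.~(\ref{eq:langevin}); the function name, step size and reflection scheme are our own choices) simulates a single climb of $[x_i,x_f]$ divided into pitches of length $l$, with a reflecting boundary at the start of the current pitch and an extra preparation time $\kappa$ per pitch.

```python
import math, random

def climb_time(Vprime, x_i, x_f, l, sigma, kappa, dt=1e-3, rng=random):
    """One realization of a climb: within each pitch the climber follows
    dx = -V'(x) dt + sigma dW, reflected at the pitch start; reaching the
    pitch end costs an extra setup time kappa. Returns the total time."""
    t, start = 0.0, x_i
    while start < x_f:
        end = min(start + l, x_f)      # only the last pitch may be shorter
        x = start
        while x < end:
            x += -Vprime(x) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
            if x < start:              # reflection at the pitch start
                x = 2.0 * start - x
            t += dt
        t += kappa                     # time to prepare the next pitch
        start = end
    return t
```

Averaging many such runs for $V'(x)=0$ reproduces the flat-wall prediction of the next section up to the usual first-passage discretization bias of order $\sqrt{dt}$.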
\section{Results \& Discussion}
We use fully solvable cases of free motion ($V(x)=0$) and constant slope ($V(x) \propto x$) which are already capable of revealing general properties of the free climbing model.
Afterwards, we switch to a parabolic wall ($V(x) \propto x^2$) and discuss the general case.
\subsection{Flat horizontal wall (free motion)}
\begin{figure}[!h]%
\centering
\begin{tabular}{c}
\includegraphics[angle=0,width=0.7\columnwidth]{interval.pdf} \\
\end{tabular}
\caption{A finite interval of length $L$ and width $\Delta$ ($\Delta = L$) is divided into $\Delta/l$ segments of length $l$.
}
\label{fig:interval}
\end{figure}
We start with the simplest case of $V(x)=0$, see Fig.~\ref{fig:interval}.
In such a case the distance between boundaries ($\Delta$) and route length ($L$) are the same.
For $A=0$ Eq.~(\ref{eq:mfpt}) gives
\begin{equation}
\mathcal{T}(x_0 \to B) = \frac{(B -x_0)(B +x_0)}{\sigma^2}.
\label{eq:mfpt-flat}
\end{equation}
\bdt{As follows from Eq.~(\ref{eq:mfpt-flat}) the time needed to pass the given distance does not scale linearly with the distance.
One can think about this lack of linearity as the fatigue factor which can be further amplified by the various types of obstacles on the flat surface, e.g., on an ice field.
}
The climber starts the climbing at the reflecting boundary, i.e., $x_i=0$, and the whole interval $\Delta$ is divided into segments given by the rope length $l$.
Each segment can be passed in the same, fixed time $\mathcal{T}(0 \to l)+\kappa$, i.e.,
$\frac{l^2}{\sigma^2} + \kappa$, because in the absence of the deterministic force all pitches are the same.
In the model (Model~A), the time needed to pass the whole route $\Delta$ is equal to
\begin{equation}
\mathcal{T} = \frac{\Delta}{l} \left[ \frac{l^2}{\sigma^2} + \kappa \right].
\label{eq:free-cont}
\end{equation}
\bdt{The Model~A, defined by Eq.~(\ref{eq:free-cont}), can be also called the fractional/partial model, because it incorporates only a fraction of time $\kappa$ to prepare the last pitch.
}
The time given by Eq.~(\ref{eq:free-cont}) is minimal for
\begin{equation}
l_{\mathrm{opt}}=\min\{\sigma \sqrt{\kappa} , \Delta\}.
\end{equation}
If the rope length is smaller than the interval width, i.e., $l_{\mathrm{opt}}= \sigma \sqrt{\kappa} < \Delta$, the mean exit time is equal to
\begin{equation}
\mathcal{T}_{\mathrm{min}}= \frac{2 \sqrt{\kappa } \Delta }{\sigma }.
\end{equation}
Otherwise, optimal rope length is $l_{\mathrm{opt}}=\Delta$ and $ \mathcal{T}_{\mathrm{min}}=\frac{\Delta^2}{\sigma^2}+\kappa$.
For $l_{\mathrm{opt}}=\Delta$, a further increase in the rope length does not change the climbing time.
The mean exit time grows with increasing $L$ ($\Delta$) (longer overall distance), increasing securing time $\kappa$ (longer breaks) and decreasing $\sigma$ (decreasing climbers' skills).
In the limit of $\kappa\to 0$, the climbing time vanishes because the time to pass each segment tends to zero as $l^2$, while the number of segments grows like $1/l$, making the product behave as $l$.
\bdt{In this very special, artificial limit, both the climbing time and the optimal rope length tend to 0.
Consequently, the model reveals unphysical behavior as the velocity grows to infinity, i.e., it exceeds the speed of light ($v \gg c$).
On the one hand, this paradox is only apparent, because of the unrealistic assumption $\kappa\to0$, meaning that a climber can start a new pitch in zero time.
On the other hand, for finite $\kappa$ and rope length $l\to 0$ the climbing time diverges, as the climber needs to start an infinite number of pitches.
Nevertheless, already the $l\to 0$ limit itself is problematic, because the climber then moves forward only due to the shifting of the reflecting boundary.
}
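The optimization of Eq.~(\ref{eq:free-cont}) is simple enough to check numerically; below is a minimal sketch (plain Python; the function names are our own choices).

```python
import math

def model_a_time(l, delta, sigma, kappa):
    """Total climbing time of Model A, Eq. (eq:free-cont)."""
    return (delta / l) * (l ** 2 / sigma ** 2 + kappa)

def optimal_rope(delta, sigma, kappa):
    """Optimal rope length l_opt = min(sigma * sqrt(kappa), delta)."""
    return min(sigma * math.sqrt(kappa), delta)
```

For $\Delta=2$, $\sigma=1$, $\kappa=1$ this gives $l_{\mathrm{opt}}=1$ and $\mathcal{T}_{\mathrm{min}}=4=2\sqrt{\kappa}\Delta/\sigma$, and a scan over $l\in(0,\Delta]$ confirms that no other rope length does better.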
\begin{figure}[!h]%
\centering
\begin{tabular}{c}
\includegraphics[angle=0,width=0.98\columnwidth]{x0.pdf} \\
\end{tabular}
\caption{(Color online): The dependence of the MFPT on the rope length $l$ for a free particle with $\Delta=1$ and $\Delta = 2$ for $\kappa=1$.
Solid lines show results for the \bdt{Model~A (fractional/partial)}, see Eq.~(\ref{eq:free-cont}), while points to the \bdt{Model~B (discrete)}, see Eq.~(\ref{eq:free-disc}).
}
\label{fig:free}
\end{figure}
Formally one should consider the discrete (Model~B) version of the \bdt{fractional/partial} model, see Eq.~(\ref{eq:free-cont}), for which
\begin{equation}
\mathcal{T} = \left\lfloor \frac{\Delta}{l} \right\rfloor \left[ \frac{l^2}{\sigma^2} + \kappa \right] + \frac{\Lambda^2}{\sigma^2} + \kappa,
\label{eq:free-disc}
\end{equation}
where $\Lambda=\Delta-\lfloor \Delta/l \rfloor l$ is the length of the last pitch for which the whole time $\kappa$ has to be added to the overall climbing time.
In Eq.~(\ref{eq:free-disc}), $\lfloor \Delta/l \rfloor$ denotes the floor function, i.e., the greatest integer less than or equal to $\Delta/l$; it counts the pitches whose length equals the full rope length $l$.
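As a cross-check, Eq.~(\ref{eq:free-disc}) is straightforward to evaluate numerically. The sketch below (function and variable names are ours, not part of the paper's code) computes the Model~B climbing time for given $l$, $\Delta$, $\sigma$ and $\kappa$:

```python
import numpy as np

def mfpt_free_discrete(l, delta, sigma, kappa):
    """Climbing time of the discrete model, Eq. (free-disc):
    floor(delta/l) full pitches of length l, plus the remaining last pitch."""
    n_full = np.floor(delta / l)          # number of full-length pitches
    lam = delta - n_full * l              # length Lambda of the last pitch
    return n_full * (l**2 / sigma**2 + kappa) + lam**2 / sigma**2 + kappa

# Favorable rope lengths l = delta/n (n integer) give the local minima discussed above.
delta, sigma, kappa = 1.0, 1.0, 1.0
times = {n: mfpt_free_discrete(delta / n, delta, sigma, kappa) for n in (1, 2, 3)}
```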
In Fig.~\ref{fig:free} MFPTs as a function of rope length $l$ for various $\Delta$ and $\sigma$ are presented.
Solid lines represent the \bdt{fractional/partial} model, see Eq.~(\ref{eq:free-cont}) while points correspond to Eq.~(\ref{eq:free-disc}).
The main difference between \bdt{fractional/partial} and discrete models comes from the last pitch.
In both cases the last pitch is climbed in the same time, but in the \bdt{fractional/partial} model only a fraction of $\kappa$, proportional to the length of the last pitch, is added to the overall climbing time.
From the discrete model it is clearly visible that there is no benefit in increasing the rope length if it does not reduce the number of pitches. Therefore, the considered rope lengths should satisfy $l=\Delta/n$, where $n \in \mathbb{N}$.
For intermediate rope lengths the number of pitches is unaffected, but the total passing time is increased, because pitches are longer.
For very experienced climbers, the optimal rope length $l$ can turn out to be equal to the route length $L$.
Finally, the climbing time can be decreased by giving up the rope, i.e., in the free solo climbing.
An interesting situation is observed for the \bdt{fractional/partial} model.
For such a model the time needed to pass the whole route is not only a non-monotonic function of the rope length, but also merely piecewise smooth. This is especially well visible for $\Delta=2$ with $\sigma=1$, where the same MFPT can be recorded for three distinct values of the rope length.
Moreover, at favorable rope lengths, i.e., for $l=L/n$, the \bdt{fractional/partial} and the discrete models are equivalent, see Eqs.~(\ref{eq:free-cont}) and (\ref{eq:free-disc}).
\subsection{Fixed (linear) slope (linear potential)}
\begin{figure}[!h]%
\centering
\begin{tabular}{c}
\includegraphics[angle=0,width=0.7\columnwidth]{slope.pdf} \\
\end{tabular}
\caption{The fixed linear slope $V(x)=\tan(\alpha) x$ of total length $L=\Delta/\cos\alpha=H/\sin\alpha$ is divided into $(\Delta/\cos\alpha) / l$ segments of length $l$.
The slope of the potential is determined by the route parameters $\Delta$ and $H$, i.e., $\tan\alpha={H}/{\Delta}$.
}
\label{fig:slope-setup}
\end{figure}
The next fully tractable case is $V(x)=\tan(\alpha) x$, which corresponds to the fixed linear slope with $\tan{\alpha}=H/\Delta$, see Fig.~\ref{fig:slope-setup}.
For $A=0$, Eq.~(\ref{eq:mfpt}) gives the climbing time
\begin{equation}
\mathcal{T}(x_0 \to B) = \frac{\sigma ^2 e^{\frac{2 B \tan\alpha}{\sigma ^2}}}{2 \tan^2\alpha}-\frac{B}{\tan\alpha}-\frac{\sigma ^2 e^{\frac{2 x_0 \tan\alpha}{\sigma ^2}}}{2 \tan^2\alpha}+\frac{x_0}{\tan\alpha}
\end{equation}
which for $x_0=0$ reduces to
\begin{equation}
\mathcal{T}(0 \to B) = \frac{\sigma ^2 e^{\frac{2 B \tan\alpha}{\sigma ^2}}}{2 \tan^2\alpha}-\frac{B}{\tan\alpha}-\frac{\sigma ^2}{2 \tan^2\alpha}.
\label{eq:t0slope}
\end{equation}
For each segment $B$ is determined by the rope length~$l$, i.e.,
$
B={l}{\cos\alpha},
$
where $\alpha$ is the angle characterizing the slope ($\tan \alpha=H/\Delta$).
Analogously to the case $V(x)=0$, the time to climb each segment is constant because of the constant slope (constant force).
In the \bdt{fractional/partial} model, the time needed to climb to the top is equal to
\begin{equation}
\mathcal{T}=\frac{\frac{\Delta}{\cos\alpha }}{l} \left[ \mathcal{T}(0 \to l\cos\alpha) + \kappa \right],
\label{eq:slope-cont}
\end{equation}
where $\mathcal{T}(0 \to l\cos\alpha)$ is given by Eq.~(\ref{eq:t0slope}) with $B=l\cos\alpha$.
For ropes longer than the slope length, $l>L=\Delta/\cos\alpha = H/\sin\alpha$, the climbing time is equal to $\mathcal{T}(0\to\Delta)+\kappa$, where $\mathcal{T}(0\to\Delta)$ is given by Eq.~(\ref{eq:t0slope}) with $B=\Delta$.
In the limit of $\alpha\to 0$ the free motion, already considered in Fig.~\ref{fig:free}, is recovered.
As for the free walker, also for the fixed slope one should consider the discretized version
\begin{equation}
\mathcal{T}=\left\lfloor\frac{\frac{\Delta}{\cos\alpha }}{l}\right\rfloor \left[ \mathcal{T}(0 \to l\cos\alpha) + \kappa \right] + \mathcal{T}(0\to\Lambda\cos\alpha)+\kappa,
\label{eq:slope-disc}
\end{equation}
where $\Lambda=\Delta/\cos\alpha-\lfloor \Delta/(l\cos\alpha) \rfloor l$ is the length of the last pitch.
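The slope formulas can be evaluated in the same way as for free motion. The following sketch (helper names are ours) implements the single-pitch time of Eq.~(\ref{eq:t0slope}) and the discrete total of Eq.~(\ref{eq:slope-disc}); in the limit $\alpha\to 0$ the single-pitch time reduces to the free-particle value $B^2/\sigma^2$:

```python
import numpy as np

def t_pitch(B, alpha, sigma):
    """MFPT over one pitch of horizontal extent B on the slope, Eq. (t0slope)."""
    t = np.tan(alpha)
    return (sigma**2 * np.exp(2.0 * B * t / sigma**2) / (2.0 * t**2)
            - B / t - sigma**2 / (2.0 * t**2))

def mfpt_slope_discrete(l, delta, alpha, sigma, kappa):
    """Total climbing time of the discrete model on a fixed slope, Eq. (slope-disc)."""
    L = delta / np.cos(alpha)            # route length along the slope
    n_full = np.floor(L / l)             # number of full-length pitches
    lam = L - n_full * l                 # last-pitch length along the slope
    return (n_full * (t_pitch(l * np.cos(alpha), alpha, sigma) + kappa)
            + t_pitch(lam * np.cos(alpha), alpha, sigma) + kappa)
```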
For the constant slope, as for free motion, there is a discrete spectrum of favorable rope lengths $l=L/n$ ($n\in\mathbb{N}$).
Nevertheless, this time the local minima of the MFPT are more pronounced than for the free motion.
Fig.~\ref{fig:slope} depicts results for $\Delta=1$ with the fixed slope of $\alpha=\pi/4$ ($L=\Delta/\cos\alpha=\sqrt{2}$) and $\alpha=4\pi/9$ ($L=\Delta/\cos\alpha \approx 5.75$).
With increasing securing time $\kappa$, the optimal rope length $l_{\mathrm{opt}}$ (as long as it is smaller than the route length) increases, because the $\kappa/l$ term moves the optimum to the right.
For growing $\alpha$ (with fixed $\Delta$), both the route length $L$ and the optimal rope length $l_{\mathrm{opt}}$ grow.
\begin{figure}[!h]%
\centering
\begin{tabular}{c}
\includegraphics[angle=0,width=0.98\columnwidth]{x1.pdf} \\
\end{tabular}
\caption{(Color online): The dependence of the MFPT on the rope length $l$ for fixed slope with $\Delta=1$ and $\kappa=1$.
Various groups of curves correspond to various slopes of $\pi/4$ and $4\pi/9$.
Solid lines show results for the \bdt{Model~A (fractional/partial)}, see Eq.~(\ref{eq:slope-cont}), while points for the \bdt{Model~B (discrete)}, see Eq.~(\ref{eq:slope-disc}).
}
\label{fig:slope}
\end{figure}
\subsection{General walls (parabolic and more general potentials)}
For general potentials $V(x)$ the MFPT can be calculated by Eq.~(\ref{eq:mfpt}).
The calculation of the MFPT corresponding to a given rope length $l$ is more complex, as the widths and heights of the segments are determined by the rope length.
In particular, knowing the starting point $a$ and the rope length $l$ one can find the intermediate point $b$ to be reached from the following constraint
\begin{equation}
l=\int_{a}^b \sqrt{1+[V'(x)]^2}dx.
\end{equation}
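For a concrete illustration of this constraint, the sketch below (a hypothetical helper, not part of the paper's pipeline) finds the segment endpoint $b$ by bisection for the parabolic wall $V(x)=x^2/2$ used below, using simple trapezoidal quadrature for the arc length:

```python
import numpy as np

def arc_length(a, b, dV, n=2000):
    """Trapezoidal estimate of l = int_a^b sqrt(1 + V'(x)^2) dx."""
    x = np.linspace(a, b, n)
    y = np.sqrt(1.0 + dV(x) ** 2)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def next_anchor(a, l, dV, tol=1e-10):
    """Find b > a such that the pitch from a to b consumes the full rope length l."""
    lo, hi = a, a + l                 # arc length >= horizontal extent, so b <= a + l
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if arc_length(a, mid, dV) < l:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Parabolic wall V(x) = x**2/2, so V'(x) = x; horizontal extent of the first
# pitch for rope length l = 1 starting at a = 0.
b = next_anchor(0.0, 1.0, lambda x: x)
```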
For the general potential $V(x)$, the very same resonant effect described above is also recorded.
Approximately, the MFPT from $a$ to $b$ is proportional to $\exp[h/\sigma^2]$ with $h=V(b)-V(a)\approx (b-a) V'(a)$ \bdt{being the barrier height between $a$ and $b$}.
Therefore, the MFPT is polynomial in $(b-a)$, and the total MFPT can be optimized with respect to $l$ in an analogous way as for the free particle and the fixed slope.
For instance, Fig.~\ref{fig:x2} presents results for $V(x)=x^2/2$ with various $\Delta$ and $\sigma$.
For $\Delta=2$ the route length is $L\approx 2.96$, while for $\Delta=5$ it is $L\approx 13.9$.
Solid lines correspond to the \bdt{fractional/partial} model while points to the discrete model.
For $l=L/n$ ($n\in\mathbb{N}$) both models are equivalent.
\begin{figure}[!h]%
\centering
\begin{tabular}{c}
\includegraphics[angle=0,width=0.98\columnwidth]{x2-xm2.pdf} \\
\includegraphics[angle=0,width=0.98\columnwidth]{x2-xm5.pdf} \\
\end{tabular}
\caption{(Color online): The dependence of the MFPT on the rope length $l$ for parabolic potential $V(x)=x^2/2$ with $\kappa=1$ and various widths $\Delta$.
Solid lines show results for the \bdt{Model~A (fractional/partial)}, while points for the \bdt{Model~B (discrete)}.
}
\label{fig:x2}
\end{figure}
Finally, in a more general setting one can relax the assumption that the time $\tau$ needed to start each pitch is constant.
If $\tau = \kappa + f(l)$, where $f(l)$ is an increasing function of the rope length, the comb-like parts of the MFPT curves move up, making minima more pronounced.
The constant part of the securing time $\kappa$ assures that there exists the optimal rope length giving rise to the minimal total climbing time, while $f(l)$ makes minima corresponding to favorable rope lengths deeper.
\section{Final remarks}
The theory of stochastic processes is a widely accepted part of statistical physics, typically used for the examination of multiple physical models under the action of noise.
Here, we have demonstrated that the very same tools can be effectively used to describe the process of rock climbing.
During rock climbing, long climbing routes are divided into smaller pitches (segments).
Every pitch acts as a segment restricted by a reflecting boundary (beginning of the pitch) and an absorbing boundary (end of the pitch).
The pitch length cannot be longer than the rope length, but here we have additionally assumed that all pitches (except the last one) are of the same fixed length given by the rope length.
The time needed to pass the whole route can be calculated using the mean first passage time formalism.
The minimal passing time is determined by the interplay between pitch climbing time and time needed to start each pitch.
\bdt{Under the unrealistic assumption} that segments could be prepared without any time cost (penalty), the optimal rope length tends to 0.
Otherwise, due to the constant time penalty, there can exist an optimal rope length which is shorter than the route length.
The optimal rope length is selected among favorable rope lengths, i.e., $L/n$ ($n\in \mathbb{N}$).
Intermediate rope lengths do not decrease the climbing time.
Finally, experienced climbers can use longer ropes, as for longer ropes the total number of pitches is smaller.
\section*{Acknowledgements}
This research was supported in part by PLGrid Infrastructure and by the National Science Center (Poland) grant 2018/31/N/ST2/00598.
JB acknowledges financial support of the statutory research fund of ICSC PAS.
\section*{Data availability}
The data that support the findings of this study are available from the corresponding author (BD) upon reasonable request.
\def\url#1{}
\section{Introduction}
\label{sec:intro}
The Wilkinson Microwave Anisotropy Probe (\textit{WMAP}\xspace, \cite{Bennett:2013}) observed the microwave sky in five frequency bands ranging from 23 to 94 \ifmmode $\,GHz$\else \,GHz\fi\ at a resolution varying from 52\rlap{.}$^{\scriptstyle\prime}$\ to 12\rlap{.}$^{\scriptstyle\prime}$. More recently, {\it Planck\/}\ provided full-sky maps in nine frequency bands ranging from 30 \ifmmode $\,GHz$\else \,GHz\fi\ to 857 \ifmmode $\,GHz$\else \,GHz\fi\ with beam sizes ranging from 32\rlap{.}$^{\scriptstyle\prime}$\ to 5\rlap{.}$^{\scriptstyle\prime}$. The two highest-frequency channels of {\it Planck\/}\ (545 and 857 \ifmmode $\,GHz$\else \,GHz\fi) are not polarization-sensitive and are mainly designed for intensity observations. All these multi-frequency maps are a mixture of cosmological, Galactic and extra-galactic components (e.g., CMB anisotropies, thermal dust, synchrotron, spinning dust/Anomalous Microwave Emission (AME), faint/strong radio and infrared sources, thermal/kinetic Sunyaev-Zeldovich (tSZ/kSZ) effects, etc.). For polarization, however, the spectrum is less complex. The high-frequency end of the spectrum is dominated by thermal emission from Galactic dust \citep{planck-XXI:2015}. Low-frequency bands are synchrotron dominated. In addition, hints of polarized AME have been found \citep{Leitch:1997,Finkbeiner:2004}; this component, however, plays a role mainly at 10-60 \ifmmode $\,GHz$\else \,GHz\fi\ \citep{de_Oliveira-Costa:2004} and has a low polarization degree (1-2\%, \cite{Genova-Santos:2017}).
Separating the astrophysical sources is a crucial step in the scientific exploitation of such rich data. Over the past few years, the study of Galactic thermal dust and synchrotron has become tied up with observational cosmology \citep{Hazumi:2019, SO:2020, CMB-S4:2016, PICO:2019}, which is searching for the primordial B-mode polarization of the CMB, a proof of the epoch of inflation \citep{Guth:1981}. The reason for this entanglement is that the expected B-mode signal imprinted in the CMB by primordial gravitational waves during inflation is highly obscured by the polarized Galactic emission of thermal dust and synchrotron \citep{planck-I:2020}. The level of contamination depends on the energy scale of inflation \citep{Knox:2002}. Therefore, separated foreground maps will help in building accurate thermal dust and synchrotron polarization models \citep{T_Ghosh:2017, Adak:2019, Guillet:2018, Regaldo:2020, Clark:2019, Fauvet:2011}. Furthermore, the component maps will help in a detailed understanding of thermal dust and synchrotron emission, the Galactic magnetic field, Galactic astrophysics, etc.
Several component separation methods have been developed over the past decades to clean the CMB signal from foregrounds, systematic effects and extra-galactic emissions. For intensity data, the widely used techniques in the \textit{WMAP}\xspace\ and {\it Planck\/}\ missions are ILC \citep{Tegmark:1997}, \ensuremath{\tt SMICA}\ \citep{Delabrouille:2003}, \ensuremath{\tt Commander}\ \citep{Eriksen:2008}, \ensuremath{\tt NILC}\ \citep{Basak:2011}, \ensuremath{\tt SEVEM}\ \citep{Fernandez-Cobos:2012}, \ensuremath{\tt SILC}\ \citep{SILC:2016}, \ensuremath{\tt L-GMCA}\ \citep{LGMCA:2014} and many more, all of which clean the CMB temperature from other contamination. Among these methods, \ensuremath{\tt Commander}\ is a Bayesian fitting technique that can provide all astrophysical foreground maps along with the cleaned CMB map. A generalized version of the Needlet ILC, called \ensuremath{\tt GNILC}\ \citep{planck-XLVIII:2016}, estimates the thermal dust maps by disentangling them from the other Galactic foregrounds and the Cosmic Infrared Background emission. Not all of the methods mentioned above provide foreground polarization maps; only updated versions of \ensuremath{\tt SMICA}, \ensuremath{\tt Commander}\ and \ensuremath{\tt GNILC}\ provide polarized thermal dust and synchrotron maps.
Our interest lies in applying the ILC method to separate thermal dust and synchrotron polarization templates using multi-frequency data. The standard ILC method is extensively used to recover CMB temperature maps by a weighted sum of multi-frequency data \citep{Tegmark:1997, Basak:2011, Eriksen:2004}. This paper presents another application of ILC, aiming to estimate the foreground signals for which the electromagnetic spectrum is known. The simplicity of ILC lies in the fact that it does not assume anything about the model of the components. ILC estimates the weights by minimizing the variance of the resulting map. The minimization is generally done either in pixel space \citep{Tegmark:1997} or in harmonic space \citep{Kim:2009}. This method is only applicable to spin-0 fields, whose quantities are not projected in local frames. In the case of polarization, however, we need to deal with components having polarization vectors projected in the local frame. Stokes {$Q$ } and {$U$ } are not projected in a global reference frame like temperature; the mean and variance of the individual spinorial components, therefore, are not defined, and a natural extension of ILC to the individual {$Q$ }, {$U$ } fields is not possible. A straightforward way to apply a similar version of the ILC method to polarization data is to work on E- and B-mode maps \citep{Basak:2013}. However, in a real scenario only partial-sky polarization data are commonly available, and decomposing them into E- and B-maps is not a trivial task. \cite{PILC:2016} developed an algorithm generalizing the standard ILC method to {$Q$ } $\pm$ i{$U$ }, called polarization ILC (PILC). Although {$Q$ } $\pm$ i{$U$ } transforms like a spin-2 variable \citep{Hu_and_White:1997}, since the PILC approach is based on the minimization of a covariant quantity, it preserves the coherence of the spinorial description. The performance of the standard ILC has limitations.
It assumes that all components are mutually uncorrelated, whereas the Galactic foregrounds are not; for example, polarized thermal dust and synchrotron are found to be correlated \citep{Choi_and_Page:2015}. However, adding multiple constraints to reduce the contamination from other astrophysical components can significantly improve the standard ILC's performance. This method is called constrained ILC (cILC, \cite{Remazeilles:2011}). \cite{Remazeilles:2011} used this method for the simultaneous estimation of CMB and thermal Sunyaev-Zeldovich emission. \cite{Hurier_2013} presented this method in a more general form.
In this paper, we develop an algorithm combining an extended version of the PILC method for heavily constrained equations (similar to cILC) with the recently developed moment expansion method of foreground modelling of \cite{Chluba:2017}. Moment expansion is a powerful approach proposed by \cite{Chluba:2017} to describe the unknown complexity of the foregrounds due to variations of the spectral properties along the line-of-sight (LOS) inside the beam and across the sky. In short, moment expansion is a perturbative approach to foreground modelling under some assumption about the spectral energy distribution (SED) of the components. Therefore, our method is a semi-blind component separation algorithm that operates at the interface of the blind and parametric component separation methods. In the current paper, we aim to demonstrate the performance of this algorithm in the estimation of thermal dust and synchrotron {$Q$ }, {$U$ } maps at 353 \ifmmode $\,GHz$\else \,GHz\fi\ and 30 \ifmmode $\,GHz$\else \,GHz\fi\ respectively. We use three sets of \textit{WMAP}\xspace\, and {\it Planck\/}\ simulated maps with varying foreground complexity. The purpose of using different sets of simulations is to check the robustness of the algorithm independently of the complexity of the foreground model. A similar method has been applied in \cite{Remazeilles:2020} for CMB B-mode recovery, in \cite{Remazeilles_chluba:2020} for mapping the relativistic tSZ effect, and in \cite{Rotti:2020} for the recovery of the spectral distortion signal. Besides, we anticipate that a similar method can also be applicable to global 21 cm signal recovery.
The paper is organized as follows. In Sect.~\ref{sec:data}, we describe the simulated data sets and the binary mask used in the paper. Sect.~\ref{sec:method} summarizes the methods applied in the analysis. In Sect.~\ref{sec:srategy}, we explain the strategy of implementing the method discussed in Sect.~\ref{sec:method}. In Sect.~\ref{sec:opt_sim}, we discuss the main results. Finally, in Sect.~\ref{sec:conclusion}, we conclude.
\section{Data used}
\label{sec:data}
In this section, we describe the Galactic mask and simulated data used in this paper.
\subsection{Global Mask used}
\label{sec:mask}
Due to the anisotropic nature of the foreground contributions, applying the ILC method over whole-sky data is not the most efficient approach. Therefore, we use the intermediate-to-high Galactic latitude region in the analysis. We use the 78\% Galactic mask publicly available in the Planck Legacy Archive\footnote{\url{pla.esac.esa.int/pla}}. The mask is provided on the \ensuremath{\tt HEALPix}\footnote{\url{https://healpix.jpl.nasa.gov/}} \citep{Gorski:2005} grid at \ensuremath{N_{\rm side}}\ = 2048. We downgrade the mask to \ensuremath{N_{\rm side}}\ = 256. In Fig.~\ref{fig:mask}, we present the Galactic mask at \ensuremath{N_{\rm side}}\ = 256. Hereafter, we call this mask {GAL78}.
\begin{figure}
\centering
\includegraphics[width=8.4cm]{mask_gal_ns256.pdf}
\caption{The {GAL78}\ mask that comprises 78\% of the sky. The masked region is shown in grey color and the sky used for analysis in this paper is shown in red.}
\label{fig:mask}
\end{figure}
\subsection{Simulated data}
\label{sec:sim}
We use PySM\footnote{\url{https://github.com/bthorne93/PySM_public}} \citep{pysm:2017} for simulating Stokes IQU maps. We use the $K, Ka, Q, V, W$ \textit{WMAP}\xspace\ bands, all {\it Planck\/}\ Low-Frequency Instrument (LFI, \cite{Mennella:2011}) bands, and the High-Frequency Instrument (HFI, \cite{planck-IV:2011}) polarization-sensitive bands in the simulations. The maps are smoothed to a common resolution of FWHM = 1$^{\ifmmode^\circ\else$^\circ$\fi}$ and projected onto the \ensuremath{\tt HEALPix}\ grid at \ensuremath{N_{\rm side}}\ = 256. We consider CMB, thermal dust, synchrotron, AME and instrument noise in all simulations. We express the final maps in Rayleigh-Jeans (RJ) units. For the CMB, we use a realization of fully lensed maps at tensor-to-scalar ratio $r$ = 0.0. The values of the cosmological parameters are motivated by the recent {\it Planck\/}\ determined values reported in \cite{planck-VI:2018}. The \textit{WMAP}\xspace\ noise RMS values ($\sigma_0$) for polarization are 1435, 1472, 2197, 3141, 6560 $\ifmmode \,\mu$K$\else \,$\mu$\hbox{K}\fi$ at the $K, Ka, Q, V, W$ bands respectively. We compute the noise RMS at each pixel following $\sigma_{w} (p) = \sigma_0/\sqrt{N_{obs} (p)}$, where $N_{obs} (p)$ is the \textit{WMAP}\xspace\ scanning pattern at \ensuremath{N_{\rm side}}\ = 512. Finally, we simulate white-noise maps from the $\sigma_{w} (p)$ maps at \ensuremath{N_{\rm side}}\ = 512, smooth them to FWHM = 1$^{\ifmmode^\circ\else$^\circ$\fi}$ and downgrade them to \ensuremath{N_{\rm side}}\ = 256. We use FFP10 noise maps \citep{planck-x:2016} for the {\it Planck\/}\ frequencies, which are available in the PLA. We use the $\ensuremath{\tt a2}$ model (AME is denoted by $\ensuremath{\tt a}$) for simulating AME, in which a 2\% global polarization is introduced as described in \cite{pysm:2017}. We finally prepare the following three sets of simulations with different thermal dust and synchrotron models in PySM, which we describe below.
\begin{itemize}
\item
SET1: We use PySM $\ensuremath{\tt d1s1}$ model, where thermal dust and synchrotron is denoted by $\ensuremath{\tt d}$ and $\ensuremath{\tt s}$ respectively and corresponding base models are described in \cite{pysm:2017}. In $\ensuremath{\tt d1s1}$ model, PySM follow modified blackbody (MBB) for thermal dust and power-law for synchrotron. In $\ensuremath{\tt d1s1}$ model, PySM use \ensuremath{\tt Commander}\ recovered thermal dust template at 353 \ifmmode $\,GHz$\else \,GHz\fi\ \citep{planck-x:2016} and \textit{WMAP}\xspace\ 23 GHz map \citep{Bennett:2013} as synchrotron template for polarization. Thermal dust temperature and spectral index map used here is derived using \ensuremath{\tt Commander}. Synchrotron spectral index map is taken from \cite{Miville-Desch:2008}. \\
\item
SET2: We use PySM $\ensuremath{\tt d4s3}$ model. This model uses a two-component thermal dust model with the templates derived in \citep{Meisner:2014}. $\ensuremath{\tt s3}$ follows a curved power-law model with a baseline curvature value of -0.052 at 23 \ifmmode $\,GHz$\else \,GHz\fi. \\
\item
SET3: We use PySM $\ensuremath{\tt d7s1}$ model, where the thermal dust model is replaced by the dust grain characterization based model described in \cite{Brandon:2017}.
\end{itemize}
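The per-pixel white-noise recipe described above can be sketched as follows (a minimal illustration at $N_{\rm side}=64$ with a random stand-in for the scanning pattern; the actual analysis uses the \textit{WMAP}\xspace\ $N_{obs}$ maps plus beam smoothing and downgrading, done with healpy in practice):

```python
import numpy as np

rng = nprng = np.random.default_rng(0)
nside = 64                                  # illustration only; the paper works at 512
npix = 12 * nside**2                        # HEALPix pixel count
sigma_0 = 1435.0                            # WMAP K-band polarization RMS [microK]
# Stand-in for the scanning pattern N_obs(p); the real one is read from the data.
n_obs = rng.integers(200, 2000, size=npix).astype(float)
sigma_w = sigma_0 / np.sqrt(n_obs)          # per-pixel noise RMS, sigma_w = sigma_0/sqrt(N_obs)
noise_map = rng.normal(0.0, sigma_w)        # one white-noise realization
# (Smoothing to FWHM = 1 deg and downgrading to Nside = 256 are omitted here.)
```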
\section{Methods}
\label{sec:method}
\subsection{Moment expansion of foreground emissions}
Foreground emissions are thought to be a superposition of the emission from individual emitting blocks that can be characterized by varying SEDs. Therefore, when we observe the sky within some beam, line-of-sight and spatial averages over the SEDs are inevitable. These effects alter the spectral properties of the observed emissions. For example, although the spectral properties of the synchrotron emission can be described by a power-law model for individual blocks, after averaging inside the beam it is no longer a power-law \citep{Remazeilles:2020}. This effect results in frequency-frequency decorrelation. Apart from these two averaging effects, downgrading the maps to lower angular resolution also gives rise to spectral averaging.
\cite{Chluba:2017} proposed the moment expansion method, a unique approach to foreground modelling that takes all of these averaging effects into account. In this section, we briefly describe the moment expansion method of \cite{Chluba:2017} and apply it specifically to the thermal dust and synchrotron SEDs.
The Galactic foregrounds can be considered as a collection of emissions of amplitude $\delta I_{\nu}(p, s)$ from different emitting layers along each LOS, where $p$ denotes the pixel and $s$ the distance of the layer along the LOS. Let us assume that we know the form of the spectral properties $f(\nu , \boldsymbol{\beta})$ of the components, where $\boldsymbol{\beta} \equiv [{\beta}_1, {\beta}_2, .., {\beta}_n] (p, s)$ denotes the general form of the spectral parameters of the component of interest (e.g., for thermal dust, the spectral parameters are the dust temperature $T_d (p, s)$ and the spectral index $\beta_{d} (p, s)$). The spectral properties likely vary across the sky inside the instrumental beam as well as along the LOS. However, averaging along the LOS and inside the instrumental beam have physically the same effect, leading to a mixture of the SEDs of the emitting layers. Considering that there are infinitely many layers along each LOS, we can statistically model the total observed emission $I_{\nu} (p)$\footnote{Here, by $I_{\nu} (p)$, we denote Stokes $I (p)$, $Q (p)$, $U (p)$ or $E (p)$, $B (p)$ at some frequency $\nu$. Hereafter, $p$ is the central pixel of the beam.} as the overall observed amplitude $I_{\nu_{0}} (p)$ at some pivot frequency $\nu_{0}$ multiplied by the statistical average of the SEDs, along the LOS and inside the beam, $f(\nu , \boldsymbol{\beta} (p))$:
\begin{align}
\label{eq:fg}
I_{\nu} (p) = I_{\nu_{0}} (p) f(\nu , \boldsymbol{\beta} (p ))
\end{align}
As shown in \cite{Chluba:2017}, we can expand $f(\nu , \boldsymbol{\beta} (p))$ using a multi-dimensional Taylor series as\footnote{We follow the convention: ${ \partial\beta_1^{\, i} \partial\beta_2^{\, j}\cdots \partial\beta_n^{\,k} f \left(\nu, \, \overline{\boldsymbol{\beta}} (p)\right)} = {\partial^{\,i+j+\cdots+k} f(\nu, \boldsymbol{\beta})\over{\partial\beta_1^{\, i} \partial\beta_2^{\, j}\cdots \partial\beta_n^{\,k}}}\Big|_{\overline{\boldsymbol{\beta}}}$},
\begin{align}
\label{eq:f_moment_expansion}
f(\nu ,\boldsymbol{\beta}(p))&=f (\nu, \overline{\boldsymbol{\beta}})
+\sum_i (\beta_i (p) -\overline{\beta}_i) \,\partial_{{\beta}_i} f(\nu, \overline{\boldsymbol{\beta}})
\nonumber\\[-0.5mm]
&\!\!\!\!
+\frac{1}{2!}\sum_i \sum_j (\beta_i (p) -\overline{\beta}_i)(\beta_j (p) -\overline{\beta}_j) \,\partial_{{\beta}_i}\partial_{{\beta}_j} f (\nu , \overline{\boldsymbol{\beta}})
\nonumber\\[-0.5mm]
&\quad+ \ldots,
\end{align}
where $\overline{\boldsymbol{\beta}} \equiv [\overline{\beta}_1, \overline{\beta}_2, .., \overline{\beta}_n]$ is the pivot value of the SED vector.
The moment map of order $i + j + ...+ k$ is defined in \cite{Chluba:2017} as:
\begin{flalign}
\label{eq:moment}
m_{ij...k} (p)
= I_{\nu_{0}} (p){\left(\beta_1(p)-\overline{\beta}_1\right)^{i}\left(\beta_2(p)-\overline{\beta}_2\right)^{j}\cdots\left(\beta_n(p)-\overline{\beta}_n\right)^{k}\over i!j!\cdots k!}.
\end{flalign}
The beauty of this approach is that the foregrounds can be expressed in terms of spatially varying moments, each with a constant spectral shape across the sky, given by
\begin{align}
{ \partial\beta_1^{\, i} \partial\beta_2^{\, j}\cdots \partial\beta_n^{\,k} f \left(\nu, \, \overline{\boldsymbol{\beta}}\right)}.
\end{align}
One can now consider the moment maps $m_{ij...k} (p)$ as different astrophysical components of the total foreground contribution in multi-frequency data. These components can easily be incorporated in the cILC framework, which is described in Sect.~\ref{sec:cILC}.
In the present work, we consider thermal dust and synchrotron as the main polarized foreground components and apply the moment expansion to these two components, as described below.
It is widely accepted that the synchrotron emission follows a power-law in RJ units,
\begin{align}
f_{\rm sync}\left(\nu, \beta_s(p)\right) = \left({\nu \over \nu_s}\right)^{\beta_s(p)},
\end{align}
where $\beta_s(p)$ is the synchrotron spectral index map.
The thermal dust follows the MBB spectrum,
\begin{align}
f_{\rm dust}\left(\nu, \beta_d(p), T_d(p)\right) = \left({\nu \over \nu_d}\right)^{\beta_d(p)+1} {\exp\left({h\nu_d\over k_BT_d(p)}\right)-1\over \exp\left({h\nu\over k_BT_d(p)}\right)-1},
\end{align}
in RJ unit, where $\beta_d(p)$ and $T_{d} (p)$ denote dust spectral index and temperature map respectively.
Implementation of the moment expansion for synchrotron spectral parameter up to second-order yields,
\begin{align}
\label{eq:sync_moments}
I_{\rm sync, \nu}(p) &= I_{\nu_s}(p) \left( \frac{\nu}{\nu_s}\right)^{\,\overline{\beta}_s\,\left(1+\frac{\Delta \beta_s(p) }{\overline{\beta}_s}\right)}\cr
&=I_{\nu_s}(p) \bigg[ f_{\rm sync} \left(\nu,\overline{\beta}_s\right)\cr
&+ \,\Delta \beta_s(p)\,\partial_{{\beta}_s} f_{\rm sync} \left(\nu,\overline{\beta}_s\right)
\\ \nonumber
&+{1\over 2} \,\,\Delta \beta^2_s(p)\,\partial^2_{{\beta}_s} f_{\rm sync} \left(\nu,\overline{\beta}_s\right)+\cdots \bigg],
\end{align}
where $\Delta \beta_s(p)=\beta_s(p) - \overline{\beta}_s$, and
\begin{flalign}
\label{eq:sync_moments1}
f_{\rm sync} \left(\nu,\overline{\beta}_s\right) &= \left({\nu\over\nu_s}\right)^{\overline{\beta}_s},\nonumber \\
\partial_{{\beta}_s} f_{\rm sync} \left(\nu,\overline{\beta}_s\right) &= \ln\left({\nu\over\nu_s}\right)f_{\rm sync} \left(\nu,\overline{\beta}_s\right),\\
\partial^2_{{\beta}_s} f_{\rm sync} \left(\nu,\overline{\beta}_s\right) &= \left[\ln\left({\nu\over\nu_s}\right)\right]^2f_{\rm sync} \left(\nu,\overline{\beta}_s\right). \nonumber
\end{flalign}
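Equations (\ref{eq:sync_moments}) and (\ref{eq:sync_moments1}) can be verified numerically: the truncated expansion should reproduce the exact power-law up to a residual of order $[\Delta\beta_s \ln(\nu/\nu_s)]^3$. A minimal sketch (the pivot values are illustrative only, not those of the analysis):

```python
import numpy as np

def sync_sed(nu, nu_s, beta):
    """Exact synchrotron power-law SED in RJ units."""
    return (nu / nu_s) ** beta

def sync_sed_moments(nu, nu_s, beta_bar, dbeta):
    """Second-order moment expansion around the pivot index beta_bar,
    i.e. f0 * (1 + dbeta*ln(nu/nu_s) + dbeta^2*ln^2(nu/nu_s)/2)."""
    f0 = sync_sed(nu, nu_s, beta_bar)
    logr = np.log(nu / nu_s)
    return f0 * (1.0 + dbeta * logr + 0.5 * (dbeta * logr) ** 2)

nu_s, beta_bar, dbeta = 30.0, -3.0, 0.1     # pivot 30 GHz, index offset 0.1 (illustrative)
nu = np.array([23.0, 30.0, 44.0, 70.0])     # a few low-frequency channels
exact = sync_sed(nu, nu_s, beta_bar + dbeta)
approx = sync_sed_moments(nu, nu_s, beta_bar, dbeta)
```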
Similarly, for thermal dust, the moment expansion yields,
\begin{align}
\label{eq:dust_moments}
I_{\rm dust,\, \nu}(p)=\,&I_{\nu_d}(p) \bigg[ f_{\rm dust} \left(\nu,\overline{\beta}_d\right)\cr
&+\,\Delta \beta_d(p)\;\partial_{{\beta}_d} f_{\rm dust} \left(\nu,\overline{\beta}_d, \overline{T}_{\!d}\right)\cr
&+\,\Delta T_d(p)\;\partial_{{T}_d} f_{\rm dust} \left(\nu,\overline{\beta}_d, \overline{T}_{\!d}\right)\cr
&+{1\over 2}\,\,\Delta \beta^2_d(p)\;
\partial^2_{{\beta}_d} f_{\rm dust} \left(\nu,\overline{\beta}_d, \overline{T}_{\!d}\right)\cr
&+\,
\Delta \beta_d(p)
\Delta T_d(p)\;
\partial_{{\beta}_d}\partial_{{T}_d} f_{\rm dust} \left(\nu,\overline{\beta}_d, \overline{T}_{\!d}\right)\cr
&+{1\over 2}\,\,
\Delta T^2_d(p)\;
\partial^2_{{T}_d} f_{\rm dust} \left(\nu,\overline{\beta}_d, \overline{T}_{\!d}\right)\cr
&+\cdots \bigg],
\end{align}
where $\Delta \beta_d(p)=\beta_d(p) - \overline{\beta}_d$, $\Delta T_d(p)=T_d(p) - \overline{T}_d$, and
\begin{flalign}
\label{eq:dust_moments1}
&f_{\rm dust} \left(\nu,\overline{\beta}_d, \overline{T}_{\!d}\right) = \left({\nu \over \nu_d}\right)^{\overline{\beta}_d+1} {\exp\left({\overline{x}_d}\right)-1\over \exp\left({\overline{x}}\right)-1},\nonumber\\
&\partial_{{\beta}_d} f_{\rm dust} \left(\nu,\overline{\beta}_d, \overline{T}_{\!d}\right) = \ln\left({\nu\over\nu_d}\right)f_{\rm dust} \left(\nu,\overline{\beta}_d, \overline{T}_{\!d}\right),\nonumber\\
&\partial_{{T}_d} f_{\rm dust} \left(\nu,\overline{\beta}_d, \overline{T}_{\!d}\right) = {1\over\overline{T}_{\!d}}\left[{ \overline{x}\exp \left( {\overline{x}} \right) \over \exp \left( {\overline{x}} \right) - 1} - { \overline{x}_d\exp \left( {\overline{x}_d} \right) \over \exp \left( {\overline{x}_d} \right) - 1 }\right] f_{\rm dust}\left(\nu, \overline{\beta}_d, \overline{T}_{\!d}\right),\nonumber\\
&\partial^2_{{\beta}_d} f_{\rm dust} \left(\nu,\overline{\beta}_d, \overline{T}_{\!d}\right) = \left[\ln\left({\nu\over\nu_d}\right)\right]^2f_{\rm dust} \left(\nu,\overline{\beta}_d, \overline{T}_{\!d}\right),\\
&\partial^2_{{T}_d} f_{\rm dust} \left(\nu,\overline{\beta}_d, \overline{T}_{\!d}\right) = \left[\overline{x}\coth\left({\overline{x}\over 2}\right) - \overline{x}_d\coth\left({\overline{x}_d\over 2}\right)\right] {1\over \overline{T}_{\!d}}\partial_{{T}_d} f_{\rm dust} \left(\nu,\overline{\beta}_d, \overline{T}_{\!d}\right),\nonumber\\
&\partial_{{\beta}_d}\partial_{{T}_d} f_{\rm dust} \left(\nu,\overline{\beta}_d, \overline{T}_{\!d}\right) = \ln\left({\nu\over\nu_d}\right)\partial_{{T}_d} f_{\rm dust} \left(\nu,\overline{\beta}_d, \overline{T}_{\!d}\right) \nonumber
\end{flalign}
are the moment SEDs up to second order. Here, $x = {h \nu\over k_B T_d}$, $x_d = {h \nu_d\over k_B T_d}$, $\overline{x} = {h \nu\over k_B \overline{T}_{\!d}}$ and $\overline{x}_d = {h \nu_d\over k_B \overline{T}_{\!d}}$.
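For concreteness, the zeroth-order dust SED and its first $\beta_d$-derivative above can be evaluated numerically. The Python sketch below is illustrative only: it assumes frequencies in GHz, an approximate value of $h/k_B$, and the pivot values $\overline{\beta}_d = 1.53$, $\overline{T}_d = 19.4$\,K, $\nu_d = 353$\,GHz used later in this paper; the function names are ours.

```python
import numpy as np

H_OVER_K = 0.04799  # h/k_B in K/GHz (approximate)

def f_dust(nu, beta_d=1.53, T_d=19.4, nu_d=353.0):
    """Zeroth-order modified-blackbody moment SED in RJ units,
    normalized to 1 at the pivot frequency nu_d (GHz)."""
    x = H_OVER_K * nu / T_d
    x_d = H_OVER_K * nu_d / T_d
    return (nu / nu_d) ** (beta_d + 1.0) * np.expm1(x_d) / np.expm1(x)

def dbeta_f_dust(nu, beta_d=1.53, T_d=19.4, nu_d=353.0):
    """First-order moment SED with respect to beta_d:
    ln(nu/nu_d) * f_dust."""
    return np.log(nu / nu_d) * f_dust(nu, beta_d, T_d, nu_d)
```

By construction, $f_{\rm dust}$ equals one and its $\beta_d$-derivative vanishes at the pivot frequency.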
\subsection{Basics of ILC algorithm}
\label{sec:cleaning}
This section reviews the different implementations of ILC-based algorithms, which allow us to deal with fields of different spin. First, we review the implementation of the standard ILC for the temperature field (a spin-0 field) in Sect.~\ref{sec:T_ILC}. In Sect.~\ref{sec:ILC}, we describe the generalization of the standard ILC to spin-2 fields. Next, in Sect.~\ref{sec:cILC}, we review the extension of the standard ILC to a set of constraint equations, called cILC. Finally, in Sect.~\ref{sec:cMILC}, we describe the application of cILC in the moment-expansion framework used in the present paper.
\subsubsection{Temperature implementation of standard ILC}
\label{sec:T_ILC}
The total observed temperature map $T_{\nu}(p)$ at frequency $\nu$ is assumed to be a combination of all astrophysical and cosmological signals,
\begin{equation}
\label{eq:T_ilc_data}
T_{\nu} (p) = a_{\nu} S_c (p) + n_{\nu} (p),
\end{equation}
where $S_{c} (p)$ is the $c$th component, with electromagnetic spectrum $a_{\nu}$, which we assume to be constant over the sky, and $n_{\nu} (p)$ contains the rest of the components and the noise in the temperature data at frequency $\nu$. For convenience, let us rewrite Eq.~\ref{eq:T_ilc_data} in vector form for all $N_{obs}$ channels,
\begin{equation}
\label{eq:T_ilc_data_v}
\textbf{T} (p) = \boldsymbol{a}{S_c} (p) + \textbf{n} (p),
\end{equation}
where the vectors $\textbf{T} (p)$ and $\textbf{n} (p)$ contain the data and noise for all frequencies.
In the standard ILC framework, the estimated component is
\begin{equation}
\label{eq:T_weighted_sum}
\hat{S}_c (p) = \sum_{\nu} w_{\nu} T_{\nu}(p),
\end{equation}
that has minimum variance, i.e.,
\begin{equation}
\label{eq:T_variance}
\frac{\partial}{\partial \boldsymbol{w}}\boldsymbol{w^T \mathscr{C} w} = 0,
\end{equation}
where $\boldsymbol{\mathscr{C}} = \left\langle \textbf{T}\, \textbf{T}^{T} \right\rangle$ is the $N_{obs} \times N_{obs}$ covariance matrix of the temperature data maps, $\left\langle .. \right\rangle$ denotes an average over all pixels inside the region of interest, and $w_{\nu}$ is the ILC weight at frequency $\nu$.
For an unbiased estimate of $\hat{S}_c (p)$, the ILC weights $\boldsymbol{w^{T}}$ = $(w_{1}, w_2, ... w_{N_{obs}})$ must satisfy the constraint
\begin{equation}
\label{eq:T_constrain}
\boldsymbol{w^T a} = 1.
\end{equation}
Combining Eq.~\ref{eq:T_variance} and Eq.~\ref{eq:T_constrain} with a Lagrange multiplier $\lambda$, we get
\begin{equation}
\label{eq:T_lagrange}
\frac{\partial}{\partial \boldsymbol{w}} \left [\boldsymbol{w^T \mathscr{C} w} + \lambda (1 - \boldsymbol{w^T a})\right] = 0.
\end{equation}
Solving the system of equations in Eq.~\ref{eq:T_lagrange}, the ILC weights are
\begin{equation}
\label{eq:T_opt_weight}
\boldsymbol{w}^{T} = \boldsymbol{ a^{T} \mathscr{C}^{-1} (a^{T} \mathscr{C}^{-1} a)^{-1}}.
\end{equation}
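In practice, Eq.~\ref{eq:T_opt_weight} amounts to a single linear solve. A minimal NumPy sketch (the function name \texttt{ilc\_weights} is ours, not part of any published pipeline) could read:

```python
import numpy as np

def ilc_weights(a, C):
    """Standard ILC weights w = C^{-1} a / (a^T C^{-1} a), given a mixing
    vector a of shape (N_obs,) and covariance C of shape (N_obs, N_obs)."""
    Cinv_a = np.linalg.solve(C, a)   # solve instead of forming C^{-1}
    return Cinv_a / (a @ Cinv_a)
```

By construction the weights satisfy $\boldsymbol{w}^T\boldsymbol{a}=1$, and among all weight vectors obeying that constraint they minimize the variance $\boldsymbol{w}^T\boldsymbol{\mathscr{C}}\boldsymbol{w}$.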
\subsubsection{ILC in polarization}
\label{sec:ILC}
The straightforward generalization of the standard ILC to polarization is to apply the method of Sect.~\ref{sec:T_ILC} to {$E$}- and {$B$}-mode maps decomposed from the {$Q$ }, {$U$ } maps. However, decomposing {$Q$ }, {$U$ } maps into {$E$}- and {$B$}-mode maps over an incomplete sky is not a trivial task, because some amount of {$E$}-mode power leaks into the {$B$}-mode maps during the decomposition. \cite{PILC:2016} generalized the standard ILC to the $P_{\nu}^{\pm}(p) = Q_{\nu} (p) \pm iU_{\nu}(p)$ maps, which transform like a spin-2 field. In this section, we briefly review this technique. The $P_{\nu}^{\pm}(p)$ map at frequency $\nu$ can be written as a sum of component maps,
\begin{equation}
\label{eq:ilc_data}
P_{\nu}^{\pm} (p) = \sum_{c = 1}^{N_{c}} {A}_{\nu c} P_{c}^{\pm} (p) + N_{\nu}^{\pm} (p),
\end{equation}
where $P_c^{\pm}(p) = Q_c (p) \pm iU_c (p)$ is the spin-2 quantity of an individual component, with $Q_c (p)$, $U_c (p)$ the Stokes {$Q$ }, {$U$ } maps of that component; $A_{\nu c}$ is a coefficient of the \textit{mixing matrix} $\textbf{A}$; $N_{\nu}^{\pm}(p) = Q_n (p) \pm iU_n (p)$ is the spin-2 field of the instrument noise at frequency $\nu$; and $N_c$ is the number of components present in the data.
Assuming the mixing matrix is constant across the sky, or over some pixel domain $\mathscrsfs{D} (p)$, Eq.~\ref{eq:ilc_data} can be rewritten in vector form for all $N_{obs}$ observed channels as
\begin{equation}
\label{eq:ilc_data_v}
\textbf{P}^{\pm} (p) = \textbf{A }\, \boldsymbol{P_c}^{\pm} (p) + \textbf{N}^{\pm} (p),
\end{equation}
where $\textbf{P}^{\pm} (p)$ and $\textbf{N}^{\pm} (p)$ are the vectors containing the data and noise spin-2 fields for all $N_{obs}$ observed channels at pixel $p$, and the vector $\boldsymbol{P_c}^{\pm} (p)$ contains the spin-2 fields of the components. The mixing matrix $\textbf{A}$ has dimension $N_{obs} \times N_{c}$.
\cite{PILC:2016} originally developed the method for estimating the CMB polarization maps, for which the spectral property of the CMB is unity in thermodynamic units ($K_{CMB}$). Here, we describe the method for a general component $P_c^{\pm} (p)$ with spectral property $f_c$. The ILC approach requires prior knowledge of the spectral property $f_c$ of the component of interest $P_c^{\pm} (p)$ and estimates that component map from a weighted sum of the total frequency maps. \cite{PILC:2016} assume the weights are complex numbers, so the component of interest can be estimated as
\begin{equation}
\label{eq:weighted_sum_pilc}
\hat{P}_c^{\pm} (p) = (\boldsymbol{w}^{T} \pm i\, \boldsymbol{m}^T) \boldsymbol{P}^{\pm} (p) = \sum_{\nu} (w_{\nu} \pm i\, m_{\nu}) P_{\nu}^{\pm} (p).
\end{equation}
The weights are determined by minimizing the variance $\left\langle |\hat{P}_c (p)|^2 \right\rangle$, in such a way that the spectrum $f_c$ of the component satisfies the following constraint equations,
\begin{flalign}
\label{eq:constrain_pilc}
&\boldsymbol{w^T f_c} = 1,\nonumber\\
&\boldsymbol{m^T f_c} = 0.
\end{flalign}
A special case of Eq.~\ref{eq:constrain_pilc} is that where $m_{\nu}$ is zero at all frequencies; a similar approach is described in \cite{Kim:2009}. Here, we adopt this special case instead of the more general version of the algorithm described in Sect.~2.2 of \cite{PILC:2016}. Eq.~\ref{eq:weighted_sum_pilc} then simplifies to the standard form,
\begin{equation}
\label{eq:weighted_sum}
\hat{P}_c^{\pm} (p) = \boldsymbol{w}^{T} \boldsymbol{P}^{\pm} (p) = \sum_{\nu} w_{\nu} P_{\nu}^{\pm} (p),
\end{equation}
which must have minimum variance, i.e.,
\begin{equation}
\label{eq:variance}
\frac{\partial}{\partial \boldsymbol{w}} \left\langle |\hat{P}_c (p)|^2 \right\rangle = \frac{\partial}{\partial \boldsymbol{w}}\boldsymbol{w^T C w} = 0,
\end{equation}
with the constraint,
\begin{equation}
\label{eq:constrain}
\boldsymbol{w^T f_c} = 1,
\end{equation}
where $\textbf{C} = \left\langle \boldsymbol{d} (p)\boldsymbol{d}^{\dagger} (p) \right\rangle$ is the $N_{obs} \times N_{obs}$ covariance matrix of the data vector $\boldsymbol{d}(p) = \textbf{P}^{\pm}(p)$ ($\dagger$ denotes the conjugate transpose), and $\boldsymbol{w^{T}}$ = $(w_{1}, w_2, ... w_{N_{obs}})$ are the weights applied to the $N_{obs}$ frequency maps. The elements of the covariance matrix are computed as
\begin{equation}
\label{eq:cov}
C_{\nu \nu^{'}} = \left\langle Q_{\nu} (p) Q_{\nu^{'}} (p) + U_{\nu} (p) U_{\nu^{'}} (p) \right\rangle.
\end{equation}
Note that $ \boldsymbol{d} (p)\boldsymbol{d}^{\dagger} (p)$ is a covariant quantity and is hence defined in a global reference frame.
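As a sketch of Eq.~\ref{eq:cov}, the empirical covariance can be accumulated directly from masked $Q$, $U$ maps. The layout below (maps stored as \texttt{(N\_obs, N\_pix)} arrays with an optional boolean mask, function name ours) is an assumption, not the published implementation:

```python
import numpy as np

def qu_covariance(Q, U, mask=None):
    """C_{nu nu'} = <Q_nu Q_nu' + U_nu U_nu'>, averaged over unmasked pixels.
    Q, U: arrays of shape (N_obs, N_pix); mask: optional boolean (N_pix,)."""
    if mask is not None:
        Q, U = Q[:, mask], U[:, mask]
    npix = Q.shape[1]
    return (Q @ Q.T + U @ U.T) / npix
```

The result is symmetric and, like Eq.~\ref{eq:cov}, independent of the local polarization frame.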
Here, $\boldsymbol{f_c}$ is related to the mixing matrix \textbf{A} through $\boldsymbol{f_c = A e_c}$, where $\boldsymbol{e_c} = [0,0,0,..1,..0]^{T}$ is a column vector of dimension $N_c \times 1$ whose elements are all zero except the $c$th, which is one.
The weights can be computed by solving the system of $N_{obs}$ linear equations together with Eq.~\ref{eq:constrain} using the method of Lagrange multipliers. Straightforward algebra yields
\begin{equation}
\label{eq:lagrange_multiplies}
\begin{pmatrix}
2\boldsymbol{C} & -\boldsymbol{f_c}\\
\boldsymbol{f_c}^T & 0
\end{pmatrix} \begin{pmatrix} \boldsymbol{w}\\ \lambda \end{pmatrix}\,\, = \,\, \begin{pmatrix} \boldsymbol{0}\\ 1\end{pmatrix} ,
\end{equation}
where \textbf{0} denotes a column vector with all elements zero, and $\lambda$ is the Lagrange multiplier. Solving the system of equations in Eq.~\ref{eq:lagrange_multiplies}, we obtain the weights
\begin{equation}
\label{eq:opt_weight}
\boldsymbol{w}^{T} = \boldsymbol{ f_c^{T} C^{-1} (f_c^{T} C^{-1} f_c)^{-1}}.
\end{equation}
Finally, the estimated component map is,
\begin{flalign}
\label{eq:opt_component}
\hat{P}_c^{\pm} (p)& = \boldsymbol{(f_c^{T} C^{-1} f_c)^{-1} f_c^{T} C^{-1} P^{\pm} (p)}\\ \nonumber& = P_c^{\pm} (p) + \sum_{i = 1, i\neq c}^{N_{c}} \sum_{\nu} w_{\nu} {A}_{\nu i} P_{i}^{\pm} (p) + \sum_{\nu} w_{\nu} N_{\nu}^{\pm} (p)\\\nonumber & = P_c^{\pm} (p) + F_c^{\pm} (p) + N_c^{\pm} (p).
\end{flalign}
The beauty of this method is that we can work directly in {$Q$ }, {$U$ } space over an incomplete sky. This is useful since the Galactic masks are conventionally defined in {$Q$ }, {$U$ } space. Using a Galactic mask is essential; otherwise, the ILC weights would be determined mainly by the variance of the pixels in the Galactic plane.
It is important to note that the estimated map is biased by the non-zero projection $F_c^{\pm}(p)$ of the SEDs of the other components onto the SED of the component of interest. In addition, the solution is biased by the residual leakage of instrumental noise $N_c^{\pm}(p)$ and by chance correlations between components. The solution can be improved by minimizing the variance with weights that have unit response to $f_c$ and, simultaneously, zero response to the SEDs of the other components. This method, called constrained ILC, is described in the next section.
\subsubsection{Constrained ILC in general form}
\label{sec:cILC}
When the emission spectra of some of the components are known, it is possible to deproject them using additional constraint equations in the variance minimization of the ILC. \cite{Remazeilles:2011} applied this method to the simultaneous estimation of the CMB and tSZ components. In practice, we can impose constraints for any number $N_{rc}+1$ of components with known SEDs,
\begin{align}
\label{eq:set_of_constrain}
& \boldsymbol{w^{T}f_1} = 0,\nonumber\\
&\boldsymbol{w^{T}f_2} = 0,\nonumber\\
\vdots \nonumber \\
&\boldsymbol{w^{T}f_c} = 1,\\
\vdots \nonumber \\
& \boldsymbol{w^{T}f_{N_{rc}+1}} =0.\nonumber
\end{align}
Here, our goal is to estimate the $c$th component while eliminating the contamination from the selected $N_{rc}$ components. To express the constraint equations in a more general form, we define a matrix $\textbf{F}$ of dimension $N_{obs} \times (N_{rc} + 1)$ as
\begin{equation}
\label{eq:F_matrix}
\boldsymbol{F} = \begin{pmatrix}
f_1[1] & \cdots & f_{N_{rc} +1}[1] \\
\vdots & \ddots & \vdots \\
f_{1}[N_{obs}]& \cdots & f_{N_{rc}+1}[N_{obs}]
\end{pmatrix}.
\end{equation}
The set of equations~\ref{eq:set_of_constrain} can now be conveniently expressed as
\begin{equation}
\label{eq:set_of_constrain1}
\boldsymbol{F^{T} w} = \boldsymbol{e},
\end{equation}
where $\boldsymbol{e} = [0, 0, ... 1,..0]^{T}$ is the column vector with all elements zero except the $c$th, which is one. In this case, Eq.~\ref{eq:lagrange_multiplies} generalizes to
\begin{equation}
\label{eq:lagrange_multiplies_gen}
\begin{pmatrix}
2\boldsymbol{C} & -\boldsymbol{F}\\
\boldsymbol{F}^T & 0
\end{pmatrix} \begin{pmatrix} \boldsymbol{w}\\ \boldsymbol{\lambda} \end{pmatrix}\,\, = \,\, \begin{pmatrix} 0\\ \boldsymbol{e}\end{pmatrix} ,
\end{equation}
where $\boldsymbol{\lambda} = (\lambda_1, \lambda_2, ..., \lambda_{N_{rc} + 1})^{T}$ is the vector of $N_{rc} + 1$ Lagrange multipliers. Algebraic solution of the system in Eq.~\ref{eq:lagrange_multiplies_gen} gives the optimized weights,
\begin{equation}
\label{eq:cmilc_weights}
\boldsymbol{w}^T = \boldsymbol{e^{T}} (F^{T}C^{-1}F)^{-1} F^{T} C^{-1}.
\end{equation}
The estimated component can be expressed as,
\begin{equation}
\label{eq:opt_component_gen}
\hat{P}_c^{\pm} (p) = \{ \boldsymbol{e^{T}} (F^{T}C^{-1}F)^{-1} F^{T} C^{-1}\} \boldsymbol{P}^{\pm} (p).
\end{equation}
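Eqs.~\ref{eq:cmilc_weights} and \ref{eq:opt_component_gen} translate into a short linear-algebra routine. The sketch below (function name ours; \texttt{e} selects the target component) is one possible implementation:

```python
import numpy as np

def cilc_weights(F, C, e):
    """Constrained ILC weights w^T = e^T (F^T C^{-1} F)^{-1} F^T C^{-1}.
    F: (N_obs, N_rc + 1) matrix of moment SEDs, C: (N_obs, N_obs)
    covariance, e: (N_rc + 1,) selection vector of the target component."""
    CinvF = np.linalg.solve(C, F)          # C^{-1} F
    M = F.T @ CinvF                        # F^T C^{-1} F
    return CinvF @ np.linalg.solve(M, e)   # weights satisfying F^T w = e
```

The recovered map is then the weighted sum of the frequency maps, and the constraints $\boldsymbol{F^T w} = \boldsymbol{e}$ hold to numerical precision.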
The variance of the standard ILC solution is smaller than that of cILC (see Sect.~3.4 of \cite{Remazeilles:2020}): the additional constraints cause a larger noise residual than in the standard ILC. However, cILC reduces the foreground residual compared to the standard ILC. Therefore, we need to find the optimum number of constraints that balances the noise penalty against the leakage from unconstrained components into the recovered map.
\subsubsection{Moment based constrained ILC for estimation of dust and synchrotron maps}
\label{sec:cMILC}
We want to highlight that the zeroth-order moment maps in Eq.~\ref{eq:sync_moments} and Eq.~\ref{eq:dust_moments} are, in principle, the synchrotron and thermal dust templates at the respective pivot frequencies. Here, we aim to estimate the thermal dust and synchrotron templates at pivot frequencies of 353 \ifmmode $\,GHz$\else \,GHz\fi\ and 30 \ifmmode $\,GHz$\else \,GHz\fi\ respectively. For that, we use the cILC method with a set of constraints applied to the moment SEDs of different orders in Eq.~\ref{eq:sync_moments} and Eq.~\ref{eq:dust_moments}. In short, we estimate the zeroth-order moment maps of thermal dust and synchrotron in the cILC framework, projecting out the other, higher-order moments by imposing orthogonality of the weights to the higher-order moment SEDs with respect to the SED of the zeroth-order moment of the respective component. Hereafter, we refer to this method as the cMILC algorithm.
For estimating the thermal dust template at 353\ifmmode $\,GHz$\else \,GHz\fi, we adopt a subset of the following constraints in the cMILC algorithm:
\begin{equation}
\label{eq:cmilc_dust}
\left.\begin{aligned}
& \boldsymbol{w }^{\rm T} \cdot f_{dust}\left(\nu, \overline{\beta}_d, \overline{T}_{\!d}\right) = 1 \\[1.5mm]
& \boldsymbol{w }^{\rm T} \cdot {f_{cmb} } = 0\\[1.5mm]
&\boldsymbol{w }^{\rm T} \cdot f_{sync}\left(\nu, \overline{\beta}_s\right) = 0 \\[1.5mm]
& \boldsymbol{w }^{\rm T} \cdot \partial_{{\beta}_s}f_{sync}\left(\nu, \overline{\beta}_s\right) = 0\\[1.5mm]
& \boldsymbol{w }^{\rm T} \cdot \partial_{{\beta}_d}f_{dust}\left(\nu, \overline{\beta}_d, \overline{T}_{\!d}\right) = 0\\[1.5mm]
& \boldsymbol{w }^{\rm T} \cdot \partial_{{T}_d}f_{dust}\left(\nu, \overline{\beta}_d, \overline{T}_{\!d}\right) = 0\\[1.5mm]
& \boldsymbol{w }^{\rm T} \cdot \partial^2_{{\beta}_s}f_{sync}\left(\nu, \overline{\beta}_s\right) = 0\\[1.5mm]
& \boldsymbol{w }^{\rm T} \cdot \partial^2_{{\beta}_d}f_{dust}\left(\nu, \overline{\beta}_d, \overline{T}_{\!d}\right) = 0\\[1.5mm]
& \boldsymbol{w }^{\rm T} \cdot \partial^2_{{T}_d}f_{dust}\left(\nu, \overline{\beta}_d, \overline{T}_{\!d}\right) = 0\\[1.5mm]
& \boldsymbol{w }^{\rm T} \cdot \partial_{{\beta}_d}\partial_{{T}_d}f_{dust}\left(\nu, \overline{\beta}_d, \overline{T}_{\!d}\right) = 0.
\end{aligned}\right\}
\end{equation}
Similarly, for estimating the synchrotron template at 30 \ifmmode $\,GHz$\else \,GHz\fi, we simply interchange the first and third constraints in Eq.~\ref{eq:cmilc_dust}:
\begin{equation}
\label{eq:cmilc_sync}
\left.\begin{aligned}
& \boldsymbol{w }^{\rm T} \cdot f_{sync}\left(\nu, \overline{\beta}_s\right) = 1 \\[1.5mm]
& \boldsymbol{w }^{\rm T} \cdot {f_{cmb} } = 0\\[1.5mm]
&\boldsymbol{w }^{\rm T} \cdot f_{dust}\left(\nu, \overline{\beta}_d, \overline{T}_{\!d}\right) = 0 \\[1.5mm]
& \boldsymbol{w }^{\rm T} \cdot \partial_{{\beta}_s}f_{sync}\left(\nu, \overline{\beta}_s\right) = 0\\[1.5mm]
& \boldsymbol{w }^{\rm T} \cdot \partial_{{\beta}_d}f_{dust}\left(\nu, \overline{\beta}_d, \overline{T}_{\!d}\right) = 0\\[1.5mm]
& \boldsymbol{w }^{\rm T} \cdot \partial_{{T}_d}f_{dust}\left(\nu, \overline{\beta}_d, \overline{T}_{\!d}\right) = 0\\[1.5mm]
& \boldsymbol{w }^{\rm T} \cdot \partial^2_{{\beta}_s}f_{sync}\left(\nu, \overline{\beta}_s\right) = 0\\[1.5mm]
& \boldsymbol{w }^{\rm T} \cdot \partial^2_{{\beta}_d}f_{dust}\left(\nu, \overline{\beta}_d, \overline{T}_{\!d}\right) = 0\\[1.5mm]
& \boldsymbol{w }^{\rm T} \cdot \partial^2_{{T}_d}f_{dust}\left(\nu, \overline{\beta}_d, \overline{T}_{\!d}\right) = 0\\[1.5mm]
& \boldsymbol{w }^{\rm T} \cdot \partial_{{\beta}_d}\partial_{{T}_d}f_{dust}\left(\nu, \overline{\beta}_d, \overline{T}_{\!d}\right) = 0.
\end{aligned}\right\}
\end{equation}
Here ${f_{cmb} }$ denotes the unit conversion factor for the CMB from thermodynamic to RJ units, $f_{cmb} = \frac{x_c^2e^{x_c}}{(e^{x_c} - 1)^2}$, where $x_c = \frac{h\nu}{k_BT_{CMB}}$ ($T_{CMB}$ = 2.7255 K).
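The conversion factor $f_{cmb}$ is straightforward to evaluate; a small sketch (frequencies in GHz, approximate value of $h/k_B$) is:

```python
import numpy as np

H_OVER_K = 0.04799   # h/k_B in K/GHz (approximate)
T_CMB = 2.7255       # K

def f_cmb(nu):
    """Thermodynamic-to-RJ conversion factor x^2 e^x / (e^x - 1)^2,
    with x = h nu / (k_B T_CMB) and nu in GHz."""
    x = H_OVER_K * nu / T_CMB
    return x**2 * np.exp(x) / np.expm1(x)**2
```

The factor tends to unity in the low-frequency (RJ) limit and decreases toward higher frequencies.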
The matrix $\boldsymbol{F}$ in Eq.~\ref{eq:F_matrix} contains the moment SEDs. For example, for thermal dust estimation, the matrix looks like,
\begin{equation}
\boldsymbol{F} = \left( \boldsymbol{f_{\rm dust}}(\nu, \overline{\beta}_d, \overline{T}_{\!d}), \boldsymbol{{f_{cmb} }}, \boldsymbol{f_{sync}}\left(\nu, \overline{\beta}_s\right), ....., \boldsymbol{\partial_{{\beta}_d}\partial_{{T}_d}f_{dust}}(\nu, \overline{\beta}_d, \overline{T}_{\!d}) \right)\nonumber,
\end{equation}
with $\boldsymbol{e} = [1, 0, .....,0]^{T}$. For synchrotron estimation, the columns $\boldsymbol{f_{\rm dust}}$ and $\boldsymbol{f_{\rm sync}}$ of $\boldsymbol{F}$ are interchanged. The dimension of the $\boldsymbol{F}$ matrix varies depending on the number of moments passed to the cMILC algorithm. As discussed in Sect.~\ref{sec:cILC}, a larger number of constraints causes an extra noise penalty; projecting out all the moments up to second order therefore does not guarantee that the estimated map is the optimal cMILC solution. We must balance the mitigation of residual leakage from unconstrained components against the degradation of the noise residual through the choice of an optimum number of constraints, as discussed in Sect.~\ref{sec:srategy}.
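Assembling $\boldsymbol{F}$ for a given cMILC iteration is a matter of stacking the chosen moment SEDs column-wise. In the hypothetical sketch below, each SED is assumed to be a callable of frequency; the function name and the commented example are ours:

```python
import numpy as np

def build_F(seds, nu):
    """Stack moment SEDs (callables of frequency, in GHz) into the
    N_obs x (N_rc + 1) constraint matrix F; nu has shape (N_obs,)."""
    return np.column_stack([sed(nu) for sed in seds])

# e.g. for an iteration like cMILC03 (dust): seds = [f_dust, f_cmb, f_sync],
# with the target SED first, and e = [1, 0, 0] selecting the dust component.
```

The selection vector $\boldsymbol{e}$ then has the same length as the list of SEDs, with a one in the position of the target component.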
\begin{table*}[hbtp]
\caption{The list of the subsets of SEDs passed to the cMILC algorithm in different iterations for estimating the dust template. In each iteration, the condition $\textbf{w}^T \cdot f_{\rm dust} = 1$ is applied, together with orthogonality conditions to the rest of the SEDs to deproject the corresponding maps. The Id of each iteration is displayed in the first column.}
\label{table:dust_constrains}
\begin{tabular}{llc}
\toprule
Id & Subsets of moment SEDs \\
\hline
\hline
cMILC01 & $f_{\rm dust}$ ; ${f_{cmb} }$ \\
cMILC02 & $f_{\rm dust}$ ; $f_{\rm sync}$ \\
cMILC03 & $f_{\rm dust}$ ; ${f_{cmb} }$ ; $f_{\rm sync}$ \\
cMILC04 & $f_{\rm dust}$ ; ${f_{cmb} }$ ; $f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ \\
cMILC05 & $f_{\rm dust}$ ; ${f_{cmb} }$ ; $f_{\rm sync}$ ; $\partial_\beta\,f_{\rm sync}$ \\
cMILC06 & $f_{\rm dust}$ ; ${f_{cmb} }$ ; $f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ \\
cMILC07 & $f_{\rm dust}$ ; ${f_{cmb} }$ ; $f_{\rm sync}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ \\
cMILC08& $f_{\rm dust}$ ; ${f_{cmb} }$ ; $f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ ; $\partial_T\,f_{\rm dust}$\\
cMILC09 & $f_{\rm dust}$ ; ${f_{cmb} }$ ; $f_{\rm sync}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_T\,f_{\rm dust}$\\
cMILC10 & $f_{\rm dust}$ ; ${f_{cmb} }$ ; $f_{\rm sync}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ ; $\partial_T\,f_{\rm dust}$ \\
cMILC11 & $f_{\rm dust}$ ; ${f_{cmb} }$ ; $f_{\rm sync}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ ; $\partial_T\,f_{\rm dust}$ ; $\partial^2_T\,f_{\rm dust}$ \\
cMILC12 & $f_{\rm dust}$ ; ${f_{cmb} }$ ; $f_{\rm sync}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ ; $\partial_T\,f_{\rm dust}$ ; $\partial^2_\beta\,f_{\rm sync}$ \\
cMILC13 & $f_{\rm dust}$ ; ${f_{cmb} }$ ; $f_{\rm sync}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ ; $\partial_T\,f_{\rm dust}$ ; $\partial^2_\beta\,f_{\rm dust}$ \\
cMILC14 & $f_{\rm dust}$ ; ${f_{cmb} }$ ; $f_{\rm sync}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ ; $\partial_T\,f_{\rm dust}$ ; $\partial_\beta\partial_T\,f_{\rm dust}$\\
cMILC15 & $f_{\rm dust}$ ; ${f_{cmb} }$ ; $f_{\rm sync}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ ; $\partial_T\,f_{\rm dust}$ ; $\partial^2_\beta\,f_{\rm sync}$ ; $\partial^2_T\,f_{\rm dust}$ \\
cMILC16 & $f_{\rm dust}$ ; ${f_{cmb} }$ ; $f_{\rm sync}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ ; $\partial_T\,f_{\rm dust}$ ; $\partial^2_\beta\,f_{\rm sync}$ ; $\partial_\beta\partial_T\,f_{\rm dust}$ \\
cMILC17 & $f_{\rm dust}$ ; ${f_{cmb} }$ ; $f_{\rm sync}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ ; $\partial_T\,f_{\rm dust}$ ; $\partial^2_\beta\,f_{\rm sync}$ ; $\partial^2_\beta\,f_{\rm dust}$ \\
cMILC18 & $f_{\rm dust}$ ; ${f_{cmb} }$ ; $f_{\rm sync}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ ; $\partial_T\,f_{\rm dust}$ ; $\partial^2_\beta\,f_{\rm dust}$ ; $\partial^2_T\,f_{\rm dust}$ \\
cMILC19 & $f_{\rm dust}$ ; ${f_{cmb} }$ ; $f_{\rm sync}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ ; $\partial_T\,f_{\rm dust}$ ; $\partial^2_\beta\,f_{\rm sync}$ ; $\partial_\beta\partial_T\,f_{\rm dust}$ \\
cMILC20 & $f_{\rm dust}$ ; ${f_{cmb} }$ ; $f_{\rm sync}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ ; $\partial_T\,f_{\rm dust}$ ; $\partial^2_\beta\,f_{\rm sync}$ ; $\partial^2_T\,f_{\rm dust}$ ; $\partial_\beta\partial_T\,f_{\rm dust}$\\
cMILC21 & $f_{\rm dust}$ ; ${f_{cmb} }$ ; $f_{\rm sync}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ ; $\partial_T\,f_{\rm dust}$ ; $\partial^2_\beta\,f_{\rm sync}$ ; $\partial^2_\beta\,f_{\rm dust}$ ; $\partial_\beta\partial_T\,f_{\rm dust}$\\
cMILC22 & $f_{\rm dust}$ ; ${f_{cmb} }$ ; $f_{\rm sync}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ ; $\partial_T\,f_{\rm dust}$ ; $\partial^2_\beta\,f_{\rm sync}$ ; $\partial^2_T\,f_{\rm dust}$ ; $\partial^2_\beta\,f_{\rm dust}$\\
cMILC23 & $f_{\rm dust}$ ; ${f_{cmb} }$ ; $f_{\rm sync}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ ; $\partial_T\,f_{\rm dust}$ ; $\partial^2_\beta\,f_{\rm dust}$ ; $\partial^2_T\,f_{\rm dust}$ ; $\partial_\beta\partial_T\,f_{\rm dust}$\\
cMILC24 & $f_{\rm dust}$ ; ${f_{cmb} }$ ; $f_{\rm sync}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ ; $\partial_T\,f_{\rm dust}$ ; $\partial^2_\beta\,f_{\rm sync}$ ; $\partial^2_T\,f_{\rm dust}$ ; $\partial_\beta\partial_T\,f_{\rm dust}$ ; $\partial^2_\beta\,f_{\rm dust}$\\
\bottomrule
\end{tabular}
\end{table*}
\begin{table*}
\caption{The list of the subsets of SEDs passed to the cMILC algorithm in different iterations for estimating the synchrotron template. In each iteration, the condition $\textbf{w}^T \cdot f_{\rm sync} = 1$ is applied, together with orthogonality conditions to the rest of the SEDs to deproject the corresponding maps. The Id of each iteration is displayed in the first column.}
\label{table:sync_constrains}
\begin{tabular}{llc}
\toprule
Id & Subsets of moment SEDs \\
\hline
\hline
cMILC01 & $f_{\rm sync}$ ; ${f_{cmb} }$ \\
cMILC02 & $f_{\rm sync}$ ; $f_{\rm dust}$ \\
cMILC03 & $f_{\rm sync}$ ; ${f_{cmb} }$ ; $f_{\rm dust}$ \\
cMILC04 & $f_{\rm sync}$ ; ${f_{cmb} }$ ; $f_{\rm dust}$ ; $\partial_\beta\,f_{\rm dust}$ \\
cMILC05 & $f_{\rm sync}$ ; ${f_{cmb} }$ ; $f_{\rm dust}$ ; $\partial_\beta\,f_{\rm sync}$ \\
cMILC06 & $f_{\rm sync}$ ; ${f_{cmb} }$ ; $f_{\rm dust}$ ; $\partial_\beta\,f_{\rm dust}$ \\
cMILC07 & $f_{\rm sync}$ ; ${f_{cmb} }$ ; $f_{\rm dust}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ \\
cMILC08& $f_{\rm sync}$ ; ${f_{cmb} }$ ; $f_{\rm dust}$ ; $\partial_\beta\,f_{\rm dust}$ ; $\partial_T\,f_{\rm dust}$\\
cMILC09 & $f_{\rm sync}$ ; ${f_{cmb} }$ ; $f_{\rm dust}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_T\,f_{\rm dust}$\\
cMILC10 & $f_{\rm sync}$ ; ${f_{cmb} }$ ; $f_{\rm dust}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ ; $\partial_T\,f_{\rm dust}$ \\
cMILC11 & $f_{\rm sync}$ ; ${f_{cmb} }$ ; $f_{\rm dust}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ ; $\partial_T\,f_{\rm dust}$ ; $\partial^2_T\,f_{\rm dust}$ \\
cMILC12 & $f_{\rm sync}$ ; ${f_{cmb} }$ ; $f_{\rm dust}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ ; $\partial_T\,f_{\rm dust}$ ; $\partial^2_\beta\,f_{\rm sync}$ \\
cMILC13 & $f_{\rm sync}$ ; ${f_{cmb} }$ ; $f_{\rm dust}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ ; $\partial_T\,f_{\rm dust}$ ; $\partial^2_\beta\,f_{\rm dust}$ \\
cMILC14 & $f_{\rm sync}$ ; ${f_{cmb} }$ ; $f_{\rm dust}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ ; $\partial_T\,f_{\rm dust}$ ; $\partial_\beta\partial_T\,f_{\rm dust}$ \\
cMILC15 & $f_{\rm sync}$ ; ${f_{cmb} }$ ; $f_{\rm dust}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ ; $\partial_T\,f_{\rm dust}$ ; $\partial^2_\beta\,f_{\rm sync}$ ; $\partial^2_T\,f_{\rm dust}$ \\
cMILC16 & $f_{\rm sync}$ ; ${f_{cmb} }$ ; $f_{\rm dust}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ ; $\partial_T\,f_{\rm dust}$ ; $\partial^2_\beta\,f_{\rm sync}$ ; $\partial_\beta\partial_T\,f_{\rm dust}$ \\
cMILC17 & $f_{\rm sync}$ ; ${f_{cmb} }$ ; $f_{\rm dust}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ ; $\partial_T\,f_{\rm dust}$ ; $\partial^2_\beta\,f_{\rm sync}$ ; $\partial^2_\beta\,f_{\rm dust}$ \\
cMILC18 & $f_{\rm sync}$ ; ${f_{cmb} }$ ; $f_{\rm dust}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ ; $\partial_T\,f_{\rm dust}$ ; $\partial^2_\beta\,f_{\rm dust}$ ; $\partial^2_T\,f_{\rm dust}$ \\
cMILC19 & $f_{\rm sync}$ ; ${f_{cmb} }$ ; $f_{\rm dust}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ ; $\partial_T\,f_{\rm dust}$ ; $\partial^2_\beta\,f_{\rm sync}$ ; $\partial_\beta\partial_T\,f_{\rm dust}$ \\
cMILC20 & $f_{\rm sync}$ ; ${f_{cmb} }$ ; $f_{\rm dust}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ ; $\partial_T\,f_{\rm dust}$ ; $\partial^2_\beta\,f_{\rm sync}$ ; $\partial^2_T\,f_{\rm dust}$ ; $\partial_\beta\partial_T\,f_{\rm dust}$\\
cMILC21 & $f_{\rm sync}$ ; ${f_{cmb} }$ ; $f_{\rm dust}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ ; $\partial_T\,f_{\rm dust}$ ; $\partial^2_\beta\,f_{\rm sync}$ ; $\partial^2_\beta\,f_{\rm dust}$ ; $\partial_\beta\partial_T\,f_{\rm dust}$\\
cMILC22 & $f_{\rm sync}$ ; ${f_{cmb} }$ ; $f_{\rm dust}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ ; $\partial_T\,f_{\rm dust}$ ; $\partial^2_\beta\,f_{\rm sync}$ ; $\partial^2_T\,f_{\rm dust}$ ; $\partial^2_\beta\,f_{\rm dust}$\\
cMILC23 & $f_{\rm sync}$ ; ${f_{cmb} }$ ; $f_{\rm dust}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ ; $\partial_T\,f_{\rm dust}$ ; $\partial^2_\beta\,f_{\rm dust}$ ; $\partial^2_T\,f_{\rm dust}$ ; $\partial_\beta\partial_T\,f_{\rm dust}$\\
cMILC24 & $f_{\rm sync}$ ; ${f_{cmb} }$ ; $f_{\rm dust}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ ; $\partial_T\,f_{\rm dust}$ ; $\partial^2_\beta\,f_{\rm sync}$ ; $\partial^2_T\,f_{\rm dust}$ ; $\partial_\beta\partial_T\,f_{\rm dust}$ ; $\partial^2_\beta\,f_{\rm dust}$\\
\bottomrule
\end{tabular}
\end{table*}
\section{Implementation strategy}
\label{sec:srategy}
We apply the cMILC algorithm in pixel space over the {GAL78}\ mask. Since cMILC is based on the cILC algorithm, we pass the multi-frequency simulated data and a different subset of moment SEDs in each iteration. The subsets of moment SEDs used in this analysis are listed in Table~\ref{table:dust_constrains} (for thermal dust estimation) and Table~\ref{table:sync_constrains} (for synchrotron estimation). The only difference between Table~\ref{table:dust_constrains} and Table~\ref{table:sync_constrains} is that the columns of $f_{dust}$ and $f_{sync}$ have been interchanged. To construct these moment SEDs, we must choose pivot values of the parameters involved and pivot frequencies for the moment expansion. In principle, the pivot parameters should be chosen separately for each set of simulations in Sect.~\ref{sec:sim}, and they should also change when higher-order moments are used to describe the data.
However, in the interest of speed, we use fixed pivot parameters throughout the study, independent of the set of simulations used.
We adopt a pivot synchrotron spectral index $\overline{\beta}_s$ = -3.00 \citep{Miville-Desch:2008, Krachmalnicoff:2018, Kogut:2007}. For thermal dust, we adopt a pivot dust temperature $\overline{T}_d$ = 19.4 K \citep{planck-XLVIII:2016} and a pivot dust spectral index $\overline{\beta}_d$ = 1.53 \citep{planck_XI:2014, planck-x:2016, planck-XI:2018}. We choose the pivot frequencies for synchrotron and thermal dust to be $\nu_s$ = 30 \ifmmode $\,GHz$\else \,GHz\fi\ and $\nu_d$ = 353 \ifmmode $\,GHz$\else \,GHz\fi\ respectively.
After running the cMILC algorithm for each of the iterations listed in Table~\ref{table:dust_constrains} (Table~\ref{table:sync_constrains}) with the corresponding subset of moment SEDs, we apply the cMILC weights to the total frequency maps to estimate the thermal dust map at 353 \ifmmode $\,GHz$\else \,GHz\fi\ (synchrotron map at 30 \ifmmode $\,GHz$\else \,GHz\fi). Our simulations are absolutely calibrated (unlike the {\it Planck\/}\ and \textit{WMAP}\xspace\ data), and hence we do not attach any additional frequency-dependent terms to the component maps beyond their respective SEDs. To assess the residual leakage from noise, we apply the same weights to the input noise maps. To evaluate the residual leakage from the CMB, AME and the other unconstrained higher-order moments of thermal dust and synchrotron (hereafter referred to collectively as the \textit{moment residual}), we apply the same weights to these components as well. In summary, the algorithm returns the dust map at 353 \ifmmode $\,GHz$\else \,GHz\fi\ and the synchrotron map at 30 \ifmmode $\,GHz$\else \,GHz\fi, along with the corresponding moment-residual and noise-residual maps for each iteration, simply by interchanging the first and third constraints in the set of Eq.~\ref{eq:cmilc_dust}.
\section{Results}
\label{sec:opt_sim}
In this section, we investigate the recovered thermal dust and synchrotron maps to demonstrate the performance of the cMILC method, presenting results for the simulation in SET1 only. Similar results for the rest of the simulations are presented in Appendix~\ref{sec:other_sim_results}.
\begin{figure*}
\begin{multicols}{2}
\includegraphics[width=\linewidth]{dust_recovered_Q_maps_modeld1s1a2_ns256_pcmilc.pdf} \par
\includegraphics[width=\linewidth]{dust_recovered_U_maps_modeld1s1a2_ns256_pcmilc.pdf} \par
\end{multicols}
\caption{cMILC estimates of the thermal dust template for different iterations, deprojecting more and more moments with an increasing number of constraints, for the simulation in SET1. The \textit{left panel} shows the thermal dust {$Q$ } maps, and the \textit{right panel} the thermal dust {$U$ } maps. The patches are 70$^{\ifmmode^\circ\else$^\circ$\fi}$ $\times$ 70$^{\ifmmode^\circ\else$^\circ$\fi}$, shown in gnomonic projection centered at $(l, b)$ = (90$^{\ifmmode^\circ\else$^\circ$\fi}$, -80$^{\ifmmode^\circ\else$^\circ$\fi}$). All maps are smoothed to a resolution of FWHM = 60\rlap{.}$^{\scriptstyle\prime}$. The first row shows the input thermal dust map. The first, second and third columns of the subsequent rows show the recovered thermal dust maps, the moment residual maps and the noise residual maps, respectively, for selected cMILC iterations from cMILC03 to cMILC19. The moment residual decreases significantly as more and more higher-order moments are deprojected, up to the optimum choice of constraints at cMILC12; beyond that, the residual increases with additional constraints. Among all these maps, cMILC12 gives the best recovery.}
\label{fig:dust_maps_sim_d1s1}
\end{figure*}
\begin{figure*}
\begin{multicols}{2}
\includegraphics[width=\linewidth]{sync_recovered_Q_maps_modeld1s1a2_ns256_pcmilc.pdf} \par
\includegraphics[width=\linewidth]{sync_recovered_U_maps_modeld1s1a2_ns256_pcmilc.pdf} \par
\end{multicols}
\caption{cMILC results for the estimation of the synchrotron template for different iterations, deprojecting more and more moments with increasing constraints, for the simulation in SET1. The \textit{left panel} shows the synchrotron {$Q$ } maps, and the \textit{right panel} shows the synchrotron {$U$ } maps. The patches are 70$^{\ifmmode^\circ\else$^\circ$\fi}$ $\times$ 70$^{\ifmmode^\circ\else$^\circ$\fi}$, shown in gnomonic projection centered at $(l, b)$ = (90$^{\ifmmode^\circ\else$^\circ$\fi}$, -80$^{\ifmmode^\circ\else$^\circ$\fi}$). All maps are smoothed to a resolution of FWHM = 60\rlap{.}$^{\scriptstyle\prime}$. The first row shows the input synchrotron map. The first, second and third columns of the subsequent rows show the recovered synchrotron maps, moment residual maps and noise residual maps respectively, for selected cMILC iterations from cMILC03 to cMILC19. The moment residual reduces significantly as higher-order moments are deprojected, up to the optimum choice of constraints at cMILC12; after that, the residual increases with increasing constraints. Among all these iterations, cMILC12 gives the best recovered maps.}
\label{fig:sync_maps_sim_d1s1}
\end{figure*}
\begin{figure*}
\begin{multicols}{2}
\includegraphics[width=\linewidth]{dust_QQ_correlation_maps_modeld1s1a2_ns256_pcmilc.pdf}\par
\includegraphics[width=\linewidth]{dust_UU_correlation_maps_modeld1s1a2_ns256_pcmilc.pdf}
\end{multicols}
\caption{Contour plots of the 2D histogram of the input and recovered dust {$Q$ } (\textit{left panel}) and {$U$ } (\textit{right panel}) maps for the simulation in SET1. The 1$\sigma$ and 2$\sigma$ contours are shown for the cMILC12 (orange) and cMILC15 (blue) iterations. Most of the pixels of the cMILC12 output maps are concentrated in a small region of the distribution, whereas the pixels of the cMILC15 output maps spread over a far wider range. This implies that using more than 7 constraints deteriorates the performance of the algorithm for the given instrument sensitivity and channels.}
\label{fig:dust_TT_correlation_d1s1}
\end{figure*}
\begin{figure*}
\begin{multicols}{2}
\includegraphics[width=\linewidth]{sync_QQ_correlation_maps_modeld1s1a2_ns256_pcmilc.pdf}\par
\includegraphics[width=\linewidth]{sync_UU_correlation_maps_modeld1s1a2_ns256_pcmilc.pdf}\par
\end{multicols}
\caption{Contour plots of the 2D histogram of the input and recovered synchrotron {$Q$ } (\textit{left panel}) and {$U$ } (\textit{right panel}) maps for the simulation in SET1. The 1$\sigma$ and 2$\sigma$ contours are shown for the cMILC12 (orange) and cMILC15 (blue) iterations. Most of the pixels of the cMILC12 output maps are concentrated in a small region of the distribution, whereas the pixels of the cMILC15 output maps spread over a far wider range. This implies that using more than 7 constraints deteriorates the performance of the algorithm for the given instrument sensitivity and channels.}
\label{fig:sync_TT_correlation_d1s1}
\end{figure*}
\begin{figure}
\includegraphics[width=9cm]{Dust_power_spectra_data_modeld1s1a2_maps_ns256_cMILC12.pdf}\par
\includegraphics[width=9cm]{Sync_power_spectra_data_modeld1s1a2_maps_ns256_cMILC12.pdf} \par
\caption{EE (circles) and BB (triangles) power spectra of the thermal dust (\textit{upper panel}) and synchrotron (\textit{lower panel}) maps. Power spectra of the input maps of the simulation in SET1 are shown in blue, and those of the recovered maps for the cMILC12 iteration in green. All spectra are computed over the {GAL78}\ apodized mask using \ensuremath{\tt Xpol}. Error bars are 1$\sigma$ uncertainties analytically computed with \ensuremath{\tt Xpol}. The dashed lines indicate the respective best-fit power-law model spectra; the corresponding best-fit parameters are listed in Table.~\ref{table3}.}
\label{fig:sim_dust_sync_power_d1s1}
\end{figure}
\subsection{Inspection of recovered maps}
\label{sec:map_inspection}
We first inspect the quality of the recovered dust and synchrotron polarization maps and compare them with the input maps of the respective components. For illustration, we also investigate the amount of residual leakage from the unconstrained components and moments, as well as the residual leakage from noise.
In Figure.~\ref{fig:dust_maps_sim_d1s1}, we summarize the cMILC results for the estimation of thermal dust for the simulation in SET1 for some selected iterations. We display 70$^{\ifmmode^\circ\else$^\circ$\fi}$ $\times$ 70$^{\ifmmode^\circ\else$^\circ$\fi}$ patches in gnomonic projection centered at Galactic longitude and latitude $(l, b)$ = (90$^{\ifmmode^\circ\else$^\circ$\fi}$, -80$^{\ifmmode^\circ\else$^\circ$\fi}$). The \textit{left panel} presents the results for {$Q$ } and the \textit{right panel} for {$U$ }. The first rows show the input thermal dust {$Q$ }, {$U$ } maps; the subsequent rows show the output maps at 353 \ifmmode $\,GHz$\else \,GHz\fi\ for selected cMILC iterations that use different subsets of moment SEDs. The corresponding iteration IDs are shown on the left side of the maps. The first columns show the estimated thermal dust maps at 353 \ifmmode $\,GHz$\else \,GHz\fi, the second columns show the moment residual maps, and the third columns show the noise residual maps. Similar results for the estimation of the synchrotron map at 30 \ifmmode $\,GHz$\else \,GHz\fi\ are presented in Figure.~\ref{fig:sync_maps_sim_d1s1} over the same sky region. The cMILC03 iteration deprojects the zeroth-order moments (${f_{cmb} }$ ; $f_{\rm sync}$) only; therefore, the moment residuals are fairly high for this iteration. Deprojecting $\partial_\beta\,f_{\rm dust}$ along with the zeroth-order moments (\textit{third rows}) does not reduce the residual in the recovered maps much. The moment residual reduces significantly when we deproject all zeroth- and first-order moments in cMILC10 and one of the second-order moments in cMILC11 and cMILC12. Inspecting the second columns of Figure.~\ref{fig:dust_maps_sim_d1s1} and Figure.~\ref{fig:sync_maps_sim_d1s1}, we confirm that the moment residual reduces up to cMILC12 as we project out more and more moments.
Inspecting the first columns, one can hardly distinguish the map-level differences between the recovered maps for cMILC03, cMILC06, cMILC10, cMILC11 and cMILC12. However, comparing the last two columns, we confirm that the recovered maps for cMILC12 are the best, in the sense that the moment residual leakage is the least for this iteration. We also run the algorithm on a simulation without AME and notice that the residual leakage in that case is an order of magnitude smaller. In the iterations from cMILC15 to cMILC19, we project out all the moment maps up to first order along with subsets of two second-order moments. In Figure.~\ref{fig:dust_maps_sim_d1s1} and Figure.~\ref{fig:sync_maps_sim_d1s1}, we display only the results for cMILC19 out of these iterations, where we project out two second-order moments ($\partial^2_\beta\,f_{\rm sync}$, $\partial_\beta\partial_T\,f_{\rm dust}$) along with all zeroth- and first-order moments.
The recovered maps in this iteration are noisy. This implies that the noise degradation for a larger number of constraints prevents any further improvement in the recovery. A similar trend in the recovered maps and in the residual leakage from moment maps and noise is found for the other sets of simulations, as shown in Appendix.~\ref{sec:other_sim_results}. Therefore, we do not inspect the remaining iterations, which deproject more higher-order moments.
To further diagnose the recovered maps, we plot the 1$\sigma$ and 2$\sigma$ contours of the 2D histogram of input versus recovered maps for the cMILC12 (orange) and cMILC15 (blue) iterations in Figure.~\ref{fig:dust_TT_correlation_d1s1} (for thermal dust) and Figure.~\ref{fig:sync_TT_correlation_d1s1} (for synchrotron). We find that most of the pixels are concentrated in a very small region of the distribution for the recovered maps of cMILC12 compared to those of cMILC15. The correlation between the input and recovered maps is also significantly better for cMILC12 than for cMILC15. We find that the correlation coefficients between the input thermal dust maps and the thermal dust estimates of the cMILC12 and cMILC15 iterations are 0.99, 0.78 (for {$Q$ }) and 0.99, 0.67 (for {$U$ }) respectively. Similarly, the correlation coefficients for the synchrotron estimates of the cMILC12 and cMILC15 iterations are 0.99, 0.65 (for {$Q$ }) and 0.99, 0.61 (for {$U$ }) respectively. This further supports the conclusion that using more than seven constraints degrades the performance of the cMILC algorithm for the given sensitivity and frequency coverage.
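The input-output correlation diagnostic used above can be sketched with `np.corrcoef`; the two synthetic "recoveries" below are hypothetical stand-ins for a low-residual and a high-residual iteration, not the actual cMILC12/cMILC15 maps:

```python
import numpy as np

# Toy sketch of the pixel-space correlation diagnostic between an input
# template and two recovered versions of it with different residual levels.
rng = np.random.default_rng(1)
input_map = rng.normal(size=5000)
good = input_map + 0.05 * rng.normal(size=5000)   # low-residual recovery (placeholder)
bad = input_map + 1.0 * rng.normal(size=5000)     # high-residual recovery (placeholder)

r_good = np.corrcoef(input_map, good)[0, 1]       # Pearson correlation coefficient
r_bad = np.corrcoef(input_map, bad)[0, 1]
```

As in the text, the iteration with smaller residual yields a correlation coefficient much closer to unity.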
Based on all these assessments, we conclude that cMILC12 provides the best recovered thermal dust and synchrotron maps for the joint analysis of \textit{WMAP}\xspace\ and {\it Planck\/}\ maps. However, this is not a generic solution for any mission: the performance of cMILC depends on the sensitivity and frequency coverage of the experiments.
\subsection{Comparison of the power spectra}
\label{sec:power_spectra}
In Figure.~\ref{fig:sim_dust_sync_power_d1s1}, we compare the angular power spectra of the thermal dust (\textit{upper panel}) and synchrotron (\textit{lower panel}) maps estimated in the cMILC12 iteration with those of the input maps. We compute the $EE$ and $BB$ power spectra over the {GAL78}\ apodized mask using \ensuremath{\tt Xpol}\ \citep{Tristram:2005}. Results from the input maps are shown in blue and those of the recovered maps in green. The $EE$ and $BB$ power spectra are presented in Figure.~\ref{fig:sim_dust_sync_power_d1s1} with circles and triangles respectively. The 1$\sigma$ uncertainties are analytically estimated using \ensuremath{\tt Xpol}. We fit the power spectra with the power-law model,
\begin{equation}
\label{eq:power-law}
\ensuremath{{\cal D}_{\ell}^{XX}} = A_{XX} (\ell/80)^{\alpha_{XX}+2},
\end{equation}
where $A_{XX}$ is the best-fit amplitude at $\ell =80$, $\alpha_{XX}$ is the best-fit spectral index and $XX=\{EE, BB\}$. We use the $\ell$ range 30-160 for the thermal dust power spectra and 2-140 for the synchrotron power spectra when fitting Eq.~\ref{eq:power-law} with the \ensuremath{\tt MPFIT}\ routine, following the same machinery
as in \cite{planck-XI:2018}. The best-fit power-law model power spectra are shown in dashed lines in Figure.~\ref{fig:sim_dust_sync_power_d1s1}. The corresponding best-fit parameters are listed in Table.~\ref{table3}.
Overall, we find excellent agreement between the power spectra of the input and recovered maps, both for thermal dust and synchrotron. All the parameters are consistent within the 3$\sigma$ statistical uncertainty. Most importantly, we find that the $B$- to $E$-mode power ratio ($A_{BB}/A_{EE}$) measured for both the input and recovered maps is $\sim$0.56 for thermal dust and $\sim$0.34 for synchrotron, very similar to the corresponding values reported in \cite{planck-VI:2018}.
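As an illustration, the power-law model of Eq.~\ref{eq:power-law} can be fitted by a linear fit in log-log space; this is a simplification (the paper itself uses the \ensuremath{\tt MPFIT}\ least-squares routine with analytic error bars), and the amplitude and index below are taken from the thermal dust $EE$ row of Table.~\ref{table3} purely as toy inputs:

```python
import numpy as np

# Sketch: D_ell = A * (ell/80)^(alpha+2) is linear in log-log space,
#   log D_ell = log A + (alpha + 2) * log(ell/80),
# so a degree-1 polyfit recovers the amplitude and spectral index.
ell = np.arange(30, 161)                   # dust fitting range used in the text
A_true, alpha_true = 555.0, -2.30          # toy inputs, motivated by Table 3
D_ell = A_true * (ell / 80.0) ** (alpha_true + 2)

slope, intercept = np.polyfit(np.log(ell / 80.0), np.log(D_ell), 1)
A_fit = np.exp(intercept)                  # amplitude at ell = 80
alpha_fit = slope - 2.0                    # spectral index
```

On noiseless model data the fit recovers the inputs exactly; with real band powers one would weight the fit by the 1$\sigma$ uncertainties instead.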
\begin{table}
\caption{Best-fit parameters of the power-law model fitted to the thermal dust and synchrotron power spectra of the input and recovered maps in the cMILC12 iteration. The range 30 $\leq \ell \leq$ 160 has been used for fitting the thermal dust power spectra, and 2 $\leq \ell \leq$ 140 for the synchrotron power spectra.}
\label{table3}
\begin{centering}
\begin{tabular}{ p{3.2cm} p{2.0cm} p{2.0cm} }
\hline
parameters & input map & output map\\
\hline
\hline
\textbf{thermal dust; $\ell$ = 30-160}& &\\
$A_{EE}$&555.14 $\pm$ 7.61 & 556.84 $\pm$ 7.63 \\
$A_{BB}$& 313.68 $\pm$ 4.35 &314.22 $\pm$ 4.36\\
$A_{BB}/A_{EE}$ & 0.57 $\pm$ 0.02& 0.56 $\pm$ 0.02\\
$\alpha_{EE}$&-2.30 $\pm$ 0.03&-2.31 $\pm$ 0.03\\
$\alpha_{BB}$&-2.17 $\pm$ 0.03&-2.19 $\pm$ 0.03\\
\hline
\textbf{Synchrotron; $\ell$ = 2-140} &&\\
$A_{EE}$&6.91 $\pm$ 0.10 &6.74 $\pm$ 0.09 \\
$A_{BB}$&2.35 $\pm$ 0.03 &2.24 $\pm$ 0.03\\
$A_{BB}/A_{EE}$ & 0.34 $\pm$ 0.01 & 0.33 $\pm$ 0.01\\
$\alpha_{EE}$&-2.50 $\pm$ 0.03&-2.49 $\pm$ 0.03\\
$\alpha_{BB}$&-2.59 $\pm$ 0.03&-2.62 $\pm$ 0.03\\
\hline
\end{tabular}
\end{centering}
\end{table}
\begin{figure*}
\centering
\includegraphics[width=18cm]{dust_variance_maps_modeld1s1a2d4s3a2d7s2a2_ns256_pcmilc.pdf}
\caption{Evolution of the standard deviation of the output maps at 353 \ifmmode $\,GHz$\else \,GHz\fi\ for the simulations in SET1 (green), SET2 (black) and SET3 (magenta) across the cMILC iterations from cMILC01 to cMILC19, in which we pass different subsets of moment SEDs. The \textit{left panel} presents the standard deviations of the recovered thermal dust maps, the \textit{middle panel} those of the moment residual maps, and the \textit{right panel} those of the noise residual maps at 353 \ifmmode $\,GHz$\else \,GHz\fi.}
\label{fig:stat_res_dust}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=18cm]{sync_variance_maps_modeld1s1a2d4s3a2d7s2a2_ns256_pcmilc.pdf}
\caption{Evolution of the standard deviation of the output maps at 30 \ifmmode $\,GHz$\else \,GHz\fi\ for the simulations in SET1 (green), SET2 (black) and SET3 (magenta) across the cMILC iterations from cMILC01 to cMILC19, in which we pass different subsets of moment SEDs. The \textit{left panel} presents the standard deviations of the recovered synchrotron maps, the \textit{middle panel} those of the moment residual maps, and the \textit{right panel} those of the noise residual maps at 30 \ifmmode $\,GHz$\else \,GHz\fi.}
\label{fig:stat_res_sync}
\end{figure*}
\subsection{Statistics of residuals from moment and noise maps}
\label{sec:stat_residuals}
Besides the map-level investigation, it is also important to assess the statistical properties of the estimated maps, of the residual leakage from components that are not projected out, and of the noise residual maps. In Figure.~\ref{fig:stat_res_dust}, we present the standard deviation $\sqrt{C_{353\,{\rm GHz}\,\times\,353\,{\rm GHz}}}$ ($C_{\nu,\nu^{'}}$ is defined in Eq.~\ref{eq:cov}) of the recovered thermal dust map (\textit{left panel}), of the residual leakage from moment maps (\textit{middle panel}) and of the noise residual maps (\textit{right panel}) for the different cMILC iterations. Similarly, in Figure.~\ref{fig:stat_res_sync}, we present the standard deviation $\sqrt{C_{30\,{\rm GHz}\,\times\,30\,{\rm GHz}}}$ of the corresponding maps for the synchrotron estimation. Here, we display the results for all three sets of simulations for easy comparison.
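The plotted statistic is simply the square root of the diagonal element of the empirical channel-channel covariance at the target frequency; a minimal sketch, assuming the maps are stacked as a channel-by-pixel array (random placeholders here, and the last channel standing in for 353 \ifmmode $\,GHz$\else \,GHz\fi):

```python
import numpy as np

# Sketch: empirical covariance C_{nu,nu'} = <d_nu d_nu'> over unmasked pixels,
# and the standard deviation of one channel's map as sqrt of its diagonal entry.
rng = np.random.default_rng(2)
n_ch, n_pix = 12, 4000
d = rng.normal(size=(n_ch, n_pix))      # placeholder multi-channel maps

C = d @ d.T / n_pix                     # second-moment matrix over pixels
std_last = np.sqrt(C[-1, -1])           # e.g. the 353 GHz channel's RMS
```

For zero-mean maps this coincides with the per-channel root-mean-square of the map itself.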
In the \textit{left panels} of Figure.~\ref{fig:stat_res_dust} and Figure.~\ref{fig:stat_res_sync}, we find that the standard deviations of the recovered maps increase with an increasing number of constraints in the cMILC algorithm. However, for iterations that pass the same number of constraints but project out a different subset of moments, the standard deviations may or may not be comparable. For example, the standard deviations of the recovered maps are approximately the same for the iterations from cMILC11 to cMILC14, which pass 7 constraints with different second-order moment SEDs along with all zeroth- and first-order moment SEDs. In contrast, the standard deviations of the recovered maps change across the iterations from cMILC15 to cMILC19, although each of them passes 8 moment SEDs, projecting out a different subset of two second-order moments along with all zeroth- and first-order moments. This implies that, for a fixed number of constraints, the changes in the standard deviations of the recovered maps depend on the subset of moment SEDs passed to the algorithm.
The increasing standard deviation with an increasing number of constraints gives rise to the misleading expectation that projecting out more moments always comes with an additional noise penalty. The \textit{right panels} of Figure.~\ref{fig:stat_res_dust} and Figure.~\ref{fig:stat_res_sync} demonstrate that this is an inaccurate extrapolation. Similarly, the expectation that the leakage from higher-order moments reduces indefinitely with an increasing number of constraints, for a given sensitivity and frequency coverage, is also incorrect. On the contrary, in the \textit{middle panels} of Figure.~\ref{fig:stat_res_dust} and Figure.~\ref{fig:stat_res_sync} we find that, for a given sensitivity and frequency coverage of the experiments, the leakage from higher-order moments decreases up to an optimum number of projected-out moments, where it reaches a minimum; beyond that, the residual increases as more moments are projected out.
Therefore, we emphasize that increasing the number of constraints in the cMILC algorithm does not always come with a noise penalty, nor with an indefinite reduction of the residual from unconstrained moments in the recovered maps. The behaviour is more complicated, depending on the complexity of the foregrounds and on the sensitivity and frequency coverage of the mission.
\section{Conclusion}
\label{sec:conclusion}
In the present work, we develop a new semi-blind component separation method using constrained ILC in the language of moment expansion, introduced in Sect.~\ref{sec:cMILC}. We apply this algorithm to three sets of simulations with varying thermal dust and synchrotron complexity to demonstrate its performance, using the \textit{WMAP}\xspace\ and {\it Planck\/}\ instrument specifications. Our main objective is to estimate the zeroth-order moment maps of thermal dust and synchrotron at the respective pivot frequencies of 353 \ifmmode $\,GHz$\else \,GHz\fi\ and 30 \ifmmode $\,GHz$\else \,GHz\fi\ by projecting out the higher-order moments. The zeroth-order moment maps are in effect the individual foreground templates of the respective components at the respective pivot frequencies, as discussed in Sect.~\ref{sec:cMILC}. We find the best combination of moment SEDs to project out, namely the one that optimizes the trade-off between the residual from unconstrained higher-order moments and the noise degradation in the templates. However, this combination is not generic; it is specific to the sensitivity and frequency coverage of the instruments. We show that the performance of the cMILC method is optimal up to a specific number of constraints for the given instrument sensitivity and channels. Beyond that, the performance of the algorithm deteriorates with increasing constraints, since the residual bias from unconstrained moments increases.
Furthermore, we show that deprojecting more and more higher-order moments does not always come with a noise penalty; it depends on the combination of moment SEDs passed to the algorithm. This aspect would become more apparent when working with highly sensitive instrument data, such as PICO \citep{PICO:2019}, to estimate low signal-to-noise components like the CMB B-mode signal. We do not apply constraints on AME in the present work, since a moment description of this component is not available in the literature. We notice that unconstrained AME introduces an extra bias that is an order of magnitude higher than that from unconstrained moments.
Overall, this is a new method to estimate foreground templates. We develop it for spin-2 fields, and it can easily be extended to spin-0 fields. However, many more foreground components contribute to the intensity maps than to polarization. Developing a moment description for some of the foregrounds in intensity (e.g., AME and CO line emission) will be essential for the optimal performance of the cMILC algorithm. This becomes a high-dimensional problem, and finding the most relevant SEDs to project out with a very limited number of frequency channels (only 12 channels are used in this work) is substantially challenging. Therefore, we do not apply this method to the intensity maps. The number of moment SEDs required for the optimal solution is directly related to the required number of frequency channels at a given sensitivity; the algorithm can thus be useful for optimizing the design of future CMB experiments.
The algorithm we have developed works over any sky fraction. Therefore, in principle, we can jointly analyse ground-based and space-based CMB mission data using this algorithm. The most challenging aspects of working with real data are calibration and beam uncertainties. In the present work, we assume the maps are absolutely calibrated and that the beams are perfectly described by Gaussian FWHMs. However, for real data, the calibration coefficient uncertainty of each channel, a multiplicative factor for each frequency map, introduces an uncertainty in the frequency scaling of each of the components. Therefore, the optimal combination of moment SEDs for a given instrumental sensitivity and frequency coverage may converge to an imperfect solution for the component maps. Beam uncertainties induce a similar bias, which strongly impacts the high-$\ell$ modes, especially for high signal-to-noise data \citep{Basak:2013}. These issues require specific attention to the exact response of the detectors, precise calibration of the instrument, and especially re-calibration of data sets from different instruments inside the algorithm itself. In a follow-up paper, \cite{Adak:2021} (in preparation), we demonstrate the application of the cMILC algorithm to \textit{WMAP}\xspace\ and {\it Planck\/}\ real data, including re-calibration of the data within the same algorithm.
Finally, this algorithm is in principle applicable to recovering any foreground template and moment maps of any order at any frequency. While we mainly focus on the estimation of foreground maps in the current paper, one can extend this work to cleaning the CMB {$Q$ }, {$U$ } maps of foreground contamination over an incomplete sky. Furthermore, the moment expansion method is extremely useful and applicable to extracting the CMB spectral distortion signal \citep{Rotti:2020}, the global 21cm signal, the CMB B-mode signal \citep{Remazeilles:2020}, etc. This approach also allows us to use external templates to minimise the contribution of extra components, similar to internal template fitting \citep{Fernandez-Cobos:2012}.
\section*{Data Availability}
The {GAL78}\ mask is taken from PLA (\url{pla.esac.esa.int/pla/}).
\section*{Acknowledgements}
DA acknowledges the University Grants Commission India for providing financial support as Senior Research Fellow. This work was supported by Science and Engineering Research Board, Department of Science and Technology, Govt. of India grant number SERB/ECR/2018/000826. Some of the computations in this paper are done on the Pegasus cluster\footnote{\url{http://hpc.iucaa.in/}} at IUCAA. DA acknowledges Prof. Tarun Souradeep, Dr. Tuhin Ghosh and Dr. Shabbir Shaikh for useful discussion regarding this work.
\bibliographystyle{mn2e}
\section{Introduction}
\label{sec:intro}
The Wilkinson Microwave Anisotropy Probe (\textit{WMAP}\xspace, \cite{Bennett:2013}) observed the microwave sky in five frequency bands ranging from 23 to 94 \ifmmode $\,GHz$\else \,GHz\fi\ at resolutions varying between 52\rlap{.}$^{\scriptstyle\prime}$\ and 12\rlap{.}$^{\scriptstyle\prime}$. More recently, {\it Planck\/}\ provided full-sky maps in nine frequency bands ranging from 30 \ifmmode $\,GHz$\else \,GHz\fi\ to 857 \ifmmode $\,GHz$\else \,GHz\fi\ with beam sizes ranging from 32\rlap{.}$^{\scriptstyle\prime}$\ to 5\rlap{.}$^{\scriptstyle\prime}$. The last two {\it Planck\/}\ channels (545 and 857 \ifmmode $\,GHz$\else \,GHz\fi) are not polarization-sensitive and are mainly designed for intensity observations. All these multi-frequency maps are mixtures of cosmological, Galactic and extra-galactic components (e.g., CMB anisotropies, thermal dust, synchrotron, spinning dust/Anomalous Microwave Emission (AME), faint/strong radio and infrared sources, thermal/kinetic Sunyaev-Zeldovich (tSZ/kSZ) effects, etc.). For polarization, however, the spectrum is less complex. The high-frequency end is dominated by thermal emission from Galactic dust \citep{planck-XXI:2015}, while the low-frequency bands are synchrotron dominated. In addition, hints of polarized AME have been found \citep{Leitch:1997,Finkbeiner:2004}; this component appears to play an important role at 10-60 \ifmmode $\,GHz$\else \,GHz\fi\ \citep{de_Oliveira-Costa:2004}, with a low polarization degree (1-2\%, \cite{Genova-Santos:2017}).
Separating the astrophysical sources is a crucial step in the scientific exploitation of such rich data. Over the past few years, the study of Galactic thermal dust and synchrotron has become tied up with observational cosmology \citep{Hazumi:2019, SO:2020, CMB-S4:2016, PICO:2019}, which is searching for the primordial B-mode polarization of the CMB, a proof of the epoch of inflation \citep{Guth:1981}. The reason for this entanglement is that the expected B-mode signal imprinted on the CMB by primordial gravitational waves during inflation is highly obscured by the polarized Galactic emission of thermal dust and synchrotron \citep{planck-I:2020}. The level of contamination depends on the energy scale of inflation \citep{Knox:2002}. Separated foreground maps will therefore help in building accurate models of thermal dust and synchrotron polarization \citep{T_Ghosh:2017, Adak:2019, Guillet:2018, Regaldo:2020, Clark:2019, Fauvet:2011}. Furthermore, the component maps will aid the detailed understanding of thermal dust and synchrotron emission, the Galactic magnetic field, Galactic astrophysics, etc.
Several component separation methods have been developed over the past decades to clean the CMB signal of foregrounds, systematic effects and extra-galactic emission. For intensity data, the techniques widely used in the \textit{WMAP}\xspace\ and {\it Planck\/}\ missions to clean the CMB temperature of other contamination are ILC \citep{Tegmark:1997}, \ensuremath{\tt SMICA}\ \citep{Delabrouille:2003}, \ensuremath{\tt Commander}\ \citep{Eriksen:2008}, \ensuremath{\tt NILC}\ \citep{Basak:2011}, \ensuremath{\tt SEVEM}\ \citep{Fernandez-Cobos:2012}, \ensuremath{\tt SILC}\ \citep{SILC:2016}, \ensuremath{\tt L-GMCA}\ \citep{LGMCA:2014} and many more. Of these methods, \ensuremath{\tt Commander}\ is a Bayesian fitting technique that can provide all astrophysical foreground maps along with the cleaned CMB map. A generalized version of Needlet ILC, called \ensuremath{\tt GNILC}\ \citep{planck-XLVIII:2016}, estimates the thermal dust maps, disentangling them from the other Galactic foregrounds and the Cosmic Infrared Background emission. Not all of the methods mentioned above provide foreground polarization maps; only updated versions of \ensuremath{\tt SMICA}, \ensuremath{\tt Commander}\ and \ensuremath{\tt GNILC}\ provide polarized thermal dust and synchrotron maps.
Our interest lies in applying the ILC method to separate thermal dust and synchrotron polarization templates using multi-frequency data. The standard ILC method is extensively used to recover the CMB temperature map as a weighted sum of multi-frequency data \citep{Tegmark:1997, Basak:2011, Eriksen:2004}. This paper presents another application of ILC, aiming to estimate foreground signals whose electromagnetic spectrum is known. The simplicity of ILC lies in the fact that it does not assume anything about the model of the components: ILC estimates the weights by minimizing the variance of the resulting map. The minimization is generally done either in pixel space \citep{Tegmark:1997} or in harmonic space \citep{Kim:2009}. This method is only applicable to spin-0 fields, whose quantities are not projected in local frames. In the case of polarization, however, we must deal with components whose polarization vectors are projected in the local frame. Stokes {$Q$ } and {$U$ } are not defined in a global reference frame like temperature, so the mean and variance of the individual spinorial components are not defined, and a naive extension of ILC to the individual {$Q$ }, {$U$ } fields is therefore not possible. A straightforward way to apply a similar version of the ILC method to polarization data is to work on E- and B-mode maps \citep{Basak:2013}. However, in a real scenario only partial-sky polarization data are commonly available, and decomposing them into E- and B-maps is not a trivial task. \cite{PILC:2016} developed an algorithm generalizing the standard ILC method that applies to {$Q$ } $\pm$ i{$U$ }, called polarization ILC (PILC). Although {$Q$ } $\pm$ i{$U$ } transforms like a spin-2 variable \citep{Hu_and_White:1997}, since the PILC approach is based on minimizing a covariant quantity, it preserves the coherence of the spinorial description. The performance of the standard ILC has limitations.
It assumes all components are spatially uncorrelated, whereas the Galactic foregrounds are not; for example, polarized thermal dust and synchrotron are found to be correlated \citep{Choi_and_Page:2015}. However, adding multiple constraints to reduce the contamination from other astrophysical components can significantly improve the standard ILC's performance. This method is called constrained ILC (cILC, \cite{Remazeilles:2011}). \cite{Remazeilles:2011} use this method for the simultaneous estimation of CMB and thermal Sunyaev-Zeldovich emission. \cite{Hurier_2013} present this method in a more general form.
In this paper, we develop an algorithm combining an extended version of the PILC method for heavily constrained equations (similar to cILC) with the recently developed moment expansion of foreground modelling of \cite{Chluba:2017}. Moment expansion is a powerful approach proposed by \cite{Chluba:2017} to describe the unknown complexity of the foregrounds due to variations of the spectral properties along the line of sight (LOS), inside the beam and across the sky. In short, moment expansion is a perturbative approach to foreground modelling under some assumption about the spectral energy distribution (SED) of the components. Our method is therefore a semi-blind component separation algorithm that operates at the interface of blind and parametric component separation methods. In the current paper, we aim to demonstrate the performance of this algorithm in the estimation of thermal dust and synchrotron {$Q$ }, {$U$ } maps at 353 \ifmmode $\,GHz$\else \,GHz\fi\ and 30 \ifmmode $\,GHz$\else \,GHz\fi\ respectively. We use three sets of \textit{WMAP}\xspace\ and {\it Planck\/}\ simulated maps with varying foreground complexity. The purpose of using different sets of simulations is to check the robustness of the algorithm independently of the complexity of the foreground model. A similar method has been applied in \cite{Remazeilles:2020} for CMB B-mode recovery, in \cite{Remazeilles_chluba:2020} for mapping the relativistic tSZ effect, and in \cite{Rotti:2020} for the recovery of the spectral distortion signal. We anticipate that a similar method can also be applicable to global 21 cm signal recovery.
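The constrained-ILC weights at the heart of this construction can be sketched in a few lines. This is an illustrative toy with placeholder SEDs and a random covariance, not the authors' implementation: the weight vector is $w = C^{-1}F\,(F^{T}C^{-1}F)^{-1}e$, where the columns of $F$ hold the constrained SEDs and $e$ selects which component is recovered with unit response.

```python
import numpy as np

# Toy cILC weight construction: unit response to a "dust" SED, zero response
# to a "CMB" SED, minimum variance otherwise. SEDs and covariance are fake.
rng = np.random.default_rng(3)
n_ch = 12
C = np.cov(rng.normal(size=(n_ch, 500)))        # placeholder channel covariance

f_dust = np.linspace(1.0, 3.0, n_ch)            # placeholder dust SED
f_cmb = np.ones(n_ch)                           # placeholder CMB SED (flat in RJ-free toy units)
F = np.column_stack([f_dust, f_cmb])            # constrained mixing matrix
e = np.array([1.0, 0.0])                        # recover dust, null CMB

Ci = np.linalg.inv(C)
w = Ci @ F @ np.linalg.inv(F.T @ Ci @ F) @ e    # constrained-ILC weights
```

By construction the weights satisfy $w^{T}f_{\rm dust}=1$ and $w^{T}f_{\rm cmb}=0$, which is exactly the deprojection behaviour the moment constraints generalize.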
The paper is organized as follows. In Sect.~\ref{sec:data}, we describe the simulated data sets and the binary mask used in the paper. Section.~\ref{sec:method} summarizes the methods applied in the analysis. In Sect.~\ref{sec:srategy}, we explain the strategy for implementing the method discussed in Sect.~\ref{sec:method}. In Sect.~\ref{sec:opt_sim}, we discuss the main results. Finally, in Sect.~\ref{sec:conclusion}, we conclude.
\section{Data used}
\label{sec:data}
In this section, we describe the Galactic mask and simulated data used in this paper.
\subsection{Global Mask used}
\label{sec:mask}
Due to the anisotropic nature of the foreground contributions, applying the ILC method over the full-sky data is not the most efficient approach. Therefore, we restrict the analysis to the intermediate-to-high Galactic latitudes. We use the 78\% Galactic mask publicly available in the Planck Legacy Archive\footnote{\url{pla.esac.esa.int/pla}}. The mask is provided on the \ensuremath{\tt HEALPix}\footnote{\url{https://healpix.jpl.nasa.gov/}} \citep{Gorski:2005} grid at \ensuremath{N_{\rm side}}\ = 2048. We downgrade the mask to \ensuremath{N_{\rm side}}\ = 256 and present it in Figure.~\ref{fig:mask}. Hereafter, we call this mask {GAL78}.
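The downgrade step can be sketched without healpy by exploiting HEALPix NESTED ordering, in which each parent pixel owns a contiguous block of children; in practice one would use healpy's `ud_grade`, and the toy below uses a smaller \ensuremath{N_{\rm side}}\ pair (512 $\to$ 64) than the paper's 2048 $\to$ 256 purely to keep memory small, with the same 64:1 child ratio:

```python
import numpy as np

# Toy HEALPix mask downgrade in NESTED ordering: average each block of
# (nside_in/nside_out)^2 child pixels, then re-binarize with a threshold.
nside_in, nside_out = 512, 64                   # same 64:1 ratio as 2048 -> 256
ratio = (nside_in // nside_out) ** 2            # children per parent pixel

mask_in = np.ones(12 * nside_in**2)             # toy full-sky binary mask, NESTED order
mask_in[: 12 * nside_in**2 // 2] = 0.0          # mask half the sky

mask_down = mask_in.reshape(-1, ratio).mean(axis=1)   # fraction of unmasked children
mask_out = (mask_down > 0.9).astype(float)      # keep only mostly-unmasked parents
```

The threshold choice (0.9 here) is an assumption for illustration; it controls how aggressively partially masked parent pixels are discarded.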
\begin{figure}
\centering
\includegraphics[width=8.4cm]{mask_gal_ns256.pdf}
\caption{The {GAL78}\ mask that comprises 78\% of the sky. The masked region is shown in grey and the sky used for the analysis in this paper is shown in red.}
\label{fig:mask}
\end{figure}
\subsection{Simulated data}
\label{sec:sim}
We use PySM\footnote{\url{https://github.com/bthorne93/PySM_public}} \citep{pysm:2017} to simulate Stokes IQU maps. We use the $K, Ka, Q, V, W$ \textit{WMAP}\xspace\ bands, all {\it Planck\/}\ Low-Frequency Instrument (LFI, \cite{Mennella:2011}) bands, and the High-Frequency Instrument (HFI, \cite{planck-IV:2011}) polarization-sensitive bands in the simulations. The maps are smoothed to a common resolution of FWHM = 1$^{\ifmmode^\circ\else$^\circ$\fi}$ and projected onto the \ensuremath{\tt HEALPix}\ grid at \ensuremath{N_{\rm side}}\ = 256. We consider CMB, thermal dust, synchrotron, AME and instrument noise in all simulations. We express the final maps in Rayleigh-Jeans (RJ) units. For the CMB, we use a realization of fully lensed maps with tensor-to-scalar ratio $r$ = 0.0. The values of the cosmological parameters are motivated by the recent {\it Planck\/}\ values reported in \cite{planck-VI:2018}. The \textit{WMAP}\xspace\ polarization noise RMS values ($\sigma_0$) are 1435, 1472, 2197, 3141, 6560 $\ifmmode \,\mu$K$\else \,$\mu$\hbox{K}\fi$ at the $K, Ka, Q, V, W$ bands respectively. We compute the noise RMS at each pixel following $\sigma_{w} (p) = \sigma_0/\sqrt{N_{obs} (p)}$, where $N_{obs} (p)$ is the \textit{WMAP}\xspace\ scanning pattern at \ensuremath{N_{\rm side}}\ = 512. Finally, we simulate white-noise maps from the $\sigma_{w} (p)$ maps at \ensuremath{N_{\rm side}}\ = 512, smooth them to FWHM = 1$^{\ifmmode^\circ\else$^\circ$\fi}$ and downgrade them to \ensuremath{N_{\rm side}}\ = 256. We use FFP10 noise maps \citep{planck-x:2016} for the {\it Planck\/}\ frequencies, which are available in the PLA. We use the $\ensuremath{\tt a2}$ model (AME is denoted by $\ensuremath{\tt a}$) for simulating AME, where 2\% global polarization is introduced as described in \citep{pysm:2017}. We finally prepare the following three sets of simulations with different thermal dust and synchrotron models in PySM, which we describe below.
\begin{itemize}
\item
SET1: We use the PySM $\ensuremath{\tt d1s1}$ model, where thermal dust and synchrotron are denoted by $\ensuremath{\tt d}$ and $\ensuremath{\tt s}$ respectively and the corresponding base models are described in \cite{pysm:2017}. In the $\ensuremath{\tt d1s1}$ model, PySM follows a modified blackbody (MBB) spectrum for thermal dust and a power-law for synchrotron. It uses the \ensuremath{\tt Commander}\ recovered thermal dust template at 353 \ifmmode $\,GHz$\else \,GHz\fi\ \citep{planck-x:2016} and the \textit{WMAP}\xspace\ 23 GHz map \citep{Bennett:2013} as the synchrotron template for polarization. The thermal dust temperature and spectral index maps used here are derived using \ensuremath{\tt Commander}. The synchrotron spectral index map is taken from \cite{Miville-Desch:2008}. \\
\item
SET2: We use the PySM $\ensuremath{\tt d4s3}$ model. This model uses a two-component thermal dust model with the templates derived in \citep{Meisner:2014}. $\ensuremath{\tt s3}$ follows a curved power-law model with a baseline curvature value of $-0.052$ at 23 \ifmmode $\,GHz$\else \,GHz\fi. \\
\item
SET3: We use the PySM $\ensuremath{\tt d7s1}$ model, where the thermal dust model is replaced by a dust grain characterization based model described in \cite{Brandon:2017}.
\end{itemize}
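As a minimal illustration of the per-pixel white-noise generation described above, the following numpy sketch draws a noise realization from the scanning pattern; the hits map here is mock data and all values are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

sigma_0 = 1435.0  # WMAP K-band polarization RMS in muK (for a single observation)
# Mock hits map standing in for the WMAP N_obs scanning pattern at Nside = 512
n_obs = rng.integers(500, 3000, size=12 * 512**2).astype(float)

# Per-pixel noise RMS: sigma_w(p) = sigma_0 / sqrt(N_obs(p))
sigma_w = sigma_0 / np.sqrt(n_obs)

# One white-noise realization with that per-pixel RMS
noise_map = rng.normal(0.0, sigma_w)

assert sigma_w.shape == noise_map.shape
assert np.all(sigma_w < sigma_0)
```

In the actual pipeline this map would then be smoothed to FWHM = 1$^\circ$ and downgraded to \ensuremath{N_{\rm side}}\ = 256 (e.g. with healpy), which is omitted here.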
\section{Methods}
\label{sec:method}
\subsection{Moment expansion of foreground emissions}
Foreground emissions are thought to be a superposition of the emission from individual emitting blocks that can be characterized by varying SEDs. Therefore, when we observe the sky within some beam, line-of-sight and spatial averages over the SEDs are inevitable. These effects alter the spectral properties of the observed emission. For example, although the spectral properties of the synchrotron emission can be described by a power-law model for individual blocks, after averaging inside the beam it is no longer a power-law \citep{Remazeilles:2020}. This effect results in frequency-frequency decorrelation. Aside from the above two averaging effects, downgrading the maps to lower angular resolution also gives rise to a spectral averaging effect.
\cite{Chluba:2017} proposed the moment expansion method, a unique approach to foreground modelling that takes all of these averaging effects into account. In this section, we briefly describe the moment expansion method of \cite{Chluba:2017} and apply it specifically to the thermal dust and synchrotron SEDs.
The Galactic foregrounds can be considered as a collection of emissions of amplitude $\delta I_{\nu}(p, s)$ from different emitting layers along each LOS, where $p$ denotes the pixel and $s$ the distance of the layer along the LOS. Let us assume that we know the form of the spectral properties $f(\nu , \boldsymbol{\beta})$ of the components, where $\boldsymbol{\beta} \equiv [{\beta}_1, {\beta}_2, .., {\beta}_n] (p, s)$ denotes the general form of the spectral parameters of the component of interest (e.g., for thermal dust the spectral parameters are the dust temperature $T_d (p, s)$ and the spectral index $\beta_{d} (p, s)$). The spectral properties likely vary across the sky inside the instrumental beam as well as along the LOS. However, averaging along the LOS and inside the instrumental beam have physically the same effect, leading to a mixture of the SEDs of the emitting layers. Considering that there are infinitely many layers along each LOS, we can statistically model the total observed emission $I_{\nu} (p)$\footnote{Here, by $I_{\nu} (p)$, we denote Stokes $I (p)$, $Q (p)$, $U (p)$ or $E (p)$, $B (p)$ at some frequency $\nu$. Hereafter, $p$ is the central pixel of the beam.} as the overall observed amplitude $I_{\nu_{0}} (p)$ at some pivot frequency $\nu_{0}$ multiplied by the statistical average of the SEDs, along the LOS and inside the beam, $f(\nu , \boldsymbol{\beta} (p))$:
\begin{align}
\label{eq:fg}
I_{\nu} (p) = I_{\nu_{0}} (p) f(\nu , \boldsymbol{\beta} (p ))
\end{align}
As shown in \cite{Chluba:2017}, we can expand $f(\nu , \boldsymbol{\beta} (p ))$ using a multi-dimensional Taylor series as\footnote{We follow the convention: ${ \partial\beta_1^{\, i} \partial\beta_2^{\, j}\cdots \partial\beta_n^{\,k} f \left(\nu, \, \overline{\boldsymbol{\beta}}\right)} = {\partial^{\,i+j+\cdots+k} f(\nu, \boldsymbol{\beta})\over{\partial\beta_1^{\, i} \partial\beta_2^{\, j}\cdots \partial\beta_n^{\,k}}}\Big|_{\overline{\boldsymbol{\beta}}}$},
\begin{align}
\label{eq:f_moment_expansion}
f(\nu ,\boldsymbol{\beta}(p))&=f (\nu, \overline{\boldsymbol{\beta}})
+\sum_i (\beta_i (p) -\overline{\beta}_i) \,\partial_{{\beta}_i} f(\nu, \overline{\boldsymbol{\beta}})
\nonumber\\[-0.5mm]
&\!\!\!\!
+\frac{1}{2!}\sum_i \sum_j (\beta_i (p) -\overline{\beta}_i)(\beta_j (p) -\overline{\beta}_j) \,\partial_{{\beta}_i}\partial_{{\beta}_j} f (\nu , \overline{\boldsymbol{\beta}})
\nonumber\\[-0.5mm]
&\quad+ \ldots,
\end{align}
where $\overline{\boldsymbol{\beta}} \equiv [\overline{\beta}_1, \overline{\beta}_2, .., \overline{\beta}_n]$ is the pivot value of the SED vector.
The moment map of order $i + j + \cdots + k$ is defined in \cite{Chluba:2017} as:
\begin{flalign}
\label{eq:moment}
m_{ij...k} (p)
= I_{\nu_{0}} (p){\left(\beta_1(p)-\overline{\beta}_1\right)^{i}\left(\beta_2(p)-\overline{\beta}_2\right)^{j}\cdots\left(\beta_n(p)-\overline{\beta}_n\right)^{k}\over i!j!\cdots k!}.
\end{flalign}
The beauty of this approach is that the foregrounds can be expressed in terms of spatially varying moments, each with constant spectral properties across the sky, given by,
\begin{align}
{ \partial\beta_1^{\, i} \partial\beta_2^{\, j}\cdots \partial\beta_n^{\,k} f \left(\nu, \, \overline{\boldsymbol{\beta}}\right)}.
\end{align}
One can now consider the moment maps $m_{ij...k} (p)$ as different astrophysical components of the total foreground contribution in the multi-frequency data. These components can easily be incorporated into the cILC framework, which is described in Sect.~\ref{sec:cILC}.
In the present work, we consider thermal dust and synchrotron as the main polarized foreground components. We apply the moment expansion to these two components as described below.
It is widely accepted that the synchrotron emission follows a power-law in RJ units,
\begin{align}
f_{\rm sync}\left(\nu, \beta_s(p)\right) = \left({\nu \over \nu_s}\right)^{\beta_s(p)},
\end{align}
where $\beta_s(p)$ is the synchrotron spectral index map.
The thermal dust follows the MBB spectrum,
\begin{align}
f_{\rm dust}\left(\nu, \beta_d(p), T_d(p)\right) = \left({\nu \over \nu_d}\right)^{\beta_d(p)+1} {\exp\left({h\nu_d\over k_BT_d(p)}\right)-1\over \exp\left({h\nu\over k_BT_d(p)}\right)-1},
\end{align}
in RJ units, where $\beta_d(p)$ and $T_{d} (p)$ denote the dust spectral index and temperature maps respectively.
Implementing the moment expansion for the synchrotron spectral parameter up to second order yields,
\begin{align}
\label{eq:sync_moments}
I_{\rm sync, \nu}(p) &= I_{\nu_s}(p) \left( \frac{\nu}{\nu_s}\right)^{\,\overline{\beta}_s\,\left(1+\frac{\Delta \beta_s(p) }{\overline{\beta}_s}\right)}\cr
&=I_{\nu_s}(p) \bigg[ f_{\rm sync} \left(\nu,\overline{\beta}_s\right)\cr
&+ \,\Delta \beta_s(p)\,\partial_{{\beta}_s} f_{\rm sync} \left(\nu,\overline{\beta}_s\right)
\\ \nonumber
&+{1\over 2} \,\,\Delta \beta^2_s(p)\,\partial^2_{{\beta}_s} f_{\rm sync} \left(\nu,\overline{\beta}_s\right)+\cdots \bigg],
\end{align}
where $\Delta \beta_s(p)=\beta_s(p) - \overline{\beta}_s$, and
\begin{flalign}
\label{eq:sync_moments1}
f_{\rm sync} \left(\nu,\overline{\beta}_s\right) &= \left({\nu\over\nu_s}\right)^{\overline{\beta}_s},\nonumber \\
\partial_{{\beta}_s} f_{\rm sync} \left(\nu,\overline{\beta}_s\right) &= \ln\left({\nu\over\nu_s}\right)f_{\rm sync} \left(\nu,\overline{\beta}_s\right),\\
\partial^2_{{\beta}_s} f_{\rm sync} \left(\nu,\overline{\beta}_s\right) &= \left[\ln\left({\nu\over\nu_s}\right)\right]^2f_{\rm sync} \left(\nu,\overline{\beta}_s\right). \nonumber
\end{flalign}
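As a quick numerical sanity check of the truncated expansion above, the following numpy sketch compares the exact power-law SED with its second-order moment expansion around a pivot index; the pivot and frequency values are illustrative.

```python
import numpy as np

nu_s = 30.0                       # pivot frequency in GHz (illustrative)
beta_bar = -3.0                   # pivot synchrotron spectral index
nu = np.array([23., 30., 44., 70., 100.])

def f_sync(nu, beta):
    """Power-law synchrotron SED (nu / nu_s)^beta."""
    return (nu / nu_s) ** beta

# Moment SEDs up to second order (analytic derivatives in beta)
f0 = f_sync(nu, beta_bar)
f1 = np.log(nu / nu_s) * f0       # d f_sync / d beta
f2 = np.log(nu / nu_s) ** 2 * f0  # d^2 f_sync / d beta^2

dbeta = 0.2                       # beta_s(p) - beta_bar for one line of sight
exact = f_sync(nu, beta_bar + dbeta)
second_order = f0 + dbeta * f1 + 0.5 * dbeta**2 * f2

# The truncated expansion tracks the exact SED closely for small dbeta
assert np.allclose(second_order, exact, rtol=1e-2)
```

The residual of the truncation grows with $|\Delta\beta_s|$ and with $|\ln(\nu/\nu_s)|$, which is why the choice of pivot matters.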
Similarly, for thermal dust, the moment expansion yields,
\begin{align}
\label{eq:dust_moments}
I_{\rm dust,\, \nu}(p)=\,&I_{\nu_d}(p) \bigg[ f_{\rm dust} \left(\nu,\overline{\beta}_d\right)\cr
&+\,\Delta \beta_d(p)\;\partial_{{\beta}_d} f_{\rm dust} \left(\nu,\overline{\beta}_d, \overline{T}_{\!d}\right)\cr
&+\,\Delta T_d(p)\;\partial_{{T}_d} f_{\rm dust} \left(\nu,\overline{\beta}_d, \overline{T}_{\!d}\right)\cr
&+{1\over 2}\,\,\Delta \beta^2_d(p)\;
\partial^2_{{\beta}_d} f_{\rm dust} \left(\nu,\overline{\beta}_d, \overline{T}_{\!d}\right)\cr
&+\,
\Delta \beta_d(p)
\Delta T_d(p)\;
\partial_{{\beta}_d}\partial_{{T}_d} f_{\rm dust} \left(\nu,\overline{\beta}_d, \overline{T}_{\!d}\right)\cr
&+{1\over 2}\,\,
\Delta T^2_d(p)\;
\partial^2_{{T}_d} f_{\rm dust} \left(\nu,\overline{\beta}_d, \overline{T}_{\!d}\right)\cr
&+\cdots \bigg],
\end{align}
where $\Delta \beta_d(p)=\beta_d(p) - \overline{\beta}_d$, $\Delta T_d(p)=T_d(p) - \overline{T}_d$, and
\begin{flalign}
\label{eq:dust_moments1}
&f_{\rm dust} \left(\nu,\overline{\beta}_d, \overline{T}_{\!d}\right) = \left({\nu \over \nu_d}\right)^{\overline{\beta}_d+1} {\exp\left({\overline{x}_d}\right)-1\over \exp\left({\overline{x}}\right)-1},\nonumber\\
&\partial_{{\beta}_d} f_{\rm dust} \left(\nu,\overline{\beta}_d, \overline{T}_{\!d}\right) = \ln\left({\nu\over\nu_d}\right)f_{\rm dust} \left(\nu,\overline{\beta}_d, \overline{T}_{\!d}\right),\nonumber\\
&\partial_{{T}_d} f_{\rm dust} \left(\nu,\overline{\beta}_d, \overline{T}_{\!d}\right) = {1\over\overline{T}_{\!d}}\left[{ \overline{x}\exp \left( {\overline{x}} \right) \over \exp \left( {\overline{x}} \right) - 1} - { \overline{x}_d\exp \left( {\overline{x}_d} \right) \over \exp \left( {\overline{x}_d} \right) - 1 }\right] f_{\rm dust}\left(\nu, \overline{\beta}_d, \overline{T}_{\!d}\right),\nonumber\\
&\partial^2_{{\beta}_d} f_{\rm dust} \left(\nu,\overline{\beta}_d, \overline{T}_{\!d}\right) = \left[\ln\left({\nu\over\nu_d}\right)\right]^2f_{\rm dust} \left(\nu,\overline{\beta}_d, \overline{T}_{\!d}\right),\\
&\partial^2_{{T}_d} f_{\rm dust} \left(\nu,\overline{\beta}_d, \overline{T}_{\!d}\right) = \left[\overline{x}\coth\left({\overline{x}\over 2}\right) - \overline{x}_d\coth\left({\overline{x}_d\over 2}\right)\right] {1\over \overline{T}_{\!d}}\partial_{{T}_d} f_{\rm dust} \left(\nu,\overline{\beta}_d, \overline{T}_{\!d}\right),\nonumber\\
&\partial_{{\beta}_d}\partial_{{T}_d} f_{\rm dust} \left(\nu,\overline{\beta}_d, \overline{T}_{\!d}\right) = \ln\left({\nu\over\nu_d}\right)\partial_{{T}_d} f_{\rm dust} \left(\nu,\overline{\beta}_d, \overline{T}_{\!d}\right) \nonumber
\end{flalign}
are the moment SEDs up to second order. Here, $\overline{x} = {h \nu\over k_B \overline{T}_d}$ and $\overline{x}_d = {h \nu_d\over k_B \overline{T}_d}$.
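The analytic moment SEDs above can be verified numerically. The sketch below, with illustrative pivot values, checks the first-order $\beta_d$ derivative of the MBB spectrum against a central finite difference.

```python
import numpy as np

h_over_k = 0.04799          # h / k_B in K / GHz
nu_d = 353.0                # pivot frequency in GHz (illustrative)
beta_bar, T_bar = 1.55, 20.0  # illustrative pivot spectral index and temperature

def f_dust(nu, beta, T):
    """MBB SED in RJ units: (nu/nu_d)^(beta+1) * (e^{x_d}-1)/(e^{x}-1)."""
    x_d = h_over_k * nu_d / T
    x = h_over_k * nu / T
    return (nu / nu_d) ** (beta + 1) * (np.expm1(x_d) / np.expm1(x))

nu = np.array([100., 143., 217., 353.])

# Analytic first-order beta moment SED: ln(nu/nu_d) * f_dust
d_beta = np.log(nu / nu_d) * f_dust(nu, beta_bar, T_bar)

# Central finite-difference check of the same derivative
eps = 1e-6
d_beta_num = (f_dust(nu, beta_bar + eps, T_bar)
              - f_dust(nu, beta_bar - eps, T_bar)) / (2 * eps)

assert np.allclose(d_beta, d_beta_num, rtol=1e-5)
```

The same check can be repeated for the $T_d$ derivatives and the second-order terms.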
\subsection{Basics of ILC algorithm}
\label{sec:cleaning}
This section reviews the different methodologies for implementing ILC based algorithms, which allow us to deal with components of different spin. First, we review the implementation of the standard ILC for the temperature field (a spin-0 field) in Sect.~\ref{sec:T_ILC}. In Sect.~\ref{sec:ILC}, we describe the generalization of the standard ILC to the spinorial frame. Next, we review the extension of the standard ILC method to a set of constraint equations, called cILC, in Sect.~\ref{sec:cILC}. Finally, in Sect.~\ref{sec:cMILC}, we describe the application of cILC in the framework of the moment expansion in the context of the current paper.
\subsubsection{Temperature implementation of standard ILC}
\label{sec:T_ILC}
The total observed temperature map $T_{\nu} (p)$ at frequency $\nu$ is assumed to be a combination of all astrophysical and cosmological signals,
\begin{equation}
\label{eq:T_ilc_data}
T_{\nu} (p) = a_{\nu} S_c (p) + n_{\nu} (p),
\end{equation}
where $S_{c} (p)$ is the $c$th component, with electromagnetic spectrum $a_{\nu}$ that we assume to be constant over the sky. $n_{\nu} (p)$ contains the remaining components and the noise in the temperature data at frequency $\nu$. For convenience, let us rewrite Eq.~\ref{eq:T_ilc_data} in vector form for all $N_{obs}$ channels,
\begin{equation}
\label{eq:T_ilc_data_v}
\textbf{T} (p) = \boldsymbol{a}{S_c} (p) + \textbf{n} (p),
\end{equation}
where the vectors $\textbf{T} (p)$ and $\textbf{n} (p)$ contain the data and noise for all frequencies.
In the standard ILC framework, the estimated component is,
\begin{equation}
\label{eq:T_weighted_sum}
\hat{S}_c (p) = \sum_{\nu} w_{\nu} T_{\nu}(p),
\end{equation}
that has minimum variance, i.e.,
\begin{equation}
\label{eq:T_variance}
\frac{\partial}{\partial \boldsymbol{w}}\boldsymbol{w^T \mathscr{C} w} = 0,
\end{equation}
where $\boldsymbol{\mathscr{C}} = \left\langle \boldsymbol{T} (p) \boldsymbol{T}^{T} (p) \right\rangle$ is the $N_{obs} \times N_{obs}$ covariance matrix of the temperature data maps and $\left\langle .. \right\rangle$ denotes an average over all pixels inside the region of interest. $w_{\nu}$ is the ILC weight at frequency $\nu$.
For an unbiased estimate of $\hat{S}_c (p)$, the ILC weights $\boldsymbol{w^{T}}$ = $(w_{1}, w_2, ... w_{N_{obs}})$ must satisfy the constraint,
\begin{equation}
\label{eq:T_constrain}
\boldsymbol{w^T a} = 1.
\end{equation}
Combining Eq.~\ref{eq:T_variance} and Eq.~\ref{eq:T_constrain} with Lagrange multiplier $\lambda$, we get,
\begin{equation}
\label{eq:T_lagrange}
\frac{\partial}{\partial \boldsymbol{w}} \left [\boldsymbol{w^T \mathscr{C} w} + \lambda (1 - \boldsymbol{w^T a})\right] = 0.
\end{equation}
Solving Eq.~\ref{eq:T_lagrange}, the ILC weights are determined as,
\begin{equation}
\label{eq:T_opt_weight}
\boldsymbol{w}^{T} = \boldsymbol{ a^{T} \mathscr{C}^{-1} (a^{T} \mathscr{C}^{-1} a)^{-1}}.
\end{equation}
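The weight formula above is easy to exercise on mock data. The following numpy sketch builds a toy multi-frequency temperature data set (all numbers illustrative), computes the ILC weights $\boldsymbol{w}^{T} = \boldsymbol{a}^{T}\boldsymbol{\mathscr{C}}^{-1}(\boldsymbol{a}^{T}\boldsymbol{\mathscr{C}}^{-1}\boldsymbol{a})^{-1}$, and verifies the unit-response constraint and the variance reduction.

```python
import numpy as np

rng = np.random.default_rng(1)
n_freq, n_pix = 5, 10000

# Mock multi-frequency data: one component with flat SED (a_nu = 1) plus noise
a = np.ones(n_freq)                       # mixing vector of the target component
s_c = rng.normal(0.0, 100.0, n_pix)       # target component map
noise = rng.normal(0.0, 20.0, (n_freq, n_pix))
T = a[:, None] * s_c + noise

# Empirical frequency-frequency covariance C = <T T^T>
C = T @ T.T / n_pix

# ILC weights: w^T = a^T C^-1 (a^T C^-1 a)^-1
Cinv_a = np.linalg.solve(C, a)
w = Cinv_a / (a @ Cinv_a)

assert np.isclose(w @ a, 1.0)             # unit response to the target SED
s_hat = w @ T                             # estimated component map
# Residual noise in the ILC map is below the single-channel noise level
assert np.std(s_hat - s_c) < np.std(noise[0])
```

With equal noise in every channel the weights reduce to $1/N_{obs}$, and the residual noise shrinks by roughly $\sqrt{N_{obs}}$.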
\subsubsection{ILC in polarization}
\label{sec:ILC}
The straightforward generalization of the standard ILC to polarization is to apply the method described in Sect.~\ref{sec:T_ILC} to the {$E$}- and {$B$ } maps decomposed from the {$Q$ }, {$U$ } maps. However, decomposing {$Q$ }, {$U$ } maps into {$E$}- and {$B$ } maps over an incomplete sky is not a trivial task, because some amount of {$E$}-mode leaks into the {$B$}-mode maps during decomposition over an incomplete sky. \cite{PILC:2016} generalized the standard ILC for the $P_{\nu}^{\pm}(p) = Q_{\nu} (p) \pm iU_{\nu}(p)$ maps, which transform like spin-2 fields. In this section, we briefly review this technique. The $P_{\nu}^{\pm}(p)$ map at frequency $\nu$ can be considered as a sum of component maps,
\begin{equation}
\label{eq:ilc_data}
P_{\nu}^{\pm} (p) = \sum_{c = 1}^{N_{c}} {A}_{\nu c} P_{c}^{\pm} (p) + N_{\nu}^{\pm} (p),
\end{equation}
where $P_c^{\pm}(p) = Q_c (p) \pm iU_c (p)$ denotes the spin-2 quantity of an individual component, $Q_c (p)$, $U_c (p)$ being the Stokes {$Q$ }, {$U$ } maps of that component. $A_{\nu c}$ is the corresponding coefficient of the \textit{mixing matrix} $\textbf{A}$. $N_{\nu}^{\pm} (p) = Q_n (p) \pm iU_n (p)$ is the spin-2 field of the instrument noise at frequency $\nu$ and $N_c$ is the number of components present in the data.
Assuming the mixing matrix is constant across the sky, or over a domain of pixels $\mathscrsfs{D} (p)$, Eq.~\ref{eq:ilc_data} can be rewritten in vector form for all $N_{obs}$ observed channels as,
\begin{equation}
\label{eq:ilc_data_v}
\textbf{P}^{\pm} (p) = \textbf{A } P_c^{\pm} (p) + \textbf{N}^{\pm} (p)
\end{equation}
where $\textbf{P}^{\pm} (p)$ and $\textbf{N}^{\pm} (p)$ are the vectors containing the data and noise spin-2 fields for all $N_{obs}$ observed channels at pixel $p$. The vector $\boldsymbol{P_c}^{\pm} (p)$ contains the spin-2 fields of the components. The mixing matrix $\textbf{A}$ has dimension $N_{obs} \times N_{c}$.
\cite{PILC:2016} originally developed the method for estimating the CMB polarization maps, where the spectral property of the CMB is unity in thermodynamic units ($K_{CMB}$). Here, we describe the method for a general component $P_c^{\pm} (p)$ with spectral property $f_c$. The ILC approach requires prior information on the spectral property $f_c$ of the component of interest $P_c^{\pm} (p)$ and estimates that component map from a weighted sum of the frequency maps. \cite{PILC:2016} assume the weights are complex numbers, so the component of interest can be estimated as,
\begin{equation}
\label{eq:weighted_sum_pilc}
\hat{P}_c^{\pm} (p) = (\boldsymbol{w}^{T} \pm i\, \boldsymbol{m}^T) \boldsymbol{P}^{\pm} (p) = \sum_{\nu} (w_{\nu} \pm i\, m_{\nu}) P_{\nu}^{\pm} (p).
\end{equation}
The weights are determined by minimizing the variance $\left\langle |\hat{P}_c (p)|^2 \right\rangle$ subject to the condition that the spectrum $f_c$ of the component satisfies the following constraint equations,
\begin{flalign}
\label{eq:constrain_pilc}
&\boldsymbol{w^T f_c} = 1,\nonumber\\
&\boldsymbol{m^T f_c} = 0.
\end{flalign}
A special case of Eq.~\ref{eq:constrain_pilc} is that where $m_{\nu}$ is zero for all frequencies. A similar approach has been described in \cite{Kim:2009}. Here, we adopt this special case instead of the more general version of the algorithm described in Sect.~2.2 of \cite{PILC:2016}. Therefore, Eq.~\ref{eq:weighted_sum_pilc} simplifies to the standard form,
\begin{equation}
\label{eq:weighted_sum}
\hat{P}_c^{\pm} (p) = \boldsymbol{w}^{T} \boldsymbol{P}^{\pm} (p) = \sum_{\nu} w_{\nu} P_{\nu}^{\pm} (p),
\end{equation}
which must have minimum variance, i.e.,
\begin{equation}
\label{eq:variance}
\frac{\partial}{\partial \boldsymbol{w}} \left\langle |\hat{P}_c (p)|^2 \right\rangle = \frac{\partial}{\partial \boldsymbol{w}} \boldsymbol{w^T C w} = 0,
\end{equation}
with the constraint,
\begin{equation}
\label{eq:constrain}
\boldsymbol{w^T f_c} = 1,
\end{equation}
where $\textbf{C} = \left\langle \boldsymbol{d} (p)\boldsymbol{d}^{\dagger} (p) \right\rangle$ is the covariance matrix, of dimension $N_{obs} \times N_{obs}$, of the data vector $\boldsymbol{d} (p) \equiv \boldsymbol{P}^{\pm} (p)$ ($\dagger$ denotes the conjugate transpose of the matrix), and $\boldsymbol{w^{T}}$ = $(w_{1}, w_2, ... w_{N_{obs}})$ are the weights of the $N_{obs}$ frequency maps. The elements of the covariance matrix are computed as,
\begin{equation}
\label{eq:cov}
C_{\nu \nu^{'}} = \left\langle Q_{\nu} (p) Q_{\nu^{'}} (p) + U_{\nu} (p) U_{\nu^{'}} (p) \right\rangle
\end{equation}
Note that $ \boldsymbol{d} (p)\boldsymbol{d}^{\dagger} (p)$ is a covariant quantity and hence defined in a global reference frame.
Here, $\boldsymbol{f_c}$ is related to the mixing matrix \textbf{A} through $\boldsymbol{f_c = A e_c}$, where $\boldsymbol{e_c}$ is a vector of dimension $N_c \times 1$ with all elements zero except the $c$th element, which is one, $\boldsymbol{e_c} = [0,0,0,..1,..0]^{T}$.
The weights can be computed by solving the linear system of $N_{obs}$ equations together with Eq.~\ref{eq:constrain} using the method of Lagrange multipliers. Straightforward algebra yields,
\begin{equation}
\label{eq:lagrange_multiplies}
\begin{pmatrix}
2\boldsymbol{C} & -\boldsymbol{f_c}\\
\boldsymbol{f_c}^T & 0
\end{pmatrix} \begin{pmatrix} \boldsymbol{w}\\ \lambda \end{pmatrix}\,\, = \,\, \begin{pmatrix} \boldsymbol{0}\\ 1\end{pmatrix} ,
\end{equation}
where \textbf{0} denotes a column matrix with all elements zero, and $\lambda$ is the Lagrange multiplier. Solving the system of equations~\ref{eq:lagrange_multiplies}, we obtain the weights,
\begin{equation}
\label{eq:opt_weight}
\boldsymbol{w}^{T} = \boldsymbol{ f_c^{T} C^{-1} (f_c^{T} C^{-1} f_c)^{-1}}.
\end{equation}
Finally, the estimated component map is,
\begin{flalign}
\label{eq:opt_component}
\hat{P}_c^{\pm} (p)& = \boldsymbol{(f_c^{T} C^{-1} f_c)^{-1} f_c^{T} C^{-1} P^{\pm} (p)}\\ \nonumber& = P_c^{\pm} (p) + \sum_{\nu} w_{\nu} \Big( \sum_{i \neq c} {A}_{\nu i} P_{i}^{\pm} (p) + N_{\nu}^{\pm} (p)\Big)\\\nonumber & = P_c^{\pm} (p) + F_c^{\pm} (p) + N_c^{\pm} (p).
\end{flalign}
The beauty of this method is that we can work directly in {$Q$ }, {$U$ } space over an incomplete sky. This is useful since the Galactic masks are conventionally defined in {$Q$ }, {$U$ } space. Using a Galactic mask is essential; otherwise, the ILC weights would be determined mainly by the variance of the pixels in the Galactic plane.
It is important to note that the estimated map is biased due to the non-zero projection $F_c^{\pm}(p)$ of the SEDs of the other components onto the SED of the component of interest. Besides, the solution is biased by the residual leakage of instrumental noise $N_c^{\pm}(p)$ and by chance correlations between components. However, the solution can be improved by minimizing the variance with weights that have a unit response to $f_c$ and simultaneously zero response to the SEDs of the other components. This method, called constrained ILC, is described in the next section.
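A compact numerical sketch of the polarization ILC follows: the covariance is built from {$Q$}, {$U$} maps as in the equation for $C_{\nu\nu'}$, and the weights follow the closed-form solution above. The SED, noise levels, and frequencies are illustrative mock values.

```python
import numpy as np

rng = np.random.default_rng(2)
n_freq, n_pix = 5, 20000
nu = np.array([30., 44., 70., 100., 143.])

f_c = (nu / 30.0) ** -3.0            # illustrative power-law SED of the target
Qc = rng.normal(0.0, 10.0, n_pix)    # target Q map at the pivot frequency
Uc = rng.normal(0.0, 10.0, n_pix)    # target U map
Q = f_c[:, None] * Qc + rng.normal(0.0, 1.0, (n_freq, n_pix))
U = f_c[:, None] * Uc + rng.normal(0.0, 1.0, (n_freq, n_pix))

# Covariance C_{nu nu'} = < Q_nu Q_nu' + U_nu U_nu' >
C = (Q @ Q.T + U @ U.T) / n_pix

# Weights: w^T = f_c^T C^-1 (f_c^T C^-1 f_c)^-1
Cinv_f = np.linalg.solve(C, f_c)
w = Cinv_f / (f_c @ Cinv_f)

Q_hat, U_hat = w @ Q, w @ U          # estimated Q, U maps of the component
assert np.isclose(w @ f_c, 1.0)      # unit response to the target SED
assert np.std(Q_hat - Qc) < 2.0      # residual is at the noise level
```

In practice the pixel average would run only over the unmasked region, which is exactly why the Galactic mask enters the weight computation.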
\subsubsection{Constrained ILC in general form}
\label{sec:cILC}
When the emission spectra of some of the components are known, it is possible to deproject them using additional constraint equations in the variance minimization of the ILC. \cite{Remazeilles:2011} applied this method to the simultaneous estimation of the CMB and tSZ components. In practice, we can impose constraints for any number $N_{rc}+1$ of components with known SEDs as,
\begin{align}
\label{eq:set_of_constrain}
& \boldsymbol{w^{T}f_1} = 0,\nonumber\\
&\boldsymbol{w^{T}f_2} = 0,\nonumber\\
\vdots \nonumber \\
&\boldsymbol{w^{T}f_c} = 1,\\
\vdots \nonumber \\
& \boldsymbol{w^{T}f_{N_{rc}+1}} =0.\nonumber
\end{align}
Here, our goal is to estimate the $c$th component while eliminating the contamination from the selected $N_{rc}$ components. To express the constraint equations in a more general form, we can define a matrix $\textbf{F}$ of dimension $N_{obs} \times (N_{rc} + 1)$ as,
\begin{equation}
\label{eq:F_matrix}
\boldsymbol{F} = \begin{pmatrix}
f_1[1] & \cdots & f_{N_{rc}+1}[1] \\
\vdots & \ddots & \vdots \\
f_{1}[N_{obs}]& \cdots & f_{N_{rc}+1}[N_{obs}]
\end{pmatrix}.
\end{equation}
The set of equations~\ref{eq:set_of_constrain} can now be conveniently expressed as,
\begin{equation}
\label{eq:set_of_constrain1}
\boldsymbol{F^{T} w} = \boldsymbol{e},
\end{equation}
where $\boldsymbol{e} = [0, 0, ... 1,..0]^{T}$ is the column matrix with all elements zero except the $c$th element, which is one. In this case, Eq.~\ref{eq:lagrange_multiplies} generalizes to,
\begin{equation}
\label{eq:lagrange_multiplies_gen}
\begin{pmatrix}
2\boldsymbol{C} & -\boldsymbol{F}\\
\boldsymbol{F}^T & 0
\end{pmatrix} \begin{pmatrix} \boldsymbol{w}\\ \boldsymbol{\lambda} \end{pmatrix}\,\, = \,\, \begin{pmatrix} 0\\ \boldsymbol{e}\end{pmatrix} ,
\end{equation}
where $\boldsymbol{\lambda} = (\lambda_1, \lambda_2, ..., \lambda_{N_{rc} + 1})^{T}$ is the vector containing the $N_{rc} + 1$ Lagrange multipliers. Solving the system of equations~\ref{eq:lagrange_multiplies_gen} gives the optimized weights,
\begin{equation}
\label{eq:cmilc_weights}
\boldsymbol{w}^T = \boldsymbol{e^{T}} (F^{T}C^{-1}F)^{-1} F^{T} C^{-1}.
\end{equation}
The estimated component can be expressed as,
\begin{equation}
\label{eq:opt_component_gen}
\hat{P}_c^{\pm} (p) = \{ \boldsymbol{e^{T}} (F^{T}C^{-1}F)^{-1} F^{T} C^{-1}\} \boldsymbol{P}^{\pm} (p).
\end{equation}
The variance of the standard ILC solution is smaller than that of cILC (see Sect.~3.4 of \cite{Remazeilles:2020}): the additional constraints cause a larger noise residual than for the standard ILC. However, cILC reduces the foreground residual compared to the standard ILC. Therefore, we need to find the optimum number of constraints to balance the noise penalty against the leakage from unconstrained components into the recovered map.
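The cILC weight formula $\boldsymbol{w}^T = \boldsymbol{e}^{T} (F^{T}C^{-1}F)^{-1} F^{T} C^{-1}$ can be sketched directly in numpy. The two-column $\boldsymbol{F}$ below uses crude illustrative power-law SEDs, not the actual moment SEDs; it demonstrates only the deprojection mechanics.

```python
import numpy as np

rng = np.random.default_rng(3)
n_freq, n_pix = 6, 20000
nu = np.array([30., 44., 70., 100., 143., 353.])

f_dust = (nu / 353.0) ** 1.55        # crude dust-like SED (illustrative only)
f_sync = (nu / 30.0) ** -3.0         # crude synchrotron-like SED

dust = rng.normal(0.0, 10.0, n_pix)
sync = rng.normal(0.0, 10.0, n_pix)
P = (f_dust[:, None] * dust + f_sync[:, None] * sync
     + rng.normal(0.0, 1.0, (n_freq, n_pix)))

# Constraints: unit response to dust, zero response to synchrotron
F = np.column_stack([f_dust, f_sync])
e = np.array([1.0, 0.0])

C = P @ P.T / n_pix
Cinv_F = np.linalg.solve(C, F)                    # C^-1 F
w = np.linalg.solve(F.T @ Cinv_F, e) @ Cinv_F.T   # e^T (F^T C^-1 F)^-1 F^T C^-1

assert np.isclose(w @ f_dust, 1.0)                # unit response to dust
assert np.isclose(w @ f_sync, 0.0, atol=1e-8)     # synchrotron deprojected
```

Each additional column in $\boldsymbol{F}$ removes one contaminant exactly but leaves fewer degrees of freedom for noise minimization, which is the trade-off discussed above.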
\subsubsection{Moment based constrained ILC for estimation of dust and synchrotron maps}
\label{sec:cMILC}
We want to highlight that the zeroth-order moment maps in Eq.~\ref{eq:sync_moments} and Eq.~\ref{eq:dust_moments} are, in principle, the synchrotron and thermal dust templates at the respective pivot frequencies. Here, we aim to estimate the thermal dust and synchrotron templates at pivot frequencies of 353 \ifmmode $\,GHz$\else \,GHz\fi\ and 30 \ifmmode $\,GHz$\else \,GHz\fi\ respectively. For that, we make use of the cILC method with a set of constraints applied to the moment SEDs of different orders in Eq.~\ref{eq:sync_moments} and Eq.~\ref{eq:dust_moments}. In short, we estimate the zeroth-order moment maps of thermal dust and synchrotron within the cILC framework, projecting out the higher-order moments by imposing orthogonality of the weights to the higher-order moment SEDs with respect to the SED of the zeroth-order moment of the respective component. Hereafter, we refer to this method as the cMILC algorithm.
For estimating the thermal dust template at 353 \ifmmode $\,GHz$\else \,GHz\fi, we adopt a subset of the following constraints in the cMILC algorithm:
\begin{equation}
\label{eq:cmilc_dust}
\left.\begin{aligned}
& \boldsymbol{w }^{\rm T} \cdot f_{dust}\left(\nu, \overline{\beta}_d, \overline{T}_{\!d}\right) = 1 \\[1.5mm]
& \boldsymbol{w }^{\rm T} \cdot {f_{cmb} } = 0\\[1.5mm]
&\boldsymbol{w }^{\rm T} \cdot f_{sync}\left(\nu, \overline{\beta}_s\right) = 0 \\[1.5mm]
& \boldsymbol{w }^{\rm T} \cdot \partial_{{\beta}_s}f_{sync}\left(\nu, \overline{\beta}_s\right) = 0\\[1.5mm]
& \boldsymbol{w }^{\rm T} \cdot \partial_{{\beta}_d}f_{dust}\left(\nu, \overline{\beta}_d, \overline{T}_{\!d}\right) = 0\\[1.5mm]
& \boldsymbol{w }^{\rm T} \cdot \partial_{{T}_d}f_{dust}\left(\nu, \overline{\beta}_d, \overline{T}_{\!d}\right) = 0\\[1.5mm]
& \boldsymbol{w }^{\rm T} \cdot \partial^2_{{\beta}_s}f_{sync}\left(\nu, \overline{\beta}_s\right) = 0\\[1.5mm]
& \boldsymbol{w }^{\rm T} \cdot \partial^2_{{\beta}_d}f_{dust}\left(\nu, \overline{\beta}_d, \overline{T}_{\!d}\right) = 0\\[1.5mm]
& \boldsymbol{w }^{\rm T} \cdot \partial^2_{{T}_d}f_{dust}\left(\nu, \overline{\beta}_d, \overline{T}_{\!d}\right) = 0\\[1.5mm]
& \boldsymbol{w }^{\rm T} \cdot \partial_{{\beta}_d}\partial_{{T}_d}f_{dust}\left(\nu, \overline{\beta}_d, \overline{T}_{\!d}\right) = 0.
\end{aligned}\right\}
\end{equation}
Similarly, for estimating the synchrotron template at 30 \ifmmode $\,GHz$\else \,GHz\fi, we simply interchange the first and third constraints in Eq.~\ref{eq:cmilc_dust}:
\begin{equation}
\label{eq:cmilc_sync}
\left.\begin{aligned}
& \boldsymbol{w }^{\rm T} \cdot f_{sync}\left(\nu, \overline{\beta}_s\right) = 1 \\[1.5mm]
& \boldsymbol{w }^{\rm T} \cdot {f_{cmb} } = 0\\[1.5mm]
&\boldsymbol{w }^{\rm T} \cdot f_{dust}\left(\nu, \overline{\beta}_d, \overline{T}_{\!d}\right) = 0 \\[1.5mm]
& \boldsymbol{w }^{\rm T} \cdot \partial_{{\beta}_s}f_{sync}\left(\nu, \overline{\beta}_s\right) = 0\\[1.5mm]
& \boldsymbol{w }^{\rm T} \cdot \partial_{{\beta}_d}f_{dust}\left(\nu, \overline{\beta}_d, \overline{T}_{\!d}\right) = 0\\[1.5mm]
& \boldsymbol{w }^{\rm T} \cdot \partial_{{T}_d}f_{dust}\left(\nu, \overline{\beta}_d, \overline{T}_{\!d}\right) = 0\\[1.5mm]
& \boldsymbol{w }^{\rm T} \cdot \partial^2_{{\beta}_s}f_{sync}\left(\nu, \overline{\beta}_s\right) = 0\\[1.5mm]
& \boldsymbol{w }^{\rm T} \cdot \partial^2_{{\beta}_d}f_{dust}\left(\nu, \overline{\beta}_d, \overline{T}_{\!d}\right) = 0\\[1.5mm]
& \boldsymbol{w }^{\rm T} \cdot \partial^2_{{T}_d}f_{dust}\left(\nu, \overline{\beta}_d, \overline{T}_{\!d}\right) = 0\\[1.5mm]
& \boldsymbol{w }^{\rm T} \cdot \partial_{{\beta}_d}\partial_{{T}_d}f_{dust}\left(\nu, \overline{\beta}_d, \overline{T}_{\!d}\right) = 0.
\end{aligned}\right\}
\end{equation}
Here ${f_{cmb} }$ denotes the unit conversion factor for the CMB from thermodynamic units to RJ units, $f_{cmb} = \frac{x_c^2e^{x_c}}{(e^{x_c} - 1)^2}$, where $x_c = \frac{h\nu}{k_BT_{CMB}}$ ($T_{CMB}$ = 2.7255 K).
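For concreteness, the CMB conversion factor $f_{cmb}$ can be evaluated with a few lines of numpy; the frequencies are illustrative.

```python
import numpy as np

h_over_k = 0.04799   # h / k_B in K / GHz
T_cmb = 2.7255       # CMB temperature in K

def f_cmb(nu):
    """Thermodynamic -> Rayleigh-Jeans conversion: x^2 e^x / (e^x - 1)^2."""
    x = h_over_k * nu / T_cmb
    return x**2 * np.exp(x) / np.expm1(x) ** 2

nu = np.array([30., 100., 353.])
fc = f_cmb(nu)

assert np.all(fc <= 1.0)        # RJ units suppress the CMB at high frequency
assert np.all(np.diff(fc) < 0)  # the factor decreases monotonically with nu
```

This is the column $\boldsymbol{f_{cmb}}$ entering the constraint $\boldsymbol{w}^{\rm T} \cdot f_{cmb} = 0$ above, evaluated at the observation frequencies.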
The matrix $\boldsymbol{F}$ in Eq.~\ref{eq:F_matrix} contains the moment SEDs. For example, for thermal dust estimation, the matrix looks like,
\begin{equation}
\boldsymbol{F} = \left( \boldsymbol{f_{\rm dust}}(\nu, \overline{\beta}_d, \overline{T}_{\!d}), \boldsymbol{{f_{cmb} }}, \boldsymbol{f_{sync}}\left(\nu, \overline{\beta}_s\right), ....., \boldsymbol{\partial_{{\beta}_d}\partial_{{T}_d}f_{dust}}(\nu, \overline{\beta}_d, \overline{T}_{\!d}) \right)\nonumber,
\end{equation}
with $\boldsymbol{e} = [1, 0, .....,0]^{T}$. For synchrotron estimation, the columns of $\boldsymbol{f_{\rm dust}}$ and $\boldsymbol{f_{\rm sync}}$ in $\boldsymbol{F}$ are interchanged. The dimension of the $\boldsymbol{F}$ matrix varies depending on the number of moments passed to the cMILC algorithm. As discussed in Sect.~\ref{sec:cILC}, a larger number of constraints causes an extra noise penalty; projecting out all the moments up to second order therefore does not ensure that the estimated map is the optimal solution of the cMILC algorithm. We should balance the mitigation of the residual leakage from unconstrained components against the degradation of the noise residual through the choice of an optimum number of constraints, as discussed in Sect.~\ref{sec:srategy}.
\begin{table*}[hbtp]
\caption{The list of the subsets of SEDs passed to the cMILC algorithm in different iterations for estimating the dust template. The condition $\textbf{w}^T \cdot f_{\rm dust} = 1$ is applied along with the orthogonality condition to the rest of the SEDs in each iteration, to deproject the corresponding maps. The Id of each iteration is displayed in the first column. }
\label{table:dust_constrains}
\begin{tabular}{llc}
\toprule
Id & Subsets of moment SEDs \\
\hline
\hline
cMILC01 & $f_{\rm dust}$ ; ${f_{cmb} }$ \\
cMILC02 & $f_{\rm dust}$ ; $f_{\rm sync}$ \\
cMILC03 & $f_{\rm dust}$ ; ${f_{cmb} }$ ; $f_{\rm sync}$ \\
cMILC04 & $f_{\rm dust}$ ; ${f_{cmb} }$ ; $f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ \\
cMILC05 & $f_{\rm dust}$ ; ${f_{cmb} }$ ; $f_{\rm sync}$ ; $\partial_\beta\,f_{\rm sync}$ \\
cMILC06 & $f_{\rm dust}$ ; ${f_{cmb} }$ ; $f_{\rm sync}$ ; $\partial_T\,f_{\rm dust}$ \\
cMILC07 & $f_{\rm dust}$ ; ${f_{cmb} }$ ; $f_{\rm sync}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ \\
cMILC08& $f_{\rm dust}$ ; ${f_{cmb} }$ ; $f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ ; $\partial_T\,f_{\rm dust}$\\
cMILC09 & $f_{\rm dust}$ ; ${f_{cmb} }$ ; $f_{\rm sync}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_T\,f_{\rm dust}$\\
cMILC10 & $f_{\rm dust}$ ; ${f_{cmb} }$ ; $f_{\rm sync}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ ; $\partial_T\,f_{\rm dust}$ \\
cMILC11 & $f_{\rm dust}$ ; ${f_{cmb} }$ ; $f_{\rm sync}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ ; $\partial_T\,f_{\rm dust}$ ; $\partial^2_T\,f_{\rm dust}$ \\
cMILC12 & $f_{\rm dust}$ ; ${f_{cmb} }$ ; $f_{\rm sync}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ ; $\partial_T\,f_{\rm dust}$ ; $\partial^2_\beta\,f_{\rm sync}$ \\
cMILC13 & $f_{\rm dust}$ ; ${f_{cmb} }$ ; $f_{\rm sync}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ ; $\partial_T\,f_{\rm dust}$ ; $\partial^2_\beta\,f_{\rm dust}$ \\
cMILC14 & $f_{\rm dust}$ ; ${f_{cmb} }$ ; $f_{\rm sync}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ ; $\partial_T\,f_{\rm dust}$ ; $\partial_\beta\partial_T\,f_{\rm dust}$\\
cMILC15 & $f_{\rm dust}$ ; ${f_{cmb} }$ ; $f_{\rm sync}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ ; $\partial_T\,f_{\rm dust}$ ; $\partial^2_\beta\,f_{\rm sync}$ ; $\partial^2_T\,f_{\rm dust}$ \\
cMILC16 & $f_{\rm dust}$ ; ${f_{cmb} }$ ; $f_{\rm sync}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ ; $\partial_T\,f_{\rm dust}$ ; $\partial^2_\beta\,f_{\rm sync}$ ; $\partial_\beta\partial_T\,f_{\rm dust}$ \\
cMILC17 & $f_{\rm dust}$ ; ${f_{cmb} }$ ; $f_{\rm sync}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ ; $\partial_T\,f_{\rm dust}$ ; $\partial^2_\beta\,f_{\rm sync}$ ; $\partial^2_\beta\,f_{\rm dust}$ \\
cMILC18 & $f_{\rm dust}$ ; ${f_{cmb} }$ ; $f_{\rm sync}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ ; $\partial_T\,f_{\rm dust}$ ; $\partial^2_\beta\,f_{\rm dust}$ ; $\partial^2_T\,f_{\rm dust}$ \\
cMILC19 & $f_{\rm dust}$ ; ${f_{cmb} }$ ; $f_{\rm sync}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ ; $\partial_T\,f_{\rm dust}$ ; $\partial^2_\beta\,f_{\rm sync}$ ; $\partial_\beta\partial_T\,f_{\rm dust}$ \\
cMILC20 & $f_{\rm dust}$ ; ${f_{cmb} }$ ; $f_{\rm sync}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ ; $\partial_T\,f_{\rm dust}$ ; $\partial^2_\beta\,f_{\rm sync}$ ; $\partial^2_T\,f_{\rm dust}$ ; $\partial_\beta\partial_T\,f_{\rm dust}$\\
cMILC21 & $f_{\rm dust}$ ; ${f_{cmb} }$ ; $f_{\rm sync}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ ; $\partial_T\,f_{\rm dust}$ ; $\partial^2_\beta\,f_{\rm sync}$ ; $\partial^2_\beta\,f_{\rm dust}$ ; $\partial_\beta\partial_T\,f_{\rm dust}$\\
cMILC22 & $f_{\rm dust}$ ; ${f_{cmb} }$ ; $f_{\rm sync}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ ; $\partial_T\,f_{\rm dust}$ ; $\partial^2_\beta\,f_{\rm sync}$ ; $\partial^2_T\,f_{\rm dust}$ ; $\partial^2_\beta\,f_{\rm dust}$\\
cMILC23 & $f_{\rm dust}$ ; ${f_{cmb} }$ ; $f_{\rm sync}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ ; $\partial_T\,f_{\rm dust}$ ; $\partial^2_\beta\,f_{\rm dust}$ ; $\partial^2_T\,f_{\rm dust}$ ; $\partial_\beta\partial_T\,f_{\rm dust}$\\
cMILC24 & $f_{\rm dust}$ ; ${f_{cmb} }$ ; $f_{\rm sync}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ ; $\partial_T\,f_{\rm dust}$ ; $\partial^2_\beta\,f_{\rm sync}$ ; $\partial^2_T\,f_{\rm dust}$ ; $\partial_\beta\partial_T\,f_{\rm dust}$ ; $\partial^2_\beta\,f_{\rm dust}$\\
\bottomrule
\end{tabular}
\end{table*}
\begin{table*}
\caption{The list of the subsets of moment SEDs passed to the cMILC algorithm in different iterations for estimating the synchrotron template. The condition $\textbf{w}^T f_{\rm sync} = 1$ is applied in each iteration, along with the orthogonality condition on the rest of the listed SEDs, to de-project the corresponding maps. The Id of each iteration is displayed in the first column. }
\label{table:sync_constrains}
\begin{tabular}{ll}
\toprule
Id & Subsets of moment SEDs \\
\hline
\hline
cMILC01 & $f_{\rm sync}$ ; $f_{\rm cmb}$ \\
cMILC02 & $f_{\rm sync}$ ; $f_{\rm dust}$ \\
cMILC03 & $f_{\rm sync}$ ; $f_{\rm cmb}$ ; $f_{\rm dust}$ \\
cMILC04 & $f_{\rm sync}$ ; $f_{\rm cmb}$ ; $f_{\rm dust}$ ; $\partial_\beta\,f_{\rm dust}$ \\
cMILC05 & $f_{\rm sync}$ ; $f_{\rm cmb}$ ; $f_{\rm dust}$ ; $\partial_\beta\,f_{\rm sync}$ \\
cMILC06 & $f_{\rm sync}$ ; $f_{\rm cmb}$ ; $f_{\rm dust}$ ; $\partial_\beta\,f_{\rm dust}$ \\
cMILC07 & $f_{\rm sync}$ ; $f_{\rm cmb}$ ; $f_{\rm dust}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ \\
cMILC08 & $f_{\rm sync}$ ; $f_{\rm cmb}$ ; $f_{\rm dust}$ ; $\partial_\beta\,f_{\rm dust}$ ; $\partial_T\,f_{\rm dust}$\\
cMILC09 & $f_{\rm sync}$ ; $f_{\rm cmb}$ ; $f_{\rm dust}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_T\,f_{\rm dust}$\\
cMILC10 & $f_{\rm sync}$ ; $f_{\rm cmb}$ ; $f_{\rm dust}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ ; $\partial_T\,f_{\rm dust}$ \\
cMILC11 & $f_{\rm sync}$ ; $f_{\rm cmb}$ ; $f_{\rm dust}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ ; $\partial_T\,f_{\rm dust}$ ; $\partial^2_T\,f_{\rm dust}$ \\
cMILC12 & $f_{\rm sync}$ ; $f_{\rm cmb}$ ; $f_{\rm dust}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ ; $\partial_T\,f_{\rm dust}$ ; $\partial^2_\beta\,f_{\rm sync}$ \\
cMILC13 & $f_{\rm sync}$ ; $f_{\rm cmb}$ ; $f_{\rm dust}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ ; $\partial_T\,f_{\rm dust}$ ; $\partial^2_\beta\,f_{\rm dust}$ \\
cMILC14 & $f_{\rm sync}$ ; $f_{\rm cmb}$ ; $f_{\rm dust}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ ; $\partial_T\,f_{\rm dust}$ ; $\partial_\beta\partial_T\,f_{\rm dust}$ \\
cMILC15 & $f_{\rm sync}$ ; $f_{\rm cmb}$ ; $f_{\rm dust}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ ; $\partial_T\,f_{\rm dust}$ ; $\partial^2_\beta\,f_{\rm sync}$ ; $\partial^2_T\,f_{\rm dust}$ \\
cMILC16 & $f_{\rm sync}$ ; $f_{\rm cmb}$ ; $f_{\rm dust}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ ; $\partial_T\,f_{\rm dust}$ ; $\partial^2_\beta\,f_{\rm sync}$ ; $\partial_\beta\partial_T\,f_{\rm dust}$ \\
cMILC17 & $f_{\rm sync}$ ; $f_{\rm cmb}$ ; $f_{\rm dust}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ ; $\partial_T\,f_{\rm dust}$ ; $\partial^2_\beta\,f_{\rm sync}$ ; $\partial^2_\beta\,f_{\rm dust}$ \\
cMILC18 & $f_{\rm sync}$ ; $f_{\rm cmb}$ ; $f_{\rm dust}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ ; $\partial_T\,f_{\rm dust}$ ; $\partial^2_\beta\,f_{\rm dust}$ ; $\partial^2_T\,f_{\rm dust}$ \\
cMILC19 & $f_{\rm sync}$ ; $f_{\rm cmb}$ ; $f_{\rm dust}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ ; $\partial_T\,f_{\rm dust}$ ; $\partial^2_\beta\,f_{\rm sync}$ ; $\partial_\beta\partial_T\,f_{\rm dust}$ \\
cMILC20 & $f_{\rm sync}$ ; $f_{\rm cmb}$ ; $f_{\rm dust}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ ; $\partial_T\,f_{\rm dust}$ ; $\partial^2_\beta\,f_{\rm sync}$ ; $\partial^2_T\,f_{\rm dust}$ ; $\partial_\beta\partial_T\,f_{\rm dust}$\\
cMILC21 & $f_{\rm sync}$ ; $f_{\rm cmb}$ ; $f_{\rm dust}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ ; $\partial_T\,f_{\rm dust}$ ; $\partial^2_\beta\,f_{\rm sync}$ ; $\partial^2_\beta\,f_{\rm dust}$ ; $\partial_\beta\partial_T\,f_{\rm dust}$\\
cMILC22 & $f_{\rm sync}$ ; $f_{\rm cmb}$ ; $f_{\rm dust}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ ; $\partial_T\,f_{\rm dust}$ ; $\partial^2_\beta\,f_{\rm sync}$ ; $\partial^2_T\,f_{\rm dust}$ ; $\partial^2_\beta\,f_{\rm dust}$\\
cMILC23 & $f_{\rm sync}$ ; $f_{\rm cmb}$ ; $f_{\rm dust}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ ; $\partial_T\,f_{\rm dust}$ ; $\partial^2_\beta\,f_{\rm dust}$ ; $\partial^2_T\,f_{\rm dust}$ ; $\partial_\beta\partial_T\,f_{\rm dust}$\\
cMILC24 & $f_{\rm sync}$ ; $f_{\rm cmb}$ ; $f_{\rm dust}$ ; $\partial_\beta\,f_{\rm sync}$ ; $\partial_\beta\,f_{\rm dust}$ ; $\partial_T\,f_{\rm dust}$ ; $\partial^2_\beta\,f_{\rm sync}$ ; $\partial^2_T\,f_{\rm dust}$ ; $\partial_\beta\partial_T\,f_{\rm dust}$ ; $\partial^2_\beta\,f_{\rm dust}$\\
\bottomrule
\end{tabular}
\end{table*}
\section{Implementation strategy}
\label{sec:srategy}
We apply the cMILC algorithm in pixel space over the {GAL78}\ mask. Since cMILC is based on the cILC algorithm, we pass it the multi-frequency simulated data along with different subsets of moment SEDs in different iterations. The subsets of moment SEDs used for the different iterations in this analysis are listed in Table~\ref{table:dust_constrains} (for thermal dust estimation) and Table~\ref{table:sync_constrains} (for synchrotron estimation). The only difference between Table~\ref{table:dust_constrains} and Table~\ref{table:sync_constrains} is that the columns of $f_{\rm dust}$ and $f_{\rm sync}$ are interchanged. To construct these moment SEDs, we must choose the pivot values of the parameters involved and the pivot frequencies of the moment expansion. In principle, the pivot parameters should be chosen separately for each of the simulations in Sect.~\ref{sec:sim}; indeed, they should also change when higher-order moments are used to describe the data.
However, in the interest of speed, we use fixed pivot parameters throughout the study, independent of the set of simulations used.
We adopt the pivot synchrotron spectral index $\overline{\beta}_s$ = -3.00 \citep{Miville-Desch:2008, Krachmalnicoff:2018, Kogut:2007}. For thermal dust, we adopt the pivot dust temperature $\overline{T}_d$ = 19.4 K \citep{planck-XLVIII:2016} and the pivot dust spectral index $\overline{\beta}_d$ = 1.53 \citep{planck_XI:2014, planck-x:2016, planck-XI:2018}. We choose the pivot frequencies for synchrotron and thermal dust to be $\nu_s$ = 30 \ifmmode $\,GHz$\else \,GHz\fi\ and $\nu_d$ = 353 \ifmmode $\,GHz$\else \,GHz\fi, respectively.
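With these pivots fixed, the zeroth- and first-order moment SEDs can be sketched as follows. This is a simplified illustration in normalized brightness units, ignoring thermodynamic unit conversions and bandpass integration; the function names and the frequency list are ours:

```python
import numpy as np

# WMAP and Planck band centres in GHz (illustrative values)
NU = np.array([23., 33., 41., 61., 94.,
               30., 44., 70., 100., 143., 217., 353.])

def x(nu_ghz, T):
    """Dimensionless h*nu/(k_B*T) for nu in GHz and T in K."""
    return 0.04799 * nu_ghz / T

def f_dust(nu, beta_d=1.53, T_d=19.4, nu0=353.0):
    """Modified-blackbody dust SED, normalized to 1 at the pivot nu0."""
    return (nu / nu0)**(beta_d + 3.0) * np.expm1(x(nu0, T_d)) / np.expm1(x(nu, T_d))

def f_sync(nu, beta_s=-3.0, nu0=30.0):
    """Power-law synchrotron SED, normalized to 1 at the pivot nu0."""
    return (nu / nu0)**beta_s

def dbeta(f, nu, nu0):
    """First-order moment SED: analytic derivative w.r.t. the spectral index."""
    return np.log(nu / nu0) * f(nu)
```

Higher-order SEDs ($\partial_T\,f_{\rm dust}$, $\partial^2_\beta\,f$, \ldots) follow by further differentiation; stacking the chosen subset column-wise yields the $\boldsymbol{F}$ matrix of a given cMILC iteration.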
After implementing the cMILC algorithm for each of the iterations listed in Table~\ref{table:dust_constrains} (Table~\ref{table:sync_constrains}) with the corresponding subset of moment SEDs, we apply the cMILC weights to the total frequency maps to estimate the thermal dust map at 353 \ifmmode $\,GHz$\else \,GHz\fi\ (synchrotron map at 30 \ifmmode $\,GHz$\else \,GHz\fi). Our simulations are absolutely calibrated (unlike {\it Planck\/}\ and \textit{WMAP}\xspace\ data), and hence we do not attach any additional frequency-dependent terms to the component maps beyond their respective SEDs. To assess the residual leakage from noise, we apply the same weights to the input noise maps. To evaluate the residual leakage from CMB, AME and the other unconstrained higher-order moments of thermal dust and synchrotron (hereafter referred to collectively as the \textit{moment residual}), we apply the same weights to these components as well. In summary, the algorithm returns the dust map at 353 \ifmmode $\,GHz$\else \,GHz\fi\ and the synchrotron map at 30 \ifmmode $\,GHz$\else \,GHz\fi, along with the corresponding moment residual and noise residual maps for the different iterations, simply by interchanging the first and third constraints in the set of Eq.~\ref{eq:cmilc_dust}.
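Per pixel, applying the weights reduces to a weighted sum over channels; a minimal sketch (the helper name is ours, assuming the maps are stacked as an (n\_freq, n\_pix) array):

```python
import numpy as np

def apply_weights(weights, freq_maps):
    """Weighted sum over channels: (n_freq,) x (n_freq, n_pix) -> (n_pix,).

    Applying the same weights to the noise-only or CMB/AME-only stacks
    yields the corresponding residual-leakage maps described in the text.
    """
    return np.tensordot(weights, freq_maps, axes=1)
```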
\section{Results}
\label{sec:opt_sim}
In this section, we investigate the recovered thermal dust and synchrotron maps to demonstrate the performance of the cMILC method, presenting results for the simulation in SET1 only. Similar results for the rest of the simulations are presented in Appendix~\ref{sec:other_sim_results}.
\begin{figure*}
\begin{multicols}{2}
\includegraphics[width=\linewidth]{dust_recovered_Q_maps_modeld1s1a2_ns256_pcmilc.pdf} \par
\includegraphics[width=\linewidth]{dust_recovered_U_maps_modeld1s1a2_ns256_pcmilc.pdf} \par
\end{multicols}
\caption{cMILC results for the estimation of the thermal dust template in different iterations, deprojecting more and more moments with an increasing number of constraints, for the simulation in SET1. The \textit{left panel} shows the thermal dust {$Q$ } maps, and the \textit{right panel} the thermal dust {$U$ } maps. The patches are 70$^{\ifmmode^\circ\else$^\circ$\fi}$ $\times$ 70$^{\ifmmode^\circ\else$^\circ$\fi}$, shown in gnomonic projection centered at $(l, b)$ = (90$^{\ifmmode^\circ\else$^\circ$\fi}$, -80$^{\ifmmode^\circ\else$^\circ$\fi}$). All maps are smoothed to a resolution of FWHM = 60\rlap{.}$^{\scriptstyle\prime}$. The first row shows the input thermal dust map. The first, second and third columns of the subsequent rows show the recovered thermal dust maps, moment residual maps and noise residual maps, respectively, for selected cMILC iterations from cMILC03 to cMILC19. The moment residual decreases significantly as more and more higher-order moments are deprojected, up to the optimum choice of constraints at cMILC12; beyond that, the residual increases with increasing constraints. Among all these maps, cMILC12 gives the best recovery.}
\label{fig:dust_maps_sim_d1s1}
\end{figure*}
\begin{figure*}
\begin{multicols}{2}
\includegraphics[width=\linewidth]{sync_recovered_Q_maps_modeld1s1a2_ns256_pcmilc.pdf} \par
\includegraphics[width=\linewidth]{sync_recovered_U_maps_modeld1s1a2_ns256_pcmilc.pdf} \par
\end{multicols}
\caption{cMILC results for the estimation of the synchrotron template in different iterations, deprojecting more and more moments with an increasing number of constraints, for the simulation in SET1. The \textit{left panel} shows the synchrotron {$Q$ } maps, and the \textit{right panel} the synchrotron {$U$ } maps. The patches are 70$^{\ifmmode^\circ\else$^\circ$\fi}$ $\times$ 70$^{\ifmmode^\circ\else$^\circ$\fi}$, shown in gnomonic projection centered at $(l, b)$ = (90$^{\ifmmode^\circ\else$^\circ$\fi}$, -80$^{\ifmmode^\circ\else$^\circ$\fi}$). All maps are smoothed to a resolution of FWHM = 60\rlap{.}$^{\scriptstyle\prime}$. The first row shows the input synchrotron map. The first, second and third columns of the subsequent rows show the recovered synchrotron maps, moment residual maps and noise residual maps, respectively, for selected cMILC iterations from cMILC03 to cMILC19. The moment residual decreases significantly as higher-order moments are deprojected, up to the optimum choice of constraints at cMILC12; beyond that, the residual increases with increasing constraints. Among all these maps, cMILC12 gives the best recovery.}
\label{fig:sync_maps_sim_d1s1}
\end{figure*}
\begin{figure*}
\begin{multicols}{2}
\includegraphics[width=\linewidth]{dust_QQ_correlation_maps_modeld1s1a2_ns256_pcmilc.pdf}\par
\includegraphics[width=\linewidth]{dust_UU_correlation_maps_modeld1s1a2_ns256_pcmilc.pdf}
\end{multicols}
\caption{Contour plots of the 2D histogram of input versus recovered dust maps in {$Q$ } (\textit{left panel}) and {$U$ } (\textit{right panel}) for the simulation in SET1. The 1$\sigma$ and 2$\sigma$ contours are shown for the cMILC12 (orange) and cMILC15 (blue) iterations. Most of the pixels are confined to a narrow region of the distribution for the cMILC12 output maps, whereas the pixels of the cMILC15 output maps spread over a far wider range. This implies that using more than 7 constraints deteriorates the performance of the algorithm for the given instrument sensitivity and channels. }
\label{fig:dust_TT_correlation_d1s1}
\end{figure*}
\begin{figure*}
\begin{multicols}{2}
\includegraphics[width=\linewidth]{sync_QQ_correlation_maps_modeld1s1a2_ns256_pcmilc.pdf}\par
\includegraphics[width=\linewidth]{sync_UU_correlation_maps_modeld1s1a2_ns256_pcmilc.pdf}\par
\end{multicols}
\caption{Contour plots of the 2D histogram of input versus recovered synchrotron maps in {$Q$ } (\textit{left panel}) and {$U$ } (\textit{right panel}) for the simulation in SET1. The 1$\sigma$ and 2$\sigma$ contours are shown for the cMILC12 (orange) and cMILC15 (blue) iterations. Most of the pixels are confined to a narrow region of the distribution for the cMILC12 output maps, whereas the pixels of the cMILC15 output maps spread over a far wider range. This implies that using more than 7 constraints deteriorates the performance of the algorithm for the given instrument sensitivity and channels. }
\label{fig:sync_TT_correlation_d1s1}
\end{figure*}
\begin{figure}
\includegraphics[width=9cm]{Dust_power_spectra_data_modeld1s1a2_maps_ns256_cMILC12.pdf}\par
\includegraphics[width=9cm]{Sync_power_spectra_data_modeld1s1a2_maps_ns256_cMILC12.pdf} \par
\caption{EE (circles) and BB (triangles) power spectra of the thermal dust (\textit{upper panel}) and synchrotron (\textit{lower panel}) maps. Power spectra of the input maps of the simulation in SET1 are shown in blue, and those of the maps recovered in the cMILC12 iteration in green. All spectra are computed over the {GAL78}\ apodized mask using \ensuremath{\tt Xpol}. Error bars are 1$\sigma$ uncertainties computed analytically with \ensuremath{\tt Xpol}. The dashed lines indicate the respective best-fit power-law model spectra; the corresponding best-fit parameters are listed in Table~\ref{table3}.}
\label{fig:sim_dust_sync_power_d1s1}
\end{figure}
\subsection{Inspection of recovered maps}
\label{sec:map_inspection}
We first inspect the quality of the recovered dust and synchrotron polarization maps and compare them with the input maps of the respective components. We also investigate the amount of residual leakage from unconstrained components and moments, as well as the residual leakage from noise.
In Figure~\ref{fig:dust_maps_sim_d1s1}, we summarize the cMILC results for the estimation of thermal dust for the simulation in SET1 for selected iterations. We display 70$^{\ifmmode^\circ\else$^\circ$\fi}$ $\times$ 70$^{\ifmmode^\circ\else$^\circ$\fi}$ patches in gnomonic projection centered at Galactic longitude and latitude $(l, b)$ = (90$^{\ifmmode^\circ\else$^\circ$\fi}$, -80$^{\ifmmode^\circ\else$^\circ$\fi}$). The \textit{left panel} presents the {$Q$ } results and the \textit{right panel} the {$U$ } results. The first rows show the input thermal dust {$Q$ } and {$U$ } maps; the subsequent rows show the output maps at 353 \ifmmode $\,GHz$\else \,GHz\fi\ for selected cMILC iterations that use different subsets of moment SEDs. The corresponding iteration Ids are shown on the left side of the maps. The first columns show the estimated thermal dust maps at 353 \ifmmode $\,GHz$\else \,GHz\fi, the second columns the moment residual maps, and the third columns the noise residual maps. Similar results for the estimation of the synchrotron map at 30 \ifmmode $\,GHz$\else \,GHz\fi\ are presented in Figure~\ref{fig:sync_maps_sim_d1s1} over the same sky region. The cMILC03 iteration deprojects the zeroth-order moments ($f_{\rm cmb}$ ; $f_{\rm sync}$) only; therefore, the moment residuals are fairly high for this iteration. Deprojecting $\partial_\beta\,f_{\rm dust}$ along with the zeroth-order moments (\textit{third rows}) does not reduce the residual in the recovered maps much. The moment residual reduces significantly when we deproject all zeroth- and first-order moments in cMILC10 and one of the second-order moments in cMILC11 and cMILC12. Inspecting the second columns of Figure~\ref{fig:dust_maps_sim_d1s1} and Figure~\ref{fig:sync_maps_sim_d1s1}, we confirm that the moment residual reduces up to cMILC12 as we project out more and more moments.
Inspecting the first columns, one can hardly distinguish map-level differences among the recovered maps for cMILC03, cMILC06, cMILC10, cMILC11 and cMILC12. However, comparing the last two columns, we confirm that the recovered maps for cMILC12 are the best, in the sense that the moment residual leakage is smallest for this iteration. We also run the algorithm on a simulation without AME and find that the residual leakage in that case is an order of magnitude smaller. In the iterations from cMILC15 to cMILC19, we project out all moments up to first order along with subsets of two second-order moments. In Figure~\ref{fig:dust_maps_sim_d1s1} and Figure~\ref{fig:sync_maps_sim_d1s1}, we display only the results for cMILC19 out of these iterations, where we project out two second-order moments ($\partial^2_\beta\,f_{\rm sync}$, $\partial_\beta\partial_T\,f_{\rm dust}$) along with all zeroth- and first-order moments.
The recovered maps in this iteration are noisy. This implies that the noise degradation for a larger number of constraints prevents us from obtaining a better recovery. A similar trend in the recovered maps, the residual leakage from moment maps and the noise is found for the other sets of simulations, as shown in Appendix~\ref{sec:other_sim_results}. Therefore, we do not inspect the remaining iterations, which de-project more higher-order moments.
To further diagnose the recovered maps, we plot the 1$\sigma$ and 2$\sigma$ contours of the 2D histogram of input versus recovered maps for the cMILC12 (orange) and cMILC15 (blue) iterations in Figure~\ref{fig:dust_TT_correlation_d1s1} (for thermal dust) and Figure~\ref{fig:sync_TT_correlation_d1s1} (for synchrotron). We find that most of the pixels are confined to a very narrow region of the distribution for the recovered maps of cMILC12 compared to those of cMILC15. The correlation between input and recovered maps is also significantly better for cMILC12 than for cMILC15. We find that the correlation coefficients between the input and estimated thermal dust maps for the cMILC15 and cMILC12 iterations are 0.78 and 0.99 (for {$Q$ }) and 0.67 and 0.99 (for {$U$ }), respectively. Similarly, the correlation coefficients for the synchrotron estimation in the cMILC15 and cMILC12 iterations are 0.65 and 0.99 (for {$Q$ }) and 0.61 and 0.99 (for {$U$ }), respectively. This further supports the conclusion that using more than seven constraints degrades the performance of the cMILC algorithm for the given sensitivity and frequency coverage.
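The quoted coefficients are plain Pearson correlations over unmasked pixels; a minimal sketch (the helper name and the boolean-mask convention are ours):

```python
import numpy as np

def map_correlation(input_map, recovered_map, mask=None):
    """Pearson correlation coefficient between two maps over kept pixels.

    mask : optional boolean array, True for pixels included in the
    comparison (e.g. the unmasked pixels of a Galactic mask).
    """
    if mask is not None:
        input_map = input_map[mask]
        recovered_map = recovered_map[mask]
    return np.corrcoef(input_map, recovered_map)[0, 1]
```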
Based on all these assessments, we conclude that cMILC12 provides the best recovered thermal dust and synchrotron maps for the joint analysis of \textit{WMAP}\xspace\ and {\it Planck\/}\ maps. However, this is not a generic solution for any mission: the performance of cMILC depends on the sensitivity and frequency coverage of the experiment.
\subsection{Comparison of the power spectra}
\label{sec:power_spectra}
In Figure~\ref{fig:sim_dust_sync_power_d1s1}, we compare the angular power spectra of the thermal dust (\textit{upper panel}) and synchrotron (\textit{lower panel}) maps estimated in the cMILC12 iteration with those of the input maps. We compute the $EE$ and $BB$ power spectra over the {GAL78}\ apodized mask using \ensuremath{\tt Xpol}\ \citep{Tristram:2005}. Results from the input maps are shown in blue and those from the recovered maps in green; circles denote $EE$ and triangles $BB$. The 1$\sigma$ uncertainties are estimated analytically using \ensuremath{\tt Xpol}. We fit the power spectra with the power-law model,
\begin{equation}
\label{eq:power-law}
\ensuremath{{\cal D}_{\ell}^{XX}} = A_{XX} (\ell/80)^{\alpha_{XX}+2},
\end{equation}
where $A_{XX}$ is the best-fit amplitude at $\ell = 80$, $\alpha_{XX}$ is the best-fit spectral index and $XX=\{EE, BB\}$. We use the $\ell$ range 30-160 for the thermal dust power spectra and 2-140 for the synchrotron power spectra when fitting Eq.~\ref{eq:power-law} with the \ensuremath{\tt MPFIT}\ routine, following the same machinery
as in \cite{planck-XI:2018}. The best-fit power-law model spectra are shown as dashed lines in Figure~\ref{fig:sim_dust_sync_power_d1s1}, and the corresponding best-fit parameters are listed in Table~\ref{table3}.
Overall, we find excellent agreement between the power spectra of the input and recovered maps, both for thermal dust and for synchrotron. All parameters are comparable within the 3$\sigma$ statistical uncertainty. Most importantly, we find that the $B$- to $E$-mode power ratio ($A_{BB}/A_{EE}$) measured for both the input and recovered maps is $\sim$0.56 for thermal dust and $\sim$0.34 for synchrotron, very similar to the corresponding values reported in \cite{planck-VI:2018}.
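The fit of Eq.~\ref{eq:power-law} can be reproduced to good approximation by a linear least-squares fit in log-log space. This is a simplified stand-in for the \ensuremath{\tt MPFIT}\ fit used in the text, ignoring the per-bin uncertainties; the function name is ours:

```python
import numpy as np

def fit_power_law(ell, D_ell, ell_pivot=80.0):
    """Fit D_ell = A * (ell/ell_pivot)**(alpha + 2) and return (A, alpha).

    In log-log space the model is linear:
    log D_ell = log A + (alpha + 2) * log(ell/ell_pivot).
    """
    slope, log_A = np.polyfit(np.log(ell / ell_pivot), np.log(D_ell), 1)
    return np.exp(log_A), slope - 2.0
```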
\begin{table}
\caption{Best-fit parameters of the power-law model fitted to the thermal dust and synchrotron power spectra of the input maps and of the maps recovered in the cMILC12 iteration. The range 30 $\leq \ell \leq$ 160 has been used for fitting the thermal dust power spectra, and 2 $\leq \ell \leq$ 140 for the synchrotron power spectra.}
\label{table3}
\begin{centering}
\begin{tabular}{ p{3.2cm} p{2.0cm} p{2.0cm} }
\hline
parameters & input map & output map\\
\hline
\hline
\textbf{thermal dust; $\ell$ = 30-160}& &\\
$A_{EE}$&555.14 $\pm$ 7.61 & 556.84 $\pm$ 7.63 \\
$A_{BB}$& 313.68 $\pm$ 4.35 &314.22 $\pm$ 4.36\\
$A_{BB}/A_{EE}$ & 0.57 $\pm$ 0.02& 0.56 $\pm$ 0.02\\
$\alpha_{EE}$&-2.30 $\pm$ 0.03&-2.31 $\pm$ 0.03\\
$\alpha_{BB}$&-2.17 $\pm$ 0.03&-2.19 $\pm$ 0.03\\
\hline
\textbf{Synchrotron; $\ell$ = 2-140} &&\\
$A_{EE}$&6.91 $\pm$ 0.10 &6.74 $\pm$ 0.09 \\
$A_{BB}$&2.35 $\pm$ 0.03 &2.24 $\pm$ 0.03\\
$A_{BB}/A_{EE}$ & 0.34 $\pm$ 0.01 & 0.33 $\pm$ 0.01\\
$\alpha_{EE}$&-2.50 $\pm$ 0.03&-2.49 $\pm$ 0.03\\
$\alpha_{BB}$&-2.59 $\pm$ 0.03&-2.62 $\pm$ 0.03\\
\hline
\end{tabular}
\end{centering}
\end{table}
\begin{figure*}
\centering
\includegraphics[width=18cm]{dust_variance_maps_modeld1s1a2d4s3a2d7s2a2_ns256_pcmilc.pdf}
\caption{Evolution of the standard deviation of the output maps at 353 \ifmmode $\,GHz$\else \,GHz\fi\ for the simulations in SET1 (green), SET2 (black) and SET3 (magenta) across the cMILC iterations from cMILC01 to cMILC19, in which different subsets of moment SEDs are passed. The \textit{left panel} presents the standard deviations of the recovered thermal dust maps, the \textit{middle panel} those of the moment residual maps, and the \textit{right panel} those of the noise residual maps at 353 \ifmmode $\,GHz$\else \,GHz\fi. }
\label{fig:stat_res_dust}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=18cm]{sync_variance_maps_modeld1s1a2d4s3a2d7s2a2_ns256_pcmilc.pdf}
\caption{Evolution of the standard deviation of the output maps at 30 \ifmmode $\,GHz$\else \,GHz\fi\ for the simulations in SET1 (green), SET2 (black) and SET3 (magenta) across the cMILC iterations from cMILC01 to cMILC19, in which different subsets of moment SEDs are passed. The \textit{left panel} presents the standard deviations of the recovered synchrotron maps, the \textit{middle panel} those of the moment residual maps, and the \textit{right panel} those of the noise residual maps at 30 \ifmmode $\,GHz$\else \,GHz\fi.}
\label{fig:stat_res_sync}
\end{figure*}
\subsection{Statistics of residuals from moment and noise maps}
\label{sec:stat_residuals}
Besides the map-level investigation, it is also important to assess the statistical properties of the estimated maps, of the residual leakage from components that are not projected out, and of the noise residual maps. In Figure~\ref{fig:stat_res_dust}, we present the standard deviation $\sqrt{C_{353\,{\rm GHz}\,\times\,353\,{\rm GHz}}}$ ($C_{\nu,\nu^{'}}$ is defined in Eq.~\ref{eq:cov}) of the recovered thermal dust map (\textit{left panel}), of the residual leakage from moment maps (\textit{middle panel}) and of the noise residual maps (\textit{right panel}) for the different cMILC iterations. Similarly, in Figure~\ref{fig:stat_res_sync}, we present the standard deviation $\sqrt{C_{30\,{\rm GHz}\,\times\,30\,{\rm GHz}}}$ of the corresponding maps for the synchrotron estimation. Here, we display the results for all three sets of simulations for easy comparison.
In the \textit{left panels} of Figure~\ref{fig:stat_res_dust} and Figure~\ref{fig:stat_res_sync}, we find that the standard deviations of the recovered maps increase with an increasing number of constraints in the cMILC algorithm. However, for iterations that pass the same number of constraints but project out different subsets of moments, the standard deviations may be comparable or may differ. For example, the standard deviations of the recovered maps are approximately the same for the iterations cMILC11 to cMILC14, each of which passes 7 constraints with a different second-order moment SED along with all zeroth- and first-order moment SEDs. In contrast, the standard deviations for the iterations cMILC15 to cMILC19 differ, although each passes 8 moment SEDs, projecting out a different subset of two second-order moments along with all zeroth- and first-order moments. This implies that, for a fixed number of constraints, the changes in the standard deviations of the recovered maps depend on the subset of moment SEDs passed to the algorithm.
The increase of the standard deviation with an increasing number of constraints might suggest that projecting out more moments always comes with an additional noise penalty. The \textit{right panels} of Figure~\ref{fig:stat_res_dust} and Figure~\ref{fig:stat_res_sync} demonstrate that this extrapolation is inaccurate. It is equally incorrect to expect that the leakage from higher-order moments decreases indefinitely with an increasing number of constraints for a given sensitivity and frequency coverage. On the contrary, the \textit{middle panels} of Figure~\ref{fig:stat_res_dust} and Figure~\ref{fig:stat_res_sync} show that, for the given sensitivity and frequency coverage of the experiments, the leakage from higher-order moments decreases until an optimum number of moments is projected out, reaches a minimum, and then increases as more moments are projected out.
Therefore, we emphasize that increasing the number of constraints in the cMILC algorithm neither always incurs a noise penalty nor indefinitely reduces the residual from unconstrained moments in the recovered maps. Its behaviour is more complicated, depending on the complexity of the foregrounds and on the sensitivity and frequency coverage of the mission.
\section{Conclusion}
\label{sec:conclusion}
In the present work, we have developed a new semi-blind component-separation method using constrained ILC in the language of moment expansion, introduced in Sect.~\ref{sec:cMILC}. We apply this algorithm to three sets of simulations of varying thermal dust and synchrotron complexity to demonstrate its performance, using the \textit{WMAP}\xspace\ and {\it Planck\/}\ instrument specifications. Our main objective is to estimate the zeroth-order moment maps of thermal dust and synchrotron at the respective pivot frequencies of 353 \ifmmode $\,GHz$\else \,GHz\fi\ and 30 \ifmmode $\,GHz$\else \,GHz\fi\ by projecting out the higher-order moments. The zeroth-order moment maps are precisely the individual foreground templates of the respective components at their pivot frequencies, as discussed in Sect.~\ref{sec:cMILC}. We find the best combination of moment SEDs to project out, namely the one that optimizes the trade-off between the residual from unconstrained higher-order moments and the noise degradation in the templates. However, this combination is not universal: it is specific to the sensitivity and frequency coverage of the instruments. We show that the performance of the cMILC method is optimal up to a specific number of constraints for the given instrument sensitivity and channels; beyond that, the performance deteriorates with increasing constraints, since the residual bias from unconstrained moments increases.
Furthermore, we show that deprojecting more and more higher-order moments does not always come with a noise penalty; it depends on the combination of moment SEDs passed to the algorithm. This aspect would be even more apparent with data from a highly sensitive instrument such as PICO \citep{PICO:2019}, used to estimate low signal-to-noise components like the B-mode signal in the CMB. We do not apply constraints on AME in the present work, since a moment description of this component is not available in the literature. We notice that unconstrained AME introduces an extra bias that is an order of magnitude larger than that from the unconstrained moments.
Overall, this is a new method to estimate foreground templates. We develop it for spin-2 fields, and it can easily be extended to spin-0 fields. However, many foreground components contribute to the intensity maps, unlike in polarization. Developing a moment description for some of the foregrounds in intensity (e.g., AME and CO line emission) will be essential for optimal performance of the cMILC algorithm. This turns into a high-dimensional problem, and finding the most relevant SEDs to project out with a very limited number of frequency channels (only 12 are used in this work) is substantially challenging. Therefore, we do not apply this method to the intensities. Since the number of moment SEDs required for the optimal solution is directly related to the number of frequency channels at a given sensitivity, the algorithm can also be useful for optimizing the design of future CMB experiments.
The algorithm we have developed works over any sky fraction. Therefore, in principle, we can jointly analyse ground-based and space-based CMB mission data using this algorithm. The most challenging aspects of working with real data are calibration and beam uncertainties. In the present work, we assume the maps are absolutely calibrated and that the beams are perfectly described by Gaussian FWHMs. For real data, however, calibration uncertainties for each channel, which act as a multiplicative factor on each frequency map, introduce an uncertainty in the frequency scaling of each of the components. Therefore, the optimal combination of moment SEDs for a given instrumental sensitivity and frequency coverage may converge to an imperfect solution for the component maps. Beam uncertainties induce a similar bias, which strongly impacts the high-$\ell$ modes, especially for high signal-to-noise data \citep{Basak:2013}. These issues require specific attention to the exact response of the detectors, precise calibration of the instrument, and especially re-calibration of data sets from different instruments inside the algorithm itself. In a follow-up paper \citep[][in preparation]{Adak:2021}, we demonstrate the application of the cMILC algorithm to \textit{WMAP}\xspace\ and {\it Planck\/}\ real data, including re-calibration of the data within the algorithm.
Finally, this algorithm is in principle applicable to recovering any foreground template and moment maps of any order at any frequency. While we mainly focus on the estimation of foreground maps in the current paper, this work can be extended to cleaning the CMB {$Q$ } and {$U$ } maps of foreground contamination over an incomplete sky. Furthermore, the moment expansion method is extremely useful and can be applied to extract the CMB spectral distortion signal \citep{Rotti:2020}, the 21cm global signal, the CMB B-mode signal \citep{Remazeilles:2020}, etc. This approach also allows us to use external templates to minimise the contribution of extra components, similar to internal template fitting \citep{Fernandez-Cobos:2012}.
\section*{Data Availability}
The {GAL78}\ mask is taken from PLA (\url{pla.esac.esa.int/pla/}).
\section*{Acknowledgements}
DA acknowledges the University Grants Commission, India, for providing financial support as a Senior Research Fellow. This work was supported by the Science and Engineering Research Board, Department of Science and Technology, Govt. of India, grant number SERB/ECR/2018/000826. Some of the computations in this paper were done on the Pegasus cluster\footnote{\url{http://hpc.iucaa.in/}} at IUCAA. DA thanks Prof. Tarun Souradeep, Dr. Tuhin Ghosh and Dr. Shabbir Shaikh for useful discussions regarding this work.
\bibliographystyle{mn2e}
\section{Introduction}
\IEEEPARstart{O}{bject} detection is a fundamental problem in computer vision, with applications in instance segmentation, scene understanding, pose estimation, image captioning and multiple object tracking (MOT), to name a few. Given an arbitrary image, the goal of object detection is to determine the presence of predefined categories and locate them in the image. Recently, with the development of convolutional neural networks, learning-based object detection methods have achieved remarkable progress beyond traditional detection methods. Meanwhile, in order to train and evaluate different detection models, several solid benchmarks for object detection have also been proposed by researchers.
The state of an object in the wild is often complicated (camouflage, occlusion or high-speed motion), which brings challenges to object detection methods. Those challenges include (1) complex environment: objects are obscured by smoke or flames; (2) intra-class variance: the appearance of the same category can be quite different; (3) inter-class similarity: the appearance of different categories can be quite similar; (4) scale: objects at different distances exhibit scale differences; (5) motion blur: objects are often in motion; (6) camouflage: objects are decorated with camouflage. Existing object detection methods therefore suffer from undesirable performance in such conditions.
In this work, we propose a Loss-Guided Attention RCNN (LGA-RCNN) to tackle those challenges by highlighting representative regions. We find that in a dense detection framework, the RoI module can generate features for almost all foreground objects, and the performance bottleneck lies in the classification of RoI features. Thus, we append an LGA module behind the RoI feature layers, which predicts $k$ Gaussian masks on the RoI feature maps to seek discriminative parts of objects for more accurate classification. In addition, an extra classification loss is imposed on the masked RoI feature maps to ensure that those Gaussian masks converge to optimal locations. Compared with common attention modules like CBAM~\cite{woo2018cbam}, which only focus on contextual information (rather than global information), our method makes full use of global information to mine representative local parts. Besides, the time and memory consumption of our method are also lower than those of global-range methods like non-local networks~\cite{wang2018non}.
Our contributions can be summarized as follows.
We propose LGA-RCNN which utilizes a loss-guided attention (LGA) module to highlight representative region of objects and improve detection performance.
\section{Related Works}
\subsection{Datasets}
Datasets play a very important role in the history of learning-based object detection methods. Previous detection datasets can be divided into single-category object datasets and multi-category object datasets (general object datasets). A single-category object dataset contains only one specific category of object such as faces~\cite{fddbTech, faceevaluation15,klare2015pushing,yang2016wider}, pedestrians~\cite{dollar2011pedestrian,zhang2017citypersons,zhang2019widerperson}, vehicles~\cite{cordts2016cityscapes}, apples~\cite{hani2020minneapple}, etc. A multi-category object dataset contains multiple types of objects such as persons, bicycles or cars. Representative multi-category object datasets include ImageNet~\cite{deng2009imagenet}, PASCAL VOC 2007~\cite{everingham2010pascal}, PASCAL VOC 2012~\cite{everingham2015pascal}, MS COCO~\cite{lin2014microsoft} and Open Images~\cite{kuznetsova2018open}. The detailed information of each dataset is listed in Table~\ref{comparsion of dataset}.
\begin{table}[t]
\caption{Comparison of Object Detection Benchmarks}
\resizebox{0.49\textwidth}{!}{
\begin{tabular}{cccc}
\toprule
Dataset & Specific Field & Categories & Boxes/Images \\ \midrule
Pascal VOC~\cite{everingham2015pascal} & Not Specific & 20 & 2.4 \\
ImageNet~\cite{deng2009imagenet} & Not Specific & 200 & 1.1 \\
COCO~\cite{lin2014microsoft} & Not Specific & 80 & 7.3 \\
OpenImages~\cite{kuznetsova2018open} & Not Specific & 600 & 9.3 \\ \bottomrule
\end{tabular}%
}
\label{comparsion of dataset}
\end{table}
Although those datasets have shown their effectiveness under the verification of numerous algorithms, they were collected for generic object detection, in which the types of objects are broad but not specialized. Datasets for specific fields are necessary because the characteristics of objects in different fields are quite different, and detection methods for a specific field need to be adapted to these characteristics, such as apple detection using an enhanced YOLO-v3~\cite{tian2019apple}. Thus, a robust detection algorithm is quite necessary.
\subsection{Methods}
According to whether they utilize region proposals, object detection methods can be divided into two mainstreams: two-stage methods and one-stage methods.
\subsubsection{Two-Stage Methods}
Similar to traditional object detection methods, two-stage object detection methods utilize a region proposal stage to generate sufficient candidate regions.
Inspired by selective search~\cite{uijlings2013selective}, Girshick~\cite{girshick2014rich} proposes RCNN in 2014 for generic object detection. However, repetitive feature extraction in RCNN causes slow operation. Thus, He et al.~\cite{he2015spatial} propose SPPNet to reduce computation time by obtaining proposals from the whole feature maps rather than the whole source image. Besides, Fast RCNN~\cite{girshick2015fast} is proposed with a Region of Interest (RoI) pooling layer to generate proposals of the same scale; the networks behind the RoI layer become end-to-end, so detection speed is accelerated. Moreover, Ren et al.~\cite{ren2015faster} replace selective search with a Region Proposal Network (RPN) in Faster RCNN, which sets $k$ anchors with different aspect ratios on feature maps to generate proposals.
Recently, more two-stage methods~\cite{dai2016r, li2017light, gkioxari2019mesh, lu2019grid, beery2020context} are proposed to enhance speed and performance. However, due to the existence of RoI, the speed of the two-stage method is still slow and cannot meet the requirements of real-time detection. Thus, one-stage methods are proposed.
\subsubsection{One-Stage Methods}
Unlike two-stage methods, one-stage methods achieve object detection without a distinct region proposal stage. According to whether they utilize anchors, they can be further divided into anchor-based methods and anchor-free methods.
Anchor-based one-stage methods apply anchors to classify object categories directly rather than to generate region proposals. Liu et al.~\cite{liu2016ssd} propose a fully convolutional network, SSD, which sets anchors on features at multiple scales to detect objects of different sizes. Then, Kong et al.~\cite{kong2017ron} propose an enhanced SSD algorithm, RON, that adds multiple deconvolutional layers to improve detection of small objects. Lin et al.~\cite{lin2017focal} propose RetinaNet with 9 anchors at each FPN scale. This work also introduces the focal loss to address the imbalance between positive and negative samples.
Those anchor-based one-stage methods depend heavily on the setting of anchor parameters, and an unreasonable configuration prevents the anchor boxes from matching the target boxes well, resulting in a performance drop. Thus, anchor-free one-stage methods are proposed~\cite{redmon2016you, law2018cornernet, zhou2019objects, zhou2019bottom}. Specifically, YOLO~\cite{redmon2016you} regards object detection as a regression problem, where the feature map is split into $S\times S$ grid cells and each cell is responsible for predicting objects centered at that cell. CornerNet~\cite{law2018cornernet} and CenterNet~\cite{zhou2019objects} convert the object detection problem into a keypoint detection problem. Besides, ExtremeNet~\cite{zhou2019bottom} utilizes labeled data from segmentation datasets to predict the boundary points and the center point of the object. The boundary points are guaranteed to fall into the foreground area, so they are easier to detect than corner points. However, this algorithm needs to be trained with mask annotations, increasing the acquisition cost.
\section{LGA R-CNN}
\label{method}
As illustrated above, several challenges exist in object detection, e.g., occlusion, camouflage, and complex environments, which cause a performance drop to some degree. Thus, to address these issues, we propose LGA R-CNN for object detection.
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.9\textwidth]{det_bench.pdf}
\end{center}
\caption{Illustration of our proposed LGA R-CNN. Foreground object features are extracted with a Region Proposal Network (RPN). Then, a Loss-Guided Attention (LGA) module predicts several Gaussian maps from the object feature and highlights discriminative parts of the feature map using those predicted Gaussian maps. The LGA module is supervised and guided by a classification loss on the highlighted feature map. Furthermore, in order to achieve better regression performance, we fuse global information (the original feature) and local information (the highlighted feature) for the final classification and regression.}
\label{fig:framework}
\end{figure*}
\subsection{Overall}
We build our method, LGA-RCNN, on the R-CNN framework; the whole pipeline is illustrated in Figure~\ref{fig:framework}. Given an arbitrary image, the RCNN detector utilizes a backbone network and a region proposal network (RPN) to generate feature maps with proposals. Then, RoI align is applied to crop RoI feature maps from the whole-image feature maps. In such a dense detection framework, the performance bottleneck lies in the networks behind the RoI features. Thus, besides the common classification and regression branches, we append an auxiliary LGA module on the RoI feature maps to predict and highlight representative regions for more accurate classification. Afterwards, those highlighted features are fused with the original RoI features for more precise classification and regression.
\subsection{LGA Module}
The principle behind the design of the LGA module is to mine and highlight the more representative and discriminative regions of the object, and to reduce the adverse effect of regions with occlusion, camouflage or other interference. To achieve this, the proposed component should be able to sense global information and seek the local regions with more discriminative clues. Thus, we utilize a network to predict Gaussian attention masks from the global RoI features. Assuming that representative regions should be discriminative enough for a detector to perform classification on their own, e.g., a person's face is strong enough to be distinguished from other categories, we attach a classification loss to force the LGA module to learn better attention. Furthermore, the original global information needs to be maintained for accurate localization and classification fine-tuning. Thus, we fuse the masked, locally enhanced feature maps with the original feature maps before the final detection heads.
\subsubsection{Gaussian mask prediction}
Common attention modules such as CBAM~\cite{woo2018cbam} are implemented with channel-wise pooling and spatial convolution, which leads to a lack of global information. Non-local methods are able to perceive global information, but they are much more complicated and time-consuming. In the LGA module, we construct a learnable mapping function that maps the global features to Gaussian parameters ($\mu$ and $\sigma$) and then transfers those parameters into Gaussian masks. To be specific, given an RoI feature $x$ with $256$ channels and $7\times7$ spatial resolution, we first downsample the feature to a lower channel dimension with a network $f_{d}$ to avoid high complexity. Then, a network $f_{c}$ is applied on the downsampled feature to predict the Gaussian parameters.
\begin{equation}
\begin{aligned}
\mu &= S_{ratio}*Tanh(f_{c}(f_{d}(x))) \\
\sigma &= ReLU(f_{c}(f_{d}(x))) + 1
\end{aligned}
\end{equation}
We utilize $Tanh$ and $S_{ratio}$ to ensure that $\mu$ falls within the spatial resolution of the feature ($[0,7]$ in this case) and $ReLU$ to ensure that $\sigma$ is no less than $1$. The Gaussian parameters are capable of representing some instance-level semantic information: for an RoI region of $7\times7$ size, the way we obtain the Gaussian parameters ensures that they sense high-level semantic features of the target instance. $\mu$ can be regarded as a position prediction for the discriminative region, while $\sigma$ can be regarded as the scale of this region.
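As an illustration of how the predicted parameters could be turned into a $7\times7$ attention mask, here is a minimal pure-Python sketch. The value of $S_{ratio}$ and the additive shift that places $\mu$ in $[0,7]$ (rather than in $[-S_{ratio},S_{ratio}]$, which $Tanh$ alone would give) are our assumptions, as is the separable, axis-aligned form of the Gaussian.

```python
import math

S_RATIO = 3.5  # assumed value so that mu spans the 7x7 RoI grid

def gaussian_params(raw_mu, raw_sigma):
    """Map raw network outputs to mask parameters, following Eq. (1).
    A +S_RATIO shift is assumed so that mu lands in [0, 7]."""
    mu = [S_RATIO * math.tanh(r) + S_RATIO for r in raw_mu]
    sigma = [max(r, 0.0) + 1.0 for r in raw_sigma]  # ReLU(.) + 1 >= 1
    return mu, sigma

def gaussian_mask(mu, sigma, size=7):
    """size x size attention mask, peaked at mu with spread sigma."""
    (mx, my), (sx, sy) = mu, sigma
    return [[math.exp(-((x - mx) ** 2 / (2 * sx ** 2)
                        + (y - my) ** 2 / (2 * sy ** 2)))
             for x in range(size)] for y in range(size)]

# Raw outputs here stand in for f_c(f_d(x)) on one RoI.
mu, sigma = gaussian_params([0.3, -0.2], [0.5, -1.0])
mask = gaussian_mask(mu, sigma)
```

Multiplying the RoI feature map element-wise by such a mask suppresses activations far from the predicted discriminative region while leaving the peak untouched.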
\subsubsection{Loss-Guided Training}
After initialization, different Gaussian masks pay attention to different regions, i.e., different local features are enhanced. We hope that those Gaussian masks will focus on the more representative and discriminative regions. For example, in a picture with excavators and vehicles, unique parts such as the caterpillar tread are more discriminative than similar parts like the steel shell. To achieve this, we apply an extra classification loss on the masked RoI feature maps for supervision. A common attention module may still benefit performance while focusing on the steel shell; however, that highlighted feature can be a disadvantage when distinguishing excavators from vehicles. Loss-guided attention is instead designed to focus on a more discriminative region, such as the barrel, which would not be a part of the vehicles. With the supervision of the classification loss on the Gaussian-masked feature, the LGA module is forced to search for such regions to make the newly attached loss decline.
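The resulting objective can be sketched as the usual detection losses plus the auxiliary classification loss computed on the masked features. The weighting factor \texttt{lam} is our assumption; the paper does not specify how the terms are balanced.

```python
import math

def cross_entropy(logits, label):
    """Softmax cross-entropy for a single RoI (numerically stable)."""
    m = max(logits)
    lse = m + math.log(sum(math.exp(z - m) for z in logits))
    return lse - logits[label]

def lga_total_loss(det_cls_logits, lga_cls_logits, label, reg_loss, lam=1.0):
    """Detection classification + regression losses, plus the auxiliary
    classification loss on the Gaussian-masked features that guides LGA.
    The weight lam is an assumption, not a value given in the paper."""
    return (cross_entropy(det_cls_logits, label) + reg_loss
            + lam * cross_entropy(lga_cls_logits, label))

loss = lga_total_loss([2.0, 0.1, -1.0], [1.5, 0.2, -0.5], 0, reg_loss=0.3)
```

Gradients from the auxiliary term flow back through the mask into the networks $f_d$ and $f_c$, which is what pushes $\mu$ and $\sigma$ toward discriminative regions.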
\subsubsection{Feature Fusion}
Although classification accuracy is improved by the enhanced local information, part of the global information is sacrificed in the highlighted RoI features. Therefore, inaccurate position regression would result if we directly used the highlighted features to locate the object. In order to maintain the accuracy and robustness of the bounding-box regression process, we fuse the masked RoI feature maps with the original RoI feature maps to combine local information with global information, and apply the final detection on the fused RoI features. Furthermore, some of the Gaussian masks focus on marginal regions of the object, so the fused RoI features are more sensitive to the outline of the objects, which improves localization.
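A toy sketch of the fusion step on a single 2-D feature slice follows. The additive (residual-style) combination used here is one plausible instantiation; the paper does not pin down the exact fusion operator, and channel concatenation would be another common choice.

```python
def highlight(feat, mask):
    """Element-wise re-weighting of a 2-D feature slice by a Gaussian mask."""
    return [[f * m for f, m in zip(fr, mr)] for fr, mr in zip(feat, mask)]

def fuse(feat, masks):
    """Residual-style fusion of the original slice with every masked copy.
    An additive combination is assumed here for illustration."""
    out = [row[:] for row in feat]          # keep the global information
    for mask in masks:
        h = highlight(feat, mask)           # local, highlighted information
        out = [[o + hv for o, hv in zip(orow, hrow)]
               for orow, hrow in zip(out, h)]
    return out

feat = [[1.0, 2.0], [3.0, 4.0]]
masks = [[[1.0, 0.0], [0.0, 1.0]]]          # one toy (binary) mask
fused = fuse(feat, masks)                   # → [[2.0, 2.0], [3.0, 8.0]]
```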
\section{Conclusion}
In this work, we analyze several challenges in object detection, including camouflage, motion blur, complicated environments, intra-class variance, inter-class similarity and scale. Then, we propose the Loss-Guided Attention RCNN (LGA-RCNN) to address these issues by adding an LGA module to the common R-CNN framework. The LGA module utilizes a network to predict Gaussian masks from RoI features and forces those masks to focus on representative regions of the object via an extra LGA loss.
\newpage
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{./IEEEtran}
\section{Introduction}
\label{sec:intro}
Images captured by whole sky imagers (WSIs) have been widely analyzed for various remote sensing applications such as radio communications, climate analysis, and weather forecasting. These imagers provide images with high temporal and spatial resolution at low cost compared to their satellite counterparts~\cite{Dev2016GRSM, wang2018ground}. This makes them especially useful in fields where short-term analysis is required, such as solar irradiance forecasting~\cite{solar_irr_pred, IGARSS_solar}.
While WSIs can generate a large amount of data, the generated images are often prone to high noise from sun glare, bird flocks, and dust particles. Thus, post-processing of the generated images generally becomes an important prerequisite. In such a scenario, a cheaper apparatus design not only helps to reduce the overall information-retrieval cost but also creates the possibility of installing multiple imagers in the same locality to further reduce noise.
\begin{table}[!ht]
\centering
\begin{tabular}{ || m{6cm} | c || }
\hline\hline
\textbf{Items} & \textbf{Cost (in \$)} \\
\hline\hline
Raspberry Pi 4 Model-B with 4GB RAM & 70 \\
\hline
Raspberry Pi High Quality Camera (with interchangeable lens base) & 70 \\
\hline
6mm Wide Angle Lens for Raspberry Pi High Quality Camera & 35 \\
\hline
Portable 1.5TB External Hard Drive & 60 \\
\hline
64GB microSDXC Memory Card & 10 \\
\hline
USB Type-C 15.3W Power Supply & 10 \\
\hline
DS3231 RTC Module & 3 \\
\hline
Duracell 2032 Coin Battery 3V & 2 \\
\hline
Micro HDMI to HDMI Cable (6 feet) & 7 \\
\hline
Metal Armour Case for Raspberry Pi 4 & 18 \\
\hline
Electronic Items: LDR, Push Button, LEDs, Resistances, and Breadboard & 4 \\
\hline
Polystyrene Insulating Ice Box & 3 \\
\hline
Plastic bottle and wooden pedestal for camera & 1 \\
\hline
Glass Dome & 1 \\
\hline
PVC Plyboard (1 sq. feet) & 1 \\
\hline
Electrical Accessories \& Other Items & 4 \\
\hline\hline
\textbf{Total Cost} & \textbf{299} \\
\hline\hline
\end{tabular}
\caption{Component description and cost analysis for designing the low-cost WSI}\vspace{-0.4cm}
\label{table:costTable}
\end{table}
An ideal WSI design must be climate-proofed and widely implementable, and should capture high-resolution images. Different models have been developed for various industrial and research purposes. TSI~880~\cite{long2001total} and the panorama camera~\cite{chen2012howis} are used for weather monitoring. The UTSA~Sky Imager~\cite{richardson2017low}, WAHRSIS~\cite{WAHRSIS}, a Canon~IXUS~II with FOV~180\degree~\cite{kazantzidis2012cloud}, and the Hemispheric Sky Imager~\cite{Long} are used in cloud classification and wind data monitoring applications. Monitoring for air traffic control is another application of WSIs, an example of which is the IR~NEC~TS~9230 camera~\cite{infrared_UK}.
While TSI~880 is a commercial model developed by Yankee Environmental Systems and costs around $\$30,000$, WSIs developed by individual research groups are generally cheaper, like WAHRSIS with a development cost of $\$1769$~\cite{WAHRSIS}. More recently, the UTSA~Sky Imager was released in $2017$ at a strikingly low cost of $\$500$~\cite{richardson2017low}.
In this paper, we present a novel low-cost design of a ground-based WSI based on the Raspberry Pi prototyping platform. Using much cheaper off-the-shelf alternatives, the design was built at an effective cost of under $\$300$. The list of components and their prices is given in Table~\ref{table:costTable}.
\section{WSI Design and Construction}\label{sec:WSIdesign}
\subsection{Mechanical Design and Construction}\label{sec:mechDesign}
\begin{figure}[!ht]
\centering
\includegraphics[width=0.85\columnwidth]{WSI_1.png}\\
\includegraphics[width=0.85\columnwidth]{WSI_2.png}
\caption{Mechanical design of the WSI. \textit{A}: glass dome; \textit{B}: camera assembly \& LDR; \textit{C}: camera holder and pedestal; \textit{D}: insulating thermocol ice box; \textit{E, F}: PVC plyboard; \textit{G, H}: bricks to keep apparatus in place; \textit{I}: small wedges; \textit{J}: raised pedestal with bricks}
\label{fig:mechDesignWSI}
\end{figure}
A polystyrene-based insulation box (of the kind generally used to store dry ice) was placed on a pedestal to protect the basic electronic setup from outside heat and rain. The box contains the Raspberry Pi, hard drive, breadboard circuit, battery and switch. LEDs were placed in the breadboard circuit to monitor basic performance of the hard drive and WSI without attaching a screen to the setup. The Raspberry Pi was enclosed in a metal armored case that comes with two fans and thermal tapes to prevent the Pi from overheating. To ensure efficient outflow of the heat generated by the components inside the polystyrene box, small wedges (label I in Figure 1) were left on one side. Further, to protect the electronic setup from rain, a large PVC plyboard was placed on top of the box with its edges extending well beyond the boundary of the box.
The camera was mounted on a wooden pedestal inside an empty plastic bottle. A hole was made in the bottle cap to make space for the lens of the camera. A glass dome was fixed onto the hollow cap with cement, paint, and fast-setting epoxy resin and hardener. This was done to make the camera dome water resistant and to protect the camera from rain and other contaminants. To adjust proper shutter settings, a light-dependent resistor (LDR) was placed alongside the camera in the dome.
To connect the camera and LDR (inside the plastic bottle unit) with the electronic section (in the polystyrene box unit), slits were made in the plastic bottle and the polystyrene box. The two units were then taped together to prevent separation and breakage. To further prevent rain and moisture from penetrating through the slits, a thin polyethylene protective cover was used; it was fixed to the bottle cap and the lid of the polystyrene box to cover the slits in the middle. Finally, the whole setup was surrounded by bricks on all sides to strengthen it.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.75\columnwidth]{setup.jpg}\vspace{0.1cm}\\
\includegraphics[angle=270,origin=c,width=0.75\columnwidth]{WSI_internal.jpg}\vspace{-0.8cm}
\caption{Functional form of the designed low cost WSI}
\label{fig:functionalWSI}
\vspace{-0.5cm}
\end{figure}
\subsection{Electronic Framework \& Logic Handling}\label{sec:elecFrameworkWSI}
The electronic framework for the WSI is designed over the Raspberry Pi prototyping platform (as shown in Fig.~\ref{fig:elecFrameworkWSI}). At the software end, we have bifurcated the system into:
\begin{itemize}
\setlength\itemsep{0em}
\item the camera handling part, and
\item the data handling part
\end{itemize}
\begin{figure}[!ht]
\centering
\includegraphics[width=0.9\columnwidth]{WSI_electronic_framework.pdf}
\caption{Electronic framework of the designed low-cost WSI}
\label{fig:elecFrameworkWSI}
\vspace{-0.1cm}
\end{figure}
The camera handling part is responsible for capturing images periodically and is managed via a cron job. While the main script is tasked only with capturing an image and saving it in the local memory ($\mu$SDXC), the cron job is responsible for executing the script every $5$ minutes. Since the cron job is managed by the Linux operating system itself, it ensures fail-safe operation of the camera. The main script reads the LDR value and adjusts the shutter time of the camera accordingly: if the LDR value is very low, it indicates night-time, and hence the shutter is allowed to remain open for longer than usual in order to get a better view of the night sky.
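The LDR-driven shutter logic can be sketched as follows; the threshold and the two exposure times are illustrative placeholders, since the actual calibration depends on the LDR divider circuit and the camera settings.

```python
def shutter_time_us(ldr_value, night_threshold=50):
    """Choose an exposure time (in microseconds) from the LDR reading.
    The threshold and exposure values are illustrative assumptions."""
    if ldr_value < night_threshold:      # dark sky: long exposure
        return 5_000_000                 # 5 s
    return 20_000                        # daytime: 20 ms

# A cron entry like the one below would fire the capture script every
# 5 minutes (the path and script name are placeholders):
#   */5 * * * * python3 /home/pi/capture.py
```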
Since the local memory ($\mu$SDXC) is very small ($64$GB), the system needs to store images elsewhere in order to run for a longer period of time. To keep the camera completely portable and easy to install at remote locations, we provided the system with a $1.5$TB external hard drive (HDD), enough to store images for more than $600$ days. At $1200$ hours and $2400$ hours every day, the system checks for the presence of the external HDD and, if it is available, transfers all the data from the $\mu$SDXC to the HDD. The HDD can be manually removed and reinstalled to take a second backup of the captured images. To facilitate safe removal, $4$ LEDs and a push button are provided alongside the system. The complete process of HDD management is defined in Algorithm~\ref{algo:HDDmanagement}. Since the main code of the HDD management script runs in an infinite loop, it is configured as a daemon, and its running status is checked and maintained by another cron job.
\setlength{\textfloatsep}{15pt}
\begin{algorithm}[!ht]
$\texttt{pb} \gets \text{handle for push button}$\\
$\texttt{l}_{\texttt{1}}, \texttt{l}_{\texttt{2}}, \texttt{l}_{\texttt{3}}, \texttt{l}_{\texttt{4}} \gets \text{handles for 4 LEDs}$\\
$\texttt{HDDstate} \gets TRUE$\\
$\texttt{ct()} \gets$ utility function to fetch current system time\\
$\texttt{hs()} \gets$ utility function to fetch exact HDD state\\
\Comment{\textbf{Interrupt Service Routine:}}
\If{$state(\texttt{pb}) == pressed$}{
$state(\texttt{l}_{\texttt{1}}) = ON$\;
safely eject HDD\;
$state(\texttt{HDDstate}) = FALSE$\;
$state(\texttt{l}_{\texttt{2}}) = ON$\;
$wait(5s)$\;
$state(\texttt{l}_{\texttt{1}}, \texttt{l}_{\texttt{2}}) = OFF$\;
}
\Comment{\textbf{Main Code (run in infinite loop):}}
\eIf{$\texttt{HDDstate}$ and \texttt{ct()} == (1200 or 2400)}{
$state(\texttt{l}_{\texttt{4}}) = ON$\;
transfer data from $\mu$SDXC to HDD\;
$state(\texttt{l}_{\texttt{4}}) = OFF$\;
}{
\If{not $\texttt{HDDstate}$ and $\texttt{hs()}$}{
$\texttt{HDDstate} = TRUE$\;
$state(\texttt{l}_{\texttt{3}}) = ON$\;
$wait(5s)$\;
$state(\texttt{l}_{\texttt{3}}) = OFF$\;
}
}
\caption{HDD management daemon}
\label{algo:HDDmanagement}
\end{algorithm}
\section{Results \& Limitations}\label{sec:results}
The designed WSI has been capturing sky/cloud images for over $6$ months now with only a few instances of failure. Figure~\ref{fig:WSIcaptuedImages} shows some of the captured images during different times of the day. The salient features of the designed model can be summarized as follows:
\vspace{-0.5em}
\begin{itemize}
\setlength\itemsep{-0.4em}
\item Captures images in high resolution while automatically adjusting the shutter speed enabling it to capture images in low-light conditions
\item Local backup facility with easy handling of hard drive to avoid dependence on internet
\item Deploys RTC to keep track of time during power outage
\item Durable and low-cost chassis which is well-protected from outside heat and other weather conditions
\end{itemize}
While the cloud cover can be identified clearly in the captured images, some of them still suffer from the following problems:
\begin{itemize}
\setlength\itemsep{-0.4em}
\item Unrealistic coloration of the images due to infrared light captured by the camera
\item Sun glare entering at the corners of images, making it hard to identify the cloud cover accurately
\item Dust particles and water vapour sometimes get deposited on the glass dome (which protects the camera), resulting in blurry images
\item In rare cases, the camera sensor heats up and fails to capture any image whatsoever
\end{itemize}
\vspace{-0.2cm}
\begin{figure*}
\centering
\includegraphics[width=0.23\textwidth]{day_clearSky.jpg}
\includegraphics[width=0.23\textwidth]{day_WhiteClouds1_birds.jpg}
\includegraphics[width=0.23\textwidth]{day_patternedClouds.jpg}
\includegraphics[width=0.23\textwidth]{day_DarkClouds.jpg}\\
\includegraphics[width=0.23\textwidth]{fullMoon_clearSky.jpg}
\includegraphics[width=0.23\textwidth]{night_cloudy.jpg}
\includegraphics[width=0.23\textwidth]{night_patternedClouds.jpg}
\includegraphics[width=0.23\textwidth]{dusk_scatteredClouds.jpg}
\caption{Sample images that were captured by the designed WSI. \textit{Starting from top left corner in a clockwise manner -} clear sky during day; thick white clouds during day; patterned clouds during day; thick dark clouds during day; almost clear sky at dusk; patterned clouds during night; thick white clouds during night; and clear sky during night (full-moon).}
\label{fig:WSIcaptuedImages}
\end{figure*}
\section{Conclusions \& Future Work}
\label{sec:conc}
This paper presents an extremely low-cost design for a ground-based WSI using readily available components. The camera is capable of capturing high-resolution images ($4056\times3040$) with varying shutter speeds based on the LDR readings. With the capability to capture images at all times of day at a regular interval of $5$ minutes, the WSI is suitable for various applications including (but not limited to) correlation studies with meteorological variables to assist in the forecasting of solar irradiance and rainfall events. Furthermore, the additional feature of creating a local backup makes the device ultra-portable and thereby suitable for remote locations where internet access is generally not available.
In the future, we plan to reduce the limitations of the current design. To resolve the issue of the rising temperature of the camera sensor, we plan to add a cooling device for the camera, as we have for the Raspberry Pi within the box. Further, adding an IR filter underneath the lens of the camera is planned for automatic color correction of the images. Lastly, a defogger/heating element might be added to the glass dome so that condensed water droplets and/or ice can be removed automatically in order to obtain clear images. Although these modifications will increase the overall cost of the WSI, the increase is not expected to be significant.
\balance
\section*{Introduction}
The recently proved Razumov--Stroganov conjecture~\cite{RS-conj,ProofRS} is a correspondence between, on the one hand, combinatorially defined quantities called Fully Packed Loop (FPL) configurations, and on the other hand, components of the groundstate vector of the Hamiltonian in the Completely Packed Loop model. These quantities are indexed by noncrossing perfect matchings $\pi$ of $2n$ points (cf. the definition in Section~\ref{representations}). The number of FPL configurations with associated matching $\pi$ will be denoted $A_\pi$, while the corresponding components of the groundstate vector in the Completely Packed Loop model are written $\Psi_\pi$. The Razumov--Stroganov conjecture then states that $A_\pi=\Psi_\pi$ for any $\pi$.
The goal of this article is to exhibit some surprising properties of these numbers when one studies matchings with nested arches $(\pi)_p=(\cdots(\pi)\cdots)$, which means that there are $p$ nested arches above the matching $\pi$. It was conjectured in~\cite{Zuber-conj}, and subsequently proved in~\cite{CKLN,artic47}, that the quantities $A_{(\pi)_p}$ and $\Psi_{(\pi)_p}$ are polynomial in $p$. We then define the polynomial $A_\pi(t)$ by requiring that $A_\pi(p)=A_{(\pi)_p}$ for every nonnegative integer $p$.
\medskip
This paper deals with certain conjectures about these polynomials. Let $\pi$ be a matching with $n$ arches: the main conjectures deal with the description of real roots of the polynomials (Conjecture~\ref{conj:realroots}), their values at negative integers between $1-n$ and $-1$ (Conjecture~\ref{conj:dec}), evaluations at $-n$ (Conjecture~\ref{conj:gpi}) and finally the positivity of the coefficients (Conjecture~\ref{conj:posX}).
We gather some evidence for the conjectures, and prove some special cases (cf. Theorems~\ref{th:subleading} and~\ref{th:firstroot}). In the Completely Packed Loop model, one can in fact define bivariate polynomials $\Psi(\tau,t)$ that coincide with $\Psi(t)$ at $\tau=1$; it turns out that most of our conjectures admit a natural generalization in this context as well, which in some sense provides further evidence for the original conjectures.
We believe these conjectures can help us understand the numbers $A_\pi$ better. Moreover, our work on these conjectures has some interesting byproducts: first, the conjectured root multiplicities of the polynomials $A_\pi(t)$ have nice combinatorial descriptions in terms of $\pi$ (see Section~\ref{sub:combdef}). Then, from the proof of Theorem~\ref{th:subleading}, we deduce some nice formulas about products of hook lengths of partitions (Proposition~\ref{prop:newhookformulas}). Also, the proof of Theorem~\ref{th:firstroot} involves the introduction of a new multivariate integral.
\medskip
Let us give a detailed outline of this article, where $\pi$ will refer to a matching with $n$ arches. In Section~\ref{sec:defi}, we define the quantities $A_\pi$ and $\Psi_\pi$, and formulate the Razumov--Stroganov conjecture. We introduce in Section~\ref{sec:polynomials} the central objects of our study, the polynomials $A_\pi(t)$. It is also recalled how to approach the computation of these polynomials.
The main conjectures about the $A_\pi(t)$ are gathered in Section~\ref{sec:conj}: they are Conjectures~\ref{conj:realroots}, \ref{conj:dec}, \ref{conj:gpi} and~\ref{conj:posX}. We also give substantial evidence for these conjectures, the most important piece being perhaps that they have been checked for all matchings with $n\leq 8$.
The next two sections address particular cases of some of the conjectures: in Section~\ref{sec:subleading}, we are concerned with the computation of the subleading term of the polynomials. The main result, Theorem~\ref{th:subleading}, shows that this subleading term is positive; it is thus a special case of Conjecture~\ref{conj:posX}. We give two proofs of this result, from which we derive some nice formulas mixing hook lengths and contents of partitions (Proposition~\ref{prop:newhookformulas}). Section~\ref{sec:firstroot} is concerned with the proof that if $\{1,2n\}$ is not an arch in $\pi$, then $A_\pi(-1)=0$; this is a special case of Conjecture~\ref{conj:realroots}. The proof relies on the multivariate polynomial extension of $\Psi_\pi$, whose main properties are recalled briefly.
Section~\ref{sec:tau} deals with certain bivariate polynomials $\Psi_\pi(\tau,t)$ which specialize to $A_\pi(t)$ when $\tau=1$. It turns out that the conjectures of Section~\ref{sec:conj} generalize in a very satisfying way. We finally give two appendices: Appendix~\ref{app:equivab} gives a proof of the technical result in Theorem~\ref{th:equivab}, while Appendix~\ref{app:examples} lists some data on the polynomials $A_\pi(t)$.
\section{Definitions}
\label{sec:defi}
We first introduce matchings and different notions related to them. We then describe Fully Packed Loop configurations, as well as the Completely Packed Loop model.
\subsection{Matchings}
\label{representations}
A matching\footnote{Our matchings are usually called {\em perfect noncrossing matchings} in the literature, but this is the only kind of matchings we will encounter, so there will be no possible confusion.} $\pi$ of size $n$ is defined as a set of $n$ disjoint pairs of the integers $\{1,\ldots,2n\}$ which are {\em noncrossing}, in the sense that if $\{i,j\}$ and $\{k,l\}$ are two pairs in $\pi$ with $i<j$ and $k<l$, then it is forbidden to have $i<k<j<l$ or $k<i<l<j$. We will represent matchings by sets of arches on $2n$ horizontally aligned points labeled from $1$ to $2n$. There are $\frac{1}{n+1}\binom{2n}{n}$ matchings with $n$ pairs; this is the famous $n$th Catalan number.
Matchings can be represented by other equivalent objects:
\begin{itemize}
\item A well-formed sequence of parentheses, also called \emph{parenthesis word}. Given an arch in a matching, the point connected to the left (respectively to the right) is encoded by an opening parenthesis (resp. by a closing parenthesis);
\[
\begin{tikzpicture}[scale=0.25]
\draw[arche] (0,0) .. controls (0,.5) and (1,.5) .. (1,0);
\draw[arche] (2,0) .. controls (2,1.5) and (5,1.5) .. (5,0);
\draw[arche] (3,0) .. controls (3,.5) and (4,.5) .. (4,0);
\draw[line] (-.5,0) -- (5.5,0);
\end{tikzpicture}
\Leftrightarrow ()(())
\]
\item A Dyck path, which is a path between $(0,0)$ and $(2n,0)$ with steps NE $(1,1)$ and SE $(1,-1)$ that never goes under the horizontal line $y=0$. An opening parenthesis corresponds to a NE step, and a closing one to a SE step;
\[
()(()) \Leftrightarrow
\begin{tikzpicture}[scale=0.25, baseline=2pt]
\draw[dyck] (0,0) -- (1,1) -- (2,0) -- (3,1) -- (4,2) -- (5,1) -- (6,0);
\end{tikzpicture}
\]
\item A Young diagram is a collection of boxes, arranged in left-justified rows, such that the row sizes are weakly decreasing from top to bottom. Matchings with $n$ arches are in bijection with Young diagrams such that the $i$th row from the top has no more than $n-i$ boxes. The Young diagram can be constructed as the complement of a Dyck path, rotated $45^\circ$ counterclockwise;
\[
\begin{tikzpicture}[scale=0.25, baseline=3pt]
\draw[dyck] (0,0) -- (1,1) -- (2,0) -- (3,1) -- (4,2) -- (5,1) -- (6,0);
\draw[young, dotted] (1,1) -- (3,3);
\draw[young, dotted] (2,0) -- (4,2);
\draw[young, dotted] (1,1) -- (2,0);
\draw[young, dotted] (2,2) -- (3,1);
\draw[young, dotted] (3,3) -- (4,2);
\end{tikzpicture}
\Leftrightarrow
\begin{tikzpicture}[scale=0.25, baseline=-10pt]
\draw[young] (0,0) -- (0,-2);
\draw[young] (1,0) -- (1,-2);
\draw[young] (0,0) -- (1,0);
\draw[young] (0,-1) -- (1,-1);
\draw[young] (0,-2) -- (1,-2);
\end{tikzpicture}
\]
\item A sequence $a=\{a_1,\ldots,a_n\}\subseteq\{1,\ldots,2n\}$, such that $a_{i-1}<a_i$ and $a_i\leq 2i-1$ for all $i$. Here $a_i$ is the position of the $i$th opening parenthesis.
\[
()(()) \Leftrightarrow \{1,3,4\}
\]
\end{itemize}
We will often identify matchings under those different representations, through the bijections explained above. We may need at times to stress a particular representation: thus we write $Y(\pi)$ for the Young diagram associated to $\pi$, and $a(\pi)$ for the increasing sequence associated to $\pi$, etc.
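These correspondences are straightforward to implement; the following is a minimal Python sketch (the helper names and the $1$-indexing conventions are ours) converting between the sequence, parenthesis-word and arch representations, and checking the Catalan count.

```python
from math import comb

def matchings(n):
    """All matchings of size n as increasing sequences a with a_i <= 2i-1
    (the positions of the opening parentheses)."""
    def rec(pre):
        if len(pre) == n:
            yield tuple(pre)
            return
        for v in range((pre[-1] + 1) if pre else 1, 2 * len(pre) + 2):
            yield from rec(pre + [v])
    return list(rec([]))

def to_word(a, n):
    """Sequence representation -> parenthesis word."""
    opens = set(a)
    return "".join("(" if p in opens else ")" for p in range(1, 2 * n + 1))

def to_arches(word):
    """Parenthesis word -> set of arches {i, j} (points labeled 1..2n)."""
    stack, arches = [], set()
    for pos, c in enumerate(word, start=1):
        if c == "(":
            stack.append(pos)
        else:
            arches.add(frozenset({stack.pop(), pos}))
    return arches

assert to_word((1, 3, 4), 3) == "()(())"
assert to_arches("()(())") == {frozenset({1, 2}), frozenset({3, 6}), frozenset({4, 5})}
for n in range(1, 7):
    assert len(matchings(n)) == comb(2 * n, n) // (n + 1)   # Catalan numbers
```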
We will represent $p$ nested arches around a matching $\pi$ by ``$(\pi)_p$'', and $p$ consecutive small arches by ``$()^p$''; thus for instance
\[
((((()()))))()()()=(()^2)_4()^3.
\]
We define a {\em partial order} on matchings as follows: $\sigma \leq \pi$ if the Young diagram of $\pi$ contains the Young diagram of $\sigma$, that is $Y(\sigma)\subseteq Y(\pi)$. In the Dyck path representation, this means that the path corresponding to $\sigma$ is always weakly above the path corresponding to $\pi$; in the sequence representation, if we write $a=a(\sigma)$ and $a'=a(\pi)$, then this is simply expressed by $a_i\leq a'_i$ for all $i$.
Given a matching $\pi$, we define $d(\pi)$ as the total number of boxes in the Young diagram $Y(\pi)$. We also let $\pi^*$ be the conjugate matching of $\pi$, defined by: $\{i,j\}$ is an arch in $\pi^*$ if and only if $\{2n+1-j,2n+1-i\}$ is an arch in $\pi$. This corresponds to a mirror symmetry of the parenthesis word, and a transposition in the Young diagram. We also define a natural {\em rotation} $r$ on matchings: $i,j$ are linked by an arch in $r(\pi)$ if and only if $i+1,j+1$ are linked in $\pi$ (where indices are taken modulo $2n$). These last two notions are illustrated on Figure~\ref{fig:matchings}.
\begin{figure}[!ht]
\begin{align*}
\pi&=
\begin{tikzpicture}[scale=0.25]
\draw[arche] (0,0) .. controls (0,.5) and (1,.5) .. (1,0);
\draw[arche] (2,0) .. controls (2,2) and (9,2) .. (9,0);
\draw[arche] (3,0) .. controls (3,.5) and (4,.5) .. (4,0);
\draw[arche] (5,0) .. controls (5,1) and (8,1) .. (8,0);
\draw[arche] (6,0) .. controls (6,.5) and (7,.5) .. (7,0);
\draw[line] (-.5,0) -- (9.5,0);
\end{tikzpicture}
&
\pi^*&=
\begin{tikzpicture}[scale=0.25]
\draw[arche] (9,0) .. controls (9,.5) and (8,.5) .. (8,0);
\draw[arche] (7,0) .. controls (7,2) and (0,2) .. (0,0);
\draw[arche] (6,0) .. controls (6,.5) and (5,.5) .. (5,0);
\draw[arche] (4,0) .. controls (4,1) and (1,1) .. (1,0);
\draw[arche] (3,0) .. controls (3,.5) and (2,.5) .. (2,0);
\draw[line] (-.5,0) -- (9.5,0);
\end{tikzpicture}
&
r(\pi)&=
\begin{tikzpicture}[scale=0.25]
\draw[arche] (0,0) .. controls (0,2.5) and (9,2.5) .. (9,0);
\draw[arche] (1,0) .. controls (1,2) and (8,2) .. (8,0);
\draw[arche] (2,0) .. controls (2,.5) and (3,.5) .. (3,0);
\draw[arche] (4,0) .. controls (4,1) and (7,1) .. (7,0);
\draw[arche] (5,0) .. controls (5,.5) and (6,.5) .. (6,0);
\draw[line] (-.5,0) -- (9.5,0);
\end{tikzpicture}
\end{align*}
\caption{A matching, its conjugate, and the rotated matching.\label{fig:matchings}}
\end{figure}
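As a quick sanity check of these definitions (function names ours), conjugation and rotation act directly on the arch representation; conjugation is an involution and $r^{2n}$ is the identity.

```python
# Matchings as sets of arches; each arch is a frozenset of two labels in 1..2n.
def conjugate(arches, n):
    """pi*: {i, j} is an arch of pi* iff {2n+1-j, 2n+1-i} is an arch of pi."""
    return {frozenset(2 * n + 1 - x for x in arch) for arch in arches}

def rotate(arches, n):
    """r(pi): i, j linked in r(pi) iff i+1, j+1 linked in pi (labels mod 2n)."""
    return {frozenset((x - 2) % (2 * n) + 1 for x in arch) for arch in arches}

pi = {frozenset(a) for a in [(1, 2), (3, 6), (4, 5)]}        # the matching ()(())
assert conjugate(pi, 3) == {frozenset(a) for a in [(1, 4), (2, 3), (5, 6)]}  # (())()
assert conjugate(conjugate(pi, 3), 3) == pi                   # involution
m = pi
for _ in range(6):                                            # r^{2n} = identity
    m = rotate(m, 3)
assert m == pi
```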
We need additional notions related to the Young diagram representation. So let $Y$ be a Young diagram, and $u$ one of its boxes. The {\em hook length} $h(u)$ is the number of boxes below $u$ in the same column, or to its right in the same row (including the box $u$ itself). We write $H_Y$ for the product of all hook lengths, i.e. $H_Y=\prod_{u\in Y} h(u)$. The {\em content} $c(u)$ is given by $y-x$ if $u$ is located in the $x$th row from the top and the $y$th column from the left; we write $u=(x,y)$ in this case. The {\em rim} of $Y$ consists of all boxes of $Y$ which are on its southeast boundary; removing the rim of a partition leaves another partition, and repeating this operation until the partition is empty gives us the {\em rim decomposition} of $Y$.
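These notions translate directly into code; a short sketch (names ours) computing hook lengths, contents and the rim decomposition of small partitions:

```python
from math import prod

def boxes(la):
    """Boxes (x, y) of a partition la; x = row from the top, y = column, 1-indexed."""
    return [(x, y) for x in range(1, len(la) + 1) for y in range(1, la[x - 1] + 1)]

def hook(la, x, y):
    """Hook length: boxes to the right in the row, below in the column, plus u itself."""
    below = sum(1 for r in la[x:] if r >= y)
    return (la[x - 1] - y) + below + 1

def content(x, y):
    return y - x

def rim_decomposition(la):
    """Peel off successive rims: a box is in the rim iff no box lies to its southeast."""
    rims, la = [], list(la)
    while la:
        rims.append([(x, y) for (x, y) in boxes(la)
                     if not (x < len(la) and y < la[x])])
        la = [la[x] - 1 for x in range(1, len(la)) if la[x] >= 2]
    return rims

assert prod(hook((2, 1), x, y) for (x, y) in boxes((2, 1))) == 3   # hooks 3, 1, 1
assert sorted(content(x, y) for (x, y) in boxes((2, 1))) == [-1, 0, 1]
assert [len(r) for r in rim_decomposition((2, 2))] == [3, 1]       # two rims
```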
\subsection{Fully Packed Loops}
\label{sub:FPLintro}
A \emph{Fully Packed Loop configuration} (FPL) of size $n$ is a subgraph of the square grid with $n^2$ vertices, such that each vertex is connected to exactly two edges. We furthermore impose the following boundary conditions: the grid is assumed to have $n$ external edges on each side, and we select every second one of these edges to be part of our FPLs. By convention, we fix that the topmost external edge on the left boundary is part of the selected edges, which thus fixes the entire boundary of our FPLs. We number these external edges counterclockwise from $1$ to $2n$, see Figure~\ref{fig:fplexample}.
\begin{figure}[!ht]
\begin{center}
\includegraphics[page=5,width=0.7\textwidth]{FPL_Tiago_fig}
\end{center}
\caption{FPL with its associated matching \label{fig:fplexample}}
\end{figure}
In each FPL configuration $F$ the chosen external edges are clearly linked by paths which do not cross each other. We define $\pi(F)$ as the set of pairs $\{i,j\}$ of integers in $\{1,\ldots,2n\}$ such that the external edges labeled $i$ and $j$ are linked by a path in $F$. Then $\pi(F)$ is a matching in the sense of Section~\ref{representations}; an example is given on the right of Figure~\ref{fig:fplexample}.
\begin{defi}[$A_\pi$]
For any matching $\pi$, we define $A_\pi$ as the number of FPLs $F$ such that $\pi(F)=\pi$.
\end{defi}
A result of Wieland~\cite{wieland} shows that a rotation on matchings leaves the numbers $A_\pi$ invariant, and it is then easily seen that conjugation of matchings also leaves them invariant:
\begin{thm}[\cite{wieland}]
\label{thm:invar_api}
For any matching $\pi$, we have $A_\pi=A_{r(\pi)}$ and $A_\pi=A_{\pi^*}$.
\end{thm}
Now we let $A_n$ be the total number of FPLs of size $n$; by definition we have $A_n=\sum_\pi A_\pi$ where $\pi$ goes through all matchings with $n$ arches. We also define $A_{n}^V$ as the number of FPLs of size $n$ which are invariant with respect to vertical symmetry. It is easily seen that $A_{2n}^V=0$. We have the famous product expressions of these quantities:
\begin{align}
A_n&=\prod_{k=0}^{n-1} \frac{(3k+1)!}{(n+k)!}; \\
A_{2n+1}^V&= \frac{1}{2^n}\prod_{k=1}^n\frac{(6k-2)!(2k-1)!}{(4k-1)!(4k-2)!}.
\end{align}
The original proofs can be found in~\cite{Zeil-ASM,Kup-ASM} for $A_n$, and in~\cite{MR1954236} for $A_{n}^V$.
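These product formulas are easy to evaluate exactly with integer arithmetic; the following sketch (function names ours) reproduces the first values $A_n=1,2,7,42,429$ and $A_1^V,A_3^V,A_5^V,A_7^V=1,1,3,26$.

```python
from math import factorial as f

def A(n):
    """Total number of FPLs of size n, via the product formula above."""
    num = den = 1
    for k in range(n):
        num *= f(3 * k + 1)
        den *= f(n + k)
    return num // den                      # the ratio is an integer

def AV(m):
    """Number of vertically symmetric FPLs of odd size m = 2n+1."""
    n = (m - 1) // 2
    num = den = 1
    for k in range(1, n + 1):
        num *= f(6 * k - 2) * f(2 * k - 1)
        den *= f(4 * k - 1) * f(4 * k - 2)
    return num // (den * 2 ** n)

assert [A(n) for n in range(1, 6)] == [1, 2, 7, 42, 429]
assert [AV(m) for m in (1, 3, 5, 7)] == [1, 1, 3, 26]
```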
\subsection{Completely Packed Loop model}
\label{sub:O1}
In this subsection we explain briefly the Completely Packed Loop Model (CPL) with periodic boundary conditions; for more details see~\cite{artic47, hdr, dG-review}. Let $n$ be an integer, and define a {\em state} as a column vector indexed by matchings of size $n$.
Let $e_i$ be the operator on matchings which creates a new arch at $(i,i+1)$ and joins the vertices formerly linked to $i$ and $i+1$, as shown in the following examples:
\begin{align*}
e_3
\begin{tikzpicture}[scale=0.25, baseline=-3pt]
\draw[arche] (0,0) .. controls (0,.5) and (1,.5) .. (1,0);
\draw[arche] (2,0) .. controls (2,1.5) and (5,1.5) .. (5,0);
\draw[arche] (3,0) .. controls (3,.5) and (4,.5) .. (4,0);
\draw[line] (-.5,0) -- (5.5,0);
\end{tikzpicture} =
\begin{tikzpicture}[scale=0.25, baseline=-3pt]
\draw[arche] (0,0) .. controls (0,.5) and (1,.5) .. (1,0);
\draw[arche] (2,0) .. controls (2,1.5) and (5,1.5) .. (5,0);
\draw[arche] (3,0) .. controls (3,.5) and (4,.5) .. (4,0);
\draw[line] (-.5,0) -- (5.5,0);
\draw[arche] (0,0) -- (0,-1);
\draw[arche] (1,0) -- (1,-1);
\draw[arche] (2,0) .. controls (2,-.5) and (3,-.5) .. (3,0);
\draw[arche] (2,-1) .. controls (2,-.5) and (3,-.5) .. (3,-1);
\draw[arche] (4,0) -- (4,-1);
\draw[arche] (5,0) -- (5,-1);
\draw[line] (-.5,-1) -- (5.5,-1);
\end{tikzpicture} &=
\begin{tikzpicture}[scale=0.25, baseline=-3pt]
\draw[arche] (0,0) .. controls (0,.5) and (1,.5) .. (1,0);
\draw[arche] (2,0) .. controls (2,.5) and (3,.5) .. (3,0);
\draw[arche] (4,0) .. controls (4,.5) and (5,.5) .. (5,0);
\draw[line] (-.5,0) -- (5.5,0);
\end{tikzpicture}\\
e_4
\begin{tikzpicture}[scale=0.25, baseline=-3pt]
\draw[arche] (0,0) .. controls (0,.5) and (1,.5) .. (1,0);
\draw[arche] (2,0) .. controls (2,1.5) and (5,1.5) .. (5,0);
\draw[arche] (3,0) .. controls (3,.5) and (4,.5) .. (4,0);
\draw[line] (-.5,0) -- (5.5,0);
\end{tikzpicture} =
\begin{tikzpicture}[scale=0.25, baseline=-3pt]
\draw[arche] (0,0) .. controls (0,.5) and (1,.5) .. (1,0);
\draw[arche] (2,0) .. controls (2,1.5) and (5,1.5) .. (5,0);
\draw[arche] (3,0) .. controls (3,.5) and (4,.5) .. (4,0);
\draw[line] (-.5,0) -- (5.5,0);
\draw[arche] (0,0) -- (0,-1);
\draw[arche] (1,0) -- (1,-1);
\draw[arche] (3,0) .. controls (3,-.5) and (4,-.5) .. (4,0);
\draw[arche] (3,-1) .. controls (3,-.5) and (4,-.5) .. (4,-1);
\draw[arche] (2,0) -- (2,-1);
\draw[arche] (5,0) -- (5,-1);
\draw[line] (-.5,-1) -- (5.5,-1);
\end{tikzpicture} &=
\begin{tikzpicture}[scale=0.25, baseline=-3pt]
\draw[arche] (0,0) .. controls (0,.5) and (1,.5) .. (1,0);
\draw[arche] (2,0) .. controls (2,1.5) and (5,1.5) .. (5,0);
\draw[arche] (3,0) .. controls (3,.5) and (4,.5) .. (4,0);
\draw[line] (-.5,0) -- (5.5,0);
\end{tikzpicture}
\end{align*}
The operator $e_0$ creates an arch linking positions $1$ and $2n$. Attached to these operators is the {\em Hamiltonian}
\[
\mathcal{H}_{2n}=\sum_{i=0}^{2n-1} (1-e_i),
\]
where $1$ is the identity. $\mathcal{H}_{2n}$ acts naturally on states, and the groundstate $(\Psi_\pi)_{\pi:|\pi|=n}$ attached to $\mathcal{H}_{2n}$ is defined as follows:
\begin{defi}[$\Psi_\pi$]
\label{defi:psipi}
Let $n$ be a positive integer. We define the groundstate in the Completely Packed Loop model as the vector $\Psi=(\Psi_\pi)_{\pi:|\pi|=n}$ which is the solution of $\mathcal{H}_{2n}\Psi=0$, normalized by $\Psi_{()_n}=1$.
\end{defi}
By the Perron--Frobenius theorem, this is well defined. We then have the following properties:
\begin{thm}
\label{th:propPsipi}
Let $n$ be a positive integer.
\begin{itemize}
\item For any $\pi$, $\Psi_{r(\pi)}=\Psi_{\pi^*}=\Psi_{\pi}$.
\item The numbers $\Psi_\pi$ are positive integers.
\item $\sum_\pi \Psi_\pi = A_n$, where the sum is over matchings such that $|\pi|=n$.
\end{itemize}
\end{thm}
The stability under rotation and conjugation is clear from the symmetry of the problem. The integrality property was proved in~\cite[Section 4.4]{artic43}, while the sum rule was proved in~\cite{artic31}. The computation of this groundstate has received a lot of interest, mainly because of the Razumov--Stroganov (ex-)conjecture.
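The defining relation $\mathcal{H}_{2n}\Psi=0$ can be checked numerically for small $n$: each column of $\sum_i e_i$ (viewed as a 0/1 matrix on the matching basis) sums to $2n$, so $\Psi$ is the stationary vector of the column-stochastic matrix $\frac{1}{2n}\sum_i e_i$, and power iteration recovers it. A self-contained sketch (representations and names are ours, with points labeled $0,\ldots,2n-1$), which reproduces the $n=3$ components displayed in the next subsection and the sum rule for $n=4$:

```python
def matchings(n):
    """Noncrossing matchings on 0..2n-1, as partner tuples p (p[i] = partner of i)."""
    def rec(points):
        if not points:
            yield []
            return
        for idx in range(1, len(points), 2):    # pair points[0] at an odd offset
            for m1 in rec(points[1:idx]):
                for m2 in rec(points[idx + 1:]):
                    yield [(points[0], points[idx])] + m1 + m2
    out = []
    for m in rec(list(range(2 * n))):
        p = [0] * (2 * n)
        for a, b in m:
            p[a], p[b] = b, a
        out.append(tuple(p))
    return out

def e(i, p, n):
    """Generator e_i: make an arch (i, i+1) and rejoin the former partners."""
    j = (i + 1) % (2 * n)
    if p[i] == j:
        return p                                # arch already present
    a, b = p[i], p[j]
    q = list(p)
    q[i], q[j], q[a], q[b] = j, i, b, a
    return tuple(q)

def groundstate(n, iterations=500):
    """Power iteration for T = (1/2n) sum_i E_i; the fixed vector is proportional
    to Psi, normalized here by the component of the fully nested matching."""
    basis = matchings(n)
    index = {m: k for k, m in enumerate(basis)}
    v = [1.0] * len(basis)
    for _ in range(iterations):
        w = [0.0] * len(basis)
        for k, m in enumerate(basis):
            for i in range(2 * n):
                w[index[e(i, m, n)]] += v[k] / (2 * n)
        v = w
    nested = tuple(2 * n - 1 - i for i in range(2 * n))   # the matching ()_n
    return [round(x / v[index[nested]]) for x in v]

psi3 = groundstate(3)
assert sorted(psi3) == [1, 1, 1, 2, 2] and sum(psi3) == 7   # = A_3
assert sum(groundstate(4)) == 42                             # = A_4
```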
\subsection{The Razumov--Stroganov conjecture}
A simple computation shows that
\begin{align*}
\Psi_{
\begin{tikzpicture}[scale=0.15]
\draw[arche] (0,0) .. controls (0,.5) and (1,.5) .. (1,0);
\draw[arche] (2,0) .. controls (2,.5) and (3,.5) .. (3,0);
\draw[arche] (4,0) .. controls (4,.5) and (5,.5) .. (5,0);
\end{tikzpicture}}
&=2&
\Psi_{
\begin{tikzpicture}[scale=0.15]
\draw[arche] (0,0) .. controls (0,1.5) and (5,1.5) .. (5,0);
\draw[arche] (1,0) .. controls (1,.5) and (2,.5) .. (2,0);
\draw[arche] (3,0) .. controls (3,.5) and (4,.5) .. (4,0);
\end{tikzpicture}}
&=2 &
\Psi_{
\begin{tikzpicture}[scale=0.15]
\draw[arche] (0,0) .. controls (0,1) and (3,1) .. (3,0);
\draw[arche] (1,0) .. controls (1,.5) and (2,.5) .. (2,0);
\draw[arche] (4,0) .. controls (4,.5) and (5,.5) .. (5,0);
\end{tikzpicture}}
&=1\\
\Psi_{
\begin{tikzpicture}[scale=0.15]
\draw[arche] (0,0) .. controls (0,.5) and (1,.5) .. (1,0);
\draw[arche] (2,0) .. controls (2,1) and (5,1) .. (5,0);
\draw[arche] (3,0) .. controls (3,.5) and (4,.5) .. (4,0);
\end{tikzpicture}}
&=1 &
\Psi_{
\begin{tikzpicture}[scale=0.15]
\draw[arche] (0,0) .. controls (0,1.5) and (5,1.5) .. (5,0);
\draw[arche] (1,0) .. controls (1,1) and (4,1) .. (4,0);
\draw[arche] (2,0) .. controls (2,.5) and (3,.5) .. (3,0);
\end{tikzpicture}}
&=1
\end{align*}
which are exactly the numbers that appear in the FPL counting:
\medskip
\begin{center}
\includegraphics[page=4,width=0.7\textwidth]{FPL_Tiago_fig}
\end{center}
\medskip
Razumov and Stroganov~\cite{RS-conj} noticed in 2001 that this seems to hold in general, and this was recently proved by Cantini and Sportiello~\cite{ProofRS}:
\begin{thm}[Razumov--Stroganov conjecture]
\label{conj:rs}
The groundstate components of the Completely Packed Loop model count the number of FPL configurations: for any matching $\pi$,
\[
\Psi_\pi=A_{\pi}.
\]
\end{thm}
The proof of Cantini and Sportiello consists in verifying that the relations of Definition~\ref{defi:psipi} hold for the numbers $A_\pi$. We note also that the results of Theorem~\ref{th:propPsipi} are now a corollary of the Razumov--Stroganov conjecture.
\section{Matchings with nested arches and polynomials}
\label{sec:polynomials}
\subsection{Definitions and results}
In~\cite{Zuber-conj}, Zuber computed $\Psi_{(\pi)_p}$ for some small matchings $\pi$ and $p=0,1,2,\ldots$. Among other things, he conjectured the following:
\begin{thm}[{\cite{CKLN,artic47}}]
\label{zuber}
For any matching $\pi$ and $p$ a nonnegative integer, the quantity $A_{(\pi)_p}$ can be written in the following form:
\[
A_{(\pi)_p}=\frac{P_\pi (p)}{d(\pi)!},
\]
where $P_\pi (p)$ is a polynomial in $p$ of degree $d(\pi)$ with integer coefficients, and leading coefficient equal to $d(\pi)!/H_{\pi}$.
\end{thm}
This was proved first by Caselli, Krattenthaler, Lass and Nadeau in~\cite{CKLN} for $A_{(\pi)_p}$, and by Fonseca and Zinn-Justin in~\cite{artic47} for $\Psi_{(\pi)_p}$. Because of this polynomiality property, we introduce the following notations:
\begin{defi}[$A_\pi(t)$ and $\Psi_\pi(t)$]
We let $A_\pi(t)$ (respectively $\Psi_\pi(t)$) be the polynomial in $t$ such that $A_\pi(p)=A_{(\pi)_p}$ (resp. $\Psi_\pi(p)=\Psi_{(\pi)_p}$) for all integers $p\geq 0$.
\end{defi}
By the Razumov--Stroganov conjecture~\ref{conj:rs} one has clearly for all $\pi$:
\[
A_\pi(t)=\Psi_\pi(t).
\]
We introduced two different notations so that the origin of the quantities involved becomes clearer; in most of this paper however we will only use the notation $A_\pi(t)$. It is the objective of this paper to investigate these polynomials, and give evidence that they possess very interesting properties, in particular when they are evaluated at negative integers. The following proposition sums up some properties of the polynomials.
\begin{prop}
\label{prop:polynomials}
The polynomial $A_\pi(t)$ has degree $d(\pi)$ and leading coefficient $1/H_\pi$. Furthermore, we have $A_\pi(t)=A_{\pi^*}(t)$, and $A_{(\pi)_\ell}(t)=A_{\pi}(t+\ell)$ for any nonnegative integer $\ell$.
\end{prop}
The first part comes from Theorem~\ref{zuber}, while the rest is clear when $t$ is a nonnegative integer and thus holds true in general by polynomiality in $t$.
In this section we will recall briefly certain expressions for these polynomials, and point to other works for the proofs.
\subsection{The FPL case}
If $\pi$ is a matching with $n$ arches, the polynomial $A_\pi(t)$ admits the following expression:
\begin{equation}
\label{eq:apiX}
A_\pi(t)=\sum_{\sigma\leq \pi}a_{\sigma}^\pi\cdot S_\sigma(t-n+1),
\end{equation}
in which $\sigma$ is a parenthesis word (cf. Section~\ref{representations}), the $a_{\sigma}^\pi$ are the nonnegative integers denoted by $a(\sigma,\pi,\mathbf{0}_n)$ in~\cite{Thapper}, and $S_\sigma(t-n+1)$ is the polynomial given by
\[
S_\sigma(t-n+1)=\frac{1}{H_\sigma}\prod_{u\in Y(\sigma)}(t-n+1+c(u)),
\]
where $H_\sigma$ and $c(u)$ are defined in Section~\ref{representations}. If $N$ denotes a nonnegative integer, $S_\sigma(N)$ enumerates semistandard Young tableaux of shape $Y(\sigma)$ with entries not larger than $N$: this is the {\em hook content formula}, cf.~\cite{StanleyEnum2} for instance.
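The hook content formula is easy to test against brute-force enumeration; a small sketch (helper names ours), comparing the product formula with a direct count of semistandard Young tableaux for a few shapes:

```python
from fractions import Fraction
from itertools import product

def ssyt_count_formula(la, N):
    """Hook content formula: product over boxes of (N + c(u)) / h(u)."""
    val = Fraction(1)
    for x in range(1, len(la) + 1):
        for y in range(1, la[x - 1] + 1):
            h = (la[x - 1] - y) + sum(1 for r in la[x:] if r >= y) + 1  # hook length
            val *= Fraction(N + (y - x), h)                              # content y - x
    return val

def ssyt_count_brute(la, N):
    """Brute force: fillings weakly increasing along rows, strictly down columns."""
    bxs = [(x, y) for x in range(1, len(la) + 1) for y in range(1, la[x - 1] + 1)]
    count = 0
    for filling in product(range(1, N + 1), repeat=len(bxs)):
        T = dict(zip(bxs, filling))
        ok = all(T[(x, y)] <= T[(x, y + 1)] for (x, y) in bxs if (x, y + 1) in T) \
         and all(T[(x, y)] <  T[(x + 1, y)] for (x, y) in bxs if (x + 1, y) in T)
        count += ok
    return count

for la in [(1,), (2,), (2, 1), (2, 2), (3, 1)]:
    for N in (1, 2, 3):
        assert ssyt_count_formula(la, N) == ssyt_count_brute(la, N)
```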
\medskip
Equation~\eqref{eq:apiX} above can be derived from~\cite[Equation (4)]{Thapper} (itself based on the work ~\cite{CKLN}) together with Conjecture~3.4 in the same paper: this conjecture and the derivation are proved in~\cite{NadFPL1}.
\subsection{The CPL case} \label{sec:pol_qKZ}
In this subsection we briefly explain how to compute bivariate polynomials $\Psi_{\pi}(\tau,t)$, defined as the homogeneous limit of certain multivariate polynomials (see Section~\ref{sec:firstroot} for more details and references). We will be mostly interested in the case $\tau=1$, since we recover the groundstate $\Psi_{\pi}(t)=\Psi_{\pi}(1,t)$, as explained in~\cite{hdr}; we address the case of general $\tau$ in Section~\ref{sec:tau}.
So let $a=\{a_1,\ldots,a_n\}$ be a matching represented as an increasing sequence, and define the polynomial $\Phi_{a}(\tau)$ by:
\[
\Phi_{a}(\tau) = \oint \ldots \oint \prod_i \frac{du_i}{2 \pi i u_i^{a_i}} \prod_{j>i} (u_j-u_i) (1+\tau u_j+u_i u_j).
\]
We can then obtain the $\Psi_\pi(\tau)$ via a certain matrix $C(\tau)$:
\begin{align}
\Phi_a (\tau)=&\sum_\pi C_{a,\pi}(\tau) \Psi_\pi (\tau);\label{eq:psiphi1}\\
\Psi_\pi (\tau)=&\sum_a C^{-1}_{\pi,a}(\tau) \Phi_a (\tau).\label{eq:psiphi2}
\end{align}
The coefficients $C_{a,\pi}(\tau)$ are given explicitly in~\cite[Appendix A]{artic41}. We just need the following facts:
\begin{prop}[{\cite[Lemma 3]{artic47}}]
\label{prop:Bases}
Let $a$ and $\pi$ be two matchings. Then we have:
\[
C_{a,\pi}(\tau)=\begin{cases}
0 & \textrm{if } \pi \nleq a;\\
1 & \textrm{if } \pi=a;\\
P_{a,\pi} (\tau) & \textrm{if } \pi < a,
\end{cases}
\]
where $P_{a,\pi}(\tau)$ is a polynomial in $\tau$ with degree $\leq d(a)-d(\pi)-2$.
\end{prop}
Moreover, we have
\begin{equation}
\label{eq:capi-tau}
C_{a,\pi}(\tau)=(-1)^{d(a)-d(\pi)} C_{a,\pi}(-\tau),
\end{equation}
since it is a product of polynomials $U_s$ in $\tau$ with degree of the form $d(a)-d(\pi)-2k$, $k\in \mathbb{N}$, and parity given by $d(a)-d(\pi)$: this is an easy consequence of~\cite[p.12 and Appendix C]{artic47}.
By abuse of notation, we write $(a)_p$ for $\{1,\ldots,p,p+a_1,\ldots,p+a_n\}$, since this indeed corresponds to adding $p$ nested arches to $\pi(a)$ via the bijections of Section~\ref{sec:defi}. Then
one easy but important lemma for us is the following:
\begin{lemma}[{\cite[Lemma 4]{artic47}}]
\label{lem:sta}
The coefficients $C_{a,\pi}(\tau)$ are stable, that is:
\[
C_{(a)_p,(\pi)_p}(\tau)=C_{a,\pi}(\tau) \qquad \forall p\in \mathbb{N}.
\]
\end{lemma}
We remark that Proposition~\ref{prop:Bases}, Equation~\eqref{eq:capi-tau} and Lemma~\ref{lem:sta} also hold for the coefficients $C^{-1}_{a,\pi}(\tau)$ of the inverse matrix. Now
\begin{align*}
\Phi_{(a)_p} (\tau) =& \oint\ldots\oint \prod_{i=1}^{n+p} \frac{du_i}{2\pi i u_i^{\hat{a}_i}} \prod_{j>i} (u_j-u_i)(1+\tau u_j+u_i u_j)\\
=& \oint\ldots\oint \prod_{i=1}^{n} \frac{du_i}{2\pi i u_i^{a_i}} (1+\tau u_i)^p \prod_{j>i} (u_j-u_i)(1+\tau u_j+u_i u_j),
\end{align*}
where we integrated over the first $p$ variables and renamed the remaining ones $u_{p+i}\mapsto u_i$. This is a polynomial in $p$, and we naturally write $\Phi_{a} (\tau,t)$ for the polynomial such that $\Phi_{a} (\tau,p)=\Phi_{(a)_p}(\tau)$.
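Reading each contour integral $\oint \frac{du}{2\pi i\, u^{k}}f(u)$ as the extraction of the coefficient of $u^{k-1}$ in $f$, the quantities $\Phi_{(a)_p}(\tau)$ at integer $\tau$ reduce to elementary polynomial arithmetic, and the identity above can be tested directly. A sketch (conventions and names ours, variables $0$-indexed):

```python
from collections import defaultdict

def pmul(P, Q):
    """Multiply multivariate polynomials stored as {exponent tuple: coefficient}."""
    R = defaultdict(int)
    for ep, cp in P.items():
        for eq, cq in Q.items():
            R[tuple(x + y for x, y in zip(ep, eq))] += cp * cq
    return dict(R)

def phi(a, p=0, tau=1):
    """Phi_{(a)_p}(tau): coefficient of prod_i u_i^(a_i - 1) in the integrand,
    with the extra factors (1 + tau*u_i)^p from the displayed computation."""
    n = len(a)
    zero = tuple([0] * n)
    def u(i):                                   # the monomial u_i
        e = [0] * n
        e[i] = 1
        return tuple(e)
    P = {zero: 1}
    for j in range(n):
        for i in range(j):
            P = pmul(P, {u(j): 1, u(i): -1})    # (u_j - u_i)
            uiuj = tuple(x + y for x, y in zip(u(i), u(j)))
            P = pmul(P, {zero: 1, u(j): tau, uiuj: 1})   # 1 + tau u_j + u_i u_j
        for _ in range(p):
            P = pmul(P, {zero: 1, u(j): tau})   # (1 + tau u_j)^p
    return P.get(tuple(x - 1 for x in a), 0)

assert phi([1, 2]) == 1 and phi([1, 2, 3]) == 1    # Phi of ()_n equals Psi_{()_n} = 1
assert phi([1, 2, 4, 5]) == phi([1, 3, 4], p=1)    # one nested arch, computed both ways
```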
Finally, from Equation~\eqref{eq:psiphi2} and Lemma~\ref{lem:sta} we obtain the fundamental equation
\begin{equation}
\label{eq:psitauphitau}
\Psi_\pi (\tau,t) = \sum_a C^{-1}_{\pi,a}(\tau) \Phi_a (\tau,t).
\end{equation}
In the special case $\tau=1$, we write $C_{a,\pi}=C_{a,\pi}(1)$, $\Phi_a (t)=\Phi_a (1,t)$ and thus
\begin{equation}
\label{eq:psipiX}
A_\pi(t)=\Psi_\pi (t) = \sum_a C^{-1}_{\pi,a} \Phi_a (t),
\end{equation}
thanks to the Razumov--Stroganov conjecture~\ref{conj:rs}. This gives us a second expression for $A_\pi (t)$, the first one being given by~\eqref{eq:apiX}.
\section{The main conjectures}
\label{sec:conj}
In this section we present several conjectures about the polynomials $A_{\pi}(t)$. For each of them, we will give strong supporting evidence. We will first give a combinatorial construction that is essential in the statement of the conjectures.
\subsection{Combinatorics}
\label{sub:combdef}
We give two rules which define certain integers attached to a matching $\pi$. It turns out that the two rules are equivalent, which is the content of Theorem~\ref{th:equivab}.
Let $\pi$ be a link pattern, and $n=|\pi|$ its number of arches. We let $Y(\pi),d(\pi)$ be the Young diagram of $\pi$ and its number of boxes respectively, as defined in Section~\ref{representations}. We also use the notation $\widehat{x}=2n+1-x$ for $x\in [\![ 1,2n ]\!]$.
\medskip
{\bf Rule A:} For $p$ between $1$ and $n-1$, we consider the set $\mathcal{A}_p^L(\pi)$ of arches $\{a_1,a_2\}$ such that $a_1\leq p$ and $p<a_2<\widehat{p}$, and the set $\mathcal{A}_p^R(\pi)$ of arches $\{a_1,a_2\}$ such that $p<a_1<\widehat{p}$ and $\widehat{p}\leq a_2$. It is clear that $|\mathcal{A}_p^L(\pi)|+|\mathcal{A}_p^R(\pi)|$ is an even integer, and we can thus define the integer $m^{(A)}_p(\pi)$ by
\[
m^{(A)}_p(\pi):=\frac{|\mathcal{A}_p^L(\pi)|+|\mathcal{A}_p^R(\pi)|}{2}.
\]
For instance, let $\pi_0$ be the matching with $8$ arches represented below on the left; we give an alternative representation on the right by folding the second half of the points above the first half, so that $\widehat{x}$ and $x$ are vertically aligned. For $p=4$, we get $|\mathcal{A}_p^L(\pi_0)|=3,|\mathcal{A}_p^R(\pi_0)|=1$, which count arches between the regions (O) and (I), and thus $m^{(A)}_4(\pi_0)=4/2=2$. The reader will check that \[m^{(A)}_p(\pi_0)=0,1,2,2,2,1,1\] for $p=1,\ldots,7$.
\begin{center}
\includegraphics[page=3,width=0.9\textwidth]{FPL_Tiago_fig}
\end{center}
{\bf Rule B:} Label the boxes of $Y(\pi)$ by associating $n+1-x-y$ to the box $(x,y)$. Then decompose $Y(\pi)$ into rims (cf. Section~\ref{representations}) and let $R_1,\ldots,R_t$ be the successive rims: using the example $\pi_0$ from Rule A, we represent below $Y(\pi_0)$ with its labeling and its decomposition into (three) rims. For a given rim $R_\ell$, denote by $i$ and $j$ the labels appearing at the bottom left and top right of the rim, and by $k$ the minimal value appearing in the rim (so that $k\leq i,j$). We define the multiset $B_\ell$ as
\[
\{k\}\cup\{i,i-1,\ldots,k+1\}\cup\{j,j-1,\ldots,k+1\},
\]
and let $B_\pi$ be the union of all multisets $B_\ell$. Finally, we define $m^{(B)}_i(\pi)$ to be the multiplicity of the integer $i\in\{1,\ldots,n-1\}$ in $B_\pi$.
In the case of $\pi_0$, the rims give the multisets $\{2,4,3,3\}$, $\{4,5,5\}$ and $\{6,7\}$. Their union is $B_{\pi_0}=\{2,3^2,4^2,5^2,6,7\}$, so that \[m^{(B)}_p(\pi_0)=0,1,2,2,2,1,1\] for $p=1,\ldots,7$.
\begin{center}
\includegraphics[page=2,width=0.2\textwidth]{FPL_Tiago_fig}
\end{center}
We see here that $m^{(A)}_p(\pi_0)=m^{(B)}_p(\pi_0)$ for all $p$, which holds in general:
\begin{thm}
\label{th:equivab}
For any matching $\pi$, and any integer $p$ such that $1\leq p\leq |\pi|-1$, we have $m^{(A)}_p(\pi)=m^{(B)}_p(\pi)$.
\end{thm}
The proof of this theorem is a bit technical, but not difficult; it is given in Appendix~\ref{app:equivab}.
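Both rules are straightforward to implement, so the equivalence (and the proposition that follows) can also be verified mechanically for small sizes; a sketch in Python, with our own indexing conventions (sequences $a$ as in Section 2.1, row $j$ of $Y(\pi)$ recovered as $a_{n+1-j}-(n+1-j)$):

```python
def matchings(n):
    """Matchings as increasing sequences a with a_i <= 2i-1 (positions of '(')."""
    def rec(pre):
        if len(pre) == n:
            yield tuple(pre)
            return
        for v in range((pre[-1] + 1) if pre else 1, 2 * len(pre) + 2):
            yield from rec(pre + [v])
    return list(rec([]))

def arches(a, n):
    stack, res, opens = [], [], set(a)
    for pos in range(1, 2 * n + 1):
        if pos in opens:
            stack.append(pos)
        else:
            res.append((stack.pop(), pos))
    return res

def rule_A(a, n):
    arcs, m = arches(a, n), []
    for p in range(1, n):
        q = 2 * n + 1 - p                                  # q plays the role of p-hat
        L = sum(1 for x, y in arcs if x <= p and p < y < q)
        R = sum(1 for x, y in arcs if p < x < q and y >= q)
        m.append((L + R) // 2)
    return m

def rule_B(a, n):
    la = [a[i] - (i + 1) for i in reversed(range(n))]      # rows of Y(pi), top down
    la = [r for r in la if r > 0]
    mult = [0] * n
    while la:
        rim = [(x, y) for x in range(1, len(la) + 1) for y in range(1, la[x - 1] + 1)
               if not (x < len(la) and y < la[x])]         # no box to the southeast
        k = min(n + 1 - x - y for x, y in rim)
        i, j = n - len(la), n - la[0]                      # bottom-left / top-right labels
        for v in [k] + list(range(k + 1, i + 1)) + list(range(k + 1, j + 1)):
            mult[v] += 1
        la = [la[x] - 1 for x in range(1, len(la)) if la[x] >= 2]
    return mult[1:]

for n in range(2, 6):
    for a in matchings(n):
        m, d = rule_A(a, n), sum(v - i - 1 for i, v in enumerate(a))
        assert m == rule_B(a, n)                           # Theorem (Rule A = Rule B)
        assert sum(m) <= d and (d - sum(m)) % 2 == 0       # Proposition below
```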
\begin{defi}[$m_p(\pi)$]
For any matching $\pi$ and any integer $p$, we let $m_p(\pi)$ be the common value of $m^{(A)}_p(\pi)$ and $m^{(B)}_p(\pi)$ if $1\leq p\leq |\pi|-1$, and be equal to $0$ otherwise.
\end{defi}
We have then the following result:
\begin{prop}
\label{prop:compatroots}
For any matching $\pi$, we have $\sum_pm_p(\pi)\leq d(\pi)$, and the difference $d(\pi)-\sum_pm_p(\pi)$ is an even integer.
\end{prop}
\begin{proof}
Rule B is better suited for proving this proposition. We will clearly get the result if we can prove that, for each rim $R_t$, the number of boxes $r_t$ in $R_t$ is greater than or equal to the cardinality $b_t$ of the multiset $B_t$, and that the difference between the two quantities is even. Therefore we fix a rim $R_t$, and we use the notations $i,j,k$ from the definition of Rule B. One easily computes $r_t=2n-i-j-1$, while $b_t=i+j-2k+1$. The difference is thus $\delta_t:=r_t-b_t=2(k+n-1-(i+j))$, which is obviously even. It is also nonnegative: indeed, if $c, c'$ are the extreme boxes with the labels $i,j$ respectively, then the minimal value of $k$ is obtained if the rim consists of the boxes to the right of $c$ together with the boxes below $c'$. At the intersection of these two sets of boxes, the label is equal to $i+j-n+1$, which shows that $\delta_t$ is nonnegative and completes the proof.
\end{proof}
We will use this result in Section~\ref{sub:realroots}.
\subsection{The conjectures}
\label{sub:conjectures}
The rest of this section consists of the statements of Conjectures~\ref{conj:realroots}, \ref{conj:dec}, \ref{conj:gpi} and~\ref{conj:posX}, together with evidence in their support. The first three conjectures concern the values of the polynomials $A_\pi(t)$ when the argument $t$ is a negative integer; what these conjectures imply is that some mysterious combinatorics occurs around the values $A_\pi(-p)$. The fourth conjecture simply states that the polynomials $A_\pi(t)$ have positive coefficients, and is thus slightly different in spirit from the other ones, though they are clearly related.
The principal evidence in support of the conjectures, as well as the source of their discovery, is the following result:
\begin{fact}
Conjectures~\ref{conj:realroots}, \ref{conj:dec} and~\ref{conj:posX} are true for all matchings $\pi$ such that $|\pi|\leq 8$. Conjecture~\ref{conj:gpi} is true for all $n\leq 8$.
\end{fact}
The corresponding polynomials $A_\pi(t)$ were indeed computed in Mathematica for these values of $\pi$ thanks to Formula~\eqref{eq:capi-tau}, and each conjecture was then checked from these exact expressions; note that there are 1430 matchings $\pi$ such that $|\pi|=8$. In Appendix~\ref{app:examples} we list the polynomials $A_\pi(t)$ for $|\pi|=4$.
\subsubsection{Real roots}
\label{sub:realroots}
The first conjecture gives a complete description of all real roots of the polynomials $A_{\pi}(t)$:
\begin{conj}
\label{conj:realroots}
All the real roots of the polynomials $A_{\pi}(t)$ are negative integers, and $-p$ appears with multiplicity $m_p(\pi)$. Equivalently, we have a factorization:
\[
A_{\pi}(t) = \frac{1}{|d(\pi)|!} \cdot \left(\prod_{p=1}^{|\pi|-1} (t+p)^{m_p(\pi)}\right)\cdot Q_{\pi} (t),
\]
where $Q_{\pi} (t)$ is a polynomial with integer coefficients and no real roots.
\end{conj}
We must first verify that the definition of the multiplicities is coherent with this conjecture. Indeed, we know by Theorem~\ref{zuber} that $A_\pi(t)$ has degree $d(\pi)$ in $t$; furthermore the degree of $Q_{\pi}(t)$ is necessarily even, since it is a real polynomial with no real roots. This means that the sum of the $m_p(\pi)$ should be at most $d(\pi)$, and of the same parity: this is precisely the content of Proposition~\ref{prop:compatroots}.
It is also immediately checked that the conjecture is compatible with the two stability properties from Proposition~\ref{prop:polynomials}, that is $A_\pi(t)=A_{\pi^*}(t)$ and $A_{(\pi)_\ell}(t)=A_{\pi}(t+\ell)$ for any nonnegative integer $\ell$. Indeed $m_p(\pi)=m_p(\pi^*)$ is immediately seen from either one of the rules, as is $m_{p+\ell}\left((\pi)_\ell\right)=m_p(\pi)$.
\medskip
As an example, the polynomial for the matching $\pi_0$ of Section~\ref{sub:combdef} is:
\begin{align*}
A_{\pi_0}(t)=&\frac{(2 + t)(3 + t)^2(4 + t)^2(5 + t)^2(6 + t)(7 + t)}{145152000}\\
&\times (9 t^6+284 t^5+4355 t^4+39660 t^3+225436 t^2+757456 t+1231200).
\end{align*}
In the articles~\cite{artic27} for the FPL case, and~\cite{artic38} for the CPL case, the following formula was established:
\[
A_{()_a()_b}(t)=\prod_{i=1}^a \prod_{j=1}^b \frac{t+i+j-1}{i+j-1}.
\]
This is exactly what Conjecture~\ref{conj:realroots} predicts in this case (the constant factor is given by Theorem~\ref{zuber}). This is perhaps easier to see with the definition of the $m_i(\pi)$ by Rule B: here the Young diagram is a rectangle, and it is easily seen that each box corresponds to a root of the polynomial, matching precisely the expression above.
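For instance, for $a=1$ and $b=2$ this reads
\[
A_{()_1()_2}(t)=\frac{t+1}{1}\cdot\frac{t+2}{2}=\frac{1}{2!}\,(t+1)(t+2),
\]
which is the factorized form of Conjecture~\ref{conj:realroots} with $d(\pi)=2$, $m_1(\pi)=m_2(\pi)=1$ and $Q_\pi(t)=1$.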
There is an extension of this ``rectangular'' case in the article~\cite{CasKrat}, the results of which can be reformulated as a computation of the polynomials $A_\pi(t)$ when the diagram $Y(\pi)$ is formed of a rectangle together with one more line consisting of one or two boxes, or two more lines with one box each. Then a simple rewriting of the formulas of Theorems~3.2 and~4.2 in~\cite{CasKrat} shows that the polynomials have indeed\footnote{We did not actually prove that the polynomials $Q_\pi(t)$ have no real roots when they are of degree 4, though we tested several values; when $Q_\pi(t)$ has degree 2, then from the explicit form in \cite[Theorem~3.2]{CasKrat} one checks that it has a negative discriminant.} the form predicted by Conjecture~\ref{conj:realroots}.
\medskip
In Section~\ref{sec:firstroot}, we will give another piece of evidence for the conjecture by showing that $-1$ is a root of $A_\pi(t)$ as predicted, that is, when there is no arch between $1$ and $2n$ in the matching $\pi$; note though that we will not prove that the multiplicity is exactly $m_1(\pi)=1$ in this case.
\subsubsection{Values for some negative parameters}
We are now interested in the values of the polynomial $A_\pi(t)$ when the argument $t$ is specialized to a negative integer which is not a root. Note first that although $A_\pi(t)$ does not in general have integer coefficients, we have the following:
\begin{prop}
\label{prop:integervalues}
Let $\pi$ be a matching, $p>0$ an integer; then $A_\pi(-p)$ is an integer.
\end{prop}
\begin{proof}
This is standard: for $d=d(\pi)$, the polynomials $\binom{t+d-i}{d}$, $i=0,\ldots,d$, form a basis of the space of complex polynomials in $t$ of degree $\leq d$. Since $A_\pi(t)$ has degree $d$, we can write
\begin{equation}\label{eq:expansion}
A_\pi(t)=\sum_{i=0}^{d} c_i \binom{t+d-i}{d}. \end{equation}
Now $A_\pi(p)=A_{(\pi)_p}$ is an integer when $p$ is a nonnegative integer. Successively plugging $t=0,1,2,\ldots,d$ into~\eqref{eq:expansion} then shows that $c_0,c_1,\ldots,c_d$ are in fact all integers, which in turn implies that $A_\pi(-p)$ is also an integer for negative integers $-p$.
\end{proof}
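To illustrate the argument in the case $d=1$, write $A_\pi(t)=c_0\binom{t+1}{1}+c_1\binom{t}{1}$; plugging in $t=0$ and $t=1$ gives $c_0=A_\pi(0)$ and $c_1=A_\pi(1)-2A_\pi(0)$, both integers, and hence
\[
A_\pi(-p)=(1-p)\,c_0-p\,c_1
\]
is an integer for every positive integer $p$.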
So let $\pi$ be a matching, and $p\in[\![ 0,|\pi|]\!]$ be such that $m_p(\pi)=0$. By Rule A in Section~\ref{sub:combdef}, this means that no arch connects the outer part of $\pi$ consisting of the first $p$ and the last $p$ points (denote it by $\alpha$) to the inner part (denote it by $\beta$), as shown in the picture:
\[
\pi=
\begin{tikzpicture}[scale=.25,baseline=0pt]
\fill [blue!10!white] (4,0) .. controls (4,4) and (-4,4) .. (-4,0) -- (-2,0) .. controls (-2,2) and (2,2) .. (2,0) -- cycle;
\draw [green, snake=brace, mirror snake, segment amplitude=1pt] (-4,0) -- (-2,0);
\draw [green, snake=brace, mirror snake, segment amplitude=1pt] (2,0) -- (4,0);
\draw [black] (-3,-.5) node {\tiny $p$};
\draw [black] (3,-.5) node {\tiny $p$};
\draw [black] (0,0) node {$\beta$};
\draw [black] (0,2) node {$\alpha$};
\end{tikzpicture}
\]
Here $\alpha$ and $\beta$ can be naturally considered as matchings in their own right (when properly relabeled), and we introduce the notation $\pi=\alpha \circ \beta$ in this situation. It turns out that the following numbers play a special role in our second conjecture:
\begin{defi}[$G_\pi$] For any matching $\pi$ we define
\[
G_\pi:=A_{\pi}(-|\pi|).
\]
\end{defi}
By Proposition~\ref{prop:integervalues} above, the $G_\pi$ are actually integers.
The next conjecture says that these numbers seem to appear naturally when evaluating our polynomials at certain negative integers:
\begin{conj}
\label{conj:dec}
Let $\pi$ be a matching and $p$ be an integer between $1$ and $|\pi|-1$ such that $m_p(\pi)=0$, and write $\pi=\alpha \circ \beta$ with $|\alpha|=p$. We have then the following factorization:
\[
A_{\pi}(-p)= G_\alpha A_{\beta}.
\]
\end{conj}
Here we need to verify a certain sign compatibility with Conjecture~\ref{conj:realroots}, which predicts that $A_{\pi}(-p)$ has sign $(-1)^{M_p}$ where $M_p=\sum_{i\leq p}m_i(\pi)$. Now for this range of $i$ we obviously have $m_i(\pi)=m_i(\alpha)$ by Rule A, so that $A_{\pi}(-p)$ has sign $(-1)^{d(\alpha)}$ by Proposition~\ref{prop:compatroots}; but this is then (conjecturally) the sign of $G_\alpha$ (cf. Proposition~\ref{prop:propgpi} below), which is consistent with the signs in Conjecture~\ref{conj:dec}.
\subsubsection{Properties of the $G_\pi$}
\label{sub:gpi}
Conjecture~\ref{conj:dec} shows that the numbers $G_\pi$ seem to play a special role in the values of $A_\pi(t)$ at negative integers.
\begin{prop}
\label{prop:propgpi}
For any matching $\pi$, $G_\pi = G_{(\pi)}$ and $G_\pi=G_{\pi^*}$. Moreover, Conjecture~\ref{conj:realroots} implies that $\operatorname{sign}(G_\pi)=(-1)^{d(\pi)}$.
\end{prop}
\begin{proof}
The first two properties are immediately derived from the polynomial identities $A_\pi(t+1)=A_{(\pi)}(t)$ and $A_\pi(t)=A_{\pi^*}(t)$ respectively, given in Proposition~\ref{prop:polynomials}. Then, if all real roots of $A_\pi(t)$ are between $-1$ and $1-|\pi|$ as predicted by Conjecture~\ref{conj:realroots}, the sign of $G_\pi$ must be equal to the sign of $(-1)^{d(\pi)}$, since $A_\pi(t)$ has leading term $t^{d(\pi)}/H_{\pi}$ by Theorem~\ref{zuber}.
\end{proof}
We can compute some special cases, corresponding to $Y(\pi)$ being a rectangle, or a rectangle plus an extra row with just one box:
\begin{prop}
We have $G_{()_a()_b}=(-1)^{ab}$, while $G_{(()())_{a-2}()_b}=(-1)^{ab+1}(a+1)$.
\end{prop}
This is easily proved by using the explicit formulas for such $\pi$ which were mentioned in Section~\ref{sub:realroots}. Finally, the most striking features of these numbers are conjectural:
\begin{conj}
\label{conj:gpi}
For any positive integer $n$, we have
\begin{align}
\sum_{\pi : |\pi|=n} |G_\pi|&= A_n\quad\text{and}\quad\sum_{\pi : |\pi|=n} G_\pi = (-1)^{\frac{n(n-1)}{2}}\left(A_n^V\right)^2\label{eq:mysterious}\\
G_{()^n}&=\begin{cases}
(-1)^{\frac{n(n-1)}{2}}\left(A_{n+1}^V\right)^2\quad \text{if $n$ is even}; \\
(-1)^{\frac{n(n-1)}{2}}\left(A_{n}^VA_{n+2}^V\right)\quad\text{if $n$ is odd}.
\end{cases}\label{eq:mysterious2}
\end{align}
\end{conj}
The first equality in~\eqref{eq:mysterious} is particularly interesting: it implies that the unsigned integers $|G_\pi|$, when $\pi$ runs through all matchings of size $n$, sum up to $A_n$, the total number of FPL of size $n$. Of course the $A_\pi$ satisfy exactly this also, but the properties of $G_\pi$ we have just seen show that the two sets of numbers behave differently. For instance, the stability property $G_\pi = G_{(\pi)}$ obviously fails for $A_\pi$, while in general $G_{r(\pi)}\neq G_\pi$. Furthermore, $A_{(()())_{a-2}()_b}=a+b-1$ while $G_{(()())_{a-2}()_b}=(-1)^{ab+1}(a+1)$.
This raises the problem of finding a partition of FPLs of size $n$ --or any other combinatorial object enumerated by $A_n$-- whose blocks $\{\mathcal{G}_\pi\}_{\pi:|\pi|=n}$ satisfy $|\mathcal{G}_\pi|=|G_{\pi}|$.
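As a small sanity check of~\eqref{eq:mysterious}, take $n=2$: the two matchings are $()()$ and $(())$. The rectangular formula of Section~\ref{sub:realroots} gives $A_{()()}(t)=t+1$, whence $G_{()()}=A_{()()}(-2)=-1$, in agreement with the proposition above; and $G_{(())}=G_{()}=A_{()}(-1)=1$ by Proposition~\ref{prop:propgpi}, since $A_{()}(t)$ is the constant polynomial $1$. Thus
\[
\sum_{\pi:|\pi|=2}|G_\pi|=2=A_2,\qquad\sum_{\pi:|\pi|=2}G_\pi=0,
\]
consistent with the vanishing of the signed sum for even $n$ derived in the remark below.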
\medskip
\noindent {\em Remark:} In fact, part of the conjecture is a consequence of Conjectures~\ref{conj:realroots} and~\ref{conj:dec}. Indeed, it was proved in~\cite{artic47} that, as polynomials, we have:
\begin{equation}
\label{eq:polsumrule}
A_{{()^n}}(t)=\sum_{\pi:|\pi|=n}A_\pi(t-1).
\end{equation}
If one evaluates this for $t=1-n$, then two cases occur:
\begin{itemize}
\item \textit{if $n$ is even}, then we have that $1-n$ is a root of $A_{()^n}(t)$ by Conjecture~\ref{conj:realroots}, and we get from~\eqref{eq:polsumrule} that
\[
\sum_{\pi : |\pi|=n} G_\pi =0;
\]
\item \textit{if $n$ is odd}, then we are in the conditions of Conjecture~\ref{conj:dec}, which tells us that $A_{()^n}(1-n)=G_{()^{n-1}}A_{()}=G_{()^{n-1}}$, and from~\eqref{eq:polsumrule} we have
\[
\sum_{\pi : |\pi|=n} G_\pi =G_{()^{n-1}}.
\]
\end{itemize}
This then proves that the second equality in~\eqref{eq:mysterious} can be deduced from the first case in~\eqref{eq:mysterious2}.
\subsubsection{Positivity of the coefficients}
Our last conjecture is a bit different from the other three, in that it deals not with values of the polynomials, but with their coefficients:
\begin{conj}
\label{conj:posX}
For any $\pi$, the coefficients of $A_\pi(t)$ are nonnegative.
\end{conj}
It seems in fact to be true that the polynomials $Q_\pi(t)$ --whose existence is predicted by Conjecture~\ref{conj:realroots}-- also only have nonnegative coefficients.
\medskip
By Theorem~\ref{zuber}, we know already that $A_\pi(t)$ is of degree $d(\pi)$ with a positive leading coefficient, so we will be interested in the {\em subleading} coefficient, that is, the coefficient of $t^{d(\pi)-1}$. We managed to compute this coefficient and prove that it is indeed positive: this is Theorem~\ref{th:subleading} in the next section.
\section{The subleading term of the polynomials}
\label{sec:subleading}
In this section we will prove the following result:
\begin{thm}
\label{th:subleading}
Given a matching $\pi$ of size $n$, $\pi\neq ()_n$, the coefficient of $t^{d(\pi)-1}$ in $A_\pi(t)$ is positive.
\end{thm}
This is a special case of Conjecture~\ref{conj:posX}. We will give two proofs of this theorem, one starting from the expression~\eqref{eq:apiX}, the other based on the expression~\eqref{eq:psipiX}. As a byproduct of these proofs, we will deduce two formulas concerning products of hook lengths (Proposition~\ref{prop:newhookformulas}).
\subsection{First proof}
We use first the expression of $A_\pi(t)$ given by the sum in Equation~\eqref{eq:apiX}:
\[
A_\pi(t)=\sum_{\sigma\leq \pi}a_{\sigma}^\pi\cdot S_\sigma(t+1-n)
\]
We need to gather the terms contributing to the coefficient of $t^{d(\pi)-1}$: they are of two kinds, depending on whether $S_\sigma(t+1-n)$ has degree $d(\sigma)$ equal to $d(\pi)$ or $d(\pi)-1$. Since $\sigma\leq\pi$, the first case occurs only for $\sigma=\pi$, while the second case occurs when $Y(\sigma)$ is obtained from the diagram $Y(\pi)$ by removing a {\em corner} from this diagram, i.e. a box of $Y(\pi)$ which has no box below it and no box to its right. We denote by $Cor(\pi)$ the set of corners of $Y(\pi)$, and we get:
\[
[t^{d(\pi)-1}]A_\pi(t)=\frac{a_{\pi}^{\pi}}{H_\pi}\sum_{u\in Y(\pi)}(1-n+c(u)) +
\sum_{(x,y)\in Cor(\pi)} \frac{a_{\pi-(x,y)}^{\pi}}{H_{\pi-(x,y)}}.
\]
It is proved in~\cite{CKLN} that $a_{\pi}^{\pi}=1$, and in~\cite{NadFPL1} that $a_{\pi-(x,y)}^{\pi}=2n-1-y$ when $(x,y)$ belongs to $Cor(\pi)$. We can then rewrite the previous expression as follows:
\[
\frac{d(\pi)(1-n)}{H_\pi} +\frac{1}{H_\pi}\sum_{u\in Y(\pi)}c(u)+ \sum_{(x,y)\in Cor(\pi)}\frac{(n-1)}{H_{\pi-(x,y)}}+
\sum_{(x,y)\in Cor(\pi)} \frac{(n-y)}{H_{\pi-(x,y)}}.
\]
Now the first and third terms cancel each other because of the {\em hook length formula} (see~\cite{StanleyEnum2} for instance), which is equivalent to
\[
\frac{d(\pi)}{H_\pi}=\sum_{(x,y)\in Cor(\pi)}\frac{1}{H_{\pi-(x,y)}}.
\]
Therefore we are left with
\begin{equation}
\label{eq:FPL_sdt}
[t^{d(\pi)-1}]A_\pi(t)=\frac{1}{H_\pi}\sum_{u\in Y(\pi)}c(u)+\sum_{(x,y)\in Cor(\pi)} \frac{(n-y)}{H_{\pi-(x,y)}}.
\end{equation}
We now wish to prove that this is positive, which is not clear since the first term can be negative. The idea is to remember that $A_\pi(t)=A_{\pi^*}(t)$ by Proposition~\ref{prop:polynomials}. Now when $\pi\mapsto\pi^*$, the box $(x,y)$ is sent to $(y,x)$, all contents change signs, $Cor(\pi)$ is sent to $Cor(\pi^*)$, and hook lengths are preserved. From these observations we get the alternative expression:
\begin{equation}
\label{eq:FPL_sdt2}
[t^{d(\pi)-1}]A_\pi(t)=-\frac{1}{H_\pi}\sum_{u\in Y(\pi)}c(u)+\sum_{(x,y)\in Cor(\pi)} \frac{(n-x)}{H_{\pi-(x,y)}}.
\end{equation}
Clearly in both \eqref{eq:FPL_sdt} and \eqref{eq:FPL_sdt2} the second term is positive, since $x<n$ and $y<n$ for all boxes $(x,y)$ in $Y(\pi)$ (there is at least one such box because $\pi \neq ()_n$). Adding~\eqref{eq:FPL_sdt} and~\eqref{eq:FPL_sdt2} and dividing by $2$, we obtain that the coefficient $[t^{d(\pi)-1}]A_\pi(t)$ is positive:
\begin{equation}
\label{eq:coeff_pos_expression}
[t^{d(\pi)-1}]A_\pi(t)=\sum_{(x,y)\in Cor(\pi)} \frac{(2n-x-y)}{2H_{\pi-(x,y)}}.
\end{equation}
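As a sanity check, take $\pi=()_1()_2$, for which $n=3$, $A_\pi(t)=\frac{(t+1)(t+2)}{2}$ by the product formula of Section~\ref{sub:realroots}, and $H_\pi=2$. Drawing $Y(\pi)$ as a single row of two boxes (the conjugate choice gives the same result, as it must since $A_\pi=A_{\pi^*}$), the contents are $0,1$ and the unique corner is the box $(1,2)$, with $H_{\pi-(1,2)}=1$; then \eqref{eq:FPL_sdt} gives
\[
[t^{d(\pi)-1}]A_\pi(t)=\frac{0+1}{2}+\frac{3-2}{1}=\frac{3}{2},
\]
while \eqref{eq:FPL_sdt2} gives $-\frac{1}{2}+\frac{3-1}{1}=\frac{3}{2}$; both agree with the coefficient of $t$ in $\frac{(t+1)(t+2)}{2}$.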
\subsection{Second proof}
\label{sub:subleadingO1}
Here we use the results of Section~\ref{sec:pol_qKZ}, with $\tau=1$. Equation~\eqref{eq:psipiX} says that
\[
\Phi_a(t) = A_\pi(t) + \sum_{\sigma<\pi} C_{\pi,\sigma} A_\sigma (t),
\]
where $a=a(\pi)$. By Theorem~\ref{zuber}, we know that $A_\pi(t)$ has degree $d(\pi)$. Furthermore, since $C_{\pi,\sigma}$ has degree $\leq d(\pi)-d(\sigma) -2$ if $\sigma<\pi$, we conclude that the coefficients of $t^{d(\pi)-1}$ in $A_\pi(t)$ and in $\Phi_{a(\pi)}(t)$ coincide, so:
\[
[t^{d(\pi)-1}]A_{\pi}(t)= [t^{d(\pi)-1}]\oint\ldots\oint \prod_{i=1}^{|a|} \frac{du_i}{2 \pi i u_i^{a_i}}
(1+u_i)^t \prod_{j>i} (u_j-u_i) (1+u_j+u_i u_j).
\]
Writing $(1+u_j+u_i u_j)=(1+u_j)+u_i u_j$, we notice that each time we pick the term $u_i u_j$, we decrease $a_i$ and $a_j$ by $1$, and thus the integral corresponds formally to a diagram with two fewer boxes, so its degree in $t$ decreases by $2$; these terms can thus be ignored, which gives:
\begin{align*}
[t^{d(\pi)-1}]A_{\pi}(t)=& [t^{d(\pi)-1}]\oint \ldots \oint \prod_i \frac{du_i}{2 \pi i u_i^{a_i}} (1+u_i)^{t+i-1} \prod_{j>i} (u_j-u_i)\\
=&[t^{d(\pi)-1}]\sum_{\sigma \in S_{|\pi|}} (-1)^\sigma \oint \ldots \oint \prod_i \frac{du_i}{2 \pi i\, u_i^{a_i+1-\sigma_i}} (1+u_i)^{t+i-1}\\
=&[t^{d(\pi)-1}]\sum_\sigma (-1)^{\sigma} \prod_i \binom{t+i-1}{a_i-\sigma_i}\\
=&[t^{d(\pi)-1}]\det \left| \binom{t+i-1}{a_i-j} \right|.
\end{align*}
Expanding the binomial up to the second order, we get:
\[
\binom{t+i-1}{a_i-j} = t^{a_i-j}\frac{1+\frac{(a_i-j)(2i+j-a_i-1)}{2t}}{(a_i-j)!}+\text{terms of lower degree}.
\]
If we compute the subleading term of the determinant we get:
\begin{align}
\label{eq:expdet}
[t^{d(\pi)-1}]A_{\pi}(t)&=[t^{-1}]\det\left| \frac{1+\frac{(a_i-j)(2i+j-a_i-1)}{2t}}{(a_i-j)!} \right|\notag\\
& = \sum_{k=1}^{n} \det\left| \frac{1}{(a_i-j)!} \times
\begin{cases}
1& \textrm{if }i\neq k\\
(a_i-j)(2i+j-a_i-1)/2& \textrm{if }i= k
\end{cases}
\right|.
\end{align}
We want to show that this expression is equal to the r.h.s.\ of~\eqref{eq:FPL_sdt2}. First of all, we need to express the quantities involving hooks and contents in terms of the sequence $a$. Notice that the integer $a_i$ is naturally associated to the $(n+1-i)$th row from the top in $Y(a)$, the length of this row being $a_i-i$.
\begin{itemize}
\item It is well known (see for instance~\cite[p.132]{Sagan_symmgroup}) that
\begin{equation}
\label{eq:hookdet}
\frac{1}{H_{Y(a)}} = \det \left| \frac{1}{(a_i-j)!} \right|;
\end{equation}
\item The contents in the row associated to $a_i$ are given by $i-n,i-n+1,\ldots,i-n+(a_i-i-1)$, which sum up to $\frac{1}{2} (a_i-i)(a_i+i-2n-1)$, and therefore we get
\[
\sum_{u \in Y(a)} c(u) = \sum_{i=1}^n\frac{1}{2} (a_i-i)(a_i+i-2n-1);
\]
\item Noticing that $a_i\mapsto a_i-1$ removes a box in the $(n+1-i)$th row, we have:
\begin{equation}
\label{eq:pouet}
\sum_{(x,y)\in Cor(\pi)} \frac{n-x}{H_{\pi-(x,y)}} = \sum_{k=1}^n \det \left| \frac{1}{(a_i-j)!}
\begin{cases}
1 & \textrm{if } i \neq k \\
(a_i-j)(i-1) & \textrm{if } i=k
\end{cases} \right|.
\end{equation}
Here we can sum over all $k$, i.e. all rows, because the determinants corresponding to rows without a corner in $Y(a)$ have two equal rows and thus vanish.
\end{itemize}
Looking back at Equation~\eqref{eq:expdet}, we write
\[(a_i-j)(2i+j-a_i-1)/2=-(a_i-j)(a_i-j-1)/2+(a_i-j)(i-1),\]
as is checked by expanding both sides, and split each determinant in two thanks to linearity in the $k$th row. The expression obtained by summing the determinants corresponding to the second term is precisely~\eqref{eq:pouet}; therefore all that remains to prove is the following lemma:
\begin{lemma}
\label{lem:endproof}
\begin{multline}\label{eq:o1_det}
\sum_{k=1}^{n} \det\left| \frac{1}{(a_i-j)!} \times
\begin{cases}
1& \textrm{if }i\neq k\\
(a_i-j)(a_i-j-1)& \textrm{if }i= k
\end{cases}\right|\\
= \left(\sum_{k=1}^{n}(a_k-k)(a_k-2n+k-1)\right)\times \det \left| \frac{1}{(a_i-j)!} \right|
\end{multline}
\end{lemma}
\begin{proof}
We write $(a_k-k)(a_k-2n+k-1)=a_k(a_k-2n-1)+k(2n-k+1)$ and use linearity of the determinant with respect to row (and column) $k$ to write the r.h.s.\ of~\eqref{eq:o1_det} as
\begin{multline} \label{eq:cu_det}
\sum_{k=1}^n \det \left|\frac{1}{(a_i-j)!}
\begin{cases}
1 & \textrm{if } i \neq k\\
a_i(a_i-2n-1) & \textrm{if } i=k
\end{cases} \right|\\
+
\sum_{k=1}^n \det \left|\frac{1}{(a_i-j)!}
\begin{cases}
1 & \textrm{if } j \neq k\\
j(2n-j+1) & \textrm{if } j=k
\end{cases} \right|.
\end{multline}
Now we notice that we have the general identity for any variables $a_{ij},b_{ij}$:
\[
\sum_{k=1}^n \det \left|a_{ij}
\begin{cases}
1 & \textrm{if } i \neq k\\
b_{ij} & \textrm{if } i=k
\end{cases} \right|
=\sum_{k=1}^n \det \left|a_{ij}
\begin{cases}
1 & \textrm{if } j \neq k\\
b_{ij} & \textrm{if } j=k
\end{cases} \right|.
\]
Indeed, both sides correspond to the coefficient of $t^{-1}$ in $\det\left|a_{ij}+a_{ij}b_{ij}/t\right|$, which can be expanded using multilinearity according either to rows or to columns. We use this identity in the first term of~\eqref{eq:cu_det} and in the l.h.s.\ of the lemma; putting things together, the r.h.s.\ of~\eqref{eq:o1_det} minus the l.h.s.\ is equal to:
\[
\sum_{k=1}^n \det \left|\frac{1}{(a_i-j)!}
\begin{cases}
1 & \textrm{if } j \neq k\\
-2(n-j)(a_i-j) & \textrm{if } j=k
\end{cases} \right|.
\]
For all $k<n$ the determinants have two proportional columns ($k$ and $k+1$), while for $k=n$ the $n$th column of the determinant is zero. So all these determinants vanish and therefore so does their sum, which completes the proof of the lemma. \end{proof}
This completes the second proof of Theorem~\ref{th:subleading}.
\subsection{Application to hook length products}
It turns out that some of the computations made to prove Theorem~\ref{th:subleading} have nice applications to certain {\em hook identities}. If $Y$ is a Young diagram, let $Cor(Y)$ be its corners, and $HD(Y)$ (respectively $VD(Y)$) be the horizontal (resp. vertical) dominos which can be removed from $Y$, defined as two boxes which can be removed in the same row (resp. the same column). Then we have the following identities:
\begin{prop}
\label{prop:newhookformulas}
For any Young diagram $Y$ we have:
\[
\frac{2\sum_{u\in Y}c(u)}{H_Y}=\sum_{(x,y)\in Cor(Y)} \frac{(y-x)}{H_{Y-(x,y)}}
\]
and
\[
\frac{2\sum_{u\in Y}c(u) }{H_Y}= \sum_{hd\in HD(Y)} \frac{1}{H_{(Y-hd)}}-\sum_{vd\in VD(Y)} \frac{1}{H_{(Y-vd)}}.
\]
\end{prop}
\begin{proof}
We consider $a$, a sequence such that $Y(a)=Y$. The first formula consists simply in equating the expressions in \eqref{eq:FPL_sdt} and \eqref{eq:FPL_sdt2}.
We will see that the second formula is a reformulation of Lemma~\ref{lem:endproof}. We already identified $\frac{2}{H_Y}\sum_{u\in Y}c(u)$ as the r.h.s.\ of the lemma, so we want to identify the sums over dominos with the l.h.s.\ in Lemma~\ref{lem:endproof}. We note first that the $k$th determinant in~\eqref{eq:o1_det} is of the form \eqref{eq:hookdet} for the sequence $a^{(k)}$ which coincides with $a$ except that $a^{(k)}_k=a_k-2$. There are three different cases to consider: firstly, if $a^{(k)}$ has two equal terms, the corresponding determinant vanishes. Then, if $a^{(k)}$ is increasing, we obtain one of the terms in the sum over $HD(Y)$. Finally, for $a^{(k)}$ to have distinct terms when it is not increasing, it is necessary and sufficient that $a_k=a_{k-1}+1$ and $a_{k-2}<a_k-2$. The sequence obtained by exchanging $a_k-2$ and $a_{k-1}$ is then strictly increasing; exchanging the corresponding rows in the determinant produces a negative sign. It is then easy to verify that such sequences are exactly those obtained by removing a vertical domino from $Y$, which completes the proof.
\end{proof}
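Both identities can be checked by hand on small shapes; for the single-row shape $Y=(2)$ for instance, the contents are $0,1$ and $H_Y=2$, so
\[
\frac{2\sum_{u\in Y}c(u)}{H_Y}=1=\frac{2-1}{H_{(1)}}=\frac{1}{H_{\emptyset}},
\]
the unique corner being the box $(1,2)$ and the unique removable domino being the horizontal one (with $VD(Y)=\emptyset$). For the conjugate shape $(1,1)$ all three quantities equal $-1$, as they must.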
As pointed out to the second author by V. F\'eray~\cite{ferayperso}, both formulas can in fact be deduced from the representation theory of the symmetric group, using the properties of Jucys-Murphy elements~\cite{jucys2,murphy}.
\section{The first root}
\label{sec:firstroot}
In this section we will prove the following theorem:
\begin{thm}
\label{th:firstroot}
For any matching $\pi$ we have
\[
\Psi_\pi(\tau,-1) =
\begin{cases}
\Psi_{\pi'}(\tau)& \textrm{if }\pi=(\pi');\\
0 & \textrm{otherwise.}
\end{cases}
\]
\end{thm}
Setting $\tau=1$, this implies a special case of Conjecture~\ref{conj:realroots}:
\begin{cor}
If $m_1(\pi)=1$, then $(t+1)$ divides the polynomial $A_\pi(t)$.
\end{cor}
Indeed, $m_1(\pi)=1$ precisely when there is no arch between $1$ and $2n$ in $\pi$ (cf. Rule A in Section~\ref{sub:combdef}), which means that $\pi$ cannot be written as $(\pi')$. For the same reason, Theorem~\ref{th:firstroot} is itself a special case of Conjecture~\ref{conj:taurealroot}.
To prove this theorem, we use the multiparameter version of the quantities $\Psi_\pi$.
\subsection{Multiparameter setting} \label{sub:multi}
We recall the principal properties of the multiparameter setting as presented in~\cite{artic47,hdr,artic43}. Note that it is in fact this setting that was originally used to prove the results of Section~\ref{sec:pol_qKZ}; we presented things in the reverse order because the multiparameter setting is not needed outside of this section.
There exist polynomials in $2n$ variables $\Psi_\pi(z_1,\ldots,z_{2n})$ with coefficients in $\mathbb{C}(q)$, indexed by matchings of size $n$, which are defined as solutions of a certain equation~\cite[Formulas 4.2 and 4.3]{hdr} (related to the qKZ equation introduced by Frenkel and Reshetikhin in~\cite{FR-qkz}), which is a generalization of the eigenvector equation defining the $\Psi_\pi$ (cf. Section~\ref{sub:O1}). Here $q$ and $\tau$ are related by $\tau=-q-q^{-1}$, so that $q=\pm e^{2i\pi/3}$ will give $\tau=1$. One can show that these polynomials form a basis of the following vector space $\mathcal{V}_n$:
\begin{defi}[$\mathcal{V}_n$]
We define $\mathcal{V}_n$ as the vector space of all homogeneous polynomials in $2n$ variables, with total degree $\delta=n(n-1)$ and partial degree $\delta_i=n-1$ in each variable, which obey the \emph{wheel condition}:
\[
\left.P(z_1,\ldots,z_{2n})\right|_{z_k=q^2 z_j=q^4 z_i}=0\qquad \forall k>j>i.
\]
\end{defi}
This vector space has dimension $\frac{(2n)!}{n!(n+1)!}$, the number of matchings of size $n$. The polynomials $\Psi_\pi(z_1,\ldots,z_{2n})$ satisfy the following important lemma:
\begin{lemma}[\cite{artic41}]\label{lem:dual}
Let $q^\epsilon=\{q^{\epsilon_1},\ldots,q^{\epsilon_{2n}}\}$, where $\epsilon_i=\pm 1$ are such that replacing each $q^{-1}$ by $($ and each $q$ by $)$ gives a valid parenthesis word $\pi(\epsilon)$. Then
\[
\Psi_\pi(q^\epsilon) = \tau^{d(\pi)} \delta_{\pi,\epsilon},
\]
where $\delta_{\pi,\epsilon}=1$ if $\pi(\epsilon)=\pi$ and $0$ otherwise.
\end{lemma}
Since the $\Psi_\pi(z_1,\ldots,z_{2n})$ form a basis of $\mathcal{V}_n$, the lemma shows that a polynomial in this space is determined by its value on these points $q^\epsilon$. There is a small variation of this lemma, for the cases with a big arch $(1,2n)$, cf.~\cite[Formula 4.15]{hdr}\footnote{We do not use the same normalization as in~\cite{hdr}.}:
\[
\Psi_{\pi}(q^{-2},q^{\epsilon},q^2)=\left(\frac{q-1}{q-q^{-1}}\right)^{2(n-1)} \tau^{d(\pi)} q^{-(n-1)} \delta_{(\epsilon),\pi}.
\]
\subsection*{Another basis}We now define another set of polynomials $\Phi_a(z_1,\ldots,z_{2n})$ (indexed by the increasing sequences defined in Section~\ref{representations}), by the integral formula:
\begin{multline}\label{eq:qKZ_var}
\Phi_a(z_1,\ldots,z_{2n})= c_n
\prod_{1\le i<j\le 2n} (qz_i-q^{-1}z_j)\\ \times \oint\ldots\oint \prod_{i=1}^n \frac{dw_i}{2\pi i} \frac{\prod_{1\le i<j\le n}(w_j-w_i)(qw_i-q^{-1}w_j)}{\prod_{1\le k\leq a_i}(w_i-z_k)\prod_{a_i<k\le 2n}(qw_i-q^{-1}z_k)},
\end{multline}
where the integral is performed around the $z_i$ but not around $q^{-2}z_i$, and $c_n=(q-q^{-1})^{-n(n-1)}$. In the limit $z_i=1$ for all $i$ we simply obtain the equations for $\Phi_a(\tau)$ given in Section~\ref{sec:pol_qKZ}, by the change of variables $u_i=\frac{w_i-1}{q w_i - q^{-1}}$. In fact, these polynomials actually also live in $\mathcal{V}_n$ and we have
\begin{align*}
\Phi_a(z_1,\ldots,z_{2n})=\sum_{\pi} C_{a,\pi}(\tau) \Psi_\pi (z_1,\ldots,z_{2n}),
\end{align*}
where the $C_{a,\pi}(\tau)$ are precisely the coefficients that appear in Section~\ref{sec:pol_qKZ}\footnote{In fact, this is the true definition of these coefficients, and the properties listed in Section~\ref{sec:pol_qKZ} are proved from this definition and~\eqref{eq:C_1}.}. Then
\begin{equation} \label{eq:C_1}
\Phi_a(q^\epsilon) = \tau^{d(\epsilon)} C_{a,\epsilon}(\tau),
\end{equation}
which is an immediate application of Lemma~\ref{lem:dual}. Using the lemma's variation, we also have:
\begin{equation} \label{eq:C_2}
\Phi_a(q^{-2},q^\epsilon,q^2) = \tau^{d(\epsilon)} q^{-(n-1)} \left(\frac{q-1}{q-q^{-1}}\right)^{2(n-1)} C_{a,(\epsilon)}.
\end{equation}
\subsection{The proof}
By Lemma~\ref{lem:sta},
\[
\Psi_{\pi}(-1)=\sum_a C^{-1}_{\pi,a} \Phi_{a} (-1).
\]
We now introduce the following multiple integral, inspired by Formula~\eqref{eq:qKZ_var}:
\begin{multline}
\label{eq:minusone}
\Phi_a (z_1,\ldots,z_{2n}|-1):=c_n\frac{z_1 (q-q^{-1})}{qz_1-q^{-1}z_{2n}} \prod_{1 \leq i<j\leq2n} (q z_i -q^{-1} z_j) \\
\times \oint \ldots \oint \prod_i \frac{dw_i}{2i\pi} \frac{\prod_{i<j}(w_j-w_i)(q w_i -q^{-1}w_j)}{\prod_{j\leq a_i} (w_i-z_j)\prod_{j> a_i} (q w_i-q^{-1} z_j)} \prod_i \frac{q w_i - q^{-1} z_{2n}}{q z_1-q^{-1} w_i}.
\end{multline}
The essential property of $\Phi_a (z_1,\ldots,z_{2n}|-1)$ is that setting all $z_i=1$ yields $\Phi_{a}(-1)$; this requires the change of variables $u_i=\frac{w_i-1}{q w_i - q^{-1}}$ already mentioned after Formula~\eqref{eq:qKZ_var}. Integrating in $w_1$, we obtain:
\begin{multline*}
\Phi_a (z_1,\ldots,z_{2n}|-1)=c_n \prod_{i=2}^{2n-1}(qz_i-q^{-1}z_{2n})\prod_{2\leq i<j\leq 2n-1} (q z_i -q^{-1} z_j)\\
\times \oint \ldots \oint \prod_{i=2}^{n} \frac{dw_i}{2i\pi} \frac{\prod_{i<j}(w_j-w_i)(q w_i -q^{-1}w_j)}{\prod_{2\leq j\leq a_i} (w_i-z_j)\prod_{2n>j> a_i} (q w_i-q^{-1} z_j)}.
\end{multline*}
The r.h.s.\ is now factorized into one factor which depends on $z_1$ and $z_{2n}$ but not on $a$, and another one which does not depend on $z_1$ and $z_{2n}$ and lives in the vector space $\mathcal{V}_{n-1}$ (with parameters $\{z_2,\ldots,z_{2n-1}\}$). Therefore we can write $\Phi_a(z_1,\ldots,z_{2n}|-1)$ as a linear combination of the $\Psi_{\pi}(z_2,\ldots,z_{2n-1})$:
\begin{equation} \label{eq:-1_dec}
\Phi_{a}(z_1,\ldots,z_{2n}|-1)= \frac{\prod_{i=2}^{2n-1}(qz_i-q^{-1}z_{2n})}{(q-q^{-1})^{2(n-1)}}
\times \sum_\pi \widehat{C}_{a,\pi}\Psi_{\pi} (z_2,\ldots,z_{2n-1}).
\end{equation}
We have then the following essential lemma:
\begin{lemma}
For any $a,\epsilon$ we have $\widehat{C}_{a,\epsilon}=C_{a,(\epsilon)}$.
\end{lemma}
\begin{proof}
First we integrate Formula~\eqref{eq:qKZ_var} in $w_1$:
\begin{multline*}
\Phi_a(z_1,\ldots,z_{2n})=c_n \prod_{i=2}^{2n-1}(q z_i-q^{-1}z_{2n})\prod_{2\leq i<j<2n} (q z_i -q^{-1} z_j) \\
\times \oint \ldots \oint \prod_{i} \frac{dw_i}{2i\pi} \frac{\prod_{i<j}(w_j-w_i)(q w_i -q^{-1}w_j)}{\prod_{j\leq a_i} (w_i-z_j)\prod_{2n>j> a_i} (q w_i-q^{-1} z_j)}\prod_{i=2}^{2n-1}\frac{qz_1-q^{-1}w_i}{q w_i-q^{-1}z_{2n}}.
\end{multline*}
We then make the substitutions $z_1 \mapsto q^{-2}$ and $z_{2n} \mapsto q^2$:
\begin{multline*}
\Phi_a(q^{-2},z_2,\ldots,z_{2n-1},q^2)=c_n (-1)^{n-1}\prod_{i=2}^{2n-1}(z_i-1) \prod_{2\leq i<j<2n} (q z_i -q^{-1} z_j) \\
\times \oint \ldots \oint \prod_{i=2}^{n} \frac{dw_i}{2i\pi} \frac{\prod_{i<j}(w_j-w_i)(q w_i -q^{-1}w_j)}{\prod_{2\leq j\leq a_i} (w_i-z_j)\prod_{2n>j> a_i} (q w_i-q^{-1} z_j)}.
\end{multline*}
Comparing with the formula obtained for $\Phi_{a}(z_1,\ldots,z_{2n}|-1)$, we get:
\[
\Phi_{a}(z_1,\ldots,z_{2n}|-1)= (-1)^{n-1} \prod_{i=2}^{2n-1}\frac{q z_i-q^{-1}z_{2n}}{z_i-1} \Phi_{a}(q^{-2},z_2,\ldots,z_{2n-1},q^2),
\]
which thanks to~\eqref{eq:-1_dec} becomes:
\[
\sum_\epsilon \widehat{C}_{a,\epsilon} \Psi_\epsilon(z_2,\ldots,z_{2n-1})=\frac{(q-q^{-1})^{2(n-1)}}{\prod_{i=2}^{2n-1}(z_i-1)} (-1)^{n-1} \sum_{\pi} C_{a,\pi} \Psi_\pi(q^{-2},z_2,\ldots,z_{2n-1},q^2).
\]
Now the l.h.s. lives in $\mathcal{V}_{n-1}$, so it is determined by the points $(q^\sigma)$ (cf. Lemma~\ref{lem:dual} and its variation):
\[
\sum_\epsilon \widehat{C}_{a,\epsilon} \delta_{\epsilon,\sigma} \tau^{d(\epsilon)}= \sum_\pi C_{a,\pi} \delta_{\pi,(\sigma)} \tau^{d(\pi)}.
\]
This simplifies to $\widehat{C}_{a,\sigma}\tau^{d(\sigma)}=C_{a,(\sigma)} \tau^{d((\sigma))}$; since $d(\sigma)=d((\sigma))$, we get the expected result.
\end{proof}
We can now finish the proof of the theorem. In the limit $z_i=1$ for all $i$, Equation~\eqref{eq:-1_dec} becomes
\[
\Phi_{a}(-1) = \sum_{\pi:|\pi|=n-1} \widehat{C}_{a,\pi} \Psi_\pi.
\]
Using the lemma, multiplying by $C^{-1}_{\pi,a}$ and summing over $a$, this becomes:
\begin{align*}
\sum_aC^{-1}_{\pi,a}\Phi_{a}(-1)=&\sum_a\sum_{\epsilon}C^{-1}_{\pi,a}C_{a,(\epsilon)}\Psi_\epsilon \\
\Leftrightarrow\qquad\qquad\Psi_{\pi}(-1)=&\sum_{\epsilon} \delta_{\pi,(\epsilon)} \Psi_\epsilon,
\end{align*}
which completes the proof.
\section{\texorpdfstring{The $\tau$ case}{The tau case}}
\label{sec:tau}
The bivariate polynomials $\Psi_{\pi}(\tau,t)$ were introduced in Section~\ref{sec:pol_qKZ}. In this section we present conjectures mimicking those of Section~\ref{sec:conj} for these polynomials.
\subsection{Conjectures}
\label{sub:tauconj}
We will give four conjectures, each of which is a natural extension of one of the conjectures of Section~\ref{sec:conj}. All of these conjectures have been verified for all $\Psi_{\pi}(\tau,t)$ with $|\pi|\leq 8$. We begin with roots:
\begin{conj}
\label{conj:taurealroot}
Considering $\Psi_{\pi}(\tau,t)$ as a polynomial in $t$ with coefficients in $\mathbb{Q}[\tau]$, the real roots of $\Psi_{\pi}(\tau,t)$ are negative integers $-p$, with multiplicity given by $m_p(\pi)$:
\[
\Psi_{\pi}(\tau,t) = \frac{1}{|d(\pi)|!} \times \prod_{i=1}^{|\pi|} (t+i)^{m_i(\pi)} Q_{\pi} (\tau,t),
\]
where $Q_{\pi} (\tau,t)$ is a polynomial in $t$ with no real roots.
\end{conj}
For the example $\pi_0$ of Section~\ref{sub:combdef} we compute:
\begin{align*}
\label{ex_n8}
\Psi_{\pi_0}(\tau,t)=&\frac{(2 + t)(3 + t)^2(4 + t)^2(5 + t)^2(6 + t)(7 + t)}{145152000} \tau^9\\
&\times (84000 + 440640 \tau^2 + 151440 t \tau^2 + 13200 t^2 \tau^2 + 523680 \tau^4 + 394360 t \tau^4 \\&+
110520 t^2 \tau^4 + 13670 t^3 \tau^4 + 630 t^4 \tau^4 + 182880 \tau^6 + 211656 t \tau^6 \\&+
101716 t^2 \tau^6 + 25990 t^3 \tau^6+ 3725 t^4 \tau^6 + 284 t^5 \tau^6 + 9 t^6 \tau^6).
\end{align*}
We then have the natural generalization of the factorization conjecture:
\begin{conj}
\label{conj:taudec}
Let $\pi$ be a matching and $p$ be an integer between $1$ and $|\pi|-1$ such that $m_p(\pi)=0$, so that $\pi=\alpha \circ \beta$ with $|\alpha|=p$; then
\[
\Psi_{\pi}(\tau,-p)= G_\alpha(\tau) \Psi_{\beta}(\tau).
\]
\end{conj}
Here $G_\pi (\tau)$ is naturally defined by $ G_\pi (\tau):=\Psi_\pi(\tau,-|\pi|)$, while $\Psi_{\pi}(\tau)$ was defined in Section~\ref{sec:pol_qKZ} and is equal to $\Psi_{\pi}(\tau,0)$. The values for $|\pi|=4$ are given in Appendix~\ref{app:examples}. These $G_{\pi}(\tau)$ present several properties:
\begin{conj}\label{conj:dec_tau}
We have $G_{\pi}(\tau)=(-1)^{d(\pi)} g_{\pi}(\tau)$ where $g_\pi(\tau)$ is a polynomial with \emph{nonnegative} integer coefficients. Furthermore, we have the sum rule:
\[
\sum_\pi G_\pi (\tau) = \sum_\pi \Psi_\pi (-\tau).
\]
\end{conj}
We will show in Section~\ref{sub:lead} that the leading term of $g_\pi(\tau)$ is $\tau^{d(\pi)}$; we will actually compute the leading term in $\tau$ of $\Psi_\pi(\tau,p)$ for various integer values of $p$. Another property of these $G_\pi(\tau)$ is that
\[
G_\pi(\tau)=(-1)^{d(\pi)} G_\pi (-\tau),
\]
so that they are odd or even polynomials depending on the parity of $\pi$. More generally, one has $\Psi_\pi(\tau,t)=(-1)^{d(\pi)} \Psi_\pi(-\tau,t)$. Indeed, this is obvious for the polynomials
\[
\Phi_a = \oint \ldots \oint \prod_i \frac{du_i}{2\pi i\, u_i^{a_i}}(1+\tau u_i)^t \prod_{j>i} (u_j-u_i)(1+\tau u_j+u_i u_j),
\]
and as the basis transformation respects this parity, this holds for $\Psi_\pi(\tau,t)$ as well.
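This parity property of the $\Phi_a$ can be checked by brute-force expansion of the constant-term formula. The sketch below is not from the text: it represents polynomials as sparse dictionaries over the variables $u_1,\ldots,u_n,\tau$, takes the factor $(1+\tau u_i)$ to the first power for concreteness, and lets extraction of the coefficient of $\prod_i u_i^{a_i-1}$ play the role of the contour integrals.

```python
from collections import defaultdict
from itertools import combinations

def pmul(f, g):
    """Multiply sparse polynomials stored as {exponent tuple: coefficient}."""
    h = defaultdict(int)
    for ef, cf in f.items():
        for eg, cg in g.items():
            h[tuple(x + y for x, y in zip(ef, eg))] += cf * cg
    return {e: c for e, c in h.items() if c}

def phi_tau_poly(a):
    """Coefficient of prod_i u_i^{a_i-1} in the integrand of Phi_a,
    as {tau exponent: coefficient}; exponent tuples list u_1..u_n, then tau."""
    n = len(a)
    def mono(pairs):  # monomial from a list of (variable index, exponent)
        e = [0] * (n + 1)
        for idx, p in pairs:
            e[idx] += p
        return tuple(e)
    poly = {mono([]): 1}
    for i in range(n):                      # factor (1 + tau u_i)
        poly = pmul(poly, {mono([]): 1, mono([(i, 1), (n, 1)]): 1})
    for i, j in combinations(range(n), 2):  # (u_j - u_i)(1 + tau u_j + u_i u_j)
        poly = pmul(poly, {mono([(j, 1)]): 1, mono([(i, 1)]): -1})
        poly = pmul(poly, {mono([]): 1, mono([(j, 1), (n, 1)]): 1,
                           mono([(i, 1), (j, 1)]): 1})
    target = tuple(x - 1 for x in a)
    out = defaultdict(int)
    for e, c in poly.items():
        if e[:-1] == target:
            out[e[-1]] += c
    return {k: c for k, c in out.items() if c}

d = lambda a: sum(a) - len(a) * (len(a) + 1) // 2

print(phi_tau_poly((1, 3)))   # {1: 2}: an odd polynomial, and d((1,3)) = 1
f = phi_tau_poly((1, 3, 5))   # d((1,3,5)) = 3: only odd powers of tau appear
print(all(k % 2 == 1 for k in f))  # True
```

Under $u_i\mapsto -u_i$, $\tau\mapsto-\tau$, each monomial of the integrand picks up the sign of its total degree, which forces every surviving power of $\tau$ to have the parity of $d(a)$, exactly as observed in the output.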
Finally, introducing a $\tau$ doesn't change the positivity:
\begin{conj}
The bivariate polynomial $d(\pi)!\,\Psi_\pi (\tau,t)$ has nonnegative integer coefficients.
\end{conj}
\subsection{\texorpdfstring{The leading term of $\Psi_\pi(\tau,p)$}{The leading term in tau}}
\label{sub:lead}
We now consider $\Psi_\pi(\tau,t)$ as a polynomial in $\tau$, first with coefficients in $\mathbb{C}[t]$, and then with rational coefficients under the specializations $t=p$ for $p$ an integer.
We start by deriving an expression for the leading term in $\tau$ of the polynomial $\Psi_\pi(\tau,t)$. First we consider the leading term in $\tau$ of $\Phi_a(\tau,t)$ for a given sequence $a$. We have
\[
\Phi_a (\tau,t)= \oint\ldots\oint \prod_i \frac{du_i}{2\pi i u_i^{a_i}} (1+\tau u_i)^t \prod_{j>i}(u_j - u_i)(1+\tau u_j+u_i u_j).
\]
It is clear that if we replace $(1+\tau u_j+u_i u_j)$ by $(1+\tau u_j)$ we do not change the leading term (for the same reasons as in Section~\ref{sub:subleadingO1}). Therefore this last expression has the same leading term in $\tau$ as
\begin{align*}
& \oint \ldots \oint \prod_i \frac{du_i}{2\pi i u_i^{a_i}} (1+\tau u_i)^{t+i-1} \prod_{j>i}(u_j - u_i)\\
= & \sum_{\sigma \in S_n} (-1)^\sigma \oint \ldots \oint \prod_i \frac{du_i}{2\pi i u_i^{a_i+1-\sigma_i}} (1+\tau u_i)^{t+i-1}\\
= & \sum_{\sigma \in S_n} (-1)^\sigma \prod_i \tau^{a_i-\sigma_i} \binom{t+i-1}{a_i-\sigma_i}\\
= & \tau^{d(a)} \det_{n\times n} \left| \binom{t+i-1}{a_i-j} \right |.
\end{align*}
So we know that the degree in $\tau$ of $\Phi_a (\tau,t)$ is $d(a)$. Because of Equation~\eqref{eq:psitauphitau}
and Proposition~\ref{prop:Bases}, it is clear that the leading term of $\Psi_\pi (\tau,t)$ is the same as that of $\Phi_{a(\pi)} (\tau,t)$. We have thus proved:
\begin{prop}
As a polynomial in $\tau$, the leading term of $\Psi_\pi(\tau,t)$ is given by $D_\pi (t)\tau^{d(\pi)}$, where for $a=a(\pi)$ we have
\[
D_\pi (t)=\det_{n\times n} \left| \binom{t+i-1}{a_i-j} \right|.
\]
\end{prop}
Now we turn to what happens when $t$ is specialized to an integer $p$; by definition the cases $p=0$ and $p=-|\pi|$ correspond respectively to the polynomials $\Psi_\pi(\tau)$ and $G_\pi(\tau)$. Clearly if $D_\pi (p)\neq 0$ then the leading term of $\Psi_\pi(\tau,p)$ is $D_\pi (p)\tau^{d(\pi)}$ by the previous proposition, while if $D_\pi (p)=0$ the leading term is necessarily of smaller degree. Our result is the following:
\begin{thm}
\label{thm:taucoefdom}
Let $\pi$ be a matching, and $p$ be an integer; if $p<0$, we also assume that $\pi$ is not of the form $(\rho)_{|p|}$. Then $D_\pi (p)= 0$ if and only if $1-|\pi|\leq p\leq-1$. Furthermore,
\begin{itemize}
\item if $p\geq 0$, then $D_\pi (p)$ counts the number of tableaux of shape $Y(\pi)$ with entries bounded by $p+|\pi|-1$ which are strictly increasing in rows and columns;
\item if $p\leq -|\pi|$, then $(-1)^{d(\pi)}D_\pi(p)$ counts the number of tableaux of shape $Y(\pi)$ with entries bounded by $|p|-|\pi|$ which are weakly increasing in rows and columns;
\item if $1-|\pi|\leq p\leq-1$, then
\begin{itemize}
\item if $m_{|p|}(\pi)\neq 0$, Conjecture~\ref{conj:taurealroot} implies that $\Psi_\pi(\tau,p)$ is the zero polynomial;
\item if $m_{|p|}(\pi)= 0$ and $\pi=\alpha \circ \beta$ with $|\alpha|=|p|$, Conjecture~\ref{conj:taudec} implies that the leading term of $\Psi_\pi(\tau,p)$ is given by $(-1)^{d(\alpha)}D_\beta(0)\tau^{d(\alpha)+d(\beta)}$.
\end{itemize}
\end{itemize}
\end{thm}
Note that the condition that $\pi$ is not of the form $(\rho)_{|p|}$ is not a restriction, since in such a case $\Psi_\pi(\tau,p)=\Psi_\rho(\tau,0)$.
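Before turning to the proof, the determinant values appearing in the theorem can be checked numerically for the sequence $a(\pi)=\{1,2,4,7\}$ used as a running example below. The following sketch uses exact integer arithmetic; the generalized binomial coefficient allows any integer in the top argument.

```python
import math
from itertools import permutations

def binom(n, k):
    """Generalized binomial coefficient: n any integer, k >= 0."""
    if k < 0:
        return 0
    num = math.prod(n - i for i in range(k))
    return num // math.factorial(k)  # exact: k! divides any product of k consecutive integers

def det(M):
    """Determinant by Leibniz expansion (fine for small matrices)."""
    n = len(M)
    def sign(p):
        inv = sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
        return -1 if inv % 2 else 1
    return sum(sign(p) * math.prod(M[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

def D(a, t):
    """D_pi(t) = det |binom(t+i-1, a_i-j)|, 1 <= i,j <= n (0-indexed below)."""
    n = len(a)
    return det([[binom(t + i, a[i] - j - 1) for j in range(n)] for i in range(n)])

a = (1, 2, 4, 7)   # a(pi) from the running example; d(pi) = 4
print(D(a, 1))     # 11: strict tableaux with entries <= p+|pi|-1 = 4
print(D(a, -5))    # 7: (-1)^{d(pi)} D = 7 weak tableaux with entries <= |p|-|pi| = 1
print(D(a, -4))    # 1 = (-1)^{d(pi)}: p = -|pi|, only the all-zero tableau
print(D(a, -3))    # 0: here 1-|pi| <= p <= -1 and pi is not of the form (rho)_3
```

The four outputs match, in order, the $p\geq 0$ case, the $p\leq-|\pi|$ case, the $p=-|\pi|$ specialization, and the vanishing range of the theorem.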
\begin{proof}
We study separately the three cases:
\medskip
\noindent\emph{Case $p\geq 0$.} The determinant $D_\pi (p)$ is here a particular case of~\cite[Theorem 6.1]{KrattGenPP}, which says that indeed $D_\pi (p)$ counts tableaux of shape $Y(\pi)$ with entries bounded by $p+|\pi|-1$ and strictly increasing in both directions. For example, if $a(\pi)=\{1,2,4,7\}$ and $p=1$ we get
\[
D_{\{1,2,4,7\}}(1)=\det_{4\times 4} \left| \binom{i}{a_i-j} \right| = 11,
\]
corresponding to the $11$ tableaux:
\begin{align*}
\begin{tikzpicture}[scale=.25,baseline=0pt]
\draw[black, thick] (0,0) -- (3,0);
\draw[black, thick] (0,-1) -- (3,-1);
\draw[black, thick] (0,-2) -- (1,-2);
\draw[black, thick] (0,0) -- (0,-2);
\draw[black, thick] (1,0) -- (1,-2);
\draw[black, thick] (2,0) -- (2,-1);
\draw[black, thick] (3,0) -- (3,-1);
\draw [red] (0.5,-.5) node {\tiny 2};
\draw [red] (1.5,-.5) node {\tiny 3};
\draw [red] (2.5,-.5) node {\tiny 4};
\draw [red] (0.5,-1.5) node {\tiny 4};
\end{tikzpicture}
&&
\begin{tikzpicture}[scale=.25,baseline=0pt]
\draw[black, thick] (0,0) -- (3,0);
\draw[black, thick] (0,-1) -- (3,-1);
\draw[black, thick] (0,-2) -- (1,-2);
\draw[black, thick] (0,0) -- (0,-2);
\draw[black, thick] (1,0) -- (1,-2);
\draw[black, thick] (2,0) -- (2,-1);
\draw[black, thick] (3,0) -- (3,-1);
\draw [red] (0.5,-.5) node {\tiny 2};
\draw [red] (1.5,-.5) node {\tiny 3};
\draw [red] (2.5,-.5) node {\tiny 4};
\draw [red] (0.5,-1.5) node {\tiny 3};
\end{tikzpicture}
&&
\begin{tikzpicture}[scale=.25,baseline=0pt]
\draw[black, thick] (0,0) -- (3,0);
\draw[black, thick] (0,-1) -- (3,-1);
\draw[black, thick] (0,-2) -- (1,-2);
\draw[black, thick] (0,0) -- (0,-2);
\draw[black, thick] (1,0) -- (1,-2);
\draw[black, thick] (2,0) -- (2,-1);
\draw[black, thick] (3,0) -- (3,-1);
\draw [red] (0.5,-.5) node {\tiny 1};
\draw [red] (1.5,-.5) node {\tiny 3};
\draw [red] (2.5,-.5) node {\tiny 4};
\draw [red] (0.5,-1.5) node {\tiny 4};
\end{tikzpicture}
&&
\begin{tikzpicture}[scale=.25,baseline=0pt]
\draw[black, thick] (0,0) -- (3,0);
\draw[black, thick] (0,-1) -- (3,-1);
\draw[black, thick] (0,-2) -- (1,-2);
\draw[black, thick] (0,0) -- (0,-2);
\draw[black, thick] (1,0) -- (1,-2);
\draw[black, thick] (2,0) -- (2,-1);
\draw[black, thick] (3,0) -- (3,-1);
\draw [red] (0.5,-.5) node {\tiny 1};
\draw [red] (1.5,-.5) node {\tiny 3};
\draw [red] (2.5,-.5) node {\tiny 4};
\draw [red] (0.5,-1.5) node {\tiny 3};
\end{tikzpicture}
&&
\begin{tikzpicture}[scale=.25,baseline=0pt]
\draw[black, thick] (0,0) -- (3,0);
\draw[black, thick] (0,-1) -- (3,-1);
\draw[black, thick] (0,-2) -- (1,-2);
\draw[black, thick] (0,0) -- (0,-2);
\draw[black, thick] (1,0) -- (1,-2);
\draw[black, thick] (2,0) -- (2,-1);
\draw[black, thick] (3,0) -- (3,-1);
\draw [red] (0.5,-.5) node {\tiny 1};
\draw [red] (1.5,-.5) node {\tiny 3};
\draw [red] (2.5,-.5) node {\tiny 4};
\draw [red] (0.5,-1.5) node {\tiny 2};
\end{tikzpicture}
&&
\begin{tikzpicture}[scale=.25,baseline=0pt]
\draw[black, thick] (0,0) -- (3,0);
\draw[black, thick] (0,-1) -- (3,-1);
\draw[black, thick] (0,-2) -- (1,-2);
\draw[black, thick] (0,0) -- (0,-2);
\draw[black, thick] (1,0) -- (1,-2);
\draw[black, thick] (2,0) -- (2,-1);
\draw[black, thick] (3,0) -- (3,-1);
\draw [red] (0.5,-.5) node {\tiny 1};
\draw [red] (1.5,-.5) node {\tiny 2};
\draw [red] (2.5,-.5) node {\tiny 4};
\draw [red] (0.5,-1.5) node {\tiny 4};
\end{tikzpicture}
\\
\begin{tikzpicture}[scale=.25,baseline=0pt]
\draw[black, thick] (0,0) -- (3,0);
\draw[black, thick] (0,-1) -- (3,-1);
\draw[black, thick] (0,-2) -- (1,-2);
\draw[black, thick] (0,0) -- (0,-2);
\draw[black, thick] (1,0) -- (1,-2);
\draw[black, thick] (2,0) -- (2,-1);
\draw[black, thick] (3,0) -- (3,-1);
\draw [red] (0.5,-.5) node {\tiny 1};
\draw [red] (1.5,-.5) node {\tiny 2};
\draw [red] (2.5,-.5) node {\tiny 4};
\draw [red] (0.5,-1.5) node {\tiny 3};
\end{tikzpicture}
&&
\begin{tikzpicture}[scale=.25,baseline=0pt]
\draw[black, thick] (0,0) -- (3,0);
\draw[black, thick] (0,-1) -- (3,-1);
\draw[black, thick] (0,-2) -- (1,-2);
\draw[black, thick] (0,0) -- (0,-2);
\draw[black, thick] (1,0) -- (1,-2);
\draw[black, thick] (2,0) -- (2,-1);
\draw[black, thick] (3,0) -- (3,-1);
\draw [red] (0.5,-.5) node {\tiny 1};
\draw [red] (1.5,-.5) node {\tiny 2};
\draw [red] (2.5,-.5) node {\tiny 4};
\draw [red] (0.5,-1.5) node {\tiny 2};
\end{tikzpicture}
&&
\begin{tikzpicture}[scale=.25,baseline=0pt]
\draw[black, thick] (0,0) -- (3,0);
\draw[black, thick] (0,-1) -- (3,-1);
\draw[black, thick] (0,-2) -- (1,-2);
\draw[black, thick] (0,0) -- (0,-2);
\draw[black, thick] (1,0) -- (1,-2);
\draw[black, thick] (2,0) -- (2,-1);
\draw[black, thick] (3,0) -- (3,-1);
\draw [red] (0.5,-.5) node {\tiny 1};
\draw [red] (1.5,-.5) node {\tiny 2};
\draw [red] (2.5,-.5) node {\tiny 3};
\draw [red] (0.5,-1.5) node {\tiny 4};
\end{tikzpicture}
&&
\begin{tikzpicture}[scale=.25,baseline=0pt]
\draw[black, thick] (0,0) -- (3,0);
\draw[black, thick] (0,-1) -- (3,-1);
\draw[black, thick] (0,-2) -- (1,-2);
\draw[black, thick] (0,0) -- (0,-2);
\draw[black, thick] (1,0) -- (1,-2);
\draw[black, thick] (2,0) -- (2,-1);
\draw[black, thick] (3,0) -- (3,-1);
\draw [red] (0.5,-.5) node {\tiny 1};
\draw [red] (1.5,-.5) node {\tiny 2};
\draw [red] (2.5,-.5) node {\tiny 3};
\draw [red] (0.5,-1.5) node {\tiny 3};
\end{tikzpicture}
&&
\begin{tikzpicture}[scale=.25,baseline=0pt]
\draw[black, thick] (0,0) -- (3,0);
\draw[black, thick] (0,-1) -- (3,-1);
\draw[black, thick] (0,-2) -- (1,-2);
\draw[black, thick] (0,0) -- (0,-2);
\draw[black, thick] (1,0) -- (1,-2);
\draw[black, thick] (2,0) -- (2,-1);
\draw[black, thick] (3,0) -- (3,-1);
\draw [red] (0.5,-.5) node {\tiny 1};
\draw [red] (1.5,-.5) node {\tiny 2};
\draw [red] (2.5,-.5) node {\tiny 3};
\draw [red] (0.5,-1.5) node {\tiny 2};
\end{tikzpicture}
\end{align*}
Note also that the filling of the shape $Y(\pi)$ where the cell $(x,y)$ is labeled by $x+y-1$ is a valid tableau because $x+y\leq n$ holds for every cell, and therefore $D_\pi(p)> 0$ for $p\geq 0$.
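This strict enumeration is easy to confirm by brute force. The sketch below hardcodes the shape $(3,1)$ read off from the tableaux drawn above and the bound $p+|\pi|-1=4$.

```python
from itertools import product

def strict_tableaux(rows, bound):
    """Count fillings of a left-justified shape, entries in 1..bound,
    strictly increasing along rows and down columns."""
    cells = [(r, c) for r, ln in enumerate(rows) for c in range(ln)]
    count = 0
    for vals in product(range(1, bound + 1), repeat=len(cells)):
        T = dict(zip(cells, vals))
        ok = all(T[(r, c)] < T[(r, c + 1)] for (r, c) in cells if (r, c + 1) in T) \
             and all(T[(r, c)] < T[(r + 1, c)] for (r, c) in cells if (r + 1, c) in T)
        count += ok
    return count

# Shape Y(pi) for a(pi) = {1,2,4,7}: row lengths (3, 1); p = 1 gives bound 4.
print(strict_tableaux((3, 1), 4))  # 11, matching D_{{1,2,4,7}}(1)
```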
\medskip
\noindent\emph{Case $p\leq -|\pi|$.} We first use the transformation $\binom{N}{k}=(-1)^k\binom{k-N-1}{k}$ for each coefficient in $D_\pi(p)$ to get:
\[
D_{\pi}(p) = (-1)^{d(\pi)} \det_{n \times n} \left| \binom{|p|+a_i-i-j}{a_i-j} \right|;
\]
Here the sign comes from $(-1)^{a_i-j}=(-1)^{a_i}(-1)^{-j}$ for the coefficient $(i,j)$, which gives the global sign $(-1)^{\sum_ia_i-\sum_jj}=(-1)^{d(\pi)}$. We can then use~\cite[Theorem 6.1]{KrattGenPP} in this case also, which gives us that
$ (-1)^{d(\pi)} D_{\pi}(p)$ counts tableaux of shape $Y(\pi)$ with entries between $0$ and $|p|-|\pi|$ which are weakly increasing in both directions. For the same sequence $a(\pi)=\{1,2,4,7\}$ and $p=-5$ we get
\[
|D_{\pi} (-5)| = \det_{4 \times 4} \left| \binom{5+a_i-i-j}{5-i}\right| = 7,
\]
which corresponds to the $7$ tableaux
\begin{align*}
\begin{tikzpicture}[scale=.25,baseline=0pt]
\draw[black, thick] (0,0) -- (3,0);
\draw[black, thick] (0,-1) -- (3,-1);
\draw[black, thick] (0,-2) -- (1,-2);
\draw[black, thick] (0,0) -- (0,-2);
\draw[black, thick] (1,0) -- (1,-2);
\draw[black, thick] (2,0) -- (2,-1);
\draw[black, thick] (3,0) -- (3,-1);
\draw [red] (0.5,-.5) node {\tiny 0};
\draw [red] (1.5,-.5) node {\tiny 0};
\draw [red] (2.5,-.5) node {\tiny 0};
\draw [red] (0.5,-1.5) node {\tiny 0};
\end{tikzpicture}
&&
\begin{tikzpicture}[scale=.25,baseline=0pt]
\draw[black, thick] (0,0) -- (3,0);
\draw[black, thick] (0,-1) -- (3,-1);
\draw[black, thick] (0,-2) -- (1,-2);
\draw[black, thick] (0,0) -- (0,-2);
\draw[black, thick] (1,0) -- (1,-2);
\draw[black, thick] (2,0) -- (2,-1);
\draw[black, thick] (3,0) -- (3,-1);
\draw [red] (0.5,-.5) node {\tiny 0};
\draw [red] (1.5,-.5) node {\tiny 0};
\draw [red] (2.5,-.5) node {\tiny 1};
\draw [red] (0.5,-1.5) node {\tiny 0};
\end{tikzpicture}
&&
\begin{tikzpicture}[scale=.25,baseline=0pt]
\draw[black, thick] (0,0) -- (3,0);
\draw[black, thick] (0,-1) -- (3,-1);
\draw[black, thick] (0,-2) -- (1,-2);
\draw[black, thick] (0,0) -- (0,-2);
\draw[black, thick] (1,0) -- (1,-2);
\draw[black, thick] (2,0) -- (2,-1);
\draw[black, thick] (3,0) -- (3,-1);
\draw [red] (0.5,-.5) node {\tiny 0};
\draw [red] (1.5,-.5) node {\tiny 1};
\draw [red] (2.5,-.5) node {\tiny 1};
\draw [red] (0.5,-1.5) node {\tiny 0};
\end{tikzpicture}
&&
\begin{tikzpicture}[scale=.25,baseline=0pt]
\draw[black, thick] (0,0) -- (3,0);
\draw[black, thick] (0,-1) -- (3,-1);
\draw[black, thick] (0,-2) -- (1,-2);
\draw[black, thick] (0,0) -- (0,-2);
\draw[black, thick] (1,0) -- (1,-2);
\draw[black, thick] (2,0) -- (2,-1);
\draw[black, thick] (3,0) -- (3,-1);
\draw [red] (0.5,-.5) node {\tiny 0};
\draw [red] (1.5,-.5) node {\tiny 0};
\draw [red] (2.5,-.5) node {\tiny 0};
\draw [red] (0.5,-1.5) node {\tiny 1};
\end{tikzpicture}
&&
\begin{tikzpicture}[scale=.25,baseline=0pt]
\draw[black, thick] (0,0) -- (3,0);
\draw[black, thick] (0,-1) -- (3,-1);
\draw[black, thick] (0,-2) -- (1,-2);
\draw[black, thick] (0,0) -- (0,-2);
\draw[black, thick] (1,0) -- (1,-2);
\draw[black, thick] (2,0) -- (2,-1);
\draw[black, thick] (3,0) -- (3,-1);
\draw [red] (0.5,-.5) node {\tiny 0};
\draw [red] (1.5,-.5) node {\tiny 0};
\draw [red] (2.5,-.5) node {\tiny 1};
\draw [red] (0.5,-1.5) node {\tiny 1};
\end{tikzpicture}
&&
\begin{tikzpicture}[scale=.25,baseline=0pt]
\draw[black, thick] (0,0) -- (3,0);
\draw[black, thick] (0,-1) -- (3,-1);
\draw[black, thick] (0,-2) -- (1,-2);
\draw[black, thick] (0,0) -- (0,-2);
\draw[black, thick] (1,0) -- (1,-2);
\draw[black, thick] (2,0) -- (2,-1);
\draw[black, thick] (3,0) -- (3,-1);
\draw [red] (0.5,-.5) node {\tiny 0};
\draw [red] (1.5,-.5) node {\tiny 1};
\draw [red] (2.5,-.5) node {\tiny 1};
\draw [red] (0.5,-1.5) node {\tiny 1};
\end{tikzpicture}
&&
\begin{tikzpicture}[scale=.25,baseline=0pt]
\draw[black, thick] (0,0) -- (3,0);
\draw[black, thick] (0,-1) -- (3,-1);
\draw[black, thick] (0,-2) -- (1,-2);
\draw[black, thick] (0,0) -- (0,-2);
\draw[black, thick] (1,0) -- (1,-2);
\draw[black, thick] (2,0) -- (2,-1);
\draw[black, thick] (3,0) -- (3,-1);
\draw [red] (0.5,-.5) node {\tiny 1};
\draw [red] (1.5,-.5) node {\tiny 1};
\draw [red] (2.5,-.5) node {\tiny 1};
\draw [red] (0.5,-1.5) node {\tiny 1};
\end{tikzpicture}
\end{align*}
\medskip
Now here also $D_\pi(p)\neq 0$, because the tableau filled with zeros is valid. For $p=-|\pi|$, this is the only possible tableau and thus the leading coefficient of $G_\pi(\tau)$ is given by $D_\pi(-|\pi|)=(-1)^{d(\pi)}$.
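As in the strict case, this weak enumeration can be confirmed by brute force (a sketch, with the shape $(3,1)$ of the running example and entries in $\{0,\ldots,|p|-|\pi|\}=\{0,1\}$ for $p=-5$):

```python
from itertools import product

def weak_tableaux(rows, bound):
    """Count fillings of a left-justified shape, entries in 0..bound,
    weakly increasing along rows and down columns."""
    cells = [(r, c) for r, ln in enumerate(rows) for c in range(ln)]
    count = 0
    for vals in product(range(bound + 1), repeat=len(cells)):
        T = dict(zip(cells, vals))
        ok = all(T[(r, c)] <= T[(r, c + 1)] for (r, c) in cells if (r, c + 1) in T) \
             and all(T[(r, c)] <= T[(r + 1, c)] for (r, c) in cells if (r + 1, c) in T)
        count += ok
    return count

print(weak_tableaux((3, 1), 1))  # 7, matching (-1)^{d(pi)} D_{{1,2,4,7}}(-5)
print(weak_tableaux((3, 1), 0))  # 1: the all-zero tableau, i.e. the p = -|pi| case
```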
\medskip
\noindent\emph{Case $-|\pi|<p<0$.} We first want to prove that $D_{\pi}(p)=0$ if $\pi$ is not of the form $(\rho)_{|p|}$. We easily check that $\binom{p+i-1}{a_i-j}$ is zero unless either $(i,j)<(|p|+1,a_{|p|+1})$ or $(i,j)\geq (|p|+1,a_{|p|+1})$ (both inequalities componentwise). Therefore we get a matrix which splits into two rectangular submatrices; the determinant is zero unless these submatrices are square, which means that $a_{|p|+1}=|p|+1$, and then
\begin{align*}
D_{\pi}(p) =&
\det_{|p|\times |p|} \left| \binom{p+i-1}{i-j} \right| \times \det_{(|\pi|-|p|)\times (|\pi|-|p|)} \left| \binom{i-1}{\hat{a}_i-j} \right|\\
=& D_{\{1,\ldots,-p\}}(p) \times D_{\hat{a}} (0),
\end{align*}
where $\hat{a}_i=a_{|p|+i}-|p|$. The first factor is $1$, and the second is non-zero if and only if $\hat{a}$ corresponds to a matching; but then $\pi$ would be of the form $(\rho)_{|p|}$, which is excluded. Therefore $D_{\pi}(p)=0$ as wanted.
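The splitting of the determinant can be illustrated numerically. In the sketch below (the sequences chosen are ours, not from the text), $a=\{1,2,3,5\}$ with $p=-2$ satisfies $a_{|p|+1}=|p|+1$ and $\hat a=(1,3)$ is again a matching sequence, so this is precisely the excluded case $\pi=(\rho)_2$: the determinant factorizes and is non-zero. For $a=\{1,2,4,7\}$ and $p=-3$ the blocks are not square and the determinant vanishes.

```python
import math
from itertools import permutations

def binom(n, k):
    """Generalized binomial coefficient: n any integer, k >= 0."""
    if k < 0:
        return 0
    return math.prod(n - i for i in range(k)) // math.factorial(k)

def det(M):
    n = len(M)
    def sign(p):
        inv = sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
        return -1 if inv % 2 else 1
    return sum(sign(p) * math.prod(M[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

def D(a, t):
    """D_pi(t) = det |binom(t+i-1, a_i-j)|, 1 <= i,j <= n."""
    n = len(a)
    return det([[binom(t + i, a[i] - j - 1) for j in range(n)] for i in range(n)])

a1, p1 = (1, 2, 3, 5), -2                          # a_3 = 3 = |p|+1: square blocks
a1_hat = tuple(x - abs(p1) for x in a1[abs(p1):])  # (1, 3), a matching sequence
print(D(a1, p1), D(a1_hat, 0))                     # 1 1: D factorizes, first block is 1
print(D((1, 2, 4, 7), -3))                         # 0: a_4 = 7 != 4, blocks not square
```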
Now Conjecture~\ref{conj:taurealroot} immediately implies that if $m_{|p|}(\pi)\neq 0$, then $t=p$ is a root of $\Psi_\pi(\tau,t)$, so that $\Psi_\pi(\tau,p)\equiv 0$. If $m_{|p|}(\pi)= 0$, then by Conjecture~\ref{conj:taudec}, the leading term of $\Psi_\pi(\tau,p)$ is equal to the product of the leading terms of $G_\alpha(\tau)$ and $\Psi_\beta(\tau)$. The first one is given by $(-1)^{d(\alpha)}\tau^{d(\alpha)}$ as proved above, while the leading term of $\Psi _\beta(\tau)=\Psi _\beta(\tau,0)$ is given by $D_\beta(0)\tau^{d(\beta)}$, which completes the proof.
\end{proof}
\section{Further questions}
\subsection{Solving the conjectures}
Since our paper is centered around conjectures, the most immediate problem is to solve them. We listed four conjectures in Section~\ref{sec:conj} which concern roots, specializations and coefficients of the polynomials $A_\pi(t)$. The difficulty here is that the existing expressions for the polynomials $A_\pi(t)$, namely~\eqref{eq:apiX} and~\eqref{eq:psipiX}, are certain sums of polynomials, which makes it difficult to locate the real roots of $A_\pi(t)$, and more generally to control its sign variations when $t$ is a real variable. For the same reasons, it is hard to see where the factorization of Conjecture~\ref{conj:dec} comes from. Furthermore, both expressions~\eqref{eq:apiX} and~\eqref{eq:psipiX} involve negative signs, so the positivity of the coefficients is not at all obvious. One way to attack the conjectures would thus be to find new expressions for the polynomials; this could be done either by understanding better the quantities involved in~\eqref{eq:apiX} and~\eqref{eq:psipiX}, or by coming up with a new decomposition of the FPLs counted by $A_{(\pi)_p}$ for instance.
Note also that the linear relations from Definition~\ref{defi:psipi}, which determine the $A_\pi$ by the Razumov--Stroganov correspondence (Conjecture~\ref{conj:rs}), do not seem to be helpful in the case of nested arches. Indeed, given a matching $(\pi)_p$, the linear relation corresponding to $A_{(\pi)_p}$ involves not only quantities of the form $A_{(\pi')_p}$ or $A_{(\pi')_{p-1}}$, but also $A_{()(\pi)_{p-2}()}=A_{()()(\pi)_{p-2}}$, which is not of the form considered in this work. For two matchings $\pi,\pi'$, the quantities $A_{\pi'(\pi)_p}$ are polynomials in $p$ when $p$ is big enough (cf.~\cite[Theorem 6.7]{CKLN}), and these are ``stable'' with respect to the Razumov--Stroganov linear relations: it would be very interesting to study these more general polynomials and find out how our conjectures can be extended.
Another angle to attack some of the conjectures (namely Conj.~\ref{conj:realroots},~\ref{conj:dec} and their $\tau$ counterparts~\ref{conj:taurealroot},~\ref{conj:dec_tau}) would be to extend the approach used in the proof of Theorem~\ref{th:firstroot}: one first needs to extend the multivariate integral definition~\eqref{eq:minusone} to any integer $p$, which can easily be done. The problem is that the expressions obtained are considerably more complicated and intricate than in the case $p=-1$. This is work in progress.
\subsection{Combinatorial reciprocity}
The idea underlying our conjectures (Conjecture~\ref{conj:posX} excepted) is that there should be a ``combinatorial reciprocity theorem'' (\cite{StanleyReciprocity}) attached to these polynomials. That is, we believe there exist yet-to-be-discovered combinatorial objects depending on $\pi$ such that $A_\pi(-p)$ is equal (up to sign) to the number of these objects with size $p$. The most well-known example in the literature of such a phenomenon concerns the {\em Ehrhart polynomial} $i_P(t)$ of a lattice polytope $P$, which counts the number of lattice points in $tP$ when $t$ is a positive integer: for such $t$, Ehrhart reciprocity then tells us that $(-1)^{\dim P} i_P(-t)$ counts lattice points strictly in $tP$ (see~\cite{BeckRobins} for instance).
It is natural to wonder if our problem fits in the domain of Ehrhart theory, since most known examples of combinatorial reciprocity can be formulated in terms of Ehrhart polynomials: see for instance certain polynomials attached to graphs~\cite{BeckZasnowherezero, BreuerSanyal}. It cannot be a straightforward application however, in the sense that $A_\pi(t)$ is not equal to an Ehrhart polynomial $i_P(t)$ in general: indeed, for any lattice polytope $P$ there cannot be two positive integers $i,j$ such that $i_P(-i)i_P(-j)<0$, since such values are either $0$ or of sign $(-1)^{\dim P}$ by Ehrhart reciprocity. But for $\pi=()()()()=()^4$ for instance, one computes from the explicit expression given in Appendix~\ref{app:examples} that $A_\pi(-2)=-1$ while $A_\pi(-4)=9$. Moreover, one can also show that if Conjecture~\ref{conj:realroots} holds, then given any finite set $S$ of negative integers (contained, say, in $\{-1,\ldots,1-n\}$) there exists a matching $\pi$ of size $n$ such that the set of negative roots of $A_\pi(t)$ is precisely $S$. This is clearly a behaviour contrasting with Ehrhart polynomials, and even their generalizations to {\em inside-out polytopes} \cite{BeckZasInsideout}.
Conjectures~\ref{conj:realroots} and~\ref{conj:dec} tell us in particular for which values of $p$ objects counted by $|A_\pi(-p)|$ should exist, and moreover that such objects should {\em split} for certain values of $p$. As pointed out in Section~\ref{sub:gpi}, Conjectures~\ref{conj:dec} and~\ref{conj:gpi} make it particularly important to figure out what the numbers $G_\pi=A_\pi(-|\pi|)$ count.
\subsection{Consequences of the conjectures}
The conjectures have interesting consequences regarding the numbers $a_\sigma^\pi$ involved in Equation~\eqref{eq:apiX}, since for instance Conjecture~\ref{conj:realroots} directly implies certain linear relations among these numbers. Discovering what these numbers $a_\sigma^\pi$ are is a step in the direction of a new proof of the Razumov--Stroganov conjecture, in the sense that it gives an expression for $A_\pi$ that could be compared to the expressions for $\Psi_\pi$. We note also that a conjectural expression for these numbers $a_\sigma^\pi$ was given in~\cite{ZJtriangle}, which if true would in fact give another proof of the Razumov--Stroganov conjecture; a special case of this expression is proven in~\cite{NadFPL2}.
| 37,440 |